\section{Online Resources}
\begin{itemize}
    \item \href{https://www.khanacademy.org/math/multivariable-calculus}{Khan Academy - Multivariable Calculus}
    \item \href{http://tutorial.math.lamar.edu/Classes/CalcIII/CalcIII.aspx}{Paul's Online Notes - Calc III}
    \item \href{https://www.youtube.com/playlist?list=PLF83D74BA4DE75897}{PatrickJMT - Calculus / Third Semester / Multivariable Calculus}
    \item \href{https://ocw.mit.edu/courses/mathematics/18-02sc-multivariable-calculus-fall-2010/}{MIT OpenCourseWare - Multivariable Calculus}
    \item \href{https://www.whitman.edu/mathematics/multivariable/multivariable.pdf}{Guichard (Whitman College) - Single and Multivariable Calculus}
\end{itemize}
% declare document class and geometry
\documentclass[12pt]{article} % use larger type; default would be 10pt
\usepackage[margin=1in]{geometry} % handle page geometry
\input{../header.tex}

\title{Astro 270 -- Astrophysical Dynamics -- Lec01-02}
\author{UCLA, Fall 2014}
\date{\formatdate{02}{10}{2014}}
% Activate to display a given date or no date (if empty),
% otherwise the current date is printed
%\date{\formatdate{07}{10}{2014}}

\begin{document}
\setlength{\unitlength}{1mm}
\maketitle

\textit{Read Chapter 1 and Sections 2.1--2.3}

\section{Galaxy Dynamics}
Since there are so many stars, we can treat the system either as a fluid or as a system of particles. We may want to treat it as a system of particles if we want to account for star--star interactions, or as a fluid to consider its bulk motion.

We will have a distribution function, which is a function of position, velocity, and time. We typically normalize it to unity when integrated over phase space:
\begin{equation}
    \int f(\v{x},\v{v},t) \, d^3\v{x} \, d^3\v{v} = 1
\end{equation}
The mean free path should be small in comparison to the macroscopic length scales if we are assuming that the system is a fluid in the \textbf{continuum approximation}. Quantities such as the density and velocity are continuous, and they are independent parameters.

We have three regimes that we can deal with if we abandon the fluid approximation:
\begin{itemize}
    \item Lumpy
    \item Smooth - fluid approximation
    \item In-between - globular clusters
\end{itemize}
Any system that we want to describe can be treated in any of these three ways, but we have to pick one.

\section{Defining Basic Parameters}
We have two particles at positions $\v{x_1}$ and $\v{x_2}$. We define the separation vector between them as $\v{x_{12}} = \v{x_2} - \v{x_1}$. The gravitational acceleration of particle 1 due to particle 2 is then $\v{\ddot{x}_1} = \frac{G m_2}{|\v{x_{12}}|^3} \v{x_{12}}$. For many particles,
\begin{equation}
    \v{\ddot{x}_i} = \sum_{j\ne i} \frac{G m_j}{|\v{x_{ji}}|^3} \v{x_{ji}}, \qquad \v{x_{ji}} = \v{x_j} - \v{x_i}
\end{equation}
We can express this in terms of a potential:
\begin{equation}
    \v{\ddot{x}_i} = -\frac{\pd \Phi}{\pd \v{x_i}} = -\v{\nabla} \Phi
\end{equation}
Now let's define the gravitational potential energy
\begin{equation}
    W = -\frac{1}{2} \sum_i \sum_{j\ne i} \frac{Gm_i m_j}{|\v{x_{ij}}|} = \frac{1}{2} \sum_i m_i \Phi (\v{x_i})
\end{equation}
We can rewrite this in terms of a typical radius $r_h$:
\begin{equation}
    W = -\alpha \frac{GM^2}{r_h}
\end{equation}
The kinetic energy is just
\begin{equation}
    T = \frac{1}{2} \sum_j m_j |\v{\dot{x}_j}|^2
\end{equation}
In an isolated system
\begin{equation}
    E = T + W = \mathrm{constant}
\end{equation}

\subsection{Virial Theorem}
Let's take the moment of inertia of the system, which is defined as
\begin{equation}
    I = \sum_i m_i |\v{x_i}|^2
\end{equation}
Right now we are concerned with the second derivative of the moment of inertia. Substituting in our previous expression for $\v{\ddot{x}_i}$,
\begin{equation}
    \ddot{I} = 2\sum_i \left( m_i |\v{\dot{x}_i}|^2 + m_i \, \v{x_i} \cdot \v{\ddot{x}_i} \right) = 4T + 2 \sum_i m_i \v{x_i} \cdot \sum_{j\ne i} \frac{G m_j(\v{x_j} - \v{x_i})}{|\v{x_j} - \v{x_i}|^3}
\end{equation}
Relabeling the indices of the double sum,
\begin{equation}
    \sum_i \sum_{j\ne i} G m_i m_j \frac{\v{x_i} \cdot (\v{x_j} - \v{x_i})}{|\v{x_j} - \v{x_i}|^3} = \sum_j \sum_{i\ne j} G m_i m_j \frac{\v{x_j} \cdot (\v{x_i} - \v{x_j})}{|\v{x_i} - \v{x_j}|^3}
\end{equation}
These two expressions are equal, so each is equal to half of their sum, which is
\begin{equation}
    \frac{1}{2} \sum_i \sum_{j\ne i} G m_i m_j \frac{(\v{x_i} - \v{x_j}) \cdot (\v{x_j} - \v{x_i})}{|\v{x_i} - \v{x_j}|^3}
\end{equation}
Pulling out the minus sign, we recover our
expression for $W$:
\begin{equation}
    W = -\frac{1}{2} \sum_i \sum_{j\ne i} \frac{Gm_i m_j}{|\v{x_{ij}}|}
\end{equation}
So overall
\begin{equation}
    \ddot{I} = 4T + 2W
\end{equation}
In equilibrium $\ddot{I} = 0$, so
\begin{equation}
    2T + W = 0
\end{equation}
Note that this only holds if the system is bound, isolated, and in equilibrium. The overall energy of the system is $E = T + W$, so we can say for any such gravitational system
\begin{equation}
    E = W / 2
\end{equation}
This is most often used to estimate the masses of systems.

\section{Continuum}
We expressed the previous sections in terms of sums, but now we want to take out the particle--particle interactions and express everything in terms of a smooth density $\rho(\v{x})$. Now our potential is defined as
\begin{equation}
    \Phi(\v{x}) = -G \int \frac{\rho(\v{x'})}{|\v{x} - \v{x'}|} \, d^3\v{x'}
\end{equation}
We can use Gauss's theorem to state
\begin{equation}
    \int \v{\nabla} \Phi \cdot d^2\v{s} = 4\pi G M_{enclosed}
\end{equation}
We can then use the divergence theorem:
\begin{equation}
    \int \nabla^2 \Phi \, d^3\v{x} = \int \v{\nabla} \Phi \cdot d^2\v{s}
\end{equation}
What we get then is that, for any volume,
\begin{equation}
    \int \nabla^2 \Phi \, d^3\v{x} = 4\pi G \int \rho(\v{x}) \, d^3\v{x}
\end{equation}
Since this holds for any volume, we can drop the integrals and end up with Poisson's equation:
\begin{equation}
    \nabla^2 \Phi = 4\pi G \rho
\end{equation}
This is an important equation that we'll come back to. Also in the continuum approximation, we have an expression for the gravitational potential energy $W$ which is analogous to our previous definition in terms of the summation:
\begin{equation}
    W = \frac{1}{2} \int \rho(\v{x}) \Phi(\v{x}) \, d^3\v{x}
\end{equation}
With these results, we can start applying them to simple situations. The simplest one is a spherical potential.

\section{Spherical Potential}
We can use the spherical symmetry to make two claims. A body inside a spherical shell of matter does not experience any net force from the shell. Outside of the shell, the force is the same as if the shell were a point mass at the center. This is due to Gauss's theorem.

The contribution to the potential at a distance $r$ from an infinitesimal shell of thickness $dr'$ at radius $r' < r$ is
\begin{equation}
    \delta \Phi = - \frac{G \, \delta M_{shell}}{r} = - \frac{4\pi G r'^2 \rho(r') \, dr'}{r}
\end{equation}
If $r$ is less than $r'$ then there is no force, but there is still a potential; it is a constant. We can choose what that constant is so that we have a meaningful potential. We choose it such that the contribution to the potential is continuous at the boundary. To do that, we take the contribution of a shell with $r' > r$ to be
\begin{equation}
    -\frac{G \, \delta M_{shell}}{r'} = - \frac{4\pi G r'^2 \rho(r') \, dr'}{r'}
\end{equation}
which is independent of our location $r$. In total, we now combine those two, where the first term is for the interior portion:
\begin{equation}
    \Phi(r) = -G\left[ \frac{1}{r} \int^r_0 4\pi r'^2 \rho(r') \, dr' + \int_r^{\infty} 4\pi \rho(r') r' \, dr' \right]
\end{equation}
Inside a shell the potential is not zero, but its derivative (the force) is zero.

Dark matter tends to aggregate in a roughly spherical halo, so we will use this formalism as a first-order approximation for the potential of a galaxy.

\section{Velocity}
One of the key observables in a galaxy, or in any dynamical system, is the velocities, which help us see both what is happening and how the system evolves. We use the velocities in a galaxy to measure its mass.
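As a sketch of how the virial theorem turns measured velocities into a mass estimate (here $\langle v^2 \rangle$ denotes the mean-square stellar velocity, and $\alpha$ and $r_h$ are the order-unity structure constant and typical radius introduced above):
\begin{equation}
    2T + W = 0, \qquad T = \frac{1}{2} M \langle v^2 \rangle, \qquad W = -\alpha \frac{GM^2}{r_h} \quad\Rightarrow\quad M \approx \frac{\langle v^2 \rangle \, r_h}{\alpha G}
\end{equation}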
The circular velocity is given by
\begin{equation}
    \frac{v_c^2}{r} = \frac{GM(r)}{r^2}
\end{equation}
\begin{equation}
    v_c^2 = \frac{GM(r)}{r} = -rF(r) = r\frac{d\Phi}{dr}
\end{equation}
The other velocity that is an important determinant of the evolution of a system is the escape velocity, which is defined in terms of how much energy it takes for a particle to reach infinity (i.e.\ where $E = 0$) from where it starts off. That makes it easy to write it down from conservation of energy:
\begin{equation}
    \frac{mv_e^2}{2} + m\Phi(r) = 0
\end{equation}
\begin{equation}
    v_e^2 = -2\Phi(r)
\end{equation}

\subsubsection{Homogeneous Sphere}
Let's look at some examples of spherical systems and see what their potentials are and how the systems might affect the behavior of particles. The simplest is a homogeneous sphere,
\begin{equation}
    \rho = \mathrm{const}
\end{equation}
\begin{equation}
    M_{enc} = \int_0^r 4\pi r'^2 \rho \, dr' = \frac{4}{3} \pi r^3 \rho
\end{equation}
The total force is
\begin{equation}
    F(r) = - \frac{GM(r)}{r^2} = - \frac{4\pi G\rho}{3} r
\end{equation}
This gives the differential equation of a harmonic oscillator; the core of a cluster can be approximated as such. That is,
\begin{equation}
    \ddot{r} = -\omega^2 r
\end{equation}
with
\begin{equation}
    \omega^2 = \frac{4\pi G}{3} \rho
\end{equation}
The period of this oscillation is
\begin{equation}
    P = \frac{2\pi}{\omega} = \sqrt{\frac{3\pi}{G\rho}}
\end{equation}
Each star has an oscillation. The question now arises: if we have a Keplerian orbit, is the period of the rotation equal to the period of this oscillation? No; see the homework.

\section{Dynamical Time}
At this point we have the timescale of a period. Let's define an important timescale that we encounter, which is the \textbf{dynamical time} of a system. It is defined as
\begin{equation}
    t_{dyn} = \left(\frac{3\pi}{16 G\rho}\right)^{1/2} = P/4
\end{equation}
This does not depend on the details of our system; we can say in general
\begin{equation}
    t_{dyn} \sim (G\rho)^{-1/2}
\end{equation}
This corresponds to the time between the peak and the middle of an oscillation, i.e.\ from the average position to the furthest point of the motion. The dynamical time is the time over which any given particle within a stellar system will do something dramatic, like cross the system. It is related intimately to the free-fall time.

\subsubsection{Back to the Homogeneous Sphere}
If the radius $b$ of the homogeneous sphere is interior to our radius (i.e.\ we're outside of the sphere), then it acts as a point mass:
\begin{equation}
    \Phi_{r \geq b} = -\frac{GM_{tot}}{r}
\end{equation}
If $r$ is less than $b$, the potential needs to be written as two terms, the first of which comes from the mass interior to $r$ and the second from the shells exterior to $r$:
\begin{equation}
    \Phi_{r < b} = - G \left[ \frac{1}{r} \frac{4\pi\rho r^3}{3} + \int^b_r 4\pi\rho r' \, dr'\right]
\end{equation}
which can be written as
\begin{equation}
    -\frac{GM_{tot}}{b} \left[ \frac{r^2}{b^2} + \frac{3}{2} \left( 1 - \frac{r^2}{b^2}\right)\right] = - \frac{GM_{tot}}{b} \left[ \frac{3}{2} - \frac{r^2}{2b^2}\right]
\end{equation}
Is this continuous at $r=b$? Yes; it's a quick check. Our choice of constants was appropriate to ensure that the potential is continuous.
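To make the quick check explicit, evaluate both branches at $r = b$:
\begin{equation}
    \Phi_{r \geq b}(b) = -\frac{GM_{tot}}{b}, \qquad \Phi_{r < b}(b) = -\frac{GM_{tot}}{b}\left[\frac{3}{2} - \frac{b^2}{2b^2}\right] = -\frac{GM_{tot}}{b}
\end{equation}
so the two expressions agree at the boundary.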
\subsubsection{Singular Isothermal Sphere}
``Isothermal'' means that the distribution of stellar velocities is the same everywhere (a constant velocity dispersion), and for the singular isothermal sphere the density profile is
\begin{equation}
    \rho = \rho_0 \left(\frac{r_0}{r}\right)^2
\end{equation}
The characteristics of the isothermal sphere will not be derived here.
\begin{equation}
    M(r) = \int_0^r 4\pi r'^2 \rho(r') \, dr' = M_0 \frac{r}{r_0}
\end{equation}
In this case
\begin{equation}
    M_0 = 4\pi \rho_0 r_0^3
\end{equation}
\begin{equation}
    v_c^2 = \frac{GM(r)}{r} = \mathrm{const} = \frac{GM_0}{r_0}
\end{equation}
This implies that the circular velocity is constant with radius, so the system has a flat rotation curve. This tells us that the isothermal sphere is a good approximation for a galaxy.
\begin{equation}
    \Phi(r) = -G\left[\frac{1}{r} \int_0^r 4\pi \rho(r') r'^2 \, dr' + \int^\infty_r 4\pi \rho(r') r' \, dr'\right]
\end{equation}
This is not an appropriate way to treat the isothermal sphere, as the second integral does not converge. It is more useful to look at differences in potential:
\begin{equation}
    \Phi(r) - \Phi(r_0) = G\int_{r_0}^r \frac{M(r')}{r'^2} \, dr' = \frac{GM_0}{r_0} \int_{r_0}^r \frac{dr'}{r'} = v_c^2 \ln(r/r_0)
\end{equation}

\subsection{Plummer Model}
We're now going to start with a potential and try to work back to a density distribution. We know the potential for a point mass,
\begin{equation}
    \Phi(r) = -\frac{GM}{r}
\end{equation}
Now let's generalize that:
\begin{equation}
    \Phi(r) = -\frac{GM}{\sqrt{r^2 + b^2}}
\end{equation}
This makes it so that the potential converges at $r = 0$. Here $b$ is a scale radius which tells us where the model goes from being roughly constant to looking like the point mass. We get the density distribution by invoking Poisson's equation,
\begin{equation}
    \nabla^2\Phi = 4\pi G\rho
\end{equation}
In spherical coordinates,
\begin{equation}
    \nabla^2\Phi = \frac{1}{r^2} \frac{d}{dr} \left( r^2 \frac{d\Phi}{dr}\right)
\end{equation}
The result is
\begin{equation}
    4\pi G\rho = \frac{3GMb^2}{(r^2 + b^2)^{5/2}}
\end{equation}
\begin{equation}
    \rho(r) = \frac{3M/(4\pi b^3)}{(1+r^2/b^2)^{5/2}}
\end{equation}
As $r$ goes to zero we get a constant density,
\begin{equation}
    \rho \to \frac{3M}{4\pi b^3}
\end{equation}
and as $r$ goes to infinity
\begin{equation}
    \rho \sim r^{-5}
\end{equation}
This is nice because the density falls off at large radii. However, orbits cannot be described analytically in this model. A different model for which they can is the isochrone potential,
\begin{equation}
    \Phi(r) = -\frac{GM}{b + \sqrt{b^2+r^2}}
\end{equation}

\section{Two-power density model}
\begin{equation}
    \rho(r) = \frac{\rho_0}{(r/a)^{\alpha} ( 1 + r/a)^{\beta-\alpha}}
\end{equation}
Look up the Dehnen models, the Hernquist model, the Jaffe model, and the NFW profile.

\subsection{Definitions}
\subsubsection{Integrals of motion}
An integral of motion is a function of the phase-space coordinates that is invariant along the orbit of a particle:
\begin{equation}
    I(\v{x}, \v{v}) = I(\v{x_0}, \v{v_0})
\end{equation}
If energy is conserved, then the energy is an integral of motion. This is different from a constant of motion.

\subsubsection{Constant of motion}
A constant of motion may depend explicitly on time:
\begin{equation}
    C(\v{x},\v{v}, t) = C(\v{x_0}, \v{v_0}, t_0)
\end{equation}
An example: if we know the equations of motion, then from the state at some specific time $t_0$ we can determine what the position and velocity must be at any later time. So the three space coordinates and three velocity coordinates at $t_0$ provide six constants of motion for the orbit, determined by the initial conditions.
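A standard example of the distinction: in a static potential, the energy per unit mass
\begin{equation}
    E(\v{x},\v{v}) = \frac{1}{2}|\v{v}|^2 + \Phi(\v{x})
\end{equation}
is an integral of motion, since it does not depend explicitly on time, while the initial position $\v{x_0}(\v{x},\v{v},t)$ obtained by integrating an orbit backwards from time $t$ is a constant of motion but not an integral, because it depends explicitly on $t$.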
We are more interested in the integrals of motion.

\section{Various Integrals of Motion}
For these purposes, let us assume that the potential is static (time-invariant).

\subsection{Energy}
Let's consider the dot product of the velocity with the equation of motion:
\begin{equation}
    \v{\dot{r}} \cdot \v{\ddot{r}} = -\v{\dot{r}} \cdot \v{\nabla} \Phi
\end{equation}
\begin{equation}
    \frac{d}{dt} \left(\frac{1}{2} |\v{\dot{r}}|^2\right) = -\frac{d\Phi(\v{r})}{dt}
\end{equation}
so $E = \frac{1}{2}|\v{\dot{r}}|^2 + \Phi$ is conserved and is an integral of motion.

\subsection{Angular Momentum}
Assume a spherically symmetric potential,
\begin{equation}
    \Phi = \Phi(r)
\end{equation}
\begin{equation}
    \v{\ddot{r}} = F(r) \uv{r}
\end{equation}
\begin{equation}
    \v{r}\times \v{F} = 0 = \v{r} \times \v{\ddot{r}}
\end{equation}
\begin{equation}
    \frac{d}{dt} (\v{r} \times \v{\dot{r}}) = 0
\end{equation}
So we get three conserved quantities, the components of the angular momentum
\begin{equation}
    \v{L} = \v{r}\times \v{v}
\end{equation}
Relaxing this to an axisymmetric potential, the symmetry we can work with is in the azimuthal angle $\phi$:
\begin{equation}
    \frac{\pd \Phi}{\pd \phi} = 0
\end{equation}
This means that $L_z$, the component of the angular momentum along the symmetry axis, is conserved.

\section{Unit Vectors}
We can write the position vector in terms of the radial unit vector,
\begin{equation}
    \v{r} = r\uv{r}
\end{equation}
For the time derivatives of the unit vectors, see the classical mechanics notes.

\end{document}
\chapter{Installation} This chapter shows how {\ViennaGrid} can be integrated into a project and how the examples are built. The necessary steps are outlined for several different platforms, but we could not check every possible combination of hardware, operating system, and compiler. If you experience any trouble, please write to the mailing list at \\ \begin{center} \texttt{viennagrid-support$@$lists.sourceforge.net} \end{center} % ----------------------------------------------------------------------------- % ----------------------------------------------------------------------------- \section{Dependencies} % ----------------------------------------------------------------------------- % ----------------------------------------------------------------------------- \label{dependencies} \begin{itemize} \item A recent C++ compiler (e.g.~{\GCC} version 4.1.x or above, Clang 3.0 or above, and Visual C++ 2005 or above are known to work) \item {\CMake}~\cite{cmake} as build system (optional, but recommended for building the examples) \end{itemize} \section{Generic Installation of ViennaGrid} \label{sec:viennagrid-installation} Since {\ViennaGrid} is a header-only library, it is sufficient to copy the \lstinline|viennagrid/| source folder either into your project folder or to your global system include path. %If you do not have {\ViennaData} installed, proceed in the same way for the respective source folder \lstinline|viennadata/|. On Unix-like operating systems, the global system include path is usually \lstinline|/usr/include/| or \lstinline|/usr/local/include/|. On Windows, the situation strongly depends on your development environment. We advise to consult the documentation of the compiler on how to set the include path correctly. With Visual Studio 9.0 this is usually something like \texttt{C:$\setminus$Program Files$\setminus$Microsoft Visual Studio 9.0$\setminus$VC$\setminus$include} and can be set in \texttt{Tools -> Options -> Projects and Solutions -> VC++-\-Directories}. % ----------------------------------------------------------------------------- % ----------------------------------------------------------------------------- \section{Building the Examples and Tutorials} % ----------------------------------------------------------------------------- % ----------------------------------------------------------------------------- For building the examples, we suppose that {\CMake} is properly set up on your system. The various examples and their purpose are listed in Tab.~\ref{tab:tutorial-dependencies}. 
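Complementing the generic installation described above, the following is a minimal sketch of a user project's \texttt{CMakeLists.txt} (the project name, target name, source file, and path are placeholders). Since {\ViennaGrid} is header-only, it is enough to add the {\ViennaGrid} base directory to the include path:
\begin{lstlisting}
cmake_minimum_required(VERSION 2.8)
project(MyViennaGridApp)

# ViennaGrid is header-only: adding its base directory to the
# include path is all that is required.
include_directories(/your-ViennaGrid-path)

add_executable(my_app main.cpp)
\end{lstlisting}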
\begin{table}[tbp]
\begin{center}
\begin{tabular}{l|p{9.3cm}}
File & Purpose\\
\hline
\texttt{accessor.cpp} & Demonstrates the use of accessors, cf.~Chapter \ref{chap:data} \\
\texttt{algorithms.cpp} & Demonstrates the algorithms provided, cf.~Chapter \ref{chap:algorithms} \\
\texttt{coboundary\_iteration.cpp} & Shows how to iterate over co-boundary elements, cf.~Chapter \ref{chap:iterators} \\
\texttt{copy\_elements.cpp} & Shows how to copy elements from one mesh to another \\
\texttt{dump\_info.cpp} & Reads a Netgen or VTK mesh from file and prints some statistics \\
\texttt{element\_erase.cpp} & Demonstrates how to erase single elements from a mesh \\
\texttt{finite\_volumes.cpp} & Generic implementation of the finite volume method (assembly) \\
\texttt{interface.cpp} & Demonstrates how the interface algorithm works, cf.~Chapter \ref{chap:algorithms} \\
\texttt{io.cpp} & Explains input-output operations, cf.~Chapter \ref{chap:io} \\
\texttt{iterators.cpp} & Shows how the mesh and segments can be traversed, cf.~Chapter \ref{chap:iterators} \\
\texttt{mesh\_setup.cpp} & Fills a mesh with cells, cf.~Chapter \ref{chap:meshsetup} \\
\texttt{multi\_segment.cpp} & Explains multi-segment capabilities, cf.~Chapter \ref{chap:meshsetup} \\
\texttt{named\_segments.cpp} & Exemplary use of named segments, cf.~Chapter \ref{chap:meshsetup} \\
\texttt{neighbor\_iteration.cpp} & Shows how to iterate over neighbor elements, cf.~Chapter \ref{chap:iterators} \\
\texttt{polygon.cpp} & ViennaGrid also supports polygons, cf.~Chapter \ref{chap:meshsetup} \\
\texttt{read\_plc.cpp} & Shows how to use the PLC reader, cf.~Chapter \ref{chap:io} \\
\texttt{segments.cpp} & Shows how to use segmentations and segments, cf.~Chapter \ref{chap:meshsetup} \\
\texttt{thin\_mesh.cpp} & Creates a mesh consisting of only vertices and cells, cf.~Chapter \ref{chap:meshsetup} \\
\texttt{vmesh\_creator.cpp} & Example for using the vmesh-writer, cf.~Chapter \ref{chap:io} \\
\end{tabular}
\caption{Overview of the sample applications in the \texttt{examples/tutorial} folder}
\label{tab:tutorial-dependencies}
\end{center}
\end{table}

\subsection{Linux}
To build the examples, open a terminal and change to:
\begin{lstlisting}
$> cd /your-ViennaGrid-path/build/
\end{lstlisting}
Execute
\begin{lstlisting}
$> cmake ..
\end{lstlisting}
to obtain a Makefile and type
\begin{lstlisting}
$> make
\end{lstlisting}
to build the examples. If desired, one can build each example separately instead:
\begin{lstlisting}
$> make algorithms   #builds the algorithms tutorial
\end{lstlisting}

\TIP{Speed up the building process by using multiple concurrent jobs, e.g. \keyword{make -j4}.}

Some of the tutorials need to access mesh data in \texttt{examples/data/}. To ensure that the data is accessible, please execute all tutorials directly from the \texttt{build} folder, e.g.
\begin{lstlisting}
$ build> examples/tutorial/algorithms
\end{lstlisting}

\subsection{Mac OS X}
\label{apple}
The tools mentioned in Section \ref{dependencies} are available on Macintosh platforms too. For the {\GCC} compiler the Xcode~\cite{xcode} package has to be installed. To install {\CMake}, third-party package managers such as Fink~\cite{fink}, DarwinPorts~\cite{darwinports}, or MacPorts~\cite{macports} have to be used. The build process of {\ViennaGrid} is similar to that on Linux.
\subsection{Windows}
In the following, the procedure is outlined for \texttt{Visual Studio}: Assuming that {\CMake} is already installed, \texttt{Visual Studio} solution and project files can be created using {\CMake}:
\begin{itemize}
 \item Open the {\CMake} GUI.
 \item Set the {\ViennaGrid} base directory as source directory.
 \item Set the \texttt{build/} directory as build directory.
 \item Click on 'Configure' and select the appropriate generator (e.g.~\texttt{Visual Studio 9 2008}).
 \item Click on 'Generate' (you may need to click on 'Configure' one more time before you can click on 'Generate').
 \item The project files can now be found in the {\ViennaGrid} build directory, where they can be opened and compiled with Visual Studio (provided that the include and library paths are set correctly, see Sec.~\ref{sec:viennagrid-installation}).
\end{itemize}
Note that the examples should be executed from the \texttt{build/} folder in order to access the correct input files.
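Alternatively, the solution files can be generated from a command prompt; a minimal sketch, assuming \texttt{cmake.exe} is on the \texttt{PATH} and Visual Studio 2008 is installed (adjust the generator name to the installed Visual Studio version):
\begin{lstlisting}
> cd your-ViennaGrid-path\build
> cmake -G "Visual Studio 9 2008" ..
\end{lstlisting}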
\documentclass[10pt]{sigplanconf} \usepackage[compact]{titlesec} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{amssymb} \usepackage{url} \usepackage{color} \usepackage{multirow} \setlength{\textfloatsep}{8pt} \newcommand{\Value}{\mathbf{value}} \newcommand{\Any}{\mathbf{any}} \newcommand{\Nil}{\mathbf{nil}} \newcommand{\Self}{\mathbf{self}} \newcommand{\False}{\mathbf{false}} \newcommand{\True}{\mathbf{true}} \newcommand{\Boolean}{\mathbf{boolean}} \newcommand{\Integer}{\mathbf{integer}} \newcommand{\Number}{\mathbf{number}} \newcommand{\String}{\mathbf{string}} \newcommand{\Void}{\Nil{*}} \newcommand{\Const}{\mathbf{const}} \newcommand{\mylabel}[1]{\; (\textsc{#1})} \newcommand{\env}{\Gamma} \newcommand{\penv}{\Pi} \newcommand{\senv}{\Sigma} \newcommand{\subtype}{<:} \newcommand{\ret}{\rho} \newcommand{\self}{\sigma} \def\dstart{\hbox to \hsize{\vrule depth 4pt\hrulefill\vrule depth 4pt}} \def\dend{\hbox to \hsize{\vrule height 4pt\hrulefill\vrule height 4pt}} \begin{document} \toappear{} \special{papersize=8.5in,11in} \setlength{\pdfpageheight}{\paperheight} \setlength{\pdfpagewidth}{\paperwidth} \conferenceinfo{DLS '15}{October 27th, 2015, Pittsburgh, PA, USA} \copyrightyear{2015} \copyrightdata{978-1-nnnn-nnnn-n/yy/mm} \doi{nnnnnnn.nnnnnnn} \titlebanner{Draft} \preprintfooter{A Formalization of Typed Lua} \title{A Formalization of Typed Lua} \authorinfo{André Murbach Maidl} {PUC-Rio, Brazil} {[email protected]} \authorinfo{Fabio Mascarenhas} {UFRJ, Brazil} {[email protected]} \authorinfo{Roberto Ierusalimschy} {PUC-Rio, Brazil} {[email protected]} \maketitle \begin{abstract} Programmers often migrate from a dynamically typed to a statically typed language when their simple scripts evolve into complex programs. Optional type systems are one way of having both static and dynamic typing in the same language, while keeping its dynamically typed semantics. This makes evolving a program from dynamic to static typing a matter of describing the implied types that it is using and adding annotations to make those types explicit. Designing an optional type system for an existing dynamically typed language is challenging, as its types should feel natural to programmers that are already familiar with this language. In this work, we give a formal description of Typed Lua, an optional type system for Lua, with a focus on two of its novel type system features: incremental evolution of imperative record and object types that is both lightweight and type-safe, and {\em projection types}, a combination of flow typing, functions that return multiple values, and multiple assignment. While our type system is tailored to the features and idioms of Lua, its features can be adapted to other imperative scripting languages. \end{abstract} \category{D.3.1}{Programming Languages}{Formal Definitions and Theory}[Type Systems] \terms Languages, Design \keywords Optional Type Systems, Gradual Typing, Lua \section{Introduction} \label{sec:intro} Dynamically typed languages forgo static type checking in favor of using run-time {\em type tags} to classify the values they compute, so its operations can use these tags to perform run-time (or dynamic) checks and signal errors in case of invalid operands~\cite{pierce2002tpl}. The lack of static types allows programmers to write code that might have required a complex type system to statically type, at the cost of hiding defects that will only be caught after deployment if the programmers do not properly test their code. 
In contrast, statically typed languages help programmers detect defects during development, and also provide a conceptual framework that helps programmers define modules and interfaces that can be combined to structure the development of large programs. The early error detection and better tools for structuring programs are two advantages of statically typed languages that lead programmers to migrate their code from a dynamically typed to a statically typed language when their scripts evolve into complex programs~\cite{tobin-hochstadt2006ims}. This migration from dynamic to static typing usually involves different languages that have distinct syntax and semantics, requiring a complex rewrite of existing programs instead of incremental evolution. Ideally, programming languages should offer programmers the option to choose between static and dynamic typing. \emph{Optional type systems}~\cite{bracha2004pluggable} and \emph{gradual typing}~\cite{siek2006gradual} are two approaches for blending static and dynamic typing in the same language. Their aim is to offer programmers the option of adding type annotations where static typing is needed, allowing the incremental migration from dynamic to static typing. The difference between these two approaches is the way they treat the runtime semantics of the language: optional type systems have the {\em trace preservation} property~\cite{strongscript}, where type annotations do not influence runtime behavior, and the original semantics of the dynamically typed language is preserved, while gradual typing changes the semantics to include more extensive runtime checking in the interface between typed and untyped code. In an optionally-typed language, dynamically typed code can lead to type errors in statically typed code, even if the optional type system is sound. In a gradually-typed language, the extra runtime checks ensure that this does not occur. The decision between the two approaches is not binary, and languages can have both optional and gradual types~\cite{liketypes,strongscript}. A key feature in optional type systems is usability. Optional type systems retrofitted to an existing language should fit the language's idioms, adding static type safety to them. If the system is too simple it may require too much of a change in the way programmers use the language. On the other hand, if it is too complex it may overload the programmers with types and error messages that are hard to understand. The most challenging aspect of designing optional type systems is to find the right amount of complexity for a type system that feels natural to the programmers. Lua is a small imperative language with lexically-scoped first-class functions where the only data structuring mechanism is the \emph{table}, an associative array. Lua prefers to provide mechanisms instead of fixed policies due to its primary use as an embedded language for configuration and extension of other applications. This means that even features such as a module system and object orientation are a matter of convention on how programs use the syntactic sugar and metaprogramming facilities of tables, instead of built-in language constructs. Tables can efficiently represent arrays, records, maps, modules, objects, etc. The result is a fragmented ecosystem of libraries, and different ideas among Lua programmers on how they should use the language features or on how they should structure programs. One example of this fragmentation is the proliferation of object systems. 
Typed Lua~\cite{maidl2014tl,maidl2015tl} is an optional type system for Lua that is rich enough to preserve some of the idioms that Lua programmers are already familiar with, while helping programmers structure large Lua programs and catch errors at compile-time. In this paper, we give a formal basis to an updated version of Typed Lua's original design, with a focus on two novel type system features that Typed Lua uses to type some tricky Lua idioms.

The lack of standard policies is a challenge for the design of an optional type system for Lua. For this reason, the design of Typed Lua has been informed by a mostly automated survey of Lua idioms used by a large corpus of Lua libraries. This paper also presents the methodology of this survey and a summary of its results.

The first idiom that we focus on in this paper is the use of assignment to build records and objects one field at a time, as in the Lua fragment in Figure~\ref{fig:newpoint}, where the {\tt new\_point} function is a constructor for objects with {\tt x} and {\tt y} fields and a {\tt move} method.

\begin{figure}[t]
{\footnotesize
\begin{verbatim}
function new_point(x, y)
  local point = {}
  point.x, point.y = x, y
  function point:move(dx, dy)
    self.x, self.y = self.x + dx, self.y + dy
  end
  return point
end
\end{verbatim}
}
\caption{Building an object piece-by-piece}
\label{fig:newpoint}
\end{figure}

The {\tt new\_point} function builds the object piece-by-piece, from an initially empty table. Typed Lua lets record and object types evolve incrementally via assignment while not sacrificing width subtyping, where we can form a supertype of a table type by ``forgetting'' fields.

The other idiom that we focus on combines overloading on the return values of a function, multiple return values, multiple assignment to decompose multiple return values, and tag checks to discriminate several returned values by checking just one of them. The Lua fragment in Figure~\ref{fig:over} shows a function with overloaded multiple returns, and how a caller of this function can discriminate which list of multiple values the function returned.

\begin{figure}[t]
{\footnotesize
\begin{verbatim}
function idiv(x, y)
  if y == 0 then
    return nil, "division by zero"
  else
    return x // y, x % y
  end
end

local q, r = idiv(a, b)
if q then
  assert(a == q*b + r)
else
  error(r)
end
\end{verbatim}
}
\caption{Overloading on multiple returns}
\label{fig:over}
\end{figure}

In the {\bf then} branch it is guaranteed that both {\tt q} and {\tt r} are integers, and in the {\bf else} branch it is guaranteed that {\tt q} is {\tt nil} and {\tt r} is a string. In Typed Lua, the return type of {\tt idiv} can be a union of tuples describing the possible return values. Typed Lua deconstructs unions of tuples into {\em projection types}, and these types track the dependencies between the types of each value in the tuple.

The remainder of the paper is organized as follows: Section \ref{sec:statistics} presents the methodology and results of our survey of Lua idioms; Section \ref{sec:types} presents the types of Typed Lua's type system; Section~\ref{sec:subtyping} presents their subtyping and consistent-subtyping relations; Section \ref{sec:rules} presents the typing rules for the two idioms that we focus on in this paper; Section \ref{sec:related} reviews related work; Section \ref{sec:conclusion} presents our conclusions and outlines plans for future work.
\section{A Survey of Lua Idioms} \label{sec:statistics} In order to find out how Lua is used in practice, to inform the design of Typed Lua, we did a survey of existing Lua libraries. Our corpus is the repository for the LuaRocks package manager~\cite{hisham2013luarocks}, downloaded on February 1st, 2014. We did not survey all of the Lua scripts in the repository: scripts that were not compatible with Lua 5.2 (the version of Lua at that time) were ignored, and we also ignored machine-generated scripts and test scripts, as those could skew the result towards non-idiomatic uses of Lua. This left 2,598 scripts out of a total of 3,928, from 262 different projects. The scripts were parsed, and their abstract syntax trees analyzed to collect the data that we show in the rest of this section. Tables are the core data structure of Lua programs, and the language has features that let tables be used as tuples, lists, maps, records, abstract data types, classes, objects, modules, etc. We performed several analyses to quantify the different ways that Lua programs use tables. Our first analysis surveyed table constructors (expressions that create and initialize a table) to find out how a table begins its life. Of all of the 23,185 table constructors of our corpus, 36\% of them create a record (a table with string keys such as \texttt{\{x = 1, y = 2\}}), 29\% of them create a list or a tuple (a table with consecutive integer keys such as \texttt{\{"one", "two", "three", "four"\}}), 26\% of them create an empty table, and 8\% of them create a table with both a record part and a list part. The remaining 1\% of them create a table with non-literal keys. We also analyzed expressions that access a table (130,448 instances), either to read a value out of it (86\% of such expressions) or to write a value into it (14\% of such expressions). Most of the expressions that read a value out of a table use a literal key, either a literal string (89\% of the reads) or a literal number (4\% of the reads). Lua has syntactic sugar that turns field accesses using ``dot'' notation into table accesses through literal string keys. In 45\% of the expressions that read a value out of a table, its value is immediately called. These calls are split almost evenly between function calls and method calls (25\% of reads are function calls, 20\% of reads are method calls). These results show that the use of tables as a namespacing mechanism for modules, as objects, and as records, is prevalent. Expressions that write into a table also mostly use literal keys (69\% of them use a literal string, 2\% of them use a literal number), although a large 29\% of writes use non-literal keys, in contrast to only 7\% of reads. In order to gauge how frequently tables are used as collections, we looked for the presence of code that iterates over tables, and found out that 23\% of the scripts in our corpus iterate over a map at least once, and 27\% of the scripts iterate over a list at least once. Besides measuring the number of method calls, we also measured other kinds of expressions and statements to gauge how frequently Lua programmers use the object-oriented paradigm. Our corpus has 24,858 function declarations, and 23\% of these are method declarations. Of the 262 projects in our corpus, 63\% use {\em metatables}, a metaprogramming mechanism that lets the program extend the behavior of tables and is mostly used for implementing prototype-based inheritance. 
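A minimal sketch of the prototype-based inheritance pattern that these metatable uses typically implement ({\tt Account}, {\tt new}, and {\tt deposit} are illustrative names):

{\footnotesize
\begin{verbatim}
local Account = {}
Account.__index = Account

function Account.new(balance)
  -- the metatable makes method lookup fall back to Account
  return setmetatable({ balance = balance }, Account)
end

function Account:deposit(v)
  self.balance = self.balance + v
end
\end{verbatim}
}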
Typed Lua's table types, described in the next section, reflect the way Lua programmers use Lua tables in practice, according to the results above. A table type can express records, tuples, maps, lists, objects, or a combination of those.

Lua modules are tables containing the modules' exports. The current idiomatic way to define a module is to populate an initially empty table with the members the module wishes to export, and return this table at the end of the script that defines the module. Around two-thirds of the modules in our corpus use this idiom. The other third uses a deprecated idiom, where a call to the {\tt module} function at the top of the script installs a new global environment, and the exported members are global variables assigned to in the rest of the module. The global environment is also a table, so the two idioms are equivalent in terms of how Typed Lua deals with module definitions. Modules in Typed Lua make extensive use of Typed Lua's rules for evolution of table types (Section~\ref{sec:tables}).

In order to gauge how common dynamic type checking is as a way to define overloaded functions, we measured how many of the functions inspect the tags of their input parameters, and found out that 9\% of the functions in our corpus do this, split evenly between using the tag in an assertion, as a form of type checking, and using the tag to decide which code to execute, as a form of overloading. While Typed Lua supports both union types and flow typing, general support for overloaded functions still has to improve.

A Lua idiom that is especially problematic for static type systems is overloading on the number and types of a function's multiple return values. Around 6\% of functions use this idiom as a way to signal errors: in case of an error, instead of returning its usual return values or throwing an exception, the function returns either {\tt nil} or {\tt false} plus an error message or error object. The caller then tests the first returned value to check if an error occurred or not. Standard union types and flow typing cannot deal with this, and Typed Lua introduces {\em projection types}, a way of tracking dependencies between types in a union of tuples after a destructuring assignment.
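Returning to the module idiom described above, a minimal sketch of the currently idiomatic style (the module and function names are illustrative):

{\footnotesize
\begin{verbatim}
local mymodule = {}

function mymodule.greet(name)
  return "hello, " .. name
end

-- the table of exports is the module
return mymodule
\end{verbatim}
}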
\section{Types} \label{sec:types} \begin{figure}[t] {\scriptsize $$ \begin{array}{rlr} F ::= & & \textsc{first-level types:}\\ & \;\; L & \textit{literal types}\\ & | \; B & \textit{base types}\\ & | \; \Nil & \textit{nil type}\\ & | \; \Value & \textit{top type}\\ & | \; \Any & \textit{dynamic type}\\ & | \; \Self & \textit{self type}\\ & | \; F_{1} \cup F_{2} & \textit{union types}\\ & | \; S_{1} \rightarrow S_{2} & \textit{function types}\\ & | \; \{\overline{F:V}\}_{unique|open|fixed|closed} & \textit{table types}\\ & |\; x & \textit{type variables}\\ & | \; \mu x.F & \textit{recursive types}\\ %\multicolumn{3}{c}{}\\ L ::= & & \textsc{{\tiny literal types:}}\\ & \False \; | \; \True \; | \; {\it int} \; | \; {\it float} \; | \; {\it string} &\\ %\multicolumn{3}{c}{}\\ B ::= & & \textsc{{\tiny base types:}}\\ & \Boolean \, |\, \Integer \, |\, \Number\, |\, \String &\\\\ %\multicolumn{3}{c}{}\\ %\multicolumn{3}{c}{}\\ V ::= & & \textsc{{\tiny value types:}}\\ & \;\; F \; | \; \Const \; F &\\ S ::= & & \textsc{second-level types:}\\ & \;\; P & \textit{tuple types}\\ & | \; S_{1} \sqcup S_{2} & \textit{unions of tuple types}\\ %\multicolumn{3}{c}{}\\ P ::= & & \textsc{{\tiny tuple types:}}\\ %& \;\; \Void & \textit{void type}\\ & | \; F{*} & \textit{variadic types}\\ & | \; F \times P & \textit{pair types}\\\\ \end{array} $$ } \caption{Type Language} \label{fig:typelang} \end{figure} \begin{figure}[t] {\footnotesize $$ \begin{array}{rlr} T ::= & & \textsc{expression types:}\\ & \;\; F & \textit{first-level types}\\ & | \; \phi(F_{1},F_{2}) & \textit{filter types}\\ & | \; \pi_{i}^{x} & \textit{projection types}\\ E ::= & & \textsc{expression list types:}\\ & \;\; T{*} & \textit{variadic types}\\ & | \; T \times E & \textit{pair types}\\ %K ::= & & \textsc{{\small key types:}}\\ %& \;\; L \; | \; B \; | \; \Value &\\ %\multicolumn{3}{c}{}\\ R ::= & & \textsc{filter result type:}\\ & \;\; \mathbf{void} & \textit{void type}\\ & |\; F & \textit{first-level types} \end{array} $$ } \caption{Special Types} \label{fig:special} \end{figure} Figure~\ref{fig:typelang} presents the abstract syntax of the core Typed Lua types. They are split into two categories: \emph{first-level types} and \emph{second-level types}. First-level types represent first-class Lua values and second-level types represent tuples of values that appear as parameters and return values of functions. Types are ordered by a subtype relationship, so Lua values may belong to several distinct types. The details of the subtyping relationship are given later in Section~\ref{sec:subtyping}. Literal types represent the type of literal boolean, number and string values. Having just one value each, their main application is in constructing table types with fields where it is guaranteed that the value of that field will not be {\tt nil}. The base types represent values that Lua tags as booleans, strings, or numbers during runtime. Prior to version Lua 5.3 all Lua numbers were double-precision floating point numbers. Lua 5.3 introduced the possibility of having 64-bit integers as numbers, with automatic promotion of integer values to floating-point representation as needed. As automatic promotion makes integers essentially subtypes of floating-point numbers, Typed Lua uses $\Number$ as the type of floating-point numbers, and $\Integer$ as the type of integers. The type $\Nil$ is the type of {\tt nil}, the value that Lua uses for undefined variables, missing parameters, and missing table keys. 
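For instance, all of the following print {\tt nil} in plain Lua (the names are illustrative):

{\footnotesize
\begin{verbatim}
local t = {}
print(t.x)              -- missing table key
local function f(a, b) return b end
print(f(1))             -- missing parameter b
print(undefined_global) -- undefined (global) variable
\end{verbatim}
}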
Many programming languages include a bottom type to represent an empty value that programmers can use as a default expression. We could have used the type $\Nil$ for this role, but this would lead to reduced static type safety. Typed Lua's type system does not have a bottom type. The type $\Value$ is the top type, which represents any Lua value. Its main purpose is to be able to easily express function types that silently ignore extra arguments, as well as easily ignoring extra r-values in multiple assignment, preserving the semantics of Lua for these operations, as they do not sacrifice type safety. The type $\Self$ is the receiver in object-oriented method definitions and method calls, and lets Typed Lua have objects as regular table types with methods as regular function types with an explicit receiver parameter. Table types represent some of the forms that Lua tables can take. What a particular table type denotes depends on its {\em tag}: the {\em unique}, {\em open} and {\em fixed} tags all denote tables where it is guaranteed that every key belongs to one of the {\em key types} $F_i$. %We restrict key types to literal types, %base types, and the top type to make the system simpler %for the user, as our survey of Lua programs showed that %this restriction reflects the way Lua tables are commonly used. Fields are assignable by default, but can be marked as non-assignable with the {\bf const} qualifier in its key type. Having a field be {\bf const} is not a guarantee that its value cannot change, as the table may have aliases with a non-{\bf const} type for that field. Non-assignable fields enable {\em depth subtyping} between table types: these fields are covariant on the types of their values, while assignable fields are always invariant. The difference between {\em unique}, {\em open} and {\em fixed} tables is the guarantees that the type system gives about the reference to a value of that type. A reference to an {\em unique} table is guaranteed to point to a table that has no other references to it, so it allows greater flexibility in reinterpreting its type; a reference to an {\em open} table is guaranteed to have only {\em closed} references pointing to the same table (we will explain the meaning of the {\em closed} tag shortly), which still lets the type system reinterpret the type of a reference, but with more restrictions; a reference to a {\em fixed} table can have any number of {\em fixed} or {\em closed} references point to it, so its type cannot change anymore. Table types with the {\em fixed} tag are suitable for typing tables that are used as maps, lists, or even a mixture of those. The guarantee that there are no keys with a type that is not one of the key types allows type-safe iteration on these tables. This guarantee forbids width subtyping, which is a necessity for object-oriented programming, so we have the {\em closed} tag, which removes the guarantee and allows width subtyping between other table types and {\em closed} table types. A {\em closed} table type is akin to an object-oriented {\em interface}. Any table type has to be \emph{well-formed}. Informally, a table type is well-formed if key types do not overlap. A formal definition of this relation depends on the consistent-subtyping relation which we present in the next section, so we delay the definition until Section~\ref{sec:tables}, where the typing rules for table types are explained. 
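Returning to the {\bf const} qualifier: the following plain Lua sketch (intended table types in comments; the variable names are ours) illustrates why assignable fields must stay invariant while {\bf const} fields can safely be covariant:

{\footnotesize
\begin{verbatim}
local p = { x = 1 }  -- p : {"x":integer}
local q = p          -- if depth subtyping gave q the type {"x":number},
q.x = 2.5            -- then this write through q would be accepted,
print(p.x)           -- but p.x would no longer hold an integer
\end{verbatim}
}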
The type $\Any$ is the dynamic type, and allows interaction between statically and dynamically typed portions of a program. It is neither a top nor a bottom type, although it can work as either, as a value of any type can be coerced to $\Any$ and vice-versa. Gradual type systems augment these coercions with information that can be used to blame the dynamically typed part of the program for type errors that may occur in the statically typed part. Currently Typed Lua only warns the programmer that coercions to and from $\Any$ may be unsafe. Second-level types are either tuple types or unions of tuple types, and represent sequences of values that appear as arguments, return values, and the left and right sides of assignments. Tuple types are heterogeneous lists of first-level types that end with a {\em variadic type}. A variadic type $F{*}$ represents an unbounded sequence of values of type $F \cup \Nil$. Lua pads sequences with {\tt nil} to have as many values as needed, and in Typed Lua sequences where the number of values is statically known have a $\Nil{*}$ tail to reflect this semantics. For clarity, we use the symbol $\sqcup$ to represent the union between two different tuple types, and reserve $\cup$ for the union of two first-level types. Figure~\ref{fig:special} presents some types that, while they may not be present in type annotations, play some special role in the type system. {\em Expression types} denote the type of expressions in the type system, which, in addition to first-level types, can also be {\em filter types}, which are the result of flow typing, and {\em projection types}, which are the result of decomposing a union of tuples. {\em Expression list types} denote the type of lists of expressions. Finally, {\em filter result types} are also used in flow typing; the {\bf void} type is a type with no values. \section{Subtyping and Consistent-Subtyping} \label{sec:subtyping} Typed Lua's type system uses subtyping to order its types and consistent-subtyping~\cite{siek2007objects,siek2013mutable} to model compatibility between types in a way that allows the interaction between statically and dynamically typed parts of the program without collapsing the subtyping relation. The consistent-subtyping relation is defined in terms of subtyping, so we focus the discussion in this section on characterizing the subtyping relation. The subtyping rules are a deduction system for the subtyping relation $\subtype$ between two types of the same kind. The variable $\senv$ is a set of subtyping assumptions used by the subtyping relation between recursive types. Most of the subtyping rules are standard, so we will focus on the interesting cases. 
\begin{figure}[t] {\footnotesize \[ \begin{array}{c} \begin{array}{c} \mylabel{S-Pair}\\ \dfrac{\senv \vdash F_{1} \subtype F_{2} \;\;\; \senv \vdash P_{1} \subtype P_{2}} {\senv \vdash F_{1} \times P_{1} \subtype F_{2} \times P_{2}} \end{array} \begin{array}{c} \mylabel{S-Variadic}\\ \dfrac{\senv \vdash F_{1} \cup \Nil \subtype F_{2} \cup \Nil} {\senv \vdash F_{1}{*} \subtype F_{2}{*}} \end{array}\\\\ \begin{array}{c} \mylabel{S-VarExpand}\\ \dfrac{\senv \vdash F_{1} \cup \Nil \subtype F_{2} \;\;\; \senv \vdash F_{1}{*} \subtype P_{2}} {\senv \vdash F_{1}{*} \subtype F_{2} \times P_{2}} \end{array} \\ \\ \begin{array}{c} \mylabel{S-VarDrop}\\ \dfrac{\senv \vdash F_{1} \subtype F_{2} \cup \Nil \;\;\; \senv \vdash P_{1} \subtype F_{2}{*}} {\senv \vdash F_{1} \times P_{1} \subtype F_{2}{*}} \end{array} \end{array} \] } \caption{Subtyping for second-level types} \label{fig:subtuples} \end{figure} Figure~\ref{fig:subtuples} shows the subtyping rules for second-level types. Expression list types have similar subtyping rules. Tuple types are covariant, so the subtyping rule {\sc S-Pair} for pair types is straightforward. The semantics of Lua lets the right-hand side of an assignment produce more values than l-values in the left-hand side, with extra values silently dropped. It also lets the right-hand side produce fewer values than l-values in the left-hand side, using {\tt nil} to plug the holes. The same behavior applies for lists of arguments vs. parameters. Typed Lua preserves this behavior in assignment by synthesizing tuple types that have a $\Value{*}$ tail for the left side of an assignment. Non-variadic functions have a stricter behavior: their parameter lists have a $\Nil{*}$ tail, so a function can be passed any number of extra arguments as long as they are literal {\tt nil}, but the type system will reject extra arguments of any other type. Missing arguments are also fine if the corresponding parameters have {\tt nil} as a subtype: this is the Lua idiom for optional parameters. Functions that want to keep the Lua behavior of just ignoring any extra arguments can be variadic with type $\Value{*}$ as the tail of their parameter tuple. The remaining subtyping rules for second-level types reflect this semantics. Rule {\sc S-Variadic} states that variadic types are covariant (just having $F_1{*} \subtype F_2{*}$ iff $F_1 \subtype F_2$ would be too restrictive, as $\Nil{*}$ would not be a subtype of $\Number{*}$). Rules {\sc S-VarExpand} and {\sc S-VarDrop} state the conditions when variadic tuples of different length are compatible. We could have the axioms $\senv \vdash F{*} \subtype F \cup \Nil \times F{*}$ and $\senv \vdash F \cup \Nil \times F{*} \subtype F{*}$ instead of the two rules above, which can be derived from these axioms by transitivity, but having the rules makes the subtyping relation easier to implement. 
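As a reminder of the Lua adjustment semantics that these rules encode (a small sketch; the variable and function names are ours):

{\footnotesize
\begin{verbatim}
local function two() return 1, 2 end
local a, b, c = two()  -- c is nil: the value list is padded with nils
local d = two()        -- the extra value 2 is silently dropped
print(two(), 10)       -- two() is adjusted to one value: prints 1 10
\end{verbatim}
}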
\begin{figure}[t] {\footnotesize \[ \begin{array}{c} \begin{array}{c} \mylabel{S-TableFCtoC}\\ \dfrac{\begin{array}{c} \forall j \; \exists i \;\;\; \senv \vdash F_{i} \subtype F_{j}' \;\ \senv \vdash F_{j}' \subtype F_{i} \; \senv \vdash V_{i} \subtype_{c} V_{j}' \end{array}} {\begin{array}{c} \senv \vdash \{\overline{F:V}\}_{fixed|closed} \subtype \{\overline{F^\prime:V^\prime}\}_{closed} \end{array}} \end{array}\\\\ \begin{array}{c} \mylabel{S-TableUtoC}\\ \dfrac{\begin{array}{c} \forall i \; \forall j \;\;\; \senv \vdash F_{i} \subtype F_{j}' \to \senv \vdash V_{i} \subtype_{u} V_{j}' \\ \forall j \; \nexists i \;\;\; \senv \vdash F_{i} \subtype F_{j}' \to \senv \vdash \Nil \subtype_{o} V_{j}' \end{array}} {\senv \vdash \{\overline{F:V}\}_{unique} \subtype \{\overline{F^\prime:V^\prime}\}_{closed}} \end{array}\\\\ \begin{array}{c} \mylabel{S-TableOtoC}\\ \dfrac{\begin{array}{c} \forall i \; \forall j \;\;\; \senv \vdash F_{i} \subtype F_{j}' \to \senv \vdash V_{i} \subtype_{c} V_{j}' \\ \forall j \; \nexists i \;\;\; \senv \vdash F_{i} \subtype F_{j}' \to \senv \vdash \Nil \subtype_{o} V_{j}' \end{array}} {\senv \vdash \{\overline{F:V}\}_{open} \subtype \{\overline{F^\prime:V^\prime}\}_{closed}} \end{array}\\\\ \begin{array}{c} \mylabel{S-TableUtoUOF}\\ \dfrac{\begin{array}{c} \forall i \; \exists j \;\;\; \senv \vdash F_{i} \subtype F_{j}' \land \senv \vdash V_{i} \subtype_{u} V_{j}' \\ \forall j \; \nexists i \;\;\; \senv \vdash F_{i} \subtype F_{j}' \to \senv \vdash \Nil \subtype_{o} V_{j}' \end{array}} {\begin{array}{c} \senv \vdash \{\overline{F:V}\}_{unique} \subtype \{\overline{F^\prime:V^\prime}\}_{unique|open|fixed} \end{array}} \end{array}\\\\ \begin{array}{c} \mylabel{S-TableOtoOF}\\ \dfrac{\begin{array}{c} \forall i \; \exists j \;\;\; \senv \vdash F_{i} \subtype F_{j}' \land \senv \vdash V_{i} \subtype_{c} V_{j}' \\ \forall j \; \nexists i \;\;\; \senv \vdash F_{i} \subtype F_{j}' \to \senv \vdash \Nil \subtype_{o} V_{j}' \end{array}} {\begin{array}{c} \senv \vdash \{\overline{F:V}\}_{open} \subtype \{\overline{F^\prime:V^\prime}\}_{open|fixed} \end{array}} \end{array}\\\\ \begin{array}{c} \mylabel{S-TableFtoF}\\ \dfrac{\begin{array}{c} \forall j \; \exists i \;\;\; \senv \vdash F_{i} \subtype F_{j}' \; \senv \vdash F_{j}' \subtype F_{i} \; \senv \vdash V_{i} \subtype_{c} V_{j}'\\ \forall i \; \exists j \;\;\; \senv \vdash F_{i} \subtype F_{j}' \; \senv \vdash F_{j}' \subtype F_{i} \; \senv \vdash V_{i} \subtype_{c} V_{j}'\\ \end{array}} {\senv \vdash \{\overline{F:V}\}_{fixed} \subtype \{\overline{F^\prime:V^\prime}\}_{fixed}} \end{array} \end{array} \] } \caption{Subtyping for table types} \label{fig:tablesub} \end{figure} \begin{figure}[t] {\footnotesize \[ \begin{array}{c} \begin{array}{c} \mylabel{S-FieldClsMtoM}\\ \dfrac{\senv \vdash F_{1} \subtype F_{2} \;\;\; \senv \vdash F_{2} \subtype F_{1}} {\senv \vdash F_{1} \subtype_{c} F_{2}} \end{array} \; \begin{array}{c} \mylabel{S-FieldClsItoI}\\ \dfrac{\senv \vdash F_{1} \subtype F_{2}} {\senv \vdash \Const \; F_{1} \subtype_{c} \Const \; F_{2}} \end{array} \\ \\ \begin{array}{c} \mylabel{S-FieldClsMtoI}\\ \dfrac{\senv \vdash F_{1} \subtype F_{2}} {\senv \vdash F_{1} \subtype_{c} \Const \; F_{2}} \end{array}\\\\ \begin{array}{c} \mylabel{S-FieldUnqMtoM}\\ \dfrac{\senv \vdash F_{1} \subtype F_{2}} {\senv \vdash F_{1} \subtype_{u} F_{2}} \end{array} \; \begin{array}{c} \mylabel{S-FieldUnqItoM}\\ \dfrac{\senv \vdash F_{1} \subtype F_{2}} {\senv \vdash \Const \; F_{1} \subtype_{u} F_{2}} \end{array} \\ \\ \begin{array}{c} 
\mylabel{S-FieldUnqItoI}\\
\dfrac{\senv \vdash F_{1} \subtype F_{2}}
      {\senv \vdash \Const \; F_{1} \subtype_{u} \Const \; F_{2}}
\end{array}
\;
\begin{array}{c}
\mylabel{S-FieldUnqMtoI}\\
\dfrac{\senv \vdash F_{1} \subtype F_{2}}
      {\senv \vdash F_{1} \subtype_{u} \Const \; F_{2}}
\end{array} \\ \\
\begin{array}{c}
\mylabel{S-FieldOpnNtoM}\\
\dfrac{\senv \vdash \Nil \subtype F}
      {\senv \vdash \Nil \subtype_{o} F}
\end{array}
\;
\begin{array}{c}
\mylabel{S-FieldOpnNtoI}\\
\dfrac{\senv \vdash \Nil \subtype F}
      {\senv \vdash \Nil \subtype_{o} \Const \; F}
\end{array}
\end{array}
\]
}
\caption{Subtyping for fields}
\label{fig:subfield}
\end{figure}

Figure~\ref{fig:tablesub} shows the subtyping rules for table types.
These rules use the auxiliary relations $\subtype_{c}$, $\subtype_{u}$, and
$\subtype_{o}$ shown in Figure~\ref{fig:subfield} to handle depth subtyping
where applicable.

Rule {\sc S-TableFCtoC} for subtyping between a {\em fixed} or {\em closed}
table type and a {\em closed} table type resembles the standard subtyping rule
between mutable records, with width subtyping plus depth subtyping from a
mutable or immutable field to an immutable field.

Rule {\sc S-TableUtoC} for subtyping between {\em unique} and {\em closed}
table types is more permissive, as the typing rules of Section~\ref{sec:tables}
guarantee that a {\em unique} table type can only appear on the left side of a
subtyping judgment if the reference to the table with the {\em unique} tag is
being destroyed. This rule allows width subtyping, depth subtyping on any
field, not only immutable fields, as well as the addition of new fields and the
merging of existing fields, as long as the types of the values of new and
merged fields include $\Nil$.

The permissiveness of {\sc S-TableUtoC} rests on two facts: in Lua semantics,
reading a non-existing key is not an error and just yields $\Nil$; and the type
system guarantees that a table with a {\em unique} type has no aliases and no
keys whose values do not belong to one of its key types.
With these guarantees, a table with type
$\{1{:}\String, 2{:}\String, ``x"{:}\Integer, ``y"{:}\Value\}_{unique}$ can be
safely recast as a table with type
$\{\Number{:}\String \cup \Nil, ``x"{:}\Number, ``z"{:} \Number \cup \Nil\}_{closed}$
as long as it becomes inaccessible with the original type, which
Section~\ref{sec:tables} will show is the case for any type judgment where this
subtyping rule is used.

Rule {\sc S-TableOtoC} for subtyping from an {\em open} table type to a
{\em closed} table type also allows width subtyping, merging fields, and adding
new fields, but forbids depth subtyping between mutable fields or an immutable
and a mutable field, as the typing rules guarantee that the {\em open}
reference is destroyed, but the {\em open} table may already have {\em closed}
aliases.

Rule {\sc S-TableUtoUOF} for subtyping from {\em unique} to {\em unique},
{\em open} or {\em fixed} table types is still permissive, but forbids standard
width subtyping, as all these table types keep the guarantee that the values of
all keys in tables of these types belong to one of their key types. Merging
fields and adding new fields are still allowed.

Rule {\sc S-TableOtoOF} for subtyping from {\em open} to {\em open} or
{\em fixed} table types mixes the depth subtyping restrictions of rule
{\sc S-TableOtoC} with the width subtyping restrictions of rule
{\sc S-TableUtoUOF}, but still allows merging fields and adding new fields.
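The permissiveness of the rules above ultimately rests on a basic property of Lua tables worth recalling: indexing an absent key is not an error, it simply yields {\tt nil}, as in this small (untyped) example:

{\footnotesize
\begin{verbatim}
local t = { x = 1 }
print(t.x)   --> 1
print(t.y)   --> nil: reading an absent key is not an error
t.y = 2      -- the key only comes into existence when it is assigned
\end{verbatim}
}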
Finally, rule {\sc S-TableFtoF} for subtyping between {\em fixed} table types is the most restrictive, and does not allow width subtyping, merging fields, or adding fields, just standard depth subtyping. We will revisit the subtyping rules among table types in Section~\ref{sec:tables}. \begin{figure}[t] {\footnotesize \[ \begin{array}{c} \begin{array}{c} \mylabel{S-Amber}\\ \dfrac{\senv[x_{1} \subtype x_{2}] \vdash F_{1} \subtype F_{2}} {\senv \vdash \mu x_{1}.F_{1} \subtype \mu x_{2}.F_{2}} \end{array} \; \begin{array}{c} \mylabel{S-Assumption}\\ \dfrac{x_{1} \subtype x_{2} \in \senv} {\senv \vdash x_{1} \subtype x_{2}} \end{array}\\\\ \begin{array}{c} \mylabel{S-UnfoldRight}\\ \dfrac{\senv \vdash F_{1} \subtype [x \mapsto \mu x.F_{2}]F_{2}} {\senv \vdash F_{1} \subtype \mu x.F_{2}} \end{array} \; \begin{array}{c} \mylabel{S-UnfoldLeft}\\ \dfrac{\senv \vdash [x \mapsto \mu x.F_{1}]F_{1} \subtype F_{2}} {\senv \vdash \mu x.F_{1} \subtype F_{2}} \end{array} \end{array} \] } \caption{Subtyping for recursive types} \label{fig:subrec} \end{figure} \begin{figure}[t] {\scriptsize $$ \begin{array}{rlr} s ::= & & \textsc{statements:}\\ & \;\; \mathbf{skip} & \textit{skip}\\ & | \; s_{1} \; ; \; s_{2} & \textit{sequence}\\ & | \; \overline{l} = el & \textit{multiple assignment}\\ & | \; \mathbf{while} \; e \; \mathbf{do} \; s \; | \; \mathbf{if} \; e \; \mathbf{then} \; s_{1} \; \mathbf{else} \; s_{2} & \textit{control flow}\\ & | \; \mathbf{local} \; \overline{id{:}F} = el \; \mathbf{in} \; s & \textit{variable declaration}\\ & | \; \mathbf{local} \; \overline{id} = el \; \mathbf{in} \; s & \textit{variable declaration}\\ & | \; \mathbf{rec} \; id{:}F = e \; \mathbf{in} \; s & \textit{recursive declaration} \\ & | \; \mathbf{return} \; el & \textit{return} \\ & | \; \lfloor a \rfloor_{0} & \textit{function call}\\ & | \; \mathbf{fun} \; id_{1}{:}id_{2} \; (pl){:}S \; s \;;\; \mathbf{return} \; el & \textit{method declaration}\\ e ::= & & \textsc{expressions:}\\ & \;\; \mathbf{nil} & \textit{nil}\\ & | \; k & \textit{other literals}\\ & | \; id & \textit{variable access}\\ & | \; e_{1}[e_{2}] & \textit{table access}\\ & | \; {<}F{>} \; id & \textit{type coercion}\\ & | \; f & \textit{function declaration}\\ & | \; \{ \; \overline{[e_{1}] = e_{2}} \; \} \; | \; \{ \; \overline{[e_{1}] = e_{2}},me \; \} & \textit{table constructor}\\ & | \; e_{1} + e_{2} \; | \; e_{1} \; {..} \; e_{2} \; | \; e_{1} == e_{2} \; | \; e_{1} < e_{2} & \textit{binary operations}\\ & | \; e_{1} \;\&\; e_{2} \; | \; e_{1} \; \mathbf{and} \; e_{2} \; | \; e_{1} \; \mathbf{or} \; e_{2} & \textit{binary operations}\\ & | \; \mathbf{not} \; e \; | \; \# \; e & \textit{unary operations} \\ & | \; \lfloor me \rfloor_{1} & \textit{one result}\\ l ::= & & \textsc{lvalues:}\\ & \;\; id_{l} & \textit{variable assignment}\\ & | \; e_{1}[e_{2}]_{l} & \textit{table assignment}\\ & | \; id[e] \; {<}V{>} & \textit{type coercion}\\ k ::= & & \textsc{literals:}\\ & \;\; \mathbf{false} \; | \; \mathbf{true} \; | \; {\it int} \; | \; {\it float} \; | \; {\it string} & \\ el ::= & & \textsc{expression lists:}\\ & \;\; \overline{e} \; | \; \overline{e}, me & \\ me ::= & & \textsc{multiple results:}\\ & \;\; a & \textit{application}\\ & | \; {...} & \textit{vararg expression}\\ a ::= & & \textsc{calls:}\\ & \;\; e(el) & \textit{function call}\\ & | \; e{:}n(el) & \textit{method call}\\ f ::= & & \textsc{functions:}\\ & \;\; \mathbf{fun} \; (pl){:}S \; s \;;\; \mathbf{return} \; el & \\ pl ::= & & \textsc{parameter lists:}\\ & \;\; 
\overline{id{:}F} \; | \; \overline{id{:}F},{...}{:}F & \\ \end{array} $$ } \caption{Abstract Syntax} \label{fig:syntax} \end{figure} Figure~\ref{fig:subrec} shows the subtyping rules for recursive types. Rule {\sc S-Amber} is the \emph{Amber rule}~\cite{cardelli1986amber} for subtyping between two recursive types. Rules {\sc S-UnfoldRight} and {\sc S-UnfoldLeft} unfold the recursive type for establishing subtyping between a non-recursive and a recursive type, or vice-versa. The dynamic type $\Any$ is only a subtype of itself and of $\Value$, and no type except $\Any$ is a subtype of $\Any$. The type $\Any$ interacts with other types through the {\em consistency} and {\em consistent-subtyping} relations~\cite{siek2007objects}. We use the symbol $\sim$ for consistency, and $\lesssim$ for consistent-subtyping. The type $\Any$ is trivially consistent with all types, and all types are consistent with $\Any$. A type is also consistent with itself. Two union types $F_1 \cup F_2$ and $F_3 \cup F_4$ are consistent if either $F_1 \sim F_3$ and $F_2 \sim F_4$ or $F_1 \sim F_4$ and $F_2 \sim F_3$. Two function types $S_1 \rightarrow S_2$ and $S_3 \rightarrow S_4$ are consistent if and only if $S_3 \sim S_1$ and $S_2 \sim S_4$. Two table types are consistent if and only if they have the same key types, and the value types of each key type are consistent. Two recursive types are consistent if and only if their contents are consistent assuming that their type variables are consistent. Unions of tuple types follow the same rules as unions of first-level types. Variadic types are consistent if and only if their base types are consistent. Finally, pair types $F_1 \times P_1$ and $F_2 \times P_2$ are consistent if and only if $F_1 \sim F_2$ and $P_1 \sim P_2$. In a language with subtyping, the consistency relation is too restrictive. We need to combine it with subtyping into a {\em consistent-subtyping} relation. The definition is straightforward: a type is a consistent-subtype of another if the first type is a subtype of a type that is consistent with the other, or if the first type is consistent with a type that is a subtype of the other. The implementation of Typed Lua also uses consistent-subtyping to normalize and simplify union types. For instance, the union type $\Boolean \cup \Any$ simplifies to $\Any$, because $\Boolean$ is a consistent-subtype of $\Any$. Another example is the union type $\Number \cup \Nil \cup 1$, that simplifies to $\Number \cup \Nil$, because $1$ is a consistent-subtype of $\Number$. 
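A small sketch of how consistent-subtyping shows up in practice, assuming Typed Lua's concrete annotation syntax {\tt local name:type} (the surface syntax is not shown in this excerpt, so details may differ):

{\footnotesize
\begin{verbatim}
local x:any = "hello"   -- every type is a consistent-subtype of any
local n:number = x      -- accepted statically: any is consistent with number
local b:boolean = 1     -- rejected: 1 is not a consistent-subtype of boolean
\end{verbatim}
}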
\section{Typing rules} \label{sec:rules} \begin{figure*}[t] {\footnotesize \[ \begin{array}{c} \begin{array}{c} \mylabel{T-Constructor}\\ \dfrac{\begin{array}{c} \overline{\env_v, \penv \vdash e_1: F_1, \env_f} \;\;\; \overline{\env_1, \penv \vdash e_2: F_2, \env_v} \\ T = \{\overline{F_1{:}vt(F_1,F_2)}\}_{unique} \;\;\; \env_{2} = merge(\overline{\env_{f}}) \;\;\; wf(T) \end{array}} {\env_{1}, \penv \vdash \{\;\overline{[e_{1}] = e_{2}}\;\}:T, \env_{2}} \end{array} \begin{array}{c} \mylabel{T-NewFieldUnique}\\ \dfrac{\begin{array}{c} \env_{1}(id) = \{ \overline{F:V} \}_{unique}\\ \env_{1}, \penv \vdash e:F^{new}, \env_{2} \;\;\; \overline{F^{new} \not\lesssim F} \;\;\; V^{new} = vt(F^{new}, V) \end{array}} {\begin{array}{l} \env_{1}, \penv \vdash id[e] {<}V{>}:V^{new}, \env_{2}[id \mapsto \{ \overline{F{:}V}, F^{new}{:}V^{new}\}_{unique}] \end{array}} \end{array}\\\\ \begin{array}{c} \mylabel{T-IdRead}\\ \dfrac{\env(id) = F} {\env, \penv \vdash id:close(F), \env[id \mapsto open(F)]} \end{array} \begin{array}{c} \mylabel{T-IdWrite}\\ \dfrac{\env(id) = F} {\env, \penv \vdash id_{l}:F, \env} \end{array} \begin{array}{c} \mylabel{T-IndexIdRead}\\ \dfrac{\begin{array}{c} \env_{1}(id) = \{\overline{F:V}\} \;\;\; \env_{1}, \penv \vdash e_{2}:F^e, \env_{2} \;\;\; F^e \lesssim F_{i} \end{array}} {\env_{1}, \penv \vdash id[e_{2}]:rconst(V_{i}), \env_{2}} \end{array} \\ \\ \begin{array}{c} \mylabel{T-IndexExpRead}\\ \dfrac{\begin{array}{c} \env_{1}, \penv \vdash e_{1}:\{\overline{F:V}\}, \env_{2}\\ \env_{2}, \penv \vdash e_{2}:F^e, \env_{3} \;\;\; F^e \lesssim F_{i} \end{array}} {\env_{1}, \penv \vdash e_{1}[e_{2}]:rconst(V_{i}), \env_{3}} \end{array} \begin{array}{c} \mylabel{T-CoerceClosed}\\ \dfrac{\env(id) \subtype F \;\;\; tag(F,closed)} {\env, \penv \vdash {<}F{>} \; id:F, \env[id \mapsto reopen(F)]} \end{array} \begin{array}{c} \mylabel{T-CoerceFixed}\\ \dfrac{\env(id) \subtype F \;\;\; tag(F,fixed)} {\env, \penv \vdash {<}F{>} \; id:F, \env[id \mapsto F]} \end{array}\\\\ \begin{array}{c} \mylabel{T-Function}\\ \dfrac{\begin{array}{c} closeall(\env_{1})[\overline{id \mapsto F}], \penv[\ret \mapsto S] \vdash s, \env_{2}\\ \env_{3} = openset(\env_{1}, frv(\mathbf{fun} \; (\overline{id{:}F}){:}S \; s)) \\ \env_{4} = closeset(\env_{3}, fav(\mathbf{fun} \; (\overline{id{:}F}){:}S \; s)) \end{array}} {\env_{1}, \penv \vdash \mathbf{fun} \; (\overline{id{:}F}){:}S \; s:F_1 \times \ldots F_n \times \Void \rightarrow S, \env_{4}} \end{array} \begin{array}{c} \mylabel{T-Local}\\ \dfrac{\begin{array}{c} \env_{1}, \penv \vdash el:P, \env_{2} \\ P \lesssim F_1 \times \ldots \times F_n \times \Value{*} \\ \env_{2}[\overline{id \mapsto F}], \penv \vdash s, \env_{3} \end{array}} {\env_{1}, \penv \vdash \mathbf{local} \; \overline{id{:}F} = el \; \mathbf{in} \; s, (\env_{3} - \{\overline{id}\})[\overline{id \mapsto \env_2(id)}]} \end{array} \end{array} \] } \caption{Typing Rules for Table Types} \label{fig:tablerules} \end{figure*} Figure \ref{fig:syntax} presents the abstract syntax of Core Typed Lua, a large subset of concrete syntax that the Typed Lua compiler handles. We use this abstract syntax in the presentation of the typing rules for tables and evolution of table types (Section~\ref{sec:tables}) and for functions with overloaded multiple returns and projection types (Section~\ref{sec:projections}). Core Typed Lua simplifies the presentation of typing rules without sacrificing expressiveness. 
The main differences between Core Typed Lua and the concrete syntax are:

\begin{itemize}
\item lack of labels and goto statements, as they are incompatible with the
{\em flow typing} of Typed Lua's type system;
\item local variable declarations have an explicit block, instead of being just
statements;
\item no {\bf elseif} clause in {\bf if} statements;
\item no {\bf repeat} and {\bf for} loops, just {\bf while};
\item the only arithmetic operator is $+$, the only relational operators are
$==$ and $<$, and the only bitwise operator is $\&$, as all other operators
have typing rules identical to one of these;
\item type coercion for evolution of table types is explicit, with two coercion
terms, the l-value $id[e] \; {<}V{>}$ for adding a field to a table through
assignment, and the r-value ${<}F{>} \;id$ for reinterpreting a table type as a
compatible type that is expected by the context;
\item all function declarations are anonymous, so there is a statement for
recursive declarations;
\item function or method applications that throw away all results or all but
the first result are explicitly annotated as such, instead of this being
implicit in their syntactic context;
\item identifiers that appear in l-value position are explicitly annotated as
such with an $l$ subscript.
\end{itemize}

We present the typing rules as a deduction system for two typing relations, one
for typing statements and another for typing expressions. Typing judgments for
statements have the form $\env_{1}, \penv \vdash s, \env_{2}$, meaning that
given a type environment $\env_{1}$ and a {\em projection environment} $\penv$
we can prove that a statement $s$ produces a new type environment $\env_{2}$.
Typing judgments for expressions have the form
$\env_{1}, \penv \vdash e : T, \env_{2}$, meaning that given a type environment
$\env_{1}$ and a projection environment $\penv$ we can prove that an expression
$e$ has type $T$ and produces a new type environment $\env_{2}$. Only the most
relevant rules are shown. The first author's Ph.D. thesis has the full set of
rules~\cite{maidl2015tl}.

The type environment is standard, and maps identifiers to expression types.
Some typing rules of Section~\ref{sec:tables} can change the type of an
identifier, so the typing relation has both an input and an output type
environment. The {\em projection environment} $\Pi$ maps projection labels to
second-level types, and we explain its use in Section~\ref{sec:projections}.
\subsection{Tables and Evolution of Table Types} \label{sec:tables} \begin{figure} {\footnotesize \[ \begin{array}{rcl} wf(\{\overline{F:V}\}_{u|o|f|c}) & = & \forall i \; ((\nexists j \; i \not= j \,\wedge\, F_{i} \lesssim F_{j}) \,\wedge \\ & & wf(V_{i}) \,\wedge\, \lnot tag(V_{i},unique) \,\wedge\\ & & \lnot tag(V_{i},open))\\ wf({\bf const}\; F) & = & wf(F) \\ wf(F_1 \cup F_2) & = & wf(F_1) \,\wedge\, wf(F_2) \\ wf(\mu x.F) & = & wf(F) \\ wf(S_1 \rightarrow S_2) & = & wf(S_1) \,\wedge\, wf(S_2) \\ wf(S_1 \sqcup S_2) & = & wf(S_1) \,\wedge\, wf(S_2)\\ wf(F{*}) &=& wf(F) \\ wf(F \times P) &=& wf(F) \,\wedge\,wf(P)\\ wf(F) & = & \top \;\;\;\mathrm{for\; all\; other\; cases}\\\\ tag(F_1 \cup F_2, t) & = & tag(F_1, t) \,\vee\, tag(F_2,t) \\ tag(\{\overline{F:V}\}_t, t) & = & \top\\ \multicolumn{3}{c}{ \;\;\;\;\mathrm{other\;recusive\;cases\;are\;similar,\;other\;base\;cases\;are\;}\bot}\\ \\\\ vt(L, V) & = & fix(V) \\ vt(F_1, F_2) & = & nil(fix(F_2))\\ vt(F_1, {\bf const}\;F_2) & = & {\bf const}\;nil(fix(F_2))\\\\ nil(T) & = & T \;\;\; \mathrm{if}\;\Nil \lesssim T\\ nil(T) & = & T \cup \Nil \;\;\; \mathrm{otherwise}\\\\ fix(\{\overline{F:V}\}_{unique|open}) & = & \{\overline{F:V}\}_{fixed}\\ \multicolumn{3}{c}{\mathrm{other\;cases\;by\;straightforward\;structural\;recursion}} \end{array} \]} \caption{Auxiliary Functions for Table Types} \label{fig:funcs} \end{figure} \begin{figure*} {\footnotesize \[ \begin{array}{rcl} merge(\overline{\env}) & = & reduce(\overline{\env}, merge2)\\ merge2(\env_1,\env_2) & = & \{\overline{(id,merget(\env_1(id),\env_2(id))}\}\\ merget(T_1, T_2) & = & T_1 \;\;\; \mathrm{if\;} T_2 \lesssim T_1\\ merget(T_1, T_2) & = & T_2 \;\;\; \mathrm{if\;} T_1 \lesssim T_2\\ \multicolumn{3}{c}{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mathrm{the\;next\;case\;applies\;if}\; \overline{V^l \lesssim_u V_r \,\vee\, V^r \lesssim_u V_l}\;\mathrm{and\;the\;right\; side\; is\;}wf}\\ merget(\{\overline{F:V^l},\overline{F^{\prime}:V^\prime}\}_{unique}, \{\overline{F:V^r},\overline{F^{\prime\prime}:V^{\prime\prime}}\}_{unique}) &=& \{\overline{F:sup_u(V^l,V^r)},\overline{F^\prime:V^\prime},\overline{F^{\prime\prime}:V^{\prime\prime}}\}_{unique} \\ \multicolumn{3}{c}{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mathrm{the\;next\;case\;applies\;if}\; \overline{V^l \lesssim_c V_r \,\vee\, V^r \lesssim_c V_l}\;\mathrm{and\;the\;right\; side\; is\;}wf}\\ merget(\{\overline{F:V^l},\overline{F^{\prime}:V^{\prime}}\}_{unique|open}, \{\overline{F:V^r},\overline{F^{\prime\prime}:V^{\prime\prime}}\}_{unique|open}) &=& \{\overline{F:sup_c(V^l,V^r)},\overline{F^\prime:V^\prime},\overline{F^{\prime\prime}:V^{\prime\prime}}\}_{open} \\ merget(T_1, T_2) & = & \bot \;\;\; \mathrm{otherwise}\\\\ join(\env_1,\env_2) & = & \{\overline{(id,joint(\env_1(id),\env_2(id))}\}\\ joint(T_1, T_2) & = & T_1 \;\;\; \mathrm{if\;} T_2 \lesssim T_1\\ joint(T_1, T_2) & = & T_2 \;\;\; \mathrm{if\;} T_1 \lesssim T_2\\ \multicolumn{3}{c}{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mathrm{the\;next\;case\;applies\;if}\; \overline{V^l \lesssim_u V_r \,\vee\, V^r \lesssim_u V_l}\;\mathrm{and\;the\;right\; side\; is\;}wf}\\ joint(\{\overline{F:V^l},\overline{F^{\prime}:V^{\prime}}\}_{unique}, \{\overline{F:V^r},\overline{F^{\prime\prime}:V^{\prime\prime}}\}_{unique}) &=& \{\overline{F:sup_u(V^l,V^r)},\overline{F^\prime:nil(V^\prime)},\overline{F^{\prime\prime}:nil(V^{\prime\prime})}\}_{unique}\\ joint(T_1, T_2) & = & \bot \;\;\; \mathrm{otherwise}\\\\ sup_u(V_1, V_2) \,=\, V_2 \;\;\; 
\mathrm{if}\; V_1 \lesssim_u V_2 & &
sup_u(V_1, V_2) \,=\, V_1 \;\;\; \mathrm{if}\; V_2 \lesssim_u V_1 \\ \\
reopen(\{\overline{F:V}\}_{closed}) \,=\, \{\overline{F:V}\}_{open} & &
reopen(F) \,=\, F\end{array}
\]}
\caption{Merge and Join Functions}
\label{fig:mergejoin}
\end{figure*}

The simplest expression involving tables is the empty table constructor $\{\}$.
The empty table constructor always has type $\{\}_{unique}$. A more interesting
example is $\{ [``x"] = 1, [``y"] = 2 \}$, which has type
$\{ ``x":1 , ``y": 2 \}_{unique}$. The subtyping rule for {\em unique} tables
lets us assign this table to a variable with a more general type:
$\{ ``x":\Number , ``y": \Number \}_{unique}$, or even
$\{ \String:\Number \cup \Nil \}_{unique}$.

Figure~\ref{fig:tablerules} presents the typing rules for table types. Rule
{\sc T-Constructor} is the rule for constructors like the examples above, which
do not end with an expression that potentially returns multiple values. Due to
the system for changing the table type of variables that we will explain in
this section, any expression can potentially change the type environment. Lua
semantics does not dictate an order of evaluation among the pairs of field
initializers, so all of them are typed in the same input type environment. The
$merge$ auxiliary function then combines the output environments, diverging in
case of conflicting changes. A field may always be added to a {\em unique} or
{\em open} table in one environment but not in the other. Two environments may
also add a field with the same key, as long as the added types are the same or
one is a supertype of the other, respecting any restrictions with regard to
depth subtyping that the table may have.

Rule {\sc T-Constructor} also checks if the resulting type is {\em well-formed}
with the predicate $wf$. Checking for well-formedness avoids ambiguous table
types. An example of an ambiguous table type is the type
$\{1:\Number, \Integer:\String, \Any:\Boolean\}$, where indexing it with a key
of type $\Integer$ could yield either a number, a string or a boolean at
runtime.

Well-formed table types also do not allow \emph{unique} and \emph{open} table
types to appear in the type of the values. At the moment, local variables are
the only kind of reference that the system can track, so this restriction is
necessary. This limitation of the type system is the reason why the
{\sc T-Constructor} rule uses the auxiliary function $fix$ (in the definition
of $vt$) to change any {\em open} and {\em unique} table types used in the
field initializer to {\em fixed}. As an example, the table constructor
$\{[``x"] = 1, [``y"] = \{[``z"] = 2\}\}$ has type
$\{``x":1, ``y":\{``z":2\}_{fixed}\}_{unique}$.

There are two ways the type of {\em unique} and {\em open} tables can change in
Typed Lua. The first way is by assignment to a non-existing field. Let us go
back to the example shown in Figure~\ref{fig:newpoint}. Assuming that the types
of $x$ and $y$ are $\Number$, we can translate the first three lines of
function {\tt new\_point} to the abstract syntax of Figure~\ref{fig:syntax}:

{\footnotesize
\begin{center}
\begin{tabular}{ll}
\multicolumn{2}{l}{$\mathbf{local} \; point: \{\}_{unique} = \{\} \; \mathbf{in}$}\\
& \multicolumn{1}{l}{$point[``x"] \; {<}\Number{>} = x;\; point[``y"] \; {<}\Number{>} = y$}
\end{tabular}
\end{center}
}

The expressions on the left side of the assignment are type coercion l-values,
and their type-checking is responsible for changing the type of $point$.
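For reference, the concrete idiom being typed here looks roughly like the following in plain Lua (Figure~\ref{fig:newpoint} is not reproduced in this excerpt, so the exact code may differ; the method definitions are elided):

{\footnotesize
\begin{verbatim}
local function new_point(x, y)
  local point = {}   -- point : {}_unique
  point.x = x        -- point : {"x":number}_unique
  point.y = y        -- point : {"x":number, "y":number}_unique
  -- ...method definitions are added the same way...
  return point
end
\end{verbatim}
}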
The type annotations in these expressions are not required in actual Typed Lua
code, as the compiler infers the types from the corresponding expression on the
right side. Rule {\sc T-NewFieldUnique} typechecks coercion l-values on
{\em unique} table types, guaranteeing that the new type is still well-formed.
In the example above, the left side of the first assignment changes the type of
$point$ to $\{``x":\Number\}_{unique}$. The rest of the assignment typechecks,
as the right-hand side has type $\Number$. The second assignment changes the
type of $point$ to $\{``x":\Number, ``y": \Number\}_{unique}$.

We could add both fields with a single assignment statement, as in the
following example:

{\footnotesize
\begin{center}
\begin{tabular}{ll}
\multicolumn{2}{l}{$\mathbf{local} \; point: \{\}_{unique} = \{\} \; \mathbf{in}$}\\
& \multicolumn{1}{l}{$point[``x"] \; {<}\Number{>},point[``y"] \; {<}\Number{>} = x,y;$}
\end{tabular}
\end{center}
}

In this case, the first coercion expression would still change the type of
$point$ to $\{``x":\Number\}_{unique}$, but the second would change it to
$\{``y":\Number\}_{unique}$. Merging the two yields
$\{``x":\Number, ``y": \Number\}_{unique}$ through the third case of function
$merget$ defined in Figure~\ref{fig:mergejoin}.

Open tables have a rule {\sc T-NewFieldOpen} that is not shown in
Figure~\ref{fig:tablerules} for brevity. The only thing that changes from one
rule to the other is the tag in the type of $id$. There are analogous rules for
adding methods to {\em unique} and {\em open} tables as a side-effect of
typechecking a method declaration.

The type system needs to track aliasing of tables, as the flexibility of
{\em unique} and {\em open} tables depends on respectively having no aliases
and having only {\em closed} aliases. The system conservatively assumes that a
table has been aliased if its identifier appears in r-value position, with a
couple of exceptions which we will explain shortly.

Rule {\sc T-IdRead} is responsible for tracking aliasing. Its auxiliary
function $close$ is similar to function $fix$ defined in
Figure~\ref{fig:funcs}, except that it changes the tag of {\em unique} or
{\em open} tables to {\em closed}. Its purpose is to construct a {\em closed}
type for the alias, if necessary. Its auxiliary function $open$ is similar to
$fix$ and $close$, except that it changes the type of {\em unique} tables to
{\em open}. Its purpose is to track the fact that an $id$ now has an alias, if
necessary. Consider the following example, where we make an alias of a
{\em unique} table:

{\footnotesize
\begin{center}
\begin{tabular}{lll}
\multicolumn{3}{l}{$\mathbf{local} \; a:\{\}_{unique} = \{\} \; \mathbf{in}$}\\
& \multicolumn{2}{l}{$\mathbf{local} \; b:\{\}_{closed} = a \; \mathbf{in}\;\;a[``x"] \; {<}\String{>} = ``foo";$}
\end{tabular}
\end{center}
}

This example typechecks; after the initialization of $b$ the type of $a$ is
{\em open} instead of {\em unique}, but this still lets the assignment add a
new field to $a$. If we changed the tag of the type annotation of $b$ to any
other tag, the example would not typecheck anymore, as the type of expression
$a$ is $\{\}_{closed}$. If we did not change the tag on the type produced by
$a$, we could break type safety, as we could declare $b$ with tag {\em unique}
or {\em open} and add fields that would conflict with the type of the fields in
$a$, but at runtime both reference the same table.
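At the level of plain Lua, the aliasing hazard is easy to see: after such an initialization both names denote the very same table, so a mutation through one is visible through the other:

{\footnotesize
\begin{verbatim}
local a = {}
local b = a          -- b is an alias: both names refer to the same table
b.x = "foo"
print(a.x)           --> foo
\end{verbatim}
}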
It is not a problem for a {\em unique} or {\em open} table to appear as an
l-value, because {\sc T-IdRead} is sufficient to forbid the creation of an
alias to another {\em unique} or {\em open} table. For this reason, identifiers
that appear as l-values have their own rule {\sc T-IdWrite}. Another use of the
identifier that does not create an alias is when indexing the table. Rule
{\sc T-IndexIdRead} covers this case (function $rconst$ just strips the
{\bf const} qualifier from the value type, if present). For comparison, rule
{\sc T-IndexExpRead} applies for indexing expressions where the expression
denoting the table is not an identifier. The rules for indexing expressions in
l-value position are similar, except that they make sure that the field is not
{\bf const}.

A second way of changing a table type is by a {\em coercion expression}.
Coercion expressions let a {\em unique} or {\em open} table be used in a
context that expects a table type with a different shape. Coercion expressions
always create an alias, so the type produced by the coercion is always
{\em closed} or {\em fixed}. If the coercion is to a {\em closed} table type,
the type of the table changes to an {\em open} type with the same shape, but if
the coercion is to a {\em fixed} type, the table has to assume the same type.
Rules {\sc T-CoerceClosed} and {\sc T-CoerceFixed} type coercion expressions
${<}F{>} \; id$. As with coercion l-values, the Typed Lua compiler
automatically infers the type $F$, so to the programmer a coercion expression
looks just like an ordinary use of the table. Notice that the type of $id$
after {\sc T-CoerceClosed} will be an {\em open} type only if $F$ is a plain
{\em closed} table type.

Coercion is only possible towards a supertype, although we have seen in
Section~\ref{sec:subtyping} that the lack of aliases makes the subtyping
relation of {\em unique} tables very flexible. In the example below, we use a
coercion expression to widen the type of a field in a {\em unique} table, and
then add another field, to show that the table is now {\em open}:

{\footnotesize
\begin{center}
\begin{tabular}{llll}
\multicolumn{4}{l}{$\mathbf{local} \; a:\{\}_{unique} = \{ \} \; \mathbf{in}$}\\
& \multicolumn{3}{l}{$a[``x"] \; {<}\String{>} = ``foo";\; a[``y"] \; {<}\String{>} = ``bar";$}\\
& \multicolumn{3}{l}{$\mathbf{local} \; b:\{``x":\String, ``y":\String \cup \Nil \}_{closed} =$}\\
& & \multicolumn{2}{l}{${<}\{``x":\String, ``y":\String \cup \Nil\}_{closed}{>} \; a \; \mathbf{in}$}\\
& & & \multicolumn{1}{l}{$a[``z"] \; {<}\Integer{>} = 1$}
\end{tabular}
\end{center}
}

The type of $a$ by the time of the inner {\bf local} block is
$\{``x":\String, ``y":\String\}_{unique}$. This type is a subtype of
$\{``x":\String, ``y":\String\cup\Nil\}_{closed}$, as {\em unique} tables have
depth subtyping even in non-{\bf const} fields. The coercion expression and the
initialization of $b$ typecheck, and the type of $a$ changes to
$\{``x":\String, ``y":\String\cup\Nil\}_{open}$. As the type is {\em open}, we
can continue adding fields to it, but are now restricted in future coercions.
For example, we will not be able to widen the type of $``x"$ to
$\String \cup \Nil$, as that could break type safety (by writing $\Nil$ to
$``x"$ using reference $a$, and reading $``x"$ using the alias $b$).

Lua has first-class functions, so another way of aliasing {\em unique} and
{\em open} references is by closing over them in a function body.
In the environment used for typing the function body, all {\em unique} and
{\em open} tables are {\em closed} with the auxiliary function $closeall$.
Assignment to a closed-over table can be unsafe, because the {\em reference} to
it is aliased as well. In the following example, assume that $g$ receives a
parameter of type $\{ ``x": \String \cup \Nil\}_{closed}$:

{\footnotesize
\begin{center}
\begin{tabular}{llll}
\multicolumn{4}{l}{$\mathbf{local} \; a:\{\}_{open} = \{\} \; \mathbf{in}$}\\
& \multicolumn{3}{l}{$\mathbf{local} \; f:\Void \rightarrow \Void =\;\mathbf{fun} \; ():\Void\;\;\; a = \{[``x"]=5\} $}\\
& & \multicolumn{2}{l}{$\mathbf{in} \; \lfloor g({<}\{``x": \String \cup \Nil\}_{closed}{>} \; a) \rfloor_{0}$}
\end{tabular}
\end{center}
}

The type of $a$ on line 2 is $\{\}_{closed}$, so the assignment would
type-check. But then the function call on the last line would also type-check,
as $\{\}_{open}$ can be coerced to $\{ ``x": \String \cup \Nil\}$. A solution
is to change the type of $a$ from {\em open} to {\em closed} after the function
declaration. In order to do that, we use the auxiliary function $fav$
({\em free assigned-to variables}) to collect this set of identifiers, and then
function $closeset$ changes the type environment so that any of these variables
that has a {\em unique} or {\em open} table type becomes {\em closed}.

Closed-over variables that are not assigned may still change their type if they
have a {\em unique} table type and are aliased inside the function body. We use
the auxiliary function $frv$ ({\em free referenced variables}) to collect this
set, and then function $openset$ changes any variables with a {\em unique}
table type in the environment to {\em open}.

Rule {\sc T-Function} types non-variadic function declarations, and shows how
these auxiliary functions work together. Notice that the environment
$\env_{2}$ resulting from typing the function body is just ignored. The rules
for declaring variadic functions and recursive functions are similar. Functions
$closeall$, $closeset$, $openset$, $fav$, and $frv$ are straightforward, and
are omitted for brevity. The following example typechecks, and uses all the
parts of {\sc T-Function}:

{\footnotesize
\begin{center}
\begin{tabular}{llll}
\multicolumn{4}{l}{$\mathbf{local} \; a:\{\}_{unique}, b:\{\}_{unique} = \{\}, \{\} \; \mathbf{in}$}\\
& \multicolumn{3}{l}{$\mathbf{local} \; f:\Integer \times \Void \rightarrow \Integer \times \Void =$}\\
& & \multicolumn{2}{l}{$\mathbf{fun} \; (x:\Integer):\Integer \times \Void$}\\
& & & \multicolumn{1}{l}{$b = a \;;\; \mathbf{return} \; x + 1$}\\
& \multicolumn{3}{l}{$\mathbf{in} \; a[``x"] \; {<}\Integer{>} = 1 \;;\; \lfloor f(a[``x"]) \rfloor_0$}\\
\end{tabular}
\end{center}
}

The assignment $b=a$ typechecks because, at that point, both $b$ and $a$ have
type $\{\}_{closed}$ due to $closeall$. The set $fav$ is $\{b\}$, and the set
$frv$ is $\{a\}$. This causes the type of $b$ to also be $\{\}_{closed}$ in the
body of the inner {\bf local} statement, due to $closeset$, and the type of $a$
to be $\{\}_{open}$ due to $openset$. Since $a$ is $\{\}_{open}$, adding a new
field is allowed, changing its type to $\{``x": \Integer\}_{open}$ after the
last semicolon. After the call to $f$, the variable $b$ is an alias of $a$, but
this is not an issue, as $\{\}_{closed}$ is a supertype of
$\{``x": \Integer\}_{open}$.
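In plain Lua terms, the hazard that $fav$ and $closeset$ guard against is that calling a closure can silently rebind a captured variable to a table of a different shape:

{\footnotesize
\begin{verbatim}
local a = {}
local function f()
  a = { x = 5 }      -- rebinds the captured variable a
end
f()
print(a.x)           --> 5: code still relying on a's old shape is misled
\end{verbatim}
}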
\subsection{Projection Types} \label{sec:projections} \begin{figure*}[t] {\footnotesize \label{fig:projtypes} \[ \begin{array}{c} \begin{array}{c} \mylabel{T-LocalProj}\\ \dfrac{\begin{array}{c} \env_{1}, \penv \vdash el:E, \env_{2}, (x,S)\\ \env_{3} = \env_{2}[id_{1} \mapsto proj(E, 1), \ldots, id_{n} \mapsto proj(E, n)]\\ \env_{3}, \penv[x \mapsto S] \vdash s, \env_{4} \;\;\; n = |\;\overline{id}\;| \end{array}} {\env_{1}, \penv \vdash \mathbf{local} \; \overline{id} = el \; \mathbf{in} \; s, (\env_{4}-\{\overline{id}\})[\overline{id \mapsto \env_2(id)}]} \end{array} \begin{array}{c} \mylabel{T-ExpListProj}\\ \dfrac{\begin{array}{c} \env_{1}, \penv \vdash e_{i}:F_{i}, \env_{i+1} \;\;\; \env_{1}, \penv \vdash me:S, \env_{n+2} \;\;\; n=|\overline{e}|\\ S = S_1 \sqcup S_2 \;\;\;m = max(|S_1|, |S_2|) \;\;\; x \notin \Pi\\ \env_{f} = merge(\env_{1}, ..., \env_{n+2}) \end{array}} {\env_{1}, \penv \vdash \overline{e},me:}\\ {F_{1} \times ... \times F_{n} \times \pi_{1}^{x} \times ... \times \pi_{m}^{x}{*}, \env_{f}, (x,S)} \end{array}\\\\ \begin{array}{c} \mylabel{T-IfFilterProjThen}\\ \dfrac{\begin{array}{c} \env_{1}(id) = \pi_{i}^{x} \\ S_t = fonilpt(\Pi(x), i)\\ finil(proj(\Pi(x), i)) = {\bf void} \\ \env_{1}, \penv[x \mapsto S_t] \vdash s_{1}, \env_{2} \end{array}} {\env_{1}, \penv \vdash \mathbf{if} \; id \; \mathbf{then} \; s_{1} \; \mathbf{else} \; s_{2}, \env_{2}} \end{array} \begin{array}{c} \mylabel{T-IfFilterProjElse}\\ \dfrac{\begin{array}{c} \env_{1}(id) = \pi_{i}^{x} \\ S_e = finilpt(\Pi(x), i)\\ fonil(proj(\Pi(x), i)) = {\bf void} \\ \env_{1}, \penv[x \mapsto S_e] \vdash s_{2}, \env_{2} \end{array}} {\env_{1}, \penv \vdash \mathbf{if} \; id \; \mathbf{then} \; s_{1} \; \mathbf{else} \; s_{2}, \env_{2}} \end{array} \begin{array}{c} \mylabel{T-IfFilterProjBoth}\\ \dfrac{\begin{array}{c} \env_{1}(id) = \pi_{i}^{x} \\ S_t = fonilpt(\Pi(x), i) \; S_e = finilpt(\Pi(x), i)\\ \env_{1}, \penv[x \mapsto S_t] \vdash s_{1}, \env_{2} \\ \env_{1}, \penv[x \mapsto S_e] \vdash s_{2}, \env_{3} \\ \env_{4} = join(\env_{2}, \env_{3}) \end{array}} {\env_{1}, \penv \vdash \mathbf{if} \; id \; \mathbf{then} \; s_{1} \; \mathbf{else} \; s_{2}, \env_{4}} \end{array} \end{array} \]} \caption{Rules for Projection Types} \end{figure*} \begin{figure}[t] {\scriptsize \label{fig:projtypes} \[ \begin{array}{c} \begin{array}{rcl} fonilpt(P_1 \sqcup P_2, i) & = & P_2 \;\; \mathrm{if} \; fonil(proj(P_1, i)) = {\bf void} \\ fonilpt(P_1 \sqcup P_2, i) & = & P_1 \;\; \mathrm{if} \; fonil(proj(P_2, i)) = {\bf void} \\ fonilpt(P_1 \sqcup P_2, i) & = & P_1 \sqcup P_2 \;\; \mathrm{otherwise}\\ fonilpt(P \sqcup S, i) & = & fonilpt(S,i) \;\; \mathrm{if} \; fonil(proj(P, i)) = {\bf void} \\ fonilpt(P \sqcup S, i) & = & P \sqcup fonilpt(S,i) \;\; \mathrm{otherwise}\\ fonilpt(S \sqcup P_2, i) & = & fonilpt(S,i) \;\; \mathrm{if} \; fonil(proj(P, i)) = {\bf void} \\ fonilpt(S \sqcup P, i) & = & fonilpt(S,i) \sqcup P \;\; \mathrm{otherwise}\\ fonilpt(S_1 \sqcup S_2, i) & = & fonilpt(S_1,i) \sqcup fonilpt(S_2,i) \end{array}\\\\ \begin{array}{rcl} fonil(F_1 \cup F_2) & = & fonil(F_1) \;\; \mathrm{if}\;fonil(F_2) = {\bf void}\\ fonil(F_1 \cup F_2) & = & fonil(F_2) \;\; \mathrm{if}\;fonil(F_1) = {\bf void}\\ fonil(F_1 \cup F_2) & = & fonil(F_1) \cup fonil(F_2) \;\; \mathrm{otherwise}\\ %\multicolumn{4}{c}{\mathrm{the\;next\;two\;cases\;apply\;if\;}F % \;\mathrm{is\;not\;a\;union}}\\ fonil(F) & = & {\bf void} \;\; \mathrm{if} \; F \subtype \Nil \; \mathrm{and} \; \Nil \subtype F\\ fonil(F) & = & F \;\; \mathrm{otherwise} \end{array} 
\end{array}
\]}
\caption{Auxiliary Functions for Projection Types}
\end{figure}

As we have seen in Section~\ref{sec:statistics}, Lua programmers often overload
the return type of functions to denote errors, returning a {\tt nil} result and
an error message in case of an error instead of the usual return values. As an
example, consider again the {\tt idiv} function from Figure~\ref{fig:over},
which returns the quotient and remainder of integer division, or a division by
zero error. In Typed Lua, we can give a return type
$(\Integer \times \Integer \times \Nil{*}) \sqcup (\Nil \times \String \times \Nil{*})$
to that function.

As is also shown in Figure~\ref{fig:over}, the idiom for using such a function
is to test the first returned value. We can translate that code to the abstract
syntax of Figure~\ref{fig:syntax}, leaving placeholders for the body of the
{\bf then} and {\bf else} branches:

{\footnotesize
\begin{center}
\begin{tabular}{ll}
\multicolumn{2}{l}{$\mathbf{local} \, q, r = idiv(a, b) \, \mathbf{in} \, \mathbf{if} \, q \, \mathbf{then} \, s_1 \, \mathbf{else} \, s_2$}
\end{tabular}
\end{center}
}

A standard way of decomposing the union in the assignment would let the type of
$q$ be $\Integer \cup \Nil$ and the type of $r$ be $\Integer \cup \String$.
Flow typing would narrow the type of $q$ to $\Integer$ in the {\bf then} branch
and to $\Nil$ in the {\bf else} branch, but would leave the type of $r$
unchanged. {\em Projection types} are a general way to make the dependency
between the types of $q$ and $r$ survive the decomposition of the tuple, so
flow typing can narrow all of the components by testing just one of them.

Intuitively, a projection type $\pi_{i}^{x}$ denotes the union of the i-th
components of all the tuples in the union of tuple types denoted by the label
$x$. Unions of tuple types referenced by projections are kept in their own
{\em projection environment} $\Pi$. A fresh label is generated whenever the
last term in an expression list has a union of tuple types as its type.

Figure~\ref{fig:projtypes} presents the rules for introducing and filtering
projection types. Notice that in the above example the local variables $q$ and
$r$ are declared without an explicit type. The rule for typing this {\bf local}
statement, {\sc T-LocalProj}, introduces a new label in the projection
environment used for typing the {\bf if} statement.

The first judgment above the bar of rule {\sc T-LocalProj} uses the auxiliary
typing relation $\env_{1}, \penv \vdash el : E, \env_{2}, (x,S)$. Rule
{\sc T-ExpListProj} is the typing rule for this auxiliary relation. It does the
work of generating a fresh projection label, and projection types associated
with it, if the last term of the expression list has a union of tuple types as
its type. The rule generates as many projection types as the length of the
longest tuple.

In the $idiv$ example above, the combination of these two rules binds $x$ to
$(\Integer \times \Integer \times \Nil{*}) \sqcup (\Nil \times \String \times \Nil{*})$
in the projection environment $\Pi$, and $q$ and $r$ respectively to
$\pi_{1}^{x}$ and $\pi_{2}^{x}$ in the type environment used for typing the
{\bf if} statement. At this point, using $q$ or $r$ as an r-value will project
their types into either $\Integer \cup \Nil$ or $\Integer \cup \String$ via
rule {\sc T-IdReadProj}, which just uses auxiliary function $proj$.
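Concretely, the error-reporting idiom being typed here is the following plain Lua pattern (a sketch: Figure~\ref{fig:over} is not reproduced in this excerpt, so the body of {\tt idiv} may differ in detail, and the {\tt //} and {\tt \%} operators assume Lua~5.3 integers):

{\footnotesize
\begin{verbatim}
local function idiv(a, b)
  if b == 0 then
    return nil, "division by zero"   -- error case: (nil x string x nil*)
  end
  return a // b, a % b               -- normal case: (integer x integer x nil*)
end

local q, r = idiv(7, 2)
if q then
  -- here q : integer and r : integer
else
  -- here q : nil and r : string
end
\end{verbatim}
}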
The more interesting cases are inside the branches of the {\bf if} statement,
where the {\sc T-IfFilterProj} rules filter the whole union of tuples
referenced by the projection type. Rule \textsc{T-IfFilterProjBoth} uses the
auxiliary functions \emph{fonilpt} ({\em filter-out nil in projection type})
and \emph{finilpt} ({\em filter-in nil in projection type}) to filter the union
referenced by $\pi_{i}^{x}$, respectively filtering $\Nil$ out at every i-th
component (removing the entire tuple if that component is exactly $\Nil$) and
filtering $\Nil$ in at every i-th component (removing the entire tuple if that
component cannot be $\Nil$). These functions respectively use the functions
$fonil$ and $finil$ to filter regular unions, plus $proj$ to test projected
components. The definitions of $finilpt$ and $finil$ are analogous to the
definitions of $fonilpt$ and $fonil$.

Rule {\sc T-IfFilterProjBoth} also uses the auxiliary function $join$ defined
in Figure~\ref{fig:mergejoin} to try to combine the environments that result
from typing the two branches. In a $join$, only one of the two environments is
guaranteed to come from code that has actually been executed.

In our example, variable $x$ would be bound to
$\Integer \times \Integer \times \Nil{*}$ inside the {\bf then} branch, and
bound to $\Nil \times \String \times \Nil{*}$ inside the {\bf else} branch. Now
$q$ and $r$ project to $\Integer$ and $\Integer$ in the {\bf then} branch, and
to $\Nil$ and $\String$ in the {\bf else} branch.

Rules {\sc T-IfFilterProjThen} and {\sc T-IfFilterProjElse} respectively cover
the case where the {\bf else} branch is unreachable and the case where the
{\bf then} branch is unreachable, because either the projected type of
$\pi_i^x$ is not a supertype of $\Nil$ or is $\Nil$.

Assignment to an l-value with a projection type is forbidden (there is no rule
{\sc T-IdWriteProj}). Allowing assignment is unsound, as it can break the
dependency among the components of the union. In the example below, the
projected type of $r$ outside of the {\bf if} statement is
$\Integer \cup \String$, so the assignment seems correct, but inside $s_1$ the
projected type of $r$ would still be $\Integer$:

{\footnotesize
\begin{center}
\begin{tabular}{ll}
\multicolumn{2}{l}{$\mathbf{local} \, q, r = idiv(a, b) \, \mathbf{in}\; r = ``foo" ;\; \mathbf{if} \, q \, \mathbf{then} \, s_1 \, \mathbf{else} \, s_2$}
\end{tabular}
\end{center}
}

\section{Related Work}
\label{sec:related}

Tidal Lock~\cite{tidallock} is a prototype of another optional type system for
Lua. It covers just a small subset of Lua in its current form. Its most
remarkable feature is how it structures its table types to support a form of
type evolution through imperative assignment. Typed Lua uses the same general
idea of letting the type of a table evolve through assignment, but the
structure of both the table types and the typing rules that support this is
completely different.

Sol~\cite{sol} is another experimental optional type system for Lua. While it
has some similarities to Typed Lua, it has more limited table types: Sol tables
can only be lists, maps, and objects that follow a specific object-oriented
idiom that Sol introduces. There is no evolution of table types.

Lua Analyzer~\cite{luaanalyzer} is another optional type system for Lua that is
specially designed to work with Löve Studio, an IDE for game development using
Lua. It is unsound by design, and is intended primarily to support IDE
autocompletion. There is no evolution of table types.
Typed Racket~\cite{tobin-hochstadt2008ts} is a statically typed version of the
Racket language, which is a Scheme dialect. The main purpose of Typed Racket is
to allow programmers to combine untyped modules, which are written in Racket,
with typed modules, which are written in Typed Racket. Typed Racket tracks
values that cross the boundary between the typed and untyped parts to be able
to correctly assign blame to type errors that occur in the typed parts, so it
is a gradual type system. Typed Racket introduced {\em occurrence typing}, a
form of flow typing~\cite{tobin-hochstadt2010ltu}, where type predicates are
used to refine union types. As this use of type predicates is common in other
dynamically typed languages, related systems have
appeared~\cite{guha2011tlc,winther2011gtp,pearce2013ccf}.

Gradualtalk~\cite{allende2013gts} is a Smalltalk dialect with a gradual type
system. Its type system combines nominal and structural typing. It includes
function types, union types, structural object types, nominal object types, a
self type, and parametric polymorphism. Like Typed Lua, Gradualtalk formalizes
the interaction between typed and untyped code with consistent-subtyping. Due
to the performance impact of the runtime checks that ensure the gradual typing
guarantees, Gradualtalk can be downgraded into an optional type system through
a configuration switch~\cite{allende2013cis}.

Reticulated Python~\cite{vitousek2014deg} is a gradual type system for Python.
It is structural, based on subtyping, and includes list types, dictionary
types, tuple types, function types, set types, object types, class types, and
recursive types. Besides static type checking, Reticulated Python also
introduces three different approaches for inserting runtime assertions that
preserve the gradual typing guarantee.

Several dynamically typed languages now have optional type systems:
Clojure~\cite{bonnaire-sergeant2015typed-clojure}, JavaScript,
Python~\cite{mypy}, and PHP. While Lua has some similarities to all of these
languages, none of these optional type systems have the features described in
this paper. Most of these languages do not have the idioms that inspired these
features; JavaScript has the idiom of adding fields to an initially empty
record through assignment, but TypeScript sidesteps the issue by allowing an
empty record to have any record or object type, as the type of missing fields
is the bottom type of its type system. Grace~\cite{black2013sg} and Dart are
two languages that have been designed from scratch to have an optional type
system, instead of being existing dynamically typed languages with a
retrofitted type system.

\section{Conclusions}
\label{sec:conclusion}

We have presented a formalization of Typed Lua, an optional type system for
Lua, with a focus on rules for two novel features of the system that type
unusual idioms present in Lua programs.

In the first idiom, records and objects are defined through assignment to an
initially empty table. In Typed Lua's type system the type of a table can
change, either by assignment or by using the table in a context that expects a
different (but compatible) type from its current one. The system tracks
aliasing of tables in order to do this type evolution in a type-safe way.

In the second idiom, a function that can return different kinds of tuples of
return values (for example, one tuple for its usual path, another for error
conditions) is modeled by having a union of tuple types as its return type.
A destructuring assignment on a union of tuple types decomposes the tuple in
such a way that the dependencies between the types of each member of the union
can be tracked. Narrowing the type of one of the tuple's components with a type
predicate can narrow the types of the other components.

Usability has been a concern in the design of Typed Lua since the beginning. We
realized that a design based solely on what is possible in the semantics of Lua
could lead to a complex type system. For this reason, we surveyed a large
corpus of Lua code to identify important idioms that our type system should
support.

We performed several case studies to evaluate how successful we were in our
goal of providing a usable type system. We evaluated 29 modules from 8
different case studies, and we could give precise static types to 83\% of the
449 values that these modules export. For half of the modules, we could give
precise static types to at least 89\% of the exports from each module. The
methodology, detailed results, and analysis of this evaluation are part of the
first author's Ph.D. thesis~\cite{maidl2015tl}.

Our evaluation results show that our type system can statically type check
several Lua idioms and features, though the evaluation also exposed several
limitations. We found that the three main limitations are the lack of full
support for overloaded functions, parametric polymorphism, and operator
overloading. Overcoming these limitations is one major target for future work,
as it will allow us to statically type check more programs.

Unlike some optional type systems, such as the type systems of the TypeScript
and Dart languages, we designed Typed Lua without deliberately unsound parts.
However, we still do not have proofs that the novel features of our type system
are sound. We are working on a generalization of the typing rules for evolution
of table types where tagged types can be attached to arbitrary references
instead of just local variables, and expect that a proof of soundness will come
out of this effort.

Typed Lua has a working implementation\footnote{Available at
\url{https://github.com/andremm/typedlua}.} that Lua programmers can already
use as a framework to document, test, and structure their applications. Even
for applications where a full conversion to static type checking is unfeasible
in the current state of the type system, or simply too much work, Lua
programmers can still use Typed Lua to document the external interfaces of
their libraries, giving the benefits of static type checking to the users of
those libraries. We already have feedback from Lua projects that are evaluating
the use of Typed Lua, such as the LuaRocks package manager, which is a key
piece of infrastructure for the Lua library ecosystem, and ZeroBrane Studio, an
IDE for Lua programs that is itself written in Lua.

\acks

The authors thank the anonymous reviewers for their feedback. Work on Typed Lua
has been supported by grants from Google Summer of Code as well as CAPES and
CNPq, two Brazilian Research Councils.

\bibliographystyle{abbrvnat}
\bibliography{paper}

\end{document}
\subsection{What is DevOps?}

A definition of Research Software Engineering Operations (RSE-ops) can best be
derived by first explaining the philosophy behind DevOps. DevOps \cite{aws} is
a term that refers to best practices that bridge development and operations.
The term was coined in 2008 \cite{devops-paper} and has grown out of a
web-services-oriented model. It has since expanded to refer to other subsets of
expertise in this area, such as cloud security operations, which is called
``DevSecOps,'' adding the ``Sec'' for ``Security.'' In DevOps, best practices
are generally defined around:

\begin{enumerate}
\item continuous integration: automated integration of code changes into a repository
\item continuous delivery: automated release of software in short cycles
\item monitoring and logging: tracking errors and system status in an automated way
\item communication and collaboration: interaction between team members and working together effectively
\item ``infrastructure as code'': provisioning resources through simple text files
\item using micro-services: single-function modules that can be assembled into a more complex service
\end{enumerate}

These best practices are adopted for the purposes of speed, efficiency,
reliability, scale, and collaboration. It has been shown that teams that adopt
these practices can see improvements in productivity, efficiency, and quality
across the board \cite{devops-changed-things}. DevOps is also a culture:
along with these best practices, it describes a way of thinking and working.
Where there were once barriers between development and operations teams, DevOps
brought them down. You can grow a community around these ideas, which is a
powerful thing.

\subsubsection{DevOps as the Driver of the Cloud}

The statistics are indeed striking: teams that practice DevOps outperform their
peers in the number and speed of deployments, in recovery from downtime events,
and in the ability of employees to work on new things rather than tedious
maintenance \cite{devops-stats-2016}. Recognizing these gains and providing
structure for collaboration, training, and projects was arguably just one of
the goals of the Cloud Native Computing Foundation (CNCF), which was founded in
2015 \cite{Wikipedia_contributors2021-jt}. Specifically, the primary stated
reason for the foundation of the CNCF was to foster community and support
around container technologies, which often are the core unit of automation and
DevOps practices \cite{Nishanil_undated-fk}. A new term, ``cloud-native,'' was
coined in the process, and it is heavily reliant on DevOps. DevOps practices
are considered the foundation of taking on a cloud-native approach, and another
term, ``Cloud Native DevOps'' \cite{cloud-native-devops-oreilly}, was even
coined to refer specifically to the application of DevOps practices to the
cloud. Since the two are so inextricably intertwined, we will refer to them
interchangeably for the remainder of this paper \cite{Choice2020-lt}.

% TODO: make a diagram or table explaining DevOps, RSE-ops, etc.
% TODO: how to link site with content here, and provide list of software for each?
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Development Release Series 6.5}\label{sec:History-6-5}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

This is the development release series of Condor.
The details of each version are described below.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{\label{sec:New-6-5-5}Version 6.5.5}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\noindent New Features:

\begin{itemize}

\item Condor-G jobs can now use matchmaking.

\item \Condor{advertise} has a new option, -tcp.

\item Added initial support for the \Condor{gridshell}.
Documentation is not yet available, but will be coming soon.

\item Added the \Macro{TRUST\_UID\_DOMAIN} config file setting.
For more information about this feature, see
section~\ref{param:TrustUidDomain} on
page~\pageref{param:TrustUidDomain}.

\item Added the \Macro{REQUIRE\_LOCAL\_CONFIG\_FILE} config file setting.
If this setting is False, the absence of the file specified in the
\Macro{LOCAL\_CONFIG\_FILE} config file setting is not treated as an error.

\item Added a new command, \Condor{wait}, which will watch a log file
until a job or set of jobs complete.

\end{itemize}

\noindent Bugs Fixed:

\begin{itemize}

\item The \Condor{starter} was reporting ImageSize to be much bigger
than reality for multi-threaded jobs on Linux.
If the jobs were ever evicted, this could cause them to never match to
another machine or to be unnecessarily restricted.

\item The \Condor{starter} no longer seg-faults when attempting to run
a job as nobody on HPUX.
This bug was introduced in version 6.3.2 and affected all job universes.

\item Fixed a bug related to defining \Macro{CONDOR\_IDS} in the
configuration file.
This new feature was added in version 6.5.3, but it did not work
properly with the Standard and PVM universes.
Now, all parts of Condor should work correctly without permission
problems if \MacroNI{CONDOR\_IDS} is defined in the config file.

\item Under certain rare circumstances (e.g., running \Condor{rm} on a
\Condor{dagman} job which was running a POST script for a node which
had no PRE script defined), DAGMan could dereference a nonexistent
pointer and segfault.
This has now been fixed.

\item Fixed a minor bug in the \Opt{-jobad} option for the
\Condor{cod} command-line tool when activating Computing On Demand
(COD) jobs.
Now, the parser for the user-specified ClassAd will ignore comments
and whitespace.

\item In Standard Universe, a transient failure to read the user's
password file entry (e.g., with an overloaded NIS server) could result
in the job running to completion and exiting the queue without
updating the user log.

\item Fixed a bug which prevented the user from specifying Hawkeye
jobs with colons and/or spaces in the executable name.
The \MacroNI{HAWKEYE\_JOBS} macro now allows for the individual fields
to be quoted, solving this problem.

\end{itemize}

\noindent Known Bugs:

\begin{itemize}

\item There is a suspected-but-not-yet-confirmed bug in DAGMan's
\Opt{-MaxJobs} feature that can lead to a segfault which recurs during
recovery.
Users are advised to avoid using this feature until the bug is found
and fixed.
\Condor{dagman} jobs submitted without the \Opt{-MaxJobs} feature are
not affected.
Likewise, the \Opt{-MaxPre} and \Opt{-MaxPost} features are not
affected and can be safely used.

\item For large numbers of jobs, \Condor{analyze} may run out of
memory and fail.
\end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{\label{sec:New-6-5-4}Version 6.5.4}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\noindent New Features:

\begin{itemize}

\item Added the \Opt{-jobad} option to the \Condor{cod} command-line tool
when activating Computing On Demand (COD) jobs.
This allows the user to specify dynamic attributes at the point when they
request the job to start, instead of having to define everything ahead of
time in the Condor configuration files at the execution site.

\end{itemize}

\noindent Bugs Fixed:

\begin{itemize}

\item The \Condor{startd} will now immediately exit with a fatal error if
there is a syntax error in any of the policy expressions (\MacroNI{START},
\MacroNI{SUSPEND}, \MacroNI{PREEMPT}, etc.) as defined in the configuration
file.
Before, the \Condor{startd} might run for a long time before reporting the
error, or would silently ignore the expression.
For example, if the \MacroNI{START} expression contained a syntax error,
the \Condor{startd} would never match against any resources without
warning the administrator that the \MacroNI{START} expression could not be
parsed.

\item The \Condor{starter} now checks to make sure that the value of the
\MacroNI{JOB\_RENICE\_INCREMENT} expression is within the valid range of 0
to 19, and adjusts out-of-range values to the closest valid value,
printing a warning to the StarterLog.
Previously, no warnings were printed.
Also, under Windows, values above 19 used to be incorrectly adjusted to 0
(i.e., normal priority) instead of 19 (i.e., idle priority class).

\item When Condor is testing to see if a given submit machine is in the
same \MacroNI{UID\_DOMAIN} as the execution machine, it now uses a
case-insensitive test.
Previous versions of Condor that used a case-sensitive comparison caused
problems for sites that have mixed-case hostnames in their DNS records.

\item When the \Condor{shadow} is trying to claim a \Condor{startd}, if
the IP address of the \Condor{shadow} cannot be resolved, the
\Condor{startd} used to refuse the request.
Now, the \Condor{startd} accepts the request and uses the IP address of
the \Condor{shadow} in all log messages, tool output, and the
\Attr{ClientMachine} ClassAd attribute that are normally set to the
hostname.

\item Process handles would intermittently be lost on Windows, causing the
\Condor{startd} and other daemons to EXCEPT().
This has been fixed.

\end{itemize}

\noindent Known Bugs:

\begin{itemize}

\item None.
\end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{\label{sec:New-6-5-3}Version 6.5.3}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Karen's table
\begin{center}
\begin{table}[hbt]
\begin{tabular}{|ll|}
\hline
\emph{Architecture} & \emph{Operating System} \\ \hline \hline
Hewlett Packard PA-RISC (both PA7000 and PA8000 series) & HPUX 10.20 \\ \hline
Sun SPARC Sun4m, Sun4c, Sun UltraSPARC & Solaris 2.6, 2.7, 8, 9 \\ \hline
Silicon Graphics MIPS (R5000, R8000, R10000) & IRIX 6.5 \\ \hline
Intel x86 & Red Hat Linux 6.2, 7.2 \\
 & Red Hat Linux 8 (clipped) \\
 & Red Hat Linux 9 (clipped) \\
 & Windows NT 4.0 (clipped) \\
 & Windows 2000 (clipped) \\
 & Windows XP (clipped) \\ \hline
ALPHA & Digital Unix 4.0 \\
 & Red Hat Linux 7.2 (clipped) \\
 & Tru64 5.1 (clipped) \\ \hline
PowerPC & Macintosh OS X (clipped) \\ \hline
Itanium & Red Hat Linux 7.2 (clipped) \\ \hline
\end{tabular}
\caption{\label{vers-hist-sup-plat}Condor 6.5.3 supported platforms}
\end{table}
\end{center}

\noindent New Features:

\begin{itemize}

\item Added support for \$RANDOM\_CHOICE(xxx,yyy,...) in the configuration
file parsing and in the submit description file, so we can now do things
like:
\begin{verbatim}
CKPT_SERVER_HOST = $RANDOM_CHOICE(check1.example.com,check2.example.com)
\end{verbatim}

\item Added a new installation and configuration tool to Condor.
Instead of using the complicated \Condor{install} script to install and
configure Condor, users can now try the simplified \Condor{configure}
script.
The new method does not ask any interactive questions.
Instead, you set the few installation options you need to specify for your
site as command-line arguments.
For most sites, this method is much better than the question-driven
\Condor{install} script, and we recommend that new users of Condor try
\Condor{configure}.
All the options it supports are documented in the new \Condor{configure}
man page, section~\ref{man-condor-configure} on
page~\pageref{man-condor-configure}.
However, since this is the first public release of the new script, there
may be some problems with how it works under certain conditions, so we are
still including \Condor{install} in the 6.5.3 release in case you run into
problems with \Condor{configure} and would prefer to use the old method.

\item You can now define \MacroNI{CONDOR\_IDS} in the Condor configuration
files, not just as an environment variable.
This setting is used to specify what effective user id the Condor daemons
should run as when they are spawned with root privileges.
This is the effective user id that the Condor daemons use when they write
to their log files, manipulate the job queue, and so on.
Therefore, the \File{log} and \File{spool} directories should be owned by
this user.
If the Condor daemons are spawned as root and \verb@CONDOR_IDS@ is not
set, Condor searches for a ``condor'' user in the local user information
database (the \File{/etc/passwd} file, NIS, etc.).
For more information about the \verb@CONDOR_IDS@ setting, see
section~\ref{sec:uids} on page~\pageref{sec:uids} and/or
section~\ref{param:CondorIds} on page~\pageref{param:CondorIds}.

\item Added the \Dflag{HOSTNAME} debugging level to print out verbose
messages regarding how Condor resolves hostnames, domain names, IP
addresses, and so on.
If set, the Condor daemons also print out messages showing how they
initialize their own local hostname and IP address.
This is useful for sites that are having trouble getting Condor to work
because of problems with their DNS or NIS installation.

\item Improvements for Computing On Demand (COD) support:
\begin{itemize}

\item Simplified the interface to the \Condor{cod} command-line tool for
managing COD claims.
All commands that require a ClaimID argument no longer require \Opt{-name}
or \Opt{-addr}, since the ClaimID contains the contact information for the
\Condor{startd} that created it.

\item If you request a claim from a specific virtual machine with the
\Opt{-name} option to \Condor{cod\_request} (for example
\verb$condor_cod_request -name vm3@hostname$) the tool will now
automatically append a requirement to your request to ensure that your
claim comes from the virtual machine you requested.

\item \verb@condor_status -cod -long@ now provides the ClaimID string for
any COD claims in your pool.
This is useful in case you misplace or do not save the ClaimID returned to
you from \Condor{cod\_request}.

\end{itemize}

\item Improvements for PVM support:
\begin{itemize}

\item Added support for \Procedure{pvm\_export}.

\item Added support for any tasks the master spawns to also spawn other
tasks.
Before this fix, only the master process could spawn tasks.

\item Added the ability for multiple tasks to be run on a resource
simultaneously.

\item Added support for Condor-PVM to run multiple tasks under a single
Condor-PVM starter.
Currently, this feature must be turned on by a specific option in the
submit description file.

\end{itemize}

\item Improvements for Windows:
\begin{itemize}

\item Added support for encrypting the execute directory on Windows
2000/XP.
This feature is enabled by setting
\MacroNI{ENCRYPT\_EXECUTE\_DIRECTORY} = True.

\item Added the \MacroNI{VMx\_USER} config file parameter for specifying
an account to run jobs as (instead of the condor nobody accounts).
A different account can be specified for each VM.

\end{itemize}

\item Added a new option, \Opt{-dagman}, to \Condor{submit\_dag} which
allows specification of an alternate \Condor{dagman} executable.

\item Added a new entry in a DAG input file that adds a macro (using the
``+'' syntax) for use in the submit description file.
The new entry is called VARS.

\item Improved the \Opt{-config} option to \Condor{config\_val} to print
descriptive lines to stderr and filenames to stdout, so that its output
can be cleanly piped into tools like xargs.

\item On Unix, added the \MacroNI{VMx\_USER} config file parameter for
specifying an account to run jobs as instead of ``nobody'' (i.e., when
\Macro{UID\_DOMAIN}s do not match and \Macro{SOFT\_UID\_DOMAIN} = false).
A different account can be specified for each VM.

\item Added \Macro{EXECUTE\_LOGIN\_IS\_DEDICATED}.
When set to True, this tells the \Condor{starter} to track job processes
by login, instead of by process tree, which prevents so-called lurker
processes.
This is turned off by default.

\end{itemize}

\noindent Bugs Fixed:

\begin{itemize}

\item Fixed the serious bug with periodic checkpointing that was
introduced in version 6.5.2.

\item Fixed a security hole in the \Condor{schedd} when it was not running
as root on UNIX.
Starting with Condor version 6.5.1, if you did not run the \Condor{schedd}
as root, it would incorrectly allow other users to modify the job queue.
In particular, a user could remove another user's jobs, submit jobs to
another user's personal \Condor{schedd}, and so on.
This bug has now been fixed, and the \Condor{schedd} will no longer allow
users to remove or modify other users' jobs.
\item Fixed some bugs in the \Condor{kbdd} that prevented it from
communicating with the \Condor{startd} to send updates about console
activity (key presses and mouse movements).
The \Condor{kbdd} is only needed on Digital Unix and IRIX platforms, so
this bug did not affect most Condor users.

\item Fixed a minor bug in the \Condor{startd} that caused it to exit with
a fatal error when it fails to execute the \Condor{starter}.
This problem is now handled gracefully and is not considered a fatal
error.

\item Fixed a minor problem introduced in version 6.5.2 in
\Condor{submit}.
New attributes for specifying Condor's File Transfer mechanism were added
in version 6.5.2.
\Condor{submit} was supposed to be backwards compatible with old job
description files, but in a few cases, it would incorrectly give a fatal
error when it saw certain combinations of the old syntax.
Now, all old job description files are properly recognized by the new
\Condor{submit}.
For more information on using the Condor File Transfer mechanism, see
section~\ref{sec:file-transfer} on page~\pageref{sec:file-transfer}, or
the \Condor{submit} man page on page~\pageref{man-condor-submit}.

\item Fixed a minor problem (introduced in version 6.5.2 while fixing
another bug) that made it difficult to use a \Condor{master} to spawn the
Hawkeye daemons.
Now, if you install Hawkeye on a machine, add \verb@HAWKEYE@ to your
\MacroNI{DAEMON\_LIST}, and define \MacroNI{HAWKEYE} to be the path to the
\Prog{hawkeye\_master} program, the \Condor{master} will spawn everything
correctly.

\item PVM-related bugs:
\begin{itemize}

\item Fixed the \Attr{Environment} attribute in the submit description
file to be honored by both the master process and the slave tasks.

\item Fixed a few PVM deadlock scenarios during PVM startup.

\end{itemize}

\item Fixed a bug which caused the ``KFlops'' published in the startd's
ClassAd to have a large variance on fast CPUs.

\item Fixed a bug which could cause inter-daemon communication problems.
A side effect of this fix, however, causes all daemons to use 3 more file
descriptors.

\item Fixed a misleading error message generated by \Condor{q} when it
cannot find the address of the \Condor{schedd} to query.
Previously, it suggested a fairly obscure problem as the likely source of
the error.
The same error occurs when the \Condor{schedd} is not running, which is a
far more common situation.

\item Fixed a couple of related bugs which could cause the
\Condor{collector} to seg-fault under unusual circumstances.

\item Fixed a minor bug where Condor could get confused if a machine had
only a fully qualified domain name in DNS, but no simple hostname without
the domain name.
In this case, Condor daemons and tools used to exit with a fatal error.
Now, they function properly, since there's no reason for them to consider
this a problem.

\item Fixed a minor bug on Windows where the \Condor{starter} would
incorrectly determine the VM number if the StarterLog path contained
periods.

\end{itemize}

\noindent Known Bugs:

\begin{itemize}

\item The VARS entry within a DAG input file is not propagated to the
rescue DAG.

\end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{\label{sec:New-6-5-2}Version 6.5.2}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\noindent Known Bugs:

\begin{itemize}

\item There is a serious bug with periodic checkpointing that was
introduced in version 6.5.2.
This bug only affects jobs submitted to the Standard universe, since those
are the only jobs that can perform periodic checkpoints in Condor.
As soon as the \Macro{PERIODIC\_CHECKPOINT} expression evaluates to TRUE
on a machine, the \Condor{startd} will get stuck in a loop where it
continues to request a periodic checkpoint every 5 seconds until the job
is evicted from the machine.
The work-around for this bug in version 6.5.2 is to set
\verb@PERIODIC_CHECKPOINT = False@ in your \File{condor\_config} file.

\end{itemize}

\noindent New Features:

\begin{itemize}

\item The collector and negotiator can now run on configurable ports,
instead of relying on hard-coded values.
To use this feature, many places in Condor where you could previously only
provide a hostname now understand ``hostname:port'' notation.
For example, in your config file, you can now use:
\begin{verbatim}
COLLECTOR_HOST = $(CONDOR_HOST):9650
NEGOTIATOR_HOST = $(CONDOR_HOST):9651
FLOCK_TO = your-other.collector.domain.org:7002
\end{verbatim}
If you define \Macro{COLLECTOR\_HOST} in this way, all Condor tools will
automatically use the specified port if you are using them in the local
pool (so you do not need to use any special options to the tools to get
them to find your \Condor{collector} listening on the new port).
In addition, the \Opt{-pool} option to all Condor tools now understands
the ``hostname:port'' notation for remote pools.
To query a remote pool with a collector listening on a non-standard port,
you can use this:
\begin{verbatim}
condor_status -pool your-other.collector.domain.org:7002
\end{verbatim}

\item When the Condor daemons start up, they now log the names of the
configuration files they are using, right after the startup banner.

\item The \Condor{config\_val} program now has a \Opt{-verbose} option
which will tell you in which configuration file and line number a Condor
configuration parameter is defined, and a \Opt{-config} option which will
simply list all of the configuration files in use by Condor.

\item There are new attributes to control Condor's file transfer
mechanism.
Not only are they hopefully clearer and easier to use than the old ones,
they also provide new functionality.
There is now an option to only transfer files ``IF\_NEEDED''.
In this case, if the job is matched with a machine in the same
\Attr{FileSystemDomain}, the files are not transferred, but if the job
runs at a machine in a different \Attr{FileSystemDomain}, the files are
transferred automatically.
For more information on using the Condor File Transfer mechanism, see
section~\ref{sec:file-transfer} on page~\pageref{sec:file-transfer}, or
the \Condor{submit} man page on page~\pageref{man-condor-submit}.

\item Added support for Condor's new ``Computing on Demand''
functionality.
Documentation is not yet available, but will be coming soon.

\item DAGMan no longer requires that all jobs in a DAG have the same log
file.

\item The value of the \Macro{JOB\_RENICE\_INCREMENT} configuration
parameter can now be an arbitrary ClassAd expression rather than just a
static integer.
The expression will be evaluated by the \Condor{starter} for each job just
before it runs, and can refer to any attribute in the job ClassAd.

\item The \Condor{collector} now publishes statistics about the running
jobs.
In particular, it now publishes the number of jobs running in each
Universe, both as a snapshot, and the maximum of the snapshots for each
month.

\item The UNIX \Condor{collector} has the ability to fork off child
processes to handle queries.
The \Macro{COLLECTOR\_QUERY\_WORKERS} parameter is used to specify the
maximum number of these worker processes and defaults to zero.

\item All daemons now publish ``sequence number'' and ``start time''
information in their ClassAds.

\item The Collector now maintains and publishes update statistics using
the above ClassAd ``sequence number'' and ``start time'' information.
History information is stored for the past
\Macro{COLLECTOR\_CLASS\_HISTORY\_SIZE} updates and is also published in
the Collector's ClassAd as a hex string.

\item The \Condor{schedd} can now use TCP connections to send updates to
pools that it is configured to flock to.
You can now define the \Macro{TCP\_UPDATE\_COLLECTORS} list, and any
collectors listed there, including ones the \Condor{schedd} is flocking
with, will be updated with TCP.
Also, the \Condor{master} uses the same list to decide if it should use
TCP to update any collectors listed in the
\Macro{SECONDARY\_COLLECTORS\_LIST}.
For more information on TCP collector updates in Condor and if your site
would want to enable them, read section~\ref{sec:tcp-collector-update} on
``Using TCP to Send Collector Updates'' on
page~\pageref{sec:tcp-collector-update}.

\item You no longer need to define \Macro{FLOCK\_VIEW\_SERVERS} in your
config file if you have configured a \Condor{schedd} to flock to other
pools.
This is now handled automatically, so you only have to define
\Macro{FLOCK\_TO}.

\item If no \Macro{STARTD\_EXPRS} is specified, the \Condor{startd} now
defaults to ``JobUniverse''.

\item Globus universe jobs under Condor-G now send email on job completion
based on the notification setting in the submit file.

\item When submitting a Globus universe job to Condor-G, the input,
output, and error files can now optionally be an http, https, ftp, or
gsiftp URL.
(The actual file transfer is handled by globus-url-copy on the remote
site.)

\end{itemize}

\noindent Bugs Fixed:

\begin{itemize}

\item DAGMan now correctly reports an error and rejects DAGs which contain
two nodes with the same name, regardless of their case.
(DAGMan has rejected duplicate node names since Condor 6.4.6, but until
now it would fail to do so if there was any difference in their case.)

\item When \Condor{submit\_dag} checks job submit files for proper
``log'' statements, it now correctly recognizes lines with leading
whitespace.

\item Fixed a minor bug whereby DAGMan was not removing its lock file
after successful completion.

\item Fixed a bug introduced in version 6.5.0 whereby the
\Macro{UID\_DOMAIN} attributes of jobs and resources were being compared
in a case-sensitive manner, resulting in erroneous failures.

\item The \Condor{master} used to always pass a \Opt{-f} on the command
line to all daemons defined in the \Macro{DAEMON\_LIST} config file
setting.
However, if you include entries which are not Condor daemons, the
\Condor{master} will no longer add a \Opt{-f}.

\item The files specified in \Attr{transfer\_input\_files} and/or
\Attr{transfer\_output\_files} can now contain spaces in the filenames.
If there are multiple names, they \emph{must} be separated by a comma.
Any spaces are considered part of a filename, not separators between
filenames.
This allows filenames containing spaces (which are common on Windows
platforms) to be easily described.
\end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{\label{sec:New-6-5-1}Version 6.5.1}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\noindent New Features:

\begin{itemize}

\item DAGMan now supports both Condor computational jobs and Stork data
placement (DaP) jobs.
(See \URL{http://www.cs.wisc.edu/condor/stork/} for more info on Stork.)

\item Starter exceptions, such as failure to open standard input, are now
recorded in the user log.

\item Added a new \Opt{-forcex} argument to \Condor{rm} to force the
immediate local removal of (typically Globus universe) jobs in the 'X'
state, regardless of their remote state.

\end{itemize}

\noindent Bugs Fixed:

\begin{itemize}

\item When transfer\_files is being used, the path to the stdout/stderr
files was not being respected.
After these files have been transferred, they are now copied to the
location specified in the submit file.

\item A DAGMan bug introduced in Condor 6.5.0 has been fixed, where DAGMan
could crash (with a failed assertion) when recovering from a rescue DAG.

\item Fixed a bug in the example condor\_config.generic and
hawkeye\_config files, where COLLECTOR\_HOST was being included in the
default STARTD\_EXPRS in non-string form, resulting in an invalid value
for that attribute in machine ClassAds.

\end{itemize}

\noindent Known Bugs:

\begin{itemize}

\item DAGMan doesn't detect when users mistakenly specify two DAG nodes
with the same node name; instead it waits for the same node to complete
twice, which never happens, and so DAGMan hangs indefinitely.

\end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{\label{sec:New-6-5-0}Version 6.5.0}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\noindent New Features:

\begin{itemize}

\item A fresh value of RemoteWallClock is now used when evaluating user
policy expressions, such as periodic\_remove.

\item The IOProxy handler now handles escaped characters (whitespace) in
filenames.

\item condor.boot is now configured to work automatically with Red Hat
chkconfig.

\item A new log\_xml option has been added to condor\_submit.
It is documented in the condor\_submit portion of the manual.

\item A new DAGMan option to produce dot files was added.
Dot is a program that creates visualizations of DAGs.
This feature is documented in Section~\ref{sec:DAGMan}.

\item The email report from condor\_preen is now less cryptic, and more
self-explanatory.

\item Specifying full device paths (e.g., ``/dev/mouse'') instead of bare
device names (e.g., ``mouse'') in CONSOLE\_DEVICES in the config file is
no longer an error.

\item The condor\_submit tool now prints a more helpful, specific error if
the specified job executable is not found, or can't be accessed.

\item The startd ``cron'' (Hawkeye) now permits zero length ``prefix''
strings.

\item A number of new Hawkeye modules have been added, and most have
various bug fixes and improvements.

\item Added support for a new config parameter, Q\_QUERY\_TIMEOUT, which
defines the timeout that \Condor{q} uses when communicating with the
\Condor{schedd}.

\item Added the ability to use TCP to send ClassAd updates to the
\Condor{collector}, though the feature is disabled by default.
Read section~\ref{sec:tcp-collector-update} on ``Using TCP to Send
Collector Updates'' on page~\pageref{sec:tcp-collector-update} for more
details and a discussion of when a site would need to enable this
functionality.
\item Scheduler Universe has been ported to Windows.
This enables \Condor{dagman} to run on Windows as well.

\end{itemize}

\noindent Bugs Fixed:

\begin{itemize}

\item Hawkeye will no longer busy-loop if a ``continuous mode'' module
with a period of 0 fails to execute for some reason.
(Now, for continuous mode modules, a period of 0 is automatically reset to
be 1, and a warning appears in the log.)

\item Fixed a very rare potential bug when initializing a user log file.
Improved the error messages generated when there are problems initializing
the user log to include string descriptions of the errors, not just the
error number (errno).

\item The default value in the config files for \Macro{LOCK} is now
defined in terms of \Macro{LOG}, instead of using \Macro{LOCAL\_DIR} and
appending ``log''.
This is a very minor correction, but in the rare cases where the log
directory is being redefined for some reason, we usually want that to
apply to the lock files, as well.
Of course, if the log directory is on NFS, \Macro{LOCK} should still be
customized to point to a directory on a local file system.

\item Fixed a bug in the \Condor{schedd} where Scheduler Universe jobs
with \Macro{copy\_to\_spool} = false would fail.

\end{itemize}

\noindent Known Bugs:

\begin{itemize}

\item DAGMan doesn't detect when users mistakenly specify two DAG nodes
with the same node name; instead it waits for the same node to complete
twice, which never happens, and so DAGMan hangs indefinitely.

\end{itemize}
% !Tex root = checkedc.tex

\chapter{Bounds declarations for structure types}
\label{chapter:structure-bounds}

This chapter extends reasoning about bounds to objects with structure
types by introducing bounds declarations for structure members. Structure
types may have members with \arrayptr\ types. Those members must have
\emph{member bounds declarations} associated with them in order for the
values stored there to be used to access memory.

The declarations of structure members may include \keyword{where} clauses
that declare member bounds. The member bounds declarations may also be
placed inline. A structure declaration may also include \keyword{where}
clauses at the same level as member declarations. Here are examples:

\begin{lstlisting}
struct S {
   array_ptr<int> data where data : count(num);
   int num;
};
\end{lstlisting}
or
\begin{lstlisting}
struct S {
   array_ptr<int> data : count(num);
   int num;
};
\end{lstlisting}
or
\begin{lstlisting}
struct S {
   array_ptr<int> data;
   int num;
   where data : count(num);
};
\end{lstlisting}

Member bounds declarations are program invariants that are assumed to be
true by default for objects of that type. A member bounds declaration may
be suspended for a specific object to allow for initialization of the
object or modification of the members involved in the member bounds
declarations. The member bounds declaration is declared to hold again
after the initialization or modification is done. Here is an example of a
variable of type \code{S} being initialized:

\begin{lstlisting}
void f(int len) {
   S y where suspends(y.data);
   array_ptr<int> newarr : count(len) = malloc(sizeof(int) * len);
   y.data = newarr;
   y.num = len where holds(y.data); // the member bounds for y.data now holds
   ...
}
\end{lstlisting}

\keyword{Suspends} and \keyword{holds} are dataflow-sensitive declarations
of the states of the member bounds declarations for specific members of
variables. They can be applied to variable members that have member
bounds. They also can be applied to variables or variable members with
structure types as syntactic short-hand. In that case, they apply to the
nested members of the variable or variable member. The example could also
be written as:

\begin{lstlisting}
void f(int len) {
   S y where suspends(y);
   array_ptr<int> newarr : count(len) = malloc(sizeof(int) * len);
   y.data = newarr;
   y.num = len where holds(y); // the member bounds for y.data now holds
   ...
}
\end{lstlisting}

Making member bounds declarations be invariants provides a way to deal
with issues caused by aliasing. There can be pointers to data structures
or members of data structures. There may be multiple pointers to a single
structure object in memory. When there are multiple pointers, the
pointers are called aliases because they all name the same memory
location. Aliasing makes it hard to reason about programs. Consider the
example:

\begin{lstlisting}
void f(S *q, S *r, bool b) {
   if (b) {
      q->data = malloc(sizeof(int)*5);
      q->num = 5;
   }
   else {
      r->data = malloc(sizeof(int)*5);
      r->num = 5;
   }
}
\end{lstlisting}

Even when b is true, the value \code{r->data} may still be changed by a
call to f. This can happen when \code{q} and \code{r} are the same value
and are aliases for the same memory location. Changing one named value
(\code{q->data}) can have the effect of changing some other value with a
distinct name (\code{r->data}). In general, it is difficult to know
whether an assignment through one pointer variable is affecting the
members of other pointer variables.
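
To make the aliasing problem concrete, here is a hypothetical call to
\code{f}. The call is illustrative only and is not part of the
specification:

\begin{lstlisting}
// Illustrative only: both arguments point at the same object, so q and
// r alias each other inside f.  Even though b is true and f assigns only
// through q, the object named by r->data changes as well.
S s;
f(&s, &s, true);
\end{lstlisting}
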
Member bounds declarations being program invariants for structure members allows localized reasoning about the members. A programmer can assume that the bounds declarations are true for members of objects of that type. The member bounds declaration may be suspended temporarily for specific objects while they are being initialized or modified. The bounds declarations for variables with structure types and the \keyword{suspends} and \keyword{holds} declarations will be checked statically for validity. Section~\ref{section:checking-bounds-with-structures} extends the rules from Chapter~\ref{chapter:checking-bounds} to variables with structure types and to \keyword{suspends} and \keyword{holds} declarations. In the rest of this chapter, to simplify the description, assumptions about address-taken variables similar to those in Section~\ref{section:bounds-declarations} are made. It is assumed that none of the variables or members of variables on the left-hand side of bounds-declarations have their addresses taken. It is assumed also that the values of variables or members of variables whose addresses are taken are not used in bounds expressions. \section{Declaring bounds for structure members} Member bounds declarations have the form: \begin{tabbing} \var{member}\=\var{-bounds-decl:}\\ \> \var{member-path} \code{:} \var{member-bounds-exp} \\ \\ \var{member-path:}\\ \> \var{identifier} \\ \> \var{member-path} \code{.} \var{identifier} \end{tabbing} A member path is a sequence of one or more member names, separated by the `\code{.}' operator. The sequence of members must be a valid sequence of member accesses for the structure type. The common case of using a member of the structure type is simply a member path of length 1. Member bounds expressions are similar to the bounds expressions described in Section~\ref{section:bounds-declarations}, except that members of the structure type are used in place of variables in the non-modifying expressions. In addition, pointer indirection and indirect member references are excluded. A structure member whose type is \arrayptr\ or a checked array type may have at most one bounds declared for it. The typing rules for member bounds declarations are similar to those for variable bounds declarations. For bounds declarations of the form \boundsdecl{\var{member-path}}{\boundscount{\var{e1}}}, the \var{member-path} cannot have the type \arrayptrvoid\ and the expression \var{e1} must have an integral type. For bounds declarations of the form \boundsdecl{\var{member-path}}{\bounds{\var{e1}}{\var{e2}}}, the types of \var{e1} and \var{e2} must be pointers to the same type. Typically \var{member-path} is also a pointer to that type or an array of that type. However, \var{member-path} can be a pointer to or an array of a different type. A structure consists of a list of member declarations, each of which consists of a type specifier followed by one or more structure member declarators. Structure member declarators are changed to allow optional in-line specification of member bounds and \keyword{where} clauses. 
\begin{tabbing} \var{struct}\=\var{-member-declarator:}\\ \> \var{declarator where-clause\textsubscript{opt}} \\ \> \var{declarator\textsubscript{opt}} \code{:} \var{constant-expression} \\ \> \var{declarator} \code{:} \var{member-bounds-exp} \var{where-clause\textsubscript{opt}} \\ \\ The list of member declarations is extended to include \keyword{where} clauses:\\ \\ \var{struct-member-declaration:}\\ \> \ldots{} \\ \> \var{where-clause}\textsubscript{opt} \\ \\ The remaining syntax for specifying a structure remains unchanged: \\ \\ \var{struct-or-union-specifier:}\\ \> \var{struct-or-union identifier\textsubscript{opt}} \lstinline|{| \var{struct-member-declaration-list} \lstinline|}| \\ \\ \var{struct-member-declaration-list:} \\ \> \var{struct-member-declaration} \\ \> \var{struct-member-declaration-list struct-member-declaration} \\ \\ \var{struct-member-declaration:} \\ \> \var{specifier-qualifier-list struct-member-declarator-list} \code{;} \\ \\ \var{struct-member-declarator-list:} \\ \> \var{struct-member-declarator} \\ \> \var{struct-declarator-list} \code{,} \var{struct-member-declarator} \end{tabbing} A member bounds expression can use members and child members of the structure being declared. Any member paths occurring in the member bounds expressions must start with members of the structure type being declared. Here is an example of the use of child members: \begin{lstlisting} struct A { array_ptr<int> data; }; struct N { int num; }; struct S { A a where count(a.data) == n.num; N n; }; \end{lstlisting} Allowing member bounds to use nested members of members complicates explaining concepts. Sometimes concepts will be explained using member bounds that use only immediate members and then generalized to handle nested members. \section{Bounds declarations for variables with structure types} To describe facts about members of specific variables, the left-hand sides of bound declarations are generalized to allow members of variables. The term \emph{variable member path} stands for variables or variables with member accesses. Variable member paths are used where variables were allowed: \begin{tabbing} \var{bounds}\=\var{-decl:}\\ \> \var{var-member-path} \code{:} \var{bounds-exp} \\ \\ \var{var-member-path:} \\ \> \var{identifier} \\ \> \var{var-member-path} \code{.} \var{identifier} \end{tabbing} The first identifier in a variable member path must be the name of a variable. The rest of the path, if there is one, must describe a member path for the structure type that has an \arrayptr\ type. The member path can be used as a name for its associated member bounds. Inline bounds declarations are still restricted to variables. \section{Declaring the state of member bounds declarations for variables} Programmers may declare the state of a member bounds declaration for a variable using two kinds of facts: \begin{tabbing} \var{fact:}\= \\ \> \var{\ldots{}} \\ \> \code{suspends(}\var{var-member-path}\code{)} \\ \> \code{holds(}\var{var-member-path}\code{)} \end{tabbing} If the \var{var-member-path} has an \arrayptr\ type, it must have the form \var{x.path,} where \var{x} is a variable name. The fact \code{suspends}(\var{x\code{.}path}\code{)} means that the member bounds declared by the type of \var{x} for \var{path} may not hold for \var{x}. The fact \code{holds(}\var{x\code{.}path}\code{)} means that the member bounds declared by the type of x for \var{path} holds now for \var{x}. As a convenient short-hand notation, the \var{var-member-path} can have a structure type. 
In that case, the declaration applies to all member bounds for the
structure type and child members of the structure type.

\subsection{Parameters and return values}

The state of member bounds declarations can be declared for parameters and
return values using \keyword{suspends} and \keyword{holds} as well. By
default, the state is \code{holds}. Consider the following structure type
definitions:

\begin{lstlisting}
struct S {
   array_ptr<int> data : count(num);
   int num;
};

struct T {
   S arr1;
   S arr2;
   array_ptr<float> weights : count(len);
   int len;
};
\end{lstlisting}

Here are some function declarations involving the state of member bounds:

\begin{lstlisting}
T f(T x where holds(x)) where holds(return_value)  // the default
T f(T x where suspends(x.arr1)) where suspends(return_value.arr2)
T f(T x where suspends(x.arr1.data)) where suspends(return_value.arr1.data)
\end{lstlisting}

\subsection{Extent of declarations of member bounds state for variables}
\label{section:member-bounds-state-extent}

Declarations of the state of member bounds declarations are
dataflow-sensitive and follow rules similar to flow-sensitive bounds
declarations. We first define the set of state declarations that apply to
a function component, where a function component is a statement or
variable declaration.

The state declarations for variables and variable members with structure
types are expanded to state declarations of the individual members with
\arrayptr\ type.

It is assumed that declarations of automatic structure variables without
initializers implicitly have \keyword{suspends} declarations for the
variables. All other declarations of structure variables are assumed
implicitly to have \keyword{holds} declarations for the variables. The
other declarations are either automatic variables with initializers or
variables with static storage, which are initialized to 0. Any \arrayptr\
members initialized to 0 have \boundsany, so they satisfy their member
bounds declarations.

For any declaration of member bounds state for \var{v}.\var{mp}, where
\var{v} is a variable and \var{mp} is a member path, if
\begin{enumerate}
\item There is some path from the declaration to the function component, and
\item \var{v} occurs in the function component, and
\item There is no other declaration of member bounds state for \var{v.mp}
along the path
\end{enumerate}
then the declaration of member bounds state applies to the function
component.

Member bounds state declarations must be consistent and agree along all
paths to a function component. If a variable occurring in a function
component has more than one state declaration that applies to it at the
component, then all the state declarations applying to it at the component
must be syntactically identical. This avoids ambiguity about which state
declaration applies to an occurrence of a variable. It is an error for the
member bounds state declarations to disagree.

The following example illustrates the declaration of member bounds state.
The structure \code{S} represents a variable-length array, where
\code{data} holds a pointer to an array and \code{num} is the length of
the array. The function \code{f} takes a length parameter \code{len} and
creates an initialized instance of \code{S} in the variable \code{y}. It
then copies \code{y} to \code{z}.

\begin{lstlisting}
struct S {
   array_ptr<int> data where count(data) == num;
   int num;
};
\end{lstlisting}

Here is a version of \code{f} where all member bounds state declarations
are made explicit.
For structure variable members whose member bounds are suspended, the bounds declarations are made explicit as well. \begin{lstlisting} void f(int len) { S y where suspends(y.data); S z where suspends(z.data); int i, j; array_ptr<int> newarr : count(len) = malloc(sizeof(int) * len); y.data = newarr where y.data : count(len); y.num = len where holds(y.data); // the member bounds for y.data now holds z = y where holds(z.data); } \end{lstlisting} This can be written more succinctly as: \begin{lstlisting} void f(int len) { S y; S z; int i, j; array_ptr<int> newarr : count(len) = malloc(sizeof(int) * len); y.data = newarr where y.data : count(len); y.num = len where holds(y); // y is initialized now z = y where holds(z); // z is initialized now } \end{lstlisting} \section{Integration of member bounds and bounds for variables} A member of a structure variable may be covered by its member bounds declaration and a bounds declaration at the same time. This coexistence happens when a member is initialized to satisfy its member bounds declaration. Here is a version of \code{f} that is annotated with both member bounds and bounds declarations for variable members. \begin{lstlisting} void f(int len) { S y; array_ptr<int> newarr : count(len) = malloc(sizeof(int) * len); y.data = newarr where newarr : count(len) && y.data : count(len); y.num = len where holds(y.data) && y.data : count(len) && y.data : count(y.num); ... } \end{lstlisting} After the assignment \code{y.num = len}, the member bounds holds for \code{y.data} and the bounds declaration \code{y.data : count(y.num)} holds as well. \subsection{Determining bounds for a use of an \arrayptr\ member of a variable} \label{section:determining-variable-member-bounds} When a member path \var{mp} of a variable \var{x} is used and \var{x}\code{.}\var{mp} has type \arrayptr, the bounds for \var{x}\code{.}\var{mp} are determined using these rules: \begin{itemize} \item If the use is within the extent of a bounds declaration \var{x}\code{.}\var{mp} \code{:} \var{bounds-exp} and \var{bounds-exp} is not \boundsunknown, \var{bounds-exp} is the bounds. \item Otherwise, if the state of the member bounds for \var{x}\code{.}\var{mp} is \code{holds}, the member bounds for \var{x}\code{.}\var{mp} is used. \item Otherwise, the bounds of \var{x}\code{.}\var{mp} is \boundsunknown. \end{itemize} \subsection{Suspends declarations and bounds for variables.} When the member bounds declaration \var{mb} for a variable member is suspended by a statement of the form: \var{e2} \keyword{where} \code{suspends(}\var{x\code{.}mp}\code{);} where \var{x} is a variable and \var{mp} is a member path, there is an implicit bounds declaration for \var{x}\code{.}\var{mp} at the point of suspension. This happens unless \var{e2} modifies a member \var{m} of \var{x} that occurs in \var{mb}. In addition, the state of the member bounds for \var{x}\code{.}\var{mp} must be \code{holds} before the statement. The member bounds declaration \var{mb} is converted to a bounds declaration by prefixing each occurrence of a member path in \var{mb} with the expression ``\var{x}\code{.}''. 
For example, given

\begin{lstlisting}
S copy_and_resize(S arg, int len) {
   array_ptr<int> newarr : count(len) = malloc(sizeof(int) * len)
   where suspends(arg);
   for (int i = 0; i < arg.num; i++) {
      newarr[i] = arg.data[i];
   }
   arg.data = newarr where arg.data : count(len);
   arg.num = len where arg.data : count(arg.num)
   where holds(arg);  // member bounds for arg holds now
   return arg;
}
\end{lstlisting}

There is implicitly a bounds declaration at the suspension of the member
bounds for arg:

\begin{lstlisting}
array_ptr<int> newarr : count(len) = malloc(sizeof(int) * len)
where suspends(arg) && arg.data : count(arg.num);
\end{lstlisting}

If the suspends were done after the assignment to \code{arg.data}:

\begin{lstlisting}
arg.data = newarr where suspends(arg);
\end{lstlisting}

there would not be an implicit bounds declaration because \code{arg.data}
is modified by the assignment.

At a declaration of a structure variable x, no implicit bounds
declarations are inserted if the declaration suspends the member bounds
for a member of x. There was no point at which the member bounds was known
to be true. For example,

\begin{lstlisting}
f(S arg where suspends(arg))
\end{lstlisting}

does not have an implicit bounds declaration of the form
\lstinline|arg.data : count(arg.num)|.

\subsection{Holds declarations and bounds for variables}

At a \keyword{holds} declaration for a variable member path \var{x.mp},
the member bounds for \var{x.mp} must be implied by the bounds
declarations that are true about members of \var{x} and any facts that are
true at the point of the \keyword{holds} declaration.

\section{Bounds-safe interfaces}
\label{section:structure-bounds-safe-interfaces}

Just as existing functions can have bounds-safe interfaces declared for
them, existing structure types can have bounds-safe interfaces declared
for them. This allows checked code to use those data structures and allows
those uses to be checked. Existing unchecked code is unchanged.

To create a bounds-safe interface for a structure type, a programmer
declares member bounds or interface types for structure members with
unchecked pointer types. Here is a member bounds declaration for a
structure that is a counted buffer of characters:

\begin{lstlisting}
struct CountedBuffer {
   char *arr : count(len);
   int len;
};
\end{lstlisting}

Here are bounds-safe interface types for members of a structure for binary
tree nodes. The structure contains pointers to two other nodes. In checked
code, pointer arithmetic on those pointers is not allowed.

\begin{lstlisting}
struct BinaryNode {
   int data;
   BinaryNode *left : itype(ptr<BinaryNode>);
   BinaryNode *right : itype(ptr<BinaryNode>);
};
\end{lstlisting}

If bounds information is declared for one member of a structure with an
unchecked pointer type, it must be declared for all other members of the
structure with unchecked pointer types.

It is important to understand that the \emph{semantics of unchecked
pointers in unchecked contexts do not change even when bounds are declared
for the pointers}. The declared bounds are used only by checked code that
uses the data structure, when storing checked pointers into the data
structure, and when converting unchecked pointers read from the data
structure to checked pointers. Code in unchecked contexts that uses only
unchecked pointer types is compiled as though the bounds-safe interface
has been stripped from the source code.
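
As an illustration, here is a sketch of checked code that reads a
\code{CountedBuffer} through its bounds-safe interface. The function is
hypothetical and not part of the specification; it only shows the intent
that the declared member bounds \code{count(len)} is what allows accesses
to \code{arr} to be checked in checked code:

\begin{lstlisting}
// Hypothetical checked code using the bounds-safe interface declared for
// CountedBuffer.  Each access to buf->arr is checked against the declared
// member bounds count(len), even though arr has an unchecked pointer type.
int count_spaces(ptr<CountedBuffer> buf) {
   int n = 0;
   for (int i = 0; i < buf->len; i++) {
      if (buf->arr[i] == ' ') {
         n++;
      }
   }
   return n;
}
\end{lstlisting}
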
\section{Extending checking validity of bounds} \label{section:checking-bounds-with-structures} This section describes how to extend the checking in Chapter~\ref{chapter:tracking-bounds} to check variables with structure bounds and expressions with structure operations such as member assignment and member access. This section is primarily of interest to compiler writers. In Chapter~\ref{chapter:checking-bounds}, contexts map pointer variables to their bounds. Contexts are extended to map structure variables to descriptions of bounds for their members. The bounds for members of a variable will be described using a set of pairs, where the first element of each pair is a member path whose type is \arrayptr\ and the second element of each pair is a bounds expression. The bounds expression may use variable member paths in addition to variables. In Section~\ref{section:inferring-expression-bounds}, the inference of bounds for an expression determines the bounds expression that applies to the value of the expression. For structure-typed expressions, inference is generalized to determine the bounds expressions that apply to the \arrayptr\ members of the value of the expression. This information is represented using the same representation used for contexts: a set of pairs, where the first element is a member path whose type is \arrayptr\ and the second element is a bounds expression. Given this representation, it is easy to define the rules for inferring bounds for member accesses: a member access prunes the set of pairs and shortens the member paths. In Section~\ref{section:checking-complex-expressions}, checking of assignment expressions updates contexts. The contexts are then used to check that expression statements imply declared bounds. They are also used to check expressions nested within an expression that contain assignment expressions. Given the representation of contexts, the rules for updating contexts are extended to update contexts pointwise for structure variable assignments and member assignments. For a structure variable assignment, the entire set of pairs for a variable is updated. For an assignment to a member of a structure variable, only the set of entries associated with that member are updated. Both forms of assignment invalidate bounds expressions that use variable member paths that are changed by the assignment. An expression statement is checked by determining the updated context for the expression statement, determining the expected context for bounds after the statement, and then checking that the updated context implies the validity of the bounds in the expected context (Section ~\ref{section:checking-full-expressions}). For each variable in the expected context, it is checked that the bounds expression in the updated context implies the expected bounds expression. This is easily extended to a structure variable by checking for each member path for the variable that the bounds expression in the updated context implies the expected bounds expression. \subsection{Determining contexts} The context for every statement can be determined by using dataflow analyses extended pointwise to structure variables. Each structure variable is expanded into the set of variable member paths that represent all \arrayptr-typed variable member paths beginning with the variable. First, an iterative forward dataflow analysis is done to determine the member bounds state at every point in a function, following the rules in Section~\ref{section:member-bounds-state-extent}. 
The analysis works on the set of variable member paths. It computes for
each variable member path whether the member bounds are valid at each
program point. It is a compile-time error if the analysis determines that
the state of member bounds is inconsistent along different paths.

Second, a generalized version of the extent dataflow analysis of
Section~\ref{section:computing-extent} is done. Each structure variable is
expanded into the set of variable member paths that represent all
\arrayptr-typed variable member paths beginning with the variable. The
analysis assigns one lattice value to each of the variable member paths
for each program point in the function where the variable is in scope.

The context for a statement is computed using the results of the dataflow
analyses. For each structure variable in scope for the statement, the set
of \arrayptr-typed variable member paths beginning at the variable is
determined. The set is mapped to a set of pairs as follows:
\begin{itemize}
\item For the first element, the structure variable is removed from the
beginning of the path to create a member path.
\item For the second element, the extent dataflow analysis is consulted
for the \arrayptr\ member path. If the bounds expression is not
\boundsunknown, it is used. If it is \boundsunknown, the result of the
dataflow analysis for the member bounds state is examined. If the member
bounds state is valid, the member bounds is used. It is transformed to use
variable member paths by prefixing the member paths that occur in it with
the structure variable. Otherwise, \boundsunknown\ is used.
\end{itemize}

\subsection{Inferring bounds for expressions without assignments}

This section describes extensions to
Section~\ref{section:inferring-expression-bounds} for determining bounds.
The notation \boundsinfer{\var{e}}{\var{s}} is overloaded for expressions
with structure types to mean that \var{e} has a set of member paths with
individual bounds expressions.

\begin{itemize}
\item Variables: If \var{x} has a structure type, the context is consulted
for \var{x} to obtain the set \var{s} of pairs of member paths and bounds
expressions. \boundsinfer{\var{x}}{\var{s}}.

\item Member access: Given an expression \var{e}\code{.}\var{m}, where
\var{m} is a member name and \boundsinfer{\var{e}}{\var{s}},
\begin{itemize}
\item If \var{e}\code{.}\var{m} has a structure type, then
\begin{itemize}
\item The set $\var{s}^{\prime}$ is defined as follows. For each pair
\texttt{(\var{mp}, \var{b})} in \var{s}, if \var{mp} has the form
\var{m}\code{.}\var{rp}, then \texttt{(\var{rp}, \var{b})} is included in
$\var{s}^\prime$.
\item Given $\var{s}^\prime$,
\boundsinfer{\var{e}\code{.}\var{m}}{$\var{s}^\prime$}.
\end{itemize}
\item Otherwise,
\begin{itemize}
\item If $\var{s} = \texttt{\{(\var{m}, \var{b})\}}$, then
\boundsinfer{\var{e}\code{.}\var{m}}{\var{b}}
\item Otherwise, \boundsinfer{\var{e}\code{.}\var{m}}{\boundsunknown}
\end{itemize}
\end{itemize}

\item Function calls: {\em To be filled in}
\end{itemize}

\subsection{Assignment expressions}

{\em To be filled in.}

\section{Compatibility of structure types with bounds declarations}

The C Standard defines compatibility of two structure types declared in
separate translation units \cite[Section 6.2.7]{ISO2011}. This definition
is extended to include member bounds declarations and member bounds-safe
interfaces.
If the structure types are completed in both translation units, for each pair of corresponding members, \begin{itemize} \item If the members have unchecked pointer type, \begin{itemize} \item If the members both have bounds-safe interfaces, the bounds-safe interfaces must either both be bounds expressions or both be interface types. If both have bounds expressions, the bounds expressions must be syntactically identical after being placed into a canonical form. If both have interface types, the interface types must be compatible. \item Otherwise, one member must have a bounds-safe interface and the other member must omit a bounds-safe interface, or both members must omit bounds-safe interfaces. \end{itemize} \item Otherwise, both members must have member bounds declarations or both members must not have member bounds declarations. If both members have member bounds declarations, the bounds expressions must be syntactically identical after being placed into canonical form. \end{itemize}
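
The following sketch is illustrative only and is not part of the
specification. It shows two translation units that declare compatible
versions of \code{CountedBuffer}, along with a variation that would not be
compatible:

\begin{lstlisting}
// Translation unit 1
struct CountedBuffer {
   char *arr : count(len);
   int len;
};

// Translation unit 2: compatible with the declaration above because the
// corresponding members have syntactically identical bounds expressions.
struct CountedBuffer {
   char *arr : count(len);
   int len;
};

// In contrast, declaring the member as
//    char *arr : count(len + 1);
// in one of the translation units would make the types incompatible.
\end{lstlisting}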
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Beamer Presentation % LaTeX Template % Version 1.0 (10/11/12) % % This template has been downloaded from: % http://www.LaTeXTemplates.com % % License: % CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/) % % Modified by Jeremie Gillet in November 2015 to make an OIST Skill Pill template % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %---------------------------------------------------------------------------------------- % PACKAGES AND THEMES %---------------------------------------------------------------------------------------- \documentclass{beamer} \mode<presentation> { \usetheme{Madrid} \definecolor{OISTcolor}{rgb}{0.65,0.16,0.16} \usecolortheme[named=OISTcolor]{structure} %\setbeamertemplate{footline} % To remove the footer line in all slides uncomment this line %\setbeamertemplate{footline}[page number] % To replace the footer line in all slides with a simple slide count uncomment this line \setbeamertemplate{navigation symbols}{} % To remove the navigation symbols from the bottom of all slides uncomment this line } \setbeamertemplate{itemize items}[default] \setbeamertemplate{enumerate items}[default] \usepackage{graphicx} % Allows including images \usepackage{booktabs} % Allows the use of \toprule, \midrule and \bottomrule in tables \usepackage{textpos} % Use for positioning the Skill Pill logo \usepackage{fancyvrb} \usepackage{tikz} \usepackage{hyperref} \usepackage{listings} \usepackage{color} \definecolor{dkgreen}{rgb}{0,0.6,0} \definecolor{gray}{rgb}{0.5,0.5,0.5} \definecolor{mauve}{rgb}{0.58,0,0.82} \lstset{frame=tb, language=python, aboveskip=3mm, belowskip=3mm, showstringspaces=false, columns=flexible, basicstyle={\small\ttfamily}, numbers=none, numberstyle=\tiny\color{gray}, keywordstyle=\color{blue}, commentstyle=\color{dkgreen}, stringstyle=\color{mauve}, breaklines=true, breakatwhitespace=true, tabsize=3 } \newcommand*{\lstitem}[1]{ \setbox0\hbox{\lstinline{#1}} \item[\usebox0] % \item[\hbox{\lstinline{#1}}] \hfill \\ } %---------------------------------------------------------------------------------------- % TITLE PAGE %---------------------------------------------------------------------------------------- \title[Skill Pill]{Skill Pill: Julia} % The short title appears at the bottom of every slide, the full title is only on the title page \subtitle{Lecture 3: Data Structures and Algorithms} \author{Valentin Churavy \and James Schloss} % Your name \institute[OIST] % Your institution as it will appear on the bottom of every slide, may be shorthand to save space { Okinawa Institute of Science and Technology \\ % Your institution for the title page \textit{[email protected]} \\ \textit{[email protected]} % Your email address } \date{July 11, 2017} % Date, can be changed to a custom date \begin{document} \setbeamertemplate{background}{\includegraphics[width=\paperwidth]{SPbackground.png}} % Adding the background logo \begin{frame} \vspace*{1.4cm} \titlepage % Print the title page as the first slide \end{frame} \setbeamertemplate{background}{} % No background logo after title frame \addtobeamertemplate{frametitle}{}{% Adding the Skill Pill logo on the title screen after title frame \begin{textblock*}{100mm}(.92\textwidth,-0.9cm) \includegraphics[height=0.85cm]{julia.pdf} \end{textblock*}} \begin{frame} \tableofcontents \end{frame} \begin{frame} \frametitle{Disclaimer} When designing this Skill Pill, we assumed the following \begin{itemize} \item You have seen and are familiar with common data structures \item You 
know how to program and use programming as part of your daily work.
\end{itemize}
As such, we have designed today's lesson so that you may begin using Julia in your work as soon as possible.
\end{frame}

\section{Data Structures}

\begin{frame}
\frametitle{DataStructures.jl}
\begin{columns}
\column{0.5\textwidth}
DataStructures.jl has the following data structures:
\begin{itemize}
\item Deque (based on block-list)
\item Stacks and Queues
\item Accumulators and Counters
\item Disjoint Sets
\item Binary Heap
\item Mutable Binary Heap
\item Ordered Dicts and Sets
\item Dictionaries with Defaults
\item Trie (Tree)
\item Linked List
\item Sorted Dict, Multi-Dict and Set
\item Priority Queue
\end{itemize}
\pause
\column{0.5\textwidth}
All information regarding these data structures can be found here:
\url{http://datastructuresjl.readthedocs.io/en/latest/index.html/}

\vspace{0.5cm}
All of these implementations can be viewed with \texttt{@edit}. We'll use two more structures before hopping into algorithms: \texttt{Binary Trees} and \texttt{Priority Queues}
\end{columns}
\end{frame}

\begin{frame}[fragile]
\frametitle{Using simple trees}
First, let's get used to using DataStructures.jl by depth-first searching in binary trees.
You might be used to binary tree nodes that look like this (C++):
\begin{lstlisting}[language=c++]
struct node{
    double weight;
    node *left;
    node *right;
    node *parent;
};
\end{lstlisting}
Basically, each node has parents and children.
\end{frame}

\begin{frame}[fragile]
\frametitle{Using simple trees}
In Julia, the tree nodes might look like:
\begin{lstlisting}
abstract BT
type BTree <: BT
    weight::Float64
    bitpath::String
    left::BT
    right::BT
end
type Empty <: BT end
\end{lstlisting}
\end{frame}

\begin{frame}[fragile]
\frametitle{Depth-First Search}
To search through a tree, we need a \textbf{recursive} strategy. One of these strategies is known as \textbf{Depth-First Search} (or the corresponding \textbf{Breadth-First Search}).
\vspace{0.5cm}
These searching algorithms always go from the root $\rightarrow$ nodes. In C++:
\begin{lstlisting}[language=c++]
void depth_first_search(node* root){
    if (root->right){
        depth_first_search(root->right);
    }
    if (root->left){
        depth_first_search(root->left);
    }
}
\end{lstlisting}
\end{frame}

\begin{frame}[fragile]
\frametitle{Depth-First Search}
In Julia, this looks like:
\begin{lstlisting}
function DFS(node::BTree)
    if !isa(node.right, Empty)
        DFS(node.right)
    end
    if !isa(node.left, Empty)
        DFS(node.left)
    end
end
\end{lstlisting}
\begin{block}{Exercise}
Write a DFS that acts on a binary tree and outputs a binary string that traverses to a leaf node.
\end{block}
\end{frame}

\begin{frame}[fragile]
\frametitle{PriorityQueue}
\texttt{Priority Queues} are found in the \texttt{DataStructures.jl} package:
\begin{lstlisting}
julia> using DataStructures
julia> pq = PriorityQueue()
julia> pq["a"] = 15
julia> pq["b"] = 20
julia> pq
DataStructures.PriorityQueue{Any,Any,Base.Order.ForwardOrdering} with 2 entries:
  "b" => 20
  "a" => 15
julia> dequeue!(pq)
"a"
\end{lstlisting}
\end{frame}

\begin{frame}
\frametitle{Other Data Structures}
Obviously, all your favorite data structures can be implemented in Julia, but for now, we will move on to...
\end{frame}

\section{Algorithms}

\begin{frame}
\frametitle{Algorithms in Julia}
\pause
\begin{center}
\huge{Implement your favorite algorithm in Julia}

\vspace{0.5cm}
\small PS: We're here to help!
\end{center}
\end{frame}

\begin{frame}{What is next?}
\begin{description}
\item[Last Session] Parallel computing, threading, GPUs? Up for grabs.
\end{description} Join us for the exciting conclusion! \end{frame} \end{document}
{ "alphanum_fraction": 0.6976209786, "avg_line_length": 27, "ext": "tex", "hexsha": "a4a09c742e30b953dcffa818966d0428a8126376", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-08-13T03:44:47.000Z", "max_forks_repo_forks_event_min_datetime": "2019-06-23T01:55:08.000Z", "max_forks_repo_head_hexsha": "eb719677ae84bd26c1e3fdb889cf2b54f65d47ae", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "oist/skillpill-julia", "max_forks_repo_path": "slides/day3.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "eb719677ae84bd26c1e3fdb889cf2b54f65d47ae", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "oist/skillpill-julia", "max_issues_repo_path": "slides/day3.tex", "max_line_length": 179, "max_stars_count": 3, "max_stars_repo_head_hexsha": "4ef5de50fe6fdc846278ed47eaeeb73acb121a02", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "vchuravy/skillpill-julia", "max_stars_repo_path": "slides/day3.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-19T01:00:44.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-11T18:24:02.000Z", "num_tokens": 2049, "size": 7398 }
\chapter{Auxiliary-Field Quantum Monte Carlo} \label{chap:afqmc} The AFQMC method is an orbital-space formulation of the imaginary-time propagation algorithm. We refer the reader to one of the review articles on the method \cite{AFQMC_review,PhysRevLett.90.136401,PhysRevE.70.056702} for a detailed description of the algorithm. It uses the Hubbard-Stratonovich transformation to express the imaginary-time propagator, which is inherently a 2-body operator, as an integral over 1-body propagators, which can be efficiently applied to an arbitrary Slater determinant. This transformation allows us to represent the interacting many-body system as an average over a noninteracting system (e.g., Slater determinants) in a time-dependent fluctuating external field (the Auxiliary fields). The walkers in this case represent nonorthogonal Slater determinants, whose time average represents the desired quantum state. QMCPACK currently implements the phaseless AFQMC algorithm of Zhang and Krakauer \cite{PhysRevLett.90.136401}, where a trial wavefunction is used to project the simulation to the real axis, controlling the fermionic sign problem at the expense of a bias. This approximation is similar in spirit to the fixed-node approximation in real-space DMC but applied in the Hilbert space where the AFQMC random walk occurs. \section{Theoretical Background} ... Coming Soon ... \section{Input} The input for an AFQMC calculation is fundamentally different to the input for other real-space algorithms in QMCPACK. The main source of input comes from the Hamiltonian matrix elements in an appropriate single particle basis. This must be evaluated by an external code and saved in a format that QMCPACK can read. More details about file formats follow. The input file has six basic xml-blocks: \texttt{AFQMCInfo}, \texttt{Hamiltonian}, \texttt{Wavefunction}, \texttt{WalkerSet}, \texttt{Propagator}, and \texttt{execute}. The first five define input structures required for various types of calculations. The \texttt{execute} block represents actual calculations and takes as input the other blocks. Nonexecution blocks are parsed first, followed by a second pass where execution blocks are parsed (and executed) in order. Listing 15.1 shows an example of a minimal input file for an AFQMC calculation. Table~\ref{table:afqmc_basic} shows a brief description of the most important parameters in the calculation. All xml sections contain a ``name'' argument used to identify the resulting object within QMCPACK. For example, in the example, multiple Hamiltonian objects with different names can be defined. The one actually used in the calculation is the one passed to ``execute'' as ham. \begin{lstlisting}[style=QMCPXML,caption=Sample input file for AFQMC.] 
<?xml version="1.0"?> <simulation method="afqmc"> <project id="Carbon" series="0"/> <AFQMCInfo name="info0"> <parameter name="NMO">32</parameter> <parameter name="NAEA">16</parameter> <parameter name="NAEB">16</parameter> </AFQMCInfo> <Hamiltonian name="ham0" info="info0"> <parameter name="filename">../fcidump.h5</parameter> </Hamiltonian> <Wavefunction name="wfn0" type="MSD" info="info0"> <parameter name="filetype">ascii</parameter> <parameter name="filename">wfn.dat</parameter> </Wavefunction> <WalkerSet name="wset0"> <parameter name="walker_type">closed</parameter> </WalkerSet> <Propagator name="prop0" info="info0"> </Propagator> <execute wset="wset0" ham="ham0" wfn="wfn0" prop="prop0" info="info0"> <parameter name="timestep">0.005</parameter> <parameter name="blocks">10000</parameter> <parameter name="nWalkers">20</parameter> </execute> </simulation> \end{lstlisting} %The following table lists some of the most practical parameters in the \texttt{execute} block %The following table lists some of the practical parameters \begin{table}[h] \begin{center} \caption{Input options for AFQMC in QMCPACK.\label{table:afqmc_basic}} \begin{tabularx}{\textwidth}{l l l l l X } \hline \multicolumn{6}{l}{\texttt{afqmc} method} \\ \hline \multicolumn{6}{l}{parameters in \texttt{AFQMCInfo}} \\ & \bfseries name & \bfseries datatype & \bfseries values & \bfseries default & \bfseries description \\ & \texttt{NMO } & integer & $\ge 0$ & no & Number of molecular orbitals \\ & \texttt{NAEA } & integer & $\ge 0$ & no & Number of active electrons of spin-up \\ & \texttt{NAEB } & integer & $\ge 0$ & no & Number of active electrons of spin-down \\ \multicolumn{6}{l}{parameters in \texttt{Hamiltonian}} \\ & \texttt{info } & argument & & & Name of \texttt{AFQMCInfo} block \\\\ & \texttt{filename } & string & & no & Name of file with the hamiltonian \\ & \texttt{filetype } & string & hdf5 & yes & Native HDF5-based format of QMCPACK \\ \multicolumn{6}{l}{parameters in \texttt{Wavefunction}}\\ & \texttt{info } & argument & & & Name of \texttt{AFQMCInfo} block \\ & \texttt{type } & argument & MSD & no & Linear combination of (assumed non-orthogonal) Slater determinants \\ & \texttt{ } & & PHMSD & & CI-type multi-determinant wave function \\ & \texttt{filetype } & string & ascii & no & ASCII data file type \\ & \texttt{ } & & hdf5 & & HDF5 data file type \\ \multicolumn{6}{l}{parameters in \texttt{WalkerSet}} \\ & \texttt{walker$\_$type } & string & collinear & yes & Request a collinear walker set. \\ & \texttt{ } & & closed & no & Request a closed shell (doubly-occupied) walker set. \\ \multicolumn{6}{l}{parameters in \texttt{Propagator}} \\ & \texttt{type } & argument & afqmc & afqmc & Type of propagator \\ & \texttt{info } & argument & & & Name of \texttt{AFQMCInfo} block \\ & \texttt{hybrid } & string & yes & yes & Use hybrid propagation algorithm. \\ & \texttt{ } & & no & & Use local energy based propagation algorithm. \\ \multicolumn{6}{l}{parameters in \texttt{execute}} \\ & \texttt{wset } & argument & & & \\ & \texttt{ham } & argument & & & \\ & \texttt{wfn } & argument & & & \\ & \texttt{prop } & argument & & & \\ & \texttt{info } & argument & & & Name of \texttt{AFQMCInfo} block \\ & \texttt{nWalkers } & integer & $\ge 0$ & 5 & Initial number of walkers per task group \\ & \texttt{timestep } & real & $> 0$ & 0.01 & Time step in 1/a.u. 
\\
 & \texttt{blocks } & integer & $\ge 0$ & 100 & Number of blocks \\
 & \texttt{step } & integer & $> 0$ & 1 & Number of steps within a block \\
 & \texttt{substep } & integer & $> 0$ & 1 & Number of substeps within a step \\
 & \texttt{ortho } & integer & $> 0$ & 1 & Number of steps between walker orthogonalization. \\
\hline
\end{tabularx}
\end{center}
\end{table}
The following list includes all input sections for AFQMC calculations, along with a detailed explanation of accepted parameters. Since the code is under active development, the list of parameters and their interpretation might change in the future.\\
\texttt{AFQMCInfo}: Input block that defines basic information about the calculation. It is passed to all other input blocks to propagate the basic information:
\texttt{<AFQMCInfo name="info0">}
\begin{itemize}
\item \textbf{NMO}. Number of molecular orbitals, i.e., number of states in the single particle basis.
\item \textbf{NAEA}. Number of active electrons-alpha, i.e., number of spin-up electrons.
\item \textbf{NAEB}. Number of active electrons-beta, i.e., number of spin-down electrons.
\end{itemize}
\texttt{Hamiltonian}: Controls the object that reads, stores, and manages the \texttt{hamiltonian}.
\texttt{<Hamiltonian name="ham0" type="SparseGeneral" info="info0">}
\begin{itemize}
\item \textbf{filename}. Name of file with the \texttt{Hamiltonian}. This is a required parameter.
\item \textbf{cutoff\_1bar}. Cutoff applied to integrals during reading. Any term in the Hamiltonian smaller than this value is set to zero. (For filetype=``hdf5'', the cutoff is applied only to the 2-electron integrals). Default: 1e-8
\item \textbf{cutoff\_decomposition}. Cutoff used to stop the iterative cycle in the generation of the Cholesky decomposition of the 2-electron integrals. The generation of Cholesky vectors is stopped when the maximum error in the diagonal reaches this value. In case of an eigenvalue factorization, this becomes the cutoff applied to the eigenvalues. Only eigenvalues above this value are kept. Default: 1e-6
\item \textbf{nblocks}. This parameter controls the distribution of the 2-electron integrals among processors. In the default behavior (nblocks=1), all nodes contain the entire list of integrals. If nblocks $>$ 1, the nodes in the calculation will be split into nblocks groups. Each node in a given group contains the same subset of integrals and subsequently operates on this subset during any further operation that requires the hamiltonian. The maximum number of groups is NMO. Currently only works for filetype=``hdf5'' and the file must contain integrals. Not yet implemented for input hamiltonians in the form of Cholesky vectors or for ASCII input. Coming soon! Default: No distribution
\item \textbf{printEig}. If ``yes'', prints additional information during the Cholesky decomposition. Default: no
\item \textbf{fix\_2eint}. If this is set to ``yes'', orbital pairs that are found not to be positive definite are ignored in the generation of the Cholesky factorization. This is necessary if the 2-electron integrals are not positive definite because of round-off errors in their generation. Default: no \\
\end{itemize}
\texttt{Wavefunction}: Controls the object that manages the trial wavefunctions. This block expects a list of xml-blocks defining actual trial wavefunctions for various roles.
\texttt{<Wavefunction name="wfn0" type="MSD/PHMSD" info="info0">}
\begin{itemize}
\item \textbf{filename}. Name of file with wavefunction information.
\item \textbf{cutoff}.
Cutoff applied to the terms in the calculation of the local energy. Only terms in the Hamiltonian above this cutoff are included in the evaluation of the energy. Default: 1e-6
\item \textbf{nnodes}. Defines the parallelization of the local energy evaluation and the distribution of the \texttt{Hamiltonian} matrix (not to be confused with the list of 2-electron integrals managed by \texttt{Hamiltonian}. These are not the same.) If nnodes $>$ 1, the nodes in the simulation are split into groups of nnodes, and each group works collectively in the evaluation of the local energy of their walkers. This helps distribute the effort involved in the evaluation of the local energy among the nodes in the group, but also distributes the memory associated with the wavefunction among the nodes in the group. Default: No distribution
\item \textbf{ndet}. Number of determinants to read from file. Default: Read all determinants.
\item \textbf{cutoff}. For sparse hamiltonians, this defines the cutoff applied to the half-rotated 2-electron integrals. Default: 0.0
\item \textbf{nbatch}. This turns on(>=1)/off(==0) batched calculation of density matrices and overlaps. -1 means all the walkers in the batch. Default: 0 (CPU) / -1 (GPU)
\item \textbf{nbatch\_qr}. This turns on(>=1)/off(==0) batched QR calculation. -1 means all the walkers in the batch. Default: 0 (CPU) / -1 (GPU)
\end{itemize}
\texttt{WalkerSet}: Controls the object that handles the set of walkers.
\texttt{<WalkerSet name="wset0">}
\begin{itemize}
\item \textbf{walker\_type}. Type of walker set: closed or collinear. Default: collinear
\item \textbf{pop\_control}. Population control algorithm. Options: ``simple'': Uses a simple branching scheme with a fluctuating population. Walkers with weight above max\_weight are split into multiple walkers of weight reset\_weight. Walkers with weight below min\_weight are killed with probability (weight/min\_weight); ``pair'': Fixed-population branching algorithm, based on QWalk's branching algorithm. Pairs of walkers with weight above/below max\_weight/min\_weight are combined into 2 walkers with weights equal to $(w_1+w_2)/2$. Walker w1 (larger weight) is replicated with probability $w_1/(w_1+w_2)$; otherwise walker w2 (lower weight) is replicated; ``comb'': Fixed-population branching algorithm based on the Comb method. Will be available in the next release. Default: ``pair''
\item \textbf{min\_weight}. Weight at which walkers are possibly killed (with probability weight/min\_weight). Default: 0.05
\item \textbf{max\_weight}. Weight at which walkers are replicated. Default: 4.0
\item \textbf{reset\_weight}. Weight to which replicated walkers are reset. Default: 1.0
\end{itemize}
\texttt{Propagator}: Controls the object that manages the propagators.
\texttt{<Propagator name="prop0" info="info0">}
\begin{itemize}
\item \textbf{cutoff}. Cutoff applied to Cholesky vectors. Elements of the Cholesky vectors below this value are set to zero. Only meaningful with sparse hamiltonians. Default: 1e-6
\item \textbf{substractMF}. If ``yes'', apply mean-field subtraction based on the ImpSamp trial wavefunction. Must set to ``no'' to turn it off. Default: yes
\item \textbf{vbias\_bound}. Upper bound applied to the bias potential. Components of the bias potential above this value are truncated there. The bound is currently applied to $\sqrt{\tau} v_{bias}$, so a larger value must be used as either the time step or the fluctuations increase (e.g.
from running a larger system or using a poor trial wavefunction). Default: 3.0
\item \textbf{apply\_constrain}. If ``yes'', apply the phaseless constraint to the walker propagation. Currently, setting this to ``no'' produces unknown behavior, since the free propagation algorithm has not been tested. Default: yes
\item \textbf{hybrid}. If ``yes'', use the hybrid propagation algorithm. This propagation scheme doesn't use the local energy during propagation, leading to significant speed ups when its evaluation cost is high. The local energy of the ImpSamp trial wavefunction is never evaluated. To obtain energy estimates in this case, you must define an Estimator xml-block with the \texttt{Wavefunction} block. The local energy of this trial wavefunction is evaluated and printed. It is possible to use a previously defined trial wavefunction in the Estimator block, just set its ``name'' argument to the name of a previously defined wavefunction. In this case, the same object is used for both roles. Default: no
\item \textbf{nnodes}. Controls the parallel propagation algorithm. If nnodes $>$ 1, the nodes in the simulation are split into groups of nnodes nodes, each group working collectively to propagate their walkers. Default: 1 (Serial algorithm)
\item \textbf{nbatch}. This turns on(>=1)/off(==0) batched calculation of density matrices and overlaps. -1 means all the walkers in the batch. Default: 0 (CPU) / -1 (GPU)
\item \textbf{nbatch\_qr}. This turns on(>=1)/off(==0) batched QR calculation. -1 means all the walkers in the batch. Default: 0 (CPU) / -1 (GPU)
\end{itemize}
\texttt{execute}: Defines an execution region.
\texttt{<execute wset="wset0" ham="ham0" wfn="wfn0" prop="prop0" info="info0">}
\begin{itemize}
\item \textbf{nWalkers}. Initial number of walkers per core group (see ncores). This sets the number of walkers for a given group of ``ncores'' on a node; the total number of walkers in the simulation depends on the total number of nodes and on the total number of cores on a node in the following way:
$\#_{\mathrm{walkers}}^{\mathrm{total}} = \mathrm{nWalkers} \times \#_{\mathrm{nodes}} \times \#_{\mathrm{cores}}^{\mathrm{total}} / \mathrm{ncores}$. \\ Default: 5
\item \textbf{timestep}. Time step in 1/a.u. \\ Default: 0.01
\item \textbf{blocks}. Number of blocks. Slow operations occur once per block (e.g., write to file, slow observables, checkpoints). \\ Default: 100
\item \textbf{step}. Number of steps within a block. Operations that occur at the step level include load balance, orthogonalization, branching, etc. \\ Default: 1
\item \textbf{substep}. Number of substeps within a step. Only walker propagation occurs in a substep. \\ Default: 1
\item \textbf{ortho}. Number of steps between orthogonalization. Default: 1
\item \textbf{ncores}. Number of cores in a task group. This number defines the number of cores on a node that share the parallel work associated with a distributed task. This number is used in the \texttt{Wavefunction} and \texttt{Propagator} task groups. The walker sets are shared by the ncores on a given node in the task group.
\item \textbf{checkpoint}. Number of blocks between the generation of checkpoint files. If a value smaller than 1 is given, no file is generated. If \textbf{hdf\_write\_file} is not set, a default name is used. \textbf{Default: 0}
%\item \textbf{samplePeriod}. Number of blocks between sample collection. \textbf{Default: 0}
\item \textbf{hdf\_write\_file}. If set (and checkpoint>0), a checkpoint file with this name will be written.
\item \textbf{hdf\_read\_file}.
If set, the simulation will be restarted from the given file.\\
\end{itemize}
The \texttt{Estimators} xml block has an argument \textbf{name}: the type of estimator we want to measure. Currently available estimators include: ``basic'', ``energy'', ``mixed\_one\_rdm'', and ``back\_propagation''. The basic estimator has the following optional parameters:
\begin{itemize}
\item \textbf{timers}. Print timing information. Default: true
\end{itemize}
The back\_propagation estimator has the following parameters (an illustrative sketch of such a block is given at the end of this chapter):
\begin{itemize}
\item \textbf{ortho}. Number of back-propagation steps between orthogonalization. Default: 10
\item \textbf{nsteps}. Maximum number of back-propagation steps. Default: 10
\item \textbf{naverages}. Number of back propagation calculations to perform. The number of steps will be chosen equally distributed in the range \{0, nsteps\}. Default: 1
\item \textbf{block\_size}. Number of blocks to use in the internal average of the back propagated estimator. This is used to block data and reduce the size of the output. Default: 1
\item \textbf{nskip}. Number of blocks to skip at the start of the calculation for equilibration purposes. Default: 0
\end{itemize}
\section{File formats}
... Coming Soon ...
\section{Advice/Useful Information}
AFQMC calculations are computationally expensive and require some care to obtain reasonable performance. The following is a growing list of useful advice for new users, followed by a sample input for a large calculation.
\begin{itemize}
\item Generate Cholesky-decomposed integrals with external codes instead of the 2-electron integrals directly. The generation of the Cholesky factorization is faster and consumes less memory.
\item Use the hybrid algorithm for walker propagation. Set steps/substeps to adequate values to reduce the number of energy evaluations. This is essential when using large multideterminant expansions.
\item Adjust cutoffs in the wavefunction and propagator blocks until the desired accuracy is reached. The cost of the calculation will depend on these cutoffs.
\item Adjust ncores/nWalkers to obtain better efficiency. Larger nWalkers will lead to more efficient linear algebra operations but will increase the time per step. Larger ncores will reduce the time per step but will reduce efficiency because of inefficiencies in the parallel implementation. For large calculations, values between 6--12 for both quantities should be reasonable, depending on architecture.
\end{itemize}
\begin{lstlisting}[style=QMCPXML,caption=Example of sections of an AFQMC input file for a large calculation.]
  ...
  <Hamiltonian name="ham0" type="SparseGeneral" info="info0">
    <parameter name="filename">fcidump.h5</parameter>
    <parameter name="cutoff_1bar">1e-6</parameter>
    <parameter name="cutoff_decomposition">1e-5</parameter>
  </Hamiltonian>
  <Wavefunction name="wfn0" type="MSD" info="info0">
    <parameter name="filetype">ascii</parameter>
    <parameter name="filename">wfn.dat</parameter>
  </Wavefunction>
  <WalkerSet name="wset0">
    <parameter name="walker_type">closed</parameter>
  </WalkerSet>
  <Propagator name="prop0" info="info0">
    <parameter name="hybrid">yes</parameter>
  </Propagator>
  <execute wset="wset0" ham="ham0" wfn="wfn0" prop="prop0" info="info0">
    <parameter name="ncores">8</parameter>
    <parameter name="timestep">0.01</parameter>
    <parameter name="blocks">10000</parameter>
    <parameter name="steps">10</parameter>
    <parameter name="substeps">5</parameter>
    <parameter name="nWalkers">8</parameter>
    <parameter name="ortho">5</parameter>
  </execute>
\end{lstlisting}
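The back-propagation parameters described above are set inside an \texttt{Estimator} xml-block. As a purely illustrative sketch (not taken from a validated input file), such a block might look as follows; here it is assumed to be nested inside the \texttt{Wavefunction} block, as suggested in the \texttt{Propagator} section, with the \textbf{name} argument selecting the estimator type and the parameter values simply set to the documented defaults:
\begin{lstlisting}[style=QMCPXML]
  <Wavefunction name="wfn0" type="MSD" info="info0">
    <parameter name="filetype">ascii</parameter>
    <parameter name="filename">wfn.dat</parameter>
    <Estimator name="back_propagation">
      <parameter name="nsteps">10</parameter>
      <parameter name="ortho">10</parameter>
      <parameter name="naverages">1</parameter>
      <parameter name="block_size">1</parameter>
      <parameter name="nskip">0</parameter>
    </Estimator>
  </Wavefunction>
\end{lstlisting}
When the hybrid propagation scheme is used, a block of this kind is what supplies the printed energy estimates, since the local energy is then never evaluated during propagation itself.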
{ "alphanum_fraction": 0.7256890642, "avg_line_length": 81.5977011494, "ext": "tex", "hexsha": "87cfa31dd20c31960ec035456e832dd2b02c6002", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0ab00b7d5bfac2ade860aac513c6e571ae2af13d", "max_forks_repo_licenses": [ "NCSA" ], "max_forks_repo_name": "simonpintarelli/qmcpack", "max_forks_repo_path": "manual/afqmc.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0ab00b7d5bfac2ade860aac513c6e571ae2af13d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "NCSA" ], "max_issues_repo_name": "simonpintarelli/qmcpack", "max_issues_repo_path": "manual/afqmc.tex", "max_line_length": 1265, "max_stars_count": null, "max_stars_repo_head_hexsha": "0ab00b7d5bfac2ade860aac513c6e571ae2af13d", "max_stars_repo_licenses": [ "NCSA" ], "max_stars_repo_name": "simonpintarelli/qmcpack", "max_stars_repo_path": "manual/afqmc.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5559, "size": 21297 }
\chapter{The greatness of Rudrākṣa} Sūta said: 1. O sage Śaunaka, highly intelligent, of the form of Śiva, noble-minded, please listen to the greatness of \tl{rudrākṣa}. I shall explain it briefly. 2. \tl{Rudrākṣa} is a favourite bead of Śiva. It is highly sanctifying. It removes all sins by sight, contact and \tl{japas}. 3. O sage, formerly the greatness of \tl{rudrākṣa} was declared to the Goddess by Śiva, the supreme soul, for rendering help to the worlds. Śiva said: 4. O Śivā, Maheśāni, be pleased to hear the greatness of \tl{rudrākṣa}. I speak out love for you from a desire for the benefit of the devotees of Śiva. 5-7. O Mahesāni, formerly I had been performing penance for thousands of divine years. Although I had controlled it rigorously, my mind was in flutter. Out of sport, I being self-possessed just opened my eyes, O Goddess, from a desire of helping the worlds. Drops of tears fell from my beautiful half-closed eyes. From those tear-drops there cropped up the \tl{rudrākṣa} plants. 8. They became immobile. In order to bless the devotees they were given to the four \tl{varṇas} devoted to the worship of Viṣṇu. 9-10. \tl{Rudrākṣas} grown in Gauḍa\footnote{Gauḍa-deśa, according to \tl{Skandapurāṇa}, was the central part of Bengal extending from Vaṅga to the borders of Orissa: \tl{vaṅgadeśāṃ samārabhya bhūvaneśāntagaḥ śive | gauḍadeśaḥ samākhyātaḥ sarvavidyāviśāradaḥ}} land became great favourites of Śiva. They were grown in Mathurā, Laṅkā, Ayodhyā, Malaya\footnote{Malaya: a mountain range on the west of Malabar, the western ghats, abounding in sandal trees.}, Sahya\footnote{Sahya: It is one of the seven principal ranges, the other six being Mahendra, Malaya, Sūktimat, Rikṣa, Vindhya and Pāripātra or Pāriyātra.} mountain, Kāśī and other places. They are competent to break asunder the clustered sins unbearable to the others, as the sacred texts have declared. 11. At my bidding they were classified into \tl{brāhmaṇas, kṣatriyas, vaiśyas and śūdras}. These \tl{rudrākṣas} are of auspicious nature. 12. The colours of the four types of \tl{rudrākṣas} are respectively white, red, yellow and black. All people shall wear the \tl{rudrākṣa} of their own \tl{varna}. 13. If they desire their benefit, namely worldly pleasures and salvation and if the devotees of Śiva wish to gratify Śiva they must wear the \tl{rudrākṣa}. 14. A \tl{rudrākṣa} of the size of an \emph{Emblic myrobalan} (or Dhātrīphala) is mentioned as the most excellent; one of the size of the fruit of the jujube tree (or Badarīphala) is spoken of as the middling. 15. O Pārvatī, lovingly listen to this from a desire for the benefit of the devotees. The meanest of \tl{rudrākṣas} is of the size of a gram according to this excellent classification. 16. O Maheśvarī, even the \tl{rudrākṣa} which is only of the size of the fruit of the jujube accords the benefit and heightens happiness and good fortune. 17. That which is of the size of the \emph{Emblic myrobalan} is conducive to the destruction of all distresses. That which is of the size of a Guñjā (the berry) is conducive to the achievement of the fruit of all desires. 18. The lighter the \tl{rudrākṣa}, the more fruitful it is. Each of these is fruitful and that of a weight of one tenth is considered by scholars as the most fruitful. 19. The wearing of \tl{rudrākṣa} is recommended for the sake of destroying sins. Hence that which is conducive to the achievement of every object has to be worn certainly. 20. 
O Parameśvarī, no other necklace or garland is observed in the world to be so auspicious and fruitful as the \tl{rudrākṣa}. 21. O Goddess, \tl{rudrākṣas} of even size, glossy, firm, thick and having many thornlike protrusions yield desires and bestow worldly pleasures and salvation for ever. 22. Six types of \tl{rudrākṣas} shall be discarded: that which is defiled by worms, is cut and broken, has no thornlike protrusions, has cracks and is not circular. 23. That which has a natural hole from end to end is the most excellent; that which is bored through by human effort is the middling one. 24. The wearing of \tl{rudrākṣa} is spoken of as conducive to the destruction of great sins. If eleven hundred \tl{rudrākṣas} are worn on the person, the man assumes the form of Rudra. 25. Even in hundreds of years it is impossible to describe adequately the benefit derived by wearing eleven hundred and fifty \tl{rudrākṣas}. 26. A devout man shall make a coronet consisting of five hundred and fifty \tl{rudrākṣa}s. 27. A person of pious nature shall make three circular strings in the manner of the sacred thread, each having three hundred and sixty beads. 28. O Maheśvarī, three \tl{rudrākṣas} must be worn on the tuft and six in each of the ears right and left. 29-30. Hundred and one \tl{rudrākṣas} shall be worn round the neck; eleven \tl{rudrākṣas} shall be worn round each of the arms, elbows and wrists. Devotees of Śiva shall have three \tl{rudrākṣas} in the sacred thread and round the hips five \tl{rudrākṣas} shall be tied. 31. O Parameśvarī, the person by whom so many \tl{rudrākṣas} are worn is worthy of being bowed to and adored by all like Maheśa. 32. Such a person while in contemplation shall be duly seated and addressed “O Śiva”. Seeing him, every one is freed from sins. 33. This is the rule regarding eleven hundred \tl{rudrākṣas}s If so many are not available, another auspicious procedure I mention to you. 34-36. One \tl{rudrākṣa} shall be worn on the tuft, thirty on the head, fifty round the neck; sixteen in each of the arms; twelve round each of the wrists; five hundred on the shoulders, and three strings each having hundred and eight in the manner of the sacred thread. He who wears in all a thousand \tl{rudrākṣas} and is of firm resolve in performing rites is bowed to by all \tl{devas} like Rudra himself. 37-39. One \tl{rudrākṣa} shall be worn on the tuft, forty on the forehead, thirty-two round the neck; hundred and eight over the chest; six in each of the ears; sixteen round each of the arms; O lord of sages, according to the measurement of the forearms, twelve or twice that number shall be worn there. A person who wears so many, out of love, is a great devotee of Śiva. He shall be worshipped like Śiva. He is worthy of being always honoured by all. 40. It shall be worn on the head repeating \tl{īśāna mantra}\footnote{ \tl{Vājasaneyisaṃhitā} (of \tl{Śuklayajurveda}) 39.8.}; on the ears with \tl{tripuruṣa mantra}\footnote{Ibid. 17.11.}; round the neck with \tl{aghora mantra}\footnote{Ibid. 16.2.} and on the chest also likewise. 41. The wise devotee shall wear the \tl{rudrākṣa} round the forearms with \tl{aghora bīja} mantra. A string of fifteen beads shall be worn on the stomach with \tl{vāmadeva mantra}\footnote{TA. 10.44.1; Mahā 4.17.2.}. 42. With five mantras—\tl{sadyojāta} \etc three, five or seven garlands shall be worn. Or all beads shall be worn with the \tl{mūla mantra}\footnote{The five-syllabled mantra “\tl{namaḥ śivāya}” is the basic \tl{mūla} mantra of Śiva.}. 43. 
A devotee of Śiva shall refrain from eating meat, garlic, onion, red garlic, potherb, \tl{śleṣmātaka}, pig of rubbish and liquors. 44. O Umā, daughter of the mountain, the white \tl{rudrākṣa} shall be worn by the \tl{brāhmaṇa}, the red by the \tl{kṣatriya}, the yellow by the \tl{vaiśya}, the black by the \tl{śūdra}. This is the path indicated by the Vedas. 45. Whether he is a householder, forest-dweller, ascetic or of any order, none shall go out of this secret advice. Only by great merits can the opportunity to wear the \tl{rudrākṣa} be obtained. If he misses it he will go to hell. 46. The \tl{rudrākṣas} of the size of an \emph{Emblic myrobalan} and those of lighter weight but depressed with thorns, those eaten by worms or without holes and those characterized by other defects shall not be worn by those wishing for auspicious results. They shall avoid small ones of the size of gram. O Umā, \tl{rudrākṣa} is an auspicious complement to my phallic image. The small one is always praiseworthy. 47. People of all \tl{varṇas} and \tl{āśramas} even women and \tl{śūdras} can wear \tl{rudrākṣa} at the bidding of Śiva. The ascetics shall wear it with the \tl{Praṇava}. 48. If any one wears it during the day he is freed from sins committed during the night; if he wears it during the night he is freed from the sins committed during the day. Similar is the result with its wearing during morning, midday or evening. 49. Those who wear \tl{tripuṇḍra}, the matted hair and the \tl{rudrākṣa} do not go to Yama’s abode. 50-52. [Yama’s directive to his attendants:—] “Those who wear at least one \tl{rudrākṣa} on their heads, \tl{tripuṇḍra} on the forehead and repeat the five-syllabled mantra shall be honoured by you all. They are indeed saintly men. You can bring the man here who has no \tl{rudrākṣa} on his person, and no \tl{tripuṇḍra} on his forehead and who does not utter the five-syllabled mantra. All those who have the ash and \tl{rudrākṣa} shall be honoured always by us after knowing their power. They shall never be brought here”. 53. Yama commanded his attendants like this. They too remained quiet agreeing to it. In fact they were surprised. 54. Hence Mahādevī, the \tl{rudrākṣa} as well as the person who wears it is my favourite. O Pārvatī, even if he has committed sins he becomes pure. 55. He who wears \tl{rudrākṣa} round the hands and arms and over the head cannot be killed by any living being. He shall roam in the world in the form of Rudra. 56. He shall be respected by the Gods and Asuras always. He shall be honoured like Śiva. He removes the sin of any one seen by him. 57. If a person is not liberated after meditation and acquisition of knowledge he shall wear \tl{rudrākṣa}. He shall be freed from all sins and attain the highest goal. 58. A mantra repeated with \tl{rudrākṣa} is a crore times more efficacious. A man wearing \tl{rudrākṣa} derives a hundred million times more merit. 59. O Goddess, as long as the \tl{rudrākṣa} is on the person of a living soul he is least affected by premature death. 60. One shall attain Rudra on seeing a person with \tl{tripuṇḍra}, his limbs covered with \tl{rudrākṣa} and repeating the \tl{mṛtyuñjaya mantra} \footnote{\tl{Vājasaneyisaṃhitā} (of \tl{Śuklayajurveda}) 30.60.}. 61. He is a favourite of the five deities\footnote{The five deities referred to here are: the sun, Gaṇeśa, Goddess Durgā, Rudra and Viṣṇu. Cp. \tl{ādityaṃ gaṇanāthaṃ ca devīṃ rudraṃ ca keśavam | pañcadaivatamityuktaṃ sarvakarmasu pūjayet}} and a favourite of all gods. 
O beloved, a devotee shall repeat all mantras wearing a garland of \tl{rudrākṣa}s (or counting on the beads). 62. Even the devotees of Viṣṇu and other deities shall unhesitatingly wear the \tl{rudrākṣa}. Especially the devotee of Rudra shall wear \tl{rudrākṣas} always. 63. \tl{Rudrākṣas} are of various types. I shall explain their different classifications. O Pārvatī, hear with great devotion. These \tl{rudrākṣas} bestow worldly pleasures and salvation. 64. A \tl{rudrākṣa} of a single face is Śiva Himself. It bestows worldly pleasures and salvation. The sin of \tl{brāhmaṇa-slaughter} is washed off at its mere sight. 65. Where it is adored, Fortune cannot be far off. Harms and harassments perish. All desires are fulfilled. 66. A \tl{rudrākṣa} with two faces is Īśa, the lord of \tl{devas}. It bestows the fulfilment of all desires. Especially, that \tl{rudrākṣa} quickly quells the sin of cow-slaughter. 67. A \tl{rudrākṣa} with three faces always bestows means of enjoyment. As a result of its power all lores become firmly established. 68. A \tl{rudrākṣa} of four faces is Brahmā Himself. It quells the sin of man-slaughter. Its vision and its contact instantaneously bestow the achievement of the four aims of life. 69. A \tl{rudrākṣa} with five faces is Rudra Himself. Its name is Kālāgni. It is lordly. It bestows all sorts of salvation and achievement of all desired objects. 70. A five-faced \tl{rudrākṣa} dispels all sorts of sins such as accrue from sexual intercourse with a forbidden woman and from eating forbidden food. 71. A \tl{rudrākṣa} with six faces is Kārtikeya. A man who wears it on the right arm is certainly absolved of the sins of \tl{brāhmaṇa-slaughter} and the like. 72. A \tl{rudrākṣa} with seven faces, O Maheśāni, is called Anaṅga. O Deveśī, by wearing it even a poor man becomes a great lord. 73. A \tl{rudrākṣa} with eight faces is called Vasumūrti and Bhairava. By wearing it a man lives the full span of life. After death he becomes the trident-bearing lord (Śiva). 74. A \tl{rudrākṣa} with nine faces is also Bhairava. Its sage is Kapila. Its presiding goddess is Durgā of nine forms, Maheśvarī Herself. 75. That \tl{rudrākṣa} shall be worn on the left hand with great devotion. He shall certainly become Sarveśvara like me. 76. O Maheśānī, a \tl{rudrākṣa} with ten faces is Lord Janārdana Himself. O Deveśī, by wearing it, the devotee shall achieve the fulfilment of all desires. 77. O Parameśvarī, a \tl{rudrākṣa} with eleven faces is Rudra. By wearing it one becomes victorious everywhere. 78. One shall wear the twelve-faced \tl{rudrākṣa} on the hair of the head. All the twelve Ādityas (suns) are present therein. 79. A \tl{rudrākṣa} with thirteen faces is Viśvedeva. By wearing it, a man will attain the realisation of all desires. He will derive good fortune and auspiciousness. 80. A \tl{rudrākṣa} with fourteen faces is the highest Śiva. It shall be worn on the head with great devotion. It quells all sins. 81. O daughter of the king of mountains, thus I have explained to you the different types of \tl{rudrākṣas} based on the number of faces. 
Please listen to the mantras with devotion: \begin{itemize} \item\tl{oṃ hrīṃ namaḥ} (single-faced) \item\tl{oṃ namaḥ} (2 faces) \item\tl{oṃ klīṃ namaḥ} (3 faces) \item\tl{oṃ hrīṃ namaḥ} (4 faces) \item\tl{oṃ hrīṃ namaḥ} (5 faces) \item\tl{oṃ hrīṃ huṃ namaḥ} (6 faces) \item\tl{oṃ huṃnamaḥ} (7 faces) \item\tl{oṃ huṃ namaḥ} (8 faces) \item\tl{oṃ hrīṃ huṃ namaḥ} (9 faces) \item\tl{oṃ hrīṃ namaḥ namaḥ} (10 faces) \item\tl{oṃ hrīṃ huṃ namaḥ} (11 faces) \item\tl{oṃ krauṃ kṣauṃ rauṃ namaḥ} (12 faces) \item\tl{oṃ hrīṃ namaḥ} (13 faces) \item\tl{oṃ nama} (14 faces) \end{itemize} 82. For the achievement of all desired objects, the devotee shall wear the \tl{rudrākṣa} with mantras. He shall have great devotion and faith. He shall be free from lethargy. 83. The man who wears the \tl{rudrākṣa} without mantra falls into a terrible hell and stays there during the tenure of fourteen Indras. 84-85. On seeing a man with the garland of \tl{rudrākṣas}, all evil spirits, ghosts, \tl{piśācas}, witches like \tl{dākinī} and \tl{śākinī}, other malignant spirits, evil charms and spells \etc fly away suspecting a quarrel. 86. Seeing a devotee with the garland of \tl{rudrākṣas}, O Pārvatī, Śiva, Viṣṇu, Devī, Gaṇapati, the sun and all the Gods are pleased. 87. Thus realising its greatness the \tl{rudrākṣa} must be worn well, O Maheśvarī, repeating the mantras with devotion to make virtues flourish. 88. Thus, the greatness of ash and \tl{rudrākṣa} that bestow worldly pleasures and salvation, was explained to Girijā by Śiva, the supreme soul. 89. The persons who apply ash and wear \tl{rudrākṣa} are great favourites of Śiva. Enjoyment of worldly pleasures and salvation are certainly due to their influence. 90. He who applies ash and wears \tl{rudrākṣa} is called a devotee of Śiva. A person devoted to the \tl{japa} of the five-syllabled mantra is a perfect and noble being. 91. If Mahādeva is worshipped without the \tl{tripuṇḍra} of ash and without the garland of \tl{rudrākṣa}, he does not bestow the fruit of cherished desire. 92. Thus, O lord of sages, whatever has been asked has now been explained. The greatness of ash and \tl{rudrākṣa} bestows the luxuriant fulfilment of all desires. 93. He who regularly listens to the highly auspicious greatness of ash and \tl{rudrākṣa} with devotion shall attain the fulfilment of all desires. 94. He will enjoy all happiness here. He will be blessed with sons and grandsons. In the next world he will attain salvation. He will be a great favourite of Śiva. 95. O lordly sages, thus the compendium of \tl{Vidyeśvara-saṃhitā} has been narrated to you all. As ordered by Śiva it bestows achievement of everything and salvation.
{ "alphanum_fraction": 0.7689260027, "avg_line_length": 47.9707602339, "ext": "tex", "hexsha": "35fb2a151b87da54d77997d4bdb55d89631f9cb2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "535ad048995a35d3ebabd84842788e60f99158f2", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "elvendrim/shivamahapurana", "max_forks_repo_path": "samhita-vidyeshvara/vidyeshvara-25.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "535ad048995a35d3ebabd84842788e60f99158f2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "elvendrim/shivamahapurana", "max_issues_repo_path": "samhita-vidyeshvara/vidyeshvara-25.tex", "max_line_length": 82, "max_stars_count": null, "max_stars_repo_head_hexsha": "535ad048995a35d3ebabd84842788e60f99158f2", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "elvendrim/shivamahapurana", "max_stars_repo_path": "samhita-vidyeshvara/vidyeshvara-25.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5510, "size": 16406 }
\section{Loss functions for point predictions}
{ "alphanum_fraction": 0.7959183673, "avg_line_length": 12.25, "ext": "tex", "hexsha": "b80b63fd3bfed367b43e23bb4e165a4c2dba0be6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/statistics/parametric/03-00-PointLoss.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/statistics/parametric/03-00-PointLoss.tex", "max_line_length": 46, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/statistics/parametric/03-00-PointLoss.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 10, "size": 49 }
\section{201512-2} \input{problem/6/201512-2-p.tex}
{ "alphanum_fraction": 0.7307692308, "avg_line_length": 17.3333333333, "ext": "tex", "hexsha": "1e798238efdcf04faca19ee84e7e323c562347ee", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "26ef348463c1f948c7c7fb565edf900f7c041560", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "xqy2003/CSP-Project", "max_forks_repo_path": "problem/6/201512-2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "26ef348463c1f948c7c7fb565edf900f7c041560", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "xqy2003/CSP-Project", "max_issues_repo_path": "problem/6/201512-2.tex", "max_line_length": 32, "max_stars_count": 1, "max_stars_repo_head_hexsha": "26ef348463c1f948c7c7fb565edf900f7c041560", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "xqy2003/CSP-Project", "max_stars_repo_path": "problem/6/201512-2.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-14T01:47:19.000Z", "max_stars_repo_stars_event_min_datetime": "2022-01-14T01:47:19.000Z", "num_tokens": 22, "size": 52 }
\chapter*{Abstract} \chapterbelowskip \fixme{English abstract here} \vfill\null
{ "alphanum_fraction": 0.7804878049, "avg_line_length": 11.7142857143, "ext": "tex", "hexsha": "3c0451bc822e1da93d32b3faa2b1020feccdcd55", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f8adaa084c5150f4c60b914b3f30f57ed8e2216f", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "mike402/thesis-dummy", "max_forks_repo_path": "src/cover/abstract.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f8adaa084c5150f4c60b914b3f30f57ed8e2216f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "mike402/thesis-dummy", "max_issues_repo_path": "src/cover/abstract.tex", "max_line_length": 29, "max_stars_count": 1, "max_stars_repo_head_hexsha": "f8adaa084c5150f4c60b914b3f30f57ed8e2216f", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "mike402/thesis-dummy", "max_stars_repo_path": "src/cover/abstract.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-25T19:49:06.000Z", "max_stars_repo_stars_event_min_datetime": "2021-09-25T19:49:06.000Z", "num_tokens": 25, "size": 82 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Carlos Santos             %
% ECE 351-51                %
% Lab 1                     %
% 1/28/2020                 %
%                           %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\documentclass[12pt]{article}

% Language and font encoding
\usepackage[english]{babel}
\usepackage[utf8x]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{caption}
\usepackage{float}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{rotating}
\usepackage{setspace}

% Sets page size and margins
\usepackage[a4paper,top=3cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry}

% Useful packages
\usepackage[colorinlistoftodos]{todonotes}
\usepackage[colorlinks=true, allcolors=blue]{hyperref}
\usepackage{listings}
\usepackage{gensymb}

%Line Spacing
\setstretch{1.5}

%Info for Title Page
\title{ECE 351 Lab 1 Report}
\author{Carlos Santos}

\begin{document}

%Make a Title Page
\vspace{\fill}
\maketitle
\vspace{\fill}
\clearpage

%Section Summaries
\section*{Section Summaries}
Section 3: The one thing I love more about Python is the documentation. Spyder is also well documented and full of useful shortcuts. \\
Section 4: Printing out stuff is easier in Python! There are plenty of useful libraries that make doing math easy. \\
Section 5: LaTeX is a pretty neat language for making documents! I'll continue to use it this semester.

%Questions
\section*{Questions}
1) Which course are you most excited for in your degree? Which course have you enjoyed the most so far? \\
The course I am most excited for is ECE 440. I also really enjoyed ECE 340. \\
2) Leave any feedback on the clarity of the expectations, instructions, and deliverables. \\
This lab is pretty straightforward. The only thing to add is the header stuff you showed us.

\end{document}
{ "alphanum_fraction": 0.7027762657, "avg_line_length": 26.2428571429, "ext": "tex", "hexsha": "5c881d73bd908332374d2f999f02d30e20db97f7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8070ef2b32f770d76a98993fbcbd8fd11fa5c27c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "carlkid1499/ECE351-Reports", "max_forks_repo_path": "ECE351_Lab_1_main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8070ef2b32f770d76a98993fbcbd8fd11fa5c27c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "carlkid1499/ECE351-Reports", "max_issues_repo_path": "ECE351_Lab_1_main.tex", "max_line_length": 129, "max_stars_count": null, "max_stars_repo_head_hexsha": "8070ef2b32f770d76a98993fbcbd8fd11fa5c27c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "carlkid1499/ECE351-Reports", "max_stars_repo_path": "ECE351_Lab_1_main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 492, "size": 1837 }
\section{Package Overview }
% \pagenumbering{arabic}
% \setcounter{secnumdepth}{5}
\setlength{\parskip}{0.5em}
%
The Message Passing Environment Utility ({\tt mpeu}) is a library that was designed and developed at the Data Assimilation Office (DAO) to support its data assimilation system. It sits on top of vendor utilities such as MPI, BLAS, shared-memory primitives, etc.
%
{\tt mpeu} is written in Fortran 90 and is portable to many platforms. It provides the following services:
%
\begin{itemize}
\item Management of resources
\item Manipulation of strings
\item F90 module-style access to MPI
\item Portable/flexible definition of types
\item Support for multiprocessor stdout/stderr
\item Error handling/shutdown
\item Timing/load balance monitoring tools
\item Sorting tools
\end{itemize}
%
The {\tt mpeu} library can be easily included in any application (using MPI) and can be ported to any platform with little (or no) modification. This report is the user documentation that describes the above {\tt mpeu} services. Each of the services is contained in a module. In the next sections, we present the functions of each module and provide enough examples to show how the library can be efficiently employed in a given application.
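To give a flavor of the module-style interface, the sketch below shows how an application might pull a few of these services into scope. It is only an illustration: the module and routine names used here ({\tt m\_stdio}, {\tt m\_die} and a two-argument {\tt die()}) are assumptions made for this example, and the authoritative interfaces are the ones documented in the module sections that follow.
\begin{verbatim}
! Illustrative sketch only; the mpeu module/routine names below are
! assumed and should be checked against the module documentation.
program mpeu_example
   use m_stdio, only : stdout, stderr   ! multiprocessor stdout/stderr
   use m_die,   only : die              ! error handling/shutdown
   implicit none
   include 'mpif.h'
   integer :: ier
   call MPI_Init(ier)                   ! mpeu sits on top of vendor MPI
   if (ier /= MPI_SUCCESS) call die('mpeu_example', 'MPI_Init() failed')
   write(stdout,'(a)') 'mpeu services are now available'
   call MPI_Finalize(ier)
end program mpeu_example
\end{verbatim}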
{ "alphanum_fraction": 0.7868589744, "avg_line_length": 32.8421052632, "ext": "tex", "hexsha": "f8537a5545856ac9c998ffb9a6a65daae432aef2", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2022-03-11T16:26:29.000Z", "max_forks_repo_forks_event_min_datetime": "2019-07-05T18:00:44.000Z", "max_forks_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3", "max_forks_repo_licenses": [ "NCSA", "Apache-2.0", "MIT" ], "max_forks_repo_name": "joeylamcy/gchp", "max_forks_repo_path": "Shared/GMAO_mpeu/doc/PackageOverview.tex", "max_issues_count": 105, "max_issues_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3", "max_issues_repo_issues_event_max_datetime": "2022-03-22T02:12:16.000Z", "max_issues_repo_issues_event_min_datetime": "2019-07-08T19:27:23.000Z", "max_issues_repo_licenses": [ "NCSA", "Apache-2.0", "MIT" ], "max_issues_repo_name": "joeylamcy/gchp", "max_issues_repo_path": "Shared/GMAO_mpeu/doc/PackageOverview.tex", "max_line_length": 75, "max_stars_count": 2, "max_stars_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3", "max_stars_repo_licenses": [ "NCSA", "Apache-2.0", "MIT" ], "max_stars_repo_name": "joeylamcy/gchp", "max_stars_repo_path": "Shared/GMAO_mpeu/doc/PackageOverview.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-31T15:39:30.000Z", "max_stars_repo_stars_event_min_datetime": "2020-12-02T14:23:30.000Z", "num_tokens": 307, "size": 1248 }
%%%%%%%%%%%%%%%%%%%%%%% file typeinst.tex %%%%%%%%%%%%%%%%%%%%%%%%% % % This is the LaTeX source for the instructions to authors using % the LaTeX document class 'llncs.cls' for contributions to % the Lecture Notes in Computer Sciences series. % http://www.springer.com/lncs Springer Heidelberg 2006/05/04 % % It may be used as a template for your own input - copy it % to a new file with a new name and use it as the basis % for your article. % % NB: the document class 'llncs' has its own and detailed documentation, see % ftp://ftp.springer.de/data/pubftp/pub/tex/latex/llncs/latex2e/llncsdoc.pdf % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\modelname, \modelname \documentclass[a4paper]{llncs} %\documentclass[a4paper]{llncs} \usepackage{tikz} \usetikzlibrary{positioning,fit} \usepackage{amssymb} \setcounter{tocdepth}{3} \usepackage{graphicx} %%%%TODO: REMOVE THE TERM metacompleteness %% MY COOL PACKAGES \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} %\usepackage{times} \usepackage{tgtermes} %\usepackage{bera} \usepackage[scaled=0.93]{beramono} \usepackage[normalem]{ulem} \usepackage{cite} \usepackage{float} \usepackage{wrapfig} \usepackage{scalefnt} \usepackage{setspace} % %\usepackage{listings} %\lstset{basicstyle=\ttfamily\small, %% numbers=left, numberstyle=\tiny, stepnumber=1, numbersep=5pt, % language=java, mathescape=true, escapechar=\!, % emph={object}, emphstyle=\textbf, % showstringspaces=false %} %%%% \usepackage{hyperref} \usepackage{cleveref} % http://tex.stackexchange.com/questions/10102/multiple-references-to-the-same-footnote-with-hyperref-support-is-there-a-bett/10116#10116 \crefformat{footnote}{#2\footnotemark[#1]#3} \usepackage{xcolor} \definecolor{entityColor}{RGB}{0,100,200} \definecolor{attributeColor}{RGB}{0,100,50} \definecolor{relationColor}{RGB}{160,0,30} \usepackage{listings} \lstdefinestyle{reqT}{ %belowcaptionskip=1\baselineskip, breaklines=true, %showstringspaces=false, showspaces=false, %breakatwhitespace=true, basicstyle=\ttfamily\fontsize{8}{10}\selectfont, emph={Ent,Meta,Item,Label,Section,Term,Actor,App,Component,Domain,Module,Product,Release,Resource,Risk,Service,Stakeholder,System,User,Class,Data,Input,Member,Output,Relationship,Design,Screen,MockUp,Function,Interface,State,Event,Epic,Feature,Goal,Idea,Issue,Req,Ticket,WorkPackage,Breakpoint,Barrier,Quality,Target,Scenario,Task,Test,Story,UseCase,VariationPoint,Variant}, emphstyle=\bfseries\color{entityColor}, emph={[2]has,is,superOf,binds,deprecates,excludes,helps,hurts,impacts,implements,interactsWith,precedes,requires,relatesTo,verifies}, emphstyle={[2]\color{relationColor}}, emph={[3]Attr,Code,Constraints,Comment,Deprecated,Example,Expectation,FileName,Gist,Image,Spec,Text,Title,Why,Benefit,Capacity,Cost,Damage,Frequency,Min,Max,Order,Prio,Probability,Profit,Value,Status}, emphstyle={[3]\itshape \color{attributeColor}}, } \lstset{style=reqT} \newcommand{\keywords}[1]{\par\addvspace\baselineskip \noindent\keywordname\enspace\ignorespaces#1} \begin{document} %\floatstyle{ruled} %\newfloat{Specification}{htbp}{lop} \mainmatter % start of an individual contribution % first the title is needed \title{What is essential? -- A pilot survey on views about the requirements metamodel of reqT.org} \titlerunning{DRAFT MANUSCRIPT -- Submitted to REFSQ'16.} \author{Bj\"orn Regnell} % %\authorrunning{Regnell, "Requirements Modeling for Code Lovers"} \institute{Dept. 
of Computer Science, Lund University, Sweden \\
\url{[email protected]} }

\maketitle

%%%% ABSTRACT
\begin{abstract}
[{\bf Context \& motivation}] This research preview paper presents ongoing work on the metamodel of a free software requirements modeling tool called reqT that is developed in an educational context. The work aims to make an initial validation of a survey instrument that elicits views on the metamodel of the reqT tool, which aims to engage computer science students in Requirements Engineering (RE) through an open source DSL embedded in the Scala programming language.
[{\bf Question}] The research question is: Which RE concepts are essential to include in the metamodel for a requirements engineering tool in an educational context?
[{\bf Principal ideas}] A survey instrument is developed, with a list of 92 concepts (49 entities, 15 relations and 28 attributes) and a set of questions for each concept, to elicit the respondents' views on the usage and interpretation of each concept.
[{\bf Contribution}] The survey is initially validated in a pilot study involving 14 Swedish RE scholars as subjects. The survey results indicate that the survey is feasible. The analysis of the responses suggests that many of the concepts in the metamodel are used frequently by the respondents and that there is a large degree of agreement among the respondents about the meaning of the concepts. The results are encouraging for future work on empirical validation of the relevance of the reqT metamodel.
\keywords{requirements engineering, metamodel, CASE tool, engineering education, embedded domain-specific language, empirical software engineering.}
\end{abstract}

%%%%% INTRO
\section{Introduction}
There are many challenges in teaching Requirements Engineering (RE) \cite{Memon2010, Regev2011}, including advancing students' requirements modelling skills that can be used effectively in an unstructured, non-ideal, real-world situation \cite{Callele2006}. When teaching RE modelling we may ask ourselves: What are the \textit{essential} RE concepts that we should include in a taught metamodel for requirements? This paper investigates this question in conjunction with the on-going work of developing a metamodel for \href{http://reqT.org}{reqT.org}, an open source requirements engineering tool used in RE education \cite{Regnell2013}. A survey instrument is presented, aiming to elicit the frequency of RE term usage and the degree of interpretation agreement. The responses from 14 Swedish RE scholars are analysed and discussed, and the conclusions suggest that a large subset of the concepts of the current reqT metamodel can be seen as ``essential'' in that a majority of the subjects use them while agreeing with the concepts' definitions. The presented work represents an initial validation of the survey instrument. Further work involving more subjects is needed to draw conclusions with more certainty.

\section{Background}
There are nowadays numerous commercial RE tools available, but many are expensive, complex and not sufficiently open \cite{Carillo2011}. A major aim of the reqT open source project is to provide a small but scalable, semi-formal and free software package for an educational setting \cite{Regnell2013} that can inspire code-loving computer science students to learn more about requirements modeling. The tool development started in 2011 at Lund University, where reqT is used in RE teaching at MSc level in student role-playing projects.\footnote{\label{note:ets170}The Lund Univ.
MSc-level RE course can be found at: \scriptsize\url{http://cs.lth.se/education}}
A critical issue is how to choose the essential RE concepts that allow for sufficient expressiveness, while not overloading the metamodel with esoteric concepts just for the sake of completeness.
%The reqT tool is used in a course based on a specific text book \cite{Lauesen2002} and a specific student project concept \cite{ets170}, and the concepts of the earlier versions of the reqT requirements metamodel reflect that context.

The reqT metamodel includes three types of concepts: entities, attributes and relations. Entities and attributes are nodes in a graph data structure, while relations are edges that can connect entities with sub-graphs. Thus a tree-like structure of arbitrary depth can be created, spanning the graph that models some chunk of requirements. The code below shows a toy example of an orthogonal variability model \cite{metzger2007variability} expressed in the reqT Scala-embedded DSL \cite{Regnell2013}, illustrating a small part of its metamodel. Other parts of the metamodel contain concepts that enable e.g. goal modelling, %\cite{liu2004}
use case modelling, and user story modelling; see Appendix A.

\begin{spacing}{1.1}
\begin{lstlisting}
Model(
  Component("appearance") has (
    VariationPoint("color") has (
      Min(0), Max(2),
      Variant("blue"), Variant("red"), Variant("green")),
    VariationPoint("shape") has (
      Min(1), Max(1), Variant("round"), Variant("square")),
    VariationPoint("payment") has (
      Min(1), Max(2), Variant("cash"), Variant("credit")),
    VariationPoint("payment") requires Variant("cash"),
    Variant("round") excludes Variant("red"),
    Variant("green") requires Variant("square")),
  Component("appearance") requires VariationPoint("shape"),
  App("free") has Component("appearance"),
  App("free") binds (VariationPoint("shape") binds Variant("round")),
  App("premium") has Component("appearance"),
  App("premium") binds (
    VariationPoint("color") binds (Variant("red"), Variant("green")),
    VariationPoint("shape") binds (Variant("round"), Variant("square")),
    VariationPoint("payment") binds Variant("cash")))
\end{lstlisting}
\end{spacing}

\noindent Entities in the above code listing are in bold, attributes in italics and relations start with a lower case letter. In the reqT editor, entities, attributes, and relations are syntax-coloured in blue, green and red respectively. A reqT model written in the above syntax is actually valid Scala code that, when executed, generates a data structure that can be traversed and manipulated using Scala scripts. Visualisations can be generated by export to GraphViz. Export is also available to HTML and spreadsheet formats.

\section{Methodology and Data Collection}
In order to validate RE scholars' opinions of the metamodel, a survey instrument was developed covering the 49 entities, 15 relations and 28 attributes. All concepts and definitions are listed in Appendix A.
\footnote{The survey is available at \scriptsize\url{https://github.com/reqT/reqT/tree/3.0.x/survey}}
The concepts were gathered from various sources including the IREB Glossary\footnote{\scriptsize\url{https://www.ireb.org/en/cpre/cpre-glossary/}}, Wikipedia, agile development, variability \cite{metzger2007variability} and goal modelling, and the textbook \cite{Lauesen2002} used in an RE course at Lund Univ.\cref{note:ets170}
\vspace{-0.5cm}
\begin{figure}[h]
\centering
\includegraphics[width=0.99\textwidth]{img/survey-screen-dump}
\caption{A screen dump of a part of the survey instrument.}
\label{fig:survey}
\end{figure}
\vspace{-0.5cm}
\noindent The data collection took place during a Swedish national network meeting with academic RE scholars in spring 2015. The survey was filled in during the meeting on the participants' own laptops, using the spreadsheet shown in Fig.~\ref{fig:survey}. The subjects were given around 20 minutes to complete the survey. Most of the subjects handed in the survey via email directly after the session, while a few finished it after the meeting.

\section{Data Analysis}
\textbf{Subject background.} The background questions in the survey regard the role of the subject, as shown in Table \ref{table:background}. The analyzed\footnote{One subject answered NO on all background questions and was therefore excluded.}
\input{summary.tex}
The response rate was 100\% after a reminder was emailed to one missing subject.
%\vspace{-0.6cm}
%\noindent
\textbf{Frequency analysis.} The degree of ``essentiality'' is characterized as the number of subjects that have responded that they (1) use the concept at least in an informal, non-persistent way, \textit{and} that they (2) use the concept with a meaning similar to the definition in Appendix A. Fig.~\ref{fig:survey} shows the definitions of the three-level ordinal scales of Questions $Q1_{usage}$ and $Q2_{meaning}$, respectively. Table \ref{table:frequency} shows the results of the frequency counts. If an ``essentiality threshold'' is chosen at $N/2$, then only the $9$ concepts from row $n = 7$ and below in Table~\ref{table:frequency} are considered ``non-essential'', hence showing that more than 90\% of the metamodel concepts have a majority of the subjects that use them and agree upon their definitions. Each concept has at least one subject that uses it and agrees with its definition. The following 19 concepts were reported ``missing'': S01: \emph{or}, S02: \emph{bug}, \emph{threshold}, S04: \emph{role}, \emph{problem}, \emph{motivates}, \emph{and}, \emph{or}, \emph{pattern}, \emph{submodel}, S06: \emph{plug-in}, \emph{informalism}, S07: \emph{full sentence}, S09: \emph{satisfaction}, \emph{satisfies}, \emph{customer}, S11: \emph{system-of-interest}, \emph{verification}, \emph{validation}, S13: \emph{context}. Thus, \emph{or} was the only concept that more than one subject (S01, S04) considered ``missing''.
{The anonymised data and analysis scripts (developed using Scala and Apache POI) are available at: \footnotesize\url{https://github.com/bjornregnell/reqT-survey}}
\vspace{-0.3cm}
\begingroup
\begin{table}[H]
\setlength{\tabcolsep}{4pt} % Default value: 6pt
\renewcommand{\arraystretch}{1.4} % Default value: 1
\centering
\fontsize{8.5}{9}\selectfont
\caption{Background of subjects, $N = 15$.
The subjects were given anonymous ids S01--S15.}
\label{table:background}
\vspace{-0.2cm}
\begin{tabular}{p{0.42\textwidth}| p{0.54\textwidth}}
\textit{Background question} & \textit{Subject responding YES} \\ \hline
\input{background.tex}
\end{tabular}
\end{table}
\endgroup
\vspace{-3.6em}
\begingroup
\setlength{\tabcolsep}{4pt} % Default value: 6pt
\renewcommand{\arraystretch}{1.35} % Default value: 1
\begin{table}
%\centering
\begin{center}
\fontsize{7}{8}\selectfont
\caption{Frequency analysis, where $n$ is the number of subjects that for the respective concept answered $(Q1_{usage} \geq 1)$ \textit{and} $(Q2_{meaning} = 2)$. In total there are 92 concepts (49 entities, 15 relations and 28 attributes). The higher up in the table, the more ``essential''. For $n = 0, 2, 3, 5$ there were no concepts with answers by that number of subjects.}
\label{table:frequency}
\begin{tabular}{l | p{0.33\textwidth} | p{0.30\textwidth} | p{0.25\textwidth}}
\textit{$n$} & \textit{Entities} & \textit{Attributes} & \textit{Relations} \\ \hline
\input{essential.tex}
\end{tabular}
\end{center}
%\vspace{0.1cm}
%\fontsize{9}{10}\selectfont
%\textbf{Table 3.} $Q3$ \emph{Missing Concepts} according to subjects:
%S01: or,
%S02: bug, threshold,
%S04: role, problem, motivates, and, or, pattern, submodel,
%S06: plug-in, informalism,
%S07: full sentence,
%S09: satisfaction, satisfies, customer,
%S11: system-of-interest, verification, validation, S13: context.
\end{table}
\endgroup
%%%
%%%%%%%%%%%%%% DISCUSSION
\section{Discussion and Conclusion}\label{section:discussion}
The presented survey is a pilot investigation with two main contributions: (1) the survey instrument together with the data collection and analysis approach, which are shown to be feasible in the presented context, and (2) the pilot study results: for more than 90\% of the 92 reqT metamodel concepts a majority of the 14 participating RE scholars claim to use them and agree upon their definitions. Only 1 concept was considered missing by more than one subject, while in total 19 concepts were reported missing by at least one subject.

{\bf Limitations. } It can be questioned whether the ``essentiality'' of a set of RE concepts can be characterized by how many RE scholars use them and agree upon their definitions, but it can also be argued that concept usage in an educational context is interesting to investigate when developing a metamodel for an academic RE tool.
%However, if someone use a certain concept and wants to model it, then the metamodel of the applied modelling approach needs to have it, in order not to risk that the person finds the metamodel lacking.
A major threat to external validity is the limited number of subjects. Due to the small number of subjects and their high degree of homogeneity with respect to background, it is difficult to analyse and draw conclusions about, e.g., potential differences in opinions between teachers and developers. Some subjects needed more time and completed their survey offline, which may introduce variation in how carefully the responses were considered.
%The survey was conducted in conjunction with a seminar and demo of reqT.
%In order to avoid any positive bias, the survey was held prior to the seminar and demo. This in turn may introduce a threat of limited understanding of reqT.

{\bf Further work.} When developing a metamodel it is interesting not just to ask whether the concepts to include are essential, but also whether the set of concepts is complete.
If some essential concept is missing from some stakeholder's viewpoint, then the metamodel is not sufficient. With more subjects participating in the presented RE metamodel survey, the analysis of answers to further questions on alternative terms and missing concepts will be enabled and beneficial to the further development of a comprehensive and complete, but not overloaded, RE metamodel. %Further directions of research include (1) incorporation of constraints on models for support of prioritization and release planning \cite{Regnell2011}, (2) more elaborate semantic checks to better guide requirements modelers, and (3) graphical visualization of requirements graph models. (4) Natural Language Processing technology including e.g. ambiguity risk detection may be interesting to combine with reqT. (5) It is also important to further investigate the pedagogic advantages and limitations of the approach. %A major objective of this research preview paper is to expose the latest version of reqT to the community of RE scholars and to invite discussions and contributions. %\vspace{-0.2cm} \vspace{0.2cm} {\small\noindent\textbf{Acknowledgments.} Thanks to Tobias Kaufmann and Klaus Pohl for contributions to the variability model in Section 2. This work is partly funded by VINNOVA within the EASE project. } \bibliography{refs} \bibliographystyle{splncs03} %%%%%%%%%%%%%%% APPENDIX WITH DEFINITIONS \clearpage %\hspace{-0.9cm} \section*{Appendix A: Definitions of Metamodel Concepts of reqT v3.0} \begin{minipage}{1\linewidth} \begin{minipage}{0.30\textwidth} \begin{table}[H] %\centering \vspace{-1.0cm} \fontsize{6}{6.5}\selectfont %\caption{.}%: Entities, Attributes and Relations.} \begin{tabular}{l p{4.6cm}} \textbf{\textit{Entity}} & \textbf{\textit{Definition}} \\ \input{entity-defs.tex} \end{tabular} \end{table} \end{minipage} \hspace{4em plus 1fill} \begin{minipage}{0.46\textwidth} \begin{table}[H] %\centering \fontsize{6}{6.5}\selectfont %\caption{Definitions of attributes and relations.} \vspace{-1.0cm} \label{table-entities} \begin{tabular}{|l p{4.1cm}} \textbf{\textit{Attribute}} & \textbf{\textit{Definition}} \\ \input{attribute-defs.tex} & \\ \textbf{\textit{Relation}} & \textbf{\textit{Definition}} \\ \input{relation-defs.tex} \end{tabular} \end{table} \end{minipage} \end{minipage} \end{document}
{ "alphanum_fraction": 0.7692626561, "avg_line_length": 64.33, "ext": "tex", "hexsha": "2d9066b4645d8ad3f7b7d14a235df7e607a034a0", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-02-13T18:34:18.000Z", "max_forks_repo_forks_event_min_datetime": "2020-02-13T18:34:18.000Z", "max_forks_repo_head_hexsha": "d4f859ca3087858f02aead537df78551c67a18ec", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "bjornregnell/reqT-survey", "max_forks_repo_path": "paper/proceedings/paper-refsq2016-final.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d4f859ca3087858f02aead537df78551c67a18ec", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "bjornregnell/reqT-survey", "max_issues_repo_path": "paper/proceedings/paper-refsq2016-final.tex", "max_line_length": 1395, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d4f859ca3087858f02aead537df78551c67a18ec", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "bjornregnell/reqT-survey", "max_stars_repo_path": "paper/proceedings/paper-refsq2016-final.tex", "max_stars_repo_stars_event_max_datetime": "2015-10-02T13:30:07.000Z", "max_stars_repo_stars_event_min_datetime": "2015-10-02T13:30:07.000Z", "num_tokens": 4943, "size": 19299 }
\section{Compact and Locally Compact Spaces}
\subsection{Compact Spaces}

\paragraph{2.} I further assume that $X$ is Hausdorff.
\begin{proof}
  Assume, to obtain a contradiction, that $F_n = K_n\setminus O \ne\varnothing$ for every $n$.
  Since $F_n \supset F_{n+1}$, $\{F_n\}$ is a collection of closed subsets of the compact set
  $K_1$ with the finite intersection property.
  Hence, $\bigcap F_n$ is nonempty, contradicting $\bigcap K_n \subset O$.
\end{proof}

\paragraph{3.}
\begin{proof}
  Let $F$ be a closed set and $x \notin F$.
  Since $X$ is Hausdorff, for every $y \in F$, there are two disjoint open sets $U_y$ and
  $O_y$ s.t. $y \in U_y$ and $x \in O_y$.
  Since $X$ is compact, so is $F$. Note that $\{U_y\}$ is an open cover for $F$.
  Hence, it has a finite subcover $\{U_{y_i}\}_{i=1}^n$.
  Let $U = \bigcup U_{y_i}$ and $O = \bigcap O_{y_i}$.
  Clearly, they are disjoint open sets s.t. $F \subset U$ and $x \in O$.
\end{proof}

\paragraph{6.}
\begin{proof}
  Let $\vep > 0$ be fixed.
  Since $\mcal{F}$ is equicontinuous, for every $x \in X$ there is a neighborhood $O_x$ s.t.
  $\sigma(f(x), f(x\hp)) < \vep$ for every $x\hp \in O_x$ and every $f \in \mcal{F}$.
  Clearly, $\{O_x\}$ is an open cover and since $X$ is compact, it has a finite subcover
  $\{O_{x_i}\}_{i=1}^m$.
  For each $x_i$, since $f_n(x_i) \to f(x_i)$, there is an integer $N_i$ s.t. for every
  $n > N_i$, $\sigma(f_n(x_i), f(x_i)) < \vep$.
  Moreover, since $\sigma(f_n(x_i), f_n(x)) < \vep$ holds for all $n$ and all $x \in O_{x_i}$,
  letting $n \to \infty$ gives $\sigma(f(x_i), f(x)) \le \vep$ for all $x \in O_{x_i}$.
  Let $N = \max_i N_i$.
  Hence, for every $x \in O_{x_i}$ and $n > N$,
  \[
    \sigma(f_n(x), f(x)) \le \sigma(f_n(x), f_n(x_i)) + \sigma(f_n(x_i), f(x_i))
    + \sigma(f(x_i), f(x)) < 3\vep.
  \]
  Since every $x \in X$ lies in some $O_{x_i}$, this gives the desired uniform convergence.
\end{proof}

\subsection{Countable Compactness and the Bolzano-Weierstrass Property}

\paragraph{9.}
\begin{proof} $\,$\par
  (a) It follows immediately from the definition and Problem 8.20.

  (b) For every $\alpha \in \mathbb{R}$,
  \[
    f + g < \alpha \quad\text{iff}\quad f < \alpha - g \quad\text{iff}\quad
    \exists q\in \mathbb{Q} \text{ s.t. } f < q,\, q < \alpha - g.
  \]
  Hence,
  \[
    \{f + g < \alpha\} = \bigcup_{q \in \mathbb{Q}} \{f < q\} \cap \{g < \alpha - q\},
  \]
  which is open. Thus, $f + g$ is also upper semicontinuous.

  (c) Since $(f_n)$ is a decreasing sequence, we can write $f(x) = \inf_n f_n(x)$.
  Hence, for every $\alpha \in \mathbb{R}$, $f(x) < \alpha$ iff there exists some $n$ s.t.
  $f_n(x) < \alpha$. Hence,
  \[
    \{f < \alpha\} = \bigcup_n \{f_n < \alpha\},
  \]
  which is open. Thus, $f$ is also upper semicontinuous.

  (d) Note that $(f_n - f)$ is a decreasing sequence of upper semicontinuous functions that
  converges to $0$. Hence, by Dini's theorem, the convergence is uniform.

  (e) Suppose that $x \in \{f < \alpha\}$. Let $\vep$ be a positive real number.
  Since $f_n \to f$ uniformly, there is an integer $n$ s.t. $|f(y) - f_n(y)| < \vep$ for all
  $y \in X$.
  Meanwhile, since $f_n$ is upper semicontinuous, there is a $\delta > 0$ s.t. for every $y$
  in the $\delta$-ball $B$ centered at $x$, $f_n(y) < f_n(x) + \vep$.
  Hence, for every $y \in B$,
  \begin{align*}
    f(y) = f(y) - f_n(y) + f_n(y) - f_n(x) + f_n(x) - f(x) + f(x) \le 3\vep + f(x).
  \end{align*}
  Thus, choosing $\vep$ small enough that $f(x) + 3\vep < \alpha$, we have
  $B \subset \{f < \alpha\}$.
  Namely, $\{f < \alpha\}$ is open, whence $f$ is upper semicontinuous.
\end{proof}

\paragraph{10.}
\begin{proof} $\,$\par
  (i. $\Rightarrow$ iii.) Let $f$ be a bounded continuous real-valued function and
  $M := \sup f < \infty$. Let $F_n = \{f \ge M - 1/n\}$. Since $f$ is continuous, $F_n$ is closed.
Note that $(F_n)$ is a countable family of closed sets with the finite intersection property.
  Hence, $\bigcap F_n = \{f \ge M\}$ is nonempty as $X$ is countably compact.
  Namely, the maximum can be attained.

  (iii. $\Rightarrow$ ii.) Let $f$ be a continuous function and assume, to obtain a
  contradiction, that $f$ is unbounded.
  Then the function $-1/(|f| + 1)$ is a continuous bounded function whose maximum cannot be
  attained. Contradiction.

  (ii. $\Rightarrow$ i.) Assume, to obtain a contradiction, that $X$ does not have the
  Bolzano-Weierstrass property, that is, there is a sequence $(x_n)$ in $X$ that has no
  cluster point. Then $F := \{x_n\}_{n=1}^\infty$ is closed.
  Define $f: F \to \mathbb{R}$ by $f(x_n) = n$.
  Note that $f$ is continuous on $F$ and by Tietze's extension theorem, it can be continuously
  extended to $X$. However, $f$ is unbounded, contradicting (ii.).
  Thus $X$ has the Bolzano-Weierstrass property and, therefore, is countably compact.
\end{proof}

\subsection{Products of Compact Spaces}

\paragraph{13.}
\begin{proof}
  Let $E$ be a closed and bounded set in $\mathbb{R}^n$.
  Then it is contained in some closed cube $K = \prod_{i=1}^n[a_i, b_i]$.
  By Tychonoff's theorem, $K$ is compact.
  Thus, $E$, a closed subset of the compact set $K$, is also compact.
\end{proof}

\paragraph{15.}
\begin{proof}
  Let $X = \prod_{n=1}^\infty X_n$ be the product of the sequentially compact spaces $(X_n)$.
  Let $(x_n)$ be a sequence in $X$.
  Since $X_1$ is sequentially compact, we may choose a subsequence $(x_n^1)$ of $(x_n)$ s.t.
  the first coordinate of $(x_n^1)$ converges to some $x^1$.
  Similarly, from $(x_n^1)$ we may choose a subsequence $(x_n^2)$ whose second coordinate
  converges to some $x^2$.
  Proceeding inductively, we get a sequence of subsequences.
  Finally, consider the diagonal sequence $(x_n^n)$.
  Since each coordinate converges and we are dealing with the product topology, $x_n^n$
  converges to $(x^1, x^2, \dots)$.
\end{proof}

\subsection{Locally Compact Spaces}

\paragraph{18.}
\begin{proof}
  For every $x \in K$, since $X$ is locally compact, there is an open set $O_x$ with
  $\cl O_x$ compact.
  Note that $\{O_x\}_{x\in K}$ is an open cover for the compact set $K$.
  Hence, it has a finite subcover $\{O_{x_i}\}_{i=1}^n$.
  Then, $O = \bigcup_{i=1}^n O_{x_i}$ is an open set containing $K$ whose closure is compact.
\end{proof}

\paragraph{19.}
\begin{proof} $\,$\par
  (a) Since $X$ is a locally compact Hausdorff space and $K$ is compact, there exists an open
  set $V$ with compact closure s.t. $V\supset K$.
  Since $\cl V$ is compact, it is normal.
  Therefore, by Urysohn's lemma, there is a continuous function $f: \cl V \to [0, 1]$ s.t.
  $f\equiv 1$ on $K$ and $f\equiv 0$ on $\partial V$.
  Extend $f$ to $X$ by setting $f \equiv 0$ outside $\cl V$.
  Then $f$ is continuous and $f \equiv 1$ on $K$.
  Meanwhile, since $\supp f\subset \cl V$, the support is also compact.
\end{proof}

\paragraph{24.} Assume that $X$ is also Hausdorff.
\begin{proof} $\,$\par
  (a) It is clear that if $F$ is closed, so is $F\cap K$ for each closed compact $K$.
  For the reverse, we show that $F^c$ is open. Let $x \notin F$.
  Since $X$ is locally compact, there is a neighborhood $U$ of $x$ whose closure is compact.
  If $F\cap\cl U = \varnothing$, then we are done.
  If $F\cap \cl U \ne \varnothing$, then by the hypothesis, it is closed.
  Therefore, $U\setminus(F\cap\cl U)$ is again an open neighborhood of $x$.
  Since $X$ is a locally compact Hausdorff space, we can find an open neighborhood $V$ of $x$
  with $\cl V \subset U\setminus(F\cap\cl U)$; in particular, $V$ is disjoint from $F$.
  In both cases, $x$ has a neighborhood disjoint from $F$, so $F^c$ is open.
(b) Suppose that for each closed compact $K$, $F\cap K$ is closed.
  For every $x \in \cl F$, since $X$ is first-countable, there exists a sequence
  $(x_n) \subset F$ which converges to $x$.
  Then $E := \{x\}\cup\{x_n\}_{n=1}^\infty$ is closed and compact.
  Thus, by the hypothesis, $F\cap E$ is also closed; since the points $x_n \in F\cap E$
  converge to $x$, it follows that $x \in F$. Hence, $F$ is closed.
\end{proof}

\paragraph{26.} Assume that $X$ is Hausdorff.
\begin{proof}
  Let $x$ be an arbitrary point in $X$ and $V$ any neighborhood of $x$.
  Since $X$ is a locally compact Hausdorff space, we may choose an open neighborhood $U_1$ of
  $x$ whose closure is compact and contained in $V$.
  Since $O_1$ is dense, $U_1\cap O_1$ is a nonempty open set.
  Then, choose a nonempty open set $U_2$ s.t. $\cl U_2$ is compact and
  $\cl U_2 \subset U_1\cap O_1$.
  Proceeding inductively, we get a sequence $(U_n)$ of nonempty open sets s.t. $\cl U_n$ is
  compact and $\cl U_{n+1} \subset O_n\cap U_n$.
  Since $(\cl U_n)$ is a nested sequence of nonempty compact sets, $\bigcap \cl U_n$ is
  nonempty. Choose $x_* \in \bigcap\cl U_n$.
  For every $O_n$, $x_* \in \cl U_{n+1} \subset O_n$; moreover, $x_* \in \cl U_1 \subset V$.
  Thus, $x_* \in V\cap\bigcap_{n=1}^\infty O_n$.
  Namely, $\bigcap_{n=1}^\infty O_n$ is dense.
\end{proof}

\paragraph{29.}
\begin{proof} $\,$\par
  (a) Let $F$ be a closed subset of a locally compact space $X$.
  For every $x\in F\subset X$, there is a neighborhood $U$ of $x$ whose closure is compact
  in $X$.
  Then $\cl U\cap F$ is compact, being a closed subset of a compact set, and $U\cap F$ is a
  neighborhood of $x$ in $F$ whose closure in $F$ is contained in $\cl U\cap F$ and hence
  compact.
  Thus, $F$ is locally compact.

  (b) Let $O$ be an open subset of a locally compact Hausdorff space $X$.
  For every $x\in O \subset X$, there is a neighborhood $U$ of $x$ whose closure is compact
  in $X$ and contained by $O$.
  Note that $\cl U$ is also compact in $O$. Thus, $O$ is locally compact.
\end{proof}
{ "alphanum_fraction": 0.6381909548, "avg_line_length": 40.8660714286, "ext": "tex", "hexsha": "6cfad6276ac50138c5127aa9b0e22bcfbb034ff2", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-02-28T00:05:28.000Z", "max_forks_repo_forks_event_min_datetime": "2020-02-28T00:05:28.000Z", "max_forks_repo_head_hexsha": "4e33274fe1ed9e46fd0e6671c57cb589704939bd", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Engineev/solutions", "max_forks_repo_path": "real_analysis_3rd/ch9_compact_and_locally_compact_spaces.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4e33274fe1ed9e46fd0e6671c57cb589704939bd", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Engineev/solutions", "max_issues_repo_path": "real_analysis_3rd/ch9_compact_and_locally_compact_spaces.tex", "max_line_length": 79, "max_stars_count": 9, "max_stars_repo_head_hexsha": "4e33274fe1ed9e46fd0e6671c57cb589704939bd", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Engineev/solutions", "max_stars_repo_path": "real_analysis_3rd/ch9_compact_and_locally_compact_spaces.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-25T17:37:17.000Z", "max_stars_repo_stars_event_min_datetime": "2018-07-13T08:36:44.000Z", "num_tokens": 3284, "size": 9154 }
\chapter{Methods and Constructors}\label{refmethcon}
\index{Methods,}
\index{Methods,invocation of}
\index{Parentheses,in method calls}
\index{Comma,in method calls}
\index{,}
Instructions in \nr{} are grouped into \emph{methods}, which are named routines that always belong to (are part of) a \emph{class}.

\index{References,to methods}
Methods are invoked by being referenced in a term (see page \pageref{refterms}) , which may be part of an expression or be a clause in its own right (a method call instruction). In either case, the syntax used for a method invocation is:
\begin{shaded}
\begin{alltt}
\emph{symbol}([\emph{expression}[,\emph{expression}]...])
\end{alltt}
\end{shaded}
\index{Arguments,of methods}
\index{Arguments,passing to methods}
The \emph{symbol}, which must be non-numeric, is called the \emph{name} of the method. It is important to note that the name of the method must be followed immediately by the "\keyword{(}", with \textbf{no} blank in between, or the construct will not be recognized as a method call (a \emph{blank operator} would be assumed at that point instead).

The \emph{expression}s (separated by commas) between the parentheses are called the \emph{arguments} to the method. Each argument expression may include further method calls. The argument expressions are evaluated in turn from left to right and the resulting values are then passed to the method (the procedure for locating the method is described below).

The method then executes some algorithm (usually dependent on any arguments passed, though arguments are not mandatory) and will eventually return a value. This value is then included in the original expression just as though the entire method reference had been replaced by the name of a variable whose value is that returned data.

For example, the \textbf{substr} method is provided for strings of type \textbf{R\textsc{exx}} and could be used as:
\begin{lstlisting}
c='abcdefghijk'
a=c.substr(3,7)   /* would set A to "cdefghi" */
\end{lstlisting}
Here, the value of the variable \textbf{c} is a string (of type \textbf{R\textsc{exx}}). The \textbf{substr} (substring) method of the \textbf{R\textsc{exx}} class is then invoked, with arguments \textbf{3} and \textbf{7}, on the value referred to by \textbf{c}. That is, the properties available to (the context of) the \textbf{substr} method are the properties constructed from the literal string \textbf{'abcdefghijk'}. The method returns the substring of the value, starting at the third character and of length seven characters.

\index{Arguments,passing to methods}
A method may have a variable number of arguments: only those required need be specified. For example, \textbf{'ABCDEF'.substr(4)} would return the string \textbf{'DEF'}, as the \textbf{substr} method will assume that the remainder of the string is to be returned if no length is provided.

Method invocations that take no arguments may omit the (empty) parentheses in circumstances where this would not be ambiguous. See the section on \emph{Terms} (see page \pageref{refterms}) for details.

\textbf{Implementation minimum:} At least 10 argument expressions should be allowed in a method call.
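As a further, non-normative illustration of the rules in this section, the \textbf{substr} method above can be combined with the \textbf{reverse} method of the \textbf{R\textsc{exx}} class (used again in the example at the end of this chapter):
\begin{lstlisting}
c='abcdefghijk'
x=c.substr(3,7).reverse()  /* method calls may be chained; would set X to "ihgfedc" */
y=c.reverse                /* the empty parentheses may be omitted where unambiguous */
\end{lstlisting}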
\section{Method call instructions}\label{refmcalli} \index{Method call instructions,} \index{Subroutines,calling} When a clause in a method consists of just a term, and the final part of the term is a method invocation, the clause is a \emph{method call instruction}: \begin{shaded} \begin{alltt} \emph{symbol}([\emph{expression}[,\emph{expression}]...]); \end{alltt} \end{shaded} The method is being called as a subroutine of the current method, and any returned value is discarded. In this case (and in this case only), the method invoked need not return a value (that is, the \keyword{return} instruction that ends it need not specify an expression). \footnote{ A method call instruction is equivalent to the \keyword{call} instruction of other languages, except that no keyword is required. } A method call instruction that is the first instruction in a constructor (see below) can only invoke the special constructors \textbf{this} and \textbf{super}. \section{Method resolution (search order)}\label{refsmeth} \index{Resolution of methods,} \index{Search order,for methods} \index{Methods,resolution of} Method resolution in \nr{} proceeds as follows: \begin{itemize} \item If the method invocation is the first part (stub) of a term, then: \begin{enumerate} \item The current class is searched for the method (see below for details of searching). \item If not found in the current class, then the superclasses of the current class are searched, starting with the class that the current class extends. \item If still not found, then the classes listed in the \keyword{uses} phrase of the \keyword{class} instruction are searched for the method, which in this case must be a static method (see page \pageref{refstatmet}) . Each class from the list is searched for the method, and then its superclasses are searched upwards from the class; this process is repeated for each of the classes, in the order specified in the list. \item If still not found, the method invocation must be a constructor (see below) and so the method name, which may be qualified by a package name, should match the name of a primitive type or a known class (type). The specified class is then searched for a constructor that matches the method invocation. \end{enumerate} \item If the method invocation is not the first part of the term, then the evaluation of the parts of the term to the left of the method invocation will have resulted in a value (or just a type), which will have a known type (the continuation type). Then: \begin{enumerate} \item The class that defines the continuation type is searched for the method (see below for details of searching). \item If not found in that class, then the superclasses of that class are searched, starting with the class that that class extends. \end{enumerate} If the search did not find a method, an error is reported. If the search did find a method, that is the method which is invoked, except in one case: \begin{itemize} \item If the evaluation so far has resulted in a value (an object), then that value may have a type which is a subclass of the continuation type. If, within that subclass, there is a method that exactly overrides (see page \pageref{refoverrid}) the method that was found in the search, then the method in the subclass is invoked. \end{itemize} This case occurs when an object is earlier assigned to a variable of a type which is a superclass of the type of the object. 
This type simplification hides the real type of the object from the language processor, though it can be determined when the program is executed. \end{itemize} \index{Methods,searching for} \index{Matching methods,} Searching for a method in a class proceeds as follows: \begin{enumerate} \item Candidate methods in the class are selected. To be a candidate method: \begin{itemize} \item the method must have the same name as the method invocation (independent of the case (see page \pageref{refcase}) of the letters of the name) \item the method must have the same number of arguments as the method invocation (or more arguments, provided that the remainder are shown as optional in the method definition) \item it must be possible to assign the result of each argument expression to the type of the corresponding argument in the method definition (if strict type checking is in effect, the types must match exactly). \end{itemize} \item If there are no candidate methods then the search is complete; the method was not found. \item If there is just one candidate method, that method is used; the search is complete. \item If there is more than one candidate method, the sum of the costs of the conversions (see page \pageref{refcosts}) from the type of each argument expression to the type of the corresponding argument defined for the method is computed for each candidate method. \item The costs of those candidates (if any) whose names match the method invocation exactly, including in case, are compared; if one has a lower cost than all others, that method is used and the search is complete. \item The costs of all the candidates are compared; if one has a lower cost than all others, that method is used and the search is complete. \item If there remain two or more candidates with the same minimum cost, the method invocation is ambiguous, and an error is reported. \end{enumerate} \begin{shaded}\noindent \textbf{Note: }\marginnote{\color{gray}3.02}When a method that is not an exact match to a call is found in a class, superclasses of that class are also searched for methods which may have a lower-cost of conversion and the method with the lowest cost, hence the closest match, is used to resolve the search. \end{shaded}\indent The current method of method resolution has been chosen to maximize interoperability with Java-language programs.\footnote{This in contrast to all versions before 3.02, where this rule was: \emph{When a method is found in a class, superclasses of that class are not searched for methods, even though a lower-cost method may exist in a superclass.}. The latter was chosen to guard the program optimally against changes in superclasses.} \section{Method overriding}\label{refoverrid} \index{Methods,overriding} \index{Overriding methods,} A method is said to \emph{exactly override} a method in another class if \begin{enumerate} \item the method in the other class has the same name as the current method \item the method in the other class is not \keyword{private} \item the other class is a superclass of the current class, or is a class that the current class implements (or is a superclass of one of those classes). \item the number and type of the arguments of the method in the other class exactly match the number and type of the arguments of the current method (where subsets are also checked, if either method has optional arguments). 
\end{enumerate} For example, the \textbf{R\textsc{exx}} class includes a \textbf{substr} (see page \pageref{refsubstr}) method, which takes from one to three strings of type \textbf{R\textsc{exx}}. In the class: \begin{lstlisting} class mystring extends Rexx method substr(n=Rexx, length=Rexx) return this.reverse.substr(n, length) method substr(n=int, length=int) return this.reverse.substr(Rexx n, Rexx length) \end{lstlisting} the first method exactly overrides the \textbf{substr} method in the \textbf{R\textsc{exx}} class, but the second does not, because the types of the arguments do not match. A method that exactly overrides a method is assumed to be an extension of the overridden method, to be used in the same way. For such a method, the following rules apply: \begin{itemize} \item It must return a value of the same type\footnote{observing what is stated in the next paragraph} as the overridden method (or none, if the overridden method returns none). \item It must be at least as visible as the overridden routine. For example, if the overridden routine is \keyword{public} then it must also be \keyword{public}. \item If the overridden method is \keyword{static} then it must also be \keyword{static}. \item If the overridden method is not \keyword{static} then it must not be \keyword{static}. \item If the underlying implementation checks exceptions (see page \pageref{refexcep}) , only those checked exceptions that are signalled by the overridden method may be left uncaught in the current method. \end{itemize} \section{Return Types} \index{Return Types} \nr{} allows covariant\marginnote{\color{gray}3.02} return types such as have been allowed in Java since the version 1.5 release. Prior to Java 1.5, in order for a method to override or implement a method from another class, the return type of the methods had to be an exact match. Since the Java 1.5 release, methods which override a superclass method or implement an interface class method are allowed to have a return type which is a subclass of the return type of the method replaced or implemented. An exact match is no longer required. \section{Constructor methods}\label{refcons} \index{Constructors,} \index{,} \index{Methods,constructor} \index{Objects,constructing} \index{Instance, of a class,} \index{Class,instances of} As described above, methods are usually invoked in the context of an existing value or type. A special kind of method, called a constructor method, is used to actually create a value of a given type (an object). Constructor methods always have the same short name as the class in which they are found, and construct and return a value of the type defined by that class (sometimes known as an \emph{instance} of that class). If the class is part of a package, then the constructor call may be qualified by the package name. \textbf{Example constructors:} \begin{alltt} File('Dan.yr.Ogof') java.io.File('Speleogroup.letter') R\textsc{exx}('some words') netrexx.lang.R\textsc{exx}(1) \end{alltt} \index{Constructors,default} There will always be at least one constructor if values can be created for a class. \nr{} will add a default public constructor that takes no arguments if no constructors are provided, unless the components of the class are all static or constant, or the class is an interface class. All constructors follow the same rules as other methods, and in addition: \begin{enumerate} \item Constructor calls always include parentheses in the syntax, even if no arguments are supplied. 
This distinguishes them from a reference to the type of the same name. \item Constructors must call a constructor of their superclass (the class they extend) before they carry out any initialization of their own. This is so any initialization carried out by the superclass takes place, and at the appropriate moment. Only after this call is complete can they make any reference to the special words \textbf{this} or \textbf{super} (see page \pageref{refspecial}) . Therefore, the first instruction in a constructor must be either a call to the superclass, using the special constructor \textbf{super()} (with optional arguments), or a call to to another constructor in the same class, using the special constructor \textbf{this()} (with optional arguments). In the latter case, eventually a constructor that explicitly calls \textbf{super()} will be invoked and the chain of local constructor calls ends. As a convenience, \nr{} will add a default call to \textbf{super()}, with no arguments, if the first instruction in a constructor is not a call to \textbf{this()} or \textbf{super()}. \item The properties of a constructed value are initialized, in the order given in the program, after the call to \textbf{super()} (whether implicit or explicit). \item By definition, constructors create a value (object) whose type is defined by the current class, and then return that value for use. Therefore, the \keyword{returns} keyword on the \keyword{method} instruction (see page \pageref{refmethod}) that introduces the constructor is optional (if given, the type specified must be that of the class). Similarly, the only possible forms of the \keyword{return} instruction used in a constructor are either "\textbf{return this;}", which returns the value that has just been constructed, or just "\textbf{return;}", in which case, the "\textbf{this}" is assumed (this form will be assumed at the end of a method, as usual, if necessary). \end{enumerate} \index{Example,of constructors} Here is an example of a class with two constructors, showing the use of \textbf{this()} and \textbf{super()}, and taking advantage of some of the assumptions: \begin{lstlisting}[label=mychars,caption=MyChars.nrx] class MyChars extends SomeClass properties private /* the data 'in' the object */ value=char[] /* construct the object from a char array */ method MyChars(array=char[]) /* initialize superclass */ super() value=array -- save the value /* construct the object from a String */ method MyChars(s=String) /* convert to char[] and use the above */ this(s.toCharArray()) \end{lstlisting} Objects of type \textbf{MyChars} could then be created thus: \begin{alltt} myvar=MyChars("From a string") \end{alltt} or by using an argument that has type \textbf{char[]}.
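As an additional, purely illustrative example, the second argument form mentioned above could be exercised by first obtaining a \textbf{char[]} array from a \textbf{String} (using the same \textbf{toCharArray} method that the second constructor uses internally) and then passing it to the first constructor:
\begin{alltt}
chars=String("From a char array").toCharArray()
myvar=MyChars(chars)
\end{alltt}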
{ "alphanum_fraction": 0.7743133792, "avg_line_length": 43.6728723404, "ext": "tex", "hexsha": "130604c89ce7249fb005c701d77b684053139f5e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ec27b6e3f908fbc50cb6dc54696daea68ae59103", "max_forks_repo_licenses": [ "ICU" ], "max_forks_repo_name": "RexxLA/NetRexx", "max_forks_repo_path": "documentation/nrl/nr3methc.tex", "max_issues_count": 25, "max_issues_repo_head_hexsha": "ec27b6e3f908fbc50cb6dc54696daea68ae59103", "max_issues_repo_issues_event_max_datetime": "2022-02-01T16:14:50.000Z", "max_issues_repo_issues_event_min_datetime": "2022-01-24T12:13:53.000Z", "max_issues_repo_licenses": [ "ICU" ], "max_issues_repo_name": "RexxLA/NetRexx", "max_issues_repo_path": "documentation/nrl/nr3methc.tex", "max_line_length": 541, "max_stars_count": null, "max_stars_repo_head_hexsha": "ec27b6e3f908fbc50cb6dc54696daea68ae59103", "max_stars_repo_licenses": [ "ICU" ], "max_stars_repo_name": "RexxLA/NetRexx", "max_stars_repo_path": "documentation/nrl/nr3methc.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4030, "size": 16421 }
\documentclass[14pt]{article} \usepackage{fancyhdr} \usepackage{extramarks} \usepackage{amsmath} \usepackage{amsthm} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{tikz} \usepackage[plain]{algorithm} \usepackage{algpseudocode} \usepackage{enumitem} \usepackage{relsize} \usepackage{scrextend} \usepackage{listings} \usepackage{xcolor} \usepackage{textcomp} \usetikzlibrary{automata,positioning} % % C++ Code Listing Configuration % \definecolor{listinggray}{gray}{0.9} \definecolor{lbcolor}{rgb}{0.9,0.9,0.9} \lstset{ backgroundcolor=\color{lbcolor}, tabsize=4, % rulecolor=, language=[GNU]C++, basicstyle=\scriptsize, upquote=true, aboveskip={1.5\baselineskip}, columns=fixed, showstringspaces=false, extendedchars=false, breaklines=true, prebreak = \raisebox{0ex}[0ex][0ex]{\ensuremath{\hookleftarrow}}, frame=single, numbers=left, showtabs=false, showspaces=false, showstringspaces=false, identifierstyle=\ttfamily, keywordstyle=\color[rgb]{0,0,1}, commentstyle=\color[rgb]{0.026,0.112,0.095}, stringstyle=\color[rgb]{0.627,0.126,0.941}, numberstyle=\color[rgb]{0.205, 0.142, 0.73}, % \lstdefinestyle{C++}{language=C++,style=numbers}’. } \lstset{ backgroundcolor=\color{lbcolor}, tabsize=4, language=C++, captionpos=b, tabsize=3, frame=lines, numbers=left, numberstyle=\tiny, numbersep=5pt, breaklines=true, showstringspaces=false, basicstyle=\footnotesize, % identifierstyle=\color{magenta}, keywordstyle=\color[rgb]{0,0,1}, commentstyle=\color{Darkgreen}, stringstyle=\color{red} } % % Basic Document Settings % \topmargin=-0.45in \evensidemargin=0in \oddsidemargin=0in \textwidth=6.5in \textheight=9.0in \headsep=0.25in \linespread{1.1} \pagestyle{fancy} \lhead{\hmwkAuthorName} \chead{\hmwkClass\ (\hmwkClassInstructor): \hmwkTitle} \rhead{\firstxmark} \lfoot{\lastxmark} \cfoot{\thepage} \renewcommand\headrulewidth{0.4pt} \renewcommand\footrulewidth{0.4pt} \setlength\parindent{0pt} % % Create Problem Sections % \newcommand{\enterProblemHeader}[1]{ \nobreak\extramarks{}{Problem \arabic{#1} continued on next page\ldots}\nobreak{} \nobreak\extramarks{Problem \arabic{#1} (continued)}{Problem \arabic{#1} continued on next page\ldots}\nobreak{} } \newcommand{\exitProblemHeader}[1]{ \nobreak\extramarks{Problem \arabic{#1} (continued)}{Problem \arabic{#1} continued on next page\ldots}\nobreak{} \stepcounter{#1} \nobreak\extramarks{Problem \arabic{#1}}{}\nobreak{} } \setcounter{secnumdepth}{0} \newcounter{partCounter} \newcounter{homeworkProblemCounter} \setcounter{homeworkProblemCounter}{1} \nobreak\extramarks{Problem \arabic{homeworkProblemCounter}}{}\nobreak{} % % Homework Problem Environment % % This environment takes an optional argument. When given, it will adjust the % problem counter. This is useful for when the problems given for your % assignment aren't sequential. See the last 3 problems of this template for an % example. % \newenvironment{homeworkProblem}[1][-1]{ \ifnum#1>0 \setcounter{homeworkProblemCounter}{#1} \fi \section{Problem \arabic{homeworkProblemCounter}} \setcounter{partCounter}{1} \enterProblemHeader{homeworkProblemCounter} }{ \exitProblemHeader{homeworkProblemCounter} } % % Homework Details % - Title % - Due date % - Class % - Section/Time % - Instructor % - Author % \newcommand{\hmwkTitle}{Homework\ \#5} \newcommand{\hmwkDueDate}{November 3rd, 2016} \newcommand{\hmwkDueTime}{2:30pm} \newcommand{\hmwkClass}{CS 477} \newcommand{\hmwkClassInstructor}{Monica Nicolescu} \newcommand{\hmwkAuthorName}{Matthew J. 
Berger}

%
%   Title Page
%
\title{
    \vspace{2in}
    \textmd{\textbf{\hmwkClass:\ \hmwkTitle}}\\
    \normalsize\vspace{0.1in}\small{Due\ on\ \hmwkDueDate\ at \hmwkDueTime}\\
    \vspace{0.1in}\large{\textit{\hmwkClassInstructor}}
    \vspace{3in}
}

\author{\textbf{\hmwkAuthorName}}
\date{}

\renewcommand{\part}[1]{\textbf{\large Part \Alph{partCounter}}\stepcounter{partCounter}\\}

%
% Various Helper Commands
%

% Useful for algorithms
\newcommand{\alg}[1]{\textsc{\bfseries \footnotesize #1}}

% For derivatives
\newcommand{\deriv}[1]{\frac{\mathrm{d}}{\mathrm{d}x} (#1)}

% For partial derivatives
\newcommand{\pderiv}[2]{\frac{\partial}{\partial #1} (#2)}

% Integral dx
\newcommand{\dx}{\mathrm{d}x}

% Alias for the Solution section header
\newcommand{\solution}{\textbf{\large Solution}}

% Alias for the Extra Credit section header
\newcommand{\extracredit}{\textbf{\large Extra Credit}}

% Alias for the Big-O Symbol
\newcommand{\bigo}{\mathcal{O}}

% Probability commands: Expectation, Variance, Covariance, Bias
\newcommand{\E}{\mathrm{E}}
\newcommand{\Var}{\mathrm{Var}}
\newcommand{\Cov}{\mathrm{Cov}}
\newcommand{\Bias}{\mathrm{Bias}}

\begin{document}

\maketitle
\pagebreak

\begin{homeworkProblem}
\begin{itemize}
	\item[\textbf{1.)}] \textbf{(U \& G Required)[100 points]} Suppose you are consulting for a company that manufactures PC equipment and ships it to distributors all over the country. For each of the next $n$ weeks, they have a projected supply $s_i$ of equipment (measured in pounds), which has to be shipped by an air freight carrier. Each week's supply can be carried by one of two air freight companies, A or B.
	\begin{itemize}
		\item Company A charges a fixed rate r per pound, so it costs $r * s_i$ to ship a week's supply ($s_i$)
		\item Company B makes contracts for a fixed amount c per week, independent of the weight. However, contracts with company B must be made in blocks of four consecutive weeks at a time.
	\end{itemize}
	A schedule, for the PC company, is a choice of air freight company (A or B) for each of the n weeks with the restriction that company B, whenever it is chosen, must be chosen for blocks of four contiguous weeks at a time. The cost of the schedule is the total amount paid to companies A and B, according to the description above. You are asked to give a polynomial time algorithm that takes a sequence of supply values $s_1, s_2, \dots, s_n$ and returns a schedule of minimum cost. In order to achieve this, you need to answer the following questions:
	\begin{enumerate}[label=\alph*.)]
		\item\ [20 points] Determine and \textbf{prove} the optimal substructure of the problem and write a recursive formula of an optimal solution (i.e., define the variable that you wish to optimize and explain how a solution to computing it can be obtained from solutions to subproblems). \\
		\textbf{Submit}: the recursive formula, along with definitions and explanations on what is computed.\\
		\solution\\
		We'll call the algorithm $MINCOST(i)$. It computes the lowest possible cost of shipping the PC equipment to distributors for the first $i$ weeks. We'll also declare an algorithm called $OPTIMIZE(i,j)$ to find the company for the $j$-th week that would achieve $MINCOST(i)$.

		The best schedule for the first $i$ weeks is obtained either by choosing company A for the $i$-th week (a single week) or by choosing company B for the previous three weeks as well as the $i$-th one (weeks $i, i-1, i-2,$ and $i-3$). The optimal substructure of this problem can therefore be represented by the equations below:

		\item[(1)]\hfil \( MINCOST(i) = \min\{MINCOST(i-1) + r *s_i,\ MINCOST(i-4)+4c\} \) for $i \geq 4$

		However, for $i < 4$ we must choose company A for each week:

		\item[(2)]\hfil \( MINCOST(0) = 0\)

		\item[(3)]\hfil \( MINCOST(i) = MINCOST(i-1)+r*s_i\)
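		As a quick sanity check of the recurrence (not required by the assignment), evaluating it by hand on the values given at point (b) ($r = 1$, $c = 10$, so a four-week block costs $4c = 40$) gives $MINCOST(0),\dots,MINCOST(10) = 0, 11, 20, 29, 40, 51, 60, 69, 78, 87, 98$. Tracing back which term achieved each minimum yields the schedule A, A, A, B, B, B, B, A, A, A with total cost $98$, which an implementation of point (b) should reproduce.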
		\item\ [30 points] Write an algorithm that computes an optimal solution to this problem, based on the recurrence above. Implement your algorithm in C/C++ and run it on the following values:
		\begin{itemize}
			\item $r = 1$
			\item $c = 10$
			\item the sequence of $s_i$ values: 11, 9, 9, 12, 12, 12, 12, 9, 9, 11
		\end{itemize}
		\textbf{Submit}:
		\begin{itemize}
			\item A printed version of the algorithm (name your algorithm schedule.c or schedule.cpp)
			\item A printout of the table that contains the solutions to the subproblems, run on the values given above (print the entire table!)
		\end{itemize}
		\solution
\begin{lstlisting}
#include <algorithm>
#include <iostream>
#include <string>

#define NUM_WEEKS 10
#define COMPANY_A 0
#define COMPANY_B 1

int main()
{
    int r = 1;
    int c = 10;

    // Supplies are 1-indexed: s[i] is the projected supply for week i
    int s[NUM_WEEKS + 1] = { 0, 11, 9, 9, 12, 12, 12, 12, 9, 9, 11 };

    // minCost[i] = minimum cost of shipping the first i weeks
    int minCost[NUM_WEEKS + 1] = {0};

    // opt[i][j] = company used for week j in an optimal schedule of the first i weeks
    int opt[NUM_WEEKS + 1][NUM_WEEKS + 1] = {{0}};

    for(int i = 1; i <= NUM_WEEKS; i++)
    {
        if(i < 4)
        {
            // A four-week block does not fit yet, so company A is the only choice
            minCost[i] = minCost[i-1] + (r * s[i]);
            for(int j = 1; j < i; j++)
            {
                opt[i][j] = opt[i-1][j];
            }
            opt[i][i] = COMPANY_A;
        }
        else
        {
            int costA = minCost[i-1] + (r * s[i]); // ship week i with company A
            int costB = minCost[i-4] + (4 * c);    // ship weeks i-3..i as a company B block
            minCost[i] = std::min(costA, costB);

            if(costA < costB)
            {
                for(int j = 1; j < i; j++)
                {
                    opt[i][j] = opt[i-1][j];
                }
                opt[i][i] = COMPANY_A;
            }
            else
            {
                for(int j = 1; j <= i-4; j++)
                {
                    opt[i][j] = opt[i-4][j];
                }
                for(int x = 3; x >= 0; x--)
                {
                    opt[i][i-x] = COMPANY_B;
                }
            }
        }
    }

    // Table of subproblem solutions
    for(int i = 0; i <= NUM_WEEKS; i++)
    {
        std::cout << "minCost[" << i << "] = " << minCost[i] << std::endl;
    }

    // Optimal schedule, reconstructed from the last row of the auxiliary table
    for(int j = 1; j <= NUM_WEEKS; j++)
    {
        std::string choice = (opt[NUM_WEEKS][j] == COMPANY_A) ? "Company A" : "Company B";
        std::cout << "Week " << j << ": " << choice << std::endl;
    }

    return 0;
}
\end{lstlisting}

		\item\ [20 points] Update the algorithm you developed at point (b) to enable the reconstruction of the optimal solution, i.e., which company was used in an optimal solution for shipping. (Hint: use an auxiliary table like we did in the examples in class.) Include these updates in your algorithm implementation from point (b).\\
		\textbf{Submit}:
		\begin{itemize}
			\item A printed version of the algorithm (name your algorithm schedule\_1.c or schedule\_1.cpp).
			\item A printout of the values that you obtain in the table containing the additional information needed to reconstruct the optimal solution, run on the values given above (print the entire table!)
		\end{itemize}

		\item\ [30 points] Using the additional information computed at point (c), write an algorithm that outputs which company was used for shipping in the optimal schedule. Implement this algorithm in C/C++. \\
		\textbf{Submit}:
		\begin{itemize}
			\item A printed version of the algorithm (name your algorithm schedule\_2.c or schedule\_2.cpp).
			\item A printout of the \textbf{solution} to the problem, i.e., the optimal schedule. (e.g., A, A, B, A, B)
		\end{itemize}
	\end{enumerate}
	\end{itemize}
\end{homeworkProblem}

\end{document}
{ "alphanum_fraction": 0.6696939424, "avg_line_length": 31.0169014085, "ext": "tex", "hexsha": "5c182225fcf07fdc0518bd650f41e8220deb4d16", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c1a3e79ecffb87ae47acbad0ad2d83c0adbac868", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "matthewjberger/cs477Berger", "max_forks_repo_path": "Homework_5/homework5.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c1a3e79ecffb87ae47acbad0ad2d83c0adbac868", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "matthewjberger/cs477Berger", "max_issues_repo_path": "Homework_5/homework5.tex", "max_line_length": 529, "max_stars_count": 1, "max_stars_repo_head_hexsha": "c1a3e79ecffb87ae47acbad0ad2d83c0adbac868", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "matthewjberger/cs477Berger", "max_stars_repo_path": "Homework_5/homework5.tex", "max_stars_repo_stars_event_max_datetime": "2016-09-27T19:25:00.000Z", "max_stars_repo_stars_event_min_datetime": "2016-09-27T19:25:00.000Z", "num_tokens": 3313, "size": 11011 }
\documentclass[oneside]{article}
\usepackage[margin=1in]{geometry}
\usepackage{mathpazo,graphicx,hyperref}
\hypersetup{
    colorlinks = true, %Colours links instead of ugly boxes
    urlcolor = blue, %Colour for external hyperlinks
    linkcolor = blue, %Colour of internal links
    citecolor = red %Colour of citations
}

\begin{document}
\title{Milestone 2}
\author{Guillaume Labranche \and William Bain \and Si Mei Zhang}
\maketitle
\tableofcontents

\section{Design decisions}
\subsection{Typechecker}
In accordance with the instructions, our type checking is executed in a single pass without forward declaration.\footnote{See \url{https://mycourses2.mcgill.ca/d2l/le/161312/discussions/threads/258597/View}.} We are working in Java and we implemented the typechecker as a subclass of the \verb|DepthFirstAdapter| class provided by SableCC. This made it easy to traverse the AST without extra boilerplate. Since we did not have a reliable way of extending the AST classes generated by SableCC, we stored type information for the AST in a hash table mapping from nodes to Go types.\footnote{By extending I mean adding methods and properties to a class used by the parser (here probably \texttt{PExp}, the abstract expression production class), not just creating a subclass.}

In most cases it was sufficient to apply typechecks after the typechecker had recursed over the child nodes and typechecked them. There were two cases where more fine-grained control was necessary:
\begin{itemize}
\item In short assignment statements, the list of variables is implemented as a list of expressions for reasons having to do with the parser implementation. It was therefore necessary to stop the typechecker from typechecking the variables before they had been added to the symbol table.
\item To typecheck function declarations and if statements, it was necessary to open a scope after having typechecked some but not all of the child nodes.
\end{itemize}
In each of these special cases we overrode the \verb|case| method which controls the recursion over the child nodes.

A further implementation detail that is worth noting is our implementation of struct type declarations. To prevent code duplication, we treat struct fields as if they were variable declarations: we open a scope in the symbol table upon entering the struct and enter each field as if it were a variable. Then we pop the scope and use it to build the struct class.

\subsection{Symbol table}
Initially the symbol table was simply a scope with a reference to the parent scope. Traversing up and down the scopes meant overwriting the variable used to reference the scope, so we decided to make \texttt{SymbolTable} what its name suggests and have it take care of scoping up and down. It holds a double-ended queue (Java's \texttt{ArrayDeque}) to store the scopes. We have methods for different use cases: searching for an identifier through all scope levels, and searching only in the current scope. We have also added loggers to enable the \verb|-dumpsymtab| and \verb|-dumpsymtaball| CLI options.
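To make this design concrete, a minimal sketch of such a scope stack is shown below. The identifiers and method names are illustrative only and do not necessarily match our actual code base.
\begin{verbatim}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;

/** Illustrative sketch of a scope stack for the symbol table. */
class SymbolTable {
    // Each scope maps identifiers to the symbols declared in that scope.
    private final Deque<HashMap<String, Symbol>> scopes = new ArrayDeque<>();

    void pushScope() { scopes.push(new HashMap<String, Symbol>()); }
    void popScope()  { scopes.pop(); }

    // Used when declaring: redeclaration is only illegal in the current scope.
    Symbol lookupCurrentScope(String id) { return scopes.peek().get(id); }

    // Used when resolving references: search all scopes, innermost first.
    Symbol lookup(String id) {
        for (HashMap<String, Symbol> scope : scopes) {
            Symbol symbol = scope.get(id);
            if (symbol != null) return symbol;
        }
        return null;
    }

    void declare(String id, Symbol symbol) { scopes.peek().put(id, symbol); }
}

interface Symbol { String getId(); }
\end{verbatim}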
\item Simplicity: there should not be more classes or objects than are necessary. For instance, we did not want to use a class \texttt{AliasTypeSymbol(Symbol s)} to wrap every type alias entered in the symbol table.
\item DRY: it should not be necessary to implement functionality in multiple places.
\end{enumerate}

\begin{figure}
\includegraphics[width=\textwidth]{symbol_table_uml}
\caption{Class hierarchy for GoLite types and symbols (property and method listings are non-exhaustive)}
\label{fig:symbol_table_uml}
\end{figure}

These considerations led us to a number of particular decisions.

First, we place functions outside the type hierarchy. While in Go functions are first class citizens, with their type given by the function signature, in GoLite they are not. Therefore, instead of implementing functions as instances of \texttt{Variable}, which would potentially allow them to be used where objects of another type are required, we implemented a \texttt{Function} class which represents a particular function and stores both the function's identifier and its associated type information directly.

Second, we made \texttt{Symbol} an interface which is implemented by the \texttt{Variable}, \texttt{Function}, and \texttt{AliasType} classes. The difficulty here was that since \texttt{AliasType} is a subclass of \texttt{Type}, it could not also inherit from an abstract \texttt{Symbol} class. By making \texttt{Symbol} an interface, we allow \texttt{AliasType} instances into the symbol table directly without thereby making it possible to enter other types---for instance the \texttt{BuiltInType} which we use for \texttt{int}, \texttt{bool}, \texttt{alias}, and \texttt{rune}---illicitly. The downside of this design is that the \texttt{getId} and \texttt{setId} methods required by \texttt{Symbol} had to be implemented at least twice. However, this was sufficiently trivial that it did not change our decision. In fact, the duplication is so trivial that we did not even create a shared superclass for \texttt{Function} and \texttt{Variable} to reduce the duplication there.\footnote{We \emph{did}, however, create a \texttt{NamedType} class which is extended by both \texttt{BuiltInType} and \texttt{AliasType}. This class also implements the \texttt{isIdentical} and \texttt{getRepresentation} methods.}

A final decision worth noting is our use of the \texttt{VoidType} class. Since unfortunately Java does not make it possible to statically forbid null pointers, it is easy to introduce them accidentally. To help catch such cases, we use \texttt{VoidType} instead of \texttt{null} to represent the return type of a function that does not return a value.

\subsection{Typed pretty printer}
We extended the \verb|PrettyPrinter| class we implemented in Milestone 1 to have it print extra type information. By overriding the \verb|defaultOut| method, it was simple to have it look up and output the type of any node which is an instance of the \verb|PExp| (expression production) class as an inline comment.

\section{Scoping rules}
The scoping rules are in accordance with the GoLite specifications:
\begin{itemize}
\item In a given scope, it is legal to refer to an identifier that was previously declared in the scope or one of the parent scopes.
\item It is illegal to declare a new type, variable or function using an identifier that is already declared in the current scope; shadowing an identifier from an enclosing scope is allowed.
\item The following language structures create new scopes for themselves: \texttt{if}, \texttt{else}, \texttt{for}, \texttt{func}, \texttt{struct}, standalone block (\verb|{...}|). The if and else branches of the if statement also create their own scopes. \end{itemize} \section{Type check operations} In some cases there are multiple code paths which lead to the same error. For instance, the helper method \texttt{ensureUndeclared(TId id)} is called in a number of places to ensure that an identifier has not previously been declared in the current scope. While we have done fuzz testing, we have not created an exhaustive suite of failing programs to cover each possible path to the error in such cases. \subsection{General} \begin{description} \item[Redeclaration of variable in same scope:]\ \\ \verb|redeclaration_of_var_in_same_scope.go|\\ \verb|redeclare_of_formal_param_in_func.go|\\ \verb|redeclare_of_func_in_same_scope.go|\\ \verb|redeclare_of_type_in_same_scope.go|\\ \verb|redeclare_of_var_in_same_scope.go| \end{description} \subsection{Declaration} Few type checks were necessary beyond checks for identifier redeclaration. \subsubsection{Function declaration} \begin{description} \item[Function declares a return type but does not finish with a return:] \verb|func_with_returntype_missing_return.go|\footnote{Technically, this is a weeding error.} \end{description} \subsubsection{Variable declaration} \begin{description} \item[Declared type and expression type don't match:] \verb|type_var_spec_mismatch.go| \end{description} \subsection{Statements} \subsubsection{Assignment} \begin{description} \item[Assignment to a variable with different underlying basic type:] \verb|use_different_type_in_assignment.go| \end{description} \subsubsection{Op assignment} \begin{description} \item[Lvalue and RHS are of different types:] \verb|op_assign_type_mismatch.go| \item[Operands are of an invalid type:] \verb|op_assign_bool.go| \end{description} \subsubsection{Increment/decrement statement} \begin{description} \item[Lvalue is non-numeric:] \verb|postfix_non_numeric.go| \end{description} \subsubsection{Short variable declaration} \begin{description} \item[No new variable declared:] \verb|short_var_dec_no_var.go| \item[Variable redeclared with new type:] \verb|short_var_dec_redeclared_new_type.go| \end{description} \subsubsection{Print and println statements} \begin{description} \item[Can only print base types or their aliases:] \verb|print_invalid_type.go| \end{description} \subsubsection{Return statement} \begin{description} \item[Function without return type returns a value:] \verb|func_no_returntype_has_return.go| \item[Return type does not match declared type:] \verb|function_returns_wrong_type.go| \end{description} \subsubsection{If statement} \begin{description} \item[Condition is not a boolean:] \verb|if_expr_not_bool.go| \end{description} \subsubsection{Switch statement} \begin{description} \item[Switch expression type is not a bool:] \verb|switch_caseExpr_isnt_bool.go| \item[Alternative expression type mismatch:] \verb|switch_cases_mismatched_types.go| \item[Body fails to typecheck:] \verb|switch_body_fails_typecheck.go| \item[Alternative fails to typecheck:] \verb|switch_alternative_fails_typecheck.go| \end{description} \subsubsection{For statement} \begin{description} \item[Condition is not a boolean:] \ \\ \verb|forloop_3pt_expr_not_bool.go| \\ \verb|forloop_expr_not_bool.go| \end{description} \subsection{Expressions} \subsubsection{Variable reference} \begin{description} \item[Use of undefined 
variable:] \verb|identifier_undefined_in_symtab.go|, \end{description} \subsubsection{Array access} \begin{description} \item[Index is not an integer:]\ \\ \verb|indexing_index_not_int.go| \item[Indexed expression is not a slice or array:]\ \\ \verb|indexing_expr_not_slice_or_array.go| \end{description} \subsubsection{Field access} \begin{description} \item[Field not in struct type:] \verb|field_missing.go| \item[Expression is not a struct:] \verb|field_from_non_struct.go| \end{description} \subsubsection{Function call} \begin{description} \item[Argument type mismatch:] \verb|function_argument_type_mismatch.go| \end{description} \subsubsection{Type cast} \begin{description} \item[Type cast to or from an uncastable type:]\ \\ \verb|typecast_from_string.go|\\ \verb|typecast_to_string.go|\\ \verb|typecast_with_struct.go| \end{description} \subsubsection{Append} \begin{description} \item[Append to a slice of the wrong type:] \verb|append_expr_bad_type.go| \item[Append to a non-slice variable:] \verb|append_id_not_slice.go| \item[Append referencing an undeclared variable] \verb|append_id_undefined.go| \item[LHS of assignment is not a slice] \verb|append_lhs_type_not_slice.go| \end{description} \subsubsection{Operator expressions} \begin{description} \item[Operands should be of a type supported by the operator]: \\ \verb|binary_op_unsupported_type.go| \item[Operands should have the same type:] Covering a number of operators and types...\\ \verb|field_selection_id_mismatched_type.go|\\ \verb|binary_gt_mismatched_types.go|\\ \verb|binary_le_mismatched_types.go|\\ \verb|binary_lshift_mismatched_types.go|\\ \verb|binary_op_unsupported_type.go |\\ \verb|binary_or_mismatched_types.go|\\ \verb|binary_expr_mismatched_types6.go|\\ \verb|binary_slash_mismatched_types.go|\\ \verb|binary_rshift_mismatched_types.go| \end{description} \subsubsection{Unary expressions} \begin{description} \item[Unary plus/minus take a numeric type:] \ \\ \verb|unary_arith_neg_bool.go|\\ \verb|unary_arith_neg_string.go|\\ \verb|unary_plus_expr_bool.go|\\ \verb|unary_plus_expr_string.go| \item[Bitwise negation takes a integer type:] \ \\ \verb|unary_bitwise_neg_notint1.go|\\ \verb|unary_bitwise_neg_notint2.go| \item[Logical negation takes a boolean:] \ \\ \verb|unary_logic_neg_notbool.go| \end{description} \section{Breakdown of major contributions} \begin{description} \item[Guillaume Labranche] Type checker architecture, symbol table implementation, type checker implementation (especially toplevel declarations and types) \item[William Bain] Type checker architecture, command-line interface, type checker implementation (especially expressions and types) \item[Si Mei Zhang] Type checker implementation (especially statements), test programs \end{description} \end{document}
{ "alphanum_fraction": 0.7855931579, "avg_line_length": 54.9109311741, "ext": "tex", "hexsha": "62b1c140d284576f36c3bcaff02ac1a6c1be12f3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7118c090346e90aec1b24f56b7f9321d395538d5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "gumgl/comp520", "max_forks_repo_path": "doc/milestone2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7118c090346e90aec1b24f56b7f9321d395538d5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "gumgl/comp520", "max_issues_repo_path": "doc/milestone2.tex", "max_line_length": 1211, "max_stars_count": 2, "max_stars_repo_head_hexsha": "7118c090346e90aec1b24f56b7f9321d395538d5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "gumgl/comp520", "max_stars_repo_path": "doc/milestone2.tex", "max_stars_repo_stars_event_max_datetime": "2015-09-03T05:34:39.000Z", "max_stars_repo_stars_event_min_datetime": "2015-08-21T06:22:25.000Z", "num_tokens": 3287, "size": 13563 }
\section{Summary} \label{sec:summary} \begin{frame}{Multigroup Neutron Diffusion Equation} \begin{equation} \label{eq:multigroup_diffusion} \grad \cdot \current_g(\vr) + \Sigma_{r,g}(\vr) \phi_g(\vr)= \frac{\chi_g(\vr)}{\lambda} \sum_{g'=1}^{G} \nu\Sigma_{f,g'}(\vr) \phi_{g'}(\vr) + \sum_{\substack{g'=1 \\ g' \ne g}}^{G} \Sigma_{s,g' \rightarrow g}(\vr) \phi_{g'}(\vr) \end{equation} \vspace{-2\baselineskip} \begin{conditions} % custom environment designed for this purpose \vr & spatial position vector, \\ \current_g(\vr) & net neutron current for energy group $g$ \units{$\frac{1}{\text{cm}^2 \; \text{s}}$}, \\ \phi_g(\vr) & \parbox[t]{\columnwidth}{fundamental eigenvector, \\ scalar neutron flux for energy group $g$ \units{$\frac{1}{\text{cm}^2 \; \text{s}}$},} \\ \Sigma_{r,g}(\vr) & macroscopic removal cross section for energy group $g$ \units{$\frac{1}{\text{cm}}$}, \\ \chi_g(\vr) & fission spectrum for energy group $g$,\\ \lambda & \parbox[t]{\columnwidth}{fundamental eigenvalue, \\ effective neutron multiplication factor,} \\ \nu \Sigma_{f,g}(\vr) & \parbox[t]{\columnwidth}{number of fission neutrons times macroscopic fission \\ cross section in energy group $g$ \units{$\frac{1}{\text{cm}}$},} \\ \Sigma_{s,g' \rightarrow g} (\vr) & \parbox[t]{\columnwidth}{macroscopic scatter cross section from energy group $g'$ to \\ energy group $g$ \units{$\frac{1}{\text{cm}}$},} \\ G & total number of energy groups. \end{conditions} \end{frame} \begin{frame}{\glsentryshort{nem} Equations} Transverse integrated multigroup neutron diffusion equation. \\ Note: node indices $i,j,k$ have been omitted. \begin{align} \label{eq:transverse_multigroup_diffusion} \frac{d \current_{g,u} (u)}{d u} + \overline{\Sigma_{r,g}} \phi_{g,u}(u) &= Q_{g,u}(u) - L_{g,u}(u) \\ \label{eq:current_approximation} \current_{g,u}(u) &= - \overline{D_g} \, \frac{d \phi_{g,u}(u)}{du} \end{align} \vspace{-\baselineskip} \begin{conditions} u & coordinate direction (i.e. $u = x,y,z$), \\ \overline{\Sigma_{r,g}} & average value of $\Sigma_{r,g}(\vr)$ in node $i,j,k$, \\ \overline{D_g} & average value of diffusion coefficient in node $i,j,k$, \\ Q_{g,u}(u) & transverse integrated neutron source, \\ L_{g,u}(u) & transverse leakage. \end{conditions} \end{frame} \begin{frame}{\glsentryshort{nem} Projections} Basis functions are typically polynomials.\\ \citeauthor{qe2paper} select Legendre polynomials. \begin{align} \label{eq:flux_expansion} \phi_{g,u}(u) &= \sum_{n=0}^{N_{\phi} = 4} a_{g,u,n} \, f_{u,n}(u), \\ \label{eq:source_expansion} Q_{g,u}(u) &= \sum_{n=0}^{N_Q = 2} q_{g,u,n} \, f_{u,n}(u), \\ \label{eq:leakage_expansion} L_{g,u}(u) &= \sum_{n=0}^{N_L = 2} l_{g,u,n} \, f_{u,n}(u), \end{align} \vspace{-\baselineskip} \begin{conditions} a_{g,u,n} & expansion coefficient of $\phi_{g,u}(u)$, \\ q_{g,u,n} & expansion coefficient of $Q_{g,u}(u)$, \\ l_{g,u,n} & expansion coefficient of $L_{g,u}(u)$, \\ f_{u,n}(u) & $n^{th}$ Legendre polynomial. \end{conditions} \end{frame} \begin{frame}{Local Elimination} \begin{itemize} \item Odd and even coefficients can be solved separately \cite{gehinThesis}. \item \citeauthor{qe2paper} show $a_{g,u,1}$ and $a_{g,u,3}$ can be written in terms of each other. \item $a_{g,u,1-3}$ and $a_{g,u,2-4}$ are introduced. \item Solution vector, $\Phi_g$ is length $10 \times N \times G$. 
\end{itemize}
\begin{equation}
\label{eq:solution_vector}
\vPhi_g =
\begin{pmatrix}
\current_{g,x,+} \\
\current_{g,y,+} \\ \vspace{8pt}
\current_{g,z,+} \\
\overline{\phi_g} \\
a_{g,x,1-3} \\
a_{g,y,1-3} \\
a_{g,z,1-3} \\
a_{g,x,2-4} \\
a_{g,y,2-4} \\
a_{g,z,2-4}
\end{pmatrix}
\end{equation}
\end{frame}

\begin{frame}{\glsentryshort{jfnk} Theory}
The $m^{th}$ Newton step.
\begin{equation}
\label{eq:newton_step}
\jacobian (\vx^m) \cdot \step^m = - \residual(\vx^m)
\end{equation}
The step proceeds.
\begin{equation}
\vx^{m+1} = \vx^{m} + \step^{m}
\end{equation}
\end{frame}

\begin{frame}{Inexact Newton Condition}
The Newton step is solved with a Krylov solver. \\
\glsentryshort{gmres} and \glsentryshort{bicgstab} are both investigated.
\begin{equation}
\label{eq:inexact_newton_condition}
\| \residual(\vx^m) + \jacobian(\vx^m) \cdot \step^m \| \le \eta_m \| \residual(\vx^m) \|
\end{equation}
The Krylov solver does not require an explicit Jacobian, only the Jacobian-vector product, which can be approximated with finite differences.
\begin{equation}
\label{eq:dirder}
\jacobian(\vx^m) \cdot \vv \approx \frac{\residual(\vx^m + \epsilon \vv) - \residual(\vx^m)}{\epsilon}
\end{equation}
Typically, $\epsilon = \sqrt{\epsilon_{mach}} \approx 10^{-8}$.
\end{frame}

\begin{frame}{Choice of Physics-Based Preconditioner}
\begin{equation}
\label{eq:left_precondition}
\| \mm_L^{-1} \residual(\vx^m) + \mm_L^{-1} \left( \jacobian(\vx^m) \cdot \step^m \right) \| \le
\eta_m \| \mm_L^{-1} \residual(\vx^m) \|
\end{equation}
\begin{itemize}
\item The preconditioner should approximate the Jacobian inverse \cite{textbookkelley}.
\item \citeauthor{gill_azmy} investigate several choices of preconditioner and conclude that preconditioning with $\approx 5$~\glspl{pi} is ideal.
\item \citeauthor{jfnk_wielandt} present similar results.
\item \citeauthor{qe2paper} develop a preconditioner based on available data.
\item Solved using \gls{tdma} and then the \gls{adi} method.
\item No preconditioner comparison is provided.
\end{itemize}
\end{frame}

\begin{frame}{Convergence Rates of \glsentryshort{jfnk} and \glsentryshort{pi} Methods}
\begin{itemize}
\item \gls{pi}.
\begin{itemize}
\item Converges linearly at a rate determined by the dominance ratio \cite{nakamura}.
\begin{equation}
d = \frac{\lambda_1}{\lambda_0}
\end{equation}
\item Values of $d > 0.95$ are common, and the \gls{ws} is used (to be discussed) \cite{gehinThesis}.
\begin{equation}
d' = \frac{\frac{1}{\lambda_0} - \frac{1}{\lambda'}}
{\frac{1}{\lambda_1} - \frac{1}{\lambda'}}
\end{equation}
% if \lambda_0 = 1.0, \lambda_1 = 0.95, \lambda' = \lambda_0 + 0.03
% d = 0.95, d' = 0.35625
\end{itemize}
\item \gls{jfnk}.
\begin{itemize}
\item Convergence rate determined by Jacobian properties (e.g. Lipschitz constant) \cite{textbookkelley}.
\item Not affected by the dominance ratio \cite{gill_azmy}.
\item Will not be affected by the \gls{ws}, despite the claim of \citeauthor{qe2paper}.
\end{itemize}
\end{itemize}
\end{frame}
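% The frame below adds a small numerical illustration of the shift.  It only
% evaluates the two dominance-ratio expressions above with the sample values
% already noted in the comments (lambda_0 = 1.0, lambda_1 = 0.95,
% lambda' = lambda_0 + 0.03); these values are illustrative assumptions, not
% results taken from the reviewed paper.
\begin{frame}{Worked Example: Effect of the Shift}
Using the illustrative values $\lambda_0 = 1.0$, $\lambda_1 = 0.95$, and
$\lambda' = \lambda_0 + 0.03 = 1.03$:
\begin{equation}
d = \frac{\lambda_1}{\lambda_0} = 0.95,
\qquad
d' = \frac{\frac{1}{1.0} - \frac{1}{1.03}}{\frac{1}{0.95} - \frac{1}{1.03}}
   = \frac{1.0000 - 0.9709}{1.0526 - 0.9709}
   \approx 0.356
\end{equation}
\begin{itemize}
\item The shifted ratio $d' \approx 0.356$ is far smaller than $d = 0.95$, so the shifted \gls{pi} requires far fewer iterations.
\item This acceleration applies to \gls{pi} only; it is not expected to change the convergence rate of \gls{jfnk}.
\end{itemize}
\end{frame}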
{ "alphanum_fraction": 0.6118411756, "avg_line_length": 38.2540540541, "ext": "tex", "hexsha": "cd797030f12636b36e21cba2206317d12da3cfcc", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "da790b3bca756652ae97e8f9b0d3d83e13a163ec", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "wcdawn/WilliamDawn-QE2", "max_forks_repo_path": "presentation/sec_summary.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "da790b3bca756652ae97e8f9b0d3d83e13a163ec", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "wcdawn/WilliamDawn-QE2", "max_issues_repo_path": "presentation/sec_summary.tex", "max_line_length": 86, "max_stars_count": 1, "max_stars_repo_head_hexsha": "da790b3bca756652ae97e8f9b0d3d83e13a163ec", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "wcdawn/WilliamDawn-QE2", "max_stars_repo_path": "presentation/sec_summary.tex", "max_stars_repo_stars_event_max_datetime": "2020-09-30T15:17:59.000Z", "max_stars_repo_stars_event_min_datetime": "2020-09-30T15:17:59.000Z", "num_tokens": 2434, "size": 7077 }
\chapter{Log}
The logging facility itself is platform specific, since the default log interface depends on the target system. The interface is defined platform-independently in {\em Log.h}. The platform-independent formatting is located in \verb|common/Log.cc| and the platform-specific ctor is located in the platform-specific directory, also as \verb|Log.cc|\footnote{The use of the same file name should be resolved in the future!}.

The following log levels are defined:
\begin{description}
\item[INFO] general information about the program execution
\item[DEBUG] messages used for debugging the runtime system. Note that many messages may affect the application execution.
\item[WARN] messages about situations where a backup solution is used. E.g.\ starting the application without root privileges on Linux will forbid the usage of the priority scheduler; the normal scheduler will be used instead.
\item[ERROR] diagnostic information in case of signal raising.
\end{description}

The log interface defines the methods \verb|Log::info()|, \verb|Log::debug()|, \verb|Log::warn()| and \verb|Log::error()|. Each method takes at least one string argument, like printf. Plain text is passed to the corresponding output. The following format elements are in use: \verb|%d|, \verb|%u|, \verb|%s|, \verb|%f| and \verb|%c|. No additional parameters are allowed with these formatting options. The output device is platform specific.
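To illustrate the intended use of this interface, the following sketch shows how runtime or application code might emit messages at the different levels. It is an illustration only: namespace qualification is omitted and the exact parameter types accepted after the message string are implementation specific, so the formatted call at the end is an assumption about the calling convention rather than a fixed signature.

\begin{verbatim}
// Illustrative sketch; not part of the runtime sources.
// Assumes the four Log methods described above accept a
// printf-like message string; namespace qualification is omitted.
#include "Log.h"

void reportStartup(int nTasks) {
   Log::info("runtime system started");
   Log::debug("task list initialized");
   Log::warn("no root privileges: using the normal scheduler");
   // Passing a plain int for %d is an assumption about the
   // calling convention of the format elements.
   Log::error("illegal number of tasks: %d", nTasks);
}
\end{verbatim}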
{ "alphanum_fraction": 0.7780879274, "avg_line_length": 44.78125, "ext": "tex", "hexsha": "d8085d22f055ef6f61f45a8442bb4c7583f759a2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d7db83b0ea15b7ba0f6244d918432c830ddcd697", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "BenniN/OpenPEARLThesis", "max_forks_repo_path": "OpenPEARL/openpearl-code/runtime/doc/log.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d7db83b0ea15b7ba0f6244d918432c830ddcd697", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "BenniN/OpenPEARLThesis", "max_issues_repo_path": "OpenPEARL/openpearl-code/runtime/doc/log.tex", "max_line_length": 78, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d7db83b0ea15b7ba0f6244d918432c830ddcd697", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "BenniN/OpenPEARLThesis", "max_stars_repo_path": "OpenPEARL/openpearl-code/runtime/doc/log.tex", "max_stars_repo_stars_event_max_datetime": "2020-09-15T07:26:00.000Z", "max_stars_repo_stars_event_min_datetime": "2020-09-15T07:26:00.000Z", "num_tokens": 330, "size": 1433 }
\documentclass[a4paper]{article}
\usepackage[english]{babel}
\usepackage[utf8x]{inputenc}
\usepackage{amsmath}
\usepackage{graphicx}

\title{MATH 542 Homework 6}
\author{Saket Choudhary\\[email protected]}

\begin{document}
\maketitle

\section*{Problem 1}
\subsection*{Problem 1a}
To find: $f_{y_2,y_4}(y_1,y_3) = \int_{-\infty}^{\infty}f(y_1,y_2,y_3,y_4)dy_2dy_4$

For marginalising a MVN, we simply drop the irrelevant terms (the terms with respect to which the marginalisation is performed), as they integrate to 1.

Joint marginal distribution of $y_1,y_3$:

$f_{y_2,y_4}(y_1,y_3) \sim N(\begin{pmatrix}1 \\ 3\end{pmatrix},\begin{pmatrix}4 & -1 \\ -1 &5\end{pmatrix})$

\subsection*{Problem 1b}
$f_{y_1,y_3,y_4}(y_2) \sim N(3, 5)$

\subsection*{Problem 1c}
$z=y_1+2y_2-y_3+3y_4$

Thus, $z=aY$ where $a = \begin{pmatrix}1 & 2 & -1 & 3 \end{pmatrix}$ and $Y=\begin{pmatrix}y_1 & y_2 & y_3 & y_4\end{pmatrix}'$.

Then $Ez = aE[y] = -4$.

\begin{align*}
Var(z) &= aVar(y)a'\\
& = 79 \text{ using `R`}
\end{align*}

\subsection*{Problem 1d}
$z_1=a_1y$ and $z_2 = a_2y$ where $a_1 = \begin{pmatrix}1 & 1 & -1 & -1 \end{pmatrix}$ and $a_2 = \begin{pmatrix}-3 & 1 & 2 & -2 \end{pmatrix}$

Then $f_{z_1,z_2} \sim N(\begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \Sigma^z)$

$\mu_1 = a_1'E[y] = 2$

$\mu_2 = a_2'E[y] = 9$

$\Sigma^z_{11} = a_1\Sigma a_1^T = 11$

$\Sigma^z_{22} = a_2\Sigma a_2^T = 154$

$\Sigma^z_{12} = \Sigma^z_{21} = a_1\Sigma a_2^T = -6$

Thus, $Z = \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} \sim N(\begin{pmatrix} 2\\ 9\end{pmatrix}, \begin{pmatrix} 11 & -6\\ -6 & 154\end{pmatrix})$

\subsection*{Problem 1e}
$\mu' = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2-\mu_2)$

$Cov' = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$

$\mu' = \begin{pmatrix}1 \\ 2 \end{pmatrix} + \begin{pmatrix} -1 & 2\\ 3 & -2 \end{pmatrix} \begin{pmatrix} 5 & -4\\ -4 & 4\end{pmatrix}^{-1} \begin{pmatrix}y_3-3\\ y_4+2\end{pmatrix} = \begin{pmatrix}1 \\ 2 \end{pmatrix} + \begin{pmatrix} 1&1.5\\ 1 & 0.5 \end{pmatrix} \begin{pmatrix}y_3-3\\ y_4+2\end{pmatrix}$

$Cov' = \begin{pmatrix} 4 & 2\\ 2 & 6 \end{pmatrix}-\begin{pmatrix} -1 & 2\\ 3 & -2 \end{pmatrix} \begin{pmatrix} 5 & -4\\ -4 & 4\end{pmatrix}^{-1}\begin{pmatrix}-1 & 3\\ 2 & -2 \end{pmatrix}= \begin{pmatrix}2 & 2\\ 2& 4 \end{pmatrix}$

$f(y_1, y_2 |y_3, y_4) = N(\mu', Cov')$

\subsection*{Problem 1f}
$\mu' = \begin{pmatrix} 1 \\3 \end{pmatrix} + \begin{pmatrix} 2 & 2\\ 3 & -4 \end{pmatrix}\begin{pmatrix} 6 & -2 \\ -2 & 4\end{pmatrix}^{-1}\begin{pmatrix}y_2-2\\ y_4+2\end{pmatrix} = \begin{pmatrix} 1 \\3 \end{pmatrix} + \begin{pmatrix} 0.6 & 0.8 \\ 0.2 & -0.9 \end{pmatrix}\begin{pmatrix}y_2-2\\ y_4+2\end{pmatrix}$

$Cov' = \begin{pmatrix} 4 & -1\\ -1 & 5 \end{pmatrix} - \begin{pmatrix} 2 & 2\\ 3 & -4 \end{pmatrix}\begin{pmatrix} 6 & -2 \\ -2 & 4\end{pmatrix}^{-1} \begin{pmatrix} 2 & 3\\ 2 & -4\end{pmatrix} = \begin{pmatrix} 1.2 & 0.4 \\ 0.4 & 0.8 \end{pmatrix}$

$f(y_1, y_3 |y_2, y_4) = N(\mu', Cov')$

\subsection*{Problem 1g}
$Cov(y_1,y_3) = -1 $

\subsection*{Problem 1h}
$\mu' = 1 + \begin{pmatrix} 2 & -1 & 2 \end{pmatrix} \begin{pmatrix} 6 & 3 & -2\\ 3 & 5 & -4 \\ -2 & -4 & 4 \end{pmatrix}^{-1} \begin{pmatrix} y_2-2\\ y_3-3\\ y_4+2 \end{pmatrix} $

$Cov' = 4 - \begin{pmatrix} 2 & -1 & 2 \end{pmatrix} \begin{pmatrix} 6 & 3 & -2\\ 3 & 5 & -4 \\ -2 & -4 & 4 \end{pmatrix}^{-1} \begin{pmatrix} 2 \\ -1 \\ 2 \end{pmatrix}$

$f(y_1 | y_2, y_3, y_4) = N(\mu', Cov')$
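As a hand check of the variance obtained in part (c) above (there computed with R), the product $a\Sigma a'$ can also be expanded directly. The full covariance matrix is not restated in this write-up; the matrix used below is the one implied by the blocks appearing in parts (e) and (h), so this is only a consistency check:
\begin{align*}
\Sigma &= \begin{pmatrix} 4 & 2 & -1 & 2\\ 2 & 6 & 3 & -2\\ -1 & 3 & 5 & -4\\ 2 & -2 & -4 & 4 \end{pmatrix},
\qquad a = \begin{pmatrix}1 & 2 & -1 & 3\end{pmatrix},\\
\Sigma a' &= \begin{pmatrix} 15 & 5 & -12 & 14 \end{pmatrix}',\\
a\Sigma a' &= 1(15) + 2(5) + (-1)(-12) + 3(14) = 79
\end{align*}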
\section*{Problem 2}
Since $\sigma_{12} = \sigma_{13} = \sigma_{14} = 0$ and $y$ follows a MVN, by Theorem 2.2, $y_1$ is pairwise independent with $y_2,y_3,y_4$.

\section*{Problem 3}
\begin{align*}
y_2 - \Sigma_{21}\Sigma_{11}^{-1}y_1 &= \begin{pmatrix} 0_{(n-r) \times r} & I_{(n-r) \times (n-r)} \end{pmatrix} \begin{pmatrix}y_1 \\ y_2\end{pmatrix} + \begin{pmatrix} -\Sigma_{21}\Sigma_{11}^{-1} & 0_{(n-r) \times (n-r)}\end{pmatrix}\begin{pmatrix}y_1 \\ y_2\end{pmatrix} \\
E(y_2 - \Sigma_{21}\Sigma_{11}^{-1}y_1) &= \begin{pmatrix} 0_{(n-r) \times r} & I_{(n-r) \times (n-r)} \end{pmatrix} \begin{pmatrix}Ey_1\\ Ey_2 \end{pmatrix} + \begin{pmatrix} -\Sigma_{21}\Sigma_{11}^{-1} & 0_{(n-r) \times (n-r)}\end{pmatrix}\begin{pmatrix}Ey_1 \\ Ey_2\end{pmatrix}\\
&= \mu_2 - \Sigma_{21}\Sigma_{11}^{-1}\mu_1
\end{align*}

\begin{align*}
Cov(y_2 - \Sigma_{21}\Sigma_{11}^{-1}y_1 ) &= Cov(\begin{pmatrix} 0_{(n-r) \times r} & I_{(n-r) \times (n-r)} \end{pmatrix} \begin{pmatrix}y_1 \\ y_2\end{pmatrix} + \begin{pmatrix} -\Sigma_{21}\Sigma_{11}^{-1} & 0_{(n-r) \times (n-r)}\end{pmatrix}\begin{pmatrix}y_1 \\ y_2\end{pmatrix})\\
&= Cov(aY+bY)\\
&= (a+b)Var(Y)(a+b)^T\\
&= \begin{pmatrix} -\Sigma_{21}\Sigma_{11}^{-1} & I \end{pmatrix} \begin{pmatrix} \Sigma_{11} & \Sigma_{12}\\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}\begin{pmatrix} -(\Sigma_{11}^{-1})^T\Sigma_{21}^T \\ I^T \end{pmatrix}\\
&= \begin{pmatrix}-\Sigma_{21}+\Sigma_{21} & -\Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}+\Sigma_{22} \end{pmatrix} \begin{pmatrix}-(\Sigma_{21}\Sigma_{11}^{-1})^T \\ I^T\end{pmatrix}\\
&= \Sigma_{22}-\Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}
\end{align*}

\section*{Problem 4}
\subsection*{Problem 4a}
Given $t=\frac{z}{\sqrt{\frac{u}{\rho}}} \sim t(\rho)$ we know the following facts:
\begin{itemize}
\item $z \sim N(0,1)$
\item $u \sim \chi^2_\rho$
\item $z$ and $u$ are independent
\end{itemize}

Since $z^2 \sim \chi^2_1$ and $z^2$ is independent of $u$,
$t^2 = \frac{z^2}{u/\rho} = \frac{z^2/1}{u/\rho} \sim \frac{\chi_1^2/1}{\chi_\rho^2/\rho} = F(1,\rho)$

\subsection*{Problem 5.3}
We consider first the following vector:
$Z=\begin{pmatrix} \bar{Y} & Y_1-Y_2 & Y_2-Y_3 \dots Y_{n-1}-Y_n\end{pmatrix}'$

Let's call $X = \begin{pmatrix} Y_1-Y_2 & Y_2-Y_3 \dots Y_{n-1}-Y_{n}\end{pmatrix}'$ so that it allows us to write $\sum_{i=1}^{n-1}(Y_i-Y_{i+1})^2=X'X$

Now $Z=\begin{pmatrix} \bar{Y} \\ X \end{pmatrix}$

\begin{align*}
\begin{pmatrix} \bar{Y} & Y_1-Y_2 & Y_2-Y_3 \dots Y_{n-1}-Y_{n}\end{pmatrix}' &=
\begin{pmatrix}
1/n & 1/n & 1/n & \dots & 1/n\\
1 & -1 & 0 & \dots & 0\\
0 & 1 & -1 & \dots & 0\\
\vdots & & & \ddots & \vdots\\
0 & 0 & \dots & 1 & -1\\
\end{pmatrix} \begin{pmatrix} Y_1 & Y_2 & Y_3 &\dots Y_n \end{pmatrix}'\\
%&= (I-\frac{1}{n}1_n1_n')\begin{pmatrix} Y_1 & Y_2 & Y_3 &\dots Y_n \end{pmatrix}'
Z &= AY
\end{align*}

Also $Z \sim N(A\mu, A\Sigma A')$

$A\mu = \begin{pmatrix} \mu & 0 & 0 \dots 0 \end{pmatrix}'$

$A\Sigma A' = AA'$ since $\Sigma = I$

$AA' = \begin{pmatrix}
1/n & 0 & 0 & \dots & 0\\
0 & 2 & -1 & \dots & 0\\
0 & -1 & 2 & \dots & 0\\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & \dots & 2
\end{pmatrix}$

Thus, $Z=\begin{pmatrix}\bar{Y} & X_{(n-1) \times 1}\end{pmatrix}'$ is a MVN such that $\bar{Y}$ and $X$ are independent (since the covariance is 0).

We also know that $\sum_{i=1}^{n-1}(Y_i-Y_{i+1})^2=X'X = h(X)$

Since functions of independent random variables are also independent, $\bar{Y}$ and $h(X) = X'X$ are independent.
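As a quick sanity check of the form of $AA'$, take $n = 2$: then $A = \begin{pmatrix} 1/2 & 1/2 \\ 1 & -1 \end{pmatrix}$ and
\begin{equation*}
AA' = \begin{pmatrix} 1/4 + 1/4 & 1/2 - 1/2 \\ 1/2 - 1/2 & 1 + 1 \end{pmatrix}
    = \begin{pmatrix} 1/2 & 0 \\ 0 & 2 \end{pmatrix},
\end{equation*}
which matches the general pattern above: the $(1,1)$ entry is $1/n$ and the covariance between $\bar{Y}$ and the difference is 0.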
\subsection*{Problem 5.11}
$Z = \begin{pmatrix}
\phi & 1 & 0 & 0 & \dots & 0\\
0 & \phi & 1 & 0 & \dots & 0\\
0 & 0 & \phi & 1 & \dots & 0\\
\vdots\\
0 & 0 & 0 & \dots & \phi & 1\\
\end{pmatrix}
\begin{pmatrix} y_1\\ y_2 \\ y_3\\ \vdots\\ y_n\end{pmatrix} = AY$

Since $Y \sim N(0, \sigma^2 I)$, it follows that $Z \sim N(0, \sigma^2 AA^T)$, where

$AA^T = \begin{pmatrix}\phi^2+1 & \phi & 0 & 0 & \dots & 0\\
\phi & 1+\phi^2 & \phi & 0 & \dots & 0\\
0 & \phi & 1+\phi^2 & \phi & \dots & 0\\
\vdots & \vdots & \vdots & \vdots & & \vdots\\
0 & 0 & 0 & \dots & \phi & 1+\phi^2
\end{pmatrix}$

\end{document}
{ "alphanum_fraction": 0.5968184312, "avg_line_length": 36.6432160804, "ext": "tex", "hexsha": "78f75968da641ebc17d065aeb43295d395efa751", "lang": "TeX", "max_forks_count": 12, "max_forks_repo_forks_event_max_datetime": "2022-02-10T03:21:09.000Z", "max_forks_repo_forks_event_min_datetime": "2015-09-25T19:06:45.000Z", "max_forks_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "saketkc/hatex", "max_forks_repo_path": "2016_Spring/MATH-542/HW06/hw06.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836", "max_issues_repo_issues_event_max_datetime": "2015-09-23T21:21:52.000Z", "max_issues_repo_issues_event_min_datetime": "2015-09-16T23:11:00.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "NeveIsa/hatex", "max_issues_repo_path": "2016_Spring/MATH-542/HW06/hw06.tex", "max_line_length": 312, "max_stars_count": 19, "max_stars_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "NeveIsa/hatex", "max_stars_repo_path": "2016_Spring/MATH-542/HW06/hw06.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-10T03:20:47.000Z", "max_stars_repo_stars_event_min_datetime": "2015-09-10T02:45:33.000Z", "num_tokens": 3468, "size": 7292 }
\documentclass[10pt,letterpaper]{article} \usepackage[top=0.85in,left=2.75in,footskip=0.75in]{geometry} % amsmath and amssymb packages, useful for mathematical formulas and symbols \usepackage{amsmath,amssymb} % Use adjustwidth environment to exceed column width (see example table in text) \usepackage{changepage} % Use Unicode characters when possible \usepackage[utf8x]{inputenc} % textcomp package and marvosym package for additional characters \usepackage{textcomp,marvosym} % cite package, to clean up citations in the main text. Do not remove. \usepackage{cite} % Use nameref to cite supporting information files (see Supporting Information section for more info) \usepackage{nameref,hyperref} \usepackage[capitalize]{cleveref} \usepackage[inline]{enumitem} \usepackage[per-mode=symbol,group-separator={,},group-minimum-digits=4]{siunitx} \usepackage{listings} \usepackage{todonotes} \usepackage{subcaption} \usepackage{url} % line numbers \usepackage[right]{lineno} % ligatures disabled \usepackage{microtype} \DisableLigatures[f]{encoding = *, family = * } % color can be used to apply background shading to table cells only \usepackage{xcolor} % array package and thick rules for tables \usepackage{array} \usepackage{physics} % for \norm \usepackage{graphicx} \graphicspath{ {../paper_figures} } % create "+" rule type for thick vertical lines \newcolumntype{+}{!{\vrule width 2pt}} % create \thickcline for thick horizontal lines of variable length \newlength\savedwidth \newcommand\thickcline[1]{% \noalign{\global\savedwidth\arrayrulewidth\global\arrayrulewidth 2pt}% \cline{#1}% \noalign{\vskip\arrayrulewidth}% \noalign{\global\arrayrulewidth\savedwidth}% } % \thickhline command for thick horizontal lines that span the table \newcommand\thickhline{\noalign{\global\savedwidth\arrayrulewidth\global\arrayrulewidth 2pt}% \hline \noalign{\global\arrayrulewidth\savedwidth}} % Remove comment for double spacing %\usepackage{setspace} %\doublespacing % Text layout \raggedright \setlength{\parindent}{0.5cm} \textwidth 5.25in \textheight 8.75in % Bold the 'Figure #' in the caption and separate it from the title/caption with a period % Captions will be left justified \usepackage[aboveskip=1pt,labelfont=bf,labelsep=period,justification=raggedright,singlelinecheck=off]{caption} \renewcommand{\figurename}{Fig} % Use the PLoS provided BiBTeX style \bibliographystyle{plos2015} % Remove brackets from numbering in List of References \makeatletter \renewcommand{\@biblabel}[1]{\quad#1.} \makeatother % Header and Footer with logo \usepackage{lastpage,fancyhdr,graphicx} \usepackage{epstopdf} %\pagestyle{myheadings} \pagestyle{fancy} \fancyhf{} %\setlength{\headheight}{27.023pt} %\lhead{\includegraphics[width=2.0in]{PLOS-submission.eps}} \rfoot{\thepage/\pageref{LastPage}} \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrule}{\hrule height 2pt \vspace{2mm}} \fancyheadoffset[L]{2.25in} \fancyfootoffset[L]{2.25in} \lfoot{\today} \begin{document} \vspace*{0.2in} % Title must be 250 characters or less. \begin{flushleft} {\Large \textbf\newline{Supplemental Material: Multidimensional analysis and detection of informative features in human brain white matter} % Please use "sentence case" for title and headings (capitalize only the first word in a title (or heading), the first word in a subtitle (or subheading), and any proper nouns). } \newline % Insert author names, affiliations and corresponding author email (do not include titles, positions, or degrees). 
\\
Adam Richie-Halford\textsuperscript{1*},
Jason Yeatman\textsuperscript{2},
Noah Simon\textsuperscript{3},
Ariel Rokem\textsuperscript{4}
\\
\bigskip
\textbf{1} eScience Institute, University of Washington, Seattle, WA, USA
\\
\textbf{2} Graduate School of Education and Division of Developmental and Behavioral Pediatrics, Stanford University, Stanford, CA, USA
\\
\textbf{3} Department of Biostatistics, University of Washington, Seattle, WA, USA
\\
\textbf{4} Department of Psychology, University of Washington, Seattle, WA, USA
\\
\bigskip

% Insert additional author notes using the symbols described below. Insert symbol callouts after author names as necessary.
% Use the asterisk to denote corresponding authorship and provide email address in note below.
* [email protected]

\end{flushleft}

\section{Bundle and coefficient profiles}
\label{sec:bundle-profiles}

Here, we present the bundle profiles and $\hat{\beta}$ coefficients for each dataset. Throughout this section, diffusion metrics are plotted along the length of eighteen bundles: right corticospinal (CSTR), left corticospinal (CSTL), right uncinate (UNCR), left uncinate (UNCL), left inferior fronto-occipital fasciculus (IFOL), right inferior fronto-occipital fasciculus (IFOR), right arcuate (ARCR), left arcuate (ARCL), right thalamic radiation (ATRR), left thalamic radiation (ATRL), right cingulum cingulate (CGCR), left cingulum cingulate (CGCL), callosum forceps major (CFP), callosum forceps minor (CFA), right inferior longitudinal fasciculus (ILFR), left inferior longitudinal fasciculus (ILFL), right superior longitudinal fasciculus (SLFR), and left superior longitudinal fasciculus (SLFL). We display results for two different diffusion metrics: fractional anisotropy (FA) and mean diffusivity (MD), which are extracted from different diffusion models depending on the dataset: diffusion tensor imaging (DTI) for the ALS and WH datasets and diffusion kurtosis imaging (DKI) for the HBN and Cam-CAN datasets \cite{jensen2005diffusion}. The diffusion metric is always plotted on the left $y$-axis, while the $\hat{\beta}$ coefficients are displayed on the twin axis on the right-hand side. The scale of the $\hat{\beta}$-axis is shared between the FA and MD metrics so that one can compare the relative importance of each metric.

\subsection{ALS bundle profiles}

\Cref{fig:als-bp:fa,fig:als-bp:md} show the bundle profiles and regression coefficients for the ALS dataset FA and MD metrics, respectively. These figures reinforce the findings in the main text that ALS is localized to the corticospinal tract. In this study, SGL selected the right corticospinal tract (CSTR) as important and regularized coefficients in the CSTL. Yet, \cref{fig:als-bp:fa} also shows group FA differences in the CSTL. This highlights a potential drawback of the SGL method, discussed in the main text in the context of age regression. Namely, SGL is not guaranteed to identify \emph{all} important features. In this case, if the diagnostic signal in the CSTL is redundant with that in the CSTR, SGL will regularize the CSTL features, thereby reducing its sparsity penalty without any corresponding increase in loss. This parsimony cuts both ways; it is a feature of the method when one seeks an efficient predictive model, but is a disadvantage of the method when one wants an exhaustive explanation of feature importance. We will use the phrase ``parsimony pitfall'' to refer to the case when SGL regularizes away redundant but obviously important features.
\begin{figure}
\includegraphics[width=0.9\textwidth]{sarica_coefs_profiles_fa.pdf}
\caption{%
{%
\bf Fractional anisotropy (FA) bundle profiles and $\hat{\beta}$ coefficients for ALS classification.
}
\label{fig:als-bp:fa}
}
\end{figure}

\begin{figure}
\includegraphics[width=0.9\textwidth]{sarica_coefs_profiles_md.pdf}
\caption{%
{%
\bf Mean diffusivity (MD) bundle profiles and $\hat{\beta}$ coefficients for ALS classification.
}
\label{fig:als-bp:md}
}
\end{figure}

\subsection{WH bundle profiles}

\Cref{fig:wh-bp:fa,fig:wh-bp:md} show the bundle profiles and regression coefficients for the WH dataset. In contrast to the ALS classification case, the $\hat{\beta}$ coefficients are distributed widely through the brain, supporting the interpretation that aging is a large and continuous whole-brain process. These figures also show that SGL behaves much more like the lasso than the group lasso, as discussed in the main text. The parsimony pitfall is most evident in the IFOL and IFOR bundles in \cref{fig:wh-bp:md}.

\begin{figure}
\includegraphics[width=0.9\textwidth]{wh_coefs_profiles_fa.pdf}
\caption{%
{%
\bf Fractional anisotropy (FA) bundle profiles and $\hat{\beta}$ coefficients for age regression in the WH dataset.
}
\label{fig:wh-bp:fa}
}
\end{figure}

\begin{figure}
\includegraphics[width=0.9\textwidth]{wh_coefs_profiles_md.pdf}
\caption{%
{%
\bf Mean diffusivity (MD) bundle profiles and $\hat{\beta}$ coefficients for age regression in the WH dataset.
}
\label{fig:wh-bp:md}
}
\end{figure}

\subsection{HBN bundle profiles}

\Cref{fig:hbn-bp:fa,fig:hbn-bp:md} show the bundle profiles and regression coefficients for the HBN dataset. As in the WH dataset, the $\hat{\beta}$ coefficients are distributed widely through the brain and SGL behaves more like the lasso than the group lasso. In contrast to the WH results, the bundle profiles show different behaviors. For example, the SLFL and SLFR bundle profiles in \cref{fig:wh-bp:md} and \cref{fig:hbn-bp:md} have different concavity. This is unsurprising, however, given the differences between these datasets
\begin{enumerate*}[%
  label=(\roman*),%
  before=\unskip{: },%
  itemjoin={{, }},%
  itemjoin*={{, and }}]
\item different diffusion models, with DTI for the WH dataset and DKI for the HBN dataset
\item different age ranges and distributions (as is evident in the figure legends), with HBN being a developmental dataset, while WH is a lifespan maturation dataset
\item different anatomical extents, with the WH streamlines truncated to remain within the bundle's bounding regions of interest (the default behavior in the legacy \texttt{mAFQ}) and the HBN streamlines allowed to retain their full extent from whole-brain tractography (the default behavior in \texttt{pyAFQ}).
\end{enumerate*}
Thus, one should use caution when comparing bundle profiles and $\hat{\beta}$ coefficients between the WH, HBN, and Cam-CAN models. The parsimony pitfall is most evident in the UNCL, UNCR, ARCL, SLFL, and SLFR bundles in \cref{fig:hbn-bp:md}.

\begin{figure}
\includegraphics[width=0.9\textwidth]{hbn_coefs_profiles_fa.pdf}
\caption{%
{%
\bf Fractional anisotropy (FA) bundle profiles and $\hat{\beta}$ coefficients for age regression in the HBN dataset.
}
\label{fig:hbn-bp:fa}
}
\end{figure}

\begin{figure}
\includegraphics[width=0.9\textwidth]{hbn_coefs_profiles_md.pdf}
\caption{%
{%
\bf Mean diffusivity (MD) bundle profiles and $\hat{\beta}$ coefficients for age regression in the HBN dataset.
}
\label{fig:hbn-bp:md}
}
\end{figure}

\subsection{Cam-CAN bundle profiles}

\Cref{fig:cc-bp:fa,fig:cc-bp:md} show the bundle profiles and regression coefficients for the Cam-CAN dataset. As in the WH and HBN datasets, the $\hat{\beta}$ coefficients are distributed widely through the brain and SGL behaves more like the lasso than the group lasso. As before, one must be cautious about comparing bundle profiles and $\hat{\beta}$ coefficients between models. While the HBN and Cam-CAN datasets share the same diffusion model and refrain from clipping streamlines, the age distributions for the two are roughly disjoint, with the WH age distribution straddling the two. The parsimony pitfall is again evident in the UNCL, UNCR, ARCL, SLFL, and SLFR bundles in \cref{fig:cc-bp:md}.

\begin{figure}
\includegraphics[width=0.9\textwidth]{cc_coefs_profiles_fa.pdf}
\caption{%
{%
\bf Fractional anisotropy (FA) bundle profiles and $\hat{\beta}$ coefficients for age regression in the Cam-CAN dataset.
}
\label{fig:cc-bp:fa}
}
\end{figure}

\begin{figure}
\includegraphics[width=0.9\textwidth]{cc_coefs_profiles_md.pdf}
\caption{%
{%
\bf Mean diffusivity (MD) bundle profiles and $\hat{\beta}$ coefficients for age regression in the Cam-CAN dataset.
}
\label{fig:cc-bp:md}
}
\end{figure}

\bibliography{paper}

\end{document}
{ "alphanum_fraction": 0.7473939394, "avg_line_length": 37.1621621622, "ext": "tex", "hexsha": "29b23c84dd6af4db84d7b99bb6a0c007d5fcab4b", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-06-09T15:19:46.000Z", "max_forks_repo_forks_event_min_datetime": "2020-06-09T15:19:46.000Z", "max_forks_repo_head_hexsha": "29619775520c4d77cd887016494e9c487604b848", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "arokem/afq-insight-paper", "max_forks_repo_path": "reports/manuscript/supplemental.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "29619775520c4d77cd887016494e9c487604b848", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "arokem/afq-insight-paper", "max_issues_repo_path": "reports/manuscript/supplemental.tex", "max_line_length": 309, "max_stars_count": null, "max_stars_repo_head_hexsha": "29619775520c4d77cd887016494e9c487604b848", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "arokem/afq-insight-paper", "max_stars_repo_path": "reports/manuscript/supplemental.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3343, "size": 12375 }
\section{Determinants}

\begin{definition}
    Let $A \in M_{n \times n} (F)$. If $n = 1$, let $A=(A_{11})$ and we define $\text{det}(A) = A_{11}$. For $n \geq 2$, $\text{det}(A)$ (or $\determinate{A}$) is defined as
    \begin{equation}
        \determinate{A} = \sum_{j=1}^n (-1)^{i + j} A_{ij} \times \determinate{\tilde{A}_{ij}}
    \end{equation}
    for any fixed row $i$, where $\tilde{A}_{ij}$ is obtained from $A$ by deleting row $i$ and column $j$. This is called \cindex{Laplace expansion}. \qed
\end{definition}

\begin{theorem}
    A function $\delta: M_{n \times n} (F) \rightarrow F$ is the same as the determinant, i.e.\ $\delta(A) = \determinate{A}$ for every $A$, if it satisfies the following 3 properties:
    \begin{enumerate}
        \item It is an \cindex{$n$-linear function}: for a scalar $k$,
        \begin{equation}
            \determinate{\begin{bmatrix} a_1\\ \vdots\\ u + kv \\ \vdots\\ a_n \end{bmatrix}} = \determinate{\begin{bmatrix} a_1\\ \vdots\\ u \\ \vdots\\ a_n \end{bmatrix}} + k \determinate{\begin{bmatrix} a_1\\ \vdots\\ v \\ \vdots\\ a_n \end{bmatrix}}
        \end{equation}
        \item It is \cindex{alternating}: $\delta(A) = 0$ if any two adjacent rows of $A$ are identical.
        \item $\delta(I) = 1$.
    \end{enumerate}
    In particular, the determinant is linear in each row when the remaining rows are held fixed. \qed
\end{theorem}

\begin{theorem}
    The effect of an elementary row operation on the determinant of a matrix $A$, producing a matrix $B$, is:
    \begin{enumerate}
        \item interchange any two rows: $\determinate{B} = - \determinate{A}$.
        \item multiply a row by a scalar $k$: $\determinate{B} = k \determinate{A}$.
        \item add a multiple of one row to another: $\determinate{B} = \determinate{A}$.
    \end{enumerate}
\end{theorem}

\begin{theorem}
    If $\rank{A_{n \times n}} < n$, then $\determinate{A} = 0$.
\end{theorem}
\begin{proof}
    If $\rank{A_{n \times n}} < n$, one row is a linear combination of the other rows.
\end{proof}

\begin{theorem}
    \begin{equation}
        \determinate{AB} = \determinate{A} \times \determinate{B}
    \end{equation}
\end{theorem}

\begin{theorem}
    A matrix $A \in M_{n \times n}(F)$ is invertible $\Leftrightarrow$ $\determinate{A} \neq 0$. If it is invertible, $\determinate{A^{-1}} = \dfrac{1}{\determinate{A}}$.
\end{theorem}

\begin{definition}
    The \cindex{cofactor} of $A$ is defined as
    \begin{equation}
        \cofactor{A}_{ij} = (-1)^{i+j} \determinate{\tilde{A}_{ij}}
    \end{equation}\qed
\end{definition}

If the determinant is calculated using cofactor expansion, the cost is $n!$ multiplications. However, if it is calculated using elementary row operations, the cost is $\dfrac{n^3 + 2n - 3}{3}$ multiplications.

\begin{definition}
    The \cindex{adjugate} of $A$ is defined as
    \begin{equation}
        \adjugate{A} = (\cofactor{A})^\top
    \end{equation}
\end{definition}

\begin{theorem}
    The inverse of an invertible square matrix $A$ is:
    \begin{equation*}
        A^{-1} = \frac{1}{\determinate{A}} \adjugate{A}
    \end{equation*}
\end{theorem}

\begin{theorem}[\cindex{Cramer's Rule}]
    Let $Ax=b$ be a system of $n$ equations in $n$ unknowns. If $\determinate{A} \neq 0$, the system has a unique solution:
    \begin{equation}
        x_k = \frac{\determinate{M_k}}{\determinate{A}}
    \end{equation}
    where $M_k$ is the $n\times n$ matrix obtained from $A$ by replacing column $k$ of $A$ by $b$.
\end{theorem}
\begin{proof}
    Let $a_k$ be the $k$th column of $A$ and let $X_k$ denote the matrix obtained by replacing column $k$ of the identity matrix $I_n$ by $x$.
    Then $A X_k = M_k$:
    \begin{equation*}
        \begin{aligned}
            A X_k &= A \begin{bmatrix}
                1 & & & x & \\
                & 1 & & x & \\
                && \ddots & \vdots \\
                &&& x & \\
                &&& \vdots & \ddots \\
                &&& x& & 1
            \end{bmatrix} \\
            &= \begin{bmatrix}
                Ae_1, Ae_2, \dots, Ax, \dots, Ae_n
            \end{bmatrix} \\
            & = \begin{bmatrix}
                a_1, a_2, \dots, b, \dots, a_n
            \end{bmatrix} \\
            &= M_k
        \end{aligned}
    \end{equation*}

    Evaluating $X_k$ by cofactor expansion along row $k$ produces
    \begin{equation*}
        \determinate{X_k} = x_k \times \determinate{I_{n-1}} = x_k
    \end{equation*}

    Hence
    \begin{equation*}
        \determinate{M_k} = \determinate{A X_k} = \determinate{A} \times \determinate{X_k} = \determinate{A} \times x_k
    \end{equation*}

    Therefore
    \begin{equation*}
        x_k = \frac{\determinate{M_k}}{\determinate{A}}
    \end{equation*}
\end{proof}

Note: Cramer's Rule is too slow for real-world calculation.

\begin{theorem}
    In geometry, for a square matrix $A \in M_{n\times n}(F)$, $\absolutevalue{\determinatetext{A}}$ is the \cindex{n-dimensional volume} of the parallelepiped having the row vectors $A_{i,\cdot}$ as adjacent sides.
\end{theorem}
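To make the formulas concrete, here is a small worked illustration with an arbitrarily chosen $2 \times 2$ matrix. Let
\begin{equation*}
    A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \qquad b = \begin{bmatrix} 5 \\ 6 \end{bmatrix},
    \qquad \determinate{A} = 1 \cdot 4 - 2 \cdot 3 = -2 .
\end{equation*}
The cofactor matrix is $\cofactor{A} = \begin{bmatrix} 4 & -3 \\ -2 & 1 \end{bmatrix}$, so
\begin{equation*}
    \adjugate{A} = (\cofactor{A})^\top = \begin{bmatrix} 4 & -2 \\ -3 & 1 \end{bmatrix},
    \qquad
    A^{-1} = \frac{1}{\determinate{A}} \adjugate{A} = \begin{bmatrix} -2 & 1 \\ 1.5 & -0.5 \end{bmatrix}.
\end{equation*}
Cramer's Rule gives the solution of $Ax = b$ as
\begin{equation*}
    x_1 = \frac{\determinate{\begin{bmatrix} 5 & 2 \\ 6 & 4 \end{bmatrix}}}{\determinate{A}} = \frac{8}{-2} = -4,
    \qquad
    x_2 = \frac{\determinate{\begin{bmatrix} 1 & 5 \\ 3 & 6 \end{bmatrix}}}{\determinate{A}} = \frac{-9}{-2} = 4.5,
\end{equation*}
and indeed $A \begin{bmatrix} -4 \\ 4.5 \end{bmatrix} = \begin{bmatrix} 5 \\ 6 \end{bmatrix}$.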
{ "alphanum_fraction": 0.6165491497, "avg_line_length": 30.9102564103, "ext": "tex", "hexsha": "7325e502c0b49d6d9b13edffa489a8a30fffa63b", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-10-01T23:34:47.000Z", "max_forks_repo_forks_event_min_datetime": "2020-10-01T23:34:47.000Z", "max_forks_repo_head_hexsha": "d12ac08d30be4341776714ad895116a243ec026f", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "elvisren/machine-learning-notes", "max_forks_repo_path": "src/linear_algebra/la.4.determinants.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d12ac08d30be4341776714ad895116a243ec026f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "elvisren/machine-learning-notes", "max_issues_repo_path": "src/linear_algebra/la.4.determinants.tex", "max_line_length": 219, "max_stars_count": 2, "max_stars_repo_head_hexsha": "d12ac08d30be4341776714ad895116a243ec026f", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "elvisren/machine-learning-notes", "max_stars_repo_path": "src/linear_algebra/la.4.determinants.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-04T17:28:22.000Z", "max_stars_repo_stars_event_min_datetime": "2020-05-07T03:05:08.000Z", "num_tokens": 1664, "size": 4822 }
\section{Future Work}
As mentioned earlier, \abcd can be used along with other primitives in order to form more complex contracts. Depending on the application domain, the current structure of \abcd may or may not be sufficient on its own. In future work, we aim to customize this structure so that it integrates cleanly into more complex systems.

On the other hand, there exist many variations of HTLC, which in turn imply different categories of atomic swaps with different properties. The modular structure of our design enables similar variations on \abcd with the associated properties, which can be explored in future work. \newfateme{In particular, a privacy-preserving scriptless version of ABCD using adaptor signatures may be possible under the framework of Schnorr signatures} \new{\cite{10.1007/978-3-030-54455-3_38}.}
{ "alphanum_fraction": 0.8139810427, "avg_line_length": 140.6666666667, "ext": "tex", "hexsha": "f3bb043900f1a6ee3f0f7a6f88c4f6c3790e1912", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "da47329f322e20aa37a80ca71b75134c1f574427", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "incentivus/articles", "max_forks_repo_path": "ABCD/sections/future-works.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "da47329f322e20aa37a80ca71b75134c1f574427", "max_issues_repo_issues_event_max_datetime": "2021-03-20T05:06:18.000Z", "max_issues_repo_issues_event_min_datetime": "2020-10-12T16:28:14.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "incentivus/articles", "max_issues_repo_path": "ABCD/sections/future-works.tex", "max_line_length": 481, "max_stars_count": 2, "max_stars_repo_head_hexsha": "da47329f322e20aa37a80ca71b75134c1f574427", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "incentivus/ICBTA", "max_stars_repo_path": "ABCD/sections/future-works.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-05T09:11:33.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-08T02:06:01.000Z", "num_tokens": 175, "size": 844 }
% Title: Dtao Dam Forest Monastery % Author: Ajahn Jayasāro \chapterAuthor{Ajahn Jayasāro} \chapterNote{An interview conducted by Tan Paññāvuddho.} \chapter{Dtao Dam Forest Monastery} \tocChapterNote{By \theChapterAuthor. An interview conducted by\newline Tan~Paññāvuddho.} \markright{\theChapterAuthor} Dtao Dam Forest Monastery is a branch monastery of Wat Pah Nanachat, located in the mountainous, thickly-forested jungle of Mae Nam Noy, Sai Yok National Park in the Kanchanaburi Province of Thailand. Mostly through the tireless efforts of the monastery's main lay supporter, a brave and determined woman called Tivaporn Srivorakul, the pristine, lush quality of the Dtao Dam forest has been well preserved. Tivaporn operates a tin mine in the Dtao Dam area, employing Burmese, Karen and Mon workers who live in this remote border region in order to escape the social and political strife of present-day Burma. Despite great pressure on her for many years, she has stood up to all those interested in destroying the forest. At the onset of each hot season, the monks and novices of Wat Pah Nanachat make a three-day \emph{tudong} through the National Park into the monastery, where they spend two months in retreat. This year (April 1999), the \emph{tudong} was cancelled due to cattle and drug-smuggling activity in the outlying area, forcing the Sangha to be brought in by four-wheel drive vehicles. The following interview covering these and other issues related to Dtao Dam was conducted by Tan Paññāvuddho at Ajahn Jayasāro's \emph{dtieng} in the Dtao Dam forest. \emph{Tan Paññāvuddho}: Tan Ajahn, to begin, could you give a brief history of Dtao Dam Forest Monastery? How was this place founded, and how has it developed and evolved to reach its present state today? \emph{Ajahn Jayasāro}: In 1979 Ajahn Pasanno was on \emph{tudong} in Kanchanaburi. He got to know Khun Sunan, the owner of another mine in this part of Kanchanaburi, and she built a \emph{kuṭī} for him in the forest near her mine where he spent the Rains Retreat. That mine is now abandoned -- we pass it as we come into Dtao Dam. Years later the owner of the Dtao Dam mine, Yom Tivaporn, was struggling to preserve the forest here in the National Park. She thought this would be a wonderful place for forest monks to live and train, and hoped that their presence might also deter hunters and loggers. So she invited Ajahn Pasanno to bring some monks in and do a retreat here. She offered to make sure food and any other requisites would be provided, as there's no village for alms-round. The idea was that if it was appropriate some basic \emph{kuṭīs} would be built and two or three monks could stay on for the Rains Retreat. Everything worked out as she hoped. \emph{Tan Paññāvuddho}: So in what year did Ajahn Pasanno first bring the monks? \emph{Ajahn Jayasāro}: About eight years ago. That year two monks spent the Rains Retreat here in \emph{kuṭīs} built at the foot of this mountain. Since then monks have spent the rainy season here on a regular basis. Gradually, over the years, there have been developments. A few years ago we had a tractor come in and cut a road up to the upper \emph{sāla}. Three \emph{kuṭīs} have been built up on the ridge there. As you know, where we are now (to the west of the upper \emph{sāla} on a different mountain, where the inner \emph{sāla} is located) is a very densely-forested plateau through which the stream meanders. We have twenty \emph{dtiengs} scattered throughout the forest for the monks to use during the hot season. 
On this plateau the climate is very pleasant in the hot season, but extremely damp and unpleasant in the rainy season; there are many leeches, and if you put a robe out to dry after washing it, it may still be wet after twenty-four hours. Hence the need to build the upper \emph{sāla} and three small \emph{kuṭīs} on the more exposed ridge over to the east of here, so monks can stay at Dtao Dam during the rainy season. We also built a lower \emph{sāla} down at the base of these two mountains. The monks walk down in the early morning, take their daily meal and then come straight back up the mountain afterwards. There's no village for monks to go on alms-round, which makes this quite an unusual place for monks to live and puts serious restrictions on how many can stay here on a long-term basis. But we have a lot of lay supporters, particularly in Kanchanaburi, who give Yom Tivaporn food and provisions to bring to the monastery when the whole community comes in the hot season. There are a couple of Burmese families who cook for the monks, particularly one lady. \emph{Tan Paññāvuddho}: Is it common for monks to inhabit forest in National Parks? \emph{Ajahn Jayasāro}: About six years ago there was a period in which the presence of forest monks was looked upon by the government as being detrimental to the forests. In a number of highly publicized cases monks were accused of felling trees to build lodgings and developing monasteries in a way that harmed the forest. A government plan proposed having all monks living in national parks or reserved forests ejected from them. In fact, although there have been isolated instances of monks being insensitive to forests they were living in, this has never been one of the major problems facing forests in this country. But anyone who lives out in country areas knows that the presence of monks is the major factor inhibiting deforestation. It's hard to say to what extent the plan was devised out of ignorance, and to what extent it was influenced by the forces who want to get the monks out of the way. Fortunately, however, there was an about-face -- the plan aroused a lot of opposition. Tan Chao Khun Payutto wrote an influential pamphlet explaining the importance of the forest to Buddhism and Buddhist monasticism, and subsequently a new plan emerged which involved giving opportunities for monastic communities to help to care for the forest. The new plan allowed monasteries to apply for permission from the Royal Forestry Department to look after a certain area of land, from a thousand \emph{rai} upwards to five thousand \emph{rai}. Looking after the forest in this case means little more than living in it in very basic dwellings. The respect that monks command, at least amongst the local people, is acknowledged to inhibit the destruction of the forest. The Forestry Department is seriously understaffed and underfunded. The U-turn regarding the role of monks in forest conservation was more or less an admission of defeat. They know that they can't prevent the forest from disappearing and the government prefers to spend its money on other things. So we applied to participate in this programme. In fact, we asked for a lot more land than we were eventually given. The head of the National Park (suspected by many to be deeply implicated in illegal logging in the park and now transferred) was not supportive and cut our application down to a thousand \emph{rai}. The only condition laid down for us to be here is that we don't do a lot of building and don't cut down any trees. 
We have fulfilled our plan to build the three small \emph{sālas} and the three \emph{kuṭīs}, so as for the material development of Dtao Dam, I can't see that there is really much more that needs to be done. Anyway, Tivaporn feels that if there was to be any more building, the officials hostile to her would use it as a pretext to accuse her of something. \emph{Tan Paññāvuddho}: Who are these officials? Why are they apparently so hostile to Tivaporn? \emph{Ajahn Jayasāro}: Well, she's a thorn in the side of the `baddies'. She stands in the way of certain people making a lot of money. Every branch of the Thai civil service is plagued by corruption. It seems, at least from the outside, that the corrupt officials outnumber the honest ones to a frightening degree. In many places corruption is institutionalized, difficult to avoid being sucked into because social pressures to conform in the workplace are so powerful. Of course there are some who manage it, but if you stay honest it means you're unlikely to advance very far in your career. Your boss may well not put you forward for promotions. Some people who are ambitious and feel they have something to offer society justify their corruption by arguing that it is the only way they can get into a position where they can change things for the better. In the eyes of many people in this country, forests mean money. Dtao Dam is, to put it bluntly, big bucks. Some of these people who hate Tivaporn don't see nature as we do when they see a forest, they don't see wildlife -- they see money in a previous existence. Trees are money. Land and animals are money. And because there is so much corruption, there's so much influence-peddling up to the very highest levels. That means the laws don't have the kind of irrevocable fixed quality that they have in the West, at least in most people's minds. In Thailand you always feel that there's a way to get around things. You can often get things changed, get exceptions made, if you know the right people, or if you are the right people yourself. There actually was a law prohibiting gas pipelines through national forests, so a special law was passed to make it all right. At the same time, a sub-clause allowing mining operations in national parks somehow slipped in. There are various kinds of scams for forestry and national park officials, the most obvious one being turning a blind eye to the illegal logging interests. As time goes on and good timber becomes increasingly rare, the whole business becomes more and more lucrative. National parks can be eligible for grants for reforestation, but the area in question has first to be classified as degraded forest. So a common ploy is to light a fire or cut down some trees and then get the designation of the land changed. Once it has been declared `degraded forest' you can apply for funds for reforestation. Then you can use one part of the grant for reforestation but keep the rest for yourself and your henchmen. You can also let the loggers have more of the original big trees. And so on. On the national level, forests all over the country are under pressure from the growing human population. The whole question of land settlement has become a political hot yam and it brought down the government before last. The question that has arisen with some urgency over the past few years is: what do you do when poor and desperate people encroach on national park or reserved forest in order to clear the land and grow themselves some food? If you evict them, where are you going to put them? 
What will be the political repercussions for political parties which seek to present themselves as protectors of the poor? The answer given by the previous government was to pass a law giving these people squatters' rights, which in effect encouraged people all over the country to clear land in national parks for farming. So there are many pressures on Dtao Dam. Corrupt local politicians, government officials, businessmen, soldiers, border police: a lot of ethically challenged people have their eyes on the place. Theoretically it should not be too hard to protect Dtao Dam. To get wood out there's only one road, and there's a border police barrier across it and a barracks overlooking it. But of course, all the people at the police post are on a very low wage, far from home. People at the National Park are involved. Everyone takes their share.
\emph{Tan Paññāvuddho}: Tan Ajahn, a couple of weeks ago several of us climbed to the top of the tallest local mountain peak, where one can get a view in all four directions. From there it is forest as far as the eye can see. You have mentioned before that there are plans to build a road from Burma into Thailand through this mountain pass. Why are various parties so interested in building a road through such remote mountainous forest?
\emph{Ajahn Jayasāro}: If you look on a map, you'll find this pass is the shortest and most direct route between the city of Kanchanaburi and the Burmese port of Tavoy, which is maybe forty kilometres away. That's not a long way to cut a road. It would provide Thai industry with direct access from Bangkok, through Kanchanaburi, to the Andaman Sea -- port facilities, holiday resorts and so on. The army officers, politicians etc. who've been buying up land on either side of the prospective road would make huge profits when the land prices go up. And of course, once the road is cut you open up the entire forest for exploitation. In every case that I can think of, a road built through a forested area has signalled the beginning of the end for that forest. I don't think there's any question that the road will be built -- it's just a question of what route it will take. The pass here happens to be the easiest traverse of this mountain range. Further to the south and to the north the mountains are much more difficult to cross. This is why in the past, during conflicts between the Thais and the Burmese, this was the pass that the invading armies would march through. It's a kind of gap in the mountain range. From where we are sitting right now at an elevation of about seven hundred metres, we're only about a fifteen-minute walk from Burma. From an engineering point of view it would be a relatively easy job. Driving along this route, you'd hardly notice the gradient.
\emph{Tan Paññāvuddho}: So out there, what interested parties besides the monks are working to preserve this Dtao Dam forest?
\emph{Ajahn Jayasāro}: Well, the Thai environmental movement is in many ways still in its infancy. Probably only in the last five to ten years has it had any kind of muscle. It is only very recently that the authorities have even felt a need to pay lip-service to environmental protection. Of course there have always been people warning against the destruction of the environment, but during the boom economy that preceded the economic collapse, many people just assumed that environmental degradation was the price you had to pay for prosperity. Preserving forest was considered the concern of romantics or people not living in the real world.
After all, what does a tree contribute to the gross national product? Now, of course, with the economy on the rocks, the reasoning is that there are more pressing priorities. Another telling factor is the perception of forest. Until recently forests were associated in Thailand, and indeed throughout Asia generally, with backwardness. Cities mean civilization. For example, you have the Thai word \emph{Pah-thuean}. `\emph{Pah}' means `forest' and `\emph{thuean}' means `uncivilized', implying that people who live in the forest are backward and uncivilized, and that everything associated with nature is the past. That's been a strong underlying idea in the view of the urban population, and particularly perhaps among businessmen and politicians. There has never been any sense of the forest as a national heritage that should be looked after. And this prejudice is still very strong. It's only recently that there have been attempts to introduce a more progressive understanding of nature in the school curriculum, with the aim of creating a new respect for nature. Perhaps a period of alienation from nature is necessary. As far as I know, love of nature did not develop in the West until the Industrial Revolution either. If you look at the history of Western art, for example, it wasn't until the eighteenth century that there were pure landscapes, paintings in which nature was considered an interesting topic in itself, rather than as merely a backdrop for human beings. So there are various non-governmental environmental groups at work. But as far as the government itself is concerned, the Ministry of the Environment is officially called the Ministry of Industry, Technology and the Environment, which gives you some idea of the priorities and conflict of interests there. \emph{Tan Paññāvuddho}: All three interests lumped together? \emph{Ajahn Jayasāro}: All lumped together. And it's the same with the Forestry Department. It has a dual role: firstly, caring for the forest, secondly, the promotion of agro-forestry. So it's inevitably tied up with big business and not free to preserve the forest. There are good people, educated people particularly, who are starting to appreciate nature and coming together to protect what is left, but they often feel somewhat helpless and intimidated by all the forces against them. These people live in the cities, a long way away. As you can see, I am not particularly optimistic, but there is a bright side. Here at Dtao Dam we have representatives from the palace helping us. Right from the first days of the Thai kingdom in Sukhothai, there was a custom whereby on Wan Phra anyone could go to the palace and ring a certain bell to request an audience with the King in which to let him know their grievances. You could discuss a problem where you hadn't been given a fair deal by the government authorities, for instance, or where you'd been the victim of corruption. It was a very special appeal court that bypassed the judicial system. This custom has come down to the present day. Now it takes the form of an office in the palace to which anyone can write and appeal. The people who work in this office liaise with the Queen's private secretaries to investigate these claims, so you go right to the top. And since the power of the monarchy is strong, this office exercises a lot of influence outside the conventional power structures of parliament. People from this office have been instrumental in getting some of the most corrupt officials transferred, for example. 
We also have friends in various conservation bodies, as well as a senior minister in the present government. \emph{Tan Paññāvuddho}: What about Nature Care, the environmental preservation organization founded by Ajahn Pasanno? How is it involved? \emph{Ajahn Jayasāro}: Nature Care is an NGO, a non-governmental organization, set up originally in Ubon to help preserve the forest bordering the Mekong River. Subsequently, with our association with Tivaporn, Nature Care established a branch in Kanchanaburi. This facilitates applications for funds from businesses and government bodies for conservation purposes. Returning to Dtao Dam again, one of the things that complicates the issue here is Tivaporn's mine. She has been running this mine since well before the area was declared a National Park, and she's deeply in debt. The mine also functions as a kind of welfare programme for over 100 workers and their families, most of whom are undocumented immigrants from Burma. Her mining concession will last for another four or five years. But people who wish she wasn't here -- she is the main obstacle to people destroying the forest -- have gone as far as to threaten her life on a number of occasions. Others have instigated various rumours to blacken her reputation. One of the things that people say is that she invited the monks here as a front. Or they say that she's trying to create the image of being an environmentalist, when all she is interested in is keeping her mine going. She has even been accused of building a private luxury resort here. \emph{Tan Paññāvuddho}: That's pretty unbelievable! \emph{Ajahn Jayasāro}: Well, you'd think so, but as the people spreading it were powerful and influential, this accusation apparently reached the highest circles of the government and the royal family. As you know, last year Tivaporn was given a prize as Thai Citizen of the Year by a leading charity of which the Crown Princess is the patron. The charity was worried they'd been deceived and decided to investigate. Of course, they found the charge was baseless. But these kinds of slurs are leaked to the newspapers. Also, senior army officers are given folders containing facts about Dtao Dam which distort the picture. For instance, Tivaporn was shown a folder two weeks ago in which her signature for an application for funds for reforestation of Dtao Dam had been forged. It's not clear whether someone is attempting to embezzle these funds or is trying to accuse her of receiving the funds and not using the money for reforestation. But this kind of thing is going on all the time. All the various parties involved, even the environmental groups, have their own contacts, and they hear these things. Naturally, they're not always sure what to believe. So the policy for us has been to invite these people in to see for themselves. Another problem down the road is when the mining concession ends. One of the plans is to have various kinds of scientific projects going on, especially ones related to biodiversity and botanical research. This is an area rich in biodiversity. A botanist was here a few weeks ago and he was extremely excited by what he saw, all kinds of things that he'd never come across before. So there is a hope that there will be some kind of botanical or biological station at the base of the mountain, where graduate students can come and do research. \emph{Tan Paññāvuddho}: What about the animal life? 
We've seen all kinds of exotic animals, from elephants to black panthers to white tigers, and bizarre-looking creatures unlike anything I've ever seen before. \emph{Ajahn Jayasāro}: That's why we have those animal-sighting forms, to document that these animals are really here. It's a matter of getting this information into the hands of the people with the right intention towards forests. \emph{Tan Paññāvuddho}: Tan Ajahn, you've alluded to it somewhat, but could you articulate your role as Abbot and as a forest monk in addressing these problems? \emph{Ajahn Jayasāro}: Well, as you know conventionally speaking, I am the Abbot of Dtao Dam, but most of the year I live at Wat Pah Nanachat. I'm able to come out for a month or two in the hot season, but I keep in contact by telephone with Tivaporn when I'm back in Ubon, to stay abreast of what's going on and give her support and encouragement. She gets bullied and slandered a lot. It's a lot to put up with, apart from the ordinary pressures of running a business in adverse circumstances and struggling to make enough money to keep going. You know what an incredible drive it is in and out from here, and you know how often she does that. She goes out for a day, then comes back in again, then drives a truck all the way to Phuket to sell the ore from the mine, then drives all the way back up again to meet with all these academics and scientists here for a visit tomorrow. I've asked her to phone me right away if anything really difficult or heavy comes up. For example, she gave us the advance warning about the cattle and drug-smuggling going on across the border this year. Also, a role that one plays as a forest monk who is also an Abbot is to be a liaison or central figure. Being the Abbot of a large monastery, and having been in Thailand for many years, I've come to know a lot of people and I can help interested parties get in contact with each other. So to summarize my role, I'd say it involves first, giving moral, spiritual support, Dhamma teaching, encouragement and reflections. Second, in the social role as the Abbot of a well-known monastery who knows lots of people, I can help the right people get in touch with each other. \emph{Tan Paññāvuddho}: Seems potentially like a pretty adversarial situation. How do you manage to maintain a non-partisan position in a scenario that is very partisan? \emph{Ajahn Jayasāro}: The abbot of a monastery usually plays the role of being the referee or the impartial resort for both sides in a dispute, whereas here it's a little different in that we're part of it. I'm not absolutely equanimous about this. I'm not totally impartial. I'm definitely on the side of the people who want to save this forest. But I find it important to avoid the `us' and `them' way of thinking. Also, I don't personally have to confront these people trying to destroy the forest the way Tivaporn does. I've met very few of the leading figures. And Thai society being what it is, one always keeps up social proprieties. If someone were to wish bad things for the forest monastery and curse us to our backs, if he met me he'd probably bow and speak very politely. For a monk the principle is always, `What's correct according to Dhamma-Vinaya?; what's wholesome, what's right?', and standing up for that in certain circumstances, while being sensitive to time and place and the way to go about things. And not to come across as being adversarial. 
For example, if something illegal is going on in the forest, I wouldn't confront the person directly, but I might try to see their superior, or superior's superior. Rather than being a problem between me and someone, it's a problem in the wider community that calls for the right people to be alerted.
\emph{Tan Paññāvuddho}: In personal terms, training here in this remote, wild forest has been a very enjoyable and profound experience. Could you put into words why it is so important for monks to train in forests? What are the advantages? How do we reflect on and learn from nature in the context of Dhamma practice?
\emph{Ajahn Jayasāro}: Well, the practice of Dhamma is one in which it's very important to develop the ability to calm the mind, to make the mind peaceful and concentrated. That being so, it's essential to have a conducive environment in which there is nothing too jarring or too exciting. So we lead a very simple life, one pared down to the essentials, not surrounded by anything man-made or anything that's going to pull you out of yourself. Living in a forest, there's nothing really, nowhere for your eyes to go -- just greenness and trees all around. It automatically encourages you to incline inwards. The natural rhythms of the forest, of the trees and the streams, give a sense of uplift and well-being to our minds. They ground us. This provides a very important foundation for the meditation practice. It comes to feel natural to be by yourself and you come to delight in solitude. Sitting meditation and walking meditation become obvious ways to spend time, not something you have to push yourself into. I think that for most people support from the environment is still vitally important for success in practice. Ajahn Chah would often talk of the relationship between physical seclusion and seclusion from the Hindrances. The forest is not quiet, but it is tranquil and it is teaching you the laws of nature all the time. The things you see around you are just natural phenomena. You're surrounded by birth, ageing, sickness and death, arising and passing away in the most raw and obvious forms. As you reflect on those principles internally, your contemplations find a resonance outside of you. The phases of the moon, dawn and dusk, the play of heat and cold, the whole natural environment attain an increased profundity, because they express the nature of things you're investigating internally. You feel a sense of harmony and a seamless unity between the inner and the outer. Living at the foot of a tree, keeping the \emph{dhutaṅga} practices, we also have this wonderful feeling of being the inheritors of a tradition that stretches back for over 2,500 years. We are not living so differently at all from the way the great monks of the Buddha's time lived. That sense of being a part of something larger, something noble that stretches in an unbroken line right back to the Buddha: I think that's a very wholesome feeling, one that a monk may cherish. This particular forest, being home to so many wild animals, gives us the opportunity to look at fear, anxiety and attachment to the body in a very direct way, seeing the effect they have on our mind, the sense of urgency they give us. While living in the forest, a lot of these phrases: `a sense of urgency', `making every moment count' -- teachings we've read about and studied -- really come alive. In a way that's difficult to articulate, I feel a sense of rightness, a feeling of `This is exactly how I should be living', and `This is how a monk lives'.
\emph{Tan Paññāvuddho}: In the Suttas, so many passages from the Buddha's enlightenment to the \emph{Parinibbāna} take place under trees. The Buddha always lived and practised in the forest when possible. With the forest disappearing, and the subsequent likelihood that this will be the last generation of forest monks, how do you see a Buddhist monk responding to a predicament like this?
\emph{Ajahn Jayasāro}: Well, you don't have any choice really. There's not so much that can be done. As you become a more senior monk and you have more responsibilities and opportunity to teach the Dhamma to lay Buddhists, you can at least point out the value of the forest, how little remains and the need to look after it. It's also important to point out the relationship between the forest and the Buddhist religion. Then as a younger monk, just make the best of the forested areas available while you can. But I think it's really important to have young monastics come out and experience this way of practice right from the beginning of their monastic career, because it can create such a strong impression. You know you'll remember this for the rest of your life. Hopefully you will continue to have the possibility to keep coming here or to places like this to train. But even if that is not the case, for monks to have the experience of living simply in a forest like this, even once, has a ripple effect. As monks become more senior and have their own monasteries and their own disciples, they will pass on their love of nature and appreciation of the role of solitude in monastic life. But you know, I must admit I sometimes feel that this destruction is not going to stop until every last tree outside of private hands is gone. To effect a real change, it has to start in the schools. Last week, when I went out to see what was going on with the various arson fires in the area, Tivaporn was running a retreat for school kids in the village at the edge of the forest. And the kids loved it -- they really responded to the teaching about nature very well. At one point the children were asked what their parents do for a living, and three of them replied that it was illegal logging. It's the same everywhere. In Huay Ka Kaeng, just north-east of here, there is a lot of wild forest. It's labelled a World Heritage Site. It's also the place where a forest park official called Seup Nakasatheean killed himself as a gesture to call attention to what is happening to the forests across Thailand. A foundation was set up in his name and the army was sent in to look after and patrol the land. But still there is cutting going on to this day. I myself don't see any fundamental changes being made until there comes a point where laws are laws, and whoever breaks the law is wrong and is dealt with appropriately. But I feel reasonably confident about the prospect of saving Dtao Dam. It is really hard to say when you don't know everything that's going on and are hearing everything second- or third-hand. Still, overall, I think there are enough people with influence who know about Dtao Dam to keep this tract of forest intact.
\emph{Tan Paññāvuddho}: You mentioned that you went to a conference about Dtao Dam last year. What was that like?
\emph{Ajahn Jayasāro}: Well, yes, it was strange for me. One has the idea that Dtao Dam is a forest far away from anywhere else, where we come on retreat every year. But at this conference I walked into a large room with academics who all seemed to be experts on Dtao Dam. I was wondering where these people came from.
They were talking about the biology and topography and all kinds of esoteric subjects.
\emph{Tan Paññāvuddho}: Ajahn, when I'm practising \emph{sīla, samādhi, paññā} in long periods of solitude in the forest, I sometimes get the sense that this in itself feels like the most direct, authentic response to the threat of this forest being destroyed, not to mention all the other environmental, economic and socio-political calamities on earth. But I find it challenging to articulate this. If I were to try to explain it to somebody who hasn't had much experience with meditation, I wouldn't know how to put it into words. But the sense of authenticity seems true and real.
\emph{Ajahn Jayasāro}: Yes, I agree, but I also have the same kind of difficulty in explaining it, why it is best to practise in this way. One thing I said the other day: if we look at the root of all mankind's self-imposed difficulties, there is a common underlying cause. We find that because mankind doesn't know himself, he constantly acts in conflict with his own best interests, living merely as the puppet of desires, fears and delusions. But practising \emph{sīla, samādhi, paññā} deals with these things at their very roots. That way one really works with the whole structure of what is going on, rather than just responding to a particular expression of it. We study the mind to understand what greed is, what hatred is, what delusion is, even in their most subtle forms, while developing the skilful means to abandon them, to let them go. That seems to me to be as intelligent a response as any other. But with trying to save the forest or whatever, I myself am always wary of falling into the trap of `I've got to do it', or `We've got to do this'. Once you fall into this `We've got to!' mentality, you've lost it already. Yes, I will do what I can. But who knows what will happen? These things occur due to causes and conditions, many of them way out of my control.

\clearpage

\section{Recent developments in Dtao Dam}

\emph{In the following little sequel, Ajahn Siripañño, who has been the abbot of Dtao Dam Monastery for the last five years, gives an overview of some of the developments which have taken place since the original article was written.}

Since the publication of this article, a whole host of factors regarding the situation at Dtao Dam have changed, mainly for the better. What remain almost entirely unchanged are the pristine nature of the jungle there and the ongoing presence of a small number of forest monks dedicated to living in the open, under the beautiful tree canopy, with the call of gibbons and crickets echoing around. Regarding Khun Tivaporn and the mine, in 2003 the mine lease expired, and with it permission for the hermitage. Certain forces within the National Parks Department tried their best to force the monks to leave, presumably hoping to exploit the area commercially in some way. However, pressure from monastery supporters, the national press and, finally, from within the National Parks Department itself, resulted in the monastery being given permission to stay on, with the agreement to be renewed every five years.
More stability was achieved when in 2009 the Thai government created a nationwide `Buddhist National Parks' project, with the specific aim of enabling legitimate monasteries to stay in forest areas and make use of national park land for Dhamma practice, while at the same time relying on the Sangha for help in preserving the forests by keeping an eye open for illegal hunting, logging, forest fires and other dangers. In fact, with the closure of the mine and the departure from the area of Tivaporn's family and all the workers, the hermitage has become a completely isolated spot right on the Burmese border, with very few people coming in and out other than a seasoned bunch of off-road vehicle drivers. As for Tivaporn herself, this hardy woman, now approaching 70, runs a community centre on the property surrounding her house on the outskirts of Kanchanaburi Town, teaching seminars on environmental matters and sustainable livelihood. She also maintains a centre in the village nearest to Dtao Dam, Tung Ma Sa Yo, which provides work opportunities for her former mine-workers, many of whom remain undocumented due to Thai bureaucratic\linebreak complexities. The road linking Bangkok and Davoy (now usually marked on maps as Tawei) is currently under construction. Luckily, the possibility that it might actually pass right through the Dtao Dam area was averted when sufficient pressure from environmental groups made it clear that this would not be acceptable. The route now passes some twenty kilometres south of the monastery. Food is mainly brought in by monastery supporters, cooked by one or two resident workers and supplemented by a vegetable garden and forest fruits and vegetables, roots, shoots and herbs which can be collected. Every year a group of monks and novices from Wat Pah Nanachat go there to spend the hot season in the same way as they have done for some twenty years. The three original \emph{kuṭīs} provide enough shelter for a small group of monastics to spend each Rains Retreat there. During the last two Decembers a group of students, parents and teachers from Panyaprateep school have visited for a few days with Ajahn Jayasāro. Animals, large and small, are still in evidence. Two years ago a herd of elephants strolled through the hermitage, completely demolishing our inner \emph{sāla}. Tigers and other forest cats are still being sighted, and recently a protective mother bear rushed at one of our monks high up on a mountain after her cub had wandered towards him. (Admittedly the monk was using a mobile phone at the time, which might have alarmed her even more. Both the monk and the monastery Nokia survived unscathed). A rustling in the bushes may turn out to be the reclusive Dtao Dam itself (a six-legged black turtle), a porcupine with eight-inch quills or the bizarre Malayan tapir, a cow-sized beast resembling a combination of elephant and rhino. These incidents, though, are very rare. What are experienced daily are the beautiful singing of gibbons and the chatter of monkeys. A strange rhythmic whooshing sound high in the sky will be the flight of a giant hornbill, or even a pair mated for life, as they fly through the valleys looking for their regular spots to feast on figs and insects in the tree canopy. All this and more, set to music: night and day the jungle noises beat out a samba to rival any carnival. The streams are still flowing, and the waterfall cascades down several levels before levelling off at the foot of the mountain on the top of which the monks dwell. 
The sun rises in the east over Thailand and sets in the west over Burma; the timeless rhythm of nature undisturbed by man's whims and fancies. Long may it be so.
{ "alphanum_fraction": 0.7949042284, "avg_line_length": 59.5969230769, "ext": "tex", "hexsha": "b8ce853be085e7dee29f062df2709919caa47615", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8c70e360908fe326f497be340e6e720bce3150b4", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "profound-labs/forest-path", "max_forks_repo_path": "manuscript/tex/dtao-dam.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8c70e360908fe326f497be340e6e720bce3150b4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "profound-labs/forest-path", "max_issues_repo_path": "manuscript/tex/dtao-dam.tex", "max_line_length": 89, "max_stars_count": 1, "max_stars_repo_head_hexsha": "8c70e360908fe326f497be340e6e720bce3150b4", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "profound-labs/forest-path", "max_stars_repo_path": "manuscript/tex/dtao-dam.tex", "max_stars_repo_stars_event_max_datetime": "2017-05-14T17:09:16.000Z", "max_stars_repo_stars_event_min_datetime": "2017-05-14T17:09:16.000Z", "num_tokens": 9289, "size": 38738 }
%%%%%%%%%%%%%%%%%%%% author.tex %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % sample root file for your "contribution" to a contributed volume % % Use this file as a template for your own input. % %%%%%%%%%%%%%%%% Springer %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % RECOMMENDED %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentclass[graybox]{svmult} % choose options for [] as required from the list % in the Reference Guide \usepackage{type1cm} % activate if the above 3 fonts are % not available on your system % \usepackage{makeidx} % allows index generation \usepackage{graphicx} % standard LaTeX graphics tool % when including figure files \usepackage{multicol} % used for the two-column index \usepackage[bottom]{footmisc}% places footnotes at page bottom \usepackage{newtxtext} % \usepackage{newtxmath} % selects Times Roman as basic font \usepackage{url} %%Nirav added \newenvironment{spmatrix}[1] {\def\mysubscript{#1}\mathop\bgroup\begin{pmatrix}} {\end{pmatrix}\egroup_{\textstyle\mathstrut\mysubscript}} \DeclareMathOperator{\Tr}{Tr} \DeclareMathOperator{\spn}{span} \usepackage{bm} \usepackage{tikz} \usepackage{wrapfig} \usepackage[export]{adjustbox} \usepackage{subcaption} \usepackage[numbers]{natbib} \usepackage{float} \captionsetup{compatibility=false} %%Nirav added over % see the list of further useful packages % in the Reference Guide \makeindex % used for the subject index % please use the style svind.ist with % your makeindex program %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \title{Discontinuous Galerkin Model Order Reduction of Geometrically Parametrized Stokes Equation} % Use \titlerunning{Short Title} for an abbreviated version of % your contribution title if the original one is too long \author{Nirav Vasant Shah, Martin Hess and Gianluigi Rozza} % Use \authorrunning{Short Title} for an abbreviated version of % your contribution title if the original one is too long \institute{Nirav Vasant Shah, \email{[email protected]} \at Scuola Internazionale Superiore di Studi Avanzati - via Bonomea, 265 - 34136 Trieste ITALY, \and Martin Hess, \email{[email protected]} \at Scuola Internazionale Superiore di Studi Avanzati - via Bonomea, 265 - 34136 Trieste ITALY, \and Gianluigi Rozza, \email{[email protected]} \at Scuola Internazionale Superiore di Studi Avanzati - via Bonomea, 265 - 34136 Trieste ITALY. } \titlerunning{DG MOR of Geometrically Parametrized Stokes Equation} \authorrunning{Shah et. al.} % % Use the package "url.sty" to avoid % problems with special characters % used in your e-mail or web address % \maketitle \abstract*{The present work focuses on the geometric parametrization and the reduced order modeling of the Stokes equation. We discuss the concept of a parametrized geometry and its application within a reduced order modeling technique. The full order model is based on the discontinuous Galerkin method with an interior penalty formulation. We introduce the broken Sobolev spaces as well as the weak formulation required for an affine parameter dependency. The operators are transformed from a fixed domain to a parameter dependent domain using the affine parameter dependency. The proper orthogonal decomposition is used to obtain the basis of functions of the reduced order model. By using the Galerkin projection the linear system is projected onto the reduced space. During this process, the offline-online decomposition is used to separate parameter dependent operations from parameter independent operations. 
Finally, this technique is applied to an obstacle test problem. The numerical outcomes presented include experimental error analysis, eigenvalue decay and measurement of online simulation time.\\
\textbf{Keywords} Discontinuous Galerkin method, Stokes flow, Geometric parametrization, Proper orthogonal decomposition}

\abstract{The present work focuses on the geometric parametrization and the reduced order modeling of the Stokes equation. We discuss the concept of a parametrized geometry and its application within a reduced order modeling technique. The full order model is based on the discontinuous Galerkin method with an interior penalty formulation. We introduce the broken Sobolev spaces as well as the weak formulation required for an affine parameter dependency. The operators are transformed from a fixed domain to a parameter dependent domain using the affine parameter dependency. The proper orthogonal decomposition is used to obtain the basis of functions of the reduced order model. By using the Galerkin projection the linear system is projected onto the reduced space. During this process, the offline-online decomposition is used to separate parameter dependent operations from parameter independent operations. Finally, this technique is applied to an obstacle test problem. The numerical outcomes presented include experimental error analysis, eigenvalue decay and measurement of online simulation time.\\
\textbf{Keywords} Discontinuous Galerkin method, Stokes flow, Geometric parametrization, Proper orthogonal decomposition}

\section{Introduction}
\label{introduction}
%The subject of the mathematical applications in fluid mechanics starts with one of the variants of the Navier-Stokes equation. In case of the laminar flow, i.e. when fluctuations are negligible, this linearized form of the Navier-Stokes equation is the Stokes equation.
The Discontinuous Galerkin Method (DGM) has gained interest as a numerical method for elliptic problems~\cite{peraire} as well as hyperbolic problems~\cite{hyperbolic}. The DGM uses polynomial approximations of a suitable degree, which provides higher accuracy, and it allows discontinuities at the element interfaces through the concept of a numerical flux, allowing greater flexibility. Additionally, since the Dirichlet conditions are applied as a boundary penalty, it avoids the need to construct a subspace of the Sobolev space. Geometric parametrization has emerged as an important application of Parametric Partial Differential Equations (PPDEs) and as an alternative to shape optimization. The concept of geometric parametrization allows operators evaluated on one geometric domain to be transferred efficiently to another geometric domain. Model Order Reduction (MOR), on the other hand, reduces the size of the system to be solved by working with a smaller system containing only the dominant components. The faster computations obtained by MOR have helped in many-query contexts, real-time computation and the quick transfer of computational results to industrial problems. As is evident from the above advantages, the application of geometric parametrization and reduced order modeling to the discontinuous Galerkin method will remain at the forefront of scientific work.

The present work is organized as follows. We first explain the concept of geometric parametrization. Thereafter, the governing equations, the broken Sobolev spaces and the weak formulation are stated. The affine expansion, key to the online-offline decomposition, is introduced next.
In the subsequent section, proper orthogonal decomposition with the online-offline decomposition is briefly described. Finally, an obstacle test problem demonstrates the application of the introduced method, with outcomes involving a comparison of the full order and reduced order model solutions, an error analysis and the eigenvalue decay.

\section{Geometric parametrization}\label{geometric_parametrization_section}
Let us consider $\Omega = \Omega(\mu) \subset \mathbb{R}^d$ as an open bounded domain. The parameter tuple $\mu \in \mathbb{P}$, where $\mathbb{P}$ is the parameter space, completely characterizes the domain. Also, consider a parameter tuple $\bar{\mu} \in \mathbb{P}$ as the known parameter tuple and $\Omega(\bar{\mu})$ as the reference domain, whose configuration is completely known. The invertible mapping $\bm{F}(\cdot,\mu) : \Omega(\bar{\mu}) \rightarrow \Omega(\mu)$ links the reference domain and the parametrized domain. In the case of an affine transformation, $\bm{F}$ is of the form
\begin{equation*}\label{affine_F}
\begin{split}
x = \bm{F}(\hat{x},\mu) = \bm{G}_F(\mu)\hat{x} + c_F(\mu) \ ; \\
\forall x \in \Omega(\mu) \ , \ \hat{x} \in \Omega(\bar{\mu}) \ , \ \bm{G}_F(\mu) \in \mathbb{R}^{d \times d} \ , \ c_F \in \mathbb{R}^{d \times 1} \ .
\end{split}
\end{equation*}
The boundary of $\Omega(\mu)$, that is $\partial \Omega(\mu)$, is divided into a Neumann boundary $\Gamma_N(\mu)$ and a Dirichlet boundary $\Gamma_D(\mu)$, i.e. $\partial \Omega(\mu) = \Gamma_N(\mu) \cup \Gamma_D(\mu)$. In order for $\bm{F}(\hat{x},\mu)$ to have an affine form, the domain $\Omega(\mu)$ is divided into $n_{su}$ triangular subdomains such that $\Omega(\mu) = \bigcup\limits_{i=1}^{n_{su}} \Omega_i(\mu) \ , \ \Omega_i(\mu) \bigcap \Omega_j(\mu) = \emptyset \ , \ \text{for} \ i \neq j$.

\section{Discontinuous Galerkin formulation}
\label{DG_formulation}
The domain $\Omega$ is divided into $N_{el}$ triangular elements $\tau_k$ such that $\Omega = \bigcup\limits_{k=1}^{N_{el}} \tau_k$. The triangulation $\mathcal{T}$ is the set of all triangular elements, i.e. $\mathcal{T} = \lbrace \tau_k \rbrace_{k=1}^{N_{el}}$. The internal boundary is denoted by $\Gamma = \bigcup\limits_{k=1}^{N_{el}} \partial \tau_k \backslash \partial \Omega$. $\overrightarrow{n}$ is the outward pointing normal to an element edge. The governing equations in strong form can be stated as
\begin{flalign}\label{stokes_strong_form}
\begin{split}
\text{Stokes equation: } & -\nu \Delta \overrightarrow{u} + \nabla p = \overrightarrow{f} \ , \ \text{in } \Omega \ , \\
\text{Continuity equation: } & \nabla \cdot \overrightarrow{u} = 0 \ , \ \text{in} \ \Omega \ , \\
\text{Dirichlet condition: } & \overrightarrow{u} = \overrightarrow{u}_D \ , \ \text{on } \Gamma_D \ , \\
\text{Neumann condition: } & -p \overrightarrow{n} + \nu \overrightarrow{n} \cdot \nabla \overrightarrow{u} = \overrightarrow{t} \ , \ \text{on} \ \Gamma_N \ .
\end{split}
\end{flalign}
The velocity vector field $\overrightarrow{u}$ and the pressure scalar field $p$ are the unknowns. $\nu$ is the material property known as the kinematic viscosity. The vector $\overrightarrow{f}$ is the external force term or source term. $\overrightarrow{u}_D$ is the Dirichlet velocity and the vector $\overrightarrow{t}$ is the Neumann value.
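Before introducing the discrete spaces, it may help to make the affine map of Section~\ref{geometric_parametrization_section} concrete. The following minimal sketch (plain Python with NumPy, not code used in this work) recovers $\bm{G}_F(\mu)$ and $c_F(\mu)$ for a single triangular subdomain from its reference and parametrized vertices; the vertices below reuse the obstacle triangle of the numerical example later in the paper, with the tip moved from $(0.5,0.3)$ to a hypothetical position $(0.47,0.33)$, and the choice of subdomain is purely illustrative rather than the actual decomposition employed here.
\begin{verbatim}
import numpy as np

# Vertices of one triangular subdomain (illustrative values).
ref = np.array([[0.3, 0.0], [0.5, 0.3], [0.7, 0.0]])    # on Omega(mu_bar)
par = np.array([[0.3, 0.0], [0.47, 0.33], [0.7, 0.0]])  # on Omega(mu)

# Edge vectors as columns: G_F maps reference edges onto parametrized edges.
E_ref = np.column_stack((ref[1] - ref[0], ref[2] - ref[0]))
E_par = np.column_stack((par[1] - par[0], par[2] - par[0]))

G_F = E_par @ np.linalg.inv(E_ref)   # x = G_F xhat + c_F
c_F = par[0] - G_F @ ref[0]

# Check: every reference vertex is mapped onto its parametrized counterpart.
for xhat, x in zip(ref, par):
    assert np.allclose(G_F @ xhat + c_F, x)
print(G_F, c_F)
\end{verbatim}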
Let us introduce the broken Sobolev spaces for the unknowns:
\begin{equation*} \label{velocity_pressure_test}
\begin{split}
\text{For velocity: } \mathbb{V} = \lbrace \overrightarrow{\phi} \in (L^2(\Omega))^d | \ \overrightarrow{\phi} |_{\tau_k} \in (P^D(\tau_k))^d \ , \ \tau_k \in \mathcal{T} \rbrace \ , \\
\text{For pressure: } \mathbb{Q} = \lbrace \psi \in (L^2(\Omega)) | \ \psi |_{\tau_k} \in (P^{D-1}(\tau_k)) \ , \ \tau_k \in \mathcal{T} \rbrace \ .
\end{split}
\end{equation*}
Here, $P^D(\tau_k)$ denotes the space of polynomials of degree $D$, $D \geq 2$, over $\tau_k$. In the finite dimensional, or discrete, system, the velocity approximation $\overrightarrow{u}_h(x)$ and the pressure approximation $p_h(x)$ at any point $x \in \Omega$ are given by
\begin{equation}\label{velocity_pressure_coefficients}
\overrightarrow{u}_h(x) = \sum\limits_{i=1}^{u_{ndofs}} \overrightarrow{\phi}_i \hat{u}_i \ , \ p_h(x) = \sum\limits_{i=1}^{p_{ndofs}} \psi_i \hat{p}_i \ ,
\end{equation}
where the $\hat{u}_i$ and $\hat{p}_i$ are the coefficients of the velocity and pressure basis functions respectively. We expect that $\overrightarrow{u}_h \rightarrow \overrightarrow{u}$ and $p_h \rightarrow p$ as $u_{ndofs} \rightarrow \infty$ and $p_{ndofs} \rightarrow \infty$ respectively. Considering the scope of the present work, the convergence analysis will not be discussed here. The readers are advised to refer to \cite{pacciarini}, \cite{jump_mean_operator}, \cite{riviere}.
In the subsequent sections, $\left( \cdot \right),\left( \cdot \right)_{\Gamma_D},\left( \cdot \right)_{\Gamma_N},\left( \cdot \right)_{\Gamma}$ represent the $L^2$ scalar product over $\Omega,\Gamma_D,\Gamma_N,\Gamma$ respectively. The jump operator $\left[ \cdot \right]$ and the average operator $\lbrace \cdot \rbrace$ are important concepts in the DGM formulation and are required to approximate the numerical flux. We use the jump and average operators as represented in \cite{jump_mean_operator}. The weak form of the Stokes equation is given by
\begin{gather}\label{stokes_weak_ch3}
a_{IP}(\overrightarrow{u},\overrightarrow{\phi}) + b(\overrightarrow{\phi},p) + \left( \lbrace p \rbrace,[\overrightarrow{n} \cdot \overrightarrow{\phi}] \right)_{\Gamma \cup \Gamma_D} = l_{IP}(\overrightarrow{\phi}) \ ,
\end{gather}
\begin{equation}
\begin{split}
a_{IP}(\overrightarrow{u},\overrightarrow{\phi}) = \nu \left( \nabla \overrightarrow{u}, \nabla \overrightarrow{\phi} \right) + C_{11} \left( [\overrightarrow{u}],[\overrightarrow{\phi}] \right)_{\Gamma \cup \Gamma_D} \\
- \nu \left( \lbrace \nabla \overrightarrow{u}\rbrace ,[\overrightarrow{n} \otimes \overrightarrow{\phi}] \right)_{\Gamma \cup \Gamma_D} - \nu \left( [\overrightarrow{n} \otimes \overrightarrow{u}], \lbrace \nabla \overrightarrow{\phi} \rbrace \right)_{\Gamma \cup \Gamma_D} \ ,
\end{split}
\end{equation}
\begin{gather}
b(\overrightarrow{\phi},\psi) = -\int_{\Omega} \psi \nabla \cdot \overrightarrow{\phi} \ , \\
l_{IP}(\overrightarrow{\phi}) = \left( \overrightarrow{f},\overrightarrow{\phi} \right) + \left( \overrightarrow{t},\overrightarrow{\phi} \right)_{\Gamma_N} + C_{11} \left(\overrightarrow{u}_D,\overrightarrow{\phi}\right)_{\Gamma_D} - \left( \overrightarrow{n} \otimes \overrightarrow{u}_D, \nu \nabla \overrightarrow{\phi} \right)_{\Gamma_D} \ .
\end{gather}
The penalty parameter $C_{11}>0$ is an empirical constant to be kept large enough to maintain the coercivity of $a_{IP}(\overrightarrow{u},\overrightarrow{\phi})$ (see \cite{jump_mean_operator}).
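To make the face operators entering $a_{IP}$ concrete, the following minimal sketch (plain Python with NumPy; the trace values, the normal and the viscosity are hypothetical, and the operator definitions follow the standard conventions of the cited literature rather than any code used in this work) evaluates the average of the velocity gradient and the jumps of the velocity across a single interior face shared by two elements, together with the integrand of one consistency term at a single quadrature point (the surface integration itself is omitted).
\begin{verbatim}
import numpy as np

# Hypothetical traces on one interior face of a 2D mesh.
u_p = np.array([1.0, 0.2])            # velocity trace from element "+"
u_m = np.array([0.8, 0.1])            # velocity trace from element "-"
G_p = np.array([[0.5, 0.0],
                [0.1, 0.3]])          # grad(u) trace from element "+"
G_m = np.array([[0.4, 0.0],
                [0.0, 0.2]])          # grad(u) trace from element "-"
n_p = np.array([1.0, 0.0])            # unit outward normal of element "+"
n_m = -n_p                            # outward normals are opposite on a face

# Average {.}: arithmetic mean of the two traces.
avg_grad_u = 0.5 * (G_p + G_m)

# Jumps: [n (x) u] = n+ (x) u+ + n- (x) u-  (a d x d matrix), and the
# vector jump u+ - u- entering the penalty term (one common convention).
jump_n_outer_u = np.outer(n_p, u_p) + np.outer(n_m, u_m)
jump_u = u_p - u_m

# Integrand of the consistency term -nu ({grad u}, [n (x) phi]) at one
# quadrature point, pairing the two matrices with the Frobenius product.
nu = 1.0e-3                           # hypothetical kinematic viscosity
consistency = -nu * np.sum(avg_grad_u * jump_n_outer_u)
print(jump_u, consistency)
\end{verbatim}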
The weak form of the continuity equation is as follows,
\begin{equation}\label{contiuity_weak_ch3}
\begin{split}
b(\overrightarrow{u},\psi) + ({\psi},[\overrightarrow{n} \cdot \overrightarrow{u}])_{\Gamma \cup \Gamma_D} = (\psi,\overrightarrow{n} \cdot \overrightarrow{u}_D)_{\Gamma_D} \ .
\end{split}
\end{equation}
In discrete form, the system of equations can be written as
\begin{equation} \label{Stokes_matrix_ch3}
\begin{spmatrix}{\textrm{Stiffness matrix}} \bm{A} & \bm{B} \\ \bm{B}^T & \bm{0} \end{spmatrix}
\begin{spmatrix}{\textrm{Solution vector}} U \\ P \end{spmatrix} =
\begin{spmatrix}{\textrm{Right hand side (Known)}} F_1 \\ F_2 \end{spmatrix} \textrm{.}
\end{equation}
Here, $\bm{A}_{ij} = a_{IP} (\overrightarrow{\phi}_i,\overrightarrow{\phi}_j)$, $\bm{B}_{ij} = b(\overrightarrow{\phi}_i,\psi_j) + \left( \lbrace \psi_j \rbrace , [\overrightarrow{n} \cdot \overrightarrow{\phi}_i]\right)_{\Gamma \cup \Gamma_D}$, $F_1 = l_{IP}(\overrightarrow{\phi}_i)$ and $F_2 = \left( \psi_j,\overrightarrow{n} \cdot \overrightarrow{u}_D \right)_{\Gamma_D}$ for $i=1,\ldots,u_{ndofs}$ and $j=1,\ldots,p_{ndofs}$. The column vectors $U$ and $P$ collect the coefficients $\hat{u}_i$ and $\hat{p}_i$ respectively (equation \eqref{velocity_pressure_coefficients}).

\section{Affine expansion}
We evaluate and solve the weak formulation of the Stokes equation on the reference domain $\Omega({\bar{\mu}})$. Given a parameter tuple $\mu \neq \bar{\mu}$, we need to evaluate the linear system of equations \eqref{Stokes_matrix_ch3} on a new domain $\Omega(\mu)$. To accomplish this, we use the affine expansion, exploiting the linear nature of the equations and dividing $\Omega(\bar{\mu})$ into triangular subdomains $\Omega_i(\bar{\mu}) \ , \ i \in \lbrace 1,2,\ldots,n_{su} \rbrace$, as explained earlier in Section \ref{geometric_parametrization_section}. The affine expansion of operators is essentially a change of variables and has been explained in the literature, e.g. \cite{CRBM}. However, it is pertinent to explain two expansions specific to the DGM formulation.
\begin{itemize}
\item In order to transfer the terms containing the jump and average operators, the following approach is used in the present analysis:
\begin{equation*}\label{jump_average_term_split}
\begin{split}
\left(\lbrace \nabla \overrightarrow{\phi} \rbrace , \left[ \overrightarrow{n} \otimes \overrightarrow{\phi} \right] \right) = \left( \nabla \overrightarrow{\phi}^+ , \overrightarrow{n}^+ \otimes \overrightarrow{\phi}^+ \right) + \left( \nabla \overrightarrow{\phi}^+ , \overrightarrow{n}^- \otimes \overrightarrow{\phi}^- \right) + \\
\left( \nabla \overrightarrow{\phi}^- , \overrightarrow{n}^+ \otimes \overrightarrow{\phi}^+ \right) + \left( \nabla \overrightarrow{\phi}^- , \overrightarrow{n}^- \otimes \overrightarrow{\phi}^- \right) \ .
\end{split}
\end{equation*}
Each term on the right hand side of the above equation can be transformed using the affine map.
\item The coercivity term $C_{11}\left( [\overrightarrow{\phi}],[\overrightarrow{u}] \right)_{\Gamma \cup \Gamma_D}$ is not transformed but is used as evaluated on the reference domain $\Omega(\bar{\mu})$.
The affine transformation is given by
\begin{equation*}
\begin{split}
C_{11}\left( [\overrightarrow{\phi}(x)],[\overrightarrow{u}(x)] \right)_{\Gamma(\mu) \cup \Gamma_D(\mu)} = C_{11} \alpha \left( [\overrightarrow{\phi}(\bm{F}(\hat{x}))],[\overrightarrow{u}(\bm{F}(\hat{x}))] \right)_{\Gamma(\bar{\mu}) \cup \Gamma_D(\bar{\mu})} \ , \\
\alpha = \frac{\text{length of }\left( \Gamma(\mu) \cup \Gamma_D(\mu)\right)}{\text{length of }\left( \Gamma(\bar{\mu}) \cup \Gamma_D(\bar{\mu})\right)} \ , \ \hat{x} \in \Omega(\bar{\mu}) \ , \ x \in \Omega(\mu) \ .
\end{split}
\end{equation*}
Since $C_{11}$ is an empirical coefficient, replacing $C_{11} \alpha$ with $C_{11}$ will not change the formulation as long as the coercivity of $a_{IP}(\overrightarrow{u},\overrightarrow{\phi})$ over the parameter space $\mathbb{P}$ is maintained.
\end{itemize}

\section{Reduced basis method}\label{rb_section}
In this section, the snapshot proper orthogonal decomposition method and the offline-online decomposition are briefly described. For a detailed explanation, we refer to~\cite{CRBM}. As a first step, the solutions for $\mu_n \ , \ n \in \lbrace 1,\ldots,n_s \rbrace$ are calculated, i.e. $n_s$ snapshots are generated. The velocity snapshots and the pressure snapshots are stored in $\bm{S}_v \in \mathbb{R}^{u_{ndofs} \times n_s}$ and $\bm{S}_p \in \mathbb{R}^{p_{ndofs} \times n_s}$ respectively. Let us also introduce the inner product matrices $\bm{M}_v \in \mathbb{R}^{u_{ndofs} \times u_{ndofs}}$ and $\bm{M}_p \in \mathbb{R}^{p_{ndofs} \times p_{ndofs}}$:
\begin{gather*}
\bm{M}_{v,ij} = \int_{\Omega} \overrightarrow{\phi}_i \cdot \overrightarrow{\phi}_j + \sum_{k=1}^{N_{el}} \int_{\tau_k} \nabla \overrightarrow{\phi}_i : \nabla \overrightarrow{\phi}_j \ , \ i,j = 1, \ldots, u_{ndofs} \ , \\
\bm{M}_{p,ij} = \int_{\Omega} \psi_i \psi_j \ , \ i,j = 1, \ldots, p_{ndofs} \ .
\end{gather*}
The dimension of the reduced basis is denoted by $N$ and it is asserted that $N \ll u_{ndofs} \ , \ N < n_s$. Next, the spectral decomposition of the snapshot correlation matrix is performed,
\begin{equation}\label{snapshot_eigen_value}
\bm{S}_v^T \bm{M}_v \bm{S}_v = \bm{V} \bm{\Theta} \bm{V}^T \ .
\end{equation}
The columns of $\bm{V}$ are the eigenvectors and $\bm{\Theta}$ contains the eigenvalues $\theta_i$ in sorted order ($\theta_1 \geq \ldots \geq \theta_{n_s}$), such that $\Theta_{ij} = \theta_i \delta_{ij} \ , \ 1 \leq i,j \leq n_s$. The projection matrix $\bm{B}_v \in \mathbb{R}^{u_{ndofs} \times N}$, used for the projection from the space of the full order model to the space of the reduced order model, is given by
\begin{equation}
\bm{B}_v = \bm{S}_v \bm{V} \bm{\Theta}^{-\frac{1}{2}} \bm{R} \ , \ \bm{R} = [\bm{I}_{N \times N} ; \bm{0}_{(n_s-N) \times N}] \ ,
\end{equation}
where $\bm{I}_{N \times N}$ is the identity matrix of size $N \times N$. The reduced basis $\bm{B}_p$ can be generated in a similar manner using the pressure snapshots $\bm{S}_p$ and the inner product matrix $\bm{M}_p$. The above procedure is performed during the offline phase.
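As an illustration of the snapshot POD just described, the following minimal sketch (plain Python with NumPy, not the RBmatlab code used for the experiments; random data of hypothetical size stands in for the velocity snapshots, and the inner product matrix is taken as the identity for brevity) constructs the projection matrix $\bm{B}_v$ and checks its orthonormality with respect to $\bm{M}_v$. The pressure basis $\bm{B}_p$ would be obtained in the same way from $\bm{S}_p$ and $\bm{M}_p$.
\begin{verbatim}
import numpy as np

# Hypothetical sizes and data standing in for the actual DGM snapshots.
ndofs, n_s, N = 200, 30, 10
rng = np.random.default_rng(0)
S = rng.standard_normal((ndofs, n_s))      # snapshot matrix S_v
M = np.eye(ndofs)                          # inner product matrix M_v

# Spectral decomposition of the correlation matrix, S^T M S = V Theta V^T.
C = S.T @ M @ S
theta, V = np.linalg.eigh(C)               # eigh returns ascending order
order = np.argsort(theta)[::-1]            # re-sort in descending order
theta, V = theta[order], V[:, order]

# Projection matrix B_v = S V Theta^{-1/2} R, keeping the first N modes.
B = S @ V[:, :N] / np.sqrt(theta[:N])

# The columns of B are orthonormal with respect to M.
print(np.allclose(B.T @ M @ B, np.eye(N)))   # expected: True
\end{verbatim}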
The discrete system of equations is projected onto the reduced basis space by Galerkin projection as
\begin{equation} \label{Stokes_matrix_reduced}
\begin{spmatrix}{\tilde{K}} \bm{B}_v^T \bm{A}(\mu) \bm{B}_v & \bm{B}_v^T \bm{B}(\mu) \bm{B}_p \\ \bm{B}_p^T \bm{B}(\mu)^T \bm{B}_v & \bm{0} \end{spmatrix}
\begin{spmatrix}{\zeta} U_N \\ P_N \end{spmatrix} =
\begin{spmatrix}{\tilde{F}} \bm{B}_v^T F_1(\mu) \\ \bm{B}_p^T F_2(\mu) \end{spmatrix} \ .
\end{equation}
The solution vectors $U$ and $P$ (equation \eqref{Stokes_matrix_ch3}) are then computed as $U = \bm{B}_v U_N \ , \ P = \bm{B}_p P_N$. The projection onto the reduced basis space, the solution of the smaller system of equations and the computation of $U$ and $P$ are the steps performed during the online phase.

\section{A numerical example}
The numerical experiments were performed using RBmatlab~\cite{rbmatlab},~\cite{master_thesis}. The reference domain $\Omega({\bar{\mu}})$ is the unit square domain $[0,1] \times [0,1]$ with a triangle with vertices $(0.3,0),(0.5,0.3),(0.7,0)$ as an obstacle. The geometric parameters are the coordinates of the tip of the obstacle, i.e. $\bar{\mu} = (0.5,0.3)$. The boundary ${x=0}$ is a Dirichlet boundary with the inflow velocity at a point $(0,y)$ given by $\overrightarrow{u} = (y(1-y), 0)$. The boundary ${x = 1}$ is a Neumann boundary with zero Neumann value, i.e. $\overrightarrow{t} = (0, 0)$. The other boundaries are Dirichlet boundaries with a no-slip condition. The source term is $\overrightarrow{f} = (0,0)$. The training set contained $100$ uniformly distributed random parameters within the range $[0.4,0.6] \times [0.4,0.6]$. The test set contained $10$ uniformly distributed random parameters within the range $[0.4,0.6] \times [0.4,0.6]$. For the velocity, basis function polynomials of degree $D = 2$ were used, and for the pressure, polynomials of degree $D-1 = 1$. The number of velocity degrees of freedom and pressure degrees of freedom were $u_{ndofs} = 4704$ and $p_{ndofs} = 1176$ respectively. Figure \ref{dg_rb_solution_47_33} compares the solutions computed by DGM and Reduced Basis (RB) at the parameter value $\mu = (0.47,0.33)$ with a reduced basis of size $10$. The drop in error with respect to the increased size of the reduced basis space (Figure \ref{error_vs_basis}) is in line with the expectation based on the eigenvalue decay (Figure \ref{ev_decay}). The average speedup was $20.6$. Typically, during the offline phase, the full order system was assembled in $35.37$ seconds and was solved in $6.74$ seconds. During the online phase, the reduced system was assembled in $2.03$ seconds and was solved in $0.009$ seconds.

\section{Some concluding remarks}
As demonstrated by the numerical example, proper orthogonal decomposition can accelerate computations involving a geometrically parametrized discontinuous Galerkin interior penalty formulation while maintaining the reliability of the solution above a minimum acceptable limit. The paper also discussed the specific issues related to the geometric parametrization and the affine expansion as they pertain to the discontinuous Galerkin interior penalty formulation. We expect the current work to contribute towards exploring the further potential of geometric parametrization and the reduced basis approach for the discontinuous Galerkin method.

\begin{figure}[H] %[t!]
% "[t!]" placement specifier just for this example \begin{subfigure}{0.31\textwidth} \includegraphics[width=\linewidth]{offline_velocity_1_at_47_33.jpg} \caption{Velocity $x-$direction DGM solution} \label{vel_x_dg} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.31\textwidth} \includegraphics[width=\linewidth]{online_velocity_1_at_47_33.jpg} \caption{Velocity $x-$direction RB solution} \label{vel_x_rb} \end{subfigure} \begin{subfigure}{0.31\textwidth} \includegraphics[width=\linewidth]{velocity_error_1_at_47_33.jpg} \caption{$x-$component of Velocity absolute error $\overrightarrow{u}_h-\overrightarrow{u}_N$} \label{error_x_vel} \end{subfigure} \begin{subfigure}{0.31\textwidth} \includegraphics[width=\linewidth]{offline_velocity_2_at_47_33.jpg} \caption{Velocity $y-$direction DGM solution} \label{vel_y_dg} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.31\textwidth} \includegraphics[width=\linewidth]{online_velocity_2_at_47_33.jpg} \caption{Velocity $y-$direction RB solution} \label{vel_y_rb} \end{subfigure} \begin{subfigure}{0.31\textwidth} \includegraphics[width=\linewidth]{velocity_error_2_at_47_33.jpg} \caption{$y-$component of Velocity absolute error $\overrightarrow{u}_h-\overrightarrow{u}_N$} \label{error_y_vel} \end{subfigure} %\begin{subfigure}{0.31\textwidth} %\includegraphics[width=\linewidth]{offline_pressure_at_47_33.jpg} %\caption{Pressure DGM solution} \label{pre_dg} %\end{subfigure}\hspace*{\fill} %\begin{subfigure}{0.31\textwidth} %\includegraphics[width=\linewidth]{online_pressure_at_47_33.jpg} %\caption{Pressure RB solution} \label{pre_rb} %\end{subfigure} %\begin{subfigure}{0.31\textwidth} %\includegraphics[width=\linewidth]{pressure_error_at_47_33.jpg} %\caption{Pressure absolute error $p_h-p_N$} \label{pre_error} %\end{subfigure} \caption{DGM and RB solution $\mu = (0.47,0.33)$} \label{dg_rb_solution_47_33} \end{figure} \begin{figure}[H] \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{size_vs_reduced_basis_velocity_error_semilog.jpg} \caption{Size of the reduced basis space vs. Relative error in velocity with inner product induced by $\bm{M}_v$} \label{error_vs_basis_velocity} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{size_vs_reduced_basis_pressure_error_semilog.jpg} \caption{Size of the reduced basis space vs. Relative error in pressure with inner product induced by $\bm{M}_p$} \label{error_vs_basis_pressure} \end{subfigure} \caption{Size of the reduced basis space vs Relative error} \label{error_vs_basis} \end{figure} \begin{figure}[H] \begin{subfigure}{0.31\textwidth} \includegraphics[width=\linewidth]{x_velocity_eigen_value_semilog.jpg} \caption{$x-$Velocity eigenvalues (semilog scale)} \label{vel_x_ev} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.31\textwidth} \includegraphics[width=\linewidth]{y_velocity_eigen_value_semilog.jpg} \caption{$y-$Velocity eigenvalues (semilog scale)} \label{vel_y_ev} \end{subfigure} \begin{subfigure}{0.31\textwidth} \includegraphics[width=\linewidth]{pressure_eigen_value_semilog.jpg} \caption{Pressure eigenvalues (semilog scale)} \label{pressure_ev} \end{subfigure} \caption{Eigenvalue decay}\label{ev_decay} \end{figure} \bibliographystyle{spbasic} \bibliography{references} %\input{authorsample.bbl} \end{document}
{ "alphanum_fraction": 0.7361007602, "avg_line_length": 79.1622418879, "ext": "tex", "hexsha": "34af34130efcbf69bd268d7db236327710befec8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ae1d94605296e5851262d5a424f3a3864a383c1f", "max_forks_repo_licenses": [ "AFL-3.0" ], "max_forks_repo_name": "niravshah241/valorisation_work", "max_forks_repo_path": "nirav/enumath_work/review_submission/contributed-books/author/authorsample.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ae1d94605296e5851262d5a424f3a3864a383c1f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "AFL-3.0" ], "max_issues_repo_name": "niravshah241/valorisation_work", "max_issues_repo_path": "nirav/enumath_work/review_submission/contributed-books/author/authorsample.tex", "max_line_length": 1110, "max_stars_count": 2, "max_stars_repo_head_hexsha": "ae1d94605296e5851262d5a424f3a3864a383c1f", "max_stars_repo_licenses": [ "AFL-3.0" ], "max_stars_repo_name": "niravshah241/valorisation_work", "max_stars_repo_path": "nirav/enumath_work/1st_submission/contributed-books/author/authorsample.tex", "max_stars_repo_stars_event_max_datetime": "2020-11-27T13:16:45.000Z", "max_stars_repo_stars_event_min_datetime": "2020-11-27T13:15:55.000Z", "num_tokens": 7639, "size": 26836 }
\documentclass{article} \usepackage[utf8]{inputenc} %\usepackage[T1]{fontenc} \usepackage{tikz} \usepackage{amsthm} \usepackage{amsmath} \usepackage{amssymb} \usepackage{mathtools} \usepackage{graphicx} \newtheorem*{definition*}{Definition} \newtheorem*{property*}{Property} \newcommand\sbullet[1][.5]{\mathbin{\vcenter{\hbox{\scalebox{#1}{$\bullet$}}}}} \DeclarePairedDelimiter{\norm}{\lVert}{\rVert} \DeclarePairedDelimiter{\abs}{|}{|} \usepackage{hyperref} \hypersetup{ colorlinks=true, linkcolor=blue, filecolor=magenta, urlcolor=cyan, } \author{phunc20} \title{Backprop: A Simple Case} %\date{21 January, 2022} %\date{January 22, 2022} \date{January 21, 2021} \begin{document} \maketitle %\tableofcontents %\begin{abstract} %\end{abstract} \section{Try to Find $\frac{\partial{L}}{\partial{W_1}}$} Let's try to find a typical gradient, say $\frac{\partial{L}}{\partial{W_1}}$, and hope that the other gradients could be computed in a similar manner. \begin{tikzpicture}[level distance=10mm] \tikzstyle{every node}=[fill=red!60,circle,inner sep=1pt] \tikzstyle{level 1}=[sibling distance=20mm, set style={{every node}+=[fill=red!45]}] \tikzstyle{level 2}=[sibling distance=10mm, set style={{every node}+=[fill=red!30]}] \tikzstyle{level 3}=[sibling distance=5mm, set style={{every node}+=[fill=red!15]}] \node {23} child {node {17} child {node {6} child {node {5}} child {node {7}} } child {node {13} child {node {12}} child[fill=none] {edge from parent[draw=none]} } } child {node {14} child {node {10}} child {node {1}} }; \end{tikzpicture} \end{document}
{ "alphanum_fraction": 0.6660777385, "avg_line_length": 22.3421052632, "ext": "tex", "hexsha": "43858dd9c4c24a3f72ebd71efbed4b41f04e075d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "04674829311cde7bb173252b8a41620aae4b14ba", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "phunc20/algorithms", "max_forks_repo_path": "CLRS/ch06-heapsort/01-heap/exos.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "04674829311cde7bb173252b8a41620aae4b14ba", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "phunc20/algorithms", "max_issues_repo_path": "CLRS/ch06-heapsort/01-heap/exos.tex", "max_line_length": 92, "max_stars_count": null, "max_stars_repo_head_hexsha": "04674829311cde7bb173252b8a41620aae4b14ba", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "phunc20/algorithms", "max_stars_repo_path": "CLRS/ch06-heapsort/01-heap/exos.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 576, "size": 1698 }
\section{32. The undo tree} Vim provides multi-level undo. If you undo a few changes and then make a new change you create a branch in the undo tree. This text is about moving through the branches. %\localtableofcontentswithrelativedepth{+1} % I couldn't get this to work TMC \etocsetnexttocdepth{2} % This produces the same result TMC \etocsettocstyle{\subsection*{Contents}}{} \localtableofcontents \subsection{Undo up to a file write} Sometimes you make several changes, and then discover you want to go back to when you have last written the file. You can do that with this command: \begin{Verbatim}[samepage=true] :earlier 1f \end{Verbatim} The "\texttt{f}" stands for "file" here. You can repeat this command to go further back in the past. Or use a count different from 1 to go back faster. If you go back too far, go forward again with: \begin{Verbatim}[samepage=true] :later 1f \end{Verbatim} Note that these commands really work in time sequence. This matters if you made changes after undoing some changes. It's explained in the next section. Also note that we are talking about text writes here. For writing the undo information in a file see |\texttt{:h undo-persistence}|. \subsection{Numbering changes} \label{Numbering changes} In section |\hyperref[Undo and Redo]{\texttt{Undo and Redo}}| we only discussed one line of undo/redo. But it is also possible to branch off. This happens when you undo a few changes and then make a new change. The new changes become a branch in the undo tree. Let's start with the text "one". The first change to make is to append " too". And then move to the first `o' and change it into `w'. We then have two changes, numbered 1 and 2, and three states of the text: \begin{Verbatim}[samepage=true] one | change 1 | one too | change 2 | one two \end{Verbatim} If we now undo one change, back to "one too", and change "one" to "me" we create a branch in the undo tree: \begin{Verbatim}[samepage=true] one | change 1 | one too / \ change 2 change 3 | | one two me too \end{Verbatim} You can now use the |\texttt{:h u}| command to undo. If you do this twice you get to "one". Use |\texttt{:h CTRL-R}| to redo, and you will go to "one too". One more |\texttt{:h CTRL-R}| takes you to "me too". Thus undo and redo go up and down in the tree, using the branch that was last used. What matters here is the order in which the changes are made. Undo and redo are not considered changes in this context. After each change you have a new state of the text. Note that only the changes are numbered, the text shown in the tree above has no identifier. They are mostly referred to by the number of the change above it. But sometimes by the number of one of the changes below it, especially when moving up in the tree, so that you know which change was just undone. \subsection{Jumping around the tree} So how do you get to "one two" now? You can use this command: \begin{Verbatim}[samepage=true] :undo 2 \end{Verbatim} The text is now "one two", you are below change 2. You can use the |\texttt{:h :undo}| command to jump to below any change in the tree. Now make another change: change "one" to "not": \begin{Verbatim}[samepage=true] one | change 1 | one too / \ change 2 change 3 | | one two me too | change 4 | not two \end{Verbatim} Now you change your mind and want to go back to "me too". Use the |\texttt{:h g-}| command. This moves back in time. Thus it doesn't walk the tree upwards or downwards, but goes to the change made before. 
You can repeat |\texttt{:h g-}| and you will see the text change: \begin{Verbatim}[samepage=true] me too one two one too one \end{Verbatim} Use |\texttt{:h g+}| to move forward in time: \begin{Verbatim}[samepage=true] one one too one two me too not two \end{Verbatim} Using |\texttt{:h :undo}| is useful if you know what change you want to jump to. |\texttt{g-}| and |\texttt{:h g+}| are useful if you don't know exactly what the change number is. You can type a count before |\texttt{:h g-}| and |\texttt{:h g+}| to repeat them. \subsection{Time travelling} When you have been working on text for a while the tree grows to become big. Then you may want to go to the text of some minutes ago. To see what branches there are in the undo tree use this command: \begin{Verbatim}[samepage=true] :undolist number changes time 3 2 16 seconds ago 4 3 5 seconds ago \end{Verbatim} Here you can see the number of the leaves in each branch and when the change was made. Assuming we are below change 4, at "not two", you can go back ten seconds with this command: \begin{Verbatim}[samepage=true] :earlier 10s \end{Verbatim} Depending on how much time you took for the changes you end up at a certain position in the tree. The |\texttt{:h :earlier}| command argument can be "\texttt{m}" for minutes, "\texttt{h}" for hours and "\texttt{d}" for days. To go all the way back use a big number: \begin{Verbatim}[samepage=true] :earlier 100d \end{Verbatim} To travel forward in time again use the |\texttt{:h :later!}| command: \begin{Verbatim}[samepage=true] :later 1m \end{Verbatim} The arguments are "\texttt{s}", "\texttt{m}" and "\texttt{h}", just like with |\texttt{:h :earlier}|. If you want even more details, or want to manipulate the information, you can use the |\texttt{:h undotree()}| function. To see what it returns: \begin{Verbatim}[samepage=true] :echo undotree() \end{Verbatim} \clearpage
{ "alphanum_fraction": 0.6890307002, "avg_line_length": 31.5108695652, "ext": "tex", "hexsha": "50dd83bc8372ef427501d76921c87658e13d643d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cfcf5be0bf33362cd326a283c8ae65a9caa59adf", "max_forks_repo_licenses": [ "OML" ], "max_forks_repo_name": "tristanchase/LaTeX-Vim-User-Manual", "max_forks_repo_path": "src/usr_32.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cfcf5be0bf33362cd326a283c8ae65a9caa59adf", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "OML" ], "max_issues_repo_name": "tristanchase/LaTeX-Vim-User-Manual", "max_issues_repo_path": "src/usr_32.tex", "max_line_length": 145, "max_stars_count": null, "max_stars_repo_head_hexsha": "cfcf5be0bf33362cd326a283c8ae65a9caa59adf", "max_stars_repo_licenses": [ "OML" ], "max_stars_repo_name": "tristanchase/LaTeX-Vim-User-Manual", "max_stars_repo_path": "src/usr_32.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1561, "size": 5798 }
\chapter*{Acknowledgements}
\thispagestyle{empty}

I would like to express my profound gratitude to all the people and institutions that made this doctoral thesis possible.

First, to Dr. Mammen, the director of this thesis and my mentor, for all his support, patience, continuous teaching and great advice. None of the great things that have happened to me during the last years of my life would have been possible without him.

To Dr. Carrión Ribas, the tutor and co-director of this Ph.D., for all her great guidance and encouragement in moving this project forward. Her infinite care and understanding made this process as easy as it could possibly be.

To Dr. Selva-O'Callaghan, great mentor and friend, for introducing me to the fascinating world of myositis and for his wise and kind guidance along the initial steps of my academic career.

To Maria, for her loving understanding during not just one, but two doctoral theses. Without her by my side I would not have dared to initiate this project or been able to finish it.

Finally, to Noa. She probably will not remember this period of her life, but without her good character and the boost of joy that she gives me every morning, doing this Ph.D. would have been infinitely more complicated.
{ "alphanum_fraction": 0.797275641, "avg_line_length": 89.1428571429, "ext": "tex", "hexsha": "c3502cdd858ae4626d2f9c75cd85f0c7f7d1d787", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "48981693fe216ac196591322dff55d5ab89cf761", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "iagopinal/phd_bioinformatics", "max_forks_repo_path": "chapters/5_acknowledgements.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "48981693fe216ac196591322dff55d5ab89cf761", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "iagopinal/phd_bioinformatics", "max_issues_repo_path": "chapters/5_acknowledgements.tex", "max_line_length": 257, "max_stars_count": null, "max_stars_repo_head_hexsha": "48981693fe216ac196591322dff55d5ab89cf761", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "iagopinal/phd_bioinformatics", "max_stars_repo_path": "chapters/5_acknowledgements.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 265, "size": 1248 }
\section{Local properties}
{ "alphanum_fraction": 0.7586206897, "avg_line_length": 7.25, "ext": "tex", "hexsha": "e03df8ee6dcbfa1ecd835ffd582f74d9abc15c2f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/geometry/topologyFinite/08-00-Local_properties.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/geometry/topologyFinite/08-00-Local_properties.tex", "max_line_length": 26, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/geometry/topologyFinite/08-00-Local_properties.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7, "size": 29 }
\begin{figure*}[htbp] \centering \subfloat[Runtime.]{\includegraphics[width=0.4\textwidth]{figs/montage_turn-around-new.pdf} \label{fig:montage_turnaround-new}} \subfloat[Core-hours.]{\includegraphics[width=0.4\textwidth]{figs/montage_core-hour-new.pdf} \label{fig:montage_core-hour-new}} \caption{\small Montage workflow performance (Cori): (a) workflow runtime and (b) core-hours usage for Montage with and without \systemname. Montage shows shorter runtimes without \systemname. For values of n larger than 32, \systemname runs consume less core-hours. \revcomment{Reviewer 1 has comments about these results. I am not sure what was meant.} % However, the sequence stages use less core-hours in \systemname due to dynamic elastic resource allocation. } % Gonzalo Edit %\caption{\small \fix{Montage workflow (Cori): (a) workflow runtime and (b) Core-hours usage for Montage with \systemname and without \systemname. sequence stages are allocated 1 node (i.e., 32 cores) and use one core. Parallel stages scale linearly, while sequence stages suffer due to I/O after 32 cores. The sequence stages incur lower core-hour expenditure in \systemname. Parallel stages use roughly same core-hours in \systemname and non\systemname cases. However, the sequence stages use less core-hours in \systemname %due to dynamic elastic resource allocation.}} \label{fig:montage_results} \vspace{-0.4cm} \end{figure*} \subsection{Effect of Stage Elasticity} %In one of its use cases, \systemname %adjusts workflow resources to the static, %but different requirements of each workflow stage. %In this case, elasticity is achieved by including a call to %\systemname API in the workflow, %before the start of each stage to %express its resource requirements. %In this section, performance results of workflows %run in this use case are presented. The stage elasticity in \systemname allows workflows to request resource changes between the stages of a workflow (as described in Section~\ref{subsec:user}). In this section, we present the results of using stage elasticity in \systemname for different workflows. In our evaluation, we measure \systemname performance using workflow runtime and allocated core-hours. Values observed in each experiment are presented in Figures~\ref{fig:montage_results} to~\ref{fig:synthetic_results}. Each bar in the figure presents the average value (with standard deviation bars) over three repetitions of an experiment. Experiments include runs with and without \systemname and different resource allocation (32, 64, 128 and 256 cores). %Y-axis of the figures indicates the measured metric and The X-axis represents the peak CPUs (\textit{n}) allocated for the workflow in a particular experiment. In non\systemname runs, a value \textit{n} on the X-axis corresponds to the number of CPU cores allocated during the complete lifecycle of a workflow. In \systemname runs, \textit{n} is the maximum number of CPU cores allocated during the duration of a workflow. %number of CPU cores allocated to a parallel stages. Sequence stages always %allocate 32 cores. %\fix{Movied this under Metrics.} %The runtime of a workflow is calculated as the time between the execution %start of the first stage and completion of the last stage of the workflow. 
%In the figures, main contributors to the runtimes are decomposed as stacked %bars including: runtime of each stage (labeled as \emph{[order in the %execution]-[Sequential/Parallel]}), wait time of each stage if run as an %independent job (\emph{[order in the execution]-Queue}), and aggregated runtime %consumed by checkpoint and restart operations (\emph{Checkpointing}). % moved under metrics %The core-hours measured in this section correspond to allocation, %including resources left unused by a workflow. Core-hours are labeled %and decomposed in stacked bars like the runtime figures. Wait time values %are not included since jobs do not consume core-hours when they wait. %Y-axes workflow indicate averagethe runtime expressed in seconds, with the standard deviation shown by vertical intervals over stage times. The turnaround time is calculated at the start of first workflow task to end of the workflow execution. %For each workflow, we compare the different factors contributing to the total turnaround time, represented by the bars in Figures %\ref{fig:montage_turnaround-new}, \ref{fig:blast_turnaround-new} and \ref{fig:synthetic_turnaround-new}. %These factors are stage executions time, checkpoint, restart, process kill times, and inter job wait times (Queue Time). %Each bar presents the average workflow execution runtime in seconds with and without \systemname for four allocation sizes \textit{n}: 32, 64, 128 and 256 cores. % %\fix{Pasted from below needs to be integrated.} % %In this section, we compare the core-hours used under both execution modes - \systemname and non\systemname for the three workflows as shown in Figures \ref{fig:montage_core-hour-new} \ref{fig:blast_core-hour-new} and \ref{fig:synthetic_core-hour-new}. %The X-axis show the peak CPUs allocated in each run and Y-axis shows the total core-hours. %We also show the standard deviation with vertical bars over stage times related to the experimental runs. %Since \systemname uses one node for sequential stages, %its core-hours is lower than non\systemname in all sequential stages. %Each bar in the Figures represents the Core-Hour contributions from each Stage execution runtime and the Overhead times (Checkpoint, Restart and Process Kill times), %which were summed up and joined together in one stack as they are very small compared to the total core-hour expenditure. %In this section, we evaluate and compare the performance and resource usage of %workflows with and without \systemname as per the different workflow patterns. \vspace{-0.3cm} \subsubsection{Montage} \begin{figure*}[htbp] \centering \subfloat[Runtime.]{\includegraphics[width=0.4\textwidth]{figs/blast_turn-around-new.pdf} \label{fig:blast_turnaround-new}} \subfloat[Core-hours.]{\includegraphics[width=0.4\textwidth]{figs/blast_corehrs-new.pdf} \label{fig:blast_core-hour-new}} \caption{\small BLAST workflow performance (Cori): (a) workflow runtime and (b) core-hours usage for BLAST with \systemname and without \systemname. BLAST execution is dominated by its first parallel stage. BLAST runtimes and core-hours are similar under both approaches, with slightly longer times and higher core-hour numbers in \systemname.} % \caption{\small \fix{BLAST workflow (Cori): (a) Total turnaround time and (b) Core-hours usage for BLAST with \systemname and without \systemname. The sequence stages are allocated 1 node (i.e., 32 cores) and use one core. The parallel stage scales linearly, which is the dominant stage in BLAST. The sequence stage is very small due to its short operation. 
%%\fix{What??? - \systemname contribute to lower core-hour expenditure in \systemname.} %Parallel stages scale roughly the same way in \systemname and non\systemname, though \systemname incurs a small overhead due to checkpoint and restart. Core-hours usage does not improve with \systemname due to extrememly small sequence stage. Scaling down incurs a checkpoint/restart overhead that overshadows the scaled down resource allocation for the sequence stage.}} \label{fig:blast_results} \vspace{-0.4cm} \end{figure*} \begin{figure*}[htbp] \centering \subfloat[Runtime.]{\includegraphics[width=0.4\textwidth]{figs/synthetic_turn-around-new.pdf} \label{fig:synthetic_turnaround-new}} \subfloat[Core-hours.]{\includegraphics[width=0.4\textwidth]{figs/synthetic_core-hours-new.pdf} \label{fig:synthetic_core-hour-new}} \caption{\small Synthetic workflow performance (Cori): (a) workflow runtime and (b) core-hours usage for the Synthetic workflow with \systemname and without \systemname. Synthetic workflows show significantly shorter runtimes without \systemname. Until 128 cores, less core-hours are allocated under \systemname. For higher values of n, \systemname uses less core-hours. } % \caption{\small \fix{Synthetic workflow (Cori): (a) Total turnaround time and (b) Core-hours usage for the Synthetic workflow with \systemname and without \systemname. The sequence stages are allocated 1 node (i.e., 32 cores) and use one core. Parallel stages scale linearly, but the overall workflow perform porrly in \systemname due to large number of memory allocation/deallocation calls. DMTCP monitors all memory allocation operations, resulting in high runtime overheads in \systemname. \systemname uses larger core-hours for 32 cores, but uses lesser core-hours for increasing cores. Core-hours usage for the parallel stage increases due to larger overheads, but the core-hours usage decreases significantly for the sequence stage because \systemname allocates fewer resources (1 node, 32 cores as compared to 256 cores) for the sequence stage.}} % In (b), \systemname contribute to lower core-hour expenditure in \systemname. Parallel stages contribute roughly in the same way in \systemname and Non\systemname.} } \label{fig:synthetic_results} \vspace{-0.4cm} \end{figure*} In this section, we evaluate the performance and resource usage of Montage. \noindent\textbf{Workflow Runtime.} Figure~\ref{fig:montage_turnaround-new} compares the workflow runtime for Montage with and without \systemname. %Gonzalo: this is already said before, so I comment it out. % %Montage is an I/O intensive workflow and most of the overheads %are concentrated in the parallel filesystem, especially for intermediate %I/O. %with file %handling happening in between all stages depicted in Figure %\ref{fig:montage_dag}. When running without \systemname, the workflow runtime does not change substantially across different values of \textit{n} and the shortest one is observed for $n=32$ (single node on Cori). %This result is counterintuitive, as it was expected %that assigning more results to the workflow tasks would %reduce its runtime. For larger $n$, the runtime of sequence stages (\montageSecond and \montageFourth) increases equally or more than the runtime gains in the parallel ones (\montageFirst and \montageThird). %Execution analysis associates this phenomenon to changes This is likely due to the inter-stage data caching for different values of $n$. 
%because sequential stages run always on %a node independently of $n$- For instance, if $n=32$, \montageFirst runs on a single node and all its output data ($4.5$ GiB) is cached locally (and will eventually be written to the file system). As a consequence, \montageSecond reads its input data mainly from memory. However, for $n=64$, \montageFirst runs across two nodes, caching one half of its output data on each node. When \montageSecond starts on one of the two nodes, only half of its input data is locally cached. The runtime of the I/O intensive stage becomes $15\%$ longer than for $n=32$. %For larger $n$, the cached data size decreases proportionally %and the runtime keeps increasing accordingly. This effect is also observed for \montageFourth since it is also an I/O intensive sequential stage preceded by a parallel one (\montageThird). However, since its input data is larger than for \montageSecond ($38$ GiB), the effect is more noticeable, e.g., runtime of \montageFourth from $n=32$ to $n=64$ increases $52\%$. % % %For instance, stage \emph{mProjDiff} runs over a parallel template, it is I/O %intensive, and, as expected, its runtime becomes shorter for larger value of $n$. %\emph{mConcatBg} is a sequential stage, also I/O intensive, and its runtime %increases significantly from $n=32$ to $n=64$ and increases slightly %for larger values of $n$. %Since \emph{mConcatBg} is a sequential stage, $n$ does not include %on the code execution our its output write. %As a consequence, the increase of runtime of this stage %must be related to the stage input read. %The size of the intermediate data was measured, \emph{mConcatBg} %reads $4.5$ GiB of data, which is significantly smaller than the %memory of Cori's compute nodes ($128$ GiB). %However, as $n$ increases, the effect of this caching is different. %For $n=32$, \emph{mProjDiff} runs in a single node % its output cached by the local I/O subsystem, %and then read from memory by the \emph{mConcatBg} stage. %For $n=64$, \emph{mProjDiff} runs on two nodes, all of its %output is stored in the file system, but half of it is cached %in a compute node, and the other half in another. %The \emph{mConcatBg} stage runs on one of the nodes %used \emph{mProjDiff}, so $50\%$ of the data is not locally %cached, which makes the read slower than for $n=32$. %This explains the large difference in \emph{mConcatBg} runtime %between $n=32$ and $n=64$. %For $n=128$, and $n=256$, %the locally cached data becomes even smaller %($25\%$ and $12.5\%$), %which explains the slight increase in runtime for %larger values of $n$. % %The increase of runtime of \emph{mJPEG} for larger values of $n$ is %even more significant than for \emph{mConcatBg}. %This is explained by the I/O nature of \emph{mJPEG} and %the preceding parallel stage and the larger size of %the intermediate date ($38$ GiB). %\grfix{Why does it not change? Most people will expect it to go down with increased cores} %Parallel %\emph{mProjDiff} and $mBackground$ stages represent shorter times of the %total runtime and show linear scaling \grfix{don't understand - due to % resource distribution among compute nodes}. Sequential \emph{mConcatBg} %and \emph{mJPEG} stages are affected by scaling (64, 128 and 256), where %sequential \emph{mJPEG} runs longer than 32 cores. The reason for this %performance loss in the fourth stage for both scenarios are \grfix{I % dont understand this - too vague and hand wavy} due to the way files %are cached in the I/O buffer among compute nodes. 
When run with \systemname, Montage workflow runtime presents a different pattern. Again, workflows running on a single node ($n=32$) present the shortest runtime because the tasks run on a single node and in the same job allocation. For $n>32$, the inter job wait time increases the workflow runtime significantly compared to $n=32$. This is expected since for $n>32$, \systemname runs each section of the workflow in a separate job to adjust the resource allocation to the desired size (Figure \ref{fig:montage_dag}). As $n$ increases, the runtime for the sequence stages does not change and parallel stages becomes shorter, decreasing the overall runtime. %This is different from the pattern observed in non-\systemname runs %because workflow divided between jobs cannot benefit from intermediate data caching, %as new resources are allocated for each job. %For \systemname, the runtime doesn't change substantially when %\textit{n} increases. \grfix{If this is the same for both, we can start % with that in both. What is the point of showing n on X axis if it % doesnt change?} Parallel stages \emph{mProjDiff} and $mBackground$ scale %linearly, similarly to Non\systemname. Sequential \emph{mConcatBg} and %\emph{mJPEG} stages are affected by scaling due to the I/O buffering, and %differences in runtime per stage are low when compared to %Non\systemname. \grfix{Why? why? why?} A major contributor for a longer %turnaround time for \systemname in comparison to Non\systemname were %Queue \emph{mConcatBg}, $mBackground$ and \emph{mJPEG} times, added in between %the stages. \grfix{Why?} Workflow-aware schedulers could be used to mitigate this %behavior \cite{rodrigo2017enabling}. \grfix{How?} % The comparison between running Montage with or without \systemname shows that runtime is longer with \systemname in all cases. When run on a single node, the 20\% runtime increase is due to monitoring overhead of DMTCP. For $n>32$, the workflow runtime difference is contributed by the inter-job wait time and longer runtime of the stages. The inter job wait time is dependent on the current workflow of the system and out of the control of \systemname. %The stages run longer with \systemname because of %DMTCP's monitoring overhead and the impossibility of inter-stage data caching. For most cases, total stage runtime is $\approx 20\%$ longer with \systemname. As we scale to $n=256$, non\systemname runs can no longer benefit from inter-stage data caching due to data being distributed across multiple node, and hence, stage runtime overhead in \systemname is reduced to $10\%$ as compared to non\systemname. %The runtime results show that the performance %of the workflow with \systemname is affected only due to the checkpoint-restart %overhead and the underlying application characteristics. \noindent\textbf{Workflow core-hours.} Core-hours consumed by all the experiments with Montage on Cori are detailed in Figure \ref{fig:montage_core-hour-new}. In cases without \systemname, larger allocations increase the core-hours consumed. Without elasticity, the sequence stages consume significantly more core-hours since their runtime is not reduced by the larger resource allocation. Also, parallel stages, when scaling up from $n=32$ to $n=256$, consume slightly more core-hours due to the increasing overheads of the initial setup in Tigres for launching the parallel tasks across multiple nodes %due to imperfections. 
For Montage, $n=32$ (single node on Cori), 72\% of the workflow runtime is consumed by serial stages (\montageSecond and \montageFourth). %The importance of the sequential stages %causes the linear increase of core-hours consumed as $n$ grows. With \systemname, for values of $n>32$, doubling the allocated resources induces small variations in consumed core hours. For example, stepping $n$ up from 64 to 128, increase core-hours consumption by 11\%. Core-hour usage increases are attributed to the natural overhead of less than perfect parallelism in the code, and the initial overhead of distributing the tasks through Tigres across multiple nodes. Otherwise, with \systemname, there is no resource wastage and checkpointing core-hours are very small ($<1\%$). However, there is a larger step between $n=32$ and $n=64$, with an increase of 35\% of core hours. This is due to the loss in efficiency in the sequence stages due to lack of caching of intermediate data. Finally, comparing runs of Montage with the two approaches, for values $n>32$ (elasticity is possible), \systemname requires significantly less core-hours ($76 \%$ for 256 cores) than non\systemname due to elastic management of resources. The core-hour results in Montage show that with increasing parallelism, \systemname utilizes resources more efficiently than non\systemname due to diverse level of parallelism. % for workflows with diverse level of parallelism. % This demonstrate the capacity of the system % to adjust resource allocations to the workflow %demand efficiently and to minimzie resources wastage. % % %Non\systemname core-hour expenditure gets worse as the number of CPUs allocated \textit{n} scales out. %It goes from 1\% worse in 64 cores to up to 400\% worse in 256 cores. %It is mainly due to its poor core reservation as one big job, mostly affected in sequential stages (\montageSecond and \montageFourth). %Since \systemname sequential stages runtime overheads are lower in comparison to non\systemname, \montageSecond and \montageFourth stages were the ones that negatively affected non\systemname core-hours consumption. %As \systemname allocates only one node (32 cores) for these stages, it spent less core-hour than non\systemname in all cases. % %\grfix{This is hanging and not clear where it fits into the rest of the % explanation} In both Non\systemname and \systemname, when scaling %out to more than 32 cores (one node), the cache miss ratio might %increase as the workflow tasks are placed in different nodes at each %stage. This is important particularly when the workflow transitions %from a parallel to a sequential stage as these tasks try to access %files that are not cached in the nodes they were placed, causing many %I/O cache misses. Given these cache misses, the requested files have %to be loaded at runtime, decreasing the I/O performance and increasing %overall stage runtime. \\ \vspace{-0.3cm} \subsubsection{BLAST} This section focuses on evaluating the impact of \systemname on BLAST. \noindent\textbf{Workflow runtime.} The runtime of all the experiments running BLAST on Cori are presented in Figure~\ref{fig:blast_turnaround-new}. When run without \systemname, BLAST's runtime is dominated by the first parallel stage (\blastFirst occupies $>99\%$ of the total runtime in all cases). This stage scales well over more resources and overall workflow runtime is significantly reduced when run over more resources. For instance, for $n=64$ the runtime is less than half ($53\%$ shorter) than for $n=32$. 
%As expected the runtime of the sequential stage (\blastSecond) does not change %as resources are increased. Similar workflow runtimes are observed in BLAST when run with \systemname. DMTCP's monitoring overhead is relatively small but becomes more significant for larger values of $n$. e.g., DMTCP increases BLAST's runtime by $1\%$ for $n=32$ and $8\%$ for $n=256$. For all values of $n$, checkpoint, restart, and queue times increase the overall workflow runtime by $\approx 2$ minutes, which is not significant compared to the overall workflow runtime. In summary, BLAST scales well as the workflow allocation is increased, and has very little overhead when run with \systemname. % For non\systemname workflows, \blastFirst stage represents the largest % portion of total execution time as can be seen in Figure \ref{fig:blast_turnaround-new}. %\blastFirst tasks are equally distributed among compute-nodes as resources scale, %equally reducing the load in each compute node and explaining its linear scalability. %On the other hand, \blastSecond stages are unaffected by scaling, since %only one compute node is necessary to execute its simple file merge operation. % %The comparison between running BLAST with or without \systemname %shows that runtime is longer with \systemname in all cases. %For \systemname, similarly to non\systemname, results shown stage %runtime decreases in half when the number of nodes \textit{n}. %Extra overheads are due to checkpoint, restart and queue times. %Excluding overheads, stage runtimes show low overhead in comparison to non\systemname. %This is so because BLAST compares DNA sequences with the sequence %database residing in memory. %This type of operation does not incur in any substantial overhead to \systemname, %running under DMTCP, enabling it to have a nearly zero stage runtime footprint in relation to non\systemname. %Finally, $Sequence Merging$ stages are short and unaffected by scaling, \begin{figure*}[htbp] \centering \subfloat[Runtime.]{\includegraphics[width=0.4\textwidth]{figs/blast_execution_new.pdf} \label{fig:blast_execution}} \subfloat[Core-hours.]{\includegraphics[width=0.4\textwidth]{figs/blast_charged_new.pdf} \label{fig:blast_charged}} \caption{\small Runtime elasticity on BLAST (Gordon) vs static allocation: (a) workflow runtime and (b) core-hour usage for BLAST. The \systemname coordinated job starts on a single 16 core node and expands to the peak core allocation after 60 seconds of execution. } % \caption{\small Runtime elasticity on BLAST (Gordon): Workflow turnaround time (a) and Core-hour usage (b) versus maximum cores utilized for BLAST with \systemname and without \systemname. The \systemname coordinated job starts on a single 16 core node and expands to the Peak Core Allocation after 60 seconds of execution. } \label{fig:blast_gordon} \vspace{-0.4cm} \end{figure*} \begin{figure*}[htbp] \centering \subfloat[Performance overhead]{\includegraphics[width=0.4\textwidth]{figs/time_dmtcp.png} \label{fig:time_dmtcp}} \subfloat[Checkpoint overhead]{\includegraphics[width=0.4\textwidth]{figs/size_dmtcp.png} \label{fig:size_dmtcp}} \caption{\small \systemname overheads: (a) shows time required for checkpoint and restart vs. 
the number of processes/tasks being tracked by \systemname, (b) shows total storage required on filesystem for a checkpoint versus the number of processes/tasks being tracked by \systemname.} \label{fig:overheads} \vspace{-0.4cm} \end{figure*} \noindent\textbf{Workflow core-hours.} Core-hours consumed with BLAST (on Cori) are detailed in Figure \ref{fig:blast_core-hour-new}. Similar core-hours are observed with and without \systemname and different values of $n$, e.g., the maximum %and values differ less than 9\% from the average. This is caused by the domination of (\blastFirst) over the execution of the workflow that makes other non-scaling stages irrelevant in terms of core-hours. %As observed in Figure~\ref{fig:blast_turnaround-new}, %this parallel stage runtime decreases significantly %when its parallelism is increased. The comparison between $n=32$ and $n=64$ cases present an unexpected result: the core-hours are reduced when parallelism is increased. This is caused by the unexpected super-linear reduction of runtime in that step of $n$ observed in Figure~\ref{fig:blast_turnaround-new}. %\fix{I really liked these statements, but the language sounds very philosophical.} In the next steps of $n$, core-hours increase slowly from the expected imperfection of parallelism in the code. Comparison between using and not using \systemname %Comparison between \systemname and non\systemname shows that% for all values of $n$, \systemname consumes 1.5\% to 5\% more core-hours with no clear correlation to $n$. Checkpointing overhead in all cases consumes less than 0.5\%. This leads to the conclusion that the additional core-hours consumed by \systemname are for DMTCP execution overhead. %We can see \systemname shows a small overhead. %\blastFirst stage in BLAST is parallel and \systemname had a low execution runtime overhead, thus making Non\systemname spend roughly the same core-hours as \systemname. %Core-hours are then mainly dominated by the Parallel \blastFirst stages. %as it is a file merge operation. \vspace{-0.25cm} \subsubsection{Synthetic} This section describes our evaluation of the Synthetic workflow with \systemname. \noindent\textbf{Workflow runtime.} The runtimes observed of all experiments with Synthetic workflow are presented in Figure~\ref{fig:synthetic_turnaround-new}. %The Synthetic workflow was used for %illustrating a memory-allocation-intensive workload. When run without \systemname, the workflow runtime becomes shorter for larger values of $n$. This reduction is the result of shorter stage runtimes of the \synSecond stage when more resources are available (\synFirst is sequential and thus its runtime is constant): \synSecond runtime is reduced $40-49\%$ each time $n$ is doubled. The Synthetic runs with \systemname present much longer runtimes than without \systemname. This is due to %very important DMTCP monitoring overhead that slows down the execution of all stages by a $2.2-2.4$ factor. Detailed analysis of the workflow reveals that most of the operations performed by the workflow were memory management (allocation and free). These operations are heavily monitored by DTMCP that traps all the memory management calls. %The stage executions are consequently, slowed down. The workflow runtime evolution for larger values of $n$ is as expected: \synFirst runtimes remain constant, and \synSecond runtime is reduced significantly (again $40-49\%$). 
Finally, \systemname checkpoint overheads are minimal (6 seconds for all values of $n$) and the queue times for the second job is typically a few minutes. \noindent\textbf{Workflow core-hours.} Core-hour consumed in all experiments with Synthetic are detailed in Figure \ref{fig:synthetic_core-hour-new}. For non\systemname, a larger resource allocation implies a significant increase in core-hours consumed by the workflow. This increase is mainly due to the wastage of the resources by the \synFirst stage. %The \synSecond also consumes a bit more resources for %larger values of $n$, but the increase is not significant. Runs of Synthetic with \systemname consume almost the same core-hours for all values of $n$. This is caused by the constant resource consumption of both stages in the workflow. \synFirst consumes the same core hours because elasticity % provided by the system allows to execute \synFirst over 32 cores in all cases. \synSecond runtime decreases proportionally to the increase in assigned resources, keeping its core-hours consumption almost unchanged. %Comparison between \systemname and non\systemname %cases show a case of break-even. %For values of $n<128$ non\systemname %consume less core-hours. %This is because Synthetic workflow is memory-allocation-intensive and %suffers a great runtime overhead when executed with DMTCP. %However, as $n$ increases, the wasted core-hours %in \synFirst with non\systemname %become larger than the DMTCP overhead, %and \systemname becomes more efficient. % in terms of core-hours. %For non\systemname we see high increases in \synFirst stage core-hour usage due to its sequential operations. %As the number of cores (X axis) is allocated for whole execution, only the \synSecond stage could effectively use all of them. %In \systemname we see nearly constant total core-hour usage for all %cases since all \synFirst stages used only one node and %\synSecond stages scaled linearly. %These two compensating factors help \systemname having lower core-hour usage starting at 128 cores. %Even though runtimes for both stages in \systemname were $2.4x$ higher (see previous subsection), the number of cores used was more efficient, especially in the \synFirst parts, which are larger than \synSecond stages. %This shows a use case where even though \systemname had a higher runtime overhead, it would still make efficient resource usage in relation to core-hours. %Surprisingly, this is not always the case due to the high overhead introduced by DMTCP which tracks all memory allocations done in this workflow (explained in the previous subsection). % %Core-hours for non\systemname and \systemname are detailed in Figure \ref{fig:synthetic_core-hour-new}. %For non\systemname we see high increases in \synFirst stage core-hour usage due to its sequential operations. %As the number of cores (X axis) is allocated for whole execution, only the \synSecond stage could effectively use all of them. %In \systemname we see nearly constant total core-hour usage for all %cases since all \synFirst stages used only one node and %\synSecond stages scaled linearly. %These two compensating factors help \systemname having lower core-hour usage starting at 128 cores. %Even though runtimes for both stages in \systemname were $2.4x$ higher (see previous subsection), the number of cores used was more efficient, especially in the \synFirst parts, which are larger than \synSecond stages. 
%This shows a use case where even though \systemname had a higher runtime overhead, it would still make efficient resource usage in relation to core-hours. %Surprisingly, this is not always the case due to the high overhead introduced by DMTCP which tracks all memory allocations done in this workflow (explained in the previous subsection). % %the runtime of \synFirst is the same %for all values of $n$ ($\approx 600s$), % %For non\systemname, there is no scaling in \synFirst stages since its tasks are executed in only one node. %For \synSecond stages, linear scaling is seen as %all task operations are equally divided among available compute nodes. % %The comparison between running Synthetic with or without \systemname shows that runtime is greatly longer for \systemname in all cases. %For \systemname, there is no scaling in \synFirst stages, similarly to %non\systemname. %For \synSecond stages we see again a linear scaling in all %cases. %Checkpoint and Restart times are negligible and it is due to %the small code memory footprint in the synthetic process. %Finally, queue times were different among runs as they vary based on system utilization \cite{nurmi2006evaluation, nurmi2007qbets}. % %The stage runtime overhead in the Synthetic workflow increases significantly when run with \systemname. %For all experiments, sequential stage runtimes (\synFirst) more than doubled under \systemname. %Differently to other workflows, the Synthetic workflow has %no I/O and is fully written in Python. %Also, the main code running during most of its execution %is composed by a loop performing mathematical operations. %However, each loop iteration includes the creation of two list structures that are immediately discarded after the mathematical operation. %This forces two memory allocation system calls per iteration, %which, under \systemname, are inspected and slowed down by DMTCP. %Thus, the Synthetic workflow is memory allocation intensive %since, in each stage, it creates and immediately %destroys 5.6 billion Python list objects. %We can then conclude that runtime overheads will be very significant %for application intensive in memory allocation operations. % Gonzalo: We cannot explain the overhead in the synthetic workflows % in detail. By now, I trim this out. % %\fix{This paragraph is hanging and does not flow with the above and % the explanation comes too late} \fix{which- This} high overhead in %the Synthetic workflow is because its code calls the Python built-in %$sum()$ function in each of its stage tasks, either in $s-Sequential$ %and in $s-Parallel$. This function is implemented in C and is called %through Cython, and thus it is an external call provided through a %shared library outside to the Tigres Python process itself. \grfix{C % calls should be faster so it is hard to believe this the way it is % written} DMTCP tracks all system calls a user program does, except %file handler ones (as was the case for most of Montage and BLAST %operations). Calling a function in a shared object triggers a system %call, thus triggering the DMTCP tracking system, which decreased %compute performance due to its complex internal logic for providing %checkpoint-restart mechanisms \cite{dmtcp_benchmarks}. The $sum()$ %function was called $28*10^8$ times in the $s-Sequential$ stage and %$11*10^7$ in the $s-Parallel$ stage. Each one of these calls are %registered by the DMTCP tracking system, increasing the runtime %overhead for both stages. 
\grfix{this is both too detailed and too % little detail - hard for me to understand really what is going on} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\iffalse %\begin{figure} % \centering % \includegraphics[width=3.18in]{figs/blast_turn-around-new.pdf} % \caption{Total turn-around time versus maximum cores utilized for BLAST with \systemname and without \systemname. Parallel stages scale linearly, while Sequential are very small due to its short operation. \systemname's sequential stages use 32 cores. } % \label{fig:blast_turnaround-new} %\end{figure} % %\begin{figure} % \centering % \includegraphics[width=3.18in]{figs/blast_corehrs-new.pdf} % \caption{Core-hour versus maximum cores utilized for BLAST with \systemname and without \systemname. \systemname's sequential stages use 32 cores and contribute to lower core-hour expenditure in \systemname. Parallel stages contribute roughly in the same way in \systemname and Non\systemname. } % \label{fig:blast_core-hour-new} %\end{figure} %\fi %\iffalse %\begin{figure} % \centering % \includegraphics[width=0.5\textwidth]{figs/synthetic_turn-around-new.pdf} % \caption{Total turn-around time versus maximum cores utilized for Synthetic with \systemname and without \systemname. Parallel stages scale linearly, while Sequential stages are within same intervals for both cases. \systemname's sequential stages use 32 cores. } % \label{fig:synthetic_turnaround-new} %\end{figure} % %\begin{figure} % \centering % \includegraphics[width=0.5\textwidth]{figs/synthetic_core-hours-new.pdf} % \caption{Core-hour versus maximum cores utilized for Synthetic with \systemname and without \systemname. \systemname's sequential stages use 32 cores and contribute to lower core-hour expenditure in \systemname. Parallel stages contribute roughly in the same way in \systemname and Non\systemname. } % \label{fig:synthetic_core-hour-new} %\end{figure} %\fi
{ "alphanum_fraction": 0.782556391, "avg_line_length": 58.1478537361, "ext": "tex", "hexsha": "7d9d9fc7724d709a26333f5d0d3b43772fa353d2", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-08-19T15:33:38.000Z", "max_forks_repo_forks_event_min_datetime": "2020-08-19T15:33:38.000Z", "max_forks_repo_head_hexsha": "e4bb6bdc7182a94dfd096a7092ee7dadd152774b", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "perrenyang/dac-man", "max_forks_repo_path": "examples/data/paper_test/ehpc_camera_ready/results-stage-elasticity.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "e4bb6bdc7182a94dfd096a7092ee7dadd152774b", "max_issues_repo_issues_event_max_datetime": "2020-09-12T02:06:04.000Z", "max_issues_repo_issues_event_min_datetime": "2020-06-22T20:23:01.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "perrenyang/dac-man", "max_issues_repo_path": "examples/data/paper_test/ehpc_camera_ready/results-stage-elasticity.tex", "max_line_length": 858, "max_stars_count": 3, "max_stars_repo_head_hexsha": "e4bb6bdc7182a94dfd096a7092ee7dadd152774b", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "perrenyang/dac-man", "max_stars_repo_path": "examples/data/paper_test/ehpc_camera_ready/results-stage-elasticity.tex", "max_stars_repo_stars_event_max_datetime": "2019-10-02T18:06:48.000Z", "max_stars_repo_stars_event_min_datetime": "2018-11-19T09:51:53.000Z", "num_tokens": 8852, "size": 36575 }
\chapter*{Abbreviations}
\addcontentsline{toc}{chapter}{Abbreviations}
\begin{tabular}{ll}
EX & Here is an example of an abbreviation\\
3D & Three-dimensional\\
AR & Augmented Reality\\
VR & Virtual Reality\\
CPU & Central Processing Unit\\
GPU & Graphics Processing Unit
\end{tabular}
{ "alphanum_fraction": 0.7272727273, "avg_line_length": 29.7, "ext": "tex", "hexsha": "3b35e622ed8a3b21b91bf77000609a244ee5f0cc", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5f28102e1051f64e4d81314cfd20c27d29bd6cc5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "kotaro-inoue/hknu_latex_template", "max_forks_repo_path": "src/abbreviations.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5f28102e1051f64e4d81314cfd20c27d29bd6cc5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "kotaro-inoue/hknu_latex_template", "max_issues_repo_path": "src/abbreviations.tex", "max_line_length": 70, "max_stars_count": null, "max_stars_repo_head_hexsha": "5f28102e1051f64e4d81314cfd20c27d29bd6cc5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kotaro-inoue/hknu_latex_template", "max_stars_repo_path": "src/abbreviations.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 86, "size": 297 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Programming/Coding Assignment % LaTeX Template % % This template has been downloaded from: % http://www.latextemplates.com % % Original author: % Ted Pavlic (http://www.tedpavlic.com) % % Note: % The \lipsum[#] commands throughout this template generate dummy text % to fill the template out. These commands should all be removed when % writing assignment content. % % This template uses a Perl script as an example snippet of code, most other % languages are also usable. Configure them in the "CODE INCLUSION % CONFIGURATION" section. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %---------------------------------------------------------------------------------------- % PACKAGES AND OTHER DOCUMENT CONFIGURATIONS %---------------------------------------------------------------------------------------- \documentclass{article} \usepackage{fancyhdr} % Required for custom headers \usepackage{lastpage} % Required to determine the last page for the footer \usepackage{extramarks} % Required for headers and footers \usepackage[usenames,dvipsnames]{color} % Required for custom colors \usepackage{graphicx} % Required to insert images \usepackage{listings} % Required for insertion of code \usepackage{courier} % Required for the courier font \usepackage{lipsum} % Used for inserting dummy 'Lorem ipsum' text into the template \usepackage{url} \usepackage{mathtools} \DeclarePairedDelimiter{\ceil}{\lceil}{\rceil} % Margins \topmargin=-0.45in \evensidemargin=0in \oddsidemargin=0in \textwidth=6.5in \textheight=9.0in \headsep=0.25in \linespread{1.1} % Line spacing % Set up the header and footer \pagestyle{fancy} \lhead{\hmwkAuthorName} % Top left header \chead{\hmwkClass\ (\hmwkClassInstructor\ \hmwkClassTime): \hmwkTitle} % Top center head \rhead{\firstxmark} % Top right header \lfoot{\lastxmark} % Bottom left footer \cfoot{} % Bottom center footer \rfoot{Page\ \thepage\ of\ \protect\pageref{LastPage}} % Bottom right footer \renewcommand\headrulewidth{0.4pt} % Size of the header rule \renewcommand\footrulewidth{0.4pt} % Size of the footer rule \setlength\parindent{0pt} % Removes all indentation from paragraphs %---------------------------------------------------------------------------------------- % CODE INCLUSION CONFIGURATION %---------------------------------------------------------------------------------------- \definecolor{MyDarkGreen}{rgb}{0.0,0.4,0.0} % This is the color used for comments \lstloadlanguages{Perl} % Load Perl syntax for listings, for a list of other languages supported see: ftp://ftp.tex.ac.uk/tex-archive/macros/latex/contrib/listings/listings.pdf \lstset{language=Perl, % Use Perl in this example frame=single, % Single frame around code basicstyle=\small\ttfamily, % Use small true type font keywordstyle=[1]\color{Blue}\bf, % Perl functions bold and blue keywordstyle=[2]\color{Purple}, % Perl function arguments purple keywordstyle=[3]\color{Blue}\underbar, % Custom functions underlined and blue identifierstyle=, % Nothing special about identifiers commentstyle=\usefont{T1}{pcr}{m}{sl}\color{MyDarkGreen}\small, % Comments small dark green courier font stringstyle=\color{Purple}, % Strings are purple showstringspaces=false, % Don't put marks in string spaces tabsize=5, % 5 spaces per tab % % Put standard Perl functions not included in the default language here morekeywords={rand}, % % Put Perl function parameters here morekeywords=[2]{on, off, interp}, % % Put user defined functions here morekeywords=[3]{test}, % morecomment=[l][\color{Blue}]{...}, % 
Line continuation (...) like blue comment numbers=left, % Line numbers on left firstnumber=1, % Line numbers start with line 1 numberstyle=\tiny\color{Blue}, % Line numbers are blue and small stepnumber=5 % Line numbers go in steps of 5 } % Creates a new command to include a perl script, the first parameter is the filename of the script (without .pl), the second parameter is the caption \newcommand{\perlscript}[2]{ \begin{itemize} \item[]\lstinputlisting[caption=#2,label=#1]{#1.pl} \end{itemize} } %---------------------------------------------------------------------------------------- % DOCUMENT STRUCTURE COMMANDS % Skip this unless you know what you're doing %---------------------------------------------------------------------------------------- % Header and footer for when a page split occurs within a problem environment \newcommand{\enterProblemHeader}[1]{ \nobreak\extramarks{#1}{#1 continued on next page\ldots}\nobreak \nobreak\extramarks{#1 (continued)}{#1 continued on next page\ldots}\nobreak } % Header and footer for when a page split occurs between problem environments \newcommand{\exitProblemHeader}[1]{ \nobreak\extramarks{#1 (continued)}{#1 continued on next page\ldots}\nobreak \nobreak\extramarks{#1}{}\nobreak } \setcounter{secnumdepth}{0} % Removes default section numbers \newcounter{homeworkProblemCounter} % Creates a counter to keep track of the number of problems \newcommand{\homeworkProblemName}{} \newenvironment{homeworkProblem}[1][Problem \arabic{homeworkProblemCounter}]{ % Makes a new environment called homeworkProblem which takes 1 argument (custom name) but the default is "Problem #" \stepcounter{homeworkProblemCounter} % Increase counter for number of problems \renewcommand{\homeworkProblemName}{#1} % Assign \homeworkProblemName the name of the problem \section{\homeworkProblemName} % Make a section in the document with the custom problem count \enterProblemHeader{\homeworkProblemName} % Header and footer within the environment }{ \exitProblemHeader{\homeworkProblemName} % Header and footer after the environment } \newcommand{\problemAnswer}[1]{ % Defines the problem answer command with the content as the only argument \noindent\framebox[\columnwidth][c]{\begin{minipage}{0.98\columnwidth}#1\end{minipage}} % Makes the box around the problem answer and puts the content inside } \newcommand{\homeworkSectionName}{} \newenvironment{homeworkSection}[1]{ % New environment for sections within homework problems, takes 1 argument - the name of the section \renewcommand{\homeworkSectionName}{#1} % Assign \homeworkSectionName to the name of the section from the environment argument \subsection{\homeworkSectionName} % Make a subsection with the custom name of the subsection \enterProblemHeader{\homeworkProblemName\ [\homeworkSectionName]} % Header and footer within the environment }{ \enterProblemHeader{\homeworkProblemName} % Header and footer after the environment } %---------------------------------------------------------------------------------------- % NAME AND CLASS SECTION %---------------------------------------------------------------------------------------- \newcommand{\hmwkTitle}{Assignment\ \#2} % Assignment title \newcommand{\hmwkDueDate}{Thursday,\ September 30th,\ 2014} % Due date \newcommand{\hmwkClass}{CS 381} % Course/class \newcommand{\hmwkClassTime}{12:00pm} % Class/lecture time \newcommand{\hmwkClassInstructor}{Prof. 
Grigorescu} % Teacher/lecturer \newcommand{\hmwkAuthorName}{Yao Xiao(xiao67)} % Your name %---------------------------------------------------------------------------------------- % TITLE PAGE %---------------------------------------------------------------------------------------- \title{ \vspace{2in} \textmd{\textbf{\hmwkClass:\ \hmwkTitle}}\\ \normalsize\vspace{0.1in}\small{Due\ on\ \hmwkDueDate}\\ \vspace{0.1in}\large{\textit{\hmwkClassInstructor\ \hmwkClassTime}} \vspace{3in} } \author{\textbf{\hmwkAuthorName}} \date{} % Insert date here if you want it to appear below your name %---------------------------------------------------------------------------------------- \everymath{\displaystyle} \begin{document} \maketitle %---------------------------------------------------------------------------------------- % TABLE OF CONTENTS %---------------------------------------------------------------------------------------- \setcounter{tocdepth}{1} % Uncomment this line if you don't want subsections listed in the ToC \newpage %\tableofcontents \newpage %---------------------------------------------------------------------------------------- % PROBLEM 1 %---------------------------------------------------------------------------------------- % To have just one problem per page, simply put a \clearpage after each problem \begin{homeworkProblem} Give the solution to the following recurrences by applying the Master theorem when possible, or by any other of the methods we’ve learned when the Master theorem doesn’t apply. Assume T (1) = 1. \\ 1. T (n) = T (n/5) + T (7n/10) + 1 \\ 2. T (n) = 2T (n - 1) + 5 \\ 3. T (n) = 6T (2n/5) + log 2 n \\ 4. T (n) = 2T (n/2) + $n^{1.4}$ \\ 5. T (n) = 12T (n/12) + 11n \\ 6. T (n) = T ( $\sqrt{n}$) + 12. \\ \problemAnswer{ $1.T(n)=T(n/5)+T(7n/10)$ We can't use master therom. Assume $T(n)<=cn $ \\ So $T(n)<=1/5cn+7/10cn+1$\\ =$0.9cn+1$\\ =$cn-0.9cn+1$\\ $<=cn$\\ So $T(n)=\Theta(n)$ \\ \\ $2. T(n)=2T(n-1)+5 = 4T(n-2)+10 = 8T(n-3)+20 = 2^k*T(n-k)+5^{2^{k-1}}$\\ \\ $when \ k=n-1 \ T(n)=2^{n-1}*T(1)+5^{2^{n-2}}=2^{n-1}+5^{2^{n-2}} $ \\ \\ $3.T(n) = a \; T\!\left(\frac{n}{b}\right) + f(n) \;\;\;\; \mbox{where} \;\; a \geq 1 \mbox{, } b > 1 $ \\ $ a=6 \\ b=5/2 \\ f(n)=log_{2}{n}\\ c=log_{5/2}6 \approx 1.95 \\ f(n)=\Omega(n^c) \\ T(n)=\Theta(log_{2}n)$ \\ \\ $4. a=2 \\ b=2 \\ f(n)=n^{1.4} log_2{2}=1<1.4 \\ So T(n)=\Theta(n^{1.4}) $ \\ \\ $5. a=12 \\ b=12 \\ f(n)=11n \\ log_{12}{12}=1=c \\ So T(n)=\Theta(nlogn )$ \\ \\ $6. T(n)=T(\sqrt{n})+O(1) \\ Let m=log_{2}n n=2^m$ \\ $ T(2^m)=T(2^{m/2})+O(1)$ \\ $ Set K(m)=T(2^m), \ K(m)=K(m/2)+O(1), \ where a=1 b=2 $ \\ $ K(m)=\Theta(log2(m)) T(n)=\Theta(log2(log2(n))) $\\ } \end{homeworkProblem} %---------------------------------------------------------------------------------------- % PROBLEM 2 %---------------------------------------------------------------------------------------- \begin{homeworkProblem} Dan and Alex play the following game: Dan picks an integer from 1 to m and Alex is trying to figure it out by asking Dan as few questions as possible. Dan is willing to respond truthfully, but he announces that he will only answer ‘yes’ or ‘no’ to any of Alex’s questions. Design a strategy such that Alex can always guess the number using only o(m) questions \problemAnswer{ Ask every time if the number is bigger than (upperbound+lowerbound)/2, and divide change the upper bound and lower bound based on the answer. 
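One way to make the $o(m)$ bound explicit (a short sketch; $Q(m)$ is just notation for the number of questions needed when $m$ candidate values remain): each question halves the surviving range, so
\[
Q(m) = Q\left(\ceil{m/2}\right) + 1, \qquad Q(1) = 0,
\]
which gives $Q(m) = \ceil{\log_2 m}$ questions in total, and $\ceil{\log_2 m} = o(m)$.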
}
\end{homeworkProblem}

%----------------------------------------------------------------------------------------
%	PROBLEM 3
%----------------------------------------------------------------------------------------

\begin{homeworkProblem}
You are given two n-digit numbers a, b > 0, and have to come up with a fast algorithm to multiply them. \\
1. What is a trivial upper bound on the number of steps it takes to compute the product ab ? \\
2. Design a divide-and-conquer procedure that uses $O(n^{1.58})$ digit-operations. \\

\problemAnswer{
1. The trivial upper bound is $O(n^2)$, since the schoolbook method multiplies each digit of $a$ by each digit of $b$. \\
2. Split each number into halves of $n/2$ digits (pad with a leading zero if $n$ is odd): $A=x_1\cdot 10^{n/2}+x_0$, $B=y_1\cdot 10^{n/2}+y_0$, \\
so that $AB=x_1y_1\cdot 10^{n}+(x_1y_0+x_0y_1)\,10^{n/2}+x_0y_0$, \\
where $(x_1y_0+x_0y_1)=(x_1+x_0)(y_1+y_0)-x_1y_1-x_0y_0$. \\
So at every step we only need the three products $x_0y_0$, $x_1y_1$ and $(x_1+x_0)(y_1+y_0)$, each on numbers half as long, plus additions and shifts that cost $O(n)$ digit-operations. \\
\\
The recurrence is $T(n)=3T(n/2)+O(n)$. \\
\\
The master theorem gives $T(n)=\Theta(n^{\log_{2}3}) \approx \Theta(n^{1.58})$. \\
\textit{Source: Piazza post @47}
}
\end{homeworkProblem}

\begin{homeworkProblem}

\problemAnswer{
1. The recurrence would become $T(n)=T(\ceil{n/11})+T(9n/10)+\Theta(n)$. Assuming $T(n)\le cn$, the recursive terms contribute at most $\left(\tfrac{1}{11}+\tfrac{9}{10}\right)cn = \tfrac{109}{110}cn$, which leaves room to absorb the $\Theta(n)$ term, so the running time is still $\Theta(n)$. \\
\\
2. The recurrence would become $T(n)=T(\ceil{n/3})+T(5n/6)+\Theta(n)$. Here $\tfrac{1}{3}+\tfrac{5}{6}=\tfrac{7}{6}>1$, so the substitution $T(n)\le cn$ no longer goes through: after selecting the pivot, only $1/6$ of the elements are guaranteed to be discarded.
}
\end{homeworkProblem}

%----------------------------------------------------------------------------------------

\begin{homeworkProblem}

I don't know, but see \url{http://en.wikipedia.org/wiki/Closest_pair_of_points_problem}.

\end{homeworkProblem}

\end{document}
\chapter{Kinematics of Fluid Flow}\label{c2} \section{Specification of the fluid flow}\label{s1} \begin{itemize} \item Eulerian description of a fluid flow is similar to the field description in electrodynamics. We specify various dynamical variables as functions of $\vec{x}$ and $t$. The flow is described by the Eulerian velocity $\vec{u}(\vec{x}, t)$. \item Lagrangian description of a fluid flow is similar to the particle description in mechanics. We identify a small enough element of fluid by its center of mass $\vec{a}$ and describe the fluid flow using the Lagrangian velocity $\vec{v}(\vec{a}, t)$. In what follows, the Eulerian description is used almost exclusively. \item A stream line is an imaginary line in the fluid whose tangent at any point is parallel to the (Eulerian) velocity at that point. Let the $d\vec{x}$ be an element of the stream line. That is, $d\vec{x} = dx_1\uvec{1} + dx_2\uvec{2} + dx_3\uvec{3}$. If $\vec{u}$ is the velocity at this element, $\vec{u} \vp d\vec{x} = 0$, because the two vectors are parallel. Therefore, if $\vec{u} = u_1\uvec{1} + u_2\uvec{2} + u_3\uvec{3}$, \[ \uvec{1}(u_2 dx_3 - u_3dx_2) + \uvec{2}(u_3dx_1 - u_1dx_3) + \uvec{3}(u_1dx_2 - u_2dx_1) = 0 \] Therefore, each component is zero, or \[ \frac{dx_1}{u_1} = \frac{dx_2}{u_2} = \frac{dx_3}{u_3} \] If the flow is steady, that is $\vec{u}$ is independent of $t$ then the stream lines at any point are unchanged over any time interval. Further, if we consider the set of all stream lines passing through all points of a closed curve then they form what is called a stream tube. A stream tube carries the same mass of fluid at any point. For supposing this was not true then at some point along the stream tube either mass gets in it or goes out of it. At such a point, the velocity vector is not tangent to the stream line, which cannot happen. \item A path line is the trajectory taken by a selected fluid element. A streak line associated with a fixed point $\vec{x}_0$ in the fluid is path taken by \emph{all} elements of the fluid passing through $\vec{x}_0$. For example, imagine a gas let out of a container opening along the positive $x_1$ axis. The path lines of the gas and the streak lines will be along the positive $x_1$ axis. If the container is now rotated to open along positive $x_2$ axis, the streak lines of the new gas elements getting out the container will change. However, the path lines of elements already out will remain unchanged. The following figure, taken from \href{http://web.mit.edu/16.unified/www/FALL/fluids/Lectures/f08.pdf}{Professor Paulo Lozano's lectures}, explain the concepts quite clearly. \begin{figure}[!ht] \centering \centerline{\includegraphics[scale=.5]{c2f1}} \caption{Path lines and streak lines} \label{c2f1} \end{figure} In a steady flow, the stream lines, path lines and streak lines coincide. \item A flow field $\vec{u}(\vec{x}, t)$ is said to be two dimensional if it is unchanged by moving along one of the three axes. For example, flow along $x_1$ axis between two infinite plates along the planes $x_3 = 0$ and $x_3 = 1$ is unchanged if we move along the $x_2$ axis. Such a flow field is two dimensional. A flow of a fluid in a cylindrical pipe is not two dimensional. A flow field $\vec{u}(\vec{x}, t)$ is said to be axi-symmetric if, expressed in cylindrical coordinates, is independent of $\varphi$. \item Consider a steady flow in a Venturi tube. Every fluid element, as it approaches the constricted portion, accelerates and then decelerates after it passes through it. 
The change in its velocity is solely due to the different velocity field prevalent near the constriction. The acceleration and the subsequent deceleration both happen in a steady flow. This change is captured by the concept of a material derivative. Thus, \begin{eqnarray*} \vec{u}(\vec{x} + \delta\vec{x}, t + \delta{t}) &=& \vec{u}(\vec{x}, t) + \delta\vec{x}\cdot\grad{\vec{u}} + \delta{t}\pdt{\vec{u}}{t} + O(\delta{t}^2) + O(\delta\vec{x}^2) \\ &=& \vec{u}(\vec{x}, t) + \delta{t}\vec{u}\cdot\grad{\vec{u}} + \delta{t}\pdt{\vec{u}}{t} + O(\delta{t}^2) \\ &=& \vec{u}(\vec{x}, t) + \delta{t}\left(\pdt{\vec{u}}{t} + \vec{u}\cdot\grad{\vec{u}}\right) + O(\delta{t}^2) \end{eqnarray*} In general, if $\theta$ is any parameter associated with the flow, \[ \theta(\vec{x} + \delta\vec{x}, t + \delta{t}) = \theta(\vec{x}, t) + \delta{t}\left(\pdt{\theta}{t} + \vec{u}\cdot\grad{\theta}\right) + O(\delta{t}^2) \] and we define the material derivative of $\theta$ as \[ \md{\theta} = \pdt{\theta}{t} + \vec{u}\cdot\grad{\theta} \] The first term on the right hand side is called the local rate of change of $\theta$ at a point $\vec{x}$. It is zero under steady conditions. The second term on the right hand side is called the convective rate of change. \end{itemize} \section{Conservation of mass}\label{s2} \begin{itemize} \item Volume of material element can be written as \[ \tau = \int d\vec{l}\cdot\un dS \] If each area element $\un dS$ moves by a distance $\delta\vec{l}$ then the volume becomes, \[ \tau + \delta\tau = \int d(\vec{l} + \delta\vec{l})\cdot\un dS \] If $\vec{u}$ is the velocity of the surface element, $\delta\vec{l} = \vec{u}\delta t$ so that \[ \tau + \delta\tau = \int d(\vec{l} + \vec{u}\delta t)\cdot\un dS = \int d\vec{l}\cdot\un dS + \delta{t}\int d\vec{u}\cdot\un dS \] or, \[ \delta\tau = \delta{t}\int d\vec{u}\cdot\un dS \] or \[ \td{\tau}{t} = \int d\vec{u}\cdot\un dS = \int\dive\vec{u}dV \] In the limit $\tau \rightarrow 0$, we can approximate the right hand side as $\tau\dive{\vec{u}}$ and hence, \begin{equation}\label{c2s2e1} \lim_{\tau \rightarrow 0} \frac{1}{\tau}\td{\tau}{t} = \dive{\vec{u}} \end{equation} \item A strict definition of an incompressible fluid is the one whose density does not get affected by changes in pressure. Density of a fluid may change because of heat conduction (thermal expansion) or changes in concentration of solute. However, such changes are not indicative of compressibility of the fluid. In absence of temperature and/or concentration gradients, an incompressible fluid is the one whose density remains unchanged. Thus, if $\rho$ is the density of a fluid element, an incompressible fluid has \[ \md{\rho} = 0 \] or, equivalently $\dive{\vec{u}} = 0$. The equivalent follows from equation of continuity (mass conservation). \item A stream tube ends when the velocity of the fluid drops to zero. In an incompressible fluid, this can happen only at the boundaries with solids. Therefore, a stream tube is either closed, or ends on a solid boundary or extends to infinity. \item Let us apply the equation of conservation of mass, \begin{equation}\label{c2s2e1a} \pdt{\rho}{t} + \dive{(\rho\vec{u})} = 0, \end{equation} to two circumstances \begin{itemize} \item If the fluid is incompressible, then $\rho$ is a constant and hence $\dive{\vec{u}} = 0$, \item If the fluid is compressible, but the flow is steady. Therefore $\rho_t = 0$ and hence $\dive{(\rho\vec{u})} = 0$. 
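Either statement is easy to check numerically for a model flow. As a small illustration (the velocity field below is an arbitrary choice, not one taken from the text), the following R sketch approximates $\dive{\vec{u}}$ by central differences for the solenoidal field $\vec{u} = (\sin x_1\cos x_2,\, -\cos x_1\sin x_2,\, 0)$ and confirms that it vanishes to within truncation error.
\begin{verbatim}
# Central-difference check that div(u) = 0 for
# u = (sin(x) cos(y), -cos(x) sin(y), 0).
h <- 1e-4                          # spacing used for the differences
x <- seq(0.1, 3, len = 30)
y <- seq(0.1, 3, len = 30)
u1 <- function(x, y)  sin(x) * cos(y)
u2 <- function(x, y) -cos(x) * sin(y)
div.u <- outer(x, y, function(x, y) {
  (u1(x + h, y) - u1(x - h, y)) / (2 * h) +
  (u2(x, y + h) - u2(x, y - h)) / (2 * h)
})
max(abs(div.u))                    # of the order of h^2, i.e. effectively zero
\end{verbatim}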
\end{itemize} If we now assume that the flow is two dimensional, then $\dive{\vec{u}} = 0$ implies \[ \pdt{u_1}{x_1} + \pdt{u_2}{x_2} = 0, \] or $u_1 dx_2 + (-u_2)dx_1$ is an exact differential and hence can be written as $d\psi$, in which case, \begin{eqnarray*} u_1 &=& \pdt{\psi}{x_2} \\ u_2 &=& -\pdt{\psi}{x_1} \end{eqnarray*} Since $d\psi = u_1 dx_1 - u_2 dx_2$, we have, \[ \psi - \psi_0 = \int d\psi = \int (u_1 dx_1 - u_2 dx_2) \] This equation is interpreted as a flux if we consider an open surface formed by translating the curve $OP$, parallel to the $z$ axis, by a unit distance. That is, the flux is \[ \int_0^1 (\psi - \psi_0)dx_3 = \int_0^1 \int (u_1 dx_1 - u_2 dx_2) dx_3, \] which after integration gives the previous equation. \item We will now argue why $\psi$ is a constant along a stream line. We first observe that since a stream line is everywhere tangential to $\vec{u}$, there is no flux across a stream line. But since flux is $\psi - \psi_0$. Along a stream line, $\psi = \psi_0$, or that $\psi$ is unchanged. Therefore, $\psi$ is called the stream function. \item For a flow described in polar coordinates, $\dive{\vec{u}} = 0$ implies, \[ \frac{1}{r}\pdt{(ru_r)}{r} + \frac{1}{r}\pdt{u_\theta}{\theta} = 0 \] or $ru_r d\theta - u_\theta dr$ is an exact differential, say $d\psi$, and hence, \begin{eqnarray*} ru_r &=& \pdt{\psi}{\theta} \\ u_\theta &=& -\pdt{\psi}{r} \end{eqnarray*} \item In the case of an axisymmetric flow, $\dive{\vec{u}} = 0$ implies, \[ \pdt{u_x}{x} + \frac{1}{\sigma}\pdt{(\sigma u_\sigma)}{\sigma} = 0 \] This can as well be written as, \[ \frac{1}{\sigma}\pdt{(\sigma u_x)}{x} + \frac{1}{\sigma}\pdt{(\sigma u_\sigma)}{\sigma} = 0 \] or that $\sigma u_x d\sigma - \sigma u_\sigma dx = d\psi$, an exact differential. Therefore, \begin{eqnarray*} \sigma u_x &=& \pdt{\psi}{\sigma} \\ \sigma u_\sigma &=& -\pdt{\psi}{x} \end{eqnarray*} or \begin{eqnarray*} u_x &=& \frac{1}{\sigma}\pdt{\psi}{\sigma} \\ u_\sigma &=& -\frac{1}{\sigma}\pdt{\psi}{x} \end{eqnarray*} Since $d\psi = \sigma u_x d\sigma - \sigma u_\sigma dx$, we have, \[ \psi - \psi_0 = \int d\psi = \int (\sigma u_x d\sigma - \sigma u_\sigma dx) \] This equation can be interpreted as a flux if we consider an open surface formed by rotating the curve about the $x$ axis by $2\pi$ (or any other angle in $[0, 2\pi]$). Thus, the flux is, \[ \int_0^{2\pi} (\psi - \psi_0)d\theta = \int_0^{2\pi} \int (\sigma u_x d\sigma - \sigma u_\sigma dx) d\theta, \] which after integration gives the previous equation. \end{itemize} \subsection{Exercises} \begin{enumerate} \item We use the fact that the volume of a tetrahedron with points $\vec{A}$, $\vec{B}$, $\vec{C}$ and $\vec{D}$ is \[ V = \frac{1}{6}(\vec{A} - \vec{D})\cdot(\vec{B} - \vec{D}) \vp (\vec{C} - \vec{D}) \] Consider a small tetrahedron $P_0Q_0R_0T_0$ around a material point $P_0$. Let the coordinates of the four points be $\vec{X}^{(0)}$, $\vec{X}^{(0)} + \delta\vec{X}^{(1)}$, $\vec{X}^{(0)} + \delta\vec{X}^{(2)}$ and $\vec{X}^{(0)} + \delta\vec{X}^{(3)}$. Then, its volume is \[ \delta V = \frac{1}{6}\delta\vec{X}^{(1)}\cdot\left(\delta\vec{X}^{(2)} \vp \delta\vec{X}^{(3)} \right) = \frac{\epsilon_{rst}}{6}\delta{X}_r^{(1)}\delta{X}_s^{(2)}\delta{X}_t^{(3)} \] Let a deformation take this material tetrahedron to $P_1Q_1R_1T_1$ around the material point $P_1$. Let the coordinates of the four points be $\vec{x}^{(0)}$, $\vec{x}^{(0)} + \delta\vec{x}^{(1)}$, $\vec{x}^{(0)} + \delta\vec{x}^{(2)}$ and $\vec{x}^{(0)} + \delta\vec{x}^{(3)}$. 
Then, its volume is \[ \delta v = \frac{1}{6}\delta\vec{x}^{(1)}\cdot\left(\delta\vec{x}^{(2)} \vp \delta\vec{x}^{(3)} \right) = \frac{\epsilon_{ijk}}{6}\delta{x}_i^{(1)}\delta{x}_j^{(2)}\delta{x}_k^{(3)} \] If $\vec{X}^{(0)} = (X_1, X_2, X_3)$ and $\vec{x}^{(0)} = (x_1, x_2, x_3)$ then $x_i = x_i(X_1, X_2, X_3)$. Therefore, \[ dx_i = \pdt{x_i}{X_1}dX_1 + \pdt{x_i}{X_2}dX_2 + \pdt{x_i}{X_3}dX_3 = \pdt{x_i}{X_r}dX_r \] Therefore, \begin{eqnarray*} \delta x_i^{(1)} &=& \pdt{x_i}{X_r}\delta X_r^{(1)} \\ \delta x_j^{(2)} &=& \pdt{x_j}{X_s}\delta X_s^{(2)} \\ \delta x_k^{(3)} &=& \pdt{x_k}{X_t}\delta X_t^{(3)} \end{eqnarray*} Note that, in the above relations, the coordinates of $\delta\vec{x}^{(1)}$ depends only on those of $\delta\vec{X}^{(1)}$ and so on. Thus, \[ \delta v = \frac{\epsilon_{ijk}}{6}\pdt{x_i}{X_r}\delta X_r^{(1)}\pdt{x_j}{X_s}\delta X_s^{(2)}\pdt{x_k}{X_t}\delta X_t^{(3)} \] or \[ \delta v = \frac{\epsilon_{ijk}}{6}\pdt{x_i}{X_r}\pdt{x_j}{X_s}\pdt{x_k}{X_t}\delta X_r^{(1)}\delta X_s^{(2)}\delta X_t^{(3)} \] We now use the result that for a tensor $\{T_{ij}\}$, \[ \epsilon_{mpq}\det(\{T_{ij}\}) = \frac{\epsilon_{ijk}}{6}T_{im}T_{jp}T_{kq} \] so that \[ \frac{\epsilon_{ijk}}{6}\pdt{x_i}{X_r}\pdt{x_j}{X_s}\pdt{x_k}{X_t} = \epsilon_{rst}\frac{\partial(x_1, x_2, x_3)}{\partial(X_1, X_2, X_3)} \] and hence, \[ \delta v = \epsilon_{rst}\frac{\partial(x_1, x_2, x_3)}{\partial(X_1, X_2, X_3)}\delta X_r^{(1)}\delta X_s^{(2)}\delta X_t^{(3)} \] or, \[ \delta v = \frac{\partial(x_1, x_2, x_3)}{\partial(X_1, X_2, X_3)}\epsilon_{rst}\delta X_r^{(1)}\delta X_s^{(2)}\delta X_t^{(3)} \] or, \[ \delta v = \frac{\partial(x_1, x_2, x_3)}{\partial(X_1, X_2, X_3)} \delta V \] If the mass in the two material elements is the same, then \[ \frac{m}{\delta v}\frac{\partial(x_1, x_2, x_3)}{\partial(X_1, X_2, X_3)} = \frac{m}{\delta V} \] If $\rho_0 = m/V$ and $\rho = m/v$\footnote{This solution is adapted from the development of mass conservation in A. J. M. Spencer's Continuum Mechanics\cite{spencer2004continuum}.} \[ \frac{\partial(x_1, x_2, x_3)}{\partial(X_1, X_2, X_3)} = \frac{\rho_0}{\rho} \] \end{enumerate} \section{Analysis of the relative motion near a point}\label{s3} \begin{itemize} \item The continuum hypothesis allows us to assume that the velocity $\vec{u}$ is a continuous function of $\vec{x}$. Therefore, if $\vec{u}(\vec{x}, t)$ is the velocity of the fluid at a point $\vec{x}$ then the velocity, at the same time instant, at a neighboring point $\vec{x} + \vec{r}$ is, \[ \vec{u}(\vec{x} + \vec{r}, t) = \vec{u}(\vec{x}, t) + \vec{r}\cdot\nabla\vec{u} + O(r^2) \] The difference in velocity $\delta\vec{u}$ can be written in Cartesian tensor form, \[ \delta{u}_i = r_j\pdt{u_i}{x_j} \] It can be further manipulated as, \[ \delta{u}_i = \frac{r_j}{2}\left(\pdt{u_i}{x_j} + \pdt{u_j}{x_i}\right) + \frac{r_j}{2}\left(\pdt{u_i}{x_j} - \pdt{u_j}{x_i}\right), \] We define the rate of strain tensor $e_{ij}$ and the vorticity tensor $\xi_{ij}$ as, \begin{eqnarray*} e_{ij} &=& \frac{1}{2}\left(\pdt{u_i}{x_j} + \pdt{u_j}{x_i}\right) \\ \xi_{ij} &=& \frac{1}{2}\left(\pdt{u_i}{x_j} - \pdt{u_j}{x_i}\right) \end{eqnarray*} \item The symmetric tensor $e_{ij}$ can be diagonalized by choosing an set of coordinate axes, appropriately rotated with respect to the original ones. 
It then takes the form
\[
\begin{pmatrix}
a_1 & 0 & 0 \\
0 & a_2 & 0 \\
0 & 0 & a_3
\end{pmatrix}
\]
The term $r_j e_{ij}$, in this set of axes, becomes
\[
\begin{pmatrix}
a_1 & 0 & 0 \\
0 & a_2 & 0 \\
0 & 0 & a_3
\end{pmatrix}
\begin{pmatrix}
r_1^\op \\ r_2^\op \\ r_3^\op
\end{pmatrix} =
\begin{pmatrix}
a_1 r_1^\op \\ a_2 r_2^\op \\ a_3 r_3^\op
\end{pmatrix}
\]
The contribution $a_i r_i^\op$ (no summation implied) to $\delta{u}_i$ means that the fluid elements at $\vec{x}$ and $\vec{x} + \vec{r}$, which could have been identical if the fluid were not in motion, now differ in shape. The one at $\vec{x} + \vec{r}$ is elongated by a factor of $a_i$ along the $r_i$ axis. Thus a fluid element that is initially spherical becomes ellipsoidal. If the fluid is incompressible, the volume of the ellipsoid is the same as that of the sphere. If the fluid is compressible, then we can decompose $e_{ij}$ as
\[
\frac{e_{ii}}{3}\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix} +
\begin{pmatrix}
e_{11} - (e_{ii}/3) & e_{12} & e_{13} \\
e_{21} & e_{22} - (e_{ii}/3) & e_{23} \\
e_{31} & e_{32} & e_{33} - (e_{ii}/3)
\end{pmatrix},
\]
where the first (isotropic) term contributes to change in volume while the second (deviatoric) term contributes to pure straining motion.
\item The anti-symmetric tensor $\xi_{ij}$ is
\[
\frac{1}{2}\begin{pmatrix}
0 & \partial_2 u_1 - \partial_1 u_2 & \partial_3 u_1 - \partial_1 u_3 \\
\partial_1 u_2 - \partial_2 u_1 & 0 & \partial_3 u_2 - \partial_2 u_3 \\
\partial_1 u_3 - \partial_3 u_1 & \partial_2 u_3 - \partial_3 u_2 & 0
\end{pmatrix},
\]
where the notation $\partial_i u_j$ means
\[
\pdt{u_j}{x_i}
\]
We observe that it has only three independent elements (unlike $6$ in the case of $e_{ij}$) and hence can be written as $\xi_{ij} = -\epsilon_{ijk}\omega_k/2$, where $\omega_k$ are components of the local vorticity vector $\vec{\omega}$. Therefore, the contribution to $\delta\vec{u}$ is
\[
r_j\xi_{ij} = -\frac{1}{2}\epsilon_{ijk}r_j\omega_k,
\]
which in Gibbs form is
\[
-\frac{1}{2}\vec{r}\vp\vec{\omega} = \frac{\vec{\omega}}{2}\vp\vec{r}
\]
This is the same as the velocity of a rigid body rotating with an angular velocity $\vec{\omega}/2$. (Recall the relation $\vec{v} = \vec{\omega}\vp\vec{r}$ between the tangential velocity $\vec{v}$ and the angular velocity $\vec{\omega}$ in uniform circular motion.)
\item Since $\vec{\omega} = \curl{\vec{u}}$, the vorticity is twice the effective angular velocity of the fluid. We explain it as follows. Consider a small, circular loop of radius $a$ around $\vec{x}$, with unit normal $\un$. The tangential component of velocity averaged along the circumference of the loop is
\[
\frac{1}{2\pi a}\oint\vec{u}\cdot d\vec{r}
\]
Dividing further by $a$,
\[
\frac{1}{2\pi a^2}\oint\vec{u}\cdot d\vec{r} = \frac{1}{2\pi a^2}\int\curl{\vec{u}}\cdot\un dA \approx \frac{1}{2\pi a^2} \left(\curl{\vec{u}}\right)\cdot\un\, \pi a^2 = \frac{\curl{\vec{u}}\cdot\un}{2}
\]
\item Consider a spherical element of fluid at $\vec{x} + \vec{r}$. It has a velocity
\[
\vec{u}(\vec{x} + \vec{r}) = \vec{u}(\vec{x}) + \vec{r}\cdot\nabla\vec{u} + O(r^2)
\]
Therefore, it has an angular momentum,
\begin{equation}\label{c2s3e1}
\vec{L} = \int \vec{r}\vp\left(\vec{u} + \vec{r}\cdot\nabla\vec{u}\right)\rho dV,
\end{equation}
about the point $\vec{x}$.
In Cartesian tensor notation, it is \begin{equation}\label{c2s3e2} L_i = \int\epsilon_{ijk} r_j\left(u_k + r_l\pdt{u_k}{x_l}\right)\rho dV \end{equation} Consider the first term on the right hand side, \[ \vec{L}_1 = \int \vec{r}\vp\vec{u}\rho dV \] If the fluid element is small, $\vec{u}$ and $\rho$ are almost constant throughout it. Therefore, we have, \[ \vec{L}_1 = -\rho\vec{u}\vp\int\vec{r}dV \] Since $\vec{r} = x\uvec{x} + y\uvec{y} + z\uvec{z}$, when expressed in spherical polar coordinates, $\vec{r}dV$ is \[ \left(r\sin\theta\cos\varphi\uvec{x}+r\sin\theta\sin\varphi\uvec{y}+r\cos\theta\uvec{z}\right)r^2\sin\theta dr d\theta d\varphi \] or, \begin{eqnarray*} \vec{L}_1 &=& -\rho\vec{u}\vp\uvec{x}\int_0^R\int_0^{\pi}r^3\sin^2\theta dr d\theta \int_0^{2\pi}\cos\varphi d\varphi + \\ & & -\rho\vec{u}\vp\uvec{y}\int_0^R\int_0^{\pi}r^3\sin^2\theta dr d\theta \int_0^{2\pi}\sin\varphi d\varphi + \\ & & -2\pi\rho\vec{u}\vp\uvec{z}\int_0^R r^3 dr \int_0^{\pi} \sin\theta\cos\theta d\theta \\ &=& 0 + 0 - \frac{\pi\rho R^4}{2}\vec{u}\vp\uvec{z} \int_0^{\pi} \frac{\sin 2\theta}{2} d\theta \\ &=& 0 \end{eqnarray*} The second term on the right hand side of \eqref{c2s3e1} can be best manipulated in Cartesian tensor form, \[ {L_2}_i = \int\epsilon_{ijk} r_jr_l\pdt{u_k}{x_l} \rho dV \] Once again, under the assumption of a small fluid element, \begin{equation}\label{c2s3e3} {L_2}_i = \epsilon_{ijk}\pdt{u_k}{x_l} \int r_j r_l \rho dV = \epsilon_{ijk}\pdt{u_k}{x_l} I_{jl}, \end{equation} where $I_{ij}$ is the moment of inertia tensor of the fluid element about any axis passing through the center. \item It can be shown that for a spherical fluid element, \[ I_{ij} = \int \rho r_j r_l dV, \] is a diagonal tensor. This is because the \enquote*{product of inertia} terms $I_{xy}, I_{yz}$ and $I_{zx}$ are all zero. We will show the proof for one of them, the rest are similar. \begin{eqnarray*} I_{xy} &=& \int xy\rho dV \\ &=& \int_0^R\int_0^\pi\int_0^{2\pi} r^4\sin^3\theta\sin\phi\cos\varphi dr d\theta d\varphi \\ &=& \int_0^R\int_0^\pi r^4\sin^3\theta dr d\theta \int_0^{2\pi} \frac{\sin 2\varphi}{2}d\varphi \\ &=& \int_0^R\int_0^\pi r^4\sin^3\theta dr d\theta \times 0 \\ &=& 0 \end{eqnarray*} Thus, \begin{equation}\label{c2s3e4} I_{xy} = I_{yz} = I_{zx} = 0 \end{equation} \item We will next show that the \enquote*{moment of inertia} terms $I_{xx}, I_{yy}, I_{zz}$ are all equal to $I/2$, where $I$ is the moment of inertia of the sphere through any axis passing through the center. Thus, \begin{eqnarray*} I_{xx} &=& \int x^2 \rho dV \\ &=& \rho\int_0^R\int_0^\pi\int_0^{2\pi} r^4\sin^3\theta\cos^2\varphi dr d\theta d\varphi \\ &=& \rho\int_0^R r^4 dr \int_0^\pi \sin^3\theta d\theta \int_0^{2\pi}\cos^2\varphi d\varphi \\ &=& \rho\frac{R^5}{5} \frac{4}{3} \pi \\ &=& \rho\frac{4\pi}{15}R^5 \\ &=& \frac{MR^2}{5}, \end{eqnarray*} where $M$ is the mass of the fluid element. We can similarly show that \[ I_{yy} = I_{zz} = \frac{MR^2}{5} \] Now, the moment of inertia of a sphere of mass $M$ and radius $R$ about any axis passing through the center is $I = 2MR^2/5$. 
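The value $MR^2/5$ is easy to corroborate numerically. The short R sketch below (an illustration; the unit radius and unit density are arbitrary choices) estimates $\rho\int x^2 dV$ over the sphere by Monte Carlo sampling and compares it with $MR^2/5$.
\begin{verbatim}
# Monte Carlo estimate of Ixx = rho * integral of x^2 over a unit sphere.
set.seed(1)
n <- 1e6
p <- matrix(runif(3 * n, -1, 1), ncol = 3)   # points in the enclosing cube
inside <- rowSums(p^2) <= 1                  # keep those inside the sphere
rho <- 1
M <- rho * 4 * pi / 3                        # mass of the unit sphere
Ixx.mc <- M * mean(p[inside, 1]^2)           # rho * V * E[x^2] = M * E[x^2]
c(Ixx.mc, M / 5)                             # both approximately M R^2 / 5
\end{verbatim}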
Therefore,
\begin{equation}\label{c2s3e5}
I_{xx} = I_{yy} = I_{zz} = \frac{I}{2}
\end{equation}
\item From equations \eqref{c2s3e4} and \eqref{c2s3e5},
\[
I_{jl} = \frac{I}{2}\delta_{jl}
\]
Therefore, from \eqref{c2s3e3},
\[
{L_2}_i = \frac{1}{2}\epsilon_{ijk}\pdt{u_k}{x_j}I = \frac{1}{2}\omega_i I,
\]
which, in Gibbs notation, is
\[
\vec{L}_2 = \frac{1}{2}I\vec{\omega}
\]
Since $\vec{L}_1 = 0$, we observe that the total angular momentum of the spherical fluid element is
\[
\vec{L} = \frac{1}{2}I\vec{\omega}
\]
This is the angular momentum the spherical element would have had if it were rotating as a rigid body with angular velocity $\vec{\omega}/2$.
\item Since
\[
\vec{u}(\vec{x} + \vec{r}) = \vec{u}(\vec{x}) + \vec{r}\cdot\grad{\vec{u}}
\]
or
\[
u_i(\vec{x} + \vec{r}) = u_i(\vec{x}) + r_j e_{ij} + r_j \xi_{ij}
\]
we can write $r_j e_{ij} = \partial\Phi/\partial r_i$, where
\[
\Phi = \frac{1}{2}r_i e_{ij} r_j
\]
We also saw that
\[
r_j \xi_{ij} = \frac{1}{2}\epsilon_{ijk}\omega_{j} r_k
\]
Therefore,
\begin{equation}\label{c2s3e6}
\vec{u}(\vec{x} + \vec{r}) = \vec{u}(\vec{x}) + \grad{\Phi} + \frac{\vec{\omega}}{2} \vp \vec{r}
\end{equation}
The three terms on the right hand side are contributions due to
\begin{itemize}
\item Uniform translation
\item Pure straining motion
\item Rigid body rotation
\end{itemize}
\item We will show that for a simple shearing motion, the principal rates of strain are
\[
\frac{1}{2}\pdt{u_1}{x_2}, -\frac{1}{2}\pdt{u_1}{x_2}, 0
\]
Since
\[
\delta\vec{u} = r_2\pdt{u_1}{x_2}\uvec{1},
\]
the rate of strain tensor is
\[
\begin{pmatrix}
0 & a & 0 \\
a & 0 & 0 \\
0 & 0 & 0
\end{pmatrix} = a\begin{pmatrix}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}
\]
where
\[
a = \frac{1}{2}\pdt{u_1}{x_2}
\]
Using the secular equation for the tensor $e_{ij}$, we readily observe that its eigenvalues are $a, -a, 0$. Therefore, the diagonalized form of the tensor is
\[
\begin{pmatrix*}[r]
a & 0 & 0 \\
0 & -a & 0 \\
0 & 0 & 0
\end{pmatrix*}
\]
Thus, the principal rates of strain are $a, -a, 0$ or,
\[
\frac{1}{2}\pdt{u_1}{x_2}, -\frac{1}{2}\pdt{u_1}{x_2}, 0
\]
Further, the normalized eigenvectors corresponding to these eigenvalues are
\[
\frac{1}{\sqrt{2}}\begin{pmatrix*}[r] 1 \\ 1 \\ 0 \end{pmatrix*},
\frac{1}{\sqrt{2}}\begin{pmatrix*}[r] 1 \\ -1 \\ 0 \end{pmatrix*},
\begin{pmatrix}0 \\ 0 \\ 1 \end{pmatrix}
\]
Eigenvalues and eigenvectors can be calculated in R using the following code.
\begin{verbatim}
e <- matrix(c(0, 1, 0,
              1, 0, 0,
              0, 0, 0), 3, 3, byrow = T)
eigen(e)$values
eigen(e)$vectors
\end{verbatim}
\item From equation \eqref{c2s3e6} we observe that any relative velocity field can be written as
\[
\delta\vec{u} = \vec{u}(\vec{x} + \vec{r}) - \vec{u}(\vec{x}) = \grad{\Phi} + \frac{\vec{\omega}}{2} \vp \vec{r},
\]
which is a sum of a pure straining motion and a rigid body rotation. The pure straining motion can itself be written as a symmetrical expansion (or contraction) and a simple shearing motion. This follows from the fact that any tensor can be written as a sum of an isotropic tensor and a deviatoric tensor.
For example, \[ \begin{pmatrix} e_{11} & e_{12} & e_{13} \\ e_{21} & e_{22} & e_{23} \\ e_{31} & e_{32} & e_{33} \end{pmatrix} = \frac{e_{ii}}{3}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} + \begin{pmatrix} e_{11} - \frac{e_{ii}}{3} & e_{12} & e_{13} \\ e_{21} & e_{22} - \frac{e_{ii}}{3} & e_{23} \\ e_{31} & e_{32} & e_{33} - \frac{e_{ii}}{3} \end{pmatrix} \] The first term on the right hand side describes a symmetrical expansion (or contraction) while the second term is describes pure shear. Pure shear keeps the volume unchanged. This is because, the dilatation is given by trace of the strain tensor \footnote{Refer to the Wikipedia article on \href{https://en.wikipedia.org/wiki/Infinitesimal_strain_theory\#Volumetric_strain}{Volumetric strain}}. Therefore, the trace of rate of strain tensor will give the rate of dilatation. Since the trace of the second tensor on the right hand side is zero, we conclude that it keeps the volume of a fluid element unchanged. \end{itemize} \section{Velocity distribution with specified rate of expansion and vorticity}\label{c2s4} \begin{itemize} \item We saw in \eqref{c2s2e1} that the divergence of the velocity field is the local rate of expansion of the fluid. Further, the curl of the velocity field was defined to be the local vorticity. The goal of this section is to determine $\vec{u}$, given $\Delta = \dive{\vec{u}}$ and $\vec{\omega} = \curl{\omega}$. \item Let us begin with finding the velocity $\vec{u}_e$ when divergence is given and curl is zero. The subscript \enquote*{e} denotes that the rate of expansion is specified. Since $\curl{\vec{u}_e} = 0$, there is a scalar function $\phi(\vec{x})$ such that $\vec{u}_e = \grad{\phi}$. Therefore, $\dive{\vec{u}_e} = \Delta(\vec{x}) \Rightarrow \nabla^2\phi = \Delta(\vec{x})$. We show in point \ref{c2sa2} of appendix that the solution of Poisson's equation $\nabla^2\phi = \Delta(\vec{x})$ is \[ \phi(\vec{x}) = - \frac{1}{4\pi}\int_{-\infty}^{\infty}\frac{\Delta(\vec{x}^\op)}{|\vec{x}-\vec{x}^\op|} dV^\op \] Therefore, \[ \vec{u}_e = \grad{\phi} = \frac{1}{4\pi}\int_{-\infty}^{\infty} \frac{\vec{x} - \vec{x}^\op}{|\vec{x}-\vec{x}^\op|^3}\Delta(\vec{x}^\op)dV^\op, \] where we have used the result, \[ \grad\frac{1}{|\vec{x}-\vec{x}^\op|} = -\frac{\vec{x} - \vec{x}^\op}{|\vec{x}-\vec{x}^\op|^3} \] This expression is similar to that of electrostatic field (in gaussian units) due to a volume charge density. \item We interpret \[ \delta\vec{u}_e(\vec{x}) = \frac{\vec{x} - \vec{x}^\op}{|\vec{x}-\vec{x}^\op|^2}\frac{\Delta(\vec{x}^\op)}{4\pi|\vec{x}-\vec{x}^\op|}\delta V^\op \] as the irrotational velocity distribution \enquote{due to} a source $\Delta(\vec{x}^\op)$ in a volume element $\delta V^\op$ around the point $\vec{x}^\op$. If we take the divergence of this equation, \[ \dive{\delta\vec{u}_e(\vec{x})} = \frac{\Delta(\vec{x}^\op)\delta V^\op}{4\pi}\dive{\frac{\vec{x} - \vec{x}^\op}{|\vec{x}-\vec{x}^\op|^3}} = \Delta(\vec{x}^\op)\delta V^\op \delta(\vec{x} - \vec{x}^\op) \] where we have used the result\footnote{See chapter 1 of 'Introduction to Electrodyamics' by David Griffiths\cite{griffiths1999introduction}.} \[ \dive{\frac{\vec{x} - \vec{x}^\op}{|\vec{x}-\vec{x}^\op|^3}} = 4\pi\delta(\vec{x} - \vec{x}^\op) \] The velocity field $\delta\vec{u}_e$ has a non-zero rate of expansion only at $\vec{x}^\op$. We can interpret it like the velocity field due to a \enquote*{point source}. \item Let us begin with finding the velocity $\vec{u}_v$ when curl is given and divergence is zero. 
The subscript \enquote*{v} denotes that the vorticity is specified. Since $\dive{\vec{u}_v} = 0$, there is a vector function $\vec{B}(\vec{x})$ such that $\vec{u}_v = \curl{\vec{B}}$. Therefore, \[ \vec{\omega} = \curl{\vec{u}_v} = \curl{\curl{\vec{B}}} = \grad{(\dive{\vec{B}})} - \nabla^2\vec{B} \] We can choose the vector $\vec{B}$ such that its divergence is zero\footnote{This is analogous to Coulomb gauge in electrodynamics.}, so that \[ \vec{\omega} = -\nabla^2\vec{B} \] which is same as \[ \nabla^2 B_i = -\omega_i, \] We once again have Poisson equation, whose solution we found to be \[ B_i(\vec{x}) = -\frac{1}{4\pi}\int_{-\infty}^{\infty}\frac{-\omega_i(\vec{x}^\op)}{|\vec{x}-\vec{x}^\op|} dV^\op \] or \[ \vec{B}(\vec{x}) = \frac{1}{4\pi}\int_{-\infty}^{\infty}\frac{\vec{\omega}(\vec{x}^\op)}{|\vec{x}-\vec{x}^\op|} dV^\op \] This equation is analogous to the expression for magnetic vector potential due to a given current density. Further, \[ \vec{u}_v = \curl{\vec{B}} = \frac{1}{4\pi}\int_{-\infty}^{\infty}\vec{\omega}(\vec{x}^\op)\vp\grad{\frac{1}{|\vec{x}-\vec{x}^\op|}} dV^\op \] or \begin{eqnarray*} \vec{u}_v &=& \frac{1}{4\pi}\int_{-\infty}^{\infty}\vec{\omega}(\vec{x}^\op)\vp\frac{\vec{x} - \vec{x}^\op}{|\vec{x}-\vec{x}^\op|^3} dV^\op \\ &=& -\frac{1}{4\pi}\int_{-\infty}^{\infty} \frac{(\vec{x} - \vec{x}^\op) \vp \vec{\omega}(\vec{x}^\op)}{|\vec{x}-\vec{x}^\op|^3} dV^\op \end{eqnarray*} Despite its analogy with the expression for the magnetic field produced due to a given current density, there is no \enquote*{cause and effect} relation between $\vec{u}_v$ and $\vec{\omega}$. In the context of fluid dynamics, this is just a kinematical relation. \item Defining $\vec{s} = \vec{x}-\vec{x}^\op$, we can interpret the above result as the sum of contribution from different volume elements of the fluid, each element contributing \[ \delta\vec{u}_v = -\frac{\vec{s} \vp \vec{\omega}(\vec{x}^\op)}{4\pi s^3}\delta V^\op \] However, unlike the case of $\delta\vec{u}_e$, this cannot be velocity \enquote*{due to} a vortex of strength $\vec{\omega}$ in volume $\delta V^\op$ around $\vec{x}^\op$ because such a vorticity field will not have a zero divergence. \item If $\vec{u}$ is a velocity field with a given divergence and curl then it can be written as $\vec{u} = \vec{u}_e + \vec{u}_v + \vec{v}$, where $\vec{v}$ is solenoidal as well as irrotational. It is determined by the boundary conditions imposed on $\vec{u}$. \end{itemize} \section{Singularities in the rate of expansion} \begin{itemize} \item The velocity \enquote*{due to} a given rate of expansion $\Delta$ is \[ \vec{u}_e = \grad{\phi} = \frac{1}{4\pi}\int_{-\infty}^{\infty} \frac{\vec{x} - \vec{x}^\op}{|\vec{x}-\vec{x}^\op|^3}\Delta(\vec{x}^\op)dV^\op \] Let the function $\Delta$ be such that it is non zero only in a small volume $\epsilon$ around the point $\vec{x}^\op$. In that case, the above integral can be approximated as, \[ \vec{u}_e = \frac{1}{4\pi}\frac{\vec{s}}{s^3}\int_{\epsilon} \Delta(\vec{x}^\tp) dV(\vec{x}^\tp), \] where $\vec{s} = \vec{x} - \vec{x}^\op$. If we now allow $\epsilon$ to shrink as small as possible while leaving the value of the integral \[ \int_{\epsilon} \Delta(\vec{x}^\tp) dV(\vec{x}^\tp) \] constant, say $m$, then we have, \[ \vec{u}_e = \frac{1}{4\pi} \frac{m\vec{s}}{s^3} \] Thus the velocity can be attributed to a \enquote*{point source} of strength $m$. 
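The strength $m$ has a direct interpretation as the volume flux out of any closed surface surrounding the source. The R sketch below (an illustration; the values of $m$ and the radius $R$ are arbitrary) integrates $\vec{u}_e\cdot\un$ over a sphere of radius $R$ centred on the source and recovers $m$.
\begin{verbatim}
# Flux of the point-source velocity u_e = m s / (4 pi s^3)
# through a sphere of radius R centred on the source.
m <- 2.5
R <- 1.3
n.th <- 400; n.ph <- 800
th <- (seq_len(n.th) - 0.5) * pi / n.th       # midpoint rule in theta
ph <- (seq_len(n.ph) - 0.5) * 2 * pi / n.ph   # and in phi
u.r <- m / (4 * pi * R^2)                     # radial velocity on the sphere
dA <- outer(sin(th), rep(1, n.ph)) * R^2 * (pi / n.th) * (2 * pi / n.ph)
sum(u.r * dA)                                 # approximately equal to m
\end{verbatim}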
For such a point source, the velocity potential is \[ \phi = -\frac{1}{4\pi} \frac{m}{s} \] The two preceeding equations are analogous to those for electric field and electric potential due to a point charge, if we use gaussian units. \item {\color{red}{The remark that the flow on one side of a rigid plane containing a small hole through which fluid is sucked at the rate $M$ units of volume per second is approximately the same as the flow produced in an unbounded fluid by a point source of strength $-M$ is not clear to me.}} \item A source doublet at a point $\vec{x}^\op$ is a point source of strength $m$ at $\vec{x}^\op + \delta\vec{x}^\op/2$ and a sink of strength $-m$ at $\vec{x}^\op - \delta\vec{x}^\op/2$ such that \[ \vec{\mu} = \lim_{\delta{x}^\op \rightarrow 0} m\delta\vec{x}^\op, \] remains finite. This is analogous to the definition of a point electric dipole. The potential at $\vec{x}$ due to a source doublet at $\vec{x}^\op$ is \[ \phi_e(\vec{x}) = \lim_{\delta{x}^\op \rightarrow 0} \frac{m}{4\pi} \left(\frac{1}{|\vec{x} - \vec{x}^\op + \delta\vec{x}^\op/2|} - \frac{1}{|\vec{x} - \vec{x}^\op - \delta\vec{x}^\op/2|}\right) \] If we write $\vec{s} = \vec{x} - \vec{x}^\op$ then \[ \frac{1}{|\vec{x} - \vec{x}^\op + \delta\vec{x}^\op/2|} = \frac{1}{|\vec{s} + \delta\vec{x}^\op/2|} = \frac{1}{\sqrt{s^2 + \vec{s}\cdot\delta\vec{x}^\op}}, \] where the last expression is true up to first order in $\delta{x}$. Thus, \[ \frac{1}{|\vec{x} - \vec{x}^\op + \delta\vec{x}^\op/2|} = \frac{1}{s}\left(1 + \frac{\vec{s}\cdot\delta\vec{x}^\op}{s^2}\right)^{-1/2} \] Once again, using Taylor expansion to first order in $\delta{x}$, \[ \frac{1}{|\vec{x} - \vec{x}^\op + \delta\vec{x}^\op/2|} = \frac{1}{s}\left(1 - \frac{1}{2}\frac{\vec{s}\cdot\delta\vec{x}^\op}{s^2}\right) = \frac{1}{s} - \frac{1}{2}\frac{\vec{s}\cdot\delta\vec{x}^\op}{s^3} \] Similarly, \[ \frac{1}{|\vec{x} - \vec{x}^\op - \delta\vec{x}^\op/2|} = \frac{1}{s} + \frac{1}{2}\frac{\vec{s}\cdot\delta\vec{x}^\op}{s^3} \] and hence, \[ \phi_e(\vec{x}) = -\lim_{\delta{x}^\op \rightarrow 0} \frac{m}{4\pi}\frac{\vec{s}\cdot\delta\vec{x}^\op}{s^3} \] Since \[ \grad{\left(\frac{1}{s}\right)} = -\frac{\vec{s}}{s^3} \] we have, \[ \phi_e(\vec{x}) = \frac{\vec{\mu}}{4\pi}\cdot\grad{\left(\frac{1}{s}\right)} \] \item To get the velocity due to a source doublet, we use the results, \begin{eqnarray*} \grad{\vec{s}} &=& \uvec{1}\uvec{1} + \uvec{2}\uvec{2} + \uvec{3}\uvec{3} \\ \grad{\left(\frac{1}{s^3}\right)} &=& -3\frac{\vec{s}}{s^5} \end{eqnarray*} so that \[ \vec{u}_e(\vec{x}) = \frac{1}{4\pi}\left(-\frac{\vec{\mu}}{s^3} + 3 \frac{\vec{\mu}\cdot\vec{s}}{s^5}\vec{s}\right) \] Once again, this expression is similar to electric field at $\vec{x}$ due to a point dipole at $\vec{x}^\op$. \item If $\vec{\mu}$ is along the $z$-axis then $\vec{u}_e$ has no dependence on $\phi$ (in both cylindrical and spherical polar coordinates). That is what is meant by the statement that $\vec{u}_e$ is axi-symmetric. \item If $\vec{x}^\op = 0$, $\vec{s} = \vec{x}$. Further, the radial component of the velocity is $\vec{u}_e\cdot\vec{s} = \vec{u}_e\cdot\vec{x}$. 
Therefore, \[ \vec{u}_e(\vec{x})\cdot\vec{x} = \frac{1}{4\pi}\left(-\frac{\vec{\mu}}{x^3} + 3 \frac{\vec{\mu}\cdot\vec{x}}{x^5}\vec{x}\right)\cdot\vec{x} = \frac{1}{2\pi}\frac{\vec{\mu}\cdot\vec{x}}{x^3} \] If $\uvec{x}$ is the unit vector along the radial direction (in spherical polar coordinates), then the radial component of $\vec{u}_e$ is \[ \frac{\vec{u}_e(\vec{x})\cdot\vec{x}}{x} = \frac{1}{2\pi}\frac{\vec{\mu}\cdot\vec{x}}{x^4} = \frac{1}{2\pi}\frac{{\mu}\cos\theta}{x^3} \] \item The stream function for a source doublet is given in equation (2.5.5) of the book. To plot it, we write it in Cartesian coordinates. Thus, \begin{eqnarray*} s &=& x^2 + y^2 + z^2 \\ \sin\theta &=& \frac{x^2 + y^2}{s^2} \end{eqnarray*} and hence \[ \psi = \frac{\mu}{4\pi}\frac{x^2 + y^2}{(x^2 + y^2 + z^2)^{3/2}} \] We plot the function in the $XZ$ plane in which $y = 0$. Figure \ref{c2f2} shows the streamlines. \begin{figure}[!ht] \centering \centerline{\includegraphics[scale=.5]{c2f2}} \caption{Streamlines in an axial plane} \label{c2f2} \end{figure} It was generated using the following R code. \begin{verbatim} # To draw the stream lines as in the textbook, # interchange x and z axes. The diagram is # drawn in XZ plane. z <- seq(from = -2, to = 2, len = 1000) x <- seq(from = 1e-6, to = 2, len = 1000) f <- function(x, z) { x^2/(x^2 + z^2)^(3/2) } psi <- outer(x, z, FUN = 'f') # Values of level curves lv <- c(1e-6, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10) # To draw the contours with z axis set # horizontally, we transpose the psi. Xl <- c(0.4, 0.6) Yl <- c(0.0, 0.25) contour(t(psi), levels = lv, xlim = Xl, ylim = Yl) title('Streamlines of a source doublet') \end{verbatim} A few points to note: \begin{itemize} \item We don't have to plot $\vec{u}_e$. Plotting $\psi$ suffices to get a visual impression of the flow. \item The plot of streamlines does not give us the direction of the flow. For an axisymmetric flow, as this one, the velocity in spherical polar coordinates is given by equation (2.2.14) as \[ \vec{u}_e = \frac{1}{s^2\sin\theta}\pdt{\psi}{\theta}\uvec{s} - \frac{1}{s\sin\theta}\pdt{\psi}{s}\uvec{\theta}, \] where for sake of consistency, we have used $s$ instead of $r$. \item The source doublet at $\vec{x}^\op$ comprised of a point source of strength $m$ at $\vec{x}^\op + \delta\vec{x}^\op/2$ and a sink of strength $-m$ at $\vec{x}^\op - \delta\vec{x}^\op/2$. Thus, the direction of $\vec{\mu}$ is along the positive $z$ axis, shown horizontal in figure \ref{c2f2}. Therefore, the direction of the velocity will be anti-clockwise along the streamlines. \end{itemize} \item Now let us consider a source doublet of strength $\vec{\mu}$ placed at $\vec{x}^\op + \delta\vec{x}^\op/2$ and that of strength $-\vec{\mu}$ placed at $\vec{x}^\op - \delta\vec{x}^\op/2$. The potential due to them is \[ \phi_e(\vec{x}) = \frac{\vec{\mu}}{4\pi}\cdot\grad{\left(\frac{1}{\vec{x} - \vec{x}^\op - \delta\vec{x}^\op/2}\right)} - \frac{\vec{\mu}}{4\pi}\cdot\grad{\left(\frac{1}{\vec{x} - \vec{x}^\op + \delta\vec{x}^\op/2}\right)} \] which, in Cartesian tensor notation, and some simplification gives, \[ \phi_e(\vec{x}) = \frac{\mu_i}{4\pi}\frac{\partial}{\partial x_i}\left(\frac{1}{\vec{s} - \delta\vec{x}^\op/2} - \frac{1}{\vec{s} + \delta\vec{x}^\op/2}\right) \] which is, \[ \phi_e(\vec{x}) = \frac{\mu_i}{4\pi}\frac{\partial}{\partial x_i}\frac{\vec{s}\cdot\delta\vec{x}^\op}{s^3} = \frac{\mu_i}{4\pi}\frac{\partial}{\partial x_i}\frac{s_j\delta x_j}{s^3} \] Let $\nu_{ij} = \mu_i \delta x_j$. 
Using the fact that \[ \grad{\left(\frac{1}{s}\right)} = -\frac{\vec{s}}{s^3} \] we get \[ \phi_e(\vec{x}) = -\frac{\nu_{ij}}{4\pi}\frac{\partial^2}{\partial x_i \partial x_j}\left(\frac{1}{s}\right) \] This expression is analogous to the potential at $\vec{x}$ due to an electric quadrupole at $\vec{x}^\op$. The quadrupole moment tensor is $\nu_{ij}$. \item There is another way to get equations (2.5.7) and (2.5.8) of the book. Consider an infinite line of point sources with density $m$. Without loss of generality, we can make the $z$ axis coincide it. We want to find the velocity at a point $(x, y, 0)$ on the $XY$ plane. Consider a cylindrical surface with axis along the $z$ axis and passing through $(x, y, 0)$. Let it have a length $L$.=such that the $z$ coordinate ranges from $-L/2$ to $L/2$. Integrate $\dive{\vec{u}_e} = \Delta$ over the volume of this imaginary surface. Thus, \[ \int\dive{\vec{u}_e}\un dV = \int\Delta dV \] or, since $\Delta = m\delta(x)\delta(y)$, \[ \int\vec{u}_e\cdot\un dA = \int m\delta(x)\delta(y) dx dy dz = \int_{-L/2}^{L/2} mdz = mL \] Since $\vec{u}_e$ is only in the radial direction, the surface integral on the left hand side across the upper and lower surfaces is zero. (The unit vector $\un$ on these surfaces is perpendicular to $\vec{u}_e$.) On the curved surface, $\vec{u}_e$ and $\un$ are parallel to each other and hence, \[ u_e 2\pi\sigma L = mL \] or that \[ u_e = \frac{m}{2\pi} \frac{1}{\sigma} \] or, remembering that $\vec{u}_e$ is along the radial direction, \[ \vec{u}_e = u_e \uvec{\sigma} = u_r \frac{\vec{\sigma}}{\sigma} = u_e \frac{x\uvec{x} + y\uvec{y}}{\sqrt{x^2 + y^2}} = \frac{x\uvec{x} + y\uvec{y}}{\sigma} \] or \[ \vec{u}_e = \frac{m}{2\pi} \left(\frac{x}{\sigma}\uvec{x} + \frac{y}{\sigma}\uvec{y}\right) \] Since $\vec{u}_e = \grad{\phi}$ and if the point $A$ is a \enquote*{reference point} for the potential then, \[ \int_A^{\sigma} \vec{u}_e\cdot d\vec{x}^\op = \int_A^{\sigma} \grad{\phi}\cdot d\vec{x}^\op \] or \[ \frac{m}{2\pi} \ln\sigma - \frac{m}{2\pi} \ln\sigma_A = \phi(\sigma) - \phi(\sigma_A) \] This technique is similar to that of finding electric field due to an infinite line of charges, distributed uniformly. \end{itemize} \section{The vorticity distribution}\label{c2s6} \begin{itemize} \item Unlike the velocity field, the vorticity is always solenoidal. A vortex line is an imaginary line in the fluid which is always tangential to the vorticity field. If $d\vec{x}$ is a small element of the vortex line, then because it is always tangential to $\vec{\omega}$, $\vec{\omega} \vp d\vec{x} = 0$. That is, \[ \uvec{x}(\omega_y dz - \omega_z dy) + \uvec{y}(\omega_z dx - \omega_x dz) + \uvec{z}(\omega_x dy - \omega_y dx) = 0 \] or \[ \frac{dx}{\omega_x} = \frac{dy}{\omega_y} = \frac{dz}{\omega_z} \] is the equation of the vortex lines. A surface in the fluid formed by all vortex lines passing through an irreducible curve in the fluid is called a vortex tube. \item A vortex tube cannot end in the interior of the fluid. For if it did, then if $C$ is the curve at the end of the vortex tube and $A$ is the area enclosed by it, \[ \int_A \vec{\omega}\cdot\un dA = 0 \] Let $C^\op$ be a curve which coincides with the cross section of the vortex tube, a little behind its end. Let $A^\op$ is the area enclosed by it. Let $S$ be the curved area between $C^\op$ and $C$. 
Then, if $V$ is the volume bound by $A$, $A^\op$ and $S$, \[ \int_V \dive{\vec{\omega}} dV = \int_{A} \vec{\omega}\cdot\un dA + \int_{S} \vec{\omega}\cdot\un dA + \int_{A^\op} \vec{\omega}\cdot\un dA \] Of the three integrals on the right hand side of the above equation, the first one is zero because the vortex tube ends there, the second one is zero because the surface is tangential to the vorticity field and the third one is non-zero. Therefore, \[ \int_V \dive{\vec{\omega}} dV \ne 0 \] violating the fact that $\vec{\omega}$ is always solenoidal. \item An immediate consequence of the above point is that vortex lines are closed curves. (The way lines of magnetic field are closed curves.) \item It is also true that \[ \int_A \vec{\omega}\cdot\un dA \] is the same across all cross sections of a vortex tube. It is thus a characteristic of the vortex tube and is called the strength of the tube. \item Let $A$ be an open surface inside a fluid, bound by a closed curve $C$. The quantity, \[ \int\vec{\omega}\cdot\un dA = \oint_C \vec{u}\cdot d\vec{x}, \] is called the circulation round $C$. \item Consider a flow in which the vorticity is quite high only in the neighborhood of a line. Consider a vortex tube enclosing the line. Shrink its cross section such that as its cross section goes to zero, the integral \[ \int_A \vec{\omega}\cdot\un dA \] remains constant, say $\kappa$. Such a singularity is called a vortex line. Since \[ \int_A \vec{\omega}\cdot\un dA = \kappa, \] if $\delta\vec{l}$ is a line element along the line vortex, \[ \delta\vec{l}\int_A \vec{\omega}\cdot\un dA = \kappa\delta\vec{l}, \] But $\delta\vec{l}\cdot\un dA = dV$ and hence, \[ \int_{\delta V} \vec{\omega} dV = \kappa\delta\vec{l} \] That is, for a small volume element, $\vec{\omega}\delta V = \kappa\delta\vec{l}$. Since \[ \vec{u}_v = -\frac{1}{4\pi}\int\frac{\vec{s} \vp \vec{\omega}(\vec{x}^\op)}{s^3} dV(\vec{x}^\op), \] for a vortex line of strength $\kappa$, we have \begin{equation}\label{c2s6e1} \vec{u}_v = -\frac{\kappa}{4\pi}\int \frac{\vec{s} \vp d\vec{l}(\vec{x}^\op)}{s^3} \end{equation} \item Consider an infinitely long line vortex of strength $\kappa$. Referring to figure 2.6.1 of the book, it is clear that $s^2 = \sigma^2 + l^2$, $l$ being measured along the line vortex. If $\theta$ is the angle between $d\vec{l}$ and $\vec{s}$, \[ |\vec{s} \vp d\vec{l}| = sdl\sin\theta = sdl\sin(\pi - \theta) = sdl\frac{\sigma}{s} = \sigma dl \] and hence, \[ |\vec{u}_v| = \frac{\kappa}{4\pi}\int_{-\infty}^\infty \frac{\sigma}{(\sigma^2 + l^2)^{3/2}} dl \] Put $l = \sigma\tan\alpha$, so that \begin{eqnarray*} \int_{-\infty}^\infty \frac{\sigma}{(\sigma^2 + l^2)^{3/2}} dl &=& \int_{-\pi/2}^{\pi/2} \frac{\sigma^2\sec^2\alpha}{\sigma^3\sec^3\alpha} d\alpha \\ &=& \frac{1}{\sigma}\int_{-\pi/2}^{\pi/2}\cos\alpha d\alpha \\ &=& \frac{2}{\sigma} \end{eqnarray*} so that \[ |\vec{u}_v| = \frac{\kappa}{2\pi\sigma} \] Further, the direction of $\vec{u}_v$ is along $\uvec{\phi}$ (of cylindrical coordinate system). 
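The integral just evaluated can also be checked numerically. The R sketch below (an illustration; $\kappa$, $\sigma$ and the truncation length $L$ are arbitrary) approximates $\frac{\kappa}{4\pi}\int_{-L}^{L} \sigma\,(\sigma^2+l^2)^{-3/2}\,dl$ by the trapezoidal rule and compares it with $\kappa/(2\pi\sigma)$.
\begin{verbatim}
# Numerical check of |u_v| = kappa / (2 pi sigma) for an
# infinite straight line vortex, truncating the line at +/- L.
kappa <- 1
sigma <- 0.5
L <- 1e3
l <- seq(-L, L, len = 200001)
h <- l[2] - l[1]
g <- sigma / (sigma^2 + l^2)^(3/2)
trap <- h * (sum(g) - 0.5 * (g[1] + g[length(g)]))   # trapezoidal rule
c(kappa * trap / (4 * pi), kappa / (2 * pi * sigma)) # the two agree closely
\end{verbatim}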
Therefore, from (2.2.10) of the book, \[ \frac{\kappa}{2\pi\sigma} = -\pdt{\psi}{\sigma} \] or that \[ \sigma = -\frac{\kappa}{2\pi}\ln\sigma \] \item Since \[ \grad{\left(\frac{1}{s}\right)} = -\frac{\vec{s}}{s^3}, \] we have from \eqref{c2s6e1}, \[ \vec{u}_v = \frac{\kappa}{4\pi}\int \grad{\left(\frac{1}{s}\right)} \vp d\vec{l}(\vec{x}^\op) \] Further, since $\curl{(\psi\vec{A})} = \psi\curl{\vec{A}} + \grad{\psi}\vp\vec{A}$, \[ \curl{\left(\frac{d\vec{l}(\vec{x}^\op)}{s}\right)} = \grad{\left(\frac{1}{s}\right)} \vp d\vec{l}(\vec{x}^\op) \] so that, \[ \vec{u}_v = \frac{\kappa}{4\pi} \int \curl{\left(\frac{d\vec{l}(\vec{x}^\op)}{s}\right)} = \curl\int \frac{\kappa d\vec{l}(\vec{x}^\op)}{4\pi s} \] Using \eqref{c2sae6}, \[ \int \frac{\kappa d\vec{l}(\vec{x}^\op)}{4\pi s} = -\frac{\kappa}{4\pi}\int\nabla^\op\left(\frac{1}{s}\right)\vp\un dA(\vec{x}^\op) \] or, \[ \vec{u}_v = -\frac{\kappa}{4\pi} \curl \int\nabla^\op\left(\frac{1}{s}\right)\vp\un dA(\vec{x}^\op) = -\frac{\kappa}{4\pi}\int\curl{\left(\nabla^\op\left(\frac{1}{s}\right)\vp\un\right)} dA(\vec{x}^\op) \] Now, since $\un$ depends only on $\vec{x}^\op$, \[ \curl{\left(\nabla^\op\left(\frac{1}{s}\right)\vp\un\right)} = \un\dive{\nabla^\op\left(\frac{1}{s}\right)} + (\un\cdot\nabla)\nabla^\op\left(\frac{1}{s}\right) \] Further, \[ \dive{\nabla^\op\left(\frac{1}{s}\right)} = -\dive{\grad{\left(\frac{1}{s}\right)}} = -\nabla^2\left(\frac{1}{s}\right) = 0, \] because $1/s$ is a solution of Laplace's equation in three dimensions. Thus, \[ \curl{\left(\nabla^\op\left(\frac{1}{s}\right)\vp\un\right)} = (\un\cdot\nabla)\nabla^\op\left(\frac{1}{s}\right) \] and hence, \[ \vec{u}_v = -\frac{\kappa}{4\pi} \int (\un\cdot\nabla)\nabla^\op\left(\frac{1}{s}\right)dA(\vec{x}^\op) \] Since, \[ \grad{\left(\frac{1}{s}\right)} = -\frac{\vec{s}}{s^3} = -\nabla^\op\left(\frac{1}{s}\right) \] we have, \[ \vec{u}_v = -\frac{\kappa}{4\pi} \int \un \cdot \nabla \frac{\vec{s}}{s^3} dA(\vec{x}^\op) \] Since $\un$ depends only on $\vec{x}^\op$, \[ \grad{\left(\un\cdot\frac{\vec{c}}{s^3}\right)} = \un \cdot \nabla \frac{\vec{s}}{s^3} + \un \vp \curl{\frac{\vec{s}}{s^3}} \] But, \[ \frac{\vec{s}}{s^3} = -\grad{\left(\frac{1}{s}\right)} \] therefore its curl vanishes and hence, \[ \grad{\left(\un\cdot\frac{\vec{s}}{s^3}\right)} = \un \cdot \nabla \frac{\vec{s}}{s^3} \] so that, \[ \vec{u}_v = -\frac{\kappa}{4\pi} \int \grad{\left(\un\cdot\frac{\vec{s}}{s^3}\right)} dA(\vec{x}^\op) \] or, \[ \vec{u}_v = -\frac{\kappa}{4\pi} \grad{\int \left(\frac{\un\cdot\vec{s}}{s^3}\right) dA(\vec{x}^\op)} \] The line vortex is a closed curve and it subtends a solid angle $\Omega$ at the point $\vec{x}$, as shown in figure \ref{c2f3}. \begin{figure}[!ht] \centering \centerline{\includegraphics[scale=.5]{c2f3}} \caption{Solid angle subtended by a line vortex} \label{c2f3} \end{figure} Thus, \[ \Omega(\vec{x}) = \int \left(\frac{\un\cdot\vec{s}}{s^3}\right) dA(\vec{x}^\op) \] so that, \[ \vec{u}_v = -\frac{\kappa}{4\pi} \grad{\Omega(\vec{x})} \] \item Equation (2.6.5) of the book gives stream function for a line vortex of strength $\kappa$ as \[ \psi = -\frac{\kappa}{2\pi}\ln\sigma, \] where $\sigma^2 = (x - x^\op)^2 + (y - y^\op)^2$. In a vortex doublet, two line vortices of strengths of opposite signs are close to each other. 
If a line vortex of strength $\kappa$ is at $\vec{x}^\op + \delta\vec{x}^\op/2$ then its stream function is
\[
\psi_{+} = -\frac{\kappa}{2\pi}f(\vec{x}; \vec{x}^\op + \delta\vec{x}^\op/2),
\]
where
\[
f(\vec{x}; \vec{x}^\op) = \ln\sigma
\]
Similarly, a line vortex of strength $-\kappa$ at $\vec{x}^\op - \delta\vec{x}^\op/2$ has a stream function,
\[
\psi_{-} = \frac{\kappa}{2\pi}f(\vec{x}; \vec{x}^\op - \delta\vec{x}^\op/2),
\]
If $\vec{u}_1$ is associated with a vector potential $\vec{B}_1 = (0, 0, \psi_1)$ and $\vec{u}_2$ is associated with $\vec{B}_2 = (0, 0, \psi_2)$, then it is clear that if $\vec{u} = \vec{u}_1 + \vec{u}_2$ then, since
\[
\vec{u} = \curl{\vec{B}_1} + \curl{\vec{B}_2} = \curl{(\vec{B}_1 + \vec{B}_2)},
\]
the vector potential associated with $\vec{u}$ is $\vec{B}_1 + \vec{B}_2 = (0, 0, \psi_1 + \psi_2)$. Therefore, the stream function of the vortex doublet is
\[
\psi = \psi_{+} + \psi_{-} = -\frac{\kappa}{2\pi}\left(f\left(\vec{x}; \vec{x}^\op + \frac{\delta\vec{x}^\op}{2}\right) - f\left(\vec{x}; \vec{x}^\op - \frac{\delta\vec{x}^\op}{2}\right)\right)
\]
Expanding $f$ as a Taylor series and retaining the first order terms, we get
\[
\psi = -\frac{\kappa}{2\pi}\delta\vec{x}^\op\cdot\nabla^\op{f} = \frac{\kappa}{2\pi}\frac{\delta\vec{x}^\op\cdot\vec{s}}{\sigma^2}
\]
If
\[
\vec{\lambda} = \lim_{\delta\vec{x}^\op \rightarrow 0} \kappa\delta\vec{x}^\op
\]
then
\[
\psi = \frac{1}{2\pi}\frac{\vec{\lambda}\cdot\vec{s}}{\sigma^2}
\]
\item If we now want to plot the stream function, we assume that a vortex doublet of strength $2\pi$ is aligned along the $x$-axis. That is, $\vec{\lambda} = 2\pi\uvec{x}$. If we also assume that the vortex doublet is at the origin then $(x^\op, y^\op) = (0, 0)$. Therefore, in Cartesian coordinates, the stream function is
\[
\psi = \frac{1}{2\pi}\frac{2\pi x}{x^2 + y^2}
\]
Figure \ref{c2f4} shows the stream lines.
\begin{figure}[!ht]
\centering
\centerline{\includegraphics[scale=.5]{c2f4}}
\caption{Streamlines due to a vortex doublet}
\label{c2f4}
\end{figure}
It was generated using the following R code.
\begin{verbatim}
# The vortex doublet is two dimensional; the stream
# lines are drawn in the xy-plane, with the doublet
# at the origin and lambda along the x axis.
y <- seq(from = -2, to = 2, len = 1000)
x.pos <- seq(from = 1e-6, to = 2, len = 500)
x.neg <- seq(from = -2, to = -1e-6, len = 500)
x <- c(x.neg, x.pos)
f <- function(x, y) {
    (x)/(x^2 + y^2)
}
psi <- outer(x, y, FUN = 'f')
# Values of level curves
lv <- c(-5, -4, -3, -2, -1, 1e-6, 1, 2, 3, 4, 5)
# Rows of psi vary with x and columns with y, which is
# the order contour() expects, so no transpose is needed.
Xl <- c(0.2, 0.8)
Yl <- c(0.35, 0.65)
contour(psi, levels = lv, xlim = Xl, ylim = Yl)
title('Streamlines of a vortex doublet')
\end{verbatim}
Since the vortex doublet consisted of a vortex of strength $+\kappa$ at $\vec{x}^\op + \delta\vec{x}^\op/2$ and another one of strength $-\kappa$ at $\vec{x}^\op - \delta\vec{x}^\op/2$, the stream lines on the right have an anti-clockwise direction and those on the left have a clockwise direction.
\item Sometimes there is a considerable concentration of vorticity on a surface in a fluid. It shows up as a large collection of line vortices, all lying on the surface. We define the strength density of such a vortex sheet by
\[
\vec{\Gamma} = \int\vec{\omega}dx_n,
\]
where $x_n$ is the distance normal to the surface and the integral is over a small range $\epsilon$ containing the surface.
We assume that as $\epsilon \rightarrow 0$, the integral remains finite so that $\vec{\Gamma}$ is a local characteristic of the vortex sheet. Recall that the strength of a vortex tube is defined as
\[
\kappa = \int\vec{\omega}\cdot\un dA,
\]
where $\un$ is a unit vector normal to the cross section of the tube over which we are evaluating the surface integral. In particular, it is \emph{not} a normal to the vortex sheet. It is, in fact, in the vortex sheet. We can write it as
\[
\kappa = \iint\vec{\omega}\cdot\un dx dy \approx \int\vec{\omega}dx \cdot \un \left(\delta y\right) = \left(\vec{\Gamma}\cdot\un\right)\delta y,
\]
or $\vec{\Gamma}\cdot\un$ is the strength of the vortex tube \emph{per unit width} of the tube. Refer to figure \ref{c2f5} for a diagram of the vortex sheet and the tubes it contains.
\begin{figure}[!ht]
\centering
\centerline{\includegraphics[scale=.35]{c2f5}}
\caption{Sheet vortex}
\label{c2f5}
\end{figure}
The rectangle depicts the vortex sheet and the blue circles denote the vortex tubes. The arrow across the sheet depicts the range of integration in the definition of $\vec{\Gamma}$. The vorticity $\vec{\omega}$ and the strength density of the vortex sheet $\vec{\Gamma}$ are parallel to each other and perpendicular to the plane of the diagram.
\item The idea of considering $\epsilon \rightarrow 0$ is equivalent to considering the vortex sheet to be infinite in extent. We will use this interpretation in the next point.
\item Let us now consider the velocity \enquote*{due to} a vortex sheet. Equation (2.4.11) of the book is
\[
\vec{u}_v = -\frac{1}{4\pi}\int\frac{\vec{s}\vp\vec{\omega}(\vec{x}^\op)}{s^3} dV(\vec{x}^\op)
\]
If most of the vorticity is concentrated in a vortex sheet, we can approximate $\vec{\omega} dV(\vec{x}^\op)$ by $\vec{\Gamma} dA(\vec{x}^\op)$ so that
\[
\vec{u}_v = -\frac{1}{4\pi}\int\frac{\vec{s}\vp\vec{\Gamma}(\vec{x}^\op)}{s^3} dA(\vec{x}^\op)
\]
In the case of a single plane sheet vortex on which $\vec{\Gamma}$ is uniform,
\[
\vec{u}_v = \frac{\vec{\Gamma}}{4\pi}\vp\int\frac{\vec{s}}{s^3} dA(\vec{x}^\op)
\]
Close to the sheet, the plane can be considered to be infinitely large and yet to have a uniform $\vec{\Gamma}$. In such a situation, we can write $\vec{s}$ as a sum of two vectors, one lying in the plane of the sheet and the other parallel to $\un$,
\[
\vec{s} = \vec{s}_{\parallel} + (\vec{s}\cdot\un)\un
\]
When integrated over the (effectively infinite) sheet, the in-plane part $\vec{s}_{\parallel}$ contributes nothing, by symmetry, so that under the integral
\[
\vec{\Gamma} \vp \vec{s} = \vec{\Gamma} \vp (\vec{s}\cdot\un)\un
\]
and hence,
\[
\vec{u}_v = \frac{\vec{\Gamma}}{4\pi}\vp\int\frac{\vec{s}\cdot\un}{s^3}\un dA(\vec{x}^\op)
\]
We can write the integral as,
\[
\vec{u}_v = \frac{\vec{\Gamma}}{4\pi}\vp\un \int d\Omega,
\]
where $\Omega$ denotes the solid angle subtended by the sheet vortex at $\vec{x}$. Since the (infinite, plane) sheet lies entirely \enquote*{below} $\vec{x}$, it subtends half of the total solid angle, so that,
\[
\vec{u}_v = \frac{\vec{\Gamma}}{4\pi}\vp\un 2\pi = \frac{1}{2}\vec{\Gamma}\vp\un
\]
If $\vec{\Gamma}$ is perpendicular to the plane of figure \ref{c2f6} and pointing outwards, the velocities just above and just below the plane are equal and opposite to each other.
\begin{figure}[!ht]
\centering
\centerline{\includegraphics[scale=.35]{c2f6}}
\caption{Velocity around a sheet vortex}
\label{c2f6}
\end{figure}
\item Now consider a sheet vortex in the form of a cylinder of arbitrary cross section.
The generators of the cylinder are lines along the curved surface of the cylinder, connecting its cross sections. Let $\vec{\Gamma}$ be always perpendicular to the generators and of the same magnitude all over. Equation (2.6.8) of the book is
\[
\vec{u}_v(\vec{x}) = -\frac{1}{4\pi}\int\frac{\vec{s}\vp\vec{\Gamma}(\vec{x}^\op)}{s^3} dA(\vec{x}^\op)
\]
Consider a point $\vec{x}^\op$ on the surface of the cylinder. Let $d\vec{l}(\vec{x}^\op)$ be a line element in the direction of $\vec{\Gamma}$ and $dm(\vec{x}^\op)$ be a line element perpendicular to $\vec{\Gamma}$. Then, $\vec{\Gamma}(\vec{x}^\op) dA(\vec{x}^\op) = \vec{\Gamma}(\vec{x}^\op) dl(\vec{x}^\op) dm(\vec{x}^\op) = \Gamma d\vec{l}(\vec{x}^\op)dm(\vec{x}^\op)$ so that
\[
\vec{u}_v(\vec{x}) = -\frac{\Gamma}{4\pi}\int_{-\infty}^\infty \oint \frac{\vec{s}\vp d\vec{l}(\vec{x}^\op)}{s^3} dm(\vec{x}^\op),
\]
where the first integral is over $m$ and the second one over $\vec{l}$. Interchanging the order of integration,
\[
\vec{u}_v(\vec{x}) = -\frac{\Gamma}{4\pi} \oint \int_{-\infty}^\infty \frac{\vec{s}dm(\vec{x}^\op)}{s^3} \vp d\vec{l}(\vec{x}^\op),
\]
Write $\vec{s} = \vec{p} + \vec{m}$, where $\vec{p}$ is in the cross sectional plane while $\vec{m}$ is perpendicular to it. Further, let $\uvec{p}$ and $\uvec{m}$ be the unit vectors along $\vec{p}$ and $\vec{m}$ so that,
\[
\int_{-\infty}^{\infty} \frac{\vec{s}}{s^3}dm = \uvec{p}\int_{-\infty}^{\infty} \frac{p}{(p^2 + m^2)^{3/2}} dm + \uvec{m}\int_{-\infty}^{\infty} \frac{m}{(p^2 + m^2)^{3/2}} dm
\]
Let the right hand side be written as $\uvec{p}I_1 + \uvec{m}I_2$. In $I_1$, put $m = p\tan\alpha$ so that
\[
I_1 = \int_{-\pi/2}^{\pi/2} \frac{p^2\sec^2\alpha}{p^3\sec^3\alpha} d\alpha = \frac{1}{p}\int_{-\pi/2}^{\pi/2}\cos\alpha d\alpha = \frac{2}{p}
\]
In $I_2$, the integrand is an odd function of $m$ and hence
\[
I_2 = \int_{-\infty}^{\infty} \frac{m}{(p^2 + m^2)^{3/2}} dm = 0
\]
and hence
\[
\int_{-\infty}^{\infty} \frac{\vec{s}}{s^3}dm = \uvec{p}\frac{2}{p} = \frac{2\vec{p}}{p^2}
\]
We now have,
\[
\vec{u}_v(\vec{x}) = -\frac{\Gamma}{2\pi}\oint\frac{\vec{p} \vp d\vec{l}(\vec{x}^\op)}{p^2}
\]
\item We will now argue that
\[
\oint\frac{\vec{p} \vp d\vec{l}(\vec{x}^\op)}{p^2}
\]
is zero if the observation point $\vec{x}$ is outside the cylinder and of magnitude $2\pi$ if it is inside. For a point $\vec{x}$ that is inside the cylinder, we can write $dl(\vec{x}^\op) = pd\theta$ so that
\[
\oint\frac{\vec{p} \vp d\vec{l}(\vec{x}^\op)}{p^2} = \int_0^{2\pi} \uvec{a} d\theta = 2\pi\uvec{a},
\]
where $\uvec{a}$ is perpendicular to both $\vec{p}$ and $d\vec{l}(\vec{x}^\op)$, which means that it is parallel to the axis of the cylinder. Now consider a point $\vec{x}$ that is outside the cylinder. Referring to figure \ref{c2f7}, let the angle between $\vec{p}$ and $d\vec{l}(\vec{x}^\op)$ span from $\pi/2 - \theta$ (at point $A$) to $\pi/2 + \theta$ (at point $C$). The angle keeps on increasing from $\pi/2 + \theta$ at $C$ to $3\pi/2$ at $D$ and $2\pi + \pi/2 - \theta = \pi/2 - \theta$ at $A$. Therefore,
\[
\oint\frac{\vec{p} \vp d\vec{l}(\vec{x}^\op)}{p^2} = \int_{\pi/2 - \theta}^{\pi/2 + \theta}\uvec{b}d\theta + \int_{\pi/2 + \theta}^{\pi/2 - \theta}\uvec{b}d\theta = 0
\]
where $\uvec{b}$ is a unit vector in the direction of $\vec{p} \vp d\vec{l}(\vec{x}^\op)$. The key to this proof is the independence of the integrand from $p$.
Writing $dl = pd\theta$ cancels the second power of $p$ in the denominator, the first power being cancelled by $\vec{p}$\footnote{This result is similar to the one in electrodynamics where the magnetic field outside a solenoid is zero while inside it is parallel to the axis.}.
\begin{figure}[!ht]
\centering
\centerline{\includegraphics[scale=.35]{c2f7}}
\caption{Integration when the field point is outside the cylinder}
\label{c2f7}
\end{figure}
\item In order to prove that the component of $\vec{u}_v$ normal to the sheet vortex is continuous across the sheet, we consider a \enquote*{Gaussian pillbox} as shown in figure \ref{c2f8}. Let $\vec{u}_v = \vec{u}_p + \vec{u}_n$, where $\vec{u}_p$ is the component of velocity in the plane of the sheet vortex and $\vec{u}_n$ is the component normal to it. Integrating $\dive{\vec{u}_v}$ over the pillbox,
\[
\int\dive{\vec{u}_v}dV = \oint\vec{u}_v\cdot\un dA.
\]
If the pillbox is shrunk so that its height is very small, for an incompressible fluid,
\[
0 = \int_{U} \vec{u}_v^{u}\cdot\un\, dA - \int_{L} \vec{u}_v^{l}\cdot\un\, dA,
\]
where $U$ and $L$ are the upper and lower plane surfaces of the pillbox and $\un$ is the upward normal. Since their areas are identical,
\[
\int_U (\vec{u}_v^{u} - \vec{u}_v^{l})\cdot\un\, dA = 0,
\]
or that the normal component of $\vec{u}_v$ is continuous across the sheet.
\begin{figure}[!ht]
\centering
\centerline{\includegraphics[scale=0.9]{c2f8}}
\caption{A Gaussian pillbox straddling the sheet vortex}
\label{c2f8}
\end{figure}
\end{itemize}

\section{Irrotational and solenoidal velocity distributions}\label{c2s7}
\begin{itemize}
\item Most fluids behave, under a wide range of flow conditions, as if they were nearly incompressible. Further, over large parts of the velocity field, the vorticity is indeed zero. Therefore, it is useful to study fields $\vec{v}$ that satisfy $\dive{\vec{v}} = 0$ and $\curl{\vec{v}} = 0$. A fluid element in such a field does not undergo a change in volume, nor does it have a local rotation. The second of the above equations suggests that we can find a scalar function $\phi(\vec{x})$ such that $\vec{v} = \grad{\phi}$, which, when put in the first equation, gives
\[
\nabla^2\phi = 0,
\]
or that $\phi$ is a harmonic function. Solutions of Laplace's equation are smooth, with all derivatives remaining smooth throughout the domain, except possibly at the boundary points. (This is in contrast with hyperbolic equations, whose solutions may have discontinuities in the form of shocks.)
\item Figure 2.7.1, to my eyes, is a multiply connected region and is identical to fig. 2.8.1. If we have to interpret it as a singly connected region, then we assume that there is no inner boundary for finite flows and no outer boundary in the case of infinite flows.
\item Let us consider only finite flows. Since $\vec{v} = \grad{\phi}$,
\[
\dive{(\phi\vec{v})} = \phi\dive{\vec{v}} + \vec{v}\cdot\grad{\phi} = 0 + v^2,
\]
so that
\[
\int v^2 dV = \int\dive{(\phi\vec{v})}dV = \int \phi\vec{v}\cdot\un dA
\]
If the normal component of the velocity on the boundary of the volume is zero, then the integrand on the right hand side is zero so that
\begin{equation}\label{c2s7e1}
\int v^2 dV = 0,
\end{equation}
which, for real values of $\vec{v}$, implies $\vec{v} = 0$ throughout the volume. Similar analysis works for infinite flows, where we do not have the outer boundary.
\item If $\phi_1$ and $\phi_2$ are two solutions of $\nabla^2\phi = 0$ then so is $\phi = \phi_1 - \phi_2$. Let $\vec{v} = \grad{\phi}, \vec{v}_1 = \grad{\phi}_1$ and $\vec{v}_2 = \grad{\phi}_2$.
Equation \eqref{c2s7e1}, applied to $\vec{v}$, gives $\vec{v} = 0$, or that $\vec{v}_1 = \vec{v}_2$, if $(\vec{v}_1 - \vec{v}_2)\cdot\un = 0$ on the boundary.
\item An often-used mathematical idealization is that of a fluid of infinite extent. When we use it, the inner boundary in figure 2.7.1 comes into play and the outer one does not. It will be shown later that one of the conditions for having a unique solution to $\nabla^2\phi = 0$ is that the normal component of $\vec{v}$ take a prescribed value on the inner boundary.
\item Recall that the velocity of the fluid is $\vec{u} = \vec{u}_e + \vec{u}_v + \vec{v}$, where $\vec{u}_e$ is due to a given rate of expansion, $\vec{u}_v$ is due to a given vorticity and $\vec{v}$ is what is needed to satisfy boundary conditions. It is the last component that is irrotational and solenoidal and therefore described by the velocity potential $\phi$. Further, the boundary conditions must be prescribed in terms of $\vec{v}$ and \emph{not} $\vec{u}$.
\item An example of a fluid of infinite extent having an internal boundary is that of a rigid body (of size small in comparison with the extent of the fluid) moving through a fluid otherwise at rest. If the body moves with a velocity $\vec{U}$ then the normal component of $\vec{v}$ is
\[
\vec{U}\cdot\un - (\vec{u}_e + \vec{u}_v)\cdot\un
\]
Neither the body's acceleration nor its past history determines the motion of the fluid; only its instantaneous velocity $\vec{U}$ matters.
\item Consider a two dimensional, irrotational and solenoidal field. If $\vec{v} = \grad{\phi}$ and if $v_x = kx$ and $v_y = ly$ then $\dive{\vec{v}} = 0$ gives $l = -k$. The relation $v_x = \partial\phi/\partial x$ gives $\phi(x, y) = kx^2/2 + f_1(y)$. Similarly, $v_y = \partial\phi/\partial y$ gives $\phi(x, y) = -ky^2/2 + f_2(x)$. The two expressions for $\phi$ imply that $\phi(x, y) = k(x^2 - y^2)/2 + c_1$, where $c_1$ is a constant. If $\psi$ is the stream function corresponding to $\vec{v}$ then $\vec{v} = \curl{\vec{B}}$, where $\vec{B} = (0, 0, \psi)$. That is,
\begin{eqnarray*}
v_x &=& \pdt{\psi}{y} \\
v_y &=& -\pdt{\psi}{x}
\end{eqnarray*}
The first of the above equations gives $\psi(x, y) = kxy + g_1(x)$ while the second one gives $\psi(x, y) = kxy + g_2(y)$. The two together give $\psi(x, y) = kxy + c_2$, where $c_2$ is another constant. The level curves of both $\phi$ and $\psi$ are rectangular hyperbolae.
\item If we repeat the analysis of the previous point now for an axisymmetric flow described in a cylindrical coordinate system,
\[
\dive{\vec{v}} = \pdt{v_x}{x} + \frac{1}{\sigma}\pdt{(\sigma v_\sigma)}{\sigma}
\]
If $v_x = kx$ and $v_\sigma = l\sigma$ then
\[
\dive{\vec{v}} = k + \frac{1}{\sigma}2l\sigma
\]
so that $\dive{\vec{v}} = 0 \Rightarrow l = -k/2$. $v_x = \partial\phi/\partial x$ gives $\phi(x, \sigma) = kx^2/2 + f_1(\sigma)$, while $v_\sigma = \partial\phi/\partial\sigma$ gives $\phi(x, \sigma) = -k\sigma^2/4 + f_2(x)$. The two together give
\[
\phi(x, \sigma) = \frac{k}{2}\left(x^2 - \frac{\sigma^2}{2}\right)
\]
We can always replace the constant $k/2$ by another one, $k$. The equations for stream function (refer to (2.2.11) in the book) are
\begin{eqnarray*}
v_x &=& \frac{1}{\sigma}\pdt{\psi}{\sigma} \\
v_\sigma &=& -\frac{1}{\sigma}\pdt{\psi}{x}
\end{eqnarray*}
The first of these gives $\psi(x, \sigma) = kx\sigma^2/2 + g_1(x)$ while the second of these gives $\psi(x, \sigma) = kx\sigma^2/2 + g_2(\sigma)$.
Once again, we can replace the constant $k/2$ with $k$ and combine the two relations to get $\psi(x, \sigma) = kx\sigma^2$. Figure \ref{c2f9} shows the contour plots of the velocity potential and the stream function.
\begin{figure}[!ht]
\centering
\centerline{\includegraphics[scale=0.25]{c2f9}}
\caption{$\phi$ and $\psi$ for axisymmetric flow in cylindrical coordinates.}
\label{c2f9}
\end{figure}
The figures were generated using the code:
\begin{verbatim}
N <- 100
x <- seq(from = -2, to = 2, len = N)
sigma <- seq(from = -2, to = 2, len = N)

f1 <- function(x, sigma) {
    x^2 - sigma^2/2
}
f2 <- function(x, sigma) {
    x*sigma^2
}

lv <- seq(from = -1, to = 1, len = 10)

phi <- outer(x, sigma, FUN = 'f1')
psi <- outer(x, sigma, FUN = 'f2')

par(mfrow = c(1, 2))
contour(phi, levels = lv)
title('phi')
contour(psi, levels = lv)
title('psi')
\end{verbatim}
\end{itemize}

\section{Irrotational solenoidal flow in doubly-connected regions}
\begin{itemize}
\item We will show that although $\phi$ is multivalued in a doubly-connected region, $\vec{v}$ is not so. Let $\phi(\vec{x}, n)$ denote the value of $\phi$ at $\vec{x}$ after going around the cylinder $n$ times. Thus, $\phi(\vec{x}, n) = \phi(\vec{x}, 0) + n\kappa$. However, $\grad{\phi(\vec{x}, n)} = \grad{\phi(\vec{x}, 0)} + 0$, which means that the velocity $\vec{v}$ does not change by going around the cylinder.
\item Equation (2.6.6) of the book says
\[
\vec{u}_v(\vec{x}) = -\frac{\kappa}{4\pi}\grad{\Omega},
\]
where
\[
\Omega(\vec{x}) = \int\frac{\vec{s}\cdot\un}{s^3}dA(\vec{x}^\op)
\]
Since $\vec{v} = \grad{\phi}$, we observe that
\[
\phi(\vec{x}) = -\frac{\kappa}{4\pi}\Omega,
\]
where $\Omega$ is the solid angle subtended at $\vec{x}$ by the closed line vortex. If we go round the line vortex once, we cover a solid angle $\Omega + 4\pi$ so that $\phi$ gets an additional value of $-\kappa$.
\item To prove equation (2.8.5), we use another, but equivalent, definition of a solid angle. The solid angle subtended by an area at a point is the area of a unit sphere centered at the point that is intersected by a cone joining the point to the edges of the area. Refer to figure \ref{c2f3} for a picture of the cone. Let the line vortex be along the $z$ axis and let it be turned into a loop by an infinite semicircle lying in the $yz$ plane. If the field point is on the negative $y$ axis, the cone collapses to a sheet and the region of intersection with the sphere reduces to zero, making the solid angle also zero. Let us use the negative $y$ axis as the \enquote*{zero} of the azimuthal angle. If $\vec{x}$ moves by $\pi/2$ to reside on the positive $x$ axis, the generators of the cone are perpendicular to each other in the $xy$ plane. One of them is along the $x$ axis and the other is perpendicular to it, running parallel to the $y$ axis. If, instead, $\vec{x}$ moves only by an angle $\Theta$, the angle between the generators is also $\Theta$ as shown in figure \ref{c2f10}.
\begin{figure}[!ht]
\centering
\centerline{\includegraphics[scale=0.5]{c2f10}}
\caption{Solid angle subtended by an infinite line vortex.}
\label{c2f10}
\end{figure}
Note that the variables of integration in $\Omega$ are the coordinates with respect to a spherical polar coordinate system centered at the unit sphere.
Therefore, the solid angle subtended has magnitude
\[
|\Omega| = \int_0^{\pi}\int_{0}^{\Theta}\sin\theta d\theta d\varphi = \int_0^{\pi} \sin\theta d\theta \int_{0}^{\Theta} d\varphi = 2\Theta,
\]
and, with the orientation of $\un$ implied by the sense of the circulation, $\Omega = -2\Theta$, so that,
\[
\phi(\vec{x}) = -\frac{\kappa}{4\pi}(-2\Theta) = \frac{\kappa}{2\pi}\Theta
\]
We assumed the field point to be in the $xy$ plane. There is no loss of generality in doing so because of the infinite extent of the line vortex.
\item In order to prove equation (2.8.9) of the book, note that $\vec{v}_1\cdot\vec{v}_2 = \grad{\phi}_1\cdot\grad{\phi}_2$. Now,
\[
\dive{\phi_1\grad{\phi}_2} = \grad{\phi}_1\cdot\grad{\phi}_2 + \phi_1\nabla^2\phi_2
\]
Since $\phi_2$ is a harmonic function, $\nabla^2\phi_2 = 0$ so that $\grad{\phi}_1\cdot\grad{\phi}_2 = \dive{\phi_1\grad{\phi}_2}$ and hence,
\[
\int \vec{v}_1\cdot\vec{v}_2 dV = \int \dive{\phi_1\grad{\phi}_2} dV
\]
Using divergence theorem on the right hand side,
\[
\int \vec{v}_1\cdot\vec{v}_2 dV = \int \phi_1\grad{\phi}_2\cdot\un dA
\]
The bottom of page 113 showed that $\un\cdot\grad{\phi}_2 = 0$, making the right hand side of the above equation vanish.
\end{itemize}

\subsection{Exercise}
\begin{enumerate}
\item In equation (2.8.10), the left hand side is fixed by choice of the velocity of the fluid. It depends only on the volume and the external experimental conditions. The value of the integral $\kappa\int\phi_1\vec{v}_1\cdot\un dA$ is also fixed because $\phi_1$ is single valued and $\vec{v}_1\cdot\un$ is specified by the problem. Since two out of three terms are fixed by experiment, the third one too is fixed, that is, independent of the choice of the barrier $S$.

In equation (2.8.8), although the LHS is fixed by experiment, since $\phi$ is not single valued, the term $\kappa\int\phi\vec{v}\cdot\un dA$ depends on the choice of the barrier. Therefore, the third and remaining term of the equation too depends on the choice of the barrier $S$.
\end{enumerate}

\section{Three dimensional flow fields extending to infinity}\label{c2s9}
\begin{itemize}
\item Consider the expression for the velocity field due to a given rate of expansion,
\[
\vec{u}_e = \frac{1}{4\pi}\int_0^\infty\int_0^{2\pi}\int_0^\pi \frac{\vec{s}}{s^3}\Delta(\vec{x}^\op){r^\op}^2\sin\theta^\op dr^\op d\theta^\op d\varphi^\op
\]
Divide the domain of integration into an inner spherical region of radius $\alpha r$ and the region outside it. The right hand side can then be written as a sum of two integrals, $\vec{I}_1$ and $\vec{I}_2$, of which
\[
\vec{I}_2 = \frac{1}{4\pi}\int_{\alpha r}^\infty\int_0^{2\pi}\int_0^\pi \frac{\vec{s}}{s^3}\Delta(\vec{x}^\op){r^\op}^2\sin\theta^\op dr^\op d\theta^\op d\varphi^\op
\]
Let $\Delta(\vec{x}^\op) = a{r^\op}^{-n}$. In the region of integration, $s \approx r$, where $r = |\vec{x}|$.
Therefore, \[ \vec{I}_2 = \frac{1}{4\pi}\frac{\vec{r}}{r^3}\int_{\alpha r}^\infty\int_0^{2\pi}\int_0^\pi a{r^\op}^{-n} {r^\op}^2\sin\theta^\op dr^\op d\theta^\op d\varphi^\op \] or \[ \vec{I}_2 = \frac{\uvec{r}}{r^2}\frac{a}{(\alpha r)^{-3 + n}} = \uvec{r} O(r^{-n + 1}) \] Therefore, when the field point $\vec{x}$ is very far away from the source point $\vec{x}^\op$, we can approximate $\vec{u}_e(\vec{x})$ as \[ \vec{u}_e = \frac{1}{4\pi}\int_0^{\alpha r}\int_0^{2\pi}\int_0^\pi \frac{\vec{x}}{x^3}\Delta(\vec{x}^\op){r^\op}^2\sin\theta^\op dr^\op d\theta^\op d\varphi^\op \] Since \[ \grad{\left(\frac{1}{r}\right)} = -\frac{\vec{x}}{x^3} \] where $r = |\vec{x}|$, we have \[ \vec{u}_e = -\frac{1}{4\pi}\int_0^{\alpha r}\int_0^{2\pi}\int_0^\pi \grad{\left(\frac{1}{r}\right)} \Delta(\vec{x}^\op){r^\op}^2\sin\theta^\op dr^\op d\theta^\op d\varphi^\op \] or \[ \vec{u}_e = -\frac{1}{4\pi}\grad{\left(\frac{1}{r}\right)}\left\{\int_0^{\alpha r}\int_0^{2\pi}\int_0^\pi \Delta(\vec{x}^\op){r^\op}^2\sin\theta^\op dr^\op d\theta^\op d\varphi^\op\right\} \] or, writing the integral more succinctly, \[ \vec{u}_e = -\frac{1}{4\pi}\grad{\left(\frac{1}{r}\right)}\left\{\int \Delta(\vec{x}^\op)dV(\vec{x}^\op)\right\} \] If the integral $\smallint \Delta(\vec{x}^\op)dV(\vec{x}^\op)$ vanishes, we go to the next term in the Taylor expansion of $\vec{s}/s^3$ and write $\vec{u}_e(\vec{x})$ in terms of the source dipole. Refer to point \ref{c2sa4} for the \enquote*{multipole} expansion of electrostatic field, which is analogous to $\vec{u}_e$. \item Velocity field due to a given vorticity distribution is \[ \vec{u}_v(\vec{x}) = -\frac{1}{4\pi}\int\frac{\vec{s}\vp\vec{\omega}(\vec{x}^\op)}{s^3} dV(\vec{x}^\op) \] Proceeding in similar was as we did for a given rate of expansion, for a localized vorticity distribution, we get \begin{eqnarray*} \vec{u}_v &=& \frac{1}{4\pi}\grad{\left(\frac{1}{r}\right)}\vp\left\{\int \vec{\omega}(\vec{x}^\op)dV(\vec{x}^\op)\right\} \\ &=& -\frac{1}{4\pi}\left\{\int \vec{\omega}(\vec{x}^\op)dV(\vec{x}^\op)\right\} \vp \grad{\left(\frac{1}{r}\right)} \end{eqnarray*} Consider $\dive{(x_i\vec{\omega})} = \grad{x_i}\cdot\vec{\omega} + x_i\dive{\omega} = \uvec{i}\cdot\vec{\omega} = \omega_i$. Therefore, \[ \int\vec{\omega}(\vec{x}^\op)dV(\vec{x}^\op) = \int\dive{(x_i^\op\vec{\omega}(\vec{x}^\op))}dV(\vec{x}^\op) \] Using divergence theorem on the right hand side, \[ \int\vec{\omega}(\vec{x}^\op)dV(\vec{x}^\op) = \int x_i\vec{\omega}(\vec{x}^\op)\cdot\un dA \] If we choose $\alpha$ such that on the surface of the sphere of radius $\alpha r$, the vorticity is very small, then the right hand side of the above equation is zero. \item We now prove the identity, \[ \vec{A}\vp(\vec{B}\cdot\grad{\vec{f}}) = \curl{\left(\vec{A}(\vec{B}\cdot\vec{f})\right)}, \] where $\vec{A}$ and $\vec{B}$ are constant vectors while $\vec{f}$ is an irrotational vector field. Starting from the right hand side, \[ \curl{\left(\vec{A}(\vec{B}\cdot\vec{f})\right)} = (\curl{\vec{A}})(\vec{B}\cdot\vec{f}) + \vec{A}\vp\grad{(\vec{B}\cdot\vec{f})} \] which follows immediately if we write $\phi(\vec{x}) = \vec{B}\cdot\vec{f}$. 
Since, $\vec{A}$ is a constant vector, \[ \curl{\left(\vec{A}(\vec{B}\cdot\vec{f})\right)} = \vec{A}\vp\grad{(\vec{B}\cdot\vec{f})} \] Now, \[ \grad{(\vec{B}\cdot\vec{f})} = \vec{B}\cdot\grad{\vec{f}} + \vec{f}\cdot\grad{\vec{B}} + \vec{B}\vp\curl{\vec{f}} + \vec{f}\vp\curl{\vec{B}} \] Since $\vec{B}$ is a constant vector and $\vec{f}$ is an irrotational field, \[ \grad{(\vec{B}\cdot\vec{f})} = \vec{B}\cdot\grad{\vec{f}} \] Therefore, \begin{equation}\label{c2s9e1} \curl{\left(\vec{A}(\vec{B}\cdot\vec{f})\right)} = \vec{A}\vp\left(\vec{B}\cdot\grad{\vec{f}}\right) \end{equation} \item We go back to the velocity field due to a given vorticity distribution \[ \vec{u}_v(\vec{x}) = -\frac{1}{4\pi}\int\frac{\vec{s}\vp\vec{\omega}(\vec{x}^\op)}{s^3} dV(\vec{x}^\op) \] and write it as \[ \vec{u}_v(\vec{x}) = \frac{1}{4\pi}\int\grad{\left(\frac{1}{s}\right)}\vp\vec{\omega}(\vec{x}^\op) dV(\vec{x}^\op) \] We know that \[ \frac{1}{s} = \frac{1}{r} + \vec{x}^\op\cdot\grad{\left(\frac{1}{s}\right)} \] When $r = |\vec{x}|$ is very large as compared to $r^\op = |\vec{x}^\op|$, we can approximate $s$ as $r$ so that \[ \frac{1}{s} = \frac{1}{r} + \vec{x}^\op\cdot\grad{\left(\frac{1}{r}\right)} \] We saw previously that approximating $s^{-1}$ as $r^{-1}$ led to vanishing velocity. We therefore consider the second term to get \[ \vec{u}_v(\vec{x}) = \frac{1}{4\pi}\int\grad{\left(\vec{x}^\op\cdot\grad{\left(\frac{1}{r}\right)}\right)}\vp\vec{\omega}(\vec{x}^\op) dV(\vec{x}^\op) \] Now since $\vec{x}^\op$ is a constant in a gradient operation, \[ \grad{\left(\vec{x}^\op\cdot\grad{\left(\frac{1}{r}\right)}\right)} = \vec{x}^\op\cdot\grad{\left(\grad{\left(\frac{1}{r}\right)}\right)} + \vec{x}^\op\vp\curl{\left(\grad{\left(\frac{1}{r}\right)}\right)} \] Since curl of a gradient is always zero, \[ \grad{\left(\vec{x}^\op\cdot\grad{\left(\frac{1}{r}\right)}\right)} = \vec{x}^\op\cdot\grad{\left(\grad{\left(\frac{1}{r}\right)}\right)} \] so that \[ \vec{u}_v(\vec{x}) = \frac{1}{4\pi}\int\vec{x}^\op\cdot\grad{\left(\grad{\left(\frac{1}{r}\right)}\right)}\vp\vec{\omega}(\vec{x}^\op) dV(\vec{x}^\op) \] or \[ \vec{u}_v(\vec{x}) = -\frac{1}{4\pi}\int\vec{\omega}(\vec{x}^\op)\vp\left\{\vec{x}^\op\cdot\grad{\left(\grad{\left(\frac{1}{r}\right)}\right)}\right\}dV(\vec{x}^\op) \] From \eqref{c2s9e1}, \[ \vec{u}_v(\vec{x}) = -\frac{1}{4\pi}\int\curl{\left\{\vec{\omega}(\vec{x}^\op)\left(\vec{x}^\op\cdot\grad{\left(\frac{1}{r}\right)}\right)\right\}}dV(\vec{x}^\op) \] Since curl is with respect to unprimed coordinates. \begin{equation}\label{c2s9e2} \vec{u}_v(\vec{x}) = -\frac{1}{4\pi}\curl\int \vec{\omega}(\vec{x}^\op) \left(\vec{x}^\op\cdot\grad{\left(\frac{1}{r}\right)}\right) dV(\vec{x}^\op) \end{equation} \item We will now prove that \[ \int(x_i\omega_j + x_j\omega_i)dV = 0 \] and if $\omega$ decays faster than $r^{-4}$, the right hand side is zero. 
Consider \[ \dive{(x_ix_j\vec{\omega})} = \grad{(x_ix_j)}\cdot\vec{\omega} + x_ix_j\dive{\vec{\omega}} = \grad{(x_ix_j)}\cdot\vec{\omega} \] Now, $\grad{(x_ix_j)} = x_i\grad{x_j} + x_j\grad{x_i} = x_i\delta_{kj}\uvec{k} + x_j\delta_{ik}\uvec{k} = x_i\uvec{j} + x_j\uvec{i}$ so that \[ \grad{(x_ix_j)}\cdot\vec{\omega} = (x_i\uvec{j} + x_j\uvec{i})\cdot\vec{\omega} = x_i\omega_j + x_j\omega_i \] and hence \[ \int\dive{(x_ix_j\vec{\omega})}dV = \int(x_i\omega_j + x_j\omega_i)dV \] The left hand side can be written as \[ \int (x_i x_j \vec{\omega})\cdot\un dA \] If $\omega$ decays faster than $r^{-4}$ then the volume over which we integrated can be made large enough so that $\omega$ is almost zero on its surface making \begin{equation}\label{c2s9e3} \int(x_i\omega_j + x_j\omega_i)dV = 0 \end{equation} Writing in Gibbs vector form, \begin{equation}\label{c2s9e4} \int(\vec{x}\vec{\omega} + \vec{\omega}\vec{x})dV = 0 \end{equation} \item An immediate consequence of \eqref{c2s9e4} is that if $\vec{f}$ is a vector field \begin{equation}\label{c2s9e5} \int(\vec{x}\vec{\omega}\cdot\vec{f} + \vec{\omega}\vec{x}\cdot\vec{f})dV = 0 \end{equation} so that \[ -\int \vec{\omega}\vec{x}\cdot\vec{f}dV = -\int \vec{\omega}\vec{x}\cdot\vec{f}dV + \frac{1}{2}\int(\vec{x}\vec{\omega}\cdot\vec{f} + \vec{\omega}\vec{x}\cdot\vec{f})dV \] or \[ -\int \vec{\omega}\vec{x}\cdot\vec{f}dV = \frac{1}{2}\int(\vec{x}\vec{\omega}\cdot\vec{f} - \vec{\omega}\vec{x}\cdot\vec{f})dV \] If $\vec{f} = \grad{(1/r)}$, \[ -\int \vec{\omega}\vec{x}\cdot\grad{\left(\frac{1}{r}\right)}dV = \frac{1}{2}\int(\vec{x}\vec{\omega}\cdot\grad{\left(\frac{1}{r}\right)} - \vec{\omega}\vec{x}\cdot\grad{\left(\frac{1}{r}\right)}dV \] Since \[ \grad{\left(\frac{1}{r}\right)}\vp\left(\vec{\omega}(\vec{x}^\op)\vp\vec{x}^\op\right) = \vec{\omega}(\vec{x}^\op)\grad{\left(\frac{1}{r}\right)}\cdot\vec{x}^\op - \vec{x}^\op\grad{\left(\frac{1}{r}\right)}\cdot\vec{\omega}(\vec{x}^\op) \] we have \[ \vec{x}^\op\grad{\left(\frac{1}{r}\right)}\cdot\vec{\omega}(\vec{x}^\op) - \vec{\omega}(\vec{x}^\op)\grad{\left(\frac{1}{r}\right)}\cdot\vec{x}^\op = -\grad{\left(\frac{1}{r}\right)}\vp\left(\vec{\omega}(\vec{x}^\op)\vp\vec{x}^\op\right) \] so that \[ -\int\vec{\omega}(\vec{x}^\op)\vec{x}^\op\cdot\grad{\left(\frac{1}{r}\right)}dV = -\frac{1}{2}\int\grad{\left(\frac{1}{r}\right)}\vp\left(\vec{\omega}(\vec{x}^\op)\vp\vec{x}^\op\right)dV \] or \[ -\int\vec{\omega}(\vec{x}^\op)\vec{x}^\op\cdot\grad{\left(\frac{1}{r}\right)}dV = \frac{1}{2}\int\grad{\left(\frac{1}{r}\right)}\vp\left(\vec{x}^\op\vp\vec{\omega}(\vec{x}^\op)\right)dV \] From \eqref{c2s9e2}, \[ \vec{u}_v(\vec{x}) = \frac{1}{8\pi}\curl\int \grad{\left(\frac{1}{r}\right)}\vp\left(\vec{x}^\op\vp\vec{\omega}(\vec{x}^\op)\right) dV(\vec{x}^\op) \] or \begin{equation}\label{c2s9e6} \vec{u}_v(\vec{x}) = \frac{1}{8\pi}\curl\left\{\grad{\left(\frac{1}{r}\right)}\vp\int\left(\vec{x}^\op\vp\vec{\omega}(\vec{x}^\op)\right) dV(\vec{x}^\op)\right\} \end{equation} If $\vec{A}$ is a constant vector and $\vec{f}$ is a function of $\vec{x}$ then $\grad{(\vec{A}\cdot\vec{f})} = \vec{A}\cdot\grad{\vec{f}} + \vec{A}\vp\curl{\vec{f}}$ and $\curl{(\vec{f}\vp\vec{A})} = -\vec{A}\dive{\vec{f}} + \vec{A}\cdot\grad{\vec{f}}$. 
Using the first equation in the second, we get \[ \curl{(\vec{f}\vp\vec{A})} = -\vec{A}\dive{\vec{f}} + \grad{(\vec{A}\cdot\vec{f})} - \vec{A}\vp\curl{\vec{f}} \] If $\vec{f} = \grad{(1/r)}$, then its curl and divergence are zero so that \[ \curl{\left(\grad{\left(\frac{1}{r}\right)}\vp\vec{A}\right)} = \grad{\left(\grad{\left(\frac{1}{r}\right)}\cdot\vec{A}\right)} \] Using this in \eqref{c2s9e6}, we get \begin{equation}\label{c2s9e7} \vec{u}_v(\vec{x}) = \frac{1}{8\pi}\grad{\left(\grad{\left(\frac{1}{r}\right)}\cdot\int\left(\vec{x}^\op\vp\vec{\omega}(\vec{x}^\op)\right) dV(\vec{x}^\op)\right)} \end{equation} The book uses an approximation sign instead of equality because this relation is true only very far away from the source. \item In the section on the behaviour of $\phi$ at large distances, the analysis begins with Green's theorem whose requirements are that the functions $F$ and $G$ be single-valued, finite and continuous throughout the volume of integration. The same restrictions are placed on derivatives of $F$ and $G$. These requirements force us to exclude the point $P$ from the region of integration by enclosing it in a sphere of radius $\epsilon$. \item In order to get (2.9.9) we evaluate the limit \[ L = -\lim_{\epsilon \rightarrow 0}\int \left(\phi(\vec{x}^\op)\pdt{s^{-1}}{s} - \frac{1}{s}\pdt{\phi(\vec{x}^\op)}{s}\right)\Big|_{s = \epsilon} \epsilon^2 d\Omega(\vec{x}^\op) \] The integrand is simplified as \[ -\frac{\phi(\vec{x}^\op)}{s^2} - \frac{1}{s}\pdt{\phi(\vec{x}^\op)}{s} = -\frac{1}{s^2}\left(\phi(\vec{x}^\op) + s\pdt{\phi(\vec{x}^\op)}{s}\right) \] Noting that the position vector of $P$ is $\vec{x}$ we replace the primed variable with the unprimed one. Evaluated at $s = \epsilon$ it is \[ -\frac{1}{\epsilon^2}\left(\phi(\vec{x}) + \epsilon\pdt{\phi(\vec{x})}{s}\right) = -\frac{\phi(\vec{x} + \vec{\epsilon})}{\epsilon^2}, \] where $\vec{\epsilon}$ is the radius vector of the sphere around $P$. We further get, \[ L = +\lim_{\epsilon \rightarrow 0}\int\frac{\phi(\vec{x} + \vec{\epsilon})}{\epsilon^2}\epsilon^2 d\Omega(\vec{x}) = 4\pi\phi(\vec{x}) \] \item After accounting for the above term, equation (2.9.8) of the book becomes \[ \left. \begin{array}{l} \int(F\grad{G} - G\grad{F})\cdot\un_2dA_2 - \\ \int(F\grad{G} - G\grad{F})\cdot\un_1dA_1 + \\ + 4\pi\phi(\vec{x}) \end{array} \right\} = \int(F\nabla^2G - G\nabla^2F)dV \] If $F(\vec{x}^\op) = \phi(\vec{x}^\op)$ and $G(\vec{x}^\op) = s^{-1}$ then the right hand side of the above equation is zero and \[ 4\pi\phi(\vec{x}) = \int(F\grad^\op{G} - G\grad^\op{F})\cdot\un_1dA_1 - \int(F\grad^\op{G} - G\grad^\op{F})\cdot\un_2dA_2, \] where $\grad^\op$ denotes the gradient with respect to primed coordinates. Let $T_2$ denote the second term. 
\begin{eqnarray*} T_2 &=& \int(F\grad^\op{G} - G\grad^\op{F})\cdot\un_2dA_2 \\ &=& \int\left(\phi(\vec{x}^\op)\grad^\op{\left(\frac{1}{s}\right)} - \left(\frac{1}{s}\right)\grad^\op{\phi(\vec{x}^\op)}\right)\cdot\un_2 dA_2 \\ \end{eqnarray*} Since $A_2$ is the surface of a sphere of radius $R$, \[ T_2 = -\frac{1}{R^2}\int \phi(\vec{x}^\op) dA_2 - \frac{1}{R}\int\grad^\op{\phi(\vec{x}^\op)}\cdot\un_2 dA_2 \] Since $\dive{\vec{v}} = 0$ everywhere in $V$, \[ \int\dive{\vec{v}}dV = 0 \] or \[ \int\vec{v}\cdot\un_2 dA_2 - \int\vec{v}\cdot\un_1 dA_1 = 0 \] Let \[ \int\grad^\op{\phi}(\vec{x}^\op)\cdot\un_2 dA_2 = \int\grad^\op{\phi}(\vec{x}^\op)\cdot\un_1 dA_1 = m, \] say so that \[ T_2 = -\frac{1}{R^2}\int \phi(\vec{x}^\op) dA_2 - \frac{m}{R} = 4\pi\langle\phi(\vec{x}, R)\rangle - \frac{m}{R}, \] where $\langle\cdot\rangle$ denotes the average. Thus, \[ 4\pi\phi(\vec{x}) = \int(F\grad^\op{G} - G\grad^\op{F})\cdot\un_1dA_1 + 4\pi\langle\phi(\vec{x}, R)\rangle - \frac{m}{R} \] or \[ \phi(\vec{x}) = \langle\phi(\vec{x}, R)\rangle - \frac{m}{4\pi R} + \frac{1}{4\pi}\int\left(\phi(\vec{x}^\op)\grad^\op{\left(\frac{1}{s}\right)} - \left(\frac{1}{s}\right)\grad^\op{\phi(\vec{x}^\op)}\right)\cdot\un_1dA_1 \] Note that the gradients in the integrand are all with respect to primed coordinates. If we want them to be in terms of un-primed coordinates, we get an overall negative sign to the integral. \begin{equation}\label{c2s9e8} \phi(\vec{x}) = \langle\phi(\vec{x}, R)\rangle - \frac{m}{4\pi R} - \frac{1}{4\pi}\int\left(\phi(\vec{x}^\op)\grad{\left(\frac{1}{s}\right)} - \left(\frac{1}{s}\right)\grad{\phi(\vec{x}^\op)}\right)\cdot\un_1dA_1 \end{equation} \item We can write the equation for flux of $\phi$ as \[ m = \int\grad^\op{\phi}(\vec{x}^\op)\cdot\un_2 dA_2 = R^2\int \left(\pdt{\phi(\vec{x}^\op)}{s}\right)_{s=R} d\Omega(\vec{x}^\op) \] Pulling out the differential operator (reverse of integration under the differential sign), \[ m = R^2\frac{\partial}{\partial R}\int \left(\phi(\vec{x}^\op)\right)_{s=R} d\Omega(\vec{x}^\op) \] or, \[ \frac{m dR}{R^2} = \partial\int \left(\phi(\vec{x}^\op)\right)_{s=R} d\Omega(\vec{x}^\op) \] or \[ 4\pi C - \frac{m}{R} = \int \left(\phi(\vec{x}^\op)\right)_{s=R} d\Omega(\vec{x}^\op), \] where $C$ is a constant of integration. Using the definition of $\langle\phi(\vec{x}, R)\rangle$, we see that \[ \langle\phi(\vec{x}, R)\rangle = C - \frac{m}{4\pi R} \] \item Since $R$ is a fixed constant, $\grad{\langle\phi(\vec{x}, R)\rangle} = \grad{C}$ so that \[ \grad{C} = \grad{\left(\frac{1}{4\pi R^2}\int\phi(\vec{x}^\op)\cdot\un dA_2\right)} = \frac{1}{4\pi R^2}\int\grad{\vec{x}^\op}\cdot\un dA_2 \] or \[ \grad{C} = -\frac{1}{4\pi R^2}\int\grad^\op{\vec{x}^\op}\cdot\un dA_2 = -\frac{1}{4\pi R^2}\int\vec{v}\cdot\un dA_2 \] Since $\vec{v}$ is zero on the outer boundary, we see that $\vec{C} = 0$ or that $C$ is independent of $R$. \item Since \[ C = \langle\phi(\vec{x}, R)\rangle + \frac{m}{4\pi R} \] we can write \eqref{c2s9e8} as \[ \phi(\vec{x}) = C - \frac{1}{4\pi}\int\left(\phi(\vec{x}^\op)\grad{\left(\frac{1}{s}\right)} - \left(\frac{1}{s}\right)\grad{\phi(\vec{x}^\op)}\right)\cdot\un_1dA_1 \] This does not appear to be a very encouraging equation because $\phi$ appears on the right hand side as well. It appears as \enquote*{prescribed value} of $\phi$ on the inner boundary. The book mentions that it can be converted \emph{in principle} into the prescribed value of $\grad{\phi}\cdot\un$ on the surface, although it does not tell how so. 
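As a quick numerical sanity check of the relation $\langle\phi(\vec{x}, R)\rangle = C - m/(4\pi R)$ obtained above, we can take the simplest harmonic function with a non-zero flux through the inner boundary, namely $\phi = C - q/(4\pi r)$, a point source of flux $m = q$ at the origin, and average it over a large sphere centered at an arbitrary field point. The following R sketch (in the same spirit as the earlier plotting code; the particular numbers are arbitrary choices for illustration and are not from the book) does exactly that.
\begin{verbatim}
# Sketch (not from the book): check <phi(x, R)> = C - m/(4 pi R)
# for phi = C - q/(4 pi r), a point source of flux m = q at the origin.
C0 <- 2; q0 <- 3                       # arbitrary illustrative values
phi <- function(p) C0 - q0 / (4 * pi * sqrt(sum(p^2)))
x0 <- c(0.3, -0.2, 0.1)                # field point; source lies inside the sphere
R  <- 5                                # radius of the large sphere centered at x0
set.seed(1)
n <- 1e5
u <- matrix(rnorm(3 * n), ncol = 3)    # random directions ...
u <- u / sqrt(rowSums(u^2))            # ... normalised to unit vectors
y <- sweep(R * u, 2, x0, '+')          # points on the sphere |y - x0| = R
c(numerical = mean(apply(y, 1, phi)),
  predicted = C0 - q0 / (4 * pi * R))
\end{verbatim}
The two numbers agree to within the Monte Carlo error, and increasing \texttt{R} makes the average approach \texttt{C0}, consistent with $\phi \rightarrow C$ at infinity.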
\item We observed that \[ \int\vec{v}\cdot\vec{v}dV = \int\dive{(\phi\vec{v})}dV \] Since $\vec{v}$ is a solenoidal field, $\vec{v}\cdot\vec{v}$ is also equal to $\dive{\left((\phi - C)\vec{v}\right)}$. Applying this relation in the present context, \[ \int\vec{v}\cdot\vec{v}dV = \lim_{R \rightarrow \infty}\int\left((\phi - C)\vec{v}\right)\cdot\un_2 dA_2 - \int\left((\phi - C)\vec{v}\right)\cdot\un_1 dA_1 \] The limit is applicable only to the first term because the inner boundary is fixed while the outer, spherical one is assumed to be at an infinite distance. Since $\phi(\vec{x}) \rightarrow C$ in the limit $R \rightarrow \infty$, the first term goes to zero and \[ \int\vec{v}\cdot\vec{v}dV = - \int\left((\phi - C)\vec{v}\right)\cdot\un_1 dA_1 \] or \[ \int\vec{v}\cdot\vec{v}dV = -\int\phi\vec{v}\cdot\un dA_1 + C\int\vec{v}\cdot\un_1 dA_1 = -\int\phi\vec{v}\cdot\un dA_1 + Cm, \] where $m$ is the flux of $\phi$. \item We previously observed that (see bottom of page 87 of the book) that $\vec{v}$ is determined from the prescribed boundary conditions. We will find $\vec{v}$ under the condition $\vec{v}\cdot\un = \vec{U}\cdot\un$, where $\vec{U}$ is the velocity of a rigid body in a translational motion in the fluid, which is at rest at infinity. Thus, the mathematical statement of the problem is - solve $\nabla^2\phi = 0$ subject to the boundary conditions \begin{enumerate} \item $\phi(\vec{x}) \rightarrow$ a constant as $r = |\vec{x}| \rightarrow \infty$. We can choose the constant to be zero. \item $\un\cdot\grad{\phi} = \un\cdot\vec{U}$ \end{enumerate} We try a solution of the form $\phi(\vec{x}) = \vec{U}\cdot\vec{\Phi}(\vec{x})$, where $\vec{\Phi}$ is independent of $\vec{U}$. Since $\vec{\Phi}$ is determined by the boundary condition on the surface of the rigid body, its value depends not on $\vec{x}$ but on its position vector with respect to an arbitrarily chosen material point, say $\vec{x}_0$, of the body. Thus, \[ \phi(\vec{x}) = \vec{U}\cdot\vec{\Phi}(\vec{x} - \vec{x}_0) \] For a spherically symmetric body with its center instantaneously at the origin, $\vec{x}_0 = 0$. Further, \[ \nabla^2\phi = \nabla^2(U_i\Phi_i) = U_i\nabla^2\Phi_i \] Since $U_i$ is, in general, not zero, $\nabla^2\phi = 0$ implies $\nabla^2\Phi_i = 0$ or $\nabla^2\vec{\Phi} = 0$. A solution of Laplace's equation which is also a component of a vector is \[ \frac{\partial}{\partial x_i}\left(\frac{1}{r}\right) \] Therefore, \[ \vec{\Phi} = \alpha\grad{\left(\frac{1}{r}\right)}, \] where $\alpha$ is an arbitrary constant. Therefore, \[ \phi(\vec{x}) = \alpha\vec{U}\cdot\grad{\left(\frac{1}{r}\right)} = -\alpha\frac{\vec{U}\cdot\vec{x}}{r^3} \] and hence, \begin{eqnarray*} \vec{v} &=& \grad{\phi} \\ &=& -\alpha\grad{\left[\vec{U}\cdot\grad{\left(\frac{1}{r}\right)}\right]} \\ &=& -\alpha(\vec{U}\cdot\nabla)\grad{\left(\frac{1}{r}\right)} - \alpha\vec{U}\vp\left\{\curl{\grad{\left(\frac{1}{r}\right)}}\right\} \end{eqnarray*} Since curl of a gradient is always zero, \[ \vec{v} = -\alpha(\vec{U}\cdot\nabla)\grad{\left(\frac{1}{r}\right)} \] \end{itemize} \section{Two dimensional flow fields extending to infinity}\label{c2s10} \begin{itemize} \item A two dimensional motion is the one whose velocity field is of the form $\vec{u}(x, y) = (u, v, 0)$ or equivalent. A two dimensional flow field extending to infinity is the one which extends to infinity in $x$ and $y$ (or any other pair) of directions. 
It is either bounded in the $z$ (or the remaining) direction or it extends to infinity without motion diminishing as one travels along the $z$ direction. In such a flow field, the assumption made in the previous section that the velocity on the outer sphere is zero does not hold good. That is why it needs a separate treatment. \item The construct of a large sphere, centered at the field point and including all internal boundaries is used only to examine the behaviour of $\phi$. Therefore, all formulae in section 2.9 of the book, up to equation (2.9.7), are applicable to two dimensional flow fields. \item It is possible to use equation (2.9.8) of the book for the plane by replacing $dV$ by $dA$ and $dA$ by $dl$ because the divergence theorem can be \href{http://mathworld.wolfram.com/ DivergenceTheorem.html}{applied to plane} as well. Refer to sub section \ref{c2sa9} for a proof. \item We saw in section \ref{c2sa6} that $\ln|\vec{x} - \vec{x}^\op|$ is a solution of Laplace's equation in two dimensions. That is why we choose $F(\vec{x}^\op) = \phi(\vec{x}^\op)$ and $G(\vec{x}^\op) = \ln s$ in equation(2.9.8). Doing so, we get \begin{eqnarray*} \int\grad^\op(\phi(\vec{x}^\op)\ln s)\cdot d\vec{l}^\op_2 - \int\grad^\op(\phi(\vec{x}^\op)\ln s)\cdot d\vec{l}^\op_1 &=& \\ \int(\phi(\vec{x}^\op){\nabla^\op}^2\ln s - \ln s {\nabla^\op}^2\phi(\vec{x}^\op))dA^\op, & & \end{eqnarray*} where the subscript $1$ to $\nabla$ indicates differentiation with respect to the primed coordinates. The function $\ln s$ blows up at the field point $\vec{x}$. Therefore, we exclude it by enclosing it in a small circle of radius $\epsilon$ centerd at $\vec{x}$. The exclusion appears on the left hand side in the form of an additional term \[ L = -\lim_{\epsilon \rightarrow 0}\int\left(\phi(\vec{x}^\op)\pdt{(\ln s)}{s} - \ln s\pdt{\phi(\vec{x}^\op)}{s}\right)\Big|_{s=\epsilon}\epsilon d\theta \] Interchanging the order of integration and limit the integrand is, \[ \lim_{\epsilon \rightarrow 0}\left(\frac{\phi(\vec{x}^\op)}{\epsilon} - \ln\epsilon\pdt{\phi(\vec{x}^\op)}{s}\Big|_{s=\epsilon} \right) \epsilon = \phi(\vec{x}^\op), \] because \[ \lim_{\epsilon \rightarrow 0}\epsilon\ln\epsilon = 0, \] by l'H\^{o}pital's rule. Therefore, $L = -2\pi\phi(\vec{x}^\op)$. 
Since $\phi$ and $\ln s$ are harmonic functions, the right hand side of the 2d analog of (2.9.8) is \[ \int\grad^\op(\phi(\vec{x}^\op)\ln s)\cdot d\vec{l}^\op_2 - \int\grad^\op(\phi(\vec{x}^\op)\ln s)\cdot d\vec{l}^\op_1 - 2\pi\phi(\vec{x}^\op) = 0 \] or \[ \phi(\vec{x}) = \frac{1}{2\pi}\int\grad^\op(\phi(\vec{x}^\op)\ln s)\cdot d\vec{l}^\op_2 - \frac{1}{2\pi}\grad^\op\int(\phi(\vec{x}^\op)\ln s)\cdot d\vec{l}^\op_1 \] Since the first integral on the right hand side is taken over a circle of radius $R$ centerd at $\vec{x}$, \begin{equation}\label{c2s10e1} \phi(\vec{x}) = \frac{1}{2\pi R}\int\phi(\vec{x}^\op)dl^\op_2 + \frac{\ln R}{2\pi}\int\grad^\op{\phi(\vec{x}^\op)}\cdot d\vec{l}^\op_2 - \frac{1}{2\pi}\int\grad^\op(\phi(\vec{x}^\op)\ln s)\cdot d\vec{l}^\op_1 \end{equation} Now, \[ \frac{1}{2\pi R}\int\phi(\vec{x}^\op)dl^\op_2 = \langle\phi\rangle, \] the mean value of $\phi$ over the outer boundary while \[ -\int\grad^\op{\phi(\vec{x}^\op)}\cdot d\vec{l}^\op_2 = \int\grad{\phi(\vec{x}^\op)}\cdot d\vec{l}^\op_2 = m, \] is the flux through it, so that \eqref{c2s10e1} becomes, \begin{equation}\label{c2s10e2} \phi(\vec{x}) = \langle\phi\rangle - \frac{m}{2\pi}\ln R - \frac{1}{2\pi}\int\grad^\op(\phi(\vec{x}^\op)\ln s)\cdot d\vec{l}^\op_1 \end{equation} \item The analog of the flux relation (2.9.10) is \[ m = \int\grad^\op\phi(\vec{x}^\op)\cdot d\vec{l}^\op_2 \] Over a circle of radius $R$ centerd at $\vec{x}$, \[ m = R\int\pdt{\phi}{s}\Big|_{s=R} d\theta(\vec{x}^\op) = R\frac{\partial}{\partial R}\int\phi(\vec{x}^\op)\Big|_{s=R}d\theta(\vec{x}^\op) \] or \[ m\ln R = \int\phi(\vec{x}^\op)\Big|_{s=R}d\theta(\vec{x}^\op) - 2\pi C, \] where $C$ is a constant of integration. Thus, \[ \frac{m}{2\pi}\ln R = \langle\phi\rangle - C \] so that equation \eqref{c2s10e2} becomes \[ \phi(\vec{x}) = C - \frac{1}{2\pi}\int\grad^\op(\phi(\vec{x}^\op)\ln s)\cdot d\vec{l}^\op_1 \] In terms of derivative with respect to unprimed coordinates, \[ \phi(\vec{x}) = C + \frac{1}{2\pi}\int\grad{(\phi(\vec{x}^\op)\ln s)}\cdot d\vec{l}^\op_1 \] or \begin{equation}\label{c2s10e3} \phi(\vec{x}) = C + \frac{1}{2\pi}\int\left(\phi(\vec{x}^\op)\grad\ln s + \ln s\grad\phi(\vec{x}^\op)\right)\cdot d\vec{l}^\op_1 \end{equation} or \[ \phi(\vec{x}) - C = \frac{1}{2\pi}\int\left(\frac{\phi(\vec{x}^\op)}{s} + \ln s\grad\phi(\vec{x}^\op)\right)\cdot d\vec{l}^\op_1 \] or \[ \phi(\vec{x}) - C - \frac{m}{2\pi}\ln r = \frac{1}{2\pi}\int\left(\frac{\vec{s}}{s}\phi(\vec{x}^\op) + \ln s\grad\phi(\vec{x}^\op)\right)\cdot d\vec{l}^\op_1 - \frac{m}{2\pi}\ln r \] Using the fact that \[ m = \int\grad^\op\phi(\vec{x}^\op)\cdot d\vec{l}^\op_2 \] on the right hand side we get \[ \phi(\vec{x}) - C - \frac{m}{2\pi}\ln r = \frac{1}{2\pi}\int\left(\frac{\vec{s}}{s}\phi(\vec{x}^\op) + \ln\frac{s}{r}\grad\phi(\vec{x}^\op)\right)\cdot d\vec{l}^\op_1 \] Unlike the three dimensional case, as $r \rightarrow \infty$, it is not $\phi$ but \[ \phi(\vec{x}) - \frac{m}{2\pi}\ln r \] that tends to $C$. We need the term $m\ln r/(2\pi)$ because it creates the term $\ln(s/r)$, whose argument to $1$ as $r \rightarrow \infty$. \item We can reconcile the behaviour in two and three dimensions if we state that it is the difference between the velocity potential and the source term that tends to a constant as $r \rightarrow \infty$. In three dimensions, the source term being of $O(r^{-1})$ itself vanishes at infinity. On the other hand, in two dimensions, the source term rises logarithmically at infinity. 
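To see this concretely, here is a small numerical illustration (a sketch with arbitrarily chosen values, not from the book): for a single two-dimensional source of flux $m$ placed at $\vec{x}^\op = (1, 0)$, the potential $\phi = C + (m/2\pi)\ln|\vec{x} - \vec{x}^\op|$ grows without bound as $r = |\vec{x}| \rightarrow \infty$, while $\phi - (m/2\pi)\ln r$ settles down to the constant $C$.
\begin{verbatim}
# Sketch (not from the book): in two dimensions phi grows like
# (m/2 pi) ln r, but phi - (m/2 pi) ln r tends to a constant C.
m <- 4; C0 <- 1.5; xp <- c(1, 0)        # arbitrary illustrative values
phi <- function(x) C0 + m / (2 * pi) * log(sqrt(sum((x - xp)^2)))
for (r in c(1e1, 1e3, 1e5)) {
  x <- c(r, 0)                          # move out along the x axis
  cat(sprintf("r = %8.0f   phi = %7.3f   phi - (m/2 pi) ln r = %.6f\n",
              r, phi(x), phi(x) - m / (2 * pi) * log(r)))
}
\end{verbatim}
As $r$ increases, the last column approaches $C = 1.5$, which is the two-dimensional analogue of $\phi \rightarrow C$ in three dimensions, while $\phi$ itself keeps rising logarithmically.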
\item We first observe that \begin{eqnarray*} \left[\vec{v} - \grad\left(\frac{m}{2\pi}\ln r\right)\right]\cdot\left[\vec{v} - \grad\left(\frac{m}{2\pi}\ln r\right)\right] &=& \\ \dive\left\{\left[\phi - C - \left(\frac{m}{2\pi}\ln r\right)\right]\left[\vec{v} - \grad\left(\frac{m}{2\pi}\ln r\right)\right]\right\} & & \end{eqnarray*} This is because $\vec{v}$ is a solenoidal field and $\ln r$ is a harmonic function. Therefore, \begin{eqnarray*} \int\left[\vec{v} - \grad\left(\frac{m}{2\pi}\ln r\right)\right]\cdot\left[\vec{v} - \grad\left(\frac{m}{2\pi}\ln r\right)\right]dA &=& \\ \int\dive\left\{\left[\phi - C - \left(\frac{m}{2\pi}\ln r\right)\right]\left[\vec{v} - \grad\left(\frac{m}{2\pi}\ln r\right)\right]\right\}dA & & \end{eqnarray*} Using divergence theorem in the plane (refer to \ref{c2sa9}) on the right hand side \begin{eqnarray*} \int\left[\vec{v} - \grad\left(\frac{m}{2\pi}\ln r\right)\right]\cdot\left[\vec{v} - \grad\left(\frac{m}{2\pi}\ln r\right)\right]dA &=& \\ \int\left\{\left[\phi - C - \left(\frac{m}{2\pi}\ln r\right)\right]\left[\vec{v} - \grad\left(\frac{m}{2\pi}\ln r\right)\right]\right\}\cdot\un_2 dl - & & \\ \int\left\{\left[\phi - C - \left(\frac{m}{2\pi}\ln r\right)\right]\left[\vec{v} - \grad\left(\frac{m}{2\pi}\ln r\right)\right]\right\}\cdot\un_1 dl & & \end{eqnarray*} Choose the outer boundary as large as possible by making the radius of the outer circle tend to infinity. In this limit the integrand tends to zero and hence, \begin{eqnarray*} \int\left[\vec{v} - \grad\left(\frac{m}{2\pi}\ln r\right)\right]\cdot\left[\vec{v} - \grad\left(\frac{m}{2\pi}\ln r\right)\right]dA &=& \\ -\int\left\{\left[\phi - C - \left(\frac{m}{2\pi}\ln r\right)\right]\left[\vec{v} - \grad\left(\frac{m}{2\pi}\ln r\right)\right]\right\}\cdot\un_1 dl & & \end{eqnarray*} If \begin{eqnarray*} \Phi &=& \phi - \left(\frac{m}{2\pi}\ln r\right) \\ \vec{V} &=& \vec{v} - \grad\left(\frac{m}{2\pi}\ln r\right) \end{eqnarray*} then the previous equation becomes, \[ \int\vec{V}\cdot\vec{V}dA = -\int(\Phi - C)\vec{V}\cdot\un_1 dl \] Now let, if possible, $\Phi$ and $\Phi^\ast$ be two solutions of Laplace's equation in two dimensions with identical boundary conditions. Then $\Phi - \Phi^\ast$ too will be a solution and \[ \int(\vec{V} - \vec{V}^\ast)\cdot(\vec{V} - \vec{V}^\ast)dA = -\int\left((\Phi - \Phi^\ast) - (C - C^\ast)\right)(\vec{V} - \vec{V}^\ast)\cdot\un_1 dl \] By virtue of boundary conditions, $\grad{\phi}\cdot\un = \vec{v}\cdot\un$ is identical on the boundary for both solutions. Since the factor $m\ln r/(2\pi)$ is same for both solutions, \[ \vec{V}\cdot\un = \vec{V}^\ast\cdot\un \] on the boundary. Therefore the right hand side is zero making \[ \int(\vec{V} - \vec{V}^\ast)\cdot(\vec{V} - \vec{V}^\ast)dA = 0, \] which is possible only if $\vec{V} = \vec{V}^\ast$. Thus, in two dimensions the unique solution of Laplace equation is \[ \phi - \left(\frac{m}{2\pi}\ln r\right) \] \end{itemize} \section{Exercises}\label{c2s11} \begin{enumerate} \item Consider a material line element joining points $P$ and $Q$. Let $r$ be the length of the material line element. If $\vec{x}_0$ is the position vector of $P$ then that of $Q$ is $\vec{x}_0 + r\uvec{r}$, where $\uvec{r}$ is a unit vector parallel to the line element, at a certain instance of time. Let, after a time $\delta t$, $P$ get to $\vec{x}_0 + \vec{X}$ so that $Q$ goes to $(\vec{x}_0 + r\uvec{r}) + r\uvec{r}\cdot\grad\vec{X}$, where we have ignored terms of higher order in $r\uvec{r}$. 
If the line element was along $r\uvec{r}$ initially, it is now along $q\uvec{q} = r(\uvec{r} + \uvec{r}\cdot\grad\vec{X})$. Thus, \[ q^2 = r^2(1 + 2\uvec{r}\cdot\grad\vec{X}\cdot\uvec{r}), \] where we have ignored terms higher than second order in $\uvec{r}$. Thus, \[ \frac{q^2}{r^2} = 1 + 2\uvec{r}\cdot\grad\vec{X}\cdot\uvec{r} \] or \[ \frac{q}{r} = \left(1 + 2\uvec{r}\cdot\grad\vec{X}\cdot\uvec{r}\right)^{1/2} = 1 + \uvec{r}\cdot\grad\vec{X}\cdot\uvec{r} \] or \[ \frac{q}{r} - 1 = \frac{q - r}{r} = \uvec{r}\cdot\grad\vec{X}\cdot\uvec{r} \] The quantity $(q - r)/r$ is the strain of the element, $\lambda$. Therefore, the rate of change of strain is, \[ \td{\lambda}{t} = \uvec{r}\cdot\grad\vec{U}\cdot\uvec{r}, \] where $\vec{U}$ is the velocity of $Q$ relative to $P$. We can as well write it as \[ \td{\lambda}{t} = \frac{\vec{r}\cdot\grad\vec{U}\cdot\vec{r}}{r^2} \] The quantity $\vec{r}\cdot\grad\vec{U}\cdot\vec{r}$ is the rate-of-strain ellipsoid centerd at $P$. The point $Q$ lies on it. If one imagines a sphere centered at $P$ and of radius $r$ then $Q$ lies on it in the unstrained state. As time goes by, the point $Q$ finds itself on the ellipsoid described by $\vec{r}\cdot\grad\vec{U}\cdot\vec{r}$. \item Given that \[ \vec{B}_v(\vec{x}) = \frac{1}{4\pi}\int\frac{\vec{\omega}(\vec{x}^{\op})}{s}dV(\vec{x}^{\op}) - \frac{1}{4\pi}\int\frac{\un\vp\vec{u}(\vec{x}^{\op})}{s}dA(\vec{x}^{\op}) \] Therefore, \[ \curl\vec{B}_v(\vec{x}) = \frac{1}{4\pi}\int\curl\frac{\vec{\omega}(\vec{x}^{\op})}{s}dV(\vec{x}^{\op}) - \frac{1}{4\pi}\int\curl\frac{\un\vp\vec{u}(\vec{x}^{\op})}{s}dA(\vec{x}^{\op}) \] Using the fact that \[ \grad\left(\frac{1}{s}\right) = -\frac{\vec{s}}{s^3}, \] we get \begin{eqnarray*}\nonumber \vec{u}(\vec{x}) &=& \vec{u}_1(\vec{x}) + \vec{u}_2(\vec{x}) \\ &=& -\frac{1}{4\pi}\int\frac{\vec{s}\vp\vec{\omega}(\vec{x}^{\op})}{s^3}dV(\vec{x}^{\op})+ \frac{1}{4\pi}\int\frac{\vec{s}\vp(\un\vp\vec{u}(\vec{x}^{\op}))}{s^3}dA(\vec{x}^{\op}) \end{eqnarray*} We will now show that $\curl\vec{u}_2(\vec{x})=0$ that is $\vec{u}_2(\vec{x})$ does not contribute to the vorticity of the fluid. \begin{eqnarray*} \curl\vec{u}_2(\vec{x}) &=& \frac{1}{4\pi}\int\curl\frac{\vec{s}\vp(\un\vp\vec{u}(\vec{x}^{\op}))}{s^3}dA(\vec{x}^{\op}) \\ &=& \frac{1}{4\pi}\int\curl\frac{(\un\vec{u}(\vec{x}^{\op})-\vec{u}(\vec{x}^{\op})\un)\cdot\vec{s}}{s^3}dA(\vec{x}^{\op}) \\ &=& \frac{1}{4\pi}\int(\un\vec{u}(\vec{x}^{\op})-\vec{u}(\vec{x}^{\op})\un)\cdot\curl\frac{\vec{s}}{s^3}dA(\vec{x}^{\op}) \end{eqnarray*} Now \begin{eqnarray*} \curl\frac{\vec{s}}{s^3} &=& \frac{\curl\vec{s}}{s^3} + \vec{s}\vp\nabla\frac{1}{s^3} \\ &=& 0 - \vec{s}\vp\frac{\vec{s}}{s^5}\\ &=& 0 \end{eqnarray*} Thus $\curl\vec{u}_2(\vec{x}) = 0$. We will now show that $\curl\vec{u}_1(\vec{x}) = \vec{\omega}(\vec{x})$. 
\[ \curl\vec{u}_1(\vec{x}) = -\frac{1}{4\pi}\int\curl\frac{\vec{s}\vp\omega(\vec{x}^{\op})}{s^3}dV(\vec{x}^{\op}) \] Now \[ \curl\frac{\vec{s}\vp\omega(\vec{x}^{\op})}{s^3} = -\vec{\omega}(\vec{x}^{\op})\dive\frac{\vec{s}}{s^3} - \vec{\omega}(\vec{x}^{\op})\cdot\grad\left(\frac{\vec{s}}{s^3}\right), \] where we have used the fact{\footnote{See chapter 1 of Griffith's Classical Electrodynamics\cite{griffiths1999introduction}}} that \[ \dive\frac{\vec{s}}{s^3} = 4\pi\delta(\vec{x}-\vec{x}^{\op}) \] \begin{eqnarray*} \curl\frac{\vec{s}\vp\omega(\vec{x}^{\op})}{s^3} &=& -\vec{\omega}(\vec{x}^{\op})4\pi\delta(\vec{x}-\vec{x}^{\op}) - \vec{\omega}(\vec{x}^{\op})\cdot\left(\frac{\grad\vec{s}}{s^3}+\vec{s}\grad\frac{1}{s^3}\right) \\ &=& -4\pi\vec{\omega}(\vec{x}^{\op})\delta(\vec{x}-\vec{x}^{\op})-\vec{\omega}(\vec{x}^{\op})\cdot\left(\frac{\vec{e}_s\vec{e}_s}{s^3}-\frac{\vec{s}\vec{s}}{s^5}\right) \end{eqnarray*} Since $\vec{s}=s\vec{e}_s$, the last factor is $0$. Therefore, \[ \curl\vec{u}_1(\vec{x}) = \int\vec{\omega}(\vec{x}^{\op})\delta(\vec{x}-\vec{x}^{\op})dV(\vec{x}^{\op}) = \vec{\omega}(\vec{x}) \] Thus \[ \curl\curl\vec{B}_v(\vec{x}) = \curl\vec{u}_1(\vec{x}) + \curl\vec{u}_2(\vec{x}) = \vec{\omega}(\vec{x}) \] \item Green's theorem is \[ \int(f \grad g - g \grad f)\cdot\un dA(\vec{x}) = \int(f\nabla^2g - g\nabla^2f)dV(\vec{x}) \] We will now consider the three cases: \begin{itemize} \item[(i)] Choose \begin{eqnarray*} f &=& \frac{1}{4\pi s} \\ g &=& \phi(\vec{x}) \end{eqnarray*} in the statement of Green's theorem. Now, \begin{eqnarray*} \int(f \grad g)\cdot\un dA(\vec{x}) &=& \frac{1}{4\pi}\int\left(\frac{\grad\phi(\vec{x})}{s}\right)\cdot\un dA(\vec{x}) \\ \int(g \grad f)\cdot\un dA(\vec{x}) &=& \frac{1}{4\pi}\int\phi(\vec{x})\un\cdot\grad\left(\frac{1}{s}\right)dA(\vec{x}) \\ &=& \frac{1}{4\pi}\int\vec{\mu}\cdot\grad\left(\frac{1}{s}\right)dA(\vec{x}), \end{eqnarray*} where we put $\vec{\mu} = \phi(\vec{x})\un$. The first term is the velocity induced by source distribution of $\grad\phi(\vec{x})$ over the surface (refer to equation (2.9.20) of the book) while the second one is due to velocity induced due to source doublet distribution of $\vec{\mu}$ (refer to equation (2.5.3) of the book). The integrand of right hand side of Green's theorem is \[ \frac{1}{4\pi s}\nabla^2\phi - \phi\nabla^2\left(\frac{1}{4\pi s}\right) \] Since $\phi(\vec{x})$ satisfies Laplace's equation, the integrand is just the second of the above terms. Therefore, the right hand side is \begin{eqnarray*} -\int\left(\phi(\vec{x}){\nabla}^2\frac{1}{4\pi s}\right)dV(\vec{x}) &=& \int\left(\phi(\vec{x})\dive\frac{\vec{s}}{4\pi s^3}\right)dV(\vec{x}) \\ &=& \int\phi(\vec{x})\delta(\vec{x}-\vec{x})dV(\vec{x}) \\ &=& \phi(\vec{x}) \end{eqnarray*} Thus $\phi(\vec{x})$ is due to a source distribution of $\nabla\phi(\vec{x})$ over the surface and a source doublet distribution of $\phi(\vec{x})\un$. \item[(ii)] In this case, we apply Green's theorem separately to the region, $R_o$, outside the boundary and $R_i$, the one inside it. 
In $R_o$, choose \begin{eqnarray*} f &=& \frac{1}{4\pi s} \\ g &=& \phi^\ast(\vec{x}) \end{eqnarray*} while in $R_i$ choose \begin{eqnarray*} f &=& \frac{1}{4\pi s} \\ g &=& \phi(\vec{x}) \end{eqnarray*} Green's theorem applied in $R_i$ gives, \begin{equation}\label{c2s11e1} \frac{1}{4\pi}\int \frac{1}{s}\grad\phi\cdot\un_i dA - \frac{1}{4\pi}\int\phi\grad\left(\frac{1}{s}\right)\cdot\un_i dA = -\frac{1}{4\pi}\int_{R_i}\phi\nabla^2\left(\frac{1}{s}\right)dV, \end{equation} where we used the fact that $\nabla^2\phi = 0$. Doing the same in $R_o$ gives, \begin{eqnarray*} \frac{1}{4\pi}\int \frac{1}{s}\grad\phi^\ast\cdot\un_o dA - &=& -\frac{1}{4\pi}\int_{R_o}\phi^\ast\nabla^2\left(\frac{1}{s}\right)dV \\ \frac{1}{4\pi}\int\phi^\ast\grad\left(\frac{1}{s}\right)\cdot\un_o dA & & \end{eqnarray*} Since $\un_o = -\un_i$, we have \begin{eqnarray} -\frac{1}{4\pi}\int \frac{1}{s}\grad\phi^\ast\cdot\un_i dA + &=& -\frac{1}{4\pi}\int_{R_o}\phi^\ast\nabla^2\left(\frac{1}{s}\right)dV \label{c2s11e2} \\ \frac{1}{4\pi}\int\phi^\ast\grad\left(\frac{1}{s}\right)\cdot\un_i dA & & \nonumber \end{eqnarray} Adding \eqref{c2s11e1} and \eqref{c2s11e2}, \begin{eqnarray*} \frac{1}{4\pi}\int \frac{1}{s}\grad\left(\phi - \phi^\ast\right)\cdot\un_i dA - &=& -\frac{1}{4\pi}\int_{R_i}\phi\nabla^2\left(\frac{1}{s}\right)dV \\ \frac{1}{4\pi}\int\left(\phi - \phi^\ast\right)\grad\left(\frac{1}{s}\right)\cdot\un_i dA & & -\frac{1}{4\pi}\int_{R_o}\phi^\ast\nabla^2\left(\frac{1}{s}\right)dV \end{eqnarray*} Since $\phi = \phi^\ast$ on the boundary, \begin{eqnarray*} \frac{1}{4\pi}\int \frac{\grad\left(\phi - \phi^\ast\right)\cdot\un_i}{s} dA &=& \frac{-1}{4\pi}\int_{R_o}\phi^\ast\nabla^2\left(\frac{1}{s}\right)dV + \\ & & \frac{-1}{4\pi}\int_{R_i}\phi\nabla^2\left(\frac{1}{s}\right)dV \end{eqnarray*} We cannot combine the integrals on the right hand side. We therefore, interpret this equation separately in the two regions, \begin{eqnarray*} \frac{1}{4\pi}\int \frac{1}{s}\grad\left(\phi - \phi^\ast\right)\cdot\un_i dA &=& -\frac{1}{4\pi}\int_{R_i}\phi\nabla^2\left(\frac{1}{s}\right)dV \\ \frac{1}{4\pi}\int \frac{1}{s}\grad\left(\phi - \phi^\ast\right)\cdot\un_i dA &=& -\frac{1}{4\pi}\int_{R_o}\phi^\ast\nabla^2\left(\frac{1}{s}\right)dV \end{eqnarray*} Noting that \[ \nabla^2\left(\frac{1}{s}\right) = 4\pi\delta(\vec{x} - \vec{x}^\op) \] we get, \begin{eqnarray*} \frac{1}{4\pi}\int \frac{1}{s}\grad\left(\phi - \phi^\ast\right)\cdot\un_i dA &=& -\phi(\vec{x}) \\ \frac{1}{4\pi}\int \frac{1}{s}\grad\left(\phi - \phi^\ast\right)\cdot\un_i dA &=& -\phi^\ast(\vec{x}) \end{eqnarray*} Thus, in both regions, the potential can be interpreted as due to sources of strength $\grad\left(\phi - \phi^\ast\right)\cdot\un_i$. \item[(iii)] We once again apply Green's theorem separately to the region, $R_o$, outside the boundary and $R_i$, the one inside it. In $R_o$, choose \begin{eqnarray*} f &=& \frac{1}{4\pi s} \\ g &=& \phi^\ast(\vec{x}) \end{eqnarray*} while in $R_i$ choose \begin{eqnarray*} f &=& \frac{1}{4\pi s} \\ g &=& \phi(\vec{x}) \end{eqnarray*} Green's theorem applied in $R_i$ gives, \begin{equation}\label{c2s11e3} \frac{1}{4\pi}\int \frac{1}{s}\grad\phi\cdot\un_i dA - \frac{1}{4\pi}\int\phi\grad\left(\frac{1}{s}\right)\cdot\un_i dA = -\frac{1}{4\pi}\int_{R_i}\phi\nabla^2\left(\frac{1}{s}\right)dV, \end{equation} where we used the fact that $\nabla^2\phi = 0$. 
Doing the same in $R_o$ gives, \begin{eqnarray*} \frac{1}{4\pi}\int \frac{1}{s}\grad\phi^\ast\cdot\un_o dA - &=& -\frac{1}{4\pi}\int_{R_o}\phi^\ast\nabla^2\left(\frac{1}{s}\right)dV \\ \frac{1}{4\pi}\int\phi^\ast\grad\left(\frac{1}{s}\right)\cdot\un_o dA & & \end{eqnarray*} Since $\un_o = -\un_i$, we have \begin{eqnarray} -\frac{1}{4\pi}\int \frac{1}{s}\grad\phi^\ast\cdot\un_i dA + &=& -\frac{1}{4\pi}\int_{R_o}\phi^\ast\nabla^2\left(\frac{1}{s}\right)dV \label{c2s11e4} \\ \frac{1}{4\pi}\int\phi^\ast\grad\left(\frac{1}{s}\right)\cdot\un_i dA & & \nonumber \end{eqnarray} Adding \eqref{c2s11e3} and \eqref{c2s11e4}, \begin{eqnarray*} \frac{1}{4\pi}\int \frac{1}{s}\grad\left(\phi - \phi^\ast\right)\cdot\un_i dA - &=& -\frac{1}{4\pi}\int_{R_i}\phi\nabla^2\left(\frac{1}{s}\right)dV \\ \frac{1}{4\pi}\int\left(\phi - \phi^\ast\right)\grad\left(\frac{1}{s}\right)\cdot\un_i dA & & -\frac{1}{4\pi}\int_{R_o}\phi^\ast\nabla^2\left(\frac{1}{s}\right)dV \end{eqnarray*} Since $\grad\phi\cdot\un_i = \grad\phi^\ast\cdot\un_i$ on the boundary, \begin{eqnarray*} \frac{1}{4\pi}\int\left(\phi - \phi^\ast\right)\grad\left(\frac{1}{s}\right)\cdot\un_i dA &=& \frac{-1}{4\pi}\int_{R_o}\phi^\ast\nabla^2\left(\frac{1}{s}\right)dV \\ & & + \frac{-1}{4\pi}\int_{R_i}\phi\nabla^2\left(\frac{1}{s}\right)dV \end{eqnarray*} We cannot combine the integrals on the right hand side. We therefore interpret this equation separately in the two regions, \begin{eqnarray*} \frac{1}{4\pi}\int \left(\phi - \phi^\ast\right)\grad\frac{1}{s}\cdot\un_i dA &=& -\frac{1}{4\pi}\int_{R_i}\phi\nabla^2\left(\frac{1}{s}\right)dV \\ \frac{1}{4\pi}\int \left(\phi - \phi^\ast\right)\grad\frac{1}{s}\cdot\un_i dA &=& -\frac{1}{4\pi}\int_{R_o}\phi^\ast\nabla^2\left(\frac{1}{s}\right)dV \end{eqnarray*} Noting that \[ \nabla^2\left(\frac{1}{s}\right) = 4\pi\delta(\vec{x} - \vec{x}^\op) \] we get, \begin{eqnarray*} \frac{1}{4\pi}\int \left(\phi - \phi^\ast\right)\un_i\cdot\grad\frac{1}{s} dA &=& -\phi(\vec{x}) \\ \frac{1}{4\pi}\int \left(\phi - \phi^\ast\right)\un_i\cdot\grad\frac{1}{s} dA &=& -\phi^\ast(\vec{x}) \end{eqnarray*} Thus, the potential in either region can be considered to be due to source doublets of strength $\left(\phi - \phi^\ast\right)\un_i$ on the surface. \end{itemize} \item The potential due to a line vortex of strength $\kappa$ is given by (2.8.4) to be \[ \phi(\vec{x}) = -\frac{\kappa}{4\pi}\Omega, \] where $\Omega(\vec{x})$, the solid angle subtended by the curve $\vec{s}$ at a point $\vec{x}$, is given by \[ \Omega = \int\frac{\vec{s}\cdot\un}{s^3}dA(\vec{x}^{\op}) \] Therefore \begin{eqnarray*} \phi(\vec{x}) &=& -\frac{\kappa}{4\pi}\int\frac{\vec{s}\cdot\un}{s^3}dA(\vec{x}^{\op}) \\ &=& \frac{1}{4\pi}\int\kappa\un\cdot\grad\left(\frac{1}{s}\right)dA(\vec{x}^{\op}) \\ &=& \frac{1}{4\pi}\int\vec{\mu}\cdot\grad\left(\frac{1}{s}\right)dA(\vec{x}^{\op}) \end{eqnarray*} The last equation suggests that $\phi(\vec{x})$ can be interpreted as the potential due to a distribution of source doublets of strength $\vec{\mu} = \kappa\un$ over the surface whose boundary coincides with that of the vortex tube. We will now show that the velocity corresponding to this potential is solenoidal and irrotational. Let $\vec{u} = \grad\phi$, so that \[ \vec{u} = \frac{1}{4\pi}\int\grad\left[\vec{\mu}\cdot\grad\left(\frac{1}{s}\right)\right]dA(\vec{x}^{\op}) \] Therefore, \[ \curl\vec{u} = \frac{1}{4\pi}\int\curl\grad\left[\vec{\mu}\cdot\grad\left(\frac{1}{s}\right)\right]dA(\vec{x}^{\op}) = 0 \] making $\vec{u}$ an irrotational field.
Further, \begin{eqnarray*} \dive\vec{u} &=& \frac{1}{4\pi}\int\dive\grad\left[\vec{\mu}\cdot\grad\left(\frac{1}{s}\right)\right]dA(\vec{x}^{\op}) \\ &=& \frac{1}{4\pi}\int\nabla^2\left[\vec{\mu}\cdot\grad\left(\frac{1}{s}\right)\right]dA(\vec{x}^{\op}) \\ &=& \frac{1}{4\pi}\int\left[\vec{\mu}\cdot\nabla^2\grad\left(\frac{1}{s}\right)\right]dA(\vec{x}^{\op}) \\ &=& \frac{1}{4\pi}\int\left[\vec{\mu}\cdot\grad\nabla^2\left(\frac{1}{s}\right)\right]dA(\vec{x}^{\op}) \\ &=& \int\vec{\mu}\cdot\grad\delta(\vec{x} - \vec{x}^\op) dA(\vec{x}^{\op}) \\ &=& -\int\delta(\vec{x} - \vec{x}^\op)\dive\vec{\mu} dA(\vec{x}^{\op}) \\ &=& -\int\delta(\vec{x} - \vec{x}^\op)\cdot(0) dA(\vec{x}^{\op}) = 0, \end{eqnarray*} making $\vec{u}$ a solenoidal field. In getting from the fifth to the sixth step, we used the identity \begin{equation}\label{c2s11e5} \int \vec{f}(\vec{x}^\op)\cdot\grad\delta(\vec{x} - \vec{x}^\op) dA(\vec{x}^\op) = -\int \delta(\vec{x} - \vec{x}^\op)\dive\vec{f}(\vec{x}^\op) dA(\vec{x}^\op) \end{equation} We can prove it using integration by parts. In Cartesian tensor form, the left hand side is \[ \int f_i(\vec{x}^\op)\frac{\partial}{\partial x_i}\delta(\vec{x} - \vec{x}^\op)dA=f_i(\vec{x}^\op)\delta(\vec{x} - \vec{x}^\op)\Big|_C - \int\delta(\vec{x} - \vec{x}^\op)\pdt{f_i}{x_i}dA \] where $C$ denotes the boundary of the area of integration. Since the delta function is zero at all points other than $\vec{x}^\op$, \[ \int f_i(\vec{x}^\op)\frac{\partial}{\partial x_i}\delta(\vec{x} - \vec{x}^\op)dA = -\int\delta(\vec{x} - \vec{x}^\op)\pdt{f_i}{x_i}dA \] which, in Gibbs notation, gives \eqref{c2s11e5}. We still have to prove: \begin{itemize} \item A sheet vortex is equivalent to a distribution of source doublets over and normal to the surface coinciding with the sheet, provided that the closed vortex lines are reducible on the sheet. To that end, once again consider figure \ref{c2f5}. In that figure, the blue circles depict the vortex tubes. Since vortex tubes are closed, each one of them has a stream of fluid going into the plane and another one coming out of the plane. A cross section of the vortex tube can thus be imagined to be an assembly of a source and a sink. Further, this assembly of a source and a sink is such that one of them is on top of the other. (They are not side-by-side or in any other orientation.) Thus, the source doublet associated with the assembly is normal to the sheet. \item Any irrotational, solenoidal motion can be regarded as due to a certain sheet vortex coinciding with the boundary of the region of motion. To that end, we begin with \eqref{c2sa11e1}. \[ \int\phi_1\grad\phi_2\cdot\un dA = \int\phi_2\grad\phi_1\cdot\un dA \] Choose $\phi_2 = |\vec{x} - \vec{x}^\op|^{-1} = s^{-1}$. Further, let the integration be over a surface containing the field point $P$ with position vector $\vec{x}$. Thus, \[ \int\phi_1\grad\left(\frac{1}{|\vec{x} - \vec{x}^\op|}\right)\cdot\un dA = \int\left(\frac{1}{|\vec{x} - \vec{x}^\op|}\right)\grad\phi_1\cdot\un dA \] Since the field point is on the surface of integration, the integral on the left hand side diverges. To prevent it, we must ignore the field point by excluding a small sphere, $\Sigma$, of radius $\epsilon$ centered around $P$. This exclusion introduces an additional term on either side of the above equation. Further, let us call the remaining surface $S$.
Thus, \begin{eqnarray*} \int_{\Sigma}\phi_1\grad\left(\frac{1}{|\vec{x} - \vec{x}^\op|}\right)\cdot\un dA + \int_{S}\phi_1\grad\left(\frac{1}{|\vec{x} - \vec{x}^\op|}\right)\cdot\un dA &=& \\ \int_{\Sigma}\left(\frac{1}{|\vec{x} - \vec{x}^\op|}\right)\grad\phi_1\cdot\un dA + \int_{S}\left(\frac{1}{|\vec{x} - \vec{x}^\op|}\right)\grad\phi_1\cdot\un dA & & \end{eqnarray*} We can approximate $\phi_1$ and its gradient to be constant over $\Sigma$ so that the first terms on either side of the equation are approximated as \begin{eqnarray*} \int_{\Sigma}\phi_1\grad\left(\frac{1}{|\vec{x} - \vec{x}^\op|}\right)\cdot\un dA&=&\phi_1(\vec{x})\int\frac{-1}{\epsilon^2}\epsilon^2\sin\theta d\theta d\varphi \\ &=& -4\pi\phi_1(\vec{x}) \\ \int_{\Sigma}\left(\frac{1}{|\vec{x} - \vec{x}^\op|}\right)\grad\phi_1\cdot\un dA &=& \grad\phi_1(\vec{x})\cdot\int\frac{1}{\epsilon}\epsilon^2\sin\theta d\theta d\varphi \end{eqnarray*} In the limit $\epsilon \rightarrow 0$, the second of the above two terms goes to zero so that, \[ -4\pi\phi_1(\vec{x}) + \int_{S}\phi_1\grad\left(\frac{1}{|\vec{x} - \vec{x}^\op|}\right)\cdot\un dA = \int_{S}\left(\frac{1}{|\vec{x} - \vec{x}^\op|}\right)\grad\phi_1\cdot\un dA \] on rearrangement, \[ \phi_1(\vec{x}) = \frac{1}{4\pi}\int_{S}\phi_1\grad\left(\frac{1}{|\vec{x} - \vec{x}^\op|}\right)\cdot\un dA - \frac{1}{4\pi}\int_{S}\left(\frac{1}{|\vec{x} - \vec{x}^\op|}\right)\grad\phi_1\cdot\un dA \] If $\phi_1(\vec{x}) = \kappa$ on the surface $S$, then, \[ \phi_1(\vec{x}) = \frac{1}{4\pi}\int_{S}\kappa\un\cdot\grad\left(\frac{1}{|\vec{x} - \vec{x}^\op|}\right) dA \] If $\vec{\mu} = \kappa\un$, \[ \phi_1(\vec{x}) = \frac{1}{4\pi}\int_{S}\vec{\mu}\cdot\grad\left(\frac{1}{|\vec{x} - \vec{x}^\op|}\right) dA \] Thus, any solution of Laplace's equation can be expressed as a potential due to source doublets on a surface. A solution of Laplace's equation is essentially the potential of an irrotational, solenoidal velocity. \end{itemize} \end{enumerate}
{ "alphanum_fraction": 0.6496047431, "avg_line_length": 50.7766143106, "ext": "tex", "hexsha": "c0909ad3ba9f8a91b8246c41a5ed731b6461083e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "66ae9bf4a363bd32b09df22a049e281953adb39b", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "amey-joshi/physics", "max_forks_repo_path": "gkb/chap2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "66ae9bf4a363bd32b09df22a049e281953adb39b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "amey-joshi/physics", "max_issues_repo_path": "gkb/chap2.tex", "max_line_length": 188, "max_stars_count": null, "max_stars_repo_head_hexsha": "66ae9bf4a363bd32b09df22a049e281953adb39b", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "amey-joshi/physics", "max_stars_repo_path": "gkb/chap2.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 44542, "size": 116380 }
\documentclass[12pt]{article} \usepackage{amsmath} \usepackage{amssymb} \usepackage{geometry} \usepackage{enumerate} \usepackage{natbib} \usepackage{float}% stabilize figure positions
\usepackage{graphicx}% for including figures
\usepackage[english]{babel} \usepackage{a4wide} \usepackage{indentfirst}% paragraph indentation
\usepackage{enumerate}% for numbered lists
\usepackage{multirow}% merge table rows
\title{\large UM-SJTU JOINT INSTITUTE\\Intro to Circuits\\(VE215)\\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ LABORATORY REPORT\\\ \\\ EXERCISE 2\\\ OP Amp Lab \\\ \\\ \\\ \\\ \\\ } \author{Name: Pan Chongdan\\ID: 516370910121\\Group: 16} \date{Date: \today} \begin{document} \maketitle \newpage \section{Introduction and Theoretical Background} \subsection{Objectives} \begin{enumerate} \item Learn how to build and test a variety of circuits based on the LM 741 Op Amp chip: non-inverting and inverting amplifiers with fixed gain. \item Measure the gain of the amplifier and compare it with theoretical calculations. \item Determine the saturated output voltage of the amplifier. \end{enumerate} \subsection{Introduction} Operational amplifiers are integrated circuits used in many applications; in this lab I built and studied the LM741. \subsubsection{Op Amp Terminals} \begin{figure}[H] \centering \includegraphics[scale=0.5]{P1.jpg} \caption{Circuit symbol of a typical op amp} \end{figure} In Figure 1, there are: \begin{enumerate} \item Two terminals for input signals: inverting (labeled -) and non-inverting (labeled +) \item A terminal for the output signal \item Two terminals for the power supply voltages: positive +Vcc and negative –Vcc. \end{enumerate} Figure 2 shows the pin numbers of the LM741. \begin{figure}[H] \centering \includegraphics[scale=0.5]{P2.jpg} \caption{Pin numbers for the LM 741 op amp} \end{figure} Note: \begin{enumerate} \item Pin 8 is not connected; pins 1 and 5 are not used in this lab. \item Do not mistake the connections of input signals (2 labeled – and 3 labeled +) for the connections to the power supply (4 for -Vcc and 7 for +Vcc). \item Make sure you connect the grounds of the oscilloscope, function generator and DC source together. \end{enumerate} \subsubsection{The Gain of Amplifier Circuits} The amplifier circuits are characterized by their gain values. The voltage gain is the ratio of output voltage to the input voltage in the circuit: $$\text{Voltage Gain}=\frac{\text{Output Voltage}}{\text{Input Voltage}}$$ In the lab, you can use the oscilloscope to measure the input and output peak-to-peak amplitudes of the signals through two channels at the same time. \subsubsection{Inverting Amplifier} \begin{figure}[H] \centering \includegraphics[scale=0.3]{P3.jpg} \caption{Inverting Amplifier} \end{figure} For the inverting amplifier, the theoretical gain should be: $$Gain=\frac{V_{out}}{V_s}=-\frac{R_F}{R_A}$$ \subsubsection{Non-inverting Amplifier} \begin{figure}[H] \centering \includegraphics[scale=0.3]{P4.jpg} \caption{Non-inverting Amplifier} \end{figure} For the non-inverting amplifier, the theoretical gain should be: $$Gain=\frac{V_{output}}{V_S}=1+\frac{R_F}{R_A}$$ \section{Apparatus} I used a function generator and an oscilloscope in Exercise 2. \subsection{Function Generator} \begin{figure}[H] \centering \includegraphics[scale=0.3]{P5.jpg} \caption{Function Generator} \end{figure} \begin{enumerate} \item “Parameter”: to change the amplitude and frequency of the wave to generate. The amplitude here equals half of the peak-to-peak (pp) value. \item “1”/“2”: to switch on the channel.
\end{enumerate} \subsection{Oscilloscope} \begin{figure}[H] \centering \includegraphics[scale=0.3]{P6.jpg} \caption{Oscilloscope} \end{figure} \begin{enumerate} \item “Auto scale”: to automatically achieve an output on the screen with a proper scale. \item “Means”: to turn on the measurement of the wave. \item “1”/“2”: to show or hide the wave you are detecting through channel 1 or 2. \end{enumerate} \section{Results} \subsection{Non-inverting Amplifier} \subsubsection{The resistances of the two resistors we used to build the circuit.} \begin{table}[H] \centering \begin{tabular}{|c|c|} \hline $R_1[\Omega]$ &51.3 \\ \hline $R_f[\Omega]$ &99.5 \\ \hline \end{tabular} \end{table} \subsubsection{The voltage supplied to the op amp.} \begin{table}[H] \centering \begin{tabular}{|c|c|} \hline $+V_{cc}[V]$ &+5 \\ \hline $-V_{cc}[V]$ &-5 \\ \hline \end{tabular} \end{table} \subsubsection{The input/output relationship} \begin{table}[H] \centering \begin{tabular}{|c|c|c|} \hline
$V_{PP(in)}$[V] &$V_{PP(out)}$[V] &Gain \\ \hline
0.2 &$33.4\times 10^{-3}$&0.167 \\ \hline
0.4 &$62.3\times 10^{-3}$&0.156 \\ \hline
0.6 &$90.5\times 10^{-3}$&0.151 \\ \hline
0.8 &$121\times 10^{-3}$&0.152 \\ \hline
1.0 &$0.15$&0.150 \\ \hline
1.2 &$0.179$&0.149 \\ \hline
1.4 &$0.211$&0.151 \\ \hline
1.6 &$0.243$&0.152 \\ \hline
1.8 &$0.271$&0.151 \\ \hline
2.0 &$0.300$&0.150 \\ \hline
2.2 &$0.332$&0.151 \\ \hline
2.4 &$0.352$&0.147 \\ \hline
2.6 &$0.368$&0.142 \\ \hline
2.8 &$0.382$&0.137 \\ \hline
3.0 &$0.396$&0.132 \\ \hline
3.2 &$0.402$&0.126 \\ \hline
3.4 &$0.402$&0.118 \\ \hline
\end{tabular} \end{table} \subsection{Inverting Amplifier} \subsubsection{The resistances of the two resistors we used to build the circuit.} \begin{table}[H] \centering \begin{tabular}{|c|c|} \hline $R_1[\Omega]$ &51.3 \\ \hline $R_f[\Omega]$ &99.5 \\ \hline \end{tabular} \end{table} \subsubsection{The voltage supplied to the op amp.} \begin{table}[H] \centering \begin{tabular}{|c|c|} \hline $+V_{cc}[V]$ &+5 \\ \hline $-V_{cc}[V]$ &-5 \\ \hline \end{tabular} \end{table} \subsubsection{The input/output relationship} \begin{table}[H] \centering \begin{tabular}{|c|c|c|} \hline
$V_{PP(in)}$[V] &$V_{PP(out)}$[V] &Gain \\ \hline
0.2 &$22.1\times 10^{-3}$&0.111 \\ \hline
0.4 &$41.8\times 10^{-3}$&0.105 \\ \hline
0.6 &$60.3\times 10^{-3}$&0.101 \\ \hline
0.8 &$78.4\times 10^{-3}$&0.098 \\ \hline
1.0 &$96.5\times 10^{-3}$&0.097 \\ \hline
1.2 &$0.112$&0.094 \\ \hline
1.4 &$0.129$&0.092 \\ \hline
1.6 &$0.147$&0.092 \\ \hline
1.8 &$0.166$&0.092 \\ \hline
2.0 &$0.183$&0.092 \\ \hline
2.2 &$0.203$&0.093 \\ \hline
2.4 &$0.221$&0.092 \\ \hline
2.6 &$0.241$&0.093 \\ \hline
2.8 &$0.259$&0.093 \\ \hline
3.0 &$0.271$&0.091 \\ \hline
3.2 &$0.283$&0.086 \\ \hline
3.4 &$0.289$&0.085 \\ \hline
3.6 &$0.297$&0.083 \\ \hline
3.8&$0.304$&0.080 \\ \hline
4.0 &$0.304$&0.076 \\ \hline
\end{tabular} \end{table} \section{Discussion} \subsection{Relation between $V_{pp(out)}$ and
$V_{pp(in)}$} \subsubsection{Non-inverting Amplifier} $$V_1=V_{pp(in)}$$ $$\frac{0-V_1}{R_A}=\frac{V_1-V_{pp(out)}}{R_f}$$ $$Gain=\frac{V_{pp(out)}}{V_{pp(in)}}=1+\frac{R_f}{R_A}$$ \begin{figure}[H] \centering \includegraphics[scale=0.3]{P7.jpg} \caption{Relation between $V_{pp(out)}$ and $V_{pp(in)}$ for the non-inverting amplifier} \end{figure} \subsubsection{Inverting Amplifier} $$\frac{V_{pp(in)}}{R_A}=\frac{0-V_{pp(out)}}{R_f}$$ $$Gain=\frac{V_{pp(out)}}{V_{pp(in)}}=-\frac{R_f}{R_A}$$ \begin{figure}[H] \centering \includegraphics[scale=0.3]{P8.jpg} \caption{Relation between $V_{pp(out)}$ and $V_{pp(in)}$ for the inverting amplifier} \end{figure} \subsection{Calculation, Uncertainty Analysis, and Plot} \subsubsection{Non-inverting Amplifier} $$\overline{Gain}=\frac{1}{17}\sum^{17}_{i=1}Gain_i=0.146$$ $$s_X=0.012$$ $$u=0.012/0.5=0.024$$ $$u_r=16.44\%$$ $$Gain=0.146\pm0.024$$ \begin{figure}[H] \centering \includegraphics[scale=0.3]{P9.jpg} \caption{Relation between gain and $V_{pp(in)}$ for the non-inverting amplifier} \end{figure} \subsubsection{Inverting Amplifier} $$\overline{Gain}=\frac{1}{20}\sum^{20}_{i=1}Gain_i=0.092$$ $$s_X=8.09\times10^{-3}$$ $$u=8.09\times10^{-3}/0.467=0.017$$ $$u_r=18.47\%$$ $$Gain=0.092\pm0.017$$ \begin{figure}[H] \centering \includegraphics[scale=0.4]{P10.jpg} \caption{Relation between gain and $V_{pp(in)}$ for the inverting amplifier} \end{figure} \subsection{Error Analysis} For the non-inverting amplifier, the theoretical gain is $1+\frac{99.5}{51.3}=2.93$, but my measured value is 0.146. For the inverting amplifier, the theoretical gain is $-1.94$, while my measured value is 0.092. All my measured gains are about 5\% of the theoretical values. I think this is because the function generator's output voltage reading is wrong: if the cause were some large stray resistance, the ratio of theoretical gain to measured gain should not be the same in both circuits. \par I found two potential reasons for the error in the output voltage reading. The first reason is that the oscilloscope's vertical scale and probe setting may be wrong. The second reason is that the oscilloscope may display a higher $V_{pp}$ than the function generator reports. The root cause is that the generator does not generate the waveform that the oscilloscope is expecting. In addition, I might have made some mistakes in reading the voltage, such as choosing the wrong unit. \par The relative uncertainty is quite high; I think there are also two reasons for this. First, the amplifiers used are not ideal; they have their own resistance, which causes some error. Second, the points in my figures approach a horizontal level as $V_{in}$ becomes larger; this is because the amplifier saturates and $V_{out}$ cannot increase further. For an ideal amplifier, the line should be straight. \par In later lab work, I should be more careful about the units of all physical quantities, and pay more attention to the amplification factor of the oscilloscope to make sure the oscilloscope matches the function generator. \section{References} \begin{enumerate}[-] \item \emph{VE215FA2017 OP AMP LabManual} \item \emph{Circuits Make Sense}, Alexander Ganago, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor. \end{enumerate} \end{document}
{ "alphanum_fraction": 0.7036309093, "avg_line_length": 39.7818930041, "ext": "tex", "hexsha": "ba193c750169b932b1860a2f7ceb50e98f2368dd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "430ffaeea0830dc3105883374dd729fe9f86cc55", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "PANDApcd/SemiConductorCircuit", "max_forks_repo_path": "VE215ElectricCircuits/Lab/Lab2/Lab.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "430ffaeea0830dc3105883374dd729fe9f86cc55", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "PANDApcd/SemiConductorCircuit", "max_issues_repo_path": "VE215ElectricCircuits/Lab/Lab2/Lab.tex", "max_line_length": 460, "max_stars_count": null, "max_stars_repo_head_hexsha": "430ffaeea0830dc3105883374dd729fe9f86cc55", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "PANDApcd/SemiConductorCircuit", "max_stars_repo_path": "VE215ElectricCircuits/Lab/Lab2/Lab.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3342, "size": 9667 }
\documentclass{article} \usepackage{amsmath} \usepackage[margin=1.0in]{geometry} \usepackage{xcolor} \begin{document} \noindent Does $\displaystyle \sum_{n=1}^\infty \frac{n^n}{n!}$ diverge, converge absolutely, or converge conditionally? \subsection*{Solution} \begin{align*} L&=\lim_{n \to \infty} \left|\frac{a_{n+1}}{a_n}\right|\\ &= \lim_{n \to \infty} \left| \frac{(n+1)^{n+1}}{(n+1)!} \cdot \frac{n!}{n^n}\right|\\ &= \lim_{n \to \infty} \left| \left(\frac{n+1}{n}\right)^n\right|\\ &= \lim_{n \to \infty} \left(\frac{n+1}{n}\right)^n. \end{align*} This is an indeterminate form, so we let $y = \left(\frac{n+1}{n}\right)^n$. So $\ln y = n \ln \frac{n+1}{n}$. Applying the limit to both sides, we have \[ \lim_{n \to \infty} \ln y = \lim_{n \to \infty} n \ln \frac{n+1}{n}.\] The limit on the right can be computed using L'Hopital's rule (with the chain rule and quotient rule). \begin{align*} &\lim_{n \to \infty} n \ln \frac{n+1}{n} \text{, which was the limit on the right side above}\\ &= \lim_{n \to \infty} \frac{ \ln \frac{n+1}{n}}{\frac1n}\\ &= \lim_{n \to \infty} \frac{ \frac{n}{n+1} \cdot \frac{n \cdot 1 - (n+1) \cdot 1}{n^2}}{\frac{-1}{n^2}} \text{ by L'Hopital's}\\ &= \lim_{n \to \infty} \frac{n}{n+1} \text{ after algebra simplification}\\ &= \lim_{n \to \infty} \frac{1}{1+1/n}\\ &= 1. \end{align*} So $\lim_{n \to \infty} y = e^1 = e$, and hence $L=e$. Since $L>1$, the series $\displaystyle \sum_{n=1}^\infty \frac{n^n}{n!}$ diverges by the Ratio Test. \end{document}%%%%%%%%%%%%%%%%%
\begin{align*} L&=\lim_{n \to \infty} \sqrt[n]{|a_n|}\\ &= \lim_{n \to \infty} \sqrt[n]{\left| \right|}\\ \end{align*} \begin{align*} L&=\lim_{n \to \infty} \left|\frac{a_{n+1}}{a_n}\right|\\ &= \lim_{n \to \infty} \left| \right|\\ \end{align*} \begin{align*} \lim_{n \to \infty} a_n &= \lim_{n \to \infty} \\ \end{align*} Since $\sum |a_n| = \sum a_n$, the series $\displaystyle \sum_{n=1}^\infty AAAAAAAAAAAAAA$ converges absolutely. Since $|r| < 1$, the series ... converges by the Geometric Series Test. Since $|r| \geq 1$, the series ... diverges by the Geometric Series Test. The function $f(x)=\frac{}{}$ is continuous, positive, and decreasing on $[1,\infty)$. \subsection*{Solution}
{ "alphanum_fraction": 0.5992892048, "avg_line_length": 35.7301587302, "ext": "tex", "hexsha": "babf21436b2e51328a79a8e11f01de3af8afc2be", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2017-06-25T22:14:59.000Z", "max_forks_repo_forks_event_min_datetime": "2016-12-25T18:51:52.000Z", "max_forks_repo_head_hexsha": "db677132d89eb95dc5749dceeb9544c77b6b4a05", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "edward-kim-math/edward-d-kim.github.io", "max_forks_repo_path": "key/series/m2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "db677132d89eb95dc5749dceeb9544c77b6b4a05", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "edward-kim-math/edward-d-kim.github.io", "max_issues_repo_path": "key/series/m2.tex", "max_line_length": 149, "max_stars_count": null, "max_stars_repo_head_hexsha": "db677132d89eb95dc5749dceeb9544c77b6b4a05", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "edward-kim-math/edward-d-kim.github.io", "max_stars_repo_path": "key/series/m2.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 889, "size": 2251 }
% This is an example LaTeX document showing how to use the l3proj class to
% write your report. Use pdflatex and bibtex to process the file, creating
% a PDF file as output (there is no need to use dvips when using pdflatex).
% Modified
\documentclass{l3proj} \usepackage{subfiles} \usepackage{nameref} \begin{document} \title{UniCom - The Student Feedback Application} \author{Adam Christie \\ Alice Ravier \\ Ashwin Maliampurakal \\ Jonas Sakalys} \date{9 January 2009} \maketitle \begin{abstract} As students, we feel that there is a lack of communication between lecturers and students in terms of our progress during various courses. This is an area that was also identified by the client, Dr Mark Wong of the School of Social and Public Policy, and in this project our team aimed to develop an application which attempts to minimise this gap in communication and improve a student's learning experience at university. The client requested that an application be developed which allows lecturers to provide their students with feedback in different areas as they proceed through a module. This was intended to provide positive reinforcement to the student, and motivate them to improve in areas brought to their attention by the lecturer. This dissertation details the team's journey in developing this application, beginning with requirements gathering, developing a specification, the development effort itself, and finally how the application was deployed. Furthermore, the ways in which the software engineering practices taught during the PSD course were applied to this project are also discussed. In particular, the use of Agile software development is detailed extensively, in addition to how members of the group worked together in this team project to deliver a product which met the requirements of Dr Mark Wong. \end{abstract}
%% Comment out this line if you do not wish to give consent for your
%% work to be distributed in electronic format.
\educationalconsent \newpage
%==============================================================================
\section{Introduction} \subfile{sections/introduction} \newpage
%==============================================================================
\section{Case Study Background} \label{case_study} \subfile{sections/case_study} \newpage
%==============================================================================
\section{Development} \label{development} \subfile{sections/development} \newpage
%==============================================================================
\section{Technologies} \label{technologies} \subfile{sections/technologies} \newpage
%==============================================================================
\section{The Software Process} \label{software_process} \subfile{sections/software_process} \newpage
% - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
\section{Teamwork} \label{teamwork} \subfile{sections/teamwork} \newpage
%------------------------------------------------------------------------------
\section{Add something here} \newpage
%------------------------------------------------------------------------------
\section{Conclusions} \label{conclusions} \subfile{sections/conclusions}
%==============================================================================
\bibliographystyle{plain} \bibliography{dissertation} \end{document}
{ "alphanum_fraction": 0.6283803431, "avg_line_length": 33.7156862745, "ext": "tex", "hexsha": "cdcf46f15ebc0f995a9ff17426b36ae45d9dee04", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "89d02bd93ba7de6fd774a8a0be9c4b8c12f8ee99", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "Ashwin-MJ/UniCom", "max_forks_repo_path": "docs/dissertation/dissertation.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "89d02bd93ba7de6fd774a8a0be9c4b8c12f8ee99", "max_issues_repo_issues_event_max_datetime": "2019-03-20T11:58:55.000Z", "max_issues_repo_issues_event_min_datetime": "2019-01-15T15:03:45.000Z", "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "Ashwin-MJ/UniCom", "max_issues_repo_path": "docs/dissertation/dissertation.tex", "max_line_length": 581, "max_stars_count": null, "max_stars_repo_head_hexsha": "89d02bd93ba7de6fd774a8a0be9c4b8c12f8ee99", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "Ashwin-MJ/UniCom", "max_stars_repo_path": "docs/dissertation/dissertation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 666, "size": 3439 }
%!TEX root = ../notes.tex
\section{April 20, 2022} \subsection{Short Vectors} \begin{ques*} How short is the shortest vector in a lattice $L$? \end{ques*} A more general question we could ask: \begin{ques*} When does some region contain a nontrivial lattice point? \end{ques*} \begin{theorem}[Minkowski's Theorem] Let $L\subseteq \RR^n$ be a lattice of dimension $n$. Let $S\subseteq\RR^n$ be a \emph{bounded symmetric convex} set. If $\mathsf{Vol}(S)>2^n\det(L)$, then $S\cap L$ contains a nonzero lattice point. \end{theorem} \begin{definition}[Bounded Set] $\{\text{Lengths of vectors in }S\}$ is bounded. In other words, there is some ball that contains $S$. \end{definition} \begin{definition}[Symmetric Set] If $\bvec{v}\in S$, then $-\bvec{v}\in S$. \end{definition} \begin{definition}[Convex Set] If $\bvec{v}, \bvec{w}\in S$, then the line segment connecting $\bvec{v}$ and $\bvec{w}$ is a subset of $S$. \end{definition} \begin{proof}[Proof of Minkowski's Theorem] Let $\mathcal{F}$ be a fundamental domain. Any vector $\bvec{w}$ can be written as $\bvec{w} = t(\bvec{w}) + v(\bvec{w})$ where $t(\bvec{w})\in \mathcal{F}$ and $v(\bvec{w})\in L$. Consider the map $t: \frac{1}{2}S \to \mathcal{F}$ sending every vector $\bvec{w}\mapsto t(\bvec{w})$. What does $t$ do to volume? We cut $\frac{1}{2}S$ up into a finite number of regions, and `cut-and-paste' them into the fundamental domain. Locally, $t$ preserves volume. When must two points in $\frac{1}{2}S$ be sent to the same point in $\mathcal{F}$? When we have a `carpet' with area greater than the room's area. That is to say, $\mathsf{Vol}(\frac{1}{2} S) > \mathsf{Vol}(\mathcal{F})$ implies that there is an overlapping point. This is to say \begin{align*} \mathsf{Vol}\left(\frac{1}{2} S\right) & > \mathsf{Vol}(\mathcal{F}) \\ \frac{1}{2^n}\mathsf{Vol}(S) & > \mathsf{Vol}(\mathcal{F}) = \det(L) \\ \mathsf{Vol}(S) & > 2^n \det(L) \end{align*} So given this inequality, there are two distinct points $\frac{1}{2}\bvec{w}_1 \neq \frac{1}{2}\bvec{w}_2$ in $\frac{1}{2}S$ such that $t\left( \frac{1}{2}\bvec{w}_1 \right) = t\left( \frac{1}{2}\bvec{w}_2 \right)$. Then we know that \[\frac{1}{2}\bvec{w}_1 - \frac{1}{2}\bvec{w}_2 \in L\] So then consider \[\frac{1}{2}\bvec{w}_1 - \frac{1}{2}\bvec{w}_2 = \frac{1}{2}(\bvec{w}_1 - \bvec{w}_2)\] which is the midpoint of $\bvec{w}_1$ and $-\bvec{w}_2$. Since $-\bvec{w}_2\in S$ by symmetry, this midpoint is in $S$ by convexity, and it is also a nonzero point of $L$. So $S$ contains a nonzero lattice point. \end{proof} \begin{theorem}[Variant of Minkowski's Theorem] If $S\subseteq \RR^n$ is a bounded, symmetric, convex and \emph{closed} set, and \[\mathsf{Vol}(S)\geq 2^n\det(L),\] then $S\cap L$ contains a nonzero lattice point. \end{theorem} \begin{definition}[Closed Set] Every limit point of $S$ is contained in $S$. \end{definition} \emph{We added the condition that $S$ be closed, and changed our bound to be a $\geq$. } \begin{proof}[Proof of variant] For any $k$: \[\left( 1 + \frac{1}{k} \right)S \cap L\] contains some $\bvec{v}_k\neq \bvec{0}$ in $L$ (which is true by our first version). The sequence $\bvec{v}_1, \bvec{v}_2, \bvec{v}_3, \dots$ is a sequence in $2S\cap L$. $2S$ is bounded, so $2S\cap L$ is a finite set of lattice points. Hence some $\bvec{v}\neq \bvec{0}$ occurs infinitely often in the sequence, and such $\bvec{v}$ is contained in $\bigcap_{k}\left( 1 + \frac{1}{k} \right) S = S$ because $S$ is closed. \end{proof} \begin{corollary}[Hermite's Theorem] Let $L$ be a lattice of dimension $n$ in $\RR^n$.
Then, $L$ contains a nonzero vector $\bvec{v}$ with \[||\bvec{v}||\leq \sqrt{n}\cdot \det(L)^\frac{1}{n}\] \end{corollary} \begin{proof} \emph{Application of Minkowski's Theorem.} Apply the closed variant of Minkowski's Theorem to \[\left\{ (x_1, \dots, x_n)\Bigm\vert |x_i|\leq \det(L)^{1/n} \right\}\] which is a closed cube with side length $2\cdot \det(L)^{1/n}$. So $\mathsf{Vol}(S) = 2^n\cdot \det(L)$. Every point of this cube lies within distance $\sqrt{n}\det(L)^{1/n}$ of the origin (half of the diagonal), so the nonzero lattice point it contains has length at most $\sqrt{n}\det(L)^{1/n}$. \end{proof} A variant of Hermite's Theorem is that we can find an entire basis $\bvec{v}_1, \bvec{v}_2, \dots, \bvec{v}_n$ such that \[||\bvec{v}_1||\cdot||\bvec{v}_2||\cdots ||\bvec{v}_n|| \leq n^{n/2}\det(L)\] and we define the Hadamard ratio to be \[\mathcal{H} = \left( \frac{\det(L)}{||\bvec{v}_1||\cdot||\bvec{v}_2||\cdots ||\bvec{v}_n||} \right)^{1/n}\] where $0< \mathcal{H} \leq 1$ and $\mathcal{H} = 1$ when our basis is orthogonal.
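
As a quick example of these bounds, take the lattice $L \subseteq \RR^2$ generated by the orthogonal basis $\bvec{v}_1 = (1,0)$, $\bvec{v}_2 = (0,1)$, so that $\det(L) = 1$. Hermite's Theorem guarantees a nonzero vector of length at most $\sqrt{2}\cdot 1 = \sqrt{2}$, and indeed $\bvec{v}_1$ has length $1$. The Hadamard ratio of this basis is $\mathcal{H} = \left( \frac{1}{1\cdot 1} \right)^{1/2} = 1$, as expected for an orthogonal basis, while the skewed basis $\bvec{w}_1 = (1,0)$, $\bvec{w}_2 = (1,1)$ of the same lattice gives $\mathcal{H} = \left( \frac{1}{1\cdot \sqrt{2}} \right)^{1/2} = 2^{-1/4} \approx 0.84$.
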
{ "alphanum_fraction": 0.6298342541, "avg_line_length": 49.9310344828, "ext": "tex", "hexsha": "cbbadf296962d8fa421bf5db9ab5105d332cce57", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_forks_repo_licenses": [ "BSL-1.0" ], "max_forks_repo_name": "jchen/math1580-notes", "max_forks_repo_path": "lectures/2022-04-20.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSL-1.0" ], "max_issues_repo_name": "jchen/math1580-notes", "max_issues_repo_path": "lectures/2022-04-20.tex", "max_line_length": 281, "max_stars_count": 1, "max_stars_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_stars_repo_licenses": [ "BSL-1.0" ], "max_stars_repo_name": "jchen/math1580-notes", "max_stars_repo_path": "lectures/2022-04-20.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-14T15:03:38.000Z", "max_stars_repo_stars_event_min_datetime": "2022-02-14T15:03:38.000Z", "num_tokens": 1617, "size": 4344 }
\chapter{Security and privacy} \label{sec:security} The following sections discuss the security enhancements employed by \emph{HyCube}, which are essential elements of real peer-to-peer systems, as well as the privacy features that provide anonymous request-response communication, keeping the identities of both communicating parties secret. Section \ref{sec:secureRouting} presents two routing security enhancements - introducing the secure routing tables concept and randomized next hop selection. Section \ref{sec:accessControlAuthentication} discusses access control and authentication mechanisms - it is explained how application-specific access control and authentication mechanisms may be incorporated into the \emph{HyCube} architecture. Section \ref{sec:congestionControl} describes the congestion control mechanism employed by \emph{HyCube}, preventing nodes from being overloaded with processing large numbers of messages, while at the same time limiting nodes' vulnerability to DoS/DDoS attacks. Section \ref{sec:privacy} discusses privacy issues - data encryption and anonymity. The section presents the registered and anonymous routes concept, allowing the communicating parties to keep their identities secret. \section{Secure routing} \label{sec:secureRouting} \subsubsection{Secure routing tables} In \cite{secrouting1}, the authors propose secure routing tables - additional constrained routing tables maintained by nodes, which may be used when messages are not delivered with the use of regular routing tables. The constrained routing tables are created based on some strong constraint on the set of node IDs that can fill each slot of the routing table. The constraint should be strong enough to make it difficult for potential attackers to manipulate. The authors suggest that each entry in the constrained routing table be the closest to the desired point $P$ in the ID space. In the extension of the \emph{Pastry} DHT, they define point $P$ as follows: for a routing table of a node with identifier $X$, the slot $j$ at level $i$ (the routing table slot may contain nodes with IDs that share the first $i$ digits with $X$ and have value $j$ in the $i+1$-th digit), $P$ shares the first $i$ digits with $X$, it has the value $j$ in the $i+1$-th digit, and it has the same values of the remaining digits as $X$. Nodes closest to meeting this condition (the remaining digits closest to the ones of $X$) would be chosen for the secure routing tables. This approach is a good heuristic for limiting attacks based on propagating false information about possible routing table candidates, because the probability of any node's ID meeting that condition for a large number of peers is very low. The above strategy (with modifications) has been adopted by \emph{HyCube}. For each routing table slot (primary or secondary routing table), the point $P$ is defined as follows: it has the first $i$ digits equal to the address of the hypercube corresponding to the routing table slot and the remaining digits equal to those of $X$. Such a formulation is equivalent to the original definition but expressed using the hierarchical hypercube terminology and extending the definition to also cover the secondary routing table. However, the process of calculating the constraint for routing table nodes in \emph{HyCube} is different. In the original solution, the distance from the point $P$ to the ID of the routing table slot candidate is minimized.
In \emph{HyCube}, the value being minimized is the distance between $P$ and the result of the XOR operation on the candidate's ID and a certain secret key maintained by each node. This secret key may be randomly re-generated after analyzing every $n_{change}$ candidates ($n_{change}$ is a system parameter). Additionally, to avoid replacing the constrained routing table nodes every time the secret key is changed, every routing table slot contains the value of the distance calculated when the node was added to the routing table. The node is replaced only if the distance to the new node ID (result of the XOR function on the ID and the new secret key) is smaller than the distance stored in the routing table slot. Such an approach makes it even more difficult for any attacking node to track the criterion (the secret key) used to make the node selection, i.e. to determine a way to be included in another node's routing tables, especially in routing tables of a large number of nodes. This makes it very good protection against the Eclipse attack. Combined with the received message acknowledgment mechanism (Section \ref{sec:routing}) and resending, it is possible to configure \emph{HyCube} to switch to secure routing after a given number of unsuccessful trials. It is therefore possible to use the regular routing tables for efficient routing, and fall back on the secure routing tables only when needed. Furthermore, \emph{HyCube} allows forcing the use of secure routing tables explicitly (for routing, lookup, search, as well as for the operations on resources). \subsubsection{Random next hop selection} In addition to the use of secure routing tables, \emph{HyCube} may enable one additional heuristic when regular routing fails - skipping a certain random number of the best next hops found in the routing tables (next hop selection). The same behavior may be forced on subsequent nodes along the path by specifying an appropriate flag in the message header. The number of nodes to be skipped is generated based on the normal distribution with the mean and standard deviation (scale) specified by system parameters. Either absolute values of the generated numbers are used, or for the negative values, no nodes are skipped (the behavior is controlled by a parameter value). Additionally, the number of randomly skipped nodes may be limited by an upper bound (also defined by a parameter value), and another system parameter defines whether the nodes skipped should include the exact match (message recipient) or not. The parameters of the distribution should be chosen in a way that would prevent skipping too large numbers of nodes in individual steps (0, 1 or 2 next hops should be skipped in most cases). Otherwise, such an approach may increase the expected path length. The technique of skipping a random number of next hops in route selection provides a very good way of resending messages using different routes every time (with a certain probability), still retaining comparable expected path lengths. However, to avoid potential route length increases, this mechanism should be applied only when a message is not delivered when using normal routing (the best next hops). \section{Access control and authentication} \label{sec:accessControlAuthentication} \subsubsection{Access control} In certain cases, it would be desirable to employ some application-specific restrictions on access to the system.
By default, the access to the DHT is open and any node can connect to the system by contacting any node already existing in the DHT. However, the architecture of \emph{HyCube} allows implementing any application-specific access control mechanism. The \emph{HyCube} library architecture supports defining external modules implementing certain interfaces, which process all messages being sent, as well as all received messages (\emph{message send processors} and \emph{received message processors}). Each message processor (either message send processor or received message processor) implements a method taking the message object as an argument. The method returns a boolean value indicating whether the message should be processed further or should be dropped. The access control mechanism may be implemented by registering additional message processors, which, depending on the access control logic, could allow or prevent further processing. \subsubsection{Authentication} Although in most cases the identities of the communicating parties are not important, in certain applications it may be crucial to verify senders of messages. Authentication may be realized, for example, by public key cryptography - by providing digital signatures of the messages being sent, or by encrypting (with the private key) certain content specified by the verifying party. This verification data may be exchanged either within application-specific messages, or may be included in message headers (the \emph{HyCube} protocol allows extending the message header with application-specific data) and processed by message send processors and received message processors. To restrict nodes from using any arbitrarily chosen identifiers (and to prevent malicious nodes from using other nodes' identifiers), it is possible to base the node identifiers on nodes' public keys in a way that would make it easy to validate the node ID based on its public key, and very difficult to find two different keys resulting in the same node ID. Thus, the node ID may be, for example, equal to the public key itself, or, if the public key is longer than the node identifier, a cryptographic hash (digest) of the public key may be used. It is also possible to verify message senders on the network level (e.g. IP addresses), for messages that are supposed to be sent to the node directly (not routed) from the sender (e.g. LEAVE or NOTIFY messages). \section{Congestion control} \label{sec:congestionControl} \emph{HyCube} employs a simple congestion control mechanism - limiting the maximum number of messages processed in a specified time interval. When receiving a message, the node checks whether the message limit is not already exceeded. If the message is within the specified limit, it is processed normally, and the processing time is saved for future checks. Otherwise, the message is dropped. \emph{HyCube} allows defining a general limit - for all received messages (regardless of the message type), as well as limits for individual message types (for individual message types, the expected numbers of messages received may be different). These limits should be adjusted to the application and configuration (for example the expected number of keepalive or recovery messages). Such a simple approach is able to prevent congestion at the node level, and partially protect against DoS attacks. The solution would also limit overloading of certain nodes caused by too many nodes being connected to them (keeping references to them in their routing tables).
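
To make the mechanism more concrete, a minimal sketch of such a limiter is presented below. The sketch is written in Python for illustration only - the class name, the parameter values and the sliding-window bookkeeping are assumptions of this example and are not taken from the \emph{HyCube} implementation; it merely mirrors the general and per-type limits described above.
\begin{verbatim}
import time
from collections import defaultdict, deque

class MessageRateLimiter:
    """Sliding-window limiter: at most 'limit' messages per 'interval' seconds.

    A general limit covers all received messages; optional per-type limits
    cover individual message types.
    """

    def __init__(self, general_limit, interval, per_type_limits=None):
        self.interval = interval
        self.general_limit = general_limit
        self.per_type_limits = per_type_limits or {}
        self.history = defaultdict(deque)  # key -> times of recently accepted messages

    def _within_limit(self, key, limit, now):
        window = self.history[key]
        while window and now - window[0] > self.interval:
            window.popleft()               # forget messages older than the interval
        return len(window) < limit

    def accept(self, msg_type):
        """Return True if the message may be processed, False if it should be dropped."""
        now = time.monotonic()
        if not self._within_limit("__all__", self.general_limit, now):
            return False
        type_limit = self.per_type_limits.get(msg_type)
        if type_limit is not None and not self._within_limit(msg_type, type_limit, now):
            return False
        self.history["__all__"].append(now)  # record the processing time for future checks
        self.history[msg_type].append(now)
        return True

# Example: at most 1000 messages/s in total, at most 50 PUT requests/s.
limiter = MessageRateLimiter(general_limit=1000, interval=1.0,
                             per_type_limits={"PUT": 50})
\end{verbatim}
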
If a node is overloaded (has too many incoming connections), at some point, it will start dropping the keepalive messages received from the requesting node, which will eventually cause the reference to be removed from the routing table, decreasing the overhead. \section{Privacy} \label{sec:privacy} The following sections discuss the mechanism used in \emph{HyCube} for ensuring a high level of privacy by allowing data encryption and preserving anonymity. \subsection{Data encryption} Data encryption in \emph{HyCube} may be carried out in two ways. In the simplest scenario, the message data may be encrypted on the application level. However, the mechanism of message processors (message send processors and received message processors) also allows registering separate application-specific modules that could automatically encrypt the message content before sending and decrypt the content before processing by subsequent message processors. The encryption may be symmetric or asymmetric, depending on the application (which should also define the methods for secure key exchange). \subsection{Anonymity} \label{sec:anonymity} \emph{HyCube} employs a technique ensuring anonymity that is similar to the one presented in \cite{freenet}, to provide anonymous request-response functionality, where neither the requestor nor the responding node is known to any other node that may access the messages being exchanged (even the requestor does not know to which node the request is eventually delivered, and the node receiving the request does not know which node is the actual requesting node). \subsubsection{Registered routes} Normally, when routing a message, nodes on the route find next hops and pass the message to them, without saving any information about the message. However, \emph{HyCube} also allows routing messages using registered routes, in which case every node routing the message saves information about the message: a reference to the node from which the message was received (or information that the node is the original sender of the message) and a reference to the node to which the message was routed. The information is stored by nodes for a certain time (defined by a system parameter). Additionally, every message is given a number, \emph{routeID}, which is modified by the nodes along the route. For every message routed, every routing node should also store the \emph{routeID} value received in the message, and the new value of \emph{routeID} assigned by the node. The purpose of storing the information about routes is to make it possible to send responses back along the same routes. Based on the \emph{routeID}, and the direct sender, every node along the path should be able to determine the node that originally delivered the message, and pass the response to this node, with the original value of the \emph{routeID}. To indicate whether the message should be routed via a registered route or whether the message is a response being routed back along the registered route, in addition to \emph{routeID}, nodes set the values of two flags in the message header: \emph{RegisterRoute} and \emph{RouteBack}. Based on these values, nodes are able to determine how the message should be routed. To retain the anonymity of the requesting node, every node along the route should replace the sender fields (message header) with its own node ID and network address.
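
The bookkeeping performed by a node routing a message along a registered route can be sketched as follows (Python, illustration only - the structure, the names and the 32-bit size of \emph{routeID} are assumptions of this example, not details of the \emph{HyCube} implementation):
\begin{verbatim}
import random
import time

class RegisteredRoutes:
    """Bookkeeping kept by a routing node for registered routes.

    For every forwarded message the node remembers where it came from and
    which routeID it carried, together with the new routeID assigned for
    the next hop, and discards the entry after 'retention' seconds.
    """

    def __init__(self, retention=300.0):
        self.retention = retention
        # outgoing routeID -> (time, previous node, incoming routeID, next node)
        self.entries = {}

    def register(self, previous_node, incoming_route_id, next_node):
        outgoing_route_id = random.getrandbits(32)   # new routeID for the next hop
        self.entries[outgoing_route_id] = (time.monotonic(), previous_node,
                                           incoming_route_id, next_node)
        return outgoing_route_id

    def route_back(self, outgoing_route_id):
        """Given the routeID of a response routed back, return where to send it."""
        entry = self.entries.get(outgoing_route_id)
        if entry is None or time.monotonic() - entry[0] > self.retention:
            return None                              # unknown or expired route
        _, previous_node, incoming_route_id, _ = entry
        return previous_node, incoming_route_id

# A message arrives from node A with routeID 17 and is forwarded to node B.
# The forwarding node also replaces the sender fields with its own ID/address.
routes = RegisteredRoutes()
new_id = routes.register(previous_node="A", incoming_route_id=17, next_node="B")
# When the response comes back from B carrying routeID == new_id:
assert routes.route_back(new_id) == ("A", 17)
\end{verbatim}
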
In such a case, any node receiving the message would only see the information about the direct sender of the message, making it impossible to determine whether the node sending the message is the original sender of the message, or just an intermediate node routing the message. The same procedure should be followed when routing back responses. Using the technique described above, it is possible to realize any request-response services in an anonymous way. For example, when inserting resources (PUT) into, or retrieving resources (GET) from, the distributed hash table, PUT/GET and PUT\_REPLY/GET\_REPLY messages would be routed in such a way that no node (including the node storing/returning the resource) would know the identity of the node that initiated the request, and no node (including the requestor) would know the identity of the node sending the response. However, to achieve such anonymity, one more modification should be made - nodes should ignore any message having their own ID specified as the recipient of the request message. Otherwise, routing the message to the node matching its recipient field would mean that the message would not be routed any further, and would suggest that the response is returned by that particular node. The decision regarding handling exact-match messages should be taken at the application level. Registered routes may also be used for routing application-level messages, in which case the node receiving the message would be able to retrieve the \emph{RegisterRoute} flag value and the \emph{routeID} from the message, and, despite not knowing the identity of the sending node, would be able to continue communicating with the sender. Setting the \emph{RegisterRoute} flag would also affect the way the message delivery acknowledgments (ACK messages) are sent. Normally, acknowledgments are sent directly to the sending node. If the flag \emph{RegisterRoute} is set to \emph{true}, the ACK message would be sent back to the sender along the registered route, setting the \emph{RouteBack} flag value to \emph{true} and setting the value of \emph{routeID} to the value received within the original message. For security reasons, \emph{HyCube} provides a possibility of blocking registered routes (by setting a system parameter value). Appropriate configuration should be chosen depending on the requirements of the application. \subsubsection{Hiding the hop count and time-to-live information} Because the message header contains the TTL field (number of hops after which the message is dropped), as well as the number of hops the message already passed, to retain anonymity, this information should be somehow hidden. Otherwise, the node receiving the message directly from the sender would be able to determine the sender of the request or the response. To prevent such situations, for the anonymous messages (\emph{AnonymousRoute} message header option set to \emph{true}), the number of hops should be set to the maximum possible value, which will make it impossible to determine the actual number of hops passed and will prevent consecutive nodes from increasing this value. \emph{HyCube} employs two different techniques for concealing the TTL message field (configurable). In the first scenario, any node routing the message decrements the TTL value only with a certain probability (normally, the TTL is decremented by each routing node). In the second scenario, every node along the route, after decrementing the TTL, adds a certain random number to the current TTL value.
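
Both techniques can be sketched as follows (Python, illustration only - the probability value, the distribution of the added value and its upper bound are arbitrary choices of this example, not values used by \emph{HyCube}):
\begin{verbatim}
import random

def conceal_ttl(ttl, mode="probabilistic", p_decrement=0.75,
                mean_extra=0.5, max_extra=2):
    """Update the TTL of an anonymously routed message at a single hop.

    "probabilistic": decrement the TTL only with probability p_decrement.
    "additive":      always decrement, then add a small random non-negative
                     value; its expected value must stay below 1 so that the
                     TTL still decreases on average and the expected number
                     of hops remains limited.
    """
    if mode == "probabilistic":
        if random.random() < p_decrement:
            ttl -= 1
    else:
        ttl -= 1
        extra = min(max_extra, round(random.expovariate(1.0 / mean_extra)))
        ttl += extra
    return max(ttl, 0)
\end{verbatim}
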
The distribution of this random number should, however, ensure a limited expected number of potential hops. Both approaches make it impossible for intermediate nodes to determine whether direct senders of received messages are the original senders, or routing nodes. \subsubsection{Steinhaus transform and anonymity} An important remark regarding the Steinhaus point (see Sections \ref{sec:steinhaus} and \ref{sec:varSteinhaus}) and the anonymity should be made. Let us consider an extreme situation, where a node routes (via a registered route) a message to a certain recipient ID (key) that is the furthest possible ID in the address space (in terms of the Euclidean metric). In such a case, when routing with the use of the Euclidean metric, any node receiving such a message (directly from the sender) would be certain that the direct sender is the original sender of the message. Otherwise, the message would have already been routed to a closer node. On the other hand, with the use of the Steinhaus transform, even the most distant node (Euclidean) may be closer to the destination than the previous node. This fact makes it impossible to prove, based on the sender ID field, whether the direct sender is the original sender of the message. However, when routing a message using a metric with the Steinhaus transform applied, the value of the Steinhaus point (message header) is set to the ID of the closest (Euclidean) point reached so far. In the case of a message routed between the two most distant nodes, initially, the Steinhaus point would be given the value of the sender ID, which would at the same time expose the original message sender. To overcome this problem, for anonymous messages (\emph{AnonymousRoute} header option set to \emph{true}), if the sending node determines that it is likely to be the most distant existing node to the recipient, the initial Steinhaus point value is given the value of the ID of the second best next hop found in the routing tables (using the original Steinhaus point). In such a case, the only node that could determine that the sending node is the original sender would be the node that was set as the new Steinhaus point (normally, the message should not be routed again to the Steinhaus point). However, the message would not be routed to this node, because the value of the Steinhaus distance using the new Steinhaus point value would be the largest among all neighborhood set nodes (and self). For any value of the Steinhaus point different from the initiating node's ID, it is difficult to determine the source of the message, because the message might have been routed from many possible nodes. The sending node $X$ determines whether it is likely to be one of the most distant nodes to the recipient $Y$ by checking the following condition: \begin{equation} \label{eq:steinhausAnonymityEnableCheck} \left|\{N \in NS : d(N, Y) \leq d(X,Y)\} \right| \geq |NS| \cdot \zeta \end{equation} \noindent where $NS$ is the sending node's neighborhood set, $d$ is the Euclidean distance function, and $\zeta \in (0; 1]$ is a system parameter. For values of $\zeta$ close to $1$, whenever the condition \ref{eq:steinhausAnonymityEnableCheck} is satisfied, almost all neighborhood set nodes are closer to $Y$ than $X$, which would normally happen only when $X$ is the most distant node to $Y$ (or one of the most distant nodes). The sensitivity of the check may be controlled by changing the value of $\zeta$.
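
The condition can be transcribed directly into code (Python, illustration only - node IDs are represented here as plain coordinate tuples compared with the ordinary Euclidean distance, ignoring the actual hierarchical hypercube ID representation):
\begin{verbatim}
import math

def likely_most_distant(x, neighborhood_set, y, zeta=0.9375):
    """Return True if node x should treat itself as one of the nodes most
    distant from the recipient y: at least a fraction zeta of its
    neighborhood set is at least as close to y as x itself."""
    d_xy = math.dist(x, y)
    closer = sum(1 for n in neighborhood_set if math.dist(n, y) <= d_xy)
    return closer >= len(neighborhood_set) * zeta

# Example with 4 neighbors in a toy 2-dimensional ID space:
x = (0.0, 0.0)
y = (10.0, 10.0)
neighbors = [(1.0, 1.0), (2.0, 0.0), (0.0, 2.0), (-1.0, -1.0)]
print(likely_most_distant(x, neighbors, y, zeta=0.75))   # True: 3 of 4 are closer
\end{verbatim}
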
For smaller values of $\zeta$, the Steinhaus point anonymity will be enabled for a larger number of distant nodes. However, the value should not be too small, to limit the number of false positives (in certain cases, the neighborhood set may not be uniformly distributed in terms of directions). The default value is $\zeta = 0.9375$ (for fully populated neighborhood sets, the condition \ref{eq:steinhausAnonymityEnableCheck} is satisfied for at least 15 closer nodes out of the total of 16 neighborhood set nodes). If the Steinhaus transform is used for calculating distances only when the prefix mismatch heuristic is already applied (Section \ref{sec:varSteinhaus}), the prefix mismatch heuristic should be switched on immediately if the condition \ref{eq:steinhausAnonymityEnableCheck} is satisfied. Having all nodes follow the same rule eliminates the possibility of detecting the original message sender based on the Steinhaus point value. \subsubsection{Anonymous routes} The registered routes mechanism may be used for the realization of anonymous request-response services or for routing data messages that may be processed by the application, allowing sending responses to the anonymous sender. However, in some situations, the response may not be needed, in which case storing the registered route information by individual nodes is not necessary. The flag \emph{AnonymousRoute} in the message header is used to determine whether the message is anonymous. When a message is sent with the \emph{AnonymousRoute} flag set to \emph{true}, every node along the route should replace the message's original sender node ID and network address with its own node ID and network address. Additionally, the option forces concealing the ``TTL'' and the ``Hop count'' header fields, and modifies the initial Steinhaus point if the sending node determines it is one of the most distant nodes to the recipient in the hierarchical hypercube. This option can be set to \emph{true} together with the \emph{RegisterRoute} or \emph{RouteBack} option, in which case the message will be routed along a registered route, and additionally, concealing the ``TTL'', ``Hop count'' and ``Steinhaus point'' fields will be forced. When an application-level message is sent using an anonymous route, and the route is not registered, the message delivery acknowledgment should not be sent to the sender, because the original sender is unknown, and the ACK message would not be delivered. Thus, if such an anonymous message is sent, the sender should not expect the delivery acknowledgment. Whenever an anonymous DHT request (PUT, REFRESH\_PUT, GET, DELETE) is received, the response should also be sent anonymously. However, GET requests sent anonymously, with the route not being registered, should be immediately dropped by any node receiving them. Because the original request sender is unknown, it would not be possible to return the result to the sender. For PUT, REFRESH\_PUT and DELETE requests sent anonymously without the route being registered, the requests should be processed, but no responses should be sent, as the original request senders are unknown. Every node routing an anonymous message updates the original message sender, which may cause messages sent by different nodes to be detected as duplicates. Thus, \emph{HyCube} allows disabling the message duplicate detection mechanism for anonymous messages (anonymous routes). For security reasons, \emph{HyCube} also provides a possibility of blocking the routing of anonymous messages (by setting a system parameter value).
Appropriate configuration should be chosen depending on the requirements of the application. \subsubsection{Anonymous resource replication} REPLICATE messages are normally sent directly to nodes that the resource is replicated to, which reveals the content of the storage of the node performing the replication. However, the nodes receiving the REPLICATE messages do not contact the sender directly to retrieve the resource, but get the resource by routing a regular GET message, possibly receiving the resource from another node. Thus, if the GET/GET\_REPLY messages are sent using anonymous registered routes, and the TTL is concealed, the actual transmission of resources is completely anonymous. Nevertheless, if retaining the storage content secret is required, depending on a system parameter value, the REPLICATE messages themselves, instead of being sent directly, may be routed anonymously to the destination node (anonymous routes). However, as the destination node is stored in the sending node's neighborhood set, normal next hop selection would send the message directly to the replication node anyway. Thus, the direct recipient of the REPLICATE message should be randomly selected among all nodes to which the replication information is being sent (or among all neighborhood set nodes). Such an approach would conceal the information about the actual node storing the resource (nodes would only be informed about the fact that the resources exist), and there would be no confirmation whether the node receiving the replication info eventually retrieved the resource or not. % ex: set tabstop=4 shiftwidth=4 softtabstop=4 noexpandtab fileformat=unix filetype=tex encoding=utf-8 fileencodings= fenc= spelllang=pl,en spell:
{ "alphanum_fraction": 0.8065588809, "avg_line_length": 159.5304878049, "ext": "tex", "hexsha": "377704a2349047bbb3e31b196c1c3e667037de9e", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-01-10T16:08:30.000Z", "max_forks_repo_forks_event_min_datetime": "2022-01-10T16:08:30.000Z", "max_forks_repo_head_hexsha": "e7dc0bc7ff5d7c1d406bfee952398515f3f6b6c8", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "suhasagg/hycube", "max_forks_repo_path": "src/documentation/protocol_doc_tex/tex/security.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "e7dc0bc7ff5d7c1d406bfee952398515f3f6b6c8", "max_issues_repo_issues_event_max_datetime": "2016-11-27T18:10:58.000Z", "max_issues_repo_issues_event_min_datetime": "2016-10-02T14:25:30.000Z", "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "suhasagg/hycube", "max_issues_repo_path": "src/documentation/protocol_doc_tex/tex/security.tex", "max_line_length": 1816, "max_stars_count": 3, "max_stars_repo_head_hexsha": "e7dc0bc7ff5d7c1d406bfee952398515f3f6b6c8", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "arturolszak/hycube", "max_stars_repo_path": "src/documentation/protocol_doc_tex/tex/security.tex", "max_stars_repo_stars_event_max_datetime": "2020-05-18T02:15:36.000Z", "max_stars_repo_stars_event_min_datetime": "2018-07-18T14:05:13.000Z", "num_tokens": 5256, "size": 26163 }
\PassOptionsToPackage{unicode=true}{hyperref} % options for packages loaded elsewhere \PassOptionsToPackage{hyphens}{url} \documentclass[11pt,ignorenonframetext,aspectratio=169]{beamer} \IfFileExists{pgfpages.sty}{\usepackage{pgfpages}}{} \setbeamertemplate{caption}[numbered] \setbeamertemplate{caption label separator}{: } \setbeamercolor{caption name}{fg=normal text.fg} \beamertemplatenavigationsymbolsempty \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \newif\ifbibliography \hypersetup{ pdftitle={Introduction to genetics}, pdfauthor={Deependra Dhakal}, pdfborder={0 0 0}, breaklinks=true} %\urlstyle{same} % Use monospace font for urls % Prevent slide breaks in the middle of a paragraph: \widowpenalties 1 10000 \raggedbottom \AtBeginPart{ \let\insertpartnumber\relax \let\partname\relax \frame{\partpage} } \AtBeginSection{ \ifbibliography \else \let\insertsectionnumber\relax \let\sectionname\relax \frame{\sectionpage} \fi } \AtBeginSubsection{ \let\insertsubsectionnumber\relax \let\subsectionname\relax \frame{\subsectionpage} } \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{0} \usepackage{setspace} \usepackage{wasysym} % \usepackage{fontenc} \usepackage{booktabs,siunitx} \usepackage{longtable} \usepackage{array} \usepackage{multirow} \usepackage{wrapfig} \usepackage{float} \usepackage{colortbl} \usepackage{pdflscape} \usepackage{tabu} \usepackage{threeparttable} \usepackage{threeparttablex} \usepackage[normalem]{ulem} \usepackage{makecell} \usepackage{xcolor} \usepackage{tikz} % required for image opacity change \usepackage[absolute,overlay]{textpos} % for text formatting \usepackage[skip=0.333\baselineskip]{caption} % \usepackage{newtxtext,newtxmath}% better than txfonts \sisetup{per-mode=symbol} % % Added by CII % \usepackage[format=hang,labelfont=bf,margin=0.5cm,justification=centering]{caption} % \captionsetup{font=small,width=0.9\linewidth,labelfont=small,textfont={small}} % % End of CII addition \usepackage{subcaption} % \newcommand{\subfloat}[2][need a sub-caption]{\subcaptionbox{#1}{#2}} \captionsetup[sub]{font=footnotesize,labelfont=footnotesize,textfont=footnotesize} % \captionsetup[subfigure]{font=small,labelfont=small,textfont=small} % \captionsetup[subfloat]{font=scriptsize,labelfont=scriptsize,textfont=scriptsize} % this font option is amenable for beamer, although these are global settings \setbeamerfont{caption}{size=\tiny} % \setbeamerfont{subcaption}{size=\tiny} % this does not chage subfloat fonts % \setbeamerfont{subfloat}{size=\tiny} % this does not change subfloat fonts % use single line spacing ? 
\singlespacing % use customize theme and colortheme % \usetheme[subsectionpage=progressbar,background=light]{metropolis} % \usetheme[]{Warsaw} % other options are: rose, Berlin % \usetheme[subsectionpage=progressbar,background=light]{metropolis} % it should be integrated to template itself by replacing preset theme \setbeamercolor{title}{fg=blue!85!black,bg=red!20!white} \setbeamertemplate{section in toc shaded}[default][50] % \usecolortheme{crane} \usecolortheme{seahorse} % beamer progress bar customization \usetikzlibrary{calc} \definecolor{pbblue}{HTML}{0A75A8}% filling color for the progress bar \definecolor{pbgray}{HTML}{575757}% background color for the progress bar \makeatletter \def\progressbar@progressbar{} % the progress bar \newcount\progressbar@tmpcounta% auxiliary counter \newcount\progressbar@tmpcountb% auxiliary counter \newdimen\progressbar@pbht %progressbar height \newdimen\progressbar@pbwd %progressbar width \newdimen\progressbar@tmpdim % auxiliary dimension \progressbar@pbwd=\linewidth \progressbar@pbht=1.0ex % the progress bar \def\progressbar@progressbar{% \progressbar@tmpcounta=\insertframenumber \progressbar@tmpcountb=\inserttotalframenumber \progressbar@tmpdim=\progressbar@pbwd \multiply\progressbar@tmpdim by \progressbar@tmpcounta \divide\progressbar@tmpdim by \progressbar@tmpcountb \begin{tikzpicture}[rounded corners=2pt,very thin] \shade[top color=pbgray!20,bottom color=pbgray!20,middle color=pbgray!50] (0pt, 0pt) rectangle ++ (\progressbar@pbwd, \progressbar@pbht); \shade[draw=pbblue,top color=pbblue!50,bottom color=pbblue!50,middle color=pbblue] % (0pt, 0pt) rectangle ++ (\progressbar@tmpdim, \progressbar@pbht); \draw[color=normal text.fg!50] (0pt, 0pt) rectangle (\progressbar@pbwd, \progressbar@pbht) node[pos=0.5,color=normal text.fg] {\textnormal{% \pgfmathparse{\insertframenumber*100/\inserttotalframenumber}% \pgfmathprintnumber[fixed,precision=2]{\pgfmathresult}\,\%% }% }; \end{tikzpicture}% } \addtobeamertemplate{footline}{} % modified from 'headline' by dd_rookie {% \begin{beamercolorbox}[wd=\paperwidth,ht=4ex,center,dp=1ex]{white}% \progressbar@progressbar% \end{beamercolorbox}% } \makeatother \title[]{Introduction to genetics} \subtitle{An overview} \author[ Deependra Dhakal ]{Deependra Dhakal} \institute[ ]{ Gokuleshwor Agriculture and Animal Science College\\ Tribhuwan University\\ \textit{[email protected]}\\ \url{https://rookie.rbind.io} } \date[ Academic year 2019-2020 ]{ Academic year 2019-2020 } \begin{document} % Hide progress bar and footline on titlepage \begin{frame}[plain] \titlepage \end{frame} \begin{frame} \frametitle{Outline} % modified by dd_rookie \tableofcontents[hideallsubsections,pausesections] % modified by dd_rookie \end{frame} \hypertarget{introduction}{% \section{Introduction}\label{introduction}} \begin{frame}{Background} \protect\hypertarget{background}{} \begin{itemize}[<+->] \tightlist \item Pivotal position in biology \item Issues of overnment regimes -- politics of masses during Hitler's Germany. \item Can we stay ignorant about genetic issues ? \item Human civilization hinged on cultivation of modern crops. Since 10,000 years in the past, we have been practicing selection. \item The major drive of the green revolution \item Don't we consume engineered drugs, use plants with modification ? \item Can traditional agriculture satisfy human needs in distant future ? \item Debates about -- is it safe ? 
\end{itemize} \end{frame} \begin{frame}{} \protect\hypertarget{section}{} \begin{columns}[T,onlytextwidth] \column{0.5\textwidth} \begin{figure} \includegraphics[width=0.38\linewidth]{../images/green_revolution_borlaug} \caption{Norman Borlaug, a leader in the development of new strains of wheat that led to the Green Revolution. Borlaug was awarded the Nobel Peace Prize in 1970.}\label{fig:green-revolution-borlaug} \end{figure} \column{0.5\textwidth} \begin{figure} \includegraphics[width=0.8\linewidth]{../images/green_revolution_crop} \caption{Modern, high-yielding rice plant (left) and traditional rice plant (right)}\label{fig:green-revolution-crop} \end{figure} \end{columns} \end{frame} \begin{frame}{Definition} \protect\hypertarget{definition}{} \begin{block}{Study of heredity} \protect\hypertarget{study-of-heredity}{} \begin{itemize} \tightlist \item Why cats always have kittens and humans always have babies ? \item Why do children resemble their parents ? \item Why two people are never same ? \end{itemize} \end{block} \begin{block}{Etymology} \protect\hypertarget{etymology}{} \begin{itemize} \tightlist \item ``gene'' \item At any level of study genes are central \item Genes have unique nature to perform their biological roles -- replication, information bearing/generation of form and mutation \end{itemize} \end{block} \end{frame} \begin{frame}{Subdivision} \protect\hypertarget{subdivision}{} \begin{figure} \includegraphics[width=0.45\linewidth]{../images/genetics_subdivision} \caption{Subdivision of genetics into three interrelated fields}\label{fig:genetics-subdivision} \end{figure} \end{frame} \hypertarget{the-big-picture}{% \section{The big picture}\label{the-big-picture}} \begin{frame}{Gregor Mendel} \protect\hypertarget{gregor-mendel}{} \begin{itemize} \tightlist \item Blending theory of inheritance \item Mendelian theory of inheritance \end{itemize} \end{frame} \begin{frame}{Mendel's Experiment} \protect\hypertarget{mendels-experiment}{} \begin{columns}[T,onlytextwidth] \column{0.5\textwidth} \begin{figure} \includegraphics[width=0.35\linewidth]{../images/mendels_experiment} \caption{The mating scheme for Mendel’s experiment involving the crossing of purple- and white- flowered varieties of pea plants. The purple and white circles signify the gene variants for purple vs. white flower color. Gametes carry one gene copy; the plants each carry two gene copies. 
The $ imes$ signifies a cross-pollination between the purple -- and white-flowered plants.}\label{fig:mendels-experiment} \end{figure} \column{0.5\textwidth} \begin{figure} \includegraphics[width=0.5\linewidth]{../images/mendels_pea} \caption{\textbf{The seven phenotypic pairs studied by Mendel; For each character Mendel studied two contrasting phenotypes} \newline }\label{fig:mendels-pea} \end{figure} \end{columns} \end{frame} \begin{frame}{The crossing} \protect\hypertarget{the-crossing}{} \begin{figure} \includegraphics[width=0.8\linewidth]{../images/crossing_selfing} \caption{Cross-pollination and selfing are two types of crosses}\label{fig:crossing-pea} \end{figure} \end{frame} \begin{frame}{Model organisms} \protect\hypertarget{model-organisms}{} \begin{columns}[T,onlytextwidth] \column{0.33\textwidth} \begin{figure} \includegraphics[width=0.8\linewidth]{../images/model_organisms_drosophila} \caption{\textit{Drosophila melanogaster}}\label{fig:model-organisms-drosophila} \end{figure} \column{0.33\textwidth} \begin{figure} \includegraphics[width=0.8\linewidth]{../images/model_organisms_coli} \caption{\textit{Escherichia coli}}\label{fig:model-organisms-ecoli} \end{figure} \column{0.33\textwidth} \begin{figure} \includegraphics[width=0.8\linewidth]{../images/model_organisms_elegans} \caption{\textit{Caenorhabditis elegans}}\label{fig:model-organisms-caenorhabditis} \end{figure} \end{columns} \end{frame} \begin{frame}{} \protect\hypertarget{section-1}{} \begin{columns}[T,onlytextwidth] \column{0.33\textwidth} \begin{figure} \includegraphics[width=0.8\linewidth]{../images/model_organisms_arabidopsis} \caption{\textit{Arabidopsis thaliana}}\label{fig:model-organisms-arabidopsis} \end{figure} \column{0.33\textwidth} \begin{figure} \includegraphics[width=0.8\linewidth]{../images/model_organisms_cerevisiae} \caption{\textit{Saccharomyces cerevisiae}}\label{fig:model-organisms-cerevisiae} \end{figure} \column{0.33\textwidth} \begin{figure} \includegraphics[width=0.8\linewidth]{../images/model_organisms_mus} \caption{\textit{Mus musculus}}\label{fig:model-organisms-mus} \end{figure} \end{columns} \end{frame} \begin{frame}{Fundamental concepts of genetics} \protect\hypertarget{fundamental-concepts-of-genetics}{} \begin{itemize}[<+->] \tightlist \item Cells are of two basic types: \alert{Eukaryotic} and \alert{prokaryotic} \item The gene is the fundamental unit of heredity \item Genes come in multiple forms called alleles \item Genes confer phenotypes \item Genetic information is carried in DNA and RNA \item Genes are located on chromosomes \item Chromosomes separate through the process of mitosis and meiosis \item Genetic information is transferred from DNA to RNA to protein \item Mutations are permanent, heritable changes in genetic information \item Some traits are affected by multiple factors \item Evolution is genetic change \end{itemize} \end{frame} \hypertarget{bibliography}{% \section{Bibliography}\label{bibliography}} \end{document}
{ "alphanum_fraction": 0.7505818464, "avg_line_length": 28.836689038, "ext": "tex", "hexsha": "4e710e5337f407dfc9c51a8086755914cbc3dbca", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e289df17f840f70979956756942be1bdeea54bd1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "DeependraD/courseware_plbgen", "max_forks_repo_path": "GEN211_introductory_genetics/theory/01-introduction_and_scope_of_genetics.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e289df17f840f70979956756942be1bdeea54bd1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "DeependraD/courseware_plbgen", "max_issues_repo_path": "GEN211_introductory_genetics/theory/01-introduction_and_scope_of_genetics.tex", "max_line_length": 479, "max_stars_count": 1, "max_stars_repo_head_hexsha": "e289df17f840f70979956756942be1bdeea54bd1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "DeependraD/courseware_plbgen", "max_stars_repo_path": "GEN211_introductory_genetics/theory/01-introduction_and_scope_of_genetics.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-14T06:42:45.000Z", "max_stars_repo_stars_event_min_datetime": "2021-08-14T06:42:45.000Z", "num_tokens": 3914, "size": 12890 }
\chapter{``Zihintpause'' Pause Hint, Version 1.0} \label{chap:zihintpause} The PAUSE instruction is a HINT that indicates the current hart's rate of instruction retirement should be temporarily reduced or paused. The duration of its effect must be bounded and may be zero. No architectural state is changed. \begin{commentary} Software can use the PAUSE instruction to reduce energy consumption while executing spin-wait code sequences. Multithreaded cores might temporarily relinquish execution resources to other harts when PAUSE is executed. It is recommended that a PAUSE instruction generally be included in the code sequence for a spin-wait loop. A future extension might add primitives similar to the x86 MONITOR/MWAIT instructions, which provide a more efficient mechanism to wait on writes to a specific memory location. However, these instructions would not supplant PAUSE. PAUSE is more appropriate when polling for non-memory events, when polling for multiple events, or when software does not know precisely what events it is polling for. The duration of a PAUSE instruction's effect may vary significantly within and among implementations. In typical implementations this duration should be much less than the time to perform a context switch, probably more on the rough order of an on-chip cache miss latency or a cacheless access to main memory. A series of PAUSE instructions can be used to create a cumulative delay loosely proportional to the number of PAUSE instructions. In spin-wait loops in portable code, however, only one PAUSE instruction should be used before re-evaluating loop conditions, else the hart might stall longer than optimal on some implementations, degrading system performance. \end{commentary} PAUSE is encoded as a FENCE instruction with {\em pred}=W, {\em succ}=0, {\em fm}=0, {\em rd}={\tt x0}, and {\em rs1}={\tt x0}. \begin{commentary} PAUSE is encoded as a hint within the FENCE opcode because some implementations are expected to deliberately stall the PAUSE instruction until outstanding memory transactions have completed. Because the successor set is null, however, PAUSE does not {\em mandate} any particular memory ordering---hence, it truly is a HINT. Like other FENCE instructions, PAUSE cannot be used within LR/SC sequences without voiding the forward-progress guarantee. The choice of a predecessor set of W is arbitrary, since the successor set is null. Other HINTs similar to PAUSE might be encoded with other predecessor sets. \end{commentary}
{ "alphanum_fraction": 0.8057982526, "avg_line_length": 47.5094339623, "ext": "tex", "hexsha": "fd652a26dbf4863cc6b6122ace120ec63a76834d", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-06-14T19:16:24.000Z", "max_forks_repo_forks_event_min_datetime": "2021-06-14T19:16:24.000Z", "max_forks_repo_head_hexsha": "799da44c52373dadbceb462314259deaa1da42db", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "hucan7/riscv-isa-manual", "max_forks_repo_path": "src/zihintpause.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "799da44c52373dadbceb462314259deaa1da42db", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "hucan7/riscv-isa-manual", "max_issues_repo_path": "src/zihintpause.tex", "max_line_length": 90, "max_stars_count": 1, "max_stars_repo_head_hexsha": "06f9ada051f3e120a9776ba451ee3c0c6f9ccd15", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "jeras/riscv-isa-manual", "max_stars_repo_path": "src/zihintpause.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-10T07:42:39.000Z", "max_stars_repo_stars_event_min_datetime": "2021-11-10T07:42:39.000Z", "num_tokens": 563, "size": 2518 }
\section{Transfer Learning} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/transfer_taxo} \caption{Transfer learning taxonomy.}\label{fig:taskonomy} \end{figure*} We follow the notations in the survey by~\cite{pan2009survey}. A \emph{domain} is a pair $\cD = (\cX,P(X))$, where $\cX$ is a feature space and $P(X)$ is a marginal probability distribution. For visual reasoning problems considered in this paper, $\cX$ will consist of purely visual inputs, i.e., either images or videos in some cases, or a combination of both visual inputs and questions in other cases. A \emph{task} is a pair $\cT= (\cY,f(\cdot))$, where $\cY$ is a label space and $f: \cX \to \cY$ is a prediction function. When the domain elements consist of both the question and the visual input, there is only one task, namely, to answer the question\footnote{% For the COG dataset, the answer is a tuple, one for each frame in the video, whereas for typical video answering datasets, only a single answer is needed for the entire video.}. % If the domain elements consist of just the visual inputs, then the task is defined by the question so that each question defines a separate task. \begin{definition}[\cite{pan2009survey}] \label{defn:transfer} Given a source domain $\cD_S$ and a source learning task $\cT_S$, a target domain $\cD_T$ and a target learning task $\cT_T$, transfer learning aims to help improve the learning of the target predictive function $f_T(\cdot)$ in $\cD_T$ using the knowledge in $\cD_S$ and $\cT_S$, where $\cD_S \ne \cD_T$, or $\cT_S \ne \cT_T$. \end{definition} In all our applications, $\cX_S = \cX_T$, so $\cD_S \ne \cD_T$ means that the marginal distributions $P_S$ and $P_T$ are different. Similarly, $\cT_S \ne \cT_T$ means that either $Y_S \ne Y_T$ or that the associated prediction functions are different. Although~\cref{defn:transfer} is quite general, it does not adequately capture all artifacts present in visual reasoning. For example, consider the transfer learning setting where the tasks $\cT_S$ and $\cT_T$ are the same but the marginal distributions $P_S$ and $P_T$ are different (referred to as \emph{domain adaptation}). As mentioned in the introduction, one setting is the case of static images, where this could be due to having different feature combinations in the source and target. A different setting is in the context of video reasoning where the number of frames can increase significantly going from source to target. These require possibly very different methods: the first involves building disentangled feature representations that can generalize across domains; the second might need external memory to remember relevant objects to generalize across frame lengths. Another situation is when the questions themselves can be grouped into families such as count-based queries, comparison of objects, or existence of objects with certain features etc. This entails studying transfer learning between families of tasks which requires extending the above definition. Broadly, we consider 3 kinds of transfer learning problems in this work, as illustrated in~\cref{fig:taskonomy}. Let $\cQ$ denote the set of questions and $\cV$ denote the set of visual inputs. \begin{description} \item[Feature Transfer:] In this setting of domain adaptation, $\cX_S = \cX_T \subseteq \cQ \times \cV$ and the task $f(q,v)$ is just the answer to the question $q$ on visual input $v$. The output set $\cY$ is the union of legitimate answers over all questions in $\cQ$. 
The marginal distributions $P_S$ and $P_T$ differ in the feature attributes such as shape, color, and size, or their combinations thereof. \item[Temporal Transfer:] This setting is similar to attribute adaptation in that $\cX_S = \cX_T \subseteq \cQ \times \cV$ and there is a single task. The key difference is that we introduce a notion of complexity $C(v) = (n, m)$ for a visual input $v$, where $n$ equals the maximum number of objects $n$ in an image, and $m$ equals the number of frames in a video. For any visual input $v_S$ coming from $\cX_S$ with $C(v_S) = (n_S, m_S)$ and for any visual input $v_T$ coming from $\cX_T$ with $C(v_T) = (n_T, m_T)$, we require that $n_T \ge n_S$ and $m_T \ge m_S$ with at least one inequality being a strict one. Thus, we necessarily increase the complexity of the visual input going from the source to the target domain. \item[Reasoning Transfer:] This setting requires an extension of~\cref{defn:transfer} above to investigate transfer learning when grouping questions into families. Let $\cV$ be the feature space consisting of visual inputs only, shared by all tasks, with a common marginal distribution $P(X)$. For each question $q \in \cQ$, we define the task $\cT_q = (\cY_q, f_q(\cdot))$ where the output set $\cY_q$ is the set of legitimate answers to $q$ and $f_q(v)$, for a visual input $v$, is the answer to question $q$ on visual input $v$. Thus, tasks are in a 1-1 correspondence with questions. A \emph{task family} is a probability distribution on tasks which in our case can be obtained by defining the distribution on $\cQ$. Given a task family, the goal is to learn a prediction function that gives an answer to $f_q(v)$ for $v \in \cV$ chosen according to the feature space distribution and $q$ chosen according to the task probability distribution. Suppose $\cF_S$ is the source task family and $\cF_T$ is the target task family. Transfer learning aims to help improve the learning of the predictive function for the target task family using the knowledge in the source task family. \end{description} If labeled data is available for $\cX_T$, a training algorithm distinction we make is between \emph{zero-shot learning} and \emph{finetuning}. Finetuning entails the use of labeled data in the target domain $\cD_T$, foreseeing performance gain on the target task $\cX_T$, after initial training on $\cX_S$ and additional training on $\cX_T$. Zero-shot learning thus refers to immediate test on $\cX_T$ after initial training on $\cX_S$.
{ "alphanum_fraction": 0.758473182, "avg_line_length": 77.9230769231, "ext": "tex", "hexsha": "52fdd1ae50732ee7676db8d47077537431570634", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-07-30T10:13:26.000Z", "max_forks_repo_forks_event_min_datetime": "2020-07-30T10:13:26.000Z", "max_forks_repo_head_hexsha": "4c5c503cb3976186d6eda4628f7c45914feba9fa", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Bhaskers-Blu-Org1/mi-visual-reasoning-pubs", "max_forks_repo_path": "Transfer_Learning/arxiv_version/submission_folder/v1/transfer_learning.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4c5c503cb3976186d6eda4628f7c45914feba9fa", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Bhaskers-Blu-Org1/mi-visual-reasoning-pubs", "max_issues_repo_path": "Transfer_Learning/arxiv_version/submission_folder/v1/transfer_learning.tex", "max_line_length": 436, "max_stars_count": null, "max_stars_repo_head_hexsha": "4c5c503cb3976186d6eda4628f7c45914feba9fa", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Bhaskers-Blu-Org1/mi-visual-reasoning-pubs", "max_stars_repo_path": "Transfer_Learning/arxiv_version/submission_folder/v1/transfer_learning.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1561, "size": 6078 }
\subsection{Intel Lab Data} \label{sec:intel-lab-data-evaluation} We also evaluated our outlier detection framework on sensor data from the publicly available Intel Lab Data set\footnote{\url{http://db.csail.mit.edu/labdata/labdata.html}}. The Intel Lab Data contains data collected from 54 sensors spread throughout the Intel Berkeley Research Lab. Each data entry contains information including temperature, humidity, light and voltage taken from a Mica2Dot sensor and weatherboard. The dataset contains a total of approximately 2.3 million measurements. The Intel lab dataset has known outliers from faulty sensor readings due to periods of critically low voltage. During these periods, the sensors go haywire and produce faulty measurements. For example, the temperature may be registered as over $120$ degrees Celsius, which is obviously abnormal behavior in a human environment such as where the sensors were deployed. We analyzed a sample of 1000 data points selected at random from the sensor data; due to the numerical nature of this data, the Simple Gaussian and Mixture models are better-suited to analyzing it than the Histogram model. We also compare the results of our models to Local Outlier Factors, a common outlier detection methodology, in this section. \subsubsection{Simple Gaussian Model} The results from running the sensor data set through the Simple Gaussian model are shown in Figure~\ref{fig:gauss15}. The data is plotted in light green, and the outliers are marked by dark red crosses. In this experiment we flag the entries with column values that fall outside $1.5$ standard deviations of the mean of that particular column as outliers. This model runs relatively fast, as no correlations are computed (\timing{.03}{0}{.12}). %As we observe in Figure~\ref{fig:gauss15}, this leads to the extreme values on the temperature spectrum as well as the humidity spectrum to be identified as outliers. %However, many points in the normal range of operation are flagged as outliers. %The cause of the noise can be seen in Figure~\ref{fig:sensors_gaus_1-5b}. %In this plot we see that the values of both voltage and light are relatively evenly distributed within their respective ranges. %Therefore many values lie outside $1.5$ standard deviations from the mean, despite having reasonable values in other dimensions such as temperature. %Using information from correlations is one way in which we improve on this, which we describe in the next section. \subsubsection{Mixture Model} We set the statistical threshold to $0.7$, which produces two correlations between temperature and humidity and between temperature and voltage. Figure~\ref{fig:gmm1t1} shows the results when using a single Gaussian component. Points flagged as outliers have a likelihood of less than $7.5\%$ of being produced by the Gaussian generated by the model (\timing {.03}{.34}{.73}). This model is able to detect values with high temperature and low voltage as outliers. %We show in Figure~\ref{fig:sensors_nocorr} the benefits of pruning the data via correlations before feeding it into the Gaussian model. %In this experiment, we allow the statistical analyzer to mark all columns as correlated. %The model produces a Gaussian with so much noise from the light data that it detects many points even within the normal operating range as outliers, even when the threshold is reduced to $0.5\%$. %Thus, using some mechanism to find correlations is useful in narrowing the search space for outliers. 
Figure~\ref{fig:gmm2t05} shows the results obtained using the Mixture model with two components, using the same 1000 randomly selected data points. Flagged values have a likelihood of less than $7.5\%$ under their dominant Gaussian (\timing {.03}{.35}{0.78}). When using two Gaussians, the points clustered around the temperature $120$ degrees Celsius are no longer detected as outliers: they are modeled by their own Gaussian (although this Gaussian's weight is smaller than its counterpart). This model highlights the points within normal sensor operation that have outlying results. \subsubsection{Local Outlier Factors} \label{sec:lof-evaluation} In this section we compare the results of our Gaussian and Mixture models to Local Outlier Factors (LOF)~\cite{Breunig2000}, a frequently used method for outlier detection. LOF measures the degree to which a data point is an outlier by comparing each data point's reachability to those of its $k$ nearest neighbors. The higher the LOF, the more isolated the data point relative to its local neighborhood and therefore the more likely the point is to be an outlier. One downside of LOF compared to \dBoost/ is that it can only evaluate two-dimensional data. The original algorithm also has significant computation complexity in order to calculate the distance to the nearest neighbors of each data point. One benefit of LOF, however, is that the algorithm returns a continuous value that indicates the degree to which a point is an outlier, as opposed to a binary value. Figure~\ref{fig:lofk2} shows the outliers detected by LOF when $k=2$. We observe that contrary to the Gaussian and Mixture models, the outliers detected by LOF are scattered throughout the data. The outliers are not necessarily the points one would intuitively assume are outliers. This is because points that are within the normal range of the data will be selected as outliers if they are far enough away from the other points nearest to them. %When the number of nearest neighbors evaluated is increased to $10$ in Figure~\ref{fig:lof_10}, the outliers detected are further outside the main cluster of points. We find that LOF is not as useful at pointing out the tagged outliers in the sensor data set.
{ "alphanum_fraction": 0.8018785876, "avg_line_length": 97.4406779661, "ext": "tex", "hexsha": "3879ea732a47373ea1b309fb1e81a43809cd9f3f", "lang": "TeX", "max_forks_count": 16, "max_forks_repo_forks_event_max_datetime": "2022-02-28T06:42:36.000Z", "max_forks_repo_forks_event_min_datetime": "2019-04-21T12:28:33.000Z", "max_forks_repo_head_hexsha": "027ebeaf0ac4b524dc49df94e7bbc7be4391213d", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "adrianlut/raha", "max_forks_repo_path": "raha/tools/dBoost/paper/icde/intel-lab-data-evaluation.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "027ebeaf0ac4b524dc49df94e7bbc7be4391213d", "max_issues_repo_issues_event_max_datetime": "2021-01-10T12:59:43.000Z", "max_issues_repo_issues_event_min_datetime": "2021-01-10T12:59:43.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "adrianlut/raha", "max_issues_repo_path": "raha/tools/dBoost/paper/icde/intel-lab-data-evaluation.tex", "max_line_length": 490, "max_stars_count": 30, "max_stars_repo_head_hexsha": "027ebeaf0ac4b524dc49df94e7bbc7be4391213d", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "adrianlut/raha", "max_stars_repo_path": "raha/tools/dBoost/paper/icde/intel-lab-data-evaluation.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-07T07:44:58.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-05T12:03:45.000Z", "num_tokens": 1221, "size": 5749 }
%!TEX root = ../article.tex \section{Background} \label{sec:background} In this section we will provide an overview of the relevant related work to solve the challenges of converging the Internet of Things and Utility Computing. The most relevant concepts necessary to understand our proposal will be described throughout this section. % Cloud computing concepts and tools \subsection{Cloud computing concepts and tools} \label{sub:cloud_concepts_tools} % Smart Place Deployment \subsubsection{Smart Place Computing} \label{sub:smart_place_computing} Deployment of smart places usually was performed in a physical and isolated manner. By leveraging the infrastructure of smart places to the cloud, new provisioning approaches have to be adopted. Currently, cloud service delivery models are being developed based on the existing layers of the cloud architecture \cite{zhang2010cloud}: \textit{IaaS}, \textit{PaaS} and \textit{SaaS}. % Distefano Distefano et al. \cite{distefano2012enabling} proposed a conceptual architecture by mapping various elements in both cloud and IoT to the three layers of the cloud architecture (\gls{IaaS}, \gls{PaaS} and \gls{SaaS}). In this proposal IoT resources are provided voluntarily by their owners, while management functions - such as node management and policy enforcement - are viewed as peer functions of cloud infrastructure management. A \gls{PaaS} module is responsible to mashup IoT and cloud infrastructure (\gls{IaaS}) resources for applications, which are delivered to the clients through \gls{SaaS}. % CloudThings CloudThings \cite{zhou2013cloudthings} is an architecture that uses the cloud platform layers to integrate Internet of Things and Cloud Computing. The proposed architecture is an online platform which accommodates \gls{IaaS}, \gls{PaaS}, \gls{SaaS} and allows system integrators and solution providers to leverage the complete application infrastructure for developing, operating and composing applications and services. % IoT PaaS Li et. al \cite{li2013efficient} proposed IoT PaaS, a cloud platform that supports scalable IoT service delivery. Solution providers are able to deliver new solutions by leveraging computing resources and platform services - domain mediation, application context management, etc. - to the cloud. The proposed architecture aims to enable virtual vertical service delivery, for that it has a multi-tenant nature which is designed to help at the isolation of the environments of different solutions. Although a significant progress was achieved regarding the improvement for the deployment of \gls{IoT} solutions, most of the work still are in a conceptual stage. What is certain is that cloud service delivery models will be the basis for the service delivery models of \gls{IoT} solutions. % Configuration Management Tools \subsubsection{Configuration Management Tools} \label{subs:cm_tools} \gls{CM} tools are software management tools that allows to automate and specify the deployment of an application. Usually, users describe the system resources and their desired state and the \gls{CM} tool is responsible for enforcing the desired state. For instance, \gls{CM} tools allows the automation of the provisioning of physical and virtual machines, perform dependency management of software components and to perform the automation of management tasks. 
Currently, there are several solutions to perform configuration management of software, where the most relevant are Chef\footnote{\url{https://www.chef.io/}}, Puppet\footnote{\url{https://puppetlabs.com/}}, Ansible\footnote{\url{http://www.ansible.com/}} and Salt\footnote{\url{http://saltstack.com/}}. % Fog Computing \subsection{Fog computing for low latency responses} \label{sub:fog_computing} The Fog Computing \cite{bonomi2012fog} is a paradigm that aims to bring the cloud close to the ``edge of the Network''. By bringing the cloud close to the ground - hence the fog analogy - the Fog will be able to meet the requirements of several applications that the traditional clouds are not able to accomplish. The most notable case is the Internet of Things, that requires mobility support, geo-distribution in addition to location awareness and low latency. The Fog aims to achieve that by virtualizing the computing, storage and network services between end devices and the traditional data centers in the cloud. % Fog Computing Infrastructure \begin{figure}[ht!] \centering \includegraphics[width=.5\textwidth]{./figures/fog_architecture} \caption[IoT and Fog Computing.]{The Internet of Things and Fog Computing (Bonomi et. al (2012)).} \label{fig:fog_architecture} \end{figure} Bonomi et. al \cite{bonomi2012fog} presents the architecture of Fog Computing platform. As illustrated in Figure~\cite{bonomi2012fog} the distributed infrastructure of the Fog comprising of several players, covering from data centers, core of the network, edge of the network and end devices. \textit{Multi-Service Edge} is the lowest tier of the Fog and it is responsible for performing \gls{M2M} interactions. It collects and process the data from the \textit{Embedded Systems and Sensors} tier, issues commands to the actuators and also filters the data that is locally consumed and sent to the higher tiers. \textit{Core Networking and Services} tier is responsible for providing network services that are used to exchange data between sub-networks. This tier also provides security services as well \textit{QoS} and multicast services for the applications. Since the interaction time between the different tiers can range from seconds - e.g. low-latency real-time analytics - to days - transactional analytics - the Fog must support several types of storage, from ephemeral storage at the lowest tiers to semi-permanent at the highest tier. As higher is the tier, geographical coverage and the time scale increases \cite{bonomi2014fog}. The global coverage is given by the Cloud, which acts as a central repository for the persistent data and that is used to perform business analytics. % Fosstrak \subsection{Fosstrak Platform} \label{sub:fosstrak} The Free and Open Source Software for Track and Trace (Fosstrak) is an EPCglobal Network compliant \gls{RFID} software platform that was developed by Floerkemeier et. al \cite{floerkemeier2007rfid}. Figure~\ref{fig:fosstrak_architecture} presents the architecture of the Fosstrak platform. % Fosstrak Architecture \begin{figure}[ht!] \centering \includegraphics[width=.45\textwidth]{./figures/fosstrak_architecture} \caption[Fosstrak architecture.]{Fosstrak architecture.} \label{fig:fosstrak_architecture} \end{figure} The Fosstrak platform is composed by the following modules: \textit{a) Filtering \& Collection Server} is the module responsible for filtering and collecting data from \gls{RFID} readers. 
The module uses the \gls{LLRP} and \gls{HAL} interfaces to communicate with readers; \textit{b) Capturing Application} is the module responsible for transforming uninterpreted events into meaningful business events; and \textit{c) EPCIS Repository} is the module that provides persistence for \gls{EPCIS} events.
{ "alphanum_fraction": 0.8077892967, "avg_line_length": 62.6140350877, "ext": "tex", "hexsha": "57ec362524a2f75e242073e68d0d964fe117cd77", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2020-11-04T05:02:40.000Z", "max_forks_repo_forks_event_min_datetime": "2016-02-28T17:02:46.000Z", "max_forks_repo_head_hexsha": "781510de80f4916b12446d92f30f3d2bd58d0311", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mvpgomes/cloud4things", "max_forks_repo_path": "docs/extendend-abstract/sections/background.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "781510de80f4916b12446d92f30f3d2bd58d0311", "max_issues_repo_issues_event_max_datetime": "2020-06-08T18:59:03.000Z", "max_issues_repo_issues_event_min_datetime": "2020-05-05T19:08:48.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mvpgomes/cloud4things", "max_issues_repo_path": "docs/extendend-abstract/sections/background.tex", "max_line_length": 119, "max_stars_count": 2, "max_stars_repo_head_hexsha": "781510de80f4916b12446d92f30f3d2bd58d0311", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mvpgomes/cloud4things", "max_stars_repo_path": "docs/extendend-abstract/sections/background.tex", "max_stars_repo_stars_event_max_datetime": "2020-04-24T16:43:14.000Z", "max_stars_repo_stars_event_min_datetime": "2017-12-15T12:48:43.000Z", "num_tokens": 1654, "size": 7138 }
\documentclass[12pt]{amsart} % PACKAGES \usepackage{url} \usepackage{amsmath} \usepackage{amsthm} \usepackage{amssymb} \usepackage{fancyvrb} % for underscores https://texfaq.org/FAQ-underscore \usepackage{lmodern} \usepackage[T1]{fontenc} \usepackage{textcomp} \usepackage[ bookmarksopen, bookmarksdepth=2, %breaklinks=true colorlinks=true, urlcolor=blue]{hyperref} % GLOBAL FORMATTING %\linenumbers \parindent=0pt \parskip=0.5\baselineskip \raggedbottom % TITLE AUTHOR DATE \title{Notes on translating \TeX\ to a controlled natural language} \date{July 17, 2019} % Activate to display a given date or no date \author{Thomas Hales} % THEOREMS \newtheorem{definition}{Definition} \newtheorem{theorem}[definition]{Theorem} \newtheorem{lemma}[definition]{Lemma} \newtheorem{specification}[definition]{Specification} % COMMANDS \renewcommand\labelitemi{-} % ENVIRONMENTS % \leavevmode\par is to make remark work when it is the first item in a subsection. \newenvironment{remark} {\leavevmode\par\begin{tabular}{|p{13cm}}\parskip=\baselineskip{\bf Remark.}} {\end{tabular}} \newenvironment{oblongo}{}{} \newenvironment{prule}% {\begin{itemize}}% {\end{itemize}} \newcommand{\ptem}{\item} \newcommand{\nt}[1]{{\tt #1}} \newcommand{\rw}{$\quad\to\quad$} % DOCUMENT \begin{document} \maketitle \section{\TeX-CNL translation specification} The notes here add detail to the \href{https://github.com/formalabstracts/CNL-CIC/wiki/Conversion-from-LaTeX}{CNL-CIC Wiki}. A \TeX\ file written in CNL-enabled \TeX\ can be processed by the \TeX\ engine or by the CNL engine. %A boolean flag {\tt isPresentation} in the \TeX\ source determines %which mode is used for processing. When we talk about the pdf display, we mean the pdf output from \TeX. The CNL image means the output of the \TeX\ to CNL translation. The CNL image is a text file written in the CNL grammar. \subsection{Differences in CNL-enabled files} There are some visible differences between a typical \TeX\ source and a CNL-enabled one. \subsection{Variables and Identifiers} Identifiers can appear with the underscore character. They must be processed specially. We use the fancy verbatim mode and surround identifiers with ! as in \verb'!identifier_with_underscore!' . Subscripts written without braces are interpreted as part of the variable or identifier name. Subscripts written with braces are interpreted in CNL as function applications. Thus, \verb!x_1! is a variable and \verb!x_{1}! is the function $x$ applied to $1$. This works with multiple indices $a_{i j}$ (with a space between the $i$ and the $j$), is translated to CNL \verb!a \sb (i j)! The function application gets suppressed between the $i$ and the $j$, so that this is equivalent to \verb!a i j!. To activate function application in the subscript, wrap the subscripts in an extra layer of parentheses \verb!a_{(f x)}! or \verb!a_{\parenI{f x}}!. The \verb!\parenI! control sequence is invisible in the pdf display, but expands to parentheses in the CNL. \subsection{Words} Words are generally formed by consecutive alphabetic characters. When separate words are joined by \verb!\-!, \verb!\~!, or \verb!\'! they appear as a dash, space, and apostrophe in the pdf, but they are removed in the CNL file. For example, \verb!Riemann\-Roch! becomes \verb!RiemannRoch! in the CNL, \verb!Wiener\'s\~theorem! becomes \verb!Wienerstheorem!, \verb!totally\~bounded! becomes \verb!totallybounded!. Alphabetic phrases enclosed in \verb!{~ }! become a single word in the CNL. For example, \verb!{~generalized Bernoulli shift}! 
becomes \verb!generalizedBernoullishift! in CNL, and \verb!{~Sturm\-Liouville operators}! becomes \verb!SturmLiouvilleoperators!. The sequences \verb!\~! and \verb!\'! are no longer available as accents. Sometimes, one must choose whether to treat an adjective+noun as a term with a left attribute, as in \verb!commutative ring! or as a multi-word noun, as in \verb!normed\~ring!. In this case, commutative is made into an attribute because it is a predicate on ring. However, `normed` adds data (the norm) to the ring, and it is treated as a multi-word noun. \subsubsection{labels} Labels are converted to valid CNL identifiers by replacing whitespace and non-alphanumeric characters to underscores. \subsection{Special characters} \subsubsection{superscripts} \begin{itemize} \item The use of the superscript mark \verb!^! should be restricted to places where it acts as an infix (right associative) binary operator for raising the term to a power. \item When the superscript mark is used in connection with subscripts as function call, the interpretation of \verb!f_i^n! is the $n$th power of $f_i$, that is, $(f_i)^n$. Thus, this should be written in this order rather than \verb!f^n_i! (which, would suggest the $n$th iterate of $f$ applied to $i$). \item the superscript character is treated like every other math symbol. \item macros should be used when the meaning of a superscript is not a power. \end{itemize} \subsubsection{invisible characters} Invisible characters are generally given by macros (with convention that their names end with $I$). For example \verb!\mulI! is the invisible multiplication operator. \verb!x\mulI y! has pdf display $x y$, but the CNL image preserves the binary operation \verb!x\mulI y!. \subsubsection{math fonts} We treat variables in different fonts as different variables. We achieve this by fusing the font into the variable name. That is, when processing a source file containing \verb!\mathcal{C}!, it is not tokenized as a control sequence \verb!\mathcal! followed by a begin group \verb!{!, etc. Rather, it is tokenized as a single fused variable/identifier token \verb!\mathcal{C}!. This is done at the lexical stage as follows. Anything in the source file fitting the regular expression pattern \begin{Verbatim} %sedlex.regexp? "\\math", Star(alphabet), lbrace, alphabet, Star(alphanum), rbrace, Star(alphanum) \end{Verbatim} is treated as a single token. This means the literal characters \verb!\math! followed by 0 or more alphabetic characters \verb!a-zA-Z!, followed by a \verb!{!, another letter of the alphabet, 0 or more alphanumerics (the alphabet including \verb!_'0123456789!), \verb!}!, and then terminated by 0 or more alphanumerics. Thus, \verb!\mathcal{C}_1! and \verb!\mathfrak{g}'! match this regular expression. No expansion is made of the control sequence starting with \verb!\math...!. No white space is allowed. It will not generally work to use a macro that expands to something matching this regular expression. Macro expansion takes place after the lexical stage is complete. In particular, this pattern allows variable disambiguation based on the following math fonts (starting with \verb!\math...!): \verb!\mathrm!, \verb!\mathit!, \verb!\mathnormal!, \verb!\mathcal!, \verb!\mathscr!, \verb!\mathpzc!, \verb!\mathbb!, \verb!\mathbbm!, \verb!\mathbbmss!, \verb!\mathbbmtt!, \verb!\mathds!, \verb!\mathfrak!. Accents can be handled in the same way. For example, a \verb!\mathtilde! macro can be defined \verb!\def\mathtilde{\tilde}! 
which in the CNL image will expand in the same fashion as a math font. %https://tex.stackexchange.com/questions/58098/what-are-all-the-font-styles-i-can-use-in-math-mode \subsection{control sequences} \verb!\CNLcustom[k]\controlseq{pattern text}! When translating \TeX\ to CNL use the pattern text to translate the control sequence \verb!\controlseq! The integer $k$, which is optional, gives the number of braced arguments to read for expansion in the pattern text, after \verb!\controlseq! For example, \verb!\CNLcustom[1]\parenI{ (#1) }! states that \verb!\parenI{X}! in the \TeX\ should translated to $(X)$ in the CNL. For example, \verb!\CNLcustom\oplus{ \vplus }! A \verb!\noexpand! in the pattern text prevents the macro from being expanded, but the \verb!\noexpand! is not transcribed into the CNL image. \verb!\CNLcustom\X{\noexpand\X}!. Here \verb!\X! in the source becomes \verb!\X! in the image. \verb!\CNLnoexpand[k]\controlseq! is equivalent to \begin{Verbatim} \CNLcustom[k]\controlseq{\noexpand\controlseq} \end{Verbatim} \verb!\CNLdelete[k]\controlseq! When translating to CNL, delete the control sequence \verb!\controlseq! together with $k$ curly braced arguments. This is equivalent to giving empty expansion text \verb!\CNLcustom[k]\controlseq{}! \verb!\CNLexpand[k]\controlseq! expands the macro with $k$ arguments, but using the macro definition rather than \verb!\CNLcustom! pattern text. \subsubsection{lists and curried functions} Unlike in the source, in the CNL functions are generally curried. In the CNL, lists are generally demarcated with square brackets and semicolons. \verb!\app{f}{x1,....,xN}! (which displays as $f(x1,...,xN)$) becomes curried in CNL \verb!((f) (x1) (x2) ... (xN))! \verb!\list{x1,...,xN}! (which displays as $x1,\ldots,xN$) becomes a list in the CNL image \verb![x1;...;xN]! (Here the $x_i$s must have the same type.) For \TeX\ display, the list can be wrapped in delimiters to form tuples, set enumerations, etc. For example, \begin{Verbatim} \def\tuple#1{(#1)} \def\braced#1{\{#1\}} \end{Verbatim} For the CNL image, we may generate lists, set enumerations, etc. \begin{Verbatim}[fontfamily=tt,showspaces=false] \CNLcustom[1]\tuple{\list{#1}} \tuple{1,2,3} % CNLimage: [1;2;3] % source expansion {1,2,3} \CNLcustom[1]\braced{\setenum{\list{#1}}} \braced{1,2,3} % CNLimage: \setenum{[1;2;3]} % source expansion: \{1,2,3\} \end{Verbatim} \subsubsection{stripped characters} In translating to the CNL image, many formatting characters are removed from the \TeX\ source files. This include instructions to change among different \TeX\ modes: \verb!$ \( \) \begin{math} \end{math}!, \verb!\[ \] $$ \begin{displaymath} \end{displaymath}!. Rules, glues, and space \verb!\ \, \medskip, \quad!, etc. \subsubsection{curly braces} Some curly braces are stripped. Some are converted to parentheses. Here is the rule. Curly braces marking a subscript are replaced by parentheses. Curly braces used to demarcate control sequence arguments are stripped. \subsubsection{ambiguous notations} Lean elaboration does substantial work to resolve notational ambiguities. Some of these ambiguities may be resolved directly by the author in the \TeX\ source files, with notations that appear identical in the pdf display. For example, \begin{itemize} \item \verb!\closedInterval{a}{b}! -\qquad $[a,b]$ \item \verb!\LieCommutator{a}{b}! -\qquad $[a,b]$ \item \verb!\groupCommutator{a}{b}! -\qquad $[a,b]$ \item \verb!\image{f}{S}! -\qquad $f(S)$ \item \verb!f(S)! -\qquad $f(S)$ \item \verb!\fieldDegree{K}{F}! 
-\qquad $[K:F]$ \item \verb!\subgroupIndex{G}{H}! -\qquad $[G:H]$ \end{itemize} \subsubsection{foreign accents} In material originating in \TeX horizontal mode, the CNL image is stripped of many of the foreign accents. \begin{itemize} \item G\"odel - Godel \item \'etale - etale \end{itemize} The \verb!\~! has been redefined to be a space (that disappears in the CNL file). \subsubsection{control sequences as variables} We wish for it to be possible to use some control sequences (and more general token lists) as variables: \verb!\alpha!, etc. We use special control sequences \verb!\var! and \verb!\id! for this. They have no effect on the \TeX: \verb!\def\var#1{#1}! \verb!\def\id#1{#1}! \verb!\var{\alpha_1} --> V__alpha_1!. In the CNL, \verb!V__! is always prepended. Nonalphanumeric characters are deleted. \verb!\id{\alpha_1} --> id_alpha_1!. In the CNL, \verb!id_! is always prepended. Nonalphanumeric characters are deleted. It is up to the user to avoid name collisions in the CNL image. No macros are expanded inside \verb!\id! and \verb!\var! That is, these macros freeze what is in their replacement patterns. For example, \verb!\var{\mathcal{C}} --> V__mathcalC! But in the other direction, other macros can have replacement patterns that expand to \verb!\id! and \verb!\var!: \verb!\def\myalph#1{\var{\alpha_{#1}}}!. \verb!\myalpha{33} --> V__alpha_33!. \section{environments} All text is entirely ignored outside the \verb!\begin{cnl}...\end{cnl}! environment. Environments should be properly nested. In particular, they should be properly nested and terminated within the cnl environment. \verb!\input! should not be enclosed within any environment, except possibly a \verb!\begin{cnl}! environment. The input file is read iff the input statement is enclosed in the cnl environment. In CNL mode, there is a list of ignored environments, such as the \verb!remark! environment. No macro expansion is performed in the ignored environments. However, bracketing must be properly nested. In CNL mode, there is a list of translated environments, with custom translation for each of them, such as the \verb!definition! and \verb!theorem! environments. \subsection{specific environments} Here we describe what happens to the \TeX sources, upon translation to CNL-image. % https://latex.wikia.org/wiki/List_of_LaTeX_environments % http://www.personal.ceu.hu/tex/environ.htm \subsubsection{definition environment} Theorems, Definitions, etc. can carry labels. In fact, labels are encouraged. The labels should be valid atomic identifiers. In the CNL image, these labels become CNL labels. Theorems, etc. can also carry optional names \verb!\begin{theorem}[optional_name]\label{optional_name}! When a name is given, the label should also appear, and the two should be the same. \subsubsection{paragraph environments} \begin{description} \item [center] Environment begin/end stripped, leaving enclosed text. \item [flushleft, flushright, minipage, quotation, quote, verse] begin/end as well as formatting options are stripped, leaving enclosed text. \item [figure, picture, remark, thebibliography, titlepage] Environment begin/end and enclosed text stripped. \item [tabbing] begin/end stripped as well as tabbing characters \verb!\= \> \+ \-! \item [tabular] not supported. \end{description} \subsubsection{equation alignment environments} \begin{description} \item [array] The begin/end and column format instructions are stripped. The array is converted to a matrix (list of lists) in the CNL image. 
\item [align, eqnarray, gather] begin/end and \verb!&! stripped. A list is created with separator \verb!\\! translated to separator \verb!;! \item [equation,equation*] begin/end and \verb!&! stripped. Translation to CNL-Equation. Any label is translated. \item [multline, split] begin/end and \verb!& \\! stripped. \item [flalign, flalign*] Not supported. \end{description} See also gathered, split, aligned alignedat, alignat* \subsubsection{matrix environments} \begin{description} \item [matrix, pmatrix, bmatrix, Bmatrix, vmatrix, Vmatrix, smallmatrix] begin/end is stripped. The data is converted to a list of lists. The list of lists is given as an argument to a control sequence with the same name as the environment \verb!\pmatrix{[[0;1];[2;3]]}! \item [cases] Not supported. \end{description} \subsection{itemize environment} This includes itemize, enumerate, description. Many objects can be itemized in the CNL. We create custom itemize environments for them. This includes special \TeX\ support for match, function matching, cases, inductive and mutual inductive type declarations, structures, etc. These custom itemize environments are translated to the appropriate language constructs in the CNL image. \section{naming conventions for macros} The successful implementation of the the \TeX\ to CNL translation will rely on a very large number of macros. In an ordinary \TeX\ document, most macro names are largely invisible to everyone except the author. However, in our system, macro names become part of a CNL document and part of the formal specification of a mathematical definition. Therefore, increased care is called for in choosing macro names. We suggest some naming conventions. We recommend \href{https://hilton.org.uk/presentations/naming-guidelines}{naming-conventions}. Several of our rules have been adapted directly from that source. Follow \LaTeX\ \href{https://tex.stackexchange.com/questions/48195/macro-naming-best-practice}{capitalization conventions}. Most document macros are all lower case. (Common exceptions are upper case alphabets \verb!\Delta! \verb!\AA! (\AA), proper names, and variants of a lower case macro such as \verb!\rightarrow! and \verb!\Rightarrow!.) Package interface commands are CamelCase, and internal commands contain the symbol \verb!@!. We will be tolerant of allowing camelCase (starting with lowercase) in document-level macros. When using camelCase, upper-case characters should always mark the beginning of a word in the name. Where \LaTeX\ has established a precedent for naming macros, follow that. The order of words inside a name should follow the order of hierarchical identifiers for namespaces. The real absolute value would probably be found in the real numbers namespace and the absolute value on integers would probably be found in the integer namespace. As hierarchical identifiers for namespaces, we would have \verb!real.abs! and \verb!int.abs!. Thus, \verb!\realabs! and \verb!\intabs! should be the macro names (rather than say, \verb!\absreal!). Complete correctly-spelled words are better than abbreviations. Abbreviations should be dictionary-accepted abbreviations, or widely recognized math abbreviations (such as int for integer, nat for natural number, bool for boolean). It is expected that editor tools for word completion will be used, and the user has a better chance of guessing part of the macro name if the full word is used. Distinct meanings call for distinct names. The names should reflect the semantics. \begin{tabular}{l l l} \verb!\QuaternionRing! 
for ${\mathbb H}$ \\ \verb!\Diracdelta! for $\delta$ \\ \verb!\Kroneckerdelta! for $\delta$ \\ \verb!\Feigenbaumdelta! for $\delta$ \end{tabular} Following entropy, frequently used macros may have shortened names. Follow Knuth's \TeX\ style in this regard. \begin{tabular}{ l l l } common & \verb!\le! \verb!\it! \verb!\ne! \verb!\min! \\ uncommon & \verb!\displaywidowpenalty! \verb!\finalhyphendemerits! \end{tabular} Limit name length to at most 20 characters, and limit the word count in the name to 4. Generally avoid plural nouns in names. Name qualifiers should be suffixes (not prefixes). For example, \verb!\sumBig! and \verb!\sumSmall! Do not redefine existing macros. Make the macro vocabulary large and mathematically precise. Let entropy, the theory of error correcting codes, and searchability be guiding principles in macro naming. That is, macro names should be easy to find by search tools, and it should be difficult to mistake one macro for another. A mathematician who knows the mathematical name of something should be able to find the relevant macro by keyword search. Different macros should differ in more than one or two letters. \LaTeX\ violates this in places, such as \verb!\ldots! and \verb!\cdots!. But avoid creating further examples like this. Different macros should differ in more than word order. We make an exception when the ordering aligns with the semantics. \begin{tabular}{lll} incorrect: &\verb!\sumfromto{a}{b}{f}!\\ &\verb!\sumtofrom{a}{b}{f}!\\ permissible: &\verb!\realintervalopenclosed{a}{b}! for $(a,b]$\\ &\verb!\realintervalclosedopen{a}{b}! for $[a,b)$. \end{tabular} Users can always define their own abbreviated macro names for local use. Those will not become part of the shared libraries. \section{outstanding issues} \subsubsection{curly braces and font selection} We generally wish to strip font variations \verb!{\it such as italics}! in text mode. Note that this doesn't fit the usual pattern of control sequence applied to arguments. \subsubsection{ellipsis} There should be a general construction \begin{equation}\label{ellipsis} \verb!I \op \dots \op J! \end{equation} that strips $I$ and $J$ to the distinct core (taking the two token sequences and successively removing identical tokens from both ends), which should be two integer expressions $i$ and $j$. So that $I$ is the $\beta$-reduction of $K\ i$ and $J$ is the $\beta$-reduction of $K\ j$, for some term $K$. $K$ is the token sequence \verb!(\LAM v, sI)! where $v$ is a fresh variable and $sI$ is the token sequence obtained by replacing the distinct core of $I$ with $\verb!v!$. Then \eqref{ellipsis} represents (for associative operation \verb!\op!). \[ \verb! (H\ i) \op (H\ {i+1}) \op (H\ {i+2}) ... \op (H\ j)! \] For example, in \begin{equation}\label{W} \verb!W =\{\omega_{0}(\bu),\ldots,\omega_{3}(\bu)\}! \end{equation} we have \verb!I=\omega_{0}(\bu)!, \verb!J=\omega_{3}(\bu)!, $i=0$, $j=3$, We can manufacture a \TeX\ macro \verb!\ellipsis#1#2#3#4! Where the arguments are $H$ (as a term without the binder), $i$, $j$, \verb!\op!, respectively that constructs the correct \TeX'ed pdf. Roughly, $H$ should contain say a \verb!\DUMMY! where the insertion is to take place. The insertions are handled, say with something of the form \verb!\futurelet\DUMMY\CH i!, where \verb!\CH! expands to $H$. We can add custom rendering to the CNL image by binding the term $H$ as \verb!(\LAM \DUMMY, H)! and applying it to $i$ and $j$. This would not be hygienic; we would not be able to nest ellipses. 
For example, in \eqref{W}, we would write \[ \verb!W = \{ \ellipsis{\omega_{\DUMMY}(\bu)}{0}{3}{,} \}!. \] \section{References} \href{https://en.wikibooks.org/wiki/LaTeX/Mathematics}{Wikibook LaTeX Mathematics} \href{https://hilton.org.uk/presentations/naming-guidelines}{naming-conventions} \href{https://www.tug.org/TUGboat/tb32-3/tb102verna.pdf}% {Towards \LaTeX\ coding standards} \end{document}
{ "alphanum_fraction": 0.744322311, "avg_line_length": 35.0573248408, "ext": "tex", "hexsha": "d5891ce739da0183eda1a35415bf30f9b715cfce", "lang": "TeX", "max_forks_count": 17, "max_forks_repo_forks_event_max_datetime": "2020-08-15T01:30:32.000Z", "max_forks_repo_forks_event_min_datetime": "2019-06-27T16:34:53.000Z", "max_forks_repo_head_hexsha": "b521d3393339e5dd3b7f5cd21ba81a758bd5c55c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "HoanNguyen92/CNL-CIC", "max_forks_repo_path": "documentation/tex/tex-cnl-spec.tex", "max_issues_count": 8, "max_issues_repo_head_hexsha": "b521d3393339e5dd3b7f5cd21ba81a758bd5c55c", "max_issues_repo_issues_event_max_datetime": "2020-03-25T15:51:32.000Z", "max_issues_repo_issues_event_min_datetime": "2019-10-17T06:09:51.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "HoanNguyen92/CNL-CIC", "max_issues_repo_path": "documentation/tex/tex-cnl-spec.tex", "max_line_length": 161, "max_stars_count": 14, "max_stars_repo_head_hexsha": "b521d3393339e5dd3b7f5cd21ba81a758bd5c55c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "HoanNguyen92/CNL-CIC", "max_stars_repo_path": "documentation/tex/tex-cnl-spec.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-07T18:13:04.000Z", "max_stars_repo_stars_event_min_datetime": "2019-06-27T16:34:39.000Z", "num_tokens": 6082, "size": 22016 }
\documentclass[conference]{IEEEtran} % This package serves to balance the column lengths on the last page of the document. % please, insert \balance command in the left column of the last page \usepackage{balance} %% to enable \thank command \IEEEoverridecommandlockouts %% The usage of the following packages is recommended %% to insert graphics \usepackage[pdftex]{graphicx} % to typeset algorithms \usepackage{algorithmic} \usepackage{algorithm} % to typeset code fragments \usepackage{listings} % provides various features to facilitate writing math formulas and to improve the typographical quality of their output. \usepackage[cmex10]{amsmath} \interdisplaylinepenalty=2500 % for urls typesetting and breaking \usepackage{url} % for vertical merging table cells \usepackage{multirow} % TODO \usepackage{natbib} \setcitestyle{square} % % \title{Automatized Generation of Alphabets of Symbols} % % \author{\IEEEauthorblockN{Serhii Hamotskyi\IEEEauthorrefmark{1}, Anis Rojbi\IEEEauthorrefmark{2}, Sergii Stirenko\IEEEauthorrefmark{1}, and Yuri Gordienko\IEEEauthorrefmark{1}} \IEEEauthorblockA{\IEEEauthorrefmark{1}Igor Sikorsky Kyiv Polytechnic Institute, Kyiv, Ukraine, Email: [email protected]} \IEEEauthorblockA{\IEEEauthorrefmark{2}Laboratoire THIM (Technologies, Handicaps, Interfaces et Multimodalités), University Paris 8, Paris, France} } %% \author{ %% \IEEEauthorblockN{Serhii Hamotskyi, Sergii Stirenko, Yuri Gordienko} %% \IEEEauthorblockA{Igor Sikorsky Kyiv Polytechnic Institute, \\ %% Kyiv, Ukraine, \\ %% Email: [email protected]} %% \and %% \IEEEauthorblockN{Anis Rojbi} %% \IEEEauthorblockA{Laboratoire THIM \\ %% (Technologies, Handicaps, Interfaces et Multimodalités), \\ University Paris 8, \\ %% Paris, France} %% } % conference papers do not typically use \thanks and this command % is locked out in conference mode. If really needed, such as for % the acknowledgment of grants, issue a \IEEEoverridecommandlockouts % after \documentclass % for over three affiliations, or if they all won't fit within the width % of the page, use this alternative format: % %\author{\IEEEauthorblockN{Michael Shell\IEEEauthorrefmark{1}, %Homer Simpson\IEEEauthorrefmark{2}, %James Kirk\IEEEauthorrefmark{3}, %Montgomery Scott\IEEEauthorrefmark{3} and %Eldon Tyrell\IEEEauthorrefmark{4}} %\IEEEauthorblockA{\IEEEauthorrefmark{1}School of Electrical and Computer Engineering\\ %Georgia Institute of Technology, %Atlanta, Georgia 30332--0250\\ Email: see http://www.michaelshell.org/contact.html} %\IEEEauthorblockA{\IEEEauthorrefmark{2}Twentieth Century Fox, Springfield, USA\\ %Email: [email protected]} %\IEEEauthorblockA{\IEEEauthorrefmark{3}Starfleet Academy, San Francisco, California 96678-2391\\ %Telephone: (800) 555--1212, Fax: (888) 555--1212} %\IEEEauthorblockA{\IEEEauthorrefmark{4}Tyrell Inc., 123 Replicant Street, Los Angeles, California 90210--4321}} \begin{document} \maketitle % typeset the title of the contribution \begin{abstract} In this paper, we discuss the generation of symbols (and alphabets) based on specific user requirements (medium, priorities, type of information that needs to be conveyed). A framework for the generation of alphabets is proposed, and its use for the generation of a shorthand writing system is explored. We discuss the possible use of machine learning and genetic algorithms to gather inputs for generation of such alphabets and for optimization of already generated ones. 
The alphabets generated using such methods may be used in very different fields, from the creation of synthetic languages and constructed scripts to the creation of sensible commands for multimodal interaction through Human-Computer Interfaces, such as mouse gestures, touchpads, body gestures, eye-tracking cameras, and brain-computing Interfaces, especially in applications for elderly care and people with disabilities. \end{abstract} \section{Introduction} % \IEEEoverridecommandlockouts\IEEEPARstart{T}{he need} to create writing systems has been with humankind since the dawn of time, and they always evolved based on the concrete challenges the writers faced. For example, the angular shapes of the runes are very convenient to be carved in wood or stone~\cite{williams1996origin}. The rapid increase of available mediums in the recent decades determined the need for many more alphabets, for very different use cases, such as controlling computers using touchpads, mouse gestures or eye tracking cameras. It is especially important for elderly care applications~\cite{gordienko2017augmented} on the basis of the newly available information and communication technologies based on multimodal interaction through human-computer interfaces like wearable computing, augmented reality, brain-computing interfaces~\cite{stirenko2017user}, etc. Many approaches for the manual creation of alphabets have been used, but we are not familiar with a formalized system for their generation. Manually created alphabets are usually suboptimal. For example, it might be argued that the Latin alphabet favours the writer more than the reader, since it evolved under the constraints of pen and paper, and those constraints are much less relevant in the computer age. Fonts which try to overcome this limitation exist~\cite{dotsies}. In a similar fashion, many systems do not use the possibilities given by the medium or context, electing to base themselves on already existing (familiar to the user, but suboptimal context-wise) symbols. A formalized framework capable of gathering requirements, generating symbols, grading them on a set of criteria and mapping them to meanings may be able to overcome many of those limitations. The main aim of this paper is to propose a formalized framework capable of gathering requirements, generating symbols, grading them on a set of criteria and mapping them to meanings, which potentially may overcome many of these limitations. \emph{The section II. Characteristics of a Rational Alphabet} contains the short characterization of basic terms and parameters of alphabets. The section \emph{III. Requirements for the needed alphabet} includes an example description of the requirements posed for alphabets used for shorthand systems. The section \emph{IV. Generation of Glyphs} proposes a method for the generation of glyphs with examples. The section \emph{V. Evaluation of Glyphs and Alphabets} contains discussion of fitness of glyphs/alphabets in relation to machine learning methods. The section \emph{VI. Discussion and future work} dedicated to discussion of the results obtained and lessons learned. \section{Characteristics of a rational alphabet} "Glyph" is defined as unique mark/symbol in a given medium. "Symbol" is defined as a glyph with a meaning attached to it. "Alphabet" is defined as a system of such symbols, including possible modifiers and conventions. Glyphs are generated and rated first, and meanings are assigned later; the alphabet as a whole is rated at the very end. 
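To make the overall flow concrete before discussing the individual characteristics, the following sketch shows one possible (deliberately minimal) reading of this two-step pipeline: glyph candidates are generated and rated first, and meanings are attached afterwards. The representation, the placeholder fitness function and all names are illustrative only; they are not part of the framework itself.
\begin{lstlisting}[language=Python]
import random

# Toy representation: a glyph is just a list of 2D control points.
def generate_glyph(max_ctrl=6):
    n = random.randint(2, max_ctrl)
    return [(random.random(), random.random()) for _ in range(n)]

def glyph_fitness(glyph):
    # Placeholder proxy: fewer control points -> simpler -> fitter.
    return -len(glyph)

def build_alphabet(letter_freqs, n_candidates=500):
    # Step 1: generate and rate glyphs in isolation.
    ranked = sorted((generate_glyph() for _ in range(n_candidates)),
                    key=glyph_fitness, reverse=True)
    glyphs = ranked[:len(letter_freqs)]
    # Step 2: map the simplest glyphs to the most frequent letters;
    # rating the alphabet as a whole is omitted here.
    letters = sorted(letter_freqs, key=letter_freqs.get, reverse=True)
    return dict(zip(letters, glyphs))

alphabet = build_alphabet({"e": 0.127, "t": 0.091, "a": 0.082})
\end{lstlisting}
The realistic counterparts of the generation, rating and mapping steps are discussed in the following sections.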
This two-step process design choice is based on performance reasons (mutating individual glyphs and their meanings at the same time is too complex for any reasonably-sized alphabet) and is meant as a starting point for further research and adaptation.
%% \begin{figure}[tbp]
%% \centering
%% \includegraphics[width=0.60\hsize]{f.pdf}
%% \caption{Basic description of framework.}
%% \end{figure}
%%
As this is a framework for the creation of alphabets in general, the goal is not to give any final recommendations about concrete alphabets, as they should be formulated individually for each alphabet. The following characteristics should generalize well for almost any alphabet, independently of the medium, dimensionality, and purpose. The vocabulary related to writing 2D characters with a pen or stylus is used, but this can be replaced with any other device.
\subsection{Writing comfort and ergonomics}
For our purposes, we define comfort as "how easy and enjoyable it is to use the alphabet".
\begin{itemize}
\item How much mental effort the recall of the symbols requires (ease of recall)
\begin{itemize}
\item How familiar the symbols are to the user at the moment of writing.
\begin{itemize}
\item Similarity to already known stimuli
\item Availability of a mnemonic system
\end{itemize}
%% \item how much mental effort does the \emph{creation} of the symbols require (in case the glyph is composite).
\end{itemize}
\item Fluency/flow, both for individual letters and their usual combinations.
\item Physical limitations. For example, some strokes might be easier to write if someone is right-handed, or holds the pen in a certain way.
\end{itemize}
We suggest the following metrics as starting points for future research and discussion:
\subsubsection{Mental effort}
We think that this would be best measured via existing methods and some new methods of fatigue estimation based on machine learning~\cite{gordienko2017ccp}. Changes in pupil size might be an especially interesting avenue in this respect~\cite{Alns2014PupilSS}, as something objective and easy to measure. If memory is more of an issue than cognitive load, then generating the alphabet in such a way that the glyphs can be "calculated" at writing time might help; as a simple example of this, when we were manually creating our shorthand system, we decided to encode time, modality, and person via a single glyph consisting of three parts.
\subsubsection{Fluency}
Possible metrics for fluency could be:
\begin{itemize}
\item Number of sharp angles per glyph.
\item Curvature per glyph. Both can be defined as the sum of absolute changes in direction per unit of distance.
\item Ratio of strokes that mean something semantically, as opposed to "connecting one glyph with another", to the entire number of strokes.
\item Number of easily connectable glyphs following each other in an average text, so that as few unnecessary movements as possible are made. For example, given a representative source text,
\[c=\sum_{i=1}^n\sum_{j=1}^nE(g_i, g_j)P(g_i,g_j),\]
where \(n\) is the number of existing glyphs, \(E(g_i, g_j)\) is how "easy" the two glyphs are to connect, and \(P(g_i, g_j)\) is the probability that \(g_i\) will be directly before \(g_j\).
\end{itemize}
\subsection{Writing speed}
Defined not as "how fast the pen moves", but rather as "how much time is needed to convey the needed information".
\begin{itemize}
\item How fast individual glyphs are to write. This intersects heavily with "Fluency".
\begin{itemize}
\item Fluency from the subsection above.
\item How much the pen needs to travel to form the glyph. \end{itemize} \item How much "meaning" can be encoded in one glyph. This is directly related to redundancy and entropy, discussed in the following sections. \item The more simple glyphs should be mapped to the most common symbols. \end{itemize} A potentially interesting experiment would be timing people using the system, and dividing the amount of information written by the time taken; but this would raise questions about the input information. Accurately calculating the entropy of the conveyed information for this purpose would be practical only for alphabets used in very narrow and formalized contexts. \subsection{Ease of recognition} \begin{itemize} \item How different are the glyphs between each other \item how much are distortions likely to worsen the recognition of the glyphs. \end{itemize} Additionally, here various memory biases and characteristics of human memory will be at play (see, for example,the Von Restorff effect~\cite{hunt1995subtlety}). \subsection{Universality} Ideally, the glyphs should generalize well. That means that once learned for styluses,the same alphabet shouldn't be too hard to port to other mediums without losing many of the above mentioned characteristics. Excepting changes of dimensionality (3D-gestures might be hard to port to a 2D-stylus), this is probably the hardest to quantify and account for. \section{Requirements for the needed alphabet} Most writing systems have been heavily influenced by the constraints inherent in their area of use --- purpose, characteristics of the information they needed to convey, materials. Even naturally evolving systems tend to converge towards local optima rather than a global optimum. Requirements and use patterns may gradually change, while the systems may be stuck in a state that is not optimal anymore. Therefore, a very careful analysis of the requirements and limitations is needed. As example of applying our requirements above to our case of shorthand system, we can consider the following: \begin{enumerate} \item On a purely symbolic level: \begin{enumerate} \item Writing letters \begin{enumerate} \item number of strokes needed to encode individual letters \item complexity of the resulting glyph \end{enumerate} \item Writing words \begin{enumerate} \item connections between individual letters (glyphs) \item how likely are letters that are easy to connect to each to be represented by easily connectable glyphs \item if all existing glyphs are not identical in complexity, what is the ratio of easy-to-write glyphs to the complex ones in a typical text (the bigger the ratio, the better) \end{enumerate} \end{enumerate} \item Writing sentences: \begin{enumerate} \item are there any often-repeating words or groups of words which, when replaced by a shorter, even if complex, symbol, would lead to a gain in time? ("The" as a typical example). \end{enumerate} \item On a semantic level: Are there any grammatical categories or modalities that are represented in natural text with many letters, that when replaced by a single glyph or a modifier, would lead to a gain in time? (tenses, number, gender, hypotheticals, ...). The above mentioned symbol encoding time, modality, and person, to shorten words like "they would have been able to", happened at this level of abstraction. \item On an information theoretical level: How much redundancy is needed? How many errors in transcription can happen before the message becomes either unreadable or its meaning is distorted? 
(Natural languages are redundant via multiple mechanisms, notably via agreement in person, gender, case... Errors or interferences will still allow to understand what’s being said, up to a certain point. This may not be the case for constructed writing systems, if they are built with low redundancy.)~\cite{reza1961introduction} \end{enumerate} One way to quantify some of the above would be analyzing source texts. At the end, at least the following information should be available: \begin{itemize} \item frequencies of individual letters \(p_i\) \item most-needed connections \(c_{ij}\) \end{itemize} As example of how the information can be used, let's consider again our hypothetical shorthand system. Each of the generated glyphs can have three possible starting and ending strokes, represented by integers, and positioned at different heights.\(I_s, I_e=\{0, 1, 2\}\) Glyphs \(i, j\) where \(i_e=j_s\) are considered easily connectable. Using this information, later we can map the glyphs to meanings in such a way, that the letters that are most likely to follow each other are more likely to be represented by easily connectable glyphs. The problem would be trivially solvable by having all glyphs start and end at the same point, but this would make it harder to differentiate the individual glyphs. \section{Generation of the glyphs} The second part of the proposed framework is the generation of possible glyphs. In this paper, Bezier curves have been used to generate the glyphs and calculate some of the needed metrics. During the generation of the example glyphs, we made the following assumptions about the alphabet for which the glyphs are generated: \begin{enumerate} \item The glyphs have a definite starting and ending point; the number of such points is limited, to facilitate connecting the symbols to each other. \item The stroke width does not vary (as, for example, in the case of Pitman shorthand), because of the low availability of pens able to convey even two levels of thickness and of low average penmanship skill in most people. (Though using it as a third or fourth dimension would certainly be possible.) \item The symbols will fit into a square bounding box. \end{enumerate} The generation of glyphs starts by fixing a definite starting and ending point and then adding a semi-random number of control points. Figures 1-3 are examples of glyphs generated using the above rules. \begin{figure}[tbp] \centering \includegraphics[width=0.75\hsize]{e1.pdf} \caption{Example of generated glyph with low fitness} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=0.75\hsize]{e3.pdf} \caption{Glyph with higher fitness } \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=0.75\hsize]{e2.pdf} \caption{The simpler a glyph is, the higher fitness it has} \end{figure} \balance \section{Evaluation of Glyphs and Alphabets} In this stage, the fitness of each glyph is determined. Many approaches are possible, and they heavily depend on the context and the medium for which the generation is being done. For our shorthand system, the main criteria were length and simplicity. The number of control points has been used as a proxy of fitness and has been partly accounted for in the generation phase (empirically, the more control points the more chaotic the glyph is). The second metric is complexity, which may be loosely defined as "how hard it would be to write this symbol using a pen". 
For our purposes, complexity is defined as \(\frac{c}{l}\), where \(c\) is the sum of the angles in the polygonal representation of the curve (informally, how curved the glyph is; the more curves there are and the sharper the individual curves are, the bigger the value is), and \(l\) is the length of the curve (a certain amount of curves on a large glyph should not be penalized as much as the same amount on a smaller one). The value \(c\) is calculated by converting the curve between the first adjoining control points to a polygon, summing the absolute values of the angles between all adjoining lines, and repeating the process for all the successive control points:
\[c=\sum_{i=1}^n\sum_{j=2}^{p}L(l_{i,j}, l_{i,j-1}),\]
where \(n\) is the number of control points, \(p\) is the number of lines used to approximate the curve, \(L\) is the absolute angle between two lines, and \(l_{i,j}\) is the \(j\)-th line after the control point \(i\). The reasons for defining \(c\) as we did are manifold, one of them being that a very similar metric is used for evaluating the similarity of two glyphs to each other.

Much better metrics are possible. The subjective reactions to signs might vary between people; differences due to age, cultural and/or language background are probable. This might be a promising area to study with the help of machine learning. Data like "Symbols similar to X perform poorly with demographic Y" would be valuable for creating alphabets when something about the probable users is known. Additionally, machine learning would open the doors for custom-tailored systems, where users rate some symbols and, based on their feedback, predictions are made about what other symbols they might like, remember and use.

The first mapping of the generated glyphs, before its fitness is rated, is necessarily very tentative. In this paper we have not touched grammatical modalities and ways to shorten them in great detail, as they would merit quite a lot more research and space (and, probably, their own paper); regardless, they would have their place at this step of the framework. For an alphabet, our goals could be the following:
\begin{enumerate}
\item As many high-fitness letters as possible.
\item Letters which are found the most often should have the highest fitness (that is, be as simple as possible).
\item The letters should be unlike each other.
\item The letters should be easily connectable.
\end{enumerate}
The most important requirement is for the letters to be unlike each other. This is needed both for the resulting text to be readable (the existence of a 1-to-1 mapping between a text written in shorthand and a normal text, or at least the resulting text being readable using contextual clues) and for improving the memorization of the glyphs (memorizing many similar stimuli is much harder than memorizing many different ones, unless a good framework for memorization is given, such as dividing symbols in parts). For our purposes, histogram comparison was the most straightforward to implement. The data for the histogram is provided by the angles computed at the previous step. Basic shapes and turns would be recognizable, and the difference between the two makeshift histograms would approximate the difference between the glyphs. Here, \(D_{ij}\) is the difference between glyphs \(i\) and \(j\). Therefore, one formula for the fitness could be:
\[ f=\sum^{n}_{i=1}f_i+ \sum^{n}_{i=1}\sum^{n}_{j=1}D_{ij}+ \sum^{n}_{i=1}f_ip_i \]
and the glyphs are picked so that the above formula is maximized. (The formula above does not include connections.)
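As an illustration of how these quantities could be computed in practice, the sketch below assumes that each glyph is available as a polyline sampled from its Bezier curve; the number of histogram bins and the use of an L1 distance between the angle histograms are choices made here for concreteness, not requirements of the framework.
\begin{lstlisting}[language=Python]
import math

def turning_angles(points):
    # Signed change of direction between consecutive polyline segments.
    headings = [math.atan2(y1 - y0, x1 - x0)
                for (x0, y0), (x1, y1) in zip(points, points[1:])]
    return [(h1 - h0 + math.pi) % (2 * math.pi) - math.pi
            for h0, h1 in zip(headings, headings[1:])]

def complexity(points):
    # c / l: total absolute curvature divided by curve length.
    c = sum(abs(t) for t in turning_angles(points))
    l = sum(math.hypot(x1 - x0, y1 - y0)
            for (x0, y0), (x1, y1) in zip(points, points[1:]))
    return c / l if l > 0 else float("inf")

def angle_histogram(points, bins=8):
    counts = [0] * bins
    for t in turning_angles(points):
        i = min(int((t + math.pi) / (2 * math.pi) * bins), bins - 1)
        counts[i] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def histogram_difference(a, b, bins=8):
    # Rough D_ij between two glyphs.
    return sum(abs(x - y) for x, y in
               zip(angle_histogram(a, bins), angle_histogram(b, bins)))
\end{lstlisting}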
A genetic algorithm at this point would attempt adding/removing/moving control points, switching glyphs between letters, introducing mirror-distortions etc. etc. \section{Discussion and future work} The basic ideas of this framework can be applied for the generation of any alphabet used in the real world. For touchpads, for example, connections may be built not using three possible endings, but 2D-points on the screen instead, and multitouch and weight-sensitivity may be included in the generation. By adding dimensions, 3D-gestures alphabets may be created. Much better heuristics for fitness may be created by more precise algorithms, machine learning and use of biology and cognitive science. The approaches demonstrated here are general enough to allow an enormous amount of flexibility in the kind of alphabets they may be used to create. One of the more interesting avenues of further research would be creating algorithms for mapping glyphs to semantics, both to letters and to more complex grammar categories or structures. Finding (with AI?) the categories which could be shortened to one or two symbols is challenging by itself, but not all of the possible patterns found by an AI would be intuitive enough for a person to use or even to understand. \section*{Acknowledgment} The work was partially supported by Ukraine-France Collaboration Project (Programme PHC DNIPRO) (http://www.campusfrance.org/fr/dnipro) \bibliographystyle{myIEEEtran} \bibliography{mybib} \end{document}
{ "alphanum_fraction": 0.7564673274, "avg_line_length": 85.0428571429, "ext": "tex", "hexsha": "b23496034a6d43c62610817310741b81cbb68f58", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "44b930a35945813c3958527661cb8489f01fca24", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "pchr8/gh", "max_forks_repo_path": "fedcsis/ffinal/SH-FedCSIS/paper/Serhii-Hamotskyi-FedCSIS.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "44b930a35945813c3958527661cb8489f01fca24", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "pchr8/gh", "max_issues_repo_path": "fedcsis/ffinal/SH-FedCSIS/paper/Serhii-Hamotskyi-FedCSIS.tex", "max_line_length": 885, "max_stars_count": null, "max_stars_repo_head_hexsha": "44b930a35945813c3958527661cb8489f01fca24", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "pchr8/gh", "max_stars_repo_path": "fedcsis/ffinal/SH-FedCSIS/paper/Serhii-Hamotskyi-FedCSIS.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5188, "size": 23812 }
% % File acl2019.tex % %% Based on the style files for ACL 2018, NAACL 2018/19, which were %% Based on the style files for ACL-2015, with some improvements %% taken from the NAACL-2016 style %% Based on the style files for ACL-2014, which were, in turn, %% based on ACL-2013, ACL-2012, ACL-2011, ACL-2010, ACL-IJCNLP-2009, %% EACL-2009, IJCNLP-2008... %% Based on the style files for EACL 2006 by %%[email protected] or [email protected] %% and that of ACL 08 by Joakim Nivre and Noah Smith \documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2019} \usepackage{times} \usepackage{latexsym} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{graphicx} \usepackage{caption} \usepackage{subcaption} \usepackage{hyperref} \usepackage{relsize} \renewcommand*{\UrlFont}{\smaller\relax} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \usepackage{enumitem} \usepackage{url} \aclfinalcopy % Uncomment this line for the final submission %\def\aclpaperid{***} % Enter the acl Paper ID here %\setlength\titlebox{5cm} % You can expand the titlebox if you need extra space % to show all the authors. Please do not make the titlebox % smaller than 5cm (the original size); we will check this % in the camera-ready version and ask you to change it back. \newcommand\BibTeX{B\textsc{ib}\TeX} \title{Concept Tagging for the Movie Domain by using Transfer Learning and Named Entity Recognition.} \author{Giovanni De Toni (197814) \\ University of Trento \\ Via Sommarive, 9, 38123 Povo,Trento TN\\ \texttt{[email protected]}} \date{} \begin{document} \maketitle \begin{abstract} Neural Networks have produced remarkable results in many Natural Language Processing tasks, for example, when tasked to assigning concepts to words of a sentence. Their successes are made possible by employing good word representations (embeddings) which a Neural Network can understand. This work evaluates several newly developed pre-trained embeddings (ELMo, BERT and ConceptNet) on the task of tagging sequences from the movie domain. We then compare the measurements with previous results of the literature. \end{abstract} \section{Introduction} Concept Tagging sequences is a classical problem in the NLP field. It consists of assigning to each word of a sentence a given concept which represents the meaning of that word (e.g., ``\textit{star wars}" is a movie title, hence the concept `` \textit{movie.title}"). Over the past years, it was extensively studied and several techniques were developed such to efficiently tag words given a pool of concepts. The most basic methods are statistical language models (e.g., Weight Finite State Transducers) which are easy to train and yield quite impressive results with a little tuning. However, with the rise of Recurrent Neural Networks, we can now develop models which have better performances than their statistic counterparts. Moreover, thanks to the use of specific word embeddings (e.g., word2vec, GloVe, ELMo etc.), we can boost the performances even more. This work takes as a starting point a previous paper on the topic by \citet{gobbi} and tries to improve the results of some of the neural models presented by adding new features (NER and POS tags) and by trying to employ more intelligent embeddings (ELMo, BERT and ConceptNet). We conducted several experiments with a few selected architectures and we tried to outperform the performances presented (with respect also to the classical WFST methods). 
The report is structured as follows: the second section describes the models used for the experiments, the third section gives a brief explanation about the possible types of embeddings and then it describes the chosen embeddings for our project. The fourth section describes the analysis of the dataset and of the embeddings. Ultimately, the fifth and the sixth sections report the experiments performed and the discussion over the results. \section{Models} To evaluate how the performance changes with respect to the previous results we selected two models from the original work. We concentrated on the Neural Networks approaches by selecting the ones which held the best results over the test set. We chose the following architectures: LSTM and LSTM-CRF. Moreover, we extended the features used by the various models by providing also the \textbf{Part-of-Speech tags (POS)} and the \textbf{Named Entities Recognition (NER)} entities as one-hot-encoded vectors. We now describe the two selected architectures. \textbf{Long-Short Term Memory (LSTM)} \cite{lstm} is a model which extends and improve the previous RNN by solving the vanishing/exploding gradients problem. We tested a simple LSTM which receives in input the embeddings of the words. Moreover, we tested another LSTM model which uses also the characters embeddings (CHAR) obtained using a convolutional layer concatenated with the new features. We experimented also with the \textbf{LSTM-CRF} model \cite{ yao2014recurrent, DBLP:journals/corr/HuangXY15} in which the LSTM provides the class scores for each token and then the Viterbi algorithm is used to determine the labels at a global level. Moreover, an LSTM-CRF version extended with the character embeddings and the new features was also evaluated. \section{Embeddings} As a first step to improve the performances of the previous work we concentrated on evaluating several recently proposed pre-trained language models to produce words representations. Generally speaking, language representations can be either \textbf{context-free} or \textbf{contextual}. Context-free means that the word representation is generated without looking at the context in which the word is used. For instance, the word ``\textit{season}" may have different meanings with respect to the context in which it is placed (e.g., ``\textit{Summer is my favourite season}" and ``\textit{Season the chicken and then serve}"), but in a context-free model it will get the same representation (embedding). A contextual representation takes care also of the context in which a word is used. Moreover, the contextual representations can be further divided into two other categories: unidirectional or bidirectional. A \textbf{unidirectional model} contextualizes a given term by just looking at the words on the right (or on the left) of it. A \textbf{bidirectional model} looks both on the left and on the right of the target word before producing a representation. There are also \textbf{shallow bidirectional models} which combine two unidirectional models (one for the left side and one for the right side) to give a more complete representation. In our work, we tested three different embeddings, one for each category (context-free, contextual shallow bidirectional and contextual bidirectional): ConceptNet, ELMo and BERT. We then compared their performances against the one used in the original work called \textbf{w2v\_trimmed} (a standard word2vec representation). 
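Independently of which embedding is chosen, the input to the taggers is assembled in the same way. The following PyTorch sketch illustrates the idea of concatenating pre-trained word embeddings with one-hot POS and NER features before the recurrent layer; the dimensions are only indicative, and the character convolution and the CRF layer are omitted for brevity. This is not the code of the original implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    # Illustrative tagger: word embeddings plus
    # one-hot POS/NER features.
    def __init__(self, emb_dim=300, pos_dim=50,
                 ner_dim=10, hidden=200, n_concepts=43):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim + pos_dim + ner_dim,
                            hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_concepts)

    def forward(self, word_emb, pos_onehot, ner_onehot):
        x = torch.cat([word_emb, pos_onehot, ner_onehot],
                      dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # per-token concept scores

# Toy usage: a batch of 2 sentences, 7 tokens each.
scores = BiLSTMTagger()(torch.randn(2, 7, 300),
                        torch.zeros(2, 7, 50),
                        torch.zeros(2, 7, 10))
\end{verbatim}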
\subsection{ConceptNet} \textbf{ConceptNet Numberbatch} \cite{speer2017conceptnet} is a context-free language representation. It is a set of word embeddings and it was made by combining previously existing models, namely: GloVe \cite{pennington2014glove}, word2vec \cite{mikolov2013efficient} and OpenSubtitles 2016 \cite{Lison2016OpenSubtitles2016EL}. Moreover, it leverages the data of the ConceptNet knowledge graph to enrich its representation. The authors reached this goal by using a retrofitting procedure to adjust the word embedding matrix by using their knowledge graph. Thanks to this procedure, ConceptNet is also intrinsically multilingual since this method pushes the model to use the same embedding space both for English words and words in different languages (since they share the same meaning). This work uses the Numberbatch 19.08 English-only model \footnote{\url{https://github.com/commonsense/conceptnet-numberbatch}} which contains 516782 embeddings with a dimension of 300. \subsection{ELMo} \textbf{ELMo (Embeddings from Language Models)} \cite{Peters:2018} is a deep contextualized word representation which captures both syntactic, semantic and polysemy characteristics of a given word. It uses a deep shallow bidirectional language model (biLM) trained on a large corpus by looking at the entire input sentences. The generated representations are deep because they are a function of all the layers of the biLM. Moreover, each layer models certain features of the given word. The original work states that the higher-level LSTM layers capture context-dependent aspects of the word meaning, while lower-level states capture aspects of the syntax. We used the pre-trained small ELMo model \footnote{\url{https://allennlp.org/elmo}} with the embedding dimension of 1024. In this work, the the final embeddings were produced by \textbf{linear combination} from the ELMo hidden layers. If $\mathbf{h_i}$ denotes an ELMo hidden layer, the final embeddings $\mathbf{e}$ can be computed by doing $\mathbf{e} = \sum_{i=0}^{N} \mathbf{h_i} \cdot w_i$, where $N$ is the number of layers. We devised two strategies to estimate the scalar weights $w_i$: \begin{itemize}[noitemsep] \item \textbf{Averaged}: $w_i = \frac{1}{N}$ to give an equal weighting; \item \textbf{Fine-tuned}: it learns directly all $w_i$ during the training of the final concept tagger. \end{itemize} \begin{figure*}[t] \begin{subfigure}{0.5\linewidth} \centering \includegraphics[width=\linewidth]{img/word2vec-embeddings.png} \caption{w2v\_trimmed} \label{fig:sfig1} \end{subfigure}% \begin{subfigure}{0.5\linewidth} \centering \includegraphics[width=\linewidth]{img/elmo-embeddings.png} \caption{ELMo} \label{fig:sfig2} \end{subfigure}\\ \begin{subfigure}{0.5\linewidth} \centering \includegraphics[width=\linewidth]{img/bert-embeddings.png} \caption{BERT} \label{fig:sfig2} \end{subfigure}% \begin{subfigure}{0.5\linewidth} \centering \includegraphics[width=\linewidth]{img/conceptnet-embeddings.png} \caption{ConceptNet} \label{fig:sfig2} \end{subfigure} \caption{These plots show the embeddings representation of four major concepts of the training dataset. Namely, \textit{movie.name} (blue), \textit{actor.name} (orange), \textit{producer.name} (red) and \textit{director.name} (green). We can clearly see how some embeddings enable nicer clusters for some concepts. 
} \label{fig:fig-emb} \end{figure*} \subsection{BERT} \textbf{BERT (Bidirectional Encoder Representations from Transformers)} \cite{devlin2018bert} is a relatively new language representation model released in 2019. It pre-trains deep bidirectional representations from unlabeled texts by exploiting both the left and right context. The model is a multi-layer bidirectional Transformer encoder. The authors state that this new representation solves the previous limitations of pre-trained models (e.g., ELMo). They argue that the standard LMs are unidirectional, thus limiting the kind of architectures which can be used during training. They also state that other pre-trained embeddings like ELMo are just shallow bidirectional since they use several separate models to capture the left and right context. There are two steps in the BERT framework: \textit{pre-training} and \textit{fine-tuning}. During the first phase, the model is trained on unlabeled data over two separate unsupervised training tasks: \textit{Masked LM} and \textit{Next Sentence Prediction (NSP)}. The Masked LM task consists of predicting some words of the given input phrase which are masked beforehand. This would enable to train a real deep bidirectional representation since bidirectional conditioning on standard LM is not possible. The Next Sentence Prediction (NSP) consists of predicting relationships between sentences which are usually not captured by LM. During the fine-tuning phase, the model is initialized with the pre-trained parameters obtained in the previous stage and it is fine-tuned on the desired task. In our work, we used a pre-trained BERT model\footnote{\url{https://pypi.org/project/bert-embedding/}} to get the word embeddings directly without doing the fine-tuning phase. More specifically, the BERT model used was pre-trained on the BookCorpus \footnote{\url{https://yknzhu.wixsite.com/mbweb}} and English Wikipedia. The embedding dimension was 768. 
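The two ELMo weighting strategies described in the ELMo subsection above can be made concrete with a few lines of PyTorch. This is only a sketch of the scalar mix $\mathbf{e} = \sum_{i} \mathbf{h_i} \cdot w_i$; it is not the implementation used by AllenNLP or in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    # e = sum_i w_i * h_i over the ELMo layers.
    # "Averaged": keep w_i fixed at 1/N.
    # "Fine-tuned": learn w_i jointly with the tagger.
    def __init__(self, n_layers=3, trainable=True):
        super().__init__()
        init = torch.full((n_layers,), 1.0 / n_layers)
        self.w = nn.Parameter(init,
                              requires_grad=trainable)

    def forward(self, layers):
        # layers: shape (n_layers, seq_len, dim)
        return (self.w.view(-1, 1, 1) * layers).sum(dim=0)

# Toy usage: 3 layers, 7 tokens, dimension 1024.
emb = ScalarMix(trainable=True)(torch.randn(3, 7, 1024))
\end{verbatim}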
\begin{table*}[t] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{@{}|l|l|l|l|l|l|l|l|@{}} \hline\textbf{Model} & \textbf{Hidden} & \textbf{Epochs} & \textbf{Batch} & \textbf{Lr} & \textbf{Drop rate} & \textbf{Emb} & \textbf{Min} $F_1$ / \textbf{Mean} $F_1$ / \textbf{Best} $F_1$ \\ \hline LSTM & 200 & 15 & 20 & 0.001& 0.7& w2v\_trimmed&\begin{tabular}[c]{@{}lll@{}}81.88 & 83.17 & 84.35\\ 82.23 & \textbf{ 83.35 } & 84.22\\ \end{tabular} \\ \hline LSTM & 200 & 15 & 20 & 0.001& 0.7& ConceptNet&\begin{tabular}[c]{@{}lll@{}}81.58 & 82.49 & 83.75\\ 81.78 & \textbf{ 82.94 } & 84.11\\ \end{tabular} \\ \hline LSTM & 200 & 30 & 20 & 0.001& 0.7& BERT&\begin{tabular}[c]{@{}lll@{}}77.06 & 79.10 & 81.39\\ 80.51 & \textbf{ 82.13 } & 83.66\\ \end{tabular} \\ \hline LSTM & 200 & 30 & 20 & 0.001& 0.7& ELMo&\begin{tabular}[c]{@{}lll@{}}81.77 & 82.65 & 83.48\\ 83.77 & \textbf{ 84.40 } & 85.35\\ \end{tabular} \\ \hline LSTM & 200 & 30 & 20 & 0.001& 0.7& ELMo (fine tuned)&\begin{tabular}[c]{@{}lll@{}}75.19 & 82.29 & 85.14\\ 82.88 & \textbf{ 84.47 } & 85.60\\ \end{tabular} \\ \hline LSTM-CRF & 200 & 10 & 1 & 0.001& 0.7& w2v\_trimmed&\begin{tabular}[c]{@{}lll@{}}84.85 & \textbf{ 85.70 } & 86.83\\ 84.66 & 85.35 & 86.30\\ \end{tabular} \\ \hline LSTM-CRF & 200 & 10 & 1 & 0.001& 0.7& ConceptNet&\begin{tabular}[c]{@{}lll@{}}85.46 & \textbf{ 85.94 } & 86.52\\ 85.49 & 85.83 & 86.51\\ \end{tabular} \\ \hline LSTM-CRF & 200 & 20 & 1 & 0.001& 0.7& BERT&\begin{tabular}[c]{@{}lll@{}}82.33 & 83.73 & 84.95\\ 83.10 & \textbf{ 83.83 } & 84.98\\ \end{tabular} \\ \hline LSTM-CRF & 200 & 20 & 1 & 0.001& 0.7& ELMo&\begin{tabular}[c]{@{}lll@{}}84.50 & \textbf{ 85.53 } & 86.39\\ 84.12 & 85.29 & 86.29\\ \end{tabular} \\ \hline LSTM-CRF & 200 & 10 & 1 & 0.001& 0.7& ELMo (fine tuned)&\begin{tabular}[c]{@{}lll@{}}84.34 & 85.72 & 86.56\\ 83.96 & \textbf{ 85.81 } & 86.81\\ \end{tabular} \\ \hline \end{tabular}} \caption{Evaluation results of the various models. Each model is bidirectional. The performances were obtained by recording the results for 25 runs by varying the initialization parameters of the models and by keeping their hyperparameters fixed. The rightmost column is divided as such: the first line shows the results without any improvements and the second line shows the results by adding the character convolution (CHAR) and the NER and POS tagging features. The \textbf{bold} values indicate the best result for that particular embedding. } \label{tab:final-results} \end{table*} \section{Data Analysis} We evaluated our solutions by using the NL2SparQL4NLU dataset \footnote{\url{https://github.com/esrel/NL2SparQL4NLU}}. It corresponds to the MOVIE dataset of the original paper. It includes sentences taken from the movie domain (e.g., ``\textit{trailer for star wars a new hope}"). The dataset contains 3338 and 1084 sentences for the train and test set respectively. The dictionary size (\# of unique tokens) is of 1728 (train) and 1039 (test) with an OOV rate between the datasets of 24\%. The average sentence length is 6.42 (train) and 6.52 (test). We have a total of 23 concepts (with 43 concepts in IOB format). Each word was also POS tagged and there are 50 different tags between the training and test set (taken from the Tree Tagger POS list\footnote{\url{https://courses.washington.edu/hypertxt/csar-v02/penntable.html}}). We generated a new dataset by analyzing each sentence using a Named Entity Recognition (NER) tool, spaCy\footnote{\url{https://spacy.io/}}. 
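A minimal sketch of this annotation step is shown below. The specific spaCy model name is an assumption on our side (only the library itself is fixed), and tokenization differences between spaCy and the original dataset are glossed over.
\begin{verbatim}
import spacy

# Assumed model: the standard small English pipeline.
nlp = spacy.load("en_core_web_sm")

def ner_features(sentence):
    doc = nlp(sentence)
    # One entity label per token ("" when the token is
    # not part of an entity); these labels are later
    # one-hot encoded and appended to the input features.
    return [token.ent_type_ for token in doc]

print(ner_features("trailer for star wars a new hope"))
\end{verbatim}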
It found 398 (1,86\%) and 169 (2,37\%) entities inside the training and test set respectively. These entities were of 9 different types\footnote{\url{https://spacy.io/api/annotation\#named-entities}}. \subsection{Embeddings Analysis} We evaluated also the various embeddings using the \textit{embedding coverage} metric (namely, how many words in the dataset dictionary can be found in the embedding matrix) and we compared it with the embeddings used in the original work. With the default embeddings (w2v\_trimmed), the coverage is 96.18\% (66 words not recognized) while with ConceptNet the coverage is 91.90\% (140 words not recognized). For BERT and ELMo models we performed a visual analysis instead by using dimensionality reduction techiques. More specifically, we used \textit{t-SNE} \cite{Maaten2008VisualizingDU} to reduce the dimensionality of the BERT/ELMo embeddings down to 2-dimensional data points and then we plotted the results for the four most represented concepts in the dataset. See Figure \ref{fig:fig-emb} for the results. It is possible to notice how the default embeddings, ELMo and BERT produce nice clusters for the \textit{movie.name} category. The other categories are slightly mixed together which is expected. For instance, \textit{actor.name} and \textit{director.name} may end up with the same embedding since a person name can be referring both to an actor and to a movie director. BERT and ELMo should be able to discriminate between them since they can exploit the context, but the lack of clusters could be caused by the \textit{t-SNE} reduction. Ultimately, ConceptNet does not produce significant clusters (apart from a small one composed by \textit{director.name} elements). This however could be again caused by the \textit{t-SNE} reduction. \begin{figure*}[t] \begin{subfigure}{0.5\linewidth} \centering \includegraphics[width=\linewidth]{img/elmo-embeddings-not-trained-2D.png} \caption{Default ELMo Embeddings (\textit{movie.name} only)} \label{fig:sfig1} \end{subfigure}% \begin{subfigure}{0.5\linewidth} \centering \includegraphics[width=\linewidth]{img/elmo-embeddings-trained-2D.png} \caption{Fine-Tuned ELMo Embeddings (\textit{movie.name} only).} \label{fig:sfig2} \end{subfigure}%\\ %\begin{subfigure}{0.5\linewidth} % \centering % \includegraphics[width=\linewidth]{img/elmo-%embeddings-not-trained.png} % \caption{Averaged ELMo Embeddings 3D} % \label{fig:sfig1} %\end{subfigure}% %\begin{subfigure}{0.5\linewidth} % \centering % \includegraphics[width=\linewidth]{img/elmo-%embeddings-trained.png} % \caption{Learned Elmo Embeddings 3D %(\textit{movie.title} only).} % \label{fig:sfig2} %\end{subfigure} \caption{These plots show how the embeddings clustering for the \textit{movie.name} concept changes between the averaged version and the fine-tuned one. Fine-tuning the weights gives importance on the higher layers which capture context-dependent aspects of word meaning and therefore they help to produce better clustering.} \label{fig:fine-tuned-emb} \end{figure*} \section{Experiments} We evaluated the models by doing a hyperparameter search by starting from the values used in the original work. We tried to find the hyperparameter combination which maximizes the mean $F_1$ score over the test dataset. The final results can be seen in Table \ref{tab:final-results}. When using a classical LSTM model, it is clear that by combining the character representation (CHAR) and the additional features (POS+NER) we obtain better results than the default baseline. 
This happens regardless of the embeddings employed. Moreover, the ELMo embeddings provide better performances than the other competitors. The BERT embeddings instead give weaker results (more than 3\% below the mean $F_1$ of the baseline in some cases). \textit{However, it seems that using different embeddings can improve the baseline.}

When using the LSTM-CRF we obtained slightly different results. The LSTM-CRF generally outperforms the simple LSTM model, but here adding the CHAR+NER+POS features gives good results only when using the fine-tuned ELMo embeddings or BERT. By employing ConceptNet or the default embeddings we obtain a better $F_1$ score without using those features (even if the difference is minimal). By using LSTM-CRF and ELMo (fine-tuned) we were able to reach the highest $F_1$ score of 86.81 over the various embeddings (the baseline model reaches a maximum of 86.83). Ultimately, the best combination overall, with a mean $F_1$ value of 85.94, is obtained by using the ConceptNet embeddings without additional features and the LSTM-CRF model. All these results surpass the previous baseline obtained with the WFST (83.73). The models were written in Python and PyTorch by forking from the previous project\footnote{\url{https://github.com/fruttasecca/concept-tagging-with-neural-networks}} and both the code and the dataset are available on Github\footnote{\url{https://github.com/geektoni/concept-tagging-nn-spacy}}.

\subsection{Effects of Embeddings Fine-Tuning}
Next, we analyzed the embeddings in more detail, focusing on ELMo, which gave the most interesting results. We studied the visual differences between the fine-tuned and averaged ELMo embeddings. We also evaluated the SSE (Sum of Squared Errors) of the generated clusters for certain concepts. By fine-tuning the ELMo weightings, it is possible to generate slightly better ``concept clusters'' in the embedding space. Figure \ref{fig:fine-tuned-emb} shows an example. We also saw an SSE reduction of almost 50\% with the fine-tuned version (from 13490936.61 to 7267435.40) when considering the \textit{movie.name} concept. This is beneficial because a neural network learns how to divide the feature space so that each concept is separated by hyperplanes; to assign a concept, we then just need to check where the representation of a token lies. However, we noticed that, since our dataset is imbalanced (the concepts \textit{O} and \textit{movie.title} take 80\% of the total), only the clusters of the majority classes will be improved. This is corroborated by the \textit{confusion matrices} of the fine-tuned ELMo models (both LSTM and LSTM-CRF): the accuracy of the majority concepts increases slightly, while the accuracy of the other concepts degrades. See Table \ref{tab:elmo-comp} for a comparison between the ELMo versions. We can conclude that fine-tuning pre-trained contextual embeddings is always advisable.
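For reference, a cluster tightness measure of this kind can be computed along the following lines; since the exact preprocessing behind the quoted SSE figures is not spelled out here, this should be read as one plausible formulation rather than as the precise computation.
\begin{verbatim}
import numpy as np

def cluster_sse(vectors, labels, concept):
    # Sum of squared distances of a concept's embedding
    # vectors from their centroid: a rough proxy for how
    # tight the "concept cluster" is.
    cluster = np.asarray([v for v, l in zip(vectors, labels)
                          if l == concept])
    centroid = cluster.mean(axis=0)
    return float(((cluster - centroid) ** 2).sum())

# Toy usage with random "embeddings".
sse = cluster_sse(np.random.randn(100, 1024),
                  ["movie.name"] * 60 + ["O"] * 40,
                  "movie.name")
\end{verbatim}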
\begin{table*}[]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{l|l|l|l|l|l|l|l|}
\cline{2-8}
 & \textbf{O} & \textbf{movie.name} & \textbf{director.name} & \textbf{actor.name} & \textbf{producer.name} & \textbf{movie.language} & \textbf{movie.release\_date} \\ \hline
\multicolumn{1}{|l|}{\textbf{Averaged ELMo}} & 0.99584898 & 0.94475655 & 0.85628743 & 0.89855072 & 0.96330275 & \textbf{0.92592593} & \textbf{0.94117647} \\
\multicolumn{1}{|l|}{\textbf{Fine-Tuned ELMo}} & \textbf{0.99625468} & \textbf{0.96190476} & \textbf{0.87647059} & \textbf{0.91724138} & \textbf{0.98181818} & 0.84745763 & 0.90410959 \\ \hline
\end{tabular}%
}
\caption{Given the top seven concepts from the test dataset, we compute the tagging accuracy when using the LSTM-CRF and the CHAR+POS+NER features. The first five concepts each appear in more than 2\% of the entire dataset, while the last two appear in less than 1\%. It is possible to see how the performances increase for the first five when using the fine-tuned ELMo, while the accuracies of the other two decrease with respect to the averaged ELMo.}
\label{tab:elmo-comp}
\end{table*}

\section{Conclusion}
The experiments gave us new insights about the usage of embeddings for obtaining state-of-the-art performances on concept tagging tasks. BERT and ELMo pre-trained models provide new embeddings obtained by looking at the entire phrase. However, they cannot be used directly out-of-the-box; they need to be fine-tuned on the task at hand to be effective. This can be clearly seen by looking at the difference between the ELMo embeddings created by mere averaging of the layers and those obtained by learning the weights of the linear combination instead. We showed how fine-tuned ELMo embeddings provide certain advantages with respect to their non-tuned version. We think this is also the reason why BERT underperformed in some cases. We used an implementation which extracts the embeddings directly, without going through the fine-tuning phase described in the original BERT paper. Therefore, the embeddings are not ``trained'' to interpret the current concept tagging task and they lead to bad performances. We are aware of the issues which arise from our validation methods (in particular, \textit{t-SNE} \cite{wattenberg2016how}) but we believe they still provide useful information.

We also presented the impact of adding more features to the original model, more specifically, the POS tags and the NER entities. However, we did not see any important improvement. \textit{We thought that having extra information like the NER tags could have improved the performances of the models, but it was not the case}. This happened for several reasons. For instance, one major problem we found with this method is related to the sparsity of the NER tagging. Out of over 21000 tokens, only about 400 were tagged with an entity definition (just 1.9\%). This adds too little discriminative power to the original model and therefore it does not produce any significant gain in performance.

In conclusion, we were able to surpass classical WFST methods and we were able to almost replicate the results of \citet{gobbi}. \textit{However, even by employing more advanced embeddings and new feature combinations, we were unable to generate significant improvements during the evaluation}. We think that the general issue that needs to be solved to get better results is the unbalanced dataset we are using. Some concepts are underrepresented (e.g., \textit{actor.type}, \textit{actor.nationality}, etc.)
while others are present in large quantity (e.g., \textit{O} and \textit{movie.name}). The model will be pushed to learn to classify the majority classes while ignoring the rest. We believe that this problem can be solved by increasing the size of the dataset. This can be done either by gathering real-world examples or by generating \textit{synthetic sentences} by sampling from a statistical model trained on the original dataset. However, this task was left for future work. \bibliography{acl2019} \bibliographystyle{acl_natbib} \end{document}
{ "alphanum_fraction": 0.7735685476, "avg_line_length": 89.0234899329, "ext": "tex", "hexsha": "dad1ea7307ccad578c93ad213d9bf6a11aa1d691", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "429a5bcb0b1355a82606899cf3d02e2faae61d54", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "geektoni/concept-tagging-nn-spacy", "max_forks_repo_path": "report/giovanni_de_toni_197814.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "429a5bcb0b1355a82606899cf3d02e2faae61d54", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "geektoni/concept-tagging-nn-spacy", "max_issues_repo_path": "report/giovanni_de_toni_197814.tex", "max_line_length": 1183, "max_stars_count": 1, "max_stars_repo_head_hexsha": "429a5bcb0b1355a82606899cf3d02e2faae61d54", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "geektoni/concept-tagging-nn-spacy", "max_stars_repo_path": "report/giovanni_de_toni_197814.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-15T07:58:19.000Z", "max_stars_repo_stars_event_min_datetime": "2020-12-15T07:58:19.000Z", "num_tokens": 7045, "size": 26529 }
\section*{Reduction} The \emph{reducer} $\Rightarrow$ is a partial function from programs/statements/expressions to programs/statements/expressions (with slight abuse of notation via overloading) and $\Rightarrow^*$ is its reflexive transitive closure. A \emph{reduction} is a sequence of programs $p_1 \Rightarrow \cdots \Rightarrow p_n$, where $p_n$ is not reducible, i.e. there is no program $q$ such that $p_n \Rightarrow q$. Here, the program $p_n$ is called the \emph{result of reducing} $p_1$. A \emph{value} is a primitive number expression, a primitive boolean expression, a primitive string expression, a function definition expression or a function declaration statement. The \emph{substitution} function $p [ n := v ]$ on programs/statements/expressions replaces every free occurrence of the name $n$ in $p$ by the value $v$. Care must be taken to introduce and preserve co-references in this process; substitution can introduce cyclic references in the result of the substitution. For example, $n$ may occur free in $v$, in which case every occurrence of $n$ in $p$ will be replaced by $v$ such that $n$ in $v$ refers cyclically to the node at which the replacement happens. \subsection*{Programs} \textbf{First-statement}: In a sequence of statements, we can always reduce the first one. \[ \frac{ \textit{statement}\ \Rightarrow\ \textit{statement}' }{ \textit{statement} \ldots \Rightarrow \textit{statement}' \ldots } \] \vspace{10mm} \textbf{Eliminate-function-declaration}: Function declarations as first statements are substituted in the remaining statements. \[ \frac{ f = \textbf{\texttt{function}}\ \textit{name} \ \textbf{\texttt{(}}\ \textit{parameters} \ \textbf{\texttt{)}}\ \textit{block} }{ f\ \textit{statement} \ldots\ \Rightarrow\ \textit{statement} \ldots[\textit{name} := f] } \] \vspace{10mm} \textbf{Eliminate-constant-declaration}: Constant declarations as first statements are substituted in the remaining statements. \[ \frac{ c = \textbf{\texttt{const}}\ \textit{name} \ \textbf{\texttt{=}}\ \textit{v} }{ c\ \textit{statement} \ldots\ \Rightarrow \ \textit{statement} \ldots[\textit{name} := v] } \] \vspace{10mm} \textbf{Eliminate-Values}: Values as first statements are discarded if they precede one or more statements in a statement sequence. \[ \frac{ v \ \mbox{is a value} }{ v \textbf{\texttt{;}} \textit{statement}+\ \Rightarrow \ \textit{statement}+ } \] \subsection*{Statements: Constant declarations} \textbf{Evaluate-constant-declaration}: The right-hand expressions in constant declarations are evaluated. \[ \frac{ \textit{expression} \ \Rightarrow \ \textit{expression}' }{ \textbf{\texttt{const}}\ \textit{name} \ \textbf{\texttt{=}}\ \textit{expression} \ \Rightarrow \ \textbf{\texttt{const}}\ \textit{name} \ \textbf{\texttt{=}}\ \textit{expression}' } \] \subsection*{Statements: Conditionals} \textbf{Conditional-statement-predicate}: A conditional statement is reducible if its predicate is reducible.
\[ \frac{ e\ \Rightarrow\ e' }{ \textbf{\texttt{if}}\ \textbf{\texttt{(}}\ e \ \textbf{\texttt{)}} \ \textbf{\texttt{\{}} \ \cdots \textbf{\texttt{\}}} \ \textbf{\texttt{else}} \ \textbf{\texttt{\{}} \ \cdots \textbf{\texttt{\}}} \ \Rightarrow \ \textbf{\texttt{if}}\ \textbf{\texttt{(}}\ e' \ \textbf{\texttt{)}} \ \textbf{\texttt{\{}} \ \cdots \textbf{\texttt{\}}} \ \textbf{\texttt{else}} \ \textbf{\texttt{\{}} \ \cdots \textbf{\texttt{\}}} } \] \vspace{10mm} \textbf{Conditional-statement-consequent}: A conditional statement whose predicate is true reduces to the consequent block. \[ \frac{ }{ \textbf{\texttt{if}}\ \textbf{\texttt{(}}\ \textbf{\texttt{true}} \ \textbf{\texttt{)}} \ \textbf{\texttt{\{}} \ \textit{statement}_1 \textbf{\texttt{\}}} \ \textbf{\texttt{else}} \ \textbf{\texttt{\{}} \ \textit{statement}_2 \textbf{\texttt{\}}} \ \Rightarrow \ \textbf{\texttt{\{}} \ \textit{statement}_1 \textbf{\texttt{\}}} } \] \textbf{Conditional-statement-alternative}: A conditional statement whose predicate is false reduces to the alternative block. \[ \frac{ }{ \textbf{\texttt{if}}\ \textbf{\texttt{(}}\ \textbf{\texttt{false}} \ \textbf{\texttt{)}} \ \textbf{\texttt{\{}} \ \textit{statement}_1 \textbf{\texttt{\}}} \ \textbf{\texttt{else}} \ \textbf{\texttt{\{}} \ \textit{statement}_2 \textbf{\texttt{\}}} \ \Rightarrow \ \textbf{\texttt{\{}} \ \textit{statement}_2 \textbf{\texttt{\}}} } \] \subsection*{Statements: Blocks} \textbf{Block-statement-reduce}: A block statement is reducible if its program is reducible. \[ \frac{ \textit{program} \ \Rightarrow \ \textit{program}' }{ \textbf{\texttt{\{}} \ \textit{program} \ \textbf{\texttt{\}}} \ \Rightarrow \ \textbf{\texttt{\{}} \ \textit{program}' \ \textbf{\texttt{\}}} } \] \vspace{10mm} \textbf{Block-statement-undefined}: A block statement whose body only contains a single value statement reduces to the value \textbf{\texttt{undefined}}. \[ \frac{ }{ \textbf{\texttt{\{}} \ v \ \textbf{\texttt{;}} \ \textbf{\texttt{\}}} \ \Rightarrow \ \textbf{\texttt{undefined}} } \] \vspace{10mm} \textbf{Block-statement-reduce-return}: A block statement whose body only contains a single return statement can be reduced by reducing the return expression. \[ \frac{ e \ \Rightarrow \ e' }{ \textbf{\texttt{\{}} \ \textbf{\texttt{return}} \ e \ \textbf{\texttt{;}} \ \textbf{\texttt{\}}} \ \Rightarrow \ \textbf{\texttt{\{}} \ \textbf{\texttt{return}} \ e' \ \textbf{\texttt{;}} \ \textbf{\texttt{\}}} } \] \vspace{10mm} \textbf{Block-statement-eliminate-return}: A block statement whose body only contains a single return value reduces to that value. \[ \frac{ }{ \textbf{\texttt{\{}} \ \textbf{\texttt{return}} \ v \ \textbf{\texttt{;}} \ \textbf{\texttt{\}}} \ \Rightarrow \ v } \] \subsection*{Statements: Expression statements} \textbf{Expression-statement-reduce}: An expression statement is reducible if its expression is reducible. \[ \frac{ e\ \Rightarrow\ e' }{ e \textbf{\texttt{;}} \ \Rightarrow \ e' \textbf{\texttt{;}} } \] \subsection*{Expressions: Binary operators} \textbf{Left-binary-reduce}: An expression with binary operator can be reduced if its left sub-expression can be reduced. \[ \frac{ e_1 \ \Rightarrow \ e_1' }{ e_1\ \textit{binary-operator} \ e_2 \ \Rightarrow \ e_1'\ \textit{binary-operator} \ e_2 } \] \vspace{10mm} \textbf{And-shortcut-false}: An expression with binary operator $\textbf{\texttt{\&\&}}$ whose left sub-expression is $\textbf{\texttt{false}}$ can be reduced to $\textbf{\texttt{false}}$. 
\[ \frac{ }{ \textbf{\texttt{false}}\ \textbf{\texttt{\&\&}}\ e \ \Rightarrow \ \textbf{\texttt{false}} } \] \vspace{10mm} \textbf{And-shortcut-true}: An expression with binary operator $\textbf{\texttt{\&\&}}$ whose left sub-expression is $\textbf{\texttt{true}}$ can be reduced to the right sub-expression. \[ \frac{ }{ \textbf{\texttt{true}}\ \textbf{\texttt{\&\&}}\ e \ \Rightarrow \ e } \] \vspace{10mm} \textbf{Or-shortcut-true}: An expression with binary operator $\textbf{\texttt{||}}$ whose left sub-expression is $\textbf{\texttt{true}}$ can be reduced to $\textbf{\texttt{true}}$. \[ \frac{ }{ \textbf{\texttt{true}}\ \textbf{\texttt{||}}\ e \ \Rightarrow \ \textbf{\texttt{true}} } \] \vspace{10mm} \textbf{Or-shortcut-false}: An expression with binary operator $\textbf{\texttt{||}}$ whose left sub-expression is $\textbf{\texttt{false}}$ can be reduced to the right sub-expression. \[ \frac{ }{ \textbf{\texttt{false}}\ \textbf{\texttt{||}}\ e \ \Rightarrow \ e } \] \vspace{10mm} \textbf{Right-binary-reduce}: An expression with binary operator can be reduced if its left sub-expression is a value and its right sub-expression can be reduced. \[ \frac{ e_2\ \Rightarrow\ e_2', \textrm{and}\ \textit{binary-operator} \mbox{is not}\ \textbf{\texttt{\&\&}}\ \textrm{or}\ \texttt{\textbf{||}} }{ v\ \textit{binary-operator} \ e_2 \ \Rightarrow \ v\ \textit{binary-operator} \ e_2' } \] \vspace{10mm} \textbf{Prim-binary-reduce}: An expression with binary operator can be reduced if its left and right sub-expressions are values and the corresponding function is defined for those values. \[ \frac{ v\ \mbox{is result of}\ v_1\ \textit{binary-operator} \ v_2 }{ v_1\ \textit{binary-operator} \ v_2 \ \Rightarrow \ v } \] \subsection*{Expressions: Unary operators} \textbf{Unary-reduce}: An expression with unary operator can be reduced if its sub-expression can be reduced. \[ \frac{ e \ \Rightarrow \ e' }{ \textit{unary-operator} \ e \ \Rightarrow \ \textit{unary-operator} \ e' } \] \vspace{10mm} \textbf{Prim-unary-reduce}: An expression with unary operator can be reduced if its sub-expression is a value and the corresponding function is defined for that value. \[ \frac{ v'\ \mbox{is result of}\ \textit{unary-operator} \ v }{ \textit{unary-operator} \ v \ \Rightarrow \ v' } \] \subsection*{Expressions: conditionals} \textbf{Conditional-predicate-reduce}: A conditional expression can be reduced, if its predicate can be reduced. \[ \frac{ e_1 \ \Rightarrow \ e_1' }{ e_1\ \textbf{\texttt{?}}\ e_2\ \textbf{\texttt{:}}\ e_3 \ \Rightarrow \ e_1'\ \textbf{\texttt{?}}\ e_2\ \textbf{\texttt{:}}\ e_3 } \] \vspace{10mm} \textbf{Conditional-true-reduce}: A conditional expression whose predicate is the value $\textbf{\texttt{true}}$ can be reduced to its consequent expression. \[ \frac{ }{ \textbf{\texttt{true}}\ \textbf{\texttt{?}}\ e_1\ \textbf{\texttt{:}}\ e_2 \ \Rightarrow \ e_1 } \] \vspace{10mm} \textbf{Conditional-false-reduce}: A conditional expression whose predicate is the value $\textbf{\texttt{false}}$ can be reduced to its alternative expression. \[ \frac{ }{ \textbf{\texttt{false}}\ \textbf{\texttt{?}}\ e_1\ \textbf{\texttt{:}}\ e_2 \ \Rightarrow \ e_2 } \] \subsection*{Expressions: function application} \textbf{Application-functor-reduce}: A function application can be reduced if its functor expression can be reduced. 
\[ \frac{ e \ \Rightarrow \ e' }{ e\ \textbf{\texttt{(}}\ \textit{expressions} \ \textbf{\texttt{)}} \ \Rightarrow \ e'\ \textbf{\texttt{(}}\ \textit{expressions} \ \textbf{\texttt{)}} } \] \vspace{10mm} \textbf{Application-argument-reduce}: A function application can be reduced if one of its argument expressions can be reduced and all preceding arguments are values. \[ \frac{ e \ \Rightarrow \ e' }{ v\ \textbf{\texttt{(}}\ v_1 \ldots v_i \ e \ldots\ \textbf{\texttt{)}} \ \Rightarrow \ v\ \textbf{\texttt{(}}\ v_1 \ldots v_i \ e' \ldots\ \textbf{\texttt{)}} } \] \vspace{10mm} \textbf{Function-declaration-application-reduce}: The application of a function declaration can be reduced, if all arguments are values. \[ \frac{ f = \textbf{\texttt{function}}\ \textit{n} \ \textbf{\texttt{(}}\ x_1 \ldots x_n \ \textbf{\texttt{)}}\ \textit{block} }{ f\ \textbf{\texttt{(}}\ v_1 \ldots v_n\ \textbf{\texttt{)}} \ \Rightarrow \ \textit{block} [x_1 := v_1]\ldots[x_n := v_n] [n := f] } \] \vspace{10mm} \textbf{Function-definition-application-reduce}: The application of a function definition can be reduced, if all arguments are values. \[ \frac{ f = \textbf{\texttt{(}}\ x_1 \ldots x_n \ \textbf{\texttt{)}}\ \textbf{\texttt{=>}}\ b \mbox{, where $b$ is an expression or block} }{ f\ \textbf{\texttt{(}}\ v_1 \ldots v_n\ \textbf{\texttt{)}} \ \Rightarrow \ b [x_1 := v_1]\ldots[x_n := v_n] } \] \pagebreak \section*{Substitution} \textbf{Identifier}: An identifier with the same name as $x$ is substituted with $e_x$. \[ \frac{ }{ x[x := e_x] \ = \ e_x } \] \[ \frac{ \textit{name} \ \neq \ x }{ \textit{name} [x := e_x] \ = \ \textit{name} } \] \vspace{10mm} \textbf{Expression statement}: All occurrences of $x$ in $e$ are substituted with $e_x$. \[ \frac{ }{ e\textbf{\texttt{;}}[x := e_x] \ = \ e\,[x := e_x]\textbf{\texttt{;}} } \] \vspace{10mm} \textbf{Binary expression}: All occurrences of $x$ in the operands are substituted with $e_x$. \[ \frac{ }{ (e_1 \ \textit{binary-operator} \ e_2)[x := e_x] \ = \ e_1[x := e_x] \ \textit{binary-operator} \ e_2[x := e_x] } \] \vspace{10mm} \textbf{Unary expression}: All occurrences of $x$ in the operand are substituted with $e_x$. \[ \frac{ }{ (\textit{unary-operator} \ e)[x := e_x] \ = \ \textit{unary-operator} \ e[x := e_x] } \] \vspace{10mm} \textbf{Conditional expression}: All occurrences of $x$ in the operands are substituted with $e_x$. \[ \frac{ }{ (e_1\ \textbf{\texttt{?}}\ e_2\ \textbf{\texttt{:}}\ e_3)[x := e_x] \ = \ e_1[x := e_x]\ \textbf{\texttt{?}}\ e_2[x := e_x]\ \textbf{\texttt{:}}\ e_3[x := e_x] } \] \vspace{10mm} \textbf{Logical expression}: All occurrences of $x$ in the operands are substituted with $e_x$. \[ \frac{ }{ (e_1\ \textbf{\texttt{||}}\ e_2)[x := e_x] \ = \ e_1[x := e_x]\ \textbf{\texttt{||}}\ e_2[x := e_x] } \] \[ \frac{ }{ (e_1\ \textbf{\texttt{\&\&}}\ e_2)[x := e_x] \ = \ e_1[x := e_x]\ \textbf{\texttt{\&\&}}\ e_2[x := e_x] } \] \vspace{10mm} \textbf{Call expression}: All occurrences of $x$ in the arguments and the function expression of the application $e$ are substituted with $e_x$. \[ \frac{ }{ (e \ \textbf{\texttt{(}}\ x_1 \ldots x_n \ \textbf{\texttt{)}})[x := e_x] \ = \ e[x := e_x] \ \textbf{\texttt{(}}\ x_1[x := e_x] \ldots x_n[x := e_x] \ \textbf{\texttt{)}} } \] \pagebreak \textbf{Function declaration}: All occurrences of $x$ in the body of a function are substituted with $e_x$ under given circumstances. 
\begin{enumerate}[label=\large\protect\textcircled{\small\arabic*}] \item Function declaration where $x$ has the same name as a parameter. \end{enumerate} \[ \frac{ \exists \, i \in \{1, \cdots, n\} \text{ s.t. } x \ = \ x_i }{ (\textbf{\texttt{function}}\ \textit{name} \ \textbf{\texttt{(}}\ x_1 \ldots x_n \ \textbf{\texttt{)}}\ \textit{block})[x := e_x] \ = \ \textbf{\texttt{function}}\ \textit{name} \ \textbf{\texttt{(}}\ x_1 \ldots x_n \ \textbf{\texttt{)}}\ \textit{block} } \] \vspace{3mm} \begin{enumerate}[label=\large\protect\textcircled{\small\arabic*}, start=2] \item Function declaration where $x$ does not have the same name as a parameter. \begin{enumerate}[label=(\,\roman*\,)] \item No parameter of the function occurs free in $e_x$. \end{enumerate} \end{enumerate} \[ \frac{ \forall \, i \in \{1, \cdots, n\} \text{ s.t. } x \ \neq \ x_i \text{, } \ \forall \, j \in \{1, \cdots, n\} \text{ s.t. } x_j \text{ does not occur free in $e_x$} }{ \substack{\substack{\displaystyle{(\textbf{\texttt{function}}\ \textit{name} \ \textbf{\texttt{(}}\ x_1 \ldots x_n \ \textbf{\texttt{)}}\ \textit{block})[x := e_x]} \vspace{1.5mm} \\ \vspace{0.5mm} \displaystyle{=}} \\ \displaystyle{ \textbf{\texttt{function}}\ \textit{name} \ \textbf{\texttt{(}}\ x_1 \ldots x_n \ \textbf{\texttt{)}}\ \textit{block}[x := e_x]}} } \] \vspace{3mm} \begin{enumerate} \item[] \begin{enumerate}[label=(\,\roman*\,), start=2] \item A parameter of the function occurs free in $e_x$. \end{enumerate} \end{enumerate} \[ \frac{ \forall \, i \in \{1, \cdots, n\} \text{ s.t. } x \ \neq \ x_i \text{, } \ \exists \, j \in \{1, \cdots, n\} \text{ s.t. } x_j \text{ occurs free in $e_x$} }{ \substack{\substack{\displaystyle{(\textbf{\texttt{function}}\ \textit{name} \ \textbf{\texttt{(}}\ x_1 \ldots x_j \ldots x_n \ \textbf{\texttt{)}}\ \textit{block})[x := e_x]} \vspace{1.5mm} \\ \vspace{0.5mm} \displaystyle{=}} \\ \displaystyle{(\textbf{\texttt{function}}\ \textit{name} \ \textbf{\texttt{(}}\ x_1 \ldots y \ldots x_n \ \textbf{\texttt{)}}\ \textit{block}[x_j := y])[x := e_x]}} } \] \begin{enumerate} \item[] \begin{enumerate}[label=\roman*., start=2] \item[] \vspace{3mm} Substitution is applied to the whole expression again as to recursively detect and rename all parameters of the function declaration that clash with variables that occur free in $e_x$, at which point (\,i\,) takes place. Note that the name $y$ is not declared in, nor occurs in \textit{block} and $e_x$. \end{enumerate} \end{enumerate} \vspace{10mm} \textbf{Lambda expression}: All occurrences of $x$ in the body of a lambda expression are substituted with $e_x$ under given circumstances. \begin{enumerate}[label=\large\protect\textcircled{\small\arabic*}] \item Lambda expression where $x$ has the same name as a parameter. \end{enumerate} \[ \frac{ \exists \, i \in \{1, \cdots, n\} \text{ s.t. } x \ = \ x_i }{ (\textbf{\texttt{(}}\ x_1 \ldots x_n \ \textbf{\texttt{)}}\ \textbf{\texttt{=>}}\ \textit{block})[x := e_x] \ = \ \textbf{\texttt{(}}\ x_1 \ldots x_n \ \textbf{\texttt{)}}\ \textbf{\texttt{=>}}\ \textit{block} } \] \vspace{3mm} \begin{enumerate}[label=\large\protect\textcircled{\small\arabic*}, start=2] \item Lambda expression where $x$ does not have the same name as a parameter. \begin{enumerate}[label=(\,\roman*\,)] \item No parameter of the lambda expression occurs free in $e_x$. \end{enumerate} \end{enumerate} \[ \frac{ \forall \, i \in \{1, \cdots, n\} \text{ s.t. } x \ \neq \ x_i \text{, } \ \forall \, j \in \{1, \cdots, n\} \text{ s.t. 
} x_j \text{ does not occur free in $e_x$} }{ (\textbf{\texttt{(}}\ x_1 \ldots x_n \ \textbf{\texttt{)}}\ \textbf{\texttt{=>}}\ \textit{block})[x := e_x] \ = \ \textbf{\texttt{(}}\ x_1 \ldots x_n \ \textbf{\texttt{)}}\ \textbf{\texttt{=>}}\ \textit{block}[x := e_x] } \] \vspace{3mm} \begin{enumerate} \item[] \begin{enumerate}[label=(\,\roman*\,), start=2] \item A parameter of the lambda expression occurs free in $e_x$. \end{enumerate} \end{enumerate} \[ \frac{ \forall \, i \in \{1, \cdots, n\} \text{ s.t. } x \ \neq \ x_i \text{, } \ \exists \, j \in \{1, \cdots, n\} \text{ s.t. } x_j \text{ occurs free in $e_x$} }{ (\textbf{\texttt{(}}\ x_1 \ldots x_j \ldots x_n \ \textbf{\texttt{)}}\ \textbf{\texttt{=>}}\ \textit{block})[x := e_x] \ = \ (\textbf{\texttt{(}}\ x_1 \ldots y \ldots x_n \ \textbf{\texttt{)}}\ \textbf{\texttt{=>}}\ \textit{block}[x_j := y])[x := e_x] } \] \begin{enumerate} \item[] \begin{enumerate}[label=\roman*., start=2] \item[] \vspace{3mm} Substitution is applied to the whole expression again as to recursively detect and rename all parameters of the lambda expression that clash with variables that occur free in $e_x$, at which point (\,i\,) takes place. Note that the name $y$ is not declared in, nor occurs in \textit{block} and $e_x$. \end{enumerate} \end{enumerate} \pagebreak \textbf{Block expression}: All occurrences of $x$ in the statements of a block expression are substituted with $e_x$ under given circumstances. \begin{enumerate}[label=\large\protect\textcircled{\small\arabic*}] \item Block expression in which $x$ is declared. \end{enumerate} \[ \frac{ x \text{ is declared in \textit{block}} }{ \textit{block}[x := e_x] \ = \ \textit{block} } \] \vspace{3mm} \begin{enumerate}[label=\large\protect\textcircled{\small\arabic*}, start=2] \item Block expression in which $x$ is not declared. \begin{enumerate}[label=(\,\roman*\,)] \item No names declared in the block occurs free in $e_x$. \end{enumerate} \end{enumerate} \[ \frac{ x \text{ is not declared in \textit{block}}\text{, } \ \textit{name} \text{ declared in \textit{block} does not occur free in } e_x }{ \textit{block}[x := e_x] \ = \ [\textit{block}[0][x := e_x] \text{, } \ldots \text{, } \textit{block}[n][x := e_x]] } \] \vspace{3mm} \begin{enumerate} \item[] \begin{enumerate}[label=(\,\roman*\,), start=2] \item A name declared in the block occurs free in $e_x$. \end{enumerate} \end{enumerate} \[ \frac{ x \text{ is not declared in \textit{block}}\text{, } \ \textit{name} \text{ declared in \textit{block} occurs free in } e_x }{ \textit{block}[x := e_x] \ = \ [\textit{block}[0][\textit{name} := y] \text{, } \ldots \text{, } \textit{block}[n][\textit{name} := y]][x := e_x] } \] \begin{enumerate} \item[] \begin{enumerate}[label=\roman*., start=2] \item[] \vspace{3mm} Substitution is applied to the whole expression again as to recursively detect and rename all declared names of the block expression that clash with variables that occur free in $e_x$, at which point (\,i\,) takes place. Note that the name $y$ is not declared in, nor occurs in \textit{block} and $e_x$. \end{enumerate} \end{enumerate} \vspace{10mm} \textbf{Variable declaration}: All occurrences of $x$ in the declarators of a variable declaration are substituted with $e_x$. \[ \frac{ }{ \textit{declarations}[x := e_x] \ = \ [\textit{declarations}[0][x := e_x] \ldots \textit{declarations}[n][x := e_x]] } \] \vspace{10mm} \textbf{Return statement}: All occurrences of $x$ in the expression that is to be returned are substituted with $e_x$. 
\[ \frac{ }{ (\textbf{\texttt{return}} \ e\textbf{\texttt{;}})[x := e_x] \ = \ \textbf{\texttt{return}} \ e[x := e_x]\textbf{\texttt{;}} } \] \vspace{10mm} \textbf{Conditional statement}: All occurrences of $x$ in the condition, consequent, and alternative expressions of a conditional statement are substituted with $e_x$. \[ \frac{ }{ ( \textbf{\texttt{if}}\ \textbf{\texttt{(}}\ e \ \textbf{\texttt{)}} \ \textit{block} \ \textbf{\texttt{else}} \ \textit{block} )[x := e_x] \ = \ \textbf{\texttt{if}}\ \textbf{\texttt{(}}\ e[x := e_x] \ \textbf{\texttt{)}} \ \textit{block}[x := e_x] \ \textbf{\texttt{else}} \ \textit{block}[x := e_x] } \] \vspace{10mm} \textbf{Array expression}: All occurrences of $x$ in the elements of an array are substituted with $e_x$. \[ \frac{ }{ [x_1 \text{, } \ldots \text{, } x_n][x := e_x] \ = \ [x_1[x := e_x] \text{, } \ldots \text{, } x_n[x := e_x]] } \] \pagebreak \section*{Free names} Let $\rhd$ be the relation that defines the set of free names of a given Source expression; the symbols $p_1$ and $p_2$ shall henceforth refer to unary and binary operations, respectively. That is, $p_1$ ranges over $\{\textbf{\texttt{!}}\}$ and $p_2$ ranges over $\{\textbf{\texttt{||}},\, \textbf{\texttt{\&\&}},\, \textbf{\texttt{+}},\, \textbf{\texttt{-}},\, \textbf{\texttt{*}},\, \textbf{\texttt{/}},\, \textbf{\texttt{===}},\, \textbf{\texttt{>}},\, \textbf{\texttt{<}}\}$. \vspace{10mm} \textbf{Identifier}: \[ \frac{ }{ x \ \rhd \ \{x\} } \] \[ \frac{ }{ \textit{name} \ \rhd \ \varnothing } \] \vspace{10mm} \textbf{Boolean}: \[ \frac{ }{ \textbf{\texttt{true}} \ \rhd \ \varnothing } \] \[ \frac{ }{ \textbf{\texttt{false}} \ \rhd \ \varnothing } \] \vspace{10mm} \textbf{Expression statement}: \[ \frac{ e \ \rhd \ S }{ e \textbf{\texttt{;}} \ \rhd \ S } \] \vspace{10mm} \textbf{Unary expression}: \[ \frac{ e \ \rhd \ S }{ p_1(e) \ \rhd \ S } \] \vspace{10mm} \textbf{Binary expression}: \[ \frac{ e_1 \ \rhd \ S_1 \text{, } \ e_2 \ \rhd \ S_2 }{ p_2(e_1, e_2) \ \rhd \ S_1 \cup S_2 } \] \vspace{10mm} \textbf{Conditional expression}: \[ \frac{ e_1 \ \rhd \ S_1 \text{, } \ e_2 \ \rhd \ S_2 \text{, } \ e_3 \ \rhd \ S_3 }{ e_1\ \textbf{\texttt{?}}\ e_2\ \textbf{\texttt{:}}\ e_3 \ \rhd \ S_1 \cup S_2 \cup S_3 } \] \vspace{10mm} \textbf{Call expression}: \[ \frac{ e \ \rhd \ S \text{, } \ e_k \ \rhd \ T_k }{ e(e_1,\, \ldots,\, e_n) \ \rhd \ S \cup T_1 \cup \ldots \cup T_n } \] \vspace{10mm} \textbf{Function declaration}: \[ \frac{ \textit{block} \ \rhd \ S }{ \textbf{\texttt{function}}\ \textit{name} \ \textbf{\texttt{(}}\ x_1 \ldots x_n \ \textbf{\texttt{)}}\ \textit{block} \ \rhd \ S - \{x_1,\, \ldots,\, x_n\} } \] \vspace{10mm} \textbf{Lambda expression}: \[ \frac{ \textit{block} \ \rhd \ S }{ \textbf{\texttt{(}}\ x_1 \ldots x_n \ \textbf{\texttt{)}}\ \textbf{\texttt{=>}}\ \textit{block} \ \rhd \ S - \{x_1,\, \ldots,\, x_n\} } \] \vspace{10mm} \textbf{Block expression}: \[ \frac{ \textit{block}[k] \ \rhd \ S_k \text{, } \ T \text{ contains all names declared in \textit{block}} }{ \textit{block} \ \rhd \ (S_1 \cup \ldots \cup S_n) - T } \] \vspace{10mm} \textbf{Constant declaration}: \[ \frac{ e \ \rhd \ S }{ \textbf{\texttt{const}}\ \textit{name} \ \textbf{\texttt{=}}\ e\textbf{\texttt{;}} \ \rhd \ S } \] \vspace{10mm} \textbf{Return statement}: \[ \frac{ e \ \rhd \ S }{ \textbf{\texttt{return}}\ e\textbf{\texttt{;}} \ \rhd \ S } \] \vspace{10mm} \textbf{Conditional statement}: \[ \frac{ e \ \rhd \ S \text{, } \ \textit{block}_1 \ \rhd \ T_1 \text{, } \ \textit{block}_2 \ \rhd \ T_2 }{
\textbf{\texttt{if}}\ \textbf{\texttt{(}}\ e \ \textbf{\texttt{)}} \ \textit{block}_1 \ \textbf{\texttt{else}} \ \textit{block}_2 \ \rhd \ S \cup T_1 \cup T_2 } \]
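\vspace{10mm} The free-name rules above are syntax-directed, so they translate almost mechanically into code. The following Python sketch is only an illustration and is not part of the specification: it assumes an ad-hoc representation of abstract syntax trees as tagged tuples (our own choice) and covers only a subset of the expression forms.

\begin{verbatim}
# Illustrative only: AST nodes are tagged tuples, e.g. ("binary", e1, e2).
def free_names(node):
    tag = node[0]
    if tag == "name":                        # x  |>  {x}
        return {node[1]}
    if tag in ("number", "bool", "string"):  # literals have no free names
        return set()
    if tag == "unary":                       # p1(e)
        return free_names(node[1])
    if tag == "binary":                      # p2(e1, e2)
        return free_names(node[1]) | free_names(node[2])
    if tag == "conditional":                 # e1 ? e2 : e3
        return (free_names(node[1]) | free_names(node[2])
                | free_names(node[3]))
    if tag == "call":                        # e(e1, ..., en)
        return set().union(*(free_names(e) for e in node[1:]))
    if tag == "lambda":                      # (x1 ... xn) => block
        params, body = node[1], node[2]
        return free_names(body) - set(params)
    raise ValueError("unknown node tag: " + tag)
\end{verbatim}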
{ "alphanum_fraction": 0.612808642, "avg_line_length": 24.9230769231, "ext": "tex", "hexsha": "d9eac9276b71edd86b60852756efdd265153a130", "lang": "TeX", "max_forks_count": 80, "max_forks_repo_forks_event_max_datetime": "2022-03-10T08:56:48.000Z", "max_forks_repo_forks_event_min_datetime": "2018-08-24T08:55:58.000Z", "max_forks_repo_head_hexsha": "343bf1eec7e27b5749fad3f82ee9956908c39eef", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "parnikkapore/frk_js-slang", "max_forks_repo_path": "docs/specs/source_2_stepper_rules.tex", "max_issues_count": 1117, "max_issues_repo_head_hexsha": "343bf1eec7e27b5749fad3f82ee9956908c39eef", "max_issues_repo_issues_event_max_datetime": "2022-03-29T14:47:08.000Z", "max_issues_repo_issues_event_min_datetime": "2018-07-09T08:08:25.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "parnikkapore/frk_js-slang", "max_issues_repo_path": "docs/specs/source_2_stepper_rules.tex", "max_line_length": 480, "max_stars_count": 48, "max_stars_repo_head_hexsha": "343bf1eec7e27b5749fad3f82ee9956908c39eef", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "parnikkapore/frk_js-slang", "max_stars_repo_path": "docs/specs/source_2_stepper_rules.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-31T09:40:24.000Z", "max_stars_repo_stars_event_min_datetime": "2018-07-09T06:16:03.000Z", "num_tokens": 9203, "size": 25920 }
\documentclass[]{beamer} % use setting 'handouts' to ignore overlay specifications, 'draft' to \usepackage[utf8]{inputenc} \usetheme{Madrid} % beamer theme, can change \usepackage{fancyvrb} \usepackage{etoolbox} \usepackage{tikz} \usepackage[compat=1.1.0]{tikz-feynman} \usepackage{ifluatex} % example colors \definecolor{vandy_gold}{RGB}{204, 161, 102} \definecolor{vandy_blue}{RGB}{18, 111, 150} \definecolor{vandy_red}{RGB}{153, 61, 27} \definecolor{vandy_gray}{RGB}{51, 51, 51} \definecolor{vandy_green}{RGB}{70, 78, 33} % example beamer color settings \setbeamercolor{section in toc}{fg=vandy_gold,bg=white} \setbeamercolor{alerted text}{fg=vandy_red} \setbeamercolor*{palette primary}{fg=white,bg=black} \setbeamercolor*{palette secondary}{fg=white,bg=vandy_gold} \setbeamercolor*{palette tertiary}{fg=white,bg=black} \setbeamercolor*{palette quaternary}{fg=white,bg=vandy_gold} \setbeamercolor*{sidebar}{fg=white,bg=black} \setbeamercolor*{palette sidebar primary}{fg=white} \setbeamercolor*{palette sidebar secondary}{fg=white} \setbeamercolor*{palette sidebar tertiary}{fg=white} \setbeamercolor*{palette sidebar quaternary}{fg=white} \setbeamercolor{titlelike}{parent=palette secondary} \setbeamercolor{frametitle}{fg=white,bg=vandy_gold} \setbeamercolor{frametitle right}{fg=white,bg=vandy_gold} \setbeamercolor{section number projected}{bg=black,fg=vandy_gold} \setbeamercolor{section in toc}{fg=black} \setbeamercolor{block title}{fg=white, bg=vandy_gray!50!white} \setbeamercolor{block body}{bg=vandy_gray!20!white} % example beamer format settings \setbeamertemplate{itemize item}{\color{black}$\bullet$} \setbeamertemplate{itemize subitem}{\color{black}$\bullet$} \setbeamerfont{block title}{size=\small} \setbeamerfont{block body}{size=\tiny} \setbeamertemplate{itemize/enumerate body begin}{\tiny} \setbeamertemplate{itemize/enumerate subbody begin}{\tiny} \AtBeginEnvironment{tabular}{\tiny} \tikzset{font=\tiny} % \includeonlyframes{current} % if you want to just compile certain slides: use [current] setting on a frame to make it 'current' % title slide information \title[Short Name]{Full Name for Presentation} \author[E. Sheridan]{\textbf{E. Sheridan}\inst{1}} \institute{Vanderbilt University\inst{1}} % useful commands: % \uncover<+->{} takes overlay specifications and reveals contents accordingly % \scalebox{x}{} scales contents by a factor of x % document \begin{document} % title page \frame{\titlepage} % table of contents \begin{frame}{Table of Contents} \tableofcontents \end{frame} % example section \section{Section 1} % example frame 1 \begin{frame}{Example Frame} \begin{block}{Block Title} This block appears immediately \end{block} \begin{block}{Block Title}<1-> This block appears on the second overlay and all following ones \end{block} \end{frame} % example frame 2 \begin{frame}{Another Example Frame} \begin{block}{Block Title}<+-> This block appears immediately \end{block} \begin{block}{Block Title}<+-> This block appears next \end{block} \end{frame} % example section \section{Section 2} % example frame 3 \begin{frame}{Yet Another Example Frame} \begin{columns} \begin{column}{0.4\linewidth} \begin{block}{Block Title}<+(1)-> This block appears almost immediately \end{block} \end{column} \begin{column}{0.4\linewidth} \begin{block}{Block Title}<+-> This block appears next \end{block} \end{column} \end{columns} \end{frame} \end{document}
{ "alphanum_fraction": 0.7307156781, "avg_line_length": 32.3513513514, "ext": "tex", "hexsha": "cd74d391b2a164f963833cafb4894271af212826", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c0a2f9a8e06aa65bbcd43a7e43ed750fe6f59341", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sheride/latex_templates", "max_forks_repo_path": "scientific_presentation/scientific_presentation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c0a2f9a8e06aa65bbcd43a7e43ed750fe6f59341", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sheride/latex_templates", "max_issues_repo_path": "scientific_presentation/scientific_presentation.tex", "max_line_length": 129, "max_stars_count": null, "max_stars_repo_head_hexsha": "c0a2f9a8e06aa65bbcd43a7e43ed750fe6f59341", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sheride/latex_templates", "max_stars_repo_path": "scientific_presentation/scientific_presentation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1084, "size": 3591 }
\section{Other attacks} Attacks against the discrete logarithm involve finding the integer $x$ for a given $\alpha$ and $\beta$ in $G$ such that: $$\beta = \alpha \circ \alpha \circ \dots \circ \alpha = \alpha^x$$ In 1997, a lower bound for discrete logarithms was given: any generic algorithm that solves the problem with high probability must perform at least $\Omega(p^{1/2})$ group operations, where $p$ is the largest prime dividing the cardinality of the group. The same result holds for non-cyclic groups and for the Diffie-Hellman protocol. The exact difficulty of computing discrete logarithms is still unknown: despite the existence of algorithms, there could be better or more powerful ones which are still undiscovered (similarly to integer factorization). Generic algorithms are methods which only rely on the group operation $\circ$, without needing further algebraic structure. This property makes them applicable in any cyclic group. \subsection{Brute force search} Brute force is the most naive and costly generic algorithm to obtain the discrete logarithm $\log_\alpha\beta$: all powers of the generator $\alpha$ are computed \textbf{successively}, until the result equals $\beta$. For a random logarithm $x$, on average the correct solution is expected to be found after checking \textit{half of the possible values}. This gives a complexity of $O(|G|)$, linear in the cardinality of the group. To prevent this kind of attack from being successful, therefore, the cardinality of the group must be sufficiently large, without any other particular measure. In the case of $\mathbb{Z}^*_p$ with $p$ prime, approximately $\frac{p-1}{2}$ tests are required to compute the discrete logarithm, meaning that $|G| = p - 1$ should be in the order of at least $2^{80}$ to make a brute force infeasible on modern hardware. \subsection{Shanks' method} Shanks' method (baby-step giant-step) is another generic algorithm which reduces the time of a brute force search, at the trade-off of occupying more storage. It is a meet-in-the-middle approach: intermediate values are precomputed and stored, so that the two halves of the search can be matched against each other. The procedure is based on rewriting the discrete logarithm in a two-digit representation: $$x = x_g m + x_b \qquad 0 \leq x_g, x_b < m$$ The value $m$ is chosen to be approximately the square root of the cardinality of the group, i.e.\ $m = \lceil{\sqrt{|G|}}\rceil$. The discrete logarithm can then be stated as $\beta = \alpha^x = \alpha^{x_g m + x_b}$, which leads to: $$\beta \cdot (\alpha^{-m})^{x_g} = \alpha^{x_b}$$ The core idea of the algorithm is using a divide-and-conquer approach to find $x_g$ and $x_b$ separately, in two phases: \begin{enumerate} \item Baby-step, where all the values $\alpha^{x_b}$ with $0 \leq x_b < m$ are computed and stored with approximately $m \approx \sqrt{|G|}$ operations; \item Giant-step, in which the program checks for all $x_g$ in $0 \leq x_g < m$ whether this condition is fulfilled: $\beta \cdot (\alpha^{-m})^{x_g} \stackrel{?}{=} \alpha^{x_b}$ for some stored entry $\alpha^{x_b}$ that was computed during the previous step. \end{enumerate} In case of a match, i.e.\ $\beta \cdot (\alpha^{-m})^{x_{g,0}} = \alpha^{x_{b,0}}$ for some pair $(x_{g, 0}, x_{b, 0})$, the discrete logarithm is: $$x = x_{g, 0}m + x_{b, 0}$$ A further speed-up of the method can be obtained using efficient lookup schemes, such as \textbf{hash tables}, which allow constant time search.
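To make the two phases concrete, the following Python sketch implements the baby-step giant-step search in the multiplicative group $\mathbb{Z}^*_p$. It is only meant to illustrate the bookkeeping of the two phases; the function name and the choice of $\mathbb{Z}^*_p$ are ours, and the code is not optimized.

\begin{verbatim}
from math import isqrt

def baby_step_giant_step(alpha, beta, p):
    """Solve alpha^x = beta (mod p) for prime p; returns x or None."""
    n = p - 1                       # order of Z_p^* for prime p
    m = isqrt(n) + 1                # m is roughly ceil(sqrt(|G|))
    # Baby steps: store alpha^xb for 0 <= xb < m in a hash table.
    table = {pow(alpha, xb, p): xb for xb in range(m)}
    # Giant steps: compare beta * (alpha^-m)^xg against the stored values.
    factor = pow(alpha, -m, p)      # modular inverse power (Python >= 3.8)
    gamma = beta % p
    for xg in range(m):
        if gamma in table:
            return xg * m + table[gamma]
        gamma = (gamma * factor) % p
    return None

# Small example: 2^5 = 32 = 9 (mod 23), so the logarithm of 9 base 2 is 5.
assert baby_step_giant_step(2, 9, 23) == 5
\end{verbatim}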
This algorithm has a total computational time and space of $O(\sqrt{|G|})$, which means that in a group of order $2^{80}$ an attacker would need approximately $2^{40}$ operations, easily obtainable with modern hardware. A direct consequence of this is that groups need to have a cardinality of at least $|G| \geq 2^{160}$, so that Shanks' method has a complexity of about $2^{80}$ and is therefore infeasible. In the case of the groups $G = \mathbb{Z}^*_p$, $p$ must have a length of at least 160 bits; working in (sub)groups of prime order is advised, since otherwise Shanks' method would have an even smaller complexity. \subsection{Pollard's Rho method} Pollard's Rho method has the same computational time as Shanks' algorithm, yet constant space requirements. It is based on the \textbf{birthday paradox}: among roughly $\sqrt{|G|}$ pseudo-randomly generated group elements, a collision occurs with high probability. The basic idea consists in pseudo-randomly generating group elements of the form $\alpha^i \cdot \beta^j$ (the name $\rho$ comes from the shape of the resulting walk, a tail followed by a cycle), keeping track of the values $i$ and $j$, and continuing until a collision is obtained: $$\alpha^{i_1} \cdot \beta^{j_1} = \alpha^{i_2} \cdot \beta^{j_2}$$ Substituting $\beta = \alpha^x$ and comparing the exponents on both sides of the equation, the collision leads to the following formula: $$i_1 + xj_1 \equiv i_2 + xj_2 \mod |G| \qquad \rightarrow \qquad x \equiv \frac{i_2 - i_1}{j_1 - j_2} \mod |G|$$ The second congruence allows one to find the discrete logarithm. Furthermore, additional methods such as the Extended Euclidean Algorithm and Floyd's cycle-finding algorithm can be used to speed up the computation. Its computational time is approximately $O(\sqrt{n})$, while used together with Silver-Pohlig-Hellman it can achieve $O(\sqrt{p})$, where $p$ is the largest prime factor of $n$. The practical importance of the Rho method is that it is the best known algorithm in \textit{elliptic curve groups}, making 160-bit operands very popular within this kind of cryptography; however, it is not the most powerful attack for the discrete logarithm in general and it does not work well in a distributed environment.
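As an illustration of the collision bookkeeping, here is a minimal Python sketch of the single-processor Rho method with Floyd's cycle finding. The three-way partition of the group, the starting point and the function name are our own choices; a practical implementation would restart the walk when the final congruence is degenerate.

\begin{verbatim}
from math import gcd

def pollard_rho_dlog(alpha, beta, p, n):
    """Sketch: find x with alpha^x = beta (mod p); n is the order of alpha."""
    def step(y, a, b):
        # Pseudo-random walk over elements y = alpha^a * beta^b (mod p).
        if y % 3 == 0:
            return (y * y) % p, (2 * a) % n, (2 * b) % n
        if y % 3 == 1:
            return (y * alpha) % p, (a + 1) % n, b
        return (y * beta) % p, a, (b + 1) % n

    # Floyd's cycle finding: the "hare" moves twice as fast as the "tortoise".
    y, a, b = 1, 0, 0
    Y, A, B = y, a, b
    for _ in range(n):
        y, a, b = step(y, a, b)
        Y, A, B = step(*step(Y, A, B))
        if y == Y:
            r = (b - B) % n
            if r == 0 or gcd(r, n) != 1:
                return None          # unlucky collision; restart in practice
            return ((A - a) * pow(r, -1, n)) % n   # Python >= 3.8
    return None

# Usage: in Z_383^*, the element 2 has order 191; verify any returned result.
x = pollard_rho_dlog(2, 228, 383, 191)
assert x is None or pow(2, x, 383) == 228
\end{verbatim}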
{ "alphanum_fraction": 0.7518458491, "avg_line_length": 100.9636363636, "ext": "tex", "hexsha": "9b3b12fba9098ba66441527a0e20a11ebc4b6b53", "lang": "TeX", "max_forks_count": 69, "max_forks_repo_forks_event_max_datetime": "2022-03-17T19:27:50.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-02T21:46:57.000Z", "max_forks_repo_head_hexsha": "b736fc4ae065612dc988b6cb220fcf2f6119a138", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "mrahtapot/TUM", "max_forks_repo_path": "Seminar - Algebraic Methods and Algorithms in Cryptology /chapters/attacks.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "b736fc4ae065612dc988b6cb220fcf2f6119a138", "max_issues_repo_issues_event_max_datetime": "2021-07-31T19:35:57.000Z", "max_issues_repo_issues_event_min_datetime": "2021-02-16T12:22:43.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "mrahtapot/TUM", "max_issues_repo_path": "Seminar - Algebraic Methods and Algorithms in Cryptology /chapters/attacks.tex", "max_line_length": 373, "max_stars_count": 225, "max_stars_repo_head_hexsha": "b736fc4ae065612dc988b6cb220fcf2f6119a138", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "mrahtapot/TUM", "max_stars_repo_path": "Seminar - Algebraic Methods and Algorithms in Cryptology /chapters/attacks.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-29T22:25:38.000Z", "max_stars_repo_stars_event_min_datetime": "2019-10-02T10:49:41.000Z", "num_tokens": 1427, "size": 5553 }
\subsection{Scaling images}
{ "alphanum_fraction": 0.7419354839, "avg_line_length": 6.2, "ext": "tex", "hexsha": "ca3ac1acdd71ae175d259b23c37bff3f16332c9b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/computer/image/01-04-scaling.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/computer/image/01-04-scaling.tex", "max_line_length": 27, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/computer/image/01-04-scaling.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7, "size": 31 }
\section{pa-monad} \label{sec:pa-monad} \begin{enumerate} \item debug \\ tags file \begin{bluetext} "monad_test.ml" : pp(camlp4o -parser pa_monad.cmo) camlp4o -parser pa_monad.cmo monad_test.ml -printer o (** filter *) let a = perform let b = 3 in b let bind x f = f x let c = perform c <-- 3 ; c (* output let a = let b = 3 in b let bind x f = f x let c = bind 3 (fun c -> c) *) let bind x f = List.concat (List.map f x) let return x = [x] let bind2 x f = List.concat (List.map f x) let c = perform x <-- [1;2;3;4]; y <-- [3;4;4;5]; return (x+y) let d = perform with bind2 in x <-- [1;2;3;4]; y <-- [3;4;4;5]; return (x+y) let _ = List.iter print_int c let _ = List.iter print_int d (* let bind x f = List.concat (List.map f x) let return x = [ x ] let bind2 x f = List.concat (List.map f x) let c = bind [ 1; 2; 3; 4 ] (fun x -> bind [ 3; 4; 4; 5 ] (fun y -> return (x + y))) let d = bind2 [ 1; 2; 3; 4 ] (fun x -> bind2 [ 3; 4; 4; 5 ] (fun y -> return (x + y))) let _ = List.iter print_int c let _ = List.iter print_int d *) \end{bluetext} \item translation rule \\ The rule is simple: inside \textbf{perform} or \textbf{perform with bind in }, every phrase ending with \textit{;} is translated. \textit{x <-- me;} will be translated into \textit{me >>= (fun x -> ... )}; \textit{me;} will be translated into \textit{me >>= (fun \_ -> ... )}. You should refer to \textit{pa\_monad.ml} for more details. \textit{perform with exp1 and exp2 in exp3} uses the first given expression as bind and the second as the match-failure function. \textit{perform with module Mod in exp } uses the function named bind from module Mod. In addition, it uses the module's failwith in refutable patterns. \begin{alternate} let a = perform with (flip Option.bind) in a <-- Some 3; b <-- Some 32; Some (a + b) ;; val a : int option = Some 35 \end{alternate} It will be translated into \begin{bluetext} let a = flip Option.bind (Some 3) (fun a -> flip Option.bind (Some 32) (fun b -> Some (a + b))) \end{bluetext} \item ParameterizedMonad \\ \begin{ocamlcode} class ParameterizedMonad m where return :: a -> m s s a (>>=) :: m s1 s2 t -> (t -> m s2 s3 a) -> m s1 s3 a data Writer cat s1 s2 a = Writer {runWriter :: (a, cat s1 s2)} instance (Category cat) => ParameterizedMonad (Writer cat) where return a = Writer (a,id) m >>= k = Writer $ let { (a,w) = runWriter m ; (b,w') = runWriter (k a) } in (b, w' .
w) \end{ocamlcode} % $ \begin{bluetext} module State : sig type ('a,'s) t = 's -> ('a * 's) val return : 'a -> ('a,'s) t val bind : ('a,'s ) t -> ('a -> ('b,'s) t ) -> ('b,'s) t val put : 's -> (unit,'s) t val get : ('s,'s) t end = struct type ('a,'s) t = ('s -> ('a * 's)) let return v = fun s -> (v,s) let bind (v : ('a,'s) t) (f : 'a -> ('b,'s) t) : ('b,'s) t = fun s -> let a,s' = v s in let a',s'' = f a s' in (a',s'') let put s = fun _ -> (), s let get = fun s -> s,s end module PState : sig type ('a, 'b, 'c) t = 'b -> 'a * 'c val return : 'a -> ('a,'b,'b) t val bind : ('b,'a,'c)t -> ('b -> ('d,'c, 'e) t ) -> ('d,'a,'e) t val put : 's -> (unit,'b,'s)t val get : ('s,'s,'s) t end = struct type ('a,'s1,'s2) t = 's1 -> ('a * 's2) let return v = fun s -> (v,s) let bind v f = fun s -> let a,s' = v s in let a',s'' = f a s' in (a',s'') let put s = fun _ -> (), s let get = fun s -> s,s end \end{bluetext} \begin{ocamlcode} let v = State.(perform x <-- return 1 ; y <-- return 2 ; let _ = print_int (x+y) in return (x+y) );; \end{ocamlcode} \begin{ocamlcode} val v : (int, '_a) State.t = <fun> \end{ocamlcode} \begin{ocamlcode} let v = State.(perform x <-- return 1 ; y <-- return 2 ; z <-- get ; put (x+y+z) ; z<-- get ; let _ = print_int z in return (x+y+z));; \end{ocamlcode} \begin{ocamlcode} val v : (int, int) State.t = <fun> \end{ocamlcode} \begin{alternate} v 3;; 6- : int * int = (9, 6) \end{alternate} \begin{ocamlcode} let v = PState.(perform x <-- return 1 ; y <-- return 2 ; z <-- get ; put (x+y+z) ; z<-- get ; let _ = print_int z in return (x+y+z));; \end{ocamlcode} \begin{ocamlcode} val v : (int, int, int) PState.t = <fun> \end{ocamlcode} \begin{alternate} v 3 ;; 6- : int * int = (9, 6) \end{alternate} \begin{ocamlcode} let v = PState.(perform x <-- return 1 ; y <-- return 2 ; z <-- get ; put (string_of_int (x+y+z)) ; return z );; \end{ocamlcode} \begin{ocamlcode} val v : (int, int, string) PState.t = <fun> \end{ocamlcode} \begin{alternate} # v 3;; v 3;; - : int * string = (3, "6") \end{alternate} \end{enumerate} %%% Local Variables: %%% mode: latex %%% TeX-master: "../master" %%% End:
{ "alphanum_fraction": 0.5625398174, "avg_line_length": 24.0255102041, "ext": "tex", "hexsha": "337f8ce116af77743bcef5437a80abfcbc9a89d0", "lang": "TeX", "max_forks_count": 17, "max_forks_repo_forks_event_max_datetime": "2021-06-21T06:57:32.000Z", "max_forks_repo_forks_event_min_datetime": "2015-02-10T18:12:15.000Z", "max_forks_repo_head_hexsha": "09a575b0d1fedfce565ecb9a0ae9cf0df37fdc75", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "mgttlinger/ocaml-book", "max_forks_repo_path": "library/monad.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "09a575b0d1fedfce565ecb9a0ae9cf0df37fdc75", "max_issues_repo_issues_event_max_datetime": "2018-12-03T04:15:48.000Z", "max_issues_repo_issues_event_min_datetime": "2018-10-09T13:53:43.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "mgttlinger/ocaml-book", "max_issues_repo_path": "library/monad.tex", "max_line_length": 88, "max_stars_count": 142, "max_stars_repo_head_hexsha": "09a575b0d1fedfce565ecb9a0ae9cf0df37fdc75", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "mgttlinger/ocaml-book", "max_stars_repo_path": "library/monad.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-15T00:47:37.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-12T16:45:40.000Z", "num_tokens": 1787, "size": 4709 }
\documentclass{beamer} \usepackage{xcolor} \newcommand{\red}[1]{\color{red}#1} \usetheme{Madrid} %This command sets the theme for the slideshow. % There are many themes available. Check the documentation of beamer for a complete list. You can also modify them and produce your own. % Here is a short list that you can check: default, Madrid, CambridgeUS, JuanLesPins, Goettingen, Malmoe \begin{document} \title[Perspective A. \& C. Mathematics]{Perspective over Applied and Computational Mathematics} \subtitle{A slideshow using beamer} \author{A.U. Thor} \institute[UC3M]{Universidad Carlos III de Madrid} \date{\today} \maketitle %This command creates a new slide with the title \section{First section of the presentation} \frame{\tableofcontents} % This command creates a slide with the table of contents \begin{frame} \frametitle{Frame Title} \framesubtitle{Frame Subtitle} Anything that you can typeset in a LaTeX document will fit here. This includes inline math $\cos(x)$ or displayed $$\int_a^b f(x) dx$$ \begin{enumerate} \item First element \item Second element \end{enumerate} \end{frame} \begin{frame} \frametitle{Frame Title} \framesubtitle{Frame Subtitle} I can write some more text here \begin{block}{Block title} This serves to \alert{highlight} some content \end{block} And also here \end{frame} \section{Second section of the presentation} \frame{\tableofcontents[currentsection]} %The option in the \tableofcontents command presents all but the current section as semitransparent. \begin{frame} \frametitle{How to introduce overlays I} \begin{itemize} \pause \item First element \pause \item Second element \end{itemize} \end{frame} \begin{frame} \frametitle{How to introduce overlays II} \textbf<2->{This line is bold only from slide 2 onwards} \uncover<4>{I am here but am not seen until slide 4. I take up my space in the slide} \alert<3->{This text gets highlighted from slide 3 onwards} \only<1>{This line is inserted only on slide 1.} \only<2>{This line is inserted only on slide 2.} \textit<2-3>{This line is italic in the second and third slides} \end{frame} \end{document}
{ "alphanum_fraction": 0.7262065855, "avg_line_length": 23.585106383, "ext": "tex", "hexsha": "3aed7a71f1b3828ee847e4477db29c615c993345", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ac9db54fb04b7d240b1946dc60a7eb1c0cbbbcef", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "jmppardo/Perspectivas", "max_forks_repo_path": "Latex/beamer.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ac9db54fb04b7d240b1946dc60a7eb1c0cbbbcef", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "jmppardo/Perspectivas", "max_issues_repo_path": "Latex/beamer.tex", "max_line_length": 138, "max_stars_count": null, "max_stars_repo_head_hexsha": "ac9db54fb04b7d240b1946dc60a7eb1c0cbbbcef", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "jmppardo/Perspectivas", "max_stars_repo_path": "Latex/beamer.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 617, "size": 2217 }
\title{\textit{Planck} 2013 Results: Cosmological Parameter Tables} %\titlerunning{Planck Cosmological Parameter Tables} \maketitle \begin{abstract} These tables summarize the results of the \textit{Planck} nominal mission parameter estimation explorations. They include \textit{Planck} data in combination with WMAP polarization, \textit{Planck} lensing, and high-$l$ CMB experiments, as well as additional non-CMB data as detailed in the main parameter papers. \end{abstract} \newpage \section{Introduction} The tables are grouped first by cosmological model, and then by data combination. The name tags match those of the full chains also provided on the PLA. They all start with {\tt base} to denote the baseline model, followed by the parameter tags of any additional parameters that are also varied (as defined in the parameter paper). Data combination tags are as follows (see the parameters paper for full description and references): \begin{tabular} { l l } Data tag & Data used\\ \hline {\tt planck} & high-$l$ \textit{Planck} temperature ({\tt CamSpec}, $50\le l\le 2500$) \\ {\tt lowl } & low-$l$ \textit{Planck} temperature ($2\le l \le 49$) \\ {\tt lensing} & \textit{Planck} lensing power spectrum reconstruction \\ {\tt lowLike} & low-$l$ WMAP 9 polarization \\ {\tt tauprior} & A Gaussian prior on the optical depth, $\tau = 0.09 \pm 0.013$ \\ {\tt BAO} & Baryon oscillation data from DR7, DR9 and 6DF \\ {\tt SNLS} & Supernova data from the Supernova Legacy Survey \\ {\tt Union2} & Supernova data from the Union compilation \\ {\tt HST} & Hubble parameter constraint from HST (Riess et al.) \\ {\tt WMAP} & The full WMAP (temperature and polarization) 9-year data \\ \hline \end{tabular} \vskip 1cm Data likelihoods are either included when running the chains, or added afterwards by importance sampling. Data combinations that are added by importance sampling appear at the end of the list, following the {\tt post{\textunderscore}} tag. WMAP9 chains are run from the WMAP9 likelihood code with the same baseline assumptions as \textit{Planck}, and hence may differ slightly from those available on Lambda (e.g. the baseline model has non-zero neutrino mass). Note that the best fits are merely examples of parameter combinations that fit the data well; due to parameter degeneracies there may be other combinations of parameters that fit the data nearly equally well. Beneath each table is the minus log likelihood $\chi^2_{\rm eff}$ for each best-fit model, and also the contributions coming from each separate part of the likelihood. The $R-1$ value is also given, which measures the convergence of the sampling chains, with smaller values indicating better convergence. The sampling uncertainty on quoted mean values is typically of order $R-1$ in units of the standard deviation. %Files are provided in the format produced by the CosmoMC code, available from http://cosmologist.info/cosmomc. \newpage
{ "alphanum_fraction": 0.745785124, "avg_line_length": 73.7804878049, "ext": "tex", "hexsha": "cab3e274be878dad504017928650bfb6deb4981d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3c1d029b74034b92cb2974de15e4c18637a5277e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sjoudaki/CosmoJBD", "max_forks_repo_path": "batch1/outputs/tableHeader.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3c1d029b74034b92cb2974de15e4c18637a5277e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sjoudaki/CosmoJBD", "max_issues_repo_path": "batch1/outputs/tableHeader.tex", "max_line_length": 443, "max_stars_count": 1, "max_stars_repo_head_hexsha": "3c1d029b74034b92cb2974de15e4c18637a5277e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sjoudaki/CosmoJBD", "max_stars_repo_path": "batch1/outputs/tableHeader.tex", "max_stars_repo_stars_event_max_datetime": "2020-06-04T15:29:37.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-04T15:29:37.000Z", "num_tokens": 748, "size": 3025 }
\documentclass{subfile} \begin{document} \section{IMO}\label{sec:imo} \begin{problem}[IMO $1995$, problem $2$] Let $a,b,c$ be positive real numbers such that $abc=1$. Prove that \begin{align*} \dfrac{1}{a^{3}(b+c)}+\dfrac{1}{b^{3}(c+a)}+\dfrac{1}{c^{3}(a+b)} & \geq\dfrac{3}{2} \end{align*} \end{problem} \begin{problem}[IMO $1999$, problem $2$] Let $n\geq2$ be an integer. Find the least constant $C$ such that \begin{align*} \sum\limits_{1\leq i < j\leq n}x_{i}x_{j}(x_{i}^{2}+x_{j}^{2}) & \leq C\left(\sum\limits_{i=1}^{n}x_{i}\right)^{4} \end{align*} holds for all non-negative real numbers $x_{1},\ldots,x_{n}$. When does equality occur? \end{problem} \begin{problem}[IMO $2000$, problem $2$] Let $a,b,c$ be positive real numbers such that $abc=1$. Prove that \begin{align*} \left(a-1+\dfrac{1}{b}\right)\left(b-1+\dfrac{1}{c}\right)\left(c-1+\dfrac{1}{a}\right) & \leq1 \end{align*} \end{problem} \begin{problem}[IMO $2001$, problem $2$] Let $a,b,c$ be positive real numbers. Prove that \begin{align*} \dfrac{a}{\sqrt{a^{2}+8bc}}+\dfrac{b}{\sqrt{b^{2}+8ca}}+\dfrac{c}{\sqrt{c^{2}+8ab}} & \geq1 \end{align*} \end{problem} \begin{problem}[IMO $2003$, problem $5$] Let $n$ be a positive integer and $x_{1}\leq\ldots\leq x_{n}$ be real numbers. Prove that \begin{align*} \left(\sum\limits_{i,j=1}^{n}|x_{i}-x_{j}|\right)^{2} & \leq \dfrac{2(n^{2}-1)}{3}\sum\limits_{i,j=1}^{n}(x_{i}-x_{j})^{2} \end{align*} Show that equality holds if and only if $x_{1},\ldots,x_{n}$ form an arithmetic sequence. \end{problem} \begin{problem}[IMO $2004$, problem $4$] Let $n\geq3$ be an integer. Let $t_{1},\ldots,t_{n}$ be positive real numbers such that \begin{align*} n^{2}+1 & > (t_{1}+\ldots+t_{n})\left(\dfrac{1}{t_{1}}+\ldots+\dfrac{1}{t_{n}}\right) \end{align*} Show that $t_{i},t_{j},t_{k}$ are the sides of a triangle for all $1\leq i< j < k\leq n$. \end{problem} \begin{problem}[IMO $2005$, problem $3$] Let $x,y,z$ be positive real numbers such that $xyz\geq1$. Prove that \begin{align*} \dfrac{x^{5}-x^{2}}{x^{5}+y^{2}+z^{2}}+\dfrac{y^{5}-y^{2}}{x^{2}+y^{5}+z^{2}}+\dfrac{z^{5}-z^{2}}{x^{2}+y^{2}+z^{5}} & \geq0 \end{align*} \begin{solution} We have already solved it in \autoref{prob:imo2005-3}. \end{solution} \end{problem} \begin{problem}[IMO $2006$, problem $3$] Determine the least real number $M$ such that the inequality \begin{align*} \left|ab\left(a^{2}-b^{2}\right)+bc\left(b^{2}-c^{2}\right)+ca\left(c^{2}-a^{2}\right)\right| & \leq M(a^{2}+b^{2}+c^{2})^{2} \end{align*} holds for all real numbers $a,b,c$. \end{problem} \begin{problem}[IMO $2008$, problem $2$] Let $x,y,z\neq1$ be real numbers such that $xyz=1$. Prove that \begin{align*} \dfrac{x^{2}}{(x-1)^{2}}+\dfrac{y^{2}}{(y-1)^{2}}+\dfrac{z^{2}}{(z-1)^{2}} & \geq1 \end{align*} Also, prove that equality holds for infinitely many rational $x,y,z$ such that $xyz=1$ and $x,y,z\neq1$. \end{problem} \begin{problem}[IMO $2012$, problem $2$] Let $n\geq3$ be a positive integer and $a_2,\ldots,a_n$ be positive real numbers such that $a_2\cdots a_n=1$. Prove that \begin{align*} (1+a_2)^{2}\cdots(1+a_n)^{n} & > n^{n} \end{align*} \end{problem} \begin{problem}[IMO $2020$, problem $2$] Let $a,b,c,d$ be positive real numbers such that $a\geq b\geq c\geq d$ and $a+b+c+d=1$.
Prove that
\begin{align*}
(a+2b+3c+4d)a^{a}b^{b}c^{c}d^{d} & < 1
\end{align*}
\begin{solution}
Since $a+b+c+d=1$, by \nameref{thm:weightedpowermean} on $\omega=(a,b,c,d)$ and $\mathbf{a}=(a,b,c,d)$,
\begin{align*}
a\cdot a+b\cdot b+c\cdot c+d\cdot d & \geq a^{a}b^{b}c^{c}d^{d}
\end{align*}
So it is enough to prove that
\begin{align*}
(a+2b+3c+4d)(a^{2}+b^{2}+c^{2}+d^{2}) & < (a+b+c+d)^{3}
\end{align*}
Expanding both sides, we can check directly that this inequality holds. But we can prove it in a smarter way.
\begin{align*}
(a+b+c+d)^{3} & = (a+b+c+d)\left(a^{2}+b^{2}+c^{2}+d^{2}+2\sum ab\right)
\end{align*}
where the sum runs over all possible $\binom{4}{2}$ pairs. Then
\begin{align*}
(a+2b+3c+4d)(a^{2}+b^{2}+c^{2}+d^{2}) & < (a+b+c+d)^{3}\\
\iff (b+2c+3d)(a^{2}+b^{2}+c^{2}+d^{2}) & < 2(a+b+c+d)\left(\sum ab\right)
\end{align*}
Now, using $a\geq b\geq c\geq d$,
\begin{align*}
a^{2}+b^{2}+c^{2}+d^{2} & \leq a(a+b+c+d)
\end{align*}
So, it is enough to show that
\begin{align*}
a(b+2c+3d)(a+b+c+d) & < 2(a+b+c+d)\left(\sum ab\right)\\
\iff a(b+2c+3d) & < 2\sum ab\\
\iff ab+2ca+3da & < 2(ab+ac+ad+bc+bd+cd)\\
\iff da & < ab+2bc+2bd+2cd
\end{align*}
The last inequality obviously holds, since $da\leq ab$ (because $d\leq b$) and $bc,bd,cd>0$.
\end{solution}
\begin{remark}
We could also use \nameref{thm:weightedjensen} after using the fact that $\log(x)$ is concave. The buffalo way works here as well. But the calculation is not going to be pretty if you go that way. This problem was highly criticized within some forums such as the Art of Problem Solving. It was the first inequality problem at the IMO since $2012$. A lot of people thought that the days of inequality problems at the IMO were over. But when this problem appeared at the IMO in $2020$, many people complained and expressed their disappointment that the \textit{no inequality problem at the IMO} streak was finally broken with such a problem.
\end{remark}
\end{problem}
\begin{problem}[IMO Shortlist $2015$, A1]
Let $a,b,c$ be positive real numbers such that $\min\{ab,bc,ca\}\geq1$. Prove that
\begin{align*}
\sqrt[3]{(a^2+1)(b^2+1)(c^2+1)} & \leq\left(\dfrac{a+b+c}{3}\right)^2+1
\end{align*}
\end{problem}
\begin{problem}[IMO Shortlist $2015$, A8]
Determine the largest real number $a$ such that for all $n\geq1$ and for all real numbers $x_{0},\ldots,x_{n}$ satisfying
\begin{align*}
0 & = x_{0}<x_{1}<\ldots<x_{n}
\end{align*}
we have
\begin{align*}
\dfrac{1}{x_{1}-x_{0}}+\dfrac{1}{x_{2}-x_{1}}+\ldots+\dfrac{1}{x_{n}-x_{n-1}} & \geq a\left(\dfrac{2}{x_{1}}+\dfrac{3}{x_{2}}+\ldots+\dfrac{n+1}{x_{n}}\right)
\end{align*}
\end{problem}
\begin{problem}[IMO Shortlist $2018$, A7]
Find the maximal value of
\begin{align*}
S & = \sqrt[3]{\dfrac{a}{b+7}}+\sqrt[3]{\dfrac{b}{c+7}}+\sqrt[3]{\dfrac{c}{d+7}}+\sqrt[3]{\dfrac{d}{a+7}}
\end{align*}
where $a,b,c,d$ are non-negative real numbers which satisfy $a+b+c+d=100$.
\end{problem}
\end{document}
{ "alphanum_fraction": 0.5747380847, "avg_line_length": 38.0730337079, "ext": "tex", "hexsha": "ae817bda73cef30b41d43993aed2899fb524087b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ineq-tech/inequality", "max_forks_repo_path": "imo.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ineq-tech/inequality", "max_issues_repo_path": "imo.tex", "max_line_length": 631, "max_stars_count": 1, "max_stars_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ineq-tech/inequality", "max_stars_repo_path": "imo.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-06T08:29:30.000Z", "max_stars_repo_stars_event_min_datetime": "2022-02-06T08:29:30.000Z", "num_tokens": 2852, "size": 6777 }
\section{Introduction} \label{sec:introduction} Type errors are a common stumbling block for students trying to learn typed functional languages like \ocaml\ and \haskell. % Consider the ill-typed @fac@ function on the left in Figure~\ref{fig:factorial}. % The function returns @true@ in the base case (instead of @1@), and so \ocaml responds with the error message: % \begin{verbatim} This expression has type bool but an expression was expected of type int. \end{verbatim} % This message makes perfect sense to an expert who is familiar with the language and has a good mental model of how the type system works. % However, it may perplex a novice who has yet to develop such a mental model. % To make matters worse, unification-based type inference algorithms often report errors far removed from their source. % This further increases the novice's confusion and can actively mislead them to focus their investigation on an irrelevant piece of code. % Much recent work has focused on analyzing unification constraints to properly \emph{localize} a type error~\cite{Lerner2007-dt,Chen2014-gd,Zhang2014-lv,Pavlinovic2014-mr}, but an accurate source location does not explain \emph{why} the program is wrong. \begin{figure}[t] \centering \begin{minipage}{.49\linewidth} \centering \begin{ecode} let rec fac n = if n <= 0 then true else n * (*@\hlOcaml{fac (n-1)}@*) \end{ecode} \vspace{2em} \includegraphics[height=1.5in]{fac-overview.png} \end{minipage} \begin{minipage}{.49\linewidth} \centering \includegraphics[height=3in]{fac-long.png} \end{minipage} \vspace{1em} \caption{(top-left) An ill-typed \texttt{fac} function \hlOcaml{highlighting} the error location reported by \ocaml. (bottom-left) Dynamically witnessing the type error in \texttt{fac}, showing only function call-return pairs. (right) The same trace, fully expanded to show each small-step reduction in the computation.} \label{fig:factorial} \end{figure} In this paper we propose a new approach that explains static type errors by \emph{dynamically} witnessing how an ill-typed program goes wrong. % We have developed \toolname, an interactive tool that uses the source of the ill-typed function to automatically synthesize the result on the bottom-left in Figure~\ref{fig:factorial}, which shows how the recursive calls reduce to a configuration where the program ``goes wrong'' --- \ie\ the @int@ value @1@ is to be multiplied with the @bool@ value @true@. We achieve this via three concrete contributions. \paragraph{1. Finding Witnesses} Our first contribution is an algorithm for searching for \emph{witnesses} to type errors, \ie\ inputs that cause a program to go wrong~(\S~\ref{sec:searching-witness}). % This problem is tricky when we cannot rely on static type information, as we must avoid the trap of \emph{spurious} inputs that cause irrelevant problems that would be avoided by picking values of a different, relevant type. % We solve this problem by developing a novel operational semantics that combines evaluation and type inference. % We execute the program with \emph{holes} --- values whose type is unknown --- as the inputs. % A hole remains abstract until the evaluation context tells us what type it must have, for example the parameters to an addition operation must both be integers. % Our semantics conservatively instantiates holes with concrete values, dynamically inferring the type of the input until the program goes wrong. 
%
We prove that our procedure synthesizes \emph{general} witnesses, which means, intuitively, that if a witness is found for a given ill-typed function, then, \emph{for all} (inhabited) input types, there exist values that can make the function go wrong.

Given a witness to a type error, the novice may still be at a loss.
%
The standard \ocaml\ interpreter and debugging infrastructure expect well-typed programs, so they cannot be used to investigate \emph{how} the witness causes the program to crash.
%
More importantly, the execution itself may be quite long and may contain details not relevant to the actual error.

\paragraph{2. Visualizing Witnesses}
Our second contribution is an interactive visualization of the execution of purely functional \ocaml\ programs, well-typed or not~(\S~\ref{sec:interactive}).
%
We extend the semantics to also build a \emph{reduction graph} which records all of the small-step reductions and the context in which they occur.
%
The graph lets us visualize the sequence of steps from the source witness to the stuck term. The user can interactively expand the computation to expose intermediate steps by selecting an expression and choosing a traversal strategy.
%
The strategies include many of the standard debugging moves, \eg\ stepping \emph{forward} or \emph{into} or \emph{over} calls, as well as stepping or jumping \emph{backward} to understand how a particular value was created, while preserving a context of the intermediate steps that allow the user to keep track of a term's provenance.

We introduce a notion of \emph{jump-compressed} traces to abstract away the irrelevant details of a computation.
%
A jump-compressed trace includes only function calls and returns. For example, the trace in the bottom-left of Figure~\ref{fig:factorial} is jump-compressed.
%
Jump-compressed traces are similar to stack traces in that both show a sequence of function calls that lead to a crash. However, jump-compressed traces also show the return values of successful calls, which can be useful in understanding why a particular path was taken.

\paragraph{3. Evaluating Witnesses}
%
Of course, the problem of finding witnesses is undecidable in general. In fact, due to the necessarily conservative nature of static typing, there may not even exist any witnesses for a given ill-typed program.
%
Thus, our approach is a heuristic that is only useful if it can find \emph{compact} witnesses for \emph{real-world} programs.
%
Our third contribution is an extensive evaluation of our approach on two different sets of ill-typed programs obtained by instrumenting compilers used in beginner's classes~(\S~\ref{sec:evaluation}).
%
The first is the \uwbench\ data set~\cite{Lerner2007-dt} comprising \uwsize\ ill-typed programs.
%
The second is a new \ucsdbench\ data set, comprising \ucsdsize\ ill-typed programs.
%
We show that for both data sets, our technique is able to generate witnesses for around 85\% of the programs, in under a second in the vast majority of cases.
%
Furthermore, we show that a simple interactive strategy yields compact counterexample traces with at most 5 steps for 60\% of the programs, and at most 10 steps for over 80\% of the programs.
%
We can even use witnesses to \emph{localize} type errors with a simple heuristic that treats the values in a ``stuck'' term as \emph{sources} of typing constraints and the term itself as a \emph{sink}, achieving around 70\% accuracy in locating the source of the error.
The ultimate purpose of an error report is to help the programmer \emph{comprehend} and \emph{fix} problematic code. % Thus, our final contribution is a user study that compares \toolname's dynamic witnesses against \ocaml's type errors along the dimension of comprehensibility~(\S~\ref{sec:user-study}). % Our study finds that students given one of our witnesses are consistently more likely to correctly explain and fix a type error than those given the standard error message produced by the \ocaml compiler. % % \subparagraph{Witness Utility} % % Even if we can find small witnesses for the majority of type errors, it % may be that the witnesses do not actually help developers % \emph{understand} the errors. % % In other words, perhaps the static error message is sufficient to % diagnose and fix the error, or perhaps the witness simply does not add % enough information to make a difference. % % % Thus, our final contribution is a user study that compares the utility % of our witnesses with that of the error messages provided by the \ocaml % compiler~(\S~\ref{sec:user-study}). % \smallskip All together, our results show that in the vast majority of cases, (novices') ill-typed programs \emph{do} go wrong, and that the witnesses to these errors can be helpful in understanding the source of the error. This, in turn, opens the door to a novel dynamic way to explain, understand, and appreciate the benefits of static typing. \paragraph{Contributions Relative to Prior Publications} This paper extends our ICFP '16 paper of the same name~\cite{Seidel2016-ul}, focusing on the experimental evaluation. % First, in \S~\ref{sec:how-safe} we investigate the student programs for which we were unable to synthesize a witness. We group the failures into five categories, give representative examples, and suggest ways to improve our feedback in these cases. Interestingly, we find that in the majority of these failed cases, the programs do not actually admit a witness in our semantics. % Second, in \S~\ref{sec:locating} we attempt to use our witnesses to localize type errors with a simple heuristic. We treat the stuck term as a sink for typing constraints, and the values it contains as sources of constraints. We can then predict that either the stuck term or one of the terms that \emph{produced} a value it contains is likely at fault for the error. We compare our localizations to \ocaml and two state-of-the-art type error localization tools, and find that we are competitive with the state of the art. % Finally, we have also extended \S~\ref{sec:user-study} with an analysis of the statistical significance of our user study results. %%% Local Variables: %%% mode: latex %%% TeX-master: "main" %%% End:
{ "alphanum_fraction": 0.7816733892, "avg_line_length": 39.1457489879, "ext": "tex", "hexsha": "075827254d77c6f7cb3cdc9ce00be36dc1879965", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-12-08T21:52:40.000Z", "max_forks_repo_forks_event_min_datetime": "2016-10-11T21:36:24.000Z", "max_forks_repo_head_hexsha": "0de06f2c3a41063b1bf87d5f6dd11ad830be0512", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "gridaphobe/nanoml", "max_forks_repo_path": "paper/jfp-submission/introduction.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "0de06f2c3a41063b1bf87d5f6dd11ad830be0512", "max_issues_repo_issues_event_max_datetime": "2017-05-16T17:37:54.000Z", "max_issues_repo_issues_event_min_datetime": "2017-05-16T05:03:45.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "gridaphobe/nanoml", "max_issues_repo_path": "paper/jfp-submission/introduction.tex", "max_line_length": 320, "max_stars_count": 6, "max_stars_repo_head_hexsha": "0de06f2c3a41063b1bf87d5f6dd11ad830be0512", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "gridaphobe/nanoml", "max_stars_repo_path": "paper/jfp-submission/introduction.tex", "max_stars_repo_stars_event_max_datetime": "2021-02-25T23:09:59.000Z", "max_stars_repo_stars_event_min_datetime": "2017-02-06T02:33:28.000Z", "num_tokens": 2372, "size": 9669 }
\documentclass[letterpage,foldmark,10pt]{leaflet}
\usepackage{lettrine}
\usepackage{minted}
\usepackage{framed}
\usepackage{hyperref}

\title{Handout for Code Your Own Site}
\author{}
\date{}
\pagenumbering{gobble}

\begin{document}
\maketitle

\section{Getting Started}
Welcome to the class! The first step is to download the slides so you can follow along with the lectures: \url{http://bit.ly/2amZB8z}

Then, navigate to \url{https://github.com/clarissalittler/websites-for-beginners} and click on the ``download'' button and then the ``download zip'' option. Alternatively, just use the following link to download the zip directly: \url{https://github.com/clarissalittler/websites-for-beginners/archive/master.zip} (or \url{http://bit.ly/2anatmW})

\begin{framed}
By default, the zip file will be downloaded into your \verb"Downloads" directory. Navigate to your Downloads directory, then \emph{right}-click on the file \verb"websites-for-beginners-master.zip" and select the option ``extract here''. Then navigate into the directory and you will find yourself in the right directory for this course.
\end{framed}

\section{Links to resources}
\begin{itemize}
\item General JavaScript notes: \url{http://bit.ly/2ahKOZc}
\item Tutorial on HTML and CSS: \url{http://bit.ly/2af3ifd}
\item Tutorial on JavaScript and the Document Object Model: \url{http://bit.ly/2abXKFa}
\item Neocities: \url{https://neocities.org/}
\end{itemize}

\section{Glossary of Terms}
\begin{description}
\item [HTML] HyperText Markup Language, the content of a web page
\item [CSS] Cascading Style Sheets, how pages look
\item [JavaScript] A programming language that runs in the browser and provides interaction
\item [DOM] Document Object Model, which connects the JavaScript code to the page
\end{description}

\section{Common Tags}
\begin{description}
\item [h1] heading tag, for headlines, section headings, chapter titles
\item [p] paragraph tag, holds basic text
\item [ol] ordered list tag, for numbered or enumerated lists
\item [ul] unordered list tag, for bulleted---unnumbered---lists
\item [li] list items, go inside ol or ul tags
\item [style] CSS code goes between the style tags
\item [script] JavaScript code goes between the script tags
\item [a] anchor tag is used to make links
\end{description}

\section{HTML Template}
\begin{minted}{html}
<!doctype html>
<html>
  <head>
  </head>
  <body>
  </body>
</html>
\end{minted}

\newpage

\section{HTML Example}
\begin{minted}{html}
<!doctype html>
<html>
  <body>
    <h1>This is a heading</h1>
    <p>This is a paragraph</p>
  </body>
</html>
\end{minted}

\section{Basic Selectors}
\begin{description}
\item [.name] selection by class
\item [\#name] selection by ID
\item [name] selection by tag name
\end{description}

\section{Common CSS Properties}
\begin{description}
\item [width] the width of the element
\item [height] the height of the element
\item [color] the color of the text
\item [background-color] the color of the background of the element
\item [display] how the element is displayed: block and none are two possible values
\end{description}

\section{CSS Example}
\begin{minted}{html}
<!doctype html>
<html>
  <head>
    <style>
      p {
        color: red;
      }
    </style>
  </head>
  <body>
    <h1>This is a heading</h1>
    <p>This is a paragraph</p>
  </body>
</html>
\end{minted}

\section{Basic JavaScript Syntax}
\subsection{Creating Variables}
\begin{minted}{js}
var variableName = 20;
variableName;
variableName = 30;
\end{minted}

\subsection{Arithmetic}
\begin{minted}{js}
10 + 10;
20 - 5;
30 * 3;
30 / 10;
\end{minted}

\subsection{Strings}
\begin{minted}{js}
"this is a string";
string"; 'as is this'; "as is 'this'"; 'and is "this"'; \end{minted} \begin{framed} You can mix quotation types for typesetting purposes, but otherwise they're the same. \end{framed} \subsection{Functions} \subsubsection{Creating functions} \begin{minted}{js} function functionName (x) { console.log(x); return x + 10; } \end{minted} \subsubsection{Using functions} \begin{minted}{js} functionName(10); console.log("thing"); \end{minted} \subsection{Objects} \begin{minted}{js} var myObject = {property1 : value1, property2 : value2}; myObject.property1; myObject.property2 = 100; \end{minted} \section{Document Object Model} \subsection{Preliminary} \begin{minted}{js} window.onload = function () { (your code here) }; \end{minted} \subsection{Objects and functions} \begin{description} \item [document] the main object that connects JavaScript to the web page \item [document.createElement] function that takes a tag name and returns an object \item [document.findElementById] function that retrieves an element by ID \item [document.createTextNode] function that creates text from strings \item [e.appendChild] function that attaches one element to another \item [e.style] object that contains CSS style properties \item [e.addClass] function that adds a CSS class to an element \end{description} \end{document}
{ "alphanum_fraction": 0.716536965, "avg_line_length": 30.5952380952, "ext": "tex", "hexsha": "9b18d2ea656c6cbc7236cd2a8265c0149f0bd790", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e13639e86fdb930bd7fa9aabec59fdb4f85c26d1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "clarissalittler/websites-for-beginners", "max_forks_repo_path": "MakingSitesLeaflet.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e13639e86fdb930bd7fa9aabec59fdb4f85c26d1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "clarissalittler/websites-for-beginners", "max_issues_repo_path": "MakingSitesLeaflet.tex", "max_line_length": 338, "max_stars_count": null, "max_stars_repo_head_hexsha": "e13639e86fdb930bd7fa9aabec59fdb4f85c26d1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "clarissalittler/websites-for-beginners", "max_stars_repo_path": "MakingSitesLeaflet.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1451, "size": 5140 }
% \section{201809-4} % \input{problem/14/201809-4-p.tex}
{ "alphanum_fraction": 0.6842105263, "avg_line_length": 19, "ext": "tex", "hexsha": "6e5a110cac61ae9eb2193cfb64b756180df16ef3", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2022-01-28T15:33:04.000Z", "max_forks_repo_forks_event_min_datetime": "2022-01-01T06:04:16.000Z", "max_forks_repo_head_hexsha": "9d432ec2255b170f2bb1e0879e42c93f80a1b21c", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "lxlonlyn/CSP-Project", "max_forks_repo_path": "problem/14/201809-4.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "9d432ec2255b170f2bb1e0879e42c93f80a1b21c", "max_issues_repo_issues_event_max_datetime": "2022-02-03T15:32:34.000Z", "max_issues_repo_issues_event_min_datetime": "2022-01-22T15:33:17.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "lxlonlyn/CSP-Project", "max_issues_repo_path": "problem/14/201809-4.tex", "max_line_length": 35, "max_stars_count": 5, "max_stars_repo_head_hexsha": "9d432ec2255b170f2bb1e0879e42c93f80a1b21c", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "lxlonlyn/CSP-Project", "max_stars_repo_path": "problem/14/201809-4.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-27T03:58:42.000Z", "max_stars_repo_stars_event_min_datetime": "2022-01-22T15:34:01.000Z", "num_tokens": 24, "size": 57 }
\section{Formulary: Queueing Theory} \label{sec:Formulary-Queueing-Theory} \subsection{Foundations} \begin{description} \item [Inter-arrival time] \begin{equation} \expected{\tau} = \frac{1}{\lambda} \end{equation} \item [Service time] \begin{equation} \expected{S} = \frac{1}{\mu} \end{equation} \item [Response time] \begin{equation} \expected{T} = \expected{T_{Q}} + \expected{S} \end{equation} \item [Jobs] \begin{equation} \expected{N} = \expected{N_{Q}} + \expected{N_{S}} \end{equation} \item [Server demand] \begin{equation} \expected{D_{i}} = \expected{V_{i}} \cdot \expected{S_{i}} \end{equation} \begin{equation} \expected{D_{i}} = \frac{B_{i}}{C} \end{equation} \item [Total demands] \begin{equation} D = \sum_{i=1}^{m}\expected{D_{i}} \end{equation} \item [Maximum demand] \begin{equation} D_{max} = \max\{\expected{D_{i}}\} \end{equation} \end{description} \subsection{Operational Laws} \begin{description} \item [Stability Law] \begin{equation} \lambda < \mu \end{equation} \item [Little's Law for open systems] \begin{equation} \expected{N} = \lambda \cdot \expected{T} \end{equation} \item [Little's Law for batch systems] \begin{equation} N = X \cdot \expected{T} \end{equation} \item [Little's Law for interactive systems] \begin{equation} \expected{T_{R}} = \frac{N}{X} - \expected{Z} \end{equation} \item [Utilization Law] \begin{equation} \varrho_{i} = \frac{\lambda_{i}}{\mu_{i}} \end{equation} \begin{equation} \varrho_{i} = X \cdot \expected{D_{i}} \end{equation} \begin{equation} \varrho_{i} = \probability{server \; i \; busy} = \expected{requests \; to \; server \; i} = \expected{N_{S,i}} \end{equation} \item [Forced Flow Law] \begin{equation} X_{i} = \expected{V_{i}} \cdot X \end{equation} \item [Bottleneck Law] \begin{equation} \expected{D_{i}} = \expected{V_{i}} \cdot \expected{S_{i}} \end{equation} \end{description} \subsection{Asymptotic Analysis for closed systems} \begin{description} \item [Throughput Asymptotes] \begin{equation} X \leq \min \Big\{ \frac{N}{D + \expected{Z}} , \frac{1}{D_{max}} \Big\} \end{equation} \item [Response Time Asymptotes] \begin{equation} \expected{R} \geq \max\{D , N \cdot D_{max} - \expected{Z} \} \end{equation} \item [Queue Point] \begin{equation} N^{*} = \frac{D + \expected{Z}}{D_{max}} \end{equation} \end{description} \subsection{M/M/1} \begin{description} \item [Utilization] \begin{equation} \varrho = \frac{\lambda}{\mu} \end{equation} \item [State Probability] \begin{equation} \pi_{i} = \varrho^{i}(1-\varrho) \end{equation} \item [Mean System Jobs] \begin{equation} \expected{N} = \frac{\varrho}{(1-\varrho)} \end{equation} \item [Mean Response Time] \begin{equation} \expected{T} = \frac{1}{\mu-\lambda} \end{equation} \end{description} \subsection{M/M/m} \begin{description} \item [Utilization] Both system and each server \begin{equation} \varrho = \frac{\lambda}{m\mu} \end{equation} \item [Resource Requirement] \begin{equation} R = \expected{N_{S}} = \frac{\lambda}{\mu} \end{equation} \item [State Probability] \begin{equation} \pi_{i} = \left\{\begin{matrix} \Big(\frac{\lambda}{\mu}\Big)^{i}\frac{1}{i!}\pi_{0} & i \leq m\\ \Big(\frac{\lambda}{\mu}\Big)^{i}\frac{1}{m!}\Big(\frac{1}{m}\Big)^{i-m}\pi_{0} & i > m \end{matrix}\right. \end{equation} or equivalently \begin{equation} \pi_{i} = \frac{m^{q} \varrho^{i}}{q!} \cdot \pi_{0} \qquad with \quad q=\min\{i,m\} \end{equation} with \begin{equation} \pi_{0} = \Big[ \sum_{i=0}^{m-1} \frac{(m \varrho)^{i}}{i!} + \frac{(m \varrho)^{m}}{m! 
(1- \varrho)} \Big]^{-1}
\end{equation}
where $q$ is the number of busy servers.
\item [Queue Probability]
\begin{equation}
P_{Q} = \frac{(m \varrho)^{m}}{m!(1-\varrho)} \cdot \pi_{0}
\end{equation}
\item [Mean Queue Jobs]
\begin{equation}
\expected{N_{Q}} = \frac{\varrho}{(1-\varrho)} \cdot P_{Q}
\end{equation}
\end{description}

\subsection{M/M/m/m}
\begin{description}
\item [Utilization] Both system and each server
\begin{equation}
\varrho = \frac{\lambda}{m\mu}
\end{equation}
\item [State Probability]
\begin{equation}
\pi_{i} = \Big( \frac{\lambda}{\mu} \Big)^{i} \frac{1}{i!} \pi_{0}
\end{equation}
with
\begin{equation}
\pi_{0} = \Big[ \sum_{i=0}^{m} \Big( \frac{\lambda}{\mu} \Big)^{i} \frac{1}{i!} \Big]^{-1}
\end{equation}
\item [Block Probability]
\begin{equation}
P_{block} = \pi_{m} = \frac{\Big(\frac{\lambda}{\mu}\Big)^{m} \frac{1}{m!}}{\sum_{i=0}^{m} \Big( \frac{\lambda}{\mu} \Big)^{i} \frac{1}{i!}}
\end{equation}
or equivalently
\begin{equation}
\begin{split}
P_{block} & = \pi_{m} \cdot \frac{e^{-\frac{\lambda}{\mu}}}{e^{-\frac{\lambda}{\mu}}} \\
& = \frac{e^{-\frac{\lambda}{\mu}} \cdot \Big(\frac{\lambda}{\mu}\Big)^{m} \frac{1}{m!}}{\sum_{i=0}^{m} e^{-\frac{\lambda}{\mu}} \cdot \Big( \frac{\lambda}{\mu} \Big)^{i} \frac{1}{i!}} \\
& = \frac{\probability{X = m}}{\probability{X \leq m}}
\end{split}
\end{equation}
where $X \sim Poisson(\frac{\lambda}{\mu})$.
\end{description}

\subsection{M/M/\texorpdfstring{$\infty$}{Infinity}}
\begin{description}
\item [Utilization]
\begin{equation}
\varrho = \frac{\lambda}{\mu}
\end{equation}
\item [State Probability]
\begin{equation}
\pi_{i} = \Big(\frac{\lambda}{\mu}\Big)^{i} \frac{1}{i!} \pi_{0}
\end{equation}
with
\begin{equation}
\pi_{0} = e^{-\frac{\lambda}{\mu}}
\end{equation}
That is $N^{M/M/\infty} \sim Poisson(\frac{\lambda}{\mu})$.
\item [Mean System Jobs]
\begin{equation}
\expected{N} = \frac{\lambda}{\mu}
\end{equation}
\item [Mean Queue Jobs]
\begin{equation}
\expected{N_{Q}} = 0
\end{equation}
\item [Response Time]
\begin{equation}
\expected{T} = \expected{T_{S}} = \frac{1}{\mu}
\end{equation}
\item [Delay]
\begin{equation}
\expected{T_{Q}} = 0
\end{equation}
\end{description}

\subsection{M/G/1}
\begin{description}
\item [Mean Queue/Excess Time]
\begin{equation}
\expected{T_{Q}}=\frac{\varrho}{1-\varrho}\expected{S_{e}}
\end{equation}
\item [Pollaczek-Khinchin Formula]
\begin{equation}
\expected{T_{Q}}=\frac{\varrho}{1-\varrho}\cdot\frac{\expected{S^{2}}}{2\expected{S}}
\end{equation}
\begin{equation}
\expected{T_{Q}}=\frac{\varrho}{1-\varrho}\cdot\frac{\expected{S}}{2}\cdot(C_{S}^{2}+1)
\end{equation}
\begin{equation}
\expected{T_{Q}}=\frac{\lambda\expected{S^{2}}}{2(1-\varrho)}
\end{equation}
\end{description}

\subsection{Relations between queues}
\begin{description}
\item [Block M/M/m/m vs. Queueing M/M/1]
\begin{equation}
P_{block} = \frac{(1-\varrho)P_{Q}}{1-\varrho P_{Q}}
\end{equation}
\end{description}

\subsection{Capacity Provisioning}
\begin{description}
\item [Square-Root Staffing Rule] For any $M/M/m$ with high resource requirement $R$, the minimum number of servers $m_{\alpha}^{*}$ to ensure $P_{Q} < \alpha$ is
\begin{equation}
m_{\alpha}^{*} \approx R + c \sqrt{R}
\end{equation}
with $c$ the solution of the equation
\begin{equation}
\frac{c \Phi(c)}{\phi(c)} = \frac{1 - \alpha}{\alpha}
\end{equation}
where $\Phi(\cdot)$ is the c.d.f. of the Standard Normal, and $\phi(\cdot)$ is its p.d.f.
\end{description}

\subsection{Open Jackson Networks}
\begin{description}
\item [State Probability]
\begin{equation}
\pi_{n_{1},...,n_{m}} = \prod_{i=1}^{m} \varrho_{i}^{n_{i}} (1-\varrho_{i})
\end{equation}
\item [Probability Jobs at Server]
\begin{equation}
\probability{n_{i} jobs at server i} = \varrho_{i}^{n_{i}} (1-\varrho_{i})
\end{equation}
\item [Mean Server Jobs]
\begin{equation}
\expected{N_{i}} = \frac{\varrho_{i}}{1 - \varrho_{i}}
\end{equation}
\end{description}

\subsection{Closed Jackson Networks}
\begin{description}
\item [State Probability]
\begin{equation}
\pi_{n_{1},...,n_{m}} = C \cdot \prod_{i=1}^{m} \varrho_{i}^{n_{i}}
\end{equation}
where $C$ is the \textit{Normalizing Constant} determined as the solution of $ \sum_{(n_{1},...,n_{m}) \in States} \pi_{n_{1},...,n_{m}} = 1 $
\end{description}

\subsection{Mean Value Analysis}
\begin{description}
\item [MVA Response Time]
\begin{equation}
\expected{T_{j}^{M}} = \frac{1}{\mu_{j}} + \frac{p_{j} \lambda^{(M-1)} \expected{T_{j}^{(M-1)}}}{\mu_{j}}
\end{equation}
\item [MVA Fraction of Arrivals]
\begin{equation}
p_{j} = \frac{\lambda_{j}^{M}}{\lambda^{M}} = \frac{V_{j}}{\sum_{j=1}^{m} V_{j}}
\end{equation}
\item [MVA Total Arrival Rate]
\begin{equation}
\lambda^{(M-1)} = \frac{M-1}{\sum_{j=1}^{m} p_{j} \expected{T_{j}^{(M-1)}}}
\end{equation}
\end{description}

\subsection{Network of PS Servers}
\begin{description}
\item [BCMP Theorem]
\begin{equation}
\pi_{n_{1},...,n_{m}} = \prod_{i=1}^{m} \varrho_{i}^{n_{i}} (1-\varrho_{i})
\end{equation}
where $\varrho_{i}=\lambda_{i}\expected{S_{i}}$
\item [Probability Jobs at Server]
\begin{equation}
\probability{n_{i} jobs at server i} = \varrho_{i}^{n_{i}} (1-\varrho_{i})
\end{equation}
where $\varrho_{i}=\lambda_{i}\expected{S_{i}}$
\end{description}

\subsection{Renewal-Reward Theory}
\begin{description}
\item [Renewal-Reward Theorem]
\begin{equation}
\lim_{t \to \infty}\frac{R(t)}{t}=\frac{\expected{R}}{\expected{X}} \quad \forall 0\leq\expected{R}<\infty,0<\expected{X}<\infty
\end{equation}
where $X$ is the inter-arrival time, $R$ is the number of rewards earned, and $R(t)$ is the total rewards by time $t$.
\end{description}

\subsection{Task Assignment for Server Farms}
\begin{description}
\item [Spare Server]
\begin{equation}
\sharp SpareServers=m-\lceil R\rceil
\end{equation}
\item [Mean Queue Time for M/G/m]
\begin{equation}
\expected{T_{Q}^{M/G/m}}\approx\Big(\frac{C^{2}+1}{2}\Big)\expected{T_{Q}^{M/M/m}}
\end{equation}
\item [Worst-Case Competitive Ratio]
\begin{equation}
CR_{\mathcal{P}}=\max_{\mathcal{A}}r_{\mathcal{P}}(\mathcal{A})
\end{equation}
with
\begin{equation}
r_{\mathcal{P}}(\mathcal{A})=\frac{\expected{T(\mathcal{A})}^{\mathcal{P}}}{\expected{T(\mathcal{A})}^{\mathcal{OPT}}}
\end{equation}
\end{description}

\subsection{Scheduling}
\begin{description}
\item [Slowdown]
\begin{equation}
Slowdown=\frac{T}{S}
\end{equation}
\begin{equation}
\expected{Slowdown(x)}=\frac{\expected{T(x)}}{x}
\end{equation}
\begin{equation}
\expected{Slowdown}=\int_{x}\frac{\expected{T(x)}}{x}f_{S}(x)\partial x
\end{equation}
\item [Fairness]
\begin{equation}
\expected{Slowdown^{\mathit{P}}(x)}<\expected{Slowdown^{\mathit{PS}}(x)}\quad\forall x
\end{equation}
\item [Unfairness]
\begin{equation}
\expected{Slowdown^{\mathit{P}}(x)}>\expected{Slowdown^{\mathit{PS}}(x)}\quad\exists x
\end{equation}
\item [Starvation]
\begin{equation}
\exists x.\expected{Slowdown^{\mathit{P}}(x)}>\expected{Slowdown^{\mathit{PS}}(x)}
\end{equation}
\item [Response Time for M/G/1/FCFS]
\begin{equation}
\expected{T}=\expected{S}+\frac{\lambda\expected{S^{2}}}{2(1-\varrho)}
\end{equation}
\item [Response Time for M/G/1/PS]
\begin{equation}
\expected{T(x)}^{\mathit{M/G/1/PS}}=\frac{x}{1-\varrho}
\end{equation}
\item [Response Time for M/G/1/PLCFS]
\begin{equation}
\expected{T(x)}^{\mathit{M/G/1/PLCFS}}=\frac{x}{1-\varrho}
\end{equation}
\item [Response Time for M/G/1/FB]
\begin{equation}
\expected{T(x)}^{\mathit{FB}}= \frac{x(1-\varrho_{\overline{x}})+\frac{1}{2}\lambda\expected{S_{\overline{x}}^{2}}} {\Big(1-\varrho_{\overline{x}}\Big)^{2}}
\end{equation}
\item [Queue Time for M/G/1/SJF]
\begin{equation}
\expected{T_{Q}(x)}^{\mathit{SJF}}= \frac{\varrho\expected{S^{2}}}{2\expected{S}}\cdot\frac{1}{\Big(1-\varrho_{x}\Big)^{2}}
\end{equation}
where $\varrho_{x}=\lambda F(x)\cdot\int_{t=0}^{x}t\frac{f(t)}{F(t)}\partial t$ is the arrival rate of jobs of size no more than $x$, namely $\lambda F(x)$, multiplied by the expected size of jobs of size no more than $x$, namely $\int_{t=0}^{x}t\frac{f(t)}{F(t)}\partial t$.
\item [Response Time for M/G/1/PSJF]
\begin{equation}
\expected{T(x)}^{\mathit{PSJF}}= \expected{Res(x)}^{\mathit{PSJF}}+\expected{Wait(x)}^{\mathit{PSJF}}= \frac{x}{1-\varrho_{x}}+ \frac{\frac{\lambda}{2}\int_{0}^{x}f(t)t^{2}\partial t} {\Big(1-\varrho_{x}\Big)^{2}}
\end{equation}
where $\varrho_{x}=\lambda\int_{t=0}^{x}tf(t)\partial t$ is the load made up by jobs of size less than $x$.
\item [Response Time for M/G/1/SRPT]
\begin{equation}
\expected{T(x)}^{\mathit{SRPT}}= \expected{Wait(x)}^{\mathit{SRPT}}+\expected{Res(x)}^{\mathit{SRPT}}= \frac{\lambda}{2}\frac{\int_{t=0}^{x}t^{2}f(t)\partial t + x^{2}(1-F(x))}{\Big(1-\varrho_{x}\Big)^{2}}+ \int_{t=0}^{x}\frac{1}{1-\varrho_{t}}\partial t
\end{equation}
\item [Relation between scheduling policies]
\begin{equation}
\expected{T(x)}^{\mathit{SRPT}}\leq\expected{T(x)}^{\mathit{PS}}\quad \forall x,\forall\varrho<\frac{1}{2}
\end{equation}
\begin{equation}
\expected{T(x)}^{\mathit{SRPT}}\leq\expected{T(x)}^{\mathit{FB}}
\end{equation}
\begin{equation}
\expected{T^{\mathit{FCFS}}}=\expected{T^{\mathit{LCFS}}}=\expected{T^{\mathit{RO}}}
\end{equation}
\end{description}
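
As a quick illustration of how the formulas above combine: using the M/G/1/PS response time together with the definition of slowdown gives, for every job size $x$,
\begin{equation}
\expected{Slowdown^{\mathit{PS}}(x)}=\frac{\expected{T(x)}^{\mathit{M/G/1/PS}}}{x}=\frac{1}{1-\varrho}
\end{equation}
that is, under PS every job size experiences the same expected slowdown, which is why PS serves as the fairness baseline in the definitions above.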
{ "alphanum_fraction": 0.6191002135, "avg_line_length": 22.7869127517, "ext": "tex", "hexsha": "c6f1102762e5ac9f8506f231daf5bebfd57ce907", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-02-17T13:30:49.000Z", "max_forks_repo_forks_event_min_datetime": "2018-02-17T13:30:49.000Z", "max_forks_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "gmarciani/research", "max_forks_repo_path": "performance-modeling/sec/formulary-queueing-theory.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "gmarciani/research", "max_issues_repo_path": "performance-modeling/sec/formulary-queueing-theory.tex", "max_line_length": 277, "max_stars_count": 3, "max_stars_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "gmarciani/research", "max_stars_repo_path": "performance-modeling/sec/formulary-queueing-theory.tex", "max_stars_repo_stars_event_max_datetime": "2018-07-20T12:54:12.000Z", "max_stars_repo_stars_event_min_datetime": "2017-07-27T13:31:43.000Z", "num_tokens": 5578, "size": 13581 }
\documentclass[12pt]{hitec}
\usepackage{fancyvrb}
\usepackage{hyperref}

\title{Problem set 1}
\author{Joe Armstrong}
\date{}

\begin{document}
\maketitle
\tableofcontents

\section{Introduction}

These problems are for lectures E1 and E2. Additional material can be found in the course book {\sl Programming Erlang, 2nd edition}.

\section{Problems}

The first exercises involve correcting incorrect programs. The program \verb+bad_prog1.erl+ in my github repository \newline \href{https://github.com/joearms/paradis\_jan\_2016/blob/master/lectures/bad\_prog1.erl} {https://github.com/joearms/paradis\_jan\_2016/} has a large number of errors.

Note: All programs you hand in should contain unit tests together with documentation and instructions on how to run the program. For v\"{a}lgodk\"{a}nd you should solve the sections marked advanced, and your programs should be of production quality. This means that not only should the code be totally correct, it should also be well documented with no spelling errors and have good test coverage.

\VerbatimInput[frame=single]{./bad_prog1.erl}

Correct all the errors that you can find. Write unit tests\footnote{See the note on unit tests.} for this module and run these tests. See how many errors you can find {\sl before} running this through the compiler.

Hint: Read the error messages - correct the first error and recompile before tackling the second error. Having fixed the first error, the error messages that the compiler generates may change on subsequent recompilation (which is why it is good to fix one error at a time). As you get more experienced, look only at the line number in the error message, then try to see the error in your code at this line without reading the remainder of the diagnostic.

Note: Good programs should have good documentation and comments. Correct the comments as well as the code, and document the module in an appropriate manner.

\section{A bank account}

The following program implements a simple bank process, where the code you have to write has been replaced by the atom \verb+implement_this+. Replace the \verb+implement_this+ atoms by correct code.

\VerbatimInput[frame=single]{./bank.erl}

When your program works the unit test \verb+bank:test()+ should evaluate to \verb+horray+. Look at the code in \verb+e1.erl+ for inspiration.

\section{Advanced: Many bank accounts}

The previous bank account was for one person only. Implement a module \verb+multi_bank.erl+ that handles bank accounts for many people. You'll have to add an additional argument \verb+Who+ to all the functions in the API, so for example where \verb+bank.erl+ exports \verb+add(Pid, X)+, \verb+multi_bank.erl+ would have a function \verb+add(Pid, Who, X)+ where \verb+Who+ is a person name (for example, the atom \verb+joe+).

Add an additional function \verb+lend(Pid, From, To, Ammount)+ to transfer money between bank accounts of the people \verb+From+ and \verb+To+.

\section{Unit Tests}

Here's an example of a simple module, containing a test function \verb+test()+.

\VerbatimInput[frame=single]{./math0.erl}

The unit tests are in the body of the function \verb+test()+. To run the tests we proceed as follows:

\begin{verbatim}
> erl
Eshell V7.1 (abort with ^G)
1> c(math0).
{ok,math0}
2> math0:test().
horray
\end{verbatim}

Here we have used pattern matching to test the return values of the function. The line:

\begin{verbatim}
24 = fac(4)
\end{verbatim}

will fail with a pattern matching error if \verb+fac(4)+ does not return \verb+24+ (which is the value of factorial 4).
If we call \verb+fac(N)+ with an argument that is not an integer, the function should raise an exception. We can test that it has thrown an exception with a statement like:

\begin{verbatim}
{'EXIT', _} = (catch fac(X))
\end{verbatim}

In a later lecture I'll talk about catching exceptions, but for now all you need to know is that \verb+catch(F(X))+ traps errors occurring during the evaluation of \verb+F(X)+, converting them to error tuples of the form \verb+{'EXIT', Why}+ which we match with the pattern \verb+{'EXIT', _}+.

\section*{Notes}

\href{https://github.com/joearms/paradis\_jan\_2016/blob/master/lectures/} {https://github.com/joearms/paradis\_jan\_2016/blob/master/lectures/} contains material referred to in the problems.

If you want to build the lectures locally you will need a working version of Erlang and pdflatex installed on your machine.

\end{document}
{ "alphanum_fraction": 0.7625979843, "avg_line_length": 31.2237762238, "ext": "tex", "hexsha": "6d5c9a68aa2f4950dfec94e56a0c0223e680655d", "lang": "TeX", "max_forks_count": 6, "max_forks_repo_forks_event_max_datetime": "2020-02-20T21:27:59.000Z", "max_forks_repo_forks_event_min_datetime": "2017-02-16T10:22:05.000Z", "max_forks_repo_head_hexsha": "1963c34e98bcd4883da7e5a5870e33c343797855", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "joearms/paradis_jan_2016", "max_forks_repo_path": "lectures/problems_1.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "1963c34e98bcd4883da7e5a5870e33c343797855", "max_issues_repo_issues_event_max_datetime": "2017-02-16T13:46:09.000Z", "max_issues_repo_issues_event_min_datetime": "2017-02-16T13:44:50.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "joearms/paradis_jan_2016", "max_issues_repo_path": "lectures/problems_1.tex", "max_line_length": 88, "max_stars_count": 7, "max_stars_repo_head_hexsha": "1963c34e98bcd4883da7e5a5870e33c343797855", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "joearms/paradis_jan_2016", "max_stars_repo_path": "lectures/problems_1.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-25T15:17:53.000Z", "max_stars_repo_stars_event_min_datetime": "2017-01-24T21:28:23.000Z", "num_tokens": 1166, "size": 4465 }
\section{Empirical Studies on Performance Bugs}
\label{sec:2_study}

Recently, several empirical studies have been conducted on real-world performance bugs. Each has a different focus.

\citet{Zaman2012MSR} compare the qualitative differences between performance bugs and non-performance bugs in terms of impact, context, fix, and fix validation by studying 400 bugs from Mozilla Firefox and Google Chrome. Their study examines different aspects of performance bugs from ours. For example, in their fix study, they focus on how much discussion takes place among developers, whether the bug depends on other bugs, and whether the reporter provides hints or patches when reporting the bug. In our fix study, we focus on the fix strategies used in the final patches.

\citet{SmartphoneStudy} randomly collect 70 performance bugs from smartphone applications to study their characteristics. Similar to our study, they find that there are common patterns among performance bugs and that these patterns can be used to detect previously unknown bugs. Since their performance bugs are collected from smartphone applications, some of their findings are unique to mobile performance bugs. For example, the three most common types of performance bugs in their study are GUI lagging, energy leaks, and memory bloat. They also report some findings opposite to ours. For example, they find that small-scale inputs are sufficient to expose performance bugs.

The empirical study conducted by \citet{Nistor2013MSR} is similar to our bug characteristics study in Chapter~\ref{chap:sd} in that it also finds that performance problems take a long time to get diagnosed and that the help from profilers is very limited. However, the similarity ends there. Different from our study in Chapter~\ref{chap:study}, this work does not investigate the root causes of performance bugs. Different from our study in Chapter~\ref{chap:sd}, it does not study how performance problems are observed and reported by end users. Its bug set includes many problems that are not perceived by end users and are instead discovered through developers' code inspection.

\citet{HuangRegression} study 100 performance regression issues to understand what kinds of changes are more likely to introduce performance regressions. Similar to our study, they find that performance regressions are more likely to happen inside a loop and that many performance regressions involve invoking expensive function calls. Different from our work, their study focuses only on performance regressions.
{ "alphanum_fraction": 0.8135658915, "avg_line_length": 58.6363636364, "ext": "tex", "hexsha": "97ae23fa829134a40b31fe198db00e6121847eac", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d5820825a2e9e3c53de37f2925ea0d87b8b2c73b", "max_forks_repo_licenses": [ "Artistic-1.0-Perl", "ClArtistic" ], "max_forks_repo_name": "songlh/thesis", "max_forks_repo_path": "chapter-background/0_study.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d5820825a2e9e3c53de37f2925ea0d87b8b2c73b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Artistic-1.0-Perl", "ClArtistic" ], "max_issues_repo_name": "songlh/thesis", "max_issues_repo_path": "chapter-background/0_study.tex", "max_line_length": 123, "max_stars_count": null, "max_stars_repo_head_hexsha": "d5820825a2e9e3c53de37f2925ea0d87b8b2c73b", "max_stars_repo_licenses": [ "Artistic-1.0-Perl", "ClArtistic" ], "max_stars_repo_name": "songlh/thesis", "max_stars_repo_path": "chapter-background/0_study.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 528, "size": 2580 }
\documentclass[revision-guide.tex]{subfiles} %% Current Author: PS \setcounter{chapter}{0} \begin{document} \chapter{Mechanics} \begin{content} \item scalars and vectors \item moment of a force \item kinematics \item Newton’s laws of motion \item conservation of linear momentum \item density \item pressure \end{content} \section*{Candidates should be able to:} \section{Scalars and Vectors} \spec{distinguish between scalar and vector quantities and give examples of each} A scalar quantity\footnote{strictly we are modelling a physical quantity as a mathematical object} is one which has only a magnitude whereas a vector has \emph{both} magnitude and direction. We often use positive and negative values to indicate direction (e.g. $v=-2\ ms^{-1}$) but this does not mean that all negative values are vectors! Note that there are different ways of multiplying vectors and scalars. Two vectors can be multiplied to give a scalar \emph{or} a vector. For example, work done is the (scalar) product of force and displacement, both vectors. \spec{resolve a vector into two components at right angles to each other by drawing and by calculation} Vectors can be split into two components using trigonometry. The diagram below shows a velocity vector being split into horizontal and vertical components $v_x$ and $v_y$. \begin{figure}[h] \begin{center} \begin{tikzpicture} \draw[very thick, black, ->] (0,0) -- node[above] {$\mathbf{v}$} (5,2.5); \draw[very thick, purple, ->] (0,0) -- node[below] {$\mathbf{v_x}$} (5,0); \draw[very thick, purple, ->] (0,0) -- node[left]{$\mathbf{v_y}$}(0,2.5); \draw [very thick,gray](1,0) arc (0:26.6:1); \draw [gray](1,0.25) node[right]{$\theta$}; \draw (7,1.25) node[right] { $\begin{aligned} \mathbf{v}&=\mathbf{v_x + v_y}\\ v_x &= v\cos{\theta}\\ v_y &= v\sin{\theta} \end{aligned}$}; \end{tikzpicture} \end{center} \end{figure} \spec{combine any number of coplanar vectors at any angle to each other by drawing} Vectors can be added by placing them end to end. The resultant vector is the one joining the start of the first vector to the end of the final vector. Its magnitude and direction can be calculated by trigonometry or scale drawing. \begin{figure}[h] \begin{center} \begin{tikzpicture} \draw[thick,->] (0,0) -- node[above] {$\mathbf{v_1}$} (3,1) -- node[below] {$\mathbf{v_2}$} (6,.5) -- node[below] {$\mathbf{v_3}$} (9,2); \draw[thick,purple,->] (0,0) -- node[above]{$\mathbf{v}$} (9,2); \end{tikzpicture} $$\mathbf{v} = \mathbf{v_1}+\mathbf{v_2}+\mathbf{v_3}$$ \end{center} \end{figure} \section{Static Equillibrium} \spec{calculate the moment of a force and use the conditions for equilibrium to solve problems (restricted to coplanar forces)} The moment of a force is calculated by multiplying its magnitude by the perpendicular distance of the force's line of action to the pivot point. This is mathematically equivalent to multiplying the distance from the pivot by the component of the force perpendicular to that distance. \begin{figure}[h] \centering \begin{tikzpicture} \draw[thick] (0,0) -- (6,2); \draw[thick, purple, ->] (6,2) -- node[right]{$ F$}(6,1); \draw[thick, purple, dashed] (6,1) -- node[right] {line of action of $F$}(6,-1) ; \draw[thick, dashed] (0,0) -- node[below]{perpendicular distance, $x$} (6,0); \end{tikzpicture} $$\mathrm{moment} = Fx $$ \end{figure} The conditions for equilibrium are: \begin{enumerate} \item The sum of all the forces acting on the object must be zero. \item The sum of all the moments on an object must be zero. 
\end{enumerate} \begin{example} A Tower Crane lifts a load into position. The load has a weight of \SI{4.0e4}{N} and the arm of the crane has a weight of \SI{1.2e5}{N}. Calculate the required weight of the counterweight and the force the tower must support. Assume the centre of mass of the arm is at its centre. \vspace{1cm} \begin{center} \begin{tikzpicture} \draw[thick] (2,0) rectangle (2.5,5); \draw[thick] (0,4) rectangle (8,4.2); \fill (-0.5,3.8) rectangle (.5,4.4); \draw(2.25,5) -- (8,4.2); \foreach \x in {0,1,...,13} { \draw (1+0.5*\x,4) -- (1.5+0.5*\x,4.2); \draw (1+0.5*\x,4.2) -- (1.5+0.5*\x,4); } \foreach \y in {0,1,...,9} { \draw (2,0+0.5*\y) -- (2.5,0.5+0.5*\y); \draw (2.5,0+0.5*\y) -- (2,0.5+0.5*\y); } \draw[purple, thick, ->] (8,4) -- (8,3) node[right]{$W_l=\SI{4.0e4}{N}$}; \draw[purple, thick, ->] (4,4) -- (4,3.4) node[below]{$W_a=\SI{1.2e5}{N}$}; \draw[purple,thick, ->] (0,4) -- (0,3) node[right]{$W_c$}; \draw[<->] (0,6) -- node[above]{15m}(2.25,6); \draw[<->] (2.25,6) -- node[above]{45m}(8,6); \end{tikzpicture} \end{center} \textbf{Answer} We begin by taking moments around the tower of the crane. The weight of the arm, $W_a$, acts \SI{15}{m} from the tower so solving for moments gives: $$ 15W_c = 15W_a + 45W_l $$ $$ W_c = \SI{2.4e5}{N} $$ The sum of the downward forces must equal the reaction force of the tower so: $$ R = \SI{4.0e5}{N} $$ \end{example} \section{Kinematics} \spec{construct displacement-time and velocity-time graphs for uniformly accelerated motion} For uniform acceleration, a graph of velocity against time will be linear, with the formula $ v=u+at$, and a graph of displacement against time will be parabolic, with the formula $s=ut+\frac{1}{2}at^2$. \begin{center} \begin{tikzpicture}[domain=0:3] \draw[thick,->] (0,0) -- (3,0) node[below]{$t$}; \draw[thick,->] (0,0) -- (0,3) node[left]{$s$}; \draw[thick,->] (4,0) -- (7,0) node[below]{$t$}; \draw[thick,->] (4,0) -- (4,3) node[left]{$v$}; \draw[color=purple] plot (\x+4,{.2+0.4*\x}); \draw[color=purple] plot (\x,{.2*\x+0.2*\x^2}); \end{tikzpicture} \end{center} \spec{identify and use the physical quantities derived from the gradients of displacement--time and areas and gradients of velocity--time graphs, including cases of non-uniform acceleration} The quantities are given in the table below: \begin{tabular}{r|ll} & gradient & area \\ \hline displacement-time & velocity & -- \\ velocity-time & acceleration & displacement \\ \end{tabular} If the graph is non-linear then the gradient of a tangent must be taken. Note that areas below the axis in a velocity-time graph represent \emph{negative} displacement. \spec{recall and use:} \begin{align*} v &= \frac{\Delta x}{\Delta t}\\ a &= \frac{\Delta v}{\Delta t} \end{align*} \spec{recognise and use the kinematic equations for motion in one dimension with constant acceleration:} \begin{align*} s &= ut + \frac{1}{2}at^2 \\ v^2 &= u^2 + 2as \\ s &= \left( \frac{u+v}{2} \right) t \end{align*} \spec{recognise and make use of the independence of vertical and horizontal motion of a projectile moving freely under gravity} When an object moves in a uniform gravitational field it motion can be modeled by considering the horizontal and vertical components of motion separately. The horizontal component has a constant velocity and the vertical has a constant acceleration. \begin{example} A ball is thrown with a velocity of \SI{5}{m.s^{-1}} from a height of \SI{1.2}{m}. If its initial angle to the horizontal is \ang{50} calculate the distance it travels before it hits the ground. 
\answer The first step is to split the velocity into horizontal and vertical components: \begin{align*} v_x = 5\cos{50}\\ v_y = 5\sin{50} \end{align*} The time for the ball to reach the ground can now be calculated using the vertical motion and the equation $s=ut+\frac{1}{2}at^2$, setting $s = \SI{-1.2}{m}$. This gives $t=\SI{1.02}{s}$. Finally, the horizontal distance is calculated using the simple constant velocity formula to give $x=\SI{3.28}{m}$. \end{example} \section{Forces} \spec{recognise that internal forces on a collection of objects sum to zero vectorially} This is as a result of Newton's Third Law. \spec{recall and interpret statements of Newton’s laws of motion} \begin{enumerate} \item An object will remain at rest, or continue at a constant velocity, unless a resultant force acts upon it. \item $F=ma$, where $F$ is the vector sum of the forces acting on the body. Or, alternatively $F=\frac{dp}{dt}$ (see below). \item For every force of object A acting on object B there exists a force of the same type, of equal magnitude and opposite direction of object B acting on object A. \emph{It is important to be able to distinguish the `equal and opposite' forces which may act on a single object in equilibrium from a Newton's Third Law pair of forces.} \end{enumerate} \spec{recall and use $F = ma$ in situations where mass is constant} Remember that $F$ is the \emph{resultant} force acting on the body. \spec{understand the effect of kinetic friction and static friction} \spec{recall and use $F_k = \mu_k N$ and $F_s = \mu_s N$, where $N$ is the normal contact force and $\mu_k$ and $\mu_s$ are the coefficients of kinetic friction and static friction, respectively} Friction occurs between two objects when they are pushed together by a normal force. A useful model is that the maximum size of the frictional force is proportional to the normal force. There is usually a difference between the constant of proportionality when the two surfaces are stationary compared to each other (static friction) compared to when they are sliding past each other (kinetic friction). It is usually the case that $\mu_k < \mu_s$. An interesting result is that blocks of different masses should take the same distance to slide to a halt: \[ s = \frac{\frac{1}{2}mv^2}{F} = \frac{\frac{1}{2}mv^2}{\mu_k N} = \frac{\frac{1}{2}mv^2}{\mu_k mg} = \frac{v^2}{2\mu_k g} \] \spec{recall and use the independent effects of perpendicular components of a force} As with velocities, forces can be split into two perpendicular components and their effects considered independently. \begin{example} A block of mass \SI{4}{kg} is on a frictionless slope of \ang{30}. Calculate the rate at which it accelerates down the slope. 
\usetikzlibrary{scopes} \def\iangle{30} % Angle of the inclined plane \def\down{-90} \def\arcr{0.5cm} % Radius of the arc used to indicate angles \begin{tikzpicture}[ force/.style={>=latex,draw=blue,fill=blue}, axis/.style={densely dashed,gray,font=\small}, M/.style={rectangle,draw,fill=lightgray,minimum size=0.5cm,thin}, m/.style={rectangle,draw=black,fill=lightgray,minimum size=0.3cm,thin}, plane/.style={draw=black,fill=blue!10}, ] \draw[plane] (0,-2) coordinate (base) -- coordinate[pos=0.5] (mid) ++(\iangle:6) coordinate (top) |- (base) -- cycle; \path (mid) node[M,rotate=\iangle,yshift=0.25cm] (M) {}; \draw[->] (base)++(\arcr,0) arc (0:\iangle:\arcr); \path (base)++(\iangle*0.5:\arcr+10pt) node {$30 ^{\circ}$}; \begin{scope}[rotate=\iangle] % Draw axes and help lines {[axis,->] } % Forces {[force,->] % Assuming that Mg = 1. The normal force will therefore be cos(alpha) \draw [green](M.center) -- ++(0,{cos(\iangle)}) node[above right, black] {$N$}; \draw [green](M.center) -- ++(-.5,0) node[left, black] {Mg sin30}; } \end{scope} \draw[force,->] (M.center) -- ++(0,-1) node[below] {$Mg$}; %% \\ %}; \end{tikzpicture} \answer The weight should be split into components along the slope and perpendicular to the slope (shown in green). Only the component along the slope contributes to the acceleration. \[ a = \frac{F}{m} = \frac{mg\sin{30}}{m} = \SI{4.9}{ms^{-2}} \] This question could be extended to include friction by calculating the normal force, the frictional force and hence a new acceleration. If $\mu_k = 0.4$ then the answers are \SI{34}{N}, \SI{14}{N} and \SI{1.5}{ms^{-2}} respectively (Try it!) \end{example} \spec{recall and use p = mv and apply the principle of conservation of linear momentum to problems in one dimension} Momentum is a conserved quantity (along with energy and charge). It can be calculated using the formula $p=mv$ where $p$ is the momentum. In any closed system the total momentum of the particules must remain constant. This can be used to predict the outcomes of collisions in certain cases. \spec{distinguish between elastic and inelastic collisions} An elastic collision is one in which \emph{kinetic energy} is conserved. An inelastic collision is one in which it is not. In general, a collision in which two objects adhere will not conserve kinetic energy as the final velocity will be given by: $$ v = \frac{m_1u_1+m_2u_2}{m_1+m_2} $$ and therefore the final kinetic energy will be given by: \[ \text{KE} = \frac{1}{2}(m_1+m_2)v^2 = \frac{1}{2}\frac{\left( m_1u_1+m_2u_2\right)^2}{m_1+m_2}\] which cannot be equal to the original kinetic energy. \spec{relate resultant force to rate of change of momentum in situations where mass is constant and recall and use $F = \frac{\Delta P}{\Delta t}$} Newton's second law is more properly given by: \[ F = \frac{dp}{dt} \] This simplifies to the GCSE formulation for constant mass: \[ F = \frac{dp}{dt} = m\frac{dv}{dt} = ma \] A simplified version is: $$F = \frac{\Delta P}{\Delta t}$$ This will give the correct result for a constant force or otherwise give the average force. \spec{recall and use the relationship impulse = change in momentum} Multiplying both sides of the equation above by time gives: $$ F \Delta t = \Delta P $$ The quantity on the left hand side is the impulse. 
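
As a quick worked illustration of this relationship (the numbers here are chosen purely for illustration):

\begin{example}
A ball of mass \SI{0.5}{kg}, initially at rest, is struck by an average force of \SI{30}{N} acting for \SI{0.2}{s}. Calculate the speed of the ball just after the impact.

\answer The impulse is $F\Delta t = 30 \times 0.2 = \SI{6}{N.s}$, which equals the change in momentum of the ball, $\Delta p$. Hence $v = \frac{\Delta p}{m} = \frac{6}{0.5} = \SI{12}{m.s^{-1}}$.
\end{example}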
\spec{recall and use the fact that the area under a force-time graph is equal to the impulse}

Integrating the differential version of Newton's Second Law above gives:
$$ \Delta P = \int_{t_0}^{t_1} F \, dt $$
The right-hand side of this equation represents the area under a force-time graph.

\spec{apply the principle of conservation of linear momentum to problems in two dimensions}

When objects are free to move in two dimensions, momentum must be conserved along each of two perpendicular axes.

\begin{example}
Two objects are able to slide frictionlessly over a horizontal surface. The first object, $m_1=\SI{3}{\kg}$, is propelled with an initial speed $u_1=\SI{5}{\m\per\s}$ towards a second mass, $m_2=\SI{1.5}{\kg}$, which is initially at rest. After the collision both objects move at \SI{30}{\degree} on either side of the line of the original motion. What are the final speeds of the two objects? Is the collision elastic?

\answer Conservation of momentum along the x-axis gives
$$ m_1u_1 = m_1v_1\cos{\theta} + m_2v_2\cos{\theta}$$
Conservation of momentum along the y-axis gives
$$ m_1v_1\sin{\theta} = m_2v_2\sin{\theta} $$
These equations can be combined (using the second to substitute $v_2 = \frac{m_1}{m_2}v_1$ into the first) to give
$$ v_1 = \frac{u_1}{2\cos{\theta}} = \SI{2.887}{\m\per\s}$$
and
$$ v_2 = \frac{m_1}{m_2}v_1= \SI{5.773}{\m\per\s}$$
The initial KE of the system is
$$ K_i = \frac{1}{2}m_1u_1^2 = \SI{37.5}{\joule} $$
and the final KE of the system is
$$ K_f = \frac{1}{2}m_1v_1^2 + \frac{1}{2}m_2v_2^2 = \SI{37.5}{\joule}$$
Since $K_i = K_f$, the collision is elastic.
\end{example}

\spec{recall and use density = mass / volume}

\spec{recall and use pressure = normal force / area}

\spec{recall and use $ p = \rho gh $ for pressure due to a liquid.}

These are GCSE equations and should present no problems.

\end{document}
%sagemathcloud={"latex_command":"latexmk -pdf -f -g -bibtex -synctex=1 -interaction=nonstopmode '1-mechanics.tex'"}
{ "alphanum_fraction": 0.6921686358, "avg_line_length": 43.5084269663, "ext": "tex", "hexsha": "373cc03d9ee4ae1d01994ef098a94467dee84678", "lang": "TeX", "max_forks_count": 22, "max_forks_repo_forks_event_max_datetime": "2021-05-09T13:48:59.000Z", "max_forks_repo_forks_event_min_datetime": "2016-12-19T16:16:46.000Z", "max_forks_repo_head_hexsha": "99bf9346cd40ebe6f312f164d731b7010b86534a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dhruvrattan/physicsrevision", "max_forks_repo_path": "1-mechanics.tex", "max_issues_count": 23, "max_issues_repo_head_hexsha": "99bf9346cd40ebe6f312f164d731b7010b86534a", "max_issues_repo_issues_event_max_datetime": "2021-06-24T08:14:03.000Z", "max_issues_repo_issues_event_min_datetime": "2016-12-19T16:46:07.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dhruvrattan/physicsrevision", "max_issues_repo_path": "1-mechanics.tex", "max_line_length": 448, "max_stars_count": 14, "max_stars_repo_head_hexsha": "99bf9346cd40ebe6f312f164d731b7010b86534a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dhruvrattan/physicsrevision", "max_stars_repo_path": "1-mechanics.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-08T21:47:07.000Z", "max_stars_repo_stars_event_min_datetime": "2017-03-13T19:37:54.000Z", "num_tokens": 4794, "size": 15489 }
\chapter{Edward Sapir} \label{ch.sapir} As observed in the preceding chapter, {\Boas} continued to play a leading role in the development of American linguistics right up until his death in 1942. His influence was exerted through his direction of various funding agencies and organs of publication and, more generally, by his approving and encouraging (or withholding such support from) the work of other scholars. It was, for example, partly because of {\Boas}'s `patronage' that \name{Roman}{Jakobson} was able to establish himself in the United States within a relatively short time after his arrival, as noted in chapter~\ref{ch.jakobson} above. In substantive terms, however, {\Boas}'s influence on the development of linguistic theory in America was rather indirect in the years following the publication of the first volume of the \textsl{Handbook of American Indian Languages} \citep{boas:handbook_1}. Though his basic tenets continued to shape the orientation of linguistic research and to form the basis of a developing American approach to the science of language, he did not himself play a leading role in the formation of linguistic theory. This was not just because, as described in detail by \citet[ch. 3]{murray93:theory.groups}, he was averse to theory but also because his own interests were largely elsewhere: in the more general field of \isi{ethnography}, not only linguistics; and within linguistics, in the description of the native languages of North America. It thus fell largely to {\Boas}'s students, proceeding from his general views, to develop a more specific and articulated theory of linguistic structure. By far the most important among {\Boas}'s students in this respect was \name{Edward}{Sapir}, who had substantially eclipsed {\Boas} as a theoretician of language by the mid-1920s. \section{Sapir's life} \name{Edward}{Sapir} was born in 1884 in Lauenburg, Germany.\footnote{\citet{darnell90:sapir} provides an extensive and detailed account of {\Sapir}'s life and career, though reservations may be necessary on some points, as suggested by \citet{silverstein91:rvw.darnell}. \citealt{cowan.etal86:sapir.centennial} represents the content of a symposium in 1984 on the occasion of the centennial of {\Sapir}'s birth. \citealt{koerner84:sapir.appraisals} is a collection of articles about {\Sapir}'s life and work, issued on that occasion.} When he was five, his parents emigrated to the United States, where he attended school in New York. He received a BA from Columbia in 1904, and went on to earn a master's degree there in 1905 in Germanics with a minor in {Sanskrit}. While at Columbia he met and began to study with {\Boas}; in the years immediately following his MA he did fieldwork in the state of Washington on the Wishram dialect of Chinook, and in Oregon on {Takelma} with Mrs. \name{Frances}{Johnson}, under {\Boas}'s guidance. In 1907-8 he was a research associate in \isi{anthropology} at the University of California (Berkeley) under \name{Alfred}{Kroeber}, where he worked on Yana. \begin{wrapfigure}{r}{.4\textwidth} \includegraphics[width=.95\textwidth]{figures/tillohash.jpg} \caption{Tony Tillohash, Kaibab Paiute Indian, in his Carlisle School Uniform} \label{fig:ch.sapir.tillohash} \end{wrapfigure} This was followed by two years' appointment at the University of Pennsylvania in Philadelphia, as a fellow and subsequently as instructor. 
During this period (with the support of the University of Pennsylvania Museum) he had a {Southern Paiute} student, \name{Tony}{Tillohash}\footnote{\citet{fowler86:tillohash} provide a fuller picture of \name{Tony}{Tillohash}'s background and life after his work with {\Sapir}, and the relation of that work to {\Sapir}'s analyses of Southern Paiute.} from the Carlisle Indian School, to work with in Philadelphia, and did fieldwork in the summer of 1909 with Utes on the Uintah Reservation in Utah, together with his student \name{J. Alden}{Mason} (Figure~\ref{fig:ch.sapir.sapir_utes}). In later years he seems, like many others, to have somewhat idealized his graduate student days in Berkeley and Philadelphia, and to have rather resented the administrative and other job-related duties that interfered with the conduct of research in the professional positions he occupied. In 1908 he defended his description of {Takelma} as a dissertation for {\Boas} at Colum\-bia, and was awarded a doctorate in the following year. In 1910 he was hired to head the newly established division of \isi{anthropology} within the Geological Survey of the Canadian National Museum (the forerunner of the present Museum of Man) in Ottawa, where he was to ``establish a thorough and scientific investigation of the native races of Canada, their distribution, languages, cultures, etc., and to collect and preserve records of the same'' (quoted by \citet[64]{murray81:sapir.canada} from a 1910 letter to {\Sapir} from his new superior, the deputy minister of mines). Though initially enthusiastic about this opportunity (which made him virtually {\Boas}'s Canadian equivalent), he soon became disheartened and complained about the bleakness and isolation of his life in Ottawa. In fact during these years he did fieldwork on a large number of languages (including Nootka, as well as Sarcee, the first work he did on a language of the Athabaskan family which was to occupy him off and on for much of his life), and published a great deal in a number of areas. \begin{wrapfigure}{l}{.4\textwidth} \includegraphics[width=.95\textwidth]{figures/Sapir-1913.jpg} \caption{Edward Sapir (1913)} \label{fig:ch.sapir.sapir_1913} \end{wrapfigure} His \ili{Takelma} grammar (essentially his 1909 dissertation) was published as \citealt{sapir22:takelma} in volume 2 of the \textsl{Handbook of American Indian Languages}. This work is truly incredible in its comprehensiveness and insight when one considers that it was based on only a month and a half of fieldwork. Around 1917 he wrote his description of \ili{Southern Paiute} (which was only published some years later, as \citealt{sapir30:s.paiute}); his popular outline \textsl{Language} \citep{sapir21:language} also appeared during these years. Together with his monograph on \textsl{Time Perspective} \citep{sapir16:time.perspective}, these were essentially the only book-length works {\Sapir} produced in his entire career, but along with a large number of shorter articles on linguistic and more general cultural topics, as well as nonlinguistic writings, they make the list of his publications during his Canadian years impressive indeed.\footnote{{\Sapir}'s \textsl{Collected Works} on linguistic and ethnographic topics are intended to run to a total of 16 very substantial volumes, nine of which have appeared as of this date (2021).} While he was in Ottawa, {\Sapir}'s first wife \name{Florence Delson}{Sapir} suffered a series of mental and physical illnesses from which she eventually died in 1924. 
An added disappointment was the fact that his efforts to develop anthropological research on the native peoples of Canada were largely halted by the financial and other exigencies of the First World War. Coupled with his perception of the {contrast} between the freedom of his student days and the amount of comparatively unrewarding responsibility incumbent on him as an administrator, these frustrations increased his feelings of isolation and alienation. In this period {\Sapir} began to devote a considerable amount of time to artistic expression—poetry and music, as well as the writing of quite a number of (literary) reviews. From 1917 through the early 1930s, he was a major contributor to \textsl{The Dial} (one of the most important American literary journals of the time) as well as a reviewer and writer for other journals such as \textsl{The Freeman}, \textsl{Poetry}, \textsl{The New Republic}, \textsl{The Nation}, and others far from professional linguistics and \isi{ethnography}. Many of the same concerns that dominate his nonacademic writing, however, also appear in connection with his work in \isi{anthropology}. He became increasingly interested in questions of psychiatry and the nature of personality, and particularly in the relation between personality and \isi{culture}. While his linguistic work always constituted a sort of foundation for the exploration of such issues, he was generally more interested in expanding the relation between the study of language and other domains than in the development of specifically linguistic methodology. As the field became increasingly self-aware and professionalized in later years (recall that, in the 1920s, there really was no such distinct discipline as linguistics at any American university), this would put his work rather at odds with the tendency of most other linguists to emphasize the uniqueness of the object of their research and the methods appropriate to it. \begin{wrapfigure}{r}{.4\textwidth} \includegraphics[width=.9\textwidth]{figures/Sapir-1925.jpg} \caption{Edward Sapir (1925)} \label{fig:ch.sapir.sapir_1925} \end{wrapfigure} In 1925 {\Sapir} was offered a position at the University of Chicago, which he was happy to accept, though he would have preferred a move to Columbia in New York. At Chicago he had a great many students (many of whom, with a few exceptions such as {\Hoijer}, later followed him in his move to Yale); and within a short time he was a major figure in American \isi{anthropology}. He continued to do fieldwork on several languages (including \ili{Navajo} and Hupa) and had the opportunity to do many of the things whose absence he had regretted in Ottawa. In 1926 he was married again, to \name{Jean}{McClenaghan}. For a time he continued to write poetry (and to participate in the University of Chicago poetry club); but eventually the pressure of other work left him little time for anything but his professional obligations. Gradually becoming disillusioned with the amount of administrative effort demanded of him at Chicago, he accepted the very attractive offer of a Sterling Professorship at Yale in 1931. His appointment coincided with the establishment of a Department of Linguistics, although as described by \citet{wells74:lx-at-yale}, this ``Department'' had no budget, or power of appointment, or undergraduate program, or even a Chairman (the central administrative figure being the Director of Graduate Studies). 
It was, rather, a PhD program and a grouping of courses offered by faculty who all held appointments in some other Department. Linguistics at Yale did not become a real (i.e., budgetary) department until 1959: in {\Sapir}'s time, it was what would today be called an inter-departmental degree program. {\Sapir}'s appointment as Chair of Anthropology, itself a program within a larger Department of Social Sciences, made him Sterling Professor of Anthropology and Linguistics. As \citet{haas84:sapir.training} recalls the situation, he encouraged his students in Anthropology working on unwritten (or ``primitive'') languages to take courses in Linguistics that would prepare them for comparative and historical studies as well as synchronic description. \begin{wrapfigure}{r}{.4\textwidth} \includegraphics[width=.9\textwidth]{figures/MaryHaas_cat.jpg} \caption{Mary Haas and cat} \label{fig:ch.sapir.haas_cat} \end{wrapfigure} Sapir\ia{Sapir, Edward}'s offer from Yale included funding to allow three of his students---\name{Stanley}{Newman}, \name{Walter}{Dyk} and \name{Morris}{Swadesh}---to come with him. {\Swadesh} and Mary {\Haas} had been married in the spring of 1931 (spending their honeymoon doing fieldwork on ``Nootka'' (Nuuchahnulth) and ``Nitinat'' (Ditidaht) on Vancouver Island), and {\Haas} came with {\Swadesh} without separate support. He also attracted \name{Carl}{Voegelin} (a student of {\Kroeber}'s), \name{Benjamin}{Whorf} (a non-student who sat in on his classes) and a number of others, though very few beginning students, in {contrast} to his years at Chicago. Haas\ia{Haas, Mary Rosamond} and {\Swadesh} were divorced in 1937 (by mutual agreement, according to \citet{heaton.etal21:women.in.lx}, on the basis that {\Haas}'s job prospects would be much better if she were unmarried). She had completed her dissertation on Tunica in 1935 (an abridged version of which eventually appeared as \citealt{haas41:tunica}, the only part of a projected volume 4 of the \textsl{Handbook of American Indian Languages} that was ever published), but especially in Depression times, her job prospects were poor. {\Sapir} helped her find support at Yale for a few years, but the sexist nature of academia had its effects: \begin{quotation} In a formal letter to A. L. {\Kroeber}, dated June 17, 1935 (Bancroft Library, UC Berkeley), {\Sapir} recommended {\Swadesh} (`there is no better linguist in the country') for a research position or instructorship at Berkeley, adding: `Mrs. Swadesh has just obtained her Ph.D. with an excellent thesis on Tunica [and] at no extra cost to your department, or at very little extra cost, you would be getting the benefit of another linguist.' He followed this up with a personal letter, dated July 24, 1935 (Bancroft Library, UC Berkeley), in which he wrote: `{\Swadesh} and his wife are \ldots\ likely for an indefinite period---perhaps the rest of their lives---to be committed to specialist work in American Indian linguistics \ldots\ the Swadeshes love languages as you love decorative art and chess. Their combined energy is enormous and a very little effort to fund them would be richly rewarded.' Two years later, {\Sapir} once again recommended {\Haas} to {\Kroeber} for a position at Berkeley, this time independently of {\Swadesh}: `I do not know much of what your plans are for a geographic survey of American Indian linguistics in general or California linguistics in particular, but if you have such a scheme in mind, I should think that Mary {\Haas} would be a particularly good bet.
My respect for her work has grown steadily from year to year. She is not as brilliant as Morris but more interested in historic problems and fully as accurate in her field methodology' ({\Sapir} to {\Kroeber} August 5, 1937, Bancroft Library, UC Berkeley). {\Haas} once told Golla\ia{Golla, Victor} that she knew of these letters, but was neither surprised nor offended by the blatancy of {\Sapir}'s male chauvinism (`no extra cost', `not as brilliant'). The reality of academic life in the 1930s, she explained, was that men were always given preference, and {\Sapir} knew that it was easier to sell {\Swadesh} than herself to a figure like {\Kroeber}.\\ \citep[827, fn. 2]{golla.matisoff97:haas.obit} \end{quotation} \begin{wrapfigure}{r}{.35\textwidth} \includegraphics[width=.9\textwidth]{figures/morris-swadesh_army.jpg} \caption{Morris Swadesh} \label{fig:ch.sapir.swadesh_army} \end{wrapfigure} Swadesh\ia{Swadesh, Morris} taught briefly at the University of Wisconsin, but his outspoken political views got him fired in 1939, and he moved to Mexico. During World War II he participated in the Army Intensive Language Program, and after the war was hired at the City College of New York. His politics got him caught up in the Red Scare of the late 1940s, and he was again fired. He returned to Mexico, where he continued to teach linguistics until moving to The University of Alberta in 1966, where he died in 1967. In New Haven, {\Sapir} encountered a considerable degree of anti-semitic feeling. Unusually, his appointment at Yale was to the Graduate School only, and not, as in most cases, also to Yale College, it not being considered suitable for Yale undergraduates to be taught by a Jew, and in any case there was no undergraduate program in Linguistics. He was also apparently blackballed from the Yale Graduate club, the institution where most serious academic business among the faculty was transacted. In reaction to this, the Linguistics faculty withdrew \emph{en masse} from participation in the Graduate Club, and henceforth met at a \emph{Stammtisch} in the University Commons dining hall.\footnote{\name{Stanley}{Insler}, personal communication.} He was still by no means free of administrative obligations, and complained that he had no time to himself for research. In 1937-38, these irritations were exacerbated by a series of heart attacks, and he died of heart disease in 1939. {\Sapir}'s background as a student of {\Boas} obviously had a significant influence on his later views. His first work (such as his \ili{Takelma} grammar) is clearly within that tradition, though it also shows considerable originality and independence---enough so that {\Boas} had not thought it suitable for inclusion in the first volume of the \textsl{Handbook}. 
In fact, more of {\Sapir}'s apparently distinctive position can be traced to its Boasian roots than is sometimes recognized: the {stress} he put on the psychological foundations of linguistic knowledge, the extent to which a language can be studied in order to analyze the unconscious categorization that underlies the worldview of its speakers—these basic goals are a direct working out of {\Boas}'s view of language as ``a window on the soul.'' {\Sapir}'s original contributions to the development of a comprehensive theoretical view of language and its structure are not in any way to be minimized, but it should also be recognized that both in general and in many of its specifics, the resulting systematization has a great deal in common with the position sketched (and to some extent practiced) by {\Boas}. The influence of his study of \name{Johann}{Herder} for his Master's thesis may also have played a role in the development of his thinking. Such an approach to language as a profoundly internal mental phenomenon must be contrasted (as of course it usually is) with the behaviorist, positivist, and mechanist climate of research which grew up in the sciences generally during the 1930s and 1940s. The central figure in the rise of such an approach to linguistics was Leonard {\Bloomfield}, whose work will be the subject of the following chapter. Typically, presentations of the history of American linguistics associate {\Sapir}'s views with the 1920s and early 1930s, and treat {\Bloomfield} as succeeding {\Sapir}. As stressed by \citet{hymes.fought81:structuralism} and \citet{murray93:theory.groups}, however, the actual chronology is somewhat more complicated. In fact, {\Bloomfield} and {\Sapir} were essentially contemporaries; and if {\Sapir} was clearly a prominent figure in \isi{anthropology} in the 1920s before {\Bloomfield} became well known, the 1930s were a time in which both were active and influential. Certainly {\Sapir} was more prominent in the relations between American and European linguists in the development of phonology; he corresponded extensively with {\Trubetzkoy} in the early 1930s (though these letters were destroyed before his death, and cannot now be examined), and the latter spoke positively of him on many occasions. When the \isi{International Phonological Association} was established under the influence of the Prague school linguists in 1932, it was {\Sapir} who was elected as the sole American member of its board, and he continued to be the primary link between European and American phonologists until his death. {\Sapir} and {\Bloomfield} of course knew and interacted with one another to a considerable extent (they were colleagues at Chicago, and in part in competition for students there between 1927 and 1931), though it seems that while their relations were perfectly cordial, they were anything but fast friends. {\Sapir}'s own style of research was based much more on brilliance and intuition, searching for dramatic insights whose foundation might (or might not) be confirmed by later systematic investigation. {\Bloomfield} was much more methodical in the way he felt theoretical propositions ought to be worked out, and while admiring {\Sapir}'s more virtuosic approach, he referred to him (at least in matters outside of language) as a ``medicine man'' (\name{Carl}{Voegelin}, as quoted in \citealt[540]{hockett70:bloomfield.anthology}). 
{\Sapir}, for his part, ``admired {\Bloomfield}'s ability patiently to excerpt data and to file and collate slips until the patterns of the language emerged, but spoke deprecatingly of {\Bloomfield}'s sophomoric psychology'' (\emph{Ibid}., pp. 539-40). Such a {contrast} in styles cannot have made for an easy cooperation; nor were their relations improved, one imagines, by the fact that some of the best students at Chicago left to follow {\Sapir} to Yale in 1931. Overall, if one had to bet in the early 1930s on the likely outcome of the inevitable rivalry between the two, one would surely have had to predict the continued ascendancy of {\Sapir}. While {\Sapir} did indeed continue to exert an important influence on linguistics throughout the 1930s, 1940s and 1950s through his own work and that of a series of students (and their students in turn), his was increasingly a peripheral, even eccentric position in relation to the main stream of development of the field. {\Sapir} was thus gradually eclipsed by {\Bloomfield}, for a number of rather superficial (but nonetheless important) reasons. Among these is surely the fact that {\Sapir} died in 1939 and was thus unable to exercise the influence of his undeniably attractive abilities in the years during and right after World War II. Further, his students and closest associates were, after the war, either dead ({\Whorf}), unemployed (and subject to political persecution, in the case of {\Swadesh}), or employed in universities on the West Coast ({\Haas}, {\Hoijer}, {\Newman}) where their influence on academic politics was almost negligible. In addition to these factors, there was the fact that {\Bloomfield} had written a major textbook \citep{bloomfield:lg} which had a formative influence on virtually all the immediately following generations of students in linguistics, while {\Sapir} had not; and also the fact that {\Sapir} taught only once at the (summer) Linguistic Institute (a major institution in the formation and training of a new generation of scholars who saw themselves professionally as linguists), while {\Bloomfield} taught there several times. Finally, one must not neglect the fact that {\Bloomfield}'s appeal to a positivist, mechanist philosophy of science was completely in tune with the `ideological' climate of academic research at the time. If linguists saw a major part of their task as the establishment of a distinct discipline of linguistics which was not simply a part of Germanics, Romance, Semitics, comparative philology, \isi{anthropology}, etc., it seemed that the way to achieve this goal was by stressing the status of linguistics as a science; and here {\Bloomfield}'s approach seemed much more appropriate than {\Sapir}'s \isi{mentalism} and flashes of intuition. The appeal of research which takes on at least the trappings of `science' has of course not disappeared; one can argue about the extent to which such considerations distort scholarly judgments in particular cases, but there is no question that they contributed to the relegation of the `{\Sapir} school' to a marginal position in American linguistics in the late 1930s and subsequently. \section{Sapir's view of the nature of language} It is on the basis of his conception of the object of study in linguistics that {\Sapir} differs most fundamentally from the approach to language which arose during the 1930s and came to dominate research in America, especially after World War II. 
In {contrast} to these later developments, {\Sapir} believed in the importance of a rich and highly structured domain of interior mental phenomena, including in particular virtually all of what is essential to the nature of language. In chapters~\ref{ch.bloomfield} and~\ref{ch.structuralists}, we will trace the development by which, for many linguists, language came to be considered as exhaustively studiable in terms of its external manifestations: sounds, and patterns of observable behavior to which `\isi{meaning}' could be reduced (at least programmatically, in principle). For {\Sapir}, in {contrast}, these physical aspects of language were merely peripheral (almost incidental) concomitants of a reality which is to be sought in the mind, and whose study provides invaluable information about the nature and structure of human cognitive activity. The consequences of this difference are quite clear in the domain of interest to us here, the study of phonology. For nearly all theoreticians of the time, certainly including {\Sapir}, a central role is played in phonological structure by a basic segment-like element: the \emph{phoneme}. Linguists who approached language strictly in terms of its external manifestations, however, founded this notion on the study of the physical sounds of speech: through extracting the acoustic or auditory properties which distinguish one \isi{speech sound} from another, or through an analysis of the distribution of various physical segment types. {\Sapir}'s conception is quite different, since the physical implementation of a \isi{phoneme} is among its least interesting properties. True, phonemes are realized in the sounds of speech; but their essence is rather something in the mind whose most important features may be unrelated (or even in direct contradiction) to measurable aspects of a physical event. In a much-quoted passage, the central reality of \isi{sound structure} is likened to ``an ideal flow of phonetic elements [\ldots] heard, inadequately from a purely objective standpoint, as the intention of the actual rumble of speech'' \citep[56]{sapir21:language}. The claim that language is primarily a psychological rather than a physical activity does not at all imply that the structure of this activity is given in advance by the innate, biologically controlled organization of the human brain. On the contrary, {\Sapir} stresses in the introductory chapters of \textsl{Language} his view that ``speech is a human activity that varies without assignable limit as we pass from social group to social group, because it is a purely historical heritage of the group, the product of long continued social usage.'' He specifically contrasts walking, which ``is an organic, an instinctive function (not, of course, itself an instinct)'' with speech, which ``is a non-instinctive, acquired, `cultural' function'' \citep[4]{sapir21:language}. Language, like the rest of \isi{culture}, is something that we learn more or less as we find it, and because it is there, rather than because we are in some way inherently predisposed to acquire a system of a particular nature. {\Sapir}'s {stress} on the cultural (rather than biological) basis of language can be traced rather directly to the views of {\Boas}. 
In American \isi{anthropology} in the twentieth century, this {stress} on the social environment rather than biological background as the source of cultural institutions affects many more domains of study than just language; and {\Boas} is often cited as the dominant figure championing such a position in anthropological studies as a whole. His student \name{Margaret}{Mead}, for example, is generally felt to have been urging a fundamentally Boasian view in her enormously important study of {Samoan} society (\citealt{mead28:samoa}, to which {\Boas} wrote a foreword), which argued for a cultural rather than biological foundation for many human attitudes (aggressivity, jealousy, the turmoil of adolescence, etc.). In the social and political context of the 1920s and 1930s (and subsequently), this {stress} on environment rather than heredity as a determinant of human cognitive functions and attitudes was generally felt to be an important contribution of the social sciences, useful in supporting `liberal' positions on desirable social {change}. Subsequent controversy about Mead\ia{Mead, Margaret}'s work, initiated by \posscitet{freeman83:mead.samoa} attack on her account of life in Samoa, has centered on the claim that she misrepresented (or at least mis-perceived) the facts of {Samoan} society in order to exaggerate the importance of such social factors at the expense of inherited ones—a predisposition she is presumed to have acquired from {\Boas}. {\Boas}'s {stress} on diversity (as opposed to some sort of biologically inherited uniformity) played an essential role in forcing the recognition that languages (or cultures) very different from those of Europe had to be approached in their own terms, rather than as imperfect or primitive approaches to some uniform ideal system. Obviously, this position has general cultural and political implications for many issues beyond the narrow question of how a particular science (linguistics or \isi{anthropology}) should be organized. In urging a non-biological view of the essential nature of language, {\Sapir} was supporting the same point in what seemed the most logically straightforward way; for if human language is actually determined in its structure by innate, biologically inherited factors, it would appear that it should present a more or less uniform organization (at least within a genetically uniform sampling of humanity). The observed diversity of human languages and their historical evolution, however, seems to contradict this view rather directly. Based as they were on a strict `organic' determinism, the typologies of language and its evolution that were proposed in the nineteenth century could be shown to be hopelessly inadequate as a characterization of linguistic reality: a result which {\Sapir} felt was not at all an accident but a direct consequence of their inadequate underlying conception of the nature of language. A number of factors thus led {\Sapir} to {stress} the social as opposed to biological basis of linguistic structure: his education with {\Boas}, the developing climate of opinion in academic \isi{anthropology} in the 1920s and 1930s in conjunction with liberal political views during the same period, and the apparent necessity to make such an assumption in order to explain the evident diversity of human languages and their failure to follow the same evolutionary sequence. 
In seeing the structure of language as strictly an accidental consequence of cultural environment, however, free of any sort of biologically grounded necessity, this view leads logically to a major problem for linguistics. In the passage quoted above, language is argued to be ``a human activity that varies without assignable limit''; but this implies that there are absolutely no (non-accidental) \isi{universals} of linguistic structure—a finding at variance with the manifest fact that, if languages may be very different from one another in many ways, we still have no difficulty at all in knowing what sort of activity and system in a society to call `language', and in fact no difficulty in finding many ways in which languages resemble one another. {\Sapir} was of course well aware of the fact that languages do not actually differ from one another in absolutely arbitrary ways, and that there are at least some generalizations that are valid across languages. Identifying specific dimensions along which languages may in fact differ from one another (and thus by implication, properties in terms of which they are comparable), through the development of an explicit typological scheme applicable in principle to any language, occupied a considerable amount of his efforts. To be coherent, the position that any comprehensive \isi{typology} is possible must rest on the assumption that there are some \isi{universals} of human language; and once he asserted that these do not have basis in our biological nature as \emph{Homo sapiens}, it was logically incumbent on {\Sapir} either to propose some other foundation for them or to deny their existence. The denial that there are any significant linguistic \isi{universals} was the path often taken by American structuralists (see \posscitet[96]{joos57:readings} famous statement about the arbitrary differences possible among languages, cited in the previous chapter), but not the view of {\Sapir}: ``It would be too easy to relieve ourselves of the burden of constructive thinking and to take the standpoint that each language has its unique history, therefore its unique structure. Such a standpoint expresses only a half truth'' \citep[121]{sapir21:language}. In fact, we observe that languages show similarities in structure despite being unrelated to one another (at least in the time frame relevant to the development of the features in question). He suggests that these similarities may have their origin in the fact that ``a language changes not only gradually but consistently, that it moves unconsciously from one type towards another, and that analogous trends are observable in remote quarters of the globe'' (\emph{Ibid}); but, whatever their source, it is important for the linguist to develop a framework in which both the similarities and the differences among languages can be adequately represented. This he attempts to do in chapter 6 of \textsl{Language}. He observes first of all that previous classificatory schemes were much too limited to encompass the actual variety of human language. There are several reasons for this: they usually involved too few categories (e.g., `isolating' vs. `agglutinating' vs. 
`inflecting'); they were established with regard to only a single aspect of linguistic structure (typically the formal \isi{mechanism} of word formation); they were based on a sample of too few languages; and (most importantly), they were guided by the aim of arriving at a uniform evolutionary sequence culminating in some particular type—often the language of the investigator, or perhaps classical \ili{Greek} and \ili{Latin}—as the manifestation of the ultimate stage of the evolution of civilized expression. {\Sapir}'s own scheme is certainly more ramified than any other proposed up to the time. It would take us much too far afield here to explore it in detail (see \citealt{sra90:krems} for some discussion), but we can note that it is based on three quite different dimensions. One of these (the most innovative aspect of his framework) is the type of concepts expressed in a given language. He assumes that every language must express a range of basic (concrete) concepts corresponding to the reference of simple lexical items, especially nouns and verbs. In addition, every language must express a certain range of pure relational concepts, which ``serve to relate the concrete elements of the proposition to each other, thus giving it definite syntactic form.'' The positing of such categories as necessary ones is already a significant departure from the strong view that languages are in principle arbitrarily different from one another. In addition to these minimal requirements, corresponding essentially to lexical roots in the one case and to purely syntactic inflectional categories in the other, languages may allow for two sorts of interpenetration of referential and relational constructs. As one possibility, languages may express \emph{derivational} concepts, by which the meanings of radical items are modified to form new lexical items (e.g., an agentive operator which takes basic concrete verbs and produces nouns with the sense `one who typically or often [verb]s'); and as another, they may allow for certain \emph{concrete relational} categories. The latter are categories such as agreement in person, or in `natural' (as opposed to purely arbitrary) gender: categories that play a role in inflection and the organization of syntactic structure, but which nonetheless have a sort of semantic or referential basis as well. He arrives at four general categories of language, depending on whether one, the other, both, or neither of these possibilities is realized in a given language. It can be seen that {\Sapir} assumes a division between \isi{syntax} and \isi{lexicon} as the basis for a distinction between inflectional and derivational morphology. That such a point of view furnishes the only satisfactory foundation for this traditional opposition is argued in generative terms by \citet{sra82:wheresmorphology}. {\Sapir}'s second dimension of linguistic \isi{contrast} is the traditional one of the formal means by which those concepts which find expression in a given language are realized: isolating (where each \isi{concept} is expressed in a separate word), agglutinating (where distinct concepts are expressed by distinct, nonoverlapping parts of words), fusional (where some amalgamation of distinct concepts into single or \isi{overlapping} parts of a word is found), and symbolic (where some concepts are expressed not by a separable part of the word, but rather by the structural relation between one word and another, as in cases of Ablaut like \emph{sing}/\emph{sang}/\emph{sung}/\emph{song}). 
Employing such a classification in addition to that distinguishing types of \isi{concept} allows {\Sapir} to characterize a language in which concepts of one type (e.g. derivational ones) are expressed in one way (e.g. symbolically), while those of another (perhaps pure relational ones) are expressed in another (e.g. by agglutinating affixes). Finally, {\Sapir} allows for the classification of languages along a third dimension, that of ``degree of synthesis'' or typical conceptual complexity of individual words—an essentially continuous scale ranging from \emph{analytic} through \emph{synthetic} to the extreme of \emph{polysynthetic}. {\Sapir}'s overall classificatory framework is much more complex, and accordingly more delicate than any of the traditional nineteenth-century schemes. One may still question whether it provides dimensions that are adequate to characterize the significant differences and similarities among the world's languages; but that is not our purpose here. Rather, what is interesting is the role which {\Sapir} thought a \isi{typology} plays in a theory of language. Precisely because it provides a number of potentially independent dimensions, rather than a single unidirectional scale like most of those that preceded it, it serves a fundamentally synchronic, descriptive purpose. It is intended, that is, to describe what the structure of a language is, rather than how far along a presumed evolutionary scale it has progressed. It is reasonably clear also that a primary goal of typological research today is not intended to be served by {\Sapir}'s framework. Current typological work (at its best, at least) seeks to establish necessary connections among phenomena: for example, Greenberg's celebrated \isi{typology} of SOV, SVO, and VSO languages was intended not simply to specify the range of freedom available to the languages of the world with respect to the major constituents of the sentence, but also to bring out connections between that relative order and other features, such as the relative order of nouns and modifying adjectives, the choice of prepositions or postpositions, etc. Precisely because {\Sapir}'s schema provides nothing beyond a range of mutually independent categories, it lacks such a logical structure, and in fact there is little evidence {\Sapir} looked for implicational relationships among typological parameters provided by his system. On the other hand, {\Sapir} did think there were profound relationships which might eventually be discovered between the categories of linguistic structure in his terms and basic aspects of \isi{culture} and of mental life. Together with \name{Benjamin Lee}{Whorf}, he was largely responsible for bringing to prominence in anthropological discussion the claim that the structure of our language determines many aspects of the way in which we see and structure the world. In other words, the categorizations imposed by language channel and structure our thought, leading us to see some connections among phenomena while ignoring others—differing connections for the speakers of differing languages. This is of course a natural development of {\Boas}'s ideas about the importance of differences in the categories languages treat as obligatory, optional, or unexpressed; but {\Sapir} and {\Whorf} pursued the psychological implications of this position much further than {\Boas}. 
While he did not claim to be able to demonstrate actual connections between the categories of his \isi{typology} and language-specific cognitive differences, {\Sapir} did feel that the elucidation of such connections was a role \isi{typology} should be able to play. I should also mention two other potential applications of a typological schema according to {\Sapir}, both in the sphere of historical linguistics. First, there is his celebrated theory of linguistic ``\isi{drift}.'' This notion is intended to represent the fact that, even after a language has divided into a number of distinct, separated speech communities, the evolution of the several individual descendants of that language may well continue to pursue very similar lines. This leads to a state in which multiple members of a family make the same innovation quite independently of one another (or at least without any necessary contact between them)—which of course makes the historical linguist's task that much harder in determining which features of the daughter languages should be attributed to their common ancestor. Often presented as something quite mystical, the most straightforward way to interpret {\Sapir}'s notion of linguistic \isi{drift} is simply as the claim that change is motivated by structural factors, and that such structural factors, present in the ancestor of a group of related languages, may persist and continue to influence their later evolution even after their separation. Ideally, a \isi{typology} ought to provide categories in terms of which to identify such structural factors and clarify their influence on change. Additionally, \isi{typology} played a role in {\Sapir}'s own concrete historical work. He did extensive research of this sort, including historical studies in \ili{Indo-Europe\-an} (especially Tocharian) and \ili{Semitic}, but especially in the classification of American Indian languages. One of his best-known theoretical claims, in fact, was his far-reaching proposal for a genetic classification of the languages of North and Central America into six large groups. This classification was based on a large number of rather remote linguistic relationships, many of which could not be proved or even significantly supported by standard comparative evidence; and one naturally asks what {\Sapir} based his assertions on. It is fairly clear that most of those claimed genetic connections which {\Sapir} posited without support from common vocabulary rested on presumed similarities in structure—just the sort of parallels that a typological framework ought to be able to make explicit. The evidential role of such structural similarities is particularly strong within {\Sapir}'s general perspective on language as culturally based, but otherwise largely arbitrary; if the role of factors other than cultural transmission in determining the structure of a language is comparatively small, it should follow that structural similarity is strong presumptive evidence of a genetic relationship, since the preservation of such factors over the period of evolution from a common ancestor is virtually the only (nonaccidental) \isi{explanation} for their presence. The role of \isi{typology} here is to provide an instrument sensitive enough to identify such similarities; once identified, such a \emph{prima facie} case for genetic unity must eventually be supported by standard comparative evidence, but \isi{typology} ought to show the historical linguist where to begin looking. 
If I have devoted so much space to a consideration of {\Sapir}'s views on \isi{typology}, it is not because of the interest his specific proposals hold for the modern reader. Rather, it seems important to understand the central role which the general notion of a typological characterization of linguistic structure played in {\Sapir}'s view of a theory of language. An understanding of that role, in turn, makes clearer the sense in which {\Sapir} construed language as a psychological phenomenon. As an aspect of human mental and cognitive life rather than merely an external system of interpersonal signals, language plays a profound role in determining the way we see and organize the world, but its own structure is in turn determined culturally, in an external and contingent fashion that allows little or no role for innate or other biological factors. For {\Sapir}, the fundamental problem of linguistics was thus not the construction of a `theory of grammar' but the elucidation of the relationship between language on one hand and \isi{culture} and personality on the other. \section{Sapir's conception of phonological structure} \begin{wrapfigure}{r}{.3\textwidth} \includegraphics[width=.95\textwidth]{figures/Sapir_0.jpg} \caption{Edward Sapir (ca. 1930)} \label{fig:ch.sapir.sapir_stock} \end{wrapfigure} In discussing the role of sounds in language, {\Sapir} starts from a perspicuous comparison of an articulatory gesture as it functions linguistically, and what is effectively the same gesture as it might be used nonlinguistically. Though physically identical in all relevant respects, these differ dramatically in their integration with other similar gestures, both syntagmatically and paradigmatically—i.e., both in terms of their place in a sequence of human activities, and in their relation to other, alternative gestures. They also differ in what counts as an accurate {performance} of the gesture in question, but, most importantly, they differ in the intention underlying the gesture in question. Non-speech gestures have directly functional significance, while the same gestures when used linguistically serve simply as a ``link in the construction of a symbol.'' This sort of distinction between a physical act and its linguistic uses is reminiscent of {\Saussure}, but it should be noted (apart from the fact that {\Sapir} hardly ever refers to other scholars at all) that {\Sapir} never refers to {\Saussure} in his theoretical writing and it is quite unlikely that the \textsl{Cours} would have made it to Canada during World War I or its immediate aftermath. It follows, then, that the essential nature of a sound as used in speech lies in this special character of the intentionality underlying it: in the fact of what a speaker has in mind in producing it, not the physical details of the production itself. {\Sapir} draws a useful {analogy} between sounds in speech and other tools used by humans: a club is a club not because it has a particular physical form but because it is put to a particular use. Phenomenological philosophers such as Heidegger\ia{Heidegger, Martin} make a similar point in arguing that the logically prior reality of a tool such as a hammer is its ``readiness to hand'' (i.e., its suitability for fulfilling particular intentions of a conscious user), and that its ``presence at hand'' (i.e., its specific character as a physical object with certain dimensions, weight, etc.) 
is an aspect that arises only secondarily, when we step back from its basic being as a hammer to regard it as a mere object. The fundamental nature of a \isi{speech sound} is thus to be sought in the uses to which it is put in the intentions of a speaker. This reality is a mental rather than a physical one, and it is exactly this `\isi{mentalism}' that is generally taken to characterize {\Sapir}'s view. To say that the basic unit of \isi{sound structure} (the \isi{phoneme}) has a psychological basis, however, is to tell only part of the story. Even if the physical properties by which the speaker's phonemic intention is realized are logically secondary from a linguistic point of view, that does not mean they are unreal, irrelevant, or completely arbitrary. Even if ``a club is not defined for us when it is said to be made of wood and to have such and such a shape and such and such dimensions'' (\citealt[46]{sapir33:reality}\footnote{Citations from this article refer to the {English} translation that appears in \citealt[pp. 46--60]{sapir49:selected.writings}.}), since the essence of ``club-ness'' lies in the use to which we put it rather than in these properties, we still could not choose any arbitrary physical object (an apartment building, say, or a pool of water) and decide to think of it as a `club'. Similarly, we could not choose to regard any arbitrary vocal event (a Bronx cheer, for example) as filling the role of the \ili{English} \isi{phoneme} /d/. A complete conception of either clubs or phonemes can only be reached when we regard them as physical objects (or events) of a particular sort, invested with a particular intentional value. A concentration on the mental reality of phonemes to the complete exclusion of their physical properties has led some interpreters of {\Sapir} to suggest that he rejected or ignored their phonetic properties. This is quite at variance with his practice; in his descriptive work, not only does he describe phonemes in standard articulatory terms, presenting charts of phonemes classified by traditional phonetic dimensions, but he often appeals to phonetic properties as having an explanatory role in the operation of phonological processes. In his ``Glottalized Continuants'' article, for example, he notes \citep[251ff.]{sapir38:glottalized.continuants} that in \ili{Navajo} the \isi{phoneme} /ỷ/ (glottalized [y]) only exists in \isi{alternation} with un-glottalized /y/, where it is produced as a result of the ``d-modification'' rule. As he observes, the reflex one would expect in \ili{Navajo} for ``d-modified'' /y/ is /z̧/; and this is in fact found in most cases. However, the \isi{regular} ``d-modified'' forms of /m/ and /n/ are (glottalized) /m̉/ and /n̉/; and he suggests that the (otherwise nonexistent) glottalized /ỷ/ arose by \isi{analogy} with these segments. The `\isi{analogy}' involved can only be based on the notion that (at least for sonorants), ``d-modification'' involves an \isi{alternation} between segments with and without the phonetic property of glottalization. Recall also that he speaks of the phonemic reality of language for a speaker/hearer as ``an ideal flow of \emph{phonetic} elements'' (emphasis supplied). {\Sapir}'s phonemes are thus `ideal' in the sense of constituting a mental reality which may correspond only indirectly to physical events—but not in the sense of having no phonetic properties.
The phonemic properties of a segment are those assigned to it in the speaker/hearer's mind, but the result is still something that can be regarded as an ideal sound rather than a complete abstraction. Reinforcing this interpretation is an important constraint noted by \citet{mccawley67:sapir} on {\Sapir}'s phonemic analyses. These are quite consistently presented in the form of charts of the phonetic segments that occur in a language, in which some elements are enclosed in parentheses. The parenthesized segments are those that are regarded not as phonemes but as variants of other, phonemic segments. As a result of this way of conceiving of phonemes, it is clear (as {\McCawley} argues) that the set of phonemes for {\Sapir} is always a subset of the set of occurring phonetic types. He did not, thus, allow for analyses in which some phonemes are phonetically abstract in the sense of combining a collection of phonetic properties that never occur together in any surface segment—a type of analysis proposed for several languages in the early years of generative phonology, and the basis for a part of the so-called `\isi{abstractness}' controversy (section~\ref{sec:abstractness}). I will suggest below that {\Sapir}'s constraint on the segments that can occur in phonemic forms is simply one part of a larger limitation on the extent to which \isi{phonemic representations} can deviate from the \isi{regularities} that characterize phonetic forms. What is of interest to us here is the following: {\Sapir}'s presentation of phonemic elements as the non-parenthesized subset of a language's segment inventory shows that phonemes cannot be abstract in the sense of `phonetically non-occurring'; but it also shows that they are quite concrete in the sense of being homogeneous with phonetically complete (i.e., fully specified) segments. They thus have phonetic properties, even if (a) these properties alone do not constitute the primary reality of the \isi{phoneme}, since it is its `use' within the system of the language that primarily determines its linguistic essence; and (b) the properties of the \isi{phoneme} corresponding to a given phonetic segment may not be determinable by direct physical measurement (for reasons that I will explore below). \citet{mccawley67:sapir} also notes that {\Sapir} seems to have conceived of phonemes not as collections of properties but rather as unitary individuals: as he puts it, logically similar to proper rather than common nouns. It is interesting to observe that this point of view would have allowed {\Sapir} to respond to an objection made by {\Bloomfield} concerning the linguistic significance of \isi{phonetic representations} (had he addressed the question). The issue (which will be dealt with in more detail in chapter~\ref{ch.bloomfield}) is this: if one thinks of the segments in such a representation as characterized by the phonetic properties which are observed and recorded in them, then any (humanly accessible) \isi{phonetic transcription} must be incomplete due to the possibility that additional properties not noted explicitly in it could in principle be distinguished as well. If one thinks of a phonetic segment as a unitary whole, however, as {\Sapir} apparently did, then a possible response to this charge of necessary incompleteness would be that simply to name an individual is sufficient to provide a unique identification, even if all of its properties are not known. 
Even though {\Sapir} did not conceive of a \isi{phoneme} as defined by a collection of phonetic properties, there is another sense in which a \isi{phoneme}'s linguistic identity is decomposable into a number of individual factors. An essential characteristic of a \isi{phoneme} is that it forms part of a small finite inventory of comparable elements, which together constitute a system. Indeed, a \isi{phoneme} is described as ``a functionally significant unit in the rigidly defined pattern or configuration of sounds peculiar to a language'' \citep[46]{sapir33:reality}. Individual phonemes are thus sounds which are located in an ``inner configuration of the sound system of the language'' \citep[41f.]{sapir25:sound.patterns}, and the place of a (phonemic) sound in such a structure is given not by its objective phonetic properties, but rather by ``a general feeling of its phonetic relationship resulting from all the specific phonetic relationships (such as parallelism, \isi{contrast}, combination, imperviousness to combination, and so on) to all other sounds''. ``Parallelism'' here may well be based on the phonetic properties sounds have in common, but this is only part of the story. Sounds are also close to one another in the pattern of a language if they share aspects of distribution. For instance, the \ili{English} phonemes /p, t, k/ belong together not only because they constitute the voiceless \isi{stops} of the language but also because (a) they occur initially, medially, and finally; (b) they may be preceded by \emph{s} in all positions; (c) they may be followed by \emph{r} initially and medially; (d) they may be preceded by \emph{s} and followed by \emph{r} initially and medially; (e) each has a voiced correspondent. Proximity of sounds in a language's pattern may also be shown by the alternations they enter into: thus, in \ili{English} /f/ and /v/, /s/ and /z/, /θ/ and /ð/ are related because they alternate (\emph{wife}/\emph{wives}, [a] \emph{house}/ [to] \emph{house}, \emph{bath}/\emph{bathe}, etc.), while /p, t, k/ are not grouped by such a relationship with /b, d, g/ (though they are in \ili{German}). The full system of a language's phonemic pattern is thus given not by phonetic factors alone (though these are not irrelevant), but also by a wide range of distributional, morphological, and other non-phonetic properties in terms of which sounds may be similar or different. It follows from this that what is the same inventory of phonemes from a phonetic point of view could be organized into more than one distinct system; and {\Sapir} makes this conclusion explicit in his papers on phonological structure. He notes, for example, that essentially the same pair of phonemes (/θ/ and /ð/) can be found in both \ili{English} and \ili{Spanish}, but that the structural connection between them is much closer in \ili{English} (where they are related by alternations) than in \ili{Spanish} (where they are not, and where on the contrary /θ/ alternates with velar /k/ instead: \emph{opaco} {[oˈpako]} `opaque', \emph{opacidad} {[opaθiˈdad]} `\isi{opacity}'). The converse of this, that the same ``inner configuration of the sound system of the language'' could be built on phonetically distinct segment inventories, is also argued, thus establishing the essential role of non-phonetic factors in determining the character of a \isi{phonological system}.
It is not clear that {\Sapir} ever actually worked out the entire phonology of a language on the basis of the sort of property he argued was fundamental in determining phonological structure, but he did include in his descriptive work numerous references to affinities between sounds that were established on this basis. The importance of this point of view was that, while it is primarily a theory of the nature of elements occurring in a class of \isi{representations}, the elements themselves and their relations to one another are defined in terms of the rules of the language: rules governing distribution, on the one hand, and rules describing alternations on the other. The resulting theory is a theory of \isi{phonemic representations} that we can characterize as a `\isi{fully specified basic variant} view' in the terms of chapter~\ref{ch.saussure_sound}, similar in that respect to the early views of {\DeCourtenay} and the first works of {\Trubetzkoy}. Subsequent work, especially that influenced by {\Bloomfield}, would depart from this position on many points: by abandoning {\Sapir}'s psychological approach for an external one, by reducing the content of phonemic elements to the minimum of properties necessary to specify their distinctive function, and by sharply reducing the \isi{abstractness} of the relation between phonemic and phonetic form. In all of these respects, American phonology followed a course of development similar to that found between the early views of {\Baudouin} and the later position of {\Trubetzkoy} and {\Jakobson}'s work. Another difference between {\Sapir}'s conception of phonemic structure and that of later American structuralists bears some relation to all of these points. The range of non-phonetic \isi{regularities} (stated in a grammar by rules) which were considered as relevant to establishing a phonemic system soon came to be restricted to questions of the surface distribution of phonetic segments alone. {\Sapir}'s position accords a crucial role to the study of rules in establishing the nature of phonemic elements, where the class of rules involved is a rather comprehensive one. An exclusive focus on \isi{regularities} of surface distribution would gradually result in a theory that only accorded theoretical status to the \isi{representations} themselves. \section{Sapir's descriptive practice in phonology} Most of {\Sapir}'s theoretical writing in phonology (e.g. \citealt{sapir21:language,sapir25:sound.patterns,sapir33:reality}) was devoted to establishing the notion of `phonemes' and the difference between a linguistically significant representation of \isi{sound structure} and a phonetic representation of speech as a physical reality. This is true even of \citealt{sapir21:language}, where the word \emph{phoneme} is not used as such, although there is nonetheless an obvious continuity of views with later work. Much valuable evidence concerning {\Sapir}'s conception of phonological structure can also be obtained from a consideration of his practice in describing particular linguistic facts (most comprehensively, in complete grammars such as \citealt{sapir22:takelma} and \citeyear{sapir30:s.paiute}). 
It is worth exploring these issues further, aside from whatever intrinsic interest the question may have, since {\Sapir} was highly regarded during his lifetime as an insightful descriptivist, and it was on the model of his work that his students and others sought to continue a `{\Sapir} tradition' (for some discussion, see \citealt{harris44:yokuts,harris45:navajo,harris51:rvw.sapir}, as well as \citealt{hymes.fought81:structuralism}). A prominent regard in which {\Sapir}'s practice differed from that of most subsequent American structuralists was the degree of \isi{abstractness} of the relation between phonetic and phonemic form. This was argued explicitly and exemplified in theoretical papers such as \citealt{sapir33:reality} and can be illustrated from virtually any of his descriptive works. It is convenient to discuss this \isi{abstractness} in terms of the degree of deviation {\Sapir} permitted from the principle that would later be taken to define ``the practical requirements of phonemics (i.e., given the \isi{phoneme} in an environment, we know what sound it represents; and given the sound in an environment, we know what \isi{phoneme} it represents)'' \citep[239]{harris45:navajo}: later referred to as the principle of \emph{bi-uniqueness}. It is quite clear (and often remarked) that {\Sapir} allowed for many descriptions in which the phonemic representation of a form cannot be determined from its phonetic form alone. Most of the examples in his ``psychological reality'' paper \citep{sapir33:reality} are intended to make exactly this point, and it pervades his descriptive work. As one example, in his description of \ili{Takelma} \citep{sapir22:takelma} he cites an instance in which as many as four different phonemic forms (the stem \emph{sā$^{a}$g} followed by nothing, by glottal stop, by \emph{tʽ} or by \emph{kʽ}) all converge on exactly the same phonetic representation (\emph{sākʽ}). Such a possibility arises because phonemic elements are construed not as re-codings of phonetic forms but as an inner psychological reality that corresponds to the physical event of speech, where the correspondence is mediated by a system of (possibly quite complex) rules. Among their effects, some of these rules might describe neutralizations of multiple phonemic forms in a single phonetic form—by specifying that under given conditions two different phonemes have the same variant, by deleting certain phonemic elements from the phonetic representation under given conditions, or by directly replacing one \isi{phoneme} by another in the environment in question. {\Sapir} was not concerned to state a procedure by which one representation could be recovered from the other without appealing to any other information: for him the relation between phonemic and phonetic realities might be mediated by any and all aspects of human cognitive abilities, and such phenomena as \isi{neutralization} were not seen as posing any problems of principle whatsoever. The possibility of having non-unique phonemic forms that correspond to the same phonetic representation was a point on which later linguists explicitly separated themselves from {\Sapir}. The kind of phonemic-to-phonetic relationship he envisioned would later be explicitly distinguished from phonemics and characterized as `morphophonemic'. 
It appears that {\Sapir} accepted this terminology at least in his last years, and in fact an important source of morphophonemic theory in American linguistics was {\Sapir}'s collaboration with {\Swadesh}, {\Newman}, and {\Voegelin} (among others of his students) around 1933-36. {\Sapir} uses the term morphophonemic in such late works as his `Glottalized Continuants' paper \citep{sapir38:glottalized.continuants}, and his students devoted much attention to morphophonemic as opposed to phonemic description (at least in the sense that term came to have for others). The absence of any systematic theoretical statement from {\Sapir} on the matter, however, leaves us in some doubt as to the role of a distinction between \isi{morphophonemics} and phonemics in his thinking. Less commonly remarked than the difficulty of recovering {\Sapir}'s phonemic forms from phonetic information alone was the fact that he does not appear to have required unique translatability in the other direction, either. That is, in some instances more than one variant may be assigned to the same \isi{phoneme}, without its necessarily being possible to determine in terms of phonological factors alone which variant will occur in a particular form. In his \ili{Southern Paiute} description, for example, he discusses at length \citep[47f.]{sapir30:s.paiute} the distribution of the alveolar and palatal spirants and \isi{affricates} (\emph{s} and \emph{c} = [š], \emph{ts} and \emph{tc} = [č]). He finds that the distribution of the alveolar and palatal segments is largely complementary (determined by the qualities of the surrounding vowels), and on this basis \emph{c} is described as a variant of \emph{s} and \emph{tc} as a variant of \emph{ts}. The rule distributing \emph{tc} and \emph{ts} is straightforward: \emph{ts} appears before \emph{i} and \emph{tc} elsewhere. The distribution of \emph{c} and \emph{s} is more complicated, and depends on an interplay of preceding and following vowel. However, there is a small residue of cases in which one of the two appears in the environment characteristic of the other, including a few near-minimal contrasts: cf. \emph{ɔsɔrɔŋwi-} `to snore' vs. \emph{qɔc·ɔvï-} `tinder', or \emph{ ta-na'c·ix̯a} `cleft in hoof' vs. \emph{pi-na's·ix̯a} `between one's legs'. Some surface contrasts of this sort may be due to the operation of a set of \isi{assimilation} rules (\emph{Ibid}, pp. 54ff.) when more than one spirant or affricate appear in a word, and some may result from morphologically governed `analogies'. Nonetheless, there is also a class of instances in which the distribution of alveolar \emph{vs}. palatal \isi{articulation} is simply not predictable. Here, as elsewhere, what is important to {\Sapir} is that in the overwhelming majority of cases these two phonetically similar types are not in \isi{contrast}, and there are frequent interchanges between them depending on the vocalic environment. On this basis, {\Sapir} treats them as phonemically identifiable: in the dictionary accompanying the \ili{Southern Paiute} grammar, he describes \emph{c} as a ``mere'' variant of \emph{s}, and thus not a ``primary sound,'' even though their complementarity is not quite absolute. This is a sort of violation of the principle quoted above from {\Harris} that later phonemicists, more concerned with methodological rigor than with representing their intuitions about a language under discussion, would certainly not countenance. 
Nonetheless, it is a sort of situation which we might well find fairly often in natural languages if we were to look closely for it. Consider the following situation, for example. In \ili{Danish}, the (short) vowel \emph{a} shows considerable \isi{variation} in quality depending on a following consonant, ranging from a rather back vowel before \emph{r} to a rather front vowel before dentals. Before velars and labials, however, there is some difference across dialects: in some, \emph{a} before these segments is roughly as front as before dentals, but for other (especially conservative) dialects, \emph{a} in such positions has an intermediate degree of backness. In general, each individual dialect has a predictable distribution of such phonetic variants, though the principles involved differ slightly from one to another. Now imagine a speaker of \ili{Danish} who has been exposed in early childhood to two distinct dialects; suppose, for example, that the child was brought up in part by a nursemaid speaking a dialect different from that of his parents. In a few words the nursemaid's pronunciation may be retained, even though the parents' dialect is acquired as a whole. As a result, the speaker's lexicon may contain apparent contrasts between a relatively front \emph{a} and a further back quality before labials or velars: \emph{bamse} `teddy-bear' with front [a], but \emph{hamstre} `hoard, accumulate' with further back [ɒ], reflecting the source of the former word in one dialect and the source of the latter in a slightly different one.\footnote{This is essentially the situation described to me by the late \name{Jørgen}{Rischel} in his own pronunciation.} When such a situation is encountered, it is usual to dismiss its significance for synchronic phonologies by saying that it involves two (or even more) coexistent systems, each of which is valid for a distinct portion of the vocabulary. Even if we admit that such an account somehow solves the descriptive problem as far as our hypothetical \ili{Danish} speaker is concerned, that is not the end of the matter: this speaker's children may well learn their language from him, preserving the minute vowel distinctions within his vocabulary, and in their speech there is no longer a question of non-homogeneous sources or distinct coexistent systems. Though explicit descriptions of such a situation are rare in the literature, a certain amount of anecdotal evidence and unsystematic personal observation suggests that it may well arise with some frequency in real speech communities. One way to characterize this state of affairs, of course, is to say that for the speakers in question, the result of the (original) dialect mixture is the creation of a new \isi{contrast} between two short \emph{a} phonemes. This suggests, however, that the difference is one which is capable of differentiating words by itself: i.e., that a form like \emph{bamse} pronounced with the back vowel of \emph{hamstre} would constitute a potential new word of \ili{Danish}. Alternatively, however, one might adopt a solution similar to that of {\Sapir}'s description of alveolars and palatals in \ili{Southern Paiute}: treat both of the \emph{a}'s as corresponding to the same phonemic entity, but allow for the realization of that entity to differ slightly from word to word as a lexical property of individual items which is not completely governed by rule. 
In that case, \emph{bamse} pronounced with a back \emph{a} would not be a \emph{new} word of \ili{Danish} but, rather, a different (possibly incorrect) pronunciation of `teddy-bear'. Another example, different in its origins but similar in its consequences for phonemic analysis, is cited by \citet[301f.]{hockett99:bloomfield.bio}. In a letter to {\Jakobson} dated 29 May, 1945 \citep[749]{halle88:bloomfield-jakobson}, {\Bloomfield} responded to a point that {\Jakobson} had apparently just brought to his attention: \begin{quotation} the idiosyncratic peculiarity in the \ili{Russian} word for `sun', orthographically \emph{solnce}: the stressed first \isi{syllable} of this has a higher mid back rounded [o], as is usual before nonpalatalized /l/, in place of the more general lower mid back rounded [ɔ] — but in pronunciation there is no [l]! On this {\Bloomfield} writes: ``Your comment [\ldots] is very enlightening. As soon as one gets a certain way into a language, one finds breaks and outcroppings in the phonemic system. I take this to be due to the constant workings of \isi{sound change} — some of these manifest themselves in this way; the system is never completely in balance.'' \end{quotation} Such allowance for lexically idiosyncratic \isi{variation} within the realization of a single phonological category might be described in several ways. In {\Sapir}'s case, an individual lexical entry for a form contains (as we will note below) both the phonological representation and a list of its principal occurring surface variants. Thus all forms with either \emph{s} or \emph{c} are listed together under \emph{s} in his \ili{Southern Paiute} dictionary \citep{sapir31:s.paiute.dict}, but those which show phonetic \emph{c} (generally for completely predictable reasons) have the phonological unit in question represented in this way. In such a framework the state of affairs under discussion here is described by allowing different phonetic sub-entries to be associated with the same phonemic form. Alternatively, if one distinguishes in \isi{representations} between numerically specified `detail values' for features and the binary valued categorial interpretation imposed on them (as sketched, e.g., in \citealt{sra74:orgphon}), one might simply allow lexical \isi{representations} corresponding to a given category to vary slightly within the range of that category. The point to be made here is not that such a minor insouciance on {\Sapir}'s part toward the precise distribution of variants as is shown by his description of \ili{Southern Paiute} alveolars and palatals corresponds to some deep insight which was buried by the hostile attitudes of later phonologists. There is no reason to believe that {\Sapir} intended to make systematic use of the sort of descriptive possibility just raised. What is important, however, is the recognition that explicit attempts to make \isi{phonemic theory} as rigorous as possible in subsequent years involved the claim that any phonetic difference between forms necessarily corresponded to one of three possibilities: (a) a contrastive distinction between two phonemes; (b) a completely predictable difference between two variants (`allophones') of the same \isi{phoneme}; or (c) free \isi{variation} between variants of the same \isi{phoneme}, with no phonological role.
\begin{wrapfigure}{r}{.4\textwidth} \includegraphics[width=.9\textwidth]{figures/Sapir-1937.jpg} \caption{Edward Sapir (1937)} \label{fig:ch.sapir.sapir_1937} \end{wrapfigure} In fact, a fourth possibility was implicit in {\Sapir}'s practice: a difference between variants of the same \isi{phoneme}, which does not correspond to a \isi{contrast} between two potentially distinctive phonological units, yet is not `free \isi{variation}' either, since it is distributed idiosyncratically in particular lexical items. We have only hinted above at an answer to the question of exactly what phonological functions such a description could serve, and what situations it might be appropriate for, but perhaps it warrants further examination—which is only possible if the methodological assumptions of phonemicists since {\Sapir} are reexamined.\\ \section{Rules and their interactions in Sapir's phonology} I have noted above in passing that {\Sapir} imagines a phonemic representation to be related to phonetic form by the operation of a system of rules. In his descriptions (most of which, it should be recalled, were actually written at a relatively early stage of his professional career) he typically refers to the elements of a phonemic representation as `organic'; it is in terms of whether a given element is organic or not that it may differ from a merely phonetic segment—and, as I have emphasized here, not in terms of its nature as a relatively complete specification of a (possible) articulatory segment. Organic elements of a form are also referred to sometimes as `morphological'—by which it is not meant that they serve individually as the realization of content, but rather that in the form in question they are present in the phonological representation of some morphological unit (rather than being introduced by rule). From this fact and others, it is easy to see that the difference between `organic' and `inorganic' segments is a function of the status of a particular \isi{token} of a segment, not of an entire segment type: the same segment type may be organic in some occurrences and inorganic in others. Some segment types may be variants (rather than phonemes), however, in which case the phonetic identity of the segment in question is always introduced by rule. The rules stated in {\Sapir}'s descriptions may be distinguished in various ways, though whether any of these classifications had any systematic status for {\Sapir} is open to question. One difference which he appeals to more or less explicitly in his \isi{typology}, however, is that between phonetic processes which by themselves serve to mark some grammatical category (`symbolic' processes, like vowel ab\-laut in \ili{Germanic} for example) and those which either serve simply as accessory marks of some category or are determined by phonological factors alone. A particularly important difference is that between rules which state \isi{regularities} of distribution and rules that describe actual alterations in the (ideal, or phonemic) form of words. {\Sapir} follows roughly the outline of {\Boas} in stating as many general distributional \isi{regularities} as possible that hold over the entire inventory of surface forms in a language—restrictions on possible consonant clusters, vowel sequences, adjacent or other multiple stresses within the same word, co-occurrence of particular vowels and \isi{consonants} or accentual elements and \isi{syllable} types, etc. 
These are formulated as generalizations about the range of surface forms in the language, and subsequent statements of rules that alter the form of linguistic elements are, whenever possible, motivated by (or at least related to) such restrictions and generalizations. We can also distinguish rules which describe the distribution of properties (or variants) whose occurrence is completely predictable from those that relate independently occurring segment types. In \ili{Southern Paiute}, for example, {\Sapir} treats such properties as \isi{stress} and the devoiced variants of vowels as predictable, while other segment types are potential phonemes. In consequence, since these variants are everywhere predictable, their appearance is completely abstracted away from in giving lexical (or phonemic) \isi{representations}. Other elements which may be introduced by rule, in \isi{contrast}, also have an independent status in the language as phonemes: in \ili{Southern Paiute} a \isi{nasal consonant} may be phonemic or it may be introduced by rule (after a `\isi{nasalizing}' stem); long and short vowels are phonemically distinct, but rules lengthen short vowels or shorten long ones in various positions; \emph{tc} (a variant of \emph{ts}) is an independent \isi{phoneme} from \emph{t}, but it is also the result of the \isi{palatalization} of \emph{t} after \emph{i}, etc. In cases where the presence of a phonemic element within a morphological unit is predictable by some rule affecting morphological sequences, it is nonetheless written as such in phonemic forms: thus, intra-morphemic \emph{tc} after \emph{i} is always written as such, and never as \emph{t}. As a result of this practice, we may well find two distinct statements of what is apparently the same regularity: in \ili{Southern Paiute} the statement that only spirantized forms of the (non-geminated) \isi{stops} appear when they are intervocalic is quite separate from the rule of spirantization that affects single prevocalic \isi{stops} after vowel-final stems. With regard to the \emph{t}/\emph{tc} \isi{alternation}, a statement that within morphological units only \emph{tc} (and not \emph{t}) appears after \emph{i} is needed in addition to the \isi{palatalization} rule which replaces \emph{t} by \emph{tc} when \emph{i} precedes. The relation between the two is generally made overt, however: the rule governing the \isi{alternation} will be explicitly motivated by the need to maintain the overall regularity (\emph{tc} but not \emph{t} appears after \emph{i}) where it would otherwise be violated in the juxtaposition of independent elements. Relations between `organic' (or phonemic) forms and phonetic structure may be quite complex, and the rules which establish the relation are not necessarily independent of one another. {\Sapir}'s generalizations are quite uniformly formulated as processes which replace one representation with another (`\isi{item and process}' descriptions, as \citet{hockett54:two.models} would later christen them), as opposed to the static statements of distributional \isi{regularities} (`item and arrangement') which were the norm in most later American phonemic theories.
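As a purely schematic illustration (stated in modern rule notation rather than in {\Sapir}'s own terms), the two modes of description might render the \emph{t}/\emph{tc} facts just cited roughly as follows: an `\isi{item and process}' statement takes the form of a replacement rule,
\begin{center}
$t \;\rightarrow\; tc \;/\; i\;\rule[-0.3ex]{1.5em}{0.4pt}$
\end{center}
while the corresponding `item and arrangement' statement is a static restriction on the distribution of segments: after \emph{i}, only \emph{tc} (and never \emph{t}) occurs within a morphological unit.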
{\Sapir} manifestly intended such replacements to represent a part of the synchronic grammar of the language, and not simply a fact of its history: in several places in both the \ili{Takelma} and the \ili{Southern Paiute} descriptions \citep{sapir22:takelma,sapir30:s.paiute}, as well as in his account of Wakashan in the `Glottalized Continuants' paper \citep{sapir38:glottalized.continuants}, he explicitly contrasts ``living'' processes in a language with those changes which have only ``etymological'' value. The \isi{mechanism} of rules which replace one representation with another lends itself naturally to a particular expression of rule interrelations. When one rule presupposes information that is provided by another in order to operate correctly, we normally formalize this situation by applying that rule after all others whose operation it presupposes; and {\Sapir} employs ordering at least as a metaphor in the same way (again, as a synchronic descriptive device rather than exclusively a historical account). \citet{kenstowicz75:application} explores the assumptions of {\Sapir} and others in this regard, and establishes a number of properties of the ordering relation as employed by {\Sapir}. He notes, for example, that the role of ordering is in general limited to the description of what \citet{kiparsky:3dimensions} christened as `feeding' situations (cases, that is, in which one rule has the effect of creating new instances for another rule to apply to). Whenever a rule creates a situation to which another rule could apply, {\Sapir} assumes that the natural state of affairs is for this subsequent modification to take place. He does sometimes observe explicitly that this is the case—for example, ``an original \emph{γ} is sometimes weakened to a glide $^{\emph{γ}}$ or even entirely lost before or after an \emph{u}-vowel, more often after an \emph{ï}-vowel. \emph{Vocalic contractions may then result}'' (\citealt[52]{sapir30:s.paiute}; emphasis supplied). More often, however, he lets this sort of situation go unremarked. In the case of (what are now called) `bleeding' orders (cases in which a rule is prevented from applying to a form that would otherwise undergo it by the intervention of a second rule) or `counter-feeding' orders (where a possible feeding relationship does not obtain), the failure of a rule to apply is stated explicitly. In \ili{Southern Paiute} there is a general rule by which ``when an initial \emph{w} comes, by derivation or compounding, to stand after a vowel, it regularly becomes nasalized to `-\emph{ŋw}-''; we are also told that ``[t]his rule does not operate, however, when \emph{w} becomes intervocalic by \isi{reduplication}'' \citep[49]{sapir30:s.paiute}. A possible alternative descriptive account would simply order the rule \isi{nasalizing} \emph{w} after processes of derivation and compounding, but before \isi{reduplication}; however, {\Sapir} employs order only to describe (realized) feeding relationships rather than in the more general fashion of many later theorists. Another particularly important descriptive device in {\Sapir}'s work, supplanting order in many instances, is a direct appeal to the difference between organic and inorganic elements. In describing the \isi{stress} system of \ili{Southern Paiute}, he states a general principle of \isi{stress} on alternate moras to which (he makes clear) there are numerous apparent exceptions.
These exceptions are all due, however, to the operation of rules shortening, lengthening, or diphthongizing vowels, or coalescing them, or inserting glide vowels in certain cases. The overall generalization is ``that all inorganic increments and losses have no effect on the mora-construction of the word'' \citep[38]{sapir30:s.paiute}. One might formulate this simply by ordering the \isi{stress} rule before any of the other processes, but apparently the non-feeding nature of the relationship in question makes it preferable for {\Sapir} to state it in terms of the difference between organic and inorganic elements. It is worth noting that there is actually one class of exceptions to the generalization that \isi{stress} is assigned to alternate organic moras: {\Sapir} notes that ``long vowels resulting from contraction of long + short vowels, however, count as ordinary long vowels [\ldots] Similarly, vowel plus diphthong results in a two-moraed diphthong. [\ldots] In other words, no three-moraed syllables are found'' \citep[38]{sapir30:s.paiute}. This would imply that (in a description based entirely on ordering) the \isi{stress} rule should apply after vowel coalescence, but before other shortening, lengthening, etc., rules. Again, since the generalization is not one involving a feeding relationship, {\Sapir} chooses to describe it in terms of the nature of the \isi{representations} involved rather than in terms of the interaction of rules. \section{The relation between rules and representations} We turn finally to a feature of {\Sapir}'s work which was responsible (at least indirectly) for a considerable amount of comment in the phonological literature: his conception of the lexical structure of individual linguistic elements within a comprehensive description. Though he never specifically discusses this question in theoretical terms, relevant information is to be found scattered throughout his work. Most importantly, we have the example of his \ili{Southern Paiute} dictionary \citep{sapir31:s.paiute.dict} to go by. A dictionary representation must, of course, specify all of the information about an element which is `morphological' in character—i.e., its organic phonological shape and any additional idiosyncratic properties which distinguish it from other elements in the language. The format of entries in the \ili{Southern Paiute} dictionary is rather interesting, however. Each item is presented in (a) an approximation to its organic (or `phonologic') form; and (b) the surface form of representative words in which it appears. The latter are of course predictable from the former by rule. We can observe that any alternants which differ from the organic form (or from one another) in terms of some phonemic segment are generally given; alternants which differ in terms of variants (i.e., completely predictable segment types, the parenthesized elements in a chart of the occurring phonetic segments) are not necessarily made explicit. Thus, \isi{stress} and \isi{vowel} or sonorant \isi{devoicing}, for example, are always predictable in \ili{Southern Paiute} for {\Sapir}, and so \isi{stress} is not marked and all vowels and sonorants are shown as voiced in lexical entries (though these properties are of course marked in the surface full-word forms given to exemplify lexical items).
The presence of \emph{both} phonemic and (more nearly) phonetic forms in a lexical entry could have allowed {\Sapir} to describe exceptional lexical items: thus, if some item fails to undergo a rule, he might simply not list the variant of it that would be produced by this rule, with the implication that the operation of the rule is blocked for this item. We can observe that this practice of giving both phonetic and phonemic forms would also allow {\Sapir} to describe lexically idiosyncratic phonetic realizations of a single \isi{phoneme}, as in the case of \ili{Southern Paiute} \emph{s} discussed above. In addition to the basic and variant forms of an item, a lexical entry may also contain information about its effect on other items. This is in the form of arbitrary markers (i.e., ones that are not interpretable as any phonological segment) which trigger certain morphologically conditioned rules in the grammar. The most celebrated example of this is the marking of every stem (and most affixes) in \ili{Southern Paiute} with respect to which of three arbitrary classes it belongs to: `spirantizing', `geminating', or `\isi{nasalizing}'. ``Here the deciding factor is the nature of the preceding stem or suffix, which, as far as a descriptive analysis of Paiute is concerned, must be credited, as part of its inner form, with an inherent spirantizing, geminating, or \isi{nasalizing} power'' \citep[63]{sapir30:s.paiute}. The inclusion of such arbitrary morphological properties in lexical items represents an implicit recognition that purely phonological accounts of alternations are not always possible: i.e., that it is not possible in such a case to reduce the characterization of an element as spirantizing, etc., to other aspects of its phonologically motivated form. This is a conclusion of {\Sapir}'s which, at least in this particular case, has been contested by several writers. Discussions by \citet{harms66:s.paiute}, \citet{mccawley67:sapir}, \citet[344ff.]{spe} (as well as untold numbers of student homework assignments and term papers) have striven to find phonological characterizations of the difference among {\Sapir}'s three classes of \ili{Southern Paiute} stems and suffixes. Typically it is observed, for example, that the effect of spirantizing morphemes is exactly what would be predicted if the items in question simply ended in a vowel (since intervocalic single \isi{consonants} always appear in spirantized form), provided `geminating' and `\isi{nasalizing}' morphemes ended in something else. It is usually suggested that \isi{nasalizing} elements end phonologically in a \isi{nasal consonant}, while geminating ones end in a voiceless vowel (\citeauthor{harms66:s.paiute}) or an unspecified obstruent (\citeauthor{spe}). On the basis of this characterization, a set of purely \isi{phonological rules} can be formulated to derive the three variants of following elements, eliminating the morphological operators `spirantizing', etc. \begin{wrapfigure}{r}{.4\textwidth} \includegraphics[width=.95\textwidth]{figures/Sapir-Yale.jpg} \caption{Edward Sapir (ca. 1938)} \label{fig:ch.sapir.sapir_yale} \end{wrapfigure} There is no reason to imagine that {\Sapir} simply missed the possibility of such a solution: his descriptive insight into the language was clearly quite sufficient to see this analysis, and in fact many of its ingredients are present explicitly in his description (see, e.g., the remarks on spirantization as the normal case in \citealt[63f.]{sapir30:s.paiute}). 
If he chose to describe this situation by means of morphological and not phonological properties of elements, it is likely that some systematic reasoning lay behind the decision. In fact, it is possible to show that the solutions in the subsequent literature were not available to {\Sapir} in principle. Furthermore, it can be shown that his description is in fact more accurate on empirical grounds than the phonological alternatives that have been proposed. We can see immediately that one of these possibilities would not have been open to {\Sapir}: Harms\ia{Harms, Robert}'s proposal to identify geminating stems by a final voiceless vowel, as opposed to final voiced vowels in the spirantizing class. For {\Sapir}, \isi{voicing} of vowels was a property that was completely predictable, and voiceless vowels were variants (rather than phonemes). Since an organic representation consists only of phonemes, it could not contain voiceless vowels, and so Harms\ia{Harms, Robert}'s solution is excluded. Note that it will not do to say that the geminating class of morphemes in itself establishes the phonemic status of voiceless vowels: it remains the case that voiceless vowels only exist in surface forms under specific, predictable conditions, and since this generalization remains valid regardless of the morphological \isi{alternation} in question, voiceless vowels are excluded in principle from organic \isi{representations}. I confine my attention, therefore, to \citeauthor{spe}'s proposal that geminating elements end in an obstruent consonant (which they suggest is actually \emph{t}, though that is irrelevant to the present discussion) and \isi{nasalizing} ones in a \isi{nasal consonant}. From {\Sapir}'s point of view this solution would not have been unacceptable in the way Harms\ia{Harms, Robert}'s would be, since both obstruents and \isi{nasal consonants} of course appear in phonemic forms; but there is a different problem. This is that \ili{Southern Paiute} forms always end in a vowel, and positing stems and suffixes that end in \isi{consonants} (obstruents or nasals) would violate what is otherwise an absolutely valid generalization about the language. This generalization is not true of phonetic forms directly, since apparently consonant-final words are created by rules of vowel \isi{devoicing} and subsequent reduction of voiceless vowels or their absorption into a preceding spirant, or by elision of final short vowels before a word beginning with a vowel. It is also true that words may end with a glottal stop; but {\Sapir} argues in his phonological description that an organic glottal stop is actually associated with a \isi{syllable} as a whole, and not with a specific sequential position within it. It may be realized syllable-finally, but more often is found somewhere within the vowel \isi{articulation}—or even in a neighboring \isi{syllable}. When these predictable effects are abstracted away from, however, the generalization remains that \ili{Southern Paiute} words end in open syllables, and {\Sapir} quite evidently interprets the same generalization as applying to individual lexical elements. Morpheme-final obstruents and nasals, then, could not be posited in phonological forms without violating a canonical pattern of the language. The issue which we suggest {\Sapir} implicitly resolved on this basis was only later raised explicitly. 
\citet{hale73:lardil} argued that in several languages, the assumption that a completely general phonological account should be given for a morphologically limited phonological pattern leads to incorrect consequences for reasons quite close to those that apparently motivated {\Sapir}. Strictly speaking, the discussion of these examples falls outside of the historical purposes of the present work. Since the issue is one of some significance, however, and since it can be shown that {\Sapir}'s solution is actually validated by the evidence from \ili{Southern Paiute} along the same lines as those argued by Hale, an appendix to the present chapter contains a further elaboration of these points. It appears, in fact, that the generalization at work here is of a piece with other \isi{constraints} imposed in effect on {\Sapir}'s phonological analyses. The fact noted by {\McCawley} that the set of phonemes constitutes a subset of the set of surface phonetic segments is clearly of the same order, since a posited \isi{phoneme} which did not meet this requirement would violate an obvious generalization about surface forms. Another instance of the same respect for surface \isi{regularities} is to be found in the fact that ``any characteristic common to all the alternants of a morpheme will appear in {\Sapir}'s phonologic representation of it'' \citep[110]{mccawley67:sapir}. This does not mean that the phonologic representation is necessarily one of the occurring surface alternants, of course, or even that all of the components of such a representation show up in some surface alternant (e.g., the absolutive suffix in \ili{Southern Paiute} is assigned the representation \emph{pi}, despite the fact that the initial consonant always shows up as spirantized \emph{v}, geminated \emph{p:}, or nasalized \emph{mp}). It does entail, however, that any valid generalization about the surface form of an element will be reflected in its \isi{underlying representation}. In {\Sapir}'s phonology, then, generalizations induced from surface representation play an essential role in constraining the set of underlying (`phonemic', `phonologic', `organic') \isi{representations}. Even though the relation between the two in any particular case may be quite complex, the two are unified on a global basis across a language by a single set of \isi{regularities} of distribution and canonical structure. Importantly, we should recall that the same set of \isi{regularities} plays a significant role in determining the class of phonological rules. These are in general assumed to operate so as to affect morphologically complex \isi{representations} in which some generalization would otherwise be violated, so as to make them conform to the generalization. For example, a language allowing only two-consonant clusters may insert an epenthetic vowel in the environment CC{\gapline}C \emph{so as to avoid a cluster of three consonants}. This rationale for the existence of \isi{phonological rules}, taken over largely from {\Boas}, serves as the basis of a distinction between genuinely phonological and (at least partially) morphologized rules for {\Sapir}. The features of {\Sapir}'s practice which have most often been noted are the distance between his \isi{underlying representation}s and surface phonetics, and the role played by rules of \isi{alternation} (formulated as replacement processes) in his grammars. 
To understand the central points of his analyses, however, it is important to see that both of these components of an analysis are construed by him as organized by the \isi{regularities} obtaining in surface forms. The elements of \isi{phonological representations} are determined by reference to rules of the language: as observed above, the natural classes and linguistically significant affinities among phonemic elements are given by the \isi{regularities} of distribution and \isi{alternation} that they participate in. Conversely, the \isi{phonological rules} of the language function primarily to maintain a regularity determinable from the canonical form of surface \isi{representations}. Few linguists, historically, have held as unified a view as {\Sapir} of the interrelationship of considerations of rules and \isi{representations} in a single structure within natural languages. In some ways, however, {\Sapir}'s use of surface \isi{regularities} to drive other aspects of a language's phonology anticipates the central role of \isi{constraints} on surface form in \isi{Optimality Theory}, an approach we consider in chapter~\ref{ch.otlabphon}. \newpage \begin{figure} \includegraphics[width=.75\textwidth]{figures/Sapir-utes.jpg} \caption{Edward Sapir (1909) {[In glasses, with group at Mrs. Dodd's, Uintah Ute Reservation, White Rock, Utah. J. Alden Mason peering from bushes]}} \label{fig:ch.sapir.sapir_utes} \end{figure} \section*{APPENDIX: Abstractness and {\Sapir}'s analysis of Southern Paiute} \addcontentsline{toc}{section}{APPENDIX: Abstractness and Sapir's analysis of Southern Paiute} A central feature of {\Sapir}'s analysis of \ili{Southern Paiute} is his claim that the spirantizing, \isi{nasalizing}, or geminating effect of stems (and affixes) on a following affix is to be stated as a morphological property of individual elements, rather than in terms of phonologically derived consequences of the elements' underlying shape. The most plausible alternatives to that analysis discussed above all involve positing underlying shapes that violate an otherwise valid generalization about the language—that words (and thus stems) do not end in \isi{consonants}. They would thus be inconsistent with {\Sapir}'s overall approach to phonological structure. A set of similar cases is discussed by \citet{hale73:lardil}, including a particularly clear example furnished by the phonology of several languages of the Polynesian family. Hale\ia{Hale, Kenneth}'s discussion is based on Māori, but we take our examples below from Sāmoan, where the facts are quite similar, but somewhat more complex and revealing. In Sāmoan (as in Māori) there are no consonant clusters, and no final \isi{consonants}; i.e., all syllables are open. There is very little inflectional morphology in the language, but most verbs have a so-called `passive' form, and many have a plural, reciprocal, and/or a gerund form in addition. The passive is almost always formed by the addition of a suffix which is the reflex of proto-Polynesian *-\emph{C-ia}, where the particular consonant that appears differs from one verb to another. Thus, \emph{o'o} `arrive, reach' has the passive \emph{o'otia}, but \emph{oso} `jump', passive \emph{osofia}; \emph{ula} `make fun of', passive \emph{ulagia},\footnote{Sāmoan \emph{g} = [ŋ].} \emph{inu} `drink', passive \emph{inumia}, etc.
Since the consonant that appears in the suffix depends on the root that is involved, the lexical entry for a root must contain some indication of which form the suffix takes when added to it. There are evidently two basic alternatives open for the description of these facts. We might follow {\Sapir}'s example in his \ili{Southern Paiute} description and characterize each stem with an abstract morphological operator: thus, the phonological representation of \emph{o'o} would be simply /o'o/, with the additional indication that it is a `\emph{t}-stem' (as opposed to /oso/, which is an `\emph{f}-stem', etc.). In this case, the suffix itself would have several morphologically determined forms: /-tia/ with some verbs, /-fia/ with others, etc. Alternatively, we might incorporate the suffix consonant into the representation of the stem itself, giving \isi{underlying representation}s such as /o'ot/, /osof/, etc. In this case, the suffix could be given a unitary underlying form /-ia/; and the correct forms could be derived simply by the application of a rule deleting final \isi{consonants} when no suffix follows. On that basis /o'ot/ would yield surface [o'o], but /o'ot+ia/ would give [o'otia] since the truncation rule would not apply in this form. This second solution involves no arbitrary morphological features, and only a single completely automatic phonological rule (final consonant truncation). It is evidently simpler from a purely formal point of view than the morphological account; and it is noteworthy that when \citet[219]{bloomfield:lg} discusses this example, he adopts that analysis without further comment, as if it were obvious. This is also an example frequently used in elementary linguistics classes, since the phonological account of the \isi{variation} is so easy to arrive at and may well represent the historical development. Nonetheless, there are some problems for this solution. From a purely phonological point of view, the fact that some suffix variants are not simply -\emph{Cia} poses a slight problem: the forms -\emph{ina} (e.g. \emph{salu} `sweep', passive \emph{saluina}), -\emph{a} (e.g. \emph{ave} `take', passive \emph{avea}), and -\emph{na} (e.g. \emph{'ai} `eat', passive \emph{'aina}) are not straightforwardly derivable from some stem form plus /-ia/. Assuming we ignore these as simply suppletive forms of the suffix, there still remain more serious difficulties. For instance, the causative form of verbs is made with the prefix \emph{fa'a-}, sometimes with \isi{reduplication} of the root. Interestingly, virtually all such causatives take the passive suffix /-ina/, regardless of the suffix taken by the noncausative: e.g., \emph{pa'u} `fall' has the passive \emph{pa'utia}, indicating that its stem-final consonant (on the phonological solution) should be /t/, but its causative \emph{fa'apa'u} has the passive \emph{fa'apa'uina}. Similarly, \emph{manatu} `remember' has the passive \emph{manatua}, but its causative \emph{fa'amanatu} `remind' has the passive \emph{fa'amanatuina}. The stem \emph{oso} `jump' apparently ends in /-f/, if this is what is indicated by the passive \emph{osofia}; but in that case the causative \emph{fa'aosooso} is doubly problematic: first because the supposed final /-f/ is not preserved before the reduplicated copy (i.e., the causative is not *\emph{fa'aosofoso}); and second because its passive is \emph{fa'aosoosoina} (not *\emph{fa'aosoosofina}).
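Before turning to possible responses to these difficulties, it may help to lay the two competing analyses out schematically, using only the simple forms cited above (the display is a summary sketch, not a notation drawn from any of the descriptions under discussion). On the phonological account, a single suffix /-ia/ and an automatic rule of final consonant truncation derive the attested forms:
\begin{center}
\begin{tabular}{llll}
 & `arrive' & `arrive (passive)' & `jump (passive)'\\
underlying form & /o'ot/ & /o'ot + ia/ & /osof + ia/\\
final consonant truncation & o'o & --- & ---\\
surface form & [o'o] & [o'otia] & [osofia]\\
\end{tabular}
\end{center}
On the morphological account, by contrast, the stems are simply /o'o/ and /oso/, lexically marked as a `\emph{t}-stem' and an `\emph{f}-stem' respectively, and the passive suffix has morphologically selected alternants /-tia/, /-fia/, etc.; no truncation rule is required, but the arbitrary lexical classification is.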
It might be claimed that the causative morphology involves truncating the final consonant and then replacing the passive ending appropriate to the root with another; but aside from the fact that this seriously weakens the explanatory force of the proposed final consonant in the phonological form, it will still not resolve all difficulties. In addition to the passive, Sāmoan also has another suffix (the reciprocal), which usually takes the form -\emph{C(a')i} with different \isi{consonants} depending on the root. Forms such as \emph{fe-ita-ga'i}, reciprocal of \emph{ita} `be angry', appear to lend support to the final consonant solution, since the passive of this stem is \emph{ita-gia}: the two show the same idiosyncratic consonant, a fact which is immediately explained if this consonant is part of the stem. But in that case, forms such as \emph{alofa} `love' pose a problem: its passive is \emph{alofa-gia}, but its reciprocal is \emph{fe-alofa-ni}. In such a case there is apparently no single form of the stem which can account for all variants. \citet{hale73:lardil} cites similar facts from Māori, together with a number of generalizations about the direction of regularization through \isi{historical change}, to argue that despite the \isi{simplicity} of the phonological solution, the descriptively adequate account of this \isi{variation} is actually the morphological one. Most importantly for our purposes, he argues that in this case (and in other unrelated ones which he examines), the morphological solution is to be preferred for a principled reason: because the phonological solution posits underlying forms (with final \isi{consonants}) which violate an important surface generalization about the language (all syllables are open), this disparity between the canonical forms of deep and surface \isi{representations} excludes the phonological solution altogether. I have argued above that it is exactly this consideration which (at least in part) led {\Sapir} to prefer a morphological account in \ili{Southern Paiute}: phonological \mbox{(near-)}surface forms always end in vowels, so it is not acceptable to posit underlying forms that violate this generalization by ending in obstruent or \isi{nasal consonants}. If {\Sapir} was thus influenced in choosing a morphological analysis of the consonant alternations in \ili{Southern Paiute} by considerations like those noted by Hale, this would still not be an empirical argument for his solution, and it is especially important (if we are interested in the correctness of the theory) to ask whether there is additional evidence bearing in the same direction. In fact, when we examine the phonology and morphology of \ili{Southern Paiute} (and {\Sapir}'s description of it) more closely, there turn out to be important problems for the phonological view which also argue in favor of the morphological account. In his discussion of why he adopts the morphological position, {\Sapir} notes that there is no independently motivated difference in the phonological shape of spirantizing, geminating, and \isi{nasalizing} stems to which their differential behavior could be related. Of course, that is just what is in question; and the claim of the phonological account is precisely that there is such a difference, but that it is neutralized if no suffix which would allow it to surface follows the stem. 
{\Sapir} argues on the basis of comparative data that there is no consistent etymological difference either, which would not of course bear directly on the synchronic descriptive issue (though it is strongly suggestive of a non-phonological account if one believes phonological alternations are usually, if not always, the reflex of \isi{historical change}). Much more significant than the sort of plausibility argument one might found on historical considerations, however, are the purely descriptive problems which a phonological account must face. Recall that on that view, suffixes are uniformly represented with simple \isi{consonants}, and the spirantized, geminated, and nasalized alternants arise by virtue of the effect of a stem-final vowel, obstruent, or nasal on this segment. This assumes that the suffix shapes appearing with particular stems are exclusively a function of a unitary underlying phonological property of stems. There are a number of suffixes, however, which have \isi{invariant} forms, regardless of the character of the stem they are attached to. Some of these are consistently nasalized (e.g., \emph{-ŋqï-} `Indirective, \emph{to}, \emph{for}'); some geminated (e.g., \emph{-q:u-} `numeral objective'); and some invariably spirantized (e.g., \emph{-γa-} `durative'). On one version of a strictly morphological account, all that would need to be said about these items is that their lexical entries have only a single phonetic form corresponding to their organic form: this has the effect of blocking the morphologically conditioned selection of non-occurring variants. On the phonological account, however, one must introduce additional rules: for example, to eliminate a posited underlying stem-final nasal or obstruent consonant before a suffix which is invariantly spirantized. A number of other suffixes appear in two of the three possible forms, but not in the third. In general, these appear either spirantized or nasalized but not geminated. After geminating stems these elements occur spirantized: for example, the agentive suffix \emph{-vi/-mpi} has no geminate form, and thus appears spirantized after the geminating stem \emph{nɔ:-} `to carry on one's back' (in e.g. \emph{niŋwï'-nɔ˙$^{\textit{ɔ}}$-ϕɪ} `person-carrier, mythical bird that carries people away in its talons'). {\Sapir} traces the nasalized variants of these suffixes to an independent phonological rule of nasalization applying after a nasal in the preceding \isi{syllable} (as in the future suffix \emph{-vania}, appearing as \emph{-mpania} in \emph{iviŋumpania} `will take a drink'). This process is distinct from morphological nasalization, which does not require a \isi{nasal consonant} in a \isi{nasalizing} element: for example, the stem \emph{pa'a-} `to be high' and the suffix \emph{-vi} `agentive', among others, are \isi{nasalizing} despite the lack of a \isi{nasal consonant} in their phonological form. {\Sapir} thus treats the nasalization of the `two-shape' suffixes as extraneous, and reduces the issue to a difference between forms that undergo the morphological processes of consonant \isi{alternation} and those that do not. A problem is posed for this account, however, by the fact that such suffixes appear nasalized after even those \isi{nasalizing} stems that contain no \isi{nasal consonant}: \emph{paγimpani} `I shall go', from \isi{nasalizing} \emph{paγi-} `go, walk' plus the future suffix \emph{-va/-mpa}.
In order to describe the exceptional items on the purely phonological account, it must be assumed either that they undergo spirantization even after obstruents or that they trigger exceptional deletion of a preceding obstruent (but not of a nasal). One interpretation of {\Sapir}'s morphological account, in \isi{contrast}, simply involves omitting a geminated alternant from the lexical entry: the nasalized form will be chosen correctly where appropriate, with the spirantized form treated as the `elsewhere' case. Some of these exceptional suffixes simply represent specialized uses of one of the variants of a \isi{regular}, alternating suffix. The suffixes \emph{-γi} `to come in order to {[---]}' and \emph{-γwa'ai} `to go in order to {[---]}', for example, are invariantly spirantized. They are clearly related, however, to the alternating suffixes \emph{-γi/k:i/ŋki} `to come while {[---]}ing' and \emph{-γwa'ai/k:wa'ai/ŋkwa'ai} `to go while {[---]}ing'. To describe this phonologically, we must assume the suffix acquired a special sense together with the necessary exception features to enforce spirantization even after consonant-final stems; but if we take {\Sapir}'s account, we need only say (as he does) that one of the existing variants of the \isi{regular} suffix acquires a special sense. Just as the description of the suffixes poses problems for the phonological account, a consistent description of the stems also seems difficult on that view. For example, the same stem may well appear with one suffix variant in some cases but with another in others: e.g., \emph{wa'a} `cedar' is normally geminating, as shown by \emph{wa'ap:ï} `cedar tree', but sometimes takes nasal suffixes as in \emph{wa'ampi} `cedar berry'. To describe such cases, we would have to assume the stem has two distinct phonological shapes, distributed on a morphologically determined basis. In this same vein, there is an interesting semisystematic tendency for spirantizing stems to be treated as geminating when they are compounded with other independent stems: e.g., \emph{aŋqa-} `red' is normally spirantizing, as shown by \emph{aŋqaγa} `to be red', but often appears as geminating in compounds like \emph{aŋqap:aγi} `red fish, trout' or \emph{aŋqa-q:ani} `red house'. {\Sapir} notes that the tendency to use geminate variants of stems in compounds may be due to the greater phonetic similarity between this form and the (initial, thus unspirantized) consonant of the simplex form; for discussion, see \citealt{darden84:s.paiute}. For our purposes, the interesting point is that {\Sapir} sees this restructuring as ``the first step towards the dulling of a consciousness of consonantal alternations and toward their development into mere historical survivals'' \citep[70]{sapir30:s.paiute}. He envisions a developmental sequence similar to that posited by {\DeCourtenay} and {\Kruszewski} (see above, chapter~\ref{ch.kazan}), by which a process which may once have been phonological has become morphological, and is on its way to becoming a merely lexical relic. If we assume the phonological view, we must claim that the unusual behavior resides not in the second element of such compounds, where {\Sapir} plausibly localizes it, but rather in the development of irregular obstruent-final variants of the first elements—variants which only appear compounded. 
None of this evidence demonstrates conclusively that a phonological account of the \ili{Southern Paiute} suffix \isi{alternation} is \emph{impossible}; rather (like Hale\ia{Hale, Kenneth}'s evidence from Māori and the similar facts reviewed above from Sāmoan), the evidence indicates that it is less appropriate than the morphological analysis pursued by {\Sapir}. Further evidence derived from the facts of \isi{reduplication} makes this case even clearer. Reduplication normally copies the initial CV of an element; e.g., \emph{sivai} `whittles' reduplicates as \emph{sisivai} `whittles many times'. The stem consonant following the reduplicated \isi{syllable} is no longer word-initial and thus subject to change, and the changes it undergoes are the same three as those found in suffix \isi{alternation}: spirantization, gemination, or nasalization. Since the shape of the stem is more self-evident than the identity of a hypothetical stem-final consonant, the facts of \isi{reduplication} ought to provide clear evidence for whether or not phonological structure is the essential determinant of these alternations. Cases in which \isi{reduplication} results in nasalization appear to provide some evidence in favor of the phonological account, because most of these are stems of the shape /CVNX/ (e.g., \emph{qani} `house', which reduplicates as \emph{qaŋqani} `houses'). If we allowed the \isi{reduplication} rule to copy CV(N), instead of only CV, and treated nasalization as resulting from a sequence of \isi{nasal consonant} plus stop, this result would follow directly. Unfortunately, however, there are some instances in which \isi{nasalizing} \isi{reduplication} arises without a \isi{nasal consonant} in the stem ({\Sapir} cites \emph{pɔmpɔtsats-} `lizard'), which suggest that nasalization in \isi{reduplication} (though largely predictable) is in part morphologized. The case of geminating \isi{reduplication} is much more difficult to explain phonologically. The class of stems that geminate when they reduplicate seems completely unpredictable: thus, although \emph{tava'c:upï }`dry' reduplicates as \emph{t{\scriptsize A}ta'ϕ{\scriptsize A}cupï:} `all dry' with a geminate, \emph{tavin'na} `put out one's breast, strut' reduplicates as \emph{tara'vin'naai} `keeps putting out (his) breast' with a spirantized form. It should be emphasized that there is no consistent correlation between the form CVCX where the second C is an obstruent (or a geminate) and geminating, rather than spirantizing \isi{reduplication}. Further, the gemination may affect a stem-internal consonant, rather than a stem initial one: e.g., \emph{ivi} `drink' reduplicates as \emph{i'ip:i} `drinks repeatedly' with a geminate, or \emph{tïv\textsuperscript{w}ïn:aγai} `leads', as \emph{tï'tïp:ïnaq:ai} `leads away several times'. Finally, the same stem may appear with more than one kind of \isi{reduplication}, in different morphological categories. The stem \emph{qwïï-} `take' forms a distributive \emph{qwïγwïï} `several take (one object)' with spirantizing \isi{reduplication}, but also an iterative qwï'qwïï `to take one object several times'. All of these facts indicate that morphological factors and not phonological structure determine the applicability of the process of `gemination'. We conclude, then, that a variety of evidence suggests that {\Sapir} was right in treating the spirantizing, geminating, and \isi{nasalizing} processes in \ili{Southern Paiute} as morphologically rather than phonologically conditioned. 
This result is of some interest in itself, but we have pursued it at such length here not simply as a matter of descriptive linguistics. Rather, the point to be made is that a phonological solution (if correct) would have been available to {\Sapir}, if systematic considerations had not led him to prefer the morphological account. These systematic factors, we suggest, are the same as those discussed by \citet{hale73:lardil}: a desire to avoid positing underlying forms which would violate a basic generalization about the structure of surface forms in the language. %%% Local Variables: %%% mode: latex %%% TeX-master: "/Users/sra/Dropbox/Docs/Books/P20C_2/LSP/main.tex" %%% End:
{ "alphanum_fraction": 0.7962498117, "avg_line_length": 60.6978161503, "ext": "tex", "hexsha": "2e9aecb093ac455977d74bdf1e49669e2dd8475f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3793c564a7fc6dd5b201a2479f0bcd21ca265c87", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "langsci/327", "max_forks_repo_path": "chapters/10_sapir.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3793c564a7fc6dd5b201a2479f0bcd21ca265c87", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "langsci/327", "max_issues_repo_path": "chapters/10_sapir.tex", "max_line_length": 140, "max_stars_count": null, "max_stars_repo_head_hexsha": "3793c564a7fc6dd5b201a2479f0bcd21ca265c87", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "langsci/327", "max_stars_repo_path": "chapters/10_sapir.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 29318, "size": 119514 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Arsclassica Article % LaTeX Template % Version 1.1 (1/8/17) % % This template has been downloaded from: % http://www.LaTeXTemplates.com % % Original author: % Lorenzo Pantieri (http://www.lorenzopantieri.net) with extensive modifications by: % Vel ([email protected]) % % License: % CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/) % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %---------------------------------------------------------------------------------------- % PACKAGES AND OTHER DOCUMENT CONFIGURATIONS %---------------------------------------------------------------------------------------- \documentclass[ 10pt, % Main document font size a4paper, % Paper type, use 'letterpaper' for US Letter paper oneside, % One page layout (no page indentation) %twoside, % Two page layout (page indentation for binding and different headers) headinclude,footinclude, % Extra spacing for the header and footer BCOR5mm, % Binding correction ]{scrartcl} \input{structure.tex} % Include the structure.tex file which specified the document structure and layout \hyphenation{Fortran hy-phen-ation} % Specify custom hyphenation points in words with dashes where you would like hyphenation to occur, or alternatively, don't put any dashes in a word to stop hyphenation altogether %---------------------------------------------------------------------------------------- % TITLE AND AUTHOR(S) %---------------------------------------------------------------------------------------- \title{\normalfont\spacedallcaps{Predicting Heart Disease Mortality}} % The article title %\subtitle{Subtitle} % Uncomment to display a subtitle \author{\spacedlowsmallcaps{Carlos Alberto Nava Fonseca\textsuperscript{1}}} % The article author(s) - author affiliations need to be specified in the AUTHOR AFFILIATIONS block \date{} % An optional date to appear under the author(s) %---------------------------------------------------------------------------------------- \begin{document} %---------------------------------------------------------------------------------------- % HEADERS %---------------------------------------------------------------------------------------- \renewcommand{\sectionmark}[1]{\markright{\spacedlowsmallcaps{#1}}} % The header for all pages (oneside) or for even pages (twoside) %\renewcommand{\subsectionmark}[1]{\markright{\thesubsection~#1}} % Uncomment when using the twoside option - this modifies the header on odd pages \lehead{\mbox{\llap{\small\thepage\kern1em\color{halfgray} \vline}\color{halfgray}\hspace{0.5em}\rightmark\hfil}} % The header style \pagestyle{scrheadings} % Enable the headers specified in this block %---------------------------------------------------------------------------------------- % TABLE OF CONTENTS & LISTS OF FIGURES AND TABLES %---------------------------------------------------------------------------------------- \maketitle % Print the title/author/date block \setcounter{tocdepth}{2} % Set the depth of the table of contents to show sections and subsections only \tableofcontents % Print the table of contents \listoffigures % Print the list of figures \listoftables % Print the list of tables %---------------------------------------------------------------------------------------- % ABSTRACT %---------------------------------------------------------------------------------------- \section*{Executive Summary} % This section will not appear in the table of contents due to the star (\section*) The present report documents the techniques 
and analysis of the rate of heart disease (per 100,000 individuals) across the United States at the county level from socioeconomic indicators. The data is taken from the United States Department of Agriculture Economic Research Service (USDA ERS) and the University of Wisconsin Population Health Institute's County Health Rankings \& Roadmaps. There are 33 variables in this dataset. Each row in the dataset represents a United States county that is not identifiable from the data. While all the variables were studied, the exploration revealed outliers, missing data and incorrect values. During data processing, the education and health variables turned out to be the most significantly correlated with heart disease mortality. Moreover, after encoding some variables as categories, both the area information about the county and the economic typology play a major role in the prediction of heart disease. When developing the Machine Learning model, no single model achieved an effective prediction on its own. Therefore, stacking regressions were used to improve accuracy.

%----------------------------------------------------------------------------------------
% AUTHOR AFFILIATIONS
%----------------------------------------------------------------------------------------

\let\thefootnote\relax\footnotetext{\textsuperscript{1} \textit{[email protected]}}

%----------------------------------------------------------------------------------------

%\newpage % Start the article content on the second page, remove this if you have a longer abstract that goes onto the second page

%----------------------------------------------------------------------------------------
% INTRODUCTION
%----------------------------------------------------------------------------------------

\section{Introduction}

This chapter describes the dataset used to develop a Machine Learning model to predict heart disease. These datasets (training and test sets) were obtained from the United States Department of Agriculture Economic Research Service (USDA ERS).

\subsection{Data Exploration: Target Variable}

The dataset provided contains 33 variables, which are divided into four categories:

\begin{itemize}
	\item Area: variables that contain information about the county
	\item Economic indicators: categories of economic dependence, labour force, unemployment and insurance
	\item Health: indicators of obesity, smoking, diabetes and other characteristics of the county's population
	\item Demographics: information on the county's characteristics such as the age distribution (in percentages), education, and others.
\end{itemize}

In addition, background information was taken from the article by Jones (2009) \cite{Jones2009} on relevant factors of heart disease linked to rural populations with certain economic characteristics.

The variable selected for prediction is heart\_disease\_mortality\_per\_100k. It is defined as the rate of heart disease (per 100,000 individuals) across the data set. A brief summary of the variable is presented in Table~\ref{tab:sum_01}.

\FloatBarrier
\input{Tables/sum_table_01.tex}

Table~\ref{tab:sum_01} shows the main statistics of the target variable. Plotting this variable, it can be observed in Figure~\ref{fig:target_01} that it is close to normally distributed and negatively skewed (slightly pointing to the left).
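For reproducibility, the following minimal sketch shows how the summary statistics and the skewness of the target variable can be inspected (the file name \texttt{train.csv} is only illustrative; the column name is taken from the data set description above):

\begin{verbatim}
import pandas as pd
from scipy.stats import skew

# Load the training data (illustrative file name).
train = pd.read_csv("train.csv")
target = train["heart_disease_mortality_per_100k"].dropna()

# Summary statistics of the target variable.
print(target.describe())

# A negative value indicates a left-skewed distribution.
print("Skewness:", skew(target))
\end{verbatim}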
\FloatBarrier
\begin{figure}[H]
\centering
\includegraphics[width=0.75\columnwidth]{heart_normal_01.png}
\caption[Target Variable]{Target Variable Distribution vs Theoretical Normal Distribution (\emph{Own elaboration made with: Python Seaborn}), \index{Heart Disease distribution.}}
\label{fig:target_01}
\end{figure}

The Q–Q (quantile-quantile) plot helps to compare two probability distributions by plotting their quantiles against each other. Figure~\ref{fig:prob_01} graphically presents the properties of the normal distribution and of the variable's distribution. This Q–Q plot compares a sample of data on the vertical axis to a statistical population on the horizontal axis. The points follow a linear pattern but are offset from the reference line, suggesting that the data is not distributed as a standard normal $X \sim N(0,1)$. Fortunately, the offset between the line and the points is small, which suggests that the data follows an $X \sim N(\mu,\sigma)$ distribution.

\FloatBarrier
\begin{figure}[H]
\centering
\includegraphics[width=0.75\columnwidth]{probability_plot_01.png}
\caption[Q-Q Graph]{Probability Distribution vs Theoretical Probability Distribution (\emph{Own elaboration made with: Python Seaborn}), \index{Heart Disease probability distribution.}}
\label{fig:prob_01}
\end{figure}

Since the target variable is left-skewed, it was transformed to make it more normally distributed using the $\log(1+x)$ function, which is also numerically accurate in floating-point arithmetic for small values. The skewed data is normalised by adding one (so that zero values can be transformed, since $\log 0$ is not defined) and then taking the natural logarithm. The data can be brought close to normal with this transformation technique. Many of the algorithms to be tested assume that the data is normal and rely on this assumption in their calculations; hence, the closer the data is to normal, the better it fits linear models.

After the logarithmic transformation of the target variable, Figure~\ref{fig:target_02} was obtained, which shows almost no skew and suggests that the data is now approximately normally distributed.

\FloatBarrier
\begin{figure}[H]
\centering
\includegraphics[width=0.75\columnwidth]{heart_normal_02.png}
\caption[Target Variable Normalized]{Target Variable Distribution vs Theoretical Normal Distribution (\emph{Own elaboration made with: Python Seaborn}), \index{Heart Disease distribution.}}
\label{fig:target_02}
\end{figure}

Figure~\vref{fig:prob_02} shows that the gap between the theoretical and the data distribution has become smaller, confirming that the logarithmic transformation normalised the data and made it suitable for linear modelling.

\FloatBarrier
\begin{figure}[H]
\centering
\includegraphics[width=0.75\columnwidth]{probability_plot_02.png}
\caption[Q-Q Graph Normalized]{Probability Distribution vs Theoretical Probability Distribution (\emph{Own elaboration made with: Python Seaborn}), \index{Heart Disease probability distribution.}}
\label{fig:prob_02}
\end{figure}

\subsection{Data Exploration: Train Variables}

The data contains 3198 samples that can be used to train the model, along with 33 features and 1 target variable. A brief summary of the non-categorical data is presented in Table~\ref{tab:sum_02}. It can be seen that most of the data is already normalized, since it is presented as a ratio per one hundred or as a percentage. Some data from the Health category (air pollution, homicides per 100,000 inhabitants, vehicle crashes per 100,000 inhabitants, population per dentist and population per physician) is not normalized.
This represents an advantage now that the target data is normalized.

\input{Tables/sum_table_02.tex}

Further analysis of the data presented in Table~\ref{tab:sum_02} suggests that the variables might be correlated with each other. To test this hypothesis graphically, the correlations between the variables are visualized in Figure~\ref{fig:correlation_01}.

\FloatBarrier
\begin{figure}[H]
\centering
\includegraphics[width=0.8\columnwidth]{correlation_01.png}
\caption[Correlation Heat Map]{Correlation Heat Map (\emph{Own elaboration made with: Python Seaborn}), \index{Correlation between variables.}}
\label{fig:correlation_01}
\end{figure}

From this figure, the variables in the lower right part of the graph appear to be the most strongly correlated with the target variable. There are also negative correlations, mainly from the labour and education variables. Further detail on the variables whose correlation with the target variable exceeds $|0.5|$ is plotted in Figure~\ref{fig:correlation_02}.

\FloatBarrier
\begin{figure}[H]
\centering
\includegraphics[width=0.7\columnwidth]{correlation_02.png}
\caption[Detailed Correlation Heat Map]{Detailed Correlation Heat Map (\emph{Own elaboration made with: Python Seaborn}), \index{Correlation between variables $>|0.5|$.}}
\label{fig:correlation_02}
\end{figure}

These findings suggest that the lifestyle factors that come with income can reduce or increase the risk of heart disease. With this in mind, it is logical to think that factors like education, income and wealth play an important role in overall health. Social position can influence a person’s behaviour, impacting decisions related to diet, exercise and smoking.

Exploring the data in more detail, the analysis proceeds to the distribution of the data set. Figure~\ref{fig:boxplot_01} shows that the data has many outliers and is not normally distributed.

\FloatBarrier
\begin{figure}[H]
\centering
\includegraphics[width=0.9\columnwidth]{box_plot_01.png}
\caption[Box plot of data variables]{Percentage Variables (\emph{Own elaboration made with: Python Seaborn}), \index{Data Variables Boxplot}}
\label{fig:boxplot_01}
\end{figure}

A closer look at the data whose correlation with the target variable exceeds $|0.5|$ (see Figure~\ref{fig:boxplot_02}) shows the same pattern. Therefore, the model must account for the skewness of each variable, and some of these variables might contain outliers. From this analysis, the conclusion is that the mean of each variable, presented in Table~\ref{tab:sum_02} as part of the statistical description, is sensitive to whether these outliers are removed or transformed.

\FloatBarrier
\begin{figure}[H]
\centering
\includegraphics[width=0.9\columnwidth]{box_plot_02.png}
\caption[Box plot of correlated data variables]{Percentage Variables with $>|0.5|$ correlation (\emph{Own elaboration made with: Python Seaborn}), \index{Data Variables Boxplot}}
\label{fig:boxplot_02}
\end{figure}

\subsection{Missing Data and Skewness}

Missing values in the training data set can negatively affect the prediction or classification of a model. Moreover, some machine learning algorithms, for example Support Vector Machines or Linear Regression, cannot handle missing data at all. Having gained a general perspective on how the data variables correlate with the target variable, together with a brief description of them, the following procedures are applied to the data variables.
A review of the data composition in Table~\ref{tab:data_01} shows that the share of missing data per variable ranges from $60.48\%$ down to $0.16\%$, which must be treated in order to try different models without errors.

\input{Tables/missing_data_01.tex}

From the previous section, it is known that the mean of the data is sensitive to outliers. It must also be taken into account that the gaps in the data set concern both the training and the testing set, and both must go through the same procedure. Because the mean is sensitive, the median was used to fill the missing values of the variables with less than 1\% missing data. The other variables received a different treatment: since between 5 and 60.5 per cent of their data was missing, the approach was to draw replacement values within one standard deviation of the mean. In other words, a random variable $X \sim N(\mu,\sigma)$ was sampled to fill the missing values.

Additionally, categorical variables must be mapped in order to account for them in the analysis. The following data variables were mapped according to their characteristics:

\begin{itemize}
	\item \textit{area\_\_rucc} is mapped to Metropolitan Counties (Value: 1) and Non-Metropolitan Counties (Value: 0).
	\item \textit{area\_\_urban\_influence} is mapped to Large influence (Value: 1) and Small influence (Value: 0).
	\item \textit{econ\_\_economic\_typology} is mapped to 1 for economic activities classified as stressful (manufacturing, government-dependent and mining) and to 0 otherwise (non-stressful).
	\item \textit{yr} is mapped from \textit{a} to 1 and from \textit{b} to 0.
\end{itemize}

Also, based on the correlation analysis and the mapped variables, some elements were dropped since they could not be transformed into normalized data or had no direct relation to heart disease: homicides, crash deaths, and population per dentist and per physician were left out.

To make all data approximately normally distributed, the skewness of each data variable was calculated. Values for the first 10 variables are presented in Table~\ref{tab:skew_01}.

\input{Tables/skew_data_01.tex}

The assumption of normality made at the beginning of this analysis leads to modelling that is simple, mathematically tractable, and powerful compared to methods that do not make the normality assumption. Unfortunately, the skewed data set is in fact not approximately normal. However, an appropriate transformation of a data set can often yield a data set that does approximately follow a normal distribution. This increases the applicability and usefulness of models based on the normality assumption. The Box-Cox transformation is a particularly useful family of transformations. It is defined as:

\begin{equation}
T(Y) = (Y^{\lambda} - 1)/\lambda
\label{eq:boxcox}
\end{equation}

where $Y$ is the response variable and $\lambda$ is the transformation parameter. For $\lambda = 0$, the natural logarithm of the data is taken instead of Formula~\ref{eq:boxcox}. Variables with an absolute skewness greater than $0.75$ were transformed with $\lambda = 0.15$. The same normalization process applied to the training set was also applied to the test set, so that the model can predict under the assumptions made for the variables.
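The imputation and skewness treatment described above can be condensed into the following sketch (the file name is again only illustrative; \texttt{boxcox1p} applies the Box-Cox transform to $1+x$ and is used here as a stand-in for Formula~\ref{eq:boxcox}, so that zero values do not cause problems):

\begin{verbatim}
import numpy as np
import pandas as pd
from scipy.stats import skew
from scipy.special import boxcox1p

train = pd.read_csv("train.csv")  # illustrative file name
numeric_cols = train.select_dtypes(include=np.number).columns

# Median fill below 1 % missing data, N(mean, std) samples otherwise.
for col in numeric_cols:
    missing = train[col].isna()
    if missing.mean() == 0:
        continue
    if missing.mean() < 0.01:
        train.loc[missing, col] = train[col].median()
    else:
        mu, sigma = train[col].mean(), train[col].std()
        train.loc[missing, col] = np.random.normal(mu, sigma, missing.sum())

# Box-Cox-like transform with lambda = 0.15 for skewed variables.
lam = 0.15
for col in numeric_cols:
    if abs(skew(train[col])) > 0.75:
        train[col] = boxcox1p(train[col], lam)
\end{verbatim}

The same fitted statistics (medians, means, standard deviations and the transformation parameter) would have to be reused on the test set, as described above.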
%----------------------------------------------------------------------------------------
% METHODS
%----------------------------------------------------------------------------------------

\section{Machine Learning Model}

Breiman (1996)~\cite{Breiman1996} presents stacking regressions, a method for forming linear combinations of different predictors in order to improve prediction accuracy. The idea is to use cross-validation data and least squares under non-negativity constraints to determine the coefficients of the combination. The idea was first presented by Wolpert (1992)~\cite{Wolpert1992}. Following this idea, a model was developed.

\subsection{Base Models}

The following models were tested:

\begin{enumerate}
	\item LASSO Regression.
	\item Elastic Net Regression.
	\item Gradient Boosting Regression.
	\item Extreme Gradient Boosting (or XGBoost).
\end{enumerate}

The models were evaluated by their cross-validated Root Mean Squared Logarithmic Error (RMSLE). The following results were obtained:

\input{Tables/base_models.tex}

With the mean and standard deviation of each base model available, the first approach is to average the base models, which yields the following result: \\
\textbf{Averaged base models score: Mean = 0.1113 Std. = (0.0034)}.

When stacking the averaged models with a meta-model, the score improves to: \\
\textbf{Stacking Averaged models score: Mean = 0.1000 Std. = (0.0036)}.

Finally, after creating the stacked regression model, the exponential function was applied to its predictions in order to obtain the results in the original scale for submission.

%----------------------------------------------------------------------------------------
% RESULTS AND DISCUSSION
%----------------------------------------------------------------------------------------

\section{Results and Conclusions}

The analysis presented in this report leads to the following findings:

\begin{itemize}
	\item Filling the missing values with the mean/median/mode for variables with a high rate of present data (over 99\%) does not generate outliers and is appropriate for data close to a normal distribution.
	\item For some variables, random sampling from a random variable with an $X \sim N(\mu,\sigma)$ distribution was used to impute missing values.
	\item Categorical variables were mapped to binary values to include them in the model.
	\item After filling the missing data and mapping the categorical variables, the skewness of the data was reduced with the Box-Cox technique to adjust the variables towards normalized data.
	\item After evaluating the base models, a stacking regression model was used to predict the values of the target variable.
\end{itemize}

Based on the Root Mean Squared Logarithmic Error (RMSLE), the model constructed for predicting heart disease with a stacking regression performed better than the base models alone. Further work can address the categorical data and the missing data with other techniques, such as decision trees based on correlated variables.

%----------------------------------------------------------------------------------------
% BIBLIOGRAPHY
%----------------------------------------------------------------------------------------

\renewcommand{\refname}{\spacedlowsmallcaps{References}} % For modifying the bibliography heading

\bibliographystyle{unsrt}

\bibliography{sample.bib} % The file containing the bibliography

%----------------------------------------------------------------------------------------

\end{document}
{ "alphanum_fraction": 0.713296533, "avg_line_length": 65.5619047619, "ext": "tex", "hexsha": "9e44f6602239f53c5154e58ad2cc7621b75996cc", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8bd09b95424d22d6f8436bc2a3f764a66a17c15b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "carlosnva/ML_project_MS", "max_forks_repo_path": "report/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8bd09b95424d22d6f8436bc2a3f764a66a17c15b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "carlosnva/ML_project_MS", "max_issues_repo_path": "report/main.tex", "max_line_length": 636, "max_stars_count": null, "max_stars_repo_head_hexsha": "8bd09b95424d22d6f8436bc2a3f764a66a17c15b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "carlosnva/ML_project_MS", "max_stars_repo_path": "report/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4378, "size": 20652 }
\part{Lecture 08: Supervised Learning} \author{Wilhelm Kirchg\"assner} \title[RL Lecture 08]{Lecture 08: Function Approximation \\with Supervised Learning} \date{} \frame{\titlepage} \frame{\frametitle{Table of Contents}\tableofcontents} \section{Motivation and Background} \frame{\frametitle{The Machine Learning Triad} \begin{figure} \includegraphics[width=11cm]{fig/lec01/Machine_Learning_Disciplines.pdf} \caption{Disciplines of machine learning %(cf. \ref{fig:Machine_Learning_Disciplines}) } \label{fig:Machine_Learning_Disciplines_2} \end{figure} } \frame{\frametitle{Introductory Material} Machine learning (ML) and especially the field of supervised learning (SL) is extensively researched and taught. \begin{itemize} \item Courses at UPB \begin{itemize} \item \href{https://ei.uni-paderborn.de/en/nt/teaching/veranstaltungen/statistical-and-machine-learning/}{\textit{Statistical and Machine Learning} by the Comm. Eng. Dept. (NT)} \item \href{https://cs.uni-paderborn.de/is/teaching/}{\textit{Machine Learning I \& II} by CS Dept. Intelligent Systems and ML} \end{itemize}\pause \item Renowned online courses \begin{itemize} \item \href{https://de.coursera.org/learn/machine-learning}{\textit{Coursera ML} by Stanford's Andrew Ng} \item \href{https://www.fast.ai/}{\textit{Practical deep learning for coders} by fast.ai} \item \href{https://www.kaggle.com/learn/overview}{\textit{Intro to ML} by Kaggle Courses} \end{itemize}\pause \item Books classics \begin{itemize} \item \href{http://users.isr.ist.utl.pt/~wurmd/Livros/school/Bishop\%20-\%20Pattern\%20Recognition\%20And\%20Machine\%20Learning\%20-\%20Springer\%20\%202006.pdf}{\textit{Pattern Recognition and Machine Learning} by M. Bishop} \item \href{http://www-stat.stanford.edu/~tibs/ElemStatLearn/printings/ESLII_print10.pdf}{\textit{The Elements of Statistical Learning} by Hastie et al.} \item \href{https://www.deeplearningbook.org/}{\textit{Deep Learning} by I. Goodfellow et al.} \end{itemize} \end{itemize} } % ML hype in industry \frame{\frametitle{Machine Learning in Industry} Machine learning applications are a \href{https://magazine.startus.cc/how-machine-learning-is-changing-the-major-industries/}{fast growing industry itself}, and enhance more and more automation in classical industry as well. Among others, popular industries are: \begin{itemize} \item Embedded systems, \item Mobility, and \item Digital assistants \end{itemize} Most applications are of the supervised type. The demand for highly skilled ML engineers is growing correspondingly. } \frame{\frametitle{Instances of ML Applications} \begin{itemize} \item Recommendation systems \begin{itemize} \item Which ads to display on a website? \item Which items are \href{https://tech.instacart.com/deep-learning-with-emojis-not-math-660ba1ad6cdc}{most likely put into cart next} by the user? 
\end{itemize}\pause
	\item Forecasting
	\begin{itemize}
		\item Weather, sales, \href{https://eng.uber.com/forecasting-introduction/}{geospatial Uber calls}, \href{https://www.kaggle.com/c/recruit-restaurant-visitor-forecasting}{restaurant}/\href{https://towardsdatascience.com/web-traffic-forecasting-f6152ca240cb}{website} traffic
		\item Material attrition in engineering processes (predictive maintenance)
	\end{itemize}\pause
	\item Classification/Regression
	\begin{itemize}
		\item Speech assistants (Alexa/Siri), pedestrian detection (autonomous driving), \href{https://www.kaggle.com/c/severstal-steel-defect-detection}{fault detection in engineering processes}
		\item Chatbots, translators, credit scoring (fintech)
	\end{itemize}\pause
	\item \href{https://developers.google.com/machine-learning/gan}{Generative models}
\end{itemize}
}

% Kaggle comps
\frame{\frametitle{ML Competitions with Prize Pool}

\begin{figure}
\includegraphics[width=9cm]{fig/lec08/comp_logos.pdf}
\caption{Kaggle and DrivenData}
\label{fig:comp_logos}
\end{figure}

Open ML competition platforms like \href{https://www.kaggle.com/}{kaggle} or \href{https://www.drivendata.org/competitions/}{DrivenData} offer a multitude of diverse competitions to participate in at no cost.
\begin{itemize}
	\item Most competitions come with a decent prize pool of 15 thousand up to 1 million dollars, hosted by stakeholders from industry and government.
	\item These competitions are almost exclusively of the supervised type.
\end{itemize}
}

% Typical ML pipeline
\frame{\frametitle{Typical Supervised Learning Pipeline}
\begin{figure}
\includegraphics[width=9cm]{fig/lec08/ML_pipeline.png}
\caption{A typical supervised learning pipeline -- sometimes more art than science}
\label{fig:ml_pipeline}
\end{figure}
}

% How does this contribute to RL? See AlphaGo, which is built on SL
% Future RL will benefit from methods borrowed from SL,
\frame{\frametitle{Supervised Learning in Reinforcement Learning}

\hl{SL approximates functions, RL approximates policies.}
\pause
However, there are two situations where SL is auxiliary in RL:
\begin{itemize}
	\item Function approximation of (action-)state values, if the number of possible states exceeds any reasonable memory capability, which is often the case.
	\begin{itemize}
		\item $v_\pi(x) \approx \hat v(x, \bm w)$ with $\bm w$ being a trainable weight vector.
	\end{itemize}
	\pause
	\item Imitation learning. A simple-to-implement, deterministic baseline policy is often available, but an RL agent might fail to achieve that performance when learning from scratch. With SL, this baseline policy can be approximated to serve as the initial behavior of the agent.\pause
	\begin{itemize}
		\item Expert moves in board games.
		\item Basic linear controllers in engineering applications with feedback loops.
	\end{itemize}
\end{itemize}
}

\section{Supervised Learning Problem Statement}
\frame{\frametitle{Supervised Learning Problem Statement}
\begin{block}{Supervised learning}
Given a \hl{labeled} data set $\left\langle\bm x_k, \bm y_k\right\rangle \in \bm{\mathcal{D}}$ with $k \in [0, K]$ and $K$ being the data set size, approximate the mapping function $f^\ast: \bm x_k \mapsto \bm y_k$ with a parameterizable ML \hl{model} $f_{\bm w}: \bm x_k \mapsto \hat{\bm y}_k \approx \bm y_k \quad\forall k$.
\end{block}\pause \begin{itemize} \item Goodness of fit can be measured by a manifold of \hl{metrics} \\(e.g., mean squared error, classification accuracy, etc.).\pause \item Reducing the look-up-table-like mapping $f^\ast$ to a parameterized function $f_{\bm w}$ degrades any metric on the data set but enables interpolation to unseen data.\pause \item The dimension $\xi$ of model parameters $\bm w \in \mathbb R^\xi$ is adjustable in many model families, which trades off \hl{bias} with \hl{variance} (among other factors, leading to so-called under- and overfitting).\pause \item On top of $\bm w$, a ML model might also have hyperparameters that can be optimized (e.g., number of layers in a neural network). \end{itemize} } \frame{\frametitle{Bias and Variance} \begin{figure} \includegraphics[width=12cm]{fig/lec08/bias_variance.pdf} \caption{Left: Decision boundaries in binary classification, $k$-nearest neighbors with one (bright) and nine (dark) neighbors. Right: Regression example, least squares (dark) and $2$-nearest neighbors (bright).} \label{fig:bias_variance} \end{figure} } \frame{\frametitle{Generalization Error} \begin{block}{Supervised learning performance} SL performance is measured by a model's \hl{generalization error}, i.e., goodness of fit on unseen data. \end{block}\pause A data set is often finite as opposed to RL environments generating arbitrarily many observations. \begin{itemize} \item How to generate unseen data?\pause \begin{itemize} \item Hold out portions of the data set for \hl{cross-validation}. \end{itemize} \end{itemize} } \frame{\frametitle{$k$-fold Cross-Validation} \begin{columns}[t,onlytextwidth] \begin{column}{0.55\textwidth} \begin{minipage}[c]{\linewidth} \begin{figure} \includegraphics[width=\textwidth]{fig/lec08/kfold-cv.pdf} \caption{$k$-fold CV with five folds} \label{fig:kfold-cv} \end{figure} \end{minipage} \end{column} \hfill \begin{column}{0.45\textwidth} \begin{minipage}[c]{\linewidth} \begin{itemize} \item Cross-validation (CV) can be conducted with $k$-fold CV.\pause \item Training is repeated $k$ times with $k$ different splits of the training set.\pause \item Each observation serves as unseen instance at least once.\pause \item The validation error is an indicator for tuning hyperparameters. \end{itemize} \end{minipage} \end{column} \end{columns} } \frame{\frametitle{Means to Improve an SL Model} SL performance can be improved by: \begin{itemize} \item Collecting more data, i.e., increasing $K$ (more data is always better).\pause \item Choosing a more appropriate model.\pause \item Optimizing hyperparameters of the model.\pause \item Averaging over several different models (ensembling).\pause \item Most effectively: Revealing the most predictive patterns in the data to the model (feature engineering). \end{itemize} } \section{Feature Engineering} \frame{\frametitle{Table of Contents} \tableofcontents[currentsection] } \frame{\frametitle{Feature Engineering} Additional features might be: \begin{itemize} \item Coming from the real world via additional sensors or additional tracking mechanisms (think of a user's click behavior on a website)\pause \item Hand-designed (\textit{engineered}) by experts in the corresponding domain from the original feature set\pause \item Automatically built according to properties of each feature in the original set (Auto-ML)\pause \end{itemize} \begin{alertblock}{Caution} Adding more features is not equivalent to having more data (which is always better). 
Having a fixed data set size, adding arbitrarily many features, regardless of their origin, increases chances to align statistical fluctuations with the target $\bm y_k$ - overfitting is the result. \end{alertblock} } \frame{\frametitle{Feature Engineering Example (Classification)} \begin{figure} \includegraphics[width=11cm]{fig/lec08/fe_example.pdf} \caption{Features $r=\sqrt{\text{width}^2 + \text{height}^2}$ and $\theta = \arctan{(\frac{\text{height}}{\text{width}})}$ reveal linearly separable class distribution} \label{fig:fe_example} \end{figure} } \frame{\frametitle{Feature Engineering Example (Regression)} \begin{figure} \includegraphics[width=11cm]{fig/lec08/fe_example2.pdf} \caption{Log-transform of the target signal exhibits linear relationship to the regressor} \label{fig:fe_example2} \end{figure} } \frame{\frametitle{Normalization} Most models require data to be \hl{normalized} before training (apart from tree-based models). Typical normalizaton schemes: \begin{itemize} \item Standard scaling: $\tilde{ \bm x} = (\bm x- \text{Avg}(\bm x)) / \text{Std}(\bm x)$ \item Min-Max scaling: $\tilde{\bm x} = (\bm x - \text{min}(\bm x))/ (\text{max}(\bm x) - \text{min}(\bm x))$ \end{itemize} In an unnormalized data set, features with high variance will eclipse patterns in other features. } \frame{\frametitle{Data Types} Several different data types can be utilized for ML: \begin{itemize} \item Binary: 1 or 0 (True or False).\pause \item Integer: $\mathbb{N}$ (e.g., number of rooms in a building).\pause \item Real-valued: $\mathbb{R}$ (e.g., temperature).\pause \item Categorical: like \{blue, green, red\}\pause \item Ordinal: Categoricals that can be ordered, e.g., educational experience (From elementary school to Ph.D.) \end{itemize} } \frame{\frametitle{Data Type Specific Normalization} How to normalize categorical data?\pause \begin{itemize} \item \textit{One-hot} encoding \begin{itemize} \item Replace a categorical of $n$ values with $n$ binary features. \item Feature space gets sparse and might get too big for memory. \end{itemize}\pause \item Mean target encoding \begin{itemize} \item Replace each value of a categorical with the average (regression) or mode (classification) of the dependent variable being observed with the corresponding value. \item This might lead to information \textit{leaking} from the dependent variables into the independent variables, and might exhibit high performance that cannot be reproduced on unseen data. \end{itemize}\pause \item Entity embeddings \begin{itemize} \item Let a neural network find a cardinality-constrained set of real-valued features for each categorical. \item Works well in practice but is more intricate than alternatives. \end{itemize} \end{itemize} } \frame{\frametitle{Typical Feature Engineering Schemes} Feature design is often of the following form (tricks of the trade): Given $K$ feature vectors $\bm x_k \in \mathbb R^P$ with, e.g., $P=3$ (two real-valued regressors and a categorical independent variable $\bm x_k = (x_{k,r_1}, x_{k,r_2}, x_{k,c})$):\pause \begin{itemize} \item $\tilde x_{k} = x_{k,r_1} + x_{k,r_2}$ (or any other combination, e.g., product, division, subtraction, also cf. 
\figref{fig:fe_example}),\pause \item $\tilde x_k = x_{k,r} - \frac{1}{|\mathcal B|}\sum_{i \in \mathcal B} x_{i,r}\quad\forall r = \{r_1, r_2\}$ with $\mathcal B = \{i: x_{i,c} = x_{k, c}\}$,\pause \item Clip/drop/aggregate outliers away,\pause \item Coordinate transformations for spatial features (e.g., rotation),\pause \item In time domain: \begin{itemize} \item $\tilde{\bm{x}}_k = (x_{k,r_1}, x_{k-1,r_1}, x_{k-2,r_1}, x_{k,r_2}, x_{k,c})$ (lag features), \item $\tilde x_k = (1-\alpha)\tilde x_{k-1} + \alpha x_{k,r}$ (moving averages). \end{itemize}\pause \item In frequency domain: \begin{itemize} \item Amplitude and index of frequencies from a fast fourier transform (FFT) \end{itemize} \end{itemize} } \section{Typical Machine Learning Models} \frame{\frametitle{Table of Contents} \tableofcontents[currentsection] } % Model zoo (and their Hyperparameters) \frame{\frametitle{Model Landscape} When trying to find an appropriate mapping between input and output data, one can choose from a variety of models:\pause \begin{itemize} \item Linear/logistic regression (with regularization) \begin{itemize} \item The simplest data-fitting algorithm \end{itemize}\pause \item Support vector machines (SVM) \begin{itemize} \item Most popular algorithm before 2012 \end{itemize}\pause \item (Deep) neural networks (DNN) \begin{itemize} \item Also coined as \textit{deep learning}, soared in popularity since 2012 \item Most prevalent in the domains of natural language processing (NLP) and image processing \end{itemize}\pause \item Gradient Boosting Machines (GBM) \begin{itemize} \item Chaining of \textit{weak} models (most of the time decision trees) \item The best performing stand-alone model in tabular ML competitions \end{itemize} \end{itemize} } \frame{\frametitle{Model Choice} \begin{figure} \includegraphics[width=9cm]{fig/lec08/bigdrill.png} \caption{Choose models appropriate for the problem!\\(Source: Adapted from \href{https://www.reddit.com/r/toolporn/comments/1kw63x/hilti_30c_in_your_hands_feels_huge_1050_x_750/}{reddit})} \label{fig:bigdrill} \end{figure} } \subsection{Linear Regression} \frame{\frametitle{Linear Regression (1)} %\textit{In the following, ML models for regression are presented. %Each model is equivalently suitable for classification though after minor adaptions.} Linear models assume a linear relationship between $\bm x_k = (1, x_{k,1}, x_{k,2},\dots,x_{k,P})$ and $y_k$ via trainable coefficients $\bm w \in \mathbb{R}^{P+1}$: \begin{align} f(\bm x_k) &= \hat y_k = w_0 + \sum_{p=1}^P x_{k,p}w_p, \\ \hat{\bm y}&= \bm{\Xi}\bm w, \end{align} where $\bm{\Xi} = (\bm x_1, \dots, \bm x_K)$. \pause Among other methods, $\bm w$ can be estimated from $K$ samples by minimizing the residual sum of squares (RSS), which is coined the \hl{least squares} method: \begin{equation} RSS(\bm w) = \sum_{k=1}^K (y_k - f(\bm x_k))^2 = (\bm y-\bm{\Xi w})\T(\bm y - \bm{\Xi w}). \label{eq:rss} \end{equation} } \frame{\frametitle{Linear Regression (2)} Deriving (\ref{eq:rss}) with respect to $\bm w$ and setting it to zero while assuming $\bm{\Xi}\T\bm{\Xi}$ is positive definite, yields an analytically closed solution form: \begin{equation} \hat{\bm{y}} = \bm{\Xi\hat{w}} = \bm{\Xi} (\bm{\Xi}\T\bm{\Xi})^{-1}\bm{\Xi}\T\bm{y}. \label{eq:OLS_estimate} \end{equation}\pause \begin{alertblock}{Multicollinearity} If two regressors exhibit strong linear correlation, their coefficients can grow indeterministically. This corresponds to high variance in $\hat{\bm w}$. 
Regularization of $\hat{\bm w}$ alleviates this effect - it induces bias for less variance. Most prevalent linear regularized techniques are LASSO and Ridge: \begin{align} RSS_{LASSO}(\bm w) &= (\bm y-\bm{\Xi w})\T(\bm y - \bm{\Xi w}) + \lambda||\bm w||_1, \\ RSS_{Ridge}(\bm w) &= (\bm y-\bm{\Xi w})\T(\bm y - \bm{\Xi w}) + \lambda||\bm w||_2, \label{eq:ridge} \end{align} where $\lambda$ controls the growth penalty. \end{alertblock} } \subsection{Artificial Neural Networks} \frame{\frametitle{Artificial Neural Networks} Artificial neural networks (ANNs) describe non-linear approximators $\hat{\bm y} = f(\bm{\Xi; \bm w})$ that are end-to-end differentiable.\pause \newline \begin{columns}[t,onlytextwidth] \begin{column}{0.45\textwidth} \begin{minipage}[c]{\linewidth} \begin{figure} \includegraphics[width=\textwidth]{fig/lec08/neuron.pdf} \caption{A typical neuron as the key building block of ANNs.} \label{fig:neuron} \end{figure} \end{minipage} \end{column} \hfill \begin{column}{0.55\textwidth} \begin{minipage}[c]{\linewidth} \begin{itemize} \item An ANN consists of \hl{nodes} in one or more \hl{layers}.\pause \item Each node transforms the weighted sum of all previous nodes through an activation function.\pause \item The weighted connections are called \hl{edges}, and the weights are the parameters of the ANN. \end{itemize} \end{minipage} \end{column} \end{columns} } \frame{\frametitle{Multi-Layer Perceptron} A vanilla ANN is the so-called \hl{feed-forward ANN} or \hl{multi-layer perceptron}. \newline \begin{columns}[t,onlytextwidth] \begin{column}{0.45\textwidth} \begin{minipage}[c]{\linewidth} \begin{figure} \includegraphics[width=\textwidth]{fig/lec08/MLP.pdf} \caption{Multi-layer perceptron.} \label{fig:mlp} \end{figure} \end{minipage} \end{column} \hfill \begin{column}{0.5\textwidth} \begin{minipage}[r]{\linewidth} \begin{itemize} \item Only forward-flowing edges.\pause \item The \hl{depth} $L$ and width $H^{(l)}$ are hyperparameters.\pause \end{itemize} With $\varphi^{(l)}$ and $\bm{\mathcal Z}^{(l)}$ denoting the activation function and activation of layer $l$ respectively, we get for the output matrix $\bm{\mathcal H}^{(l)}$ \begin{equation*} \bm{\mathcal H}^{(l)} = \varphi^{(l)}\big( \underbrace{\bm{\mathcal H}^{(l-1)}\bm{\mathcal{W}}^{(l)} + \bm{b}^{(l)}}_{\bm{\mathcal Z}^{(l)}} \big). \end{equation*} \end{minipage} \end{column} \end{columns}\pause Weight matrix $\bm{\mathcal{W}}^{(l)} \in \mathbb{R}^{H^{(l-1)}\times H^{(l)}}$ and (broadcasted) bias matrix $\bm{b}^{(l)} \in \mathbb{R}^{K\times H^{(l)}}$ are iteratively optimized, and constitute $\bm w$. 
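As a quick sanity check of the involved dimensions (the numbers here are chosen purely for illustration), consider $P = 3$ inputs, a single hidden layer with $H^{(1)} = 4$ units and a scalar output:
\begin{equation*}
	\bm{\mathcal{W}}^{(1)} \in \mathbb{R}^{3\times 4},\quad \bm{\mathcal{W}}^{(2)} \in \mathbb{R}^{4\times 1},
\end{equation*}
plus $4 + 1$ trainable bias entries (broadcast over the $K$ samples), i.e., $\xi = 3\cdot4 + 4 + 4\cdot1 + 1 = 21$ parameters in $\bm w$.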
} \frame{\frametitle{Activation Functions} \begin{columns}[t,onlytextwidth] \begin{column}{0.45\textwidth} \begin{minipage}[c]{\linewidth} Within hidden layers most \\prevalent activation functions $\varphi(\cdot)$ are \begin{itemize} \item $h = \text{tanh}(z)$ \item $h = \frac{1}{1+e^{-z}}$ (sigmoid) \item $h = \text{max}(0,z)$ \\(rectified linear unit (ReLU)) \end{itemize} \end{minipage} \end{column} \hfill \begin{column}{0.52\textwidth} \begin{minipage}[r]{\linewidth} \begin{figure} \includegraphics[width=\textwidth]{fig/lec08/act_funcs.pdf} \caption{Common activation functions} \label{fig:act_funcs} \end{figure} \end{minipage} \end{column} \end{columns}\pause Whereas $\varphi^{(L)}(\cdot)$ is task-dependent: \begin{itemize} \item Regression: $\hat{{y}} = {h}^{(L)} = {z}^{(L)}$ \item Binary classification: sigmoid \item Multi-class classification: \begin{equation*} h_c^{(L)} = \frac{e^{z_c}}{\sum_{i=1}^C e^{z_i}}\quad\text{(softmax)} \end{equation*} \end{itemize} } \frame{\frametitle{Training Neural Networks (1)} ANN parameters are usually iteratively optimized via a variant of \hl{gradient descent}, e.g., stochastic gradient descent (SGD). \begin{align} \label{eq:weight_update} \bm{\mathcal{W}}^{(l)} &\leftarrow \bm{\mathcal{W}}^{(l)} - \alpha \nabla_{\bm{\mathcal{W}}^{(l)}} \mathcal{L}(\bm y, \hat{\bm y}),\\ \label{eq:bias_update} \bm{b}^{(l)} &\leftarrow \bm{b}^{(l)} - \alpha \nabla_{\bm{b}^{(l)}} \mathcal{L}(\bm y, \hat{\bm y}), \end{align} with $\alpha$ being the step size and $\mathcal{L}(\cdot)$ denoting the \hl{loss} between the ground truth vector and the estimation vector.\pause Typical loss functions: \begin{itemize} \item Regression: (root) mean squared error (RMSE), mean absolute error \item Classification: Cross-entropy (CE) \end{itemize} Several iterations over the data set $\bm{\mathcal D}$ are called \hl{epochs}. } \frame{\frametitle{Training Neural Networks (2)} % typ. gradientenabstiegsbild und da dann nochmal eine gesondere erklärung worin sich SGD und GD unterscheidet. %Darauf aufbauend könnte man dann erwähnen, dass die typische built-in optimierer in den ANN toolboxen aufgebohrte SGD variaten sind und hier auf die weitere literatur verweisen \begin{columns}[t,onlytextwidth] \begin{column}{0.45\textwidth} \begin{minipage}[c]{\linewidth} \begin{figure} \includegraphics[width=\textwidth]{fig/lec08/sgd.pdf} \caption{BGD vs. SGD} \label{fig:sgd} \end{figure} \end{minipage} \end{column} \hfill \begin{column}{0.52\textwidth} \begin{minipage}[r]{\linewidth} Gradient descent alternatives: \begin{itemize} \item Batch gradient descent (BGD): Average gradients over all samples, then update weights. \item Stochastic gradient descent (SGD): Update weights after each sample. \end{itemize} \end{minipage} \end{column} \end{columns} SGD is more computationally efficient, but steps are more random. Nowadays mini-batch gradient descent is used (mix of SGD and BGD), and further improvements, e.g., momentum and second derivatives, to ensure faster convergence to better optima. 
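As an illustration of such an improvement (a standard formulation, shown here only as an example), SGD with momentum keeps a running average of past mini-batch gradients,
\begin{align*}
	\bm{m} &\leftarrow \beta \bm{m} + \nabla_{\bm{\mathcal{W}}^{(l)}} \mathcal{L}(\bm y, \hat{\bm y}),\\
	\bm{\mathcal{W}}^{(l)} &\leftarrow \bm{\mathcal{W}}^{(l)} - \alpha \bm{m},
\end{align*}
with, e.g., $\beta = 0.9$, which damps the randomness of the individual mini-batch gradients.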
} \frame{\frametitle{Training Neural Networks (3)} \textbf{How to retrieve the gradients:} Recall chain rule for vector derivatives, e.g., with $\bm y = g(\bm x)$ and $z = f(\bm y)$ where $g: \mathbb{R}^m \rightarrow \mathbb{R}^n$ and $f: \mathbb{R}^n \rightarrow \mathbb{R}$: \begin{equation} \nabla_{\bm x}z = \frac{\partial z}{\partial \bm x} = {\underbrace{\bigg( \frac{\partial \bm y}{\partial \bm x}\bigg)}_{\text{Jacobian of g }}}\T\cdot \underbrace{\frac{\partial z}{\partial\bm y}}_\text{gradient} = \sum_{j} \frac{\partial y_j}{\partial \bm x}\cdot\frac{\partial z}{\partial y_j}. \end{equation}\pause This can be used equivalently for matrices/tensors of any shape $\nabla_{\bm{\Xi}}y = \frac{\partial y}{\partial \bm{\Xi}}$ when we assume to enumerate each element of the tensor consecutively and loop through them.\pause \begin{block}{Error Backpropagation} After a \hl{forward step} through the network, make a \hl{backward step} in which the gradient $\theta$ of the loss $\mathcal{L}(\bm y, \hat{\bm y})$ is computed w.r.t the ANN's parameters from the output layer back to the input layer. \end{block} } \frame{\frametitle{Training Neural Networks (4)} \setlength{\algomargin}{0.5em} \begin{algorithm}[H] \SetKwInput{Input}{input} \SetKwInput{Output}{output} \SetKwInput{Init}{init} \SetKwInput{Param}{parameter} %\Input{$\bm{\mathcal{W}}, \bm b, \bm x, \bm y$ } %\Output{$\hat{\bm y}$ and list of loss gradients w.r.t. all network parameters.} \Init{$\bm{\mathcal H}^{(0)} \leftarrow \bm{\Xi}$} // forward propagation\\ \For{$l=1,\ldots,L$ layers}{ $\bm{\mathcal Z}^{(l)} \leftarrow \bm{\mathcal H}^{(l-1)}\bm{\mathcal{W}}^{(l)} + \bm{b}^{(l)}$\\ $\bm{\mathcal H}^{(l)} \leftarrow \varphi^{(l)}(\bm{\mathcal Z}^{(l)})$\\ } // backward propagation\\ $\theta \leftarrow \nabla_{\bm{h}^{(L)}}\mathcal{L}(\bm y, \hat{\bm y})$ // note that $\bm{h}^{(L)} = \hat{\bm y}$\\ \For{$l= L, \ldots, 1$ layers}{ $\theta \leftarrow \theta \odot \partial (\varphi^{(l)})(\bm{\mathcal Z}^{(l)}) = \nabla_{\bm{\mathcal Z}^{(l)}} \mathcal{L}(\bm y, \hat{\bm y})$ // $\odot$: elementwise mult.\\ Append $\theta = \nabla_{\bm{b}^{(l)}} \mathcal{L}(\bm y, \hat{\bm y})$ to list of bias gradients\\ Append $(\bm{\mathcal H}^{(l-1)})\T\cdot\theta = \nabla_{\bm{\mathcal W}^{(l)}} \mathcal{L}(\bm y, \hat{\bm y})$ to list of weight gradients\\ $\theta \leftarrow \theta\cdot(\bm{\mathcal W}^{(l)})\T = \nabla_{\bm{\mathcal H}^{(l-1)}}\mathcal{L}(\bm y, \hat{\bm y})$\\ } \caption{Error backpropagation} \label{algo:errorbp} \end{algorithm} } \frame{\frametitle{Error Backpropagation Example (1)} % minimalproblem wo man mal den Loss ausrechnet und einen gradienten abstiegsschritt darstellt Assume $\bm x_0 = [2, 5, 7], y_0 = 2.5$, and a two-layered ANN with the MSE cost, and sigmoid activation functions $\sigma(z) = \frac{1}{1+e^{-z}}$. 
The hidden layer contains two neurons with output $\bm h^{(1)} \in \mathbb{R}^2$, while the weight vectors are initialized with $\bm{\mathcal W}^{(1)} = \big[\begin{smallmatrix} 0.1& -0.3 & 0.2\\ 0.0& 0.4 & -0.9 \\\end{smallmatrix} \big]\T, \bm b^{(1)} = [0.05, -0.03]$, and $\bm{\mathcal W}^{(2)} = [ 0.2, -0.8]\T, \bm b^{(2)} = [0.1]$.\pause

Applying SGD, we start with forward propagation:
\begin{align*}
	\bm h^{(1)} &= \varphi^{(1)}(\bm{x}_0\bm{\mathcal{W}}^{(1)} + \bm{b}^{(1)}) \\
	\quad &= \sigma([0.1, -4.3] + [0.05, -0.03]) = [0.53, 0.01]\\
	\hat{y}_0 &= \bm{h}^{(1)}\bm{\mathcal{W}}^{(2)} + \bm{b}^{(2)} = 0.198
\end{align*}
}

\frame{\frametitle{Error Backpropagation Example (2)}
Backpropagation (with $\sigma^\prime(z) = \partial_z \sigma(z) = \sigma(z)(1-\sigma(z))$):
\begin{align*}
	\theta^{(2)} = \nabla_{\hat{y}_0}\mathcal{L}(y_0, \hat{y_0}) &= \nabla_{\hat{\bm{y}}} (y_0 - \hat y_0)^2 = -2 (y_0 - \hat y_0) = -4.604\\
	\nabla_{\bm{b}^{(2)}} \mathcal{L}(y_0, \hat{y_0}) &= \theta^{(2)} \odot \partial (\varphi^{(2)})(\bm z^{(2)}) = \theta^{(2)} \\
	\nabla_{\bm{\mathcal W}^{(2)}} \mathcal{L}(y_0, \hat{y}_0) &= (\bm{h}^{(1)})\T\cdot\theta^{(2)} = [-2.44, -0.046]\T\\
	\theta^{(1)} = \nabla_{\bm{h}^{(1)}}\mathcal{L}(y_0, \hat{y}_0)&= \theta^{(2)}\cdot(\bm{\mathcal W}^{(2)})\T = [-0.921, 3.683]\\
	\nabla_{\bm{b}^{(1)}} \mathcal{L}(y_0, \hat{y_0}) &= \theta^{(1)} \odot \partial (\varphi^{(1)})(\bm z^{(1)}) = \theta^{(1)}\odot \sigma^\prime(\bm{x}_0\bm{\mathcal{W}}^{(1)} + \bm{b}^{(1)}) \\
	\quad &= \theta^{(1)}\odot (\bm h^{(1)}(1-\bm h^{(1)})) = [-0.229, 0.036]\\
	\nabla_{\bm{\mathcal W}^{(1)}} \mathcal{L}(y_0, \hat{y}_0) &= \bm{x}_0\T\cdot\nabla_{\bm{b}^{(1)}} \mathcal{L}(y_0, \hat{y_0}) = \big[\begin{smallmatrix} -0.458 & -1.145 & -1.603\\ 0.072& 0.180 & 0.252 \\\end{smallmatrix} \big]\T
\end{align*}
Now update weights and biases according to (\ref{eq:weight_update}) and (\ref{eq:bias_update}).
}

\frame{\frametitle{Weight Initialization}
Early in deep learning research, it was found that random uniform or random normal weight initialization leads to poor training.

According to Glorot and Bengio\footnote{X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks", \textit{Proceedings of Machine Learning Research}, 2010}, use the following layer-specific initialization schemes (with $H_\text{in}$ and $H_\text{out}$ denoting the number of hidden units of the previous and the current layer, respectively):
\begin{itemize}
	\item uniform: $\bm{w} \sim \mathcal{U}\Big(-\frac{\sqrt{6}}{\sqrt{H_\text{in}+H_\text{out}}}, \frac{\sqrt{6}}{\sqrt{H_\text{in}+H_\text{out}}} \Big)$
	\item normal: $\bm{w} \sim \mathcal{N}\Big(0, \frac{\sqrt{2}}{\sqrt{H_\text{in}+H_\text{out}}} \Big)$
\end{itemize}\pause
Please note that, due to the random weight initialization, the result of repeated error backpropagation training is generally different on every run, even with the same hyperparameters and the same data. This amounts to \hl{local optimization in highly non-linear parameter spaces at random starting points}.
}

\frame{\frametitle{Regularizing Neural Networks}
In order to mitigate overfitting, ANNs must be regularized by
\begin{itemize}
	\item weight decay, i.e., adding an $\ell_2$ penalty term to the weights, see (\ref{eq:ridge}), \pause
	\item layer normalization during training,
	\begin{itemize}
		\item i.e., all layers' activations are normalized by standard scaling separately,
	\end{itemize}\pause
	\item dropout, i.e., randomly disable nodes' contribution.
\begin{itemize} \item This helps especially in deep networks, \item and effectively builds an ensemble of ANNs with shared edges. \end{itemize} \end{itemize} } \frame{\frametitle{Advanced Topologies} \begin{figure}[tbp] \centering \begin{minipage}[t]{0.38\textwidth} %\centering \includegraphics[width=\textwidth]{fig/lec08/RNN_over_sequence.pdf} \end{minipage}\hfill \begin{minipage}[t]{0.55\textwidth} \centering \includegraphics[width=\textwidth]{fig/lec08/CNN_over_sequence.pdf} \end{minipage} \caption{Recurrent (left) and 1-D convolutional (right) ANNs are more appropriate in time domains, e.g., where the given data set has a dynamic system background} \end{figure} %2-D Convolutional NNs are the state of the art in image processing } \subsection{Gradient Boosting Machines} \frame{\frametitle{Gradient Boosting Machines} Gradient boosting machines (GBMs) consist of $M$ additive \hl{chained weak learners} $\psi(\bm{\Xi})$, where each learner tries to minimize the whole ensemble's overall loss $\mathcal{L}(\bm y, \hat{\bm y}_m)$ given the preceding ensemble $\hat{\bm y}_{m-1}$ (assume scalar regression): \begin{equation} \hat{\bm y}_m = \hat{\bm y}_{m-1} + \psi_m(\bm{\Xi}) = \sum_{m=1}^M \psi_m(\bm{\Xi}). \end{equation}\pause \begin{columns}[t,onlytextwidth] \begin{column}{0.45\textwidth} \begin{minipage}[c]{\linewidth} Weak learners are often small decision trees with a depth of less than 10. \newline\newline Instead of optimizing real-valued parameters, here we are searching for functions $\psi(\cdot)$. Hence, optimization has to be additive. \end{minipage} \end{column} \hfill \begin{column}{0.52\textwidth} \begin{minipage}[r]{\linewidth} \begin{figure} \includegraphics[width=\textwidth]{fig/lec08/decision_tree.pdf} \caption{Exemplary decision tree for binary classification and two regressors} \label{fig:decision_tree} \end{figure} \end{minipage} \end{column} \end{columns} % Boosting Algorithm } \frame{\frametitle{Boosting Tree Ensembles (1)} The minimization objective $J_m$ for growing trees at the $m$-th boosting round can be taylor-expanded to the second-order approximation: \begin{align} \begin{split} J_m &= \mathcal{L}(\bm{y}, \hat{\bm{y}}_{m-1} + \psi(\bm{\Xi}))\\ &\approx \mathcal{L}(\bm y, \hat{\bm y}_{m-1}) + (\bm{\tau}')\T \psi_m(\bm{\Xi}) +\frac{1}{2}(\bm{\tau}'')\T \psi_m^2(\bm{\Xi}), \end{split} \end{align} with $\bm{\tau}' = \partial_{\hat{\bm y}_{m-1}}\mathcal{L}(\bm y, \hat{\bm y}_{m-1})$ and $\bm{\tau}'' = \partial_{\hat{\bm y}_{m-1}}^2\mathcal{L}(\bm y, \hat{\bm y}_{m-1})$.\pause Removing the constant part leaves us with a simplified objective: \begin{align} \begin{split} \tilde{J}_m &= (\bm{\tau}')\T\psi_m(\bm{\Xi}) +\frac{1}{2}(\bm{\tau}'')\T{\psi_m}^2(\bm{\Xi}) \\ &= \sum_{k=1}^K [\tau'_k \psi_m(\bm x_k) + \frac{1}{2}\tau''_k\psi_m^2(\bm x_k)]. \end{split} \end{align} } \frame{\frametitle{Boosting Tree Ensembles (2)} Now consider a single tree with $N$ leaves to be defined as a vector of leaf-scores $\psi_m(\bm{x}) = \bm{w}_{s(\bm x)}, \bm w \in \mathbb{R}^N, s: \mathbb{R}^P \rightarrow [1, N]$ where $s$ maps data observations to the index of the tree's corresponding outcome (leaf). 
Consider the set of observations in leaf $n$ to be $I_n = \{ k | s(\bm x_k) = n \}$.\pause

Then, we can regroup the objective by each leaf:
\begin{align}
\begin{split}
	\tilde{J}_m &= \sum_{k=1}^K \bigg[\tau'_k w_{s(\bm x_k)} + \frac{1}{2}\tau''_k w_{s(\bm x_k)}^2\bigg]\\
	&= \sum_{n=1}^N \bigg[\underbrace{\bigg(\sum_{k\in I_n}\tau'_k\bigg)}_{T'}w_n + \frac{1}{2}\underbrace{\bigg(\sum_{k\in I_n} \tau''_k\bigg)}_{T''}w_n^2\bigg].
\end{split}
\end{align}
This is a sum of single-variable quadratic functions in the leaf scores $w_n$.
}

\frame{\frametitle{Boosting Tree Ensembles (3)}
Assuming a fixed tree structure $s(\bm x)$, the optimal weights and the corresponding loss follow directly:
\begin{align}
	w_n^\ast = -\frac{T'_n}{T''_n}\qquad
	\tilde{J}_m = -\frac{1}{2}\sum_{n=1}^N\frac{(T'_n)^2}{T''_n}.
\end{align}
The infinite number of possible tree structures $s$ makes a greedy search inevitable.\pause
\begin{block}{Greedy best split search}
Start a tree with depth $0$.
Then, for each tree node, try to add a split. The corresponding change in the objective is the \hl{gain}:
\begin{equation}
	\frac{1}{2}\big(\underbrace{\frac{(T')_\text{left}^2}{T''_\text{left}}}_{\text{cost of left child}} + \underbrace{\frac{(T')_\text{right}^2}{T''_\text{right}}}_{\text{cost of right child}} - \underbrace{\frac{\big((T')_\text{left} + (T')_\text{right}\big)^2}{T''_\text{left} + T''_\text{right}}}_\text{cost of no split}\big).
\end{equation}
Find the best split over each feature's range by a linear scan of the gain (a small numerical sketch is given in the appendix slide at the end).
\end{block}
}

\frame{\frametitle{Implementation Considerations}
The GBM became renowned through fast, scalable, cache-aware, and sparsity-aware implementations (e.g., XGBoost, LightGBM, CatBoost).

As always, regularization is also part of the optimization but was not outlined here for conciseness. The spectrum of regularization factors comprises:
\begin{itemize}
	\item Tree constraints (depth, number of leaves, etc.),
	\item Weight decay,
	\item Random sampling (column- and row-wise),
	\item Learning rates for each additional tree.
\end{itemize}
}

\frame{\frametitle{Hyperparameter Optimization (1)}
\begin{figure}
	\includegraphics[width=10cm]{fig/lec08/levels_of_opt.pdf}
	\caption{The three levels of optimization}
	\label{fig:levels_of_opt}
\end{figure}
}

\frame{\frametitle{Hyperparameter Optimization (2)}
\begin{itemize}
	\item Hyperparameter optimization is, again, a non-linear optimization problem.
	\item Evaluation of any point in this space can be very costly, though.
	\item Information gathered during a search must be fully utilized.
	\item Toolboxes (non-exhaustive)
	\begin{itemize}
		\item \href{https://github.com/hyperopt/hyperopt}{Hyperopt}
		\item \href{https://scikit-optimize.github.io/stable/}{Scikit-optimize}
		\item \href{https://pyswarms.readthedocs.io/en/latest/}{PySwarms}
	\end{itemize}
\end{itemize}
}

\frame{\frametitle{SL Toolboxes}
\begin{itemize}
	\item Deep learning
	\begin{itemize}
		\item \href{https://www.tensorflow.org/tutorials/quickstart/beginner}{Tensorflow 2 (Keras)}
		\item \href{https://pytorch.org/}{PyTorch}
		\item \href{https://chainer.org/}{Chainer}
		\item \href{https://docs.microsoft.com/en-us/cognitive-toolkit/}{CNTK}
	\end{itemize}\pause
	\item Gradient boosting machines
	\begin{itemize}
		\item \href{https://xgboost.readthedocs.io/en/latest/}{XGBoost}
		\item \href{https://lightgbm.readthedocs.io/en/latest/}{LightGBM}
		\item \href{https://catboost.ai/}{CatBoost}
	\end{itemize}\pause
	\item Linear, tree-based, memory-based models, SVMs, among others
	\begin{itemize}
		\item \href{https://scikit-learn.org/stable/}{Scikit-learn}
	\end{itemize}
\end{itemize}
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Summary %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}
\frametitle{Summary: What You've Learned Today}
\begin{itemize}
	\item Industry has a high demand for ML applications.\pause
	\item Accepting a higher bias can reduce variance and improve the overall score.\pause
	\item How to cross-validate and improve SL models.\pause
	\item How features are engineered and normalized.\pause
	\item Fundamentals of linear regression, neural networks and gradient boosted trees.
\end{itemize}
\end{frame}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Final Slide %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame{\frametitle{The End for Today}
\begin{figure}
	\includegraphics[width=10cm]{fig/lec08/dilbert.jpg}
\end{figure}
\vspace{1cm}
\centering
Thanks for your attention and have a nice week!
}
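%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Appendix (added): numerical sketches %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]
\frametitle{Appendix: Backpropagation Example as NumPy Sketch}
A minimal NumPy sketch of the two-layer example from the ``Error Backpropagation Example'' slides (sigmoid hidden layer, linear output, squared-error cost). The variable names (\texttt{W1}, \texttt{b1}, \ldots) are illustrative only and not tied to any particular toolbox.
{\scriptsize
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, y = np.array([2.0, 5.0, 7.0]), 2.5
W1 = np.array([[0.1, 0.0], [-0.3, 0.4], [0.2, -0.9]])   # 3x2
b1 = np.array([0.05, -0.03])
W2, b2 = np.array([0.2, -0.8]), 0.1

# forward pass
h1 = sigmoid(x @ W1 + b1)        # approx. [0.53, 0.01]
y_hat = h1 @ W2 + b2             # approx. 0.198

# backward pass for the loss (y - y_hat)^2
theta2 = -2.0 * (y - y_hat)      # approx. -4.6, also the gradient w.r.t. b2
grad_W2 = h1 * theta2            # gradient w.r.t. W2
theta1 = theta2 * W2             # back through the output weights
grad_b1 = theta1 * h1 * (1.0 - h1)   # through the sigmoid derivative
grad_W1 = np.outer(x, grad_b1)       # gradient w.r.t. W1 (3x2)
\end{verbatim}
}
\end{frame}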
{ "alphanum_fraction": 0.7021115731, "avg_line_length": 43.9833729216, "ext": "tex", "hexsha": "26887ad06cbf9b85df056739e0a7b9656178e936", "lang": "TeX", "max_forks_count": 115, "max_forks_repo_forks_event_max_datetime": "2022-03-31T18:13:08.000Z", "max_forks_repo_forks_event_min_datetime": "2020-09-08T17:12:25.000Z", "max_forks_repo_head_hexsha": "09a211da5707ba61cd653ab9f2a899b08357d6a3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "speedhunter001/reinforcement_learning_course_materials", "max_forks_repo_path": "lecture_slides/tex/Lecture08.tex", "max_issues_count": 7, "max_issues_repo_head_hexsha": "09a211da5707ba61cd653ab9f2a899b08357d6a3", "max_issues_repo_issues_event_max_datetime": "2021-05-12T14:37:08.000Z", "max_issues_repo_issues_event_min_datetime": "2020-07-22T07:27:55.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "speedhunter001/reinforcement_learning_course_materials", "max_issues_repo_path": "lecture_slides/tex/Lecture08.tex", "max_line_length": 372, "max_stars_count": 557, "max_stars_repo_head_hexsha": "e086ae7dcee2a0c1dbb329c2b25cf583c339c75a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adilsheraz/reinforcement_learning_course_materials", "max_stars_repo_path": "lecture_slides/tex/Lecture08.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-31T19:30:35.000Z", "max_stars_repo_stars_event_min_datetime": "2020-07-20T08:38:15.000Z", "num_tokens": 12092, "size": 37034 }
\documentclass{article} \usepackage[utf8]{inputenc} \usepackage{amsmath} \DeclareMathOperator{\sgn}{sgn} \usepackage{hyperref} \usepackage{enumitem} \usepackage{commath} \usepackage[margin=0.5in]{geometry} \title{Spline Math} \author{James Kuszmaul} \begin{document} \maketitle \section{Introductions} This document describes some of the math going on in computing constraints on the maximum velocities in \texttt{trajectory.*}. Some of the thoughts are somewhat disorganized and represent a stream-of-consciousness more than actually useful documentation. For a given path, we will have: \begin{tabular}{r|l} Name & Description \\ \hline $v_l$, $v_r$ & Left and right wheel velocities \\ $\theta$ & Robot heading/yaw at a position along the path \\ $r$ & Robot radius (assuming center of mass in center of robot) \\ $u_l$, $u_r$ & Left and right wheel voltages \\ $V_{max}$ & Maximum allowable voltage (typically 12V) \\ $v_{max}$ & User-specified maximum velocity \\ $u$ & $\begin{bmatrix} u_l \\ u_r \end{bmatrix}$ \\ $A$, $B$ & LTI, matrices such that $\begin{bmatrix} \dot{v}_l \\ \dot{v}_r \end{bmatrix} = A \begin{bmatrix} v_l \\ v_r \end{bmatrix} + B u$ \\ $s$ & Distance along the path \\ \end{tabular} We are interested in discovering the fastest way to traverse the path in question, given constraints on: \begin{itemize} \item Friction limits of the wheels (assumed to static and time/velocity invariant). \item Voltage limits on the motors. \item Maximum velocity. \end{itemize} The voltage constraints are essentially provided by the linear dynamics shown above: \begin{align} \begin{bmatrix} \dot{v}_l \\ \dot{v}_r \end{bmatrix} &= A \begin{bmatrix} v_l \\ v_r \end{bmatrix} + B u \end{align} Also, at any given point we must maintain velocities/accelerations that actually allow us to follow the path (as opposed to optimizing $v_l$/$v_r$ separately...). Using the velocities of the individual wheels as a starting point, and introducing dummy variables for the velocity/acceleration of the center of the robot (corresponding to time derivatives of $s$): \begin{align*} v_c &= \frac{v_l r_r + v_r r_l}{r_r + r_l} \\ \begin{bmatrix} v_l \\ v_r \end{bmatrix} &= \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} v_c \\ \begin{bmatrix} \dot{v}_l \\ \dot{v}_r \end{bmatrix} &= \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} \dot{v}_c + \begin{bmatrix} - r_l \frac{d^2\theta}{ds^2} v_c \\ r_r \frac{d^2\theta}{ds^2} v_c \end{bmatrix} v_c \\ \begin{bmatrix} \dot{v}_l \\ \dot{v}_r \end{bmatrix} &= \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} \dot{v}_c + \begin{bmatrix} -r_l \\ r_r \end{bmatrix} \frac{d^2\theta}{ds^2} v_c^2 \\ \end{align*} And finally, we need to know the lateral accelerations of the two wheels as part of determining whether we would be breaking friction. Note that, because of the general assumptions we are making about the setup of the robot, the lateral acceleration on the two wheels will be identical (if they were not identical, that would imply the robot was tearing itself apart side-to-side). \begin{align*} a_{lat} &= \frac{d\theta}{ds} v_c^2 \\ \end{align*} Finally, we need to be combine the lateral/longitudinal accelerations of each wheel to determine whether the wheel would break friction. 
While technically there is only one expression for this, as a practical matter we have a bit of leeway in defining exactly what the relationship should be (e.g., if we want separate max longitudinal/lateral accelerations, we can create an ellipse; we could also just create a raw rectangle and decouple the two, although that would probably be a particularly poor approximation of the actual dynamics). For now we will say that our friction is limited by some nonnegative convex function $g(a_{lat}, a_{lng}) \le 1$ for any given wheel. Summarizing our equations and constraints, we have: \begin{align*} v_c &= \frac{v_l r_r + v_r r_l}{r_r + r_l} \\ a_{lat} &= \frac{d\theta}{ds} v_c^2 \\ g(a_{lat}, \dot{v}_l) &\le 1 \\ g(a_{lat}, \dot{v}_r) &\le 1 \\ \abs{u_l} &\le V_{max} \\ \abs{u_r} &\le V_{max} \\ \begin{bmatrix} v_l \\ v_r \end{bmatrix} &= \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} v_c \\ \begin{bmatrix} \dot{v}_l \\ \dot{v}_r \end{bmatrix} &= A \begin{bmatrix} v_l \\ v_r \end{bmatrix} + B \begin{bmatrix} u_l \\ u_r \end{bmatrix} \\ \begin{bmatrix} \dot{v}_l \\ \dot{v}_r \end{bmatrix} &= \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} \dot{v}_c + \begin{bmatrix} -r_l \\ r_r \end{bmatrix} \frac{d^2\theta}{ds^2} v_c^2 \\ \end{align*} With $v_l$, $v_r$, $u_l$, and $u_r$ free at all $s$ except the start and end, at which the velocities must be zero (technically, we could make do with allowing non-zero velocities, but then we could end up in situations that would be impossible to solve--because the splines have differentiable curvature, any of the splines we use will always have some valid velocity profile, but depending on the curvature, that velocity may be arbitrarily small). We can provide an upper limit on the goal velocity at any given point with the min of the specified max velocity and the lateral-acceleration limited velocity. This also gives an initial pass of the longitudinal accelerations that would be required to follow this profile. 
\begin{align*} v_{lat} &= \min (v_{max}, \sqrt{\frac{a_{lat,max}}{\frac{d\theta}{ds}}}) \\ \dot{v}_{lat} &= \begin{cases} 0 & v_{lat} = v_{max} \\ -\frac12 \sqrt{\frac{a_{lat,max}}{\frac{d\theta}{ds}^3}} \frac{d^2\theta}{ds^2} v_{lat} & v_{lat} \ne v_{max} \\ \end{cases} \\ &= \begin{cases} 0 & v_{lat} = v_{max} \\ -\frac12 \frac{a_{lat,max}}{\frac{d\theta}{ds}^2} \frac{d^2\theta}{ds^2} & v_{lat} \ne v_{max} \\ \end{cases} \\ \end{align*} If we start to consider accounting also for longitudinal accelerations, we can start with: \begin{align} \begin{bmatrix} \dot{v}_l \\ \dot{v}_r \end{bmatrix} &= \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} \dot{v}_c + \begin{bmatrix} -r_l \\ r_r \end{bmatrix} \frac{d^2\theta}{ds^2} v_c^2 \\ (r_r + r_l) \left[ \begin{bmatrix} \dot{v}_l \\ \dot{v}_r \end{bmatrix} &= \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} \frac{\dot{v}_l r_r + \dot{v}_r r_l}{r_r + r_l} + \begin{bmatrix} -r_l \\ r_r \end{bmatrix} \frac{d^2\theta}{ds^2} v_c^2 \right] \\ \begin{bmatrix} r_l + r_l r_r \frac{d\theta}{ds} & -r_l + r_lr_l \frac{d\theta}{ds} \\ -r_r - r_r r_r \frac{d\theta}{ds} & r_r - r_r r_l \frac{d\theta}{ds} \\ \end{bmatrix} \begin{bmatrix} \dot{v}_l \\ \dot{v}_r \end{bmatrix} &= \begin{bmatrix} -r_l \\ r_r \end{bmatrix} (r_l + r_r) \frac{d^2\theta}{ds^2} v_c^2 \\ \begin{bmatrix} 1 + r_r \frac{d\theta}{ds} & -1 + r_l \frac{d\theta}{ds} \\ -1 - r_r \frac{d\theta}{ds} & 1 - r_l \frac{d\theta}{ds} \\ \end{bmatrix} \begin{bmatrix} \dot{v}_l \\ \dot{v}_r \end{bmatrix} &= \begin{bmatrix} -1 \\ 1 \end{bmatrix} (r_l + r_r) \frac{d^2\theta}{ds^2} v_c^2 \\ \begin{bmatrix} 1 + r_r \frac{d\theta}{ds} & -1 + r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} & -1 + r_l \frac{d\theta}{ds} \\ \end{bmatrix} \begin{bmatrix} \dot{v}_l \\ \dot{v}_r \end{bmatrix} &= -(r_l + r_r) \frac{d^2\theta}{ds^2} v_c^2 \\ (1 + r_r \frac{d\theta}{ds}) \dot{v}_l + (-1 + r_l \frac{d\theta}{ds}) \dot{v}_r &= -(r_l + r_r) \frac{d^2\theta}{ds^2} v_c^2 \\ %(1 + r \frac{d\theta}{ds}) \dot{v}_l + (-1 + r \frac{d\theta}{ds}) (2\dot{v}_c - %\dot{v}_l) &= % -2 r \frac{d^2\theta}{ds^2} v_c^2 \\ %2 \dot{v}_l + 2(-1 + r \frac{d\theta}{ds}) \dot{v}_c % &= -2 r \frac{d^2\theta}{ds^2} v_c^2 \\ %\dot{v}_l &= (1 - r \frac{d\theta}{ds}) \dot{v}_c - 2 r \frac{d^2\theta}{ds^2} v_c^2 \\ %\dot{v}_r &= (1 + r \frac{d\theta}{ds}) \dot{v}_c + 2 r \frac{d^2\theta}{ds^2} v_c^2 \\ \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \\ 0 & -1 \\ \end{bmatrix} \begin{bmatrix} \dot{v}_l \\ \dot{v}_r \end{bmatrix} &\le g'(v_c, s) \\ \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \\ 0 & -1 \\ \end{bmatrix} \left( \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} \dot{v}_c + \begin{bmatrix} -r_l \\ r_r \end{bmatrix} \frac{d^2\theta}{ds^2} v_c^2 \right) &\le g'(v_c, s) \end{align} Note how we have a linear relationship between $\dot{v}_l$ and $\dot{v}_r$, where the slope of the line between the two is constant at any given point on the path, but the y-intercept and the max/min bounds get tighter with speed. Furthermore, the velocity along the path ($v_c$) is a linear function of the wheel velocities $v_l$ and $v_r$. Thus, we can state, at any given point on the path, a lower velocity along the path will not decrease our acceleration bounds (if we are travelling in a straight line, then our speed does not affect the straight-line acceleration limits). 
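As a quick sanity check of the lateral-acceleration cap $v_{lat}$ above, the following short Python sketch evaluates it on a discretized curvature profile. The names used here are purely illustrative and do not correspond to the actual \texttt{trajectory.*} implementation; the curvature magnitude is used so that straight sections fall back to $v_{max}$.

\begin{verbatim}
import numpy as np

def lateral_velocity_cap(dtheta_ds, v_max, a_lat_max):
    # v_lat = min(v_max, sqrt(a_lat_max / |dtheta/ds|)); a straight
    # section (|dtheta/ds| = 0) maps to v_max.
    curvature = np.abs(np.asarray(dtheta_ds, dtype=float))
    with np.errstate(divide="ignore"):
        v_curve = np.sqrt(a_lat_max / curvature)
    return np.minimum(v_max, v_curve)

# curvature sampled along the path parameter s
print(lateral_velocity_cap([0.0, 0.05, 0.2, 0.5, 0.2], 4.0, 3.0))
\end{verbatim}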
We also can introduce the voltage constraints: \begin{align} \begin{bmatrix} \dot{v}_l \\ \dot{v}_r \end{bmatrix} &= A \begin{bmatrix} v_l \\ v_r \end{bmatrix} + B \begin{bmatrix} u_l \\ u_r \end{bmatrix} \\ \begin{bmatrix} \dot{v}_l \\ \dot{v}_r \end{bmatrix} &= A \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} v_c + B \begin{bmatrix} u_l \\ u_r \end{bmatrix} \\ \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} \dot{v}_c + \begin{bmatrix} -r_l \\ r_r \end{bmatrix} \frac{d^2\theta}{ds^2} v_c^2 &= A \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} v_c + B \begin{bmatrix} u_l \\ u_r \end{bmatrix} \\ \end{align} With these constraints, we can no longer guarantee that the set of feasible velocities at any given point along the path is contiguous. However, we can guarantee that at any given point there will be at most 2 contiguous sets of valid velocities---in the case where there are two separate sets of allowable velocities, then one corresponds to the lower velocities which start at zero and go up until we can no longer use our finite available voltage to track the path. The second zone can occur at higher velocities where we are going so fast that the deceleration from the Back-EMF of the motor is enough to slow us down, but not so fast that we would lose friction. This divergence does technically create a bit of a challenge--if it were to, e.g., split into arbitrarily many possible paths than that would complicate the optimization problem substantially. However, because: \begin{enumerate} \item We only split into two branches at once; and \item we should only have a limited number of branchings for a given N-spline \footnote{I'll leave this as an exercise to the reader/my future self. However, logically, a spline with a given number of points can only have so many features and so should be able to have a very finite number of branchings.}. \end{enumerate} It should not actually be an issue to exhaustively attempt to plan for all possible branchings, knowing that the path of ``always follow the slower branch'' will always provide a feasible path, and choose whatever feasible plan is fastest. Once we choose which branch to take, it is a relatively simple matter of exercising the existing method. Essentially, taking the set of feasible velocities/accelerations as a starting point we do a forwards and backwards pass (order is unimportant)) where we start at our initial/terminal velocity, and accelerate forwards/decelerate backwards as hard as possible for each pass. If attempting to accelerate as hard as possible on the forwards pass (or, conversely, decelerate on the backwards) results in us violating the feasible velocity bounds, we can simply set the velocity at that point for that pass to the highest feasible velocity. We can guarantee that the pass in the opposite direction will \emph{not} get stuck at that same point because if that would suggest we had different physics going backwards than forwards, which is not the case\footnote{I'm too lazy to spell this logic out more carefully, and I don't think it is wrong. That is not guaranteed, however.}. \section{Specialization for Friction Circle} As our particular friction circle, we will attempt to use a straight ellipse\footnote{Note that this has numerical stability issues due to the infinite slope near zero}. 
With a $a_{lat,max}$ and $a_{lng,max}$ for the maximum lateral and longitudinal accelerations, i.e.: \begin{align*} g(a_{lat}, a_{lng}) &= \left(\frac{a_{lat}}{a_{lat,max}}\right)^2 + \left(\frac{a_{lng}}{a_{lng,max}}\right)^2 \\ \end{align*} We care about turning this into a function of the path geometry and the current velocity that returns the max longitudinal acceleration and lets us calculate the friction-limited maximum acceleration, $v_{c,fmax}$. In order to do so, we note that, in order to be at the edge of a feasible velocity, we observe that, at any given $v_c$, we will have two pairs of constraints on the center-of-mass longitudinal acceleration that both must be met, corresponding to the fact that the each wheel creates an upper/lower bound on the valid accelerations of the center-of-mass of the robot, but at high $\frac{d^2\theta}{ds^2}v_c^2$, the constraints from each wheel may not overlap. Thus, we care about the $v_c$ such that the pairs of constraints \emph{just barely} overlap. This will happen when one constraint from the right wheel and one from the left are just on the edge of being violated; furthermore, of the four possible left/right pairings (+/- of left/right), only two pairs correspond to being on the edge of overlapping (the other two pairs correspond to, e.g., having the upper acceleration limit in the same spot, as occurs when driving straight; in such a situation, there are plenty of valid accelerations, we just happen to start slipping both wheels at the same time). Which pair to look at is determined by the signs on $\dot{v}_c$--if they are the same (as would occur when going nearly straight), then we care about when the opposite limits overlap (e.g., upper limit on the left, lower on the right). \begin{align*} \dot{v}_l, \dot{v}_r &\le a_{lng, max} \sqrt{1 - \left(\frac{a_{lat}}{a_{lat,max}}\right)^2} \\ \dot{v}_l, \dot{v}_r &\le a_{lng, max} \sqrt{1 - \left(\frac{\frac{d\theta}{ds}v_c^2}{a_{lat,max}}\right)^2} \\ \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \\ -1 + r_l \frac{d\theta}{ds} \\ -1 - r_r \frac{d\theta}{ds} \end{bmatrix} \dot{v}_c &\le \begin{bmatrix} r_l \\ -r_r \\ -r_l \\ r_r \end{bmatrix} \frac{d^2\theta}{ds^2} v_c^2 + a_{lng, max} \sqrt{1 - \left(\frac{\frac{d\theta}{ds}v_c^2}{a_{lat,max}}\right)^2} \\ 0 &= \pm \{r_l,r_r\} \frac{d^2\theta}{ds^2} v_{c,fmax}^2 + a_{lng, max} \sqrt{1 - \left(\frac{\frac{d\theta}{ds}v_{c,fmax}^2}{a_{lat,max}}\right)^2} \\ \left(\frac{d^2\theta}{ds^2} \frac{\{r_l,r_r\}v_{c,fmax}^2}{a_{lng, max}}\right)^2 &= 1 - \left(\frac{\frac{d\theta}{ds}v_{c,fmax}^2}{a_{lat,max}}\right)^2 \\ v_{c,fmax} &= \min_{r \in \{r_l, r_r\}} \left(\left(\frac{d^2\theta}{ds^2} \frac{r}{a_{lng, max}}\right)^2 + \left(\frac{\frac{d\theta}{ds}}{a_{lat,max}}\right)^2\right)^{-\frac14} \\ \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} \dot{v}_c + \begin{bmatrix} -r_l \\ r_r \end{bmatrix} \frac{d^2\theta}{ds^2} v_c^2 &= \begin{bmatrix}\pm 1 \\ \pm 1\end{bmatrix} a_{lng, max} \sqrt{1 - \left(\frac{\frac{d\theta}{ds}v_c^2}{a_{lat,max}}\right)^2} \\ K_2 \dot{v}_c + K_1 v_c^2 &= \begin{bmatrix}\pm 1 \\ \pm 1\end{bmatrix} a_{lng, max} \sqrt{1 - \left(\frac{\frac{d\theta}{ds}v_c^2}{a_{lat,max}}\right)^2} \\ K_2' K_2 \dot{v}_c + K_2' K_1 v_c^2 &= K_2' \begin{bmatrix}\pm 1 \\ \pm 1\end{bmatrix} a_{lng, max} \sqrt{1 - \left(\frac{\frac{d\theta}{ds}v_c^2}{a_{lat,max}}\right)^2} \\ K_2' K_1 v_c^2 &= K_2' \begin{bmatrix}\pm 1 \\ \pm 1\end{bmatrix} a_{lng, max} \sqrt{1 - 
\left(\frac{\frac{d\theta}{ds}v_c^2}{a_{lat,max}}\right)^2} \\
\left(\frac{K_2' K_1 v_c^2}{K_2' \begin{bmatrix} 1 \\ \pm 1\end{bmatrix}a_{lng,max}}\right)^2 &= 1 -
   \left(\frac{\frac{d\theta}{ds}v_c^2}{a_{lat,max}}\right)^2 \\
v_c &= \min \left(
    \left(\frac{K_2'K_1}{K_2' \begin{bmatrix} 1 \\ \pm 1\end{bmatrix}a_{lng, max}}\right)^2 +
    \left(\frac{\frac{d\theta}{ds}}{a_{lat,max}}\right)^2
    \right)^{-\frac14}
\end{align*}

Where the $\pm 1$ terms will have opposite signs if the two coefficients of
$K_2$ have the same sign, and will have the same sign if the coefficients of
$K_2$ have opposite signs. $K_2'$ refers to a transformed version of $K_2$
such that $K_2'K_2 = 0$ (see below for the exact definition).

\section{Calculating Feasible Voltage Velocities}

Given:
\begin{align*}
\begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds}
  \end{bmatrix} \dot{v}_c + \begin{bmatrix} -r_l \\ r_r \end{bmatrix}
  \frac{d^2\theta}{ds^2} v_c^2 &=
  A \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds}
  \end{bmatrix} v_c + B \begin{bmatrix} u_l \\ u_r \end{bmatrix} \\
\abs{u} &\le V_{max} \\
\norm{B^{-1} \left(\begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds}
  \end{bmatrix} \dot{v}_c + \begin{bmatrix} -r_l \\ r_r \end{bmatrix}
  \frac{d^2\theta}{ds^2} v_c^2 -
  A \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds}
  \end{bmatrix} v_c\right) }_\infty &\le V_{max} \\
\end{align*}

This is a linear program for fixed $v_c$. We wish to determine the limits of
$v_c$ for which the linear program will have a solution.

In order to analyze this linear program in $\dot{v}_c$, it is useful to know
the signs of the coefficients of $\dot{v}_c$, which in this case are
$B^{-1} \begin{bmatrix}1 - r_l\frac{d\theta}{ds} \\ 1 + r_r
\frac{d\theta}{ds}\end{bmatrix}$.

If either coefficient is zero, then for any given $v_c$ the linear program
becomes trivial, as the term with $\dot{v}_c$ will drop out of one of the rows
and so the other row will be guaranteed to be valid for some $\dot{v}_c$ while
the row with the dropped $\dot{v}_c$ will either be valid or not. If $B$ were
the identity matrix, then this would correspond to situations where one wheel
is stationary and the acceleration of that wheel is not affected by the
overall acceleration along the path. Determining the limits of allowable
velocities is then an issue of calculating the point at which the stationary
wheel goes from being controllable to not, for a given overall velocity.

Otherwise, the edge of feasibility will occur when the pair of constraints
(where one pair is for the left voltage, one for the right) cross and so we
will have each voltage at $\pm V_{max}$; if the signs of the coefficients
differ, then we will have $u_l = u_r$. Otherwise, $u_l = -u_r$.
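To make the feasibility condition above concrete, here is a small Python sketch that checks the infinity-norm voltage constraint for a single candidate $(v_c, \dot{v}_c)$ at one point along the path, with $K_1$ and $K_2$ the curvature-dependent vectors used above. The names are placeholders for illustration, not the production implementation.

\begin{verbatim}
import numpy as np

def voltage_feasible(v_c, v_c_dot, A, B, K1, K2, v_max=12.0):
    # Voltages needed to realize (v_c, dv_c/dt), i.e.
    # u = B^-1 (K2 * dv_c/dt + K1 * v_c^2 - A K2 v_c),
    # checked against |u_l|, |u_r| <= V_max.
    u = np.linalg.solve(B, K2 * v_c_dot + K1 * v_c**2 - A @ (K2 * v_c))
    return np.max(np.abs(u)) <= v_max
\end{verbatim}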
For $u_l = u_r$, attempting to maximize $v_c$: \begin{align*} B^{-1} \left(\begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} \dot{v}_c + \begin{bmatrix} -r_l \\ r_r \end{bmatrix} \frac{d^2\theta}{ds^2} v_c^2 - A \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} v_c\right) &= \pm \begin{bmatrix} V_{max} \\ V_{max} \end{bmatrix} \\ \begin{pmatrix} 1 + r_r\frac{d\theta}{ds} \\ r_l\frac{d\theta}{ds} - 1 \end{pmatrix}\left( \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} \dot{v}_c + \begin{bmatrix} -r_l \\ r_r \end{bmatrix} \frac{d^2\theta}{ds^2} v_c^2 - A \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} v_c &= \pm B \begin{bmatrix} V_{max} \\ V_{max} \end{bmatrix} \right) \\ (r_l \frac{d\theta}{ds} - 1) (1 + r_r \frac{d\theta}{ds}) \dot{v}_c + \begin{bmatrix} -r_l - r_lr_r\frac{d\theta}{ds} \\ -r_r + r_lr_r\frac{d\theta}{ds} \end{bmatrix} \frac{d^2\theta}{ds^2} v_c^2 - \begin{bmatrix} 1 + r_r \frac{d\theta}{ds} & 0 \\ 0 & r_l \frac{d\theta}{ds} - 1 \end{bmatrix} A \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} v_c &= \begin{bmatrix} 1 + r_r \frac{d\theta}{ds} & 0 \\ 0 & r_l \frac{d\theta}{ds} - 1 \end{bmatrix} B \begin{bmatrix} \pm V_{max} \\ \pm V_{max} \end{bmatrix} \end{align*} \begin{align*} -(r_l + r_r) \frac{d^2\theta}{ds^2} v_c^2 - ((1 + r_r\frac{d\theta}{ds})(A_{11}(1 - r_l\frac{d\theta}{ds}) + A_{12}(1 + r_r\frac{d\theta}{ds})) + (r_l\frac{d\theta}{ds} - 1)(A_{12}(1 - r_l\frac{d\theta}{ds}) + A_{11}(1 + r_r\frac{d\theta}{ds})) ) v_c &= \\ \pm V_{max}\frac{d\theta}{ds}(B_{12}r_l + B_{11}r_l + B_{11}r_r + B_{12}r_r) \\ (r_l + r_r) \frac{d^2\theta}{ds^2} v_c^2 + \frac{d\theta}{ds}( A_{11} (-r_l - r_r) + A_{12}(r_r + r_l) + r_r(A_{11}(1 - r_l\frac{d\theta}{ds}) + A_{12}(1 + r_r\frac{d\theta}{ds})) + r_l(A_{12}(1 - r_l\frac{d\theta}{ds}) + A_{11}(1 + r_r\frac{d\theta}{ds})) ) v_c &= \\ (r_l + r_r) \frac{d^2\theta}{ds^2} v_c^2 + \frac{d\theta}{ds}( (r_l + r_r) (A_{12} - A_{11}) + A_{11}(r_l + r_r) + A_{12}(r_l + r_r) + A_{12}\frac{d\theta}{ds}(r_r^2 - r_l^2)) v_c &= \\ (r_l + r_r) \left( \frac{d^2\theta}{ds^2} v_c^2 + \frac{d\theta}{ds}A_{12}\left(2 + \frac{d\theta}{ds}(r_r - r_l)\right) v_c\right) &= \\ \pm V_{max}(r_l + r_r)\frac{d\theta}{ds}(B_{11} + B_{12}) \\ \frac{d^2\theta}{ds^2} v_c^2 + \frac{d\theta}{ds}A_{12}\left(2 + \frac{d\theta}{ds}(r_r - r_l)\right) v_c = \pm V_{max}\frac{d\theta}{ds}(B_{11} + B_{12}) \\ \end{align*} Note that when I eliminate parts of $A$ and $B$ I am taking advantage of the fact that different rows of $A$ and $B$ are just reordered version of one another (i.e., each is of the format $\begin{bmatrix} a & b \\ b & a \end{bmatrix}$). In the final equation, we then execute the quadratic formula for $v_c$. Note that if you write the quadratic formula out there will be up to 4 real solutions (the two solutions to the quadratic formula by the two $\pm V_{max}$ results); because of the placement of the $\pm V_{max}$, then 0, 1, or 3 of the solutions will be greater than zero \footnote{The basic format of the quadratic formula in this case will be $b \pm \sqrt{b^2 \pm k}$ or some equivalent, and $0 < \abs{k} \le b^2$ ($\abs{k} \le b^$ is necessary to have 4 real solutions; and $k = 0$ only if $b$ also is zero, or if $V_{max} = 0$, which are not particularly relevant corner cases). 
If $b$ is positive, then 3 will be positive ($b +$ anything is positive, and since $\sqrt{b^2 - k} \le b$, $b - \sqrt{b^2 - k} \ge 0$, while $b - \sqrt{b^2 + k} \le 0$). A similar argument holds for negative $b$. If $b = 0$, then if there are four real solutions, all four are zero. If there are not four real solutions, then $\abs{k} > b^2$ and we have two real solutions. In this case, because $\sqrt{b^2 + k} > b$, we will have one positive and one negative solution.}. The zero positive solution case is not particularly interesting, as it only applies at zero curvature (which cannot apply here, as we already stated that $\abs{r\frac{d\theta}{ds}} > 1$ (or, if we have zero $V_{max}$). The one or three provide us our cutoffs for where there may be a gap in allowable accelerations. For situations where the coefficients have the same sign, attempting to maximize $v_c$: \begin{align*} (r_l \frac{d\theta}{ds} - 1) (1 + r_r \frac{d\theta}{ds}) \dot{v}_c + \begin{bmatrix} -r_l - r_lr_r\frac{d\theta}{ds} \\ -r_r + r_lr_r\frac{d\theta}{ds} \end{bmatrix} \frac{d^2\theta}{ds^2} v_c^2 - \begin{bmatrix} 1 + r_r \frac{d\theta}{ds} & 0 \\ 0 & r_l \frac{d\theta}{ds} - 1 \end{bmatrix} A \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} v_c &= \begin{bmatrix} 1 + r_r \frac{d\theta}{ds} & 0 \\ 0 & r_l \frac{d\theta}{ds} - 1 \end{bmatrix} B \begin{bmatrix} \pm V_{max} \\ \mp V_{max} \end{bmatrix} \end{align*} \begin{align*} (r_l + r_r) \left( \frac{d^2\theta}{ds^2} v_c^2 + \frac{d\theta}{ds}A_{12}\left(2 + \frac{d\theta}{ds}(r_r - r_l)\right) v_c\right) &= \pm V_{max}(B_{11} - B_{12})(2 + \frac{d\theta}{ds}(r_r - r_l)) \\ \frac{d^2\theta}{ds^2} v_c^2 + \frac{d\theta}{ds}A_{12}\left(2 + \frac{d\theta}{ds}(r_r - r_l)\right) v_c &= \pm V_{max}\frac{(B_{11} - B_{12})(2 + \frac{d\theta}{ds}(r_r - r_l))}{r_r + r_l}\\ \end{align*} Where the same basic rules apply to the quadratic formula solutions. 
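The velocity cutoffs discussed above come from collecting the nonnegative real roots of the quadratic for both choices of $\pm V_{max}$. The following sketch does exactly that for generic coefficients; it is illustrative only and assumes the quadratic has already been reduced to the form $a v_c^2 + b v_c = \pm k$.

\begin{verbatim}
import numpy as np

def positive_cutoff_velocities(a, b, k):
    # Roots of a*v^2 + b*v = +/- k, keeping only nonnegative real ones.
    cutoffs = []
    for rhs in (k, -k):
        for root in np.roots([a, b, -rhs]):
            if abs(root.imag) < 1e-9 and root.real >= 0.0:
                cutoffs.append(root.real)
    return sorted(cutoffs)
\end{verbatim}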
Considering situations where one of the rows in the equations disappear, we end up with the $\dot{v}_c$ term disappearing and we must solve the following equation for whichever row of $B^{-1}$ $B^{-1}_N$ causes the $\dot{v}_c$ term to disappear: \begin{align*} B^{-1} &= \frac{1}{B_{11}^2 - B_{12}^2}\begin{bmatrix} B_{11} & -B_{12} \\ -B_{12} & B_{11} \end{bmatrix} \\ B^{-1}_N\left( \begin{bmatrix} -r_l \\ r_r \end{bmatrix} \frac{d^2\theta}{ds^2} v_c^2 - A \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} v_c \right) &= \pm V_{max} \\ \end{align*} Trying to be more general: \begin{align*} B^{-1} \left(\begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} \dot{v}_c + \begin{bmatrix} -r_l \\ r_r \end{bmatrix} \frac{d^2\theta}{ds^2} v_c^2 - A \begin{bmatrix} 1 - r_l \frac{d\theta}{ds} \\ 1 + r_r \frac{d\theta}{ds} \end{bmatrix} v_c\right) &= \pm \begin{bmatrix} V_{max} \\ V_{max} \end{bmatrix} \\ B^{-1} \left(K_2 \dot{v}_c + K_1 v_c^2 - A K_2 v_c\right) &= \pm \begin{bmatrix} V_{max} \\ V_{max} \end{bmatrix} \\ K_2' &= K_2^T \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \\ K_2' \left(K_2 \dot{v}_c + K_1 v_c^2 - A K_2 v_c\right) &= K_2' B \begin{bmatrix} \pm 1 \\ \pm 1 \end{bmatrix} V_{max} \\ K_2' \left(K_2 \dot{v}_c + K_1 v_c^2 - A K_2 v_c\right) &= K_2' B \begin{bmatrix} \pm 1 \\ \pm 1 \end{bmatrix} V_{max} \\ K_2' \left(K_1 v_c^2 - A K_2 v_c\right) &= K_2' B \begin{bmatrix} \pm 1 \\ \pm 1 \end{bmatrix} V_{max} \\ \end{align*} The quadratic formula coefficients: \begin{align*} a v_c^2 + b v_c + c &= 0 \\ a &= K_2'K_1 \\ b &= -K_2'AK_2 \\ c &= -K_2' B \begin{bmatrix} \pm 1 \\ \pm 1 \end{bmatrix} V_{max} \\ b^2 - 4 ac &= (K_2'AK_2)^2 + 4K_2'K_1 K_2'B \begin{bmatrix} \pm 1 \\ \pm 1 \end{bmatrix} V_{max} \\ \end{align*} %Interestingly, the solutions for both basically devolve to: %\begin{align*} % \frac{d^2\theta}{ds^2} v_c^2 + 2 \frac{d\theta}{ds} A_{12} v_c % = \pm V_{max}\max(\abs{\frac{d\theta}{ds}}, 1)(B_{11} + B_{12}) \\ %\end{align*} % %While not overly interesting, the moments when we will have a split in %feasibility will be at: % %\begin{align*} %0 < \abs{4V_{max}\max(\abs{\frac{d\theta}{ds}}, 1)(B_{11} + %B_{12})\frac{d^2\theta}{ds^2}} &\le \abs{2\frac{d\theta}{ds} A_{12}} \\ %\abs{\frac{d\theta}{ds}} &\le 1 : \\ %0 < \abs{2V_{max}(B_{11} + B_{12})\frac{d^2\theta}{ds^2}} &\le %\abs{\frac{d\theta}{ds} A_{12}} \\ %\abs{\frac{d\theta}{ds}} &> 1 : \\ %0 < \abs{2V_{max}(B_{11} + B_{12})\frac{d^2\theta}{ds^2}} &\le \abs{A_{12}} \\ %\end{align*} % %Note that, in general, $A_{12}$ will be relatively small when compared to %$2V_{max}(B_{11} + B_{12})$. 
%\begin{align*} %B_{12} = B_{21} &, B_{11} = B_{22} \\ %2 r \frac{d^2\theta}{ds^2} v_c^2 &= \pm 2V_{max}r\frac{d\theta}{ds} (B_{12} - B_{11}) \\ %v_c &= \sqrt{\pm\frac{V_{max}\frac{d\theta}{ds}}{\frac{d^2\theta}{ds^2}} (B_{12} - B_{11})} \\ %\end{align*} % %\begin{align*} %\begin{bmatrix} 1 - r^2 \frac{d\theta}{ds}^2 \\ 1 - r^2 \frac{d\theta}{ds}^2 % \end{bmatrix} \dot{v}_c + \begin{bmatrix} -1 - r\frac{d\theta}{ds} \\ 1 - % r\frac{d\theta}{ds} \end{bmatrix} r \frac{d^2\theta}{ds^2} v_c^2 % - A \begin{bmatrix} 1 - r^2 \frac{d\theta}{ds}^2 \\ 1 - r^2 \frac{d\theta}{ds}^2 % \end{bmatrix} v_c &= B % \begin{bmatrix} \mp V_{max} (1 + r\frac{d\theta}{ds}) \\ \pm V_{max} (1 - % r\frac{d\theta}{ds}) \end{bmatrix} \\ % 2 r \frac{d^2\theta}{ds^2} v_c^2 % &= \pm V_{max}(B_{21} - B_{22} - B_{11} + B_{12}) \\ % r \frac{d^2\theta}{ds^2} v_c^2 % &= \pm V_{max}(B_{12} - B_{11}) \\ %v_c &= \sqrt{\pm \frac{V_{max}}{r\frac{d^2\theta}{ds^2}} (B_{12} - B_{11})} \\ %\end{align*} \end{document}
{ "alphanum_fraction": 0.6582641991, "avg_line_length": 49.6584507042, "ext": "tex", "hexsha": "49164bf62984edeb6d9caca3616de20ff95d9417", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2022-03-08T07:33:18.000Z", "max_forks_repo_forks_event_min_datetime": "2021-08-19T19:20:04.000Z", "max_forks_repo_head_hexsha": "3a8a0c4359f284d29547962c2b4c43d290d8065c", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "Ewpratten/frc_971_mirror", "max_forks_repo_path": "frc971/control_loops/drivetrain/spline_math.tex", "max_issues_count": 10, "max_issues_repo_head_hexsha": "3a8a0c4359f284d29547962c2b4c43d290d8065c", "max_issues_repo_issues_event_max_datetime": "2022-03-18T22:14:15.000Z", "max_issues_repo_issues_event_min_datetime": "2021-06-18T03:22:19.000Z", "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "Ewpratten/frc_971_mirror", "max_issues_repo_path": "frc971/control_loops/drivetrain/spline_math.tex", "max_line_length": 138, "max_stars_count": 39, "max_stars_repo_head_hexsha": "3a8a0c4359f284d29547962c2b4c43d290d8065c", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "Ewpratten/frc_971_mirror", "max_stars_repo_path": "frc971/control_loops/drivetrain/spline_math.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-21T15:23:43.000Z", "max_stars_repo_stars_event_min_datetime": "2021-06-18T03:22:30.000Z", "num_tokens": 10637, "size": 28206 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %2345678901234567890123456789012345678901234567890123456789012345678901234567890 % 1 2 3 4 5 6 7 8 \documentclass[letterpaper, 10 pt, conference]{ieeeconf} % Comment this line out if you need a4paper %\documentclass[a4paper, 10pt, conference]{ieeeconf} % Use this line for a4 paper \IEEEoverridecommandlockouts % This command is only needed if % you want to use the \thanks command \overrideIEEEmargins % Needed to meet printer requirements. %In case you encounter the following error: %Error 1010 The PDF file may be corrupt (unable to open PDF file) OR %Error 1000 An error occurred while parsing a contents stream. Unable to analyze the PDF file. %This is a known problem with pdfLaTeX conversion filter. The file cannot be opened with acrobat reader %Please use one of the alternatives below to circumvent this error by uncommenting one or the other %\pdfobjcompresslevel=0 %\pdfminorversion=4 % See the \addtolength command later in the file to balance the column lengths % on the last page of the document % The following packages can be found on http:\\www.ctan.org %\usepackage{graphics} % for pdf, bitmapped graphics files %\usepackage{epsfig} % for postscript graphics files %\usepackage{mathptmx} % assumes new font selection scheme installed %\usepackage{times} % assumes new font selection scheme installed %\usepackage{amsmath} % assumes amsmath package installed %\usepackage{amssymb} % assumes amsmath package installed \usepackage{lipsum} \usepackage[utf8]{inputenc} \usepackage{amsmath,amssymb,amsfonts,bm} \usepackage{xcolor} \usepackage{algorithmic} \usepackage{graphicx} \usepackage{textcomp} \usepackage{xcolor} \usepackage{hyperref} \title{\LARGE \bf LOL: Lidar-only Odometry and Localization in 3D point cloud maps* } \author{Dávid Rozenberszki and András L. Majdik$^{1}$% <-this % stops a space \thanks{*The research reported in this paper was supported by the Hungarian Scientific Research Fund (No.\ NKFIH OTKA KH-126513) and by the project: Exploring the Mathematical Foundations of Artificial Intelligence 2018-1.2.1-NKP-00008. }% <-this % stops a space \thanks{$^{1}$Dávid Rozenberszki and András L. Majdik are with the Machine Perception Research Laboratory, MTA SZTAKI, Hungarian Academy of Sciences, Institute for Computer Science and Control, 1111 Budapest, Hungary {\tt\small \{rozenberszki,majdik\}@sztaki.hu}}% %\thanks{$^{2}$András L. Majdik is with the the Machine Perception Research Laboratory, % MTA SZTAKI, Hungarian Academy of Sciences, Institute for Computer Science and Control, 1111 Budapest, Hungary % {\tt\small [email protected]}}% } \begin{document} \maketitle \thispagestyle{empty} \pagestyle{empty} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{abstract} % We deal with the problem of odometry and localization for Lidar-equipped vehicles driving in urban or residental environments. %, where a premade target map exists to localize against. In this paper we deal with the problem of odometry and localization for Lidar-equipped vehicles driving in urban environments, where a premade target map exists to localize against. In our problem formulation, to correct the accumulated drift of the Lidar-only odometry we apply a place recognition method to detect geometrically similar locations between the online 3D point cloud and the a priori offline map. 
% This approach differs from the original SLAM algorithms in the prior loop of mapping as in our case it is based on a existing target map to localize against. % The Lidar-only odometry is based on a state-of-the art Lidar odometry algorithm, while the localization problem is tackled by a segment matching algorithm. In the proposed system, we integrate a state-of-the-art Lidar-only odometry algorithm with a recently proposed 3D point segment matching method by complementing their advantages. % Both of the two sub-problems addressed in this paper are based and solved by state-of-the-art algorithms. The odometry is obtained by a high frequency but low fidelity velocity calculations with additional refinement and matching of the registered pointclouds in the map, while the localization is depending on a segment-based representation of the Lidar environment. %Our contribution in this problem was to find the best performing methods for the subtasks and modify them to perform together completing each other deficiencies without losing the ability for real-time performance and high frequency localization. % Our contribution in this problem after the selection of algorithms was to integrate them together to complement each others deficiencies and implement additional steps to increase the global localization reliability and precision. Also, we propose additional enhancements in order to reduce the number of false matches between the online point cloud and the target map, and to refine the position estimation error whenever a good match is detected. % Our solution of correspondence filtering between the target and local segments is based on an advanced RANSAC method to reduce the number of false matches. % We also integrated an additional step for the position update with ICP refinement of the segment point clouds after their prior transformation by cenroids. %improved the matching accuracy with a RANSAC filter and solved the problem of relocalization after finding correspondences. % The proposed approach is tested on several Kitti datasets of different lengths and environments. We demonstrate the utility of the proposed LOL system on several Kitti datasets of different lengths and environments, where the relocalization accuracy and the precision of the vehicle's trajectory were significantly improved in every case, while still being able to maintain real-time performance. % Our LOL algorithm succeeded to improve from the original methods in the relocalization accuracy in every case while still being able maintain real-time performance. \end{abstract} %\begin{IEEEkeywords} %Lidar, Odometry, Localization, 3D Point Clouds % Localization, Mapping, Range Sensing, Computer Vision for Transportation, Autonomous Vehicle Navigation %\end{IEEEkeywords} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{SUPPLEMENTARY ATTACHMENTS} %Please note that %this paper is accompanied by The source code of the proposed algorithm is publicly available at: \url{https://github.com/RozDavid/LOL} A video demonstration is available at: \url{https://youtu.be/ektGb5SQGRM} \section{INTRODUCTION} % problem definition In this paper, we deal with the challenges of globally localizing a robot in urban environments using exclusively Light Detection and Ranging (Lidar) measurements. 
Parallel a Lidar odometry and mapping algorithm is running to calculate the estimated consecutive poses from the streaming Lidar point clouds in real-time, while the global position updates are recognized against a previously known point cloud of a target map---accepted as ground-truth--- to cancel the built-up drift. % motivation Our motivation stems from the certain need for Lidar based sensors in autonomous navigation and driving as these types of measurements can provide relatively high precision data, with constant irrespective of the distance measured. Also compared to simple RGB or RGB-D images it is generally less sensitive to seasonal and adversarial weather changes. In many applications though, they complete each other, as well as with other sensors (e.g. Inertial Measurement Unit (IMU), wheel odometry, radar) for the best performance in machine perception. However for true redundancy, these sub-systems must work independently as well, to avoid dangerous situations from sensor malfunction. Using solely Lidar data has its difficulties also, as they operate on almost one scale lower frequency as cameras and even two or three lower as IMU sensors. Moreover, their scans are sparse in points, but usually still computationally expensive to process them in real-time. % Recent achievements in laser-based technology were significant in either the hardware or the algorithm development. Lidars have been reduced in size and weight but doubled in frequency or density. They can be now handhold and used from a battery \cite{handhold} or even attached to a Micro Aerial Vehicle (MAV) \cite{uav}, with the expected spread of flash Lidars, the performance will be even higher. % ezt inkabb kihagynam: In spite of all these, there are significantly higher number of research papers dealing with the problem of visual deep learning and image processing in autonomous driving, than Lidar based methods. % Utilizing the benefits of the Lidar state-of-the-art algorithms reached superb performance in both Odometry and Localization problems as well, but they cannot merge these sub-problems in one all-round algorithm. % SOA In case of pure Lidar odometry the LOAM \cite{loam} algorithm scores among the highest in term of translational and rotational errors on the Kitti Vision Benchmark \cite{kitti} with 2-axis Lidars such as Velodynes. % This method calculates the odometry of the moving Lidar by frame-to-frame comparison of the scans while using these scans to refine the low fidelity velocity estimate by fine matching and registration of the point clouds into a coordinate system aligned global map. This method estimates the 6 DoF (Degree of Freedom) displacement of a moving Lidar with very low drift on short trajectories by frame-to-frame comparison of the consecutive scans, and by fine matching and registration of the consecutive point clouds into a coordinate system aligned online map. % Limitation of SOA %As the odometry calculation covers only one part of our overall approach it was essential for us to find a method that can operate with low drift in various environments, without exceeding a reasonable computational complexity, the LOAM algorithm satisfied both our requirements. % However, in the original LOAM algorithm there is no place recognition method to detect loop-closure situations whenever the vehicle returns to a previously visited location in the environment. 
However, in case of long trajectories and since the drift is continuously accumulated a significant error could build up in the position estimation, c.f. Fig. \ref{fig:loam_maps} top row. % Beyond SOA In our problem formulation, to correct the accumulated error in the odometry we apply a place recognition method to detect geometrically similar locations between the online and the a priori offline 3D point cloud map. Our approach is opposite to Simultaneous Localization and Mapping (SLAM) systems, where the goal is to detect loop-closure situations whenever the vehicle returns to a previously visited location in the environment, within the online map. For the place recognition frontend we chose to integrate the SegMap \cite{segmap} method, that is a state of the art algorithm for the extraction and matching of 3D point cloud segments. % Contributions Our main contribution is the integration and adaptation of the LOAM and SegMap algorithms into a novel solution, creating thus a Lidar-only Odometry and Localization (LOL) method that eliminates the need of any other supplementary sensor, e.g., IMU, wheel encoder, and satellite-based Global Positioning System (GPS). Furthermore, we included some additional improvements in the acceptance of correct matches, by applying further geometrical constraints complementing the feature similarity ones. Namely, we apply a RANSAC based geometric verification, and once a good match is detected between the online measurements end the target map, we only search for similar 3D Lidar segments (with relaxed similarity constraints) in the neighbourhood of the current location defined by the location uncertainty. Also, we only use the shift between the target map and the online source segments centroids as a prior, and we refine the final transformation by applying a fine-grained ICP matching between the two point clouds. We tested the proposed algorithm on several Kitti datasets, c.f. Fig. \ref{fig:loam_maps} bottom row, and found a considerable improvement in term of precision without a significant computational cost increase. % There were bigger differences in approach in the problem of Lidar localization against a target map, that will be detailed in Section \ref{localization}. After testing and research in possibilities we chose the also state-of-the art SegMatch \cite{segmatch} and SegMap \cite{segmap} algorithms to compare and test against each other, both performing more than sufficiently. These methods have outstanding performance in point cloud segmentation and feature description. The only difference in between is the way they describe individual segments, while SegMatch operates with the geometry-based ensemble of shape histograms, while SegMap with data-driven learning-based descriptors with semantic information and reconstruction capability from low dimensional feature space. % The applied algorithms will be compared to other approaches in Section \ref{related} and detailed thoroughly in Section \ref{proposed_system}. We have to highlight here that our goal was not to create a novel approach in the sub-tasks of the Lidar only odometry and Localization problem (LOL) in 3D point cloud maps, but to find and merge the already created state-of-the-art algorithms in such way, that their strengths are completed, but their deficiencies are solved by each other. 
This algorithm differs from the original SLAM algorithms that the target map of the localization part is given, thus does not require the first, mapping loop of the SLAM algorithms, but can optimize its trajectory from the beginning against an accepted ground truth. To summarize, this paper advances the state-of-the-art with the following contributions: \begin{itemize} % \item After examination of possibilities the selection of two state-of-the-art algorithms and their modification in such a way to complement each others performance. \item We present a novel Lidar-only odometer and Localization system by integrating and complementing the advantages of two state of the algorithms. \item We propose a set of enhancements: (i) a RANSAC-based geometrical verification to reduce the number of false %increase the number of true-positive matches between the online point cloud and the offline map; and (ii) a fine-grained ICP alignment to refine the relocalization accuracy whenever a good match is detected. \item We publicly release the source code of the proposed system. % \item Implement an advanced two-stage geometric filtering verification with consecutive location- and Random Sample Consensus (RANSAC) based steps. % \item Integrate an ICP-based localization transformation matrix calculation into the SegMap algorithm, to acquire and refine the required relocalization updates. % \item Publishing the source code of our LOL problem solution with the prepared demonstration files for testing and for any use. \end{itemize}{} The remainder of the paper is organized as follows: related work is presented in Section \ref{related}, the details of the proposed system are described in Section \ref{proposed_system}, followed by experiments in Section \ref{experiments} and conclusions in Section \ref{conclusion}. \section{RELATED WORK} \label{related} \begin{figure*}[!h] \centerline{\includegraphics[width=0.72\textwidth]{figures/lol_flowchart_5.png}}\par \caption{Flowchart of the complete Lidar Odometry and Localization algorithm, where the blue color represents the parts originated from the LOAM, orange color from the SegMap algorithm and the green modules are our contribution. When only the border is green as in the Incremental Pose Graph Mapping Module we modified the original SegMap method to incorporate the relocalization in the global map.} \label{fig:loam-segmap_flowchart} \end{figure*} Lidar sensors are robust and precise ranging sensors for a wide range of weather conditions and distances, thus making it crucial for autonomous navigation problems. % Hereby and for the rest of this paper, our LOL problem needs to be separated into two, almost separate problems. The related scientific literature can be grouped along with the two sub-problems addressed in this paper. Namely, the odometry measurements are calculated with the frequency of the scans providing a continuous pose estimate, while the re-localization happens occasionally against a previously known target ground truth map, only when an adequate number of similarities are detected. As the ICP (Iterative Closest Point) algorithms become the dominant method for the alignment of three-dimensional models based on geometry \cite{essential_icp}, it is used as the initial step of most Lidar based odometry calculation problems. 
While typical ICPs assume that the points in the cloud are measured simultaneously and not sequentially (practically it is rather 4D than 3D) as most available rangefinders operate, it is difficult to avoid accumulated tracking error for continuous motion. For fast-moving objects such as autonomous vehicles an additional second step is incorporated into the algorithm to remove the distortion with know velocity measured from IMU sensors in \cite{imu_icp}, with multiple sensors using extended Kalman Filter (EKF) \cite{ekf_icp} or calculating the velocity directly from the Lidar scans without the aid of additional sensors such as in \cite{vicp}. Additionally, the precision of the estimate can be further improved by incorporating a lower frequency mapping and geometric point cluster matching method into the odometry \cite{zlot}, calculating loop closures in \cite{bosse} or representing the surface with Implicit Moving Least Squares (IMLS) \cite{imls_icp}. There is an even bigger difference in approaches to the problem of localization. On the one hand, one can explore curb detection based methods. Wang et al. \cite{curb} approached the solution by detecting the curbs at every single frame, densified with the projection of the previous detections into the current frame, based on the vehicle dynamics, applying the beam model to the contours and finally matching the extracted contour to the high-precision target map. Another approach with similar idea by Hata et al. \cite{curb_mc} is simply using these curbs complemented with a robust regression method named Least Trimmed Squares (LTS) as the input of the Monte Carlo Localization algorithm \cite{monte-carlo}. These methods share the same strengths in the independence of scale, as they both operate in the given area of the target map around the position estimate. Numerous research works are tackling the problem of localization based on the comparison of local keypoint features of the local- and target map. Bosse and Zlot \cite{bosse-zlot} are using 3D \textit{Gestalt} descriptors for the description of point cloud directly and vote their nearest neighbours in the feature space with a vote-matrix representation. Range-image conversion approaches are also present with features such as Speeded Up Robust Features (SURFs) in \cite{surf1, surf2}, but the closest to our segment matching approach are the ones including features such as Fast Point Feature Histogram (FPFH) \cite{fpfh}. A more general approach of the problem when global descriptors are used for the local point clouds and compared to the database of target map segments for instance in \cite{rohling} or at Magnusson et al. \cite{magnusson}. %, where a 1-D histogram of point heights are used with the restriction of ground objects or at Magnusson et al. \cite{magnusson}, where the cloud is split into overlapping grids and direction invariant shape properties are computed (spherical, linear and types of planar). These properties are stored in a matrix of surface histograms and compared to the target for place recognition. A higher-level representation of the point cloud segment of object-level was presented by Finman et al. \cite{object}, but only for RGB-D cameras and a small number of objects. Most global feature-based descriptors though operate on a preprocessed, segmented point clouds of the local and global frame, describing the objects individually, and requiring a good prior segmentation algorithm. 
On well-defined segments there are existing algorithms that do not rely on simplistic geometric features, such as the Symmetric Shape Distance in \cite{ssdistance}. Deep learning, the fastest emerging trend, is also applicable in point cloud environments such as segments and can be used for description or reconstruction as well; these learning-based methods open a completely new window to solving the above problems in a data-driven fashion. Lu et al. \cite{l3net} presented L3-Net, a 3D convolutional recurrent network operating directly on the point clouds, which was tested to exceed the results of geometry-based pipelines in various environments. Zeng et al. proposed 3DMatch \cite{3dmatch}, one of the first data-driven CNNs for localization in the 3D segment space. Autoencoder networks are presented in \cite{autoe1, autoe2} for localization and reconstruction as well. A different and interesting approach is DeepICP \cite{deepicp}, an end-to-end 3D point cloud registration framework replacing standard RANSAC-based ICP algorithms for matching the local and target clouds, resulting in better performance than previous state-of-the-art geometry-based methods.
% Here the keypoints are detected by the weighting layer of the cloud, the attached section region is represented by grid voxels, the matching points are generated by the generation layer, and finally a closed-loop Singular Value Decomposition (SVD) is performed in the keypoint pairs.

The related works introduced above are listed without being exhaustive.
%, yet showed that the field abounds in superb methods.
These works follow very different approaches and reach high performance, but each focuses only on a part of our problem. In most real-world autonomous driving applications, target maps are or will be available to localize against, and they should be utilized if one can find true matches with high confidence. Though the necessity is given, the field lacks comprehensive Lidar-based odometry and localization methods for real-world applications.

\section{PROPOSED SYSTEM} \label{proposed_system}

In this section, we present the building blocks of the proposed method for our Lidar-only Odometry and Localization (LOL) solution. First, we introduce the general structure and the data flow between the different components and their connections, to help the reader understand the logic behind our approach.
% and will detail the used algorithms and our contributions later in detail.
% Next, we briefly present the main components: (i) the Lidar Odometry and Mapping (LOAM) algorithm \cite{loam}, that is responsible for real-time state estimate of the vehicle with robust and high-frequency performance; (ii) the Segment Matching (SegMap) \cite{segmap} localization methods will be expounded that is responsible for the recognition of the same segments between the local and target maps of the scene.
Next, we briefly present the Lidar Odometry and Mapping (LOAM) \cite{loam} and the Segment Matching (SegMap) \cite{segmap} algorithms. Finally, we highlight the changes we have made to integrate them, and we present further enhancements to calculate the position updates from the matching segments.

\subsection{System architecture} \label{system_architecture}

We visualize the data flow and the logical diagram of our architecture in Fig. \ref{fig:loam-segmap_flowchart}.
Here the modules shown in blue originate from the LOAM algorithm, the orange ones from the SegMap algorithm, and the modules shown in green are our contributions and additions that integrate these into the proposed solution.

As Fig. \ref{fig:loam-segmap_flowchart} indicates, two inputs are required for our LOL algorithm to operate, with an additional assumption on the start position before the first localization. Firstly, we rely on the target map, in the form of a 3D point cloud, in which we would like to localize our vehicle. To start the algorithm we take the point cloud map, segment it and describe the created segments as detailed later in Subsection \ref{localization}. This way we create a database of segments in which we can search by the global position of the segment centroids and access their feature vector values, to compare against our local map segments when calculating the correspondence candidates and later the matches. Secondly, once the target map is set, we can start streaming the Lidar scans for the rest of the sequence.

The scans are forwarded to the Point Cloud Registration module, the first part of the LOAM algorithm, where they are filtered, registered and transmitted to the following modules. Here the odometry is calculated by maintaining the pose transform of the vehicle in the global reference frame through a prior odometry calculation, followed by a refinement of the pose in the mapping module, all detailed in Subsection \ref{loam} and shown in Fig. \ref{fig:loam-segmap_flowchart}.

The very same scans are used for building the local map of the vehicle, but only when an odometry estimate is available for the local frame in global space. The local map is densified by utilizing the last \textit{k}---a predefined parameter---scans with their associated odometry estimates, then segmented and described the same way as the target map, only in an online manner. In the last orange module of Fig. \ref{fig:loam-segmap_flowchart} the segmented local map is compared to the local area of the target map and matches are found between the most similar segments. The similarity is defined as the absolute distance of \textit{n}-dimensional feature vectors, where \textit{n} depends on the descriptor module of SegMatch \cite{segmatch} or SegMap. The correspondence candidates are then filtered by our two-stage RANSAC-based method, so false positive matches are removed before calculating a general transformation matrix for the global relocalization. Finally, to obtain a more accurate localization after the prior alignment of the segment centroids, we apply an ICP refinement between the local and target segment point clouds, as detailed in Subsection \ref{our_contribution}. This final, combined update transformation matrix is inserted into the incremental pose graph mapping module of the SegMap algorithm, simultaneously fed back for the creation of the local map, and published as the true global trajectory of the vehicle.

\subsection{Lidar-only odometry}\label{loam}

This module addresses the estimation of the ego-motion of a pre-calibrated 3D Lidar, based on the perceived streaming point clouds. The additional mapping function is not used to update the target map, but it is fully taken advantage of for refining the odometry estimate, as detailed later in this subsection.
We note that this subsection gives only a brief introduction to the LOAM algorithm; the complete description can be found in the original paper \cite{loam}.

\begin{figure*}[ht!]
\begin{center}
\begin{tabular}{c@{\hspace{10mm}}c@{\hspace{10mm}}c@{\hspace{10mm}}}
\includegraphics*[width=0.23\linewidth]{figures/drive18_loam_map.png} &
\includegraphics*[width=0.23\linewidth]{figures/drive_27_loam_map_2.png} &
\includegraphics*[width=0.23\linewidth]{figures/drive28_loam_map.png} \\
\includegraphics*[width=0.23\linewidth]{figures/drive18_traj.png} &
\includegraphics*[width=0.23\linewidth]{figures/drive27_traj.png} &
\includegraphics*[width=0.23\linewidth]{figures/drive28_traj.png} \\
(a) & (b) & (c)\\
\end{tabular}
\caption{Illustration of results and comparison of the estimated trajectories on Kitti \cite{kitti} datasets of various lengths: (a) Drive 18, 04:36 minutes, $\approx 2200 m$ (b) Drive 27, 07:35 minutes, $\approx 3660 m$ (c) Drive 28, 08:38 minutes, $\approx 4125 m$. \textbf{Top row:} result of the LOAM algorithm, where the ground truth map is visualized with black points, while the self-built map points are colored on a green--red scale according to their vertical height. \textbf{Bottom row:} comparison between the trajectory obtained by applying the proposed LOL algorithm (green line) and the original LOAM trajectory (red line) with respect to the ground truth map. }
\label{fig:loam_maps}
\end{center}
\end{figure*}

As can be seen in Fig. \ref{fig:loam-segmap_flowchart}, the first challenge is to calculate a suitable set of feature points from the received laser scans. Let $\mathcal{P}_k$ be a received point cloud scan for the sweep $k \in Z^+$. For the Lidar ego-coordinate system $\{L\}$ the algorithm was modified to use the more typical right-handed coordinate convention of $x$ forward, $y$ left and $z$ up. The coordinates of a point $i$, where $i \in \mathcal{P}_k$, in $\{L_k\}$ are denoted as $\boldsymbol{X}_{k,i}^L$.
%As for Lidar, the density of point measurements are not even on the vertical and horizontal axes, we would like to also distinguish the consecutive points on a single horizontal line of the scan, generated by a single rotating laser scanner and denoted by $S$.
We would like to select feature points that lie on sharp edges and planar surfaces by calculating the smoothness $c$ of the local surface: points with the highest $c$ values are accepted as edge points, while points with the lowest $c$ values are accepted as planar points. The selected feature points are filtered by region to avoid overcrowded areas and unreliable surfaces that are nearly parallel to the laser beam, and only points within the accepted threshold of the smoothness value $c$ are kept.
% To define the smoothness of a local surface, the points in scans are sorted based on $c$ values and calculated by,
%\begin{equation}\label{eq:loam_smoothness}
% c=\frac{1}{|S| \cdot \| \boldsymbol{X}_{(k,i)}^L\|} \|\sum_{j\in S, j \neq i}\big ( \boldsymbol{X}_{(k,i)}^L - \boldsymbol{X}_{(k,j)}^L \big )\|
%\end{equation}
%The highest $c$ values assigned to the points are accepted as edge points, the lowest $c$ values to be planar points.
To avoid crowded feature points we separate the space into four identical regions and allow a maximum of two points per region for both the edge and the planar points. While selecting feature points, some additional filtering is applied to discard unreliable candidates, e.g.\ points whose surrounding points have already been selected, or points on local surfaces that are roughly parallel to the laser beams.
%\begin{itemize}
% \item We want to avoid selecting points whose surrounding points are already selected,
% \item also avoiding points on local planar surfaces to the laser beams.
% \item The assigned $c$ value is higher\textbackslash smaller than the edge\textbackslash planar points, but cannot exceed the maximum.
%\end{itemize}
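The exact smoothness definition is given in the original LOAM paper; as a purely illustrative aid, the following Python sketch shows how such a smoothness value can be computed for the points of a single scan line and how the sharpest and smoothest points can be selected (the window size, the helper names and the selection counts are our own choices, not the published parameters):

\begin{verbatim}
import numpy as np

def smoothness(scan_line, window=5):
    # scan_line: (N, 3) array of consecutive points of one laser ring.
    # Local smoothness c of every point: norm of the summed differences
    # to the neighbouring points, normalized by the neighbourhood size
    # and the range of the point itself.
    n = len(scan_line)
    c = np.full(n, np.nan)
    for i in range(window, n - window):
        neigh = np.r_[scan_line[i - window:i],
                      scan_line[i + 1:i + 1 + window]]
        diff = np.sum(scan_line[i] - neigh, axis=0)
        c[i] = np.linalg.norm(diff) / (2 * window *
                                       np.linalg.norm(scan_line[i]))
    return c

def select_features(scan_line, n_edge=2, n_planar=2):
    # Highest c -> sharp edge points, lowest c -> planar points.
    c = smoothness(scan_line)
    order = np.argsort(c)
    order = order[~np.isnan(c[order])]
    return order[-n_edge:], order[:n_planar]
\end{verbatim}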
When the accepted feature points of the reprojected $\mathcal{P}_k$ and $\mathcal{P}_{k+1}$ are available, they are stored in a 3D KD-tree with their calculated $c$ values, separated into planar and edge points. The closest neighbours of the feature points are then selected from the consecutive scans to form edge lines of two or more edge points, or planar patches of at least three planar feature points. Then, for verification of the correspondence, the smoothness of the local surface is checked based on the same smoothness measure.

For the pose transform calculation, $\mathbf{T}^L_{k+1}$ is the 6DOF rigid motion between the timestamps $[t_{k+1}, t]$, where $\mathbf{T}^L_{k+1}=[t_x, t_y, t_z, \theta_x, \theta_y, \theta_z]^T$ is expressed in \{L\} and is obtained by minimizing
\begin{equation}
    \bm{\mathscr{f}}(\mathbf{T}^L_{k+1})=\mathbf{d}
\end{equation}
where each row of $\bm{\mathscr{f}}$ corresponds to a feature point, weighted with a smaller weight if its correspondence distance $\mathbf{d}$ is larger, and the nonlinear iterations drive $\mathbf{d}$ towards zero. When the optimization has converged, or has been terminated after the maximum number of iterations, $\mathbf{T}^L_{k+1}$ contains the Lidar ego-motion between $[t_{k+1}, t_{k+2}]$. In the final step of the LOAM algorithm, the mapping registers the scans $\mathcal{P}_k$ in \{W\} for all previous scans, thus defining the point cloud $\mathcal{Q}_k$ on the map.
% Here the feature points and correspondence matching is made similarly as previously, only instead of $\mathcal{P}_k$ and $\mathcal{P}_{k+1}$, $\mathcal{Q}_k$ and $\mathcal{Q}_{k+1}$.
We use the same smoothness measure on the feature point cloud $\mathcal{Q}_k$ for refining the odometry estimate with the registered map, but at a 10 times lower frequency and with a 10 times higher number of points than previously used for the odometry cloud $\mathcal{P}_k$. Finally, to evenly distribute the points in the map, the clouds are voxelized into a 5 cm grid, and the odometry poses are updated at the mapping frequency and broadcast to the localization part of our algorithm.

\subsection{3D place recognition} \label{localization}

As seen in Fig. \ref{fig:loam_maps}, LOAM odometry, and consequently its mapping performance, is outstanding along short trajectories. However, in the case of longer sequences a significant drift is accumulated, which needs to be cancelled for reliable vehicle position estimates. For this purpose we chose the SegMap algorithm, with additional descriptors from the SegMatch algorithm \cite{segmatch}, to localize against a known target map. These algorithms share a common approach to the localization problem, using the local and target maps the same way, and are thus introduced together. The only difference between the two methods is the feature values and dimensions of the segment descriptors, as detailed later in this subsection.
%The SegMap was ideal for our needs as it introduces a robust correspondence recognition, thus a localization method in even large scale unstructured environments.
The 3D segment approach provides a good compromise between local and global description, and it is robust to environmental changes, even filtering out moving objects such as vehicles.
An essential strength of the algorithm is that it can perform localization recognition at 1 Hz on an average computer that simultaneously has to calculate the odometry values with the previously described LOAM algorithm.

Fig. \ref{fig:loam-segmap_flowchart} shows the complete pipeline, including the modular design of the SegMap algorithm. It first extracts segments from the 3D Lidar point cloud after a voxelization and filtering step, then describes them and matches these segments to the offline segmented and described target map. Due to the modular design of this framework, it was straightforward to implement the different methods of feature extraction and description into the system, such as incorporating the novel 3D point cloud Convolutional Neural Network (CNN) descriptor of SegMap.

%The first step of the matching algorithm is the segmentation block.
For the offline segmentation of the target map, and also for the online segmentation of the real-time local map of the vehicle, the point clouds are given in the global \textit{map} reference frame. First, the ground plane is stripped according to the vertical \textit{z}-axis distance of the individual points, then a voxelization method is applied to remove noisy point measurements and homogenize the density. On the filtered cloud the ``Cluster-All'' method of \cite{segmentation} is used, and a Euclidean clustering method grows the segments. These steps perform robustly for a wide range of Lidar resolutions and frequencies, satisfying the requirement of real-time operation.

%On given segments the next challenge as in Fig. \ref{fig:loam-segmap_flowchart} the feature and descriptor extraction.
For the description we tested and used two different methods, the first based on the original SegMatch algorithm and the second proposed in the SegMap paper, as previously mentioned.
%When a segment is detected and segmented from the received point-cloud, we use the included points to store them as the first view of the segment.
For consecutive scans of the same segment, additional views are created with an increasing line of sight of the same object and a higher number of associated points. In a created segment all views are described, but only the latest one is used, for the utmost confidence. For a given segment $\mathcal{C}_k$ a feature vector of optional length is created, containing the different types of descriptors $\mathscr{f}_i=[\mathscr{f}^1, \mathscr{f}^2, \dots, \mathscr{f}^m]$:

\begin{itemize}
    \item $\mathscr{f}_1$: the \textit{eigenvalue-based} method calculates the eigenvalues of the segment's points and combines them into a 1$\times$7 vector, containing the \textit{linearity}, \textit{planarity}, \textit{scattering}, \textit{omnivariance}, \textit{anisotropy}, \textit{eigenentropy} and \textit{change of curvature} measures, as proposed in \cite{eigen_features} (see the sketch below).
    \item $\mathscr{f}_2$: a CNN-based approach to achieve a data-driven descriptor. First the voxelized segment is resized to a fixed 32$\times$32$\times$16 input size, then forwarded to a simple convolutional network topology: three 3D convolutional and max-pooling layers are followed by two fully connected (FC) layers with dropout. The original scale of the input segment is passed as an additional parameter to the first FC layer to increase robustness. The final descriptor vector of size 1$\times$64 is obtained from the activations of the second FC layer. The weights of the network are trained and optimized as detailed in the original paper \cite{segmap}.
\end{itemize}
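To make the first descriptor more tangible, the following Python sketch computes the seven eigenvalue-based measures from the covariance of a segment's points. The formulas follow the common formulation of \cite{eigen_features}; the exact normalization used inside SegMatch may differ, so this should be read as an illustrative variant rather than the released implementation.

\begin{verbatim}
import numpy as np

def eigenvalue_features(points):
    # points: (N, 3) array containing the segment's points.
    cov = np.cov(points.T)
    lam = np.maximum(np.sort(np.linalg.eigvalsh(cov))[::-1], 1e-12)
    l1, l2, l3 = lam                      # l1 >= l2 >= l3 > 0
    e = lam / lam.sum()
    return np.array([
        (l1 - l2) / l1,                   # linearity
        (l2 - l3) / l1,                   # planarity
        l3 / l1,                          # scattering
        (l1 * l2 * l3) ** (1.0 / 3.0),    # omnivariance
        (l1 - l3) / l1,                   # anisotropy
        -np.sum(e * np.log(e)),           # eigenentropy
        l3 / lam.sum(),                   # change of curvature
    ])
\end{verbatim}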
%The last challenge in the original SegMap method is the segment matching performance between the described target- and local segments.
For the matching of the described target and local segments a learning-based approach is utilized: a classifier is responsible for deciding whether two segments represent the same object. First, a k-d tree search is performed in the feature space (whose dimension depends on the descriptor used), then the resulting candidates are fed to a pretrained random forest classifier. The random forest determines whether clusters $\mathcal{C}_i$ and $\mathcal{C}_j$ represent the same object by computing the absolute difference in the feature space: $\Delta f^i=|f^i_j-f^i_k|$. Finally, based on these feature vector similarities, the classifier builds the final list of match candidates.

\subsection{Localization update and further enhancements}\label{our_contribution}

The previously introduced modules were carefully chosen to work together and complement each other; here we present the details of incorporating the segment matching recognitions and the real-time odometry estimates into our solution.
% Though the original SegMap algorithm contained a solution, where instead of the required IMU or GPS based odometry pose information, only the Lidar scans were used to calculate a basic odometry only with scan-to-scan ICP calculations, but replacing this part with the LOAM algorithm resulted in a much better performance.
This subsection also presents our contribution regarding the refined relocalization performance and the filtering of false-positive matches with an additional, advanced RANSAC-based geometric verification.

First, we extended the geometric verification step of the original SegMap algorithm with an additional filtering for location. We search the database of segments only within an adjustable threshold of centroid distance from our odometry position, this way reducing the number of putative correspondence candidates. After the matches are found in this reduced pool of segments, we check the consistency of the pairs with the following criterion: the absolute difference of the translation vectors between the different match pair centroids cannot exceed a given parameter, this way filtering the clear outliers with minimal computing effort.
%Next, we create a point cloud of the remaining matching segment centroids and through a RANSAC iteration with terminal statements for max iteration and convergence score we align the source and target centroid point clouds, with filtering the outlier pairs again.
Next, we create a point cloud from the remaining matching segment centroids and, through a RANSAC iteration, we try to align the target and source clouds. If the algorithm converges, we have successfully filtered the remaining outliers and have high confidence in the true positive matches. On these matches we can calculate the required 6DOF update transformation between the estimated and the true global pose of the vehicle. The performance of our filtering method can be seen in Table \ref{tab:ransac_table}.
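A minimal Python sketch of this two-stage verification is shown below. It assumes the correspondence candidates are already reduced to pairs of segment centroids; the concrete thresholds, the median-based consistency test and the closed-form Kabsch solver are our own simplifications for illustration, not the released implementation.

\begin{verbatim}
import numpy as np

def kabsch(P, Q):
    # Least-squares rigid transform (R, t) mapping point set P onto Q.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def filter_matches(src, tgt, pose_xyz, gate=50.0, consist=5.0,
                   iters=200, tol=1.0, rng=np.random.default_rng(0)):
    # (i) location gating: keep candidates near the odometry position
    keep = np.linalg.norm(tgt - pose_xyz, axis=1) < gate
    src, tgt = src[keep], tgt[keep]
    # (ii) consistency: centroid offsets must agree with the median offset
    off = tgt - src
    keep = np.linalg.norm(off - np.median(off, axis=0), axis=1) < consist
    src, tgt = src[keep], tgt[keep]
    # (iii) RANSAC over rigid alignments of the centroid clouds
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        if len(src) < 3:
            break
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = kabsch(src[idx], tgt[idx])
        inl = np.linalg.norm(src @ R.T + t - tgt, axis=1) < tol
        if inl.sum() > best.sum():
            best = inl
    if best.sum() < 3:
        return None                      # no confident relocalization
    return kabsch(src[best], tgt[best]) + (best,)
\end{verbatim}

The transform fitted on the surviving inliers corresponds to the prior update, which is subsequently refined by the ICP step described below.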
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{c@{\hspace{1mm}}c@{\hspace{1mm}}}
\includegraphics*[width=0.44\linewidth]{figures/match_recognition.png} &
\includegraphics*[width=0.44\linewidth]{figures/prior_transform.png} \\
(a) & (b) \\
\includegraphics*[width=0.44\linewidth]{figures/after_align.png} &
\includegraphics*[width=0.44\linewidth]{figures/icp_refine.png} \\
(c) & (d)\\
\end{tabular}
\caption{The required steps for aligning the local and target point cloud segments to cancel the drift in the odometry estimate: (a) we calculate the initial transformation by using the segment centroids; (b) we apply the RANSAC algorithm to filter the outlier matches; (c) we refine the update estimate by ICP matching between the corresponding segment point clouds. The result of the final alignment is shown in (d).}
\label{fig:pose_refine}
\end{center}
\end{figure}

%On the remaining segments now we have high confidence in being a true positive, therefore we can calculate based on these matches the required 6DOF update transformation between the estimated and the true global pose of the vehicle.
Aligning the segment centroids directly is straightforward, but it might not be the best solution in our case. The reason derives from the motion of the Lidar scanner: as the Lidar moves forward and detects a new segment for the first time, the shape and line of sight of the object will be incomplete. Though the eigenvalue-based and the CNN descriptors are robust enough to recognize an incomplete and a complete scan of the same object \cite{segmap}, the centroid of the incomplete segment will be shifted towards the Lidar. This would cause only a small translation error in the update pose, but it could cause a large angular error in the fitting if we optimize for the minimum squared error. Instead, we use a more precise method based on the entire point cloud of the segments.

Let $W_e^W$ be the odometry pose estimate of the vehicle in the global reference frame \{W\}, a $4\times4$ homogeneous transformation matrix, and $W_t^W$ the true pose. Furthermore, $R_t^W$ denotes the $3\times3$ rotation matrix and $V_t^W$ the $3\times1$ translation vector of $W_t^W$. Here $W_t^W$ can be obtained after calculating the update transformation $W_u^W$ between the estimated and the localized pose by
\begin{equation}
    W_t^W=W_u^W W_e^W.
\end{equation}
Our only task is to calculate the update transformation following the steps below:
\begin{enumerate}
    \item We take the densified local point cloud $P_l^R$ of the robot, as detailed in Subsection \ref{system_architecture}. \{R\} is a right-handed coordinate system within \{W\}, with the Lidar at its origin, facing in the $x$-axis direction. We first transform this cloud to the world coordinate system, to compare it with the target cloud. In Fig. \ref{fig:pose_refine}(a) the local cloud is visualized with different colors for the different segments and the target map in white; the vectors connecting the matching segment pairs are drawn as green lines. In Fig. \ref{fig:pose_refine}(b) one can see the source cloud in orange and the target cloud in green, now in the same coordinate system \{W\}.
    \item For an accepted number of $n$ matches we calculate a prior mean transformation by transforming $P_l^W$ with the transformation matrix $W_p^W$, where the rotation part $R_p^W$ is an identity matrix, i.e.\ only a translation is applied, $V_{sp,i}^W$ is the vector connecting the centroids of the $i$\textsuperscript{th} segment pair, and
\begin{equation}
    V_p^W=\frac{1}{n}\sum_{i=1}^n V_{sp,i}^W.
\end{equation}
The result of this prior transformation on a real segment can be seen in Fig. \ref{fig:pose_refine}(c), with the target cloud visualized in green and a true positive matching segment in blue.
    \item The final step of the cloud alignment is an ICP step on the point clouds. Here we only keep the points of the target point cloud $P_t^W$ that belong to a true segment (accepted by the RANSAC filter), and align this set of points with an ICP step to the previously translated $P_l^W$. The output of this operation is a small refinement matrix $W_{ICP}^W$. The result of this fine alignment can be seen in Fig. \ref{fig:pose_refine}(d).
\end{enumerate}

Finally, $W_u^W$, and this way the truly localized pose of the robot, can be calculated by simply following the multiplication order of our calculated transformation matrices, all defined in the world coordinate frame, resulting in
\begin{equation}
    W_t=W_u W_e = W_p W_{ICP}^{-1} W_e.
\end{equation}
This method proved to be straightforward and, with the terminal conditions of the ICP step, computationally effective, yet it resulted in excellent performance even when running on an average-strength PC, without loss of tracking on the longest sequences.

\section{EXPERIMENTS} \label{experiments}

All experiments were performed on a system equipped with an Intel i7-6700K processor, 32 GB of RAM and an Nvidia GeForce GTX 1080 GPU, easily achieving real-time operation. The timings, given as mean values in milliseconds with standard deviations in parentheses, were 372.7 (7.2) for segmentation, 0.40 (0.26) for description, 26.27 (14.27) for match recognition in the reduced candidate pool, 0.09 (0.7) for the additional RANSAC geometric filtering, and finally 88.0 (37.2) for the ICP alignment.

The tested sequences were based on the Kitti Vision Benchmark dataset; we used in particular the Drive 18, 27 and 28 sequences, as they were the longest ones recorded in residential and city areas. The ground truth maps were created with the \textit{laser\_slam} tool that was made available with the SegMap algorithm. The bottom diagrams of Fig. \ref{fig:abs_error} represent the quantitative results of our method compared to the original LOAM algorithm and the given ground truth trajectory. In Fig. \ref{fig:abs_error} bottom left one can see the absolute error over the scans, and in Fig. \ref{fig:abs_error} bottom right the distribution of its occurrence, for the same sequence and color codes. Moreover, Fig. \ref{fig:abs_error} top shows the whole trajectory, where the black ground truth trajectory is visualized together with the red odometry path and the green path of our updated trajectory. As one can note from these figures, and previously from Fig. \ref{fig:loam_maps}, the resulting trajectory is significantly better than the original odometry, without losing real-time performance, and the system is always able to relocalize after drift has built up due to noisy Lidar measurements and moving objects.
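The absolute error reported in Fig. \ref{fig:abs_error} can be reproduced with a few lines of Python: each estimated pose is matched to the ground truth pose with the closest timestamp and the 3D Euclidean distance is taken (the array names below are ours, used only for illustration).

\begin{verbatim}
import numpy as np

def absolute_errors(est_stamps, est_xyz, gt_stamps, gt_xyz):
    # est_xyz, gt_xyz: (N, 3) and (M, 3) arrays of global positions,
    # est_stamps, gt_stamps: the corresponding scan timestamps.
    errors = []
    for stamp, p in zip(est_stamps, est_xyz):
        j = np.argmin(np.abs(gt_stamps - stamp))  # closest ground truth pose
        errors.append(np.linalg.norm(p - gt_xyz[j]))
    return np.asarray(errors)
\end{verbatim}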
\begin{figure}[!t]
\centerline{\includegraphics[width=0.5\textwidth]{figures/full_evaluate_18.png}}\par
\caption{The absolute error of the relocalized poses of our updated path and of the path from the original LOAM algorithm on the Kitti Drive 18 dataset. The absolute error is calculated by comparing the global pose of a trajectory node at the timestamp of a scan to the corresponding ground truth trajectory pose with the same timestamp, as the 3D Euclidean distance of the poses.}
\label{fig:abs_error}
\end{figure}

\begin{table}[!t]
\centering
\caption{%Number of putative SegMap correspondences, accepted relocalizations with RANSAC filtering, and remaining false matches for different minimum cluster sizes
Number of detected true negative / true positive / false positive SegMap matches for different minimum cluster sizes
% Translated author note: I would write this in the text -> From the total number of putative SegMap correspondences the proposed LOL algorithm detected the following numbers of true negative/true positive/false positive SegMap matches for different minimum cluster sizes. !!! The values need to be restored to the original ones
}
\begin{tabular}{|l|c|c|c|c|}
\hline
Sequence\textbackslash Cluster size [-] & 2 & 3 & 4 & 5\\
\hline
Drive 18 & 628/49/2 & 335/36/0 & 33/25/0  & 22/22/0\\
Drive 27 & 797/41/3 & 343/32/0 & 40/26/0  & 14/14/0\\
Drive 28 & 771/85/3 & 491/70/0 & 212/54/1 & 30/30/0\\
\hline
\end{tabular}
\label{tab:ransac_table}
\end{table}

In Table \ref{tab:ransac_table} we summarize the performance of our filtering method for different minimum cluster sizes of putative SegMap correspondence candidates on the three examined sequences. In every measurement setting the numbers represent, respectively, the filtered false positive matches, the accepted true positives, and the accepted false positives that resulted in a false global relocalization. The truth of the matches was verified according to the correlation of their resulting transformation with the ground truth trajectory. Originally, SegMap in localization mode did not include a similar filtering algorithm; it only dealt with the problem of correspondence recognition. Based on Table \ref{tab:ransac_table} one can conclude that the proposed solution improves the relocalization significantly over a wide range of cluster sizes by filtering out false positive correspondences and thus incorrect localization updates.

\section{CONCLUSION} \label{conclusion}

To conclude, the presented method solves the problem of Lidar-only odometry and localization in predefined 3D point cloud maps. Our algorithm integrates two state-of-the-art algorithms in such a way that they complement each other's deficiencies and highlight their advantages. Furthermore, we completed our solution with an advanced RANSAC filtering step for additional certainty and an ICP matching of the local and target maps for increased precision.
%Our general LOL algorithm outperforms other methods in every subtask with keeping the computational complexity at an affordable level to perform in real-time operation.

\addtolength{\textheight}{-8cm}   % This command serves to balance the column lengths
                                  % on the last page of the document manually. It shortens
                                  % the textheight of the last page by a suitable amount.
% This command does not take effect until the next page % so it should come on the page before the last. Make % sure that you do not shorten the textheight too much. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \bibliographystyle{IEEEtran} \bibliography{lol} \end{document}
{ "alphanum_fraction": 0.7870677678, "avg_line_length": 140.0648648649, "ext": "tex", "hexsha": "238aac944f5dcb9a0e25a52678d5f140c54155bd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "590522a554935c554f77ee16c91bafd3b83f56a5", "max_forks_repo_licenses": [ "Artistic-1.0-Perl", "ClArtistic" ], "max_forks_repo_name": "DavidRozenberszki/LOL-Lidar-Odometry-and-Mapping-in-3D-point-clouds", "max_forks_repo_path": "root.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "590522a554935c554f77ee16c91bafd3b83f56a5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Artistic-1.0-Perl", "ClArtistic" ], "max_issues_repo_name": "DavidRozenberszki/LOL-Lidar-Odometry-and-Mapping-in-3D-point-clouds", "max_issues_repo_path": "root.tex", "max_line_length": 1858, "max_stars_count": null, "max_stars_repo_head_hexsha": "590522a554935c554f77ee16c91bafd3b83f56a5", "max_stars_repo_licenses": [ "Artistic-1.0-Perl", "ClArtistic" ], "max_stars_repo_name": "DavidRozenberszki/LOL-Lidar-Odometry-and-Mapping-in-3D-point-clouds", "max_stars_repo_path": "root.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 11260, "size": 51824 }
% language=uk

\environment luatex-style

\startcomponent luatex-nodes

\startchapter[reference=nodes,title={Nodes}]

\startsection[title={\LUA\ node representation}][library=node]

\topicindex {nodes}

\libindex {fields}
\libindex {subtypes}
\libindex {values}

\TEX's nodes are represented in \LUA\ as userdata objects with a variable set of fields. In the following syntax tables, such a userdata object is represented as \syntax {<node>}.

The current return value of \type {node.types()} is:

\startluacode
for id, name in table.sortedhash(node.types()) do
    context.type(name)
    context(" (%s), ",id)
end
context.removeunwantedspaces()
context.removepunctuation()
\stopluacode
. % period

The \prm {lastnodetype} primitive is \ETEX\ compliant. The valid range is still $[-1,15]$ and glyph nodes (formerly known as char nodes) have number~0 while ligature nodes are mapped to~7. That way macro packages can use the same symbolic names as in traditional \ETEX. Keep in mind that these \ETEX\ node numbers are different from the real internal ones and that there are more \ETEX\ node types than~15.

You can ask for a list of fields with \type {node.fields} and for valid subtypes with \type {node.subtypes}. The \type {node.values} function reports some used values. Valid arguments are \nod {dir}, \type {direction}, \nod {glue}, \whs {pdf_literal}, \whs {pdf_action}, \whs {pdf_window} and \whs {color_stack}. Keep in mind that the setters normally expect a number, but this helper gives you a list of what numbers matter. For practical reasons the \type {pagestate} values are also reported with this helper.

\stopsection

\startsection[title={Main text nodes}]

\topicindex {nodes+text}

These are the nodes that comprise actual typesetting commands. A few fields are present in all nodes regardless of their type, these are:

\starttabulate[|l|l|p|]
\DB field          \BC type   \BC explanation \NC \NR
\TB
\NC \type{next}    \NC node   \NC the next node in a list, or nil \NC \NR
\NC \type{id}      \NC number \NC the node's type (\type {id}) number \NC \NR
\NC \type{subtype} \NC number \NC the node \type {subtype} identifier \NC \NR
\LL
\stoptabulate

The \type {subtype} is sometimes just a dummy entry because not all nodes actually use the \type {subtype}, but this way you can be sure that all nodes accept it as a valid field name, and that is often handy in node list traversal. In the following tables \type {next} and \type {id} are not explicitly mentioned.

Besides these three fields, almost all nodes also have an \type {attr} field, and there is also a field called \type {prev}. That last field is always present, but only initialized on explicit request: when the function \type {node.slide()} is called, it will set up the \type {prev} fields to be a backwards pointer in the argument node list. By now most of \TEX's node processing makes sure that the \type {prev} nodes are valid but there can be exceptions, especially when the internal magic uses a leading \nod {temp} node to temporarily store a state.
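For instance, the following small example walks over the content of box~0 (assuming that box has been filled) and prints the type and subtype of each node; calling \type {node.slide} first makes sure that the \type {prev} pointers are valid:

\starttyping
local head = tex.box[0].head       -- assumes box 0 holds typeset material
local tail = node.slide(head)      -- returns the tail, sets the prev fields
for n in node.traverse(head) do
    print(n.id, node.type(n.id), n.subtype)
end
\stoptyping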
\subsection{\nod {hlist} nodes}

\starttabulate[|l|l|p|]
\DB field \BC type \BC explanation \NC \NR
\TB
\NC \type{subtype}    \NC number \NC \showsubtypes{list} \NC \NR
\NC \type{attr}       \NC node   \NC list of attributes \NC \NR
\NC \type{width}      \NC number \NC the width of the box \NC \NR
\NC \type{height}     \NC number \NC the height of the box \NC \NR
\NC \type{depth}      \NC number \NC the depth of the box \NC \NR
\NC \type{shift}      \NC number \NC a displacement perpendicular to the character progression direction \NC \NR
\NC \type{glue_order} \NC number \NC a number in the range $[0,4]$, indicating the glue order \NC \NR
\NC \type{glue_set}   \NC number \NC the calculated glue ratio \NC \NR
\NC \type{glue_sign}  \NC number \NC 0 = \type {normal}, 1 = \type {stretching}, 2 = \type {shrinking} \NC \NR
\NC \type{head/list}  \NC node   \NC the first node of the body of this list \NC \NR
\NC \type{dir}        \NC string \NC the direction of this box, see~\in [dirnodes] \NC \NR
\LL
\stoptabulate

\topicindex {nodes+lists}
\topicindex {lists}

A warning: never assign a node list to the \type {head} field unless you are sure its internal link structure is correct, otherwise an error may result. Note: the field names \type {head} and \type {list} are both valid. Sometimes it makes more sense to refer to a list by \type {head}, sometimes \type {list} makes more sense.

\subsection{\nod {vlist} nodes}

\topicindex {nodes+lists}
\topicindex {lists}

This node is similar to \nod {hlist}, except that \quote {shift} is a displacement perpendicular to the line progression direction, and \quote {subtype} only has the values 0, 4, and~5.

\subsection{\nod {rule} nodes}

\topicindex {nodes+rules}
\topicindex {rules}

Contrary to traditional \TEX, \LUATEX\ has more \prm {rule} subtypes because we also use rules to store reusable objects and images. User nodes are invisible and can be intercepted by a callback.

\starttabulate[|l|l|p|]
\DB field \BC type \BC explanation \NC \NR
\TB
\NC \type{subtype}   \NC number \NC \showsubtypes {rule} \NC \NR
\NC \type{attr}      \NC node   \NC list of attributes \NC \NR
\NC \type{width}     \NC number \NC the width of the rule where the special value $-1073741824$ is used for \quote {running} glue dimensions \NC \NR
\NC \type{height}    \NC number \NC the height of the rule (can be negative) \NC \NR
\NC \type{depth}     \NC number \NC the depth of the rule (can be negative) \NC \NR
\NC \type{left}      \NC number \NC shift at the left end (also subtracted from width) \NC \NR
\NC \type{right}     \NC number \NC (subtracted from width) \NC \NR
\NC \type{dir}       \NC string \NC the direction of this rule, see~\in[dirnodes] \NC \NR
\NC \type{index}     \NC number \NC an optional index that can be referred to \NC \NR
\NC \type{transform} \NC number \NC a private variable (also used to specify outline width) \NC \NR
\LL
\stoptabulate

The \type {left} and \type {right} keys are somewhat special (and experimental). When rules are auto adapting to the surrounding box width you can enforce a shift to the right by setting \type {left}. The value is also subtracted from the width which can be a value set by the engine itself and is not entirely under user control. The \type {right} is also subtracted from the width. It all happens in the backend so these are not affecting the calculations in the frontend (actually the auto settings also happen in the backend). For a vertical rule \type {left} affects the height and \type {right} affects the depth.
There is no matching interface at the \TEX\ end (although we can have more keywords for rules it would complicate matters and introduce a speed penalty.) However, you can just construct a rule node with \LUA\ and write it to the \TEX\ input. The \type {outline} subtype is just a convenient variant and the \type {transform} field specifies the width of the outline. \subsection{\nod {ins} nodes} \topicindex {nodes+insertions} \topicindex {insertions} This node relates to the \prm {insert} primitive. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{subtype} \NC number \NC the insertion class \NC \NR \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{cost} \NC number \NC the penalty associated with this insert \NC \NR \NC \type{height} \NC number \NC height of the insert \NC \NR \NC \type{depth} \NC number \NC depth of the insert \NC \NR \NC \type{head/list} \NC node \NC the first node of the body of this insert \NC \NR \LL \stoptabulate There is a set of extra fields that concern the associated glue: \type {width}, \type {stretch}, \type {stretch_order}, \type {shrink} and \type {shrink_order}. These are all numbers. A warning: never assign a node list to the \type {head} field unless you are sure its internal link structure is correct, otherwise an error may result. You can use \type {list} instead (often in functions you want to use local variable with similar names and both names are equally sensible). \subsection{\nod {mark} nodes} \topicindex {nodes+marks} \topicindex {marks} This one relates to the \prm {mark} primitive. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{subtype} \NC number \NC unused \NC \NR \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{class} \NC number \NC the mark class \NC \NR \NC \type{mark} \NC table \NC a table representing a token list \NC \NR \LL \stoptabulate \subsection{\nod {adjust} nodes} \topicindex {nodes+adjust} \topicindex {adjust} This node comes from \prm {vadjust} primitive. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{subtype} \NC number \NC \showsubtypes{adjust} \NC \NR \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{head/list} \NC node \NC adjusted material \NC \NR \LL \stoptabulate A warning: never assign a node list to the \type {head} field unless you are sure its internal link structure is correct, otherwise an error may be the result. \subsection{\nod {disc} nodes} \topicindex {nodes+discretionaries} \topicindex {discretionaries} The \prm {discretionary} and \prm {-}, the \type {-} character but also the hyphenation mechanism produces these nodes. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{subtype} \NC number \NC \showsubtypes{disc} \NC \NR \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{pre} \NC node \NC pointer to the pre|-|break text \NC \NR \NC \type{post} \NC node \NC pointer to the post|-|break text \NC \NR \NC \type{replace} \NC node \NC pointer to the no|-|break text \NC \NR \NC \type{penalty} \NC number \NC the penalty associated with the break, normally \prm {hyphenpenalty} or \prm {exhyphenpenalty} \NC \NR \LL \stoptabulate The subtype numbers~4 and~5 belong to the \quote {of-f-ice} explanation given elsewhere. These disc nodes are kind of special as at some point they also keep information about breakpoints and nested ligatures. 
The \type {pre}, \type {post} and \type {replace} fields at the \LUA\ end are in fact indirectly accessed and have a \type {prev} pointer that is not \type {nil}. This means that when you mess around with the head of these (three) lists, you also need to reassign them because that will restore the proper \type {prev} pointer, so: \starttyping pre = d.pre -- change the list starting with pre d.pre = pre \stoptyping Otherwise you can end up with an invalid internal perception of reality and \LUATEX\ might even decide to crash on you. It also means that running forward over for instance \type {pre} is ok but backward you need to stop at \type {pre}. And you definitely must not mess with the node that \type {prev} points to, if only because it is not really a node but part of the disc data structure (so freeing it again might crash \LUATEX). \subsection{\nod {math} nodes} \topicindex {nodes+math} \topicindex {math+nodes} Math nodes represent the boundaries of a math formula, normally wrapped into \type {$} signs. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{subtype} \NC number \NC \showsubtypes{math} \NC \NR \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{surround} \NC number \NC width of the \prm {mathsurround} kern \NC \NR \LL \stoptabulate There is a set of extra fields that concern the associated glue: \type {width}, \type {stretch}, \type {stretch_order}, \type {shrink} and \type {shrink_order}. These are all numbers. \subsection{\nod {glue} nodes} \topicindex {nodes+glue} \topicindex {glue} Skips are about the only type of data objects in traditional \TEX\ that are not a simple value. They are inserted when \TEX\ sees a space in the text flow but also by \prm {hskip} and \prm {vskip}. The structure that represents the glue components of a skip is called a \nod {glue_spec}, and it has the following accessible fields: \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{width} \NC number \NC the horizontal or vertical displacement \NC \NR \NC \type{stretch} \NC number \NC extra (positive) displacement or stretch amount \NC \NR \NC \type{stretch_order} \NC number \NC factor applied to stretch amount \NC \NR \NC \type{shrink} \NC number \NC extra (negative) displacement or shrink amount\NC \NR \NC \type{shrink_order} \NC number \NC factor applied to shrink amount \NC \NR \LL \stoptabulate The effective width of some glue subtypes depends on the stretch or shrink needed to make the encapsulating box fit its dimensions. For instance, in a paragraph lines normally have glue representing spaces and these stretch or shrink to make the content fit in the available space. The \type {effective_glue} function that takes a glue node and a parent (hlist or vlist) returns the effective width of that glue item. When you pass \type {true} as third argument the value will be rounded. A \nod {glue_spec} node is a special kind of node that is used for storing a set of glue values in registers. Originally they were also used to store properties of glue nodes (using a system of reference counts) but we now keep these properties in the glue nodes themselves, which gives a cleaner interface to \LUA. The indirect spec approach was in fact an optimization in the original \TEX\ code. 
First of all it can save quite some memory because all these spaces that become glue now share the same specification (only the reference count is incremented), and zero testing is also a bit faster because only the pointer has to be checked (this is no longer true for engines that implement for instance protrusion where we really need to ensure that zero is zero when we test for bounds). Another side effect is that glue specifications are read|-|only, so in the end copies need to be made when they are used from \LUA\ (each assignment to a field can result in a new copy). So in the end the advantages of sharing are not that high (and nowadays memory is less an issue, also given that a glue node is only a few memory words larger than a spec). \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{subtype} \NC number \NC \showsubtypes{glue} \NC \NR \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{leader} \NC node \NC pointer to a box or rule for leaders \NC \NR \LL \stoptabulate In addition there are the \type {width}, \type {stretch} \type {stretch_order}, \type {shrink}, and \type {shrink_order} fields. Note that we use the key \type {width} in both horizontal and vertical glue. This suits the \TEX\ internals well so we decided to stick to that naming. A regular word space also results in a \type {spaceskip} subtype (this used to be a \type {userskip} with subtype zero). \subsection{\nod {kern} nodes} \topicindex {nodes+kerns} \topicindex {kerns} The \prm {kern} command creates such nodes but for instance the font and math machinery can also add them. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{subtype} \NC number \NC \showsubtypes{kern} \NC \NR \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{kern} \NC number \NC fixed horizontal or vertical advance \NC \NR \LL \stoptabulate \subsection{\nod {penalty} nodes} \topicindex {nodes+penalty} \topicindex {penalty} The \prm {penalty} command is one that generates these nodes. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{subtype} \NC number \NC \showsubtypes{penalty} \NC \NR \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{penalty} \NC number \NC the penalty value \NC \NR \LL \stoptabulate The subtypes are just informative and \TEX\ itself doesn't use them. When you run into an \type {linebreakpenalty} you need to keep in mind that it's a accumulation of \type {club}, \type{widow} and other relevant penalties. \subsection[glyphnodes]{\nod {glyph} nodes} \topicindex {nodes+glyph} \topicindex {glyphs} These are probably the mostly used nodes and although you can push them in the current list with for instance \prm {char} \TEX\ will normally do it for you when it considers some input to be text. 
\starttabulate[|l|l|p|]
\DB field \BC type \BC explanation \NC \NR
\TB
\NC \type{subtype}          \NC number  \NC bit field \NC \NR
\NC \type{attr}             \NC node    \NC list of attributes \NC \NR
\NC \type{char}             \NC number  \NC the character index in the font \NC \NR
\NC \type{font}             \NC number  \NC the font identifier \NC \NR
\NC \type{lang}             \NC number  \NC the language identifier \NC \NR
\NC \type{left}             \NC number  \NC the frozen \type {\lefthyphenmin} value \NC \NR
\NC \type{right}            \NC number  \NC the frozen \type {\righthyphenmin} value \NC \NR
\NC \type{uchyph}           \NC boolean \NC the frozen \prm {uchyph} value \NC \NR
\NC \type{components}       \NC node    \NC pointer to ligature components \NC \NR
\NC \type{xoffset}          \NC number  \NC a virtual displacement in horizontal direction \NC \NR
\NC \type{yoffset}          \NC number  \NC a virtual displacement in vertical direction \NC \NR
\NC \type{width}            \NC number  \NC the (original) width of the character \NC \NR
\NC \type{height}           \NC number  \NC the (original) height of the character \NC \NR
\NC \type{depth}            \NC number  \NC the (original) depth of the character \NC \NR
\NC \type{expansion_factor} \NC number  \NC the expansion factor to be applied \NC \NR
\NC \type{data}             \NC number  \NC a general purpose field for users (we had room for it) \NC \NR
\LL
\stoptabulate

The \type {width}, \type {height} and \type {depth} values are read|-|only. The \type {expansion_factor} is assigned in the par builder and used in the backend.

A warning: never assign a node list to the components field unless you are sure its internal link structure is correct, otherwise an error may result.

Valid bits for the \type {subtype} field are:

\starttabulate[|c|l|]
\DB bit \BC meaning \NC \NR
\TB
\NC 0 \NC character \NC \NR
\NC 1 \NC ligature  \NC \NR
\NC 2 \NC ghost     \NC \NR
\NC 3 \NC left      \NC \NR
\NC 4 \NC right     \NC \NR
\LL
\stoptabulate

See \in {section} [charsandglyphs] for a detailed description of the \type {subtype} field.

The \type {expansion_factor} has been introduced as part of the separation between font- and backend. It is the result of extensive experiments with a more efficient implementation of expansion. Early versions of \LUATEX\ already replaced multiple instances of fonts in the backend by scaling but contrary to \PDFTEX\ in \LUATEX\ we now also got rid of font copies in the frontend and replaced them by expansion factors that travel with glyph nodes. Apart from a cleaner approach this is also a step towards a better separation between front- and backend.

The \type {is_char} function checks if a node is a glyph node with a subtype still less than 256. This function can be used to determine if applying font logic to a glyph node makes sense. The value \type {nil} gets returned when the node is not a glyph, a character number is returned if the node is still tagged as character and \type {false} gets returned otherwise. When nil is returned, the id is also returned. The \type {is_glyph} variant doesn't check for a subtype being less than 256, so it returns either the character value or nil plus the id. These helpers are not always faster than separate calls but they sometimes permit making more readable tests. The \type {uses_font} helper takes a node and a font id and returns true when a glyph or disc node references that font.

\subsection{\nod {boundary} nodes}

\topicindex {nodes+boundary}
\topicindex {boundary}

This node relates to the \prm {noboundary}, \prm {boundary}, \prm {protrusionboundary} and \prm {wordboundary} primitives.
\starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{subtype} \NC number \NC \showsubtypes{boundary} \NC \NR \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{value} \NC number \NC values 0--255 are reserved \NC \NR \LL \stoptabulate \subsection{\nod {local_par} nodes} \topicindex {nodes+paragraphs} \topicindex {paragraphs} This node is inserted at the start of a paragraph. You should not mess too much with this one. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{pen_inter} \NC number \NC local interline penalty (from \lpr {localinterlinepenalty}) \NC \NR \NC \type{pen_broken} \NC number \NC local broken penalty (from \lpr {localbrokenpenalty}) \NC \NR \NC \type{dir} \NC string \NC the direction of this par. see~\in [dirnodes] \NC \NR \NC \type{box_left} \NC node \NC the \lpr {localleftbox} \NC \NR \NC \type{box_left_width} \NC number \NC width of the \lpr {localleftbox} \NC \NR \NC \type{box_right} \NC node \NC the \lpr {localrightbox} \NC \NR \NC \type{box_right_width} \NC number \NC width of the \lpr {localrightbox} \NC \NR \LL \stoptabulate A warning: never assign a node list to the \type {box_left} or \type {box_right} field unless you are sure its internal link structure is correct, otherwise an error may result. \subsection[dirnodes]{\nod {dir} nodes} \topicindex {nodes+direction} \topicindex {directions} Direction nodes mark parts of the running text that need a change of direction and \ the \prm {textdir} command generates them. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{dir} \NC string \NC the direction (but see below) \NC \NR \NC \type{level} \NC number \NC nesting level of this direction whatsit \NC \NR \LL \stoptabulate Direction specifiers are three|-|letter combinations of \type {T}, \type {B}, \type {R}, and \type {L}. These are built up out of three separate items: \startitemize[packed] \startitem the first is the direction of the \quote{top} of paragraphs \stopitem \startitem the second is the direction of the \quote{start} of lines \stopitem \startitem the third is the direction of the \quote{top} of glyphs \stopitem \stopitemize However, only four combinations are accepted: \type {TLT}, \type {TRT}, \type {RTT}, and \type {LTL}. Inside actual \nod {dir} nodes, the representation of \nod {dir} is not a three|-|letter but a combination of numbers. When printed the direction is indicated by a \type {+} or \type {-}, indicating whether the value is pushed or popped from the direction stack. \subsection{\nod {marginkern} nodes} \topicindex {nodes+paragraphs} \topicindex {paragraphs} \topicindex {protrusion} Margin kerns result from protrusion. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{subtype} \NC number \NC \showsubtypes{marginkern} \NC \NR \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{width} \NC number \NC the advance of the kern \NC \NR \NC \type{glyph} \NC node \NC the glyph to be used \NC \NR \LL \stoptabulate \stopsection \startsection[title={Math noads}] \topicindex {nodes+math} \topicindex {math+nodes} These are the so||called \quote {noad}s and the nodes that are specifically associated with math processing. Most of these nodes contain subnodes so that the list of possible fields is actually quite small. 
First, the subnodes: \subsection{Math kernel subnodes} Many object fields in math mode are either simple characters in a specific family or math lists or node lists. There are four associated subnodes that represent these cases (in the following node descriptions these are indicated by the word \type {<kernel>}). The \type {next} and \type {prev} fields for these subnodes are unused. \subsection{\nod {math_char} and \nod {math_text_char} subnodes} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{char} \NC number \NC the character index \NC \NR \NC \type{fam} \NC number \NC the family number \NC \NR \LL \stoptabulate The \nod {math_char} is the simplest subnode field, it contains the character and family for a single glyph object. The \nod {math_text_char} is a special case that you will not normally encounter, it arises temporarily during math list conversion (its sole function is to suppress a following italic correction). \subsection{\nod {sub_box} and \nod {sub_mlist} subnodes} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{head/list} \NC node \NC list of nodes \NC \NR \LL \stoptabulate These two subnode types are used for subsidiary list items. For \nod {sub_box}, the \type {head} points to a \quote {normal} vbox or hbox. For \nod {sub_mlist}, the \type {head} points to a math list that is yet to be converted. A warning: never assign a node list to the \type {head} field unless you are sure its internal link structure is correct, otherwise an error is triggered. \subsection{\nod {delim} subnodes} There is a fifth subnode type that is used exclusively for delimiter fields. As before, the \type {next} and \type {prev} fields are unused. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{small_char} \NC number \NC character index of base character \NC \NR \NC \type{small_fam} \NC number \NC family number of base character \NC \NR \NC \type{large_char} \NC number \NC character index of next larger character \NC \NR \NC \type{large_fam} \NC number \NC family number of next larger character \NC \NR \LL \stoptabulate The fields \type {large_char} and \type {large_fam} can be zero, in that case the font that is set for the \type {small_fam} is expected to provide the large version as an extension to the \type {small_char}. \subsection{Math core nodes} First, there are the objects (the \TEX book calls them \quote {atoms}) that are associated with the simple math objects: ord, op, bin, rel, open, close, punct, inner, over, under, vcent. These all have the same fields, and they are combined into a single node type with separate subtypes for differentiation. Some noads have an option field. 
The values in this bitset are common: \starttabulate[|l|r|] \DB meaning \BC bits \NC \NR \TB \NC set \NC \type{0x08} \NC \NR \NC internal \NC \type{0x00} + \type{0x08} \NC \NR \NC internal \NC \type{0x01} + \type{0x08} \NC \NR \NC axis \NC \type{0x02} + \type{0x08} \NC \NR \NC no axis \NC \type{0x04} + \type{0x08} \NC \NR \NC exact \NC \type{0x10} + \type{0x08} \NC \NR \NC left \NC \type{0x11} + \type{0x08} \NC \NR \NC middle \NC \type{0x12} + \type{0x08} \NC \NR \NC right \NC \type{0x14} + \type{0x08} \NC \NR \NC no sub script \NC \type{0x21} + \type{0x08} \NC \NR \NC no super script \NC \type{0x22} + \type{0x08} \NC \NR \NC no script \NC \type{0x23} + \type{0x08} \NC \NR \LL \stoptabulate \subsection{simple \nod {noad} nodes} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{subtype} \NC number \NC \showsubtypes{noad} \NC \NR \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{nucleus} \NC kernel node \NC base \NC \NR \NC \type{sub} \NC kernel node \NC subscript \NC \NR \NC \type{sup} \NC kernel node \NC superscript \NC \NR \NC \type{options} \NC number \NC bitset of rendering options \NC \NR \LL \stoptabulate \subsection{\nod {accent} nodes} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{subtype} \NC number \NC \showsubtypes{accent} \NC \NR \NC \type{nucleus} \NC kernel node \NC base \NC \NR \NC \type{sub} \NC kernel node \NC subscript \NC \NR \NC \type{sup} \NC kernel node \NC superscript \NC \NR \NC \type{accent} \NC kernel node \NC top accent \NC \NR \NC \type{bot_accent} \NC kernel node \NC bottom accent \NC \NR \NC \type{fraction} \NC number \NC larger step criterium (divided by 1000) \NC \NR \LL \stoptabulate \subsection{\nod {style} nodes} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{style} \NC string \NC contains the style \NC \NR \LL \stoptabulate There are eight possibilities for the string value: one of \type {display}, \type {text}, \type {script}, or \type {scriptscript}. Each of these can have be prefixed by \type {cramped}. \subsection{\nod {choice} nodes} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{display} \NC node \NC list of display size alternatives \NC \NR \NC \type{text} \NC node \NC list of text size alternatives \NC \NR \NC \type{script} \NC node \NC list of scriptsize alternatives \NC \NR \NC \type{scriptscript} \NC node \NC list of scriptscriptsize alternatives \NC \NR \LL \stoptabulate Warning: never assign a node list to the \type {display}, \type {text}, \type {script}, or \type {scriptscript} field unless you are sure its internal link structure is correct, otherwise an error can occur. 
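To see how the kernel subnodes and simple noads described above fit together, here is a minimal
sketch that builds an ord noad with a \nod {math_char} nucleus; the character code and family
number are just placeholder values:

\starttyping
-- a sketch: an ord noad whose nucleus is a math_char subnode
local n  = node.new("noad")      -- default subtype (ord)
local mc = node.new("math_char")
mc.char  = 0x41                  -- placeholder character index
mc.fam   = 0                     -- placeholder family number
n.nucleus = mc
-- such a noad would normally be spliced into a math list that is
-- converted later on, for instance via node.mlist_to_hlist
\stoptyping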
\subsection{\nod {radical} nodes}

\starttabulate[|l|l|p|]
\DB field \BC type \BC explanation \NC \NR
\TB
\NC \type{subtype} \NC number \NC \showsubtypes{radical} \NC \NR
\NC \type{attr} \NC node \NC list of attributes \NC \NR
\NC \type{nucleus} \NC kernel node \NC base \NC \NR
\NC \type{sub} \NC kernel node \NC subscript \NC \NR
\NC \type{sup} \NC kernel node \NC superscript \NC \NR
\NC \type{left} \NC delimiter node \NC \NC \NR
\NC \type{degree} \NC kernel node \NC only set by \lpr {Uroot} \NC \NR
\NC \type{width} \NC number \NC required width \NC \NR
\NC \type{options} \NC number \NC bitset of rendering options \NC \NR
\LL \stoptabulate

Warning: never assign a node list to the \type {nucleus}, \type {sub}, \type {sup}, \type {left},
or \type {degree} field unless you are sure its internal link structure is correct, otherwise an
error can be triggered.

\subsection{\nod {fraction} nodes}

\starttabulate[|l|l|p|]
\DB field \BC type \BC explanation \NC \NR
\TB
\NC \type{attr} \NC node \NC list of attributes \NC \NR
\NC \type{width} \NC number \NC (optional) width of the fraction \NC \NR
\NC \type{num} \NC kernel node \NC numerator \NC \NR
\NC \type{denom} \NC kernel node \NC denominator \NC \NR
\NC \type{left} \NC delimiter node \NC left side symbol \NC \NR
\NC \type{right} \NC delimiter node \NC right side symbol \NC \NR
\NC \type{middle} \NC delimiter node \NC middle symbol \NC \NR
\NC \type{options} \NC number \NC bitset of rendering options \NC \NR
\LL \stoptabulate

Warning: never assign a node list to the \type {num} or \type {denom} field unless you are sure
its internal link structure is correct, otherwise an error can result.

\subsection{\nod {fence} nodes}

\starttabulate[|l|l|p|]
\DB field \BC type \BC explanation \NC \NR
\TB
\NC \type{subtype} \NC number \NC \showsubtypes{fence} \NC \NR
\NC \type{attr} \NC node \NC list of attributes \NC \NR
\NC \type{delim} \NC delimiter node \NC delimiter specification \NC \NR
\NC \type{italic} \NC number \NC italic correction \NC \NR
\NC \type{height} \NC number \NC required height \NC \NR
\NC \type{depth} \NC number \NC required depth \NC \NR
\NC \type{options} \NC number \NC bitset of rendering options \NC \NR
\NC \type{class} \NC number \NC spacing related class \NC \NR
\LL \stoptabulate

Warning: some of these fields are used by the renderer and might get adapted in the process.

\stopsection

\startsection[title={Front|-|end whatsits}]

Whatsit nodes come in many subtypes that you can ask for by running \type {node.whatsits}:

\startluacode
for id, name in table.sortedpairs(node.whatsits()) do
    context.type(name)
    context(" (%s), ",id)
end
context.removeunwantedspaces()
context.removepunctuation()
\stopluacode
. % period

Some of them are generic and independent of the output mode and others are specific to the chosen
backend: \DVI\ or \PDF. Here we discuss the generic front|-|end nodes.
\subsection{\whs {open}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{stream} \NC number \NC \TEX's stream id number \NC \NR \NC \type{name} \NC string \NC file name \NC \NR \NC \type{ext} \NC string \NC file extension \NC \NR \NC \type{area} \NC string \NC file area (this may become obsolete) \NC \NR \LL \stoptabulate \subsection{\whs {write}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{stream} \NC number \NC \TEX's stream id number \NC \NR \NC \type{data} \NC table \NC a table representing the token list to be written \NC \NR \LL \stoptabulate \subsection{\whs {close}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{stream} \NC number \NC \TEX's stream id number \NC \NR \LL \stoptabulate \subsection{\whs {user_defined}} User|-|defined whatsit nodes can only be created and handled from \LUA\ code. In effect, they are an extension to the extension mechanism. The \LUATEX\ engine will simply step over such whatsits without ever looking at the contents. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{user_id} \NC number \NC id number \NC \NR \NC \type{type} \NC number \NC type of the value \NC \NR \NC \type{value} \NC number \NC a \LUA\ number \NC \NR \NC \NC node \NC a node list \NC \NR \NC \NC string \NC a \LUA\ string \NC \NR \NC \NC table \NC a \LUA\ table \NC \NR \LL \stoptabulate The \type {type} can have one of six distinct values. The number is the \ASCII\ value if the first character of the type name (so you can use string.byte("l") instead of \type {108}). \starttabulate[|r|c|p|] \DB value \BC meaning \BC explanation \NC \NR \TB \NC 97 \NC a \NC list of attributes (a node list) \NC \NR \NC 100 \NC d \NC a \LUA\ number \NC \NR \NC 108 \NC l \NC a \LUA\ value (table, number, boolean, etc) \NC \NR \NC 110 \NC n \NC a node list \NC \NR \NC 115 \NC s \NC a \LUA\ string \NC \NR \NC 116 \NC t \NC a \LUA\ token list in \LUA\ table form (a list of triplets) \NC \NR \LL \stoptabulate \subsection{\whs {save_pos}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \LL \stoptabulate \subsection{\whs {late_lua}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{data} \NC string or function \NC the to be written information stored as \LUA\ value \NC \NR \NC \type{token} \NC string \NC the to be written information stored as token list \NC \NR \NC \type{name} \NC string \NC the name to use for \LUA\ error reporting \NC \NR \LL \stoptabulate The difference between \type {data} and \type {string} is that on assignment, the \type {data} field is converted to a token list, cf.\ use as \lpr {latelua}. The \type {string} version is treated as a literal string. \stopsection \startsection[title={\DVI\ backend whatsits}] \subsection{\whs {special}} There is only one \DVI\ backend whatsit, and it just flushes its content to the output file. 
\starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{data} \NC string \NC the \prm {special} information \NC \NR \LL \stoptabulate \stopsection \startsection[title={\PDF\ backend whatsits}] \subsection{\whs {pdf_literal}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{mode} \NC number \NC the \quote {mode} setting of this literal \NC \NR \NC \type{data} \NC string \NC the to be written information stored as \LUA\ string \NC \NR \NC \type{token} \NC string \NC the to be written information stored as token list \NC \NR \LL \stoptabulate Possible mode values are: \starttabulate[|c|p|] \DB value \BC keyword \NC \NR \TB \NC 0 \NC \type{origin} \NC \NR \NC 1 \NC \type{page} \NC \NR \NC 2 \NC \type{direct} \NC \NR \NC 3 \NC \type{raw} \NC \NR \NC 4 \NC \type{text} \NC \NR \LL \stoptabulate The higher the number, the less checking and the more you can run into trouble. Especially the \type {raw} variant can produce bad \PDF\ so you can best check what you generate. \subsection{\whs {pdf_refobj}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{objnum} \NC number \NC the referenced \PDF\ object number \NC \NR \LL \stoptabulate \subsection{\whs {pdf_annot}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{width} \NC number \NC the width (not used in calculations) \NC \NR \NC \type{height} \NC number \NC the height (not used in calculations) \NC \NR \NC \type{depth} \NC number \NC the depth (not used in calculations) \NC \NR \NC \type{objnum} \NC number \NC the referenced \PDF\ object number \NC \NR \NC \type{data} \NC string \NC the annotation data \NC \NR \LL \stoptabulate \subsection{\whs {pdf_start_link}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{width} \NC number \NC the width (not used in calculations) \NC \NR \NC \type{height} \NC number \NC the height (not used in calculations) \NC \NR \NC \type{depth} \NC number \NC the depth (not used in calculations) \NC \NR \NC \type{objnum} \NC number \NC the referenced \PDF\ object number \NC \NR \NC \type{link_attr} \NC table \NC the link attribute token list \NC \NR \NC \type{action} \NC node \NC the action to perform \NC \NR \LL \stoptabulate \subsection{\whs {pdf_end_link}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC \NC \NR \LL \stoptabulate \subsection{\whs {pdf_dest}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{width} \NC number \NC the width (not used in calculations) \NC \NR \NC \type{height} \NC number \NC the height (not used in calculations) \NC \NR \NC \type{depth} \NC number \NC the depth (not used in calculations) \NC \NR \NC \type{named_id} \NC number \NC is the \type {dest_id} a string value? 
\NC \NR \NC \type{dest_id} \NC number \NC the destination id \NC \NR \NC \NC string \NC the destination name \NC \NR \NC \type{dest_type} \NC number \NC type of destination \NC \NR \NC \type{xyz_zoom} \NC number \NC the zoom factor (times 1000) \NC \NR \NC \type{objnum} \NC number \NC the \PDF\ object number \NC \NR \LL \stoptabulate \subsection{\whs {pdf_action}} These are a special kind of items that only appear inside \PDF\ start link objects. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{action_type} \NC number \NC the kind of action involved \NC \NR \NC \type{action_id} \NC number or string \NC token list reference or string \NC \NR \NC \type{named_id} \NC number \NC the index of the destination \NC \NR \NC \type{file} \NC string \NC the target filename \NC \NR \NC \type{new_window} \NC number \NC the window state of the target \NC \NR \NC \type{data} \NC string \NC the name of the destination \NC \NR \LL \stoptabulate Valid action types are: \starttabulate[|l|l|] \DB value \BC meaning \NC \NR \TB \NC 0 \NC \type{page} \NC \NR \NC 1 \NC \type{goto} \NC \NR \NC 2 \NC \type{thread} \NC \NR \NC 3 \NC \type{user} \NC \NR \LL \stoptabulate Valid window types are: \starttabulate[|l|l|] \DB value \BC meaning \NC \NR \TB \NC 0 \NC \type{notset} \NC \NR \NC 1 \NC \type{new} \NC \NR \NC 2 \NC \type{nonew} \NC \NR \LL \stoptabulate \subsection{\whs {pdf_thread}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{width} \NC number \NC the width (not used in calculations) \NC \NR \NC \type{height} \NC number \NC the height (not used in calculations) \NC \NR \NC \type{depth} \NC number \NC the depth (not used in calculations) \NC \NR \NC \type{named_id} \NC number \NC is \type {tread_id} a string value? \NC \NR \NC \type{tread_id} \NC number \NC the thread id \NC \NR \NC \NC string \NC the thread name \NC \NR \NC \type{thread_attr} \NC number \NC extra thread information \NC \NR \LL \stoptabulate \subsection{\whs {pdf_start_thread}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{width} \NC number \NC the width (not used in calculations) \NC \NR \NC \type{height} \NC number \NC the height (not used in calculations) \NC \NR \NC \type{depth} \NC number \NC the depth (not used in calculations) \NC \NR \NC \type{named_id} \NC number \NC is \type {tread_id} a string value? 
\NC \NR \NC \type{tread_id} \NC number \NC the thread id \NC \NR \NC \NC string \NC the thread name \NC \NR \NC \type{thread_attr} \NC number \NC extra thread information \NC \NR \LL \stoptabulate \subsection{\whs {pdf_end_thread}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC \NC \NR \LL \stoptabulate \subsection{\whs {pdf_colorstack}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{stack} \NC number \NC colorstack id number \NC \NR \NC \type{command} \NC number \NC command to execute \NC \NR \NC \type{data} \NC string \NC data \NC \NR \LL \stoptabulate \subsection{\whs {pdf_setmatrix}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \NC \type{data} \NC string \NC data \NC \NR \LL \stoptabulate \subsection{\whs {pdf_save}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \LL \stoptabulate \subsection{\whs {pdf_restore}} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{attr} \NC node \NC list of attributes \NC \NR \LL \stoptabulate \stopsection \startsection[title={The \type {node} library}][library=node] \subsection {Introduction} The \type {node} library contains functions that facilitate dealing with (lists of) nodes and their values. They allow you to create, alter, copy, delete, and insert \LUATEX\ node objects, the core objects within the typesetter. \LUATEX\ nodes are represented in \LUA\ as userdata with the metadata type \type {luatex.node}. The various parts within a node can be accessed using named fields. Each node has at least the three fields \type {next}, \type {id}, and \type {subtype}: \startitemize[intro] \startitem The \type {next} field returns the userdata object for the next node in a linked list of nodes, or \type {nil}, if there is no next node. \stopitem \startitem The \type {id} indicates \TEX's \quote{node type}. The field \type {id} has a numeric value for efficiency reasons, but some of the library functions also accept a string value instead of \type {id}. \stopitem \startitem The \type {subtype} is another number. It often gives further information about a node of a particular \type {id}, but it is most important when dealing with \quote {whatsits}, because they are differentiated solely based on their \type {subtype}. \stopitem \stopitemize The other available fields depend on the \type {id} (and for \quote {whatsits}, the \type {subtype}) of the node. Support for \nod {unset} (alignment) nodes is partial: they can be queried and modified from \LUA\ code, but not created. Nodes can be compared to each other, but: you are actually comparing indices into the node memory. This means that equality tests can only be trusted under very limited conditions. It will not work correctly in any situation where one of the two nodes has been freed and|/|or reallocated: in that case, there will be false positives. At the moment, memory management of nodes should still be done explicitly by the user. Nodes are not \quote {seen} by the \LUA\ garbage collector, so you have to call the node freeing functions yourself when you are no longer in need of a node (list). 
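For instance, a node that is created for a quick experiment and never handed back to \TEX\ has to
be freed explicitly; a minimal sketch:

\starttyping
-- create a temporary kern node and free it again afterwards
local k = node.new("kern")
k.kern = 65536                -- one point, expressed in scaled points
-- ... use the node, e.g. copy it into some list ...
node.free(k)                  -- hand the memory back to TeX
\stoptyping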
Nodes form linked lists without reference counting, so you have to be careful that when control
returns back to \LUATEX\ itself, you have not deleted nodes that are still referenced from a
\type {next} pointer elsewhere, and that you did not create nodes that are referenced more than
once. Normally the setters and getters handle this for you.

There are statistics available with regards to the allocated node memory, which can be handy for
tracing.

\subsection{\type {is_node}}

\topicindex {nodes+functions}

\libindex {is_node}

\startfunctioncall
<boolean|integer> t = node.is_node(<any> item)
\stopfunctioncall

This function returns a number (the internal index of the node) if the argument is a userdata
object of type \type {<node>} and false when no node is passed.

\subsection{\type {types} and \type {whatsits}}

\libindex {types}
\libindex {whatsits}

This function returns an array that maps node id numbers to node type strings, providing an
overview of the possible top|-|level \type {id} types.

\startfunctioncall
<table> t = node.types()
\stopfunctioncall

\TEX's \quote {whatsits} all have the same \type {id}. The various subtypes are defined by their
\type {subtype} fields. The function is much like \type {types}, except that it provides an array
of \type {subtype} mappings.

\startfunctioncall
<table> t = node.whatsits()
\stopfunctioncall

\subsection{\type {id}}

\libindex{id}

This converts a single type name to its internal numeric representation.

\startfunctioncall
<number> id = node.id(<string> type)
\stopfunctioncall

\subsection{\type {type} and \type {subtype}}

\libindex {type}
\libindex {subtype}

If the argument is a number, then the next function converts an internal numeric representation
to an external string representation. Otherwise, it will return the string \type {node} if the
object represents a node, and \type {nil} otherwise.

\startfunctioncall
<string> type = node.type(<any> n)
\stopfunctioncall

This next one converts a single whatsit name to its internal numeric representation (\type
{subtype}).

\startfunctioncall
<number> subtype = node.subtype(<string> type)
\stopfunctioncall

\subsection{\type {fields}}

\libindex {fields}

This function returns an array of valid field names for a particular type of node. If you want to
get the valid fields for a \quote {whatsit}, you have to supply the second argument also. In other
cases, any given second argument will be silently ignored.

\startfunctioncall
<table> t = node.fields(<number> id)
<table> t = node.fields(<number> id, <number> subtype)
\stopfunctioncall

The function accepts string \type {id} and \type {subtype} values as well.

\subsection{\type {has_field}}

\libindex {has_field}

This function returns a boolean that is only true if \type {n} is actually a node, and it has the
field.

\startfunctioncall
<boolean> t = node.has_field(<node> n, <string> field)
\stopfunctioncall

\subsection{\type {new}}

\libindex{new}

The \type {new} function creates a new node. All its fields are initialized to either zero or
\type {nil} except for \type {id} and \type {subtype}. Instead of numbers you can also use strings
(names). If you create a new \nod {whatsit} node the second argument is required. As with all node
functions, this function creates a node at the \TEX\ level.
\startfunctioncall
<node> n = node.new(<number> id)
<node> n = node.new(<number> id, <number> subtype)
\stopfunctioncall

\subsection{\type {free}, \type {flush_node} and \type {flush_list}}

\libindex{free}
\libindex{flush_node}
\libindex{flush_list}

The next one frees the node \type {n} from \TEX's memory. Be careful: no checks are done on
whether this node is still pointed to from a register or some \type {next} field: it is up to you
to make sure that the internal data structures remain correct.

\startfunctioncall
<node> next = node.free(<node> n)
node.flush_node(<node> n)
\stopfunctioncall

The \type {free} function returns the next field of the freed node, while the \type {flush_node}
alternative returns nothing.

A list starting with node \type {n} can be flushed from \TEX's memory too. Be careful: no checks
are done on whether any of these nodes is still pointed to from a register or some \type {next}
field: it is up to you to make sure that the internal data structures remain correct.

\startfunctioncall
node.flush_list(<node> n)
\stopfunctioncall

\subsection{\type {copy} and \type {copy_list}}

\libindex{copy}
\libindex{copy_list}

This creates a deep copy of node \type {n}, including all nested lists as in the case of a hlist
or vlist node. Only the \type {next} field is not copied.

\startfunctioncall
<node> m = node.copy(<node> n)
\stopfunctioncall

A deep copy of the node list that starts at \type {n} can be created too. If \type {m} is also
given, the copy stops just before node \type {m}.

\startfunctioncall
<node> m = node.copy_list(<node> n)
<node> m = node.copy_list(<node> n, <node> m)
\stopfunctioncall

Note that you cannot copy attribute lists this way. However, there is normally no need to copy
attribute lists as when you do assignments to the \type {attr} field or make changes to specific
attributes, the needed copying and freeing takes place automatically.

\subsection{\type {prev} and \type{next}}

\libindex{prev}
\libindex{next}

These return the node preceding or following the given node, or \type {nil} if there is no such
node.

\startfunctioncall
<node> m = node.next(<node> n)
<node> m = node.prev(<node> n)
\stopfunctioncall

\subsection{\type {current_attr}}

\libindex{current_attr}

This returns the currently active list of attributes, if there is one.

\startfunctioncall
<node> m = node.current_attr()
\stopfunctioncall

The intended usage of \type {current_attr} is as follows:

\starttyping
local x1 = node.new("glyph")
x1.attr = node.current_attr()
local x2 = node.new("glyph")
x2.attr = node.current_attr()
\stoptyping

or:

\starttyping
local x1 = node.new("glyph")
local x2 = node.new("glyph")
local ca = node.current_attr()
x1.attr = ca
x2.attr = ca
\stoptyping

The attribute lists are ref counted and the assignment takes care of incrementing the refcount.
You cannot expect the value \type {ca} to be valid any more when you assign attributes (using
\type {tex.setattribute}) or when control has been passed back to \TEX.

Note: this function is somewhat experimental, and it returns the {\it actual} attribute list, not
a copy thereof. Therefore, changing any of the attributes in the list will change these values for
all nodes that have the current attribute list assigned to them.

\subsection{\type {hpack}}

\libindex {hpack}

This function creates a new hlist by packaging the list that begins at node \type {n} into a
horizontal box. With only a single argument, this box is created using the natural width of its
components.
In the three argument form, \type {info} must be either \type {additional} or \type {exactly}, and \type {w} is the additional (\type {\hbox spread}) or exact (\type {\hbox to}) width to be used. The second return value is the badness of the generated box. \startfunctioncall <node> h, <number> b = node.hpack(<node> n) <node> h, <number> b = node.hpack(<node> n, <number> w, <string> info) <node> h, <number> b = node.hpack(<node> n, <number> w, <string> info, <string> dir) \stopfunctioncall Caveat: there can be unexpected side|-|effects to this function, like updating some of the \prm {marks} and \type {\inserts}. Also note that the content of \type {h} is the original node list \type {n}: if you call \type {node.free(h)} you will also free the node list itself, unless you explicitly set the \type {list} field to \type {nil} beforehand. And in a similar way, calling \type {node.free(n)} will invalidate \type {h} as well! \subsection{\type {vpack}} \libindex {vpack} This function creates a new vlist by packaging the list that begins at node \type {n} into a vertical box. With only a single argument, this box is created using the natural height of its components. In the three argument form, \type {info} must be either \type {additional} or \type {exactly}, and \type {w} is the additional (\type {\vbox spread}) or exact (\type {\vbox to}) height to be used. \startfunctioncall <node> h, <number> b = node.vpack(<node> n) <node> h, <number> b = node.vpack(<node> n, <number> w, <string> info) <node> h, <number> b = node.vpack(<node> n, <number> w, <string> info, <string> dir) \stopfunctioncall The second return value is the badness of the generated box. See the description of \type {hpack} for a few memory allocation caveats. \subsection{\type {prepend_prevdepth}} \libindex {prepend_prevdepth} This function is somewhat special in the sense that it is an experimental helper that adds the interlinespace to a line keeping the baselineskip and lineskip into account. \startfunctioncall <node> n, <number> delta = node.prepend_prevdepth(<node> n,<number> prevdepth) \stopfunctioncall \subsection{\type {dimensions} and \type {rangedimensions}} \libindex{dimensions} \libindex{rangedimensions} \startfunctioncall <number> w, <number> h, <number> d = node.dimensions(<node> n) <number> w, <number> h, <number> d = node.dimensions(<node> n, <string> dir) <number> w, <number> h, <number> d = node.dimensions(<node> n, <node> t) <number> w, <number> h, <number> d = node.dimensions(<node> n, <node> t, <string> dir) \stopfunctioncall This function calculates the natural in|-|line dimensions of the node list starting at node \type {n} and terminating just before node \type {t} (or the end of the list, if there is no second argument). The return values are scaled points. 
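For instance, the natural dimensions of the content of a box register can be queried with the
simple form; box register \type {0} is only an example here:

\starttyping
-- natural width, height and depth (in scaled points) of the list in box 0
local b = tex.box[0]
if b and b.head then
    print(node.dimensions(b.head))
end
\stoptyping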
An alternative format that starts with glue parameters as the first three arguments is also
possible:

\startfunctioncall
<number> w, <number> h, <number> d =
    node.dimensions(<number> glue_set, <number> glue_sign, <number> glue_order, <node> n)
<number> w, <number> h, <number> d =
    node.dimensions(<number> glue_set, <number> glue_sign, <number> glue_order, <node> n, <string> dir)
<number> w, <number> h, <number> d =
    node.dimensions(<number> glue_set, <number> glue_sign, <number> glue_order, <node> n, <node> t)
<number> w, <number> h, <number> d =
    node.dimensions(<number> glue_set, <number> glue_sign, <number> glue_order, <node> n, <node> t, <string> dir)
\stopfunctioncall

This calling method takes glue settings into account and is especially useful for finding the
actual width of a sublist of nodes that are already boxed, for example in code like this, which
prints the width of the space in between the \type {a} and \type {b} as it would be if \type
{\box0} was used as-is:

\starttyping
\setbox0 = \hbox to 20pt {a b}

\directlua{print (node.dimensions(
    tex.box[0].glue_set,
    tex.box[0].glue_sign,
    tex.box[0].glue_order,
    tex.box[0].head.next,
    node.tail(tex.box[0].head)
)) }
\stoptyping

You need to keep in mind that this is one of the few places in \TEX\ where floats are used, which
means that you can get small differences in rounding when you compare the width reported by \type
{hpack} with \type {dimensions}.

The second alternative saves a few lookups and can be more convenient in some cases:

\startfunctioncall
<number> w, <number> h, <number> d =
    node.rangedimensions(<node> parent, <node> first)
<number> w, <number> h, <number> d =
    node.rangedimensions(<node> parent, <node> first, <node> last)
\stopfunctioncall

\subsection{\type {mlist_to_hlist}}

\libindex {mlist_to_hlist}

\startfunctioncall
<node> h = node.mlist_to_hlist(<node> n, <string> display_type, <boolean> penalties)
\stopfunctioncall

This runs the internal mlist to hlist conversion, converting the math list in \type {n} into the
horizontal list \type {h}. The interface is exactly the same as for the callback \cbk
{mlist_to_hlist}.

\subsection{\type {slide}}

\startfunctioncall
<node> m = node.slide(<node> n)
\stopfunctioncall

Returns the last node of the node list that starts at \type {n}. As a side|-|effect, it also
creates a reverse chain of \type {prev} pointers between nodes.

\subsection{\type {tail}}

\libindex {tail}

\startfunctioncall
<node> m = node.tail(<node> n)
\stopfunctioncall

Returns the last node of the node list that starts at \type {n}.

\subsection{\type {length} and \type {count}}

\libindex {length}
\libindex {count}

\startfunctioncall
<number> i = node.length(<node> n)
<number> i = node.length(<node> n, <node> m)
\stopfunctioncall

Returns the number of nodes contained in the node list that starts at \type {n}. If \type {m} is
also supplied it stops at \type {m} instead of at the end of the list. The node \type {m} is not
counted.

\startfunctioncall
<number> i = node.count(<number> id, <node> n)
<number> i = node.count(<number> id, <node> n, <node> m)
\stopfunctioncall

Returns the number of nodes contained in the node list that starts at \type {n} that have a
matching \type {id} field. If \type {m} is also supplied, counting stops at \type {m} instead of
at the end of the list. The node \type {m} is not counted. This function also accepts string \type
{id}'s.
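As an illustration, the number of glyph and glue nodes in a box register (again register \type {0}
is only an example) can be determined like this:

\starttyping
-- count nodes of a given type in the list of box 0
local b = tex.box[0]
if b and b.head then
    print(node.count("glyph", b.head), node.count("glue", b.head))
end
\stoptyping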
\subsection{\type {is_char} and \type {is_glyph}} \libindex {is_char} \libindex {is_glyph} The subtype of a glyph node signals if the glyph is already turned into a character reference or not. \startfunctioncall <boolean> b = node.is_char(<node> n) <boolean> b = node.is_glyph(<node> n) \stopfunctioncall \subsection{\type {traverse}} \libindex {traverse} \startfunctioncall <node> t, id, subtype = node.traverse(<node> n) \stopfunctioncall This is a \LUA\ iterator that loops over the node list that starts at \type {n}. Typically code looks like this: \starttyping for n in node.traverse(head) do ... end \stoptyping is functionally equivalent to: \starttyping do local n local function f (head,var) local t if var == nil then t = head else t = var.next end return t end while true do n = f (head, n) if n == nil then break end ... end end \stoptyping It should be clear from the definition of the function \type {f} that even though it is possible to add or remove nodes from the node list while traversing, you have to take great care to make sure all the \type {next} (and \type {prev}) pointers remain valid. If the above is unclear to you, see the section \quote {For Statement} in the \LUA\ Reference Manual. \subsection{\type {traverse_id}} \libindex {traverse_id} \startfunctioncall <node> t, subtype = node.traverse_id(<number> id, <node> n) \stopfunctioncall This is an iterator that loops over all the nodes in the list that starts at \type {n} that have a matching \type {id} field. See the previous section for details. The change is in the local function \type {f}, which now does an extra while loop checking against the upvalue \type {id}: \starttyping local function f(head,var) local t if var == nil then t = head else t = var.next end while not t.id == id do t = t.next end return t end \stoptyping \subsection{\type {traverse_char} and \type {traverse_glyph}} \libindex {traverse_char} \libindex {traverse_glyph} The \type{traverse_char} iterator loops over the \nod {glyph} nodes in a list. Only nodes with a subtype less than 256 are seen. \startfunctioncall <node> n, font, char = node.traverse_char(<node> n) \stopfunctioncall The \type{traverse_glyph} iterator loops over a list and returns the list and filters all glyphs: \startfunctioncall <node> n, font, char = node.traverse_glyph(<node> n) \stopfunctioncall \subsection{\type {traverse_list}} \libindex {traverse_list} This iterator loops over the \nod {hlist} and \nod {vlist} nodes in a list. \startfunctioncall <node> n, id, subtype, list = node.traverse_list(<node> n) \stopfunctioncall The four return values can save some time compared to fetching these fields but in practice you seldom need them all. So consider it a (side effect of experimental) convenience. \subsection{\type {has_glyph}} \libindex {has_glyph} This function returns the first glyph or disc node in the given list: \startfunctioncall <node> n = node.has_glyph(<node> n) \stopfunctioncall \subsection{\type {end_of_math}} \libindex {end_of_math} \startfunctioncall <node> t = node.end_of_math(<node> start) \stopfunctioncall Looks for and returns the next \type {math_node} following the \type {start}. If the given node is a math end node this helper returns that node, else it follows the list and returns the next math endnote. If no such node is found nil is returned. 
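As an example of how this helper is typically used, the following sketch walks a list (assumed to
be in \type {head}, for instance inside a callback) and skips the contents of inline math:

\starttyping
-- skip math material while walking a node list (a sketch)
local MATH = node.id("math")
local n = head
while n do
    if n.id == MATH then
        n = node.end_of_math(n) or n   -- jump to the closing math node
    else
        -- process the node here
    end
    n = n.next
end
\stoptyping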
\subsection{\type {remove}} \libindex {remove} \startfunctioncall <node> head, current = node.remove(<node> head, <node> current) \stopfunctioncall This function removes the node \type {current} from the list following \type {head}. It is your responsibility to make sure it is really part of that list. The return values are the new \type {head} and \type {current} nodes. The returned \type {current} is the node following the \type {current} in the calling argument, and is only passed back as a convenience (or \type {nil}, if there is no such node). The returned \type {head} is more important, because if the function is called with \type {current} equal to \type {head}, it will be changed. \subsection{\type {insert_before}} \libindex {insert_before} \startfunctioncall <node> head, new = node.insert_before(<node> head, <node> current, <node> new) \stopfunctioncall This function inserts the node \type {new} before \type {current} into the list following \type {head}. It is your responsibility to make sure that \type {current} is really part of that list. The return values are the (potentially mutated) \type {head} and the node \type {new}, set up to be part of the list (with correct \type {next} field). If \type {head} is initially \type {nil}, it will become \type {new}. \subsection{\type {insert_after}} \libindex {insert_after} \startfunctioncall <node> head, new = node.insert_after(<node> head, <node> current, <node> new) \stopfunctioncall This function inserts the node \type {new} after \type {current} into the list following \type {head}. It is your responsibility to make sure that \type {current} is really part of that list. The return values are the \type {head} and the node \type {new}, set up to be part of the list (with correct \type {next} field). If \type {head} is initially \type {nil}, it will become \type {new}. \subsection{\type {first_glyph}} \libindex {first_glyph} \startfunctioncall <node> n = node.first_glyph(<node> n) <node> n = node.first_glyph(<node> n, <node> m) \stopfunctioncall Returns the first node in the list starting at \type {n} that is a glyph node with a subtype indicating it is a glyph, or \type {nil}. If \type {m} is given, processing stops at (but including) that node, otherwise processing stops at the end of the list. \subsection{\type {ligaturing}} \libindex {ligaturing} \startfunctioncall <node> h, <node> t, <boolean> success = node.ligaturing(<node> n) <node> h, <node> t, <boolean> success = node.ligaturing(<node> n, <node> m) \stopfunctioncall Apply \TEX-style ligaturing to the specified nodelist. The tail node \type {m} is optional. The two returned nodes \type {h} and \type {t} are the new head and tail (both \type {n} and \type {m} can change into a new ligature). \subsection{\type {kerning}} \libindex {kerning} \startfunctioncall <node> h, <node> t, <boolean> success = node.kerning(<node> n) <node> h, <node> t, <boolean> success = node.kerning(<node> n, <node> m) \stopfunctioncall Apply \TEX|-|style kerning to the specified node list. The tail node \type {m} is optional. The two returned nodes \type {h} and \type {t} are the head and tail (either one of these can be an inserted kern node, because special kernings with word boundaries are possible). \subsection{\type {unprotect_glyph[s]}} \libindex {unprotect_glyphs} \libindex {unprotect_glyph} \startfunctioncall node.unprotect_glyph(<node> n) node.unprotect_glyphs(<node> n,[<node> n]) \stopfunctioncall Subtracts 256 from all glyph node subtypes. 
This and the next function are helpers to convert from \type {characters} to \type {glyphs} during
node processing. The second argument is optional and indicates the end of a range.

\subsection{\type {protect_glyph[s]}}

\libindex {protect_glyphs}
\libindex {protect_glyph}

\startfunctioncall
node.protect_glyph(<node> n)
node.protect_glyphs(<node> n,[<node> n])
\stopfunctioncall

Adds 256 to all glyph node subtypes in the node list starting at \type {n}, except that if the
value is 1, it adds only 255. The special handling of 1 means that \type {characters} will become
\type {glyphs} after subtraction of 256. A single character can be marked by the singular call.
The second argument is optional and indicates the end of a range.

\subsection{\type {last_node}}

\libindex {last_node}

\startfunctioncall
<node> n = node.last_node()
\stopfunctioncall

This function pops the last node from \TEX's \quote{current list}. It returns that node, or \type
{nil} if the current list is empty.

\subsection{\type {write}}

\libindex {write}

\startfunctioncall
node.write(<node> n)
\stopfunctioncall

This function appends a node list to \TEX's \quote {current list}. The node list is not
deep|-|copied! There is no error checking either! You might need to enforce horizontal mode in
order for this to work as expected.

\subsection{\type {protrusion_skippable}}

\libindex {protrusion_skippable}

\startfunctioncall
<boolean> skippable = node.protrusion_skippable(<node> n)
\stopfunctioncall

Returns \type {true} if, for the purpose of line boundary discovery when character protrusion is
active, this node can be skipped.

\stopsection

\startsection[title={Glue handling}][library=node]

\subsection{\type {setglue}}

\libindex {setglue}

You can set the five properties of a glue in one go. Non|-|numeric values are equivalent to zero
and reset a property.

\startfunctioncall
node.setglue(<node> n)
node.setglue(<node> n,width,stretch,shrink,stretch_order,shrink_order)
\stopfunctioncall

When you pass values, only arguments that are numbers are assigned so

\starttyping
node.setglue(n,655360,false,65536)
\stoptyping

will only adapt the width and shrink. When a list node is passed, you set the glue, order and sign
instead.

\subsection{\type {getglue}}

\libindex {getglue}

The next call will return 5 values or nothing when no glue is passed.

\startfunctioncall
<integer> width, <integer> stretch, <integer> shrink,
<integer> stretch_order, <integer> shrink_order = node.getglue(<node> n)
\stopfunctioncall

When the second argument is false, only the width is returned (this is consistent with \type
{tex.get}). When a list node is passed, you get back the glue that is set, the order of that glue
and the sign.

\subsection{\type {is_zero_glue}}

\libindex {is_zero_glue}

This function returns \type {true} when the width, stretch and shrink properties are zero.

\startfunctioncall
<boolean> isglue = node.is_zero_glue(<node> n)
\stopfunctioncall

\stopsection

\startsection[title={Attribute handling}][library=node]

\subsection{Attributes}

\topicindex {attributes}

The newly introduced attribute registers are non|-|trivial, because the value that is attached to
a node is essentially a sparse array of key|-|value pairs. It is generally easiest to deal with
attribute lists and attributes by using the dedicated functions in the \type {node} library, but
for completeness, here is the low|-|level interface.

Attributes appear as a linked list of userdata objects in the \type {attr} field of individual
nodes.
They can be handled individually, but it is much safer and more efficient to use the dedicated functions associated with them. \subsection{\nod {attribute_list} nodes} \topicindex {nodes+attributes} An \nod {attribute_list} item is used as a head pointer for a list of attribute items. It has only one user-visible field: \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{next} \NC node \NC pointer to the first attribute \NC \NR \LL \stoptabulate \subsection{\nod {attr} nodes} A normal node's attribute field will point to an item of type \nod {attribute_list}, and the \type {next} field in that item will point to the first defined \quote {attribute} item, whose \type {next} will point to the second \quote {attribute} item, etc. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{next} \NC node \NC pointer to the next attribute \NC \NR \NC \type{number} \NC number \NC the attribute type id \NC \NR \NC \type{value} \NC number \NC the attribute value \NC \NR \LL \stoptabulate As mentioned it's better to use the official helpers rather than edit these fields directly. For instance the \type {prev} field is used for other purposes and there is no double linked list. \subsection{\type {has_attribute}} \libindex {has_attribute} \startfunctioncall <number> v = node.has_attribute(<node> n, <number> id) <number> v = node.has_attribute(<node> n, <number> id, <number> val) \stopfunctioncall Tests if a node has the attribute with number \type {id} set. If \type {val} is also supplied, also tests if the value matches \type {val}. It returns the value, or, if no match is found, \type {nil}. \subsection{\type {get_attribute}} \libindex {get_attribute} \startfunctioncall <number> v = node.get_attribute(<node> n, <number> id) \stopfunctioncall Tests if a node has an attribute with number \type {id} set. It returns the value, or, if no match is found, \type {nil}. If no \type {id} is given then the zero attributes is assumed. \subsection{\type {find_attribute}} \libindex {find_attribute} \startfunctioncall <number> v, <node> n = node.find_attribute(<node> n, <number> id) \stopfunctioncall Finds the first node that has attribute with number \type {id} set. It returns the value and the node if there is a match and otherwise nothing. \subsection{\type {set_attribute}} \libindex {set_attribute} \startfunctioncall node.set_attribute(<node> n, <number> id, <number> val) \stopfunctioncall Sets the attribute with number \type {id} to the value \type {val}. Duplicate assignments are ignored. \subsection{\type {unset_attribute}} \libindex {unset_attribute} \startfunctioncall <number> v = node.unset_attribute(<node> n, <number> id) <number> v = node.unset_attribute(<node> n, <number> id, <number> val) \stopfunctioncall Unsets the attribute with number \type {id}. If \type {val} is also supplied, it will only perform this operation if the value matches \type {val}. Missing attributes or attribute|-|value pairs are ignored. If the attribute was actually deleted, returns its old value. Otherwise, returns \type {nil}. \subsection{\type {slide}} \libindex {slide} This helper makes sure that the node lists is double linked and returns the found tail node. \startfunctioncall <node> tail = node.slide(<node> n) \stopfunctioncall After some callbacks automatic sliding takes place. This feature can be turned off with \type {node.fix_node_lists(false)} but you better make sure then that you don't mess up lists. 
In most cases \TEX\ itself only uses \type {next} pointers but your other callbacks might expect
proper \type {prev} pointers too. Future versions of \LUATEX\ can add more checking but this will
not influence usage.

\subsection{\type {check_discretionary}, \type {check_discretionaries}}

\libindex{check_discretionary}
\libindex{check_discretionaries}

When you fool around with disc nodes you need to be aware of the fact that they have a special
internal data structure. As long as you reassign the fields when you have extended the lists it's
ok because then the tail pointers get updated, but when you add to a list without reassigning you
might end up in trouble when the linebreak routine kicks in. You can call this function to check
the list for issues with disc nodes.

\startfunctioncall
node.check_discretionary(<node> n)
node.check_discretionaries(<node> head)
\stopfunctioncall

The plural variant runs over all disc nodes in a list, the singular variant checks one node only
(it also checks if the node is a disc node).

\subsection{\type {flatten_discretionaries}}

\libindex {flatten_discretionaries}

This function will remove the discretionaries in the list and inject the replace field when set.

\startfunctioncall
<node> head, count = node.flatten_discretionaries(<node> n)
\stopfunctioncall

\subsection{\type {family_font}}

\libindex {family_font}

When you pass a proper family identifier the next helper will return the font currently associated
with it. You can normally also access the font with the normal font field or getter because it
will resolve the family automatically for noads.

\startfunctioncall
<integer> id = node.family_font(<integer> fam)
\stopfunctioncall

\stopsection

\startsection[title={Two access models}][library=node]

\topicindex{nodes+direct}
\topicindex{direct nodes}

\libindex {todirect}
\libindex {tonode}
\libindex {tostring}

Deep down in \TEX\ a node has a number which is a numeric entry in a memory table. In fact, this
model, where \TEX\ manages memory, is really fast and one of the reasons why plugging in callbacks
that operate on nodes is quite fast too. Each node gets a number that is in fact an index in the
memory table and that number often is reported when you print node related information. You go
from userdata nodes to their numeric references and back with:

\startfunctioncall
<integer> d = node.todirect(<node> n)
<node> n = node.tonode(<integer> d)
\stopfunctioncall

The userdata model is rather robust: it is a virtual interface with some additional checking,
while the direct model accesses the node numbers directly. However, even with userdata you can get
into trouble when you free nodes that are no longer allocated or mess up lists. If you apply \type
{tostring} to a node you see its internal (direct) number and id.

The first model provides key based access while the second always accesses fields via functions:

\starttyping
nodeobject.char
getfield(nodenumber,"char")
\stoptyping

If you use the direct model, even if you know that you deal with numbers, you should not depend on
that property but treat it as an abstraction just like traditional nodes. In fact, the fact that
we use a simple basic datatype has the penalty that less checking can be done, but less checking
is also the reason why it's somewhat faster. An important aspect is that one cannot mix both
methods, but you can cast both models. So, multiplying a node number makes no sense; a small
conversion sketch is given below.
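Here is a small sketch of the round trip between the two models (the glyph node is only an
example):

\starttyping
-- cast a userdata node to its direct (numeric) form and back
local n = node.new("glyph")
local d = node.todirect(n)           -- a plain number
print(node.direct.getid(d))          -- field access via functions
local m = node.tonode(d)             -- back to a userdata node
print(m.id)                          -- key based access again
node.free(n)
\stoptyping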
So our advice is: use the indexed (table) approach when possible and investigate the direct one when speed might be a real issue. For that reason \LUATEX\ also provide the \type {get*} and \type {set*} functions in the top level node namespace. There is a limited set of getters. When implementing this direct approach the regular index by key variant was also optimized, so direct access only makes sense when nodes are accessed millions of times (which happens in some font processing for instance). We're talking mostly of getters because setters are less important. Documents have not that many content related nodes and setting many thousands of properties is hardly a burden contrary to millions of consultations. Normally you will access nodes like this: \starttyping local next = current.next if next then -- do something end \stoptyping Here \type {next} is not a real field, but a virtual one. Accessing it results in a metatable method being called. In practice it boils down to looking up the node type and based on the node type checking for the field name. In a worst case you have a node type that sits at the end of the lookup list and a field that is last in the lookup chain. However, in successive versions of \LUATEX\ these lookups have been optimized and the most frequently accessed nodes and fields have a higher priority. Because in practice the \type {next} accessor results in a function call, there is some overhead involved. The next code does the same and performs a tiny bit faster (but not that much because it is still a function call but one that knows what to look up). \starttyping local next = node.next(current) if next then -- do something end \stoptyping Some accessors are used frequently and for these we provide more efficient helpers: \starttabulate[|l|p|] \DB function \BC explanation \NC \NR \TB \NC \type{getnext} \NC parsing nodelist always involves this one \NC \NR \NC \type{getprev} \NC used less but a logical companion to \type {getnext} \NC \NR \NC \type{getboth} \NC returns the next and prev pointer of a node \NC \NR \NC \type{getid} \NC consulted a lot \NC \NR \NC \type{getsubtype} \NC consulted less but also a topper \NC \NR \NC \type{getfont} \NC used a lot in \OPENTYPE\ handling (glyph nodes are consulted a lot) \NC \NR \NC \type{getchar} \NC idem and also in other places \NC \NR \NC \type{getwhd} \NC returns the \type {width}, \type {height} and \type {depth} of a list, rule or (unexpanded) glyph as well as glue (its spec is looked at) and unset nodes\NC \NR \NC \type{getdisc} \NC returns the \type {pre}, \type {post} and \type {replace} fields and optionally when true is passed also the tail fields \NC \NR \NC \type{getlist} \NC we often parse nested lists so this is a convenient one too \NC \NR \NC \type{getleader} \NC comparable to list, seldom used in \TEX\ (but needs frequent consulting like lists; leaders could have been made a dedicated node type) \NC \NR \NC \type{getfield} \NC generic getter, sufficient for the rest (other field names are often shared so a specific getter makes no sense then) \NC \NR \NC \type{getbox} \NC gets the given box (a list node) \NC \NR \NC \type{getoffsets} \NC gets the \type {xoffset} and \type {yoffset} of a glyph or \type {left} and \type {right} values of a rule \NC \NR \LL \stoptabulate In the direct namespace there are more such helpers and most of them are accompanied by setters. The getters and setters are clever enough to see what node is meant. We don't deal with whatsit nodes: their fields are always accessed by name. 
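To give an idea of how these helpers are combined in practice, the following sketch counts the
glyphs per font in a list using the direct model; the list in \type {head} is assumed to come from
a callback:

\starttyping
-- count glyphs per font id using direct getters (a sketch)
local todirect = node.direct.todirect
local getid    = node.direct.getid
local getnext  = node.direct.getnext
local getfont  = node.direct.getfont
local glyph_id = node.id("glyph")

local counts = { }
local d = todirect(head)
while d do
    if getid(d) == glyph_id then
        local f = getfont(d)
        counts[f] = (counts[f] or 0) + 1
    end
    d = getnext(d)
end
\stoptyping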
It doesn't make sense to add getters for all fields, we just identifier the most likely candidates. In complex documents, many node and fields types never get seen, or seen only a few times, but for instance glyphs are candidates for such optimization. The \type {node.direct} interface has some more helpers. \footnote {We can define the helpers in the node namespace with \type {getfield} which is about as efficient, so at some point we might provide that as module.} The \type {setdisc} helper takes three (optional) arguments plus an optional fourth indicating the subtype. Its \type {getdisc} takes an optional boolean; when its value is \type {true} the tail nodes will also be returned. The \type {setfont} helper takes an optional second argument, it being the character. The directmode setter \type {setlink} takes a list of nodes and will link them, thereby ignoring \type {nil} entries. The first valid node is returned (beware: for good reason it assumes single nodes). For rarely used fields no helpers are provided and there are a few that probably are used seldom too but were added for consistency. You can of course always define additional accessors using \type {getfield} and \type {setfield} with little overhead. When the second argument of \type {setattributelist} is \type {true} the current attribute list is assumed. \def\yes{$+$} \def\nop{$-$} \def\supported#1#2#3% {\NC \type{#1} \NC \ifx#2\yes\lix{node} {#1}\fi #2 \NC \ifx#3\yes\lix{node.direct}{#1}\fi #3 \NC \NR} \starttabulate[|l|c|c|] \DB function \BC node \BC direct \NC \NR \TB \supported {check_discretionaries} \yes \yes \supported {check_discretionary} \yes \yes \supported {copy_list} \yes \yes \supported {copy} \yes \yes \supported {count} \yes \yes \supported {current_attr} \yes \yes \supported {dimensions} \yes \yes \supported {effective_glue} \yes \yes \supported {end_of_math} \yes \yes \supported {family_font} \yes \nop \supported {fields} \yes \nop \supported {find_attribute} \yes \yes \supported {first_glyph} \yes \yes \supported {flatten_discretionaries} \yes \yes \supported {flush_list} \yes \yes \supported {flush_node} \yes \yes \supported {free} \yes \yes \supported {get_attribute} \yes \yes \supported {get_synctex_fields} \nop \yes \supported {getattributelist} \nop \yes \supported {getboth} \yes \yes \supported {getbox} \nop \yes \supported {getchar} \yes \yes \supported {getcomponents} \nop \yes \supported {getdepth} \nop \yes \supported {getdirection} \nop \yes \supported {getdir} \nop \yes \supported {getdisc} \yes \yes \supported {getfam} \nop \yes \supported {getfield} \yes \yes \supported {getfont} \yes \yes \supported {getglue} \yes \yes \supported {getheight} \nop \yes \supported {getid} \yes \yes \supported {getkern} \nop \yes \supported {getlang} \nop \yes \supported {getleader} \yes \yes \supported {getlist} \yes \yes \supported {getnext} \yes \yes \supported {getnucleus} \nop \yes \supported {getoffsets} \nop \yes \supported {getpenalty} \nop \yes \supported {getprev} \yes \yes \supported {getproperty} \yes \yes \supported {getshift} \nop \yes \supported {getsubtype} \yes \yes \supported {getsub} \nop \yes \supported {getsup} \nop \yes \supported {getdata} \nop \yes \supported {getwhd} \yes \yes \supported {getwidth} \nop \yes \supported {has_attribute} \yes \yes \supported {has_field} \yes \yes \supported {has_glyph} \yes \yes \supported {hpack} \yes \yes \supported {id} \yes \nop \supported {insert_after} \yes \yes \supported {insert_before} \yes \yes \supported {is_char} \yes \yes \supported 
{is_direct} \nop \yes \supported {is_glyph} \yes \yes \supported {is_node} \yes \yes \supported {is_zero_glue} \yes \yes \supported {kerning} \yes \yes \supported {last_node} \yes \yes \supported {length} \yes \yes \supported {ligaturing} \yes \yes \supported {mlist_to_hlist} \yes \nop \supported {new} \yes \yes \supported {next} \yes \nop \supported {prepend_prevdepth} \nop \yes \supported {prev} \yes \nop \supported {protect_glyphs} \yes \yes \supported {protect_glyph} \yes \yes \supported {protrusion_skippable} \yes \yes \supported {rangedimensions} \yes \yes \supported {remove} \yes \yes \supported {set_attribute} \yes \yes \supported {set_synctex_fields} \nop \yes \supported {setattributelist} \nop \yes \supported {setboth} \nop \yes \supported {setbox} \nop \yes \supported {setchar} \nop \yes \supported {setcomponents} \nop \yes \supported {setdepth} \nop \yes \supported {setdirection} \nop \yes \supported {setdir} \nop \yes \supported {setdisc} \nop \yes \supported {setfam} \nop \yes \supported {setfield} \yes \yes \supported {setfont} \nop \yes \supported {setexpansion} \nop \yes \supported {setglue} \yes \yes \supported {setheight} \nop \yes \supported {setkern} \nop \yes \supported {setlang} \nop \yes \supported {setleader} \nop \yes \supported {setlink} \nop \yes \supported {setlist} \nop \yes \supported {setnext} \nop \yes \supported {setnucleus} \nop \yes \supported {setoffsets} \nop \yes \supported {setpenalty} \nop \yes \supported {setprev} \nop \yes \supported {setproperty} \yes \yes \supported {setshift} \nop \yes \supported {setsplit} \nop \yes \supported {setsubtype} \nop \yes \supported {setsub} \nop \yes \supported {setsup} \nop \yes \supported {setwhd} \nop \yes \supported {setwidth} \nop \yes \supported {slide} \yes \yes \supported {subtypes} \yes \nop \supported {subtype} \yes \nop \supported {tail} \yes \yes \supported {todirect} \yes \yes \supported {tonode} \yes \yes \supported {tostring} \yes \yes \supported {traverse_char} \yes \yes \supported {traverse_glyph} \yes \yes \supported {traverse_id} \yes \yes \supported {traverse} \yes \yes \supported {types} \yes \nop \supported {type} \yes \nop \supported {unprotect_glyphs} \yes \yes \supported {unprotect_glyph} \yes \yes \supported {unset_attribute} \yes \yes \supported {usedlist} \yes \yes \supported {uses_font} \yes \yes \supported {vpack} \yes \yes \supported {whatsits} \yes \nop \supported {write} \yes \yes \LL \stoptabulate The \type {node.next} and \type {node.prev} functions will stay but for consistency there are variants called \type {getnext} and \type {getprev}. We had to use \type {get} because \type {node.id} and \type {node.subtype} are already taken for providing meta information about nodes. Note: The getters do only basic checking for valid keys. You should just stick to the keys mentioned in the sections that describe node properties. Some of the getters and setters handle multiple node types, given that the field is relevant. In that case, some field names are considered similar (like \type {kern} and \type {width}, or \type {data} and \type {value}. In retrospect we could have normalized field names better but we decided to stick to the original (internal) names as much as possible. After all, at the \LUA\ end one can easily create synonyms. Some nodes have indirect references. For instance a math character refers to a family instead of a font. In that case we provide a virtual font field as accessor. So, \type {getfont} and \type {.font} can be used on them. 
The same is true for the \type {width}, \type {height} and \type {depth} of glue nodes. These actually access the spec node properties, and here we can set as well as get the values.

In some places \LUATEX\ can do a bit of extra checking for valid node lists and you can enable that with:

\startfunctioncall
node.fix_node_lists(<boolean> b)
\stopfunctioncall

You can set and query the \SYNCTEX\ fields, a file number aka tag and a line number, for glue, kern, hlist, vlist, rule and math nodes as well as glyph nodes (although this last one is not used in native \SYNCTEX).

\startfunctioncall
node.set_synctex_fields(<integer> f, <integer> l)
<integer> f, <integer> l = node.get_synctex_fields(<node> n)
\stopfunctioncall

Of course you need to know what you're doing as no checking on sane values takes place. Also, the synctex interpreter used in editors is rather peculiar and has some assumptions (heuristics).

\stopsection

\startsection[title={Properties}][library=node]

\topicindex {nodes+properties}
\topicindex {properties}

\libindex{flush_properties_table}
\libindex{get_properties_table}
\libindex{set_properties_mode}

Attributes are a convenient way to relate extra information to a node. You can assign them at the \TEX\ end as well as at the \LUA\ end and consult them at the \LUA\ end. One big advantage is that they obey grouping. They are linked lists and normally checking for them is pretty efficient, even if you use a lot of them. A macro package has to provide some way to manage these attributes at the \TEX\ end because otherwise clashes in their usage can occur.

Each node also can have a properties table and you can assign values to this table using the \type {setproperty} function and get properties using the \type {getproperty} function. Managing properties is way more demanding than managing attributes. Take the following example:

\starttyping
\directlua {
    local n = node.new("glyph")
    node.setproperty(n,"foo")
    print(node.getproperty(n))
    node.setproperty(n,"bar")
    print(node.getproperty(n))
    node.free(n)
}
\stoptyping

This will print \type {foo} and \type {bar} which in itself is not that useful when multiple mechanisms want to use this feature. A variant is:

\starttyping
\directlua {
    local n = node.new("glyph")
    node.setproperty(n,{ one = "foo", two = "bar" })
    print(node.getproperty(n).one)
    print(node.getproperty(n).two)
    node.free(n)
}
\stoptyping

This time we store two properties with the node. It really makes sense to have a table as property because that way we can store more. But in order for that to work well you need to do it this way:

\starttyping
\directlua {
    local n = node.new("glyph")
    local t = node.getproperty(n)
    if not t then
        t = { }
        node.setproperty(n,t)
    end
    t.one = "foo"
    t.two = "bar"
    print(node.getproperty(n).one)
    print(node.getproperty(n).two)
    node.free(n)
}
\stoptyping

Here our own properties will not overwrite other users' properties unless of course they use the same keys. So, eventually you will end up with something like this:

\starttyping
\directlua {
    local n = node.new("glyph")
    local t = node.getproperty(n)
    if not t then
        t = { }
        node.setproperty(n,t)
    end
    t.myself = { one = "foo", two = "bar" }
    print(node.getproperty(n).myself.one)
    print(node.getproperty(n).myself.two)
    node.free(n)
}
\stoptyping

This assumes that only you use \type {myself} as subtable. The possibilities are endless but care is needed. For instance, the generic font handler that ships with \CONTEXT\ uses the \type {injections} subtable and you should not mess with that one!
There are a few helper functions that you normally should not touch as user: \typ {flush_properties_table} will wipe the table (normally a bad idea), \typ {get_properties_table} will give the table that stores properties (using direct entries) and you can best not mess too much with that one either because \LUATEX\ itself will make sure that entries related to nodes will get wiped when nodes get freed, so that the \LUA\ garbage collector can do its job.

In fact, the main reason why we have this mechanism is that it saves the user (or macro package) some work. One can easily write a property mechanism in \LUA\ where after a shipout properties get cleaned up but it's not entirely trivial to make sure that with each freed node also its properties get freed, due to the fact that there can be nodes left over for a next page. And having a callback bound to the node deallocator would add way too much overhead.

Managing properties in the node (de)allocator functions is disabled by default and is enabled by:

\starttyping
node.set_properties_mode(true)
\stoptyping

When we copy a node list that has a table as property, there are several possibilities: we do the same as a new node, we copy the entry to the table in properties (a reference), we do a deep copy of a table in the properties, we create a new table and give it the original one as a metatable. After some experiments (that also included timing) with these scenarios we decided that a deep copy made no sense, nor did nilling. In the end both the shallow copy and the metatable variant were ok, although the second one is slower. The most important aspect to keep in mind is that references to other nodes in properties no longer can be valid for that copy. We could use two tables (one unique and one shared) or metatables but that only complicates matters. When defining a new node, we could already allocate a table but it is rather easy to do that at the \LUA\ end e.g.\ using a metatable \type {__index} method (a sketch of that approach is given below). That way it is under macro package control. When deleting a node, we could keep the slot (e.g.\ setting it to false) but it could make memory consumption increase unnecessarily when we have temporary large node lists and after that only small lists. Neither is done.

So in the end this is what happens now: when a node is copied, and it has a table as property, the new node will share that table. If the second argument of \typ {set_properties_mode} is \type {true} then a metatable approach is chosen: the copy gets its own table with the original table as metatable. If you use the generic font loader the mode is enabled that way.

A few more experiments were done. For instance: copy attributes to the properties so that we have fast access at the \LUA\ end. In the end the overhead is not compensated by speed and convenience, in fact, attributes are not that slow when it comes to accessing them. So this was rejected. Another experiment concerned a bitset in the node but again the gain compared to attributes was negligible and given the small amount of available bits it also demands a pretty strong agreement over what bit represents what, and this is unlikely to succeed in the \TEX\ community. It doesn't pay off.

Just in case one wonders why properties make sense: it is not so much speed that we gain, but more convenience: storing all kinds of (temporary) data in attributes is no fun and this mechanism makes sure that properties are cleaned up when a node is freed.
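What follows is only a sketch and not something that the engine provides: it shows how a macro package could use such an \type {__index} metamethod so that a per node subtable is created on demand. We assume here the \type {node.direct} variant of \typ {get_properties_table}, so that the table is indexed by direct node references; the key \type {myself} is, as before, just an example name.

\starttyping
\directlua {
    node.set_properties_mode(true)
    local properties = node.direct.get_properties_table()
    setmetatable(properties, {
        __index = function(t,n)
            local v = { }
            rawset(t,n,v)
            return v
        end
    })
    local d = node.direct.new("glyph")
    properties[d].myself = { one = "foo", two = "bar" }
    print(properties[d].myself.one)
    node.direct.flush_node(d)
}
\stoptyping

This way one never has to check whether a property table is already present, at the cost of a (possibly unused) table being allocated for every node that gets looked up.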
Also, the advantage of a more or less global properties table is that we stay at the \LUA\ end. An alternative is to store a reference in the node itself but that is complicated by the fact that the register has some limitations (no numeric keys) and we also don't want to mess with it too much. \stopsection \stopchapter \stopcomponent
{ "alphanum_fraction": 0.704205019, "avg_line_length": 36.6487804878, "ext": "tex", "hexsha": "a379b7b0a4d34d042483acffd4209dda6c19cbd6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "52fe6e0cd5af6b4610fd344a7392cca11bc5a72e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "waqas4afzal/LatexUrduBooksTools", "max_forks_repo_path": "Texlive_Windows_x32/2020/texmf-dist/doc/luatex/base/luatex-nodes.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "52fe6e0cd5af6b4610fd344a7392cca11bc5a72e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "waqas4afzal/LatexUrduBooksTools", "max_issues_repo_path": "Texlive_Windows_x32/2020/texmf-dist/doc/luatex/base/luatex-nodes.tex", "max_line_length": 109, "max_stars_count": null, "max_stars_repo_head_hexsha": "52fe6e0cd5af6b4610fd344a7392cca11bc5a72e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "waqas4afzal/LatexUrduBooksTools", "max_stars_repo_path": "Texlive_Windows_x32/2020/texmf-dist/doc/luatex/base/luatex-nodes.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 27480, "size": 97669 }
\section{Gaussian elimination}
Gaussian elimination is an algorithm for solving systems of linear equations: the augmented matrix of the system is brought into row echelon form by elementary row operations (swapping two rows, multiplying a row by a nonzero scalar, and adding a multiple of one row to another row), after which the unknowns are found by back substitution.
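As a short worked example (the system here is chosen purely for illustration), consider
\[
x + y + z = 6, \qquad 2x + 3y + z = 11, \qquad x - y + 2z = 5 .
\]
The row operations $R_2 \leftarrow R_2 - 2R_1$, $R_3 \leftarrow R_3 - R_1$ and then $R_3 \leftarrow R_3 + 2R_2$ reduce the augmented matrix to row echelon form:
\[
\left(\begin{array}{ccc|c}
1 & 1 & 1 & 6 \\
2 & 3 & 1 & 11 \\
1 & -1 & 2 & 5
\end{array}\right)
\longrightarrow
\left(\begin{array}{ccc|c}
1 & 1 & 1 & 6 \\
0 & 1 & -1 & -1 \\
0 & -2 & 1 & -1
\end{array}\right)
\longrightarrow
\left(\begin{array}{ccc|c}
1 & 1 & 1 & 6 \\
0 & 1 & -1 & -1 \\
0 & 0 & -1 & -3
\end{array}\right).
\]
Back substitution then gives $z = 3$, $y = -1 + z = 2$ and $x = 6 - y - z = 1$.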
{ "alphanum_fraction": 0.8440366972, "avg_line_length": 36.3333333333, "ext": "tex", "hexsha": "bb416577c56e787a7d0fbe590a54f03eaa1a5689", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5058ad4cfa5f3191c009c568b8c3001fc6901c9c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "openscript/math-cheat-sheet", "max_forks_repo_path": "src/linear-algebra/matrices/gaussian-elimination.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5058ad4cfa5f3191c009c568b8c3001fc6901c9c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "openscript/math-cheat-sheet", "max_issues_repo_path": "src/linear-algebra/matrices/gaussian-elimination.tex", "max_line_length": 77, "max_stars_count": null, "max_stars_repo_head_hexsha": "5058ad4cfa5f3191c009c568b8c3001fc6901c9c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "openscript/math-cheat-sheet", "max_stars_repo_path": "src/linear-algebra/matrices/gaussian-elimination.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 20, "size": 109 }
%!TEX TS-program = lualatex
%!TEX encoding = UTF-8 Unicode

\documentclass[letterpaper]{tufte-handout}
%\geometry{showframe} % display margins for debugging page layout

\usepackage{fontspec}
\def\mainfont{Linux Libertine O}
\setmainfont[Ligatures={Common,TeX}, Contextuals={NoAlternate}, BoldFont={* Bold}, ItalicFont={* Italic}, Numbers={OldStyle}]{\mainfont}
\setsansfont[Scale=MatchLowercase]{Linux Biolinum O}
\usepackage{microtype}

\usepackage{graphicx} % allow embedded images
\setkeys{Gin}{width=\linewidth,totalheight=\textheight,keepaspectratio}
\graphicspath{{img/}} % set of paths to search for images

\usepackage{amsmath} % extended mathematics
\usepackage{booktabs} % book-quality tables
\usepackage{units} % non-stacked fractions and better unit spacing
\usepackage{siunitx}
\usepackage{multicol} % multiple column layout facilities
\usepackage{microtype} % filler text
\usepackage{hyperref}
\usepackage{enumitem}
%\usepackage{fancyvrb} % extended verbatim environments
% \fvset{fontsize=\normalsize}% default font size for fancy-verbatim environments

\makeatletter
% Paragraph indentation and separation for normal text
\renewcommand{\@tufte@reset@par}{%
  \setlength{\RaggedRightParindent}{1.0pc}%
  \setlength{\JustifyingParindent}{1.0pc}%
  \setlength{\parindent}{0pc}%
  \setlength{\parskip}{0.5\baselineskip}%
}
\@tufte@reset@par

% Paragraph indentation and separation for marginal text
\renewcommand{\@tufte@margin@par}{%
  \setlength{\RaggedRightParindent}{0pt}%
  \setlength{\JustifyingParindent}{0.5pc}%
  \setlength{\parindent}{0.5pc}%
  \setlength{\parskip}{0pt}%
}
\makeatother

% Set up the spacing using fontspec features
\renewcommand\allcapsspacing[1]{{\addfontfeatures{LetterSpace=15}#1}}
\renewcommand\smallcapsspacing[1]{{\addfontfeatures{LetterSpace=10}#1}}

\title{Study Guide 07\hfill}
\author{Phenotypic evolution and quantitative traits}
\date{} % without \date command, current date is supplied

\begin{document}

\maketitle % this prints the handout title, author, and date

\section{Vocabulary}\marginnote{\textbf{Read:} Chapter 6, pages 135--148, 151--160.}

\begin{multicols}{2}
discrete traits \\
quantitative traits \\
polygenic traits \\
fitness functions \\
directional selection \\
stabilizing selection \\
disruptive selection \\
selection gradient \\
breeder's equation $\left(\Delta z = h^2S\right)$ \\
heritability $\left(h^2\right)$ \\
additive genetic variance \\
\end{multicols}

%\printclassoptions

\section{Concepts}

You should \emph{write} clear and concise answers to each question in the Concepts section. The questions are not necessarily independent. Think broadly across lectures to see ``the big picture.''

\begin{enumerate}
\item Darwin's basic idea was that natural selection affects the ability of an organism to survive and reproduce. Thus, natural selection acts on the phenotype. However, evolutionary change occurs at the genetic level. Explain the relationship between phenotype, genotype, and fitness to show how natural selection determines whether evolutionary change can occur in a population.

\item Describe the three fitness functions for selection given in class. Illustrate and explain each mode. Provide examples (real or hypothetical) of each. Relate each to different values of relative fitness $\left(w\right)$. Describe and illustrate how survivorship and frequency change under each function as the trait value changes.

\item Evolution \emph{by} selection requires correlation between a phenotypic trait and fitness, and between parental and offspring phenotypes.
Know and explain why both of these correlations are necessary for evolution by selection to occur.

\item Use the breeder's equation to show that a population will not evolve if a trait is heritable $(h^2 > 0)$ but there is no selection $(S = 0)$, or if there is selection $(S \ne 0)$ but the trait is not heritable $(h^2 = 0)$.

\item Use the breeder's equation to calculate the predicted amount of phenotypic change given the appropriate values. I would give you $\beta$, $P$, and $h^2$. Be sure you understand what each of these variables represents.
\end{enumerate}

\end{document}
{ "alphanum_fraction": 0.7716064161, "avg_line_length": 42.1919191919, "ext": "tex", "hexsha": "40295228334cba89b25d7a6d5827a9b3da6b5c42", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2721d0e2f33333ca5337ccae56508143bfa481d8", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mtaylor-semo/300", "max_forks_repo_path": "study_guides/300_study_guide07.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2721d0e2f33333ca5337ccae56508143bfa481d8", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mtaylor-semo/300", "max_issues_repo_path": "study_guides/300_study_guide07.tex", "max_line_length": 386, "max_stars_count": 1, "max_stars_repo_head_hexsha": "2721d0e2f33333ca5337ccae56508143bfa481d8", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mtaylor-semo/300", "max_stars_repo_path": "study_guides/300_study_guide07.tex", "max_stars_repo_stars_event_max_datetime": "2020-03-19T03:16:10.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-19T03:16:10.000Z", "num_tokens": 1109, "size": 4177 }
\documentclass[color=usenames,dvipsnames]{beamer}\usepackage[]{graphicx}\usepackage[]{color} % maxwidth is the original width if it is less than linewidth % otherwise use linewidth (to make sure the graphics do not exceed the margin) \makeatletter \def\maxwidth{ % \ifdim\Gin@nat@width>\linewidth \linewidth \else \Gin@nat@width \fi } \makeatother \definecolor{fgcolor}{rgb}{0, 0, 0} \newcommand{\hlnum}[1]{\textcolor[rgb]{0.69,0.494,0}{#1}}% \newcommand{\hlstr}[1]{\textcolor[rgb]{0.749,0.012,0.012}{#1}}% \newcommand{\hlcom}[1]{\textcolor[rgb]{0.514,0.506,0.514}{\textit{#1}}}% \newcommand{\hlopt}[1]{\textcolor[rgb]{0,0,0}{#1}}% \newcommand{\hlstd}[1]{\textcolor[rgb]{0,0,0}{#1}}% \newcommand{\hlkwa}[1]{\textcolor[rgb]{0,0,0}{\textbf{#1}}}% \newcommand{\hlkwb}[1]{\textcolor[rgb]{0,0.341,0.682}{#1}}% \newcommand{\hlkwc}[1]{\textcolor[rgb]{0,0,0}{\textbf{#1}}}% \newcommand{\hlkwd}[1]{\textcolor[rgb]{0.004,0.004,0.506}{#1}}% \let\hlipl\hlkwb \usepackage{framed} \makeatletter \newenvironment{kframe}{% \def\at@end@of@kframe{}% \ifinner\ifhmode% \def\at@end@of@kframe{\end{minipage}}% \begin{minipage}{\columnwidth}% \fi\fi% \def\FrameCommand##1{\hskip\@totalleftmargin \hskip-\fboxsep \colorbox{shadecolor}{##1}\hskip-\fboxsep % There is no \\@totalrightmargin, so: \hskip-\linewidth \hskip-\@totalleftmargin \hskip\columnwidth}% \MakeFramed {\advance\hsize-\width \@totalleftmargin\z@ \linewidth\hsize \@setminipage}}% {\par\unskip\endMakeFramed% \at@end@of@kframe} \makeatother \definecolor{shadecolor}{rgb}{.97, .97, .97} \definecolor{messagecolor}{rgb}{0, 0, 0} \definecolor{warningcolor}{rgb}{1, 0, 1} \definecolor{errorcolor}{rgb}{1, 0, 0} \newenvironment{knitrout}{}{} % an empty environment to be redefined in TeX \usepackage{alltt} %\documentclass[color=usenames,dvipsnames,handout]{beamer} \usepackage[roman]{../lectures} %\usepackage[sans]{../lectures} \hypersetup{pdfpagemode=UseNone,pdfstartview={FitV}} % Compile and open PDF %% New command for inline code that isn't to be evaluated \definecolor{inlinecolor}{rgb}{0.878, 0.918, 0.933} \newcommand{\inr}[1]{\colorbox{inlinecolor}{\texttt{#1}}} \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \begin{document} \begin{frame}[plain] \centering \LARGE % Lecture 15: Open Populations and the \\ Spatial Cormack-Jolly-Seber Model \\ \vfill \large WILD(FISH) 8390 \\ Estimation of Fish and Wildlife Population Parameters \\ \vfill Richard Chandler \\ University of Georgia \\ \end{frame} %\section{Intro} \begin{frame}[plain] \frametitle{Topics} \Large \only<1>{\tableofcontents}%[hideallsubsections]} % \only<2 | handout:0>{\tableofcontents[currentsection]}%,hideallsubsections]} \end{frame} \section{Intro} \begin{frame} \frametitle{Open populations} \large Overview \begin{itemize}[<+->] \normalsize \item Population closure is an assumption we make to estimate density in short time periods. \item If we want to study the processes governing spatial \textit{and temporal} variation in density, we need to move beyond closed population models. \item Open population SCR models allow us to model spatial and temporal variation in survival, recruitment, and movement. \end{itemize} \end{frame} \begin{frame} \frametitle{Types of closure} \large Geographic closure \begin{itemize} \normalsize \item Animals cannot enter or leave the study area \item But what is the study area in non-spatial settings? \item What is a non-spatial setting? 
\end{itemize} \pause \vfill \large Demographic closure \begin{itemize} \normalsize \item No births or deaths \item Reasonable over short time periods \item But temporal dynamics are often of chief interest \end{itemize} \end{frame} \begin{frame} \frametitle{Two varieties of open models} \large Cormack-Jolly-Seber Model \begin{itemize}%[<+->] \normalsize \item<1-> Primarily used to estimate survival \item<2-> When permanent emigration is possible, estimate is of ``apparent survival'' ($\phi$) \item<3-> Apparent survival is the probability of surviving and not leaving the study area permanently \item<4-> CJS models are not used to estimate abundance or recruitment \end{itemize} \pause \vfill \large \uncover<5->{Jolly-Seber model} \normalsize \begin{itemize} \item<6-> Traditionally used to estimate abundance, (apparent) survival, and recruitment \end{itemize} \pause \vfill \centering %\bf \uncover<7->{ Both classes of models can be made spatially explicit to study movement, among other things \\ } \end{frame} \section{Non-spatial CJS} \begin{frame} \frametitle{Non-spatial CJS model} \large Notes \begin{itemize}[<+->] \normalsize \item Typical scenario involves ``releasing'' indviduals and recapturing them over time \item We don't model initial capture process \item Instead, we ``condition on initial capture'' \item As an example, imagine releasing a bunch of tagged animals in a translocation study \item Because we aren't estimating $N$, no data augmentation required \item It is possible to estimate parameters using 1 sampling occasion per season \item Robust design provides more information about $p$ \end{itemize} \end{frame} \begin{frame} \frametitle{Non-spatial CJS model} State model \[ z_{i,t} \sim \mbox{Bernoulli}(z_{i,t-1} \times \phi) \] \vfill Observation model \[ y_{i,t} \sim \mbox{Bernoulli}(z_{i,t} \times p) \] \pause \vfill \small where \begin{itemize} \item $z_{i,t}$ is ``alive state'' of individual $i$ at time $t$ \item $\phi$ is ``apparent survival''. Probability of being alive and not permanently emigrating. \item $y_{i,t}=1$ if individual was encountered, $y_{i,t}=0$ otherwise. 
\end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{Simulating CJS data without robust design} {%\bf Parameters and dimensions} %\small \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{T} \hlkwb{<-} \hlnum{10} \hlcom{## primary periods (eg, years)} \hlstd{K} \hlkwb{<-} \hlnum{1} \hlcom{## only 1 secondary sampling occasion} \hlstd{n} \hlkwb{<-} \hlnum{25} \hlstd{phi} \hlkwb{<-} \hlnum{0.7} \hlstd{p} \hlkwb{<-} \hlnum{0.4} \hlstd{z} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{NA}\hlstd{, n, T)} \hlstd{y} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{NA}\hlstd{, n, T)} \hlstd{first} \hlkwb{<-} \hlkwd{rpois}\hlstd{(n,} \hlnum{1}\hlstd{)}\hlopt{+}\hlnum{1} \hlcom{## random release dates} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill {%\bf Generate $z$ and $y$} \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{n) \{} \hlstd{z[i,first[i]]} \hlkwb{<-} \hlnum{1} \hlcom{## Known alive at release} \hlstd{y[i,first[i]]} \hlkwb{<-} \hlnum{1} \hlkwa{for}\hlstd{(t} \hlkwa{in} \hlstd{(first[i]}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{T) \{} \hlstd{z[i,t]} \hlkwb{<-} \hlkwd{rbinom}\hlstd{(}\hlnum{1}\hlstd{,} \hlnum{1}\hlstd{, z[i,t}\hlopt{-}\hlnum{1}\hlstd{]}\hlopt{*}\hlstd{phi)} \hlcom{## Alive/dead state} \hlstd{y[i,t]} \hlkwb{<-} \hlkwd{rbinom}\hlstd{(}\hlnum{1}\hlstd{,} \hlnum{1}\hlstd{, z[i,t]}\hlopt{*}\hlstd{p)} \hlcom{## Data} \hlstd{\}} \hlstd{\}} \end{alltt} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Simulated CJS data} \begin{columns} \begin{column}{0.5\textwidth} Latent alive/dead state \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{z[,}\hlnum{1}\hlopt{:}\hlnum{7}\hlstd{]} \end{alltt} \begin{verbatim} ## [,1] [,2] [,3] [,4] [,5] [,6] [,7] ## [1,] 1 0 0 0 0 0 0 ## [2,] NA 1 1 1 1 0 0 ## [3,] NA 1 1 0 0 0 0 ## [4,] 1 0 0 0 0 0 0 ## [5,] 1 1 1 0 0 0 0 ## [6,] NA 1 1 1 1 0 0 ## [7,] NA NA 1 0 0 0 0 ## [8,] NA NA NA 1 1 1 1 ## [9,] 1 1 0 0 0 0 0 ## [10,] 1 0 0 0 0 0 0 ## [11,] 1 1 1 1 0 0 0 ## [12,] 1 0 0 0 0 0 0 ## [13,] NA NA 1 0 0 0 0 ## [14,] 1 0 0 0 0 0 0 ## [15,] NA 1 1 1 1 0 0 ## [16,] 1 1 1 1 1 0 0 ## [17,] 1 0 0 0 0 0 0 ## [18,] 1 1 1 0 0 0 0 ## [19,] 1 0 0 0 0 0 0 ## [20,] NA 1 1 0 0 0 0 ## [21,] NA NA 1 1 0 0 0 ## [22,] NA 1 1 1 0 0 0 ## [23,] NA 1 0 0 0 0 0 ## [24,] 1 0 0 0 0 0 0 ## [25,] NA 1 1 1 0 0 0 \end{verbatim} \end{kframe} \end{knitrout} \end{column} \begin{column}{0.5\textwidth} Encounter histories (data) \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{y[,}\hlnum{1}\hlopt{:}\hlnum{7}\hlstd{]} \end{alltt} \begin{verbatim} ## [,1] [,2] [,3] [,4] [,5] [,6] [,7] ## [1,] 1 0 0 0 0 0 0 ## [2,] NA 1 0 0 1 0 0 ## [3,] NA 1 0 0 0 0 0 ## [4,] 1 0 0 0 0 0 0 ## [5,] 1 0 1 0 0 0 0 ## [6,] NA 1 0 0 1 0 0 ## [7,] NA NA 1 0 0 0 0 ## [8,] NA NA NA 1 0 0 1 ## [9,] 1 0 0 0 0 0 0 ## [10,] 1 0 0 0 0 0 0 ## [11,] 1 1 0 0 0 0 0 ## [12,] 1 0 0 0 0 0 0 ## [13,] NA NA 1 0 0 0 0 ## [14,] 1 0 0 0 0 0 0 ## [15,] NA 1 0 0 0 0 0 ## [16,] 1 0 1 0 1 0 0 ## [17,] 1 0 0 0 0 0 0 ## [18,] 1 1 0 0 0 0 0 ## [19,] 1 0 0 0 0 0 0 ## [20,] NA 1 0 0 0 0 0 ## [21,] NA NA 1 1 0 0 0 ## [22,] NA 1 1 1 0 0 0 ## [23,] NA 1 0 0 0 0 0 ## [24,] 1 0 0 0 0 0 0 ## [25,] NA 1 1 1 0 0 0 \end{verbatim} \end{kframe} \end{knitrout} 
\end{column} \end{columns} \end{frame} \subsection{Likelihood} \begin{frame}[plain] \frametitle{Topics} \Large \tableofcontents[currentsection,currentsubsection] \end{frame} \begin{frame}[fragile] \frametitle{Likelihood analysis} \small We can fit CJS models in R with the `RMark' or `marked' packages. We'll use `marked', which requires the capture histories to be formatted as a character vector named \inr{ch}. \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{library}\hlstd{(marked)} \hlstd{y.marked} \hlkwb{<-} \hlkwd{ifelse}\hlstd{(}\hlkwd{is.na}\hlstd{(y),} \hlnum{0}\hlstd{, y)} \hlstd{cap.histories} \hlkwb{<-} \hlkwd{data.frame}\hlstd{(} \hlkwc{ch}\hlstd{=}\hlkwd{apply}\hlstd{(y.marked,} \hlnum{1}\hlstd{, paste,} \hlkwc{collapse}\hlstd{=}\hlstr{""}\hlstd{))} \hlkwd{head}\hlstd{(cap.histories,} \hlkwc{n}\hlstd{=}\hlnum{3}\hlstd{)} \end{alltt} \begin{verbatim} ## ch ## 1 1000000000 ## 2 0100100000 ## 3 0100000000 \end{verbatim} \end{kframe} \end{knitrout} \pause \vfill You can use formulas to model covariate effects. \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{fm0} \hlkwb{<-} \hlkwd{crm}\hlstd{(}\hlkwc{data}\hlstd{=cap.histories,} \hlkwc{model}\hlstd{=}\hlstr{"CJS"}\hlstd{,} \hlkwc{hessian}\hlstd{=}\hlnum{TRUE}\hlstd{,} \hlkwc{model.parameters}\hlstd{=}\hlkwd{list}\hlstd{(}\hlkwc{Phi}\hlstd{=}\hlkwd{list}\hlstd{(}\hlkwc{formula}\hlstd{=}\hlopt{~}\hlnum{1}\hlstd{),} \hlkwc{p}\hlstd{=}\hlkwd{list}\hlstd{(}\hlkwc{formula}\hlstd{=}\hlopt{~}\hlnum{1}\hlstd{)))} \end{alltt} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Likelihood analysis} Estimates of apparent survival and capture probability. 
\begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{predict}\hlstd{(fm0)} \end{alltt} \begin{verbatim} ## $Phi ## occ estimate se lcl ucl ## 1 4 0.617999 0.08624809 0.4415182 0.7680138 ## ## $p ## occ estimate se lcl ucl ## 1 5 0.3242274 0.1095067 0.1526448 0.5609906 \end{verbatim} \end{kframe} \end{knitrout} \end{frame} \subsection{Bayesian} \begin{frame}[plain] \frametitle{Topics} \Large \tableofcontents[currentsection,currentsubsection] \end{frame} \begin{frame}[fragile] \frametitle{Non-spatial CJS model in JAGS} % \scriptsize \fbox{\parbox{\linewidth}{\verbatiminput{CJS-nonspatial.jag}}} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{writeLines}\hlstd{(}\hlkwd{readLines}\hlstd{(}\hlstr{"CJS-nonspatial.jag"}\hlstd{))} \end{alltt} \end{kframe} \end{knitrout} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.678, 0.847, 0.902}\color{fgcolor}\begin{kframe} \begin{verbatim} model { phi ~ dunif(0,1) p ~ dunif(0,1) for(i in 1:n) { z[i,first[i]] <- 1 ## Known alive at time of release for(t in (first[i]+1):T) { z[i,t] ~ dbern(z[i,t-1]*phi) ## Survives with probability phi y[i,t] ~ dbern(z[i,t]*p) ## Detected with probability p } } } \end{verbatim} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{JAGS} Data \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{jags.data.nonsp} \hlkwb{<-} \hlkwd{list}\hlstd{(}\hlkwc{y}\hlstd{=y,} \hlkwc{n}\hlstd{=n,} \hlkwc{first}\hlstd{=first,} \hlkwc{T}\hlstd{=T)} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Initial values for $z$ matrix \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{zi.nonsp} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{NA}\hlstd{, n, T)} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{n) \{} \hlstd{zi.nonsp[i,(first[i]}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{T]} \hlkwb{<-} \hlnum{1} \hlstd{\}} \hlstd{jags.inits.nonsp} \hlkwb{<-} \hlkwa{function}\hlstd{()} \hlkwd{list}\hlstd{(}\hlkwc{phi}\hlstd{=}\hlkwd{runif}\hlstd{(}\hlnum{1}\hlstd{),} \hlkwc{p}\hlstd{=}\hlkwd{runif}\hlstd{(}\hlnum{1}\hlstd{),} \hlkwc{z}\hlstd{=zi.nonsp)} \hlstd{jags.pars.nonsp} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlstr{"phi"}\hlstd{,} \hlstr{"p"}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill {%\bf \normalsize Fit the model} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{library}\hlstd{(jagsUI)} \hlstd{jags.post.samples.nonsp} \hlkwb{<-} \hlkwd{jags.basic}\hlstd{(}\hlkwc{data}\hlstd{=jags.data.nonsp,} \hlkwc{inits}\hlstd{=jags.inits.nonsp,} \hlkwc{parameters.to.save}\hlstd{=jags.pars.nonsp,} \hlkwc{model.file}\hlstd{=}\hlstr{"CJS-nonspatial.jag"}\hlstd{,} \hlkwc{n.chains}\hlstd{=}\hlnum{3}\hlstd{,} \hlkwc{n.adapt}\hlstd{=}\hlnum{100}\hlstd{,} \hlkwc{n.burnin}\hlstd{=}\hlnum{0}\hlstd{,} \hlkwc{n.iter}\hlstd{=}\hlnum{2000}\hlstd{,} \hlkwc{parallel}\hlstd{=}\hlnum{TRUE}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Posterior distributions} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{plot}\hlstd{(jags.post.samples.nonsp)} \end{alltt} \end{kframe} {\centering 
\includegraphics[width=0.65\linewidth]{figure/jc1-plot-1} } \end{knitrout} \end{frame} \section{Spatial CJS} \begin{frame}[plain] \frametitle{Topics} \Large \tableofcontents[currentsection] \end{frame} \begin{frame} \frametitle{Spatial CJS} \large Why spatial? \begin{itemize} \normalsize \item Comes with all the benefits of other SCR models \item Allows for movement between primary periods, making it possible to separate survival from permanent emigration \item Movement itself may be of interest, such as in studies of dispersal \end{itemize} \pause \vfill \normalsize {\centering %\bf We begin by ignoring movement, but we use the robust design \\ } \end{frame} \begin{frame}[fragile] \frametitle{Simulating spatial CJS data with robust design} \scriptsize %\footnotesize \begin{columns} % \column{\dimexpr\paperwidth-10pt} \begin{column}{0.5\textwidth}%{0.5\dimexpr\paperwidth-10pt} {%\bf Parameters and data dimensions} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{T} \hlkwb{<-} \hlnum{10} \hlcom{# primary occasions} \hlstd{K} \hlkwb{<-} \hlnum{3} \hlcom{# secondary occasions} \hlstd{n} \hlkwb{<-} \hlnum{25} \hlcom{# nIndividuals} \hlstd{phi} \hlkwb{<-} \hlnum{0.7} \hlstd{p0} \hlkwb{<-} \hlnum{0.4} \hlcom{# baseline capture prob} \hlstd{sigma} \hlkwb{<-} \hlnum{0.1} \hlcom{# scale parameter} \end{alltt} \end{kframe} \end{knitrout} \pause {%\bf Traps, activity centers, and \\ detection probability} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{co} \hlkwb{<-} \hlkwd{seq}\hlstd{(}\hlnum{0.25}\hlstd{,} \hlnum{0.75}\hlstd{,} \hlkwc{length}\hlstd{=}\hlnum{5}\hlstd{)} \hlstd{x} \hlkwb{<-} \hlkwd{cbind}\hlstd{(}\hlkwd{rep}\hlstd{(co,} \hlkwc{each}\hlstd{=}\hlnum{5}\hlstd{),} \hlkwd{rep}\hlstd{(co,} \hlkwc{times}\hlstd{=}\hlnum{5}\hlstd{))} \hlstd{J} \hlkwb{<-} \hlkwd{nrow}\hlstd{(x)} \hlstd{xlim} \hlkwb{<-} \hlstd{ylim} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlnum{0}\hlstd{,}\hlnum{1}\hlstd{)} \hlstd{s} \hlkwb{<-} \hlkwd{cbind}\hlstd{(}\hlkwd{runif}\hlstd{(n, xlim[}\hlnum{1}\hlstd{], xlim[}\hlnum{2}\hlstd{]),} \hlkwd{runif}\hlstd{(n, ylim[}\hlnum{1}\hlstd{], ylim[}\hlnum{2}\hlstd{]))} \hlstd{d} \hlkwb{<-} \hlstd{p.sp} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{NA}\hlstd{, n, J)} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{n) \{} \hlstd{d[i,]} \hlkwb{<-} \hlkwd{sqrt}\hlstd{((s[i,}\hlnum{1}\hlstd{]}\hlopt{-}\hlstd{x[,}\hlnum{1}\hlstd{])}\hlopt{^}\hlnum{2} \hlopt{+} \hlstd{(s[i,}\hlnum{2}\hlstd{]}\hlopt{-}\hlstd{x[,}\hlnum{2}\hlstd{])}\hlopt{^}\hlnum{2}\hlstd{)} \hlstd{p.sp[i,]} \hlkwb{<-} \hlstd{p0}\hlopt{*}\hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{d[i,]}\hlopt{^}\hlnum{2}\hlopt{/}\hlstd{(}\hlnum{2}\hlopt{*}\hlstd{sigma}\hlopt{^}\hlnum{2}\hlstd{))} \hlstd{\}} \end{alltt} \end{kframe} \end{knitrout} \end{column} \pause %\vfill \begin{column}{0.5\textwidth}%{0.5\dimexpr\paperwidth-10pt} \scriptsize %\tiny {%\bf Generate $z$ and $y$} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{z} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{NA}\hlstd{, n, T)} \hlstd{y.sp} \hlkwb{<-} \hlkwd{array}\hlstd{(}\hlnum{NA}\hlstd{,} \hlkwd{c}\hlstd{(n, J, K, T))} \hlcom{## 4D array} \hlstd{first} \hlkwb{<-} \hlkwd{rpois}\hlstd{(n,} \hlnum{1}\hlstd{)}\hlopt{+}\hlnum{1} \hlcom{## random release dates} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{n) \{} \hlstd{z[i,first[i]]} \hlkwb{<-} \hlnum{1} 
\hlkwa{for}\hlstd{(t} \hlkwa{in} \hlstd{(first[i]}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{T) \{} \hlstd{z[i,t]} \hlkwb{<-} \hlkwd{rbinom}\hlstd{(}\hlnum{1}\hlstd{,} \hlnum{1}\hlstd{,} \hlstd{z[i,t}\hlopt{-}\hlnum{1}\hlstd{]}\hlopt{*}\hlstd{phi)} \hlkwa{for}\hlstd{(j} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{J) \{} \hlstd{y.sp[i,j,}\hlnum{1}\hlopt{:}\hlstd{K,t]} \hlkwb{<-} \hlkwd{rbinom}\hlstd{(K,} \hlnum{1}\hlstd{,} \hlstd{z[i,t]}\hlopt{*}\hlstd{p.sp[i,j])} \hlstd{\}} \hlstd{\}} \hlstd{\}} \end{alltt} \end{kframe} \end{knitrout} \end{column} \end{columns} \end{frame} \subsection{Likelihood} \begin{frame}[plain] \frametitle{Topics} \Large \tableofcontents[currentsection,currentsubsection] \end{frame} \begin{frame}[fragile] \frametitle{Likelihood analysis of spatial CJS model} We'll use the R package `openpopscr'\footnote{Another option is the `openCR' package.} for model fitting. It isn't on CRAN, but you can install it from github using the following commands. \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{library}\hlstd{(remotes)} \hlkwd{install_github}\hlstd{(}\hlstr{"r-glennie/openpopscr"}\hlstd{,} \hlkwc{build} \hlstd{=} \hlnum{TRUE}\hlstd{,} \hlkwc{build_opts} \hlstd{=} \hlkwd{c}\hlstd{(}\hlstr{"--no-resave-data"}\hlstd{,} \hlstr{"--no-manual"}\hlstd{),} \hlkwc{build_vignettes} \hlstd{=} \hlnum{TRUE}\hlstd{)} \hlkwd{library}\hlstd{(openpopscr)} \hlkwd{library}\hlstd{(secr)} \end{alltt} \end{kframe} \end{knitrout} \vfill If you're on Windows, you might have to install Rtools first. \\ \centering \url{https://cran.r-project.org/bin/windows/Rtools/} \\ \end{frame} \begin{frame}[fragile] \frametitle{Likelihood analysis of spatial CJS model} Begin by making a mask in `secr' \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{trap.df} \hlkwb{<-} \hlkwd{data.frame}\hlstd{(x}\hlopt{*}\hlnum{1000}\hlstd{);} \hlkwd{colnames}\hlstd{(trap.df)} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlstr{"x"}\hlstd{,}\hlstr{"y"}\hlstd{)} \hlstd{traps} \hlkwb{<-} \hlkwd{read.traps}\hlstd{(}\hlkwc{data}\hlstd{=trap.df,} \hlkwc{detector}\hlstd{=}\hlstr{"proximity"}\hlstd{)} \hlstd{mask} \hlkwb{<-} \hlkwd{make.mask}\hlstd{(}\hlkwc{traps}\hlstd{=traps,} \hlkwc{buffer}\hlstd{=}\hlnum{250}\hlstd{)} \hlkwd{plot}\hlstd{(mask);} \hlkwd{points}\hlstd{(traps,} \hlkwc{pch}\hlstd{=}\hlnum{3}\hlstd{,} \hlkwc{col}\hlstd{=}\hlstr{"blue"}\hlstd{,} \hlkwc{lwd}\hlstd{=}\hlnum{2}\hlstd{)} \end{alltt} \end{kframe} {\centering \includegraphics[width=0.6\linewidth]{figure/mask-1} } \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Likelihood analysis of spatial CJS model} Format for `secr' \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{y.sp.secr} \hlkwb{<-} \hlstd{y.sp; y.sp.secr[}\hlkwd{is.na}\hlstd{(y.sp)]} \hlkwb{<-} \hlnum{0} \hlstd{caps} \hlkwb{<-} \hlkwd{data.frame}\hlstd{(}\hlkwc{session}\hlstd{=}\hlnum{1}\hlstd{,} \hlkwc{animal}\hlstd{=}\hlkwd{rep}\hlstd{(}\hlkwd{slice.index}\hlstd{(y.sp.secr,} \hlnum{1}\hlstd{), y.sp.secr),} \hlkwc{occasion}\hlstd{=}\hlkwd{rep}\hlstd{(}\hlkwd{slice.index}\hlstd{(y.sp.secr,} \hlnum{3}\hlopt{:}\hlnum{4}\hlstd{), y.sp.secr),} \hlkwc{trap}\hlstd{=}\hlkwd{rep}\hlstd{(}\hlkwd{slice.index}\hlstd{(y.sp.secr,} \hlnum{2}\hlstd{), y.sp.secr))} \hlstd{capthist} \hlkwb{<-} \hlkwd{make.capthist}\hlstd{(}\hlkwc{captures}\hlstd{=caps,} \hlkwc{traps}\hlstd{=traps,} 
\hlkwc{noccasions}\hlstd{=}\hlnum{30}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \vfill Then format for `openpopscr' \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{cjs.data} \hlkwb{<-} \hlstd{ScrData}\hlopt{$}\hlkwd{new}\hlstd{(capthist, mask,} \hlkwc{primary}\hlstd{=}\hlkwd{rep}\hlstd{(}\hlnum{1}\hlopt{:}\hlnum{10}\hlstd{,} \hlkwc{each}\hlstd{=}\hlnum{3}\hlstd{))} \end{alltt} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Likelihood analysis} Create the model object and then fit it \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{mod} \hlkwb{<-} \hlstd{CjsModel}\hlopt{$}\hlkwd{new}\hlstd{(}\hlkwd{list}\hlstd{(lambda0}\hlopt{~}\hlnum{1}\hlstd{, sigma}\hlopt{~}\hlnum{1}\hlstd{, phi}\hlopt{~}\hlnum{1}\hlstd{), cjs.data,} \hlkwc{start}\hlstd{=}\hlkwd{list}\hlstd{(}\hlkwc{lambda0}\hlstd{=}\hlnum{1.5}\hlstd{,} \hlkwc{sigma}\hlstd{=}\hlnum{50}\hlstd{,} \hlkwc{phi}\hlstd{=}\hlnum{0.5}\hlstd{))} \hlstd{mod}\hlopt{$}\hlkwd{fit}\hlstd{()} \hlstd{mod} \end{alltt} \end{kframe} \end{knitrout} Back-transform \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{mod}\hlopt{$}\hlkwd{get_par}\hlstd{(}\hlstr{"lambda0"}\hlstd{,} \hlkwc{k} \hlstd{=} \hlnum{1}\hlstd{,} \hlkwc{j} \hlstd{=} \hlnum{1}\hlstd{)} \end{alltt} \begin{verbatim} ## [1] 0.3824823 \end{verbatim} \begin{alltt} \hlstd{mod}\hlopt{$}\hlkwd{get_par}\hlstd{(}\hlstr{"sigma"}\hlstd{,} \hlkwc{k} \hlstd{=} \hlnum{1}\hlstd{,} \hlkwc{j} \hlstd{=} \hlnum{1}\hlstd{)} \end{alltt} \begin{verbatim} ## [1] 102.0998 \end{verbatim} \begin{alltt} \hlstd{mod}\hlopt{$}\hlkwd{get_par}\hlstd{(}\hlstr{"phi"}\hlstd{,} \hlkwc{k} \hlstd{=} \hlnum{1}\hlstd{,} \hlkwc{m}\hlstd{=}\hlnum{1}\hlstd{)} \end{alltt} \begin{verbatim} ## [1] 0.5708015 \end{verbatim} \end{kframe} \end{knitrout} \end{frame} \subsection{Bayesian} \begin{frame}[plain] \frametitle{Topics} \Large \tableofcontents[currentsection,currentsubsection] \end{frame} \begin{frame}[fragile] \frametitle{Spatial CJS model in JAGS} % \vspace{-5mm} % \scriptsize \fbox{\parbox{\linewidth}{\verbatiminput{CJS-spatial.jag}}} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{writeLines}\hlstd{(}\hlkwd{readLines}\hlstd{(}\hlstr{"CJS-spatial.jag"}\hlstd{))} \end{alltt} \end{kframe} \end{knitrout} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.678, 0.847, 0.902}\color{fgcolor}\begin{kframe} \begin{verbatim} model { phi ~ dunif(0,1) ## Apparent survival p0 ~ dunif(0,1) ## Baseline capture prob sigma ~ dunif(0, 2) ## Scale parameter of detection function for(i in 1:n) { ## Loop over individuals s[i,1] ~ dunif(xlim[1], xlim[2]) # static activity centers s[i,2] ~ dunif(ylim[1], ylim[2]) for(j in 1:J) { ## Loop over traps d[i,j] <- sqrt((s[i,1] - x[j,1])^2 + (s[i,2] - x[j,2])^2) p[i,j] <- p0*exp(-d[i,j]^2/(2*sigma^2)) ## Capture prob } z[i,first[i]] <- 1 ## Condition on first encounter for(t in (first[i]+1):T) { z[i,t] ~ dbern(z[i,t-1]*phi) ## Model subsequent encounters for(j in 1:J) { for(k in 1:K) { y[i,j,k,t] ~ dbern(z[i,t]*p[i,j]) } } } } } \end{verbatim} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{JAGS} Data \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} 
\hlstd{jags.data.sp} \hlkwb{<-} \hlkwd{list}\hlstd{(}\hlkwc{y}\hlstd{=y.sp,} \hlkwc{n}\hlstd{=n,} \hlkwc{first}\hlstd{=first,} \hlkwc{x}\hlstd{=x,} \hlkwc{J}\hlstd{=J,} \hlkwc{K}\hlstd{=K,} \hlkwc{T}\hlstd{=T,} \hlkwc{xlim}\hlstd{=xlim,} \hlkwc{ylim}\hlstd{=ylim)} \end{alltt} \end{kframe} \end{knitrout} Initial values for $z$ matrix \scriptsize \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{zi.sp} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{NA}\hlstd{, n, T)} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{n) \{} \hlstd{zi.sp[i,(first[i]}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{T]} \hlkwb{<-} \hlnum{1} \hlstd{\}} \hlstd{jags.inits.sp} \hlkwb{<-} \hlkwa{function}\hlstd{()} \hlkwd{list}\hlstd{(}\hlkwc{phi}\hlstd{=}\hlkwd{runif}\hlstd{(}\hlnum{1}\hlstd{),} \hlkwc{z}\hlstd{=zi.sp)} \hlstd{jags.pars.sp} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlstr{"phi"}\hlstd{,} \hlstr{"p0"}\hlstd{,} \hlstr{"sigma"}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill {\normalsize Fit the model} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{jags.post.samples.sp} \hlkwb{<-} \hlkwd{jags.basic}\hlstd{(}\hlkwc{data}\hlstd{=jags.data.sp,} \hlkwc{inits}\hlstd{=jags.inits.sp,} \hlkwc{parameters.to.save}\hlstd{=jags.pars.sp,} \hlkwc{model.file}\hlstd{=}\hlstr{"CJS-spatial.jag"}\hlstd{,} \hlkwc{n.chains}\hlstd{=}\hlnum{3}\hlstd{,} \hlkwc{n.adapt}\hlstd{=}\hlnum{100}\hlstd{,} \hlkwc{n.burnin}\hlstd{=}\hlnum{0}\hlstd{,} \hlkwc{n.iter}\hlstd{=}\hlnum{2000}\hlstd{,} \hlkwc{parallel}\hlstd{=}\hlnum{TRUE}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Posterior distributions} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{plot}\hlstd{(jags.post.samples.sp)} \end{alltt} \end{kframe} {\centering \includegraphics[width=0.6\linewidth]{figure/jc2-plot-1} } \end{knitrout} \end{frame} \section{Spatial CJS with movement} \begin{frame} \frametitle{Modeling dispersal among years} {\large What if activity centers move between years?} \begin{itemize} \normalsize \item Could affect estimates of detection parameters \item More importantly, might provide opportunity to study dispersal \end{itemize} \pause \vfill {\large Potential movement models} \begin{itemize} \normalsize \item Random walk: ${\bm s}_t \sim \mbox{Normal}({\bm s}_{t-1}, \tau^2)$ \item Random walk with attraction point: ${\bm s}_t \sim \mbox{Normal}({\bm s}_{t-1} + \rho(\bar{\bm s} - {\bm s}_{t-1}), \tau^2)$ \item Extensions can be made to allow for resource selection, landscape resistance, etc\dots\footnote{https://esajournals.onlinelibrary.wiley.com/doi/full/10.1002/ecy.3473}. 
\end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{Simulating spatial CJS data with dispersal} \begin{columns} \footnotesize %\small \begin{column}{0.5\textwidth}%{0.5\dimexpr\paperwidth-10pt} Parameters and data dimensions \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{T} \hlkwb{<-} \hlnum{10}\hlstd{; K} \hlkwb{<-} \hlnum{5}\hlstd{; n} \hlkwb{<-} \hlnum{25} \hlcom{## same as before} \hlstd{phi} \hlkwb{<-} \hlnum{0.9}\hlstd{; p0} \hlkwb{<-} \hlnum{0.4}\hlstd{; sigma} \hlkwb{<-} \hlnum{0.05} \hlcom{## dispersal parameter (of random walk)} \hlstd{tau} \hlkwb{<-} \hlnum{0.05} \end{alltt} \end{kframe} \end{knitrout} \pause Activity centers take a random walk \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlcom{## state-space should be bigger} \hlstd{xlim} \hlkwb{<-} \hlstd{ylim} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlnum{0}\hlstd{,} \hlnum{1}\hlstd{)} \hlstd{first} \hlkwb{<-} \hlkwd{rpois}\hlstd{(n,} \hlnum{1}\hlstd{)}\hlopt{+}\hlnum{1} \hlstd{s} \hlkwb{<-} \hlkwd{array}\hlstd{(}\hlnum{NA}\hlstd{,} \hlkwd{c}\hlstd{(n,} \hlnum{2}\hlstd{, T))} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{n) \{} \hlstd{s[i,,first[i]]} \hlkwb{<-} \hlkwd{cbind}\hlstd{(}\hlkwd{runif}\hlstd{(}\hlnum{1}\hlstd{, xlim[}\hlnum{1}\hlstd{], xlim[}\hlnum{2}\hlstd{]),} \hlkwd{runif}\hlstd{(}\hlnum{1}\hlstd{, ylim[}\hlnum{1}\hlstd{], ylim[}\hlnum{2}\hlstd{]))} \hlkwa{for}\hlstd{(t} \hlkwa{in} \hlstd{(first[i]}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{T) \{} \hlstd{s[i,}\hlnum{1}\hlstd{,t]} \hlkwb{<-} \hlkwd{rnorm}\hlstd{(}\hlnum{1}\hlstd{, s[i,}\hlnum{1}\hlstd{,t}\hlopt{-}\hlnum{1}\hlstd{], tau)} \hlstd{s[i,}\hlnum{2}\hlstd{,t]} \hlkwb{<-} \hlkwd{rnorm}\hlstd{(}\hlnum{1}\hlstd{, s[i,}\hlnum{2}\hlstd{,t}\hlopt{-}\hlnum{1}\hlstd{], tau)} \hlstd{\}} \hlstd{\}} \hlstd{d} \hlkwb{<-} \hlstd{p} \hlkwb{<-} \hlkwd{array}\hlstd{(}\hlnum{NA}\hlstd{,} \hlkwd{c}\hlstd{(n, J, T))} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{n) \{} \hlkwa{for}\hlstd{(t} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{T) \{} \hlstd{d[i,,t]} \hlkwb{<-} \hlkwd{sqrt}\hlstd{((s[i,}\hlnum{1}\hlstd{,t]}\hlopt{-}\hlstd{x[,}\hlnum{1}\hlstd{])}\hlopt{^}\hlnum{2} \hlopt{+} \hlstd{(s[i,}\hlnum{2}\hlstd{,t]}\hlopt{-}\hlstd{x[,}\hlnum{2}\hlstd{])}\hlopt{^}\hlnum{2}\hlstd{)} \hlstd{p[i,,t]} \hlkwb{<-} \hlstd{p0}\hlopt{*}\hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{d[i,,t]}\hlopt{^}\hlnum{2} \hlopt{/} \hlstd{(}\hlnum{2}\hlopt{*}\hlstd{sigma}\hlopt{^}\hlnum{2}\hlstd{))} \hlstd{\}} \hlstd{\}} \end{alltt} \end{kframe} \end{knitrout} \end{column} \pause \begin{column}{0.5\textwidth}%{0.5\dimexpr\paperwidth-10pt} Generate $z$ and $y$ \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{z} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{NA}\hlstd{, n, T)} \hlstd{y.move} \hlkwb{<-} \hlkwd{array}\hlstd{(}\hlnum{NA}\hlstd{,} \hlkwd{c}\hlstd{(n, J, T))} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{n) \{} \hlstd{z[i,first[i]]} \hlkwb{<-} \hlnum{1} \hlkwa{for}\hlstd{(t} \hlkwa{in} \hlstd{(first[i]}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{T) \{} \hlstd{z[i,t]} \hlkwb{<-} \hlkwd{rbinom}\hlstd{(}\hlnum{1}\hlstd{,} \hlnum{1}\hlstd{, z[i,t}\hlopt{-}\hlnum{1}\hlstd{]}\hlopt{*}\hlstd{phi)} \hlkwa{for}\hlstd{(j} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{J) \{} \hlstd{y.move[i,j,t]} \hlkwb{<-} \hlkwd{rbinom}\hlstd{(}\hlnum{1}\hlstd{, K, z[i,t]}\hlopt{*}\hlstd{p[i,j,t])} 
\hlstd{\}} \hlstd{\}} \hlstd{\}} \end{alltt} \end{kframe} \end{knitrout} \end{column} \end{columns} \end{frame} \begin{frame}[fragile] \frametitle{Movement of activity center 1} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{plot}\hlstd{(}\hlkwd{t}\hlstd{(s[}\hlnum{1}\hlstd{,,]),} \hlkwc{pch}\hlstd{=}\hlnum{16}\hlstd{,} \hlkwc{type}\hlstd{=}\hlstr{"o"}\hlstd{,} \hlkwc{xlab}\hlstd{=}\hlstr{"x"}\hlstd{,} \hlkwc{ylab}\hlstd{=}\hlstr{"y"}\hlstd{,} \hlkwc{xlim}\hlstd{=}\hlkwd{c}\hlstd{(}\hlnum{0}\hlstd{,} \hlnum{1}\hlstd{),} \hlkwc{ylim}\hlstd{=}\hlkwd{c}\hlstd{(}\hlnum{0}\hlstd{,} \hlnum{1}\hlstd{),} \hlkwc{asp}\hlstd{=}\hlnum{1}\hlstd{,} \hlkwc{col}\hlstd{=}\hlstr{"blue"}\hlstd{)} \end{alltt} \end{kframe} {\centering \includegraphics[width=0.65\linewidth]{figure/s1-1} } \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Movement of all activity centers} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor} {\centering \includegraphics[width=0.65\linewidth]{figure/s-all-1} } \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Spatial CJS model with dispersal in JAGS} \vspace{-2mm} \tiny %\fbox{\parbox{\linewidth}{\verbatiminput{CJS-spatial-move.jag}}} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{writeLines}\hlstd{(}\hlkwd{readLines}\hlstd{(}\hlstr{"CJS-spatial-move.jag"}\hlstd{))} \end{alltt} \end{kframe} \end{knitrout} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.678, 0.847, 0.902}\color{fgcolor}\begin{kframe} \begin{verbatim} model { phi ~ dunif(0,1) p0 ~ dunif(0,1) sigma ~ dunif(0, 2) tau ~ dunif(0, 2) tauprec <- 1/tau^2 # must convert tau to precision for(i in 1:n) { s[i,1,first[i]] ~ dunif(xlim[1], xlim[2]) s[i,2,first[i]] ~ dunif(ylim[1], ylim[2]) for(j in 1:J) { d[i,j,first[i]] <- sqrt((s[i,1,first[i]] - x[j,1])^2 + (s[i,2,first[i]] - x[j,2])^2) p[i,j,first[i]] <- p0*exp(-d[i,j,first[i]]^2/(2*sigma^2)) } z[i,first[i]] <- 1 for(t in (first[i]+1):T) { s[i,1,t] ~ dnorm(s[i,1,t-1], tauprec)#T(xlim[1], xlim[2]) s[i,2,t] ~ dnorm(s[i,2,t-1], tauprec)#T(ylim[1], ylim[2]) for(j in 1:J) { d[i,j,t] <- sqrt((s[i,1,t] - x[j,1])^2 + (s[i,2,t] - x[j,2])^2) p[i,j,t] <- p0*exp(-d[i,j,t]^2/(2*sigma^2)) } z[i,t] ~ dbern(z[i,t-1]*phi) for(j in 1:J) { y[i,j,t] ~ dbin(z[i,t]*p[i,j,t], K) # for(k in 1:K) { y[i,j,k,t] ~ dbern(z[i,t]*p[i,j,t]) } } } } } \end{verbatim} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{JAGS} \small Data \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{jd.sp.move} \hlkwb{<-} \hlkwd{list}\hlstd{(}\hlkwc{y}\hlstd{=y.move,} \hlkwc{n}\hlstd{=n,} \hlkwc{first}\hlstd{=first,} \hlkwc{x}\hlstd{=x,} \hlkwc{J}\hlstd{=J,} \hlkwc{K}\hlstd{=K,} \hlkwc{T}\hlstd{=T,} \hlkwc{xlim}\hlstd{=xlim,} \hlkwc{ylim}\hlstd{=ylim)} \end{alltt} \end{kframe} \end{knitrout} Initial values for $z$, and parameters to monitor \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{zi} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{NA}\hlstd{, n, T)} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{n) \{} \hlstd{zi[i,(first[i]}\hlopt{+}\hlnum{1}\hlstd{)}\hlopt{:}\hlstd{T]} \hlkwb{<-} \hlnum{1} \hlstd{\}} \hlstd{ji.sp.move} \hlkwb{<-} \hlkwa{function}\hlstd{()} 
\hlkwd{list}\hlstd{(}\hlkwc{phi}\hlstd{=}\hlkwd{runif}\hlstd{(}\hlnum{1}\hlstd{),} \hlkwc{z}\hlstd{=zi,} \hlkwc{s}\hlstd{=s)} \hlcom{## Cheating with s} \hlstd{jp.sp.move} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlstr{"phi"}\hlstd{,} \hlstr{"p0"}\hlstd{,} \hlstr{"sigma"}\hlstd{,} \hlstr{"tau"}\hlstd{)} \hlstd{good1} \hlkwb{<-} \hlkwd{which.max}\hlstd{(}\hlkwd{rowSums}\hlstd{(z,} \hlkwc{na.rm}\hlstd{=}\hlnum{TRUE}\hlstd{)}\hlopt{+}\hlkwd{rowSums}\hlstd{(y.move,} \hlkwc{na.rm}\hlstd{=}\hlnum{TRUE}\hlstd{))} \hlstd{time1} \hlkwb{<-} \hlstd{first[good1]}\hlopt{:}\hlstd{T} \hlstd{jp.s1x} \hlkwb{<-} \hlkwd{paste0}\hlstd{(}\hlstr{"s["}\hlstd{, good1,} \hlstr{",1,"}\hlstd{, time1,} \hlstr{"]"}\hlstd{)} \hlstd{jp.s1y} \hlkwb{<-} \hlkwd{paste0}\hlstd{(}\hlstr{"s["}\hlstd{, good1,} \hlstr{",2,"}\hlstd{, time1,} \hlstr{"]"}\hlstd{)} \hlstd{jp.z1} \hlkwb{<-} \hlkwd{paste0}\hlstd{(}\hlstr{"z["}\hlstd{, good1,} \hlstr{","}\hlstd{, time1,} \hlstr{"]"}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Fit the model \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{jps.sp.move} \hlkwb{<-} \hlkwd{jags.basic}\hlstd{(}\hlkwc{data}\hlstd{=jd.sp.move,} \hlkwc{inits}\hlstd{=ji.sp.move,} \hlkwc{parameters.to.save}\hlstd{=} \hlkwd{c}\hlstd{(jp.sp.move, jp.s1x, jp.s1y, jp.z1),} \hlkwc{model.file}\hlstd{=}\hlstr{"CJS-spatial-move.jag"}\hlstd{,} \hlkwc{n.chains}\hlstd{=}\hlnum{3}\hlstd{,} \hlkwc{n.adapt}\hlstd{=}\hlnum{100}\hlstd{,} \hlkwc{n.burnin}\hlstd{=}\hlnum{0}\hlstd{,} \hlkwc{n.iter}\hlstd{=}\hlnum{2000}\hlstd{,} \hlkwc{parallel}\hlstd{=}\hlnum{TRUE}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Posterior distributions} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{plot}\hlstd{(jps.sp.move[,jp.sp.move])} \end{alltt} \end{kframe} {\centering \includegraphics[width=0.6\linewidth]{figure/jc3-plot-1} } \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Posterior movement path} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor} {\centering \includegraphics[width=0.8\linewidth]{figure/jc3-s-1} } \end{knitrout} \end{frame} \begin{frame} \frametitle{Summary} \large % Key points % \begin{itemize}[<+->] % \item Spatial CJS models allow for inference about survival, movement, and capture probability. \\ \pause \vfill % \item We could have considered many other movement models. \\ \pause \vfill % \item Next, we'll use Jolly-Seber models to study spatio-temporal variation in abundance and recruitment. \\ % \end{itemize} \end{frame} % \begin{frame} % \frametitle{Assignment} % {\bf \large For next week} % \begin{enumerate}[\bf (1)] % \item Modify {\tt CJS-spatial.jag} to estimate yearly survival % \item Work on analysis of your own data % \item Prepare a 2-min presentation on your planned analysis. % \item Paper should be a minimum of 4 pages, single-spaced, 12-pt % font, including: % \begin{itemize} % \item Introduction % \item Methods (including model description) % \item Results (with figures) % \item Discussion % \end{itemize} % \end{enumerate} % \end{frame} \end{document}
{ "alphanum_fraction": 0.6161654855, "avg_line_length": 32.0798771121, "ext": "tex", "hexsha": "2863effbbb5bde121d95dca870ccc17905a38f28", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f738381e9260e8fbd1a63f0c96402612aa44a3d4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "rbchan/popdy-inference", "max_forks_repo_path": "lectures/CJS/lecture-CJS.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f738381e9260e8fbd1a63f0c96402612aa44a3d4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "rbchan/popdy-inference", "max_issues_repo_path": "lectures/CJS/lecture-CJS.tex", "max_line_length": 243, "max_stars_count": null, "max_stars_repo_head_hexsha": "f738381e9260e8fbd1a63f0c96402612aa44a3d4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "rbchan/popdy-inference", "max_stars_repo_path": "lectures/CJS/lecture-CJS.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 17792, "size": 41768 }
Cloud computing is fostering a model of application development that combines authored code with functionality provided by extant web services. Consumer-targeting applications (particularly those designed for mobile platforms) interact with highly scalable and reliable ``back-end'' web services that are deployed in remote data centers. In addition, enterprise Information Technology (IT) strategies are focusing on exposing their {\em digital assets} as web services for consumption by their employees via corporate applications or by their customers via software and data ``as-a-service''.

This rapidly proliferating model of application development and IT operation is designed to scale, both in the number of applications, and in the number of web services that must be hosted and curated. Each web service exports one or more Application Programming Interfaces (APIs) that must be accessible by users, applications, and/or other services. Because applications encode their logic in terms of remote ``calls'' to web services, these APIs define functional boundaries that must be incorporated into the application architecture.

From an IT management perspective, the vast collection of web APIs must be governed as critical infrastructure components or else ``client'' applications will encounter problems when the APIs they depend on change or fail. Further, web services often make API calls to each other, creating dependencies between APIs that must be carefully managed. Moreover, the APIs have a software life cycle that is independent of, and longer than, the life cycle of the services themselves. As technological improvements make better service implementations possible, IT management must upgrade these services while keeping the APIs unchanged. Similarly, as new API versions emerge to increase functionality, backward compatibility must be maintained in a way that takes into account the strategic objectives of the organization.

Today's cloud platforms severely lag behind in API governance. Typical cloud environments impose restrictions on applications and web services in order to guarantee scalability and high availability (e.g. Google App Engine~\cite{gae} prevents applications from accessing the file system). However, they do not prevent developers from violating software development and maintenance policies. Consequently, developers often violate naming and versioning conventions when naming digital assets, take dependencies on incorrect or deprecated APIs, and very often end up re-implementing program logic from scratch instead of reusing extant APIs that provide the required functionality. This lack of API governance also leads to many security vulnerabilities (e.g. DoS attacks by malicious or poorly coded clients), violations of IP rights and licensing terms, and in some cases, even financial losses.

Web APIs also play a key role in managing and enforcing service level agreements (SLAs). Ideally, any public API should be exposed with a well-defined SLA detailing its performance and quality of service (QoS) characteristics. Without proper means of API governance, developers that host services in cloud settings have no way of enforcing competitive SLAs on their APIs, or even getting a thorough idea of what kind of SLAs their services can uphold. Today, developers have to perform a lot of offline and online testing to learn the performance characteristics of their APIs. Furthermore, developers often have to be content with basic SLA monitoring as opposed to comprehensive SLA enforcement.
{\bf My research focuses on designing and implementing systems for defining and enforcing API governance as a cloud-native feature.} API management is often a documentary function (e.g.,~\cite{apigee,layer7,wso2am}) that does not include the ability to enforce policies, particularly at scale in a cloud setting. My research agenda focuses on making API governance a ``first-class'' cloud service that is fully integrated with the core of the cloud and the cloud management tools. In a cloud-based IT context, APIs must be governed by policies that are defined by the organization to ensure predictable operation of the web services hosted by the cloud. Additionally, there should be automated mechanisms in place for easily developing new APIs, discovering existing APIs, analyzing the syntactic and semantic features of APIs, and porting applications across APIs. My research will make it trivial for developers to consume, combine and leverage APIs to create powerful applications, while adhering to tried and tested software engineering best practices. They will be able to quickly learn the QoS properties of APIs and enforce competitive SLAs through the cloud with clear and accurate levels of certainty. My plan is to
\begin{itemize}
%\item develop new {\bf policy specification languages for web
%services} that
%enables governance enforcement during service development,
%and at runtime,
\item develop a distributed {\bf cloud-integrated service for enforcing IT management policies} governing web APIs when operating at scale in cloud settings (a small illustrative sketch of such a policy check appears at the end of this section),
\item develop automated mechanisms for discovering, analyzing, comparing and reasoning about web APIs, in order to {\bf simplify the API-based development model}, and,
\item develop methods for {\bf automated QoS and performance analysis} of web services to formulate and enforce competitive SLAs for APIs hosted in clouds.
\end{itemize}

My research spans a number of prominent research areas in computer science, including programming languages, distributed systems and software engineering. The proposed policy enforcement platform motivates new research on innovative ways of using existing distributed consensus, scalability and high availability techniques. It also calls for more research in novel policy specification languages that are developer-friendly, and can be verified and executed efficiently in cloud settings. Automated performance analysis requires more research in using static analysis methods (e.g., abstract interpretation, WCET analysis, etc.), system modeling, and simulations for analyzing a wide range of services deployed in modern cloud platforms. Above all, since the primary target of this work is cloud platforms, there needs to be a strong focus on making all these proposed mechanisms scale to handle thousands of APIs, policies, client applications and users.

\subsection{Outcomes and Assessment}
The outcome of this research plan, I believe, will be new advances in services computing, IT management, cloud computing, API policy enforcement, and SLA management. If successful, the research will be transformative, since APIs are rapidly becoming the most valuable digital asset hosted in cloud settings and my work enables scalable management of these assets. In addition, as a cloud systems researcher I intend to use open source, on-premise clouds (AppScale~\cite{krintzappscale13} and Eucalyptus~\cite{eucalyptus09}) to produce research and educational artifacts (data sets, demos and test applications, code repositories, etc.) and make them available
to my colleagues and to the wider research community. I also plan to release my systems, tools, and mechanisms as persistent software artifacts and to monitor their uptake. Because of their design and their use of open source cloud technologies, the results of my research will be readily available to researchers and educators who wish to develop new artifacts and curricula for cloud computing.
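To make the notion of an enforceable governance policy concrete, the sketch below shows one of the simplest checks such a service could perform: flagging an application dependency on a deprecated API version. It is purely illustrative; the structure names, the hard-coded deprecation list, and the check itself are assumptions made for this sketch rather than part of any existing system or of the platform proposed above.

\begin{verbatim}
/* Illustrative sketch only: flag dependencies on deprecated API versions.
 * The type names and the hard-coded deprecation list are assumptions. */
#include <stdio.h>
#include <string.h>

struct api_dep { const char *api; int version; };

/* APIs an organization has marked as deprecated (assumed input). */
static const struct api_dep deprecated[] = {
    { "billing",      1 },
    { "user-profile", 2 },
};

/* Return 1 if the dependency violates the deprecation policy. */
static int violates_policy(const struct api_dep *d)
{
    size_t i;
    for (i = 0; i < sizeof(deprecated) / sizeof(deprecated[0]); ++i)
    {
        if (strcmp(d->api, deprecated[i].api) == 0 &&
            d->version <= deprecated[i].version)
            return 1;
    }
    return 0;
}

int main(void)
{
    struct api_dep dep = { "billing", 1 };
    if (violates_policy(&dep))
        printf("policy violation: %s v%d is deprecated\n", dep.api, dep.version);
    return 0;
}
\end{verbatim}

In a real deployment such checks would, of course, be driven by a policy specification and enforced inside the cloud platform rather than hard-coded.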
{ "alphanum_fraction": 0.8202410915, "avg_line_length": 58.9765625, "ext": "tex", "hexsha": "cdc3e0364103a005c36117e22e70128074940a5b", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-05-25T02:59:15.000Z", "max_forks_repo_forks_event_min_datetime": "2020-05-25T02:59:15.000Z", "max_forks_repo_head_hexsha": "d58fe64bb867ef58af19c1d84a5e1ec68ecddd3d", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "UCSB-CS-RACELab/eager-appscale", "max_forks_repo_path": "Eager/paper/proposal/intro.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d58fe64bb867ef58af19c1d84a5e1ec68ecddd3d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "UCSB-CS-RACELab/eager-appscale", "max_issues_repo_path": "Eager/paper/proposal/intro.tex", "max_line_length": 123, "max_stars_count": 3, "max_stars_repo_head_hexsha": "d58fe64bb867ef58af19c1d84a5e1ec68ecddd3d", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "UCSB-CS-RACELab/eager-appscale", "max_stars_repo_path": "Eager/paper/proposal/intro.tex", "max_stars_repo_stars_event_max_datetime": "2018-07-16T18:20:23.000Z", "max_stars_repo_stars_event_min_datetime": "2016-06-12T01:18:49.000Z", "num_tokens": 1467, "size": 7549 }
\documentclass[12pt, english]{article}
\usepackage[utf8]{inputenc}
\usepackage{mathptmx}
\usepackage{color}
\usepackage[dvipsnames]{xcolor}
\definecolor{darkblue}{RGB}{0.,0.,139.}
\usepackage[top=1in, bottom=1in, left=1in, right=1in]{geometry}
\usepackage{amsmath}
\usepackage{amstext}
\usepackage{amssymb}
\usepackage{setspace}
\usepackage{lipsum}
\usepackage[authoryear]{natbib}
\usepackage{url}
\usepackage{booktabs}
\usepackage[flushleft]{threeparttable}
\usepackage{graphicx}
\usepackage[english]{babel}
\usepackage{pdflscape}
\usepackage[unicode=true,pdfusetitle,bookmarks=true,bookmarksnumbered=false,bookmarksopen=false,breaklinks=true,pdfborder={0 0 0},backref=false,colorlinks,citecolor=black,filecolor=black,linkcolor=black,urlcolor=black]{hyperref}
\usepackage[all]{hypcap} % Links point to top of image, builds on hyperref
\usepackage{breakurl} % Allows urls to wrap, including hyperref
\linespread{2}

\begin{document}

\begin{doublespace}
\title{The Racial Wage Gap and the Coronavirus\thanks{I want to thank Dr. Tyler Ransom for his feedback and guidance.}}
\end{doublespace}

\author{Emma Reinsch\thanks{Department of Economics, University of Oklahoma.\ E-mail~address:~\href{mailto:[email protected]}{[email protected]}}}

% \date{\today}
\date{April 13, 2021}

\maketitle

\section{Abstract}

In the wake of the COVID-19 (coronavirus) pandemic, overlooked social problems have resurfaced. The movement at the forefront of all news and social media websites was the Black Lives Matter movement. The protests that began in the United States due to the police brutality experienced by African Americans quickly spread to the rest of the world. These protests were a culmination of frustration experienced by this marginalized social group, and the death of George Floyd was only the climax to centuries of fury. Not only must African Americans continuously worry about the threat of police brutality, they also face additional discrimination of all types, every day. This paper will examine a portion of race-based discrimination. In previous research, I study the impact of the wage gap on gender-based discrimination with an emphasis on STEM fields. This paper will study the wage gap between White and African Americans while controlling for additional mitigating factors. Using one log-level simple linear regression and five multiple linear regression models, I will predict the wage gap between African Americans and White Americans across all occupations. I have set up a model that tests hourly wage against race (Black American), gender (female), occupation, marital status, number of children present in the household, level of education, and age. My second main regression model will test the same relationship listed above as well as variables specific to COVID-19. Using cross-sectional data ranging from 2015 to 2021 retrieved from the Current Population Survey (CPS), I performed six linear regressions and found that, in each case, hourly income earned by African Americans is significantly less than income earned by White Americans.

\section{Introduction}

The year 2020 was one for the history books. It was mainly a year that made the world reflect on its freedoms, like sitting inside a restaurant with elderly family members or simply taking a walk outside. On January 21st, 2020, the first COVID-19 case was documented in the United States \citep{Hist2021}.
While most of the population refused to acknowledge the disease’s presence, COVID-19 rapidly spread throughout different countries, which prompted the worldwide “lockdown” on March 13th, 2020 \citep{Hist2021}. In a frantic effort to reduce the spread of the virus and “flatten the curve”, health experts encouraged social distancing, hand washing, and wearing masks. While the pandemic ripped through the world destroying families and the global economy, the U.S. was also experiencing a different type of epidemic: racial discrimination and injustice. The call for social justice and police reform did not start in 2020. However, the death of George Floyd in May 2020 sparked growing conversations surrounding the topic of equality \citep{DAMORE2020}. Soon, efforts to promote education of Black history and the BlackLivesMatter movement became the forefront of newscasts and social media websites. While this growing awareness encompassing racial discrimination and ending racism has undoubtedly improved the lives of Black Americans, it is important to understand how different types of microaggressions are affecting this racial group. Along with blatant, unfavorable bias shown towards African Americans, the income wage gap between Black and White Americans quantifies some of the discrimination faced by this vulnerable social group. Furthermore, it is also relevant to note the disproportionate impact of COVID-19 on African Americans and investigate the change in the income gap (hourly wage gap) after taking the pandemic into account.

In my previous research, I examine the residual discrimination in the form of the gender-pay gap for women employed in STEM fields while controlling for mitigating factors. On average, women in STEM fields earned approximately 16\% less than men according to 2018 American Community Survey (ACS) data \citep{REINSCH2020}. While it is difficult to compare across different surveys, the trends outlined by \citet{McCall2001} show that the wage gap may be larger between races than between genders. This research will employ 2015-2021 Current Population Survey (CPS) data and draw conclusions about the racial wage gap between highly educated Black and White Americans who hold at least a bachelor's degree, across all occupations. Data from March 2020 to February 2021 will allow us to draw specific conclusions about the impact of the pandemic on the racial wage gap. Previous literature documents the experiences of highly educated minorities (specifically Blacks and Hispanics) in terms of mismatched job placement and wage penalties. To further expand on the research surrounding the racial wage gap, I explore the disparate effects of COVID-19 on Black Americans' wages through unemployment during the pandemic, job loss, and their ability to work from home. The remainder of this research paper will present a brief literature review; descriptions of the data, variables, and statistical regression models used to conduct this research; explanations of the obtained results; and a discussion of the results and concluding remarks.

\section{Literature Review}

The field of labor economics is one that most researchers will never fully comprehend; some data is simply unobservable and cannot be accurately measured. For instance, when attempting to measure a wage gap, no matter how many regressions econometricians run, the reasons behind the existence of wage gaps may not ever be fully discernible. In the meantime, researchers must gather the available evidence to draw conclusions.
Throughout the last two years, I have become fascinated with labor economics. In previous projects, I concentrated my research on the gender income wage gap as a form of gender-based discrimination in STEM and economics-related fields. I broaden the aforementioned study by investigating the hourly wage gap experienced between Black and White Americans pre- and “post”-pandemic. Historically, the racial wage gap has been crowded out by more blatant, systemic racism \citep{Koechlin2019}. In fact, the racial earnings gap is larger between races than between genders \citep{McCall2001}. The residual discrimination felt by African Americans is exhibited in many ways: police brutality, mismatched economic opportunities, higher rates of incarceration, and inadequate access to healthcare during a global pandemic \citep{CLAY2021}. The year 2020 has reminded society that it still has a long way to go before achieving equity and equality. While my data analysis will only focus on one aspect of potential discrimination, the racial wage gap, it is important to understand previous research surrounding racial discrimination in the labor market.

An important reason behind the existence of the racial wage gap is attributed to mismatched employment opportunities. According to \citet{LuLi2021}, even highly educated minorities and immigrants are disproportionately channeled into mismatched jobs. In turn, minorities face greater wage penalties due to vertical disparity. This shows that nativity stratification is correlated with future job placement and earnings. Other researchers also conclude that there exists a large negative effect of immigration on the relative wages of Hispanic and African Americans \citep{McCall2001}. While the 1967 extension of the minimum wage explains over 20\% of the reduction in the racial earnings gap \citep{DerenoncourtMontialoux2020}, progress continues to stall due to mismatched employment and education opportunities largely attributed to native geographic location \citep{LuLi2021}. In addition, even though the wage gap has slightly decreased, and more minorities are finding employment, there is evidence that the types of industry African and Hispanic Americans choose to work in are hiding true unemployment \citep{LangLehmannJee-Yeon2012}. Figure \ref{fig:fig1} and Figure \ref{fig:fig2} show that while income over time for both racial groups is steadily increasing, the y-axes for the two groups cover different ranges. While White Americans' yearly income has increased to above \$$65,000$, African Americans' yearly income has just crossed the \$$40,000$ range. These figures show that White Americans make over \$$25,000$ more per year than African Americans.

Next, while the income wage gap between Black and White Americans is large, the wealth gap is larger. The latter can be partially explained by centuries of racist exclusion, violence, and plunder. Furthermore, as previously mentioned, labor economists have failed to draw complete conclusions about wage and wealth gaps since they ignore essential ways that racial inequality has been reproduced by U.S. capitalism \citep{Koechlin2019}. U.S. capitalism is built upon the Neoclassical notion that if a person works hard, they can climb the social ladder. This notion is also known as The American Dream. While many individuals benefit from this economic system, it is important to understand the institutional disadvantages experienced by susceptible social groups.
As \citet{LuLi2021} describe in their study, there are other institutional barriers preventing African Americans and other visible minorities from achieving the American Dream. These barriers include native geographic location, access to education (inner-city school vs.\ suburban school), access to higher education, and adequate access to healthcare. In particular, more than half of the Black-White wage gap in the United States in the early 1990s can be accounted for by discrepancies in human capital accumulation \citep{Koechlin2019}. By filtering my data to only include individuals holding at least a bachelor's degree, I eliminate the human capital model comparison.

There is no “simple fix” to the wage gap problem other than a policy overhaul and the elimination of taste-based discrimination. While there has been slow improvement, society still has not reached the point of equality. More awareness was drawn to the problem in 2020 after the controversial killings of George Floyd and Breonna Taylor. However, African Americans face another dangerous threat: the coronavirus. Studies show that racial disparities exist in the way that COVID-19 has impacted individuals. African Americans, in fact, are disproportionately affected by this disease. There exists an association between race and higher mortality rates \citep{CLAY2021}. The reasons behind this statistic include age, a higher presence of comorbidities in African Americans, health care affordability issues, and access issues. Not only are African Americans more likely to lose their lives to this disease, but they may also be worse off if they manage to survive. I hypothesize that African Americans are more likely to become unemployed during the pandemic than their White counterparts.

As previously mentioned, this research and data analysis will focus on the (hourly) racial wage/earnings gap across all occupations. I contribute to the labor economics literature by studying the hourly, average racial wage gap before the pandemic began as well as the hourly wage gap throughout the pandemic. I am testing whether specific COVID-19 factors, such as the ability to work remotely (with pay) and unemployment due to COVID-19, disproportionately affected Black Americans as opposed to White Americans.

\section{Data}
\begin{itemize}
\item Source of Data

I extracted my data from IPUMS-CPS using variables and sampling from the Current Population Survey (CPS). This data is authorized by the Minnesota Population Center with an affiliation to the University of Minnesota. The Public Use Microdata Set from the Current Population Survey uses data from the Census to easily identify critical variables and manipulate data for research purposes. The data extracted focuses primarily on individuals instead of households. I selected data ranging from 2015 to 2021 (January and February exclusively) representing the United States. The sample of interest consists of college graduates holding at least a bachelor's degree who are in the labor force (ages 25 to 65). My regressions were calculated using 91,219 observations.

\item Description of Selected Variables

The main variables used are reported in Table \ref{tab:variables}. The income-wage variable was transformed to the natural logarithm of individuals' wages. This allowed for greater ease when interpreting, since all values in the sample were positive.
The conversion from hourly wage to log(hourwage) allows the coefficients on the independent variables to be interpreted as the approximate percentage change in hourly wage as each variable increases by one unit. The log(hourwage) variable was restricted to incomes ranging from \$$10$ to \$$500$ per hour, since I predict most individuals holding at least a bachelor's degree will earn hourly wages within this span. I filter the data to only include employed individuals (present in the labor force). The primary independent variable being tested in my hypothesis is Black Americans' earnings compared to White Americans' earnings, which is measured by black. Additional independent variables are represented by female, AGE, AGE.squared, EDUC, OCC, nchild, notmarried, covid, covidpayremote, covidunable, and covidlook. Prior to running regressions on the selected data, I made the following manipulations:
\begin{itemize}
\item Black: black, two or more races, and three or more races where black was included = 1, all others = 0
\item Female: female = 1, male = 0
\item AGE: Only individuals between the ages of 25 and 65 were included in my analysis.
\item AGE.squared: Ages between 25 and 65 have been squared for additional accuracy.
\item EDUC: Only individuals with a bachelor's degree and above were included: bachelor's degree = 111 (EDUC $\geq$ 111).
\item Notmarried: married = 0, not married (single, widowed, divorced, separated) = 1
\item Nchild: no children = 0, 1 or more children in the household = 1
\item Covid: Observations occurring after 04/2020 = 1, all others = 0
\item Covidpayremote: Observations were able to work remotely with pay = 1, others = 0
\item Covidunable: Observations were unable to work due to COVID-19 = 1, others = 0
\item Covidlook: Observations were looking for work during the COVID-19 pandemic = 1, others = 0
\end{itemize}
\item Verifying Gauss-Markov Assumptions
\begin{enumerate}
\item Linear in Parameters -- The model consists of a multiple linear regression that is linear in parameters and follows
\begin{equation}
Y = \beta_0 + \beta_1 X_1 + \dots + \beta_k X_k + u
\end{equation}
\item Random Sampling -- The data include approximately 91,219 observations from the United States, across a wide range of demographics. The sampling is assumed to be random.
\item No Perfect Collinearity -- No independent variable in the model is an exact linear combination of the other independent variables.
\item Zero Conditional Mean, $E(u|x) = 0$ -- It will be difficult to assume Gauss-Markov Assumption 4 for this sample. The results below across all regression models are significant, the sample size is large, and I have included many relevant independent variables, which should control for omitted variable bias. However, the R-squared across the models shows that the independent variables only slightly account for the variation in the dependent variable (log(hourwage)).
\item Homoskedasticity -- The errors $u$ are assumed to have constant variance across the $x$ variables.
\end{enumerate}
\end{itemize}

\section{Methods and Models}
Adapted from the classical linear regression model,
\begin{equation}
Y = X\beta + \epsilon,
\end{equation}
where $Y$ is the dependent variable log(hourwage) and $X$ is the set of all independent variables (listed in Table \ref{tab:variables}), the following models can be obtained:
\begin{itemize}
\item Model 1: Simple Linear Regression Model.
The first model is a simple regression model, in which I regress hourly wage on black (Black American) alone, observing the direct relationship between race and earnings.
\begin{equation}
\log(hourwage) = \beta_0 + \beta_1 black + u
\end{equation}
\item Model 2: Multiple Linear Regression Model.
The second model is a multiple regression model. This model includes black, female, AGE, AGE.squared, OCC, EDUC, notmarried, and nchild. I examine the average residual hourly wage gap between Black and White Americans over a six-year period.
\begin{multline}
\log(hourwage) = \beta_0 + \beta_1 black + \beta_2 female + \beta_3 AGE + \beta_4 AGE.squared + \beta_5 OCC + \beta_6 EDUC +\\
\beta_7 notmarried + \beta_8 nchild + u
\end{multline}
\item Model 3: Multiple Linear Regression Model with interaction term.
The third model is also a multiple linear regression model, with the dummy variable black interacting with the dummy variable female. The interaction between these variables will allow us to directly examine the influence of gender on the racial hourly wage gap.
\begin{multline*}
\log(hourwage) = \beta_0 + \beta_1 black*female + \beta_2 AGE + \beta_3 AGE.squared + \beta_4 OCC + \beta_5 EDUC +\\
\beta_6 notmarried + \beta_7 nchild + u
\end{multline*}
\item Model 4: Multiple Linear Regression Model with interaction term.
The fourth model is also a multiple linear regression model, with the dummy variable black interacting with the variable OCC (occupation). The interaction between these variables will allow us to directly observe the racial hourly wage gap across all occupations.
\begin{multline*}
\log(hourwage) = \beta_0 + \beta_1 black*OCC + \beta_2 female + \beta_3 AGE + \beta_4 AGE.squared + \beta_5 EDUC +\\
\beta_6 notmarried + \beta_7 nchild + u
\end{multline*}
\item Model 5: Multiple Linear Regression Model with interaction term.
The fifth model is also a multiple linear regression model, with the dummy variable black interacting with the variable EDUC (bachelor's degree education and above). The interaction between these variables allows us to investigate how level of education influences the racial hourly wage gap.
\begin{multline*}
\log(hourwage) = \beta_0 + \beta_1 black*EDUC + \beta_2 female + \beta_3 AGE + \beta_4 AGE.squared + \beta_5 OCC +\\
\beta_6 notmarried + \beta_7 nchild + u
\end{multline*}
\item Model 6: Multiple Linear Regression Model with COVID-19 variables and interaction terms.
The sixth model is also a multiple linear regression model, with new variables describing COVID-19 impacts on employment and wages. I hypothesize that the inclusion of these covid variables will worsen the hourly wage gap between Black and White Americans.
\begin{multline*}
\log(hourwage) = \beta_0 + \beta_1 black*covid + \beta_2 black*female + \beta_3 black*covidpayremote +\\
\beta_4 black*covidunable + \beta_5 black*covidlook + \beta_6 black*OCC + \beta_7 EDUC + \beta_8 AGE +\\
\beta_9 AGE.squared + \beta_{10} notmarried + \beta_{11} nchild + u
\end{multline*}
\end{itemize}

\section{Results/Findings}
The main results are reported in Table \ref{tab:estimates}. There are six regression models present in the table.

Model 1 in Table \ref{tab:estimates} allowed me to estimate the relationship between the independent variable "black" and the dependent variable log(hourwage). On average, over a six-year period (2015-2021), Black Americans earned 11.7\% less than their White counterparts per hour. This result is statistically significant at the 5\% level (p\textless0.05). Therefore, we reject the null hypothesis that White and Black Americans' hourly wages are equal in favor of the alternative hypothesis that Black Americans earn less than White Americans.
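Note that in a log-level specification, the coefficient on a dummy variable only approximates the percentage difference in the outcome; the exact percentage difference implied by an estimated coefficient $\hat{\beta}$ is
\begin{equation*}
100\left(e^{\hat{\beta}} - 1\right)\%.
\end{equation*}
If the Model 1 estimate underlying the quoted 11.7\% figure is $\hat{\beta} = -0.117$, the exact implied gap is $100(e^{-0.117}-1) \approx -11.0\%$, so the approximation and the exact value remain close at gaps of this size.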
The robust standard error for the variable "black" is 0.005 signifying that Black Americans earn between 11.2\% and 12.2\% less than their White counterparts across all occupations and that this difference is significant at the 5\% level. While Model 1 is statistically significant, we observe an R-squared value of 0.005. This indicates that the variable "black" only accounts for 0.5\% of the variation in hourly wages. I expect Gauss-Markov Assumption \#4 to be violated due to a long list of unobservable variables that are likely correlated to the variable "black" (ie: cognitive ability, preference for specific types of jobs, parents' education levels, etc.). Model 2 in Table \ref{tab:estimates} represents a multiple regression model. I include the independent variables "black", "female", "AGE", "AGE.squared", "EDUC", "OCC", "notmarried", and "nchild". On average, over a six year period, Black Americans earned 10.4\% less than their White counterparts per hour. This result is statistically significant at the 5\% level (p\textless0.05). Therefore, we reject the null hypothesis in favor of the alternative hypothesis. The racial hourly wage gap is likely smaller due to several mitigating variables that are included in the regression. Specifically, by observing the female variable, we conclude that on average, women earned 10.9\% less than their male counterparts per hour. The robust standard error for the variable "black" is 0.005 signifying that Black Americans earn between 9.9\% and 10.9\% less than White Americans and that this difference is statistically significant at the 5\% level. The R-squared for Model 2 is 0.126. This indicates that the independent variables listed in Model 2 only account for 12.6\% of the variation in hourly wage. While this coefficient of determination is higher, it is still considered low. Model 3 in Table \ref{tab:estimates} represents a multiple regression model with the variable "black" interacting with the variable "female". I incorporate this interaction term to demonstrate the direct relationship between hourly wage and African American women. The independent variables listed in Model 2 are reused in Model 3. On average, over the same six year period, female African Americans earn 13.2\% less than their White counterparts per hour. This result is statistically significant at the 5\% level (p\textless0.05). Therefore, we reject the null hypothesis in favor of the alternative hypothesis. We observe an increase in the racial earnings gap due to the interaction with the variable "female". The robust standard error for the variable "black" is 0.008 which signifies that female African Americans earn between 12.4\% and 14.0\% less than male White Americans and this result is statistically significant at the 5\% level. The R-squared for Model 3 is also 0.126. This indicates that the independent variables listed in Model 3 only account for 12.6\% of the variation in hourly wage. Model 4 in Table \ref{tab:estimates} represents a multiple regression model with the variable "black" interacting with the variable "OCC" (occupation). I incorporate this interaction term to demonstrate the direct relationship between hourly wage and employed African Americans. The independent variables listed in Model 2 are reused in Model 4. On average, employed African Americans earn 12.3\% less than their White counterparts per hour. This result is statistically significant at the 5\% level (p\textless0.05). Therefore, we reject the null hypothesis in favor of the alternative hypothesis. 
We observe a decrease in the coefficient for "black" compared to the coefficient in Model 2 showing that the hourly wage gap increases for employed African Americans. The robust standard error for the variable "black" is 0.009 which signifies that working African Americans earn between 11.4\% and 13.2\% less than White Americans and this result is statistically significant at the 5\% level. The R-squared for Model 4 is also 0.126. This indicates that the independent variables listed in Model 4 only account for 12.6\% of the variation in hourly wage. Model 5 in Table \ref{tab:estimates} represents a multiple regression model with the variable "black" interacting with the variable "EDUC" (education). I incorporate this interaction term to demonstrate the direct relationship between hourly wage and educated African Americans. The independent variables listed in Model 2 are reused in Model 5. On average, educated (minimum bachelor's degree) African Americans earn 55.4\% more than their White counterparts per hour. This result is statistically significant at the 5\% level (p\textless0.05). We observe an extreme increase in the coefficient for "black" compared to the coefficient in Model 2 showing that the hourly wage gap does not exist for African Americans holding at least a bachelor's degree. The robust standard error is 0.107 signifying that educated African Americans earn between 44.7\% and 66.1\% more than White Americans and this result is statistically significant at the 5\% level. The R-squared for Model 5 is also 0.126. This indicates that the independent variables listed in Model 5 only account for 12.6\% of the variation in hourly wage. This model may also suffer from false causality. It is highly unlikely that Black Americans earn more than White Americans given that previous research shows that African Americans experience earning gaps. Model 6 in Table \ref{tab:estimates} represents a multiple regression model which includes four interaction terms and new COVID-19 variables. The variable "black" interacts with the variable "female", "OCC", "covid", "covidpayremote", "covidunable", and "covidlook". I incorporate these interaction terms to demonstrate the effects of the coronavirus on African American individuals' hourly wages. I exclude the interaction between "EDUC" and "black" since this relationship may be the result of a spurious correlation. The independent variables listed in Model 2 are reused in Model 6. The variables "covid", "covidpayremote", "covidunable", and "covidlook" are also added to Model 6. On average, within the period of April 2020 and February 2021, African Americans earned 14.9\% less than their White counterparts per hour when accounting for COVID-19 effects. This result is statistically significant at the 5\% level (p\textless0.05). Therefore, we reject the null hypothesis that African Americans and White Americans earned identical hourly wages during the pandemic in favor of the alternative hypothesis that African Americans earned less per hour. The robust standard error for the variable "black" is 0.014 signifying that African Americans impacted by COVID-19 earned between 13.5\% and 16.3\% less than their White counterparts and this result is statistically significant at the 5\% level. The R-squared for Model 6 is 0.128. This indicates that the independent variables listed in Model 6 account for 12.8\% of the variation in hourly wage. While this coefficient is slightly higher than the other models, it is still considered low. 
Once again, I expect Gauss-Markov Assumption 4 to be violated due to a long list of unobservable variables that are likely correlated with the variable "black".

\section{Discussion and Conclusions}
My goal when conducting this research was to examine the differences in earnings between African Americans and White Americans. I also sought to explore the disproportionate effects of COVID-19 on African Americans as opposed to White Americans. While the hourly wage gap observed throughout this research is large and significant, we can expect a larger gap when observing yearly income. Furthermore, after investigating specific COVID-19 effects, we observe a 4\% larger hourly wage gap. This result shows that the pandemic adversely affected African Americans' hourly wages. While the higher minimum wage implemented in the 1960s reduced over 20\% of the wage gap \citep{DerenoncourtMontialoux2020}, the residual wage gap remains high and significant. In summary, my six regression models show that, after controlling for other variables that could influence hourly earnings, Black Americans continue to face significant wage-based discrimination.

Nevertheless, the racial earnings gap only explains a fraction of the discrimination faced by African Americans. This vulnerable social group suffers from increased taste-based discrimination and statistical discrimination. In addition, they face institutional barriers preventing them from achieving the "American Dream" \citep{Koechlin2019}. After controlling for gender, age, occupation type, level of education, marital status, number of children in the household, and employment during the pandemic, my six linear regressions show that, on average, African Americans earned at least 10.4\% less than their White counterparts per hour. When employing the interaction terms "black*female", "black*OCC", "black*EDUC", "black*covid", "black*covidpayremote", "black*covidunable", and "black*covidlook", the estimated hourly wage gap widened by between 1 and 3 percentage points. The interaction terms show that the wage gap worsens once African Americans enter the workforce, and worsened again when the coronavirus swept the nation. According to \citet{LuLi2021}, the results I gathered can also be explained by employment mismatch and implicit racial bias. In other words, this earnings gap exists due to prejudiced job placement.

The racial earnings gap is a problem for many reasons. First, it shows that systemic racism is only one of the institutional barriers African Americans must face when entering the job market. After eliminating the need to compare human capital models between African Americans and White Americans, the residual wage gap still grew by 3\% when interacting the variable "black" with the variable "OCC" (occupation). Not only do African Americans face unequal job opportunities \citep{LuLi2021}, the significantly lower hourly pay would also deter African Americans from entering the labor force altogether. Furthermore, the deterrence experienced by this marginalized group hurts the economy as a whole, since the economy then suffers from a lack of diversity, which translates into inadequate breadth of backgrounds and experience that would allow a field to evolve. Next, when adding the coronavirus factor, inequity is further exposed. African Americans were worse off due to COVID-19 factors: they were less likely to hold a paid job during the pandemic, they were more likely to be unable to work during the pandemic, and they were more likely to be looking for work.
The largest limitation of this study is the low R-squared values across all the regression outputs. While my results were significant, the low coefficients of determination show that my independent variables do not account for much of the variation in hourly wages. Additionally, Gauss-Markov Assumption \#4 is likely to be violated; there exist multiple unobservable and unquantifiable factors such as cognitive ability, motivation, and mental health status. This paper contributes to the labor economics literature by providing another quantitative earnings gap analysis. Furthermore, it provides a new research lens by investigating the disproportionate effects of the coronavirus on African Americans. In the future, it would be beneficial to study the racial earnings gap across specific occupations such as STEM fields. It would also be interesting to study the racial earnings gap within educational institutions or within businesses. It is pertinent that society continues to bring awareness to social injustices so that it may learn, evolve, and close gender- and race-based earnings gaps.

\bibliographystyle{jpe}
\nocite{*}
\bibliography{ReferencesFP.bib}

\section{Figures and Tables}

\begin{figure}[htp]
\centering
\includegraphics[width=8cm]{incomeAA.png}
\caption{African American Income Over Time}
\label{fig:fig1}
\end{figure}

\begin{figure}[htp]
\centering
\includegraphics[width=8cm]{incomeW.png}
\caption{White American Income Over Time}
\label{fig:fig2}
\end{figure}

\begin{table}[ht]
\centering
\begin{tabular}{lcccc}
\toprule
Variable & Source & Years & Observations & Type\\
\midrule
loghourwage & IPUMS-CPS & 2015-2021 & 91219 & Dependent\\
black & IPUMS-CPS & 2015-2021 & 91219 & Independent\\
female & IPUMS-CPS & 2015-2021 & 91219 & Independent\\
AGE & IPUMS-CPS & 2015-2021 & 91219 & Independent\\
AGE.squared & IPUMS-CPS & 2015-2021 & 91219 & Independent\\
EDUC & IPUMS-CPS & 2015-2021 & 91219 & Independent\\
nchild & IPUMS-CPS & 2015-2021 & 91219 & Independent\\
OCC & IPUMS-CPS & 2015-2021 & 91219 & Independent\\
covid & IPUMS-CPS & 2020-2021 & 91219 & Independent\\
covidpayremote & IPUMS-CPS & 2020-2021 & 91219 & Independent\\
covidunable & IPUMS-CPS & 2020-2021 & 91219 & Independent\\
covidlook & IPUMS-CPS & 2020-2021 & 91219 & Independent\\
notmarried & IPUMS-CPS & 2015-2021 & 91219 & Independent\\
\bottomrule
\end{tabular}
\caption{Description of Selected Variables}
\label{tab:variables}
\end{table}

\input{table2fp}

\end{document}
{ "alphanum_fraction": 0.7888821648, "avg_line_length": 147.0381355932, "ext": "tex", "hexsha": "401e205eb70a4754aa3a6d5e1e63d5d416cd4c2c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "be8bbd64956d4c3bfb1efb1dcc59637215e488ca", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ereinsch21/DScourseS21", "max_forks_repo_path": "ProblemSets/PS11/PS11new_Reinsch.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "be8bbd64956d4c3bfb1efb1dcc59637215e488ca", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ereinsch21/DScourseS21", "max_issues_repo_path": "ProblemSets/PS11/PS11new_Reinsch.tex", "max_line_length": 1806, "max_stars_count": null, "max_stars_repo_head_hexsha": "be8bbd64956d4c3bfb1efb1dcc59637215e488ca", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ereinsch21/DScourseS21", "max_stars_repo_path": "ProblemSets/PS11/PS11new_Reinsch.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7696, "size": 34701 }
\section{Conclusions \& Next Steps}\label{sec:summary}

This \tnote{} reports on our ongoing effort to develop a modified version of the {\tt legacypipe} photometric pipeline which is optimized for large galaxies, with the ultimate goal of producing a Legacy Survey Large Galaxy Atlas (LSLGA). In addition to enabling detailed multi-wavelength analyses of large, spatially well-resolved galaxies, this effort will also significantly improve DESI target selection for the Bright Galaxy Survey and in pointings containing large galaxies.

In addition to addressing the outstanding issues discussed above, in the near future we intend to work on the following additional threads, in no particular order of importance:
\begin{itemize}
\item{We will revisit the parent galaxy sample selection to ensure that low surface brightness galaxies are not being excluded (e.g., by incorporating H~{\sc i}-selected galaxies).}
\item{We will run the large-galaxy pipeline on the full sample of $10,654$ galaxies, in order to identify the most common problems and failure modes.}
\item{We will build a web-based interface on the {\tt legacysurvey.org} web-server (work that we have already begun), in order to make it easy to inspect the data and model outputs for the large-galaxy sample.}
\item{Using existing code, we will use image simulations---whereby we insert simulated large galaxies into the CCD-level data---to test the performance of the large-galaxy pipeline under different initial conditions and in different regimes.}
\item{We will incorporate more sophisticated galaxy models, including multi-component S\'{e}rsic profiles (see the profile definition at the end of this section), into the library of possible two-dimensional galaxy models.}
\item{We will investigate ways of accounting for second-order variations in the galaxy models, such as radial color gradients or azimuthal asymmetries.}
\item{And finally we will engage with the Data Systems and Target Selection Working Group to incorporate the catalogs constructed using the large-galaxy pipeline into DESI target selection and fiber assignment.}
\end{itemize}

%Planar sky.
%We zoom in on the object of interest and define a custom ``brick'' centered on
%that galaxy -- no edge issues to worry about!

%\begin{figure}
%\centering
%\includegraphics[width=0.3\textwidth]{figures/ugc04203-image-custom-annot.jpg}
%\includegraphics[width=0.3\textwidth]{figures/ugc04203-model-custom-annot.jpg}
%\includegraphics[width=0.3\textwidth]{figures/ugc04203-resid-custom-annot.jpg}
%\caption{Modeling of UGC04203 using the customized, large-galaxy optimized
% version of the pipeline. See Section~\ref{sec:largepipeline} for details.
%\label{fig:custom}}
%\end{figure}
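For reference, the single-component S\'{e}rsic profile underlying these models is commonly written as
\begin{equation}
I(r) = I_{e} \exp\left\{ -b_{n} \left[ \left( \frac{r}{r_{e}} \right)^{1/n} - 1 \right] \right\},
\end{equation}
where $r_{e}$ is the half-light radius, $I_{e}$ is the intensity at $r_{e}$, $n$ is the S\'{e}rsic index, and $b_{n}$ is a constant chosen so that $r_{e}$ encloses half of the total light ($b_{n} \approx 2n - 1/3$ for moderate $n$). A multi-component model is then a sum of two or more such profiles, each with its own $I_{e}$, $r_{e}$, and $n$.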
{ "alphanum_fraction": 0.7917594655, "avg_line_length": 52.8235294118, "ext": "tex", "hexsha": "0ff4021c2a26b0b53d3ee0538a11e76e3a8375c1", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2022-02-14T01:00:35.000Z", "max_forks_repo_forks_event_min_datetime": "2021-02-12T00:14:48.000Z", "max_forks_repo_head_hexsha": "75ab0a7649efeb912e3d3a450941ae0b0fe21ca3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dstndstn/legacyhalos", "max_forks_repo_path": "paper/summary.tex", "max_issues_count": 27, "max_issues_repo_head_hexsha": "75ab0a7649efeb912e3d3a450941ae0b0fe21ca3", "max_issues_repo_issues_event_max_datetime": "2020-05-15T16:35:36.000Z", "max_issues_repo_issues_event_min_datetime": "2018-01-26T11:01:04.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dstndstn/legacyhalos", "max_issues_repo_path": "paper/summary.tex", "max_line_length": 80, "max_stars_count": 6, "max_stars_repo_head_hexsha": "75ab0a7649efeb912e3d3a450941ae0b0fe21ca3", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dstndstn/legacyhalos", "max_stars_repo_path": "paper/summary.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-20T14:18:37.000Z", "max_stars_repo_stars_event_min_datetime": "2020-10-04T01:08:54.000Z", "num_tokens": 636, "size": 2694 }
\section{Conclusion}
In this dissertation, we present three research projects --- 1) an edge-based traffic load balancing system (i.e., Presto) for datacenter networks, 2) a virtualized congestion control technique for multi-tenant clouds (\acdc{} TCP) and 3) low latency software rate limiters for cloud networks. All of them leverage the flexibility and high programmability of the software-defined network edge (i.e., end-host networking) to improve the performance of datacenter networks. Each of these three projects focuses on one crucial functionality in datacenter networks. In the following, we provide a short conclusion for each project in turn.

{\bf Presto: Edge-based Load Balancing for Fast Datacenter Networks.} Modern datacenter networks are built with multi-stage Clos networks. There are usually tens to hundreds of network paths between two servers in the same datacenter. State-of-the-art traffic load balancing uses flow-level schemes (e.g., ECMP and WCMP). However, flow-level load balancing schemes suffer from the elephant flow collision problem. Elephant flow collisions lead to reduced throughput and increased latency. We propose Presto to address this classic problem. Presto is an end-host-based traffic load balancing system. At the sender side, Presto uses the virtual switch (OVS) to chunk elephant flows into flowcells (a flowcell consists of multiple consecutive TCP segments and its maximum size is 64KB) and spread the flowcells evenly over multiple network paths. At the receiver side, we design and implement improved Generic Receive Offload (GRO) functionality in the Linux networking subsystem to mask packet reordering for TCP. Presto makes sure mice flows (smaller than or equal to 64KB in size) are not exposed to packet reordering. Note that in realistic datacenter networks, the overwhelming majority of the flows are mice flows. For elephant flows, Presto's improved GRO logic puts flowcells in order before they are pushed up to TCP. Presto eliminates the elephant flow collision problem and demonstrates that subflow-level traffic load balancing is possible and effective.

{\bf ~\acdc{} TCP: Virtual Congestion Control Enforcement for Datacenter Networks.} In multi-tenant clouds, tenants manage their own Virtual Machines (VMs). VM TCP stacks' congestion control algorithms can be outdated, inefficient or even misconfigured. Such VM TCP stacks can cause severe network congestion and throughput fairness issues. Network congestion and throughput unfairness affect the performance of the applications running in clouds (e.g., increased task completion time). To address this problem, we present~\acdc{} TCP, a virtual congestion control enforcement technique for datacenter networks. The key idea of~\acdc{} TCP is to implement an intended congestion control algorithm (i.e., DCTCP) in the virtual switch in the hypervisor; the congestion control decisions are enforced via modifying the TCP header's~\rwnd{} field. Our experimental results show that enforcing an intended congestion control algorithm in the network virtualization layer can greatly reduce network latency and improve throughput fairness. Also,~\acdc{} TCP's CPU and memory overhead is negligible.

{\bf Low Latency Software Rate Limiters for Cloud Networks.} Rate limiters are employed to provide bandwidth allocation features in multi-tenant clouds. For example, in Google Cloud Platform, different kinds of VMs are allocated different maximum bandwidths.
However, we find that rate limiters can increase network latency by an order of magnitude or even more. That is because traffic shaping (the underlying mechanism of rate limiters) maintains a queue to absorb bursty traffic and dequeues packets into the network based on a preconfigured rate. Queueing latency in the rate limiter queue inflates end-to-end network latency. To solve this problem, we first extend ECN into rate limiter queues and apply DCTCP on the end-host. Though this straightforward scheme reduces network latency significantly, it can also lead to TCP throughput oscillation because of coarse-grained segment-level ECN marking and long congestion control loop latency. To address the shortcomings of the straightforward scheme, we present~\dem{}, which directly sets the TCP ECE bit in reverse ACKs, and~\spring{}, which runs a queue-length-based congestion control algorithm and enforces congestion control decisions via modifying the~\rwnd{} field in reverse ACKs. Both~\dem{} and~\spring{} enable low latency, high network saturation rate limiters.~\dem{} relies on ECN support while~\spring{} is generic and can handle both ECN flows and non-ECN flows.

The techniques and mechanisms proposed in this dissertation are original. We believe the research work presented in this dissertation will be valuable to the computer networking research community.

\section{Lessons Learned}

{\bf Darkness Is Before the Dawn.} The first important thing I learned during my Ph.D. study is that sometimes we have to go through the darkness before the dawn. The Presto project is such an example. Initially, we only implemented the Presto sender side logic, but we found that even in a simple topology, network throughput could not reach 9.3Gbps. We later found that this was caused by packet reordering. Before starting the project, we thought modern TCP stacks should be smart enough to tolerate a certain amount of packet reordering. However, we found that was not the case for high speed networks. I did extensive experiments to figure out how to improve the performance. After exploring in the darkness for around 2 months, we finally designed the first version of the improved GRO logic to mask packet reordering for TCP. The story tells me sometimes doing research is just like walking in the dark, but we should not lose hope. We will be fine if we keep exploring in the right direction.

{\bf Simple Things Can Work Well.} I also learned that simple things can work pretty well in practice; this is perhaps especially true in systems and networking research. For example, the DCTCP congestion control law is extremely simple, but this work has had a huge impact on datacenter networking research. I think all of the techniques and mechanisms introduced in this thesis are simple, especially AC/DC TCP, DEM and Spring. We find simple things can boost performance significantly. I like simple and elegant solutions a lot.

{\bf Start Simple and Build Software Step by Step.} We need to write code to validate our ideas. A key thing I learned is that we should start small and build our prototypes step by step. Once we finish a small step, we can check whether the code we wrote works or not. In this way, we can have a deep understanding of the code we wrote. Also, we can find bugs or performance issues at an early stage. All the projects presented in this thesis involve some kernel-level programming. This practice has worked well for me.
\iffalse
{\bf Thinking Twice Before Starting a New Project.} Choosing a project is as important as finishing it. Finding a promising project is not always easy. The lesson I learned is that we should think twice before starting a new project and once we start it, we should try our best. An example is the AC/DC TCP project. Actually, at that time, we had a few topics we might want to explore. I spent around one month to think which topic was more promising. I actually built a sketch in my head and thought what were the major steps and major challenges in a each project. This helps reduce the risk of starting an ambitious project.
\fi

{\bf Old Ideas or Techniques Can Be Reborn in New Use Cases.} TCP congestion control was born in the late 1980s, but we observe that it cannot meet our performance goals in datacenter networks, and so new research opportunities arise. ECN-based and latency-based congestion control are not really new either, but because of datacenter networks, they have come to play important roles now. I have observed this phenomenon many times in the last five years of networking research. I think applying old ideas or techniques is not a problem. The key thing is how we apply them, how we can improve upon them, and how we can leverage innovations in hardware or software to make old ideas more effective in new settings.

\section{Future Work}

In the following, we list potential research topics in datacenter networking we may explore in the future.

{\bf Automatic Datacenter Network Topology Analysis.} Datacenter network topology determines how scalable the network is, how resilient it is to link or switch failures, and how easily the network can be incrementally deployed. Today's practice is that network architects need to manually infer (usually based on experience) many key characteristics of candidate network topologies. So we lack a scientific and formal method to evaluate different network topologies. Therefore, there is a need to build a network topology analysis framework to help network architects analyze and compare candidate network topologies. To compare different network topologies, we need to set up metrics that quantify them. Our first goal is to identify a set of metrics (e.g., cost, wiring complexity, bandwidth, reliability, routing convergence) that can accurately quantify datacenter network topologies. Also, we need to define a set of workloads and traffic patterns to run against the network. Next, we want to investigate whether we can design and implement an automatic topology analysis framework to gain more insights and help design better network topologies.

{\bf Applying Machine Learning Techniques to Traffic Load Balancing.} Datacenter networks are shared by a large number of applications hosted in clouds. Based on the fact that traffic is multiplexed and the conjecture that the majority of traffic should be generated by a few top applications, predictable traffic patterns may exist in datacenter networks. First, we want to measure traffic load on each link in a real datacenter network for a long time (e.g., a few months). Based on the measurement data, we analyze whether traffic loads are predictable or not. If so, we can utilize big-data systems and machine learning techniques to help us apply better traffic load balancing schemes to reduce the possibility of network congestion and improve the performance of applications. We believe such a machine-learning-aided load balancing system can be applied to different scenarios.
For example, it can be applied in both intra-datacenter networks and wide-area inter-datacenter networks.

{\bf Near-Real-Time Network Congestion Monitoring.} Network congestion is a crucial performance hurdle for high performance cloud computing services like search, query and remote procedure calls. Studies have shown that end-to-end network latency (TCP RTT) can be increased by tens of milliseconds due to network congestion. Such a huge network latency can affect customers' experience and can have significant negative impacts on revenue. Therefore, one research question is whether we can monitor network congestion in a (near) real-time manner. If the answer is yes, how should we provide such information to network administrators or developers in the cloud? How can we quickly reroute network traffic to bypass congested network paths? This research problem is challenging because network congestion information should be obtained in a (near) real-time manner so that application traffic can be rerouted before buffers build up. In today's datacenter networks, the baseline end-to-end latency is around 40 to 200 microseconds, so we need to make the ``monitoring and action'' control loop latency as small as possible. To achieve this goal, we may need to explore recent advances in fast software packet processing (e.g., DPDK~\cite{intel-dpdk}) and modern hardware features in the switches.

%\cite{example}
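To make the flavor of such a monitoring loop concrete, the sketch below flags possible congestion when a smoothed RTT estimate rises well above a measured baseline. It is an illustrative sketch only: the EWMA approach, the constants, and the function names are assumptions for illustration, not part of the systems described in this dissertation.

\begin{verbatim}
/* Illustrative sketch: flag possible congestion when the smoothed RTT
 * rises well above an assumed baseline.  Constants are placeholders. */
#include <stdio.h>

#define ALPHA        0.125   /* EWMA gain, as in classic RTT smoothing  */
#define BASELINE_US  100.0   /* assumed baseline RTT in microseconds    */
#define THRESHOLD    3.0     /* flag when srtt exceeds 3x the baseline  */

static double srtt = BASELINE_US;

/* Feed one RTT sample (microseconds); return 1 if congestion is suspected. */
static int rtt_sample(double rtt_us)
{
    srtt = (1.0 - ALPHA) * srtt + ALPHA * rtt_us;
    return srtt > THRESHOLD * BASELINE_US;
}

int main(void)
{
    double samples[] = { 120.0, 150.0, 900.0, 1200.0, 1500.0, 2000.0 };
    int i;

    for (i = 0; i < 6; ++i)
    {
        if (rtt_sample(samples[i]))
            printf("sample %d: possible congestion (srtt = %.1f us)\n", i, srtt);
    }
    return 0;
}
\end{verbatim}

A production monitor would, of course, need to collect such samples at scale (e.g., from switch telemetry or end-host probes) and feed the resulting signal into the rerouting mechanism discussed above.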
{ "alphanum_fraction": 0.8039134742, "avg_line_length": 72.0952380952, "ext": "tex", "hexsha": "919c35e6d24983919964d74f9bd162c9a300244a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "770fc637f9b7d908f349bbbfa112cbc17d898be3", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "keqhe/phd_thesis", "max_forks_repo_path": "lanyue_thesis/conc/conc.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "770fc637f9b7d908f349bbbfa112cbc17d898be3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "keqhe/phd_thesis", "max_issues_repo_path": "lanyue_thesis/conc/conc.tex", "max_line_length": 243, "max_stars_count": 2, "max_stars_repo_head_hexsha": "770fc637f9b7d908f349bbbfa112cbc17d898be3", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "keqhe/phd_thesis", "max_stars_repo_path": "lanyue_thesis/conc/conc.tex", "max_stars_repo_stars_event_max_datetime": "2017-10-20T14:28:43.000Z", "max_stars_repo_stars_event_min_datetime": "2017-08-27T08:03:16.000Z", "num_tokens": 2508, "size": 12112 }
\chapter{C Client API}
\label{chap:api:c-client}

The HyperDex Client library, \code{libhyperdex-client}, is the de facto way to access a HyperDex cluster for storing and retrieving data. All data-store operations are provided by this library.

\section{Building the HyperDex C Binding}
\label{sec:api:c:client:build}

The HyperDex C Binding is automatically built and installed via the normal HyperDex build and install process. You can ensure that the client is always built by providing the \code{--enable-client} option to \code{./configure} like so:

\begin{consolecode}
% ./configure --enable-client
\end{consolecode}

\section{Compiling and Linking Your Application}
\label{sec:api:c:client:link}

Unless otherwise noted, all Client operations are defined in the \code{hyperdex/client.h} include. You can include this in your own program with:

\begin{ccode}
#include <hyperdex/client.h>
\end{ccode}

To link against \code{libhyperdex-client}, provide the \code{-lhyperdex-client} option at link time:

\begin{consolecode}
% cc -o output input.c -I/path/to/hyperdex/include -L/path/to/hyperdex/lib -lhyperdex-client
\end{consolecode}

HyperDex provides support for automatically determining the compiler and linker flags for \code{libhyperdex-client}. First, ensure the \code{pkg-config} program is installed, and then run:

\begin{consolecode}
% pkg-config --cflags hyperdex-client
-I/usr/local/include
% pkg-config --libs hyperdex-client
-L/usr/local/lib -lhyperdex-client
\end{consolecode}

The first command outputs the compiler flags necessary to include the \code{hyperdex/client.h} file, while the second command outputs the flags necessary for linking against \code{libhyperdex-client}. To put it all together, you can compile your application with:

\begin{consolecode}
% cc -o output input.c `pkg-config --cflags --libs hyperdex-client`
\end{consolecode}

For more information about \code{pkg-config}, see the \href{http://people.freedesktop.org/~dbn/pkg-config-guide.html#using}{pkg-config documentation}.

\section{Hello World}
\label{sec:api:c:client:helloworld}

The following is a minimal application that stores the value "Hello World" and then immediately retrieves the value:

\inputminted{c}{\topdir/c/client/hello-world.c}

You can compile and run this example with:

\begin{consolecode}
% cc -o hello-world hello-world.c `pkg-config --cflags --libs hyperdex-client`
% ./hello-world
put "Hello World!"
get done
got attribute "v" = "Hello World!"
\end{consolecode}

Each operation, whether it is a PUT or a GET, is broken down into a request and a response. The \code{hyperdex\_client\_put} and \code{hyperdex\_client\_get} operations each initiate the request. The application then calls \code{hyperdex\_client\_loop} to wait for the completion of the operation. The status pointer passed to put or get will be used to fill in the status of the operation, while the status pointer passed to loop will be used to report transient problems with the cluster. Until the operation's id is returned from loop, the operation is still outstanding and incomplete.

The stored value is passed via \code{struct hyperdex\_client\_attribute}, which specifies both the value of the string "Hello World!" and that this value is, in fact, a string. All HyperDex datatypes are passed to HyperDex as bytestrings via this interface.

This code omits error checking, assuming that all operations succeed. In the subsequent documentation we'll explore proper error checking, which should be in place for all non-example code.
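As a first taste of that error checking, the following sketch repeats the put from above with the checks in place. It is illustrative only: the coordinator address, port, space name (\code{kv}), and key are hypothetical, and the exact signatures assumed for \code{hyperdex\_client\_put} and \code{hyperdex\_client\_loop} should be verified against the operations section and your installed \code{hyperdex/client.h}.

\begin{ccode}
/* Illustrative sketch: an error-checked put.  The coordinator address, port,
 * space "kv", and key are hypothetical; the signatures assumed for
 * hyperdex_client_put and hyperdex_client_loop (and the -1 "block forever"
 * timeout) should be checked against hyperdex/client.h for your version. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <hyperdex/client.h>

int
main(void)
{
    struct hyperdex_client* client = hyperdex_client_create("127.0.0.1", 1982);

    if (!client)
    {
        fprintf(stderr, "could not create client\n");
        return 1;
    }

    struct hyperdex_client_attribute attr;
    attr.attr = "v";
    attr.value = "Hello World!";
    attr.value_sz = strlen(attr.value);
    attr.datatype = HYPERDATATYPE_STRING;

    /* HYPERDEX_CLIENT_GARBAGE is never returned by the library, so it makes a
     * safe "not yet set" initializer. */
    enum hyperdex_client_returncode op_status = HYPERDEX_CLIENT_GARBAGE;
    enum hyperdex_client_returncode loop_status = HYPERDEX_CLIENT_GARBAGE;
    const char* key = "some key";

    /* A negative id means the operation failed immediately; op_status says why. */
    int64_t op_id = hyperdex_client_put(client, "kv", key, strlen(key),
                                        &attr, 1, &op_status);
    int64_t loop_id = -1;

    if (op_id >= 0)
    {
        /* Wait for completion: loop_status reports transient (global) problems,
         * op_status reports the outcome of the put itself. */
        loop_id = hyperdex_client_loop(client, -1, &loop_status);
    }

    if (op_id < 0 || loop_id < 0 || op_status != HYPERDEX_CLIENT_SUCCESS)
    {
        fprintf(stderr, "put failed: op=%d loop=%d\n",
                (int)op_status, (int)loop_status);
    }

    hyperdex_client_destroy(client);
    return 0;
}
\end{ccode}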
\section{Asynchronous Patterns} \label{sec:api:c:client:async} All operations are issued {\em asynchronously}, that is, the operation's start is decoupled from its completion, and it is up to the application to wait for its completion. The C API provides a unified, event-loop-like interface for polling events for their completion. An application can issue any number of operations, and poll for their completion using the \code{hyperdex\_client\_loop} call. This provides you with several advantages unavailable in any other distributed key-value store: \begin{itemize} \item Asynchronous events free your application to perform other operations while waiting for events to complete. \item Asynchronous events re-use underlying resources, such as TCP connections to the cluster, across operations. This enables improved performance and consumes fewer resources than would be consumed by synchronous clients generating a similar workload. \item Operations will be buffered in userspace when they cannot be immediately sent to a server. This ensures that applications will never block waiting for slow servers. \end{itemize} Of course, applications do not need to embrace this asynchronous design pattern. It's always possible to use the library in a synchronous manner by immediately following every operation with a call to \code{hyperdex\_client\_loop}. Applications that do embrace this asynchronous design pattern will have a certain structure. Specifically: \begin{itemize} \item Each operation must eventually be followed by a call to \code{hyperdex\_client\_loop}. This is the core of the HyperDex client. All work, including flushing userspace buffers, occurs via the call to loop. \item Pointers used for output from operations must remain valid until \code{hyperdex\_client\_loop} indicates that the operation has successfully finished. Consequently, they must not be aliased to pointers passed to other operations. \end{itemize} Finally, it's important to realize that calling \code{hyperdex\_client\_loop} is necessary to complete operations. An operation's outcome is not determined until the application calls \code{hyperdex\_client\_loop}. Do not simply issue \code{hyperdex\_client\_put} operations (or similar) and assume that the operations complete because there is no guarantee that they will do so. \section{Creating a Client} \label{sec:api:c:client:create} A HyperDex client is encapsulated within the incomplete \code{struct hyperdex\_client} type. This type is created by the HyperDex client library, and should only be freed using the provided method. \begin{ccode} struct hyperdex_client* hyperdex_client_create(const char* coordinator, uint16_t port); \end{ccode} Create a new client instance. This call allocates and initializes local structures. If allocation or initialization fail, the call will return NULL and set \code{errno} appropriately. This call does not establish the connection to the coordinator; that will be established and maintained automatically by other calls made with this client instance. \textbf{Parameters:} \begin{description}[labelindent=\widthof{{\code{coordinator}}},leftmargin=*,noitemsep,nolistsep,align=right] \item[\code{coordinator}] A C-string containing IP address or hostname of the coordinator. \item[\code{port}] The port number for the coordinator. \end{description} \begin{ccode} void hyperdex_client_destroy(struct hyperdex_client* client); \end{ccode} Destroy a previously instantiated client, and release all associated resources. This call always succeeds. 
\textbf{Parameters:}
\begin{description}[labelindent=\widthof{{\code{client}}},leftmargin=*,noitemsep,nolistsep,align=right]
\item[\code{client}] A previously created client instance.
\end{description}

\section{Data Structures}
\label{sec:api:c:client:data-structures}

HyperDex natively supports a variety of data structures. This section describes the available data structures and their encoding within C.

HyperDex encodes as a byte string all data structures passed between the application and HyperDex. The format of this byte string varies according to its type. In this section, we'll describe the format of data structures, and provide an API for serializing to and from the prescribed format. All APIs discussed in this section are provided by \code{libhyperdex-client}.

\subsection{\code{enum hyperdatatype}}
\label{sec:api:c:client:hyperdatatype}

The \code{enum hyperdatatype} is used to represent the type of a byte string to HyperDex. Whenever a structure accepts a byte string as a value, it will typically accept an \code{enum hyperdatatype} to convey the type of the string.

\paragraph{Primitive Data Types} Primitive data types are the basic data types of HyperDex. Applications may use these primitives as the key and dimensions of hyperspaces within HyperDex.

\begin{description}[noitemsep]
\item[\code{HYPERDATATYPE\_STRING}] A byte string.
\item[\code{HYPERDATATYPE\_INT64}] A 64-bit signed integer.
\item[\code{HYPERDATATYPE\_FLOAT}] A 64-bit floating point value.
\end{description}

\paragraph{Container Data Types} Container data types contain a collection of primitive data types. Container data types cannot be used as the key or dimensions of the hyperspace. There are three container types available within HyperDex:

\begin{description}
\item[Lists] A list contains elements of one primitive type. The order of elements in a list is preserved, and it's possible for duplicate elements to exist.
\item[Sets] A set contains elements of one primitive type. Each element exists in the set at most once. Although the implementation enforces an order on the set for efficiency purposes, set operations are agnostic to the order of the included elements.
\item[Maps] A map contains key-value pairs of elements, where the key and value may be of different types. Each key is unique and has an associated value. Maps also offer the ability to perform most primitive operations on the values within the map.
\end{description}

Each of these containers may be instantiated with a primitive data type as the contained elements. In total, HyperDex supports all of the following container data types:

\begin{description}[noitemsep]
\item[\code{HYPERDATATYPE\_LIST\_STRING}] A list of strings.
\item[\code{HYPERDATATYPE\_LIST\_INT64}] A list of integers.
\item[\code{HYPERDATATYPE\_LIST\_FLOAT}] A list of floats.
\item[\code{HYPERDATATYPE\_SET\_STRING}] A set of strings.
\item[\code{HYPERDATATYPE\_SET\_INT64}] A set of integers.
\item[\code{HYPERDATATYPE\_SET\_FLOAT}] A set of floats.
\item[\code{HYPERDATATYPE\_MAP\_STRING\_STRING}] A map from strings to strings.
\item[\code{HYPERDATATYPE\_MAP\_STRING\_INT64}] A map from strings to integers.
\item[\code{HYPERDATATYPE\_MAP\_STRING\_FLOAT}] A map from strings to floats.
\item[\code{HYPERDATATYPE\_MAP\_INT64\_STRING}] A map from integers to strings.
\item[\code{HYPERDATATYPE\_MAP\_INT64\_INT64}] A map from integers to integers.
\item[\code{HYPERDATATYPE\_MAP\_INT64\_FLOAT}] A map from integers to floats.
\item[\code{HYPERDATATYPE\_MAP\_FLOAT\_STRING}] A map from floats to strings.
\item[\code{HYPERDATATYPE\_MAP\_FLOAT\_INT64}] A map from floats to integers.
\item[\code{HYPERDATATYPE\_MAP\_FLOAT\_FLOAT}] A map from floats to floats.
\end{description}

The following data types are defined as well, and are generally only of interest to HyperDex developers and those who are writing client bindings:

\begin{description}[noitemsep]
\item[\code{HYPERDATATYPE\_LIST\_GENERIC}] A list whose element type is unspecified.
\item[\code{HYPERDATATYPE\_SET\_GENERIC}] A set whose element type is unspecified.
\item[\code{HYPERDATATYPE\_MAP\_GENERIC}] A map whose key/value types are unspecified.
\item[\code{HYPERDATATYPE\_MAP\_STRING\_KEYONLY}] A map whose key is a string and whose value type is unspecified.
\item[\code{HYPERDATATYPE\_MAP\_INT64\_KEYONLY}] A map whose key is an integer and whose value type is unspecified.
\item[\code{HYPERDATATYPE\_MAP\_FLOAT\_KEYONLY}] A map whose key is a float and whose value type is unspecified.
\item[\code{HYPERDATATYPE\_GARBAGE}] A reserved constant never used within HyperDex.
\end{description}

\subsection{Bytestring Format}
\label{sec:api:c:client:format}

The format of the data structures is defined to be the same on all platforms. For each format, Python-like pseudocode is provided that shows example encodings.

\paragraph{string format} A string is an 8-bit byte string. HyperDex is agnostic to the contents of the string, and it may contain any bytes, including \code{\\x00}. By convention, the trailing \code{NULL} should be omitted for C-strings to ensure interoperability across languages. For example:

\begin{pythoncode}
>>> encode_string('Hello\x00World!')
b'Hello\x00World!'
\end{pythoncode}

\paragraph{int format} Integers are encoded as signed 8-byte little-endian integers. For example:

\begin{pythoncode}
>>> encode_int(1)
b'\x01\x00\x00\x00\x00\x00\x00\x00'
>>> encode_int(-1)
b'\xff\xff\xff\xff\xff\xff\xff\xff'
>>> encode_int(0xdeadbeef)
b'\xef\xbe\xad\xde\x00\x00\x00\x00'
\end{pythoncode}

\paragraph{float format} Floats are encoded as IEEE 754 binary64 values in little-endian format. For example:

\begin{pythoncode}
>>> encode_double(0)
b'\x00\x00\x00\x00\x00\x00\x00\x00'
>>> encode_double(3.1415)
b'o\x12\x83\xc0\xca!\t@'
\end{pythoncode}

\paragraph{list(string) format} Lists of strings are encoded by concatenating the encoding of each string, prefixed by an unsigned 4-byte little endian integer indicating the length of the string. For example:

\begin{pythoncode}
>>> encode_list_string([])
b''
>>> encode_list_string(['hello', 'world'])
b'\x05\x00\x00\x00hello\x05\x00\x00\x00world'
\end{pythoncode}

\paragraph{list(int) format} Lists of integers are encoded by concatenating the encoded form of each integer. For example:

\begin{pythoncode}
>>> encode_list_int([])
b''
>>> encode_list_int([1, -1, 0xdeadbeef])
b'\x01\x00\x00\x00\x00\x00\x00\x00' \
b'\xff\xff\xff\xff\xff\xff\xff\xff' \
b'\xef\xbe\xad\xde\x00\x00\x00\x00'
\end{pythoncode}

\paragraph{list(float) format} Lists of floats are encoded by concatenating the encoded form of each float. For example:

\begin{pythoncode}
>>> encode_list_float([])
b''
>>> encode_list_float([0, 3.1415])
b'\x00\x00\x00\x00\x00\x00\x00\x00' \
b'o\x12\x83\xc0\xca!\t@'
\end{pythoncode}

\paragraph{set(string) format} Sets of strings are encoded by concatenating the encoding of each string in sorted order, where each string is prefixed by an unsigned 4-byte little endian integer indicating the length of the string.
For example:

\begin{pythoncode}
>>> encode_set_string([])
b''
>>> encode_set_string(['world', 'hello'])
b'\x05\x00\x00\x00hello\x05\x00\x00\x00world'
\end{pythoncode}

\paragraph{set(int) format} Sets of integers are encoded by concatenating the encoded form of each integer in sorted order. For example:

\begin{pythoncode}
>>> encode_set_int([])
b''
>>> encode_set_int([1, -1, 0xdeadbeef])
b'\xff\xff\xff\xff\xff\xff\xff\xff' \
b'\x01\x00\x00\x00\x00\x00\x00\x00' \
b'\xef\xbe\xad\xde\x00\x00\x00\x00'
\end{pythoncode}

\paragraph{set(float) format} Sets of floats are encoded by concatenating the encoded form of each float in sorted order. For example:

\begin{pythoncode}
>>> encode_set_float([])
b''
>>> encode_set_float([3.1415, 0])
b'\x00\x00\x00\x00\x00\x00\x00\x00' \
b'o\x12\x83\xc0\xca!\t@'
\end{pythoncode}

\paragraph{map(string, string) format} Maps from strings to strings are formed by encoding the individual elements, each prefixed by an unsigned 4-byte little endian integer indicating their length. The pairs of elements are stored in sorted order according to the first element of the pair (the map's key). For example:

\begin{pythoncode}
>>> encode_map_string_string({})
b''
>>> encode_map_string_string({'hello': 'world',
...                           'map key': 'map val',
...                           'map': 'encoding'})
b'\x05\x00\x00\x00hello\x05\x00\x00\x00world' \
b'\x03\x00\x00\x00map\x08\x00\x00\x00encoding' \
b'\x07\x00\x00\x00map key\x07\x00\x00\x00map val'
\end{pythoncode}

\paragraph{map(string, int) format} Maps from strings to ints are formed by encoding the individual elements, where keys are prefixed by an unsigned 4-byte little endian integer indicating their length. The pairs of elements are stored in sorted order according to the first element of the pair (the map's key). For example:

\begin{pythoncode}
>>> encode_map_string_int({})
b''
>>> encode_map_string_int({'world': -1,
...                        'hello': 1})
b'\x05\x00\x00\x00hello\x01\x00\x00\x00\x00\x00\x00\x00' \
b'\x05\x00\x00\x00world\xff\xff\xff\xff\xff\xff\xff\xff'
\end{pythoncode}

\paragraph{map(string, float) format} Maps from strings to floats are formed by encoding the individual elements, where keys are prefixed by an unsigned 4-byte little endian integer indicating their length. The pairs of elements are stored in sorted order according to the first element of the pair (the map's key). For example:

\begin{pythoncode}
>>> encode_map_string_float({})
b''
>>> encode_map_string_float({'zero': 0,
...                          'pi': 3.1415})
b'\x02\x00\x00\x00pio\x12\x83\xc0\xca!\t@' \
b'\x04\x00\x00\x00zero\x00\x00\x00\x00\x00\x00\x00\x00'
\end{pythoncode}

\paragraph{map(int, string) format} Maps from ints to strings are formed by encoding the individual elements, where values are prefixed by an unsigned 4-byte little endian integer indicating their length. The pairs of elements are stored in sorted order according to the first element of the pair (the map's key). For example:

\begin{pythoncode}
>>> encode_map_int_string({})
b''
>>> encode_map_int_string({1: 'hello',
...                        -1: 'world'})
b'\xff\xff\xff\xff\xff\xff\xff\xff\x05\x00\x00\x00world' \
b'\x01\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00hello'
\end{pythoncode}

\paragraph{map(int, int) format} Maps from ints to ints are formed by encoding the individual elements. The pairs of elements are stored in sorted order according to the first element of the pair (the map's key). For example:

\begin{pythoncode}
>>> encode_map_int_int({})
b''
>>> encode_map_int_int({1: 0xdeadbeef,
...                     -1: 0x1eaff00d})
b'\xff\xff\xff\xff\xff\xff\xff\xff\x0d\xf0\xaf\x1e\x00\x00\x00\x00' \
b'\x01\x00\x00\x00\x00\x00\x00\x00\xef\xbe\xad\xde\x00\x00\x00\x00'
\end{pythoncode}

\paragraph{map(int, float) format} Maps from ints to floats are formed by encoding the individual elements. The pairs of elements are stored in sorted order according to the first element of the pair (the map's key). For example:

\begin{pythoncode}
>>> encode_map_int_float({})
b''
>>> encode_map_int_float({1: 0,
...                       -1: 3.1415})
b'\xff\xff\xff\xff\xff\xff\xff\xffo\x12\x83\xc0\xca!\t@' \
b'\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
\end{pythoncode}

\paragraph{map(float, string) format} Maps from floats to strings are formed by encoding the individual elements, where values are prefixed by an unsigned 4-byte little endian integer indicating their length. The pairs of elements are stored in sorted order according to the first element of the pair (the map's key). For example:

\begin{pythoncode}
>>> encode_map_float_string({})
b''
>>> encode_map_float_string({0: 'hello',
...                          3.1415: 'world'})
b'\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00hello' \
b'o\x12\x83\xc0\xca!\t@\x05\x00\x00\x00world'
\end{pythoncode}

\paragraph{map(float, int) format} Maps from floats to ints are formed by encoding the individual elements. The pairs of elements are stored in sorted order according to the first element of the pair (the map's key). For example:

\begin{pythoncode}
>>> encode_map_float_int({})
b''
>>> encode_map_float_int({0: 1,
...                       3.1415: -1})
b'\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00' \
b'o\x12\x83\xc0\xca!\t@\xff\xff\xff\xff\xff\xff\xff\xff'
\end{pythoncode}

\paragraph{map(float, float) format} Maps from floats to floats are formed by encoding the individual elements. The pairs of elements are stored in sorted order according to the first element of the pair (the map's key). For example:

\begin{pythoncode}
>>> encode_map_float_float({})
b''
>>> encode_map_float_float({0: 1,
...                         3.1415: -1})
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf0?' \
b'o\x12\x83\xc0\xca!\t@\x00\x00\x00\x00\x00\x00\xf0\xbf'
\end{pythoncode}

\subsection{Serialization API}
\label{sec:api:c:client:serialize}

The serialization API supports serialization of all datatypes supported by HyperDex. Of course, feel free to manually encode data structures, especially where doing so can make use of efficient stack-allocated data structures.

\paragraph{struct hyperdex\_ds\_arena} The packing routines described below may occasionally have to allocate memory into which the encoded forms of the datatypes are copied. To free the programmer from the burden of having to manually allocate and free each of these pieces of memory, the data structures API allocates all memory via an instance of \code{struct hyperdex\_ds\_arena}. Via a single call to \code{hyperdex\_ds\_arena\_destroy}, all memory allocated via the arena is free'd.

\code{struct hyperdex\_ds\_arena} is intentionally defined as an incomplete type because its internals are subject to change. To create an arena, call \code{hyperdex\_ds\_arena\_create}. The arena should subsequently be destroyed via \code{hyperdex\_ds\_arena\_destroy}.

\begin{ccode}
struct hyperdex_ds_arena* hyperdex_ds_arena_create();
\end{ccode}

Create a new arena for allocating memory. On success, this function returns a non-null pointer for the new arena. On failure, the function returns \code{NULL}, indicating that memory allocation failed.
It is the caller's responsibility to pass the returned pointer to \code{hyperdex\_ds\_arena\_destroy} when finished.

\begin{ccode}
void hyperdex_ds_arena_destroy(struct hyperdex_ds_arena* arena);
\end{ccode}

Free all memory associated with \code{arena}. This function always succeeds.

\paragraph{serialize string} No serialization is necessary for string data types. For convenience, the serialization API provides a copy function that copies a string into arena-allocated memory.

\begin{ccode}
int hyperdex_ds_copy_string(struct hyperdex_ds_arena* arena,
                            const char* str, size_t str_sz,
                            enum hyperdex_ds_returncode* status,
                            const char** value, size_t* value_sz);
\end{ccode}

Copy the string \code{str}/\code{str\_sz} into memory allocated via \code{arena} and return the copy via \code{value} and \code{value\_sz}. This function will fail and return -1 if there is insufficient memory available for copying the string. All pointers returned by this function remain valid until \code{arena} is destroyed. The client should not attempt to free the returned copy.

\paragraph{serialize int}

\begin{ccode}
void hyperdex_ds_pack_int(int64_t num, char* value);
\end{ccode}

Pack \code{num} into the bytes pointed to by \code{value}. This function always succeeds. It is the caller's responsibility to ensure that \code{value} points to at least \unit{8}{\byte}.

\begin{ccode}
int hyperdex_ds_copy_int(struct hyperdex_ds_arena* arena, int64_t num,
                         enum hyperdex_ds_returncode* status,
                         const char** value, size_t* value_sz);
\end{ccode}

Encode \code{num} into memory allocated via \code{arena} and return the value via \code{value} and \code{value\_sz}. This function will fail and return -1 if there is insufficient memory available for encoding the number. All pointers returned by this function remain valid until \code{arena} is destroyed. The client should not attempt to free the returned copy.

\paragraph{serialize float}

\begin{ccode}
void hyperdex_ds_pack_float(double num, char* value);
\end{ccode}

Pack \code{num} into the bytes pointed to by \code{value}. This function always succeeds. It is the caller's responsibility to ensure that \code{value} points to at least \unit{8}{\byte}.

\begin{ccode}
int hyperdex_ds_copy_float(struct hyperdex_ds_arena* arena, double num,
                           enum hyperdex_ds_returncode* status,
                           const char** value, size_t* value_sz);
\end{ccode}

Encode \code{num} into memory allocated via \code{arena} and return the value via \code{value} and \code{value\_sz}. This function will fail and return -1 if there is insufficient memory available for encoding the number. All pointers returned by this function remain valid until \code{arena} is destroyed. The client should not attempt to free the returned copy.

\paragraph{serialize lists} The below functions incrementally build lists, performing all relevant error checking to ensure that the resulting HyperDex list is well-formed. The first element appended to the list implicitly determines the type of the list. All subsequent calls that push elements of a different type will fail.

\begin{ccode}
struct hyperdex_ds_list* hyperdex_ds_allocate_list(struct hyperdex_ds_arena* arena);
\end{ccode}

Create a new dynamic list. This function will fail and return \code{NULL} should memory allocation fail.

\begin{ccode}
int hyperdex_ds_list_insert_string(struct hyperdex_ds_list* list,
                                   const char* str, size_t str_sz,
                                   enum hyperdex_ds_returncode* status);
\end{ccode}

Append the string \code{str}/\code{str\_sz} to \code{list}.
This function will fail and return -1 if memory allocation fails, or the list is not a list of strings.

\begin{ccode}
int hyperdex_ds_list_insert_int(struct hyperdex_ds_list* list, int64_t num,
                                enum hyperdex_ds_returncode* status);
\end{ccode}

Append the integer \code{num} to \code{list}. This function will fail and return -1 if memory allocation fails, or the list is not a list of integers.

\begin{ccode}
int hyperdex_ds_list_insert_float(struct hyperdex_ds_list* list, double num,
                                  enum hyperdex_ds_returncode* status);
\end{ccode}

Append the float \code{num} to \code{list}. This function will fail and return -1 if memory allocation fails or the list is not a list of floats.

\begin{ccode}
int hyperdex_ds_list_finalize(struct hyperdex_ds_list*,
                              enum hyperdex_ds_returncode* status,
                              const char** value, size_t* value_sz,
                              enum hyperdatatype* datatype);
\end{ccode}

Finalize the list by writing its elements into a bytestring. This function returns the bytestring and the list type. It will fail and return -1 if memory allocation fails.

\paragraph{serialize sets} The below functions incrementally build sets, performing all relevant error checking to ensure that the resulting HyperDex set is well-formed. The first element inserted into the set implicitly determines the type of the set. All subsequent calls that insert elements of different types will fail.

\begin{ccode}
struct hyperdex_ds_set* hyperdex_ds_allocate_set(struct hyperdex_ds_arena* arena);
\end{ccode}

Create a new dynamic set. This function will fail and return \code{NULL} should memory allocation fail.

\begin{ccode}
int hyperdex_ds_set_insert_string(struct hyperdex_ds_set* set,
                                  const char* str, size_t str_sz,
                                  enum hyperdex_ds_returncode* status);
\end{ccode}

Insert the string \code{str}/\code{str\_sz} into \code{set}. This function will fail and return -1 if memory allocation fails, or the set is not a set of strings.

\begin{ccode}
int hyperdex_ds_set_insert_int(struct hyperdex_ds_set* set, int64_t num,
                               enum hyperdex_ds_returncode* status);
\end{ccode}

Insert the integer \code{num} into \code{set}. This function will fail and return -1 if memory allocation fails, or the set is not a set of integers.

\begin{ccode}
int hyperdex_ds_set_insert_float(struct hyperdex_ds_set* set, double num,
                                 enum hyperdex_ds_returncode* status);
\end{ccode}

Insert the float \code{num} into \code{set}. This function will fail and return -1 if memory allocation fails, or the set is not a set of floats.

\begin{ccode}
int hyperdex_ds_set_finalize(struct hyperdex_ds_set*,
                             enum hyperdex_ds_returncode* status,
                             const char** value, size_t* value_sz,
                             enum hyperdatatype* datatype);
\end{ccode}

Finalize the set by writing its elements into a bytestring. This function returns the bytestring and the set type. It will fail and return -1 if memory allocation fails.

\paragraph{serialize maps} The below functions incrementally build maps, performing all relevant error checking to ensure that the resulting HyperDex map is well-formed. The first key/value-pair inserted into the map implicitly determines the type of the map. All subsequent calls that insert elements of different types will fail.

The map is built by alternating calls to the key/value functions described below, starting with a key-based function. This keeps the number of cases linear in the number of primitive types a map may contain, rather than appending key-value pairs directly (which would require a quadratic number of calls).
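To make the alternating key/value pattern concrete, here is a small illustrative sketch that builds a map from strings to integers using the functions declared below. The keys and values are made up, and the sketch assumes the \code{hyperdex\_ds\_*} declarations are visible via \code{hyperdex/client.h}; if your version ships them in a separate header, include that instead.

\begin{ccode}
/* Illustrative sketch: build a map(string, int) value by alternating
 * key/value calls and finalize it into a bytestring.  The keys and values
 * are made up; adjust the include if your version declares the
 * hyperdex_ds_* functions in a separate header. */
#include <stdio.h>
#include <string.h>
#include <hyperdex/client.h>

int
main(void)
{
    struct hyperdex_ds_arena* arena = hyperdex_ds_arena_create();

    if (!arena)
    {
        return 1;
    }

    enum hyperdex_ds_returncode status;
    struct hyperdex_ds_map* map = hyperdex_ds_allocate_map(arena);
    int ok = map != NULL;

    /* Each pair is inserted with one key call followed by one value call. */
    ok = ok && hyperdex_ds_map_insert_key_string(map, "apples", strlen("apples"), &status) >= 0;
    ok = ok && hyperdex_ds_map_insert_val_int(map, 3, &status) >= 0;
    ok = ok && hyperdex_ds_map_insert_key_string(map, "oranges", strlen("oranges"), &status) >= 0;
    ok = ok && hyperdex_ds_map_insert_val_int(map, 7, &status) >= 0;

    const char* value = NULL;
    size_t value_sz = 0;
    enum hyperdatatype datatype;

    /* Finalize writes the encoded pairs and reports the map type
     * (HYPERDATATYPE_MAP_STRING_INT64 for this sketch). */
    ok = ok && hyperdex_ds_map_finalize(map, &status, &value, &value_sz, &datatype) >= 0;

    if (ok)
    {
        printf("encoded %u bytes\n", (unsigned)value_sz);
    }

    /* A single call frees the map, the encoded bytestring, and everything
     * else allocated from the arena. */
    hyperdex_ds_arena_destroy(arena);
    return ok ? 0 : 1;
}
\end{ccode}

The resulting \code{value}/\code{value\_sz}/\code{datatype} triple can then be copied into a \code{struct hyperdex\_client\_attribute} (described later in this chapter) and handed to a client operation.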
\begin{ccode}
struct hyperdex_ds_map* hyperdex_ds_allocate_map(struct hyperdex_ds_arena* arena);
\end{ccode}

Create a new dynamic map. This function will fail and return \code{NULL} should memory allocation fail.

\begin{ccode}
int hyperdex_ds_map_insert_key_string(struct hyperdex_ds_map* map,
                                      const char* str, size_t str_sz,
                                      enum hyperdex_ds_returncode* status);
\end{ccode}

Set the key of the next pair to be inserted into \code{map} to the string specified by \code{str} and \code{str\_sz}. This function will fail and return -1 if memory allocation fails, or the map does not use strings for keys.

\begin{ccode}
int hyperdex_ds_map_insert_val_string(struct hyperdex_ds_map* map,
                                      const char* str, size_t str_sz,
                                      enum hyperdex_ds_returncode* status);
\end{ccode}

Set the value of the next pair to be inserted into \code{map} to the string specified by \code{str} and \code{str\_sz}, and insert the pair. This function will fail and return -1 if memory allocation fails, or the map does not use strings for values.

\begin{ccode}
int hyperdex_ds_map_insert_key_int(struct hyperdex_ds_map* map, int64_t num,
                                   enum hyperdex_ds_returncode* status);
\end{ccode}

Set the key of the next pair to be inserted into \code{map} to the integer specified by \code{num}. This function will fail and return -1 if memory allocation fails, or the map does not use integers for keys.

\begin{ccode}
int hyperdex_ds_map_insert_val_int(struct hyperdex_ds_map* map, int64_t num,
                                   enum hyperdex_ds_returncode* status);
\end{ccode}

Set the value of the next pair to be inserted into \code{map} to the integer specified by \code{num}, and insert the pair. This function will fail and return -1 if memory allocation fails, or the map does not use integers for values.

\begin{ccode}
int hyperdex_ds_map_insert_key_float(struct hyperdex_ds_map* map, double num,
                                     enum hyperdex_ds_returncode* status);
\end{ccode}

Set the key of the next pair to be inserted into \code{map} to the float specified by \code{num}. This function will fail and return -1 if memory allocation fails, or the map does not use floats for keys.

\begin{ccode}
int hyperdex_ds_map_insert_val_float(struct hyperdex_ds_map* map, double num,
                                     enum hyperdex_ds_returncode* status);
\end{ccode}

Set the value of the next pair to be inserted into \code{map} to the float specified by \code{num}, and insert the pair. This function will fail and return -1 if memory allocation fails, or the map does not use floats for values.

\begin{ccode}
int hyperdex_ds_map_finalize(struct hyperdex_ds_map*,
                             enum hyperdex_ds_returncode* status,
                             const char** value, size_t* value_sz,
                             enum hyperdatatype* datatype);
\end{ccode}

Finalize the map by writing its key/value-pairs into a bytestring. This function returns the bytestring and the map type. It will fail and return -1 if memory allocation fails, or an uneven number of key/value calls were made.

\subsection{Deserialization API}
\label{sec:api:c:client:deserialize}

The deserialization API provides routines to unpack ints and floats, and iterate the elements in lists, sets, and maps. Iterators return elements one-by-one with a minimal amount of copying and allocation.

All iterators are used in the same pattern. For example, to iterate a list of integers:

\inputminted{c}{\topdir/c/client/iterate.c}

Compile and run this example with:

\begin{consolecode}
$ cc -o iterate iterate.c `pkg-config --cflags --libs hyperdex-client`
$ ./iterate
1
-1
3735928559
\end{consolecode}

The function \code{hyperdex\_ds\_iterator\_init} sets up the iterator.
Each container data type has a specialized iteration function. All iterators share the same initialization function.

\begin{ccode}
void hyperdex_ds_iterator_init(struct hyperdex_ds_iterator* iter,
                               enum hyperdatatype datatype,
                               const char* value, size_t value_sz);
\end{ccode}

Initialize an iterator for the given data type/value. This function always succeeds.

\paragraph{deserialize string} No deserialization is necessary for string data types.

\paragraph{deserialize int}

\begin{ccode}
int hyperdex_ds_unpack_int(const char* buf, size_t buf_sz, int64_t* num);
\end{ccode}

Unpack \code{num} from \code{buf}/\code{buf\_sz}. This function will fail and return -1 if \code{buf\_sz} is not exactly \unit{8}{\byte}.

\paragraph{deserialize float}

\begin{ccode}
int hyperdex_ds_unpack_float(const char* buf, size_t buf_sz, double* num);
\end{ccode}

Unpack \code{num} from \code{buf}/\code{buf\_sz}. This function will fail and return -1 if \code{buf\_sz} is not exactly \unit{8}{\byte}.

\paragraph{deserialize lists}

\begin{ccode}
int hyperdex_ds_iterate_list_string_next(struct hyperdex_ds_iterator* iter,
                                         const char** str, size_t* str_sz);
\end{ccode}

Return the next string element in the list. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the list of strings is malformed. The value stored in \code{*str} is a pointer into the list of strings and should not be free'd by the application.

\begin{ccode}
int hyperdex_ds_iterate_list_int_next(struct hyperdex_ds_iterator* iter, int64_t* num);
\end{ccode}

Return the next integer element in the list. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the list of integers is malformed.

\begin{ccode}
int hyperdex_ds_iterate_list_float_next(struct hyperdex_ds_iterator* iter, double* num);
\end{ccode}

Return the next float element in the list. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the list of floats is malformed.

\paragraph{deserialize sets}

\begin{ccode}
int hyperdex_ds_iterate_set_string_next(struct hyperdex_ds_iterator* iter,
                                        const char** str, size_t* str_sz);
\end{ccode}

Return the next string element in the set. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the set of strings is malformed. The value stored in \code{*str} is a pointer into the set of strings and should not be free'd by the application.

\begin{ccode}
int hyperdex_ds_iterate_set_int_next(struct hyperdex_ds_iterator* iter, int64_t* num);
\end{ccode}

Return the next integer element in the set. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the set of ints is malformed.

\begin{ccode}
int hyperdex_ds_iterate_set_float_next(struct hyperdex_ds_iterator* iter, double* num);
\end{ccode}

Return the next float element in the set. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the set of floats is malformed.

\paragraph{deserialize maps}

\begin{ccode}
int hyperdex_ds_iterate_map_string_string_next(struct hyperdex_ds_iterator* iter,
                                               const char** key, size_t* key_sz,
                                               const char** val, size_t* val_sz);
\end{ccode}

Return the next pair of (string, string) in the map. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the map is malformed.
The values stored in \code{*key} and \code{*val} are pointers into the map and should not be free'd by the application.

\begin{ccode}
int hyperdex_ds_iterate_map_string_int_next(struct hyperdex_ds_iterator* iter,
                                            const char** key, size_t* key_sz,
                                            int64_t* val);
\end{ccode}

Return the next pair of (string, int) in the map. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the map is malformed. The value stored in \code{*key} is a pointer into the map and should not be free'd by the application.

\begin{ccode}
int hyperdex_ds_iterate_map_string_float_next(struct hyperdex_ds_iterator* iter,
                                              const char** key, size_t* key_sz,
                                              double* val);
\end{ccode}

Return the next pair of (string, float) in the map. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the map is malformed. The value stored in \code{*key} is a pointer into the map and should not be free'd by the application.

\begin{ccode}
int hyperdex_ds_iterate_map_int_string_next(struct hyperdex_ds_iterator* iter,
                                            int64_t* key,
                                            const char** val, size_t* val_sz);
\end{ccode}

Return the next pair of (int, string) in the map. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the map is malformed. The value stored in \code{*val} is a pointer into the map and should not be free'd by the application.

\begin{ccode}
int hyperdex_ds_iterate_map_int_int_next(struct hyperdex_ds_iterator* iter,
                                         int64_t* key, int64_t* val);
\end{ccode}

Return the next pair of (int, int) in the map. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the map is malformed.

\begin{ccode}
int hyperdex_ds_iterate_map_int_float_next(struct hyperdex_ds_iterator* iter,
                                           int64_t* key, double* val);
\end{ccode}

Return the next pair of (int, float) in the map. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the map is malformed.

\begin{ccode}
int hyperdex_ds_iterate_map_float_string_next(struct hyperdex_ds_iterator* iter,
                                              double* key,
                                              const char** val, size_t* val_sz);
\end{ccode}

Return the next pair of (float, string) in the map. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the map is malformed. The value stored in \code{*val} is a pointer into the map and should not be free'd by the application.

\begin{ccode}
int hyperdex_ds_iterate_map_float_int_next(struct hyperdex_ds_iterator* iter,
                                           double* key, int64_t* val);
\end{ccode}

Return the next pair of (float, int) in the map. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the map is malformed.

\begin{ccode}
int hyperdex_ds_iterate_map_float_float_next(struct hyperdex_ds_iterator* iter,
                                             double* key, double* val);
\end{ccode}

Return the next pair of (float, float) in the map. This function will return 1 if an element is returned, 0 if there are no elements to return, and -1 if the map is malformed.

\subsection{Memory Management Utilities}
\label{sec:api:c:client:memory}

The data structures API provides utility functions for allocating structures from the arena, obviating the need to free them individually.

\begin{ccode}
struct hyperdex_client_attribute*
hyperdex_ds_allocate_attribute(struct hyperdex_ds_arena* arena, size_t sz);
\end{ccode}

Allocate an array of \code{struct hyperdex\_client\_attribute}.
On success, this function returns a non-null pointer containing \code{sz} elements. On failure, the function returns \code{NULL}, indicating that memory allocation failed. The memory will remain valid until the arena is destroyed and should not be free'd independently by the application.

\begin{ccode}
struct hyperdex_client_attribute_check*
hyperdex_ds_allocate_attribute_check(struct hyperdex_ds_arena* arena, size_t sz);
\end{ccode}

Allocate an array of \code{struct hyperdex\_client\_attribute\_check}. On success, this function returns a non-null pointer containing \code{sz} elements. On failure, the function returns \code{NULL}, indicating that memory allocation failed. The memory will remain valid until the arena is destroyed and should not be free'd independently by the application.

\begin{ccode}
struct hyperdex_client_map_attribute*
hyperdex_ds_allocate_map_attribute(struct hyperdex_ds_arena* arena, size_t sz);
\end{ccode}

Allocate an array of \code{struct hyperdex\_client\_map\_attribute}. On success, this function returns a non-null pointer containing \code{sz} elements. On failure, the function returns \code{NULL}, indicating that memory allocation failed. The memory will remain valid until the arena is destroyed and should not be free'd independently by the application.

\section{Attributes}
\label{sec:api:c:client:attributes}

In HyperDex, {\em attributes} specify the named values that comprise an object. For instance, in Chapter~\nameref{chap:quick-start}, the phonebook space has attributes ``username'', ``first'', ``last'', and ``phone''. The C API represents such attributes using \code{struct hyperdex\_client\_attribute}. The C definition of this struct is:

\begin{ccode}
struct hyperdex_client_attribute
{
    const char* attr; /* NULL-terminated */
    const char* value;
    size_t value_sz;
    enum hyperdatatype datatype;
};
\end{ccode}

This struct specifies the name, value, and data type of the attribute. The \code{attr} field is a NULL-terminated C-string that names the attribute affected by the value. The \code{value} and \code{value\_sz} fields contain a properly formatted byte string and its size. The \code{datatype} field indicates the encoding of the byte string.

The interpretation of an attribute is dependent upon the operation being performed. In the case of \code{hyperdex\_client\_put}, the attributes directly convey the values to be stored; \code{hyperdex\_client\_get} returns the stored attributes. Other operations, such as \code{hyperdex\_client\_string\_prepend}, interpret the attribute as an argument to the operation. In the case of prepend, the attribute specifies the value to be prepended.

\section{Map Attributes}
\label{sec:api:c:client:map-attributes}

Some HyperDex operations affect key-value pairs contained within maps. These operations use \code{struct hyperdex\_client\_map\_attribute} to specify the name of the attribute affected, the key within the map, and the value associated with that key. The C definition of this struct is:

\begin{ccode}
struct hyperdex_client_map_attribute
{
    const char* attr; /* NULL-terminated */
    const char* map_key;
    size_t map_key_sz;
    enum hyperdatatype map_key_datatype;
    const char* value;
    size_t value_sz;
    enum hyperdatatype value_datatype;
};
\end{ccode}

This struct specifies the name, key, and value to be used for the operation. The \code{map\_key\_*} and \code{value\_*} fields specify, respectively, the key and value of the element within the map. These fields are specified as byte strings with associated data types.
The \code{attr} field specifies the name of the map.

\section{Predicates}
\label{sec:api:c:client:predicates}

In HyperDex, a {\em predicate} is an expression about an attribute that is true or false. Predicates are specified to HyperDex using \code{struct hyperdex\_client\_attribute\_check}, which is defined as:

\begin{ccode}
struct hyperdex_client_attribute_check
{
    const char* attr; /* NULL-terminated */
    const char* value;
    size_t value_sz;
    enum hyperdatatype datatype;
    enum hyperpredicate predicate;
};
\end{ccode}

Note that this struct closely resembles \code{struct hyperdex\_client\_attribute}, with the addition of a field named \code{predicate}. This field is an enum with the following values:

\begin{description}
\item[\code{HYPERPREDICATE\_FAIL}] Always fail.
\item[\code{HYPERPREDICATE\_EQUALS}] Check that the existing value is equal to the one specified by \code{value}/\code{value\_sz}.
\item[\code{HYPERPREDICATE\_LESS\_EQUAL}] Check that the existing value is less than or equal to the one specified by \code{value}/\code{value\_sz}.
\item[\code{HYPERPREDICATE\_GREATER\_EQUAL}] Check that the existing value is greater than or equal to the one specified by \code{value}/\code{value\_sz}.
\item[\code{HYPERPREDICATE\_REGEX}] Check that the existing value matches the regular expression stored as a string in \code{value}/\code{value\_sz}.
\item[\code{HYPERPREDICATE\_LENGTH\_EQUALS}] Check that the existing container or string has a length equal to the integer stored in \code{value}/\code{value\_sz}.
\item[\code{HYPERPREDICATE\_LENGTH\_LESS\_EQUAL}] Check that the existing container or string has a length less than or equal to the integer stored in \code{value}/\code{value\_sz}.
\item[\code{HYPERPREDICATE\_LENGTH\_GREATER\_EQUAL}] Check that the existing container or string has a length greater than or equal to the integer stored in \code{value}/\code{value\_sz}.
\item[\code{HYPERPREDICATE\_CONTAINS}] Check that the container contains an element matching \code{value}/\code{value\_sz}.
\end{description}

\section{Error Handling}
\label{sec:api:c:client:error-handling}

Every call in the client provides a means for reporting failure. After each call, your application should check for the error and react appropriately. Depending upon the error, your application may retry the request, or may need to take more drastic action.

Errors are typically reported via \code{enum hyperdex\_client\_returncode}, defined in \code{hyperdex/client.h}. Values for this enum fall into three categories: values returned during normal operation, values returned to indicate anticipated errors, and values that should never be returned in practice.

The common-case returncodes are:

\begin{description}
\item[\code{HYPERDEX\_CLIENT\_SUCCESS}] The operation was successful and no errors occurred.
\item[\code{HYPERDEX\_CLIENT\_NOTFOUND}] The operation finished because the requested object was not found.
\item[\code{HYPERDEX\_CLIENT\_SEARCHDONE}] An operation that potentially returns multiple objects (for instance, a search) has finished and will no longer be returned via loop.
\item[\code{HYPERDEX\_CLIENT\_CMPFAIL}] The predicate specified as part of a conditional operation was not true.
\item[\code{HYPERDEX\_CLIENT\_READONLY}] The cluster is in read-only mode and not accepting write operations.
\end{description}

The following errors stem from environmental problems or problems with the application's usage of HyperDex. These errors are generally easy to remedy and are anticipated by the client library.
\begin{description} \item[\code{HYPERDEX\_CLIENT\_UNKNOWNSPACE}] The specified space does not exist. Ensure that a space exists before trying to manipulate its data. \item[\code{HYPERDEX\_CLIENT\_COORDFAIL}] The connection to the coordinator has failed. The application should back off before retrying. Note that the connection to the coordinator is not a simple TCP connection, and is redundant if the coordinator consists of multiple servers. This error indicates to the application that the redundancy has failed and that the application should back-off. \item[\code{HYPERDEX\_CLIENT\_SERVERERROR}] A server returned a nonsensical result to the client library. Generally retrying the request should be sufficient to overcome the problem. \item[\code{HYPERDEX\_CLIENT\_POLLFAILED}] The poll system call failed in an unexpected manner. Typically, this means that the application using the HyperDex library has mismanaged its file descriptors and improperly altered descriptors in use by the HyperDex library. This generally indicates that there is a bug in the HyperDex client library, the application using the library, or both. \item[\code{HYPERDEX\_CLIENT\_OVERFLOW}] An integer operation failed to complete because it would have resulted in signed overflow. \item[\code{HYPERDEX\_CLIENT\_RECONFIGURE}] The server responsible for managing the operation failed while the operation was in-flight. \item[\code{HYPERDEX\_CLIENT\_TIMEOUT}] The \code{hyperdex\_client\_loop} operation exceeded its timeout without completing an outstanding operation. This does not affect the status of any outstanding operation. \item[\code{HYPERDEX\_CLIENT\_UNKNOWNATTR}] The operation references an attribute that is not part of the space. Make sure to use attributes within the space's schema. \item[\code{HYPERDEX\_CLIENT\_DUPEATTR}] The operation references an attribute multiple times in a way that is not permitted. \item[\code{HYPERDEX\_CLIENT\_NONEPENDING}] The \code{hyperdex\_client\_loop} call was made, but no operations were outstanding. This generally indicates that the loop call was called too many times for the operations issued. \item[\code{HYPERDEX\_CLIENT\_DONTUSEKEY}] The operation attempted to mutate the key, which is not permitted. \item[\code{HYPERDEX\_CLIENT\_WRONGTYPE}] The attribute or predicate was not compatible with the type in the space's schema. For example, the operation may have attempted to issue a PUT operation that writes an integer to a non-integer datatype, or tries to perform string operations on a non-string data type. Check that the operation performed is compatible with the data type of the affected attributes. \item[\code{HYPERDEX\_CLIENT\_NOMEM}] The library failed to allocate memory. \item[\code{HYPERDEX\_CLIENT\_INTERRUPTED}] The HyperDex library was interrupted by a signal. Read more about signals in Section~\ref{sec:api:c:client:signals}. \item[\code{HYPERDEX\_CLIENT\_CLUSTER\_JUMP}] The client library started receiving configurations for a different HyperDex cluster. This can happen if a new coordinator is restarted on the same address that the client connects to. This error will not be persistent. The client library will switch to the new cluster configuration, and this error just serves as a notification to the application. \item[\code{HYPERDEX\_CLIENT\_OFFLINE}] All servers responsible for handling the specified operation are currently offline and unavailable, whether due to failure or planned downtime. 
\end{description}

The following errors indicate significant bugs within the client or application. In practice they should never happen and indicate bugs within HyperDex itself. They are used as one would use an \code{assert} statement to enforce an invariant.

\begin{description}
\item[\code{HYPERDEX\_CLIENT\_INTERNAL}] One or more of the HyperDex client library's internal invariants have been broken. It's best to destroy and recreate the client.
\item[\code{HYPERDEX\_CLIENT\_EXCEPTION}] The C library is implemented internally using C++. An unhandled C++ exception was caught at the C boundary. This indicates a bug in HyperDex, and exists only as a safeguard. Applications should never see this error.
\item[\code{HYPERDEX\_CLIENT\_GARBAGE}] This value is reserved as a well-defined value that the library will never return, and that is not used as a constant anywhere else within HyperDex.
\end{description}

Note that an asynchronous application should distinguish between {\em local} errors, which affect one outstanding operation, and {\em global} errors, which transiently affect all operations but do not change the completion status of those operations.

Local errors are always returned via the \code{enum hyperdex\_client\_returncode*} pointer passed at the time the application initiated the operation. These errors may either result from the call returning a negative operation id, in which case the operation immediately completes, or from the result of a successful \code{hyperdex\_client\_loop} call. In either case, the error has no impact on the result of any other operation.

Global errors are always returned via the \code{enum hyperdex\_client\_returncode*} pointer passed to the most recent invocation of \code{hyperdex\_client\_loop}. These errors are not localized to any particular operation and indicate errors that are non-permanent. Example global errors include application-requested timeouts, interruptions by signals, and temporary non-connectivity with the coordinator.

\section{Operations}
\label{sec:api:c:client:ops}

\input{\topdir/c/client/ops}

\pagebreak

\section{Working with Signals}
\label{sec:api:c:client:signals}

The HyperDex client library is signal-safe. Should a signal interrupt the client during a blocking operation, it will return \code{HYPERDEX\_CLIENT\_INTERRUPTED}.

\section{Working with Threads}
\label{sec:api:c:client:threads}

The HyperDex client library is fully reentrant. Instances of \code{struct hyperdex\_client} and their associated state may be accessed from multiple threads, provided that the application employs its own synchronization that provides mutual exclusion.

Put simply, a multi-threaded application should protect each \code{struct hyperdex\_client} instance with a mutex or lock to ensure correct operation.
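For illustration only, one minimal way to follow that advice is to pair the client with a POSIX mutex and funnel every call through the lock. The signature assumed below for \code{hyperdex\_client\_loop} (client, timeout, status pointer) is an assumption and should be checked against your installed \code{hyperdex/client.h}; link with \code{-pthread} in addition to the flags shown earlier.

\begin{ccode}
/* Illustrative sketch: serialize access to a shared client with a mutex.
 * The hyperdex_client_loop signature (client, timeout, status pointer) is
 * an assumption to be checked against hyperdex/client.h. */
#include <stdint.h>
#include <pthread.h>
#include <hyperdex/client.h>

struct locked_client
{
    pthread_mutex_t mtx;
    struct hyperdex_client* client;
};

int
locked_client_init(struct locked_client* lc, const char* coordinator, uint16_t port)
{
    if (pthread_mutex_init(&lc->mtx, NULL) != 0)
    {
        return -1;
    }

    lc->client = hyperdex_client_create(coordinator, port);

    if (!lc->client)
    {
        pthread_mutex_destroy(&lc->mtx);
        return -1;
    }

    return 0;
}

/* Every thread touches the client only while holding the lock, so the
 * library's internal state is never accessed concurrently. */
int64_t
locked_client_loop(struct locked_client* lc, int timeout,
                   enum hyperdex_client_returncode* status)
{
    pthread_mutex_lock(&lc->mtx);
    int64_t ret = hyperdex_client_loop(lc->client, timeout, status);
    pthread_mutex_unlock(&lc->mtx);
    return ret;
}

void
locked_client_destroy(struct locked_client* lc)
{
    hyperdex_client_destroy(lc->client);
    pthread_mutex_destroy(&lc->mtx);
}
\end{ccode}

Operations such as put or get would be wrapped the same way. A coarse lock like this trades some concurrency for simplicity; giving each thread its own client instance is an equally valid design.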
{ "alphanum_fraction": 0.7488985173, "avg_line_length": 42.2386100386, "ext": "tex", "hexsha": "85952b9bfdac70097e13b6b1f9de4cda5fb3c4fc", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8a1ca06bd387e6bd9d57bce6292d93f22d23c680", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "derekchiang/HyperDex", "max_forks_repo_path": "doc/c/client.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8a1ca06bd387e6bd9d57bce6292d93f22d23c680", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "derekchiang/HyperDex", "max_issues_repo_path": "doc/c/client.tex", "max_line_length": 108, "max_stars_count": null, "max_stars_repo_head_hexsha": "8a1ca06bd387e6bd9d57bce6292d93f22d23c680", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "derekchiang/HyperDex", "max_stars_repo_path": "doc/c/client.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 13587, "size": 54699 }
\section{Introduction} \label{s:intro} In this paper we consider reproducibility in the computational sciences. We interpret the computational sciences as including computer science, but also any other science in which computational work plays an important role, such as physics, biology or psychology. The central hypothesis of this paper is that reproducibility is a cornerstone of the scientific method, and should therefore be a cornerstone of the computational sciences. We ask to what extent this is true and what challenges arise in reproducing work in the computational sciences. To study these questions, we perform four case studies that consider different issues related to reproducibility in the computational sciences. Each case study takes on a different aspect of reproducibility, identifies any problems encountered, and discusses the points that they have raised. An often-overlooked fact is that reproducibility in the computational sciences is not only an issue of the actual computations, but also of preliminaries and other setup that is not related to computation at all. The first case study asks how reproducibility is affected by differing ethics requirements at different universities. Ethics approval has to be obtained for any experiments involving human subjects, such as the ones that are common in human-computer interaction research, but different institutions put emphasis on different aspects. How reproducible is research that is ``conditioned'' on one particular ethics form? The second case study considers whether parallel and distributed computational experiments are more or less reproducible than serial ones. Computational experiments are at the core of much research, such as in artificial intelligence, where the motivation for this case study lies. In recent years, multi-core and multi-processor machines have become increasingly prevalent. To make use of this increased processing power, experiments need to utilise multiple resources at once. But is this inherently detrimental to reproducibility? While computer science may account for a significant fraction of computational experiments, they are important in almost every scientific discipline. In environmental science, medicine, physics and chemistry for example, simulations of physical processes help scientists gain insights. Our third case study is motivated by this fact and asks how reproducible computational scientific experiments are from disciplines other than computer science. We consider experiments from several research areas. Given that researchers in other areas use computation only as a tool and may not be as aware of issues related to reproducibility as computer scientists, does reproducibility suffer? Our final case study considers whether reproducible for one person is the same as reproducible for another. It asks how reproducible is the data analysis of a published experiment in human-computer interaction where the author made significant efforts to make it reproducible. While it is reasonable to assume that the author of an experiment can conduct it in a way that enables her or him to do it again, having another person do it is an entirely different matter. There may be implicit assumptions that are not specified, background knowledge assumed, or environmental aspects unconsidered. Can a carefully prepared experiment be reproduced by someone with no specific background in the area? 
Most of the work for this paper was performed during the Summer School on Experimental Methodology in Computational Science Research.\footnote{St Andrews, Scotland, August 4-8, 2014, \url{http://blogs.cs.st-andrews.ac.uk/emcsr2014/}} Indeed, a highly provisional first draft of this paper was completed by the end of the school~\cite{emcsr_arxiv_draft}, with the ensuing weeks used in completing work and writing. The first three case studies were selected by discussing the interests of participants and forming groups and research questions around these. Therefore, these case studies were performed by subsets of the authors, and are presented below in Sections~\ref{s:group1}~--~\ref{s:group3}. The final case study was led by a lecturer of the summer school, was performed by all participants, and is presented in Section~\ref{s:group4}. The case studies we consider are not exhaustive and no single one can give complete answers. Instead, they shine spotlights on specific, important areas related to the issue of reproducibility in the computational sciences. Furthermore, each raises interesting questions for future consideration by researchers interested in the reproducibility of computational experiments, be it trying to reproduce someone else's experiments, or making their own experiments reproducible. Additionally, we consider the meta-level problem of how to make this very paper reproducible. We have striven to make this paper open, reproducible, and executable. The entire edit history, including paper and many aspects of the case studies, is available openly on GitHub and was indeed open from the start of writing the paper~\cite{summerschoolpaper}. The paper has executable aspects through the integration of R and \LaTeX\ via the R package Sweave~\cite{lmucs-papers:Leisch:2002}, so that as the underlying data change, new tables and figures can be regenerated automatically. Finally, we have endeavoured to make it reproducible through measures such as providing virtual machines (VMs) for aspects of our work, not least a VM in which the paper itself can be rebuilt, and we welcome readers to attempt to reproduce it - either locally or using cloud computing.
{ "alphanum_fraction": 0.8244179847, "avg_line_length": 59.8617021277, "ext": "tex", "hexsha": "f2c144da3e9158b41788376883b4e24c6dc5f0e0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0699d98f37446c8d8b124051d9c3bfc7efd81e86", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "larskotthoff/recomputation-ss-paper", "max_forks_repo_path": "SummerSchoolPaper/intro.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0699d98f37446c8d8b124051d9c3bfc7efd81e86", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "larskotthoff/recomputation-ss-paper", "max_issues_repo_path": "SummerSchoolPaper/intro.tex", "max_line_length": 99, "max_stars_count": null, "max_stars_repo_head_hexsha": "0699d98f37446c8d8b124051d9c3bfc7efd81e86", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "larskotthoff/recomputation-ss-paper", "max_stars_repo_path": "SummerSchoolPaper/intro.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1146, "size": 5627 }
\documentclass[12pt]{amsart} \usepackage{enumerate} \usepackage[colorlinks=true, linkcolor=blue, urlcolor=blue, citecolor=blue, anchorcolor=blue, pdfborder={0 0 0}]{hyperref} \usepackage{url} \usepackage{graphicx,color} \usepackage{cite} \usepackage{amsthm, amsmath, amssymb} \usepackage{mathtools} \usepackage[top=45truemm, bottom=45truemm, left=30truemm, right=30truemm]{geometry} \usepackage{nicefrac} \usepackage{cancel} \usepackage{float} \usepackage{tabularx} \usepackage{makecell} \usepackage{array} \usepackage{ragged2e}
\newcolumntype{P}[1]{>{\RaggedRight\hspace{0pt}}p{#1}}
\newcolumntype{L}{>{\begin{math}}l<{\end{math}}}%
\newcolumntype{C}{>{\begin{math}}c<{\end{math}}}%
\newcolumntype{R}{>{\begin{math}}r<{\end{math}}}%
\newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{example}[theorem]{Example} \theoremstyle{definition} \newtheorem{remark}[theorem]{Remark} \setlength{\headsep}{2em} \setlength{\skip\footins}{1.4pc plus 5pt minus 2pt} \title[Engel Expansions in Collatz Sequences]{The Role of Engel Expansions in Collatz Sequences} \author[F.\ Last1]{\href{https://orcid.org/0000-0001-5257-2236}{\includegraphics[scale=0.06]{orcid.png}}\hspace{1mm}Eldar Sultanow} \address{Eldar Sultanow\\ Capgemini Deutschland GmbH\\ Bahnhofstraße 30\\ 90402 Nuremberg\\ Germany} \curraddr{} \email{[email protected]} \author[F.\ Last2]{First Last} \address{First Lastname\\ Graduate School of Mathematics\\ XYZ University\\ City\\ Adresszusatz\\ ZIP\\ Germany} \curraddr{} \email{[email protected]} \subjclass[2010]{11B99} \keywords{Engel Expansions, Collatz Sequences} \begin{document} \begingroup \let\MakeUppercase\relax \maketitle \endgroup \begin{abstract} The Collatz conjecture is a number-theoretic problem that has puzzled countless researchers using myriad approaches. Presently, there are scarcely any investigations that treat the problem from the angle of the question ``what are the corner cases of Collatz sequences?''. We pursue this question and to this end examine ascending continued fractions -- the so-called Engel expansions. We demonstrate that Engel expansions form worst-case sequences $v_1,v_2,\ldots,v_n,v_{n+1}$ that maximize $v_{n+1}$. We show that maximizing the product $(1+\nicefrac{1}{3v_1})(1+\nicefrac{1}{3v_2})\cdots(1+\nicefrac{1}{3v_n})(1+\nicefrac{1}{3v_{n+1}})$ requires us to minimize $v_1$ and to maximize $v_{n+1}$ as well as the number of divisions by two within the sequence. \end{abstract} \section{Introduction} \label{introduction} The Collatz conjecture is a well-known number theory problem and is the subject of numerous publications. An overview is provided by Lagarias \cite{Ref_Lagarias_2010}. Therefore, our description of the topic will be brief.
The mathematician Lothar Collatz introduced a function $g:\mathbb{N}\rightarrow\mathbb{N}$ as follows: \begin{equation} \label{eq:func_collatz} g(x)= \begin{cases} 3x+1 & 2\nmid x\\ x/2 & \text{otherwise} \end{cases} \end{equation} \par\medskip In the following, we only consider compressed Collatz sequences that contain only the odd members, as described by Bruckman \cite{Ref_Bruckman_2008}, who used the more convenient function that skips all even integers: \begin{equation} \label{eq:func_collatz_odd} f(x)=(3x+1)\cdot2^{-\alpha(x)},\text{where}\hspace{1em}2^{\alpha(x)}\mathrel\Vert(3x+1) \end{equation} \par\medskip Note that $\alpha(x)$ is the largest possible exponent for which $2^{\alpha(x)}$ exactly divides $3x+1$. Especially for prime powers, one often says that $p^\alpha$ \textit{divides} the integer $x$ \textit{exactly}, denoted as $p^\alpha\mathrel\Vert x$, if $p^\alpha$ is the greatest power of the prime $p$ that divides $x$. \newpage \begin{definition} \label{def:halting_conditions} A (compressed) Collatz sequence $v_1,v_2,\ldots,v_n,v_{n+1}$ ends when one of the following two conditions is reached: \[ \begin{array}{ll} \text{1.}&v_{n+1}=1\\[\medskipamount] \text{2.}&v_{n+1}\in\{v_1,v_2,v_3,\ldots,v_n\} \end{array} \] When the first condition applies, the Collatz conjecture is true for the specific sequence. If the second condition is fulfilled, the sequence has led to a cycle. \end{definition} \par\bigskip\noindent Maximizing the product $(1+\nicefrac{1}{3v_1})(1+\nicefrac{1}{3v_2})\cdots(1+\nicefrac{1}{3v_n})(1+\nicefrac{1}{3v_{n+1}})$ means searching for a sequence of odd numbers $v_1,v_2,\ldots$ that is as small as possible. One could try to use the arithmetic\footnote{An arithmetic sequence is a sequence ``in which each term is a constant $q$ more than the previous term'' \cite[p.~206]{Ref_Brualdi_2010}.} sequence of odd integers $v_i=2i-1$ (beginning at $v_1=1$), but will find that in this case the product does not converge to a limit value, as shown in appendix~\ref{appx:pochhammer}. In a Collatz sequence $v_1,v_2,\ldots,v_n,v_{n+1}$ at least one division by $2$ occurs between two successive members. Performing only this single division between two successive members maximizes $v_{n+1}$. Such a sequence forms the following ascending continued fraction (cf. also \cite[p.~11]{Ref_Laarhoven}): \begin{equation} \label{eq:engel_k3} v_{n+1}=\cfrac{3\cfrac{3\cfrac{3\cfrac{3v_1+1}{2}+1}{2}+1}{2}+1}{2}\dotsb =\frac{3^nv_1+\sum_{i=0}^{n-1}3^i2^{n-1-i}}{2^n} =\frac{3^n(v_1+1)-2^n}{2^n} \end{equation} The last step uses the identity $\sum_{i=0}^{n-1}3^i2^{n-1-i}=3^n-2^n$. \medskip \begin{example} \label{ex:engel_31} A concrete example of such a sequence is $v_1=31$, $v_2=47$, $v_3=71$, $v_4=107$, $v_5=161$. Following this example, we can calculate $v_5$ in a straightforward way: \[ v_5=v_{n+1}=\frac{3^4(31+1)-2^4}{2^4}=161 \] \par\medskip Moreover, by choosing a starting number $v_1=2^{n+1}-1$, we can generate infinitely many sequences, each forming an ascending continued fraction. As per equation~\ref{eq:engel_k3} the last member in such a sequence is the odd number $v_{n+1}=3^n\cdot2-1$. \end{example} \medskip \begin{remark} Ascending variants of a continued fraction, such as the one used in equation~\ref{eq:engel_k3}, should not be confused with continued fractions as treated in \cite{Ref_Moore}, \cite{Ref_Hensley}, \cite{Ref_Borwe_etal}. The ascending continued fractions used in our case correspond to the so-called ``Engel expansions'' \cite{Ref_Kraaikamp_Wu}.
\end{remark} \par\noindent As illustrated below, we can formulate the ascending continued fractions in a generalized fashion, where the analogy to equation~\ref{eq:engel_k3} is given by $b_1=b_2=b_3=b_4=2$ and $a_1=3^0$, $a_2=3^1$, $a_3=3^2$ and $a_4=3^3+3^4v_1$: \[ \cfrac{a_1+\cfrac{a_2+\cfrac{a_3+\cfrac{a_4}{b_4}}{b_3}}{b_2}}{b_1}\dotsb=\frac{a_1}{b_1}+\frac{a_2}{b_1b_2}+\frac{a_3}{b_1b_2b_3}+\frac{a_4}{b_1b_2b_3b_4}+\cdots \] \par\medskip The generalized form of equation~\ref{eq:engel_k3} may be used to compute any of the above-named ascending continued fractions that have $a_i=k^{i-1}$, $b_i=b$ for $i\in\mathbb{N}$ and $a_n=k^{n-1}+k^nv_1$: \begin{equation} \label{eq:generalized_asc_continued_fraction} v_{n+1}=\frac{k^n(kv_1-bv_1+1)-b^n}{b^n(k-b)} \end{equation} \par\medskip\noindent Table~\ref{table:engel_expansions_k_1_3_5_7} provides, for $k=1,3,5,7$, the formula that calculates $v_{n+1}$ of an Engel expansion, along with some example sequences. We obtained these formulas by inserting $b=2$ and $k=1,3,5,7$ into equation~\ref{eq:generalized_asc_continued_fraction}. {\renewcommand{\arraystretch}{1.8} \begin{table}[H] \centering \begin{tabular}{|L|L|L|L|} \hline \thead{\boldsymbol{k}} & \thead{\textbf{equation for~}\boldsymbol{v_{n+1}}} & \thead{\textbf{example sequence}} & \thead{\textbf{resulting}~\boldsymbol{v_{n+1}}}\\ \hline 1 & v_{n+1}=\frac{v_1-1+2^n}{2^n} & 513,257,129,65,33,17,9,5,3 & v_9=\frac{513-1+2^8}{2^8}=3 \\ \hline 3 & v_{n+1}=\frac{3^n(v_1+1)-2^n}{2^n} & 127,191,287,431,647,971,1457 & v_7=\frac{3^6(127+1)-2^6}{2^6}=1457 \\ \hline 5 & v_{n+1}=\frac{5^n(3v_1+1)-2^n}{3\cdot2^n} & 85,213,533,1333,3333,8333 & v_6=\frac{5^5(3\cdot85+1)-2^5}{3\cdot2^5}=8333 \\ \hline 7 & v_{n+1}=\frac{7^n(5v_1+1)-2^n}{5\cdot2^n} & 51,179,627,2195,7683 & v_5=\frac{7^4(5\cdot51+1)-2^4}{5\cdot2^4}=7683 \\ \hline \end{tabular} \caption{Some exemplary Engel expansions for $b=2$ and $k=1,3,5,7$} \label{table:engel_expansions_k_1_3_5_7} \end{table}} \section{Including more divisions by two in an Engel expansion} \label{sec:include_divisions_engel_expansion} For calculating the largest possible $v_{n+1}$, we have so far considered Engel expansions which contain only $n$ divisions by two within a Collatz sequence of $n+1$ members. In the following we include $m$ additional divisions by two and thus a total of $m+n$ divisions. \par\medskip Our starting point again is a Collatz sequence $v_1,v_2,\ldots,v_{n+1}$ consisting of $n+1$ members. We have a total of $m+n$ divisions by two that are distributed over $n$ positions: \begin{equation} \label{eq:alpha_n_m} \alpha=n+m=\alpha_1+\alpha_2+\cdots+\alpha_n \end{equation} The number of positive integer solutions of the Diophantine equation above is given by the following binomial coefficient, which is commonly used to calculate combinations with repetition, see \cite[p.~54]{Ref_Brualdi_2010}: \[ \binom{n+(\alpha-n)-1}{\alpha-n}=\binom{n+m-1}{m} \] \par\medskip Each of these solutions describes a possible way to distribute the $m+n$ divisions by two across $n$ positions within a Collatz sequence containing $n+1$ members. But how do we distribute these divisions in such a way that $v_{n+1}$ becomes maximal? This is precisely the case if as many divisions as possible are performed right at the beginning, which is illustrated in appendix~\ref{appx:permuting_divisions}.
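\par\medskip\noindent For instance, as a quick illustrative check of this counting, take $n=3$ positions and $m=2$ additional divisions, so that $\alpha=5$. There are
\[
\binom{n+m-1}{m}=\binom{4}{2}=6
\]
admissible distributions, namely $(\alpha_1,\alpha_2,\alpha_3)\in\{(3,1,1),(1,3,1),(1,1,3),(2,2,1),(2,1,2),(1,2,2)\}$. Among these, the distribution $(3,1,1)$, which performs as many divisions as possible right at the beginning, maximizes $v_{n+1}$.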
The Engel expansion of this case (that maximizes the value of $v_{n+1}$) provides the following formula for calculating the last sequence member $v_{n+1}$: \begin{equation} \label{eq:engel_more_divisions} v_{n+1}=\left(\frac{k}{2}\right)^{n-1}\left(\frac{kv_1+1}{2^{\alpha_1}}+\frac{1}{k-2}\right)-\frac{1}{k-2}=\left(\frac{k}{2}\right)^{n-1}\left(v_2+\frac{1}{k-2}\right)-\frac{1}{k-2} \end{equation} \medskip \begin{example} \label{ex:engel_67} We choose the sequence for $k=5$ starting at $v_1=67$ and continuing with $v_2=21$, $v_3=53$, $v_4=133$, $v_5=333$, $v_6=833$ and $v_7=2083$. The last sequence member can be calculated by equation~\ref{eq:engel_more_divisions} directly as follows: \[ v_7=\left(\frac{5}{2}\right)^{6-1}\left(21+\frac{1}{5-2}\right)-\frac{1}{5-2}=2083 \] \end{example} \par\medskip\noindent Moreover, theorem~\ref{theo:permutation} bounds the effect that the division permutations have on the last sequence member $v_{n+1}$. As demonstrated in appendix~\ref{appx:permuting_divisions}, the maximum impact on $v_{n+1}$ of permuting divisions by two in a given Collatz sequence can be calculated as the difference between the highest and lowest possible values of $v_{n+1}$. \par\medskip \begin{theorem} \label{theo:permutation} Let $v_1,v_2,\ldots,v_n,v_{n+1}$ be a sequence in which a total of $n+m$ divisions by two took place. No matter how these divisions are permuted, i.e. performed sooner or later, the last member $v_{n+1}$ can differ by at most the following product: \[ \left(\frac{3^{n-1}}{2^{n-1}}-1\right)\left(1-\frac{1}{2^m}\right) \] \end{theorem} \newpage \section{Sum of reciprocated Collatz members} \label{sum_reciprocal_vertices} A product $\prod(1+a_n)$ with positive terms $a_n$ is convergent if the series $\sum a_n$ converges, see Knopp \cite[p.~220]{Ref_Knopp}. A similar statement is provided by Murphy \cite{Ref_Murphy}, who writes the factors in the form $c_n=1+a_n$ and explains that if $\prod c_n$ is convergent then $c_n\rightarrow1$, and therefore if $\prod (1+a_n)$ is convergent then $a_n\rightarrow0$. \par\medskip We write the sum of reciprocated Collatz members as $\nicefrac{1}{kv_1}+\nicefrac{1}{kv_2}+\ldots+\nicefrac{1}{kv_n}+\nicefrac{1}{kv_{n+1}}$.
In order to formulate this sum independently of the successive members $v_2,v_3,\ldots$, we substitute these as follows: \begin{flalign} v_1&=v_1\notag\\ v_2&=\frac{kv_1+1}{2^{\alpha_1}}\notag\\ v_3&=\frac{k^2v_1+k+2^{\alpha_1}}{2^{\alpha_1+\alpha_2}}\notag\\ v_4&=\frac{k^3v_1+k^2+k\cdot2^{\alpha_1}+2^{\alpha_1+\alpha_2}}{2^{\alpha_1+\alpha_2+\alpha_3}}\label{eq:sum_v_4}\\ \vdots\notag\\ v_{n+1}&=\frac{k^nv_1+\sum_{j=1}^{n}k^{j-1}2^{\alpha_1+\ldots+\alpha_n-\sum_{l>n-j}\alpha_l}}{2^{\alpha_1+\ldots+\alpha_n}}\label{eq:sum_v_n_plus_1} \end{flalign} \par\medskip The sum of the reciprocal Collatz sequence members can be expressed as a term that depends only on $v_1$ and on the numbers of divisions by two $\alpha_1,\alpha_2,\alpha_3,\ldots$ between successive members: \begin{equation*} \sum_{i=1}^{n+1}\frac{1}{kv_i}=\frac{1}{k}\left(\frac{1}{v_1}+\sum_{i=1}^{n}\frac{1}{v_{i+1}}\right)=\frac{1}{k}\left(\frac{1}{v_1}+\sum_{i=1}^{n}\frac{2^{\alpha_1+\ldots+\alpha_i}}{k^iv_1+\sum_{j=1}^{i}k^{j-1}2^{\alpha_1+\ldots+\alpha_n-\sum_{l>i-j}\alpha_l}}\right) \end{equation*}
%\vspace{1em}
\section{The product of reciprocated Collatz members incremented by one} \label{appx:product_formula_depending_v1} In a similar way as we deduced the sum of reciprocal members depending only on $v_1$ in section~\ref{sum_reciprocal_vertices}, we derive the formula for the product of reciprocated Collatz members (incremented by one): \begin{flalign} \prod_{i=1}^{n+1}\left(1+\frac{1}{kv_i}\right)&=1+\frac{2^{\alpha_1+\ldots+\alpha_n}+k\cdot2^{\alpha_1+\ldots+\alpha_{n-1}}+\ldots+k^{n-1}\cdot2^{\alpha_1}+k^n}{k^{n+1}v_1}\label{eq:prod_sum_v_n_plus_1}\\ &=1+\frac{2^{\alpha_1+\ldots+\alpha_n}+k\cdot\sum_{j=1}^{n}k^{j-1}2^{\alpha_1+\ldots+\alpha_n-\sum_{l>n-j}\alpha_l}}{k^{n+1}v_1}\label{eq:prod_sum_v_n_plus_1_inserted}\\ &=\frac{2^{\alpha_1+\ldots+\alpha_n}\left(1+kv_{n+1}\right)}{k^{n+1}v_1}\label{eq:prod_sum_v_n_plus_1_simplified} \end{flalign} We inserted the sum used in equation~\ref{eq:sum_v_n_plus_1} into equation~\ref{eq:prod_sum_v_n_plus_1} and thereby obtained equation~\ref{eq:prod_sum_v_n_plus_1_inserted}. Now let us divide this product by the last factor in order to retrieve the product which iterates to $n$ instead of $n+1$: \begin{equation} \label{eq:prod_sum_v_n_simplified} \prod_{i=1}^{n}\left(1+\frac{1}{kv_i}\right)=\frac{\prod_{i=1}^{n+1}\left(1+\frac{1}{kv_i}\right)}{\frac{kv_{n+1}+1}{kv_{n+1}}}=\frac{2^{\alpha_1+\ldots+\alpha_n}v_{n+1}}{k^nv_1} \end{equation} \par\medskip The above equation~\ref{eq:prod_sum_v_n_simplified} simplifies when we replace the numerator by equation~\ref{eq:prod_sum_v_n_plus_1_simplified}. The question of which sequence maximizes its last member $v_{n+1}$ thus ties into the question of which sequence maximizes the product. The product formula~\ref{eq:prod_sum_v_n_simplified} does not depend on all members $v_1,v_2,\ldots,v_n$; it depends only on $2^\alpha=2^{\alpha_1+\ldots+\alpha_n}$, on the first sequence member $v_1$ and on the final one $v_{n+1}$. \section{Maximizing the product of reciprocated Collatz members} Consider a Collatz sequence containing $n$ elements and starting at a given integer $v_1$. The corresponding product given by equation~\ref{eq:prod_sum_v_n_simplified} becomes largest if we \begin{itemize} \item maximize the last member $v_{n+1}$, \item choose the smallest possible $v_1$ and \item choose an $\alpha=n+m$ as large as possible, where $m$ is the crucial lever.
\end{itemize} It is therefore a maximization problem over several variables. In the following we will reflect on how to approach a solution to the problem. \par\medskip\noindent {\bfseries\boldmath Minimizing $v_1$:} This is obvious and trivially achievable by iterating $v_1$ through small integers, starting at $v_1=1$. \par\medskip\noindent {\bfseries\boldmath Maximizing $\alpha$:} We must choose a Collatz sequence in which as many divisions by two as possible occur. Since $\alpha=n+m$, maximizing $\alpha$ is equivalent to maximizing $n$ or $m$. In general, adding more factors $(1+\nicefrac{1}{kv_i})$ can only increase the product. So, the larger $n$, the larger the product. Note that despite everything, $m$ is the key maximizer of the product -- these additional divisions by two reduce the sequence members $v_i$. \par\medskip\noindent {\bfseries\boldmath Maximizing $v_{n+1}$:} If we fix $v_1,\alpha,n$, then this maximum occurs when the sequence is an Engel expansion, that is, when we perform the most divisions by two at the beginning. Consequently, the exponent $\alpha$ (the total number of divisions by two) is the sum of a large $\alpha_1$ and the remaining $\alpha$ values, which are all one: \[ \alpha=n+m=\alpha_1+\alpha_2+\cdots+\alpha_n=\alpha_1+1+\cdots+1=\alpha_1+n-1 \] The product of reciprocated Collatz members (incremented by one) for such an Engel expansion is given by the following equation: \begin{flalign} \label{eq:prod_engel_more_divisions} \prod_{i=1}^{n}\left(1+\frac{1}{kv_i}\right)&=1+\frac{1}{kv_1}+\frac{2^{\alpha_1}}{k(k-2)v_1}\left(1-\left(\frac{2}{k}\right)^{n-1}\right)\\ \label{eq:prod_engel_more_divisions_v2} &=1+\frac{1}{kv_1}+\frac{kv_1+1}{k(k-2)v_1v_2}\left(1-\left(\frac{2}{k}\right)^{n-1}\right) \end{flalign} \medskip \begin{example} An example for $k=3$ is provided by the sequence $v_1=661$, $v_2=31$, $v_3=47$, and $v_4=71$. In this case $\alpha_1=6=m+1$ and $\alpha_2=\alpha_3=\alpha_4=1$. We now calculate the product of reciprocated Collatz sequence members by inserting $v_1=661$ and $v_2=31$ together with $k=3$ and $n=4$ into equation~\ref{eq:prod_engel_more_divisions_v2}: \begin{flalign*} \prod_{i=1}^{4}\left(1+\frac{1}{3v_i}\right)&= \left(1+\frac{1}{3\cdot661}\right)\left(1+\frac{1}{3\cdot31}\right)\left(1+\frac{1}{3\cdot47}\right)\left(1+\frac{1}{3\cdot71}\right)\\ &=1+\frac{1}{3\cdot 661}+\frac{3\cdot 661+1}{3\cdot(3-2)\cdot 661\cdot 31}\left(1-\left(\frac{2}{3}\right)^{4-1}\right)\\ &=1.0232158532713247 \end{flalign*} \end{example} \par\medskip Maximizing the product of reciprocated Collatz sequence members $\prod_{i=1}^{n}(1+\nicefrac{1}{kv_i})$ requires us to maximize equation~\ref{eq:prod_engel_more_divisions} or \ref{eq:prod_engel_more_divisions_v2}. This maximization is achieved by choosing $v_1$ as small as possible (which we already know). Additionally, we need to select the largest possible $\alpha_1=m+1$, or equivalently the lowest possible $v_2$ (which is clear as well). Lastly, this maximization also requires the choice of a value $n$ as large as possible. To find the largest product, we iterate the variables $v_1$ and $n$ independently through integers $1,2,\ldots$ and calculate the product for each combination of values using equation~\ref{eq:prod_engel_more_divisions_v2}. Alternatively, one can maximize equation~\ref{eq:prod_engel_more_divisions} or \ref{eq:prod_engel_more_divisions_v2} analytically for the different $k$.
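\par\medskip\noindent As a quick cross-check of the example above, one can also evaluate the product via equation~\ref{eq:prod_sum_v_n_simplified}: continuing the sequence one step gives $v_5=107$ (since $3\cdot71+1=214=2\cdot107$), so $\alpha_1+\alpha_2+\alpha_3+\alpha_4=9$ and
\[
\prod_{i=1}^{4}\left(1+\frac{1}{3v_i}\right)=\frac{2^9\cdot107}{3^4\cdot661}=\frac{54784}{53541}\approx1.0232158533,
\]
in agreement with the value calculated from equation~\ref{eq:prod_engel_more_divisions_v2}.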
\par\medskip Table~\ref{table:product_equations_k_3_5_7} provides, for $k=1,3,5,7$, the formula for calculating the Engel expansion's product. This table furthermore indicates the maximum cases along with the corresponding results for each product. These resulting product values have been determined empirically in \cite{Ref_Koch_etal}. Recall that we are not allowed to choose an arbitrarily large $n$ if we set $v_1=1$ because of the halting conditions given by definition~\ref{def:halting_conditions}. The case $v_1=3,v_2=5,n=\infty$ for $k=3$ disregards the halting condition; the value $n$ runs towards infinity and thus much further than allowed -- nevertheless, the product value converges towards $\nicefrac{4}{3}$. Note that for $n=1$ it does not matter what value $v_2$ takes -- it has no effect on the resulting product value. {\renewcommand{\arraystretch}{1.8} \begin{table}[H] \centering \begin{tabular}{|L|L|L|L|} \hline \thead{\boldsymbol{k}} & \thead{\textbf{product formula}} & \thead{\textbf{maximum case}} & \thead{\textbf{resulting product}}\\ \hline 1 & 1+\frac{1}{v_1} + \frac{v_1+1}{-v_1v_2}\left(1-2^{n-1}\right) & v_1=1,n=1 & 2 \\ \hline 3 & 1+\frac{1}{3v_1} + \frac{3v_1+1}{3v_1v_2}\left(1-\left(\frac{2}{3}\right)^{n-1}\right) & \makecell[l]{v_1=1,n=1\\v_1=3,v_2=5,n=\infty} & \frac{4}{3} \\ \hline 5 & 1+\frac{1}{5v_1} + \frac{5v_1+1}{15v_1v_2}\left(1-\left(\frac{2}{5}\right)^{n-1}\right) & v_1=1,v_2=3,n=2 & \frac{32}{25} \\ \hline 7 & 1+\frac{1}{7v_1} + \frac{7v_1+1}{35v_1v_2}\left(1-\left(\frac{2}{7}\right)^{n-1}\right) & v_1=1,n=1 & \frac{8}{7} \\ \hline \end{tabular} \caption{Formulas that calculate the Engel expansion's product for $k=1,3,5,7$} \label{table:product_equations_k_3_5_7} \end{table}} \section{Pitfalls and limitations} We have optimized $v_1$ and $n$, but not the most important variable, namely the number $m$ of additional divisions by two. To our misfortune, there exist sequences that are not Engel expansions and still produce a larger product. Let us take, for example, the following five-element sequence for $k=3$, starting at $v_1=7$ and finishing with $v_{n}=5$: \[ \begin{array}{l@{\hspace{1em}}l@{\hspace{1em}}l@{\hspace{1em}}l@{\hspace{1em}}l@{\hspace{2em}}l} v_1=7&v_2=11&v_3=17&v_4=13&v_5=5&v_{n+1}=v_6=1\\ \alpha_1=1&\alpha_2=1&\alpha_3=2&\alpha_4=3&\alpha_5=4& \end{array} \] The total number of divisions by two within this sequence is $\alpha=11=n+m=5+6$. The product $(1+\nicefrac{1}{3v_1})\cdots(1+\nicefrac{1}{3v_5})$ of this sequence is $1.20399764$. If we analogously insert $v_1=7$ and $n=5$ into the Engel expansion product formula~\ref{eq:prod_engel_more_divisions_v2}, it yields a product value of only $1.12404468$. Note that in this case the value $\alpha=4$ is much lower. Let us shift the $m=6$ divisions to the beginning: \[ \alpha=11=7+1+1+1+1=\alpha_1+1+1+1+1 \] In this case we would obtain a (hypothetical) product value of $5.93885949$, which is considerably larger. One may verify this by inserting $k=3,v_1=7,\alpha_1=7,n=5$ into equation~\ref{eq:prod_engel_more_divisions}. However, such a sequence does not exist. We have shown that Engel expansions produce large product values when $v_1$ is small enough. These product values are exactly those upper limits that were empirically determined in \cite{Ref_Koch_etal}.
But we did not answer the question: does there exist a sequence which is not an Engel expansion, but nevertheless has a greater product value, compensating a smaller $v_{n+1}$ by including more additional divisions by two (that is, by having a larger $m$)? \par\medskip In the end, it is not sufficient to maximize only $v_{n+1}$; we need to optimize all parameters. Some sequences, such as the one considered above, $7,11,17,13,5$, produce a large product value $(1+\nicefrac{1}{3v_1})\cdots(1+\nicefrac{1}{3v_5})$, and there does not exist a corresponding Engel expansion (with analogous parameters) exceeding this product value.
%{\setlength{\jot}{1.2em}
%\begin{flalign}
%\label{eq:engel_k3_m}
%v_{n+1}&=\cfrac{3\cfrac{3\cfrac{3\cfrac{3v_1+1}{2\cdot2^m}+1}{2}+1}{2}+1}{2}\dotsb
%=\cfrac{3\cfrac{3\cfrac{3v_2+1}{2}+1}{2}+1}{2}\dotsb
%=\frac{3^{n-1}(v_2+1)-2^{n-1}}{2^{n-1}}\\
%\notag
%&=\frac{3^{n-1}(\frac{3v_1+1}{2\cdot2^{m}}+1)-2^{n-1}}{2^{n-1}}=\frac{3^nv_1+3^{n-1}+3^{n-1}2^{m+1}}{2^{m+n}}-1
%\end{flalign}}
\newpage \section{Appendix} \vspace{1em} \subsection{The sequence of odd positive integers produces a divergent product} \label{appx:pochhammer} The arithmetic sequence of odd numbers $v_1=1,v_2=3,v_3=5,\ldots$ produces a divergent product $(1+\nicefrac{1}{3v_1})(1+\nicefrac{1}{3v_2})(1+\nicefrac{1}{3v_3})\cdots$. This sequence allows us to transform the product using the Pochhammer symbol (sometimes referred to as the \textit{rising factorial} or \textit{shifted factorial}), which is denoted by $(x)_n$ and defined as follows \cite{Ref_Zwillinger_Kokoska}, \cite[p.~679]{Ref_Brychkov} and \cite[p.~1005]{Ref_Trott}: \[ (x)_n=x(x+1)(x+2)\cdots(x+n-1)=\prod_{i=0}^{n-1}(x+i)=\prod_{i=1}^{n}(x+i-1)=\frac{\Gamma(x+n)}{\Gamma(x)} \] Substituting $v_i=2i-1$ into the product and $x=\nicefrac{k+1}{2k}$ into the Pochhammer symbol $(x)_n$ makes it possible to perform the following transformation: \begin{equation} \label{eq:pochhammer} \prod_{i=1}^{n}\left(1+\frac{1}{kv_i}\right) =\frac{\prod_{i=1}^{n}(kv_i+1)}{\prod_{i=1}^{n}kv_i} =\frac{\prod_{i=1}^{n}\left(k(2i-1)+1\right)}{k^n\prod_{i=1}^{n}(2i-1)} =\frac{2^{2n}n!}{(2n)!}\cdot\frac{\Gamma\left(\frac{k+1+2kn}{2k}\right)}{\Gamma\left(\frac{k+1}{2k}\right)} \end{equation} \par\medskip \begin{example} As an example that is easy to recalculate we choose $k=3$ and $n=4$: \[ \left(1+\frac{1}{3\cdot1}\right)\left(1+\frac{1}{3\cdot3}\right)\left(1+\frac{1}{3\cdot5}\right)\left(1+\frac{1}{3\cdot7}\right)=\frac{2^8\cdot4!}{8!}\cdot\frac{\Gamma(\frac{14}{3})}{\Gamma(\frac{4}{6})}\approx1.6555 \] \end{example} \par\medskip We transform the product in the numerator in equation~\ref{eq:pochhammer} into a form that allows us to use the Pochhammer symbol. It can be written as $2^nk^n(x)_n$, where $x=\nicefrac{k+1}{2k}$: \begin{flalign*} \prod_{i=1}^{n}\left((2i-1)k+1\right)&=2^nk^n\prod_{i=1}^{n}\frac{(2i-1)k+1}{2k}=2^nk^n\prod_{i=1}^{n}\frac{k+1+2ki-2k}{2k}\\ &=2^nk^n\prod_{i=1}^{n}\left(\frac{k+1}{2k}+i-1\right)=2^nk^n\frac{\Gamma\left(\frac{k+1+2kn}{2k}\right)}{\Gamma\left(\frac{k+1}{2k}\right)} \end{flalign*} \par\medskip We recall the basic fact that the product of even integers is given by $\prod_{i=1}^{n}2i=2^n\cdot n!$ and the product of odd integers is $\prod_{i=1}^{n}\left(2i-1\right)=\nicefrac{(2n)!}{2^n\cdot n!}$.
For this reason we can transform the product in the denominator in equation~\ref{eq:pochhammer} as follows: \[ \prod_{i=1}^{n}kv_i=k^n\prod_{i=1}^{n}v_i=k^n\prod_{i=1}^{n}(2i-1)=k^n\frac{(2n)!}{2^nn!} \] \vspace{1em} \subsection{Permuting divisions by two} \label{appx:permuting_divisions} In order to illustrate how permuting divisions by two affects the last sequence member $v_{n+1}$, we take a look at two corner cases: \begin{itemize} \item the one where we do the additional $m$ divisions by $2$ at the end and \item the one where we do these additional divisions at the very beginning. \end{itemize} \par\medskip\noindent \textbf{The first case} is our starting point to examine how swapping a division by two affects the sequence member $v_{n+1}$. For this, let us compare the Engel expansion where we divide by $2^m$ at the end with one where we divide by $2$ in the penultimate step and by $2^{m-1}$ in the last step. One can recognize the following inequality at a glance: \[ \cfrac{1+\cfrac{3+\cfrac{3^2+\cfrac{3^3+3^4v_1}{2}}{2}}{2}}{2\cdot2^m} < \cfrac{1+\cfrac{3+\cfrac{3^2+\cfrac{3^3+3^4v_1}{2}}{2}}{2\cdot\textcolor{red}{\mathbf{2}}}}{2\cdot2^{m-1}} \] \par\bigskip To put it simply, in the expansion on the right side of the above inequality we perform one division by two a little earlier than in the expansion on the left side. Almost all summands of the two expansions cancel each other out: \[ \frac{1}{2\cdot2^m}+\cancel{\frac{3}{2^2\cdot2^m}+\frac{3^2}{2^3\cdot2^m}+\frac{3^3+3^4v_1}{2^4\cdot2^m}} < \frac{1}{2\cdot2^{m-1}}+\cancel{\frac{3}{2^2\cdot\textcolor{red}{\mathbf{2}}\cdot2^{m-1}}+\frac{3^2}{2^3\cdot\textcolor{red}{\mathbf{2}}\cdot2^{m-1}}+\frac{3^3+3^4v_1}{2^4\cdot\textcolor{red}{\mathbf{2}}\cdot2^{m-1}}} \] \par\medskip\noindent \textbf{The second case} deals with Engel expansions where we perform the additional $m$ divisions by two as early as possible. The resulting value $v_{n+1}$ decreases when we perform a division by two later: \[ \cfrac{1+\cfrac{3+\cfrac{3^2+\cfrac{3^3+3^4v_1}{2\cdot2^{m-1}}}{2\cdot\textcolor{red}{\mathbf{2}}}}{2}}{2} < \cfrac{1+\cfrac{3+\cfrac{3^2+\cfrac{3^3+3^4v_1}{2\cdot2^m}}{2}}{2}}{2} \] \par\bigskip Here too, almost all summands of the two Engel expansions cancel each other out: \[ \cancel{\frac{1}{2}+\frac{3}{2^2}}+\frac{3^2}{2^3\cdot\textcolor{red}{\mathbf{2}}}+\cancel{\frac{3^3+3^4v_1}{2^4\cdot\textcolor{red}{\mathbf{2}}\cdot2^{m-1}}} < \cancel{\frac{1}{2}+\frac{3}{2^2}}+\frac{3^2}{2^3}+\cancel{\frac{3^3+3^4v_1}{2^4\cdot2^m}} \] \par\bigskip While the first case minimizes the value of $v_{n+1}$, the second case maximizes it.
The difference between the maximum and the minimum is given by the following equation: \[ \frac{3^{n-1}\left(\frac{3v_1+1}{2\cdot2^m}+1\right)-2^{n-1}}{2^{n-1}}-\frac{3^n\left(v_1+1\right)-2^n}{2^{n+m}}=\left(\frac{3^{n-1}}{2^{n-1}}-1\right)\left(1-\frac{1}{2^m}\right) \]
%\begin{flalign*}
%&\frac{3^{n-1}\left(\frac{3v_1+1}{2\cdot2^m}+1\right)-2^{n-1}}{2^{n-1}}-\frac{3^n\left(v_1+1\right)-2^n}{2^{n+m}}\\
%=&\frac{3^{n-1}\cdot\left(3v_1+1+2^{m+1}\right)-2^{n-1}\cdot2^{m+1}-3^n\left(v_1+1\right)+2^n}{2^{m+1}\cdot2^{n-1}}\\
%=&\frac{3^{n-1}+3^{n-1}\cdot2^{m+1}-2^{n+m}-3^n+2^n}{2^{n+m}}=\frac{3^{n-1}-3\cdot3^{n-1}+3^{n-1}\cdot2^{m+1}-2^{n+m}+2^n}{2^{n+m}}\\
%=&\frac{-2\cdot3^{n-1}+3^{n-1}\cdot2^{m+1}-2^{n+m}+2^n}{2^{n+m}}=\frac{\left(2\cdot3^{n-1}-2^n\right)\left(2^m-1\right)}{2^n\cdot2^m}\\
%=&\left(\frac{3^{n-1}}{2^{n-1}}-1\right)\left(1-\frac{1}{2^m}\right)
%\end{flalign*}
\par\bigskip This has the consequence that for a given sequence consisting of $n+1$ members, between which a total of $n+m$ divisions have taken place, the permutations of these divisions have a limited effect on the last sequence member $v_{n+1}$, as described by theorem~\ref{theo:permutation}. \vspace{1em} \subsection{Condition for a limited growth of the Engel expansion} \label{appx:condition_limited_growth} Let us now look into the question of which condition must be met so that a Collatz sequence does not grow more than it declines. Specifically, we consider an Engel expansion comprising $n+1$ sequence members that includes $m$ additional divisions by two at the beginning. The last member $v_{n+1}$ in such a sequence can be calculated by inserting $k=3$ and $\alpha_1=m+1$ into equation~\ref{eq:engel_more_divisions}. In order to restrict the growth of this sequence, we require that the last member be smaller than the first one. For this we define the condition $v_{n+1}<v_1$: \[ \frac{3^nv_1+3^{n-1}+3^{n-1}2^{m+1}}{2^{m+n}}-1<v_1 \] \par\medskip\noindent Rearranging this inequality leads to the following condition: \begin{equation} \label{eq:condition_limited_growth} \frac{3^{n-1}\left(2^{m+1}-2\right)}{2^{m+n}-3^n}-1<v_1 \end{equation} \vspace{1em} \subsection{Visualizing the product as a function of \boldmath{$v_1,n$}} \label{appx:visualizing_product} When we display the product for $k=3$ as a discrete function of $v_1$ and $n$, it becomes visually clear that the maximum is $\nicefrac{4}{3}$ at $v_1=1,n=1$. Figure~\ref{fig:1} provides a chart illustrating the product values for Engel expansions depending on the variables $v_1$ and $n$. \par\medskip In Figure~\ref{fig:2} we see the product values for actual Collatz sequences as a function of both variables $v_1$ and $n$. There we observe that the product values of several Collatz sequences are larger than those of the corresponding Engel expansions displayed in figure~\ref{fig:1}. \newpage \begin{figure}[H] \begin{center} \includegraphics[width=0.68\textwidth]{prod_engel.png} \caption{Engel expansion's product value ($k=3$) as a function of $v_1,n$} \label{fig:1} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.68\textwidth]{prod.png} \caption{Product value ($k=3$) as a function of $v_1,n$} \label{fig:2} \end{center} \end{figure}
%\newpage
\vspace{1em} \bibliographystyle{unsrt} \bibliography{references} \end{document}
{ "alphanum_fraction": 0.7212949232, "avg_line_length": 59.7981132075, "ext": "tex", "hexsha": "8e8964aa159043fe30cc6822f0fbdec3f0c88805", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-05-06T20:44:07.000Z", "max_forks_repo_forks_event_min_datetime": "2021-05-06T20:44:07.000Z", "max_forks_repo_head_hexsha": "d8a5137af508be19da371fff787c114f1b5185c3", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "Sultanow/collatz", "max_forks_repo_path": "03 Engel Expansion/TeX/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d8a5137af508be19da371fff787c114f1b5185c3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "Sultanow/collatz", "max_issues_repo_path": "03 Engel Expansion/TeX/main.tex", "max_line_length": 911, "max_stars_count": 2, "max_stars_repo_head_hexsha": "d8a5137af508be19da371fff787c114f1b5185c3", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "Sultanow/collatz", "max_stars_repo_path": "03 Engel Expansion/TeX/main.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-01T15:54:55.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-01T15:12:10.000Z", "num_tokens": 11586, "size": 31693 }
\section{\module{StringIO} --- Read and write strings as files} \declaremodule{standard}{StringIO} \modulesynopsis{Read and write strings as if they were files.} This module implements a file-like class, \class{StringIO}, that reads and writes a string buffer (also known as \emph{memory files}). See the description of file objects for operations (section \ref{bltin-file-objects}). \begin{classdesc}{StringIO}{\optional{buffer}} When a \class{StringIO} object is created, it can be initialized to an existing string by passing the string to the constructor. If no string is given, the \class{StringIO} will start empty. \end{classdesc} The following methods of \class{StringIO} objects require special mention: \begin{methoddesc}{getvalue}{} Retrieve the entire contents of the ``file'' at any time before the \class{StringIO} object's \method{close()} method is called. \end{methoddesc} \begin{methoddesc}{close}{} Free the memory buffer. \end{methoddesc} \section{\module{cStringIO} --- Faster version of \module{StringIO}} \declaremodule{builtin}{cStringIO} \modulesynopsis{Faster version of \module{StringIO}, but not subclassable.} \moduleauthor{Jim Fulton}{[email protected]} \sectionauthor{Fred L. Drake, Jr.}{[email protected]} The module \module{cStringIO} provides an interface similar to that of the \refmodule{StringIO} module. Heavy use of \class{StringIO.StringIO} objects can be made more efficient by using the function \function{StringIO()} from this module instead. Since this module provides a factory function which returns objects of built-in types, there's no way to build your own version using subclassing. Use the original \refmodule{StringIO} module in that case. The following data objects are provided as well: \begin{datadesc}{InputType} The type object of the objects created by calling \function{StringIO} with a string parameter. \end{datadesc} \begin{datadesc}{OutputType} The type object of the objects returned by calling \function{StringIO} with no parameters. \end{datadesc} There is a C API to the module as well; refer to the module source for more information.
{ "alphanum_fraction": 0.7636195753, "avg_line_length": 32.8181818182, "ext": "tex", "hexsha": "332bdf15244e2971a2a7916be2f84c3f7d128eb1", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2022-03-27T01:55:17.000Z", "max_forks_repo_forks_event_min_datetime": "2015-07-16T08:14:13.000Z", "max_forks_repo_head_hexsha": "73c739a764e8b1dc84640e73b880bc66e1916bca", "max_forks_repo_licenses": [ "PSF-2.0" ], "max_forks_repo_name": "marcosptf/cpython-2.0.1", "max_forks_repo_path": "Doc/lib/libstringio.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "73c739a764e8b1dc84640e73b880bc66e1916bca", "max_issues_repo_issues_event_max_datetime": "2021-05-03T21:20:50.000Z", "max_issues_repo_issues_event_min_datetime": "2020-11-18T15:48:14.000Z", "max_issues_repo_licenses": [ "PSF-2.0" ], "max_issues_repo_name": "marcosptf/cpython-2.0.1", "max_issues_repo_path": "Doc/lib/libstringio.tex", "max_line_length": 72, "max_stars_count": 5, "max_stars_repo_head_hexsha": "73c739a764e8b1dc84640e73b880bc66e1916bca", "max_stars_repo_licenses": [ "PSF-2.0" ], "max_stars_repo_name": "marcosptf/cpython-2.0.1", "max_stars_repo_path": "Doc/lib/libstringio.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-30T21:47:20.000Z", "max_stars_repo_stars_event_min_datetime": "2022-03-26T21:53:36.000Z", "num_tokens": 548, "size": 2166 }
\documentclass[hidelinks__VERSION__]{adamyi-cv} \begin{document} \revision{__REVISION__} \footertext{Compiled on \today\ at \DTMcurrenttime. Latest version at \href{https://adamy.io/cv}{https://adamy.io/cv}; source code at \href{https://github.com/adamyi/cv}{github.com/adamyi/cv} under MIT license.} \header{Adam}{Yi}
%{Software/Security Engineer interested in infrastructure, enthusiastic researcher/developer}
{\href{https://www.adamyi.com}{\faGlobe \hspace{0.1em} adamyi.com}\\ \href{https://github.com/adamyi}{\faGithub \hspace{0.1em} adamyi}\\ \href{https://linkedin.com/in/adam-yi}{\faLinkedin \hspace{0.1em} adam-yi}} {\href{mailto:[email protected]}{[email protected]}\\ Sydney, Australia}
%----------------------------------------------------------------------------------------
% EDUCATION SECTION
%----------------------------------------------------------------------------------------
\section{Education} \begin{entrylist}
%------------------------------------------------
\entry {\href{https://unsw.edu.au}{University of New South Wales}, Sydney Australia} {Jul. 2018 -- May. 2021} {\emph{\textbf{B.S.} in Computer Science}\\ Average Mark for all Comp-Sci courses: 93.8/100, High Distinction\\ Academic Statement at \href{https://adamy.io/transcript}{https://adamy.io/transcript} } \end{entrylist}
%----------------------------------------------------------------------------------------
% EMPLOYMENT SECTION
%----------------------------------------------------------------------------------------
\section{Employment} \begin{entrylist}
%------------------------------------------------
\entry {\href{https://www.cse.unsw.edu.au/}{School of Computer Science and Engineering}, \href{https://unsw.edu.au/}{UNSW Sydney} - Casual Academic} {Feb. 2020 -- Present} { \begin{minipage}[t]{1\linewidth} \begin{itemize}[leftmargin=*,align=left,noitemsep] \item Taught 500+ undergraduate and postgraduate students Computer Networks (COMP3331/9331), Database Systems (COMP3311), Web App Security (COMP6443/6843), Advanced Operating Systems (COMP9242), Web Data Compression and Search (COMP9319) \item Developed course infrastructure for the web application security course with 100+ micro-services in a Kubernetes cluster \item Designed and marked assignment and exam challenges \item Ran consultation sessions for C language users to answer questions on Linux's low-level socket syscalls \end{itemize} \end{minipage} }
%------------------------------------------------
\entry {\href{https://www.janestreet.com/}{Jane Street Capital} - Software Engineer Intern} {Dec. 2020 -- Feb. 2021} { \begin{minipage}[t]{1\linewidth} \begin{itemize}[leftmargin=*,align=left,noitemsep] \item Worked on optimizing performance for internal distributed systems and reduced CPU usage for key routines by 80\% \item Rewrote one of the servers for the next generation of one of our core market data systems from scratch \item Contributed 12,000+ lines of code to our production OCaml, Java, and Scala codebase \end{itemize} \end{minipage} } \end{entrylist}
%----------------------------------------------------------------------------------------
% ACTIVITIES SECTION
%----------------------------------------------------------------------------------------
\section{Activities} \begin{entrylist}
%------------------------------------------------
\entry {Capture the Flag (CTF) Author} {Mar.
2019 -- Present} {Created 10+ CTF challenges (\href{https://adamy.io/ctfs}{adamy.io/ctfs}) for the community, including for \href{https://www.sectalks.org/sydney/}{SecTalks Sydney}, \href{https://unswsecurity.com/}{UNSW Security Society}, and \href{https://sec.edu.au/}{SECedu}}
%------------------------------------------------
\entry {White Hat Hacker / Cybersecurity Researcher} {Feb. 2015 -- Present} {Discovered 10+ vulnerabilities in various bug bounty programs like Atlassian's}
%------------------------------------------------
\entry {Google Developers Community Programs - Translator (Volunteer)} {Jun. 2016 -- May. 2018} {Reviewed and translated Google's dev videos, docs, and blog posts. Invited to Google I/O as a Community Partner} \end{entrylist}
%----------------------------------------------------------------------------------------
% PROJECTS SECTION
%----------------------------------------------------------------------------------------
\section{Side Projects} \begin{entrylist}
%------------------------------------------------
\entry {CTFProxy (Kubernetes, Docker, Bazel, Golang, Python, Jsonnet)} {\href{https://github.com/adamyi/CTFProxy}{github.com/adamyi/CTFProxy}} {Scalable CTF infrastructure, with a BeyondCorp-like zero-trust network and simple infrastructure-as-code configuration. This was used for SECedu CTF 2019 and UNSW's Web Application Security (COMP6443/6843) course in 2020}
%------------------------------------------------
\entry {MessageOnTap (Java, Android, HCI, NLP)} {\href{https://github.com/MessageOnTap}{github.com/MessageOnTap}} {A development framework for developers to implement intelligent plugins for instant messaging apps by suggesting actions based on chat message semantics (research project at HCII, Carnegie Mellon University)}
%------------------------------------------------
\entry {Productimon (Golang, C, C++, JavaScript)} {\href{https://github.com/productimon/productimon}{github.com/productimon/productimon}} {A cross-platform activity/usage/screen time tracking and analysis tool that supports Linux, Windows, macOS, and Chrome OS/Chromium OS with a cross-platform core codebase that's easy to extend to other platforms}
%------------------------------------------------
\entry {MUNPANEL (PHP)} {\href{https://github.com/munpanel/MUNPANEL_v1}{github.com/munpanel/MUNPANEL\_v1}} {An online high school conference registration system, which had served 3 conferences, 170+ registered schools, 2,000+ registered users, and 75,000+ USD cash flow}
%------------------------------------------------
\item[\textbf{\href{https://github.com/adamyi}{And many more, check them out on \faGithub \hspace{0.1em} adamyi}!}] \end{entrylist}
% \pagebreak
%----------------------------------------------------------------------------------------
% AWARDS SECTION
%----------------------------------------------------------------------------------------
\section{Awards} \begin{entrylist}
%------------------------------------------------
\entry {Dean's Award} {2020} {\emph{\textbf{UNSW Faculty of Engineering}}, for the 100 best performing students across the entire Faculty of Engineering.}
%------------------------------------------------
\entry {1st Place} {2019} {\emph{\textbf{UNSW CSE Performance Prize for Year 2}}, for the single best performing second year CSE student.}
%------------------------------------------------
\entry {CommBank Cyber Prize} {2019} {\emph{\href{https://www.commbank.com.au/articles/newsroom/2020/08/tomorrow-cyber-security-talent.html}{\textbf{Commonwealth Bank of
Australia}}}}
%------------------------------------------------
\entry {2nd Place} {Oct. 2018} {\emph{\textbf{\href{https://www.cyberchallenge.com.au/}{Cyber Security Challenge Australia (CySCA)}}}, run by the Australian Government (Australian Cyber Security Centre, Australian Signals Directorate, AustCyber), Telstra, PwC, Cisco, Microsoft, Commonwealth Bank, Splunk, BAE Systems, HackLabs}
%------------------------------------------------
\entry {Gold Medal} {Apr. 2015} {\emph{\textbf{\href{https://inventions-geneva.ch/}{International Exhibition of Inventions of Geneva}}}, run by the Swiss Federal Government}
%------------------------------------------------
\end{entrylist}
%------------------------------------------------
\end{document}
{ "alphanum_fraction": 0.5838412657, "avg_line_length": 38.3631840796, "ext": "tex", "hexsha": "ba4b86196c37b6ced2cbf346140a66eb3cdee5aa", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-10-22T05:55:23.000Z", "max_forks_repo_forks_event_min_datetime": "2019-03-15T13:34:02.000Z", "max_forks_repo_head_hexsha": "ea1b848da7de5b9c392d7746660dae8ceb6ecfa6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamyi/cv", "max_forks_repo_path": "src/cv.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "ea1b848da7de5b9c392d7746660dae8ceb6ecfa6", "max_issues_repo_issues_event_max_datetime": "2019-06-12T04:21:08.000Z", "max_issues_repo_issues_event_min_datetime": "2017-10-17T13:35:45.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamyi/cv", "max_issues_repo_path": "src/cv.tex", "max_line_length": 293, "max_stars_count": 2, "max_stars_repo_head_hexsha": "ea1b848da7de5b9c392d7746660dae8ceb6ecfa6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamyi/cv", "max_stars_repo_path": "src/cv.tex", "max_stars_repo_stars_event_max_datetime": "2020-10-04T23:04:57.000Z", "max_stars_repo_stars_event_min_datetime": "2020-10-04T12:09:30.000Z", "num_tokens": 1752, "size": 7711 }
\documentclass[epsfig,10pt,fullpage]{article} \newcommand{\LabNum}{6} \newcommand{\CommonDocsPath}{../../common/docs} \input{\CommonDocsPath/preamble.tex} \begin{document} \centerline{\huge Embedded Systems} ~\\ \centerline{\huge Laboratory Exercise \LabNum} ~\\ \centerline{\large Introduction to Graphics and Animation} ~\\ \noindent The purpose of this exercise is to learn how to draw graphics and perform animation. You will create a Linux* character device driver that uses the video-out port on a DE-series board to display graphics. To do this exercise you need to know how to write Linux kernel modules and character device drivers. This exercise is meant to be done after Lab Exercise 3, which introduces character device drivers. This lab writeup assumes that the reader is using the DE1-SoC board, but the discussion is equally applicable to other boards, such as the DE10-Standard and DE10-Nano. We will describe any major differences that exist between DE-series boards as needed. ~\\ \noindent {\bf Background Information} ~\\ \noindent You may want to familiarize yourself with the documentation for the DE1-SoC Computer that pertains to the use of the video-out port. The DE1-SoC Computer includes a video-out port with a VGA controller that can be connected to a standard VGA monitor. The VGA controller supports a screen resolution of 640 $\times$ 480. The image that is displayed by the VGA controller is derived from two sources: a {\it pixel} buffer, and a {\it character} buffer. The pixel buffer is described below, and the character buffer will be discussed later in the exercise. ~\\ \noindent {\bf Pixel Buffer} \label{sec:pixel_buffer} ~\\ \noindent The pixel buffer for the video-out port holds the data (color) for each pixel that is displayed by the VGA controller. As illustrated in Figure \ref{fig:video_coord}, the pixel buffer in the DE1-SoC Computer provides an image resolution of 320 $\times$ 240 pixels, with the coordinate (0,0) being at the top-left corner of the image. Since the VGA controller supports the resolution 640 $\times$ 480, each of the pixel values in the pixel buffer is replicated in both the {\it x} and {\it y} dimensions when it is being displayed on the VGA screen. \begin{figure}[h!] \begin{center} \includegraphics{figures/fig_video_coord.pdf} \end{center} \caption{Pixel buffer coordinates in the DE1-SoC Computer.} \label{fig:video_coord} \end{figure} \noindent Figure \ref{fig:pixels} shows how pixel colors and pixel addresses are specified for the DE1-SoC Computer. As illustrated in part $a$ of the figure, each pixel color has 16 bits, with five bits for the blue and red components, and six bits for green. As depicted in part $b$ of Figure \ref{fig:pixels}, pixels are addressed in the pixel buffer by using the combination of a {\it base} address and an {\it x,y} offset. In the DE1-SoC Computer the default address of the pixel buffer is {\sf 0xC8000000}, which corresponds to the starting address of the FPGA on-chip memory. Using this scheme, the pixel at coordinates $(0,0)$ has the address {\sf 0xC8000000}, the pixel $(1,0)$ has the address {\it base} $+$ (00000000~000000001~0)$_2$ = {\sf 0xC8000002}, the pixel $(0,1)$ has the address {\it base} $+$ (00000001~000000000~0)$_2$ = {\sf 0xC8000400}, and the pixel at coordinates $(319,239)$ has the address {\it base} $+$ (11101111~100111111~0)$_2$ = {\sf 0xC803BE7E}. ~\\ \noindent If you are using the DE10-Standard Computer, then pixel colors and addresses will be the same as those given in Figure~\ref{fig:pixels}.
But if you are using the DE10-Nano Computer, then there are two major differences. First, each pixel color has eight bits, with three bits for the red and green components, and two bits for blue. Second, pixel addresses are assigned as illustrated in Figure~\ref{fig:pixels}$b$, except that there is no longer a zero ``padded'' on the right. Hence, using the default starting address of the pixel buffer, in the FPGA on-chip memory, the pixel at coordinates $(0,0)$ has the address {\sf 0xC8000000}, the pixel $(1,0)$ has the address {\it base} $+$ (00000000~000000001)$_2$ = {\sf 0xC8000001}, the pixel $(0,1)$ has the address {\it base} $+$ (00000001~000000000)$_2$ = {\sf 0xC8000200}, and the pixel at coordinates $(319,239)$ has the address {\it base} $+$ (11101111 100111111)$_2$ = {\sf 0xC801DF3F}. \begin{figure}[h!] \begin{center} \includegraphics{figures/fig_pixels.pdf} \end{center} \caption{Pixel values and addresses in the DE1-SoC Computer.} \label{fig:pixels} \end{figure} ~\\ \noindent You can create an image by writing color values into the pixel addresses as described above. A dedicated {\it pixel buffer controller} reads this pixel data from the memory and sends it to the VGA display. The controller reads the pixel data in sequential order, starting with the pixel data corresponding to the upper-left corner of the VGA screen and proceeding to read the whole buffer until it reaches the data for the lower-right corner. This process is then repeated, continuously. You can modify the pixel data at any time, by writing to the pixel addresses. Writes to the pixel buffer are automatically interleaved in the hardware with the read operations that are performed by the pixel buffer controller. ~\\ \noindent It is also possible to prepare a new image for the VGA display without changing the content of the pixel buffer, by using the concept of {\it double-buffering}. In this scheme two pixel buffers are involved, called the {\it front} and {\it back} buffers, as described below. ~\\ \noindent {\bf Double Buffering} \label{sec:double_buffer} ~\\ \noindent As mentioned above, a pixel buffer controller reads data out of the pixel buffer so that it can be displayed on the VGA screen. This controller includes a programming interface in the form of a set of registers, as shown in Figure~\ref{fig:pixel_ctrl}. The register at address {\sf 0xFF203020} is called the {\it Buffer} register, and the one at address {\sf 0xFF203024} is the {\it Backbuffer} register. Each of these registers stores the starting address of a pixel buffer. The Buffer register holds the address of the pixel buffer that is displayed on the VGA screen. As mentioned above, in the default configuration of the DE1-SoC Computer this Buffer register is set to the address {\sf 0xC8000000}, which points to the start of the FPGA on-chip memory. The default value of the Backbuffer register is also {\sf 0xC8000000}, which means that there is only one pixel buffer. But software can modify the address stored in Backbuffer, thereby creating a second pixel buffer. An image can be drawn into this second buffer by writing to its pixel addresses. This image is not displayed on the VGA monitor until a pixel buffer {\it swap} is performed, explained below. ~\\ \noindent A pixel buffer swap is caused by writing the value 1 to the Buffer register. This write operation does not directly modify the content of the Buffer register, but instead causes the contents of the Buffer and Backbuffer registers to be swapped. 
The swap operation does not happen right away; it occurs at the end of a VGA screen-drawing cycle, after the last pixel in the bottom-right corner has been displayed. This time instance is referred to as the {\it vertical synchronization} time, and occurs every 1/60 seconds. Software can poll the value of the $S$ bit in the {\it Status} register, at address {\sf 0xFF20302C}, to see when the vertical synchronization has happened. Writing the value 1 into the Buffer register causes $S$ to be set to 1. Then, when the swap of the Buffer and Backbuffer registers has been completed $S$ is reset back to 0. The programming interface includes a {\it Resolution} register, shown in the figure, which contains the X and Y resolution of the pixel buffer(s). Additional information is provided in the {\it Status} register. The $m$ and $n$ bits specify the number of $y$ and $x$ VGA address bits, respectively. The {\it BS} bits indicate pixel-size; if this value is set to 7 then a pixel is one byte, or if it is set to 15 then a pixel is two bytes. \begin{figure}[h!] \begin{center} \includegraphics{figures/fig_DMA_ctrl.pdf} \end{center} \caption{Pixel buffer controller registers.} \label{fig:pixel_ctrl} \end{figure} ~\\ \noindent In a typical application the pixel buffer controller is used as follows. While the image contained in the pixel buffer that is pointed to by the Buffer register is being displayed, a new image is drawn into the pixel buffer pointed to by the Backbuffer register. When this new image is ready to be displayed, a pixel buffer swap is performed. Then, the pixel buffer that is now pointed to by the Backbuffer register, which was already displayed, is cleared and the next new image is drawn. In this way, the next image to be displayed is always drawn into the ``back'' pixel buffer, and the ``front'' and ``back'' buffer pointers are swapped when the new image is ready to be displayed. Each time a swap is performed software has to synchronize with the VGA controller by waiting until the $S$ bit in the Status register becomes 0. \noindent \section*{Part I} \noindent You are to create a character device driver that controls the VGA display. Your driver should use the file {\it /dev/video} to communicate with the user. Reading from this file should return the character string ``\texttt{ccc rrr}'', where the three-digit decimal number \texttt{ccc} is the number of columns in the VGA screen, and \texttt{rrr} is the number of rows. One way to read from the device driver is to type a command such as \texttt{cat /dev/video} on a Linux Terminal window. The device driver should support the following commands that can be written to the file {\it /dev/video}: \texttt{clear}, and \texttt{pixel x,y color}. The \texttt{clear} command should erase the VGA screen by setting all pixels in the pixel buffer to the color 0 ({\it black}). The \texttt{pixel} command should set the pixel on the VGA screen at the coordinates $(x, y)$ to the value \texttt{color}. One way to write to your video driver is to type a command such as \texttt{echo clear $>$ /dev/video} on a Linux Terminal window. The command \texttt{echo "pixel 319,239 0x07E0" > /dev/video} would set the pixel at the bottom-right corner of the screen to the color green. ~\\ \noindent An outline of the required code for the character device driver is given in Figure~\ref{fig:video}. Lines \ref{line:includes} to~\ref{line:includes2} include header files that are needed for the driver. 
Global variables that are used to access the pixel buffer, which will be described later, are declared in lines~\ref{line:dec1} to~\ref{line:dec2}. Line~\ref{line:vars} is a placeholder for the declarations of function prototypes and variables that are needed for the character device driver. Prototypes have to be declared for the functions that are executed when opening, reading, writing, and closing the driver. Variables of type \texttt{dev\_t}, \texttt{cdev}, and \texttt{class} have to be declared and initialized in the function {\it start\_video}, shown in lines~\ref{line:start1} to~\ref{line:start2}, which is executed when the video driver is inserted into the Linux kernel. Refer to Exercise 3 for a more detailed discussion about the functions and variables that are needed for character device drivers. ~\\ \noindent To provide the kernel module with access to the FPGA light-weight bridge, line~\ref{line:c1} calls the \texttt{ioremap\_nocache} function. Information about this function, and the FPGA light-weight bridge, can be found in the tutorial {\it Using Linux on DE-series Boards}. Line~\ref{line:c2} computes the base address for the pixel buffer controller registers, which are illustrated in Figure~\ref{fig:pixel_ctrl}. This address is then passed to the function named \texttt{get\_screen\_specs}, which reads the {\it Resolution} register in the pixel controller, so that it can use this information to set the global variables {\it resolution\_x} and {\it resolution\_y}. ~\\ \noindent In Line~\ref{line:c3} \texttt{ioremap\_nocache} is called again, to map the physical addresses of the pixel buffer into virtual addresses. This code assumes that the pixel buffer is in its default location, which is within the FPGA on-chip memory. The \texttt{clear\_screen} function is then called, which is used to set all pixels in the pixel buffer to the color 0, which is {\it black}. 
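~\\
\noindent
As an illustration only (this code is not given in Figure~\ref{fig:video}, and your own solution may differ), a minimal sketch of the \texttt{plot\_pixel} and \texttt{clear\_screen} functions could look as follows. It assumes the default DE1-SoC Computer pixel buffer layout, in which each pixel occupies two bytes and consecutive rows are spaced 1024 bytes apart, so that the pixel at coordinates $(x, y)$ lies at byte offset $1024y + 2x$ from the start of the buffer:

\lstset{language=C,numbers=none}
\begin{lstlisting}
void plot_pixel(int x, int y, short int color)
{
   // Write one 16-bit color value into the pixel buffer; pixel_buffer holds
   // the virtual address returned by ioremap_nocache in start_video
   *(volatile short int *) (pixel_buffer + (y << 10) + (x << 1)) = color;
}

void clear_screen( )
{
   // Set every pixel to the color 0 (black), using the screen size that
   // get_screen_specs read from the Resolution register
   int x, y;
   for (y = 0; y < resolution_y; ++y)
      for (x = 0; x < resolution_x; ++x)
         plot_pixel (x, y, 0);
}
\end{lstlisting}
\noindent
If you are using the DE10-Nano Computer, the shift amounts change, because each pixel occupies one byte and consecutive rows are spaced 512 bytes apart.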
\lstset{language=C,numbers=left,escapechar=|} \begin{figure}[h] \begin{center} \begin{minipage}[t]{15 cm} \begin{lstlisting}[name=dots] |\label{line:includes}|#include <linux/module.h> #include <linux/kernel.h> #include <linux/fs.h> #include <linux/cdev.h> #include <linux/device.h> #include <asm/io.h> #include <asm/uaccess.h> |\label{line:includes2}|#include "address_map_arm.h" // Declare global variables needed to use the pixel buffer |\label{line:dec1}|void *LW_virtual; // used to access FPGA light-weight bridge volatile int * pixel_ctrl_ptr; // virtual address of pixel buffer controller int pixel_buffer; // used for virtual address of pixel buffer |\label{line:dec2}|int resolution_x, resolution_y; // VGA screen size // Declare variables and prototypes needed for a character device driver |\label{line:vars}$\ldots$| code not shown /* Code to initialize the video driver */ |\label{line:start1}|static int _|$\,$|_init start_video(void) { // initialize the dev_t, cdev, and class data structures |$\ldots$| code not shown // generate a virtual address for the FPGA lightweight bridge |\label{line:c1}|LW_virtual = ioremap_nocache (0xFF200000, 0x00005000); if (LW_virtual == 0) printk (KERN_ERR "Error: ioremap_nocache returned NULL\n"); // Create virtual memory access to the pixel buffer controller |\label{line:c2}|pixel_ctrl_ptr = (unsigned int *) (LW_virtual + 0x00003020); get_screen_specs (pixel_ctrl_ptr); // determine X, Y screen size // Create virtual memory access to the pixel buffer |\label{line:c3}|pixel_buffer = (int) ioremap_nocache (0xC8000000, 0x0003FFFF); if (pixel_buffer == 0) printk (KERN_ERR "Error: ioremap_nocache returned NULL\n"); /* Erase the pixel buffer */ clear_screen ( ); return 0; |\label{line:start2}|} \end{lstlisting} \end{minipage} \caption{An outline of the video-driver code (Part $a$).} \label{fig:video} \end{center} \end{figure} \clearpage \newpage \lstset{language=C,numbers=left,escapechar=|} \begin{center} \begin{minipage}[t]{12.5 cm} \begin{lstlisting}[name=dots] void get_screen_specs(volatile int * pixel_ctrl_ptr) { |$\ldots$| code not shown } void clear_screen( ) { |$\ldots$| code not shown } void plot_pixel(int x, int y, short int color) { |$\ldots$| code not shown } static void _|$\,$|_exit stop_video(void) { /* unmap the physical-to-virtual mappings */ iounmap (LW_virtual); iounmap ((void *) pixel_buffer); /* Remove the device from the kernel */ |$\ldots$| code not shown } static int device_open(struct inode *inode, struct file *file) { return SUCCESS; } static int device_release(struct inode *inode, struct file *file) { return 0; } static ssize_t device_read(struct file *filp, char *buffer, size_t length, loff_t *offset) { |$\ldots$| code not shown } static ssize_t device_write(struct file *filp, const char *buffer, size_t length, loff_t *offset) { |$\ldots$| code not shown } MODULE_LICENSE("GPL"); module_init (start_video); module_exit (stop_video); \end{lstlisting} ~\\ Figure \ref{fig:video}. An outline of the video-driver code (Part $b$). \end{minipage} \end{center} ~\\ \noindent Part $b$ of Figure~\ref{fig:video} gives an outline for the rest of the functions that are required for the character device driver. A detailed discussion of these functions can be found in Laboratory Exercise 3. ~\\ \noindent Perform the following: \begin{enumerate} \item Create a file named {\it video.c} and fill in the missing code from Figure~\ref{fig:video}. Create a Makefile for your character device driver. 
Compile the code to create the kernel module {\it video.ko}, and insert this module into the Linux kernel. \item Connect a VGA monitor to the DE-series board. \item Test your video character device driver by using a Terminal window. Reading from the file {\it /dev/video} should return the string ``320 240'', which provides the VGA screen size. Use \texttt{pixel} commands to color some pixels on the screen, and send the \texttt{clear} command to the video driver to erase the VGA display. \item Create a user-level program called {\it part1.c} that reads and writes to your video device driver. A skeleton of an example program is given in Figure~\ref{fig:part1}. Fill in the rest of the code so that it performs some operations using the device driver. For example, you could write a loop that uses \texttt{pixel} commands to fill the entire VGA screen with a certain color. Compile your program using a command such as \texttt{gcc -Wall -o part1 part1.c}. Test your program by trying various operations. \end{enumerate} \lstset{language=C,numbers=none} \begin{figure}[H] \begin{center} \begin{minipage}[t]{15 cm} \begin{lstlisting}[name=part1] #include <stdio.h> #include <string.h> #include <errno.h> #include <fcntl.h> #include <unistd.h> #define video_BYTES 8 // number of characters to read from /dev/video int screen_x, screen_y; int main(int argc, char *argv[]){ int video_FD; // file descriptor char buffer[video_BYTES]; // buffer for data read from /dev/video char command[64]; // buffer for commands written to /dev/video int x, y; // Open the character device driver if ((video_FD = open("/dev/video", O_RDWR)) == -1) { printf("Error opening /dev/video: %s\n", strerror(errno)); return -1; } // Set screen_x and screen_y by reading from the driver |$\ldots$| code not shown // Use pixel commands to color some pixels on the screen |$\ldots$| code not shown close (video_FD); return 0; } \end{lstlisting} \end{minipage} \vspace{-0.5cm} \caption{A program that communicates with /{\it dev}/{\it video}.} \label{fig:part1} \end{center} \end{figure} \vspace{-1cm} \section*{Part II} \noindent In this part you will add a simple line-drawing algorithm to your video driver. Drawing a line on a screen requires coloring pixels between two coordinates $(x_1,y_1)$ and $(x_2,y_2)$, such that the pixels represent the desired line as closely as possible. Consider the example in Figure~\ref{fig:line_drawing}, where we want to draw a line between coordinates $(1,1)$ and $(12,5)$. The squares in the figure represent the location and size of pixels on the screen. As indicated in the figure, we cannot draw the line precisely---we can only draw a shape that is similar to the line by coloring the pixels that fall closest to the line's ideal location on the screen. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.8]{figures/fig_line_drawing} \end{center} \caption{Drawing a line between coordinates $(1,1)$ and $(12,5)$.} \label{fig:line_drawing} \end{figure} ~\\ \noindent We can use algebra to determine which pixels to color. This is done by using the end points and the slope of the line. The slope of our example line is $slope = (y_2 - y_1)/(x_2 - x_1) = 4/11$. Starting at point $(1,1)$ we move along the $x$ axis and compute the $y$ coordinate for the line as follows: \begin{eqnarray*} y = y_1 + slope \times (x - x_1) \end{eqnarray*} \noindent Thus, for column $x = 2$, the $y$ location of the pixel is $1 + \frac{4}{11} \times (2-1) = 1 \frac{4}{11}$. 
Since pixel locations are defined by integer values we round the $y$ coordinate to the nearest integer, and determine that in column $x = 2$ we should color the pixel at $y = 1$. For column $x = 3$ we perform the calculation $y = 1 + \frac{4}{11} \times (3-1) = 1 \frac{8}{11}$, and round the result to $y = 2$. Similarly, we perform such computations for each column between $x_1$ and $x_2$. ~\\ \noindent The approach of moving along the $x$ axis has drawbacks when a line is steep. A steep line spans more rows than it does columns, and hence has a slope with absolute value greater than~1. In this case our calculations will not produce a smooth-looking line. Also, if the line is vertical we cannot use the slope to make a calculation. To address this problem, we can alter the algorithm to move along the $y$ axis when a line is steep. With this change, we can implement a line-drawing algorithm known as {\it Bresenham's algorithm}. A key property of this algorithm is that all variables are {\it integers}. Pseudo-code for this algorithm is given in Figure~\ref{fig:bresenham}. The first 15 lines of the algorithm make the needed adjustments depending on whether or not the line is steep, and on its vertical (down or up) and horizontal (left or right) directions. Then, in lines 17 to 22 the algorithm increments the {\it x} variable 1 step at a time and computes the {\it y} value. The {\it y} value is incremented when needed to stay as close to the ideal location of the line as possible. Bresenham's algorithm calculates an {\it error} variable to decide whether or not to increment each {\it y} value. The {\it error} variable takes into account the relative difference between the width ({\it deltax}) and height of the line ({\it deltay}) in deciding how often {\it y} should be incremented. The version of the algorithm shown in Figure~\ref{fig:bresenham} uses only integers to perform all calculations. \begin{figure}[h] \begin{center} \begin{minipage}[t]{12.5 cm} \begin{lstlisting}[language=C, morekeywords={then}, numbers=left,name=bresenham] draw_line(x0, x1, y0, y1) is_steep = (abs(y1 - y0) > abs(x1 - x0)) if is_steep then swap(x0, y0) swap(x1, y1) if x0 > x1 then swap(x0, x1) swap(y0, y1) deltax = x1 - x0 deltay = abs(y1 - y0) error = -(deltax / 2) y = y0 |\label{line:adjust}|if y0 < y1 then y_step = 1 else y_step = -1 |\label{line:f1}|for x from x0 to x1 if is_steep then draw_pixel(y,x) else draw_pixel(x,y) error = error + deltay if error |$\ge$| 0 then y = y + y_step |\label{line:f2}|error = error - deltax \end{lstlisting} \end{minipage} \caption{Pseudo-code for a line-drawing algorithm.} \label{fig:bresenham} \end{center} \end{figure} \newpage \noindent Perform the following: \begin{enumerate} \item Using your character device driver from Part I it would be possible to draw lines by repeatedly sending \texttt{pixel} commands to the driver. However, this would be an inefficient approach due to the large number of writes that may have to be made to the file {\it /dev/video}. A better approach is to implement a line-drawing algorithm within the video driver. Augment your driver by adding a function called {\it draw\_line} that implements Bresenham's algorithm. Also, add a new command \texttt{line x1,y1 x2,y2 color} that invokes this function. \item Remove your kernel module from Part I, recompile the new version which includes the \texttt{line} command, and then re-insert the module back into the kernel. \item Use the \texttt{echo} command in a Terminal window to test your line-drawing function. 
\item Part of a user-level program that draws a few lines on the VGA screen is depicted in Figure~\ref{fig:part2}. Fill in the missing parts of this program. Compile and test it.
\end{enumerate}

\lstset{language=C,numbers=none}
\begin{figure}[H]
\begin{center}
\begin{minipage}[t]{15 cm}
\begin{lstlisting}[name=part2]
#include <stdio.h>
|$\ldots$| other include statements not shown

#define video_BYTES 8 // number of characters to read from /dev/video

int screen_x, screen_y;

int main(int argc, char *argv[]){
   int video_FD; // file descriptor
   char c, video_buffer[video_BYTES]; // buffer for video char data
   char command[64]; // buffer for command data

   // Open the character device driver
   if ((video_FD = open("/dev/video", O_RDWR)) == -1) {
      printf("Error opening /dev/video: %s\n", strerror(errno));
      return -1;
   }
   // Read VGA screen size from the video driver
   |$\ldots$| code not shown

   /* Draw a few lines */
   sprintf (command, "line %d,%d %d,%d %X\n", 0, screen_y - 1, screen_x - 1, 0, 0xFFE0); // yellow
   write (video_FD, command, strlen(command));
   sprintf (command, "line %d,%d %d,%d %X\n", 0, screen_y - 1, (screen_x >> 1) - 1, 0, 0x07FF); // cyan
   write (video_FD, command, strlen(command));
   sprintf (command, "line %d,%d %d,%d %X\n", 0, screen_y - 1, (screen_x >> 2) - 1, 0, 0x07E0); // green
   write (video_FD, command, strlen(command));
   |$\ldots$| etc.
   close (video_FD);
   return 0;
}
\end{lstlisting}
\end{minipage}
\caption{A user-level program that draws a few lines.}
\label{fig:part2}
\end{center}
\end{figure}
\pagebreak
\noindent
\section*{Part III}
\noindent
Animation is an exciting part of computer graphics. Moving a displayed object is an illusion created by showing this same object at different locations on the screen. A simple way to ``move'' an object is to first draw the object at one position, and then after a short time erase the object and draw it again at another nearby position.
~\\
\noindent To realize animation it is necessary to move objects at regular time intervals. The VGA controller in the DE1-SoC Computer redraws the screen every $1/60^{th}$ of a second. Since the image on the screen cannot change more often than that, it is reasonable to control an animation using this unit of time.
~\\
\noindent To ensure that you change the image only once every $1/60^{th}$ of a second, use the pixel buffer controller to synchronize with the vertical synchronization cycle of the VGA controller. As we discussed in the background section of this exercise, synchronizing with the VGA controller can be accomplished by writing the value~1 into the {\it Buffer} register in the pixel buffer controller, and then waiting until bit $S$ of the {\it Status} register becomes equal to 0. For this part of the exercise you do not need to use a back buffer, so ensure that the {\it Buffer} and {\it Backbuffer} addresses in the pixel buffer controller are the same. In this approach, a pixel buffer ``swap'' can be used as a way of synchronizing with the VGA controller via the {\it S} bit in the {\it Status} register.
~\\
\noindent Perform the following:
\begin{enumerate}
\item Augment your video driver so that it includes the ability to perform a pixel buffer swap, as described above. Also add a new command to the video driver, called \texttt{sync}, that allows a user-level program to synchronize with the VGA controller. Remove the previous version of the driver from the Linux kernel, and compile and insert the new version.
\item Write a user-level program that creates a simple animation using your video driver.
The animation should ``move'' a horizontal line up and down on the screen and ``bounce'' the line off the top and bottom edges of the display. Your program should first clear the screen and draw the line at a starting row on the screen. Then, in an endless loop you should perform a VGA synchronization, erase the line (by drawing the line using black), and redraw it one row above or below the last one. When the line reaches the top, or bottom, of the screen it should start moving in the opposite direction. \item Compile and execute your code. Notice how long it takes for the horizontal line to move through the 240 lines of the VGA display. It should take $240 \times 1/60 = 4$ seconds. \end{enumerate} \noindent \section*{Part IV} \noindent Having gained the basic knowledge, you can now create a more interesting animation. You are to create an animation of eight small filled rectangles on the screen. These rectangles should appear to be moving continuously and ``bouncing'' off the edges of the screen. The rectangles should be connected with lines to form a chain. An illustration of the animation is given in Figure~\ref{fig:animation_example}. Part $a$ of the figure shows one position of the rectangles with arrows that indicate the directions of movement, and Figure~\ref{fig:animation_example}$b$ shows a subsequent position of the rectangles. In each step of your animation each of the rectangles should appear to ``move'' on a diagonal line: up/left, up/right, down/left, or down/right. Move the rectangles one row and one column at a time on the VGA screen. \begin{figure}[h!] \begin{center} \includegraphics[scale = 0.5]{figures/fig_animation_example.pdf} \end{center} \vspace{-0.5cm} \caption{Two instants of the animation.} \label{fig:animation_example} \end{figure} ~\\ \noindent To make the animation look slightly different each time you run it, you can use the C library function {\it rand ()} to help calculate initial positions for each of the rectangles, and to determine their directions of movement. ~\\ \noindent Perform the following: \begin{enumerate} \item Augment your video driver to add a new command: \texttt{box x1,y1 x2,y2 color}. Remove the previous version of the driver from the Linux kernel, and compile and insert the new version. \item Write a user-level program to implement your animation. Use both a front and back buffer in your program, so that you can avoid making changes to the image while it is being displayed by the pixel buffer controller. If you are using the DE1-SoC Computer, then in addition to placing one pixel buffer in the FPGA on-chip memory, you can place the other buffer in the SDRAM memory, which has the starting address \texttt{0xC0000000}. This choice is also available in the DE10-Standard Computer. If using the DE10-Nano Computer, then place both pixel buffers in the FPGA on-chip memory; if the first buffer starts at address \texttt{0xC8000000}, then the other buffer can begin at \texttt{0xC8000000} $+$ $(512 \times 256)$. An outline of a suitable program is illustrated in Figure~\ref{fig:part4}. Compile, execute, and test your program. \item Experiment with your code by modifying it to use just a single pixel buffer (simply change the address of the back buffer to be the same as the front buffer). Notice how the animation differs with this change. 
\end{enumerate} \section*{Part V} \noindent For this part of the exercise you are to enhance the animation from Part~IV so that during the animation the following changes can take place: \begin{enumerate} \item The speed of movement of the rectangles can be increased or decreased \item The number of rectangles can be increased or decreased \item The lines between rectangles can be drawn or not drawn \end{enumerate} \noindent In Part IV the animation speed was set by the 1/60 seconds VGA vertical synchronization time. One way to control the speed of animation is to make use of a timer. You can use the {\it nanosleep} timer that is provided by the Linux kernel. In this scheme, the main program would draw the next step of the animation each time the timer expires. Lengthening the timeout would produce a slower animation, and shortening the timeout would speed up the animation. The maximum speed of the animation would be limited by the 1/60 seconds VGA synchronization time, as it was in Part IV. To cause the animation to appear to move more quickly than in Part IV, you have to increase the screen-distance that the rectangles move in each step of the animation. ~\\ \noindent Perform the following: \begin{enumerate} \item Implement the animation control discussed above. All of your code changes should be done in a user-level program; you do not need to modify your video driver. The user should be able to control the animation by using the KEY pushbuttons and SW slide switches. If you are using the DE1-SoC or DE10-Standard boards, then implement the actions shown in Table~\ref{tab:action1}. For the DE10-Nano board, use the actions given in Table~\ref{tab:action2}, because this board has fewer switches. When any of the SW switches is set to the ``on'' position the lines between objects should not be drawn; only when all SW switches are set to the ``off'' position should the lines appear. Access both the SW and KEY switches by using their respective character device drivers. \item Compile and test your animation code. 
\newpage \lstset{language=C,numbers=none} \begin{figure}[H] \begin{center} \begin{minipage}[t]{15 cm} \begin{lstlisting}[name=part4] #include <stdio.h> |$\ldots$| other include statements not shown /* Declare global variables needed for the animation */ |$\ldots$| code not shown void draw (int); volatile sig_atomic_t stop; void catchSIGINT(int signum){ stop = 1; } int main(int argc, char *argv[]){ int i, video_FD; // file descriptor |$\ldots$| other declarations not shown // catch SIGINT from ^C, instead of having it abruptly close this program signal(SIGINT, catchSIGINT); |$\ldots$| code to open the video driver shown // set random initial position and "delta" for all rectangle boxes |$\ldots$| code not shown while (!stop) { draw (video_FD); // sync with VGA display write (video_FD, "sync", 5); } close (video_FD); return 0; } /* Code that draws the animation objects, and updates their positions */ void draw(int fd) { |$\ldots$| code not shown } \end{lstlisting} \end{minipage} \caption{A user-level program that makes an animation.} \label{fig:part4} \end{center} \end{figure} \end{enumerate} \begin{table}[h] \caption{Animation control for the DE1-SoC and DE10-Standard boards.} ~\\ \centering \label{tab:action1} \begin{tabular}{c|p{10cm}} {\bf KEY} & {\bf Action} \\ \hline \rule{0cm}{12pt}{\it KEY}$_0$ & The speed of animation should be increased by some noticeable amount \\ {\it KEY}$_1$ & The speed of animation should be decreased by some noticeable amount \\ {\it KEY}$_2$ & The number of displayed objects should be increased by one \\ {\it KEY}$_3$ & The number of displayed objects should be decreased by one \\ \end{tabular} \end{table} \begin{table}[h] \caption{Animation control for the DE10-Nano board.} ~\\ \centering \label{tab:action2} \begin{tabular}{c|p{13cm}} {\bf KEY} & {\bf Action} \\ \hline \rule{0cm}{12pt}{\it KEY}$_0$ & Each press of this key should increase the speed of animation by a noticeable amount, until some maximum is reached. After hitting the maximum speed, further presses of this KEY should reduce the animation speed, until some minimum is reached.\\ {\it KEY}$_1$ & Pressing this key should increase the number of displayed objects by some quantity, up to a maximum. After reaching the maximum, further presses of this KEY should reduce the number of displayed objects.\\ \end{tabular} \end{table} \section*{Part VI} \noindent We mentioned in the background section of this exercise that the image displayed by the VGA controller can be derived from both a {\it pixel} buffer, and a {\it character} buffer. For this part of the exercise you are to enhance your video driver so that it supports the display of text characters. The character buffer is stored in FPGA on-chip memory in the DE1-SoC Computer. Figure \ref{fig:chars}$a$ depicts the character buffer for the VGA display, which has a resolution of 80 $\times$ 60 characters. Each character occupies an 8 $\times$ 8 block of pixels on the screen. Characters are stored in each of the locations shown in Figure \ref{fig:chars}$a$ using their ASCII codes; when you store an ASCII character into the buffer, a corresponding pattern of pixels is automatically generated and displayed using a built-in font. Part~$b$ of Figure~\ref{fig:chars} shows that characters are addressed in the memory by using the combination of a {\it base} address, which has the value (C9000000)$_{16}$, and an {\it x,y} offset. 
Using this scheme, the character at coordinates $(0,0)$ has the address (C9000000)$_{16}$, $(1,0)$ has the address {\it base} $+$ (000000 0000001)$_2$ = (C9000001)$_{16}$, $(0,1)$ has the address {\it base} $+$ (000001 0000000)$_2$ = (C9000080)$_{16}$, and the character at location $(79,59)$ has the address {\it base} $+$ (111011 1001111)$_2$ = (C9001DCF)$_{16}$. \begin{figure}[h] \begin{center} \includegraphics{figures/fig_chars.pdf} \end{center} \caption{Character buffer coordinates and addresses.} \label{fig:chars} \end{figure} ~\\ \noindent The character buffer has an associated controller, with a register interface like the one shown in Figure~\ref{fig:pixel_ctrl}. The base address of this controller is {\sf 0xFF203030}. You can read from the {\it Resolution} register to obtain the number of text columns ($X$) and rows ($Y$) on the screen. For the VGA screen, the values will be 80 columns $\times$ 60 rows. ~\\ \noindent Perform the following: \begin{enumerate} \item Enhance your kernel module from Part V by adding two new commands: \texttt{erase}, and \texttt{text x,y string}. The \texttt{erase} command should clear all characters from the VGA screen (by filling the screen with the ` ' space character). The \texttt{text} command should store the ASCII characters in the {\it string} into coordinates starting at \texttt{(x, y)} in the character buffer. Remove the previous version of your video driver from the Linux kernel, and compile and insert the new version. \item Augment your user-level program from Part V so that it displays, in the upper-left corner of the screen, the number of video frames that have been drawn. Using the default animation speed, the frame counter should increment at the rate of 60 times per second. \item Compile and test your animation. \end{enumerate} \vskip 0.8in \noindent \newpage \input{\CommonDocsPath/copyright.tex} \end{document}
%---------------------------Oddy----------------------------- \section{Oddy} Let $\vec L_4 = \vec L_0$. The Oddy metric is then defined as \[ q = \max_{i\in\{0,1,2,3\}}\left\{ \frac{(\normvec{L_i}^2 - \normvec{L_{i+1}}^2)^2 + 4 (\vec L_i \cdot \vec L_{i+1})^2} {2 \normvec{N_{i+1}}^2 } \right\}. \] This metric measures the maximum deviation of the metric tensor at the corners of the quadrilateral. Note that if $\normvec{N_{i+1}}^2 < DBL\_MIN$, we set $q = DBL\_MAX$. \quadmetrictable{Oddy}% {$1$}% Dimension {$[0,0.5]$}% Acceptable range {$[0,DBL\_MAX]$}% Normal range {$[0,DBL\_MAX]$}% Full range {$0$}% Unit square {\cite{odd:88}}% Citation {v\_quad\_oddy}% Verdict function name
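For example, for the unit square every edge has unit length and adjacent edges are orthogonal, so $\normvec{L_i} = 1$ and $\vec L_i \cdot \vec L_{i+1} = 0$ for each $i$. Every term in the maximum is then
\[
\frac{(1 - 1)^2 + 4 \cdot 0^2}{2 \normvec{N_{i+1}}^2} = 0,
\]
so $q = 0$, in agreement with the unit-square value listed in the table above.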
\section{Background: Traditional Blockchain Scalability}\label{sec:background} This section discusses how traditional blockchains perform with respect to scalability, with an emphasis on Bitcoin. \subsection{Technical Problem Description} One way to define a blockchain is \emph{a distributed database (DB) that solves the “Strong Byzantine Generals” (SBG) problem} \cite{koshy2014revolution}, the name given to a combination of the Byzantine Generals Problem and the Sybil Attack Problem. In the Byzantine Generals Problem \cite{lamport1982byzantine}, nodes need to agree on some value for a DB entry, under the constraint that the nodes may fail in arbitrary ways (including malicious behavior)\footnote{It has been noted that the Bitcoin blockchain falls short of solving the original Byzantine Generals Problem; it would be more accurate to say that it solves a relaxation of the problem \cite{garay2015bitcoin, luuscp}.}. The Sybil Attack Problem \cite{douceur2002sybil} arises when one or more nodes figure out how to get unfairly disproportionate influence in the process of agreeing on a value for an entry. It’s an “attack of the clones”—an army of seemingly independent voters actually working together to game the system. \subsection{Bitcoin Scalability Issues} Bitcoin has scalability issues in terms of throughput, latency, capacity, and network bandwidth. \medskip \noindent\textbf{Throughput.} The Bitcoin network processes just $1$ transaction per second (tps) on average, with a theoretical maximum of $7$ tps \cite{bitcoin2015scalability}. It could handle higher throughput if each block was bigger, though right now making blocks bigger would lead to size issues (see Capacity and network bandwidth, below). This throughput is unacceptably low when compared to the number of transactions processed by Visa ($2,000$ tps typical, $10,000$ tps peak) \cite{trillo2013visa}, Twitter ($5,000$ tps typical, $15,000$ tps peak), advertising networks ($500,000$ tps typical), trading networks, or email networks (global email volume is $183$ billion emails/day or $2,100,000$ tps \cite{sourabh2014email}). An ideal global blockchain, or set of blockchains, would support all of these multiple high-throughput uses. \medskip \noindent\textbf{Latency.} Each block on the Bitcoin blockchain takes $10$ minutes to process. For sufficient security, it is better to wait for about an hour, giving more nodes time to confirm the transaction. By comparison, a transaction on the Visa network is approved in seconds at most. Many financial applications need latency of $30$ to $100$~ms. \medskip \noindent\textbf{Capacity and network bandwidth.} The Bitcoin blockchain is about $70$~GB (at the time of writing); it grew by $24$~GB in $2015$ \cite{blockchaininfo2015blockchain_size}. It already takes nearly a day to download the entire blockchain. If throughput increased by a factor of $2,000$, to Visa levels, the additional transactions would result in database growth of $3.9$~GB/day or $1.42$~PB/year. At $150,000$~tps, the blockchain would grow by $214$~PB/year (yes, petabytes). If throughput were 1M tps, it would completely overwhelm the bandwidth of any node’s connection. \subsection{Technology Choices Affecting Scalability} The Bitcoin blockchain has taken some technology choices which hurt scaling: \begin{enumerate} \item \textbf{Consensus Algorithm: POW.} Bitcoin’s mining reward actually incentivizes nodes to increase computational resource usage, without any additional improvements in throughput, latency, or capacity. 
A single confirmation from a node takes $10$ minutes on average, so six confirmations take about an hour. In Bitcoin this is by design; Litecoin and other altcoins reduce the latency, but compromise security.

\item \textbf{Replication: Full.} That is, each node stores a copy of all the data; a “full node.” This copy is typically kept on a single hard drive (or in memory). Ironically, this causes centralization: as the amount of data grows, only those with the resources to hold all the data will be able to participate.

\end{enumerate}

These characteristics prevent the Bitcoin blockchain from scaling up.

\subsection{Blockchain Scalability Efforts}

The Bitcoin / blockchain community has spent considerable effort on improving the performance of blockchains. Appendix~\ref{appendix:blockchain_scalability} reviews various proposals in more detail. Previous approaches shared something in common: they all started with a blockchain design and then tried to increase its performance.
There’s another way: start with a ``big~data'' distributed database, then give it blockchain-like characteristics.
\section{Question 3}
We use a lead controller to improve the transient response and solve the problem.
\newpage
\begin{itemize}
	\item system step response
	\begin{figure}[H]
		\caption{system step response}
		\centering
		\includegraphics[width=12cm]{../Figure/Q3/Q3_system_respond.png}
	\end{figure}
	\item system Nichols
	\begin{figure}[H]
		\caption{system Nichols plot}
		\centering
		\includegraphics[width=12cm]{../Figure/Q3/Q3_system_nichols.png}
	\end{figure}
	\item system with controller step response
	\begin{figure}[H]
		\caption{system with controller step response}
		\centering
		\includegraphics[width=12cm]{../Figure/Q3/Q3_system_controller_respond.png}
	\end{figure}
	\item system with controller Nichols
	\begin{figure}[H]
		\caption{system with controller Nichols plot}
		\centering
		\includegraphics[width=12cm]{../Figure/Q3/Q3_system_controller_nichols.png}
	\end{figure}
	\item system with and without controller step response
	\begin{figure}[H]
		\caption{system with and without controller step response}
		\centering
		\includegraphics[width=12cm]{../Figure/Q3/Q3_respond_all.png}
	\end{figure}
	\item system with and without controller Nichols
	\begin{figure}[H]
		\caption{system with and without controller Nichols plot}
		\centering
		\includegraphics[width=12cm]{../Figure/Q3/Q3_nichols.png}
	\end{figure}
\end{itemize}
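\noindent
For reference, the specific compensator gains used to generate the plots above are not reproduced here; a lead compensator generally has the form
\[
C(s) = K\,\frac{s + z}{s + p}, \qquad 0 < z < p,
\]
which contributes positive phase near the gain-crossover frequency and thereby increases the phase margin, improving the transient response seen in the figures.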
% !TEX root = ../../../proposal.tex This chapter is adapted from a joint publication that originally appeared in the proceedings of the 22nd ACM Conference on Computer and Communications Security (CCS~'15)~\cite{logjam-2015}. After the discovery of the FREAK attack by Beurdouche et al.~\cite{freak-attack-2015}, there were many questions raised about the security of other export cryptography in TLS, and if other types of weakened cryptography were similarly opening up modern clients to vulnerability. In addition to ``export-grade'' RSA, the TLS protocol contained export Diffie-Hellman (\dheexp{}) ciphers through version 1.0~\cite{rfc2246}. In this chapter, we investigate the security of export-grade Diffie-Hellman in TLS, and measure its prevalence on the Internet, in order to determine if decades-old weakened cryptography remains relevant to the security of the Internet today. First, we present Logjam, a novel flaw in TLS that lets a man-in-the-middle downgrade connections to export-grade Diffie-Hellman. To carry out this attack, we implement the number field sieve discrete log algorithm. After a week-long precomputation\footnote{\small Except where otherwise noted, the experimental data and network measurements for this chapter were obtained in early 2015.} for a specified 512-bit group, we can compute arbitrary discrete logs in that group in about a minute. We find that 82\% of vulnerable servers use a single 512-bit group, allowing us to compromise connections to 7\% of Alexa Top Million HTTPS sites. In response, major browsers have changed to reject short groups. %\looseness=-1 We further investigate the security of Diffie-Hellman key exchange as used in popular Internet protocols and find it to be less secure than widely believed. We go on to consider Diffie-Hellman with 768- and 1024-bit groups. We estimate that even in the 1024-bit case, the computations are plausible given nation-state resources. A small number of fixed or standardized groups are used by millions of servers; performing precomputation for a single 1024-bit group would allow passive eavesdropping on 18\% of popular HTTPS sites, and a second group would allow decryption of traffic to 66\% of IPsec VPNs and 26\% of SSH servers. A close reading of published NSA leaks shows that the agency's attacks on VPNs are consistent with having achieved such a break. We conclude that moving to stronger key exchange methods should be a priority for the Internet community. %\looseness=-1 %\end{abstract} \section{Introduction} Diffie-Hellman key exchange is a popular cryptographic algorithm that allows Internet protocols to agree on a shared key and negotiate a secure connection. It is fundamental to protocols such as HTTPS, SSH, IPsec, SMTPS, and other protocols that rely on TLS\@. Many protocols use Diffie-Hellman to achieve \emph{perfect forward secrecy}, the property that a compromise of the long-term keys used for authentication does not compromise sessions keys for past connections. We examine how Diffie-Hellman is commonly implemented and deployed with common protocols and find that, in practice, it frequently offers less security than widely believed. There are two reasons for this. First, a surprising number of servers use weak Diffie-Hellman parameters or maintain support for obsolete 1990s-era ``export-grade'' crypto. 
More critically, the common practice of using standardized, hard-coded, or widely shared Diffie-Hellman parameters has the effect of dramatically reducing the cost of large-scale attacks, bringing some within range of feasibility today. The current best technique for attacking Diffie-Hellman relies on compromising one of the private exponents ($a$, $b$) by computing the discrete logarithm of the corresponding public value ($g^a \bmod p$, $g^b \bmod p$). With state-of-the-art number field sieve algorithms, computing a single discrete log is more difficult than factoring an RSA modulus of the same size. However, an adversary who performs a large precomputation for a prime $p$ can then quickly calculate arbitrary discrete logs in that group, amortizing the cost over all targets that share this parameter. Although this fact is well known among mathematical cryptographers, it seems to have been lost among practitioners deploying cryptosystems. We exploit it to obtain the following results: \paragraph{Active attacks on export ciphers in TLS} We introduce Logjam, a new attack on TLS by which a man-in-the-middle attacker can downgrade a connection to export-grade cryptography. This attack is reminiscent of the FREAK attack~\cite{freak-attack-2015} but applies to the ephemeral Diffie-Hellman ciphersuites and is a TLS protocol flaw rather than an implementation vulnerability. We present measurements that show that this attack applies to 8.4\% of Alexa Top Million HTTPS sites and 3.4\% of all HTTPS servers that have browser-trusted certificates. To exploit this attack, we implemented the number field sieve discrete log algorithm and carried out precomputation for two 512-bit Diffie-Hellman groups used by more than 92\% of the vulnerable servers. This allows us to compute individual discrete logs in about a minute. Using our discrete log oracle, we can compromise connections to over 7\% of Top Million HTTPS sites. Discrete logs over larger groups have been computed before~\cite{dlp180}, but, as far as we are aware, this is the first time they have been exploited to expose concrete vulnerabilities in real-world systems. %\looseness=-1 \begin{figure*}[ht] \centering\includegraphics[width=\linewidth]{\LogjamFigures/nfs} \caption{\textbf{Number field sieve for discrete log}\,---\,% This algorithm consists of a precomputation stage that depends only on the prime $p$ and a descent stage that computes individual logarithms. With sufficient precomputation, an attacker can quickly break any Diffie-Hellman instances that use a particular $p$. } \label{fig:nfs} \end{figure*} \paragraph{Risks from common 1024-bit groups} We explore the implications of precomputation attacks for 768- and 1024-bit groups, which are widely used in practice and still considered secure. We estimate the computational resources necessary to compute discrete logs in groups of these sizes, concluding that 768-bit groups are within range of academic teams, and 1024-bit groups may plausibly be within range of nation-state adversaries. In both cases, individual logarithms can be quickly computed after the initial precomputation. We then examine evidence from published Snowden documents that suggests NSA may already be exploiting 1024-bit Diffie-Hellman to decrypt VPN traffic. 
We perform measurements to understand the implications of such an attack for popular protocols, finding that an attacker who could perform precomputations for ten 1024-bit groups could passively decrypt traffic to about 66\% of IKE VPNs, 26\% of SSH servers, and 24\% of popular HTTPS sites. \paragraph{Mitigations and lessons} In response to the Logjam attack, mainstream browsers have implemented a more restrictive policy on the size of Diffie-Hellman groups they accept, and Chrome has discontinued support for finite field key exchanges. We further recommend that TLS servers disable export-grade cryptography and carefully vet the Diffie-Hellman groups they use. In the longer term, we advocate that protocols migrate to elliptic curve Diffie-Hellman. \section{Diffie-Hellman Cryptanalysis} \label{sec:dl} Diffie-Hellman key exchange was the first published public-key algorithm~\cite{new-directions-in-crypto-1976}. In the simple case of prime groups, Alice and Bob agree on a prime $p$ and a generator $g$ of a multiplicative subgroup modulo $p$. Then each generates a random private exponent, $a$ and $b$. Alice sends $g^a \bmod p$, Bob sends $g^b \bmod p$, and each computes a shared secret $g^{ab} \bmod p$. While there is also a Diffie-Hellman exchange over elliptic curve groups, we address only the ``mod $p$'' case. The security of Diffie-Hellman is not known to be equivalent to the discrete logarithm problem, but computing discrete logs remains the best known cryptanalytic attack. An attacker who can find the discrete log $x$ from $y = g^x \bmod p$ can easily find the shared secret. Textbook descriptions of discrete log can be misleading about the computational tradeoffs, for example by optimizing for computing a \emph{single} discrete log. In fact, as illustrated in Figure~\ref{fig:nfs}, a single large precomputation on $p$ can be used to efficiently break \emph{all} Diffie-Hellman exchanges made with that prime. Diffie-Hellman is typically implemented with prime fields and large group orders. In this case, the most efficient discrete log algorithm is the number field sieve (NFS)~\cite{discrete-log-nfs-1993,virtual-logarithms-2005,nfs-prime-field-2003}. The algorithm has four stages with different computational properties. The first three steps are only dependent on the prime $p$ and comprise most of the computation. First is \emph{polynomial selection}, in which one finds a polynomial $f(z)$ defining a number field $\QQ[z]/f(z)$ for the computation. This parallelizes well and is only a small portion of the runtime. In the second stage, \emph{sieving}, one factors ranges of integers and number field elements in batches to find many relations of elements, all of whose prime factors are less than some bound $B$ (called $B$-smooth). Sieving parallelizes well, but is computationally expensive, because we must search through and attempt to factor many elements. In the third stage, \emph{linear algebra}, we construct a large, sparse matrix consisting of the coefficient vectors of prime factorizations we have found. This stage can be parallelized in a limited fashion, and produces a database of logarithms which are used as input to the final stage. The final stage, \emph{descent}, actually deduces the discrete log of the target $y$. We re-sieve until we find a set of relations that allow us to write the logarithm of $y$ in terms of the logarithms in the precomputed database. 
Crucially, descent is the only NFS stage that involves $y$ (or $g$), so polynomial selection, sieving, and linear algebra can be done once for a prime $p$ and reused to compute the discrete logs of many targets. The numerous parameters of the algorithm allow some flexibility to reduce time on some computational steps at the expense of others. For example, sieving more will result in a smaller matrix, making linear algebra cheaper, and doing more work in the precomputation makes the final descent step easier. \paragraph{Standard primes} Generating safe primes\footnote{\small An odd prime $p$ is safe when $(p-1)/2$ is prime.} can be computationally burdensome, so many implementations use standardized Diffie-Hellman parameters. A prominent example is the Oakley groups~\cite{rfc2412}, which give ``safe'' primes of length 768 (Oakley Group 1), 1024 (Oakley Group 2), and 1536 (Oakley Group 5). These groups were published in 1998 and have been used for many applications since, including IKE, SSH, Tor, and OTR\@. When primes are of sufficient strength, there seems to be no disadvantage to reusing them. However, widespread reuse of Diffie-Hellman groups can convert attacks that are at the limits of an adversary's capabilities into devastating breaks, since it allows the attacker to amortize the cost of discrete log precomputation among vast numbers of potential targets. \section{Attacking TLS} \label{sec:attacking-tls} TLS supports Diffie-Hellman as one of several possible key exchange methods, and prior to public disclosure of the attack, about two-thirds of popular HTTPS sites supported it, most commonly using 1024-bit primes. However, a smaller number of servers also support legacy ``export-grade'' Diffie-Hellman using 512-bit primes that are well within reach of NFS-based cryptanalysis. Furthermore, for both normal and export-grade Diffie-Hellman, the vast majority of servers use a handful of common groups. In this section, we exploit these facts to construct a novel attack against TLS\@, which we call the Logjam attack. First, we perform NFS precomputations for the two most popular 512-bit primes on the web, so that we can quickly compute the discrete log for any key exchange message that uses one of them. Next, we show how a man-in-the-middle, so armed, can attack connections between popular browsers and any server that allows export-grade Diffie-Hellman, by using a TLS protocol flaw to downgrade the connection to export-strength and then recovering the session key. We find that this attack with our precomputations can compromise connections to about 7.8\% of HTTPS servers among Alexa Top Million domains. \begin{table}[t] \centering\small \begin{tabular}{lll} \toprule Source & Popularity & Prime \\ \midrule Apache & 82\% & \tt 9fdb8b8a004544f0045f1737d0ba2e0b\\ & & \tt 274cdf1a9f588218fb435316a16e3741\\ & & \tt 71fd19d8d8f37c39bf863fd60e3e3006\\ & & \tt 80a3030c6e4c3757d08f70e6aa871033\smallskip\\ mod\_ssl & 10\% & \tt d4bcd52406f69b35994b88de5db89682\\ & & \tt c8157f62d8f33633ee5772f11f05ab22\\ & & \tt d6b5145b9f241e5acc31ff090a4bc711\\ & & \tt 48976f76795094e71e7903529f5a824b\smallskip\\ (\emph{others\/}) & \ 8\% & (463~distinct primes) \\ \bottomrule \end{tabular} \caption{\textbf{Top 512-bit DH primes for TLS}\,---\,% 8.4\% of Alexa Top~1M HTTPS domains allow \dheexp{}, of which 92.3\% use one of the two most popular primes, shown here. 
} \label{tab:export-primes} \end{table} \subsection{TLS and Diffie-Hellman} The TLS handshake begins with a negotiation to determine the crypto algorithms used for the session. The client sends a list of supported ciphersuites (and a random nonce $cr$) within the \textsf{ClientHello} message, where each ciphersuite specifies a key exchange algorithm and other primitives. The server selects a ciphersuite from the client's list and signals its selection in a \textsf{ServerHello} message (containing a random nonce $sr$). TLS specifies ciphersuites supporting multiple varieties of Diffie-Hellman. Textbook Diffie-Hellman with unrestricted strength is called ``ephemeral'' Diffie-Hellman, or \dhe{}, and is identified by ciphersuites that begin with \texttt{TLS\_DHE\_*}.\footnote{\small New ciphersuites that use elliptic curve Diffie-Hellman (\ecdhe{}) are gaining in popularity, but we focus exclusively on the traditional prime field variety.} In \dhe{}, the server is responsible for selecting the Diffie-Hellman parameters. It chooses a group $(p,g)$, computes $g^b$, and sends a \textsf{ServerKeyExchange} message containing a signature over the tuple $(cr, sr, p, g, g^b)$ using the long-term signing key from its certificate. The client verifies the signature and responds with a \textsf{ClientKeyExchange} message containing $g^a$. To ensure agreement on the negotiation messages, and to prevent downgrade attacks, each party computes the TLS master secret from $g^{ab}$ and calculates a MAC of its view of the handshake transcript. These MACs are exchanged in a pair of \textsf{Finished} messages and verified by the recipients. \paragraph{Export-grade Diffie-Hellman} To comply with 1990s-era U.S. export restrictions on cryptography, SSL 3.0 and TLS 1.0 supported reduced-strength \dheexp{} ciphersuites that were restricted to primes no longer than 512 bits. In all other respects, \dheexp{} protocol messages are identical to \dhe{}. The relevant export restrictions are no longer in effect, but many servers maintain support for backwards compatibility. To understand how HTTPS servers in the wild use Diffie-Hellman, we modified the ZMap~\cite{zmap-2013} toolchain to offer \dhe{} and \dheexp{} ciphersuites and scanned TCP/443 on both the full public IPv4 address space and the Alexa Top~1M domains. The scans took place in March 2015. Of 539,000 HTTPS sites among Top~1M domains, we found that 68.3\% supported \dhe{} and 8.4\% supported \dheexp{}. Of 14.3~million IPv4 HTTPS servers with browser-trusted certificates, 23.9\% supported \dhe{} and 4.9\% \dheexp{}. \iffalse \begin{table}[t] \centering\small \begin{tabular}{rllrl} \toprule Fraction & Source & Year & Bits & Prime \\ \midrule 0.8255 & Apache 2.2 & 2005 & 512 & \texttt{9fdb8b8a}$\ldots$\texttt{aa871033} \\ 0.0997 & mod\_ssl & 1999 & 512 & \texttt{d4bcd524}$\ldots$\texttt{9f5a824b}\\ 0.0414 & IKE & 2000 & 2048 & \texttt{fff}$\ldots$\texttt{c90fdaa2}$\ldots$\texttt{fff} \\ 0.0069 & JDK & 2003 & 512 & \texttt{fca682ce}$\ldots$\texttt{37592e17} \\ 0.0012 & (unknown)& --- & 512 & \texttt{acc8149e}$\ldots$\texttt{67ec1505} \\ \midrule 0.0253 & \multicolumn{4}{c}{\emph{other primes}} \\ \bottomrule \end{tabular} \label{tab:export-primes} \end{table} \fi While the TLS protocol allows servers to generate their own Diffie-Hellman parameters, just two 512-bit primes account for 92.3\% of Alexa Top~1M domains that support \dheexp{} (Table~\ref{tab:export-primes}), and 92.5\% of all servers with browser-trusted certificates that support \dheexp{}. 
The most popular 512-bit prime was hard-coded into many versions of Apache; the second most popular is the \texttt{mod\_ssl} default for \dheexp{}. \subsection{Active Downgrade to Export-Grade DHE} \label{sec:dhead} \begin{figure}[t] \includegraphics[width=\linewidth]{\LogjamFigures/mitm-dhe-export} \caption{\textbf{The Logjam attack}\,---\,% A man-in-the-middle can force TLS clients to use export-strength DH with any server that allows \dheexp{}. Then, by finding the 512-bit discrete log, the attacker can learn the session key and arbitrarily read or modify the contents. $\AData^{fs}$ refers to False Start application data that some TLS clients send before receiving the server's \textsf{Finished} message.} \label{fig:mitm-export} \end{figure} Given the widespread use of these primes, an attacker with the ability to compute discrete logs in 512-bit groups could efficiently break \dheexp{} handshakes for about 8\% of Alexa Top~1M HTTPS sites, but modern browsers never negotiate export-grade ciphersuites. To circumvent this, we show how an attacker can downgrade a regular \dhe{} connection to use a \dheexp{} group, and thereby break both the confidentiality and integrity of application data. The attack, which we call Logjam, is depicted in Figure~\ref{fig:mitm-export} and relies on a flaw in the way TLS composes \dhe{} and \dheexp{}. When a server selects \dheexp{} for a handshake, it proceeds by issuing a signed \textsf{ServerKeyExchange} message containing a 512-bit $p_{512}$, but the structure of this message is identical to the message sent during standard \dhe{} ciphersuites. Critically, the signed portion of the server's message fails to include any indication of the specific ciphersuite that the server has chosen. Provided that a client offers \dhe{}, an active attacker can rewrite the client's \textsf{ClientHello} to offer a corresponding \dheexp{} ciphersuite accepted by the server and remove other ciphersuites that could be chosen instead. The attacker rewrites the \textsf{ServerHello} response to replace the chosen \dheexp{} ciphersuite with a matching non-export ciphersuite and forwards the \textsf{ServerKeyExchange} message to the client as is. The client will interpret the export-grade tuple $(p_{512}, g, g^b)$ as valid \dhe{} parameters chosen by the server and proceed with the handshake. The client and server have different handshake transcripts at this stage, but an attacker who can compute $b$ in close to real time can then derive the master secret and connection keys to complete the handshake with the client. There are two remaining challenges in implementing this active downgrade attack. The first is to compute individual discrete logs in close to real time, and the second is to delay handshake completion until the discrete log computation has had time to finish. \subsection{512-bit Discrete Log Computations} \label{subsec:512bit-dl-computation} We modified CADO-NFS~\cite{cado-nfs-2.3} to implement the number field sieve discrete log algorithm and applied it to the top two \dheexp{} primes shown in Table~\ref{tab:export-primes}. Precomputation took 7~days for each prime, after which computing individual logarithms requires a median of 70~seconds. \paragraph{Precomputation} As illustrated in Figure~\ref{fig:nfs}, the precomputation phase includes the polynomial selection, sieving, and linear algebra steps. For this precomputation, we deliberately sieved more than strictly necessary. 
This enabled two optimizations: first, with more relations obtained from sieving, we eventually obtain a larger database of known logarithms, which makes the descent faster. Second, more sieving relations also yield a smaller linear algebra step, which is desirable because sieving is much easier to parallelize than linear algebra. %\looseness=-1 For the polynomial selection and sieving steps, we used idle time on 2000--3000 CPU cores in parallel. Polynomial selection ran for about 3~hours (7,600 core-hours). Sieving ran for 15~hours (21,400 core-hours). This sufficed to collect 40\,M relations of which 28\,M were unique, involving 15\,M primes of at most 27~bits. From this data set, we obtained a square matrix with 2.2\,M rows and columns, with 113 nonzero coefficients per row on average. We solved the corresponding linear system on a 36-node cluster using the block Wiedemann algorithm~\cite{coppersmith-block-wiedemann-1994,thome-block-wiedemann-2002}. Using unoptimized code, the computation finished in 120 hours (60,000 core-hours). The experiment above was done with CADO-NFS in early 2015. As of 2017, release 2.3 of CADO-NFS~\cite{cado-nfs-2.3} performs 20\% faster for sieving, and drastically faster for linear algebra, since 9,000 core-hours suffice to solve the same linear system on the same hardware. In total, the wall-clock time for each precomputation was slightly over one week in 2015, and is reduced to about two days with current hardware and more recent software. \paragraph{Descent} Once this precomputation was finished, we were able to run the final descent step to compute individual discrete logs in about a minute. We implemented the descent calculation in a mix of Python and C\@. On average, computing individual logarithms took about 70~seconds, but the time varied from 34 to 206 seconds on a server with two 18-core Intel Xeon E5-2699 CPUs. For purposes of comparison, a single 512-bit RSA factorization using the CADO-NFS implementation takes about 4 days of wall-clock time on the computer used for the descent\cite{cado-nfs-2.3}. % reference: revision 580f1f1 of CADO-NFS with tasks.sieve.qmin=15470309 % on a catrel node: % Total cpu/elapsed time for entire factorization: 1.06608e+07/351570 % 351570/3600/24 = 4.07 \subsection{Active Attack Implementation} The main challenge in performing this attack is to compute the shared secret $g^{ab}$ before the handshake completes in order to forge a \textsf{Finished} message from the server. With our descent implementation, the computation takes an average of 70 seconds, but there are several ways an attacker can work around this delay: \paragraph{Non-browser clients} Different TLS clients impose different time limits, after which they kill the connection. Command-line clients such as \texttt{curl} and \texttt{git} have long or no timeouts, and we can hijack their connections without difficulty. \paragraph{TLS warning alerts} Web browsers tend to have shorter timeouts, but we can keep their connections alive by sending TLS warning alerts, which are ignored by the browser but reset the handshake timer. For example, this allows us to keep Firefox TLS connections alive indefinitely. \paragraph{Ephemeral key caching} Many TLS servers do not use a fresh value $b$ for each connection, but instead compute $g^b$ once and reuse it for multiple negotiations. For example, F5 BIG-IP load balancers will reuse $g^b$ by default. Microsoft Schannel caches $g^b$ for two hours---this setting is hard-coded. 
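One quick way to check whether a given server behaves this way is to
compare the Diffie-Hellman values it sends across several handshakes.
The sketch below (Python) assumes the $(p, g, g^b)$ tuples have already
been extracted from captured \textsf{ServerKeyExchange} messages by some
external tool, which is not shown; it simply tests whether the same
public value recurs:

\begin{verbatim}
# Sketch: flag ephemeral-key reuse given observed ServerKeyExchange values.
# `observations` is a hypothetical list of (p, g, gb) integer tuples, one
# per handshake with the same server; parsing them out of packet captures
# is assumed to happen elsewhere.
def reuses_ephemeral_key(observations):
    seen = set()
    for p, g, gb in observations:
        if (p, g, gb) in seen:
            return True   # the server re-sent an identical public value
        seen.add((p, g, gb))
    return False
\end{verbatim}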
For these servers, an attacker can compute the discrete log of $g^b$ from
one connection and use it to attack later handshakes.

\paragraph{TLS False Start}
Even when clients enforce shorter timeouts and servers do not reuse
values for $b$, the attacker can still break the confidentiality of user
requests that use TLS False Start. Recent versions of Chrome, Internet
Explorer, and Firefox implement False Start, but their policies on when
to enable it vary. Firefox 35, Chrome 41, and Internet Explorer (Windows
10) send False Start data with \dhe{}\@. In these cases, a
man-in-the-middle can record the handshake and decrypt the False Start
payload at leisure.

\section{Nation-State Threats to DH}
\label{sec:nationstate}

The previous sections demonstrate the existence of practical attacks
against Diffie-Hellman key exchange as currently used by TLS\@. However,
these attacks rely on the ability to downgrade connections to
export-grade crypto. In this section we address the following question:
how secure is Diffie-Hellman in broader practice, as used in other
protocols that do not suffer from downgrade, and when applied with
stronger groups?

\input{\LogjamFigures/fig_scaling.tex}

To answer this question we must first examine how the number field sieve
for discrete log scales to 768- and 1024-bit groups. As we argue below,
768-bit groups in relatively widespread use are now within reach for
academic computational resources. Additionally, performing
precomputations for a small number of 1024-bit groups is plausibly
within the resources of nation-state adversaries. The precomputation
would likely require special-purpose hardware, but would not require any
major algorithmic improvements. In light of these results, we examine
several standard Internet security protocols---IKE, SSH, and TLS---to
determine their vulnerability. Although the cost of the precomputation
for a 1024-bit group is several times higher than for an RSA key of
equal size, a one-time investment could be used to attack millions of
hosts, due to widespread reuse of the most common Diffie-Hellman
parameters. Finally, we apply this new understanding to a set of
recently published documents to evaluate the hypothesis that the
National Security Agency has {\em already} implemented such a
capability.

\subsection{Scaling NFS to 768- and 1024-bit DH}

Estimating the cost for discrete log cryptanalysis at larger key sizes
is far from straightforward due to the complexity of parameter tuning.
We attempt estimates up to 1024-bit discrete log based on the existing
literature and our own experiments, but further work is needed for
greater confidence. We summarize all the costs, measured or estimated,
in Table~\ref{tab:costs}.

\paragraph{DH-768: Completed in 2016}
At the time of disclosure, the latest discrete log record was a 596-bit
computation. Based on that work, and on prior experience with the
768-bit factorization record in 2009~\cite{factor-rsa-768}, we made the
conservative prediction that it was possible, as explained
in~\S\ref{sec:dl}, to put more computational effort into sieving for the
discrete log case than for factoring, so that the linear algebra step
would run on a slightly smaller matrix. This led to a runtime estimate
of around 35,000 core-years, most of which was spent on linear algebra.
This estimate turned out to be overly conservative, for several reasons.
First, there have been significant improvements in our software
implementation (see~\S\ref{subsec:512bit-dl-computation}).
In addition, our estimate did not use the Joux-Lercier alternative polynomial selection method~\cite[\S2.1]{nfs-prime-field-2003}, which is specific to discrete logs. For 768-bit discrete logs, this polynomial selection method leads to a significantly smaller computational cost. In 2016, Kleinjung et al.\ completed a 768-bit discrete log computation~\cite{compute-dlog-768}. While this is a massive computation on the academic scale, a computation of this size has likely been within reach of nation-states for more than a decade. This data is mentioned in Table~\ref{tab:costs}. \paragraph{DH-1024: Plausible with nation-state resources} Experimentally extrapolating sieving parameters to the 1024-bit case is difficult due to the tradeoffs between the steps of the algorithm and their relative parallelism. The prior work proposing parameters for factoring a 1024-bit RSA key is thin and we resort to extrapolating from asymptotic complexity. For the number field sieve, the complexity is $\exp\big((k+o(1))(\log N)^{1/3}(\log\log N)^{2/3}\big),$ where $N$ is the integer to factor or the prime modulus for discrete log and $k$ is an algorithm-specific constant. This formula is inherently imprecise, since the $o(1)$ in the exponent can hide polynomial factors. This complexity formula, with $k=1.923$, describes the overall time for both discrete log and factorization, which are both dominated by sieving and linear algebra in the precomputation. Evaluating the formula for 768- and 1024-bit $N$ gives us estimated multiplicative factors by which time and space will increase from the 768- to the 1024-bit case. For 1024-bit precomputation, the total time complexity can be expected to increase by a factor of 1220 using the complexity formula, while space complexity increases by its square root, approximately 35. These ratios are relevant for both factorization and discrete log since they have the same asymptotic behavior. For DH-1024, we get a total cost estimate for the precomputation of about 6M core-years. The time complexity for each individual log after the precomputation should be multiplied by $L_{2^{1024}}(1.206)/L_{2^{768}}(1.206)\approx86$, where $k=1.206$ follows from~\cite{hidden-snfs-1024-2017}. This last number does not correspond to what we observed in practice and we attribute that to the fact that the descent step has been far less studied. In practice, it is not uncommon for estimates based merely on the complexity formula to be off by a factor of 10. Estimates of Table~\ref{tab:costs} must therefore be considered with due caution. For 1024-bit descent, we experimented with our early-abort implementation to inform our estimates for descent initialization, which should dominate the individual discrete log computation. For a random target in Oakley Group~2, initialization took 22 core-days, and yielded a few primes of at most 130 bits to be descended further. In twice this time, we reached primes of about 110 bits. At this point, we were certain to have bootstrapped the descent and could continue down to the smoothness bound in a few more core-days if proper sieving software were available. Thus we estimate that a 1024-bit descent would take about 30~core-days, once again easily parallelizable. \paragraph{Costs in hardware} Although several millions of core-years is a massive computational effort, it is not necessarily out of reach for a nation-state. 
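As a rough sanity check on these ratios, the $L$-function can be
evaluated directly, ignoring the $o(1)$ term (a back-of-the-envelope
calculation, not a rigorous cost model):
\[
\frac{L_{2^{1024}}(1.923)}{L_{2^{768}}(1.923)}
 = \exp\Big(1.923\big[(\ln 2^{1024})^{1/3}(\ln\ln 2^{1024})^{2/3}
 - (\ln 2^{768})^{1/3}(\ln\ln 2^{768})^{2/3}\big]\Big)
 \approx e^{7.1} \approx 1.2\times10^{3},
\]
consistent with the time factor of roughly 1220 and the space factor of
$\sqrt{1220}\approx 35$ used above.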
At this scale, significant cost savings could be realized by developing application-specific hardware given that sieving is a natural target for hardware implementation. To our knowledge, the best prior description of an ASIC implementation of 1024-bit sieving is the 2007 work of Geiselmann and Steinwandt~\cite{hardware-scaling-nfs-2007}. Updating their estimates for modern techniques and adjusting parameters for discrete log allows us to extrapolate the financial and time costs. We increase their chip count by a factor of ten to sieve more and save on linear algebra as above giving an estimate of 3M chips to complete sieving in one year. Shrinking the dies from the 130~nm technology node used in the paper to a more modern size reduces costs as transistors are cheaper at newer technologies. With standard transistor costs and utilization, it would cost about \$2 per chip to manufacture after fixed design and tape-out costs of roughly \$2M~\cite{jefferies-report-2012}. This suggests that an \$8M investment would buy enough ASICs to complete the DH-1024 sieving precomputation in one year. Since a step of descent uses sieving, the same hardware could likely be reused to speed calculations of individual logarithms. Estimating the financial cost for the linear algebra is more difficult since there has been little work on designing chips that are suitable for the larger fields involved in discrete log. To derive a rough estimate, we can begin with general purpose hardware and the core-year estimate from Table~\ref{tab:costs}. Using the 300,000 CPU core Titan supercomputer it would take 4~years to complete the 1024-bit linear algebra stage (notwithstanding the fact that estimates from Table~\ref{tab:costs} are known to be extremely coarse, and could be optimistic by a factor of maybe 10). Titan was constructed in 2012 for \$94M, suggesting a cost of \$0.5B in supercomputers to finish this step in a year. In the context of factorization, moving linear algebra from general purpose CPUs to ASICs has been estimated to reduce costs by a factor of 80~\cite{improving-linear-algebra-nfs-2005}. If we optimistically assume that a similar reduction can be achieved for discrete log, the hardware cost to perform the linear algebra for DH-1024 in one year is plausibly on the order of tens of millions of dollars. To put this dollar figure in context, the FY\,2012 budget for the U.S. Consolidated Cryptologic Program (which includes the NSA) was \$10.5 billion\footnote{\small The National Science Foundation's budget was \$7 billion.}~\cite{black-budget-leak-2013}. The 2013 budget request, which prioritized investment in ``groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic'' included notable \$100M+ increases in two programs under Cryptanalysis \& Exploitation Services: ``Cryptanalytic IT Systems'' (to \$247M), and the cryptically named ``PEO Program C'' (to \$360M)~\cite{black-budget-leak-2013}. \subsection{Is NSA Breaking 1024-bit DH?} Our calculations suggest that it is plausibly within NSA's resources to have performed number field sieve precomputations for a small number of 1024-bit Diffie-Hellman groups. This would allow them to break any key exchanges made with those groups in close to real time. If true, this would answer one of the major cryptographic questions raised by the Edward Snowden leaks: How is NSA defeating the encryption for widely used VPN protocols? 
Virtual private networks (VPNs) are widely used for tunneling business or personal traffic across potentially hostile networks. We focus on the Internet Protocol Security (IPsec) VPN protocol using the Internet Key Exchange (IKE) protocol for key establishment and parameter negotiation and the Encapsulating Security Payload (ESP) protocol for protecting packet contents. \paragraph{IKE} There are two versions, IKEv1 and IKEv2, which differ in message structure but are conceptually similar. For the sake of brevity, we will use IKEv1 terminology~\cite{rfc7296}. \newcommand{\skeyid}{\textsf{\small SKEYID}} \newcommand{\keymat}{\textsf{\small KEYMAT}} \newcommand{\psk}{\textsf{\small PSK}} Each IKE session begins with a Phase~1 handshake in which the client and server select a Diffie-Hellman group from a small set of standardized parameters and perform a key exchange to establish a shared secret. The shared secret is combined with other cleartext values transmitted by each side, such as nonces and cookies, to derive a value called \skeyid\@. When authenticated with a pre-shared key (PSK) in IKEv1, the PSK value is incorporated into the derivation of \skeyid. The resulting \skeyid\ is used to encrypt and authenticate a Phase~2 handshake. Phase~2 establishes the parameters and key material, \keymat, for protecting the subsequently tunneled traffic. Ultimately, \keymat{} is derived from \skeyid, additional nonces, and the result of an optional Phase~2 Diffie-Hellman exchange. \paragraph{NSA's VPN exploitation process} Documents published by Der Spiegel describe NSA's ability to decrypt VPN traffic using passive eavesdropping and without message injection or man-in-the-middle attacks on IPsec or IKE\@. Figure~\ref{fig:scarynsafigure} illustrates the flow of information required to decrypt the tunneled traffic. \begin{figure} \noindent\includegraphics[width=\linewidth]{\LogjamFigures/NSA_combined.png} \caption{\textbf{NSA's VPN decryption infrastructure}\,---\,% This classified illustration published by Der Spiegel~\cite{media-35526} shows captured IKE handshake messages being passed to a high-performance computing system, which returns the symmetric keys for ESP session traffic. The details of this attack are consistent with an efficient break for 1024-bit Diffie-Hellman.} \label{fig:scarynsafigure} \end{figure} \input{\LogjamFigures/scan_summary.tex} When the IKE/ESP messages of a VPN of interest are collected, the IKE messages and a small amount of ESP traffic are sent to the Cryptanalysis and Exploitation Services (CES)~\cite{media-35526,media-35529,media-35515}. Within the CES enclave, a specialized ``attack orchestrator'' attempts to recover the ESP decryption key with assistance from high-performance computing resources as well as a database of known PSKs (``CORALREEF'')~\cite{media-35529,media-35526,media-35515}. If the recovery was successful, the decryption key is returned from CES and used to decrypt the buffered ESP traffic such that the encapsulated content can be processed~\cite{media-35529,media-35522}. \paragraph{Evidence for a discrete log attack} The ability to decrypt VPN traffic does not necessarily indicate a defeat of Diffie-Hellman. There are, however, several features of the described exploitation process that support this hypothesis. The IKE protocol has been extensively analyzed~\cite{ike-security-2002,ike-nrl-1999} and is not believed to be exploitable in standard configurations under passive eavesdropping attacks. 
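To make these dependencies concrete, the following schematic sketch (in
Python, using illustrative names; it mirrors the description above
rather than the exact RFC~2409 derivation) shows how the Phase~1 shared
secret $g^{xy}$, the exchanged nonces, and any PSK feed \skeyid{} and,
ultimately, \keymat{}:

\begin{verbatim}
# Schematic sketch of the key dependencies described above.  This is not
# the exact IKEv1 derivation; function and argument names are illustrative.
import hmac, hashlib

def prf(key, data):
    return hmac.new(key, data, hashlib.sha1).digest()

def skeyid_psk(psk, nonce_i, nonce_r):
    # PSK mode: the pre-shared key and both nonces feed SKEYID.
    return prf(psk, nonce_i + nonce_r)

def skeyid_sig(nonce_i, nonce_r, g_xy):
    # Signature mode: SKEYID depends directly on the DH shared secret g^xy.
    return prf(nonce_i + nonce_r, g_xy)

def keymat(skeyid, g_xy, cookies, phase2_nonces):
    # Phase 2 keying material is rooted in SKEYID (and hence in g^xy).
    skeyid_d = prf(skeyid, g_xy + cookies)
    return prf(skeyid_d, phase2_nonces)
\end{verbatim}

An eavesdropper who can recover $g^{xy}$ (and, for IKEv1 PSK mode, the
PSK) can therefore walk the same derivation.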
Absent a vulnerability in the key derivation function or transport encryption, the attacker must recover the decryption keys. This requires the attacker to calculate \skeyid{} generated from the Phase~1 Diffie-Hellman shared secret after passively observing an IKE handshake. While IKE is designed to support a range of Diffie-Hellman groups, our Internet-wide scans (\S\ref{sec:1024effects}) show that the vast majority of IKE endpoints select one particular 1024-bit DH group even when offered stronger groups. Conducting an expensive, but feasible, precomputation for this single 1024-bit group (Oakley Group 2) would allow the efficient recovery of a large number of Diffie-Hellman shared secrets used to derive \skeyid{} and the subsequent \keymat. Given an efficient oracle for solving the discrete logarithm problem, attacks on IKE are possible provided that the attacker can obtain the following: $(1)$ a complete two-sided IKE transcript, and $(2)$ any PSK used for deriving {\skeyid} in IKEv1. The available documents describe both of these as explicit prerequisites for the VPN exploitation process outlined above and provide the reader with internal resources available to meet these prerequisites~\cite{media-35515}. Of course, this explanation is not dispositive and the possibility remains that NSA could defeat VPN encryption using alternative means. A published NSA document refers to the use of a router ``implant'' to allow decryption of IPsec traffic indicating that the use of targeted malware is possible. This implant ``allows passive exploitation with just ESP''~\cite{media-35515} without the prerequisite of collecting the IKE handshake messages. This indicates that it is an alternative mechanism to the attack described above. The most compelling argument for a pure cryptographic attack is the generality of the NSA's VPN exploitation process. This process appears to be applicable across a broad swath of VPNs without regard to endpoint's identity or the ability to compromise individual endpoints. \subsection{Effects of a 1024-bit Break} \label{sec:1024effects} In this section, we use Internet-wide scanning to assess the impact of a hypothetical DH-1024 break on IKE, SSH, and HTTPS\@. Our measurements, performed in early 2015, indicate that these protocols would be subject to widespread compromise by a nation-state attacker who had the resources to invest in precomputation for a small number of 1024-bit groups. \paragraph{IKE} We measured how IPsec VPNs use Diffie-Hellman in practice by scanning a 1\% random sample of the public IPv4 address space for IKEv1 and IKEv2 (the protocols used to initiate an IPsec VPN connection) in May~2015. We used the ZMap UDP probe module to measure support for Oakley Groups~1 and~2 (two popular 768- and 1024-bit, built-in groups) and which group servers prefer. Of the 80K hosts that responded with a valid IKE packet, 44.2\% were willing to negotiate a connection using one of the two groups. We found that 31.8\% of IKEv1 and 19.7\% of IKEv2 servers support Oakley Group~1 (768-bit) while 86.1\% and 91.0\% respectively supported Oakley Group~2 (1024-bit). In our sample of IKEv1 servers, 2.6\% of profiled servers preferred the 768-bit Oakley Group~1 and 66.1\% preferred the 1024-bit Oakley Group~2. For IKEv2, 5.8\% of profiled servers chose Oakley Group~1, and 63.9\% chose Oakley Group~2. \paragraph{SSH} All SSH handshakes complete either a finite field or elliptic curve Diffie-Hellman exchange. 
The protocol explicitly defines support for Oakley Group~2 (1024-bit) and Oakley Group~14 (2048-bit) but also allows a server-defined group to be negotiated. We scanned 1\% random samples of the public IPv4 address space in April 2015. We find that 98.9\% of SSH servers support the 1024-bit Oakley Group~2, 77.6\% support the 2048-bit Oakley Group~14, and 68.7\% support a server-defined group\@. During the SSH handshake, the server selects the client's highest priority mutually supported key exchange algorithm. To estimate what servers will prefer in practice, we performed a scan in which we mimicked the algorithms offered by OpenSSH 6.6.1p1, the latest version of OpenSSH\@. In this scan, 21.8\% of servers preferred the 1024-bit Oakley Group~2, and 37.4\% preferred a server-defined group. 10\% of the server-defined groups were 1024-bit, but, of those, nearly all provided Oakley Group~2 rather than a custom group. Combining these equivalent choices, we find that a nation-state adversary who performed NFS precomputations for the 1024-bit Oakley Group~2 could passively eavesdrop on connections to 3.6M (25.7\%) publicly accessible SSH servers. \paragraph{HTTPS} \dhe is commonly deployed on web servers. 68.3\% of Alexa Top~1M sites support \dhe, as do 23.9\% of sites with browser-trusted certificates. Of the Top~1M sites that support \dhe, 84\% use a 1024-bit or smaller group, with 94\% of these using one of five groups. Despite widespread support for \dhe, a passive eavesdropper can only decrypt connections that organically agree to use Diffie-Hellman. We estimate the number of sites for which this will occur by offering the same sets of ciphersuites as Chrome, Firefox, and Safari. Approximately 24.0\% of browser connections with HTTPS-enabled Top~1M sites (and 10\% with browser-trusted sites) will negotiate \dhe with one of the ten most popular 1024-bit primes; 17.9\% of connections with Top~1M sites could be passively eavesdropped given the precomputation for a single 1024-bit prime. \section{Recommendations} \label{sec:lessons} In this section, we present concrete recommendations to recover the expected security of Diffie-Hellman. \paragraph{Transition to elliptic curves.} Transitioning to elliptic curve Diffie-Hellman (ECDH) key exchange avoids all known feasible cryptanalytic attacks. Current elliptic curve discrete log algorithms do not gain as much of an advantage from precomputation. In addition, ECDH keys are shorter and computations are faster. We recommend transitioning to elliptic curves; this is the most effective solution to the vulnerabilities in this paper. We note that in August 2015, the NSA announced that it was planning to transition away from elliptic curve cryptography for its Suite B cryptographic algorithms and would replace them with algorithms resistant to quantum computers~\cite{nsa-suiteb}. However, since no fully vetted and standardized quantum-resistant algorithms exist currently, elliptic curves remain the most secure choice for public key operations. \paragraph{Increase minimum key strengths.} To protect against the Logjam attack, server operators should disable \dheexp and configure \dhe ciphersuites to use primes of 2048 bits or larger. Browsers and clients should raise the minimum accepted size for Diffie-Hellman groups to at least 1024 bits in order to avoid downgrade attacks. \paragraph{Don't deliberately weaken crypto.} The Logjam attack illustrates the fragility of cryptographic ``front doors''. 
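To make the two recommendations above concrete, the following sketch
shows one way a server operator might apply them, assuming a service
built on Python's standard \texttt{ssl} module (file names and the
cipher string are placeholders, not a canonical configuration); fresh
2048-bit parameters can be generated beforehand with, for example,
\texttt{openssl dhparam -out dhparams.pem 2048}:

\begin{verbatim}
# Sketch: serve TLS with a 2048-bit DHE group and no export-grade or
# low-strength suites.  Paths and the cipher string are placeholders.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")   # placeholder certificate
ctx.load_dh_params("dhparams.pem")   # freshly generated 2048-bit group
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:!EXPORT:!LOW:!aNULL")
\end{verbatim}

Configuration changes of this kind address the immediate exposure; the
deeper problem is the legacy of deliberately weakened parameters itself.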
Although the key sizes originally used in \dheexp were intended to be tractable only to NSA, two decades of algorithmic and computational improvements have significantly lowered the bar to attacks on such key sizes. Despite the eventual relaxation of crypto export restrictions and subsequent attempts to remove support for \dheexp{}, the technical debt induced by the additional complexity has left implementations vulnerable for decades. Like FREAK~\cite{freak-attack-2015}, our attacks warn of the long-term debilitating effects of deliberately weakening cryptography. \section{Conclusion} \label{sec:conclusion} We find that Diffie-Hellman key exchange, as used in practice, is often less secure than widely believed. The problems stem from the fact that the number field sieve for discrete log allows an attacker to perform a single precomputation that depends only on the group, after which computing individual logarithms in that group has a far lower cost. Although this is well known to cryptographers, it apparently has not been widely understood by system builders. Likewise, many cryptographers did not appreciate that a large fraction of Internet communication depends on a few small, widely shared groups. %\looseness=-1 A key lesson is that cryptographers and creators of practical systems need to work together more effectively. System builders should take responsibility for being aware of applicable cryptanalytic attacks. Cryptographers should involve themselves in how crypto is actually being applied, such as through engagement with standards efforts and software review. Bridging the perilous gap that separates these communities will be essential for keeping future systems secure. %\section*{Acknowledgments} % %The authors thank Michael Bailey, Daniel Bernstein, Ron Dreslinski, %Tanja Lange, Adam Langley, Kenny Paterson, Andrei Popov, Ivan Ristic, %Edward Snowden, Brian Smith, Martin Thomson, and Eric Rescorla. This %work was supported by the U.S. National Science Foundation, the Office %of Naval Research, the European Research Council, and the French %National Research Agency, with additional support from the Mozilla %Foundation, Supermicro, Google, Cisco, the Morris Wellman %Professorship, and the Alfred P. Sloan Foundation. Some experiments %used the Grid'5000 testbed, supported by INRIA, CNRS, RENATER, and %others.
{ "alphanum_fraction": 0.7932910435, "avg_line_length": 55.5519253209, "ext": "tex", "hexsha": "fa1d6f0b2fdc913ed52656bdb7f53278ca66df34", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5607114fb4340c5b6e944c73ed6019006d3ebec9", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "dadrian/dissertation", "max_forks_repo_path": "papers/logjam/paper/text.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5607114fb4340c5b6e944c73ed6019006d3ebec9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "dadrian/dissertation", "max_issues_repo_path": "papers/logjam/paper/text.tex", "max_line_length": 113, "max_stars_count": null, "max_stars_repo_head_hexsha": "5607114fb4340c5b6e944c73ed6019006d3ebec9", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "dadrian/dissertation", "max_stars_repo_path": "papers/logjam/paper/text.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 11733, "size": 47608 }
\chapter{A SAMPLE APPENDIX}
\label{app:app1}

Just put in text as you would into any chapter, with sections and
whatnot. That's the end of it.

More details on how to use these specific packages are available in the
documentation of the respective packages.
{ "alphanum_fraction": 0.7894736842, "avg_line_length": 29.5555555556, "ext": "tex", "hexsha": "3bfd0029d8d26bb3d769ab9ecf436a3d49267928", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "02c601f7da5c809ad0b8443ec6b4830ed2a86a24", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "SumitPadhiyar/IIT-Madras-Thesis-Synopsis-Template", "max_forks_repo_path": "D_appx/D01_app1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "02c601f7da5c809ad0b8443ec6b4830ed2a86a24", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "SumitPadhiyar/IIT-Madras-Thesis-Synopsis-Template", "max_issues_repo_path": "D_appx/D01_app1.tex", "max_line_length": 70, "max_stars_count": null, "max_stars_repo_head_hexsha": "02c601f7da5c809ad0b8443ec6b4830ed2a86a24", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "SumitPadhiyar/IIT-Madras-Thesis-Synopsis-Template", "max_stars_repo_path": "D_appx/D01_app1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 63, "size": 266 }
\documentclass[]{book} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \usepackage[margin=1in]{geometry} \usepackage{hyperref} \hypersetup{unicode=true, pdftitle={A collection of notes on R, Git and statistics}, pdfauthor={Olalla Diaz-Yañez}, pdfborder={0 0 0}, breaklinks=true} \urlstyle{same} % don't use monospace font for urls \usepackage{natbib} \bibliographystyle{apalike} \usepackage{longtable,booktabs} \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} } \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{5} % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi %%% Use protect on footnotes to avoid problems with footnotes in titles \let\rmarkdownfootnote\footnote% \def\footnote{\protect\rmarkdownfootnote} %%% Change title format to be more compact \usepackage{titling} % Create subtitle command for use in maketitle \newcommand{\subtitle}[1]{ \posttitle{ \begin{center}\large#1\end{center} } } \setlength{\droptitle}{-2em} \title{A collection of notes on R, Git and statistics} \pretitle{\vspace{\droptitle}\centering\huge} \posttitle{\par} \author{Olalla Diaz-Yañez} \preauthor{\centering\large\emph} \postauthor{\par} \predate{\centering\large\emph} \postdate{\par} \date{2018-03-03} \usepackage{booktabs} \usepackage{amsthm} \makeatletter \def\thm@space@setup{% \thm@preskip=8pt plus 2pt minus 4pt \thm@postskip=\thm@preskip } \makeatother \usepackage{amsthm} \newtheorem{theorem}{Theorem}[chapter] \newtheorem{lemma}{Lemma}[chapter] \theoremstyle{definition} \newtheorem{definition}{Definition}[chapter] \newtheorem{corollary}{Corollary}[chapter] \newtheorem{proposition}{Proposition}[chapter] \theoremstyle{definition} \newtheorem{example}{Example}[chapter] \theoremstyle{definition} \newtheorem{exercise}{Exercise}[chapter] \theoremstyle{remark} \newtheorem*{remark}{Remark} \newtheorem*{solution}{Solution} \begin{document} \maketitle { \setcounter{tocdepth}{1} \tableofcontents } \hypertarget{a-work-in-progress}{% \chapter{A work in 
progress}\label{a-work-in-progress}}

This is a collection of notes (a book?) with useful resources and
instructions related to R, RStudio, Git and Statistics with R. Right now
it mainly works as a personal repository, but I hope that little by
little it will have more meaningful content, with detailed instructions
for more complex concepts and examples.

{A collection of notes on R, Git and statistics} by Olalla Díaz-Yáñez is
licensed under a Creative Commons Attribution-NonCommercial 4.0
International License.

\hypertarget{intro}{%
\chapter{Introduction}\label{intro}}

\hypertarget{what-is-r}{%
\section{What is R}\label{what-is-r}}

R is a system for statistical computation and graphics. It consists of a
language plus a run-time environment with graphics, a debugger, access
to certain system functions, and the ability to run programs stored in
script files. To be able to use and understand R, you will also need to
know some basic concepts that all programming depends on. But do not be
scared!

\href{https://cran.r-project.org/doc/FAQ/R-FAQ.html\#What-is-R_003f}{Here}
you can read a longer definition of what R is.

\textbf{R vs.~Python}

I really like
\href{http://ucanalytics.com/blogs/r-vs-python-comparison-and-awsome-books-free-pdfs-to-learn-them/}{this}
comparison of Python and R. The idea is that R is Batman and Python is
Superman. Batman (R) does better detective work and has a more developed
intelligence; in other words, Batman is more brain than muscle. Superman
(Python), on the other hand, has muscle power and super strength, and
you could consider him more elegant, but generally speaking he is more
muscle than brain.

FUN FACT: The ``Python'' programming language derives its name from the
series
\href{https://en.wikipedia.org/wiki/Monty_Python\%27s_Flying_Circus}{Monty
Python's Flying Circus}.

\hypertarget{why-r}{%
\subsection{Why R}\label{why-r}}

There are several reasons to choose R:

\begin{itemize}
\tightlist
\item
  it's free
\item
  it's well-documented and has an amazing user community
\item
  it runs almost everywhere
\item
  it has a large user base among researchers, data scientists, and
  companies
\item
  it has an extensive library of packages helping to solve different
  tasks
\item
  it's not a black box
\end{itemize}

The best way to learn a tool is to use it for something useful, for
example analyzing data. That's why this is the tool preferred in our
courses. The goal is not to master the tool or actually teach you R, but
to give you enough knowledge to find your way, succeed in your goals,
and explore independently to find the solutions that you need.

\hypertarget{what-is-rstudio}{%
\section{What is RStudio}\label{what-is-rstudio}}

\href{https://www.rstudio.com/products/RStudio/}{RStudio} is an
integrated development environment (IDE) for R. It includes a console, a
syntax-highlighting editor that supports direct code execution, as well
as tools for plotting, history, debugging and workspace management.

\hypertarget{why-rstudio}{%
\subsection{Why RStudio}\label{why-rstudio}}

My friend usually says that we have not come to this world to suffer;
using RStudio instead of plain R makes your life easier, so it will
spare you some unnecessary suffering. There is nothing wrong with just
using R without RStudio, and some people actually prefer to learn R
without RStudio; I do not.
Some of the reasons why I prefer to use RStudio are:

\begin{itemize}
\tightlist
\item
  window docking (all necessary things in one window)
\item
  full-featured text editor
\item
  tab-completion of filenames, function names and arguments (you do not
  need to remember everything)
\item
  Rmarkdown and knitr integration
\end{itemize}

\hypertarget{what-is-git}{%
\section{What is Git}\label{what-is-git}}

\href{http://git-scm.com}{Git} is a \textbf{version control system}.
Probably the best description of Git ever is what XXX wrote: it is like
the ``Track Changes'' feature from Microsoft Word, on steroids. It was
created to help groups of developers deal with big and complex projects.
For a data science user, it is basically a clever way to avoid having a
hundred ``final version'' files, as described
\href{http://phdcomics.com/comics.php?f=1531}{here}.

It is beneficial even when working alone, as you can delete that
not-so-clever code that you wrote and never used without being scared:
if future you needs that piece of code, you will be able to go back and
retrieve it. But Git's benefits increase exponentially when you include
collaborators in the equation: using Git is a smart way to collaborate,
to stay up to date with each other's work, and at the same time to keep
your own and other people's work under version control. Some people
think that the Git pain is only worth it when collaborating, but even in
that case, I am sure you are going to work with others at some point,
and failing to take that into consideration from the beginning will
delay its adoption in your workflow and cause more pain than just the
Git pain.

\hypertarget{what-is-github}{%
\section{What is GitHub}\label{what-is-github}}

A way of hosting your work online.

\hypertarget{what-it-is-in-for-me}{%
\section{What is in it for me?}\label{what-it-is-in-for-me}}

Maybe you are still wondering: what is really in it for me? I just
wanted to do my statistical analysis and be done with it; do all the
gains really justify the inevitable pain of starting to use R, RStudio,
GitHub and Git?\ldots{}

My personal experience is that doing things from the beginning with the
correct approach, although painful, may avoid a bigger pain in the
future. The sooner you settle into your best workflow, the easier it
will be to do things right and to catch mistakes early. Of course, your
workflow won't be a static thing; you will continually learn new
approaches and techniques that improve the way you do things.

That said, I also think that the best workflow for each of us as
individuals is one thing (if you like to write your essays on paper and
then in a Word document, that's fine by me), but the best way to
collaborate and work with others is a different matter. In the first
case you choose; in the second case the choice should always be based on
what is best for the team: higher productivity, fewer chances of errors,
easier collaboration, etc. And yes, you are going to work with others
most of the time. That is where Git and RStudio are going to make your
life easier!

\hypertarget{applications}{%
\chapter{Applications}\label{applications}}

Some \emph{significant} applications of the tools presented.

\hypertarget{example-one}{%
\section{Example one}\label{example-one}}

\hypertarget{example-two}{%
\section{Example two}\label{example-two}}

\hypertarget{sessions}{%
\chapter{Sessions}\label{sessions}}

\hypertarget{session-0}{%
\section{Session 0}\label{session-0}}

\textbf{1. Install R and RStudio}
\begin{itemize}
\item
  Install a pre-compiled binary of R for your operating system:
  \url{https://cloud.r-project.org}
\item
  Install the preview version of RStudio Desktop:
  \url{https://www.rstudio.com/products/rstudio/download/preview/}
\end{itemize}

If you have a pre-existing installation of R and/or RStudio, it is
highly recommended that you reinstall both. You will face more
difficulties if you run an old software version. If you upgrade R, you
will need to update any packages you have installed.

\textbf{2. Test it}

Launch RStudio; you should see something similar to
\href{https://www.flickr.com/photos/xmodulo/22093054381}{this}, but a
bit emptier, as you have not written anything yet. Put your cursor in
the pane labelled Console, which is where you interact with the live R
process. In the console, write something like 2 + 1 and press return;
you should get a 3. If you get a 3, you've succeeded in the installation
of R and RStudio, congrats!

\textbf{Some extra resources:}

\url{https://cran.r-project.org/doc/manuals/R-admin.html}

\url{https://cran.r-project.org/doc/FAQ/R-FAQ.html\#What-is-R_003f}

\url{https://cran.r-project.org/doc/FAQ/R-FAQ.html}

\hypertarget{session-1}{%
\section{Session 1}\label{session-1}}

\hypertarget{learning-objectives}{%
\subsection{Learning objectives}\label{learning-objectives}}

\begin{itemize}
\tightlist
\item
  Learn what programming is
\item
  Learn what R is
\item
  Learn the basics of RStudio
\item
  Learn the basics of R:

  \begin{itemize}
  \tightlist
  \item
    Variable assignment
  \item
    Basic data types
  \item
    Vectors
  \item
    Data frames
  \end{itemize}
\end{itemize}

\hypertarget{contents}{%
\subsection{Contents}\label{contents}}

\textbf{Basics of R:}

\begin{itemize}
\tightlist
\item
  Program
\item
  Language: not compiled, simple syntax
\item
  R ``flow'':

  \begin{itemize}
  \tightlist
  \item
    Variables, data, functions, results, etc., are stored in the active
    memory of the computer in the form of objects which have a name.
  \item
    You can do actions on these objects with operators (arithmetic,
    logical, comparison, \ldots) and functions (which are themselves
    objects).
  \end{itemize}
\end{itemize}

\textbf{Practice}

\begin{itemize}
\tightlist
\item
  Using RStudio

  \begin{itemize}
  \tightlist
  \item
    Open a project
  \item
    Organize your folder (RStudio project / Code / Data / Figures)
    \textbf{Super tip of the day!}
  \item
    Console
  \item
    Files
  \item
    Script
  \item
    \href{https://support.rstudio.com/hc/en-us/articles/200711853-Keyboard-Shortcuts}{Keyboard
    shortcuts}
  \end{itemize}
\item
  Variable assignment: the ``assign'' operator

  \begin{itemize}
  \tightlist
  \item
    the concept of overwriting a variable (this changes objects in the
    active memory, not the data on the disk)
  \item
    Note that R is case sensitive!
  \item
    ``\#'' is used for comments
  \end{itemize}
\item
  Basic data types

  \begin{itemize}
  \tightlist
  \item
    Decimal values like 4.5 are called numerics.
  \item
    Natural numbers like 4 are called integers. Integers are also
    numerics.
  \item
    Boolean values (TRUE or FALSE) are called logical.
  \item
    Text (or string) values are called characters. The quotation marks
    indicate that it is a character.
  \end{itemize}
\item
  Vectors

  \begin{itemize}
  \tightlist
  \item
    create vectors
  \item
    name a vector
  \item
    select elements from the vector
  \item
    compare different vectors
  \item
    combine vectors
  \end{itemize}
\item
  Data frames

  \begin{itemize}
  \tightlist
  \item
    creating a data frame
  \item
    quick look at the data frame: structure, row names, number of
    columns, number of rows, summary
  \item
    select data frame elements
  \item
    subsetting a data frame
  \item
    ordering
  \item
    sorting
  \end{itemize}
\end{itemize}

\textbf{The code of this session} can be found
\href{https://github.com/oldiya/DataScience/blob/master/scripts/session1.R}{here}.

\hypertarget{carry-on-learning}{%
\subsection{Carry on learning}\label{carry-on-learning}}

\href{https://www.datacamp.com/courses/free-introduction-to-r?utm_source=adwords_ppc\&utm_campaignid=898687156\&utm_adgroupid=48303643819\&utm_device=c\&utm_keyword=\&utm_matchtype=b\&utm_network=g\&utm_adpostion=1t1\&utm_creative=229335520231\&utm_targetid=dsa-377762271983\&utm_loc_interest_ms=\&utm_loc_physical_ms=1005620\&gclid=EAIaIQobChMIkOeW3InG2QIVdijTCh23qgmmEAAYASAAEgIJL_D_BwE}{DataCamp
(still) free basic R course}

\href{https://www.rstudio.com/wp-content/uploads/2016/10/r-cheat-sheet-3.pdf}{Base
R Cheat Sheet}

\hypertarget{session-2}{%
\section{Session 2}\label{session-2}}

\hypertarget{learning-objectives-1}{%
\subsection{Learning objectives}\label{learning-objectives-1}}

\begin{itemize}
\tightlist
\item
  Learn the basics of R:

  \begin{itemize}
  \tightlist
  \item
    Variable assignment (review)
  \item
    Basic data types (review)
  \item
    Vectors (review)
  \item
    Data frames (review \& new)
  \item
    Lists
  \item
    Matrices
  \item
    Factors
  \end{itemize}
\item
  Basic math functions
\item
  Basic visualizations of the data
\item
  Basic statistics
\item
  Using Libraries
\item
  File path
\item
  Working directory
\end{itemize}

\hypertarget{contents-1}{%
\subsection{Contents}\label{contents-1}}

\begin{itemize}
\tightlist
\item
  Data frames

  \begin{itemize}
  \tightlist
  \item
    Understanding a data frame
  \item
    Count rows and columns
  \item
    Add rows and columns
  \item
    Subset a data frame
  \item
    list of variables
  \item
    name of the columns
  \item
    Vector functions (sort, counts of values, unique values)
  \end{itemize}
\item
  Lists

  \begin{itemize}
  \tightlist
  \item
    Lists, as opposed to vectors, can hold components of different
    types
  \item
    create a list
  \item
    list subsetting
  \item
    name lists
  \end{itemize}
\item
  Matrices

  \begin{itemize}
  \tightlist
  \item
    create matrices and understand how you can do basic computations
    with them
  \end{itemize}
\item
  Factors

  \begin{itemize}
  \tightlist
  \item
    create, subset and compare
  \end{itemize}
\item
  Math functions

  \begin{itemize}
  \tightlist
  \item
    Maximum value
  \item
    Minimum value
  \item
    Mean value
  \item
    Median
  \item
    Variance
  \item
    Standard deviation
  \item
    Correlation
  \item
    Round values
  \end{itemize}
\item
  Basic visualizations of the data

  \begin{itemize}
  \tightlist
  \item
    Plotting
  \item
    Histograms
  \end{itemize}
\item
  Basic statistics

  \begin{itemize}
  \tightlist
  \item
    linear model
  \item
    summary of the linear model
  \end{itemize}
\item
  Using Libraries
\item
  File path
\item
  Working directory
\end{itemize}

\hypertarget{final-words}{%
\chapter{Final Words}\label{final-words}}

We have finished a nice book.

\bibliography{book.bib,packages.bib}

\end{document}
{ "alphanum_fraction": 0.7494554626, "avg_line_length": 27.8690095847, "ext": "tex", "hexsha": "53b4d1d6df94f69eede880a8c095f59f3627f455", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "9f281bc57282a0d90c33e79c51537876dae3e76e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "oldiya/jekyll.olalladiaz.net", "max_forks_repo_path": "classes/datasci-basic/book/bookdown-demo.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "9f281bc57282a0d90c33e79c51537876dae3e76e", "max_issues_repo_issues_event_max_datetime": "2021-05-02T18:31:48.000Z", "max_issues_repo_issues_event_min_datetime": "2018-08-15T12:16:53.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "oldiya/jekyll.olalladiaz.net", "max_issues_repo_path": "classes/datasci-basic/book/bookdown-demo.tex", "max_line_length": 394, "max_stars_count": null, "max_stars_repo_head_hexsha": "9f281bc57282a0d90c33e79c51537876dae3e76e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "oldiya/jekyll.olalladiaz.net", "max_stars_repo_path": "classes/datasci-basic/book/bookdown-demo.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5016, "size": 17446 }
\chapter{TUTORIALS ON OBJECT EDITING} \section{Object Editing a Six-Sided Polyhedron} This tutorial illustrates the application of the {\bf OBJ EDIT} state to the ``arb8'' primitive. The editing of the arb8 illustrates the absolute movement of the points. \begin{figure} \centering \includegraphics{eo-start.ps} \caption{``arb8'' Object Edit; Top View.} \label{eo-start} \end{figure} \noindent {\tt mged> {\em e arb8}\\ vectorized in 0 seconds\\ mged> {\em size 8}\\ mged>\\ {\em Select the {\bf BUTTON MENU} if not already displayed.} \\ {\em Select the {\bf Object Illum} menu entry.} \\ {\em Move the mouse away from the menu area and select twice.} \\ } These operations select an object for editing. Control is passed through the {\bf OBJ PICK} and {\bf OBJ PATH} states to the {\bf OBJ EDIT} state. The display should look similar to Figure \ref{eo-start}. \subsection{Scale Operation} \begin{figure} \centering \includegraphics{eo-scale.ps} \caption{``arb8'' Object Edit; Scaled by 0.5.} \label{eo-scale} \end{figure} \noindent {\tt mged> {\em scale 0.5}\\ mged>\\ } As always, the selected operation operates with respect to the key vertex -- point 1 remains the same, and the distances from point 1 to the other points are multiplied by the scale factor. See Figure \ref{eo-scale}. What will happen if another {\bf scale 0.5} command is given? The {\bf scale} operator is an absolute operator. It sets the scale factor associated with a particular transformation matrix. It does not multiply the current transformation matrix scale factor by the new scale factor. \subsection{X, Y, and XY Move Operation} \begin{figure} \centering \includegraphics{eo-xyzmove.ps} \caption{``arb8'' Object Edit; Translated to (0.5, -2, 1.5).} \label{eo-xyzmove} \end{figure} \noindent {\tt mged> {\em scale 1}\\ mged> {\em translate .5 -2 1.5}\\ mged>\\ {\em Select the {\bf X move} menu entry.}\\ } The first two commands undo the effects of the previous scale operation, and translate the key point to (0.5, -2, 1.5). The coordinates of the other points are changed accordingly; preserving their distances relative to point 1 (see Figure \ref{eo-xyzmove}). The last operation above placed MGED into a state where the key point of the object will track the X component of successive selects, but not the Y component. Note that on some displays point 1 may not be directly visible. It is actually behind point 4. Watch the area listing the information concerning ``arb8'' as selections are made. \noindent {\em Do several selects while moving the cursor slowly in a circle.}\\ Observe that only the X axis information changes. Similarly, the {\bf Y move} tracks only changes in the Y axis, and {\bf XY move} tracks changes in both axes. \begin{figure} \centering \includegraphics{eo-xymove.ps} \caption{XY Move.} \label{eo-xymove} \end{figure} \noindent {\tt mged> }{\em press 45,45}\\ {\tt mged> }\\ {\em Do several more selects while moving the cursor slowly in a circle.}\\ This time, moving point 1 modifies the model x and y axes (observe the changes in the list of vertices). If {\bf Y move} is selected, all three sets of axes are modified. These operators work in screen space, not model space, so using them with an oblique view moves the model in more than one axis (Figure \ref{eo-xymove} for example).. 
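Recall from the Scale Operation discussion above that {\bf scale} is
absolute rather than cumulative. As a quick (hypothetical) check,
repeating the command leaves the primitive at half size rather than one
quarter:

\noindent
{\tt mged> {\em scale 0.5}\\
mged> {\em scale 0.5}\\
mged>\\
}

The second command simply sets the matrix scale factor to 0.5 again;
entering {\em scale 1} restores the original size.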
\subsection{Rotate Operation}

\begin{figure}
\centering
\includegraphics{eo-arbrot.ps}
\caption{``arb8'' Rotated by (30, 45, 60).}
\label{eo-arbrot}
\end{figure}

\noindent
{\tt mged> {\em press reset}\\
mged> {\em translate 0 0 0}\\
mged> {\em rotobj 30 45 60}\\
mged>\\
}

The primitive is rotated 60, 45, and 30 degrees about the z, y, and x
axes, in that order, as shown in Figure \ref{eo-arbrot}.
Note that the coordinates of the points are changed when scaling,
translation, and rotation are performed.

\noindent
{\tt mged> {\em press reset}\\
mged> {\em d arb8}\\
mged>\\
}

If an object which is being edited is deleted from view, MGED
transitions back into the {\bf VIEWING} state.

\section{Object Editing an Ellipsoid}

\begin{figure}
\centering
\includegraphics{eo-ellg.ps}
\caption{Object Edit; Ellipse viewed from 45,45 preset.}
\label{eo-ellg}
\end{figure}

\noindent
{\tt mged> {\em e ellg}\\
mged>\\
{\em Select the {\bf 45,45} button menu entry.}\\
{\em Select the {\bf Object Illum} button menu entry.}\\
{\em Move the cursor outside the menu area and select twice.}\\
}

Control is again passed from the {\bf OBJ PICK} state through the
{\bf OBJ PATH} state to the {\bf OBJ EDIT} state.
When editing a cylinder, ellipsoid, or torus, the coordinates of the
primitives are set relative to the center of the primitive, and are not
changed by any translation of the primitive.
Figure \ref{eo-ellg} represents the display.

\subsection{Scale Operation}

\begin{figure}
\centering
\includegraphics{eo-ellg2x.ps}
\caption{Object Edit; Ellipse scaled up by 2.}
\label{eo-ellg2x}
\end{figure}

\noindent
{\tt mged> {\em scale 2}\\
mged>\\
}

The magnitudes of the vectors from the center V to the points A, B, and
C are multiplied by the scale factor 2.
The location of the center V is unchanged.
See Figure \ref{eo-ellg2x}.

\subsection{Move Operations}

\begin{figure}
\centering
\includegraphics{eo-ellgxyz.ps}
\caption{Object Edit; Ellipse Translated.}
\label{eo-ellgxyz}
\end{figure}

\noindent
{\tt mged> {\em scale 1}\\
mged>\\
{\em Select the {\bf XY move} button menu entry.}\\
{\em Move the cursor to some location away from the menu area and select.}\\
}

As with the arb8 in the previous section, the ellipsoid is moved in the
model space plane parallel to the plane of the screen, so that the key
point (vertex V) is ``placed at'' the point corresponding to the cursor
location.
Note that there is no explicit control over the location of the point
with respect to screen Z (depth in the viewing cube); the selected point
has the same screen Z as the original point.
The screen should look similar to Figure \ref{eo-ellgxyz}.

\noindent
{\tt mged> {\em d ellg}\\
mged>\\
}

\section{Object Path and Object Edit}

This section illustrates the use of the {\bf OBJ PATH} state to select
the number of objects that are affected by one edit command.
In the previous sections the user was shown how to manipulate only one
primitive.
A group of primitives may also be edited as a single entity.
This section shows how an entire group may be edited without addressing
each individual primitive.

MGED generally has several ways to achieve a particular result.
In this section, keyboard commands are used instead of the control
buttons and the display menu.
The creation and saving of special primitives is illustrated.
In the database, the original primitives are centered around the origin.
Copies of these primitives will be made, translated away from the origin
and saved for future editing.
\subsection{Organize the Primitives and Groups} \begin{figure} \centering \includegraphics{eo-stacked.ps} \caption{Stacked Primitives.} \label{eo-stacked} \end{figure} \noindent {\tt mged> {\em cp arb8 arb8.s}\\ mged> {\em cp ellg ellg.s}\\ mged> {\em cp tgc tgc.s}\\ mged> {\em cp tor tor.s}\\ mged>\\ } Figure \ref{eo-stacked} shows the four primitives; arb8.s, ellg.s, tgc.s, and tor.s. One convention frequently used by experienced modelers is to tack an identifying suffix on the names of the various primitives and objects. Often, a ``.s'' suffix denotes a {\em solid}, a ``.r'' denotes a {\em region} and a ``.g'' denotes a {\em group} (note that this is a different ``.g'' from the ``.g'' suffix used with filenames.). \noindent {\tt mged> {\em size 16}\\ mged> {\em sed arb8.s}\\ mged> {\em press sxy}\\ mged> {\em p 3 -3 1}\\ mged> {\em press accept}\\ mged>\\ } Several things happened in the above sequence. The net result is that the solid arb8.s was ``unstacked'' using a Solid Edit so it is more visible. The same sequence of operations will be performed with the other objects to move them to other locations. \begin{figure} \centering \includegraphics{eo-spread.ps} \caption{Primitives After Translation.} \label{eo-spread} \end{figure} \noindent {\tt mged> {\em sed ellg.s}\\ mged> {\em press sxy}\\ mged> {\em p -3 3 1}\\ mged> {\em press accept}\\ mged> {\em sed tgc.s}\\ mged> {\em press sxy}\\ mged> {\em p -3 -3 1}\\ mged> {\em press accept}\\ mged> {\em sed tor.s}\\ mged> {\em press sxy}\\ mged> {\em p 3 3 1}\\ mged> {\em press accept}\\ mged>\\ } The screen should now look like Figure \ref{eo-spread}. The next step is to group the primitives \noindent {\tt mged> {\em g a.g tgc.s arb8.s}\\ mged> {\em g b.g ellg.s tor.s}\\ mged> {\em g c.g a.g b.g}\\ mged> {\em B c.g}\\ vectorized in 0 sec\\ mged> {\em tree c.g} \begin{verbatim} | c.g_____________| a.g_____________| tgc.s | arb8.s | b.g_____________| ellg.s | tor.s \end{verbatim} \noindent mged>\\ } The {\em group} operator ({\bf g}) generates an object, named by the first argument, which is the union of all objects named in succeeding arguments. Therefore, the object ``a.g'' is composed of the union of ``tgc.s'' and ``arb8.s''. Likewise, the object ``c.g'' is the union of ``a.g'' and ``b.g''. The next command in the above sequence is called the {\em blast} command. It is effectively a {\em zap} ({\bf Z}) followed by an {\em edit} ({\bf e}). The final command is the {\em tree} command. It is intended to give the user some idea of the hierarchical structure of an object. It presents a tree laid on its side. The root is at the left, and the leaves are at the right. A vertical bar denotes a connection at a given level, with the proviso that a vertical bar having a line of underscores coming in from the left represents the start of a particular subtree when read from top down (``ellg.s'' and ``arb8.s'' do not have a common parent). \begin{figure} \centering \includegraphics{eo-grpath.ps} \caption{Object Path With ``tor.s'' as Reference Solid.} \label{eo-grpath} \end{figure} \noindent {\tt mged> {\em press oill}\\ mged>\\ {\em Move the cursor up and down the screen until the primitive ``tor.s'' is illuminated, then select}\\ } Selecting the solid ``tor.s'' transitions MGED into the {\bf OBJ PATH} state, and establishes ``tor.s'' as the reference solid for any future editing operations. 
Note that the name ``tor.s'' is shown in the upper left corner of the display, and on the second status line at the bottom of the display (Figure \ref{eo-grpath}). The {\bf OBJ PATH} state has little meaning unless there is more than one path or group in the display. One of the following paths may be selected:
\begin{quote}
c.g/b.g/\_MATRIX\_/tor.s \\
c.g/\_MATRIX\_/b.g/tor.s \\
\_MATRIX\_/c.g/b.g/tor.s
\end{quote}
Although the torus primitive has been selected as the reference solid, the position of {\bf \_MATRIX\_} determines the extent of the effects of the edit. The first choice affects only the torus. The second choice affects everything under the group ``b.g'' (the torus and ellipsoid). The third choice affects all of the primitives. Remember, though, that in all cases, what is being edited is the Homogeneous Transformation Matrix (thought of as the arc connecting objects), not the underlying solid.
\subsection{Editing One Primitive}
\begin{figure} \centering \includegraphics{eo-gredit.ps} \caption{Object Edit With ``tor.s'' as Reference Solid.} \label{eo-gredit} \end{figure}
\noindent {\em Move the cursor up and down until {\tt /c.g/b.g/\_MATRIX\_/tor.s} appears in the bottom status line, then select.}\\
Figure \ref{eo-gredit} is the new display. Note that the torus is illuminated. The {\bf OBJ EDIT} state has been reached.
\begin{figure} \centering \includegraphics{eo-tor111.ps} \caption{Object Edit Affecting Torus Only.} \label{eo-tor111} \end{figure}
\noindent {\tt mged> {\em translate 1 1 1}\\ mged>\\ }
The key point of the torus is moved to 1, 1, 1. The other primitives are not moved. See Figure \ref{eo-tor111}.
\noindent {\tt mged> {\em press reject}\\ mged>\\ }
\subsection{Editing a Group of Two Primitives}
\begin{figure} \centering \includegraphics{eo-bgrp.ps} \caption{Torus and Ellipsoid Selected for Object Edit.} \label{eo-bgrp} \end{figure}
\noindent {\tt mged> {\em press oill}\\ mged>\\ {\em Move the cursor up and down the screen until the primitive ``tor.s'' is illuminated, then select.}\\ {\em Move the cursor up and down until {\tt /c.g/\_MATRIX\_/b.g/tor.s} appears in the bottom status line, then select.}\\ }
Control has again been passed to the {\bf OBJ EDIT} state. Notice that both the torus and ellipsoid are illuminated (Figure \ref{eo-bgrp}). Although only the parameters for the torus will be changed in the display, the ellipsoid in group ``b.g'' will be affected by the edit.
\begin{figure} \centering \includegraphics{eo-bgrp311.ps} \caption{Torus and Ellipsoid Translated by (3, 1, 1).} \label{eo-bgrp311} \end{figure}
\noindent {\tt mged> {\em translate 3 1 1}\\ mged>\\ }
The key point of the torus is moved to 3, 1, 1. The ellipsoid is moved by the same amount. See Figure \ref{eo-bgrp311}.
\noindent {\tt mged> {\em press reject}\\ mged>\\ }
\subsection{Editing Two Groups of Four Primitives}
\begin{figure} \centering \includegraphics{eo-cgrp.ps} \caption{All Primitives Selected for Object Edit.} \label{eo-cgrp} \end{figure}
\noindent {\tt mged> {\em press oill}\\ mged>\\ {\em Move the cursor up and down the screen until the primitive ``tor.s'' is illuminated, then select.}\\ {\em Move the cursor up and down until {\tt /\_MATRIX\_/c.g/b.g/tor.s} appears in the bottom status line, then select.}\\ }
Control has again been passed to the {\bf OBJ EDIT} state. Notice that all of the primitives are illuminated (Figure \ref{eo-cgrp}). Although only the parameters for the torus will be changed in the display, all of the primitives under the group ``c.g'' will be affected by the edit.
\begin{figure} \centering \includegraphics{eo-cgrp321.ps} \caption{All Primitives Translated by (3, 2, 1).} \label{eo-cgrp321} \end{figure} \noindent {\tt mged> {\em translate 3 2 1}\\ mged>\\ } The key point of the torus is moved to 3, 2, 1. All other primitives are moved by the same amount. See Figure \ref{eo-cgrp321}. \noindent {\tt mged> {\em press reject}\\ mged>\\ } Control has now returned to the VIEWING state.
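\noindent In each of the object edits above, the sequence ended with {\em press reject}, which discards the change. To keep an object edit, end the sequence with {\em press accept} instead, just as was done after the solid edits earlier in this section. For example, to keep the translation of the entire group:
\noindent {\tt mged> {\em press oill}\\ mged>\\ {\em Move the cursor up and down the screen until the primitive ``tor.s'' is illuminated, then select.}\\ {\em Move the cursor up and down until {\tt /\_MATRIX\_/c.g/b.g/tor.s} appears in the bottom status line, then select.}\\ }
\noindent {\tt mged> {\em translate 3 2 1}\\ mged> {\em press accept}\\ mged>\\ }
MGED then returns to the {\bf VIEWING} state with the edit applied.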
{ "alphanum_fraction": 0.7220520674, "avg_line_length": 29.2586558045, "ext": "tex", "hexsha": "03170506a7eadd238926864702954edf9a44d4c4", "lang": "TeX", "max_forks_count": 54, "max_forks_repo_forks_event_max_datetime": "2022-03-28T23:20:37.000Z", "max_forks_repo_forks_event_min_datetime": "2021-03-10T07:57:06.000Z", "max_forks_repo_head_hexsha": "34b72d3efd24ac2c84abbccf9452323231751cd1", "max_forks_repo_licenses": [ "BSD-4-Clause", "BSD-3-Clause" ], "max_forks_repo_name": "dservin/brlcad", "max_forks_repo_path": "doc/mged/h.tex", "max_issues_count": 13, "max_issues_repo_head_hexsha": "34b72d3efd24ac2c84abbccf9452323231751cd1", "max_issues_repo_issues_event_max_datetime": "2022-03-31T15:31:33.000Z", "max_issues_repo_issues_event_min_datetime": "2021-06-24T17:07:48.000Z", "max_issues_repo_licenses": [ "BSD-4-Clause", "BSD-3-Clause" ], "max_issues_repo_name": "dservin/brlcad", "max_issues_repo_path": "doc/mged/h.tex", "max_line_length": 78, "max_stars_count": 83, "max_stars_repo_head_hexsha": "34b72d3efd24ac2c84abbccf9452323231751cd1", "max_stars_repo_licenses": [ "BSD-4-Clause", "BSD-3-Clause" ], "max_stars_repo_name": "dservin/brlcad", "max_stars_repo_path": "doc/mged/h.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-31T16:33:46.000Z", "max_stars_repo_stars_event_min_datetime": "2021-03-10T05:54:52.000Z", "num_tokens": 4179, "size": 14366 }
This chapter describes the methodology followed in the development of this thesis. This chapter comprises the following sections: Section \ref{sec:ch3-tools} describes the computational tools utilized in this work, Section \ref{sec:ch3-math} presents Moltres' mathematical basis, Section \ref{sec:ch3-bench} outlines the organization of the OECD/NEA MHTGR-350 Benchmark, and Section \ref{sec:ch3-mhtgr} summarizes the characteristics of the MHTGR-350.
\section{Computational tools}
\label{sec:ch3-tools}
The following sections describe the computational tools used in the development of this thesis. The main computational tool is Moltres \cite{lindsay_introduction_2018}, described in Section \ref{sec:ch3-moltres}. However, a description of Moltres is incomplete if not accompanied by an introduction to the underlying framework MOOSE \cite{gaston_moose_2009}, presented in Section \ref{sec:ch3-moose}. Additionally, Chapter \ref{ch:neutronics} uses Serpent \cite{leppanen_development_2007}\cite{leppanen_calculation_2014} for obtaining group constants that serve as an input to Moltres. Section \ref{sec:ch3-serpent} summarizes Serpent's most important features.
\subsection{MOOSE}
\label{sec:ch3-moose}
% intro
MOOSE is a computational framework that supports engineering analysis applications. In a nuclear reactor, several partial differential equations describe the physical behavior. These equations are typically nonlinear, and they are often strongly coupled to each other. \gls{MOOSE} targets such systems and solves them in a fully coupled manner.
% more details about MOOSE
\gls{MOOSE} is an open-source FEM framework under an \gls{LGPL}. The framework itself relies on LibMesh \cite{kirk_libmesh_2006}, an LGPL finite element library, and PetSc, a \gls{BSD}-licensed toolkit for solving nonlinear equations \cite{balay_petsc_2016}. MOOSE applications define weak forms of the governing equations and modularize the physics expressions into ``Kernels.'' Kernels are C++ classes containing methods for computing the residual and Jacobian contributions of individual pieces of the governing equations. \gls{MOOSE} and LibMesh translate them into residual and Jacobian functions. These functions become inputs into PetSc solution routines.
\gls{MOOSE} builds on the mathematical structure of the \gls{JFNK} method \cite{knoll_jacobian-free_2004}\cite{gaston_moose_2009}. \gls{JFNK} methods are synergistic combinations of Newton-type methods, for the superlinearly convergent solution of nonlinear equations, and Krylov subspace methods, for solving the Newton correction equations. The Jacobian-vector product links the two methods. JFNK methods compute such products approximately without forming and storing the elements of the true Jacobian. The ability to perform a Newton iteration without forming the Jacobian gives JFNK methods potential for application across a wide range of problems governed by nonlinear partial differential equations.
All the software built on the MOOSE framework shares the same \gls{API}. The applications, by default, utilize monolithic and implicit methods \cite{lindsay_introduction_2018}. This feature facilitates relatively easy coupling between different phenomena and allows for great flexibility, even with a great variance in time scales \cite{novak_pronghorn_2018}. Additionally, the framework and its applications use \gls{MPI} for parallel communication and allow deployment on massively-parallel cluster-computing platforms.
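The JFNK approach can be illustrated with a short sketch that is independent of the MOOSE, LibMesh, and PetSc implementations. The Python fragment below approximates the Jacobian-vector product with a first-order finite difference and hands it to a Krylov (GMRES) solver; the two-equation residual function, the perturbation size, and the fixed number of Newton iterations are arbitrary choices made only for illustration.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(residual, u, eps=1.0e-7):
    # One Newton correction, solved matrix-free with GMRES.
    r = residual(u)
    def jv(v):
        # Finite-difference Jacobian-vector product: J(u) v
        return (residual(u + eps * v) - r) / eps
    J = LinearOperator((u.size, u.size), matvec=jv)
    du, info = gmres(J, -r)
    return u + du

# Illustrative two-equation nonlinear system, F(u) = 0
def residual(u):
    return np.array([u[0]**2 + u[1] - 3.0,
                     u[0] + u[1]**2 - 5.0])

u = np.array([1.0, 1.0])
for _ in range(10):
    u = jfnk_step(residual, u)   # converges toward u = (1, 2)
\end{verbatim}
In MOOSE-based applications this machinery is supplied by PetSc and LibMesh rather than written by hand; the sketch only shows why no explicit Jacobian matrix ever needs to be assembled or stored.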
\subsection{Moltres}
\label{sec:ch3-moltres}
\textit{Moltres} is a MOOSE-based application initially designed for modeling fluid-fuelled \glspl{MSR}. As a MOOSE application, Moltres inherits all of the framework's attributes. Moltres is an open-source simulation tool that operates under an LGPL. It uses \textit{git} for version control, emphasizing its openness and promoting quality through peer review. Moltres' openness is an important feature and contrasts with previous multi-physics applications, which operated under restrictive licenses.
Moltres solves arbitrary-group neutron diffusion, delayed neutron precursor concentration, and temperature governing equations. It can solve the equations in a fully-coupled way or solve each system independently, allowing for great flexibility and making it applicable to a wide range of nuclear engineering problems.
Moltres is the primary tool in the development of this thesis. Its role is to simulate prismatic HTGRs. Chapters \ref{ch:neutronics} and \ref{ch:thermalfluids} compare the results calculated by Moltres and other software to validate Moltres' calculation scheme. This work also intends to identify Moltres' areas for improvement in prismatic HTGR simulations and to set a basis for future work.
\subsection{Serpent}
\label{sec:ch3-serpent}
The Serpent Monte Carlo code \cite{leppanen_development_2007} \cite{leppanen_serpent_2015} is a three-dimensional continuous-energy neutron transport application developed by the VTT Technical Research Centre of Finland, and it has been in public distribution since 2009. Monte Carlo neutron transport tools have several reactor physics applications related to criticality safety analyses, radiation shielding problems, detector modeling, and validation of deterministic solvers. The Monte Carlo method's main advantage is its capability to model geometry and interaction physics without significant approximations. The main disadvantage is that simulating complex systems is computing-intensive, restricting applications to some extent.
In general, Serpent serves two purposes: (1) reactor modeling and (2) group constant generation. In reactor modeling, the Monte Carlo simulation itself represents the solution to the full-scale problem. In group constant generation, the transport simulation produces input parameters for a deterministic solver. Based on a few energy groups, deterministic solvers allow for carrying out coupled full-core analyses.
In this work, Serpent produces group constants that serve as an input for Moltres, and it also solves for the neutron fluxes with high geometric fidelity and continuous-energy cross-sections. This last step provides the reference solutions for the validation of the Moltres calculation scheme. This work used Serpent 2.1.31 and the cross-section library JEFF3.1.2 for the calculations.
The reason for using Serpent to generate group constants is its ability to run explicit simulations of randomly located TRISO particles. Applying a simple volume homogenization proves inaccurate due to the resonance self-shielding effect of the kernel and coated layers \cite{strydom_results_2015}. Although the particles' explicit modeling is time-consuming, costly, and impractical for most applications, it is necessary.
\section{Mathematical basis}
\label{sec:ch3-math}
The last section introduced Moltres as the primary computational tool utilized in this thesis.
This section presents this thesis' mathematical basis: the equations that describe the neutronics and thermal-fluids of prismatic HTGRs in the steady-state limit. For a more detailed explanation, refer to Section \ref{appendix:equations}. Moltres and the MOOSE heat conduction module translate these equations into kernels.
\subsection{Diffusion and precursors equations}
% Neutronics
Moltres translates the multi-group diffusion equations into kernels; these equations describe the steady-state behavior of the neutron flux and determine the multiplication factor \cite{duderstadt_nuclear_1976}
\begin{align}
% diffusion-eigenvalue calculation
\nabla \cdot D_g \nabla \phi_g - \Sigma_g^r \phi_g & + \sum_{g' \ne g}^G \Sigma_{g'\rightarrow g}^s \phi_{g'} + \chi_g^t \sum_{g' = 1}^G \frac{1}{k_{eff}}\nu \Sigma_{g'}^f \phi_{g'} = 0 \label{eq:diffusion-eig} \\
\intertext{where}
& D_g = \mbox{group $g$ diffusion coefficient } [cm] \notag \\
& \phi_g = \mbox{group $g$ neutron flux } [n \cdot cm^{-2} \cdot s^{-1}] \notag \\
& \Sigma_g^r = \mbox{group $g$ macroscopic removal cross-section } [cm^{-1}] \notag \\
& \Sigma_{g'\rightarrow g}^s = \mbox{group $g'$ to group $g$ macroscopic scattering cross-section } [cm^{-1}] \notag \\
& \chi_g^t = \mbox{group $g$ total fission spectrum } [-] \notag\\
& G = \mbox{number of discrete energy groups } [-] \notag \\
& k_{eff} = \mbox{multiplication factor } [-] \notag \\
& \nu = \mbox{number of neutrons produced per fission } [-] \notag \\
& \Sigma_g^f = \mbox{group $g$ macroscopic fission cross-section } [cm^{-1}]. \notag
\end{align}
% Boundary conditions
In different sections of this thesis, Moltres simulations apply several boundary conditions. The vacuum boundary condition \cite{duderstadt_nuclear_1976} states that no neutrons penetrate the boundary in the inward direction --- the incoming current density is equal to zero
\begin{align}
& J^-_g(r_s) = \frac{1}{4} \phi_g(r_s) + \frac{D}{2} \hat{n_s} \cdot \nabla \phi_g(r_s) = 0 \label{eq:vacuumbc} \\
\intertext{where}
& J^-_g(r) = \mbox{incoming current density } [n \cdot cm^{-2} \cdot s^{-1}] \notag \\
& r_s = \mbox{position of the boundary } [cm] \notag \\
& \hat{n_s} = \mbox{normal direction to the boundary } [-]. \notag
\end{align}
The reflective boundary condition states that no neutrons penetrate the boundary --- the current density is equal to zero
\begin{align}
& J_g(r_s) = 0 \label{eq:reflecbc} \\
\intertext{where}
& r_s = \mbox{position of the boundary } [cm] \notag \\
& J_g(r) = \mbox{current density } [n \cdot cm^{-2} \cdot s^{-1}]. \notag
\end{align}
The periodic boundary condition states that the flux and current density are equal on both sides of the boundary
\begin{align}
& \phi_g(r_{s^-}) = \phi_g(r_{s^+}) \label{eq:perbc1} \\
& J_g(r_{s^-}) = J_g(r_{s^+}) \label{eq:perbc2} \\
\intertext{where}
& J_g(r) = \mbox{current density } [n \cdot cm^{-2} \cdot s^{-1}] \notag \\
& r_{s^\pm} = \mbox{left and right side of $r_s$ boundary } [cm].
\notag
\end{align}
\subsection{Thermal-fluids}
\label{ch3:th}
The three-dimensional steady-state heat conduction equation solves for the temperature in the fuel, helium gap, moderator, coolant film, and reflector \cite{melese_thermal_1984}
\begin{align}
& k_i \nabla^2 T_i + Q_i = 0 \label{eq:tempsolid} \\
\intertext{where}
& i = \mbox{f (fuel), g (helium gap), m (moderator), cf (coolant film), r (reflector)} \notag \\
& k_i = \mbox{material $i$ thermal conductivity } [W \cdot cm^{-1} \cdot K^{-1}] \notag \\
& T_i = \mbox{material $i$ temperature } [^{\circ}C] \notag \\
& Q_i = \mbox{material $i$ volumetric heat source } [W \cdot cm^{-3}]. \notag
\end{align}
Equations \ref{eq:heatsource1} and \ref{eq:heatsource2} define the heat source in the stand-alone and coupled calculations, respectively
\begin{align}
& Q_f = Q_0 \label{eq:heatsource1} \\
& Q_f = \sum_{g = 1}^{G} \epsilon_g^f \Sigma_g^f \phi_g \label{eq:heatsource2} \\
& Q_g = Q_m = Q_{cf} = Q_r = 0 \label{eq:heatsource3} \\
\intertext{where}
& Q_i = \mbox{material $i$ volumetric heat source } [W \cdot cm^{-3}] \notag \\
& \epsilon_g^f = \mbox{energy released per fission } [J] \notag \\
& \Sigma_g^f = \mbox{group $g$ macroscopic fission cross-section } [cm^{-1}] \notag \\
& \phi_g = \mbox{group $g$ neutron flux } [n \cdot cm^{-2} \cdot s^{-1}]. \notag
\end{align}
% coolant
The governing equation of the coolant is the one-dimensional steady-state energy conservation equation \cite{white_viscous_2006}\cite{tak_practical_2012}
\begin{align}
& \rho_{c,i} u_i \frac{\partial}{\partial z} (c_{p,c} T_c) = q'''_{conv} \label{eq:tempcool} \\
& q'''_{conv} = h\frac{\varepsilon}{A} (T_i-T_c) \label{eq:convection} \\
\intertext{where}
& \rho_{c, i} = \mbox{inlet coolant density } [kg \cdot cm^{-3}] \notag \\
& u_i = \mbox{inlet coolant velocity } [cm \cdot s^{-1}] \notag \\
& c_{p,c} = \mbox{coolant specific heat capacity } [J \cdot kg^{-1} \cdot K^{-1}] \notag \\
& T_c = \mbox{coolant temperature } [^{\circ}C] \notag \\
& q'''_{conv} = \mbox{convective heat transfer } [W \cdot cm^{-3}] \notag \\
& A = \mbox{cross-sectional area } [cm^2] \notag \\
& \varepsilon = \mbox{heated perimeter of the coolant channel } [cm] \notag \\
& h = \mbox{heat transfer coefficient } [W \cdot cm^{-2} \cdot K^{-1}] \notag \\
& T_i = \mbox{solid temperature } [^{\circ}C]. \notag
\end{align}
Equation \ref{eq:film-conduc} calculates the film thermal conductivity $k_f$ \cite{melese_thermal_1984}
\begin{align}
& Nu = 0.023 Re^{0.8} Pr^{0.4} \label{eq:dittus} \\
& h = \frac{Nu \cdot k_c}{D_h} \\
& k_f = h R_{cf} \ln(R_{cf}/R_c) \label{eq:film-conduc} \\
\intertext{where}
& Nu = \mbox{Nusselt number } [-] \notag \\
& Re = \mbox{Reynolds number } [-] \notag \\
& Pr = \mbox{Prandtl number } [-] \notag \\
& h = \mbox{heat transfer coefficient } [W \cdot cm^{-2} \cdot K^{-1}] \notag \\
& D_h = \mbox{hydraulic diameter } [cm] \notag \\
& k_c = \mbox{coolant thermal conductivity } [W \cdot cm^{-1} \cdot K^{-1}] \notag \\
& k_f = \mbox{coolant film thermal conductivity } [W \cdot cm^{-1} \cdot K^{-1}] \notag \\
& R_{cf} = \mbox{coolant film radius } [cm] \notag \\
& R_c = \mbox{coolant channel radius } [cm]. \notag
\end{align}
\section{OECD/NEA MHTGR-350 Benchmark}
\label{sec:ch3-bench}
Chapters \ref{ch:neutronics} and \ref{ch:thermalfluids} conduct several exercises of the benchmark \cite{oecd_nea_benchmark_2017} using Moltres. The \gls{OECD}/\gls{NEA} defined such a benchmark using the \gls{MHTGR}-350 reactor \cite{silady_licensing_1988} as the reference design.
The scope of the benchmark is twofold: (1) establish a well-defined problem, based on a common given data set, to compare methods and tools in core simulation and thermal fluids analysis, and (2) test the depletion capabilities of various lattice physics tools available for prismatic HTGRs. The benchmark defines several phases and exercises, summarized in Table \ref{tab:benchmark}. Chapters \ref{ch:neutronics} and \ref{ch:thermalfluids} provide a more detailed description of Phase I Exercises 1, 2, and 3. \begin{table}[htbp!] \centering \caption{Description of the OECD/NEA MHTGR-350 Benchmark phases and exercises.} \begin{tabular}{lcl} \toprule Phase & Exercise & Description \\ \midrule \multirow{3}{*}{I} & 1 & Neutronics solution with fixed cross-sections \\ & 2 & Thermal fluids solution with given heat sources \\ & 3 & Coupled neutronics-thermal fluids steady state solution \\ \midrule \multirow{4}{*}{II} & 1 & Depressurized Conduction Cooldown without reactor trip \\ & 2 & Pressurized Conduction Cooldown with reactor trip \\ & 3 & Water ingress with reactor trip \\ & 4 & Power 100-80-100 load follow \\ \midrule III & - & Lattice Depletion Calculation \\ \bottomrule \end{tabular} \label{tab:benchmark} \end{table} \section{MHTGR-350 Reactor Description} \label{sec:ch3-mhtgr} This section describes the \gls{MHTGR}-350 reactor. The core consists of an array of hexagonal fuel elements in a cylindrical arrangement, pictured in Figure \ref{fig:layout}. Nineteen graphite replaceable reflector elements compose the inner reflector region. A ring of identically sized graphite replaceable reflector elements surrounds the fuel elements. Then, a region of permanent reflector elements surrounds the replaceable reflectors. The reactor vessel encases all the elements. Ten layers of fuel elements stacked on top of each other compose the 66 fuel columns that integrate the active core. Figure \ref{fig:layoutb} shows an axial view of the reactor. Table \ref{tab:maincharac} summarizes the reactor's main characteristics. \begin{figure}[htbp!] \centering \subfloat[Core radial layout. Image reproduced from \cite{oecd_nea_benchmark_2017}.]{ \includegraphics[width=0.45\textwidth]{figures/radial-layout.png} } \subfloat[Core axial layout. Image reproduced from \cite{oecd_nea_benchmark_2017}.\label{fig:layoutb}]{ \includegraphics[width=0.45\textwidth]{figures/axial-layout.png} } \hfill \caption{MHTGR-350 reactor layout.} \label{fig:layout} \end{figure} \begin{table}[htbp!] \centering \caption{MHTGR-350 Characteristics \cite{oecd_nea_benchmark_2017}.} \begin{tabular}{lc} \toprule Characteristics & Value \\ \midrule Installed Thermal Capacity & 350 MWth \\ Installed Electric Capacity & 165 MWe \\ Core inlet/outlet Temperature & 259/687 $^{\circ}$C \\ Power Density & 5.9 MW$ \cdot m^{-3}$ \\ Reactor Vessel Outside diam. & 6.8 m \\ Reactor Vessel Height & 22 m \\ Active core radius & 2.97 m \\ Active core height & 7.93 m \\ Top reflector height & 1.20 m \\ Bottom reflector height & 1.60 m \\ Number of fuel columns & 66 \\ Number of inner reflector columns & 19 \\ Number of outer reflector columns & 78 \\ \bottomrule \end{tabular} \label{tab:maincharac} \end{table} The core has two types of fuel elements: a standard element and a reserve shutdown element that contains a channel for Reserve Shutdown Control, Figure \ref{fig:fuelassembly}. Table \ref{tab:element-characteristics} specifies the details of the MHTGR-350 fuel elements. 
Twelve columns in the core contain Reserve Shutdown Control channels for borated graphite pellets. Hoppers above the core house the pellets, and if the control rods become inoperable, the pellets drop into the channels \cite{oecd_nea_benchmark_2017}.
\begin{figure}[htbp!]
\centering
\subfloat[Standard fuel assembly. Image reproduced from \cite{tak_numerical_2008}.]{ \includegraphics[width=0.45\textwidth]{figures/fuel-assembly} }
\subfloat[RSC fuel assembly. Image reproduced from \cite{tak_practical_2012}.]{ \includegraphics[width=0.45\textwidth]{figures/fuel-assembly-rsc} }
\hfill
\caption{MHTGR-350 fuel assembly layout.}
\label{fig:fuelassembly}
\end{figure}
\begin{table}[htbp!]
\centering
\caption{MHTGR-350 fuel element characteristics \cite{oecd_nea_benchmark_2017}.}
\label{tab:element-characteristics}
\begin{tabular}{@{}l S[table-format=2.2] c}
\toprule
\multicolumn{1}{c}{Shared characteristics} & \multicolumn{1}{c@{}}{Value} & \multicolumn{1}{c@{}}{Units} \\
\midrule
Block pitch (flat-to-flat) & 36 & cm \\
Fuel length & 79.3 & cm \\
Fuel handling diameter & 3.5 & cm \\
Fuel handling length & 26.4 & cm \\
RSC hole diameter & 9.525 & cm \\
RSC center to assembly center & 9.756 & cm \\
Fuel/coolant pitch & 1.879 & cm \\
Fuel hole radius & 0.635 & cm \\
Compacts per fuel hole & \multicolumn{1}{c@{}}{15} & - \\
Large coolant hole radius & 0.794 & cm \\
Small coolant hole radius & 0.635 & cm \\
Burnable poison hole radius & 0.635 & cm \\
Block graphite density & 1.85 & g $\cdot cm^{-3}$ \\
\midrule
\multicolumn{1}{c}{Standard element} & & \\
\midrule
Number of large coolant holes & \multicolumn{1}{c@{}}{120} & - \\
Number of small coolant holes & \multicolumn{1}{c@{}}{6} & - \\
Number of fuel holes & \multicolumn{1}{c@{}}{210} & - \\
\midrule
\multicolumn{1}{c}{RSC element} & & \\
\midrule
Number of large coolant holes & \multicolumn{1}{c@{}}{88} & - \\
Number of small coolant holes & \multicolumn{1}{c@{}}{7} & - \\
Number of fuel holes & \multicolumn{1}{c@{}}{186} & - \\
\bottomrule
\end{tabular}
\end{table}
% Fuel assemblies and TRISO particles
The fuel elements contain blind holes for fuel compacts and full-length channels for helium coolant flow. Table \ref{tab:compact} specifies the details of the TRISO particle and fuel compact designs of the \gls{MHTGR}-350.
\begin{table}[htbp!]
\centering
\caption{TRISO and fuel compact characteristics \cite{oecd_nea_benchmark_2017}.}
\label{tab:compact}
\begin{tabular}{@{}l S[table-format=2.3] c}
\toprule
\multicolumn{1}{c}{Characteristic} & \multicolumn{1}{c@{}}{Value} & \multicolumn{1}{c@{}}{Units} \\
\midrule
Fuel & UC$_{0.5}$O$_{1.5}$ & - \\
Enrichment (average) & 15.5 & wt\% \\
Packing fraction (average) & 0.35 & - \\
Kernel radius & 0.02125 & cm \\
Buffer radius & 0.03125 & cm \\
IPyC radius & 0.03475 & cm \\
SiC radius & 0.03825 & cm \\
OPyC radius & 0.04225 & cm \\
Compact radius & 0.6225 & cm \\
Compact gap radius & 0.6350 & cm \\
Compact length & 4.9280 & cm \\
Kernel density & 10.50 & g $\cdot cm^{-3}$ \\
Buffer density & 1.00 & g $\cdot cm^{-3}$ \\
IPyC density & 1.90 & g $\cdot cm^{-3}$ \\
SiC density & 3.20 & g $\cdot cm^{-3}$ \\
OPyC density & 1.90 & g $\cdot cm^{-3}$ \\
Compact matrix density & 1.74 & g $\cdot cm^{-3}$ \\
\bottomrule
\end{tabular}
\end{table}
% Reactivity control
A combination of lumped burnable poison and control rods manages the core reactivity. The lumped burnable poison consists of \gls{B4C} granules dispersed in graphite compacts. The current design uses six lumped burnable poison rods per element.
Table \ref{tab:LBP} displays the characteristics of the lumped burnable poison compacts. The reactor has 30 control rods. Six are for reactor start-up and are in the inner reflector, while the remaining 24 are operating control rods and manage the reactivity during power operation and reactor trips. \begin{table}[htbp!] \centering \caption{Burnable poison compact characteristics \cite{oecd_nea_benchmark_2017}.} \label{tab:LBP} \begin{tabular}{@{}l S[table-format=1.4] c} \toprule \multicolumn{1}{c}{Characteristic} & \multicolumn{1}{c@{}}{Value} & \multicolumn{1}{c@{}}{Units} \\ \midrule Absorber & B$_{4}$C & - \\ Packing fraction & 0.109 & - \\ Kernel radius & 0.0100 & cm \\ Buffer radius & 0.0118 & cm \\ PyC radius & 0.0141 & cm \\ Compact radius & 0.5715 & cm \\ Compact gap radius & 0.6350 & cm \\ Rod length & 72.187 & cm \\ Kernel density & 2.47 & g $\cdot cm^{-3}$ \\ Buffer density & 1.00 & g $\cdot cm^{-3}$ \\ PyC density & 1.87 & g $\cdot cm^{-3}$ \\ Compact matrix density & 0.94 & g $\cdot cm^{-3}$ \\ \bottomrule \end{tabular} \end{table} \section{Organization of the Simulations} This thesis separates the description of Moltres simulations into Chapters \ref{ch:neutronics} and \ref{ch:thermalfluids}. Chapter \ref{ch:neutronics} focuses on stand-alone neutronics simulations in which Moltres solves equations \ref{eq:diffusion-eig} and \ref{eq:heatsource2} to calculate the multiplication factor, the neutron flux, and the radial power distribution. Chapter \ref{ch:neutronics} discusses two main validation exercises. The first exercise compares a Serpent and Moltres model of the MHTGR-350. Serpent obtains the multiplication factor, the neutron flux distribution, and the radial power distribution in the reactor core. Serpent also generates the group constants of the MHTGR-350 that serve as an input for Moltres simulations. The second exercise follows Phase I Exercise 1 of the OECD/NEA MHTGR-350 Benchmark using Moltres. The benchmark exercise specifies the group constants that Moltres uses in the simulations. Chapter \ref{ch:thermalfluids} focuses on stand-alone thermal-fluids and coupled neutronics/thermal-fluid simulations. Chapter \ref{ch:thermalfluids} discusses several exercises to validate the thermal-fluids model, in which Moltres solves equations \ref{eq:tempsolid}-\ref{eq:film-conduc} to calculate the solid and coolant temperatures. The first validation exercises study the accuracy of the thermal-fluids model in several configurations, including an equivalent cylindrical model of a unit cell, a three-dimensional unit cell, and a three-dimensional fuel column. The next exercise follows Phase I Exercise 2 of the OECD/NEA MHTGR-350 Benchmark using Moltres. Chapter \ref{ch:thermalfluids} also describes a coupling exercise that follows Phase I Exercise 3 of the OECD/NEA MHTGR-350 Benchmark using Moltres.
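Since the thermal-fluids exercises in Chapter \ref{ch:thermalfluids} rely on Equations \ref{eq:dittus}--\ref{eq:film-conduc}, the short Python sketch below makes that evaluation explicit. It is an illustration only: the Reynolds number, Prandtl number, and helium conductivity are assumed values, and only the coolant channel radius is taken from Table \ref{tab:element-characteristics}.
\begin{verbatim}
import math

def film_conductivity(Re, Pr, k_c, D_h, R_cf, R_c):
    # Dittus-Boelter correlation for the Nusselt number
    Nu = 0.023 * Re**0.8 * Pr**0.4
    # heat transfer coefficient, h = Nu * k_c / D_h
    h = Nu * k_c / D_h
    # film thermal conductivity, k_f = h * R_cf * ln(R_cf / R_c)
    k_f = h * R_cf * math.log(R_cf / R_c)
    return Nu, h, k_f

# Illustrative inputs: Re, Pr [-], k_c [W/(cm K)], lengths [cm]
Nu, h, k_f = film_conductivity(Re=5.0e4, Pr=0.66, k_c=3.0e-3,
                               D_h=1.588, R_cf=0.80, R_c=0.794)
\end{verbatim}
The film conductivity obtained in this way is the conductivity assigned to the coolant-film region in Equation \ref{eq:tempsolid}.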
{ "alphanum_fraction": 0.6759065145, "avg_line_length": 61.4749403341, "ext": "tex", "hexsha": "dd4e6c7f572612834bd80acd0534740d7ffdbbe7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "87bc9d4f93d083b08d82c8576b9491f85d0a6457", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "robfairh/ms-thesis", "max_forks_repo_path": "methodology.tex", "max_issues_count": 22, "max_issues_repo_head_hexsha": "87bc9d4f93d083b08d82c8576b9491f85d0a6457", "max_issues_repo_issues_event_max_datetime": "2020-12-02T22:37:02.000Z", "max_issues_repo_issues_event_min_datetime": "2020-07-10T18:40:23.000Z", "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "robfairh/ms-thesis", "max_issues_repo_path": "methodology.tex", "max_line_length": 368, "max_stars_count": 1, "max_stars_repo_head_hexsha": "87bc9d4f93d083b08d82c8576b9491f85d0a6457", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "robfairh/ms-thesis", "max_stars_repo_path": "methodology.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-27T15:39:36.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-27T15:39:36.000Z", "num_tokens": 6937, "size": 25758 }
% !TEX root = ../bigmpi.tex
\section{Related Work}
As noted in \S\ref{sec:intro} and \S\ref{sec:mpi4}, the MPI Forum has made efforts to address count-safety issues in the MPI standard. Both MPICH and OpenMPI have made significant strides toward count-safety at the implementation level. MPICH currently passes all of the large-count tests in its own test suite, although these tests may not exercise all possible code paths. We are not aware of other efforts to implement a high-level library on top of MPI-3 that supports large-count usage in the manner that BigMPI does.
\subsection{OpenSHMEM}
OpenSHMEM 1.0~\cite{kuehn2012openshmem} conscientiously uses \texttt{size\_t} for counts and \texttt{ptrdiff\_t} for offsets throughout and hence is a count-safe API. Since numerous implementations of OpenSHMEM exist, we cannot evaluate the count-safety of all of them. When a count-safe API such as DMAPP~\cite{DMAPP} is used, however, count-safety is more likely than if the implementation is required to map from 64-bit counts to 32-bit counts internally.
\subsection{GASNet}
GASNet uses \texttt{size\_t} and is thus count-safe. We have not attempted to evaluate the count-safety of GASNet implementations, since there are numerous conduits, each of which might have large-count issues due to platform-specific low-level APIs and bugs in system software.
\subsection{GA/ARMCI}
Both the Global Arrays~\cite{nieplocha:94} and ARMCI~\cite{nieplocha:99} interfaces use native integer types in both C and Fortran to represent element counts; and in the case of ARMCI Put and Get, the count is in terms of bytes, not elements. Thus, both models have the same (or worse) large-count issues as MPI-3.
{ "alphanum_fraction": 0.7882697947, "avg_line_length": 47.3611111111, "ext": "tex", "hexsha": "110e0268f545d8580890a0ce751ee1b80846b1eb", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2022-02-05T13:55:08.000Z", "max_forks_repo_forks_event_min_datetime": "2015-08-12T17:20:24.000Z", "max_forks_repo_head_hexsha": "8d988e7f1242e50fa922f057378379444caa8310", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "jeffhammond/BigMPI-paper", "max_forks_repo_path": "text/related.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8d988e7f1242e50fa922f057378379444caa8310", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "jeffhammond/BigMPI-paper", "max_issues_repo_path": "text/related.tex", "max_line_length": 93, "max_stars_count": 2, "max_stars_repo_head_hexsha": "8d988e7f1242e50fa922f057378379444caa8310", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "jeffhammond/BigMPI-paper", "max_stars_repo_path": "text/related.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-06T16:59:20.000Z", "max_stars_repo_stars_event_min_datetime": "2020-07-14T09:16:06.000Z", "num_tokens": 438, "size": 1705 }
\chapter{Methodology} \label{ch:methodology} \begin{itemize} \item Expound \#4 (\emph{What is your idea to address this gap?}), \#5 (\emph{Why do you think your idea will work?}), and \#10 (\emph{Introduce the idea on how to solve this gap}). \item Propose a solution. Use math models, diagrams, and algorithms to formalize your method. \item Cite related literature of the building blocks used in the proposed method. \item Justify every decision in your proposed method (\eg we used batch normalization to stabilize the training) \end{itemize} % Extracted from the CVPR template \section{Style and Formatting} \label{sec:style-formatting} \subsection{Mathematics} Please number all of your sections and displayed equations as in these examples: \begin{equation} E = m\cdot c^2 \label{eq:important} \end{equation} and \begin{equation} v = a\cdot t. \label{eq:also-important} \end{equation} It is important for readers to be able to refer to any particular equation. Just because you did not refer to it in the text does not mean some future reader might not need to refer to it. It is cumbersome to have to use circumlocutions like ``the equation second from the top of page 3 column 1''. (Note that the ruler will not be present in the final copy, so is not an alternative to equation numbers). All authors will benefit from reading Mermin's description of how to write mathematics: \url{http://www.pamitc.org/documents/mermin.pdf}. \begin{figure}[t] \centering \fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}} %\includegraphics[width=0.8\linewidth]{egfigure.eps} \caption[Example of a shortened caption.]{Example of a long caption title to demonstrate how to use the optional caption parameter. It is set in Roman so that mathematics (always set in Roman: $B \sin A = A \sin B$) may be included without an ugly clash.} \label{fig:onecol} \end{figure} \subsection{Footnotes} Please use footnotes\footnote{This is what a footnote looks like. It often distracts the reader from the main flow of the argument.} sparingly. Indeed, try to avoid footnotes altogether and include necessary peripheral observations in the text (within parentheses, if you prefer, as in this sentence). If you wish to use a footnote, place it at the bottom of the page on which it is referenced. Use Times 10-point type, single-spaced. \subsection{Cross-references} For the benefit of author(s) and readers, please use the {\small\begin{verbatim} \Cref{...} \end{verbatim}} command for cross-referencing to figures, tables, equations, or sections. This will automatically insert the appropriate label alongside the cross-reference as in this example: \begin{quotation} To see how our method outperforms previous work, please see \Cref{fig:onecol} and \Cref{tab:example}. It is also possible to refer to multiple targets as once, \eg~to \Cref{fig:onecol,fig:short-a}. You may also return to \Cref{sec:style-formatting} or look at \Cref{eq:also-important}. \end{quotation} If you wish to abbreviate the label, you can use the {\small\begin{verbatim} \cref{...} \end{verbatim}} command. Here is an example: \begin{quotation} \cref{fig:onecol} is also quite important. \end{quotation} %------------------------------------------------------------------------- \subsection{References} List and number all bibliographical references in 12-point Times, single-spaced, at the end of your manuscript. To make a reference in the text, use the {\small\begin{verbatim} \autocite{...} \end{verbatim}} command instead of plain \verb`\cite{}`, for example~\autocite{Authors14}. 
Where appropriate, include page numbers and the name(s) of editors of referenced books. When you cite multiple papers at once, please make sure that you cite them in numerical order like this \autocite{Alpher02,Alpher03,Alpher05,Authors14b,Authors14}. If you use the template as advised, this will be taken care of automatically. \begin{table} \centering \caption[Results]{Results. Ours is better. Note that table captions should be placed above, in contrast to figure captions which are positioned below.} \begin{tabular}{@{}lc@{}} \toprule Method & Frobnability \\ \midrule Theirs & Frumpy \\ Yours & Frobbly \\ Ours & Makes one's heart Frob\\ \bottomrule \end{tabular} \label{tab:example} \end{table} %------------------------------------------------------------------------- \subsection{Illustrations, graphs, and photographs} All graphics should be centered. In \LaTeX, avoid using the \texttt{center} environment for this purpose, as this adds potentially unwanted whitespace. Instead use {\small\begin{verbatim} \centering \end{verbatim}} at the beginning of your figure. Please ensure that any point you wish to make is resolvable in a printed copy of the paper. Resize fonts in figures to match the font in the body text, and choose line widths that render effectively in print. Readers (and reviewers), even of an electronic copy, may choose to print your paper in order to read it. You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic. When placing figures in \LaTeX, it's almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below {\small\begin{verbatim} \usepackage{graphicx} ... \includegraphics[width=0.8\linewidth] {myfile.pdf} \end{verbatim} } %------------------------------------------------------------------------- \subsection{Color} If you use color in your plots, please keep in mind that a significant subset of reviewers and readers may have a color vision deficiency; red-green blindness is the most frequent kind. Hence avoid relying only on color as the discriminative feature in plots (such as red \vs green lines), but add a second discriminative feature to ease disambiguation. \subsection{Miscellaneous} \noindent Compare the following:\\ \begin{tabular}{ll} \verb'$conf_a$' & $conf_a$ \\ \verb'$\mathit{conf}_a$' & $\mathit{conf}_a$ \end{tabular}\\ See The \TeX book, p165. The space after \eg, meaning ``for example'', should not be a sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided \verb'\eg' macro takes care of this. When citing a multi-author paper, you may save space by using ``et alia'', shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word). If you use the \verb'\etal' macro provided, then you need not worry about double periods when used at the end of a sentence as in Alpher \etal. However, use it only when there are three or more authors. Thus, the following is correct: ``Frobnication has been trendy lately. It was introduced by Alpher~\autocite{Alpher02}, and subsequently developed by Alpher and Fotheringham-Smythe~\autocite{Alpher03}, and Alpher \etal~\autocite{Alpher04}.'' This is incorrect: ``... subsequently developed by Alpher \etal~\autocite{Alpher03} ...'' because reference~\autocite{Alpher03} has just two authors. 
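For example, the cross-referencing and citation macros described above can be combined in a single sentence, reusing labels and citation keys that already appear in this chapter:
{\small\begin{verbatim}
Frobnication was introduced by Alpher~\autocite{Alpher02}
and developed by Alpher \etal~\autocite{Alpher04};
see \Cref{fig:onecol,tab:example}, \eg the second row.
\end{verbatim}}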
\begin{figure*} \centering \begin{subfigure}{0.68\linewidth} \fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}} \caption{An example of a subfigure.} \label{fig:short-a} \end{subfigure} \hfill \begin{subfigure}{0.28\linewidth} \fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}} \caption{Another subfigure.} \label{fig:short-b} \end{subfigure} \caption{Example of a short caption title. It should be centered.} \label{fig:short} \end{figure*} \section{Chapter Summary} \textbf{Start Strong:} For every chapter (except possibly the Problem Statement), make an introduction (2 or 3 paragraphs) on what the chapter is all about. \textbf{Stay Strong:} Explain ideas in the simplest and most direct way that many people in your field can understand. If a certain topic is a bit specialized or hard to remember, make a concise introduction. Point the reader to a reference for further understanding. Each chapter should be complete or stand-alone and concise. \textbf{Finish Strong:} At the end, make a summary (2 or 3 paragraphs) to re-emphasize the points discussed in the chapter.
{ "alphanum_fraction": 0.7283152373, "avg_line_length": 46.3107344633, "ext": "tex", "hexsha": "d5c392ff657d64422047482e420f28ffd84a945d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "df0c02de230f0dfab1489ca10c5b4d824d5999a4", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "baudm/ngse-manuscript", "max_forks_repo_path": "chapters/4-methodology.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "df0c02de230f0dfab1489ca10c5b4d824d5999a4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "baudm/ngse-manuscript", "max_issues_repo_path": "chapters/4-methodology.tex", "max_line_length": 327, "max_stars_count": 7, "max_stars_repo_head_hexsha": "df0c02de230f0dfab1489ca10c5b4d824d5999a4", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "baudm/ngse-manuscript", "max_stars_repo_path": "chapters/4-methodology.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-23T05:49:31.000Z", "max_stars_repo_stars_event_min_datetime": "2022-02-09T05:20:37.000Z", "num_tokens": 2113, "size": 8197 }
\documentclass[12pt, letterpaper]{article} \usepackage[margin=1in]{geometry} \usepackage{graphicx} \usepackage{amsmath} \usepackage{amssymb} \usepackage{tikz} \usepackage{hyperref} \hypersetup{ colorlinks = true, linkcolor = black, urlcolor = blue } \setlength{\headheight}{15pt} \usepackage{fancyhdr} \pagestyle{fancy} \fancyhf{} \rhead{\leftmark} \lhead{GV263} \title{ {GV263 Public Policy Analysis}\\ {\large{Dr Daniel Berliner, Dr Charlotte Haberstroh, Professor Edward Page}}\\ {\large{Lecture Notes}} } \author{Cedric Tan} \date{September 2019} \begin{document} \maketitle {\small \noindent\textbf{Concept}\\ Concept. \hspace*{\fill}[1] \vspace{10pt} \noindent\textbf{Concept}\\ Concept.\hspace*{\fill}[2] \newpage \tableofcontents \newpage \section{Introduction to Public Policy and the Policy Cycle} \subsection{Structure} Policy process and policy-making \begin{itemize} \item Agenda-setting \item Policy formulation and decision-making styles \item Implementation \item Policy evaluation \item Public opinion \item Interest groups \item Political parties \item Bureaucracies \end{itemize} Contested issues in public policy \begin{itemize} \item Science and Public Policy \item Policy fiascos \item Nudging \item Austerity \item ICTs and Public Policy \item Corruption \item Transparency \item Developing countries \item Cross-national learning \end{itemize} \subsection{Public Policy} Some of the questions covered in the course: \begin{itemize} \item Why are we talking about developing policy for an issue now when it has been ignored for years? \item Who shapes the policy responses to problems? \item Why do we appear to make only small changes to existing policies? \item Why do many policies seem to achieve little? \item Do laws work better than persuasion? (Nudge) \item When do policy makers listen to evidence? (Thunberg) \end{itemize} There is a troubled relationship between academic study of policy and its practice. Enthusiasm for academic research has shaped policy practice from the 18th century Cameralism through Fabianism to Blairite "evidence based policy". However, results of academic impact has been modest, even where invited. \vspace{10pt} \noindent The What Works initiative aims to improve the way government and other organisations create, share and use high quality evidence for decision-making. Academic papers aim to inform public policy but the way in which they are written makes public policy implementation difficult. The evaluation aspect of academic papers can confuse the application of public policy. Hence we could ask: \textbf{what has the academic study of policy got to offer then?} \vspace{10pt} \noindent Is the academic idea useless? Pursuing sets of questions that have cultural impact and value, acquiring and developing insights, acquiring and developing skills and techniques that have practical pay-off, these are all aspects of the analysis idea. \vspace{10pt} \noindent The cultural importance has a few questions: \begin{itemize} \item Who sets the agenda in public policy? \item Does public opinion shape policy? \item Does bureaucracy constrain politics? (Brexit and lawmakers becoming Anti-Brexit constraining policy) \item Do parties make a difference? (Does it matter for a variety of public policy?) \end{itemize} There are practical payoffs: Lindblom and Cohen (2009): When one is making policy, you are attempting to solve a problem. You have two types of knowledge: 1. Ordinary Knowledge - the things we know about people e.g. 
people do not want to go to jail, people want to earn money i.e. things we do not need to research so heavily and 2. Professional Social Inquiry (PSI) - knowledge that requires more in depth analysis e.g. police presence and their impact on crime in a certain area. \vspace{10pt} \noindent From Lindblom and Cohen we can infer that a lot of Public Policy is created from Ordinary Knowledge and not Professional Social Inquiry. But we can see three methods of PSI impact public policy: \begin{itemize} \item Results of PSI can shape policy alongside Ordinary Knowledge: methods of combating corruption, the beneficiaries of "open government" \item Knowledge produced from PSI can become Ordinary Knowledge: incrementalism, bounded rationality e.g. cancer research, new public management and thermostat theory - public opinion reacts to public policy like a thermostate (up, down, up again style fluctuation) \item PSI can debunk Ordinary Knowledge: (possibly controversial) could lead to policy learning, checks on corruption and the role of the private sector \end{itemize} \vspace{10pt} \noindent There are also practical contributions: \begin{itemize} \item Forensic skills: where problems/issues arise (e.g. analysis of stages of policy making) \item Techniques of assessing and evaluating policies (e.g. Weiss on Evaluation) \item Exploring alternative ways of assessing policies (e.g. debate on "success" of policy) \item Skepticism: bias in academic literature is that things do not work as expected \end{itemize} A lot of debate is centered around this. \subsection{What is Public Policy?} There is no one definition but there are several famous ones: \begin{itemize} \item Anything that government chooses to do or not to do - Dye \item Set of interrelated decisions - Jenkins \end{itemize} There are many meanings of public policy: \begin{itemize} \item It is a field of activity \item A form of Intention and desire \item A bundle of measures/Specific proposals/Decisions by government e.g. tobacco \item It is strategy e.g. blame avoidance \item Formal authorisation \item Output and Outcome \item Cause/Effect assumptions \end{itemize} \subsection{Policy process and policy cycle} Analyst Perspective \begin{itemize} \item Problem formulation \item Selection of criteria \item Comparison of alternatives and selection \item Consideration of constraints \item Implementation and evaluation \end{itemize} Policy Process Perspective \begin{itemize} \item Agenda-setting \item Policy formulation \item Decision-making \item Policy implementation \item Policy evaluation \item Termination \end{itemize} Policy cycle and questions: \begin{itemize} \item Agenda Setting: problem recognition and issue selection \begin{itemize} \item What issues receive political attention? \item Why are issues framed in particular ways? \item What processes lead to mobilisation? \end{itemize} \item Policy formulation and decision making \begin{itemize} \item Who was involved in making the decision? \item What did they want \item How powerful were they? \item What resources and support could they mobilise? \item How was conflict handled and resolved? \end{itemize} \item Implementation \begin{itemize} \item Why did things not work out as intended? \item Were there unforeseen hitches and obstacles? \item Were street-level bureaucrats selective in how they implemented instructions? (Policy and cattling) \end{itemize} \item Evaluation \begin{itemize} \item How clear were the objectives behind the policy? \item How do you measure the impact of the policy? 
\item Was there a hidden set of motives behind the policy?
\end{itemize}
\end{itemize}
Yet is the policy cycle a useful device? Policy cannot be differentiated into discrete stages, and sequencing is empirically misleading.
\begin{itemize}
\item Implementation is policy formulation on the ground.
\item Interaction among multiple cycles and policies: there are trade-offs and conflicts between goals (Garbage Can Theory - haphazard development of policy)
\item No theory or cause
\item Implicit rational and top-down assumption
\item Operates at what level?
\end{itemize}
Simply put, it is:
\begin{itemize}
\item A useful friend
\item Offers a way to cut complexity and to ask distinct questions
\item Has theoretical openness
\item Has process-tracing (from start to finish)
\end{itemize}
\section{Agenda and Agenda Setting}
How issues come to the attention of policy makers matters a great deal, since policy makers hold the political power of agenda setters. What is the government agenda? Kingdon: \textit{The agenda, as I conceive of it, is the list of subjects or problems to which governmental officials, and people outside of government closely associated with those officials, are paying some serious attention at any given time. Out of the set of all conceivable subjects or problems they do in fact seriously attend to some rather than others. It narrows this set of conceivable subjects to the set that actually becomes the focus of attention.}
\subsection{Setters}
Are there agenda setters? Two views are adopted here:
\begin{enumerate}
\item Uncompetitive: a limited number of people can manipulate the process
\item Competitive: a process that is essentially uncontrollable but can be exploited (``ride the wave'').
\end{enumerate}
\subsection{Agendas as Manipulable}
Schattschneider:
\begin{itemize}
\item Watch the reaction of the crowd: do the crowd want to get involved in the conflict? Conflict is contagious; watch the crowd.
\item The scope and bias of conflict: how many people did this affect, or how was it contained? Understanding the depth/gravity.
\item Socialising and privatising conflict: wide versus narrow questions on the conflict issue.
\item Displacement: the 1932 New Deal coalition idea. It opened a new conflict within American society, appealing to blue-collar workers, ethnic minorities, religious minorities, and the South; it introduced a new cleavage, restructured the way politics worked, and set the agenda for the coming years.
\item Controlling the agenda: the Big Food example. Nutritional positioning (focus on what is in processed foods rather than processed foods in general), e.g. nutritional deficiencies, etc.
\item By privatising conflict (nutritional positioning), the issue is made much more technical, which takes it away from the general public, who might not understand it.
\item Big Business does not always win. Passive smoking: socialisation of conflict; the general public can get involved.
\item Non-decisions: these highlight issues that are impossible to study, because once you can see an issue it no longer counts as a non-decision. Gary, Indiana (Un-Politics of Air Pollution): pollution regulation versus pollution in the air; cities can stand out with high pollution but no pollution regulation. Some things are simply kept off the agenda.
\end{itemize}
\subsection{Agendas as Largely Uncontrollable}
Agendas can also be seen as largely uncontrollable, but exploitable. Downs: The Issue Attention Cycle (1972).
\begin{itemize}
\item The pre-problem stage: period before it was perceived to be a problem
\item Alarmed discovery and euphoric enthusiasm: a disaster or an event that alarms the public
\item Realising the cost of significant progress: hard to do or rectify the consequences of the disaster
\item Gradual decline of intense public interest: people gradually move away from the issue
\item The post-problem stage
\end{itemize}
Hogwood and Gunn argue that only certain types of issues go through this issue attention cycle:
\begin{itemize}
\item Issue has reached crisis proportions
\item Issue has reached ``particularity''
\item Issue is emotive
\item Issue has wide impact
\item Issue raises questions of power and legitimacy
\item Issue is fashionable
\end{itemize}
Garbage Can theory (March and Olsen): agendas as happenstance.
\begin{itemize}
\item Organised anarchy of decisions
\item Streams: problems, solutions, participants
\item Hard to predict that decision-making is well structured
\end{itemize}
Kingdon: agendas as competition. Streams:
\begin{itemize}
\item Problems: issues getting worse or better
\item Policies: policies being raised to solve issues, stagnant or otherwise
\item Politics: political sentiment at the time related to the issues and policies
\end{itemize}
Windows of opportunity exist where the three streams coincide: ``the time is right'', and the issue gets onto the agenda.

Limits of agenda setting as a metaphor for why things get discussed:
\begin{itemize}
\item Loads of agendas
\item When is something on the agenda?
\item Routine (budgets, EU fisheries): budgets are on the agenda because they are necessary on an annual basis
\item Party priorities (BoE independence)
\item Dominance of the executive in a system of party government
\end{itemize}
Cobb, Ross and Ross; Cobb and Elder:
\begin{itemize}
\item Outside initiative: things come to the agenda because of an external force
\item Mobilisation model: things come to the agenda because of public opinion
\item Inside access: government or bureaucrats put things on the agenda themselves
\end{itemize}
Have we got much further? Verdict:
\begin{itemize}
\item Not a predictive set of theories
\item Stating the obvious?
\item Highlights the haphazard nature of policy-maker attention
\item Highlights the garbage can
\item Highlights the importance of what is NOT discussed
\end{itemize}
Bureaucracy and the Policy Agenda: is this not obvious?
\begin{itemize}
\item Dominance of the Kingdon/Jones approach
\item Focus on describing and analysing the content of agendas, not whether items are new and, to a limited degree, how they come onto the agenda
\item The emphasis on function rather than content
\end{itemize}
We are not that much wiser about the basic dynamics of agenda setting than with Cobb et al.\ in the 1970s. See \url{www.comparativeagendas.net}.
\end{document}
{ "alphanum_fraction": 0.7928094031, "avg_line_length": 41.5878594249, "ext": "tex", "hexsha": "6da0e48233a58bff27cb2af263db712f6808daaf", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e7b134844ff1e589b891466ba450f6fd117c51d0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "CedricTan/UniversityNotes", "max_forks_repo_path": "1920MT/GV263/GV263.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "e7b134844ff1e589b891466ba450f6fd117c51d0", "max_issues_repo_issues_event_max_datetime": "2019-10-10T08:06:50.000Z", "max_issues_repo_issues_event_min_datetime": "2019-10-10T08:06:50.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "CedricTan/UniversityNotes", "max_issues_repo_path": "1920MT/GV263/GV263.tex", "max_line_length": 481, "max_stars_count": 1, "max_stars_repo_head_hexsha": "e7b134844ff1e589b891466ba450f6fd117c51d0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "CedricTan/UniversityNotes", "max_stars_repo_path": "1920MT/GV263/GV263.tex", "max_stars_repo_stars_event_max_datetime": "2019-04-24T05:03:18.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-24T05:03:18.000Z", "num_tokens": 3039, "size": 13017 }
%!TEX root = ms.tex \section{Performance of the selectors} \label{sec:simulation} \subsection{Some other selectors} In this section we use computer simulations to explore the behavior of different criteria when used for model selection under linear restrictions (variable selection and general linear restrictions). In addition to the criteria already discussed, we also consider two other well-known criteria: \begin{equation*} \text{BIC}(k) = n \log\left( \frac{\text{RSS}(k)}{n} \right) +\log(n) k %\label{eq:bic} \end{equation*} \citep{schwarz1978estimating}, and generalized cross-validation (GCV) \begin{equation} \text{GCV}(k) = \text{RSS}(k)\frac{n^2}{(n-k)^2}. \label{eq:gcv_subsetselection} \end{equation} BIC is a consistent criterion, in the sense that under some conditions, if the true model is among the candidate models, the probability of selecting the true model approaches one, as the sample size becomes infinite. GCV, derived by \citet{craven1978smoothing} in the context of smoothing, is equivalent to the mean square over degrees of freedom criterion proposed by \citet{tukey1967discussion}. By comparing the expressions of \eqref{eq:gcv_subsetselection} and \eqref{eq:sp_subsetselection}, GCV and S$_p$ only differ by a multiplicative factor of $1 + k / [(n-k)(n-1)]$. By analogy to the criteria discussed in Section \ref{sec:ic_fixedx} and \ref{sec:ic_randomx}, if we substitute $k=p-m$ into the expressions of BIC$(k)$ and GCV$(k)$, we obtain their corresponding expressions for general linear restrictions BIC$(R,r)$ and GCV$(R,r)$, respectively. We also consider two types of the cross-validation (CV): 10-fold CV (denoted as 10FCV) and leave-one-out CV (LOOCV). The LOOCV is based on the PRESS(k) statistic for the variable selection problem and PRESS(R,r) for the general restriction problem. \subsection{Random-X} We first consider the variable selection problem. The candidate models include the predictors of $X$ in a nested fashion, i.e. the candidate model of size $k$ includes the first $k$ columns of $X$ ($X_1,\cdots,X_k$). We describe the simulation settings reported here; description of and results from all other settings (243 configurations in total) can be found in the Online Supplemental Material\footnote{\url{https://github.com/sentian/RAICc}}, where we also provide the code to reproduce all of the simulation results in this paper. The sample sizes considered are $n\in\{40, 1000\}$, with number of predictors $p=n-1$, being close to the sample size. Predictors exhibit moderate correlation with each other of an AR(1) type (see the Online Supplemental material for further details), and the strength of the overall regression is characterized as either low (average $R^2$ on the set of true predictors roughly $20\%$) or high (average $R^2$ on the set of true predictors roughly $90\%$). The true model is either sparse (with six nonzero slopes) or dense (with $p$ nonzero slopes exhibiting diminishing strengths of coefficients; \citealp{Taddy2017}). The design matrix $X$ is random. In each replication, we generate a matrix $X$ such that the rows $x_i$ ($i=1,\cdots,n$) are drawn from a $p$-dimensional multivariate normal distribution with mean zero and covariance matrix $\Sigma_0$, and we draw the response $y$ from the conditional distribution of $y|X$ based on \eqref{eq:truemodel}. The entire process is repeated $1000$ times. We consider the following metrics to evaluate the fit. 
The values of each criterion over all of the simulation runs are plotted using side-by-side boxplots, with the average value over the simulation runs given below the boxplot for the corresponding criterion. \begin{itemize} \item Root mean squared error for random-X: \begin{equation*} \text{RMSER} = \sqrt{ E_{X^n} \lVert X^n\hat\beta-X^n\beta_0 \rVert_2^2 } = \sqrt{ (\hat{\beta}-\beta_0)^T \Sigma_0 (\hat{\beta}-\beta_0) }. \end{equation*} \item KL discrepancy for random-X \eqref{eq:KLR} in the log scale (denoted as logKLR). \item Size of the subset selected for variable selection problem, and number of restrictions in the selected model for general restriction problem. \end{itemize} The results are presented in Figures \ref{fig:subsetselection_randomx_hsnr_largep} and \ref{fig:subsetselection_randomx_lsnr_largep}. We find that RAICc provides the best predictive performance and the sparsest subset while rarely underfitting, compared to other information criteria designed for random-X, including RC$_p$ and S$_p$. The underperformance of RC$_p$ and S$_p$ is due to overfitting. S$_p$, as an estimate of the squared prediction error, is a product of an unlogged residual sum of squares and a penalty term that is increasing in $k$. This results in higher variability in S$_p$ for models that overfit, thereby potentially increasing the chances for spurious minima of these criteria at models that drastically overfit. In RAICc, on the other hand, the residual sum of squares is logged, thereby stabilizing the variance and avoiding the problem. RC$_p$ drastically overfits in all scenarios, reflecting the price of estimating $\sigma_0^2$ using the full model, especially when $p$ is close to $n$. S$_p$, on the other hand, estimates $\sigma_0^2$ using the candidate model, which mitigates the problem. Nevertheless, S$_p$ also can sometimes strongly overfit, but only when $n$ is small. Even for large $n$, S$_p$ selects slightly larger subsets on average than does RAICc. We also note that information criteria designed for random-X generally perform better than their counterparts for the fixed-X case. Both C$_p$ and FPE are largely outperformed by RC$_p$ and S$_p$, respectively. The advantage of RAICc over AICc is statistically significant in most scenarios, based on the Wilcoxon signed-rank test (the $p$-value for the test comparing the criteria for RAICc and AICc is given above the first two boxes in the first two columns of the table), but is not obvious in a practical sense. The only place that we see an advantage of AICc is for \{Dense model, $n=40$ and high signal\}. In this scenario, a model with many predictors with nonzero slopes can predict well, but that advantage disappears when there is a relatively weak signal, as in that situation the added noise from including predictors with small slopes cannot be overcome by a small error variance. We further note that choosing the appropriate family of information criteria (the KL-based AICc and RAICc) is more important than choosing the information criteria designed for the underlying designs of $X$. AICc, despite being designed for fixed-X, outperforms RC$_p$ and S$_p$, which are designed for random-X in all of the scenarios, in terms of both predictive performance and providing sparse results. The KL-based criteria have a clear advantage compared to the squared-error based criteria. Finally, we note some other findings that have been discussed previously in the literature. 
Despite its apparently strong penalty, BIC often chooses the model using all predictors when $n$ is close to $p$, as discussed in \citet{hurvich1989regression} and \citet{baraud2009gaussian}. We also see that even though GCV has a similar penalty term as S$_p$, it is more likely to suffer from overfitting. Unlike S$_p$, GCV can sometimes drastically overfit even when $n$ is large. The overfitting problem of GCV was also observed in the context of smoothing by \citet{hurvich1998smoothing}. We further find that 10-fold CV performs better than LOOCV, the latter of which sometimes drastically overfits. The tendency of LOOCV to strongly overfit was noted by \citet{scott1987biased} and \citet{hall1991local} in the context of smoothing. \citet{zhang2015cross} showed that when applied as selection rules, the larger validation set used by 10-fold CV can better distinguish the candidate models than can LOOCV, and this results in a model with smaller predictive error. RAICc performs better than 10-fold CV for small $n$, and performs similarly for large $n$. Computationally, 10-fold CV is ten times more expensive compared to RAICc, and since the split of validation samples is random, 10-fold CV can select different subsets if applied multiple times on the same dataset; that is, the result of 10-fold CV is not reproducible. The fact that LOOCV provides better estimate of the test error while being outperformed by 10-fold CV, further emphasizes the difference between the goal of providing the best estimate of the test error, and the goal of selecting the models with the best predictive performance. Clearly, KL-based criteria (AICc and RAICc) bridge the gap between the two goals more effectively than the squared-error based criteria (including cross-validation). In a related study by \citet{leeb2008evaluation}, the author found that S$_p$ and GCV outperform AICc under random-X, but those results are not directly comparable to ours. That paper did not consider the case where $p$ is extremely close to $n$, which is the scenario that most separates the performances of the different criteria. \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{figures/main/randomx_VS_hsnr.eps} \caption{Results of simulations for variable selection. Random-X, high signal and $\rho=0.5$. The Sparse and Dense models correspond to the VS-Ex2 and VS-Ex3 configurations (details are given in the Online Supplemental Material). The first column refers to RMSE, the second column corresponds to KL discrepancy (in log scale), and the third column gives the number of variables in the selected model with nonzero slopes, jittered horizontally and vertically, so the number of models with that number of nonzero slopes can be ascertained more easily. The mean values of the evaluation metrics for each criterion are presented at the bottom of each graph. The p-values of the Wilcoxon signed-rank test (paired and two-sided) for comparing RAICc and AICc are also presented.} \label{fig:subsetselection_randomx_hsnr_largep} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{figures/main/randomx_VS_lsnr.eps} \caption{Results of simulations for variable selection. Random-X, low signal and $\rho=0.5$. } \label{fig:subsetselection_randomx_lsnr_largep} \end{figure} We next consider the general restriction problem. We take $\beta_0 = [2,2,2,1,1,1]^T$, $n\in\{10,40\}$, moderate correlations between the predictors, and either high or low signal levels. The candidate models are constructed in the following way. 
We consider a set of restrictions: $\beta_1=\beta_4$, $\beta_1=2\beta_2$, $\beta_1=\beta_2$, $\beta_2=\beta_3$, $\beta_4=\beta_5$, $\beta_5=\beta_6$, where the last four restrictions hold for our choice of $\beta_0$. We then consider all of the possible subsets of the six restrictions, resulting in $64$ candidate models in total. The detailed configurations and complete results for this and other examples of the general restriction problem ($54$ scenarios in total) are given in the Online Supplemental Material. We see from Figure \ref{fig:generalrestriction_randomx} that differences in performance between the criteria are less dramatic. This is not surprising, since for these models the number of parameters never approaches the sample size. Still, RAICc is consistently the best selection rule for small sample size $n$, and it is second-best for large $n$, where it is outperformed by BIC (note that BIC has a strong tendency to select too few restrictions when the sample is small, which corresponds to overfitting in the variable selection context). We also note an advantage of RAICc over AICc, with AICc having a stronger tendency to select too few restrictions. \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{figures/main/randomx_GR-Ex1.eps} \caption{Results of simulations for general restrictions. Random-X, $\rho=0.5$. The configuration of the model is GR-Ex1 (details can be found in the Online Supplemental Material). Third column gives the number of restrictions in the selected models, jittered horizontally and vertically. } \label{fig:generalrestriction_randomx} \end{figure} Finally, we extend the general restriction example by including restrictions that force additional predictors to have zero coefficients (as in the variable selection problem). Besides the six restrictions specified, we also consider $\beta_i=0$ for $i=7,\cdots,p$ resulting in $p$ possible restrictions in total. The candidate models are formulated by excluding the restrictions in a nested fashion. We start from the model including all $p$ restrictions (corresponding to the null model), and the next model includes the $p-1$ restrictions except the first one $\beta_1=\beta_4$. The process is repeated until all restrictions are excluded (the full model including all predictors with arbitrary slopes) resulting in $p+1$ candidate models in total. The true coefficient vector is the same as that used in Figure \ref{fig:generalrestriction_randomx}, implying that the correct number of restrictions is $p-2$. We present the detailed configurations and complete results for this and other examples ($243$ scenarios in total) in the Online Supplemental Material. We see from Figure \ref{fig:subsetgeneral_randomx} that our findings for the variable selection problem also hold in this case. This is not surprising, since variable selection is just a special example of general restrictions, and in this scenario the set of candidate models includes ones where the number of parameters is close to the sample size. Thus, overall, RAICc and AICc are the best performers among all of the selectors. RAICc tends to provide the sparsest subset (or select more restrictions), while rarely underfitting, having a slight advantage over AICc in terms of predictive performance. \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{figures/main/randomx_GR-Ex4.eps} \caption{Results of simulations for general restrictions. Random-X, $\rho=0.5$. 
The configuration of the model is GR-Ex4 (details can be found in the Online Supplemental Material).} \label{fig:subsetgeneral_randomx} \end{figure} \iffalse \subsubsection{Without a true model} \begin{itemize} \item Omit: The configuration is the same as Sparse-Ex1, but with the 6-th predictor treated as missing for all the fitting procedures. \item Exponential: We take $\Sigma = I$. The responses are generated by $y_i=exp(4i/n) + \epsilon_i$, for $i=1,\cdots,n$, where $\epsilon_i$ are independent $\mathcal{N}(0, \sigma_0^2)$. \end{itemize} We consider a fixed trigonometric configuration of $X$ that is studied by \citet{Hurvich1991}, where $X$ is an $n$ by $p$ matrix with components defined by $$ x_{t, 2j-1} = \sin\left(\frac{2\pi j}{n}t\right),$$ and $$ x_{t,2j} = \cos\left(\frac{2\pi j}{n}t\right),$$ for $j=1,\cdots,p/2$ and $t=0,\cdots,n-1$. The responses are generated by $y_i=exp(4i/n) + \epsilon_i$, for $i=1,\cdots,n$, where $\epsilon_i$ are independent $\mathcal{N}(0, \sigma_0^2)$. \fi \iffalse \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{figures/main/randomx/subset_selection/smallp_hsnr.eps} \caption{Random-X, high signal and $\rho=0.5$. The mean values of the evaluation metrics for each criterion are presented at the bottom of each graph. The p-values of the Wilcoxon signed-rank test (paired and two-sided) for comparing RAICc and AICc are also presented.} \label{fig:subsetselection_randomx_hsnr_smallp} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{figures/main/randomx/subset_selection/smallp_lsnr.eps} \caption{Random-X, low signal and $\rho=0.5$. Other details are the same as in Figure \ref{fig:randomx_hsnr_smallp}.} \label{fig:subsetselection_randomx_lsnr_smallp} \end{figure} \fi \subsection{Fixed-X} The simulation structure for random-X can also be applied to fixed-X. We only generate the design matrix $X$ once and draw $1000$ replications of the response vector $y$ from the conditional distribution of $y|X$ based on \eqref{eq:truemodel}. The evaluation metrics for fixed-X are as follows. The complete simulation results are given in the Online Supplemental Material. \begin{itemize} \item Root mean squared error for fixed-X: \begin{equation*} \text{RMSEF} = \sqrt{ \frac{1}{n}\lVert X\hat\beta-X\beta_0 \rVert_2^2 }. \end{equation*} \item KL discrepancy for fixed-X \eqref{eq:KLF} in the log scale (denoted as logKLF). \item Size of the subset selected for variable selection problem, and number of restrictions in the selected model for general restriction problem. \end{itemize} The patterns for the fixed-X scenario are similar to those for random-X, as can be seen in Figures \ref{fig:subsetselection_fixedx_hsnr_largep}, \ref{fig:subsetselection_fixedx_lsnr_largep}, \ref{fig:generalrestriction_fixedx} and \ref{fig:subsetgeneral_fixedx}. In some ways this is surprising, in that the random-X versions of the criteria still seem to outperform the fixed-X versions, even though that is not the scenario for which they are designed. This seems to be related to the tendency for the fixed-X versions to overfit (or choose too few restrictions) compared to their random-X counterparts, which apparently works against the goal of selecting the candidate with best predictive performance. Otherwise, the KL-based criteria (RAICc and AICc) noticeably outperform the other criteria in general, especially $\mbox{C}_p$ and FPE, particularly for small samples. 
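To make the mechanics of the selection rules concrete, the following minimal sketch (ours, not the code used for the simulations reported here; the intercept handling and the counting of $k$ are simplifying assumptions) evaluates BIC and GCV, as defined at the start of this section, over the nested candidate subsets and returns the subset size that each criterion selects.
\begin{verbatim}
import numpy as np

def select_subset_size(y, X):
    """Toy illustration: BIC and GCV over nested candidate models that use
    the first k columns of X; returns the size minimizing each criterion."""
    n, p = X.shape
    rss = np.empty(p + 1)
    rss[0] = np.sum((y - y.mean()) ** 2)            # null model
    for k in range(1, p + 1):
        Xk = np.column_stack([np.ones(n), X[:, :k]])
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        rss[k] = np.sum((y - Xk @ beta) ** 2)
    ks = np.arange(p + 1)
    bic = n * np.log(rss / n) + np.log(n) * ks      # BIC(k)
    gcv = rss * n**2 / (n - ks) ** 2                # GCV(k)
    return {"BIC": int(np.argmin(bic)), "GCV": int(np.argmin(gcv))}
\end{verbatim}
The same loop structure applies to AICc, RAICc, C$_p$, S$_p$ and the cross-validation rules, with each criterion applying its own penalty or multiplier to $\text{RSS}(k)$.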
\begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{figures/main/fixedx_VS_hsnr.eps} \caption{Results of simulations for variable selection. Fixed-X, high signal. The configurations are the same as in Figure \ref{fig:subsetselection_randomx_hsnr_largep}.} \label{fig:subsetselection_fixedx_hsnr_largep} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{figures/main/fixedx_VS_lsnr.eps} \caption{Results of simulations for variable selection. Fixed-X, low signal. The configurations are the same as in Figure \ref{fig:subsetselection_randomx_lsnr_largep}.} \label{fig:subsetselection_fixedx_lsnr_largep} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{figures/main/fixedx_GR-Ex1.eps} \caption{Results of simulations for general restrictions. Fixed-X, GR-Ex1, $\rho=0.5$. The configurations are the same as in Figure \ref{fig:generalrestriction_randomx}.} \label{fig:generalrestriction_fixedx} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{figures/main/fixedx_GR-Ex4.eps} \caption{Results of simulations for general restrictions. Fixed-X, GR-Ex4, $\rho=0.5$. The configurations are the same as in Figure \ref{fig:subsetgeneral_randomx}.} \label{fig:subsetgeneral_fixedx} \end{figure}
{ "alphanum_fraction": 0.7825140673, "avg_line_length": 119.2278481013, "ext": "tex", "hexsha": "a3d4b555109fda1383f0d8fc532b91c1550648d7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0e3b620354733de1fe953a2a21559bcb20055b96", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sentian/RAICc", "max_forks_repo_path": "paper/simulation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0e3b620354733de1fe953a2a21559bcb20055b96", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sentian/RAICc", "max_issues_repo_path": "paper/simulation.tex", "max_line_length": 1869, "max_stars_count": null, "max_stars_repo_head_hexsha": "0e3b620354733de1fe953a2a21559bcb20055b96", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sentian/RAICc", "max_stars_repo_path": "paper/simulation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4746, "size": 18838 }
\begin{appendices}
\chapter{Block Diagrams}
\label{App:blockdiagrams}
\begin{figure}[ht!] \centering \includegraphics[width=6in]{block_diagrams/full.png} \caption{Full block diagram} \end{figure}
\begin{figure}[ht!] \centering \includegraphics[width=3in]{block_diagrams/video.png} \caption{Display block diagram} \end{figure}
\begin{figure}[ht!] \centering \includegraphics[width=3in]{block_diagrams/fpga.png} \caption{FPGA main components} \end{figure}
\chapter{Schematic Diagrams}
\label{App:schematicdiagrams}
\begin{figure}[ht!] \centering \includegraphics[width=6in]{circuit/main_fpga_page.png} \caption{FPGA and supporting components schematic diagram} \end{figure}
\begin{figure}[ht!] \centering \includegraphics[width=6in]{circuit/analog_page.png} \caption{Analog section schematic diagram} \end{figure}
\begin{figure}[ht!] \centering \includegraphics[width=6in]{circuit/memory_page.png} \caption{Memory schematic diagram} \end{figure}
\begin{figure}[ht!] \centering \includegraphics[width=6in]{circuit/switches_page.png} \caption{Switches schematic diagram} \end{figure}
\chapter{Board Layouts}
\label{App:boardlayouts}
\begin{figure}[ht!] \centering \includegraphics[width=6in]{circuit/board_page.png} \caption{Board layout (raw)} \end{figure}
\chapter{FPGA Block Diagram Files}
\label{App:fpgablockdiagrams}
\begin{figure}[ht!] \centering \includegraphics[width=6in]{fpga_logic/clocks.png} \caption{FPGA main clocks} \end{figure}
\begin{figure}[ht!] \centering \includegraphics[width=6in]{fpga_logic/proc_overview.png} \caption{FPGA NIOS II processor block} \end{figure}
\begin{figure}[ht!] \centering \includegraphics[width=6in]{fpga_logic/adc_overview.png} \caption{FPGA overview of scope sample handling} \end{figure}
\begin{figure}[ht!] \centering \includegraphics[width=6in]{fpga_logic/vram_disp_overview.png} \caption{FPGA overview of VRAM controller and display controller} \end{figure}
\begin{figure}[ht!] \centering \includegraphics[width=6in]{fpga_logic/keys_overview.png} \caption{FPGA overview of key and rotary encoder input} \end{figure}
\chapter{Memory Maps}
\label{App:memorymaps}
\begin{figure}[ht!] \centering \includegraphics[width=6in]{fpga_logic/address_map.png} \caption{NIOS II address memory map} \end{figure}
\chapter{Control Registers}
\label{App:ctrlregs}
\input{code_ctrlregs}
\chapter{VHDL Code}
\label{App:vhdlcode}
\section{VRAM State Machine Code}
\input{code_vstates}
\newpage
\section{Oscilloscope Trigger Code}
\input{code_triggering}
\chapter{NIOS II Code}
\label{App:niosiicode}
\section{Main Loop}
\input{code_mainloop}
\newpage
\section{Menu}
\input{code_menu}
\newpage
\section{Interface and Definitions}
\input{code_intf}
\newpage
\section{Key Processing}
\input{code_keys}
\newpage
\section{Display Processing}
\input{code_disp}
\newpage
\section{Trace Processing}
\input{code_trace}
\chapter{Test Code}
\label{App:testcode}
\section{Display Timing with ModelSim}
\input{code_disptiming}
\newpage
\section{VRAM Testing}
\input{code_testvram}
\newpage
\section{Display Testing}
\input{code_testdisp}
\end{appendices}
{ "alphanum_fraction": 0.748458693, "avg_line_length": 21.7718120805, "ext": "tex", "hexsha": "7b3fa27b5774851eab838f74815abd570cbecfe1", "lang": "TeX", "max_forks_count": 29, "max_forks_repo_forks_event_max_datetime": "2022-03-11T13:42:38.000Z", "max_forks_repo_forks_event_min_datetime": "2015-04-07T17:35:24.000Z", "max_forks_repo_head_hexsha": "35b932f87fee8d7c1d1176fb03251e897a8c22c3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "agural/FPGA-Oscilloscope", "max_forks_repo_path": "Documentation/Full/appendix.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "35b932f87fee8d7c1d1176fb03251e897a8c22c3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "agural/FPGA-Oscilloscope", "max_issues_repo_path": "Documentation/Full/appendix.tex", "max_line_length": 67, "max_stars_count": 64, "max_stars_repo_head_hexsha": "35b932f87fee8d7c1d1176fb03251e897a8c22c3", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "agural/FPGA-Oscilloscope", "max_stars_repo_path": "Documentation/Full/appendix.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-18T22:37:46.000Z", "max_stars_repo_stars_event_min_datetime": "2015-03-13T12:46:36.000Z", "num_tokens": 1003, "size": 3244 }
\chapter{Please nominate me for prizes} Please nominate me for this book and other texts at my site \url{https://mathematics21.org} for math prizes. In particular, this page gives instructions how to nominate me for Abel Prize: \url{https://mathematics21.org/nominate-me-for-abel-prize/} I also need a recommendation letter from you to be nominated for the Breakthrough Prize. Please write and send me ([email protected]) a letter. Nominate me for other prizes, too.
{ "alphanum_fraction": 0.7808510638, "avg_line_length": 29.375, "ext": "tex", "hexsha": "87e398ec75a4f35f9eaa254f3fa109e0017eb620", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "vporton/algebraic-general-topology", "max_forks_repo_path": "chap-prize.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_issues_repo_issues_event_max_datetime": "2020-03-13T02:05:02.000Z", "max_issues_repo_issues_event_min_datetime": "2019-12-30T07:16:23.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "vporton/algebraic-general-topology", "max_issues_repo_path": "chap-prize.tex", "max_line_length": 60, "max_stars_count": 9, "max_stars_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "vporton/algebraic-general-topology", "max_stars_repo_path": "chap-prize.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-03T04:56:16.000Z", "max_stars_repo_stars_event_min_datetime": "2018-06-26T00:14:44.000Z", "num_tokens": 115, "size": 470 }
\documentclass[11pt]{article} %============Macros==================% % you don't need to change anything. start editing from main body. \usepackage{amsmath,amsfonts,amssymb,amsthm, cancel, float, enumitem} \usepackage{qcircuit} \usepackage[margin=1in]{geometry} %--------------Cosmetic----------------% \usepackage{mathtools} \usepackage{hyperref} \usepackage{fullpage} \usepackage{microtype} \usepackage{xspace} \usepackage[svgnames]{xcolor} \usepackage[sc]{mathpazo} \usepackage{enumitem} \setlist[enumerate]{itemsep=1pt,topsep=2pt} \setlist[itemize]{itemsep=1pt,topsep=2pt} %--------------Header------------------% \def\course{CS 410/510 Introduction to Quantum Computing} \def\term{Portland State U, Spring 2017} \def\prof{Lecturer: Fang Song} \newcommand{\handout}[5]{ \renewcommand{\thepage}{\arabic{page}} \begin{center} \framebox{ \vbox{ \hbox to 5.78in { \hfill \large{\course} \hfill } \vspace{2mm} \hbox to 6in { {\Large \hfill #5 \hfill} } \vspace{2mm} \hbox to 6in { \term \hfill \emph{#2}} \hbox to 6in { {#3 \hfill \emph{#4}}} } } \end{center} \vspace*{4mm} } \newcommand{\lecture}[4]{\handout{#1}{#2}{#3}{#4}{{Lecture #1}}} \def\complex{\mathbb{C}} \def\real{\mathbb{R}} \def\natural{\mathbb{N}} \def\integer{\mathbb{Z}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\snorm}[1]{\lVert#1\rVert} \newcommand{\abs}[1]{\left\lvert #1 \right\rvert} \newcommand{\ceil}[1]{\left\lceil #1 \right\rceil} \newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor} \newcommand{\set}[1]{\left\{ #1 \right\}} \newcommand{\vecb}[1]{\boldsymbol{\vec{#1}}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\op}[2]{#1#2#1^\dag} \newcommand{\tr}{\operatorname{Tr}} \newcommand{\se}{\operatorname{H}} \newcommand{\vne}{\operatorname{S}} \newcommand{\e}{\operatorname{E}} \newcommand{\rank}{\operatorname{rank}} \renewcommand{\det}{\operatorname{Det}} \newcommand{\bra}[1]{\langle #1 \rvert} \newcommand{\ket}[1]{\lvert #1 \rangle} \newcommand{\bret}[2]{\langle{#1}|{#2}\rangle} \newcommand{\kret}[2]{\ket{#1}\bra{#2}} \newcommand{\kera}[1]{\kret{#1}{#1}} \newcommand{\Example}[1]{{\bf Example #1}:} \def\X{\mathcal{X}} \def\Y{\mathcal{Y}} \def\Z{\mathcal{Z}} \def\W{\mathcal{W}} \def\A{\mathcal{A}} \def\B{\mathcal{B}} \def\V{\mathcal{V}} \def\U{\mathcal{U}} \def\C{\mathcal{C}} \def\D{\mathcal{D}} \def\E{\mathcal{E}} \def\F{\mathcal{F}} \def\M{\mathcal{M}} \def\R{\mathcal{R}} \def\P{\mathcal{P}} \def\Q{\mathcal{Q}} \def\S{\mathcal{S}} \def\T{\mathcal{T}} \def\K{\mathcal{K}} \def\L{\mathcal{L}} \def\fill{ \hfill} %=============Main Doc=================% \begin{document} %-----Specs: change accordingly--------% \def\lecdate{May 16, 2017} % put lecture date here \def\scribe{Scribe: Enis K. Inan} % put your name here \def\lecnum{13} % change the lecture number \lecture{\lecnum}{\lecdate}{\prof}{\scribe}% \begin{center} {\textsc{Version: \today}} \end{center} \noindent Last lecture we introduced the density operator, which made it easier for us to describe general quantum states. We will briefly review it before proceeding any further. In the old notation, we only had pure states of the form $\ket{\psi} = \alpha \ket{0} + \beta \ket{1}$ where $\alpha, \beta \in \complex$ and $\abs{\alpha}^2 + \abs{\beta}^2 = 1$. 
With our new framework, a pure state is expressed as $\kera{\psi}$ where for our example,
\begin{equation*}
\kera{\psi} = \left(\begin{array}{cc} \alpha\conj{\alpha} & \alpha\conj{\beta} \\ \conj{\alpha}\beta & \beta\conj{\beta} \end{array} \right) = \left(\begin{array}{cc} \abs{\alpha}^2 & \alpha\conj{\beta} \\ \conj{\alpha}\beta & \abs{\beta}^2 \end{array} \right)
\end{equation*}
The expressive power of this notation is apparent when we have general quantum states. Specifically, imagine we have a register $X$ that could be in one of the quantum states $\ket{\psi_1}, \dots, \ket{\psi_k}$ with a probability distribution $p_1, \dots, p_k$ where $p_i$ is the probability that $X$ is in state $\ket{\psi_i}$. With our new notation, we expressed our ``knowledge'' of $X$ as a density matrix $\rho$ where
\begin{equation*}
\rho = \sum_{i} p_i \kera{\psi_i}
\end{equation*}
Intuitively, $\rho$ corresponds to a weighted average of each state. There were also several important properties associated with density matrices:
\begin{enumerate}
\item $\tr{(\rho)} = 1$
\item $\rho$ is positive semidefinite.
\end{enumerate}
Property 2 will be important later in this lecture, so keep it in the back of your head for now.
\fill \noindent In our old notation, the physically allowable operations on $\ket{\psi}$ were unitary operations $U$ -- the state $\ket{\psi'} = U\ket{\psi}$ is the resulting state after applying the unitary $U$ on $\ket{\psi}$. In our new notation, $\rho' = U\rho U^\dag$ corresponds to applying the unitary $U$ on the general quantum state described by $\rho$ -- the result is $\rho'$.
%==============================%
\section{General (Physically Admissible) Quantum Operations}
%==============================%
The density matrix formalism does not limit us to just unitary operations. Specifically, any \emph{physically admissible operation} $\Phi$ is a collection of matrices $A_1, A_2, \dots, A_k$ whose entries are in $\complex$, which satisfy
\begin{equation} \label{eq:phya:completeness} \sum_{i} A^\dag_iA_i = I \end{equation}
and which act on a density matrix $\rho$ as
\begin{equation} \label{eq:phya:app} \Phi(\rho) = \sum_{i} \op{A_i}{\rho} \end{equation}
Note that each $A_i$ in $\Phi$ does \emph{not} have to be a square matrix, and that unitary matrices $U$ are a special case of $\Phi$ with $k = 1$.
\noindent Applying $\Phi$ to $\rho$ yields another density matrix, just as applying a unitary $U$ to $\ket{\psi}$ results in another valid quantum state. In fact, $\Phi$ is also known as a completely positive trace preserving (CPTP) operator, which is jargon for an operation that preserves a matrix's trace and positive semi-definiteness. Another way to think of $\Phi$ is as follows. Let $\X$ and $\Y$ denote spaces describing $m$ and $n$ qubits, respectively, and let $\L(\X, \Y)$ be the set of all linear mappings from $\X$ to $\Y$. Let $\D(\X)$ be the set of all density matrices in $\X$; define $\D(\Y)$ in a similar manner. Then $\Phi : \D(\X) \rightarrow \D(\Y)$ or, informally, $\Phi$ maps density matrices on $\X$ to density matrices on $\Y$.
\fill \noindent \Example{1} Here we investigate decoherence as a physically admissible operation, which is when a quantum system interacts with some environmental noise and loses its quantumness (i.e. gets measured). Note that the operation used in this example is equivalent to measurement in the standard basis $\set{\ket{0}, \ket{1}}$.
Here we have $A_0 = \kera{0}$ and $A_1 = \kera{1}$ so that
\begin{equation*}
\Phi(\rho) = \op{A_0}{\rho} + \op{A_1}{\rho}
\end{equation*}
First we check that $A_0$ and $A_1$ are valid, i.e. that they satisfy Eqn. \ref{eq:phya:completeness}:
\begin{align*}
\sum_{i} A^\dag_iA_i &= A^\dag_0A_0 + A^\dag_1A_1 \\
&= \ket{0}\bret{0}{0}\bra{0} + \ket{1}\bret{1}{1}\bra{1} \\
&= \kera{0} + \kera{1} \\
&= I
\end{align*}
which they do. Now assume that $\rho=\kera{\psi}$ where $\ket{\psi} = \alpha\ket{0} + \beta\ket{1}$, i.e. that $\rho$ corresponds to a pure state. We have,
\begin{align*}
\Phi(\kera{\psi}) &= \op{A_0}{\kera{\psi}} + \op{A_1}{\kera{\psi}} \\
&= \kera{0}\kera{\psi}\kera{0} + \kera{1}\kera{\psi}\kera{1} \\
&= \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right) \left(\begin{array}{cc} \abs{\alpha}^2 & \alpha\conj{\beta} \\ \conj{\alpha}\beta & \abs{\beta}^2 \end{array} \right) \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right) + \left(\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array}\right) \left(\begin{array}{cc} \abs{\alpha}^2 & \alpha\conj{\beta} \\ \conj{\alpha}\beta & \abs{\beta}^2 \end{array} \right) \left(\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array}\right) \\
&= \left(\begin{array}{cc} \abs{\alpha}^2 & 0 \\ 0 & \abs{\beta}^2 \end{array} \right) \\
&= \abs{\alpha}^2\kera{0} + \abs{\beta}^2\kera{1}
\end{align*}
which is exactly what we get by doing a measurement in the standard basis! To see why, the last line shows that we can think of $\Phi(\kera{\psi})$ as representing the general quantum state where we have an $\abs{\alpha}^2$ chance of being in $\ket{0}$ and a $\abs{\beta}^2$ chance of being in $\ket{1}$. This is the definition of measurement in the standard basis.
\fill \noindent Another important operation is the partial trace. Imagine that we have an $(m + n)$-qubit state, e.g. $\ket{\psi} = \sum_{x \in \set{0, 1}^m}\sum_{y \in \set{0, 1}^n} \alpha_{xy}\ket{x}\ket{y}$. We want to examine the resulting state after we discard either $x$ or $y$. In our old notation this is easy, at least for a product state -- we can simply drop the $\ket{x}$ or the $\ket{y}$ factor. But in our new notation of density matrices, it is not as obvious. Let $\X$ and $\Y$ be the spaces describing $\ket{x}$ and $\ket{y}$, respectively, and assume without any loss of generality that we want to remove $\ket{y}$. Then what we want is an operation $\Phi: \D(\X \otimes \Y) \rightarrow \D(\X)$ -- this is the partial trace, denoted as $\tr_y$ for our specific case to explicitly indicate that we want to discard $\ket{y}$.
\fill \noindent Let's examine what $\tr_y$ would look like when $\X$ and $\Y$ each describe one qubit -- i.e., when we have a two-qubit state and we want to discard the second qubit. We let $\Phi$ be given by the matrices:
\begin{equation*}
\begin{split}
A_0 &= I_x \otimes \bra{0}_y = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right) \\
A_1 &= I_x \otimes \bra{1}_y = \left(\begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)
\end{split}
\end{equation*}
Note that $A_0$ and $A_1$ are $2 \times 4$ matrices -- $I_x$ is the $2 \times 2$ identity matrix. We first check that $A_0$ and $A_1$ satisfy Eqn.
\ref{eq:phya:completeness}:
\begin{align*}
\sum_{i} A^\dag_iA_i &= A^\dag_0A_0 + A^\dag_1A_1 \\
&= (I_x \otimes \ket{0}_y)(I_x \otimes \bra{0}_y) + (I_x \otimes \ket{1}_y)(I_x \otimes \bra{1}_y) \\
&= (I_x \otimes \kera{0}_y) + (I_x \otimes \kera{1}_y) \\
&= I_x \otimes (\kera{0}_y + \kera{1}_y) \\
&= I_x \otimes I_y = I_{xy}
\end{align*}
Now let's do an example calculation using $\Phi$.
\fill \noindent \Example{2} Let $\rho = \kera{0} \otimes \kera{1}$. Intuitively, we should get $\kera{0}$ back after applying $\Phi$. Let's see if this is indeed the case.
\begin{align*}
\Phi(\rho) &= \op{A_0}{\rho} + \op{A_1}{\rho} \\
&= (I_x \otimes \bra{0}_y)(\kera{0} \otimes \kera{1})(I_x \otimes \ket{0}_y) + (I_x \otimes \bra{1}_y)(\kera{0} \otimes \kera{1})(I_x \otimes \ket{1}_y) \\
&= \cancelto{0}{(\kera{0} \otimes \bret{0}{1}\bra{1})}(I_x \otimes \ket{0}_y) + (\kera{0} \otimes \bret{1}{1}\bra{1})(I_x \otimes \ket{1}_y) \\
&= (\kera{0} \otimes \bra{1})(I_x \otimes \ket{1}_y) \\
&= \kera{0}
\end{align*}
which is exactly what we expected. In the general case, e.g. when we have $\rho' = \rho \otimes \kera{0}$, then $\Phi(\rho') = \rho$, which is consistent with the physical meaning of $\tr_y$ -- take away $y$ and only look at $x$. We can think of it as ``tracing out $y$'', which is where the name ``partial trace'' comes from.
\fill \noindent Now what if the qubits $x$ and $y$ are entangled? What do we get after we apply $\tr_y$? We will use $\ket{\phi^+} = \frac{1}{\sqrt{2}}(\ket{00} + \ket{11})_{xy}$ as our example.
\fill \noindent \Example{3} $\tr_y(\kera{\phi^+}) = \frac{1}{2}\kera{0} + \frac{1}{2}\kera{1} = \frac{1}{2}I$ which can be verified by brute-force computation using $\Phi$, like we did in Example 2. So we see that we get a \emph{maximally mixed state}, i.e. a density matrix where the pure states form an orthonormal basis and each state has a probability $1/n$ of occurring, where $n$ is the dimension of the matrix. Here, $n = 2$. In fact, it can be shown that $\tr_y$ for \emph{any} Bell state is $\frac{1}{2}I$. Here's why this makes sense. In a Bell state, when we measure the second register, we \emph{collapse} the resulting state to either just $\ket{0}$ with probability $1/2$, or just $\ket{1}$ with probability $1/2$ -- exactly our maximally mixed state. For example in $\ket{\phi^+}$, if we measure a $1$ in the second register, then the state collapses to $\ket{1}_x$; otherwise, it collapses to $\ket{0}_x$.
\fill \noindent The next operation we will consider is an extension of Example 1. Instead of constraining the measurement to a specific basis, e.g. the standard basis, let's generalize it. Imagine that $\Gamma$ represents the set of all possible outcomes resulting from our measurement and, for an $a \in \Gamma$, $M_a$ is the matrix ``capturing'' an outcome $a$. For example in Example 1, $\Gamma = \set{0, 1}$, and $M_0 = \kera{0}$, $M_1 = \kera{1}$. If
\begin{equation*}
\sum_{a \in \Gamma} M^\dag_aM_a = I
\end{equation*}
then the set of matrices $M = \set{M_a \: | \: a \in \Gamma}$ forms a physically admissible operation $\Phi$, i.e. $\Phi$ is a valid measurement. Let's take an outcome $a$ and its measurement matrix $M_a$ where $M_a$ is a projector ($M^2_a = M_a$). Then $\tr(M_a\rho M^\dag_a)$ is the probability that we will observe outcome $a$ when we measure the quantum system described by $\rho$. In Example 1, $\tr(M_0\rho M^\dag_0) = \abs{\alpha}^2$, which is exactly the probability of measuring a $0$. Now assume that we did measure $a$.
Then our state would collapse to
\begin{equation*}
\frac{M_a\rho M^\dag_a}{\tr(M_a\rho M^\dag_a)}
\end{equation*}
which makes sense. If $M_a$ is the measurement matrix ``capturing'' an outcome $a$ then when we see $a$, we should only have $M_a\rho M^\dag_a$ left in our summation for $\Phi(\rho)$. However $M_a\rho M^\dag_a$ may not be a valid density matrix, so we need to normalize its trace back down to $1$ -- this is what the $\tr(M_a\rho M^\dag_a)$ in the denominator does. This corresponds to what we do in the pure state case, which is to keep only the parts of the state that contain the measured outcome in their qubits ($M_a\rho M^\dag_a$ here), then normalize that part (divide by $\tr(M_a\rho M^\dag_a)$).
\fill \noindent The above is known as a Von Neumann measurement. Another way to think about measurement is through positive operator valued measurements (POVMs), which are the most general class of quantum measurements. In a POVM, we do not care about the resulting state after the measurement; only the probability of a specific outcome $a$. Above, we determined this to be $\tr(M_a\rho M^\dag_a)$. Using a nice property of the trace, $\tr(AB) = \tr(BA)$, we see that $\tr(M_a\rho M^\dag_a) = \tr(M^\dag_aM_a \rho ) = \tr(E_a \rho)$ where $E_a = M^\dag_aM_a$. Given that $\tr(E_a \rho)$ is the probability that we will see outcome $a$, we can think of $E_a$ as a ``probability'' matrix for outcome $a$ that we multiply with $\rho$ to obtain the probability that we measure $a$. Then note that the completeness condition, i.e. Eqn. \ref{eq:phya:completeness}, can be restated as $\sum_{a} E_a = I$ -- intuitively, this is like $\sum_{i} p_i = 1$ for a probability distribution. So POVM is a generalization of measurement, but in a different flavor.
\fill \noindent It would appear on the surface that a physically admissible operator $\Phi$ is more powerful than a unitary $U$ because $U$ is just $\Phi$ with $k = 1$, i.e. $\Phi$ has more matrices that we're applying on $\rho$. Surprisingly, according to Stinespring's Dilation Theorem, this is not the case -- unitaries and physically admissible operators are \emph{equally} powerful. Proving this result is beyond the scope of this class, but the basic idea is that we can simulate any circuit implementing $\Phi$ with a corresponding unitary circuit using some ancilla qubits as input to the latter. At a high level, the diagram looks like the following:
\includegraphics[scale = 0.7]{phisim}
%NOTE: Don't know how to draw the above as a circuit, spent too much time trying to figure it out.
\noindent \Example{4} As an example, let's simulate $\Phi$ as described in Example 1. The following circuit will do the trick:
\centerline{ \Qcircuit @C=1em @R=0.75em { \lstick{\ket{0}} & \targ & \qw & \qw \\ \lstick{\alpha\ket{0}+\beta\ket{1}} & \ctrl{-1} & \qw & \rstick{\rho' = \Phi(\rho)} } }
\fill \noindent so we see that we've introduced an extra ancilla qubit and applied the CNOT gate to do the measurement, discarding the ancilla at the end.
%==============================%
\section{Quantum Information Theory}
%==============================%
So far we talked about the no-cloning theorem (1), distinguishing between two quantum states (2), entanglement (3), and the density matrix formalism (4). An example of (2) was given in HW 1, Problem 4. Here Alice and Bob are physically separated, each given one of the qubits of some 2-qubit state; the 2-qubit state is either State I or State II.
The problem asked us to devise a strategy they could use to distinguish whether they're sharing State I or State II, given that they are allowed to make only local measurements and communicate across a classical channel. An example of (3) was quantum teleportation, where we showed that through entanglement, Alice can transmit a 1-qubit state to Bob.
\fill \noindent In both classical and quantum information theory, we have three important issues that we would like to consider. These are:
\begin{enumerate}[start=0]
\item What is the information comprised of? What is its source? How do we measure the \emph{amount} of information that's being transmitted?
\begin{itemize}
\item In the classical world, the source is a random variable $X$. We measure the amount of information using the Shannon entropy, $\se(X)$.
\item In the quantum world, the source is a mixed state $\rho$. We measure the amount of information using the Von Neumann entropy, $\vne(\rho)$.
\end{itemize}
\item How do we transmit information over a noiseless channel? To put it another way, if we store the information on a physical device, then what is the optimal compression that's possible for storing this information source?
\begin{itemize}
\item Given an $m$-bit source, if we want to represent it using only $\alpha\bullet m$ bits where $\alpha < 1$, then Shannon's noiseless source coding theorem says that we can only do so iff $\alpha\bullet m \geq H(X)$.
\item Given a density matrix $\rho$ on $m$ qubits, if we want to represent it using only $\alpha\bullet m$ qubits where $\alpha < 1$, then Schumacher's noiseless quantum source coding theorem says that we can only do so iff $\alpha\bullet m \geq S(\rho)$.
\end{itemize}
\item How do we transmit information over a noisy channel?
\begin{itemize}
\item We can do so using error correction codes, ECC.
\item We can do so using quantum error correction codes, QECC.
\end{itemize}
\end{enumerate}
\fill \noindent Imagine that we have a classical source churning out $n$-bit binary strings that are being transmitted over a quantum channel. The information transfer can be summarized by the following steps for a given $n$-bit binary string $x$:
\begin{enumerate}
\item The classical source churns out $x$.
\item Some kind of process encodes $x$ as an $\alpha\bullet n$-qubit quantum source $\rho_x$, where $\alpha \leq 1$.
\item $\rho_x$ is transmitted across the quantum channel.
\item $\rho_x$ is decoded at the end to get back $x$.
\end{enumerate}
The question here is whether we can use a shorter quantum message to transmit our classical message, i.e. whether we can find an $\alpha < 1$. Unfortunately, we're stuck with $\alpha = 1$ due to Holevo's theorem, which states that it is impossible to communicate more than $n$ bits of classical information when transmitting $n$ qubits.
%==============================%
\section{Entropy}
%==============================%
\subsection{Shannon Entropy}
We think of information as some kind of a message but, as counter-intuitive as it might be, a message may not necessarily contain any ``information''. For example, if all participating parties know the exact contents of the message that's being transmitted, then anybody who opens and reads the message does not gain any new ``information'' -- they already know what to expect! So really, it is better to think of ``information'' as the \emph{amount of uncertainty} that's present in a message. The more uncertainty we have in our message, the more ``information'' it contains.
For example, an e-mailed job offer would tell you a lot of information because, in general, there are a lot of factors that determine whether you will get the job or not and hence, a lot of uncertainty on whether you will get said job.
\fill \noindent In physics, entropy is a quantity used to capture the amount of uncertainty in a given system. Similarly, the \emph{Shannon Entropy} quantifies the amount of uncertainty in a given piece of information -- that is why we use the term entropy. Mathematically it is defined as follows. Say we have a source that outputs a random variable $X \in \set{0, 1}^m$, taking value $x$ with probability $p_x$. Then we define the Shannon Entropy $\se(X)$ as:
\begin{equation} \label{eq:def-se} \se(X) = -\sum_{x} p_x \lg{p_x} \end{equation}
where we define $0\lg{0} = 0$.
\fill \noindent \Example{5} Let $X \in \set{0, 1}$ where $p_0 = 0$ and $p_1 = 1$. Note that there is no uncertainty involved in $X$ -- we know that our source will always transmit a $1$. Thus we expect that $\se(X) = 0$. Doing the calculation:
\begin{align*} \se(X) &= -\sum_{x} p_x \lg{p_x} \\ &= -1\lg{1} \\ &= 0 \end{align*}
\fill \noindent \Example{6} Let $X \in \set{0, 1}$ where $p_0 = p_1 = 1/2$, i.e. $X$ corresponds to a fair coin. Then we see that:
\begin{align*} \se(X) &= -\sum_{x} p_x \lg{p_x} \\ &= -\left( \frac{1}{2}\lg{\frac{1}{2}} + \frac{1}{2}\lg{\frac{1}{2}} \right) \\ &= -\left(-\frac{1}{2} - \frac{1}{2}\right) \\ &= 1 \end{align*}
which is exactly 1 bit of information. This is the mathematical definition of a bit.
\fill \noindent Generalizing Example 6, if we have $X \in \set{0, 1}^m$ and $H(X) = m$, then this corresponds to $m$ fair coins, each with 1 bit of information, for a total of $m$ bits of information. Note that the Shannon Entropy is non-negative and bounded above by the number of bits, i.e. $0 \leq \se(X) \leq m$ for $X \in \set{0,1}^m$.
%NOTE: I didn't quite hear this part in lecture, so couldn't record it.
It also works well in a syntactic sense in that we can take many independent and identically distributed random variables, and the total entropy is simply the sum of the individual entropies.
%NOTE: Didn't hear this part in lecture too, I think it was glazed over.
\subsection{Von Neumann Entropy}
Imagine that we have an $m$-qubit space $\X$, and a source that outputs one of $k$ pure states $\ket{\psi_1}, \dots, \ket{\psi_k}$ with probabilities $p_1, \dots, p_k$. Then our source is summarized by the density matrix $\rho_x = \sum_{i} p_i \kera{\psi_i}$. First, note that the quantum source subsumes the classical source, as any classical source with $X \in \{0, 1\}^m$ and associated probability $p_x$ is just the density matrix $\rho' = \sum_{x \in \set{0, 1}^m} p_x \kera{x}$.
%NOTE: In class, we had a density matrix with density matrices p1, p2, ..., pm along its diagonals as being the one that summarizes a classical source. I don't understand why this is the case. The previous sentence makes more sense to me so I included that instead. If it is incorrect, then I apologize.
Let us try to come up with a way to describe the amount of information contained in the density matrix, i.e. its entropy. At a glance, it might seem like we could use $\se$ defined above. Letting $\e(\rho_x)$ denote the entropy quantity of $\rho_x$, we get:
\begin{equation*} \e(\rho_x) = -\sum_{i} p_i \lg{p_i} \end{equation*}
However this definition does not work, because it depends only on the probabilities $p_i$ and not on the states themselves -- two genuinely different quantum sources can be assigned the same entropy.
For example, the quantum source $S_1$ that outputs $\ket{0}$ with probability $1/2$ and $\ket{1}$ with probability $1/2$ is certainly different than the quantum source $S_2$ that outputs $\ket{0}$ with probability $1/2$ and $\ket{+}$ with probability $1/2$ -- the density matrices are not the same! However the probabilities of all the possible pure states in each source are the same, so $\e(S_1) = \e(S_2)$. The intuition is that in the quantum world, we have different bases that we can output our information in such as the standard and Hadamard bases. Further, there is no requirement that the pure states outputted by our source be orthogonal or even form an orthonormal basis -- $S_2$ is an example of one such source. In the classical world, however, everything is in binary -- all our possible sources will output information that's the same representation, so it is enough to just examine the probabilities. But the Shannon Entropy is too limited in the quantum world because we have sources that output different kinds of pure states together. \fill \noindent Fortunately, there is a way we can resolve our problem and that is by using the spectral decomposition. Because $\rho$ is positive semi-definite, it has a unique decomposition under the eigenbasis which is: \begin{equation*} \rho = \sum_{i = 0}^{d-1} \lambda_i \kera{\psi_i} \end{equation*} where $d$ is the dimension of $\rho$'s space, $\lambda_i$ is an eigenvalue, and $\ket{\psi_i}$ is an eigenvector such that $\rho\ket{\psi_i} = \lambda_i \ket{\psi_i}$ and $\bret{\psi_i}{\psi_j} = \delta_{ij}$. Note that $\lambda_i \geq 0$. Because $\tr(\rho) = 1$, we see that $\sum_{i} \lambda_i = 1$ so the eigenvalues form a valid probability distribution. Then we define the Von-Neumann entropy $\vne$ as: \begin{equation} \label{eq:def-vne} \vne(\rho) = -\sum_{i = 0}^{d-1} \lambda_i \lg{\lambda_i} \end{equation} \fill \noindent \Example{7} Consider the source summarized by $\rho = \kera{0}$, i.e. a source that always outputs the pure state $\ket{0}$. We would expect that $\vne(\rho) = 0$ since there is no uncertainty in what state we get. This is indeed the case, as $\vne(\rho) = -1\lg{1} = 0$, i.e. $\lambda = 1$ is the only eigenvalue of $\rho$. \fill \noindent \Example{8} Consider a fair coin, i.e. the source where $\rho = \frac{1}{2}\kera{0} + \frac{1}{2}\kera{1}$. Here, $\vne(\rho) = 1$ which matches the corresponding classical case. In fact, if we have a mixed state on $n$-qubits where $\rho = \frac{1}{2^n}I$, then $\vne(\rho) = n$. \end{document}
{ "alphanum_fraction": 0.6931017408, "avg_line_length": 64.6625615764, "ext": "tex", "hexsha": "3524de594ed70b33dff76d4329df2257ee1f9fa4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "31a42b297a4644b307b97acd293d9111e567f5c1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "fangsonghub/fangsonghub.github.io", "max_forks_repo_path": "teaching/s17_4510_qc/170516_ei_lec13.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "31a42b297a4644b307b97acd293d9111e567f5c1", "max_issues_repo_issues_event_max_datetime": "2021-05-06T23:19:12.000Z", "max_issues_repo_issues_event_min_datetime": "2021-05-06T23:19:12.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "fangsonghub/fangsonghub.github.io", "max_issues_repo_path": "teaching/s17_4510_qc/170516_ei_lec13.tex", "max_line_length": 1190, "max_stars_count": null, "max_stars_repo_head_hexsha": "31a42b297a4644b307b97acd293d9111e567f5c1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "fangsonghub/fangsonghub.github.io", "max_stars_repo_path": "teaching/s17_4510_qc/170516_ei_lec13.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8255, "size": 26253 }
\chapter{Hull-White Single-Factor Model}
\label{chapter:hullwhitemodel}
The following chapter will study the Hull-White Single-Factor Model to understand the algorithm behind it and prepare a strategy for a parallel implementation. It will first introduce a general overview of how a trinomial tree is constructed, and later show how prices are discounted from it.
\section{Hull-White Trinomial Tree}
In this project, we implement the trinomial tree numerical method to discretize the Hull-White model. In contrast to the standard trinomial tree, the tree used in the Hull-White model incorporates the mean reversion of the interest rate by using a width limit and modified branching methods for the tree. Standard branching (see fig.~\ref{fig:background:standardbranching}) remains the same throughout the tree. At the bottom of the tree, where interest rates are very low, the \enquote{up two/up one/straight along} branching is used (see fig.~\ref{fig:background:altbranchingbottom}). At the top of the tree, where interest rates are very high, the \enquote{straight along/down one/down two} branching is used (see fig.~\ref{fig:background:altbranchingtop}).
\begin{figure} \centering \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.7\linewidth]{img/standardbranch.jpg} \caption{Standard branching} \label{fig:background:standardbranching} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.93\linewidth]{img/bottombranch.jpg} \caption{Bottom branching} \label{fig:background:altbranchingbottom} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.7\linewidth]{img/topbranch.jpg} \caption{Top branching} \label{fig:background:altbranchingtop} \end{subfigure} \caption{Alternative branching methods for a trinomial tree.} \label{fig:background:allbranchings} \source{Based on Options, Futures and Other Derivatives\cite[pg. 698]{ofod}.} \end{figure}
We observe that this pruning characteristic allows us to assess the size of the tree upfront and use this knowledge to enable more specific implementation optimizations, in particular how to map the tree to the parallel device thread and memory architecture. Otherwise, the tree could grow infinitely in its width, making it impossible to map to limited parallel architecture memory resources and invalidating certain parallel implementations.
\section{Overview}
Pricing a single option using the Hull-White short-rate\footnote{The short rate, \textit{r}, at time \textit{t} is the rate that applies to an infinitesimally short period of time at time \textit{t}\cite[pg. 682]{ofod}} single-factor trinomial tree model enables the term structure of interest rates at any given time to be obtained from the value of the short rate \textit{r} at that time and the risk-neutral process for \textit{r}. This shows that, once the process for \textit{r} has been defined, everything about the initial zero curve and its evolution through time can be determined\cite[pg. 683]{ofod}.
\\\\
The model consists of two steps. The first (forward propagation along the tree) is the construction of the trinomial tree in order to obtain a list of alpha values for each time step. These alphas are later used in the second step (backward propagation along the tree) to fit the option/bond prices and obtain the option value back at the root node of the tree.
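As a concrete aside -- based on our reading of Hull's book rather than on new results in this thesis -- the dimensions implied by the pruning discussed above can be written down explicitly. With time step $\triangle t$, the vertical node spacing is $\triangle R = \sigma\sqrt{3\triangle t}$, and $j_{max}$ is chosen as the smallest integer greater than $0.184/(a\triangle t)$, so that
\begin{equation*}
\text{width} = 2 j_{max} + 1, \qquad \text{height} = \text{number of time steps},
\end{equation*}
meaning both tree dimensions are known before any node value is computed, which is precisely what makes an upfront mapping to parallel threads and memory possible.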
The input fed to the algorithm consists of an option, which includes its strike price, maturity, time step length, mean-reversion rate\footnote{denoted as $a$ - determines the relative volatilities of long and short rates\cite[pg.9]{npfits}} and volatility\footnote{denoted as $\sigma$ - determines the overall level of volatility \cite[pg. 9]{npfits}}. The output is the estimated price of the option/bond. The two steps can be generalized as follows:
\begin{enumerate}
\item \textbf{Forward propagation step:} Construct a term structure for the underlying asset by progressing one time step at a time. Determine the risk-neutral rate for a new time step using estimated yield curve data and estimated current asset values.
\item \textbf{Backward propagation step:} Discount the asset prices to estimate the option payoff at maturity, going from the leaves of the tree to its root.
\end{enumerate}
Algorithm~\ref{alg:loops} shows a high-level overview of a function implementing this procedure for pricing one option. The input of the algorithm is an option and a yield curve (used for the computation of alphas) and the output is the estimated price of the option. The function consists primarily of two sequential (convergence) loops whose trip count is the tree height, which contain inner parallel operators whose trip count is the tree width, where tree height and width are specific to each option (and thus vary across options). The tree height depends on the number of time steps, i.e., on the maturity of the underlying bond and the desired precision. The tree width depends on the number of time steps and the input parameters. Different option/bond maturities (leading to different tree heights) and different levels of pricing accuracy (number of simulated time steps leading to different tree dimensions) make the choice of an effective parallelization strategy difficult. It is necessary to have a deep understanding of the algorithm itself to achieve maximum parallelization efficiency.
The book by John Hull\cite{ofod} provides a solid background on the topic, describing the mechanics of interest rates and markets, as well as the application of binomial trees and eventually trinomial trees to option pricing. Chapter 30 further narrows the topic of using trinomial trees as a numerical method and introduces a step-by-step walk-through of applying the algorithm to a basic example. While some of the calculation details are omitted in the book, the authors provide references to previous articles\cite{npfits}\cite{uhwirt}, where they give a thorough explanation backed by more detailed examples.
It is important to mention that the construction of a trinomial tree is a discrete-time, lattice-based\footnote{A model that takes into account expected changes in various parameters, e.g. the interest rate, over the duration of the option} numerical method, but the example in the book is simplified by cutting the tree at a certain height and using analytic formulas to produce a concrete result for a specific financial instrument -- a zero-coupon bond maturing at time $(m + 1) * \triangle t$ \cite[pg. 704]{ofod}. These formulas have been found and proven to be effective by the authors of the book and the articles. While this simplification gives more precise results in the above-mentioned specific case, constructing the entire tree and using all of the time steps provides a foundation for pricing other options with more sophisticated cashflows. All the implementations of this thesis will be focused on the described numerical approach.
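To illustrate how irregular the workload can get across a batch of options, the following small sketch (illustrative only, with hypothetical parameter values and our own naming, not code from this thesis) computes the tree dimensions per option using the width limit quoted above.
\begin{verbatim}
from math import ceil, floor

def tree_dims(maturity, dt, a):
    """Per-option tree dimensions, assuming j_max is the smallest
    integer greater than 0.184 / (a * dt), as in Hull's book."""
    height = int(ceil(maturity / dt))          # number of time steps
    j_max = int(floor(0.184 / (a * dt))) + 1
    return height, 2 * j_max + 1               # (height, width)

print(tree_dims(maturity=9.0,  dt=1.0,  a=0.10))   # e.g. (9, 5)
print(tree_dims(maturity=30.0, dt=0.25, a=0.05))   # e.g. (120, 31)
\end{verbatim}
Two options priced side by side can therefore require trees of very different shapes, which is exactly why a single fixed thread-to-node mapping is hard to choose.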
The following sub-chapters focus primarily on the intuition behind the algorithm, with the sole purpose of providing the reader with a general overview of it. For this reason, many of the details and formulas for calculating specific values are omitted; however, they are thoroughly described in the book and the articles by Hull and White. As the model is best understood visually, we have included some of the supplementary images from the Hull and White book in order to support our explanation of the algorithm.

\newpage

\begin{algorithm}[H]
\DontPrintSemicolon
\caption{High-level overview of pricing a single option using the Hull-White Single-Factor Model\label{alg:loops}}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Option, YieldCurve}
\Output{Price approximation}
\;
alphas[0] = Interest rate at time $\triangle t$ from the yield curve\;
Qs[0][$\text{width}/2$] = 1\tcc*{Initialize the root node to 1\textdollar}
\;
\tcc{Forward propagation (convergence) loop}
\For{$i = 0$ \KwTo height}
{
    \tcc{Compute Qs at the next time step}
    \For{$j = 0$ \KwTo width}
    {
        Qs[$\text{i} + 1$][j] = Compute Q from Qs[i] and alphas[i]\;
    }
    Compute alphas[$\text{i} + 1$] from Qs[$\text{i} + 1$]\;
}
\;
\tcc{Initialize prices at the last time step to 100\textdollar}
Prices[$\text{height} - 1$] = 100\;
\;
\tcc{Backward propagation (convergence) loop}
\For{$i = \mathit{height} - 2$ \KwTo 0}
{
    \tcc{Compute prices at the previous time step}
    \For{$j = 0$ \KwTo width}
    {
        Prices[i][j] = Compute price from Prices[$\text{i}+1$] using alphas[i]\;
    }
}
\;
\tcc{Return price at the root node}
\Return Prices[0][$\text{width}/2$]\;
\end{algorithm}

\section{Forward Propagation}
\label{section:hullwhite:forwardpropagation}
The forward propagation by itself consists of two stages. Each of them computes different values on the same tree. While the tree height can grow indefinitely, depending on the number of time steps, the width of the tree is limited by the mean reversion (as reasoned in chapter \ref{chapter:background}), which determines its maximum width (or, as we refer to it for simplicity, the width).

\paragraph{Indexing}
Since the tree is 2-dimensional, locating and computing values on individual nodes boils down to locating them by index first. Hence, we introduce two indexes, \textit{i} and \textit{j}, used to indicate the position along the tree height and the tree width respectively. To describe the meaning of \textit{i} and \textit{j} visually, we can use fig. \ref{fig:treeconststage1}, where the nodes along the height - A, C, G - can be indexed as $i = 0, 1, 2$. Indexing across the width is different, as we intuitively denote the core of the tree - nodes A, C, G - with $j=0$. Going down along the tree decreases the value of $j$, while going up increases it. This means that nodes D, H are located at $j=-1$, nodes B, F are located at $j=1$ and so on. We denote the level of the highest node across the width as $j_{max}$ and that of the lowest as $j_{min}$. Note that $j_{min}=-j_{max}$ due to the symmetry of the tree, making it possible to omit $j_{min}$ occurrences in our implementations by replacing them with $-j_{max}$. In fig. \ref{fig:treeconststage1}, nodes E and I are at the top and at the bottom of the tree width, hence their $j$ indexes are equal to $j_{max}$ and $j_{min}$ respectively. The index values in this example are $j_{max}=2$ and $j_{min}=-2$. Note that our code cannot always be aligned with this intuition, as array indexes cannot be negative in our programs. Hence we shift the indexing between $j_{min}$ and $j_{max}$ and count from $0$ to $2 j_{max}$ instead (a total of $2 j_{max} + 1$ indexes), which places the root node at index $j_{max}$. Despite that, we have used $j_{min}$ and $j_{max}$ as width boundaries throughout this report, as they are easier to comprehend.
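To make this index shift concrete, the short C++ sketch below shows one possible mapping between the conceptual level $j \in [j_{min}, j_{max}]$ and a non-negative array index. The function names are purely illustrative and are not taken from any implementation discussed in this thesis.
\begin{verbatim}
// Illustrative mapping between the conceptual level j in [-jmax, jmax]
// and a non-negative array index in [0, 2 * jmax].
int levelToIndex(int j, int jmax)   { return j + jmax; }    // root (j = 0) -> jmax
int indexToLevel(int idx, int jmax) { return idx - jmax; }  // inverse mapping
\end{verbatim}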
\paragraph{Stage 1} aims to construct a tree for a variable $R^*$ that is initially 0 and follows the Ornstein–Uhlenbeck stochastic\footnote{With a random probability distribution or pattern that may be analyzed statistically but may not be predicted precisely.} process\footnote{Tends to drift towards its long-term mean (also called mean-reverting).} $dR^*=-aR^*dt + \sigma dz$, which is symmetrical about $R^*=0$\cite[pg. 698-699]{ofod}. Together with $R^*$, the up, mid and down probabilities $p_u$, $p_m$ and $p_d$ are calculated so that they match the expected change and the variance of the change in $R^*$ over the next interval $\triangle t$. Since the width of the tree is limited between $j_{min}$ and $j_{max}$, some of the branching is calculated differently, thus $p_u$, $p_m$ and $p_d$ depend on the node position. Naturally, the tree construction happens iteratively, node by node, starting from the root. At the end of this stage, the first tree will always have a symmetrical shape\footnote{Trinomial trees are recombining, meaning that at any time, an up move followed by a down move has exactly the same effect on the price as a down move followed by an up move.} similar\footnote{Note that the width and height of the tree may differ based on the number of time steps and the maturity of the financial instrument} to fig.~\ref{fig:treeconststage1}.

\begin{figure}[H]
    \centering
    \includegraphics[width=0.8\textwidth]{img/treeconststage1wh.jpg}
    \caption{Example of the trinomial tree for $R^*$.}
    \source{Modified by the authors, based on Options, Futures and Other Derivatives\cite[pg. 699]{ofod}.}
    \label{fig:treeconststage1}
\end{figure}

An important property of this tree is that it is recombining, which makes it symmetric. The probabilities on the lower part of the tree mirror those on the upper part, e.g. the probability that node D moves down to node I is the same as the probability that node B moves up to node E. Furthermore, also due to this symmetry, all unique probabilities can be stored in an array whose size is the width of the tree, because the probabilities depend only on $j$; e.g. the probability of node A reaching node B is the same as the probability of node C reaching node F, and so on. The probabilities are used both in stage 2 of the forward propagation and in the backward propagation; thus, if they are to be stored, they must be kept until the very end. Last but not least, the way the probabilities are calculated is different at $j_{min}$ and $j_{max}$, because of the difference in branching. This can be seen in fig. \ref{fig:treeconststage1}, where nodes E and I branch out differently in comparison to all other nodes.
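To give a flavour of how these probabilities are obtained, the C++ sketch below evaluates the standard-branching probabilities for a node at level $j$, following the closed-form expressions given by Hull\cite{ofod}. The modified branchings used at $j_{min}$ and $j_{max}$ have their own expressions, which are omitted here; all identifiers are illustrative rather than taken from an actual implementation.
\begin{verbatim}
struct Probabilities { double pu, pm, pd; };

// Standard-branching probabilities for a node at level j, with mean-reversion
// rate a and time step dt. They sum to one by construction.
Probabilities standardBranching(int j, double a, double dt) {
    const double x = a * j * dt;
    Probabilities p;
    p.pu = 1.0 / 6.0 + (x * x - x) / 2.0;  // move up one level
    p.pm = 2.0 / 3.0 - x * x;              // stay on the same level
    p.pd = 1.0 / 6.0 + (x * x + x) / 2.0;  // move down one level
    return p;
}
\end{verbatim}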
\paragraph{Stage 2} In this stage, the rates at each node in the tree at each time step are shifted up by an amount $\alpha$, chosen so that the revised tree correctly prices discount bonds \cite[pg. 6]{uhwirt}. This is done by defining $Q_{i,j}$ as the present value of a security that pays off \$1 if node $(i, j)$ is reached and 0 otherwise. The starting point is to set $Q_{0,0}=1$ and $\alpha_0$ to the interest rate at time $\triangle t$. The $Q$s at the next time step are then calculated by using the generalized formula \cite[pg. 705]{ofod}:
\begin{equation}
    Q_{m+1, j} = \sum_k Q_{m,k}\, q(k,j) \exp[-(\alpha_m+k\triangle r)\triangle t] \nonumber
\end{equation}
where $q(k, j)$ is the probability of moving from node $(m, k)$ to node $(m+1, j)$.

Assuming that we start at step $m$, to calculate the $Q$s at step $m+1$, we need to have the $\alpha$ at step $m$. Furthermore, once the $Q$s at step $m+1$ have been calculated, they are in turn used to find the $\alpha$ at step $m+1$. This leads to the conclusion that the $\alpha$s and $Q$s are interrelated at each time step. The $\alpha$s are calculated using the generalized formula \cite[pg. 703]{ofod}:
\begin{equation}
    \alpha_{m} = \dfrac{\ln\left(\sum_{j=-n_m}^{n_m} Q_{m,j}e^{-j\triangle r\triangle t}\right) - \ln{P_{m + 1}}}{\triangle t} \nonumber
\end{equation}
where $n_m$ is the number of nodes on each side of the central node at time $m \triangle t$ and $P_{m+1}$ is the price at time zero of a zero-coupon bond maturing at time $(m+1) \triangle t$.

At the end of this stage, the tree will have changed visually. For example, the tree from fig. \ref{fig:treeconststage1} can be re-shaped as shown in fig. \ref{fig:treeconststage2}.

\begin{figure}[H]
    \centering
    \includegraphics[width=0.8\textwidth]{img/treeconststage2.png}
    \caption{Example of the trinomial tree for $R$.}
    \source{Options, Futures and Other Derivatives, fig. 30.9\cite[pg. 702]{ofod}.}
    \label{fig:treeconststage2}
\end{figure}

An important observation here is that the only outcome of this stage that is used in the backward propagation is the array of $\alpha$s. The $Q$s are in this case intermediary values, used to compute the $\alpha$ at each step, and for this reason the $Q$ values no longer need to be stored once all $\alpha$s have been computed.

\section{Backward Propagation}
The backward propagation starts from the tree constructed during the forward propagation step, in particular from the array of $\alpha$s. At each time step, the option payoff is computed as the discounted value of the expected value at the next time step \cite[pg. 6]{uhwirt}. From this it follows that the nodes at the last time step $i$ (e.g. the nodes without assigned letters in figures \ref{fig:treeconststage1} and \ref{fig:treeconststage2} above) are the starting point of the backward propagation. Their values are set to 100\$ and are used to compute the previous set of nodes (at time step $i-1$). That is done by discounting the bond price values down to the exercise date of the option. At the option expiration time step, we decide whether to exercise the option or let it expire worthless. To achieve that, we calculate the difference between the bond price and the strike price. Positive values mean the option is exercised, while non-positive values mean it expires worthless and are set to $0$. We then discount the option prices further down to the root of the tree to get the approximation of the option price on the valuation day. This is the output of the algorithm. The array of $\alpha$s computed during the forward propagation is used throughout this procedure. It is important to note that determining the option price depends on the type of the option (whether it is a put or a call option).

\section*{Summary}
This chapter has provided a detailed overview of the Hull-White Single-Factor Model, in particular its two-stage procedure for propagating along a trinomial tree. It will serve as the basis for the following chapter, which introduces the challenges of implementing a sequential version of the algorithm in C++.
{ "alphanum_fraction": 0.7736199884, "avg_line_length": 120.3496503497, "ext": "tex", "hexsha": "cbbe822a89ea5178d898935eb850e9e2f459f1dd", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-01-09T21:47:46.000Z", "max_forks_repo_forks_event_min_datetime": "2019-01-09T21:47:46.000Z", "max_forks_repo_head_hexsha": "1734801dddf7faa9e7c6ed1a46acb6ef8492e33c", "max_forks_repo_licenses": [ "ISC" ], "max_forks_repo_name": "MartinMetaksov/diku.OptionsPricing", "max_forks_repo_path": "thesis/chapters/HullWhiteModel.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1734801dddf7faa9e7c6ed1a46acb6ef8492e33c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "ISC" ], "max_issues_repo_name": "MartinMetaksov/diku.OptionsPricing", "max_issues_repo_path": "thesis/chapters/HullWhiteModel.tex", "max_line_length": 1643, "max_stars_count": 3, "max_stars_repo_head_hexsha": "1734801dddf7faa9e7c6ed1a46acb6ef8492e33c", "max_stars_repo_licenses": [ "ISC" ], "max_stars_repo_name": "MartinMetaksov/diku.OptionsPricing", "max_stars_repo_path": "thesis/chapters/HullWhiteModel.tex", "max_stars_repo_stars_event_max_datetime": "2020-06-16T10:20:46.000Z", "max_stars_repo_stars_event_min_datetime": "2019-01-11T11:13:13.000Z", "num_tokens": 4266, "size": 17210 }
\documentclass[12pt, a4paper]{article} \newcommand{\modulename}{Graph Theory} \newcommand{\projectyear}{2018} \newcommand{\projectname}{Project \projectyear} \newcommand{\duedate}{last commit on or before April 8\textsuperscript{th}} \usepackage{xcolor} \usepackage{minted} \usepackage[utf8]{inputenc} \usepackage{tikz} \usepackage{caption} \usepackage{gensymb} \usepackage{lmodern} \usepackage{multirow} \usepackage{booktabs} \usepackage{array} \usepackage{adjustbox} \usepackage{upquote} \usepackage{amsmath} \usepackage{url} \usepackage{fancyhdr} \usepackage{lastpage} \usepackage{abstract} % Abstract \renewcommand{\abstractname}{} \renewcommand{\absnamepos}{empty} % Header and footer \fancypagestyle{plain}{ \renewcommand{\headrulewidth}{0pt} \lhead{} \rhead{} \cfoot{Page~\thepage~of~\pageref{LastPage}} \renewcommand{\footrulewidth}{0.1pt} } \pagestyle{fancy} \fancyhf{} \lhead{\modulename} \rhead{\projectname} \cfoot{Page~\thepage~of~\pageref{LastPage}} \renewcommand{\headrulewidth}{0.1pt} \renewcommand{\footrulewidth}{0.1pt} % Links \usepackage[hidelinks]{hyperref} % Tables \newcolumntype{x}[1]{>{\centering\arraybackslash\hspace{0pt}}p{#1}} % Bibliography \renewcommand{\refname}{\selectfont\normalsize References} \title{\projectname} \author{\modulename} \date{Due: \duedate} \begin{document} \maketitle \noindent This document contains the instructions for \projectname{} for \modulename{}. Please be advised that all students are bound by the Quality Assurance Framework~\cite{gmitqaf} at GMIT which includes the Code of Student Conduct and the Policy on Plagiarism. \subsection*{Context} The following project concerns a real-world problem -- one that you could be asked to solve in industry. You are not expected to know how to do it off the top of your head. Rather, it is expected that you will research and investigate possible ways to tackle the problem, and then come up with your own solution based on those. A quick search for solutions online will convince you that many people have written solutions to the problem already, in many different programming languages, and many of those are not experienced software developers. Note that a series of videos will be provided to students on the course page to help them with the project. \subsection*{Problem statement} You must write a program in the Go programming language~\cite{golang} that can build a non-deterministic finite automaton (NFA) from a regular expression, and can use the NFA to check if the regular expression matches any given string of text. You must write the program from scratch and cannot use the \mintinline{go}{regexp} package from the Go standard library nor any other external library. A regular expression is a string containing a series of characters, some of which may have a special meaning. For example, the three characters ``$.$'', ``$|$'', and ``$^*$'' have the special meanings ``concatenate'', ``or'', and ``Kleene star'' respectively. So, $0.1$ means a $0$ followed by a $1$, $0|1$ means a $0$ or a $1$, and $1^*$ means any number of $1$'s. These special characters must be used in your submission. Other special characters you might consider allowing as input are brackets ``$()$'' which can be used for grouping, ``$+$'' which means ``at least one of'', and ``$?$'' which means ``zero or one of''. You might also decide to remove the concatenation character, so that $1.0$ becomes $10$, with the concatenation implicit. You may initially restrict the non-special characters your program works with to $0$ and $1$, if you wish. 
However, you should at least attempt to expand these to all the digits, and the characters $a$ to $z$, and $A$ to $Z$. You are expected to be able to break this project into several smaller tasks that are easier to solve, and to plug these together after they have been completed. You might do that for this project as follows: \begin{enumerate} \item Parse the regular expression from infix to postfix notation. \item Build a series of small NFA's for parts of the regular expression. \item Use the smaller NFA's to create the overall NFA. \item Implement the matching algorithm using the NFA. \end{enumerate} Overall your program might have the following layout. \begin{minted}{go}
// nfa represents a non-deterministic finite automaton built from a regular expression.
type nfa struct {
    ...
}

// regexcompile builds an NFA from the regular expression r.
func regexcompile(r string) nfa {
    ...
    return n
}

// regexmatch reports whether the string r is accepted by the NFA n.
func (n nfa) regexmatch(r string) bool {
    ...
    return ismatch
}

func main() {
    n := regexcompile("01*0")
    t := n.regexmatch("01110")
    f := n.regexmatch("1000001")
}
\end{minted} \subsection*{Minimum Viable Project} The minimum standard for this project is a GitHub repository containing a single Go program that can execute regular expressions against strings over the alphabet $\{0,1\}$ and with the special characters $.$, $|$, and $*$. The README should clearly document how to compile, run and test your program. It should also explain how your program works, and how you wrote it. A better project will be well organised and contain detailed explanations. The architecture of the system will be well conceived, and examples of running the program will be provided. \subsection*{Submissions} GitHub must be used to manage the development of the software. Your GitHub repository will form the main submission of the project. You must submit the URL of your GitHub repository using the link on the course Moodle page before the deadline. You can do this at any time, as the last commit before the deadline will be used as your submission for this project. Any submission that does not have a full and incremental git history with informative commit messages over the course of the project timeline will be accorded a proportionate mark. It is expected that your repository will have at least tens of commits, with each commit relating to a reasonably small unit of work. In the last week of term, or at any other time, you may be asked by the lecturer to explain the contents of your git repository. While it is encouraged that students will engage in peer learning, any unreferenced documentation and software that is contained in your submission must have been written by you. You can show this by having a long incremental commit history and by being able to explain your code. \subsection*{Marking scheme} This project will be worth 50\% of your mark for this module. The following marking scheme will be used to mark the project out of 100\%. Students should note, however, that in certain circumstances the examiner's overall impression of the project may influence marks in each individual component. \begin{center} \begin{tabular}{llp{8.4cm}} \toprule 25\% & \textbf{Research} & Investigation of problem and possible solutions. \\ \midrule 25\% & \textbf{Development} & Clear architecture and well-written code. \\ \midrule 25\% & \textbf{Consistency} & Good planning and pragmatic attitude to work. \\ \midrule 25\% & \textbf{Documentation} & Detailed descriptions and explanations. \\ \bottomrule \end{tabular} \end{center} \subsection*{Advice for students} \begin{itemize} \item Your git log history should be extensive. 
A reasonable unit of work for a single commit is a small function, or a handful of comments, or a small change that fixes a bug. If you are well organised you will find it easier to determine the size of a reasonable commit, and it will show in your git history. \item Using information, code and data from outside sources is sometimes acceptable -- so long as it is licensed to permit this, you clearly reference the source, and the overall project is substantially your own work. Using a source that does not meet these three conditions could jeopardise your mark. \item You must be able to explain your project while you are working on it, and after it is finished. Bear this in mind when you are writing your README. If you had trouble understanding something in the first place, you will likely have trouble explaining it a couple of weeks later. Write a short explanation of it in your README, so that you can jog your memory later. \item Everyone is susceptible to procrastination and disorganisation. You are expected to be aware of this and take reasonable measures to avoid them. The best way to do this is to draw up an initial straightforward project plan and keep it updated. You can show the examiner that you have done this in several ways. The easiest is to summarise the project plan in your README. Another way is to use a to-do list like GitHub Issues. \item Students have problems with projects from time to time. Some of these are unavoidable, such as external factors relating to family issues or illness. In such cases allowances can sometimes be made. Other problems are preventable, such as missing the submission deadline because you are having internet connectivity issues five minutes before it. Students should be able to show that up until an issue arose they had completed a reasonable and proportionate amount of work, and took reasonable steps to avoid preventable issues. \item Go easy on yourself -- this is one project in one module. It will not define you or your life. A higher overall course mark should not be determined by a single project, but rather by your performance in all your work in all your modules. Here, you are just trying to demonstrate to yourself, to the examiners, and to prospective future employers, that you can take a reasonably straightforward problem and solve it within a few weeks. \end{itemize} \bibliographystyle{plain} \bibliography{bibliography} \end{document}
{ "alphanum_fraction": 0.7639018329, "avg_line_length": 49.2704081633, "ext": "tex", "hexsha": "b4407dd3b59988cbd978daff00c79aee196a9885", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "eb0ac78083dea91a62bbf99e60710117c9abc2d4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "graph-theory/project-2018", "max_forks_repo_path": "project.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "eb0ac78083dea91a62bbf99e60710117c9abc2d4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "graph-theory/project-2018", "max_issues_repo_path": "project.tex", "max_line_length": 243, "max_stars_count": null, "max_stars_repo_head_hexsha": "eb0ac78083dea91a62bbf99e60710117c9abc2d4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "graph-theory/project-2018", "max_stars_repo_path": "project.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2304, "size": 9657 }