\section{Discussion}
| {
"alphanum_fraction": 0.7083333333,
"avg_line_length": 4.8,
"ext": "tex",
"hexsha": "71c531d39d56a9420368ad7f9930e7154d99dc2e",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2022-03-26T19:59:13.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-03-26T19:59:13.000Z",
"max_forks_repo_head_hexsha": "302d6dcc7c0a85a9191098366b076cf9cb5a9f6e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "AllSafeCyberSecur1ty/Nuclear-Engineering",
"max_forks_repo_path": "BEAVRS/docs/specifications/discussion/discussion.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "302d6dcc7c0a85a9191098366b076cf9cb5a9f6e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "AllSafeCyberSecur1ty/Nuclear-Engineering",
"max_issues_repo_path": "BEAVRS/docs/specifications/discussion/discussion.tex",
"max_line_length": 20,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "302d6dcc7c0a85a9191098366b076cf9cb5a9f6e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "AllSafeCyberSecur1ty/Nuclear-Engineering",
"max_stars_repo_path": "BEAVRS/docs/specifications/discussion/discussion.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-26T20:01:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-26T20:01:13.000Z",
"num_tokens": 5,
"size": 24
} |
\documentclass{scrartcl}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{babel}
\usepackage{xcolor}
\usepackage{tikz}
\usetikzlibrary{arrows.meta,calc,decorations.markings,arrows,positioning,fit,intersections,patterns,scopes,datavisualization,backgrounds}
\usetikzlibrary{shapes.geometric,decorations.pathreplacing,shadings,decorations.text}
\usepackage{hyperref}
\usepackage[footnotes,definitionLists,smartEllipses,hybrid,pipeTables=true,shiftHeadings=1,tableCaptions=true]{markdown}
\usepackage[edges]{forest}
\usepackage{cleveref}
\title{Appendix A: Unit Test Documentation}
\author{InfraRoom: IFC Infrastructure Extension Deployment project}
\date{document automatically generated on: \today}
\setkeys{Gin}{width=\linewidth}
\usepackage{tabularx}
\makeatletter
% Patch the markdown package's pipe-table alignment reader so that columns
% declared with the default alignment character (`d') are typeset as
% p{0.5\textwidth} paragraph columns; other alignment characters pass through unchanged.
\def\markdownLaTeXReadAlignments#1{%
\advance\markdownLaTeXColumnCounter by 1\relax%
\if#1d%
\addto@hook\markdownLaTeXTableAlignment{p{0.5\textwidth}}%
\else%
\addto@hook\markdownLaTeXTableAlignment{#1}%
\fi\ifnum\markdownLaTeXColumnCounter<\markdownLaTeXColumnTotal\relax\else\expandafter\@gobble%
\fi\markdownLaTeXReadAlignments%
}
\makeatother
\begin{document}
\maketitle
\section{Summary}
\label{sec:summary}
This is a summary document of all approved unit tests during the IFC Infrastructure Extensions Deployment project.
This document has been automatically produced from the \texttt{readme} files of individual unit tests.
The originating documentation together with all mentioned files can be obtained
from the project's official GitHub repository\footnote{\url{https://github.com/bSI-InfraRoom/IFC-infra-unit-test}}.
\section{Schematics}
\label{sec:schematics}
The following figure represents the dependencies between the individual unit tests.
Unit tests are represented as boxes, while the arrows point from the dependent to the independent unit test.
Each unit test has a link to the corresponding section of this document in the upper right corner.
\tikzset{every label/.style={xshift=1ex, text width=10ex, align=left,
inner sep=1pt, font=\footnotesize}}
\begin{forest}
for tree={ % style of tree nodes
font=\footnotesize,
draw, semithick, rounded corners,
align = center,
inner sep = 2mm,
% style of tree (edges, distances, direction)
edge = {draw, semithick, latex'-},
parent anchor = east,
child anchor = west,
grow = south,
forked edge, % for forked edge
l sep = 12mm, % level distance
fork sep = 6mm, % distance from parent to branching point
}
[Setup-1,name=boiler1,label=\labelcref{sec:project_setup_1}
[Setup-2,name=boiler2,label=\labelcref{sec:project_setup_2}],
[TIN-1,name=tin1,label=\labelcref{sec:tin_1}
[GeoRef-1,name=georef1,label=\labelcref{sec:georeferencing_1}] {
\draw[-latex'] () to[out=north east,in=south east] (boiler1);
}
]
]
\end{forest}
\clearpage
\section{Project Setup 1}
\label{sec:project_setup_1}
\markdownInput{../ProjectSetup-1/readme.md}
\clearpage
\section{Project Setup 2}
\label{sec:project_setup_2}
%\markdownInput{../ProjectSetup-2/readme.md}
\clearpage
\section{TIN 1}
\label{sec:tin_1}
\markdownInput{../Tin-1/readme.md}
\clearpage
\section{Georeferencing 1}
\label{sec:georeferencing_1}
\markdownInput{../Georeferencing-1/readme.md}
\clearpage
\section{Spatial Structure 1}
\label{sec:spatial_1}
%\markdownInput{../SpatialStructure-1/readme.md}
\clearpage
\section{Spatial Structure 2}
\label{sec:spatial_2}
%\markdownInput{../SpatialStructure-2/readme.md}
\clearpage
\section{Spatial Structure 3}
\label{sec:spatial_3}
%\markdownInput{../SpatialStructure-3/readme.md}
\clearpage
\section{Spatial Structure 4}
\label{sec:spatial_4}
\markdownInput{../SpatialStructure-4/readme.md}
\clearpage
\section{Alignment 1}
\label{sec:alignment_12d_1}
\markdownInput{../Alignment-12d-1/readme.md}
\clearpage
\section{Alignment 2}
\label{sec:alignment_12d_2}
\markdownInput{../Alignment-12d-2/readme.md}
\clearpage
\section{Drainage System 1}
\label{sec:drainage_1}
\markdownInput{../DrainageSystem-1/Readme.md}
\clearpage
\section{Drainage System 2}
\label{sec:drainage_2}
\markdownInput{../DrainageSystem-2/Readme.md}
\clearpage
\end{document}
| {
"alphanum_fraction": 0.741580756,
"avg_line_length": 27.8025477707,
"ext": "tex",
"hexsha": "0acb075082938e71aa481504829c2b1c75698b58",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "286ee6c3201bb06dd026cb80c586749549bf1f39",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "javierNadal/IFC-infra-unit-test",
"max_forks_repo_path": "Report/UnitTest_Report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "286ee6c3201bb06dd026cb80c586749549bf1f39",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "javierNadal/IFC-infra-unit-test",
"max_issues_repo_path": "Report/UnitTest_Report.tex",
"max_line_length": 137,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "286ee6c3201bb06dd026cb80c586749549bf1f39",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "javierNadal/IFC-infra-unit-test",
"max_stars_repo_path": "Report/UnitTest_Report.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1279,
"size": 4365
} |
%
%
%
% Created by Rafid Hoda on 2012-08-06.
% Copyright (c) 2012 . All rights reserved.
%
\documentclass[]{article}
% Use utf-8 encoding for foreign characters
\usepackage[utf8]{inputenc}
% Setup for fullpage use
\usepackage{fullpage}
% Uncomment some of the following if you use the features
%
% Running Headers and footers
%\usepackage{fancyhdr}
% Multipart figures
%\usepackage{subfigure}
% More symbols
%\usepackage{amsmath}
%\usepackage{amssymb}
%\usepackage{latexsym}
% Surround parts of graphics with box
\usepackage{boxedminipage}
% Package for including code in the document
\usepackage{listings}
% If you want to generate a toc for each chapter (use with book)
\usepackage{minitoc}
% This is now the recommended way for checking for PDFLaTeX:
\usepackage{ifpdf}
%\newif\ifpdf
%\ifx\pdfoutput\undefined
%\pdffalse % we are not running PDFLaTeX
%\else
%\pdfoutput=1 % we are running PDFLaTeX
%\pdftrue
%\fi
\ifpdf
\usepackage[pdftex]{graphicx}
\else
\usepackage{graphicx}
\fi
\title{Report for Summer Job at PetroStreamz}
\author{Rafid Hoda}
\date{2012-08-07}
\begin{document}
\ifpdf
\DeclareGraphicsExtensions{.pdf, .jpg, .tif}
\else
\DeclareGraphicsExtensions{.eps, .jpg}
\fi
\maketitle
\section*{Summary}
My main task at PetroStreamz was to develop a prototype of a differentiation utility for the Optimizer. We agreed that the best option would be to use open source libraries to handle the differentiation and that Python would be a sensible language to use because of its simplicity. The utility was to work in such a way that the user could choose a txt or ppo file, with functions and variables listed, as input. The utility would differentiate all the functions with respect to all the variables and output the result to a txt or ppo file. This is especially useful with the Optimizer because it automates the differentiation process, saving the user a lot of time and making the workflow much less error-prone.
\\\\
SymPy, a Python-based library, seemed to be the best choice. CasADI was also considered and tested, but did not work as smoothly as SymPy. After simpler prototypes, the final one was coded and tested with typical equations.
\section*{WEEK 1}
The first week I used most of my time getting to know Pipe-It and the Optimizer. I played with the Optimizer and used it to solve simple optimization problems in math. I was then given other example files for further testing. Based on my testing I evaluated the Optimizer's use for math and science students.
\\\\
At the end of the week we got an idea of how the problem could be solved. I also used much of the week researching different symbolic differentiation libraries and picking out worthy candidates. The conclusion after the first week was that SymPy (Python based) and CasADI (Python/C++ based) could be the best options. Sage, Scilab and Octave were also considered. I also read about licensing for the different options. SymPy is licensed under the modified BSD license. This basically means it can be used for commercial purposes either in its original state or modified, but a copyright notice must be included with the product.
\section*{WEEK 2}
Started seriously testing SymPy to see if it could work effectively with Pipe-It's Optimizer. I was given some example files to work with and tested them with SymPy. The results were positive and SymPy seemed to be the best choice. I coded a prototype in Python that would consume a txt file and output another txt file with derivatives.
\\\\
\emph{See separate file for SymPy testing.}
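To give a flavour of how such a prototype can look, the sketch below shows a minimal txt-to-txt differentiator built on SymPy. It is only an illustration: the input layout (first line listing the variables, one function expression per following line) and the name \texttt{differentiate\_file} are assumptions made for this example, not the actual format used in the project.
\begin{lstlisting}[language=Python]
import sympy as sp

def differentiate_file(in_path, out_path):
    # Read non-empty lines; assumed layout: variables on the first line,
    # one function expression per remaining line.
    with open(in_path) as f:
        lines = [line.strip() for line in f if line.strip()]
    variables = [sp.Symbol(name) for name in lines[0].split()]
    with open(out_path, "w") as out:
        for expr_text in lines[1:]:
            expr = sp.sympify(expr_text)  # parse the expression symbolically
            for var in variables:
                out.write("d(%s)/d%s = %s\n" % (expr_text, var, sp.diff(expr, var)))
\end{lstlisting}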
\section*{WEEK 3}
Continued work and testing on the prototype. Tested with common equations in Petroleum. Got started on testing CasADI. Conversed with the developer of CasADI to see if it could be a good option. Although CasADI seemed to work for most cases, it had awkward notation and did not work as smoothly as SymPy. Had problems making a txt prototype for CasADI and ended up concluding that it was not a good choice.
\section*{WEEK 4, 5 and 6}
In the final weeks I coded the final utility. A lot of time went into testing with different input files and file types. The three main projects were to handle txt to txt, txt to ppo, and ppo to ppo. I coded these separately and finally combined them into one. Some research and testing had to be done in order to compile the final product. PyInstaller was used for compiling.
\bibliographystyle{plain}
\bibliography{}
\end{document}
| {
"alphanum_fraction": 0.7839981868,
"avg_line_length": 49.0222222222,
"ext": "tex",
"hexsha": "970bc88028cf3049b3cc14f3ff39d6dfca653538",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b30ef54abef5bde46fb2d3eac8cb43218a7a4c17",
"max_forks_repo_licenses": [
"Xnet",
"X11"
],
"max_forks_repo_name": "rafidhoda/differentiator_utility",
"max_forks_repo_path": "report/report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b30ef54abef5bde46fb2d3eac8cb43218a7a4c17",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Xnet",
"X11"
],
"max_issues_repo_name": "rafidhoda/differentiator_utility",
"max_issues_repo_path": "report/report.tex",
"max_line_length": 701,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "b30ef54abef5bde46fb2d3eac8cb43218a7a4c17",
"max_stars_repo_licenses": [
"Xnet",
"X11"
],
"max_stars_repo_name": "rafidhoda/differentiator_utility",
"max_stars_repo_path": "report/report.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1074,
"size": 4412
} |
\lab{Mayavi}{Mayavi}
\section*{3-D Plotting with Mayavi} % =========================================
Although Matplotlib is capable of creating 3-D plots, Mayavi\footnote{If Mayavi is not installed on your machine, run \li{conda install mayavi} from the command line. See Appendix \ref{updateinstall} for more info.} does it much faster and with better visuals.
Here we introduce methods for plotting space curves, scatter plots, and surfaces in 3-D.
The \li{mlab} submodule within the \li{mayavi} package contains the functions for creating these plots.
% We will use Mayavi for all 3-D plots in these labs. % O RLY?
\begin{warn}
Mayavi must be imported before Matplotlib.
\begin{lstlisting}
from mayavi import mlab
from matplotlib import pyplot as plt
\end{lstlisting}
\end{warn}
\begin{comment}
\begin{info} % Installation note. Make into a footnote?
If you do not have the \li{Mayavi} package installed on your system, you may download it by running the following commands from the command line:
\begin{lstlisting}
$ conda install conda # Download the most recent installer.
$ conda install anaconda # Update all packages.
$ conda install mayavi # Installs mayavi.
\end{lstlisting}
% For more information regarding installing Python packages, see Appendix \ref{updateinstall}.
\end{info}
\begin{table} % Mayavi plotting functions. Probably should put this back in.
\begin{center}
\begin{tabular}
{|c|l|}
\hline
Function & Description \\
\hline
\li{barchart} & Produces 3D histogram-like plots\\
\li{contour3d} & Plots level surfaces of functions of three variables\\
\li{flow} & Creates a trajectory of particles following the flow of a vector field\\
\li{imshow} & Use a colormap to view a 2D array as an image\\
\li{mesh} & Plot a surface using \li{(x,y,z)} coordinates supplied as three 2D arrays\\
\li{plot3d} & Draws lines between points\\
\li{points3d} & Plots glyphs (like points) at the coordinates supplied\\
\li{quiver3d} & Generate 3D vector fields\\
\li{surf} & Plot a surface with a 2D array as elevation data\\
\hline
\end{tabular}
\end{center}
\caption{Some plotting functions in \li{mlab}.}
\label{table:mlab_functions}
\end{table}
\end{comment}
\subsection*{Lines} % ---------------------------------------------------------
The function \li{mlab.plot3d()} is the 3-D Mayavi equivalent for Matplotlib's \li{plt.plot()}.
Because the plot is 3-D, we must provide $x$, $y$, and $z$ coordinates, each contained in 1-D arrays of the same length.
The points \li{(x[i], y[i], z[i])} are graphed in $\mathbb{R}^3$ and connected with straight lines.
Consider the following curve, parametrized by time:
\begin{align*}
x(t) &= \cos(t)(1+\cos(6t))\\
y(t) &= \sin(t)(1+\cos(6t))\\
z(t) &= \sin\left(\frac{6}{11}t\right)
\end{align*}
The following code plots the curve over the time domain $t \in [0,2\pi]$.
The resulting plot is shown in Figure \ref{fig:plot3d}.
\begin{lstlisting}
>>> from mayavi import mlab
>>> import numpy as np
# Calculate the coordinates of a curve parametrized by time.
>>> t = np.linspace(0, 2*np.pi, 100)
>>> x = np.cos(t) * (1 + np.cos(t*6))
>>> y = np.sin(t) * (1 + np.cos(t*6))
>>> z = np.sin(t*6/11.)
# Plot and show the figure.
>>> mlab.plot3d(x, y, z)
>>> mlab.show()
\end{lstlisting}
\begin{figure} % Mayavi line and point plots.
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{plot3d.png}
\caption{A 3-D curve generated by \li{mlab.plot3d()}.}
\label{fig:plot3d}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{points3d.png}
\caption{A 3-D scatter plot generated by \li{mlab.points3d()}.}
\label{fig:points3d}
\end{subfigure}
\end{figure}
\subsection*{Points} % --------------------------------------------------------
The function \li{mlab.points3d()} is the 3-D Mayavi equivalent for Matplotlib's \li{plt.scatter()}.
Each point is plotted, but not connected with lines.
In the code below, the optional input array \li{s} defines a scalar for each point that modifies the color and size of the point.
The output is in Figure \ref{fig:points3d}.
\begin{lstlisting}
>>> t = np.linspace(0, 4*np.pi, 30)
>>> x = np.sin(2*t)
>>> y = np.cos(t)
>>> z = np.cos(2*t)
>>> s = 2 + np.sin(t)
# Adjust the keyword argument 'scale_factor' so all points are visible.
>>> mlab.points3d(x, y, z, s, scale_factor=.15)
>>> mlab.show()
\end{lstlisting}
\subsection*{Surfaces} % ------------------------------------------------------
The function \li{mlab.surf()} renders a 3-D surface.
Because the surface is over a 2-D domain, we create a coordinate grid with \li{np.mgrid} (similar to \li{np.meshgrid()}).
This object uses the slicing syntax \li{[start:stop:step]}, similar to \li{range()} and \li{np.arange()}, but is accessed with brackets instead of parentheses.
The following code produces the hyperbolic paraboloid $f(x,y) = \frac{x^2}{4} - \frac{y^2}{4}$ over the domain $[-4,4]\times[-4,4]$.
The result is displayed in Figure \ref{fig:surf_example}.
\begin{lstlisting}
>>> X, Y = np.mgrid[-4:4:.025, -4:4:.025]
>>> Z = (X**2)/4. - (Y**2)/4.
>>> mlab.surf(X, Y, Z, colormap='RdYlGn')
>>> mlab.show()
\end{lstlisting}
\begin{figure}
\includegraphics[width=.7\textwidth]{mesh_example.png}
\caption{Sample output of \li{mlab.surf()}.}
\label{fig:surf_example}
\end{figure}
Like Matplotlib, Mayavi supports various color schemes, either as a solid color or with a varying colormap.
For example, the plot in Figure \ref{fig:surf_example} uses the colormap \li{'RdYlGn'}.
For a list of all colormaps in Mayavi, see \url{http://docs.enthought.com/mayavi/mayavi/mlab_changing_object_looks.html}.
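Mayavi plotting functions also accept a \li{color} keyword argument, given as an RGB triple of floats between $0$ and $1$, to draw an object in a single solid color instead. For instance, the surface above could be drawn in one (arbitrarily chosen) color as follows.
\begin{lstlisting}
>>> # Same surface as before, but with a single solid color.
>>> mlab.surf(X, Y, Z, color=(.25, .5, .75))
>>> mlab.show()
\end{lstlisting}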
% TODO: More exercises with Mayavi!!! Don't have to be too hard either...
\begin{problem}
Plot the function $z = \frac{1}{10}\sin(10(x^2+y^2))$ on $[-1,1] \times [-1,1]$ using Mayavi.
\end{problem}
| {
"alphanum_fraction": 0.6837578006,
"avg_line_length": 39.7919463087,
"ext": "tex",
"hexsha": "fad9ed9ec39b0d2dd847dce780802398cf40fab7",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-11-05T14:45:03.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-11-05T14:45:03.000Z",
"max_forks_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173",
"max_forks_repo_licenses": [
"CC-BY-3.0"
],
"max_forks_repo_name": "joshualy/numerical_computing",
"max_forks_repo_path": "Introduction/PlottingIntro/mayavi_/Mayavi.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-3.0"
],
"max_issues_repo_name": "joshualy/numerical_computing",
"max_issues_repo_path": "Introduction/PlottingIntro/mayavi_/Mayavi.tex",
"max_line_length": 260,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173",
"max_stars_repo_licenses": [
"CC-BY-3.0"
],
"max_stars_repo_name": "joshualy/numerical_computing",
"max_stars_repo_path": "Introduction/PlottingIntro/mayavi_/Mayavi.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1758,
"size": 5929
} |
\documentclass{ximera}
\input{../preamble}
\title{Series Comparison Tests}
%%%%%\author{Philip T. Gressman}
\begin{document}
\begin{abstract}
We study the direct and limit comparison theorems for infinite series and practice their application.
\end{abstract}
\maketitle
\section*{Online Texts}
\begin{itemize}
\item \link[OpenStax II 5.4: Comparison Tests]{https://openstax.org/books/calculus-volume-2/pages/5-4-comparison-tests}
\item \link[Ximera OSU: Comparison Tests]{https://ximera.osu.edu/mooculus/calculus2/comparisonTests/titlePage}
\item \link[Community Calculus 11.5: Comparison Tests]{https://www.whitman.edu/mathematics/calculus_online/section11.05.html}
\end{itemize}
\section*{Examples}
\begin{example}
\end{example}
\begin{example}
\end{example}
\end{document}
| {
"alphanum_fraction": 0.7709923664,
"avg_line_length": 24.5625,
"ext": "tex",
"hexsha": "fd8b4b3534501a034072ffc7c53f32cec58ef6bd",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ptgressman/math104",
"max_forks_repo_path": "series/20comparisonwarm.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ptgressman/math104",
"max_issues_repo_path": "series/20comparisonwarm.tex",
"max_line_length": 125,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ptgressman/math104",
"max_stars_repo_path": "series/20comparisonwarm.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 228,
"size": 786
} |
\section{Finite Markov Decision Processes}
We say that a system has the \emph{Markov property} if each state includes all information about the previous states and actions that makes a difference to the future.\\
The MDP provides an abstraction of the problem of goal-directed learning from interaction by modelling the whole thing as three signals: action, state, reward.\\
Together, the MDP and agent give rise to the \emph{trajectory} $S_0$, $A_0$, $R_1$, $S_1$, $A_1$, $R_2$, $S_2$, $A_2$, $R_3$, $\dots$. The action choice in a state gives rise (stochastically) to a next state and corresponding reward.
\subsection{The Agent–Environment Interface}
We consider finite Markov Decision Processes (MDPs). The word finite refers to the fact that the states, rewards and actions form a finite set. This framework is useful for many reinforcement learning problems.\\
We call the learner or decision-making component of a system the \emph{agent}. Everything else is the \emph{environment}. The general rule is that anything the agent does not have absolute control over forms part of the environment. For a robot the environment would include its physical machinery. The boundary is the limit of the agent's absolute control, not of its knowledge.\\
The MDP formulation is as follows. Index time-steps by $t \in \mathbb{N}$. Then actions, rewards, states at $t$ represented by $A_t \in \mathcal{A}(s)$, $R_t \in \mathcal{R} \subset \mathbb{R}$, $S_t \in \mathcal{S}$. Note that the set of available actions is dependent on the current state.\\
A key quantity in an MDP is the following function, which defines the \emph{dynamics} of the system.
\begin{equation}
p(s', r | s, a) \doteq \P{} (S_t = s', R_t = r | S_{t-1} = s, A_{t-1} = a)
\end{equation}
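Since the dynamics function specifies a probability distribution over the next state and reward for each state--action pair, it satisfies
\begin{equation}
\sum_{s' \in \mathcal{S}} \sum_{r \in \mathcal{R}} p(s', r | s, a) = 1 \quad \forall s \in \mathcal{S}, a \in \mathcal{A}(s).
\end{equation}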
From this quantity we can get other useful functions. In particular we have the following:
\begin{description}
\item[state-transition probabilities]
\begin{equation}
p(s' | s, a) \doteq \P{}(S_t = s'| S_{t-1} = s, A_{t-1} = a) = \sum_{r \in \mathcal{R}} p(s', r | s, a)
\end{equation}
note the abuse of notation using $p$ again; and,
\item[expected reward]
\begin{equation}
r(s, a) = \mathbb{E}[R_t | S_{t-1} = s, A_{t-1} = a] = \sum_{r \in \mathcal{R}} r \sum_{s' \in \mathcal{S}} p(s', r | s, a).
\end{equation}
\end{description}
\subsection{Goals and rewards}
We have the \emph{reward hypothesis}, which is a central assumption in reinforcement learning:
\begin{quote}
All of what we mean by goals and purposes can be well thought of as the maximisation of the expected value of the cumulative sum of a received scalar signal (called reward).
\end{quote}
\subsection{Returns and Episodes}
Denote the sequence of rewards from time $t$ as $R_{t+1}$, $R_{t+2}$, $R_{t+3}$, $\dots$. We seek to maximise the \emph{expected return} $G_t$ which is some function of the rewards. The simplest case is where $G_t = \sum_{\tau > t} R_\tau$.\\
In some applications there is a natural final time-step, which we denote $T$. The final time-step corresponds to a \emph{terminal state} that breaks the agent-environment interaction into subsequences called \emph{episodes}. Each episode ends in the same terminal state, possibly with a different reward. Each starts independently of the last, with some distribution of starting states. We denote the set of states including the terminal state as $\mathcal{S}^+$.\\
Sequences of interaction without a terminal state are called \emph{continuing tasks}. \\
We define $G_t$ using the notion of \emph{discounting}, incorporating the \emph{discount rate} $0 \leq \gamma \leq 1$. In this approach the agent chooses $A_t$ to maximise
\begin{equation}
G_t \doteq \sum_{k = 0}^{\infty} \gamma^k R_{t+k+1}.
\end{equation}
This sum converges whenever the reward sequence is bounded and $\gamma < 1$. If $\gamma = 0$ the agent is said to be myopic. We define $G_T = 0$. Note that
\begin{equation}
G_t = R_{t+1} + \gamma G_{t+1}.
\end{equation}\\
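For example, if the reward is $+1$ at every time step and $\gamma < 1$, then $G_t = \sum_{k=0}^{\infty} \gamma^k = \frac{1}{1-\gamma}$; this is consistent with the recursion above, since $1 + \gamma \frac{1}{1-\gamma} = \frac{1}{1-\gamma}$.\\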
Note that in the case of finite time steps or an episodic problem, then the return for each episode is just the sum (or whatever function) of the returns in that episode.
\subsection{Unified Notation for Episodic and Continuing Tasks}
We want to unify the notation for episodic and continuing learning. \\
We introduce the concept of an \emph{absorbing state}. This state transitions only to itself and gives reward of zero.\\
To incorporate the (disjoint) possibilities that $T=\infty$ or $\gamma = 1$ in our formulation of the return, we might like to write
\begin{equation}
G_t \doteq \sum_{k=t+1}^T \gamma^{k-t-1}R_k.
\end{equation}
\subsection{Policies \& Value Functions}
\subsubsection*{Policy}
A \emph{policy} $\pi(a|s)$ is a mapping from states to the probability of selecting each action in that state. If an agent is following policy $\pi$ and at time $t$ is in state $S_t$, then the probability of taking action $A_t = a$ is $\pi(a|S_t)$. Reinforcement learning is about altering the policy from experience.\\
\subsubsection*{Value Functions}
As we have seen, a central notion is the value of a state. The \emph{state-value function} of state $s$ under policy $\pi$ is the expected return starting in $s$ and following $\pi$ thereafter. For MDPs this is
\begin{equation}
v_\pi(s) \doteq \Epi[G_t | S_t = s],
\end{equation}
where the subscript $\pi$ denotes that this is an expectation taken conditional on the agent following policy $\pi$. \\
Similarly, we define the \emph{action-value function} for policy $\pi$ to be the expected return from taking action $a$ in state $s$ and following $\pi$ thereafter
\begin{equation}
q_\pi(s, a) \doteq \Epi[G_t | S_t = s, A_t = a].
\end{equation}
The value functions $v_\pi$ and $q_\pi$ can be estimated from experience.\\
\subsubsection*{Bellman Equation}
The Bellman equations express the value of a state in terms of the value of its successor states. They are a consistency condition on the value of states.
\begin{align}
v_{\pi}(s) &= \Epi{}[G_t | S_t = s] \\
&= \Epi{}[R_{t+1} + \gamma G_{t+1} | S_t = s] \\
&= \sum_{a \in \mathcal{A}(s)} \pi(a|s) \sum_{s', r} p(s', r | s, a) \left[r + \gamma \Epi{}[G_{t+1} | S_{t+1} = s']\right] \\
&= \sum_{a \in \mathcal{A}(s)} \pi(a|s) \sum_{s', r} p(s', r | s, a) [r + \gamma v_{\pi}(s')]
\end{align}
The value function $v_\pi$ is the unique solution to its Bellman equation.
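One way to see this uniqueness is to write the Bellman equation in matrix form. Defining $r_\pi(s) = \sum_{a} \pi(a|s)\, r(s, a)$ and $P_\pi(s, s') = \sum_{a} \pi(a|s)\, p(s'|s, a)$, the Bellman equation becomes the linear system
\begin{equation}
v_\pi = r_\pi + \gamma P_\pi v_\pi,
\end{equation}
and for $\gamma < 1$ the matrix $I - \gamma P_\pi$ is invertible (the spectral radius of $\gamma P_\pi$ is at most $\gamma$), so $v_\pi = (I - \gamma P_\pi)^{-1} r_\pi$ is the unique solution.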
\subsection{Optimal Policies \& Optimal Value Functions}
We say that $\pi \geq \pi'$ iff $v_\pi (s) \geq v_{\pi'}(s) \quad \forall s \in \mathcal{S}$. A policy that is at least as good as every other policy under this ordering is called an optimal policy. There may be multiple optimal policies; we denote all of them by $\pi_*$.\\
The optimal policies share the same optimal value function $v_*(s)$
\begin{equation}
v_*(s) \doteq \max_\pi v_\pi(s) \quad \forall s \in \mathcal{S}.
\end{equation}
They also share the same optimal action-value function $q_*(s, a)$
\begin{equation}
q_*(s, a) = \max_\pi q_\pi (s, a) \quad \forall s \in \mathcal{S}, a \in \mathcal{A}(s),
\end{equation}
this is the expected return from taking action $a$ in state $s$ and thereafter following the optimal policy.
\begin{equation}
q_*(s, a) = \E{} [R_{t+1} + \gamma v_*(S_{t+1}) | S_{t} = s, A_t = a].
\end{equation}\\
Since $v_*$ is a value function, it must satisfy a Bellman equation (since it is simply a consistency condition). However, $v_*$ corresponds to a policy that always selects the maximal action. Hence
\begin{equation}
v_*(s) = \max_a \sum_{s', r} p(s', r|s, a) [r + \gamma v_*(s')].
\end{equation}
Similarly,
\begin{align}
q_*(s, a) &= \mathbb{E} [R_{t+1} + \gamma \max_{a'}q_*(S_{t+1}, a') | S_t=s, A_t = a]\\
&= \sum_{s', r} p(s', r| s, a ) [r + \gamma \max_{a'}q_*(s', a')].
\end{align} \\
Note that once one identifies an optimal value function $v_*$, then it is simple to find an optimal policy. All that is needed is for the policy to act greedily with respect to $v_*$. Since $v_*$ encodes all information on future rewards, we can act greedily and still make the long term optimal decision (according to our definition of returns).\\
Having $q_*$ is even better since we don't need to check $v_*(s')$ in the succeeding states $s'$, we just find $a_* = \argmax_a q_*(s, a)$ when in state $s$.
| {
"alphanum_fraction": 0.7025132914,
"avg_line_length": 62.2255639098,
"ext": "tex",
"hexsha": "4564588fd141eb7f41ff4f70971a554c8882714c",
"lang": "TeX",
"max_forks_count": 63,
"max_forks_repo_forks_event_max_datetime": "2022-03-24T04:03:43.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-07-31T04:53:21.000Z",
"max_forks_repo_head_hexsha": "a0ac9e5da6eaeae14d297a560c499d1a6e579c2a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "15779235038/reinforcement_learning_an_introduction",
"max_forks_repo_path": "notes/chapters/chapter3/chapter3_content.tex",
"max_issues_count": 7,
"max_issues_repo_head_hexsha": "a0ac9e5da6eaeae14d297a560c499d1a6e579c2a",
"max_issues_repo_issues_event_max_datetime": "2021-12-13T17:11:50.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-11-29T21:04:36.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "15779235038/reinforcement_learning_an_introduction",
"max_issues_repo_path": "notes/chapters/chapter3/chapter3_content.tex",
"max_line_length": 463,
"max_stars_count": 234,
"max_stars_repo_head_hexsha": "c4fccb46a4bb00955549be3505144ec49f0132e5",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ElliotMunro200/reinforcement_learning_an_introduction",
"max_stars_repo_path": "notes/chapters/chapter3/chapter3_content.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-21T03:55:50.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-09-01T00:26:29.000Z",
"num_tokens": 2499,
"size": 8276
} |
\section{Parameterized Surfaces}
\noindent
Parameterized surfaces are a natural of VVFs that map $\mathbb{R}^n \to \mathbb{R}^m$ where usually $n<m$.\\
\noindent
For example, a cylinder of radius 1 can be parameterized as $\vec{r}(u,v) = \langle\sin{u}, \cos{u}, v\rangle$. This particular surface maps $\mathbb{R}^2 \to \mathbb{R}^3$.\\
The paraboloid $z = x^2 + y^2$ can be parameterized as $\vec{r}(u,v)=\langle u, v, u^2+v^2 \rangle$.\\
\noindent
A general trick when trying to parameterize a surface is to substitute $u$ and $v$ for two variables like $x$ and $y$ and find an expression for the third variable in terms of the $u$ and $v$. Although this doesn’t always lead to the most useful parameterization, it can be a good starting point.\\
\noindent
For example, if we wanted to parameterize the surface $y^2=x^2+z^2$ from $y=1$ to $y=9$, we could use the general trick and get $\vec{r}(u,v) = \langle u,\sqrt{u^2+v^2},v\rangle$ where $1\leq u^2+v^2\leq 9^2$. Although this parameterization is technically correct, it is difficult to work with because the bounds for $u$ and $v$ are not independent.\\
Instead, we can recognize that the surface we are trying to parameterize has radial symmetry about the y-axis, and instead let $u$ be and angle and $v$ be a radius to get $\vec{r}(u,v) = \langle v\cos{u}, v, v\sin{u}\rangle$ where $0\leq u\leq 2\pi$ and $1\leq v\leq 9$. The parameterization now has independent bounds, which will make operations like integration much easier.
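\noindent
As a quick check that this parameterization really describes the surface: with $x = v\cos{u}$, $y = v$, and $z = v\sin{u}$ we have $x^2 + z^2 = v^2\cos^2{u} + v^2\sin^2{u} = v^2 = y^2$, which is exactly the defining equation $y^2 = x^2 + z^2$.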
| {
"alphanum_fraction": 0.7118193891,
"avg_line_length": 100.4,
"ext": "tex",
"hexsha": "b9fa9ed245e7c6bcaca8dc46c6fb349ebd6e8d01",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "rawsh/Math-Summaries",
"max_forks_repo_path": "multiCalc/differentialMultivariableCalculus/parameterizedSurfaces.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "rawsh/Math-Summaries",
"max_issues_repo_path": "multiCalc/differentialMultivariableCalculus/parameterizedSurfaces.tex",
"max_line_length": 377,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "rawsh/Math-Summaries",
"max_stars_repo_path": "multiCalc/differentialMultivariableCalculus/parameterizedSurfaces.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 476,
"size": 1506
} |
\iffalse meta-comment
File: source3body.tex
Copyright (C) 1990-2012,2014-2021 The LaTeX Project
It may be distributed and/or modified under the conditions of the
LaTeX Project Public License (LPPL), either version 1.3c of this
license or (at your option) any later version. The latest version
of this license is in the file
https://www.latex-project.org/lppl.txt
This file is part of the "l3kernel bundle" (The Work in LPPL)
and all files in that bundle must be distributed together.
The released version of this bundle is available from CTAN.
-----------------------------------------------------------------------
The development version of the bundle can be found at
https://github.com/latex3/latex3
for those people who are interested.
\fi
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% This file is used by
%
% source3.tex % documentation including implementation
%
% interface3.tex % only interface documentation
%
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{abstract}
\setlength\parindent{0pt}
\setlength\parskip{\baselineskip}
\noindent
This is the reference documentation for the \pkg{expl3}
programming environment. The \pkg{expl3} modules set up an experimental
naming scheme for \LaTeX{} commands, which allow the \LaTeX{} programmer
to systematically name functions and variables, and specify the argument
types of functions.
The \TeX{} and \eTeX{} primitives are all given a new name according to
these conventions. However, in the main, direct use of the primitives is
not required or encouraged: the \pkg{expl3} modules define an
independent low-level \LaTeX3 programming language.
The \pkg{expl3} modules are designed to be loaded on top of
\LaTeXe{}. With an up-to-date \LaTeXe{} kernel, this material is loaded
as part of the format. The fundamental programming code can also be loaded
with other \TeX{} formats, subject to restrictions on the full range of
functionality.
\end{abstract}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Each of the following \DocInput lines includes a file with extension
% .dtx. Each of these files may be typeset separately. For instance
% pdflatex l3box.dtx
% will typeset the source of the LaTeX3 box commands. If you use the
% Makefile, the index will be generated automatically; e.g.,
% make doc F=l3box
%
% If this file is processed, each of these separate dtx files will be
% contained as a part of a single document.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\makeatletter
% l3doc is based on article, but for these very large documents we need
% chapters; the code is based on the standard classes but somewhat simplified
\renewcommand\part{%
\clearpage
\thispagestyle{plain}%
\@tempswafalse
\null\vfil
\secdef\@part\@spart}
\newcounter {chapter}
\numberwithin{section}{chapter}
\renewcommand \thechapter {\@arabic\c@chapter}
\renewcommand \thesection {\thechapter.\@arabic\c@section}
\newcommand*\chaptermark[1]{}
\setcounter{secnumdepth}{2}
\newcommand\@chapapp{\chaptername}
\newcommand\chaptername{Chapter}
\def\ps@headings{%
\let\@oddfoot\@empty
\def\@oddhead{{\slshape\rightmark}\hfil\thepage}%
\let\@mkboth\markboth
\def\chaptermark##1{%
\markright {\MakeUppercase{%
\ifnum \c@secnumdepth >\m@ne
\@chapapp\ \thechapter. \ %
\fi
##1}}}}
\newcommand\chapter{\clearpage
\thispagestyle{plain}%
\global\@topnum\z@
\@afterindentfalse
\secdef\@chapter\@schapter}
\def\@chapter[#1]#2{\refstepcounter{chapter}%
\typeout{\@chapapp\space\thechapter.}%
\addcontentsline{toc}{chapter}%
{\protect\numberline{\thechapter}#1}%
\chaptermark{#1}%
\addtocontents{lof}{\protect\addvspace{10\p@}}%
\addtocontents{lot}{\protect\addvspace{10\p@}}%
\@makechapterhead{#2}%
\@afterheading}
\def\@makechapterhead#1{%
\vspace*{50\p@}%
{\parindent \z@ \raggedright \normalfont
\huge\bfseries \@chapapp\space \thechapter
\par\nobreak
\vskip 20\p@
\interlinepenalty\@M
\Huge \bfseries #1\par\nobreak
\vskip 40\p@
}}
\newcommand*\l@chapter[2]{%
\ifnum \c@tocdepth >\m@ne
\addpenalty{-\@highpenalty}%
\vskip 1.0em \@plus\p@
\setlength\@tempdima{1.5em}%
\begingroup
\parindent \z@ \rightskip \@pnumwidth
\parfillskip -\@pnumwidth
\leavevmode \bfseries
\advance\leftskip\@tempdima
\hskip -\leftskip
#1\nobreak\hfil
\nobreak\hb@xt@\@pnumwidth{\hss #2%
\kern-\p@\kern\p@}\par
\penalty\@highpenalty
\endgroup
\fi}
\renewcommand*\l@section{\@dottedtocline{1}{1.5em}{2.8em}}
\renewcommand*\l@subsection{\@dottedtocline{2}{3.8em}{3.2em}}
\renewcommand*\l@subsubsection{\@dottedtocline{3}{7.0em}{4.1em}}
\def\partname{Part}
\def\toclevel@part{-1}
\def\maketitle{\chapter{\@title}}
\let\thanks\@gobble
\let\DelayPrintIndex\PrintIndex
\let\PrintIndex\@empty
\providecommand*{\hexnum}[1]{\text{\texttt{\char`\"}#1}}
\makeatother
\clearpage
{%
\def\\{:}% fix "newlines" in the ToC
\tableofcontents
}
\clearpage
\pagenumbering{arabic}
\part{Introduction}
\chapter{Introduction to \pkg{expl3} and this document}
This document is intended to act as a comprehensive reference manual
for the \pkg{expl3} language. A general guide to the \LaTeX3
programming language is found in \href{expl3.pdf}{expl3.pdf}.
\section{Naming functions and variables}
\LaTeX3 does not use \texttt{@} as a \enquote{letter} for defining
internal macros. Instead, the symbols |_| and \texttt{:}
are used in internal macro names to provide structure. The name of
each \emph{function} is divided into logical units using \texttt{_},
while \texttt{:} separates the \emph{name} of the function from the
\emph{argument specifier} (\enquote{arg-spec}). This describes the arguments
expected by the function. In most cases, each argument is represented
by a single letter. The complete list of arg-spec letters for a function
is referred to as the \emph{signature} of the function.
Each function name starts with the \emph{module} to which it belongs.
Thus apart from a small number of very basic functions, all \pkg{expl3}
function names contain at least one underscore to divide the module
name from the descriptive name of the function. For example, all
functions concerned with comma lists are in module \texttt{clist} and
begin |\clist_|.
Every function must include an argument specifier. For functions which
take no arguments, this will be blank and the function name will end
\texttt{:}. Most functions take one or more arguments, and use the
following argument specifiers:
\begin{description}
\item[\texttt{N} and \texttt{n}] These mean \emph{no manipulation},
of a single token for \texttt{N} and of a set of tokens given in
braces for \texttt{n}. Both pass the argument through exactly as
given. Usually, if you use a single token for an \texttt{n} argument,
all will be well.
\item[\texttt{c}] This means \emph{csname}, and indicates that the
argument will be turned into a csname before being used. So
|\foo:c| |{ArgumentOne}| will act in the same way as |\foo:N|
|\ArgumentOne|. All macros that appear in the argument are expanded.
An internal error will occur if the result of expansion inside
a \texttt{c}-type argument is not a series of character tokens.
\item[\texttt{V} and \texttt{v}] These mean \emph{value
of variable}. The \texttt{V} and \texttt{v} specifiers are used to
get the content of a variable without needing to worry about the
underlying \TeX{} structure containing the data. A \texttt{V}
argument will be a single token (similar to \texttt{N}), for example
|\foo:V| |\MyVariable|; on the other hand, using \texttt{v} a
csname is constructed first, and then the value is recovered, for
example |\foo:v| |{MyVariable}|.
\item[\texttt{o}] This means \emph{expansion once}. In general, the
\texttt{V} and \texttt{v} specifiers are favoured over \texttt{o}
for recovering stored information. However, \texttt{o} is useful
for correctly processing information with delimited arguments.
\item[\texttt{x}] The \texttt{x} specifier stands for \emph{exhaustive
expansion}: every token in the argument is fully expanded until only
unexpandable ones remain. The \TeX{} \tn{edef} primitive carries out
this type of expansion. Functions which feature an \texttt{x}-type
argument are \emph{not} expandable.
\item[\texttt{e}] The \texttt{e} specifier is in many respects
identical to \texttt{x}, but with a very different implementation.
Functions which feature an \texttt{e}-type argument may be
expandable. The drawback is that \texttt{e} is extremely slow
(often more than $200$ times slower) in older engines, more
precisely in non-\LuaTeX{} engines older than 2019.
\item[\texttt{f}] The \texttt{f} specifier stands for \emph{full
expansion}, and in contrast to \texttt{x} stops at the first
non-expandable token (reading the argument from left to right) without
trying to expand it. If this token is a \meta{space token}, it is gobbled,
and thus won't be part of the resulting argument. For example, when
setting a token list variable (a macro used for storage), the sequence
\begin{verbatim}
\tl_set:Nn \l_mya_tl { A }
\tl_set:Nn \l_myb_tl { B }
\tl_set:Nf \l_mya_tl { \l_mya_tl \l_myb_tl }
\end{verbatim}
will leave |\l_mya_tl| with the content |A\l_myb_tl|, as |A| cannot
be expanded and so terminates expansion before |\l_myb_tl| is considered.
\item[\texttt{T} and \texttt{F}] For logic tests, there are the branch
specifiers \texttt{T} (\emph{true}) and \texttt{F} (\emph{false}).
Both specifiers treat the input in the same way as \texttt{n} (no
change), but make the logic much easier to see.
\item[\texttt{p}] The letter \texttt{p} indicates \TeX{}
\emph{parameters}. Normally this will be used for delimited
functions as \pkg{expl3} provides better methods for creating simple
sequential arguments.
\item[\texttt{w}] Finally, there is the \texttt{w} specifier for
\emph{weird} arguments. This covers everything else, but mainly
applies to delimited values (where the argument must be terminated
by some specified string).
\item[\texttt{D}] The \texttt{D} stands for \textbf{Do not use}.
All of the \TeX{} primitives are initially \cs[no-index]{let} to a \texttt{D}
name, and some are then given a second name.
These functions have no standardized syntax, they are engine
dependent and their name can change without warning, thus their
use is \emph{strongly discouraged} in package code: programmers
should instead use the interfaces documented in
\href{interface3.pdf}{interface3.pdf}%^^A
\footnote{If a primitive offers a functionality not yet in the
kernel, programmers and users are encouraged to write to the
\texttt{LaTeX-L} mailing list
(\url{mailto:[email protected]}) describing
their use-case and intended behaviour, so that a possible
interface can be discussed. Temporarily, while an interface is
not provided, programmers may use the procedure described in the
\href{l3styleguide.pdf}{l3styleguide.pdf}.}.
\end{description}
Notice that the argument specifier describes how the argument is
processed prior to being passed to the underlying function. For example,
|\foo:c| will take its argument, convert it to a control sequence and
pass it to |\foo:N|.
Variables are named in a similar manner to functions, but begin with
a single letter to define the type of variable:
\begin{description}
\item[\texttt{c}] Constant: global parameters whose value should not
be changed.
\item[\texttt{g}] Parameters whose value should only be set globally.
\item[\texttt{l}] Parameters whose value should only be set locally.
\end{description}
Each variable name is then built up in a similar way to that of a
function, typically starting with the module\footnote{The module names are
not used in case of generic scratch registers defined in the data
type modules, e.g., the
\texttt{int} module contains some scratch variables called \cs{l_tmpa_int},
\cs{l_tmpb_int}, and so on. In such a case adding the module name up front
to denote the module
and in the back to indicate the type, as in
\texttt{\string\l_int_tmpa_int} would be very unreadable.} name
and then a descriptive part.
Variables end with a short identifier to show the variable type:
\begin{description}
\item[\texttt{clist}] Comma separated list.
\item[\texttt{dim}] \enquote{Rigid} lengths.
\item[\texttt{fp}] Floating-point values;
\item[\texttt{int}] Integer-valued count register.
\item[\texttt{muskip}] \enquote{Rubber} lengths for use in
mathematics.
\item[\texttt{seq}] \enquote{Sequence}: a data-type used to implement
lists (with access at both ends) and stacks.
\item[\texttt{skip}] \enquote{Rubber} lengths.
\item[\texttt{str}] String variables: contain character data.
\item[\texttt{tl}] Token list variables: placeholder for a token list.
\end{description}
Applying \texttt{V}-type or \texttt{v}-type expansion to variables of
one of the above types is supported, while it is not supported for the
following variable types:
\begin{description}
\item[\texttt{bool}] Either true or false.
\item[\texttt{box}] Box register.
\item[\texttt{coffin}] A \enquote{box with handles} --- a higher-level
data type for carrying out \texttt{box} alignment operations.
\item[\texttt{flag}] Integer that can be incremented expandably.
\item[\texttt{fparray}] Fixed-size array of floating point values.
\item[\texttt{intarray}] Fixed-size array of integers.
\item[\texttt{ior}/\texttt{iow}] An input or output stream, for
reading from or writing to, respectively.
\item[\texttt{prop}] Property list: analogue of dictionary or
associative arrays in other languages.
\item[\texttt{regex}] Regular expression.
\end{description}
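As a brief illustration of the naming conventions described above, the following declares and sets two variables for a hypothetical module \texttt{mymod} (the module and variable names here are invented for the example and are not part of the kernel):
\begin{verbatim}
\ExplSyntaxOn
% A local token list variable for the module 'mymod'.
\tl_new:N \l_mymod_title_tl
\tl_set:Nn \l_mymod_title_tl { An~example~title }
% A global integer variable, set only globally.
\int_new:N \g_mymod_total_int
\int_gset:Nn \g_mymod_total_int { 3 }
\ExplSyntaxOff
\end{verbatim}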
\subsection{Scratch variables}
Modules focussed on variable usage typically provide four scratch variables,
two local and two global, with names of the form
\cs{\meta{scope}_tmpa_\meta{type}}/\cs{\meta{scope}_tmpb_\meta{type}}. These
are never used by the core code. The nature of \TeX{} grouping means that as
with any other scratch variable, these should only be set and used with no
intervening third-party code.
\subsection{Terminological inexactitude}
A word of warning. In this document, and others referring to the \pkg{expl3}
programming modules, we often refer to \enquote{variables} and
\enquote{functions} as if they
were actual constructs from a real programming language. In truth, \TeX{}
is a macro processor, and functions are simply macros that may or may not take
arguments and expand to their replacement text. Many of the common variables
are \emph{also} macros, and if placed into the input stream will simply expand
to their definition as well~--- a \enquote{function} with no arguments and a
\enquote{token list variable} are almost the same.\footnote{\TeX{}nically,
functions with no arguments are \tn{long} while token list variables are not.}
On the other
hand, some \enquote{variables} are actually registers that must be
initialised and their values set and retrieved with specific functions.
The conventions of the \pkg{expl3} code are designed to clearly separate the
ideas of \enquote{macros that contain data} and
\enquote{macros that contain code}, and a
consistent wrapper is applied to all forms of \enquote{data} whether they be
macros or
actually registers. This means that sometimes we will use phrases like
\enquote{the function returns a value}, when actually we just mean
\enquote{the macro expands to something}. Similarly, the term
\enquote{execute} might be used in place of \enquote{expand}
or it might refer to the more specific case of
\enquote{processing in \TeX's stomach}
(if you are familiar with the \TeX{}book parlance).
If in doubt, please ask; chances are we've been hasty in writing certain
definitions and need to be told to tighten up our terminology.
\section{Documentation conventions}
This document is typeset with the experimental \pkg{l3doc} class;
several conventions are used to help describe the features of the code.
A number of conventions are used here to make the documentation clearer.
Each group of related functions is given in a box. For a function with
a \enquote{user} name, this might read:
\begin{function}[label = ]{\ExplSyntaxOn, \ExplSyntaxOff}
\begin{syntax}
|\ExplSyntaxOn| \dots{} |\ExplSyntaxOff|
\end{syntax}
The textual description of how the function works would appear here. The
syntax of the function is shown in mono-spaced text to the right of
the box. In this example, the function takes no arguments and so the
name of the function is simply reprinted.
\end{function}
For programming functions, which use \texttt{_} and \texttt{:} in their name
there are a few additional conventions: If two related functions are given
with identical names but different argument specifiers, these are termed
\emph{variants} of each other, and the latter functions are printed in grey to
show this more clearly. They will carry out the same function but will take
different types of argument:
\begin{function}[label = ]{\seq_new:N, \seq_new:c}
\begin{syntax}
|\seq_new:N| \meta{sequence}
\end{syntax}
When a number of variants are described, the arguments are usually
illustrated only for the base function. Here, \meta{sequence} indicates
that |\seq_new:N| expects the name of a sequence. From the argument
specifier, |\seq_new:c| also expects a sequence name, but as a
name rather than as a control sequence. Each argument given in the
illustration should be described in the following text.
\end{function}
\paragraph{Fully expandable functions}
\hypertarget{expstar}{Some functions are fully expandable},
which allows them to be used within
an \texttt{x}-type or \texttt{e}-type argument (in plain \TeX{} terms, inside an \tn{edef} or \tn{expanded}),
as well as within an \texttt{f}-type argument.
These fully expandable functions are indicated in the documentation by
a star:
\begin{function}[EXP, label = ]{\cs_to_str:N}
\begin{syntax}
|\cs_to_str:N| \meta{cs}
\end{syntax}
As with other functions, some text should follow which explains how
the function works. Usually, only the star will indicate that the
function is expandable. In this case, the function expects a \meta{cs},
shorthand for a \meta{control sequence}.
\end{function}
\paragraph{Restricted expandable functions}
\hypertarget{rexpstar}{A few functions are fully expandable} but cannot be fully expanded within
an \texttt{f}-type argument. In this case a hollow star is used to indicate
this:
\begin{function}[rEXP, label = ]{\seq_map_function:NN}
\begin{syntax}
|\seq_map_function:NN| \meta{seq} \meta{function}
\end{syntax}
\end{function}
\paragraph{Conditional functions}
\hypertarget{explTF}{Conditional (\texttt{if}) functions}
are normally defined in three variants, with
\texttt{T}, \texttt{F} and \texttt{TF} argument specifiers. This allows
them to be used for different \enquote{true}/\enquote{false} branches,
depending on
which outcome the conditional is being used to test. To indicate this
without repetition, this information is given in a shortened form:
\begin{function}[EXP,TF, label = ]{\sys_if_engine_xetex:}
\begin{syntax}
|\sys_if_engine_xetex:TF| \Arg{true code} \Arg{false code}
\end{syntax}
The underlining and italic of \texttt{TF} indicates that three functions
are available:
\begin{itemize}
\item |\sys_if_engine_xetex:T|
\item |\sys_if_engine_xetex:F|
\item |\sys_if_engine_xetex:TF|
\end{itemize}
Usually, the illustration
will use the \texttt{TF} variant, and so both \meta{true code}
and \meta{false code} will be shown. The two variant forms \texttt{T} and
\texttt{F} take only \meta{true code} and \meta{false code}, respectively.
Here, the star also shows that this function is expandable.
With some minor exceptions, \emph{all} conditional functions in the
\pkg{expl3} modules should be defined in this way.
\end{function}
Variables, constants and so on are described in a similar manner:
\begin{variable}[label = ]{\l_tmpa_tl}
A short piece of text will describe the variable: there is no
syntax illustration in this case.
\end{variable}
In some cases, the function is similar to one in \LaTeXe{} or plain \TeX{}.
In these cases, the text will include an extra \enquote{\textbf{\TeX{}hackers
note}} section:
\begin{function}[EXP, label = ]{\token_to_str:N}
\begin{syntax}
|\token_to_str:N| \meta{token}
\end{syntax}
The normal description text.
\begin{texnote}
Detail for the experienced \TeX{} or \LaTeXe\ programmer. In this
case, it would point out that this function is the \TeX{} primitive
|\string|.
\end{texnote}
\end{function}
\paragraph{Changes to behaviour}
When new functions are added to \pkg{expl3}, the date of first inclusion is
given in the documentation. Where the documented behaviour of a function
changes after it is first introduced, the date of the update will also be
given. This means that the programmer can be sure that any release of
\pkg{expl3} after the date given will contain the function of interest with
expected behaviour as described. Note that changes to code internals, including
bug fixes, are not recorded in this way \emph{unless} they impact on the
expected behaviour.
\section{Formal language conventions which apply generally}
As this is a formal reference guide for \LaTeX3 programming, the descriptions
of functions are intended to be reasonably \enquote{complete}. However, there
is also a need to avoid repetition. Formal ideas which apply to general
classes of function are therefore summarised here.
For tests which have a \texttt{TF} argument specification, the test is
evaluated to give a logically \texttt{TRUE} or \texttt{FALSE} result.
Depending on this result, either the \meta{true code} or the \meta{false code}
will be left in the input stream. In the case where the test is expandable,
and a predicate (|_p|) variant is available, the logical value determined by
the test is left in the input stream: this will typically be part of a larger
logical construct.
\section{\TeX{} concepts not supported by \LaTeX3{}}
The \TeX{} concept of an \enquote{\cs{outer}} macro is \emph{not supported}
at all by \LaTeX3{}. As such, the functions provided here may break when
used on top of \LaTeXe{} if \cs{outer} tokens are used in the arguments.
\DisableImplementation
\part{Bootstrapping}
\DocInput{l3bootstrap.dtx}
\DocInput{l3names.dtx}
\ExplSyntaxOn
\clist_gput_right:Nn \g_docinput_clist { l3kernel-functions.dtx }
\ExplSyntaxOff
\part{Programming Flow}
\DocInput{l3basics.dtx}
\DocInput{l3expan.dtx}
\DocInput{l3sort.dtx}
\DocInput{l3tl-analysis.dtx}
\DocInput{l3regex.dtx}
\DocInput{l3prg.dtx}
\DocInput{l3sys.dtx}
\DocInput{l3msg.dtx}
\DocInput{l3file.dtx}
\DocInput{l3luatex.dtx}
\DocInput{l3legacy.dtx}
\part{Data types}
\DocInput{l3tl.dtx}
\DocInput{l3str.dtx}
\DocInput{l3str-convert.dtx}
\DocInput{l3quark.dtx}
\DocInput{l3seq.dtx}
\DocInput{l3int.dtx}
\DocInput{l3flag.dtx}
\DocInput{l3clist.dtx}
\DocInput{l3token.dtx}
\DocInput{l3prop.dtx}
\DocInput{l3skip.dtx}
\DocInput{l3keys.dtx}
\DocInput{l3intarray.dtx}
\DocInput{l3fp.dtx}
% To get the various submodules of l3fp to appear in the implementation
% part only, they have to be added to the documentation list after typesetting
% the 'user' part just for the main module.
\ExplSyntaxOn
\clist_gput_right:Nn \g_docinput_clist
{
l3fp-aux.dtx ,
l3fp-traps.dtx ,
l3fp-round.dtx ,
l3fp-parse.dtx ,
l3fp-assign.dtx ,
l3fp-logic.dtx ,
l3fp-basics.dtx ,
l3fp-extended.dtx ,
l3fp-expo.dtx ,
l3fp-trig.dtx ,
l3fp-convert.dtx ,
l3fp-random.dtx ,
}
\ExplSyntaxOff
\DocInput{l3fparray.dtx}
\DocInput{l3cctab.dtx}
\part{Text manipulation}
\DocInput{l3unicode.dtx}
\DocInput{l3text.dtx}
\ExplSyntaxOn
\clist_gput_right:Nn \g_docinput_clist
{
l3text-case.dtx ,
l3text-purify.dtx
}
\ExplSyntaxOff
\part{Typesetting}
\DocInput{l3box.dtx}
\DocInput{l3coffins.dtx}
\DocInput{l3color.dtx}
\DocInput{l3pdf.dtx}
\part{Additions and removals}
\DocInput{l3candidates.dtx}
\ExplSyntaxOn
\clist_gput_right:Nn \g_docinput_clist { l3deprecation.dtx }
\ExplSyntaxOff
\endinput
| {
"alphanum_fraction": 0.7276280269,
"avg_line_length": 40.5539215686,
"ext": "tex",
"hexsha": "20e1875361e3677863406af67708f124deabb319",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "37f55a4b3148aa81881f1809c458762ba7a6b994",
"max_forks_repo_licenses": [
"LPPL-1.3c"
],
"max_forks_repo_name": "gucci-on-fleek/latex3",
"max_forks_repo_path": "l3kernel/doc/source3body.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "37f55a4b3148aa81881f1809c458762ba7a6b994",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"LPPL-1.3c"
],
"max_issues_repo_name": "gucci-on-fleek/latex3",
"max_issues_repo_path": "l3kernel/doc/source3body.tex",
"max_line_length": 109,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "37f55a4b3148aa81881f1809c458762ba7a6b994",
"max_stars_repo_licenses": [
"LPPL-1.3c"
],
"max_stars_repo_name": "gucci-on-fleek/latex3",
"max_stars_repo_path": "l3kernel/doc/source3body.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6770,
"size": 24819
} |
\chapter{Docker}
\label{ch:docker}
\textit{Docker is} ``an open platform for developing, shipping, and running applications''. It is a convenient way for us to provide all of the dependencies and the latest release source code so that we can use the Ginan toolkit straight out of the box.
%
In order for this to work, we will first need to install the Docker engine onto our local machine. If we are running a different operating system, instructions on how to install Docker can be found at the \href{https://docs.docker.com/get-docker/}{Docker Desktop download link}; these instructions also cover alternative methods of installing on Ubuntu and include links to recommended best practices.
%
To find more information on Docker, have a look at the \href{https://docs.docker.com/get-started/}{getting started guide} provided by Docker.
%
\section{Ubuntu Docker dependency installation guide}
If we are running Ubuntu, we can install the Docker engine directly. A summary of the commands to download and install Docker, which involves setting up the Ubuntu package repository system to link with the Docker repository, is given below.
\begin{lstlisting}[language=bash]
$ sudo apt -y update
$ sudo apt -y install \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
\end{lstlisting}
Add Docker's official GPG key:
\begin{lstlisting}[language=bash]
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
$ echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
\end{lstlisting}
%
Then update the repository management system and install the packages.
%
\begin{lstlisting}[language=bash]
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
\end{lstlisting}
Verify that the Docker engine is installed correctly by running the \emph{hello-world} image.
\begin{lstlisting}[language=bash]
$ docker run hello-world
\end{lstlisting}
Then we will need to add the current user to the \emph{docker} group:
\begin{lstlisting}[language=bash]
$ sudo usermod -aG docker root
$ sudo usermod -aG docker ${USER}
\end{lstlisting}
You will need to log out and back in for the permissions to take effect.
\section{Using Docker}
Once we have Docker installed on our local machine, we will need to download the image for Ginan:
\begin{lstlisting}[language=bash]
$ docker pull gnssanalysis/ginan:latest
\end{lstlisting}
Then we can run the image as follows:
\begin{lstlisting}[language=bash]
$ docker run -it -v /data:/data gnssanalysis/ginan:latest bash
\end{lstlisting}
This gives us a run-time environment where Ginan is installed, with the executables
\emph{pea} and \emph{pod} already in our path.
Here, the \emph{-v} option mounts a volume inside the docker instance at \emph{/data},
which maps to the \emph{/data} folder of the host machine. This way this folder
can be shared between the host and the container, and the results can persist.
You should now see a \emph{bash} prompt running inside the docker container.
You can check the availability of Ginan executables by running
\begin{lstlisting}[language=bash]
$ pea --help
$ pod --help
\end{lstlisting}
To run some tests, try:
\begin{lstlisting}[language=bash]
$ /ginan/docker/run-tests-pea.sh 1 # this is the test number to run, from 1 to 8 currently
$ /ginan/docker/run-tests-pod.sh 1 # this is the test number to run, from 1 to 6 currently
\end{lstlisting}
Finally, to exit this session, type:
\begin{lstlisting}[language=bash]
$ exit
\end{lstlisting}
\section{Keeping a container running}
If we instantiate a container this way, our session will finish when we quit the \emph{bash} prompt.
The changes we make to the container will also be lost, except the changes that persist outside
of the container, that is, the \emph{/data} folder in our example.
Therefore, it is sometimes useful to keep a container running, and connect to it and detach
from it as needed.
To start up a docker container in the detached mode, run:
\begin{lstlisting}[language=bash]
$ docker run -d -v /data:/data gnssanalysis/ginan:latest sleep infinity
\end{lstlisting}
We can verify that the container is running in the background:
\begin{lstlisting}[language=bash]
$ docker ps
\end{lstlisting}
This will show a container ID. Docker conveniently also provides an alias as a ``name''.
We can start a new bash shell inside the container by:
\begin{lstlisting}[language=bash]
$ docker exec -it <name> bash
\end{lstlisting}
where \emph{<name>} is the name or ID of the running docker container.
| {
"alphanum_fraction": 0.7519866165,
"avg_line_length": 45.5428571429,
"ext": "tex",
"hexsha": "2c36f5e4af7bde34a8da08be6f790e4b88e5eaae",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2022-01-12T15:15:12.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-01-12T15:15:12.000Z",
"max_forks_repo_head_hexsha": "b69593b584f75e03238c1c667796e2030391fbed",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "hqy123-cmyk/ginan",
"max_forks_repo_path": "docs/manual/docker.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b69593b584f75e03238c1c667796e2030391fbed",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "hqy123-cmyk/ginan",
"max_issues_repo_path": "docs/manual/docker.tex",
"max_line_length": 381,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "b69593b584f75e03238c1c667796e2030391fbed",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "GnssTao/ginan",
"max_stars_repo_path": "docs/manual/docker.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-12T15:14:55.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-01-12T15:14:55.000Z",
"num_tokens": 1181,
"size": 4782
} |
\SetAPI{J-C}
\section{ISensorReceiverExtendable}
\label{extendable:ISensorReceiverExtendable}
\ClearAPI
\javadoc{com.koch.ambeth.sensor.ISensorReceiverExtendable}{ISensorReceiverExtendable}
\javadoc{com.koch.ambeth.sensor.ISensorReceiver}{ISensorReceiver}
\TODO
%% GENERATED LISTINGS - DO NOT EDIT
\inputjava{Extension point for instances of \type{ISensorReceiver}}
{jambeth-util/src/main/java/com/koch/ambeth/util/sensor/ISensorReceiverExtendable.java}
\begin{lstlisting}[style=Java,caption={Example to register to the extension point (Java)}]
IBeanContextFactory bcf = ...
IBeanConfiguration myExtension = bcf.registerBean(...);
bcf.link(myExtension).to(ISensorReceiverExtendable.class).with(...);
\end{lstlisting}
%% GENERATED LISTINGS END
| {
"alphanum_fraction": 0.811827957,
"avg_line_length": 41.3333333333,
"ext": "tex",
"hexsha": "655dbc3449570ca1c806d2763ab121476f46f3f6",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2022-01-08T12:54:51.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-10-28T14:05:27.000Z",
"max_forks_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "Dennis-Koch/ambeth",
"max_forks_repo_path": "doc/reference-manual/tex/extendable/ISensorReceiverExtendable.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda",
"max_issues_repo_issues_event_max_datetime": "2022-01-21T23:15:36.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-04-24T06:55:18.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "Dennis-Koch/ambeth",
"max_issues_repo_path": "doc/reference-manual/tex/extendable/ISensorReceiverExtendable.tex",
"max_line_length": 90,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "Dennis-Koch/ambeth",
"max_stars_repo_path": "doc/reference-manual/tex/extendable/ISensorReceiverExtendable.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 198,
"size": 744
} |
\haddockmoduleheading{VirtualArrow.Input}
\label{module:VirtualArrow.Input}
\haddockbeginheader
{\haddockverb\begin{verbatim}
module VirtualArrow.Input (
Preferences, Parliament, DistrictID,
Candidate(Candidate, candidateID, party),
District(District, districtID, nseats),
Voter(Voter, district, preferences),
Input(Input, districts, voters, nparties, districtMap), Party,
NumberOfSeats, parliamentSize, numberOfSeatsByDistrictID,
numberOfSeatsByDistrict, nvoters, firstChoices, firstChoicesAmongVoters,
calculateProportion, prefToPlaces, votersByDistrict, candidateMap
) where\end{verbatim}}
\haddockendheader
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
type\ Preferences\ =\ Vector\ Int
\end{tabular}]
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
type\ Parliament\ =\ {\char 91}(Party,\ NumberOfSeats){\char 93}
\end{tabular}]
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
type\ DistrictID\ =\ Int
\end{tabular}]
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
data\ Candidate
\end{tabular}]\haddockbegindoc
\haddockbeginconstrs
\haddockdecltt{=} & \haddockdecltt{Candidate} & \\
\haddockdecltt{candidateID :: !Int} &
\haddockdecltt{party :: !Int} &
\end{tabulary}\par
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
instance\ Show\ Candidate
\end{tabular}]
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
data\ District
\end{tabular}]\haddockbegindoc
\haddockbeginconstrs
\haddockdecltt{=} & \haddockdecltt{District} & \\
\haddockdecltt{districtID :: !DistrictID} &
\haddockdecltt{nseats :: !NumberOfSeats} &
\end{tabulary}\par
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
instance\ Show\ District
\end{tabular}]
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
data\ Voter
\end{tabular}]\haddockbegindoc
\haddockbeginconstrs
\haddockdecltt{=} & \haddockdecltt{Voter} & \\
\haddockdecltt{district :: !DistrictID} &
\haddockdecltt{preferences :: !Preferences} &
\end{tabulary}\par
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
instance\ Show\ Voter
\end{tabular}]
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
data\ Input
\end{tabular}]\haddockbegindoc
\haddockbeginconstrs
\haddockdecltt{=} & \haddockdecltt{Input} & \\
\haddockdecltt{districts :: ![District]} &
\haddockdecltt{voters :: ![Voter]} &
\haddockdecltt{nparties :: !Int} &
\haddockdecltt{districtMap :: Map DistrictID [Voter]} &
\end{tabulary}\par
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
instance\ Show\ Input
\end{tabular}]
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
type\ Party\ =\ Int
\end{tabular}]
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
type\ NumberOfSeats\ =\ Int
\end{tabular}]
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
parliamentSize\ ::\ Input\ ->\ NumberOfSeats
\end{tabular}]\haddockbegindoc
Total number of seats.\par
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
numberOfSeatsByDistrictID\ ::\ Input\ ->\ DistrictID\ ->\ NumberOfSeats
\end{tabular}]\haddockbegindoc
Returns number of seats by district id.\par
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
numberOfSeatsByDistrict\ ::\ Input\ ->\ {\char 91}(DistrictID,\ NumberOfSeats){\char 93}
\end{tabular}]\haddockbegindoc
Returns list of pairs (district id, number of seats in the district).\par
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
nvoters\ ::\ Input\ ->\ Int
\end{tabular}]\haddockbegindoc
Total number of voters.\par
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
firstChoices\ ::\ Input\ ->\ {\char 91}Party{\char 93}
\end{tabular}]\haddockbegindoc
Returns list of first choices (first preference) for each voter in the input.\par
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
firstChoicesAmongVoters\ ::\ {\char 91}Voter{\char 93}\ ->\ {\char 91}Party{\char 93}
\end{tabular}]\haddockbegindoc
Returns list of first choices (first preference) for each voter in the list.\par
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
calculateProportion\ ::\ Input\ ->\ Int\ ->\ Int
\end{tabular}]\haddockbegindoc
Returns proportion of parliament corresponding to x (number of votes).\par
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
prefToPlaces\ ::\ Preferences\ ->\ Vector\ Int
\end{tabular}]\haddockbegindoc
Returns vector of places, s.t. the element at index i is the place of the ith party in the list of preferences.\par
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
votersByDistrict\ ::\ {\char 91}Voter{\char 93}\ ->\ {\char 91}(DistrictID,\ {\char 91}Voter{\char 93}){\char 93}
\end{tabular}]\haddockbegindoc
Returns pairs (district id, voters in that district)\par
\end{haddockdesc}
\begin{haddockdesc}
\item[\begin{tabular}{@{}l}
candidateMap\ ::\ {\char 91}Candidate{\char 93}\ ->\ Map\ Int\ Int
\end{tabular}]\haddockbegindoc
Returns map from candidate id (Party) to party.\par
\end{haddockdesc} | {
"alphanum_fraction": 0.7017247844,
"avg_line_length": 30.6551724138,
"ext": "tex",
"hexsha": "e16e395f03932a7800203f73913454b376892eae",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2bb93960c03a7c92829f2863e4e9d6fe10e74e24",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "olya-d/virtual-arrow",
"max_forks_repo_path": "doc/virtual-arrow/VirtualArrow-Input.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2bb93960c03a7c92829f2863e4e9d6fe10e74e24",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "olya-d/virtual-arrow",
"max_issues_repo_path": "doc/virtual-arrow/VirtualArrow-Input.tex",
"max_line_length": 113,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2bb93960c03a7c92829f2863e4e9d6fe10e74e24",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "olya-d/virtual-arrow",
"max_stars_repo_path": "doc/virtual-arrow/VirtualArrow-Input.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1761,
"size": 5334
} |
\chapter{Game, Models and Metrics}
\label{ch3:game_model}
In this chapter, we first describe the proposed language game and the definition of numerals in our game. We then introduce the architecture of the models we trained, as well as the transformed iterated learning framework used to train them.
\section{Game Description}
\label{sec3.1:game_description}
Unlike the traditional simulation methods in evolutionary linguistics introduced in Section \ref{sec2.1:evolang}, our architecture has 3 necessary components, which are given as follows:
\begin{itemize}
\item \textit{Environment}: To reflect our linguistic assumptions, as well as to keep the size of the environment limited and thus analysable, all perceptions in the established environment are sequences of objects represented by one-hot vectors. For ease of demonstration, we denote these objects as $o \in \mathcal{O}$ where $\mathcal{O} = \{A, B, C, \dots\}$ is the universal set of all kinds of objects in the following sections.
\item \textit{Agents}: There are 2 kinds of agents in our project: i) \textit{speakers} $S$ that can observe objects in the environment and emit messages $m_i$; ii) \textit{listeners} $L$ that can receive the messages and generate a sequence of objects.
\item \textit{Dynamics}: In this project, the dynamics mean not only the manually designed reward function for agents but also all elements related to training them, e.g. iterated learning and blank vocabulary. The details will be introduced in Subsection \ref{ssec3.2.3:loss_learning} and Subsection \ref{ssec3.2.4:iterated_learning}.
\end{itemize}
It is worth mentioning that one premise of our project is that we do not have any assumption about the architecture of the computational agents, and we focus more on the representations from environments as well as how agents are trained.
\subsection{Game Procedure}
\label{ssec3.1.1:game_procedure}
The first proposed game is to let listeners reconstruct sets of objects based on the messages transmitted by speakers, thus we call it the ``Set-Reconstruct'' game. The overall view of the proposed Set-Reconstruct game is illustrated in Figure \ref{fig2:game_procedure} given as follow.
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{graphs/task_illustration.pdf}
\caption{Diagram of Game Playing Procedure.}
\label{fig2:game_procedure}
\end{figure}
According to the steps of playing games at iteration $i$, the components of our games are illustrated as follows:
\begin{enumerate}
\item Perceptions: the perception from environments is a \textbf{set} of objects, i.e. $s_i=\{o_{i_1}, o_{i_2}, \dots\ o_{i_n}\} \in \mathcal{S}$ where $n$ is the number of elements and $\mathcal{S}$ is meaning space.
\item Speaker observation and message generation: after observing and encoding the perception, speaker $S$ would generate a message $m_i=\{t_{i_1}, t_{i_2}, \dots, t_{i_{|M|}}\} \in \mathcal{M}$, where $|M|$ is the maximum length of messages, the symbols $t_{i_k} \in V$ ($k \in \{1, \dots, |M|\}$) are selected from a randomly initialised and initially meaningless vocabulary $V$ of size $|V|$, and $\mathcal{M}$ is the message space;
\item Listener receiving message and perception reconstruction: after receiving and encoding the message $m_i$, the listener would generate a \textbf{sequence} $\hat{s}_i = \{\hat{o}_{i_1}, \hat{o}_{i_2}, \dots\ \hat{o}_{i_n}\}$ whose symbols are identical to those in the original perception $s_i$;
\item Reward and parameter update: by comparing $s_i$ and $\hat{s}_i$, we take the cross-entropy between them as the reward for both listener and speaker and update parameters of both speaker and listener with respect to it.\footnote{Different ways of updating parameters are introduced in Section \ref{sec3.2:models}.}
\end{enumerate}
One thing that needs to be highlighted is that the perceptions $s_i$ are sets and thus the order of objects does not make any difference. Further, we argue that the only important feature that needs to be transmitted is actually the number of each kind of object, which corresponds to the function of numerals in natural language.
\subsection{Functions of Numerals in the Game}
\label{ssec3.1.2:numeral_in_game}
Broadly speaking, numerals are words that can describe numerical quantities and usually act as determiners to specify the quantities of nouns, e.g. ``two dogs'' and ``three people''. Also, under most scenarios, numerals correspond to non-referential concepts \cite{da2016wow}. Considering the objective of listeners $L$ in our language game, we define a numeral as a symbol $t^n$ at \textbf{position} $i$ indicating a function that reconstructs some object $o_i$ exactly $n$ times:
\begin{equation}
t^n: o_i \rightarrow \{\overbrace{o_i, \dots, o_i}^{n \mbox{ elements}}\}
\label{eq:3.1numeral_define}
\end{equation}
Note that the meaning of a symbol is decided not only by itself but also by its position in the message, as $L$ would encode the meanings of symbols according to where they appear in messages.
From the side of speakers $S$, a numeral is defined as a symbol $t^n$ at \textbf{position} $i$ that represents the number of a specific object $o_i$, as we cannot tell whether agents realise that the meanings of symbols are not related to their positions in the messages without a specifically designed model architecture. Thus, we expect that speaker $S$ would first learn to count the number of different objects and then encode these counts into a sequence of discrete symbols. As \cite{Siegelmann1992NN} shows that Recurrent Neural Networks (RNNs) are Turing-complete, and the Long Short-Term Memory (LSTM) model proposed by \cite{hochreiter1997long} is a superset of the RNN, it is safe to claim that the LSTM is also Turing-complete and thus capable of counting the numbers of objects.
\subsection{A Variant: Set-Select Game}
\label{ssec:3.1.3:refer_game}
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{graphs/setrefer_game.pdf}
\caption{Diagram of Referential Game Playing Procedure.}
\label{fig3:refer_game_procedure}
\end{figure}
We illustrate the Set-Select game, a referential variant of the Set-Reconstruct game, in Figure \ref{fig3:refer_game_procedure} given above. The only difference is that listeners need to select the correct set of objects among several distractors\footnote{A distractor is a set that contains different numbers of objects from the correct one.} instead of reconstructing it.
\section{Proposed Models}
\label{sec3.2:models}
\begin{figure}[!h]
\centering
\includegraphics[width=0.9\textwidth]{graphs/set2seq2_.pdf}
\caption{Overall Diagram of Model Architectures for Playing Games.}
\label{fig4:model_arch}
\end{figure}
We illustrate the overall architecture of our models in Figure \ref{fig4:model_arch} given above.
A speaker $S$ consists of 2 components: i) a set encoder that takes a set of objects as input and outputs its vector representation $h_s^s$; ii) a standard LSTM sequence decoder that can generate a message $t_{i_1}, t_{i_2}, t_{i_3}, \dots$ based on $h_s^s$.
As for a listener $L$, it would first encode messages with an LSTM sequence encoder to get the feature vector $h^l_m$. Then, in the Set-Reconstruct game, $L$ would take $h^l_m$ as the initial hidden state and predict a sequence of objects with an LSTM sequence decoder, as shown in the upper right part of Figure \ref{fig4:model_arch}. In the Set-Select game, $L$ would compare $h^l_m$ with several sets encoded by the set encoder of $L$, and select the one shown to $S$ based on the dot products between $h^l_m$ and the feature vectors of all candidate sets.
Further details are shown in the following subsections.
\subsection{Speaker}
\label{ssec3.2.1:speaker}
The architecture of our speaking agents is very similar to the Seq-to-Seq model proposed by \cite{sutskever2014sequence}, except that we replace the encoder for input sequences with a set encoder, whose details are introduced in the following subsubsection. As the Seq-to-Seq model is quite popular nowadays, we skip the details of how to generate sequences, which correspond to the messages in our games, and focus on how to encode sets of objects.
\subsubsection{Set Encoder}
\label{sssec3.2.1.1:set_encoder}
Our set encoder shares almost the same architecture for inputting sets as the one proposed by \cite{vinyals2015order}. However, as the addition in the $softmax$ function would introduce a counting bias into the feature representation of sets, we replace equation (5) in \cite{vinyals2015order} with the following operation in order to avoid exposing a counting system to the models:
\begin{equation}
a_{i,t} = \sigma(e_{i,t})
\label{eq3.2.1.1:sigmoid_to_replace_softmax}
\end{equation}
where $\sigma$ is sigmoid function.
Thus, assume that the input for speaker $S$ is a set $s_i=\{o_{i_1}, o_{i_2}, \dots\ o_{i_n}\}$. The first step is to read the set $s_i$ as a sequence and project all objects to dense vectors with an embedding layer. Based on the sequence $\{w_{i_1}^s, w_{i_2}^s, \dots\ w_{i_n}^s\}$ (where $w_{i_k}^s$ is the embedding vector of $o_{i_k}$ for the speaker and $k\in \{1, \dots, n\}$), the calculation of $h_s^s$ can be given as follows:
\begin{equation}
\begin{split}
e_{i, t}^s & = f(q_{t-1}^s, w_i^s) \\
a_{i, t}^s & = \sigma(e_{i,t}^s) \\
r_t^s & = \sum_i a_{i,t}^s w_i^s \\
q_t^s &= LSTM(r_t, q_{t-1}^s, c_{t-1}^s)
\end{split}
\label{eq3.2.1.2:speaker_hidden_calculation}
\end{equation}
where $t\in \{1, \dots, T\}$ indexes the attention steps, $f$ is an affine layer, and $q^s_{t}$ and $c^s_t$ are the hidden and cell states of the LSTM, respectively.
Besides, in our implementation, the number of attention steps $T$ is set to be the same as the number of object types, as we want to help models represent the number of each kind of object as a feature in the vector representation of the input set.
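For concreteness, the following is a minimal PyTorch-style sketch of the sigmoid-attention set encoder described by Equation (\ref{eq3.2.1.2:speaker_hidden_calculation}); it is illustrative only, and the class and argument names are assumptions rather than the names used in our actual implementation.
\begin{verbatim}
# Minimal sketch of the sigmoid-attention set encoder (illustrative only).
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    def __init__(self, n_object_types, embed_dim, hidden_dim, n_steps):
        super().__init__()
        self.embed = nn.Embedding(n_object_types, embed_dim)
        self.affine = nn.Linear(hidden_dim + embed_dim, 1)   # f(q_{t-1}, w_i)
        self.cell = nn.LSTMCell(embed_dim, hidden_dim)
        self.n_steps = n_steps                               # T attention steps

    def forward(self, obj_ids):
        # obj_ids: LongTensor (set_size,) listing the objects in the input set
        w = self.embed(obj_ids)                              # (set_size, embed_dim)
        q = w.new_zeros(1, self.cell.hidden_size)
        c = w.new_zeros(1, self.cell.hidden_size)
        for _ in range(self.n_steps):
            e = self.affine(torch.cat([q.expand(w.size(0), -1), w], dim=-1))
            a = torch.sigmoid(e)                             # sigmoid, not softmax
            r = (a * w).sum(dim=0, keepdim=True)             # weighted sum r_t
            q, c = self.cell(r, (q, c))
        return q                                             # feature vector h_s^s

# e.g. SetEncoder(3, 8, 16, 3)(torch.tensor([0, 0, 1])) encodes the set {A, A, B}
\end{verbatim}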
\subsubsection{Message Generator}
\label{sssec3.2.1.2:msg_generator}
To generate the message $m_i$, we follow \cite{havrylov2017emergence} and adopt an LSTM-based sequence decoder with 2 different kinds of sampling mechanisms: i) direct sampling, which directly samples from the corresponding categorical distribution specified by $softmax(Wh_k^s + b)\ \forall k\in \{1, 2, \dots, |M|\}$; ii) the Gumbel-softmax estimator proposed by \cite{jang2016categorical} with the straight-through trick introduced by \cite{bengio2013estimating}. Besides, the learning mechanisms also vary for these 2 different sampling methods, which is further discussed in Subsection \ref{ssec3.2.3:loss_learning}.
Note that the length of each message $m_i$ is fixed to $|M|$, and the symbols $t_{i_1},\dots,t_{i_{|M|}}$ are all one-hot vectors that represent different discrete symbols. The effect of the vocabulary size $|V|$ and the message length $|M|$ on the emergent language is further discussed in Chapter \ref{ch4:results_analysis}.
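As an illustration of the second sampling mechanism, the sketch below shows straight-through Gumbel-softmax sampling of a single message symbol in PyTorch; it is a minimal example under assumed tensor shapes, not the code of our actual decoder.
\begin{verbatim}
# Sketch of straight-through Gumbel-softmax sampling for one message position.
import torch
import torch.nn.functional as F

def sample_symbol(logits, tau=1.0):
    # logits: (|V|,) unnormalised scores, e.g. W h_k + b, for one position
    y_soft = F.gumbel_softmax(logits, tau=tau, hard=False)
    index = y_soft.argmax(dim=-1)
    y_hard = F.one_hot(index, num_classes=logits.size(-1)).to(y_soft.dtype)
    # straight-through: one-hot on the forward pass, soft gradients backward
    return y_hard + y_soft - y_soft.detach()

# F.gumbel_softmax(logits, tau=tau, hard=True) performs the same trick in one call.
\end{verbatim}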
\subsection{Listener}
\label{ssec3.2.2:listeners}
The architectures of listening agents are specifically designed for handling different kinds of tasks/games and thus vary from Set-Reconstruct game to Set-Select game.
\noindent\textbf{Listener in Set-Reconstruct Game}: The listener in the Set-Reconstruct game has exactly the same architecture as the Seq-to-Seq model proposed by \cite{sutskever2014sequence}. When combined with the speaker model, the overall model is called ``Set2Seq2Seq''.
\noindent\textbf{Listener in Set-Select Game}: The listener in the Set-Select game would also first encode messages with an LSTM, as in the standard Seq-to-Seq model. However, as it needs to select among several candidates, it also needs to encode all these sets with the set encoder introduced in Subsection \ref{sssec3.2.1.1:set_encoder}. Then, the listener would make predictions based on the dot products between the message embedding $h^l_m$ and the embeddings of each candidate set of objects. Similarly, when combined with the speaker model, the overall model is called ``Set2Seq2Choice''.
\subsection{Loss/Reward and Learning}
\label{ssec3.2.3:loss_learning}
\textbf{In Set-Reconstruct game}, as the predictions of listeners are a sequence of objects $\hat{s}_i=\{\hat{o}_{i_1}, \dots, \hat{o}_{i_n}\}$, we use cross-entropy between the original set and the predicted sequence as the objective function that needs to be minimised. Formally,
\begin{equation}
\mathcal{L}_{\theta^S, \theta^L}(o_{i_1}, \dots, o_{i_n}) =\mathbb{E}_{m_i\sim p_{\theta^S}(\cdot|s_i)} \left[ -\sum_{k=1}^{n} o_{i_k} \log(p(\hat{o}_{i_k}|m_i, \hat{o}_{-i_k})) \right]
\label{eq3.2.3.1:cross_entropy_seq}
\end{equation}
where $\hat{o}_{-i_k}$ represent all predicted objects preceding $\hat{o}_{i_k}$.
\noindent\textbf{In Set-Select game}, we still use the cross-entropy, here between the correct candidate and the listener's predicted distribution over candidates, as the loss to minimise, i.e.
\begin{equation}
\mathcal{L}_{\theta^S, \theta^L}(s_i) = \mathbb{E}_{m_i\sim p_{\theta^S}(\cdot|s_i)} \left[-\sum_{k=1}^{C} s_i log(p(c_k)) \right]
\label{eq3.2.3.2:cross_entropy_choose}
\end{equation}
where $c_k$ is the predicted logit score for candidate $k$ among $C$ candidates.
In the case that we use Gumbel-softmax to approximate sampling messages from speaker $S$, parameters $\theta^S$ and $\theta^L$ are learnt by back-propagation. In the case that we use direct sampling, $\theta^L$ is still learnt by back-propagation, whereas $\theta^S$ is learnt by the REINFORCE estimator \cite{williams1992simple} with cross-entropy scores as rewards.
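A minimal sketch of the speaker update under direct sampling is given below; it assumes the reward is simply the negative listener loss and, apart from an optional constant baseline, omits variance-reduction details.
\begin{verbatim}
# Sketch of the REINFORCE loss for the speaker under direct sampling.
import torch

def speaker_reinforce_loss(log_probs, reward, baseline=0.0):
    # log_probs: (|M|,) log-probabilities of the sampled message symbols
    # reward:    scalar tensor, e.g. the negative listener cross-entropy
    advantage = (reward - baseline).detach()   # no gradient through the reward
    return -advantage * log_probs.sum()        # minimising this maximises E[reward]
\end{verbatim}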
\subsection{Neural Iterated Learning}
\label{ssec3.2.4:iterated_learning}
The evolutionary linguistics community has studied the origins and metrics of language compositionality since \cite{kirby2002emergence}, which gives a cultural evolutionary account of the origins of compositionality and proposes iterated learning to model this procedure. Thus, to facilitate the emergence of compositionality in the autonomous communication between agents, we also trained our agents in an iterated learning fashion. In the original iterated learning, an agent can both speak and listen. However, in this project, an agent can be either a speaker or a listener, not both at the same time. Thus, we slightly transform the iterated learning framework and call the following one ``neural iterated learning'' (NIL).
Following the overall architecture of iterated learning, we also train agents generation by generation. At the beginning of each generation $t$, we would randomly re-instantiate a new speaker $S_t$ and a new listener $L_t$ and then execute the following 3 phases (a minimal sketch of the resulting training loop is given after the list):
\begin{enumerate}
\item \textbf{Speaker Learning phase}: During this phase, we would train $S_t$ with the set-message pairs generated by $S_{t-1}$, and the number of epochs is set to be fixed. Note that there is no such phase in the initial generation, as there is no set-message pair for training $S_t$.
\item \textbf{Game Playing phase}: During this phase, we would let $S_t$ and $L_t$ cooperate to complete the game and update $\theta^S_t$ and $\theta^L_t$ with the loss/reward illustrated in the previous section, and use early stopping to avoid overfitting.
\item \textbf{Knowledge Generation phase}: During this phase, we would feed all $s_i$ in the training set into $S_t$ and get the corresponding messages $m_i$. Then, we would keep the sampled ``language'' for $S_{t+1}$ to learn.
\end{enumerate}
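The following skeleton summarises the NIL loop described above; every callable passed in (agent construction, speaker pre-training, game playing and message sampling) is a placeholder for the corresponding phase and does not refer to a function in our actual code base.
\begin{verbatim}
# Skeleton of neural iterated learning (NIL); all callables are placeholders.
def neural_iterated_learning(train_sets, n_generations, new_agents,
                             pretrain_speaker, play_game, speak):
    language = None                              # set-message pairs from generation t-1
    for t in range(n_generations):
        speaker, listener = new_agents()         # re-instantiate agents each generation
        if language is not None:
            pretrain_speaker(speaker, language)  # 1. Speaker Learning phase
        play_game(speaker, listener, train_sets) # 2. Game Playing phase
        language = [(s, speak(speaker, s))       # 3. Knowledge Generation phase
                    for s in train_sets]
    return speaker, listener
\end{verbatim}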
\subsection{Baseline Models}
\label{ssec3.2.5:baselines}
To get the upper bounds of our multi-agent communication systems, we remove the communication between speaker and listener to obtain the baseline models.
In the Set-Reconstruct game, our baseline is the Set-to-Seq model, which first encodes the input set $s_i$ with the set encoder introduced in Subsection \ref{sssec3.2.1.1:set_encoder} and then directly generates the predicted sequence $\hat{s}_i$ following the sequence generation in the standard Seq-to-Seq model.
As for the Set-Select game, our baseline is the Set-to-Choose model, in which the speaker directly transmits the representation vector $h^s_s$ of the set $s_i$ to the listener, and the listener compares $h^s_s$ with all candidate sets to make a selection.
\section{Compositionality and Metrics}
\label{sec3.3:metrics}
With the recent rapid development of grounded language learning, measuring the compositionality of emergent communication protocols attracts more and more attention, e.g. \cite{andreas2019measuring}, \cite{lowe2019pitfalls}.
First of all, to better define compositionality, we argue that if a language is said to be perfectly compositional, then it should satisfy the following 2 properties:
\begin{itemize}
\item \textbf{Mutual Exclusivity}: Symbols describing different values of the same property should be mutually exclusive. For example, ``green'' and ``red'' are both used to describe the colour of an object and they should not appear at the same time, as an object cannot be green and red at the same time.
\item \textbf{Orthogonality}: Appearance of symbols for describing a property should be independent from the appearance of symbols used to describe another property. For example, the appearance of symbols used for describing colours of objects should be independent from the appearance of symbols used for describing shapes of objects.
\end{itemize}
As the setting of our game is simple and the space size is limited, we follow \cite{brighton2006understanding} and take the topological similarity between meaning space (space of all sets of objects) and message space as the metric of compositionality. Briefly speaking, as much of language is neighbourhood related, i.e. nearby meanings tend to be mapped to nearby messages, the compositionality of language can be measured as the correlation degree between distances of meanings and distances of corresponding messages. For example, the meaning of set $\{A,A,A,B,B\}$ is closer to $\{A,A,B,B\}$ than $\{A,A,A,A,B,B,B\}$. In natural language (which is perfectly compositional), messages for $\{A,A,A,B,B\}, \{A,A,B,B\}, \{A,A,A,A,B,B,B\}$ are ``3A2B'', ``2A2B'' and ``4A3B''\footnote{In Chapter \ref{ch4:results_analysis}, we would illustrate messages with lower case alphabets. To make them easier to understand, we use natural language here.} respectively. However, in a non-compositional language, the messages may be ``5B5A'', ``1C2E'' and ``3A4C'', which are randomly sampled mappings between meaning space and message space.
In order to calculate the topological compositionality, we need to define the distance metrics for the meaning space and the message space respectively. Thus, for an input set $s_i$, we could first count the number of each kind of object and then concatenate the Arabic numerals as the meaning sequence. Taking the set $s_i=\{A, A, A, B, B\}$ for example, the corresponding meaning sequence would be ``32'', as there are 3 $A$s and 2 $B$s in $s_i$.\footnote{Again, the appearing order of objects would not affect the meaning sequence of a set.} As for the message space, we have several different settings, which are further illustrated in Subsection \ref{ssec4.2.2:topo_sim}, and the edit distance as in \cite{brighton2006understanding} is also included.
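As an illustration, a minimal sketch of this computation is given below; using the edit distance on both spaces and the Pearson correlation is one possible choice here, not necessarily the exact setting used in our experiments.
\begin{verbatim}
# Sketch of topographic similarity: correlation between pairwise meaning
# distances and pairwise message distances (distance choices are illustrative).
from itertools import combinations
from scipy.stats import pearsonr

def edit_distance(a, b):
    # standard dynamic-programming Levenshtein distance
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def topographic_similarity(meanings, messages):
    # meanings: e.g. ["32", "22", "43"]; messages: emergent strings, same order
    pairs = list(combinations(range(len(meanings)), 2))
    d_meaning = [edit_distance(meanings[i], meanings[j]) for i, j in pairs]
    d_message = [edit_distance(messages[i], messages[j]) for i, j in pairs]
    corr, _ = pearsonr(d_meaning, d_message)
    return corr
\end{verbatim}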
Meanwhile, as we could perfectly encode the meaning of a set into natural language, we could take the speaker as a machine translation model that translates a meaning represented in natural language into emergent language invented by computational agents themselves. Inspired by this point of view, we could also use BLEU score proposed by \cite{papineni2002bleu} as a metric of semantic similarities between messages. For the sets that share more similar meanings, we expect their corresponding messages to share more uni-grams or bi-grams or so on. Following the above example, in a perfectly compositional language, as $\{A,A,A,B,B\}$ locates very close to $\{A,A,B,B\}$, their messages (``3A2B'' and ``2A2B'') share $3$ uni-grams (``A'', ``B'' and ``2'') and $2$ bi-grams (``A2'' and ``2B'') in common. However, in a non-compositional language, e.g. in which the messages for $\{A,A,A,B,B\}$ and $\{A,A,B,B\}$ are ``5B5A'' and ``1C2E'' respectively, the messages share no uni-gram and bi-gram in common.
In our case, the BLEU score between $m_i$ and $m_j$ is calculated as follows:
\begin{equation}
BLEU(m_i, m_j) = 1 - \sum_{n=1}^{N} \omega_n \cdot \frac{\mbox{Number of common } n\mbox{-grams}}{\mbox{Number of total different } n\mbox{-grams}}
\label{eq3.3.1:bleu_score}
\end{equation}
where $n$ is the size of $n$-grams and $\omega_n$ is the weight for similarity based on $n$-grams. In the following discussions, we would denote BLEU score based on $n$-grams as BLEU-$n$, e.g. BLEU score based on uni-grams would be represented as BLEU-1.
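A small sketch of this score is given below; it reads ``common $n$-grams'' as the $n$-grams shared by both messages and ``total different $n$-grams'' as the $n$-grams in their union, which is our assumed interpretation of Equation (\ref{eq3.3.1:bleu_score}), and the weights are only illustrative.
\begin{verbatim}
# Sketch of the n-gram overlap score of the BLEU-based metric
# ("common" = shared n-grams, "total different" = n-grams in the union).
def ngram_set(msg, n):
    return {tuple(msg[k:k + n]) for k in range(len(msg) - n + 1)}

def bleu_distance(m_i, m_j, weights=(0.5, 0.5)):
    score = 0.0
    for n, w in enumerate(weights, start=1):
        common = ngram_set(m_i, n) & ngram_set(m_j, n)
        total = ngram_set(m_i, n) | ngram_set(m_j, n)
        if total:
            score += w * len(common) / len(total)
    return 1.0 - score

# e.g. bleu_distance("3A2B", "2A2B") is small; bleu_distance("5B5A", "1C2E") is 1.0
\end{verbatim}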
| {
"alphanum_fraction": 0.7679204077,
"avg_line_length": 102.004950495,
"ext": "tex",
"hexsha": "5d8bedb3b166dc8e630fbe11f76ca6869d83c5f1",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ef9786e5bd6c8c456143ad305742340e510f5edb",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Shawn-Guo-CN/EmergentNumerals",
"max_forks_repo_path": "doc/dissertation/chapter3.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ef9786e5bd6c8c456143ad305742340e510f5edb",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Shawn-Guo-CN/EmergentNumerals",
"max_issues_repo_path": "doc/dissertation/chapter3.tex",
"max_line_length": 1133,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "ef9786e5bd6c8c456143ad305742340e510f5edb",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Shawn-Guo-CN/EmergentNumerals",
"max_stars_repo_path": "doc/dissertation/chapter3.tex",
"max_stars_repo_stars_event_max_datetime": "2019-08-18T18:11:28.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-08-16T21:37:55.000Z",
"num_tokens": 5359,
"size": 20605
} |
\section{Reliability Models}
\label{sec:ReliabilityModels}
\newcommand{\aliasRequiredParameterDescription}[1]
{
\xmlNode{#1}, \xmlDesc{string or float, required parameter}. See the above definition.
If a string was provided, the reliability model would treat it as an input variable that came
from another RAVEN entity. In this case, the variable must be listed in the subnode
\xmlNode{variables} under \xmlNode{ExternalModel}.
}
\newcommand{\aliasOptionalParameterDescription}[1]
{
\xmlNode{#1}, \xmlDesc{string or float, optional parameter}. See the above definition.
If a string was provided, the reliability model would treat it as an input variable that came
from another RAVEN entity. In this case, the variable must be listed in the subnode
\xmlNode{variables} under \xmlNode{ExternalModel}.
\default{1}
}
\textbf{Reliability Models} are the most frequently used models in life data analysis
and reliability engineering. These models/functions give the probability of a component
operating for a certain amount of time without failure. As such, the reliability models
are functions of time, in that every reliability value has an associated time value. In
other words, one must specify a time value with the desired reliability value. This degree
of flexibility makes the reliability model a much better reliability specification than the
mean time to failure (MTTF), which only represents one point along the entire reliability
model.
\subsection{The Probability Density and Cumulative Density Functions}
From probability and statistics, given a continuous random variable $X$, we denote:
\begin{itemize}
\item The probability density function (pdf), as $f(x)$
\item The cumulative density function (cdf), as $F(x)$.
\end{itemize}
If $x$ is a continuous random variable, then the probability that $x$ takes on a value in the
interval $[a,b]$ is the area under the pdf $f(x)$ from $a$ to $b$
\begin{equation}
P(a\leq x\leq b) = \int_{a}^{b} f(x)dx
\end{equation}
The cumulative distribution function is a function $F(x)$ of a random variable $x$ and is
defined for a number $x_0$ by
\begin{equation}
F(x_0) = P(x\leq x_0) = \int_{-\infty}^{x_0} f(s)ds
\end{equation}
That is, for a given value $x_0$, $F(x_0)$ is the probability that the observed value of $x$
would be, at most, $x_0$. The mathematical relationship between the pdf and cdf is given by:
\begin{equation}
F(x) = \int_{-\infty}^{x} f(s)ds
\end{equation}
Conversely
\begin{equation}
f(x) = \frac{dF(x)}{dx}
\end{equation}
The functions most commonly used in reliability engineering and life data analysis, namely the
reliability function and failure rate function, can be determined directly from the pdf definition,
or $f(t)$. Different distributions exist, such as Lognormal, Exponential, Weibull, etc., and each of
them has a predefined $f(t)$. These distributions were formulated by statisticians, mathematicians,
and engineers to mathematically model or represent certain behavior. Some distributions tend to better
represent life data and are most commonly referred to as lifetime distributions.
\subsection{The Reliability and Failure Rate Models}
Given the mathematical representation of a distribution, we can derive all functions needed
for reliability analysis (i.e., reliability models/functions). This would only depend on the value of $t$
after the values of the distribution parameters are estimated from data.
Now, let $T$ be the random variable defining the lifetime of the component with cdf $F(t)$, which is the
time the component would operate before failure. The cdf $F(t)$ of the random variable $T$ is given by
\begin{equation}
F(t) = \int_{-\infty}^{t} f(T)dT
\end{equation}
If $F(t)$ is a differentiable function, the pdf $f(t)$ is given by
\begin{equation}
f(t) = \frac{dF(t)}{dt}
\end{equation}
The reliability function or survival function $R(t)$ of the component is given by
\begin{equation}
R(t) = P(T>t) = 1 - P(T\leq t) = 1-F(t)
\end{equation}
This is the probability that the component would operate after time $t$, sometimes called the survival probability.
The failure rate of a system during the interval $[t,t+\Delta t]$ is the probability that a failure per
unit time occurs in the interval, given that a failure has not occurred prior to $t$, the beginning of the
interval. The failure rate function (i.e., instantaneous failure rate, conditional failure rate), or the hazard
function, is defined as the limit of the failure rate as the interval approaches zero
\begin{equation}
\lambda (t)= \lim_{\Delta t\rightarrow 0} \frac{F(t+\Delta t) - F(t)}{\Delta tR(t)}
= \frac{1}{R(t)} \lim_{\Delta t\rightarrow 0} \frac{F(t+\Delta t) - F(t)}{\Delta t}
= \frac{1}{R(t)}\frac{dF(t)}{dt} = \frac{f(t)}{R(t)}
\end{equation}
The failure rate function is the rate of change of the conditional probability of a failure at time $t$.
It measures the likelihood that a component that has operated up until time $t$ fails in the next
instant of time.
Generally, $\lambda (t)$ is the one tabulated because it is measured experimentally and because it tends to
vary less rapidly with time than the other functions. When $\lambda (t)$ is given, the other three
functions, $F(t)$, $f(t)$, and $R(t)$, can be computed as follows
\begin{equation}
R(t) = \exp(-\int_{0}^{t} \lambda (s)ds)
\end{equation}
\begin{equation}
f(t) = \lambda (t)R(t) = \lambda (t)\exp(-\int_{0}^{t} \lambda (s)ds)
\end{equation}
\begin{equation}
F(t) = 1 - R(t) = 1 - \exp(-\int_{0}^{t} \lambda (s)ds)
\end{equation}
The mean time between failure (MTBF) can be obtained by finding the expected value of the random variable
$T$, time to failure. Hence
\begin{equation}
MTBF = E(T) = \int_{0}^{\infty} tf(t)dt = \int_{0}^{\infty} R(t)dt
\end{equation}
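For illustration, the sketch below evaluates these relations numerically for a user-supplied failure rate function; it is not part of SR2ML, and the constant-rate example at the end is only a sanity check of the implementation.
\begin{lstlisting}[language=Python]
# Numerical sketch: recover R(t), F(t), f(t) and the (truncated) MTBF
# from a failure rate function lambda(t) on a time grid starting at 0.
import numpy as np

def reliability_quantities(lam, t):
    rate = lam(t)
    increments = 0.5 * (rate[1:] + rate[:-1]) * np.diff(t)       # trapezoidal rule
    cum_hazard = np.concatenate(([0.0], np.cumsum(increments)))  # int_0^t lambda(s) ds
    R = np.exp(-cum_hazard)                                      # reliability function
    F = 1.0 - R                                                  # cdf
    f = rate * R                                                 # pdf
    mtbf = np.sum(0.5 * (R[1:] + R[:-1]) * np.diff(t))           # int_0^tmax R(t) dt
    return R, F, f, mtbf

t = np.linspace(0.0, 50.0, 2001)
R, F, f, mtbf = reliability_quantities(lambda s: np.full_like(s, 0.1), t)
# constant rate 0.1 -> mtbf close to 1/0.1 = 10 (up to truncation of the integral)
\end{lstlisting}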
\subsection{The Lifetime Distributions or Aging Models}
We would consider several of the most useful reliability models based on different probability
distributions for describing the failure of continuous operating devices, including:
\begin{itemize}
\item Exponential, model \xmlAttr{type} is \xmlString{exponential}
\item Erlangian, model \xmlAttr{type} is \xmlString{erlangian}
\item Gamma, model \xmlAttr{type} is \xmlString{gamma}
\item Lognormal, model \xmlAttr{type} is \xmlString{lognorm}
\item Fatigue Life, model \xmlAttr{type} is \xmlString{fatiguelife}
\item Weibull, model \xmlAttr{type} is \xmlString{weibull}
\item Exponential Weibull, model \xmlAttr{type} is \xmlString{exponweibull}
\item Bathtub, model \xmlAttr{type} is \xmlString{bathtub}
\item Power Law, model \xmlAttr{type} is \xmlString{powerlaw}
\item Log Linear, model \xmlAttr{type} is \xmlString{loglinear}.
\end{itemize}
The specifications of these models must be defined within a RAVEN \xmlNode{ExternalModel}. This
XML node accepts the following attributes:
\begin{itemize}
\item \xmlAttr{name}, \xmlDesc{required string attribute}, user-defined identifier of this model.
\nb As with other objects, this identifier can be used to reference this specific entity from other
input blocks in the XML.
\item \xmlAttr{subType}, \xmlDesc{required string attribute}, defines which of the subtypes should
be used. For reliability models, the user must use \xmlString{SR2ML.ReliabilityModel} as the subtype.
\end{itemize}
In the reliability \xmlNode{ExternalModel} input block, the following XML subnodes are required:
\begin{itemize}
\item \xmlNode{variables}, \xmlDesc{string, required parameter}. Comma-separated list of variable
names. Each variable name needs to match a variable used or defined in the reliability model or a variable
coming from another RAVEN entity (i.e. Samplers, DataObjects and Models).
\nb For all the reliability models, the following output variables would be available. If the user
adds these output variables in the node \xmlNode{variables}, they would also be available
for use anywhere in the RAVEN input to refer to the reliability model output variables.
\begin{itemize}
\item \xmlString{pdf\_f}, variable contains the calculated pdf value or values at given time instance(s),
(i.e., a series of times).
\item \xmlString{cdf\_F}, variable contains the calculated cdf value or values at given time instance(s),
(i.e., a series of times).
\item \xmlString{rdf\_R}, variable contains the calculated reliability function (rdf) value or values at
given time instance(s), (i.e., a series of times).
\item \xmlString{frf\_h}, variable contains the calculated failure rate function (frf) value or values
at given time instance(s), (i.e., a series of times).
\end{itemize}
\nb When the external model variables are defined, at run time, RAVEN initializes
them and tracks their values during the simulation.
\item \xmlNode{ReliabilityModel}, \xmlDesc{required parameter}. The node is used to define the reliability
model, and it contains the following required XML attribute:
\begin{itemize}
\item \xmlAttr{type}, \xmlDesc{required string attribute}, user-defined identifier of the reliability model.
\nb the types for different reliability models can be found at the beginning of this section.
\end{itemize}
In addition, this node accepts several different subnodes representing the model parameters depending on the
\xmlAttr{type} of the reliability model. The common subnodes for all reliability models are:
\begin{itemize}
\item \xmlNode{Tm}, \xmlDesc{string or float or comma-separated float, required parameter}. Time instance(s)
that the reliability models would use to compute the pdf, cdf, rdf, and frf values. If a string was provided,
the reliability model would treat it as an input variable that came from entities of RAVEN. In this
case, the variable must be listed in the sub-node \xmlNode{variables} under \xmlNode{ExternalModel}.
\item \xmlNode{Td}, \xmlDesc{string or float, optional parameter}. The time that the reliability models start to be active.
If a string was provided, the reliability model would treat it as an input variable that came
from RAVEN entities. In this case, the variable must be listed in the sub-node \xmlNode{variables}
under \xmlNode{ExternalModel}.
\default{0.}
\end{itemize}
\end{itemize}
In addition, if the user wants to use the \textbf{alias} system, the following XML block can be input:
\begin{itemize}
\item \xmlNode{alias} \xmlDesc{string, optional field} specifies an alias for
any variable of interest in the input or output space for the ExternalModel.
%
These aliases can be used anywhere in the RAVEN input to refer to the ExternalModel
variables.
%
In the body of this node, the user specifies the name of the variable that the ExternalModel is
going to use (during its execution).
%
The actual alias, usable throughout the RAVEN input, is instead defined in the
\xmlAttr{variable} attribute of this tag.
\\The user can specify aliases for both the input and the output space. As a sanity check, RAVEN
requires an additional required attribute \xmlAttr{type}. This attribute can be either ``input'' or ``output.''
%
\nb The user can specify as many aliases as needed.
%
\default{None}
\end{itemize}
Example XML (Bathtub Reliability Model):
\begin{lstlisting}[style=XML]
<ExternalModel name="bathtub" subType="SR2ML.ReliabilityModel">
<variables>cdf_F, pdf_f, rdf_R, frf_h, tm</variables>
<!-- xml portion for this plugin only -->
<ReliabilityModel type="bathtub">
<!-- scale parameter -->
<beta>1.</beta>
<theta>1.0</theta>
<!-- mission time -->
<Tm>tm</Tm>
<!-- shape parameter -->
<alpha>1.0</alpha>
<rho>0.5</rho>
<!-- weight parameter -->
<c>0.5</c>
</ReliabilityModel>
<!-- alias can be used to represent any input/output variables -->
<alias variable='bathtub_F' type='output'>cdf_F</alias>
<alias variable='bathtub_f' type='output'>pdf_f</alias>
<alias variable='bathtub_R' type='output'>rdf_R</alias>
<alias variable='bathtub_h' type='output'>frf_h</alias>
</ExternalModel>
\end{lstlisting}
\subsubsection{The Lognormal Model}
The probability density function of the lognormal is given by
\begin{equation}
f(T_m) = \frac{1}{\alpha \left(T_m-T_d\right)\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{\ln{\frac{T_m-T_d}{\beta}}}{\alpha}\right)^2\right)
\end{equation}
where $T_m\geq T_d$, $T_d, \alpha, \beta >0$, $\beta$ is the scale parameter, $\alpha$ is the shape
parameter, and $T_d$ is the location parameter.
This model accepts the following additional sub-nodes:
\begin{itemize}
\item \aliasRequiredParameterDescription{alpha}
\item \aliasOptionalParameterDescription{beta}
\end{itemize}
Example XML:
\begin{lstlisting}[style=XML]
<ExternalModel name="lognorm" subType="SR2ML.ReliabilityModel">
<variables>cdf_F, pdf_f, rdf_R, frf_h, tm</variables>
<ReliabilityModel type="lognorm">
<!-- scale parameter -->
<beta>1.</beta>
<!-- mission time -->
<Tm>tm</Tm>
<!-- shape parameter -->
<alpha>1.</alpha>
</ReliabilityModel>
</ExternalModel>
\end{lstlisting}
\subsubsection{The Exponential Model}
The probability density function of the exponential is given by
\begin{equation}
f(T_m) = \lambda\exp\left(-\lambda\left(T_m-T_d\right)\right)
\end{equation}
where $T_m\geq T_d$, $T_d, \lambda >0$, $\lambda$ is the mean failure rate or the inverse of scale parameter,
and $T_d$ is the location parameter.
This model accepts the following additional subnodes:
\begin{itemize}
\item \aliasRequiredParameterDescription{lambda}
\end{itemize}
Example XML:
\begin{lstlisting}[style=XML]
<ExternalModel name="exponential" subType="SR2ML.ReliabilityModel">
<variables>cdf_F, pdf_f, rdf_R, frf_h, tm</variables>
<!-- xml portion for this plugin only -->
<ReliabilityModel type="exponential">
<!-- mean failure rate -->
<lambda>1.</lambda>
<!-- mission time -->
<Tm>tm</Tm>
</ReliabilityModel>
</ExternalModel>
\end{lstlisting}
\subsubsection{The Weibull Model}
The probability density function of the Weibull is given by
\begin{equation}
f(T_m) = \frac{\alpha}{\beta}\left(\frac{T_m-T_d}{\beta}\right)^{\alpha-1}\exp\left(-\left(\frac{T_m-T_d}{\beta}\right)^\alpha\right)
\end{equation}
where $T_m\geq T_d$, $T_d, \alpha, \beta >0$, and $\beta$ is the scale parameter, $\alpha$ is the shape
parameter, and $T_d$ is the location parameter.
This model accepts the following additional subnodes:
\begin{itemize}
\item \aliasRequiredParameterDescription{alpha}
\item \aliasOptionalParameterDescription{beta}
\end{itemize}
Example XML:
\begin{lstlisting}[style=XML]
<ExternalModel name="weibull" subType="SR2ML.ReliabilityModel">
<variables>cdf_F, pdf_f, rdf_R, frf_h, tm</variables>
<!-- xml portion for this plugin only -->
<ReliabilityModel type="weibull">
<!-- scale parameter -->
<beta>1.</beta>
<!-- mission time -->
<Tm>tm</Tm>
<!-- time delay -->
<Td>2.0</Td>
<!-- shape parameter -->
<alpha>1.0</alpha>
</ReliabilityModel>
</ExternalModel>
\end{lstlisting}
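As a sanity check outside of RAVEN, the Weibull quantities above can also be evaluated with \texttt{scipy}; the following sketch is illustrative only and is not how SR2ML computes them internally.
\begin{lstlisting}[language=Python]
# Illustrative cross-check of the Weibull model (alpha = shape, beta = scale,
# Td = location); parameter values follow the XML example above.
import numpy as np
from scipy.stats import weibull_min

alpha, beta, Td = 1.0, 1.0, 2.0
tm = np.linspace(2.0, 10.0, 5)                    # mission times, tm >= Td

dist = weibull_min(c=alpha, loc=Td, scale=beta)
pdf_f = dist.pdf(tm)                              # f(Tm)
cdf_F = dist.cdf(tm)                              # F(Tm)
rdf_R = dist.sf(tm)                               # R(Tm) = 1 - F(Tm)
frf_h = np.divide(pdf_f, rdf_R,
                  out=np.zeros_like(pdf_f), where=rdf_R > 0)   # lambda(Tm)
\end{lstlisting}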
\subsubsection{The Erlangian Model}
The probability density function of the Erlangian is given by
\begin{equation}
f(T_m) = \frac{\lambda\left(\lambda T_m\right)^{k-1}\exp\left(-\lambda T_m\right)}{\left(k-1\right)!}
\end{equation}
where $T_m\geq T_d$, $T_d, \lambda >0$, and $\lambda$ is the inverse of the scale parameter, $k$ is a positive integer
that controls the shape, and $T_d$ is the location parameter.
This model accepts the following additional subnodes:
\begin{itemize}
\item \aliasRequiredParameterDescription{lambda}
\item \aliasOptionalParameterDescription{k}
\nb $k$ is a positive integer. If a float was provided, a warning would be raised.
\end{itemize}
Example XML:
\begin{lstlisting}[style=XML]
<ExternalModel name="erlangian" subType="SR2ML.ReliabilityModel">
<variables>cdf_F, pdf_f, rdf_R, frf_h, tm</variables>
<!-- xml portion for this plugin only -->
<ReliabilityModel type="erlangian">
<!-- mean failure rate -->
<lambda>0.1</lambda>
<!-- mission time -->
<Tm>tm</Tm>
<!-- shape parameter -->
<k>2</k>
</ReliabilityModel>
</ExternalModel>
\end{lstlisting}
\subsubsection{The Gamma Model}
The probability density function of the Gamma is given by
\begin{equation}
f(T_m) = \frac{\beta \left(\beta \left(T_m-T_d\right)\right)^{\alpha-1}\exp\left(-\beta\left(T_m-T_d\right)\right)}{\Gamma \left(\alpha\right)}
\end{equation}
where $T_m\geq T_d$, $T_d, \alpha, \beta >0$, and $\beta$ is the inverse of scale parameter, $\alpha$ is the shape
parameter, and $T_d$ is the location parameter.
This model accepts the following additional subnodes:
\begin{itemize}
\item \aliasRequiredParameterDescription{alpha}
\item \aliasOptionalParameterDescription{beta}
\end{itemize}
Example XML:
\begin{lstlisting}[style=XML]
<ExternalModel name="gamma" subType="SR2ML.ReliabilityModel">
<variables>cdf_F, pdf_f, rdf_R, frf_h, tm</variables>
<!-- xml portion for this plugin only -->
<ReliabilityModel type="gamma">
<!-- rate parameter -->
<beta>0.1</beta>
<!-- mission time -->
<Tm>tm</Tm>
<!-- shape parameter -->
<alpha>2.</alpha>
</ReliabilityModel>
</ExternalModel>
\end{lstlisting}
\subsubsection{The Fatigue Life Model (Birnbaum-Saunders)}
The probability density function of the fatigue life is given by
\begin{equation}
f(T_m) = \frac{\frac{T_m-T_d}{\beta}+1}{2\alpha\sqrt{2\pi\left(\frac{T_m-T_d}{\beta}\right)^3}}
\exp\left(-\frac{\left(\frac{T_m-T_d}{\beta}-1\right)^2}{2\left(\frac{T_m-T_d}{\beta}\right)\alpha^2}\right)
\end{equation}
where $T_m\geq T_d$, $T_d, \alpha, \beta >0$, and $\beta$ is the scale parameter, $\alpha$ is the shape
parameter, and $T_d$ is the location parameter.
This model accepts the following additional subnodes:
\begin{itemize}
\item \aliasRequiredParameterDescription{alpha}
\item \aliasOptionalParameterDescription{beta}
\end{itemize}
Example XML:
\begin{lstlisting}[style=XML]
<ExternalModel name="fatiguelife" subType="SR2ML.ReliabilityModel">
<variables>cdf_F, pdf_f, rdf_R, frf_h, tm</variables>
<!-- xml portion for this plugin only -->
<ReliabilityModel type="fatiguelife">
<!-- scale parameter -->
<beta>1.</beta>
<!-- mission time -->
<Tm>tm</Tm>
<!-- shape parameter -->
<alpha>1.0</alpha>
</ReliabilityModel>
</ExternalModel>
\end{lstlisting}
\subsubsection{The Exponentiated Weibull Model}
The probability density function of the exponentiated Weibull is given by
\begin{equation}
f(T_m) = \gamma\alpha\left(1-\exp\left(-\left(\frac{T_m-T_d}{\beta}\right)^\alpha\right)\right)^{\gamma-1}
\left(\frac{T_m-T_d}{\beta}\right)^{\alpha-1}\exp\left(-\left(\frac{T_m-T_d}{\beta}\right)^\alpha\right)
\end{equation}
where $T_m\geq T_d$, $T_d, \alpha, \beta, \gamma>0$, and $\beta$ is the scale parameter, $\alpha$ and $\gamma$ are the shape
parameters, and $T_d$ is the location parameter.
This model accepts the following additional subnodes:
\begin{itemize}
\item \aliasRequiredParameterDescription{alpha}
\item \aliasOptionalParameterDescription{beta}
\item \aliasRequiredParameterDescription{gamma}
\end{itemize}
Example XML:
\begin{lstlisting}[style=XML]
<ExternalModel name="exponweibull" subType="SR2ML.ReliabilityModel">
<variables>cdf_F, pdf_f, rdf_R, frf_h, tm</variables>
<!-- xml portion for this plugin only -->
<ReliabilityModel type="exponweibull">
<!-- scale parameter -->
<beta>1.</beta>
<!-- mission time -->
<Tm>tm</Tm>
<!-- time delay -->
<Td>2.0</Td>
<!-- shape parameter -->
<alpha>1.0</alpha>
<gamma>0.5</gamma>
</ReliabilityModel>
</ExternalModel>
\end{lstlisting}
\subsubsection{The Bathtub Model}
The reliability function is given by:
\begin{equation}
R(T_m) = \exp\left(-c\beta\left(\frac{T_m-T_d}{\beta}\right)^\alpha -(1-c)\left(\exp\left(\frac{T_m-T_d}{\theta}\right)^\rho -1\right)\right)
\end{equation}
The failure rate function is given by
\begin{equation}
\lambda(T_m) = c\alpha\left(\frac{T_m-T_d}{\beta}\right)^{\alpha-1}+(1-c)\rho\left(\frac{T_m-T_d}{\theta}\right)^{\rho-1}
\end{equation}
The probability density function of the Bathtub is given by
\begin{equation}
f(T_m) = \lambda(T_m) R(T_m)
\end{equation}
where $T_m\geq T_d$, $T_d, \alpha, \beta, \theta, \rho, c >0$, and $\beta, \theta$ are the scale parameters,
$\alpha, \rho$ are the shape parameters, $c \in [0,1]$ is the weight parameter, and $T_d$ is the location parameter.
This model accepts the following additional subnodes:
\begin{itemize}
\item \aliasRequiredParameterDescription{alpha}
\item \aliasOptionalParameterDescription{beta}
\item \aliasOptionalParameterDescription{theta}
\item \aliasOptionalParameterDescription{rho}
\item \aliasOptionalParameterDescription{c}
\end{itemize}
Example XML:
\begin{lstlisting}[style=XML]
<ExternalModel name="bathtub" subType="SR2ML.ReliabilityModel">
<variables>cdf_F, pdf_f, rdf_R, frf_h, tm</variables>
<!-- xml portion for this plugin only -->
<ReliabilityModel type="bathtub">
<!-- scale parameter -->
<beta>1.</beta>
<theta>1.0</theta>
<!-- mission time -->
<Tm>tm</Tm>
<!-- shape parameter -->
<alpha>1.0</alpha>
<rho>0.5</rho>
<!-- weight parameter -->
<c>0.5</c>
</ReliabilityModel>
</ExternalModel>
\end{lstlisting}
\subsubsection{The Power Law Model for Failure Rate Function}
The hazard rate satisfies a power law as a function of time
\begin{equation}
\lambda(T_m) = \lambda + \alpha(T_m-T_d)^\beta
\end{equation}
where $T_m\geq T_d$, $T_d, \alpha, \beta, \lambda >0$,
and $T_d$ is the location parameter.
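For reference, the corresponding reliability function follows from the standard relation
$R(T_m)=\exp\left(-\int_{T_d}^{T_m}\lambda(t)\,dt\right)$, with $f(T_m)=\lambda(T_m)R(T_m)$;
it is a mathematical consequence of the hazard rate above, not an additional user input:
\begin{equation}
  R(T_m) = \exp\left(-\lambda\left(T_m-T_d\right)-\frac{\alpha\left(T_m-T_d\right)^{\beta+1}}{\beta+1}\right)
\end{equation}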
This model accepts the following additional subnodes:
\begin{itemize}
\item \aliasOptionalParameterDescription{alpha}
\item \aliasOptionalParameterDescription{beta}
\item \aliasOptionalParameterDescription{lambda}
\end{itemize}
Example XML:
\begin{lstlisting}[style=XML]
<ExternalModel name="powerlaw" subType="SR2ML.ReliabilityModel">
<variables>cdf_F, pdf_f, rdf_R, frf_h, tm</variables>
<!-- xml portion for this plugin only -->
<ReliabilityModel type="powerlaw">
<beta>1.0</beta>
<alpha>1.0</alpha>
<lambda>0.5</lambda>
<Tm>tm</Tm>
</ReliabilityModel>
</ExternalModel>
\end{lstlisting}
\subsubsection{The Log Linear Model for Failure Rate Function}
The hazard rate satisfies an exponential law as a function of time:
\begin{equation}
\lambda(T_m) = \exp\left(\alpha+\beta(T_m-T_d)\right)
\end{equation}
where $T_m\geq T_d$, $T_d, \alpha, \beta >0$, and $T_d$ is the location parameter.
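For reference, integrating this hazard rate gives the corresponding reliability function (again a mathematical
consequence of the definition above, not an additional user input):
\begin{equation}
  R(T_m) = \exp\left(-\frac{e^{\alpha}\left(e^{\beta\left(T_m-T_d\right)}-1\right)}{\beta}\right)
\end{equation}
with $f(T_m)=\lambda(T_m)\,R(T_m)$.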
This model accepts the following additional subnodes:
\begin{itemize}
\item \aliasOptionalParameterDescription{alpha}
\item \aliasOptionalParameterDescription{beta}
\end{itemize}
Example XML:
\begin{lstlisting}[style=XML]
<ExternalModel name="loglinear" subType="SR2ML.ReliabilityModel">
<variables>cdf_F, pdf_f, rdf_R, frf_h, tm</variables>
<!-- xml portion for this plugin only -->
<ReliabilityModel type="loglinear">
<beta>1.</beta>
<alpha>1.</alpha>
<Tm>tm</Tm>
</ReliabilityModel>
</ExternalModel>
\end{lstlisting}
\subsection{Reliability Models Reference Tests}
\begin{itemize}
\item SR2ML/tests/reliabilityModel/test\_bathtub.xml
\item SR2ML/tests/reliabilityModel/test\_erlangian.xml
\item SR2ML/tests/reliabilityModel/test\_expon.xml
\item SR2ML/tests/reliabilityModel/test\_exponweibull.xml
\item SR2ML/tests/reliabilityModel/test\_fatiguelife.xml
\item SR2ML/tests/reliabilityModel/test\_gamma.xml
\item SR2ML/tests/reliabilityModel/test\_loglinear.xml
  \item SR2ML/tests/reliabilityModel/test\_lognorm.xml
\item SR2ML/tests/reliabilityModel/test\_normal.xml
\item SR2ML/tests/reliabilityModel/test\_powerlaw.xml
  \item SR2ML/tests/reliabilityModel/test\_weibull.xml
\item SR2ML/tests/reliabilityModel/test\_time\_dep\_ensemble\_reliability.xml.
\end{itemize}
| {
"alphanum_fraction": 0.7361651718,
"avg_line_length": 43.21023766,
"ext": "tex",
"hexsha": "049c127724ab233b5369d1295fe2360b343ac6b3",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2aa5e0be02786523cdeaf898d42411a7068d30b7",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "idaholab/SR2ML",
"max_forks_repo_path": "doc/user_manual/include/ReliabilityModels.tex",
"max_issues_count": 32,
"max_issues_repo_head_hexsha": "2aa5e0be02786523cdeaf898d42411a7068d30b7",
"max_issues_repo_issues_event_max_datetime": "2022-02-17T19:45:27.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-01-12T18:43:29.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "idaholab/SR2ML",
"max_issues_repo_path": "doc/user_manual/include/ReliabilityModels.tex",
"max_line_length": 144,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "2aa5e0be02786523cdeaf898d42411a7068d30b7",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "idaholab/SR2ML",
"max_stars_repo_path": "doc/user_manual/include/ReliabilityModels.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-27T03:14:49.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-01-25T02:01:22.000Z",
"num_tokens": 6913,
"size": 23636
} |
\chapter{Natural Deduction}
\marginurl{%
Natural Deduction for Predicate Logic:\\\noindent
Introduction to Mathematical Logic \#6
}{youtu.be/GVht3ES2qqo}
By analogy with tautologies in the propositional case, in predicate logic we wish to prove that a
formula is true whatever structure and whatever values of the variables we
choose. Such formulas are called \textit{logically valid}.
In addition, we may define semantic implication for predicate formulas.
We say that a set of predicate formulas $\Sigma$ in a signature $\mathcal{S}$
semantically implies a formula $\phi$ ($\Sigma \models \phi$) in the signature
iff any structure with the signature $\mathcal{S}$ modeling $\Sigma$ models
$\phi$ as well.
Natural deduction for the predicate formulas is defined in the same manner as
the natural deduction for the propositional formulas but now the lines are
predicate formulas and we can use four additional rules.
\paragraph{Universal quantifier.}
The first logically-valid formula we use as a rule is
$A(x) \implies (\forall y \ A(y))$;
this rule allows us to introduce a universal quantifier.
In order to use the following rule, $x$ should not be a free variable of
an open hypothesis.
\[
\begin{nd}
\have [m] {1} {A(x)}
\have [~] {2} {\forall y \ A(y)} \Ai{1}
\end{nd}
\]
The second logically-valid formula we use as a rule says that if a statement is
true for all the values of a variable, then it is also true when you substitute
some specific term instead of the variable, i.e. $(\forall x \ A(x)) \implies
A(t)$; this rule allows us to eliminate a universal quantifier.
\[
\begin{nd}
\have [m] {1} {\forall x \ A(x)}
\have [~] {2} {A(t)} \Ae{1}
\end{nd}
\]
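For instance, two successive applications of this rule instantiate both variables of a doubly
quantified formula with the same term $t$:
\[
  \begin{nd}
    \hypo {1} {\forall x \forall y \ R(x, y)}
    \have {2} {\forall y \ R(t, y)} \Ae{1}
    \have {3} {R(t, t)} \Ae{2}
  \end{nd}
\]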
\paragraph{Existential quantifier.}
The first formula for the existential quantifier says that you can name any term
in the formula by a variable and the formula is still true for some value of the
variable. The corresponding formula is $A(t) \implies (\exists x \ A(x))$.
\[
\begin{nd}
\have [m] {1} {A(t)}
\have [~] {2} {\exists x \ A(x)} \Ei{1}
\end{nd}
\]
The last rule says that if $A(x)$ is true for some $x$ and we know that $A(y)$
implies $B$, then we can derive $B$ (note that this is true only when
$y$ is not used in $B$). Thus we can apply the following rule only when $y$ is not
a free variable of $B$ or of any open hypothesis.
\[
\begin{nd}
\have [m] {1} {\exists x \ A(x)}
\open
\hypo [i] {2} {A(y)}
\have[j] {3} {B}
\close
\have [~] {4} {B} \Ee{1, 2-3}
\end{nd}
\]
\section{Examples of Derivations}
The first example, $\forall x \ F(x) \lor \lnot(\forall x \ F(x))$, is a special form
of the law of excluded middle, which we proved
in the previous chapter. However, in order to emphasize that the predicate
logic can prove all the statements provable in the propositional case, we present
the proof of this statement as well.
\noindent $
\begin{nd}
\hypo {1} {}
\open
\hypo {2} {\lnot (\forall x \ F(x) \lor \lnot (\forall x \ F(x)))}
\open
\hypo {3} {\forall x \ F(x)}
\have {4} {\forall x \ F(x) \lor \lnot (\forall x \ F(x))} \oi{3}
\have {5} {\perp} \ne{2, 4}
\close
\have {6} {\lnot (\forall x \ F(x))} \ni{3-5}
\have {7} {\forall x \ F(x) \lor \lnot (\forall x \ F(x))} \oi{6}
    \have {8} {\perp} \ne{2, 7}
\close
\have {9} {\forall x \ F(x) \lor \lnot (\forall x \ F(x))} \by{IP}{2-8}
\end{nd}
$
Unfortunately, this example just shows that a statement provable in the
propositional logic can be proven in the predicate logic. The next example
cannot be expressed in the propositional logic: we
prove that if we know that
$\forall x \forall y \ R(x, y) \implies R(y, x)$, then we can derive
$\forall x \forall y \ ((R(x, y) \implies R(y, x)) \land
(R(y, x) \implies R(x, y)))$.
\noindent $
\begin{nd}
\hypo {1} {\forall x \forall y \ R(x, y) \implies R(y, x)}
\have {2} {\forall y \ R(x', y) \implies R(y, x')} \Ae{1}
\have {3} {R(x', y') \implies R(y', x')} \Ae{2}
\have {4} {\forall y \ R(y', y) \implies R(y, y')} \Ae{1}
\have {5} {R(y', x') \implies R(x', y')} \Ae{4}
  \have {6} {(R(x', y') \implies R(y', x')) \land (R(y', x') \implies R(x', y'))}
      \ai{3, 5}
  \have {7} {\forall y \ ((R(x', y) \implies R(y, x')) \land
             (R(y, x') \implies R(x', y)))}
      \Ai{6}
  \have {8} {\forall x \forall y \ ((R(x, y) \implies R(y, x)) \land
             (R(y, x) \implies R(x, y)))}
      \Ai{7}
\end{nd}
$
\section{Soundness and Completeness}
As in the propositional case, the most important properties of natural
deduction are expressed by the following two theorems.
\begin{theorem}[completeness of natural deductions, G\"odel]
Let $\phi$ be a predicate formula. If $\phi$ is logically valid, then
there is a proof of $\phi$. Moreover, if $\Sigma \models \phi$,
for some finite set of predicate formulas $\Sigma$, then there is a
derivation of $\phi$ from $\Sigma$.
\end{theorem}
\begin{theorem}[soundness of natural deductions]
Let $\phi$ be a predicate formula. If there is a proof of $\phi$, then
$\phi$ is logically valid. Moreover, if there is a derivation of $\phi$ from
$\Sigma$, for some finite set of predicate formulas $\Sigma$, then
$\Sigma \models \phi$.
\end{theorem}
\begin{chapterendexercises}
\exercise Give a natural deduction derivation of
$\forall x \ A(x) \implies \forall x \ B(x)$ from
$\forall x \ (A(x) \implies B(x))$.
\exercise Give a natural deduction derivation of
$\exists x \ ( A(x) \lor B(x))$ from
$\exists x \ A(x) \lor \exists x \ B(x)$.
\end{chapterendexercises}
| {
"alphanum_fraction": 0.6368132839,
"avg_line_length": 38.5102040816,
"ext": "tex",
"hexsha": "7e4ebb69a050ce648f2679e919ca17366696f722",
"lang": "TeX",
"max_forks_count": 7,
"max_forks_repo_forks_event_max_datetime": "2019-04-12T07:14:44.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-01-08T23:55:41.000Z",
"max_forks_repo_head_hexsha": "745bc4e24087c1d7abd02f39c1481bb7b7ddb796",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "aaknop/I2DM",
"max_forks_repo_path": "parts/part_7/chapter_34_natural_deduction_predicate.tex",
"max_issues_count": 69,
"max_issues_repo_head_hexsha": "745bc4e24087c1d7abd02f39c1481bb7b7ddb796",
"max_issues_repo_issues_event_max_datetime": "2019-06-04T00:27:16.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-01-09T00:19:58.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "aaknop/I2DM",
"max_issues_repo_path": "parts/part_7/chapter_34_natural_deduction_predicate.tex",
"max_line_length": 81,
"max_stars_count": 10,
"max_stars_repo_head_hexsha": "745bc4e24087c1d7abd02f39c1481bb7b7ddb796",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "alexanderknop/I2DM",
"max_stars_repo_path": "parts/part_7/chapter_34_natural_deduction_predicate.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-12T11:44:11.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-01-12T05:01:10.000Z",
"num_tokens": 1817,
"size": 5661
} |
\documentclass{article}
\title{ForkENGINE Installation Guides}
\author{Lukas Hermanns}
\date{March 2014}
\usepackage{listings}
\usepackage{color}
\usepackage{pxfonts}
\usepackage{geometry}
\usepackage{hyperref}
\geometry{
a4paper,
top=20mm,
bottom=20mm
}
\begin{document}
\definecolor{brightBlueColor}{rgb}{0.5, 0.5, 1.0}
\definecolor{darkBlueColor}{rgb}{0.0, 0.0, 0.5}
\lstset{
language = C++,
basicstyle = \footnotesize\ttfamily,
commentstyle = \itshape\color{brightBlueColor},
keywordstyle = \bfseries\color{darkBlueColor},
stringstyle = \color{red},
frame = single,
tabsize = 2
}
\maketitle
\section{Compiler}
The ForkENGINE has been written in modern C++11. Currently the following compilers are supported:
\begin{itemize}
\item
\href{http://www.microsoft.com/en-us/download/details.aspx?id=40787}{Microsoft VisualC++ 2013 (MSVC12)}
\item
\href{http://www.mingw.org/}{MinGW 4.7.1 (GNU C++)}
\end{itemize}
\section{Setup}
\subsection{Include Directory}
Set the include directory to "ForkENGINE/include".
In your sources you can then include the engine modules like in the following example:
\begin{lstlisting}
#include <fengine/core.h>
#include <fengine/video.h>
#include <fengine/scene.h>
\end{lstlisting}
Or if you always want to include everything, use this header file:
\begin{lstlisting}
#include <fengine/import.h>
\end{lstlisting}
\subsection{Library Directory}
There are several library directories.
\begin{itemize}
\item
Use "ForkENGINE/lib/MSVC12/Win32" when you are creating a 32-bit project with Microsoft Visual Studio 2013.
\item
Use "ForkENGINE/lib/MSVC12/Win64" when you are creating a 64-bit project with Microsoft Visual Studio 2013.
\item
Use "ForkENGINE/lib/MinGW" when you are creating a 32-bit project with MinGW on MS/Windows.
\item
Use "ForkENGINE/lib/Posix" when you are creating a 32-bit project with GNU/C++ on a posix system (e.g. GNU/Linux).
\end{itemize}
Only link to the libraries you need in your projects.
You'll find a list of all libraries and a brief description in the "Libraries" section.
\subsection{Binary Directory}
\subsubsection{MS/Windows}
On MS/Windows, the engine consists of the following binary files:
\begin{itemize}
\item
ForkAnimation.dll
\item
ForkAudio.dll
\item
ForkAudioAL.dll (\textit{OpenAL} audio system)
\item
ForkAudioXA2.dll (\textit{XAudio 2} audio system)
\item
ForkCore.dll (\textit{IO}, \textit{Math} and \textit{Platform} core systems)
\item
ForkENGINE.dll
\item
ForkNetwork.dll
\item
ForkPhysics.dll
\item
ForkPhysicsNw.dll (\textit{Newton Game Dynamics} physics system)
\item
ForkPhysicsPx.dll (\textit{NVIDIA PhysX} physics system)
\item
ForkRenderer.dll
\item
ForkRendererD3D11.dll (\textit{Direct3D 11.0} render system)
\item
ForkRendererGL.dll (\textit{OpenGL} render system)
\item
ForkScript.dll
\item
ForkScene.dll
\item
ForkUtility.dll
\item
OpenAL32.dll (OpenAL low level audio library)
\end{itemize}
The best way to use the binary files is to add the binary directory that corresponds to the used library directory
(e.g. "ForkENGINE/bin/MSVC12/Win32") to the PATH variable on MS/Windows.
\subsubsection{Posix}
On posix systems (e.g. GNU/Linux), the engine consists of the following binary files:
\begin{itemize}
\item
ForkAnimation.so
\item
ForkAudio.so
\item
ForkAudioAL.so (\textit{OpenAL} audio system)
\item
ForkCore.so
\item
ForkENGINE.so
\item
ForkNetwork.so
\item
ForkPhysics.so
\item
ForkPhysicsNw.so (\textit{Newton Game Dynamics} physics system)
\item
ForkRenderer.so
\item
ForkRendererGL.so (\textit{OpenGL} render system)
\item
ForkScript.so
\item
ForkScene.so
\item
ForkUtility.so
\end{itemize}
\subsection{Dependencies}
The ForkENGINE has been written in a very modular fashion, i.e. some libraries are loaded dynamically during run-time.
The render system implementations, for instance, are not statically linked to the engine and therefore
should not be linked statically to your projects.
The advantage is that if the host computer does not provide some libraries,
your application can handle this dynamically and choose another render system.
Consider the following situation: your application wants to create a Direct3D 11 render system
but the host computer does not have the d3d11.dll on MS/Windows.
A common result is that the user gets an error message which states
that a DLL is missing and the program should be re-installed.
In the case of the ForkENGINE, your application can catch an exception and
create another render system, OpenGL for instance (which runs on almost all platforms).
This will reduce errors and libraries can be added or removed in a dynamic manner.
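The following sketch illustrates this fallback pattern. Note that the type and function names used here
(\texttt{Engine}, \texttt{RenderSystemPtr}, \texttt{CreateRenderSystem}) are illustrative placeholders and not
necessarily the actual ForkENGINE interfaces; they only demonstrate the catch-and-fall-back idea described above.
\begin{lstlisting}
// Illustrative sketch only -- the engine/render-system names are placeholders.
#include <exception>
#include <iostream>

RenderSystemPtr CreateRenderSystemWithFallback(Engine& engine)
{
  try
  {
    // Preferred choice: Direct3D 11 (loads ForkRendererD3D11.dll at run-time).
    return engine.CreateRenderSystem("Direct3D11");
  }
  catch (const std::exception& err)
  {
    // Direct3D 11 is not available on this host, so fall back to OpenGL
    // (ForkRendererGL), which runs on almost all platforms.
    std::cerr << "Direct3D 11 unavailable: " << err.what() << std::endl;
    return engine.CreateRenderSystem("OpenGL");
  }
}
\end{lstlisting}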
\section{Libraries}
The ForkENGINE consists of the following libraries:
\subsection{ForkCore}
Contains core functionalities: platform dependent code (such as creating a frame - also called "window"),
input devices (mouse and keyboard), extended file access and math functions.
\subsection{ForkScene}
Scene management library.
\subsection{ForkAnimation}
Animation library.
\subsection{ForkRenderer}
Render system library.
There are further libraries which will be loaded dynamically during run-time.
These are "ForkRendererGL" (OpenGL) and "ForkRendererD3D11" (Direct3D 11.0).
\subsection{ForkNetwork}
Network library.
\subsection{ForkUtility}
Utility library.
\subsection{ForkAudio}
Audio library.
There are further libraries which will be loaded dynamically during run-time.
These are "ForkAudioAL" (OpenAL) and "ForkAudioXA2" (XAudio 2).
\subsection{ForkPhysics}
Physics and collision library.
There are further libraries which will be loaded dynamically during run-time.
These are "ForkPhysicsNw" (Newton Game Dynamics) and "ForkPhysicsPx" (NVIDIA PhysX).
\subsection{ForkScript}
Scripting library. This includes the \textbf{Mono scripting engine}.
\subsection{ForkENGINE}
Engine Device etc., connects all sub-libraries.
\end{document} | {
"alphanum_fraction": 0.7802721088,
"avg_line_length": 23.0588235294,
"ext": "tex",
"hexsha": "875e44c0de7f15c688202dc5292f4561e4ead06c",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2020-07-30T01:32:01.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-07-23T19:56:41.000Z",
"max_forks_repo_head_hexsha": "8b575bd1d47741ad5025a499cb87909dbabc3492",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "LukasBanana/ForkENGINE",
"max_forks_repo_path": "docu/TeX/Installation Guides/Installation Guides.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8b575bd1d47741ad5025a499cb87909dbabc3492",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "LukasBanana/ForkENGINE",
"max_issues_repo_path": "docu/TeX/Installation Guides/Installation Guides.tex",
"max_line_length": 114,
"max_stars_count": 13,
"max_stars_repo_head_hexsha": "8b575bd1d47741ad5025a499cb87909dbabc3492",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "LukasBanana/ForkENGINE",
"max_stars_repo_path": "docu/TeX/Installation Guides/Installation Guides.tex",
"max_stars_repo_stars_event_max_datetime": "2020-07-30T01:31:57.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-03-21T22:46:18.000Z",
"num_tokens": 1587,
"size": 5880
} |
\chapter{Manual conventions and Tips}
To convert the manual from \LaTeX{} to Markdown, use \texttt{pandoc}.
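For example, a typical invocation (the file names here are only illustrative) is \texttt{pandoc manual.tex -o manual.md}.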
To display code listings use the \href{https://www.overleaf.com/learn/latex/Code\_listing}{listings} package.
"alphanum_fraction": 0.7789473684,
"avg_line_length": 31.6666666667,
"ext": "tex",
"hexsha": "bd86e0f86f68f3bc76da8e6735ef7b09aac321c3",
"lang": "TeX",
"max_forks_count": 39,
"max_forks_repo_forks_event_max_datetime": "2022-03-31T15:15:34.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-07-12T05:42:51.000Z",
"max_forks_repo_head_hexsha": "4bd5cc0a9dd0e94b1c2d8b35385e128404009b0c",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "RodrigoNaves/ginan-bitbucket-update-tests",
"max_forks_repo_path": "docs/manual/manual.tex",
"max_issues_count": 5,
"max_issues_repo_head_hexsha": "4bd5cc0a9dd0e94b1c2d8b35385e128404009b0c",
"max_issues_repo_issues_event_max_datetime": "2022-03-21T23:50:02.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-09-27T14:27:32.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "RodrigoNaves/ginan-bitbucket-update-tests",
"max_issues_repo_path": "docs/manual/manual.tex",
"max_line_length": 108,
"max_stars_count": 73,
"max_stars_repo_head_hexsha": "4bd5cc0a9dd0e94b1c2d8b35385e128404009b0c",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "RodrigoNaves/ginan-bitbucket-update-tests",
"max_stars_repo_path": "docs/manual/manual.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-31T15:17:58.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-07-08T23:35:08.000Z",
"num_tokens": 46,
"size": 190
} |
% =================
% BUSINESS GROUPS AND FACTIONS
% =================
\subsection{Corporations}
\begin{multicols}{2}
\subsubsection{Astronautics \& Spacecraft}
These corporations are the leaders in the field for creating the complex systems required for FTL and intra-solar travel, and the spacecraft that use them. They also deal with planetary navigational systems, creating the satellites required for GPS triangulation.
\begin{redtable}{\linewidth}{L{.5}L{.5}}
\textbf{Name} & \textbf{Headquarters}\\
Curich Engineering & Marider, Zena (03-04)\\
Ferris Spacecraft & Marider, Zena (03-04)\\
Ghan Spacecraft & Gharda, Ghan Sector 2 (02-01)\\
Mavridou Cooperative & Raghd, Careeno (05-04)\\
Quebe-Luxfause Systems & Marider, Zena (03-04)\\
    Sakeena Shipyards & Sakeena, Sakeena (07-08)\\
Solar Spacecraft & Raghd, Careeno (05-04)\\
Starlette Industries & Raghd, Careeno (05-04)\\
\end{redtable}
\subsubsection{Biotech}
Biotech focuses on using the cutting edge of biological and genetic sciences. These corporations can modify foodstuffs to increase their viability, genetically enhance organics, and create complex organic machines.
\begin{redtable}{\linewidth}{L{.5}L{.5}}
\textbf{Name} & \textbf{Headquarters}\\
Gentik & Bergthora, Penthus (06-04)\\
Interstellar Genetics Inc & Mahats, Careeno (05-04)\\
Morgath Industries & Khadim, Al-Taleqani (03-05)\\
Nazari Cooperative & Olaria, Eurymedon (00-03)\\
Orithyia Genetics & Orithyia, Pheegus (05-05)\\
    Umbrella Corporation & Ballestreros, Pheegus (05-05)\\
\end{redtable}
\subsubsection{Construction \& Manufacturing}
These corporations are leaders in terrestrial and space-borne construction, as well as the industrial factories that manufacture most goods in the Black. They can build space habitats, skyscrapers, and asteroid mining stations.
\begin{redtable}{\linewidth}{L{.5}L{.5}}
\textbf{Name} & \textbf{Headquarters}\\
Gorallis Metalworking & Raghd, Careeno (05-04)\\
Kroeskin Fabrications & Thurid, Zena (03-04)\\
Merr-Sonn Industrial & Al-sahhah, Al-Taleqani (03-05)\\
Novaplex & Mecisteus, Yafiah (01-03)\\
Panstellar Zaibatsu & Bergthora, Penthus (06-04)\\
\end{redtable}
\subsubsection{Consumer Goods, Entertainment \& Finance}
These corporations target the average consumer with goods, entertainment, finance, and even pharmaceuticals.
\begin{redtable}{\linewidth}{L{.5}L{.5}}
\textbf{Name} & \textbf{Headquarters}\\
Anistonopoulos Clan & Mecisteus, Yafiah (01-03)\\
Al-Rhul Media Productions & Al-sahhah, Al-Taleqani (03-05)\\
Buy n Large & Thurid, Zena (03-04)\\
Ghan Systems & Ghalla, Ghan Sector 1 (00-01)\\
Gringotts & Mecisteus, Yafiah (01-03)\\
Haen Studio & Olaria, Eurymedon (00-03)\\
Los Pollos Hermanos & Al-sahhah, Al-Taleqani (03-05)\\
Saraphis Pharmacueticals & Mecisteus, Yafiah (01-03)\\
Velunza Circle & Olaria, Eurymedon (00-03)\\
\end{redtable}
\subsubsection{Cybernetics, Electronics \& Software}
Cyberware is the hottest technology in the Black. These corporations use the latest in scientific discoveries to fuse the organic with the digital. Some also create the electronics that consumers use in their daily lives.
\begin{redtable}{\linewidth}{L{.5}L{.5}}
\textbf{Name} & \textbf{Headquarters}\\
Advanced Ideas Mechanics & Marider, Zena (03-04)\\
Altin Alliance & Polypheme, Careeno (05-04)\\
Aperture Science, Inc. & Thurid, Zena (03-04)\\
Cyberdyne Systems & Bergthora, Penthus (06-04)\\
Gowix Computers & Hildegunn, Ianassa (01-04)\\
HoloGraphics Interstellar & Mecisteus, Yafiah (01-03)\\
Imaharatronics & Olaria, Eurymedon (00-03)\\
MicroData Technologies & Al-sahhah, Al-Taleqani (03-05)\\
Perzome SoftWEAR & Olaria, Eurymedon (00-03)\\
Pied Piper & Hildegunn, Ianassa (01-04)\\
Wayne Enterprises & Thurid, Zena (03-04)\\
\end{redtable}
\subsubsection{Food}
These corporations are the leaders in producing, packaging, and transporting food across the sector.
\begin{redtable}{\linewidth}{L{.5}L{.5}}
\textbf{Name} & \textbf{Headquarters}\\
Aqualis Food Conglomerate & Mahats, Careeno (05-04)\\
Faraj Fishing & Amir, Duha (00-06)\\
Nopoulos Partnership & Ballesteros, Pheegus (05-05)\\
SuriTech Foodstuffs & Orithyia, Pheegus (05-05)\\
Tatum Farming Collective & Arantza, Mandad (02-03)\\
\end{redtable}
\subsubsection{Illegal goods \& Mercenaries}
Many of these "corporations" operate outside United Systems law, but are still semi-legal in some Independant systems. They usually smuggle and sell illicit goods from drugs, illegal AI, and even in people. A few are dedicated mercenaries and strong-men, and can pass off as "Security".
\begin{redtable}{\linewidth}{L{.5}L{.5}}
\textbf{Name} & \textbf{Headquarters}\\
Asset Group Solutions & Al-sahhah, Al-Taleqani (03-05)\\
Barichello Multistellar & Semera, Penthus (06-04)\\
Magnus Syndicate & Sagari, Shakoor (07-01)\\
Santhe Security & Mecisteus, Yafiah (01-03)\\
Spiker Group & Dirce, Pooja (04-01)\\
Zea Outfit & Vinata, Mahallati (04-07)\\
\end{redtable}
\subsubsection{Mining, Energy \& Fuel}
These corporations meet the fuel, mining, and energy needs of the population of the Black. They include businesses, or ``prospectors'', that explore the Black looking for valuable resources to exploit.
\begin{redtable}{\linewidth}{L{.5}L{.5}}
\textbf{Name} & \textbf{Headquarters}\\
Bellixan Endeavors & Khadim, Al-Taleqani (03-05)\\
Burns Industries & Raghd, Careeno (05-04)\\
Colonial Mining & Parezi, Heraclitus (04-04)\\
Degan Explorations & Nishtha, Eurymedon (00-03)\\
Nexcore Mining Corp & Marider, Zena (03-04)\\
Parezi Energy & Parezi, Heraclitus (04-04)\\
\end{redtable}
\subsubsection{Weapons}
These corporations make everything from combat armour to handguns.
\begin{redtable}{\linewidth}{L{.5}L{.5}}
\textbf{Name} & \textbf{Headquarters}\\
Al-Astra Association & Al-sahhah, Al-Taleqani (03-05)\\
Cafu Systems & Dirce, Pooja (04-01)\\
Jayhoon Inc. & Vinata, Mahallati (04-07)\\
Stark Industries & Thurid, Zena (03-04)\\
Striker & Sagari, Shakoor (07-01)\\
West Wind Outfit & Vargos, Yongheng (07-05)\\
\end{redtable}
\end{multicols}
| {
"alphanum_fraction": 0.6431372549,
"avg_line_length": 48.1468531469,
"ext": "tex",
"hexsha": "063c2a283e1c442dc3481ad7fb90187c000de2c8",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-05-15T11:31:51.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-07-19T09:18:30.000Z",
"max_forks_repo_head_hexsha": "8e13ae6ae56e4426c5a6a1082aac2aa855ca2ec9",
"max_forks_repo_licenses": [
"Unlicense",
"MIT"
],
"max_forks_repo_name": "almightynassar/rpg-latex-template",
"max_forks_repo_path": "broken-stars/sector-corporations.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8e13ae6ae56e4426c5a6a1082aac2aa855ca2ec9",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense",
"MIT"
],
"max_issues_repo_name": "almightynassar/rpg-latex-template",
"max_issues_repo_path": "broken-stars/sector-corporations.tex",
"max_line_length": 288,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "8e13ae6ae56e4426c5a6a1082aac2aa855ca2ec9",
"max_stars_repo_licenses": [
"Unlicense",
"MIT"
],
"max_stars_repo_name": "almightynassar/rpg-latex-template",
"max_stars_repo_path": "broken-stars/sector-corporations.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-14T09:28:49.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-04-09T05:30:17.000Z",
"num_tokens": 2142,
"size": 6885
} |
\documentclass{memoir}
\usepackage{notestemplate}
%\logo{~/School-Work/Auxiliary-Files/resources/png/logo.png}
%\institute{Rice University}
%\faculty{Faculty of Whatever Sciences}
%\department{Department of Mathematics}
%\title{Class Notes}
%\subtitle{Based on MATH xxx}
%\author{\textit{Author}\\Gabriel \textsc{Gress}}
%\supervisor{Linus \textsc{Torvalds}}
%\context{Well, I was bored...}
%\date{\today}
%\makeindex
\begin{document}
% \maketitle
% Notes taken on
\begin{defn}[Convergence of Infinite Products]
  Let \(\left\{ a_n \right\}_{n=1}^{\infty}\) be a sequence of non-zero complex numbers. We say that the \textbf{infinite product}
\begin{align*}
\prod_{n=1}^{\infty} a_n
\end{align*}
\textbf{converges absolutely} if
\begin{align*}
\lim_{n \to \infty} a_n = 1
\end{align*}
and if the corresponding series
\begin{align*}
\sum_{n=1}^{\infty} \ln(a_n)
\end{align*}
converges absolutely.
\end{defn}
The transformation between the products of \(a_n\) and the sum of \(\ln(a_n)\) is natural, as exponentiation gives equality for partial sums/products.
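For instance, the product \(\prod_{n=1}^{\infty}\left(1+\frac{1}{n^{2}}\right)\) converges absolutely in this sense: the factors tend to \(1\), and since \(0\leq \ln\left(1+\frac{1}{n^{2}}\right)\leq \frac{1}{n^{2}}\), the series of logarithms converges absolutely by comparison with \(\sum_{n=1}^{\infty} \frac{1}{n^{2}}\).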
\begin{anki}
TARGET DECK
Complex Qual::Complex Analysis
START
MathJaxCloze
Text: Let \(\left\{ a_n \right\}_{n=1}^{\infty}\) be a sequence of non-zero complex numbers. We say that the **infinite product**
\(\begin{align*}
\prod_{n=1}^{\infty} a_n
\end{align*}\)
**converges absolutely** if
{{c1::\( \begin{align*}
\lim_{n \to \infty} a_n = 1
\end{align*}\)::coefficients}}
and if the corresponding series
{{c1::\(\begin{align*}
\sum_{n=1}^{\infty} \ln(a_n)
\end{align*}\)}}
converges absolutely.
Extra: The transformation between the products of \(a_n\) and the sum of \(\ln(a_n)\) is natural, as exponentiation gives equality for partial sums/products.
Tags: analysis complex_analysis entire_meromorphic
<!--ID: 1626995479520-->
END
\end{anki}
An astute reader will notice that we need to be careful about our determination of \(\ln(a_n)\).
For finitely many \(\ln(a_n)\), we can take any determination without concern-- but as \(a_n\) approaches 1, we will face some issues.
Fortunately, there exists an \(N\) so that for all \(n\geq N\), we can express \(a_n = 1-\alpha_n\) for some \(\left| \alpha_n \right| <1\), for which the logarithm will remain well-defined for the rest of the sequence.
Of course, we ought to verify this transformation won't affect convergence.
\begin{lemma}
Let \(\left\{ a_n \right\}_{n=1}^{\infty}\) be a sequence of complex numbers with \(a_n\neq 1\) for all \(n\). Suppose that \(\left\{ a_n \right\} \) is absolutely convergent:
\begin{align*}
\sum_{n=1}^{\infty} \left| a_n \right|
\end{align*}
Then
\begin{align*}
\prod_{n=1}^{\infty} (1-a_n)
\end{align*}
converges absolutely.
\end{lemma}
This lemma makes our study of infinite products convenient, as it allows us to reduce infinite products to infinite sums, a case which we are already familiar with.\\
If we consider infinite products of functions, we see something rather interesting.
The lemma above already gives us conditions for uniform convergence of
\begin{align*}
\prod_{n=1}^{\infty} (1-g_n(z)) .
\end{align*}
Furthermore, we can leverage our knowledge of the logarithmic derivative to obtain more information about uniformly convergent products of functions.
\begin{lemma}
Let \(\Omega \subset \C\) be an open set and \(\left\{ f_n \right\}_{n=1}^{\infty}\) a sequence of holomorphic functions on \(\Omega \).
Consider the corresponding sequence \(\left\{ g_n \right\}_{n=1}^{\infty}\) determined so that \(f_n(z) = 1 + g_n(z)\), and suppose that
\begin{align*}
\sum_{n=1}^{\infty} g_n(z)
\end{align*}
converges uniformly and absolutely on \(\Omega \).
Let \(K\subset \Omega \) be a compact subset so that \(f_n^{-1}(0)\cap K = \emptyset\) for all \(n\).
Then the infinite product of \(f_n\) converges to a holomorphic function on \(\Omega \):
\begin{align*}
\prod_{n=1}^{\infty} f_n = f
\end{align*}
and we have absolute and uniform convergence on \(K\) for the following sum:
\begin{align*}
\frac{f'}{f} = \sum_{n=1}^{\infty} \frac{f_n'}{f_n}.
\end{align*}
\end{lemma}
\begin{anki}
START
MathJaxCloze
Text: Let \(\Omega \subset \C\) be an open set and \(\left\{ f_n \right\}_{n=1}^{\infty}\) a sequence of holomorphic functions on \(\Omega \).
Consider the corresponding sequence \(\left\{ g_n \right\}_{n=1}^{\infty}\) determined so that \(f_n(z) = 1 + g_n(z)\), and suppose that
\(\begin{align*}
\sum_{n=1}^{\infty} g_n(z)
\end{align*}\)
converges {{c1::uniformly and absolutely on \(\Omega \)}}.
Let \(K\subset \Omega \) be a compact subset so that {{c1::\(f_n^{-1}(0)\cap K = \emptyset\)}} for all \(n\).
Then the infinite product of \(f_n\) {{c1::converges to a holomorphic function}} on \(\Omega \):
{{c1::\(\begin{align*}
\prod_{n=1}^{\infty} f_n = f
\end{align*}\)}}
and we have absolute and uniform convergence on \(K\) for the following sum:
{{c1::\(\begin{align*}
\frac{f'}{f} = \sum_{n=1}^{\infty} \frac{f_n'}{f_n}.
\end{align*}\)::logarithmic derivative}}
Tags: analysis complex_analysis entire_meromorphic
<!--ID: 1626995479539-->
END
\end{anki}
\begin{hw}[Blaschke Products]
Let \(\left\{ a_n \right\} \subset D_1 \) be a sequence in the unit disc such that \(a_n\neq 0\) for all \(n\) and
\begin{align*}
\sum_{n=1}^{\infty} (1-\left| a_n \right| )
\end{align*}
converges.
Show that the \textbf{Blaschke product}
\begin{align*}
f(z) = \prod_{n=1}^{\infty} \frac{\left| a_n \right| }{a_n}\cdot \frac{a_n - z}{1 - \overline{a}_n z}
\end{align*}
converges uniformly on \(\left| z \right| \leq r<1\) for some fixed \(r\) and defines a holomorphic function on \(D_1\) having only the zeros \(\left\{ a_n \right\} \).
Furthermore, show that \(\left| f(z) \right| \leq 1\).\\
(Hint: prove that for \(0<\left| a \right| <1\) and for some fixed \(r\) and \(\left| z \right| \leq r<1\), the inequality
\begin{align*}
\left| \frac{a + \left| a \right| z}{(1-\overline{a}z)a} \right| \leq \frac{1+r}{1-r}
\end{align*}
holds)
\end{hw}
One can use Blaschke products to construct some unusual functions.
For example, if we choose \(a_n = 1-\sfrac{1}{n^2}\), then our resulting function is holomorphic on the unit disc with a zero at 1.
Modifying this construction allows us to construct a bounded holomorphic function \(f\) on \(D_1\) for which each point of the unit circle is a singularity.
(Note that this is a useful example that demonstrates that non-isolated singularities need not conform to our standard types of singularities-- we refer to this form of non-isolated singularity as a \textbf{natural boundary})
\begin{anki}
START
MathJaxCloze
Text: Let \(\left\{ a_n \right\} \subset D_1 \) be a sequence in the unit disc such that \(a_n\neq 0\) for all \(n\) and
\(\begin{align*}
\sum_{n=1}^{\infty} (1-\left| a_n \right| )
\end{align*}\)
converges.
Then the **Blaschke product**
\(\begin{align*}
f(z) = \prod_{n=1}^{\infty} \frac{\left| a_n \right| }{a_n}\cdot \frac{a_n - z}{1 - \overline{a}_n z}
\end{align*}\)
{{c1::converges uniformly}} on \(\left| z \right| \leq r<1\) for some fixed \(r\) and defines a {{c1::holomorphic function on \(D_1\) having only the zeros \(\left\{ a_n \right\} \)}}.
Furthermore, \(\left| f(z) \right| \leq 1\).
Extra: One can use Blaschke products to construct some unusual functions. For example, if we choose \(a_n = 1-\sfrac{1}{n^2}\), then our resulting function is holomorphic on the unit disc with a zero at 1. Modifying this construction allows us to construct a bounded holomorphic function \(f\) on \(D_1\) for which each point of the unit circle is a singularity.
Tags: analysis complex_analysis entire_meromorphic
<!--ID: 1626995479562-->
END
\end{anki}
\subsection{Weierstrass Products}
\label{sub:weierstrass_products}
Our goal will be to show that we can use infinite products to classify entire functions.
We will classify a restricted class of entire functions, then utilize this to extend to the general case.
\begin{thm}[Non-vanishing Entire Functions]
Let \(f\) be a non-vanishing entire function. Then there exists a second entire function \(g\) so that
\begin{align*}
f(z) = e^{g(z)}.
\end{align*}
Furthermore, \(g\) is unique up to an additive constant. That is, if
\begin{align*}
f(z) = \lambda e^{h(z)}
\end{align*}
for some \(\lambda \in \C\setminus\left\{ 1 \right\} \), then
\begin{align*}
   h(z) = g(z) - \ln(\lambda ).
\end{align*}
\end{thm}
This follows from our logarithmic derivatives directly.
We leave the verification as an exercise to the reader.
\begin{anki}
START
MathJaxCloze
Text: Let \(f\) be a non-vanishing entire function. Then there exists a second entire function \(g\) so that
{{c1::\(\begin{align*}
f(z) = e^{g(z)}.
\end{align*}\)}}
Furthermore, \(g\) is unique up to an additive constant. That is, if
{{c1::\(\begin{align*}
f(z) = \lambda e^{h(z)}
\end{align*}\)}}
for some \(\lambda \in \C\setminus\left\{ 1 \right\} \) and distinct entire function \(h\), then
{{c1::\(\begin{align*}
   h(z) = g(z) - \ln(\lambda ).
\end{align*}\)}}
Extra: This follows from our logarithmic derivatives directly.
Suppose \(f,g\) are two entire functions with the same zeros of equal multiplicity. Then it follows that
\(\begin{align*}
f(z) = g(z)e^{h(z)}
\end{align*}\)
for some entire function \(h(z)\) (uniquely determined up to an additive constant).
It also follows that for \(h\) entire,
\(\begin{align*}
g(z) = 0 \iff g(z)e^{h(z)} = 0
\end{align*}\)
with the same multiplicities.
Tags: analysis complex_analysis entire_meromorphic
<!--ID: 1626995479580-->
END
\end{anki}
Suppose \(f,g\) are two entire functions with the same zeros of equal multiplicity. Then it follows that
\begin{align*}
f(z) = g(z)e^{h(z)}
\end{align*}
for some entire function \(h(z)\) (uniquely determined up to an additive constant).
It also follows that for \(h\) entire,
\begin{align*}
g(z) = 0 \iff g(z)e^{h(z)} = 0
\end{align*}
with the same multiplicities.
Hence, we can construct a canonical entire function for a set of zeros of fixed multiplicity-- and thus all entire functions with those zeros of fixed multiplicity can be expressed in terms of the canonical form.\\
We will now give the intuition for the canonical form.
First, we should order our zeros by increasing absolute value, so that our zeros \(\left\{ z_n \right\} \) satisfy
\begin{align*}
\left| z_1 \right| \leq \left| z_2 \right| \leq \ldots
\end{align*}
We'd like to define our function by the infinite product
\begin{align*}
\prod_{n=1}^{\infty} \left( 1 - \frac{z}{z_n} \right)
\end{align*}
but this product may not converge.
To resolve this, we introduce a convergence factor which does not introduce any zeros-- an exponential.
Our exponent in this term ought to be a polynomial whose degree is dependent on the term of the sequence (to ensure independence between terms).
In other words, our convergence factor will be of the form
\begin{align*}
e^{w_n + \frac{1}{2}w_n^2 + \ldots + \frac{1}{n-1}w_n^{n-1}}
\end{align*}
where \(w_n = \sfrac{z}{z_n}\). We combine the convergence term with our original terms and write
\begin{align*}
   E_n(w) = (1-w) e^{w + \frac{1}{2}w^2 + \ldots + \frac{1}{n-1}w^{n-1}}.
\end{align*}
The polynomial in the exponent is chosen because
\begin{align*}
\ln(1-z) = \sum_{n=1}^{\infty} -\frac{z^{n}}{n}
\end{align*}
and so
\begin{align*}
\ln \left( \prod_{n=1}^{\infty} E_n\left(\sfrac{z}{z_n}\right) \right)\\
= \sum_{n=1}^{\infty} \ln \left(E_n\left(\sfrac{z}{z_n}\right) \right)\\
   = \sum_{n=1}^{\infty} \left[ \left( \frac{z}{z_n} + \ldots + \frac{1}{n-1}\left(\frac{z}{z_n}\right)^{n-1} \right) + \ln \left( 1 - \frac{z}{z_n}\right) \right] \\
= \sum_{n=1}^{\infty} \sum_{k=n}^{\infty} - \frac{1}{k} \left( \frac{z}{z_n} \right)^{k}
\end{align*}
Of course, this identity is desirable as we want to show our infinite product converges absolutely.\\
Now we formally justify the work we've shown thus far.
First, we verify that convergence will occur as we expect.
\begin{lemma}
If \(\left| w \right| \leq \frac{1}{2}\) then
\begin{align*}
\frac{\left| \ln E_n(w) \right| }{\left| w \right|^{n}} \leq 2.
\end{align*}
Furthermore, let a sequence of complex numbers \(\left\{ z_n \right\} \) be given with \(\left| z_1 \right| \leq \left| z_2 \right| \leq \ldots\). There exists a corresponding increasing sequence of positive integers \(\left\{ k_n \right\} \) so that, for all positive real \(a>0\)
\begin{align*}
\sum_{n=1}^{\infty} \left( \frac{a}{\left| z_n \right| } \right)^{k_n}
\end{align*}
converges. In fact, for every \(a>0\) there is a corresponding integer \(N_a>0\) so that for all \(k_n \geq N_a\):
\begin{align*}
\left( \frac{a}{\left| z_n \right| } \right)^{k_n} \leq \frac{1}{2^{k_n}}.
\end{align*}
\end{lemma}
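For instance, if \(z_n = n\), then \(\sum_{n=1}^{\infty} \left( \frac{a}{n} \right)^{2}\) converges for every \(a>0\), so the constant choice \(k_n = 2\) suffices; if \(\sum_{n=1}^{\infty} \frac{1}{\left| z_n \right| }\) already converges (for example \(z_n = n^{2}\)), then \(k_n = 1\) suffices.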
Now that we have introduced this sequence \(\left\{ k_n \right\} \), we can finally connect the various ideas we've constructed together into a well-defined product with the properties desired.
\begin{thm}[Weierstrass Product Theorem]
Let \(\left\{ z_n \right\}_{n=1}^{\infty} \subset \C\setminus\left\{ 0 \right\} \) be a sequence of complex numbers in the complex plane with
\begin{align*}
\left| z_1 \right| \leq \left| z_2 \right| \leq \ldots
\end{align*}
and let \(\left\{ k_n \right\} \subset \N\) be the corresponding smallest sequence of positive integers so that for all positive real \(a>0\)
\begin{align*}
\sum_{n=1}^{\infty} \left( \frac{a}{\left| z_n \right| } \right)^{k_n}
\end{align*}
converges. Define
\begin{align*}
P_n(z) &= \sum_{k=1}^{k_n-1} \frac{z^{k}}{k}\\
E_n( \sfrac{z}{z_n}) &= \left( 1- \frac{z}{z_n} \right) e^{P_n(z / z_n)} .
\end{align*}
Then
\begin{align*}
\prod_{n=1}^{\infty} E_n( \sfrac{z }{z_n})
\end{align*}
converges uniformly and absolutely on every disc \(D_a\), and hence defines an entire function with zeros exclusively at \(\left\{ z_n \right\} \).\\
If \(\sup_{n} k_n = k< \infty\), then we take the canonical sequence to be \(k_n = \sup_{n} k_n\). In this case, we refer to \(E_n(\sfrac{z}{z_n})\) as the \textbf{elementary form} and the product
\begin{align*}
z^{m} \prod_{n=1}^{\infty} E_n \left( \sfrac{z}{z_n} \right)
\end{align*}
as the \textbf{Weierstrass product} and take it to be the canonical form for a set of zeros \(\left\{ z_n \right\} \subset \C\setminus\left\{ 0 \right\} \).
\end{thm}
\begin{anki}
START
MathJaxCloze
Text: **Weierstrass Product Theorem**
Let \(\left\{ z_n \right\}_{n=1}^{\infty} \subset \C\setminus\left\{ 0 \right\} \) be a sequence of complex numbers in the complex plane with
\(\begin{align*}
\left| z_1 \right| \leq \left| z_2 \right| \leq \ldots
\end{align*}\)
and let \(\left\{ k_n \right\} \subset \N\) be the corresponding smallest sequence of positive integers so that for all positive real \(a>0\)
\(\begin{align*}
\sum_{n=1}^{\infty} \left( \frac{a}{\left| z_n \right| } \right)^{k_n}
\end{align*}\)
converges. Define
\(\begin{align*}
P_n(z) &= \sum_{k=1}^{k_n-1} \frac{z^{k}}{k}\\
E_n( \sfrac{z}{z_n}) &= \left( 1- \frac{z}{z_n} \right) e^{P_n(z / z_n)} .
\end{align*}\)
Then
{{c1::\(\begin{align*}
\prod_{n=1}^{\infty} E_n( \sfrac{z }{z_n})
\end{align*}\)}}
converges {{c1::uniformly}} and {{c1::absolutely}} on every disc \(D_a\), and hence defines an entire function with zeros {{c1::exclusively at \(\left\{ z_n \right\} \)}}.
If \(\sup_{n} k_n = k< \infty\), then we take the canonical sequence to be \(k_n = \sup_{n} k_n\). In this case, we refer to \(E_n(\sfrac{z}{z_n})\) as the **elementary form** and the product
{{c1::\( \begin{align*}
z^{m} \prod_{n=1}^{\infty} E_n \left( \sfrac{z}{z_n} \right)
\end{align*}\)}}
as the \textbf{Weierstrass product} and take it to be the canonical form for a set of zeros \(\left\{ z_n \right\} \subset \C\setminus\left\{ 0 \right\} \).
Tags: analysis complex_analysis entire_meromorphic defn
<!--ID: 1626995479602-->
END
\end{anki}
\begin{cor}[Hadamard's Theorem]
Every entire function \(f\) with zeros exactly at \(\left\{ z_n \right\}_{n=1}^{\infty}\subset \C\setminus\left\{ 0 \right\}\) and possibly at zero with order \(m\) can be written uniquely in the form
\begin{align*}
f(z) = e^{g(z)} z^{m} \prod_{n=1}^{\infty} E_n( \sfrac{z}{z_n})
\end{align*}
where \(g\) is a polynomial of fixed degree \(\leq \sup_{n} k_n\) uniquely determined up to an additive constant.
\end{cor}
\begin{anki}
START
MathJaxCloze
Text: **Hadamard's Theorem**
Every entire function \(f\) with zeros exactly at \(\left\{ z_n \right\}_{n=1}^{\infty}\subset \C\setminus\left\{ 0 \right\}\) and possibly at zero with order \(m\) can be written uniquely in the form
{{c1::\(\begin{align*}
f(z) = e^{g(z)} z^{m} \prod_{n=1}^{\infty} E_n( \sfrac{z}{z_n})
\end{align*}\)}}
where \(g\) is a polynomial of fixed degree \(\leq \sup_{n} k_n\) uniquely determined up to an additive constant.
Tags: analysis complex_analysis entire_meromorphic
<!--ID: 1626995479618-->
END
\end{anki}
\begin{proof}[Proof of Weierstrass Product Theorem]
\end{proof}
Of course, we can immediately leverage our classification of entire functions to classify meromorphic functions on \(\C\).
\begin{cor}[Classification of Meromorphic Functions on \(\C\)]
Every function \(F\) which is meromorphic in the whole plane can be expressed uniquely by:
\begin{align*}
F(z) = f(z)\frac{g(z)}{h(z)}
\end{align*}
where \(f(z)\) is a non-vanishing entire function, \(g(z)\) is the canonical Weierstrass product corresponding to the zeros of \(F\) and \(h(z)\) is the canonical Weierstrass product corresponding to the poles of \(F\).
\end{cor}
We can use either this form or equivalently the form
\begin{align*}
F(z) = e^{f(z)} \frac{g(z)}{h(z)}
\end{align*}
as our canonical choice depending on the context.
\begin{anki}
START
MathJaxCloze
Text: Every function \(F\) which is meromorphic in the whole plane can be expressed uniquely by:
{{c1::\(\begin{align*}
F(z) = f(z)\frac{g(z)}{h(z)}
\end{align*}\)}}
where \(f(z)\) is a non-vanishing entire function, \(g(z)\) is the canonical Weierstrass product corresponding to the {{c1::zeros of \(F\)}} and \(h(z)\) is the canonical Weierstrass product corresponding to the {{c1::poles of \(F\)}}.
Extra: We can use either this form or equivalently the form
\(\begin{align*}
F(z) = e^{f(z)} \frac{g(z)}{h(z)}
\end{align*}\)
as our canonical choice depending on the context.
Tags: analysis complex_analysis entire_meromorphic
<!--ID: 1626995479636-->
END
\end{anki}
While this form is natural, we will explore other constructions for meromorphic functions later that will prove more fruitful.\\
First, we give an example of a few Weierstrass products and investigate the structure of \(\left\{ k_n \right\} \).
\begin{exmp}
\begin{align*}
\sin(\pi z) = \pi z \prod_{n=1}^{\infty} \left( 1- \frac{z^2}{n^2} \right) .
\end{align*}
so, taking the logarithmic derivative of the product term by term,
\begin{align*}
  \pi \cot(\pi z) = \frac{1}{z} + \sum_{n=1}^{\infty} \frac{2z}{z^2-n^{2}},
\end{align*}
and differentiating once more,
\begin{align*}
  \frac{\pi ^2}{\sin^2(\pi z)} = \sum_{n=-\infty}^{\infty} \frac{1}{(z-n)^{2}}
\end{align*}
\end{exmp}
% \printindex
\end{document}
| {
"alphanum_fraction": 0.6699669967,
"avg_line_length": 44.6004672897,
"ext": "tex",
"hexsha": "02d5d2e804d3f48231f265e1f884990c24793d40",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "gjgress/LibreMath",
"max_forks_repo_path": "Complex Analysis/Notes/source/InfiniteProducts.tex",
"max_issues_count": 12,
"max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b",
"max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z",
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "gjgress/Libera-Mentis",
"max_issues_repo_path": "Complex Analysis/Notes/source/InfiniteProducts.tex",
"max_line_length": 362,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "gjgress/Libera-Mentis",
"max_stars_repo_path": "Complex Analysis/Notes/source/InfiniteProducts.tex",
"max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z",
"num_tokens": 6559,
"size": 19089
} |
\documentclass[twocolumn]{article}
\begin{document}
\title{Primary Jungle Game Summary}
\author{Phil Crow}
\maketitle
\section{Premise}
Players are campaign managers who first choose a candidate. They guide the candidate to the Iowa Jungle Caucus in which all candidates compete together
regardless of party. Each candidate has poll chips which are equally divided at the start. Each turn moves the voting closer. When voting day arrives the
candidate with the most poll chips wins.
\section{Candidates}
There are 16 candidates to choose from. This is probably too many, but making them up was the most fun. Here are some examples:
Fiona Carlson - Business executive hired a few, laid off a few more.
Ronald Chump - Business magnate, I'm afraid to say more. He can be a tad litigious and I wrote the disclaimer myself.
Ed Coast - Southern senator who loves Jesus and hates immigrants, except himself (he was born in Mexico, to a US mother and a Swedish father), never smiles.
Mallory Hinton - Former senator and secretary of state who long ago stepped out of her political husband's long shadow. Still wants the validation of winning one more race.
Earnie Painter - Sitting senator from New England with a strong affinity for cold winters, especially those in Scandinavia. Feel the Pain'.
Mick Sanatarium - Former purple state senator, lost big, still trying for a comeback. Wishes google observed single issue right to be forgotten in the US like it does in Europe.
Jethro Shrub - Fourth son of great old political family. Wonders why all these other people have bothered to run.
Aaron Snore - Former senator from a southern state who loves to talk about the problem of climate change when he is not running for office.
\subsection{Disclaimer}
Any similarity between persons in this game and any person living or dead is purely coincidental, unless that person ran for office, hosted or appeared on a TV show
(cable or broadcast) in which case they are public figures and should expect a certain amount of humor at their expense. Oh, and I have only good things to say about Nancy Kassebaum.
\section{Mechanics}
There is a board for the calendar, draw and discard piles, and states where candidates open field offices.
On each turn candidates raise money, pay expenses, play a weapon card, open or close offices, and draw.
The key activity is playing a weapon card from a hand of five. Each weapon has a category. Categories include: Attack Ad, Scandal, Defeat Scandal, Stump for a group, Endorsement.
There are 137 weapon cards, instead of 140, because honesty, integrity, and foresight don't seem to apply to any of the candidates.
\section{Weapons}
Each weapon card is unique. The rules for play are the same for each card type. Currently the rules are printed on each card.
Here are some examples of the weapons:
\subsection{Attack Ads}
Claim your opponent supports the mohair subsidy.
Claim your opponent wants to take away everyone's steak knives.
Claim your opponent denies the theory of gravity.
Suggest that your opponent favors requiring each voter to recite the pledge of allegiance before getting a ballot.
Remind voters that your opponent still uses a flip phone.
Accuse your opponent of wanting to tax fingernail clippers.
Accuse your opponent of preferring soccer to North American football.
Accuse your opponent of wanting to tax Little League tickets.
Show your opponent wearing plaid Bermuda shorts, with black knee high socks and sandals.
Claim your opponent supports NSA surveillance devices embedded in TV remote controls.
\subsection{Endorsement from...}
the National Association of Puppy Owners
the Society of Underweight Engineers
Parents of Children with Excellent Teeth (PCET)
National Reptile Association (the real NRA)
Student Selfie Society
Fraternal Order of \textbf{Lawn} Enforcement Officers
Clown Car Owner's Association
United Tire Balancers Union
\subsection{Stump for a Group}
Stump speeches are more effective if given in states where the candidate has field offices open. Here are some groups candidates might stump for:
a high school chess club
a group of mall Santas
a pie making contest
a federation of foosball players
a rotary dial phone owner's club
a convention of county seat tourist bureau coordinators
a county fair rabbit show
the pessimists club national convention
a travelling unicycle troup tryout camp
a group of Johnny Cash impersonators
the contestants in a lawn tractor pull
\subsection{Teach Your Candidate}
There are various cards for instructing the candidates. These improve the effect of policy speeches. You could teach them:
the difference between a tax bracket and a tax loophole.
the relationship between the national debt and the current year budget deficit.
the difference between Slovenia and Slovakia.
the difference between Idaho and Iowa.
the difference between state and federal budgeting.
how to order at a fast food restaurant.
how much a typical worker actually makes.
that cloud computing is not affected by the jet stream.
\subsection{Speak}
Your candidate might choose to give a speech about one of these policy topics:
the rising price of envelopes
Estonian relations
the tax on suction cup tipped arrows
the need for a tax on dental floss
the risk to Thanksgiving due to the monopoly on canned pumpkin
the porousness of our Canadian border
the importance of uniforms for college students
the need for more skilled penny polishers
the beauty of the national parks your candidate has not visited
\end{document}
| {
"alphanum_fraction": 0.7955317248,
"avg_line_length": 34.537037037,
"ext": "tex",
"hexsha": "53212eb942df9b9429bcf8c7f365d9614043b4f8",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c358d9b21ed366a3f7c92b2a6cdbedd200871669",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "philcrow/primary-jungle-game",
"max_forks_repo_path": "summary.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c358d9b21ed366a3f7c92b2a6cdbedd200871669",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "philcrow/primary-jungle-game",
"max_issues_repo_path": "summary.tex",
"max_line_length": 182,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "c358d9b21ed366a3f7c92b2a6cdbedd200871669",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "philcrow/primary-jungle-game",
"max_stars_repo_path": "summary.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1169,
"size": 5595
} |
\chapter{Research Method}
\input{content/03researchMethod/01methodEvaluation.tex}
\input{content/03researchMethod/02researchModel.tex}
\input{content/03researchMethod/03operationalization.tex}
| {
"alphanum_fraction": 0.8608247423,
"avg_line_length": 32.3333333333,
"ext": "tex",
"hexsha": "dc19ae206ee911a6d9670b6d68fb46c54f2ccd86",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cb3dd3d7541e2fecba482a29facb67cbe4aa2edc",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "trahloff/bachelorThesis",
"max_forks_repo_path": "content/archive/03researchMethod/master.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cb3dd3d7541e2fecba482a29facb67cbe4aa2edc",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "trahloff/bachelorThesis",
"max_issues_repo_path": "content/archive/03researchMethod/master.tex",
"max_line_length": 57,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "cb3dd3d7541e2fecba482a29facb67cbe4aa2edc",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "trahloff/bachelorThesis",
"max_stars_repo_path": "content/archive/03researchMethod/master.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 49,
"size": 194
} |
\selectlanguage{british}%
\chapter{An Alternative to Contextuality\label{chap:An-Alternative-to-Contextuality}}
\section{Rationale}
In principle, we have achieved what we set out to; we were able to
understand how BM explains PM, a test of contextuality. Having clarified
various concepts through the heavy machinery of BM, here we attempt
to further our understanding, without using BM. We will start with
refining the assumptions of the PM test. We will see how, when one
of the assumptions is refined, one version holds for QM. The other
is not even experimentally testable. Thereafter, we construct simple
toy models, one of which is unambiguously non-contextual (unlike BM,
where one must restrict the measurement process to make the same claim).
This is used to demonstrate the distinction between the two versions
of multiplicativity. We then generalize this toy model to produce
statistics consistent with QM, for an arbitrary, but discrete Hilbert
space. Finally, we generalize this to continuous variables (without
spins) and obtain a consistent one dimensional completion of QM. Its
relation with other known completions (BM \& RS) is also discussed.
This model aims to facilitate easy value assignments in case of the
phase space versions of determinism and contextuality tests. These
are hard to compute from BM and RS.
\section{Multiplicativity \label{sec:Multiplicativity}}
Let us recall two assumptions from \prettyref{sec:Peres-Mermin-Revisited},
that of non invasiveness and multiplicativity. Let $\hat{B}_{1},\hat{B}_{2},\dots,\hat{B}_{n}$
be a set of commuting observables and also let $m_{i}(\hat{*})$ represent
the value assigned to an operator. When $m$ occurs multiple times
in a given expression, then $i$ encodes the sequence of measurements.
If we assume non invasiveness, then multiplicativity has a clear meaning;
\begin{defn*}
In a Hilbert space $\mathcal{H}$, a map $m_{i}$ from $\mathcal{H}\otimes\mathcal{H}^{\dagger}$
$\to$ $\mathbb{R}$, is \emph{multiplicative} iff
\begin{equation}
m_{1}(f(\hat{B}_{1},\hat{B}_{2},\dots\hat{B}_{n}))=f(m_{1}(\hat{B}_{1}),m_{1}(\hat{B}_{2}),\dots m_{1}(\hat{B}_{n})).\label{eq:multiplicativity}
\end{equation}
\end{defn*}
Here we have generalized the notion from simply a product to any arbitrary
function. We have used $1$ as the subscript for each $m$, since
in this case, the sequence of a measurement is irrelevant. If one
relaxes the no-disturbance assumption, then the following also becomes
a possibility.
\begin{defn*}
In a Hilbert space $\mathcal{H}$, a map $m_{i}$ from $\mathcal{H}\otimes\mathcal{H}^{\dagger}$
$\to$ $\mathbb{R}$, is \emph{sequentially-multiplicative} iff
\begin{equation}
m_{1}(f(\hat{B}_{1},\hat{B}_{2},\dots,\hat{B}_{n}))=f(m_{k_{1}}(\hat{B}_{1}),m_{k_{2}}(\hat{B}_{2}),\dots m_{k_{n}}(\hat{B}_{n})),\label{eq:seqMultiplicativity}
\end{equation}
where $\mathbf{k}\equiv(k_{1},k_{2},\dots k_{n})\in\{(1,2,3,\dots n),(2,1,3,\dots n),$
+ all permutations$\}$.
\end{defn*}
We take a moment to understand the meaning of these statements more
carefully, in the context of QM. Consider the state $\left|\chi\right\rangle =\left|00\right\rangle $,
$\hat{B}_{1}\equiv\hat{\sigma}_{x}\otimes\hat{\sigma}_{y}$ and $\hat{B}_{2}\equiv\hat{\sigma}_{y}\otimes\hat{\sigma}_{x}$
while $\hat{C}\equiv f(\hat{B}_{1},\hat{B}_{2})=\hat{B}_{1}\hat{B}_{2}=\hat{\sigma}_{z}\otimes\hat{\sigma}_{z}$.
Measuring $\hat{B}_{1}$ yields $\pm1$ with equal probabilities,
and so does a measurement of $\hat{B}_{2}$. However, a measurement
of $\hat{C}$ is guaranteed to yield $+1$. QM can't explicitly contradict
\prettyref{eq:multiplicativity}, since it only yields probabilities.
Further, a measurement in QM leads to disturbance, unless of course
we consider simultaneous eigenstates. Therefore, after making a measurement,
say $m_{1}(\hat{B}_{1})$, one can't obtain $m_{1}(\hat{B}_{2})$.
Even if we agree to start the measurement with the same state, we
can't make the hidden variables identical. However, \prettyref{eq:seqMultiplicativity}
is certainly testable in QM, for one can first measure $\hat{B}_{1}$
to obtain $m_{1}(\hat{B}_{1})$, then measure $\hat{B}_{2}$, to find
$m_{2}(\hat{B}_{2})$ and obtain $m_{1}(\hat{B}_{1})m_{2}(\hat{B}_{2})$.
Starting from the state $\left|\chi\right\rangle $ again, one can
measure $\hat{C}$ to get $m_{1}(\hat{C})$ which has a precise value
in this case. One may again claim that experimentally, there may be
some hidden variables which can't be made identical and thus after
measuring the $B$s, it is impossible to obtain $m_{1}(\hat{C})$.
This difficulty is circumvented in this case, by the fact that regardless
of the value of the hidden variable, given $\left|\chi\right\rangle $
and $\hat{C}$, the measurement outcome is certain. Thus, taking
the same state $\left|\chi\right\rangle $ is enough. One can check,
from all the possibilities, if $m_{1}(\hat{B}_{1})m_{2}(\hat{B}_{2})=m_{1}(\hat{C})$.
We tried looking for states and operators that would violate this
condition, but failed. In fact, as we will prove momentarily, \emph{QM
weakly enforces sequential multiplicativity}. However, since restoring
the hidden variables is not possible in general, \emph{QM doesn't
enforce multiplicativity}.
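As a quick numerical check of this example, the following minimal Python/NumPy
sketch verifies that $\hat{B}_{1}\hat{B}_{2}=\hat{\sigma}_{z}\otimes\hat{\sigma}_{z}$,
that a measurement of $\hat{C}$ on $\left|00\right\rangle $ is certain to yield $+1$,
and that measurements of $\hat{B}_{1}$ or $\hat{B}_{2}$ alone yield $\pm1$ with equal
probability.
\begin{verbatim}
import numpy as np

# Pauli matrices and the operators of the example
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

B1 = np.kron(sx, sy)          # sigma_x (x) sigma_y
B2 = np.kron(sy, sx)          # sigma_y (x) sigma_x
C = B1 @ B2                   # equals sigma_z (x) sigma_z

chi = np.zeros(4, dtype=complex)
chi[0] = 1.0                  # the state |00>

print(np.allclose(C, np.kron(sz, sz)))            # True
print(np.vdot(chi, C @ chi).real)                 # 1.0 (and C^2 = I, so no variance)
print(np.vdot(chi, B1 @ chi).real,
      np.vdot(chi, B2 @ chi).real)                # 0.0 0.0 -> outcomes +-1 equally likely
\end{verbatim}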
We worked out two proofs of weak sequential multiplicativity in QM.
The first was a restricted, brute force proof. The second we showed
holds in general, which we will discuss here.
\begin{prop*}
Let the system be in a state, s.t. measurement of $\hat{C}$ yields
repeatable results (same result each time). Then according to QM,
sequential multiplicativity (see \prettyref{eq:seqMultiplicativity})
holds, where $\hat{C}\equiv f(\hat{B}_{1},\hat{B}_{2},\dots\hat{B}_{n})$,
and $\hat{B}_{i}$ are as defined. \end{prop*}
\begin{proof}
Assume without loss of generality that $\hat{B}_{1},\hat{B}_{2},\dots\hat{B}_{n}$
form a mutually compatible (commuting) and complete set of operators.
If, for instance, the set is not complete, then one can add the
missing operators and label them as above. It follows that $\exists$
$\left|\mathbf{b}=\left(b_{1}^{(l_{1})},b_{2}^{(l_{2})},\dots b_{n}^{(l_{n})}\right)\right\rangle $
s.t. $\hat{B}_{i}\left|\mathbf{b}\right\rangle =b_{i}^{(l_{i})}\left|\mathbf{b}\right\rangle $,
where $l_{i}$ indexes the eigenvalues corresponding to $\hat{B}_{i}$
and that $\sum_{\mathbf{b}}\left|\mathbf{b}\right\rangle \left\langle \mathbf{b}\right|=\hat{\mathbb{I}}$.
Let the state of the system be given by $\left|\psi\right\rangle $
and it must be s.t. $\hat{C}\left|\psi\right\rangle =c\left|\psi\right\rangle $,
by assumption. For the statement to follow, one need only show that
$\left|\psi\right\rangle $ must be made of only those $\left|\mathbf{b}\right\rangle $s,
which satisfy $c=f(b_{1}^{(l_{1})},b_{2}^{(l_{2})},\dots b_{n}^{(l_{n})})$.
This is the crucial step. Proving this is so is trivial. We start
with $\hat{C}\left|\psi\right\rangle =c\left|\psi\right\rangle $
and take its inner product with $\left\langle \mathbf{b}\right|$
to get
\begin{eqnarray*}
\left\langle \mathbf{b}\right|\hat{C}\left|\psi\right\rangle & = & c\left\langle \mathbf{b}|\psi\right\rangle ,\\
\left\langle \mathbf{b}\right|f(\hat{B}_{1},\hat{B}_{2},\dots\hat{B}_{n})\left|\psi\right\rangle & = & c\left\langle \mathbf{b}|\psi\right\rangle ,\\
f(b_{1}^{(l_{1})},b_{2}^{(l_{2})},\dots b_{n}^{(l_{n})})\left\langle \mathbf{b}|\psi\right\rangle & = & c\left\langle \mathbf{b}|\psi\right\rangle .
\end{eqnarray*}
Also, we have $\left|\psi\right\rangle =\sum_{\mathbf{b}}\left\langle \mathbf{b}|\psi\right\rangle \left|\mathbf{b}\right\rangle $,
from completeness. If we consider $\left|\mathbf{b}\right\rangle $s
for which $\left\langle \mathbf{b}|\psi\right\rangle \neq0$, then
we can conclude that indeed $c=f(b_{1}^{(l_{1})},b_{2}^{(l_{2})},\dots b_{n}^{(l_{n})})$.
However, when $\left\langle \mathbf{b}|\psi\right\rangle =0$, viz.
$\left|\mathbf{b}\right\rangle $s that are orthogonal to $\left|\psi\right\rangle $,
then nothing can be said. We can thus conclude that $\left|\psi\right\rangle $
is made only of those $\left|\mathbf{b}\right\rangle $s that satisfy
the required relation. That completes the proof.
\end{proof}
Note that we can't enforce sequential multiplicativity in general,
because of the hidden variable resetting objection that arises, which
was discussed with the example. However, in the PM case, where $\hat{R}_{i}$
and $\hat{C}_{j}$ are just $\pm\hat{\mathbb{I}}$, it follows that
all states are their eigenstates. Consequently, for these operators,
sequential multiplicativity must always hold. With the two notions
well defined, we now proceed with constructing two simple models,
which don't satisfy the non-invasive assumption.
\section{Simple Models}
We aim to distinguish the notion of contextuality and multiplicativity
by means of two simple models. These will not reproduce the statistics
as predicted by QM, but serve as examples and are generalized later.
\subsection{Contextual, Memory Model\label{sub:Contextual,-Memory-Model}}
The Memory Model presented here, is perhaps the simplest contextual
model. It is also non-multiplicative and invasive, viz. it doesn't
satisfy any of (1), (2) and (3), as listed in \prettyref{sec:Peres-Mermin-Revisited}.
The model is assumed to be sequentially multiplicative\footnote{in fact, according to QM, for row and column measurements, sequential
multiplicativity is a requirement}, and the assignments are made iteratively through the following algorithm.
We assume that the system has a matrix that can store values and has
a memory that can store the last 3 operators that were measured. Initially
assume that the matrix has all entries equal to $+1$. The algorithm
is that (i) upon measurement of an observable, yield the value as
saved in the matrix, (ii) append the observable to the 3-element memory
and (iii) update the matrix, once the context is known, to satisfy
the PM conditions on the rows and columns.
Let us take a quick example to understand how things are working.
Assume we start with measuring $\hat{A}_{33}$. The system will yield
$m_{1}(\hat{A}_{33})=1$, in accordance with the values stored initially
(see \prettyref{eq:memoryAssignments}). The memory at this stage
would read $\{*,*,\hat{A}_{33}\}$. Since the context is not yet known,
the matrix is left unchanged. Say the next operator measured is $\hat{A}_{23}$.
Then $m_{2}(\hat{A}_{23})=1$, and the memory would be updated to
$\{*,\hat{A}_{33},\hat{A}_{23}\}$. The context is now known, and
we update the matrix to ensure that $m_{1}(\hat{C}_{3})=m_{1}(\hat{A}_{33})m_{2}(\hat{A}_{23})m_{3}(\hat{A}_{13})=-1$.
Since the first two measurements yielded $+1$, we update the matrix so
that $m_{3}(\hat{A}_{13})=-1$ to finally obtain the correct PM constraint.
This has been summarized by the following equations.
\begin{equation}
m_{1}(\hat{A}_{ij})=m_{2}(\hat{A}_{ij})\doteq\left[\begin{array}{ccc}
1 & 1 & 1\\
1 & 1 & 1\\
1 & 1 & 1
\end{array}\right],\,m_{3}(\hat{A}_{ij})\doteq\left[\begin{array}{ccc}
1 & 1 & -1\\
1 & 1 & 1\\
1 & 1 & 1
\end{array}\right].\label{eq:memoryAssignments}
\end{equation}
The reader can convince her(him)self that the assignments are indeed
consistent, regardless of which row/column is measured. There are
two remarks which need to be made. First, note this model is not multiplicative,
since $m_{1}(\hat{A}_{33})m_{1}(\hat{A}_{23})m_{1}(\hat{A}_{13})=1\neq m_{1}(\hat{C}_{3})=-1$,
by construction. Second, observe that the value assigned to the observables
depends explicitly on the context; thus the model is contextual.
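The algorithm just described admits a minimal Python sketch (the class and
variable names are ours); it reproduces the assignments of \prettyref{eq:memoryAssignments}
for the example above.
\begin{verbatim}
import numpy as np

ROW_SIGN = +1             # PM constraint: every row multiplies to +1
COL_SIGN = [+1, +1, -1]   # columns 1 and 2 multiply to +1, column 3 to -1

class MemoryModel:
    """Values are read from a stored matrix; once the context (a row or a
    column) is known, the not-yet-measured entry of that line is patched."""
    def __init__(self):
        self.values = np.ones((3, 3), dtype=int)     # initially all +1
        self.memory = []                             # last measured cells (i, j)

    def measure(self, i, j):
        outcome = self.values[i, j]                  # (i)   yield the stored value
        self.memory = (self.memory + [(i, j)])[-3:]  # (ii)  remember the operator
        self._update_if_context_known()              # (iii) patch the matrix
        return outcome

    def _update_if_context_known(self):
        if len(self.memory) < 2:
            return
        (i1, j1), (i2, j2) = self.memory[-2:]
        prod = self.values[i1, j1] * self.values[i2, j2]
        if i1 == i2 and j1 != j2:                    # a row context
            self.values[i1, 3 - j1 - j2] = ROW_SIGN * prod
        elif j1 == j2 and i1 != i2:                  # a column context
            self.values[3 - i1 - i2, j1] = COL_SIGN[j1] * prod

m = MemoryModel()                 # the example above: measure A_33, then A_23
print(m.measure(2, 2), m.measure(1, 2), m.values[0, 2])   # 1 1 -1
\end{verbatim}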
\subsection{Non-Contextual, Toy Model \label{sub:Non-Contextual-Toy-Model}}
We now discuss another simple model, which is non-contextual and still
consistent with QM. It is also non-multiplicative and invasive, viz.
assumption (1) holds, but (2) and (3) don't, as listed in \prettyref{sec:Peres-Mermin-Revisited}.
Reference to the PM square will be made, and it has been reproduced,
\[
\hat{A}_{ij}\doteq\left[\begin{array}{ccc}
\hat{\mathbb{I}}\otimes\hat{\sigma}_{x} & \hat{\sigma}_{x}\otimes\hat{\mathbb{I}} & \hat{\sigma}_{x}\otimes\hat{\sigma}_{x}\\
\hat{\sigma}_{y}\otimes\hat{\mathbb{I}} & \hat{\mathbb{I}}\otimes\hat{\sigma}_{y} & \hat{\sigma}_{y}\otimes\hat{\sigma}_{y}\\
\hat{\sigma}_{y}\otimes\hat{\sigma}_{x} & \hat{\sigma}_{x}\otimes\hat{\sigma}_{y} & \hat{\sigma}_{z}\otimes\hat{\sigma}_{z}
\end{array}\right],
\]
for convenience. The assignments are made by a three step process.\\
\quad{}1. Initial State: Choose an appropriate initial state $\left|\psi\right\rangle $
(say $\left|00\right\rangle $).\\
\quad{}2. Hidden Variable (HV): Toss a coin and assign $c=+1$ for
heads, else assign $c=-1$.\\
\quad{}3. Predictions/Assignments: For an operator $\hat{p}'\in\{\hat{A}_{ij},\hat{R}_{i},\hat{C}_{j}\,(\forall\,i,j)\}$
check if $\exists$ a $\lambda$, s.t. $\hat{p}'\left|\psi\right\rangle =\lambda\left|\psi\right\rangle $.
If $\exists$ a $\lambda$, then assign $\lambda$ as the value. Else,
assign $c$.\\
So far the model has only predicted the outcomes of measurements.
If however, a measurement is made on the system, then although we
know the result from the predictions, we must update the state $\left|\psi\right\rangle $
of the system, depending on which observable is measured and arrive
at new predictions, using the aforesaid steps. The following final
step fills precisely this gap. \\
\quad{}4. Update: Say $\hat{p}$ was observed. If $\hat{p}$ is s.t.
$\hat{p}\left|\psi\right\rangle =\lambda\left|\psi\right\rangle $,
then leave the state unchanged. Else, find $\left|p_{\pm}\right\rangle $
(eigenkets of $\hat{p}$), s.t. $\hat{p}\left|p_{\pm}\right\rangle =\pm\left|p_{\pm}\right\rangle $
and update the state $\left|\psi\right\rangle \to\left|p_{c}\right\rangle $.
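The four steps admit a compact numerical sketch (in Python/NumPy, with our own
function names); for operators with eigenvalues $\pm1$ the update of step 4 amounts
to projecting the state onto the $c$-eigenspace and renormalising, which is what
happens in the worked example below.
\begin{verbatim}
import numpy as np

def assign(psi, P, c, tol=1e-9):
    """Step 3: if psi is an eigenket of P return the eigenvalue, else return c."""
    v = P @ psi
    lam = np.vdot(psi, v)
    if np.linalg.norm(v - lam * psi) < tol:
        return float(lam.real)
    return c

def update(psi, P, c, tol=1e-9):
    """Step 4 for P with eigenvalues +-1: leave an eigenket alone, otherwise
    project onto the c-eigenspace, using (I + c P)/2, and renormalise."""
    v = P @ psi
    lam = np.vdot(psi, v)
    if np.linalg.norm(v - lam * psi) < tol:
        return psi
    proj = 0.5 * (psi + c * v)
    return proj / np.linalg.norm(proj)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A33, A23, A13 = np.kron(sz, sz), np.kron(sy, sy), np.kron(sx, sx)

psi = np.array([1, 0, 0, 0], dtype=complex)      # |00>
outcomes = []
for A, c in [(A33, -1), (A23, -1), (A13, +1)]:   # the coin tosses of the example
    outcomes.append(assign(psi, A, c))
    psi = update(psi, A, c)
print(outcomes)     # [1.0, -1.0, 1.0]; the product is -1, as required for C_3
\end{verbatim}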
Let us explicitly apply the aforesaid algorithm, to the state $\left|\psi\right\rangle =\left|00\right\rangle $.
Say we obtained tails, and thus $c=-1$. To arrive at the assignments,
note that $\left|00\right\rangle $ is an eigenket of only $\hat{R}_{i},\hat{C}_{j}$
and $\hat{A}_{33}=\hat{\sigma}_{z}\otimes\hat{\sigma}_{z}$. Thus,
in the first iteration, all these should be assigned their respective
eigenvalues. The remaining operators must be assigned $c$ (see \prettyref{eq:toyModel}).
Two remarks are in order. First, this model is \emph{non}-multiplicative,
for $m_{1}(\hat{C}_{3})=-1\neq m_{1}(\hat{A}_{13})m_{1}(\hat{A}_{23})m_{1}(\hat{A}_{33})=1$.
Second, we must impose sequential multiplicativity as a consistency
check of the model, which in particular entails that $m_{1}(\hat{C}_{3})=m_{1}(\hat{A}_{33})m_{2}(\hat{A}_{23})m_{3}(\hat{A}_{13})$.
To illustrate this, we must choose to measure $\hat{A}_{33}$. According
to step 4, since $\left|00\right\rangle $ is an eigenstate of $\hat{A}_{33}$,
the final state remains $\left|00\right\rangle $. %
For the next iteration, $i=2$, say we again yield $c=-1$. Since
$\left|\psi\right\rangle $ is also unchanged, the assignment remains
invariant. For the final step, we choose to measure $\hat{p}=\hat{A}_{23}(=\hat{\sigma}_{y}\otimes\hat{\sigma}_{y})$,
to proceed with sequentially measuring $\hat{C}_{3}$. To simplify
calculations, we note
\[
\left|00\right\rangle =\frac{(\left|\tilde{+}\tilde{-}\right\rangle +\left|\tilde{-}\tilde{+}\right\rangle )/\sqrt{2}+(\left|\tilde{+}\tilde{+}\right\rangle +\left|\tilde{-}\tilde{-}\right\rangle )/\sqrt{2}}{\sqrt{2}},
\]
where $\left|\tilde{\pm}\right\rangle =\left|0\right\rangle \pm i\left|1\right\rangle $
(eigenkets of $\hat{\sigma}_{y}$). Since $\left|00\right\rangle $
is manifestly not an eigenket of $\hat{p}$, we must find $\left|p_{-}\right\rangle $,
since $c=-1$. It is immediate that $\left|p_{-}\right\rangle =\left(\left|\tilde{+}\tilde{-}\right\rangle +\left|\tilde{-}\tilde{+}\right\rangle \right)/\sqrt{2}=\left(\left|00\right\rangle +\left|11\right\rangle \right)/\sqrt{2}$,
which becomes the final state. \begin{landscape}
\begin{table}
\begin{equation}
\begin{array}{c|ccc}
\text{Iteration} & i=1 & i=2 & i=3\\
\left|\psi_{\text{init}}\right\rangle & \left|00\right\rangle & \left|00\right\rangle & \frac{\left|00\right\rangle +\left|11\right\rangle }{\sqrt{2}}\\
\text{HV/Toss} & c=-1 & c=-1 & c=+1\\
\\
\text{Predictions} & m_{1}(\hat{A}_{ij})\doteq\left[\begin{array}{ccc}
-1 & -1 & -1\\
-1 & -1 & -1\\
-1 & -1 & +1
\end{array}\right] & m_{2}(\hat{A}_{ij})\doteq\left[\begin{array}{ccc}
-1 & -1 & -1\\
-1 & -1 & -1\\
-1 & -1 & +1
\end{array}\right] & m_{3}(\hat{A}_{ij})\doteq\left[\begin{array}{ccc}
+1 & +1 & +1\\
+1 & +1 & -1\\
+1 & +1 & +1
\end{array}\right]\\
\text{(Assignments)}\\
& m_{1}(\hat{R}_{i}),m_{1}(\hat{C}_{j})=+1\,(j\neq3) & m_{2}(\hat{R}_{i}),m_{2}(\hat{C}_{j})=+1\,(j\neq3) & m_{3}(\hat{R}_{i}),m_{3}(\hat{C}_{j})=+1\,(j\neq3)\\
& m_{1}(\hat{C}_{3})=-1 & m_{2}(\hat{C}_{3})=-1 & m_{3}(\hat{C}_{3})=-1\\
\text{Operator}\\
\text{Measured} & \hat{A}_{33}=\hat{\sigma}_{z}\otimes\hat{\sigma}_{z};m_{1}(\hat{A}_{33})=+1 & \quad\quad\hat{A}_{23}=\hat{\sigma}_{y}\otimes\hat{\sigma}_{y};m_{2}(\hat{A}_{23})=-1\quad\quad & \hat{A}_{13}=\hat{\sigma}_{x}\otimes\hat{\sigma}_{x};m_{3}(\hat{A}_{13})=+1\\
\\
\left|\psi_{\text{final}}\right\rangle & \left|00\right\rangle & \frac{\left|00\right\rangle +\left|11\right\rangle }{\sqrt{2}} & \frac{\left|00\right\rangle +\left|11\right\rangle }{\sqrt{2}}\\
\\
\\
\end{array}\label{eq:toyModel}
\end{equation}
\end{table}
\end{landscape}For the final iteration, $i=3$, say we yield $c=1$.
So far, we have $m_{1}(\hat{A}_{33})=1$ and $m_{2}(\hat{A}_{23})=-1$.
We must obtain $m_{3}(\hat{A}_{13})=1$, independent of the value
of $c$, to be consistent. Let's check that. According to step 3,
since $\hat{\sigma}_{x}\otimes\hat{\sigma}_{x}\left(\left|00\right\rangle +\left|11\right\rangle \right)/\sqrt{2}=1\left(\left|00\right\rangle +\left|11\right\rangle \right)/\sqrt{2}$,
$m_{3}(\hat{A}_{13})=1$ indeed. As a remark, it may be emphasised
that $m_{2}(\hat{A}_{33})=m_{3}(\hat{A}_{33})$ and $m_{2}(\hat{A}_{23})=m_{3}(\hat{A}_{23})$,
which essentially expresses compatibility of these observables, viz.
measurement of $\hat{A}_{13}$ doesn't affect the result one would
obtain by measuring operators compatible to it (granted they have
been measured once before).
While this model serves as a simple counter-example to the usual `QM
is contextual' conclusion one draws from the PM situation, the model fails
to yield the appropriate statistics. For instance, if we consider
simply the state $\left|\psi\right\rangle =\cos\theta\left|++\right\rangle +\sin\theta\left|--\right\rangle $,
then it follows that a measurement of $\hat{A}_{11}=\hat{\mathbb{I}}\otimes\hat{\sigma}_{x}$
would yield $\pm1$ with equal probability according to the toy model,
whereas the probabilities should depend on $\theta$ according
to QM.\footnote{This was pointed out by Prof. Arvind.}
\section{Generic Models}
In this section, we present arguably, the simplest HV theories, one
of which is for spin like systems (discrete Hilbert space) while the
other is for phase space (continuous Hilbert Space) for spin-less
particles. These models are essentially non-contextual completions
of QM, which facilitate easy computation of value assignment to operators.
\subsection{Discretely C-ingle Theory \label{sub:Discretely-C-ingle-Theory}}
The state of the system is $\left|\chi\right\rangle $, defined on
a discrete Hilbert space (spin like) and we wish to assign a value
to an arbitrary operator $\hat{A}=\sum_{a}a\left|a\right\rangle \left\langle a\right|$,
which has eigenvalues $\{a_{\text{min}}=a_{1}\le a_{2}\le\dots\le a_{n}=a_{\text{max}}\}$.
This theory has the following postulates: \\
1. Initial HV: Pick a $c\in[0,1]$, from a uniform random distribution.\\
2. Assignment/Prediction: The value assigned to $\hat{A}$ is given
by finding the smallest $a$ s.t.
\[
c\le\sum_{a'=a_{\text{min}}}^{a}\left|\left\langle a'|\chi\right\rangle \right|^{2}.
\]
A measurement of $\hat{A}$, would yield $a$.\\
3. Update: After measuring an operator, the state must be updated
(collapsed) in accordance with the rules of QM.
To see how this works, we restrict ourselves to a single spin case.
Say $\left|\chi\right\rangle =\cos\theta\left|0\right\rangle +\sin\theta\left|1\right\rangle $,
and $\hat{A}=\hat{\sigma}_{z}=\left|0\right\rangle \left\langle 0\right|-\left|1\right\rangle \left\langle 1\right|$.
Now, according to the postulates of this theory, $\hat{A}$ will be
assigned $+1$, if $c\le\cos^{2}\theta$, else $\hat{A}$ will be
assigned $-1$. It follows then, from $c$ being uniformly random
in $[0,1]$, that the statistics agree with predictions of QM. The
reader can convince him(her)self that the said scheme works in general,
specifically for the PM situation.
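A minimal Python/NumPy sketch of the assignment rule (our own function names)
makes the agreement with the Born rule explicit for the single-spin example above.
\begin{verbatim}
import numpy as np

def cingle_assign(chi, A, c):
    """Smallest eigenvalue a of A such that c <= cumulative weight of {a' <= a}."""
    vals, vecs = np.linalg.eigh(A)              # eigenvalues in ascending order
    probs = np.abs(vecs.conj().T @ chi) ** 2    # |<a'|chi>|^2
    cum = np.cumsum(probs)
    idx = np.searchsorted(cum, c)               # first index with c <= cum
    return vals[min(idx, len(vals) - 1)]

theta = 0.3
chi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
sz = np.diag([1.0, -1.0])

# with c uniform in [0,1] the statistics reproduce QM: P(+1) = cos^2(theta)
samples = [cingle_assign(chi, sz, np.random.rand()) for _ in range(100000)]
print(np.mean(np.isclose(samples, 1.0)), np.cos(theta) ** 2)
\end{verbatim}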
We can clearly see that the assignment is non-contextual, since given
an operator and a state (+ the hidden variable), the value is uniquely
assigned. The assignment is non-multiplicative, because this scheme,
when applied to the PM situation, becomes effectively the same as
the toy model (barring the statistics). We have already seen explicitly
that the toy model is non-multiplicative. The theory is of course invasive,
for the state is updated after each measurement.
\subsection{Continuously C-ingle Theory | Preliminary}
The state of the system is $\left|\psi\right\rangle $, defined for
a single spin-less particle, and we wish to assign a value to an arbitrary
operator
\[
\hat{A}=\int_{a_{\text{min}}}^{a_{\text{max}}}a\left|a\right\rangle \left\langle a\right|\,da.
\]
This theory has the following postulates: \\
1. Initial HV: Pick a $c\in[0,1]$, from a uniform random distribution.\\
2. Assignment/Prediction: The value assigned to $\hat{A}$ is given
by an $a$ that satisfies
\[
c=\int_{a_{\text{min}}}^{a}\left|\psi(a')\right|^{2}da',
\]
where $\psi(a')=\left\langle a'|\psi\right\rangle $. A measurement
of $\hat{A}$, would yield $a$.\\
3. Update: After measuring an operator, the state must be updated
(collapsed) in accordance with the rules of QM.
The continuous variable version has some interesting features of its own.
First, note that for $\hat{A}=\hat{q}$ (the position), one can predict
the trajectory of a particle. This is quite intuitive to observe graphically.
Say $c$ had a value as shown in the graph (see \prettyref{fig:Illustration-of-cingle})
\begin{figure}
\centering{}\includegraphics[width=0.95\columnwidth]{Chapter4/Figs/Vector/cingle}\caption{Illustration of the underlying principle of the continuously c-ingle
model\label{fig:Illustration-of-cingle}}
\end{figure}
and the initial state is given by a Gaussian. Now the cumulative of
this can be quickly constructed, and where the $c$ line intersects
the graph, that will yield the position of the particle. At some later
time if the Gaussian shifts (assume it had some momentum), then for
the same value of $c$, the particle's location would have moved with
the Gaussian as expected. In fact this is a general feature and can
be done for all observables, including $p$. %
\begin{comment}
Reasoning similar to the previous subsection entails that this theory
is also non-multiplicative and non-contextual.
\end{comment}
The theory is explicitly non-contextual, since values are uniquely
assigned to operators, given $\psi$ and $c$. However, one needs
more analysis to extend this to multiple particles. Once that is accomplished,
one must show that measuring the observables using the Hamiltonian
approach would yield results consistent with those obtained from
the postulates of the theory. To show then that the theory is non-contextual,
one would be required to show that all conceivable measurement schemes
would produce the same result, else, like BM, this theory wouldn't
be able to predict the values of operators with only the state and
hidden variable information. %
\begin{comment}
{[}TODO: write this section properly; the main idea is that to be
consistent with the method of measuring arbitrary observables, by
position measurements, one can't assign values in the aforesaid way.
This way, you end up with two assignments for the same observable
and have to accept eventually, that the result you get would depend
on the experiment that is performed and then there's no meaning one
can assign to an observable having a value; thus if I assign a value
to position, then effectively all my remaining freedom is lost. Thus,
one would have to restrict the theory in some, which is ok for spins,
black box the other degree of freedom, but here one can't do this.
{]}
\end{comment}
Once extended appropriately, this theory would effectively overcome
all the challenges we had for testing the continuous variable extensions
of the GHZ and contextuality tests. Predictions of observables will
be straightforward. To obtain values from BM, one had to do a detailed
analysis; for RS, evaluating values of observables other than $q$
and $p$ (even $q+p$) was hard.
We had a hunch, however, that the trajectory so obtained must be
identical to the Bohmian trajectory. This in fact turns out to be so.
Let us take a moment to prove this, for using the same method, one
can evaluate a trajectory for the momentum also. This would be distinct
from the momentum in BM. In this case, a measurement of $\hat{p}$
yields $p$ (the assigned value), unlike in BM.
\begin{prop*}
$\dot{q}=\nabla S/m$ in a continuously c-ingle theory, for a single
particle, with one degree of freedom.\end{prop*}
\begin{proof}
Let $f$ quantify some property of a particle. To point to a particle,
we can either use the variables $c,t$ or $q,t$, since we know how
$q$ and $c$ are related. Thus, one can write $f(c,t)$ or $f(q,t)$.\footnote{If the two expressions look like different functions, the
notation is misleading: the position of the arguments is not important,
and this is not like a computer function. $f(q,t)$ is simply the statement
that $f$ is regarded as a function of $q$ and $t$.} Now we have
\[
\left[\frac{\partial f}{\partial t}\right]_{c}=\left[\frac{\partial f}{\partial q}\right]_{t}\left[\frac{\partial q}{\partial t}\right]_{c}+\left[\frac{\partial f}{\partial t}\right]_{q},
\]
where we used $f(q,t)$ on the RHS. Note also that $\dot{q}$ refers
to the velocity of a given particle, thus it must equal $\left[\partial q/\partial t\right]{}_{c}$.
Now for $f=\int_{-\infty}^{q}\left|\psi(q')\right|^{2}dq'=c$, the
LHS disappears. Consequently, we have
\[
\left[\frac{\partial q}{\partial t}\right]_{c}=-\left[\frac{\partial f}{\partial t}\right]_{q}/\left[\frac{\partial f}{\partial q}\right]_{t},
\]
which is in a form which can be evaluated directly from the relation
given. $\left[\partial f/\partial q\right]_{t}=\left|\psi(q,t)\right|^{2}=R^{2}$,
if $\psi=Re^{iS/\hbar}$. It is easy to show that the probability
current is given by $R^{2}\nabla S/m$. Using $\left[\partial\left|\psi(q,t)\right|^{2}/\partial t\right]_{q}=-\nabla\cdot(R^{2}\nabla S/m)$,
which is effectively the probability conservation statement derived
from Schr\"odinger's equation written in polar form, it follows that
$-\left[\partial f/\partial t\right]_{q}=R^{2}\nabla S/m$. Thus we
have $\dot{q}=\nabla S/m$ as claimed.
\end{proof}
A similar equation for $\dot{p}$ can be obtained, and an appropriate
dynamics can perhaps then be constructed. It may be mentioned as a
remark that, quite independently of its motivation, this scheme can
be used to compute Bohmian trajectories. It is much more efficient,
as is apparent from a comparison of the resources required between
computing an integral and the steps listed in \prettyref{sec:Bohm's-Theory,-Bohmian}.
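As an illustration of the last remark, the following Python sketch (assuming
$\hbar=m=1$ and the standard spreading formula for a free Gaussian packet) obtains
a trajectory by numerically inverting the cumulative of $\left|\psi\right|^{2}$
at each time for a fixed $c$; by the proposition above, the resulting curve is the
Bohmian trajectory of the packet.
\begin{verbatim}
import numpy as np

def quantile_position(density, x, c):
    """Position assigned by the continuously c-ingle rule: the point where the
    cumulative of |psi|^2 reaches c (numerical inversion of the cumulative)."""
    cdf = np.cumsum(density) * (x[1] - x[0])
    cdf /= cdf[-1]                               # guard against grid error
    return np.interp(c, cdf, x)

x = np.linspace(-20.0, 20.0, 4001)
x0, p0, sigma0, c = 0.0, 1.0, 1.0, 0.8           # packet parameters, fixed HV

for t in [0.0, 2.0, 4.0]:
    sigma_t = sigma0 * np.sqrt(1.0 + (t / (2.0 * sigma0 ** 2)) ** 2)
    centre = x0 + p0 * t                         # free packet: centre drifts,
    density = np.exp(-(x - centre) ** 2 / (2.0 * sigma_t ** 2))   # width grows
    print(t, round(quantile_position(density, x, c), 3))
\end{verbatim}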
\section{The Verdict: Contextuality vs. Non-Multiplicativity}
We have already constructed an explicit non-contextual model, which
is consistent with QM. This model we knew had to be non-multiplicative.
We will see how non-multiplicativity gives rise to what one might
mistake for contextuality.
The approach is to take some compatible observables, and to construct
a `super-operator', a measurement of which can yield the values of
all of these compatible observables in a single shot. We will then see
that this outcome need not be consistent with the outcomes of measuring the
compatible operators individually. One might be tempted to
treat this as contextuality, but according to the definition in \prettyref{sec:Peres-Mermin-Revisited},
we note that this is non-multiplicativity, as per \prettyref{sec:Multiplicativity}.
Let us construct an explicit situation and make more precise statements.
Borrowing the notation from \prettyref{sec:Multiplicativity}, imagine
\[
\hat{B}_{1}=\hat{\sigma}_{z}\otimes\hat{\mathbb{I}}=\left|00\right\rangle \left\langle 00\right|+\left|01\right\rangle \left\langle 01\right|-\left[\left|10\right\rangle \left\langle 10\right|+\left|11\right\rangle \left\langle 11\right|\right],
\]
\[
\hat{B}_{2}=\hat{\mathbb{I}}\otimes\hat{\sigma}_{z}=\left|10\right\rangle \left\langle 10\right|+\left|11\right\rangle \left\langle 11\right|-\left[\left|00\right\rangle \left\langle 00\right|+\left|01\right\rangle \left\langle 01\right|\right],
\]
while we define
\[
\hat{C}=f(\{\hat{B}_{i}\})=0.\left|00\right\rangle \left\langle 00\right|+1.\left|01\right\rangle \left\langle 01\right|+2.\left|10\right\rangle \left\langle 10\right|+3.\left|11\right\rangle \left\langle 11\right|.
\]
$\hat{C}$ maybe viewed as a function of $\hat{B}_{1}$, $\hat{B}_{2}$
and other operators $\hat{B}_{i}$ which are constructed to obtain
a maximally commuting set. A measurement of $\hat{C}$ will collapse
the state into one of the states which are simultaneous eigenkets of
$\hat{B}_{1}$ and $\hat{B}_{2}$. Consequently, from the observed value of $\hat{C}$,
one can deduce the values of $\hat{B}_{1}$ and $\hat{B}_{2}$. Now
consider $\sqrt{2}\left|\chi\right\rangle =\left|10\right\rangle +\left|01\right\rangle $,
for which $m_{1}(\hat{B}_{1})=1$, and $m_{1}(\hat{B}_{2})=1$, using
the discretely c-ingle theory (\prettyref{sub:Discretely-C-ingle-Theory}),
with $c<0.5$. However, $m_{1}(\hat{C})=1$, from which one can deduce
that $B_{1}$ was $+1$, while $B_{2}$ was $-1$. This property itself,
one may be tempted to call contextuality, viz. the value of $B_{1}$
depends on whether it is measured alone or with the remaining $\{B_{i}\}$.
However, it must be noted that $B_{1}$ has a well-defined value,
and so does $\hat{C}$. Thus by our accepted definition, there's no
contextuality. It is just that $m_{1}(\hat{C})\neq f(m_{1}(\hat{B}_{1}),m_{1}(\hat{B}_{2}),\dots)$,
viz. the theory is non-multiplicative. Note that after measuring $\hat{C}$
however, $m_{2}(\hat{B}_{1})=+1$ and $m_{2}(\hat{B}_{2})=-1$ (for
any value of $c$) consistent with those deduced by measuring $\hat{C}$.
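The structure of this example is easy to verify numerically: in the computational
basis all three operators are diagonal, so each eigenvalue of $\hat{C}$ fixes the
pair of values of $\hat{B}_{1}$ and $\hat{B}_{2}$ (a minimal sketch in Python).
\begin{verbatim}
import numpy as np

# basis order |00>, |01>, |10>, |11>, with B1, B2, C as defined above
B1 = np.diag([1, 1, -1, -1])     # sigma_z (x) I
B2 = np.diag([-1, -1, 1, 1])     # I (x) sigma_z, with the sign convention above
C  = np.diag([0, 1, 2, 3])

for k, label in enumerate(['|00>', '|01>', '|10>', '|11>']):
    print(label, 'C =', C[k, k], ' B1 =', B1[k, k], ' B2 =', B2[k, k])
# C = 1 corresponds to |01>, i.e. B1 = +1 and B2 = -1, as used in the text
\end{verbatim}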
\section{Denouement}
We have already learnt that the proof of `contextuality' in fact requires
three assumptions: (1) multiplicativity, (2) non-contextuality and
(3) non-invasiveness. Here we were able to construct an explicit non-contextual
theory (for spins) which is non-multiplicative and invasive, but completely
consistent with QM. Succinctly stated, it satisfies (2) but neither
(3) nor (1), serving as a counter-example to the claim that non-contextual
hidden variable theories can't be consistent with QM. We also showed
how the theory might be misconstrued to be contextual and provided
a clarification. This is of interest because the contextuality arising
in BM from the measurement schemes has been a source of confusion
about the said notion arising in the PM situation.
\begin{comment}
\section{First Section of the Third Chapter}
And now I begin my third chapter here . . . And now to cite some more
people \cite{prime-number-theorem,texbook,SFPT,latex}
\subsection{First Subsection in the First Section . . .}
and some more
\subsection{Second Subsection in the First Section . . . }
and some more . . .
\subsubsection{First subsub section in the second subsection . . . }
and some more in the first subsub section otherwise it all looks the
same doesn\textquoteright t it? well we can add some text to it .
. .
\subsection{Third Subsection in the First Section . . . }
and some more text . . .
\subsubsection{First subsub section in the third subsection . . . }
and some more in the first subsub section otherwise it all looks the
same doesn\textquoteright t it? well we can add some text to it and
some more and some more and some more and some more
\section{Second section with a Table}
Oh I have a table, which I can to refer (See \ref{tab:My-first-table}).
\begin{table}[H]
\hfill{}%
\begin{tabular}{|c|c|c|}
\hline
\textbf{1} & \textbf{2} & \textbf{3}\tabularnewline
\hline
\hline
4 & 5 & 6\tabularnewline
\hline
7 & 8 & 9\tabularnewline
\hline
\end{tabular}\hfill{}
\caption{\label{tab:My-first-table}My first table (I know, it is a really
intuitive name) }
\end{table}
\end{comment}
\selectlanguage{english}%
| {
"alphanum_fraction": 0.7220715918,
"avg_line_length": 55.3541315346,
"ext": "tex",
"hexsha": "4e3d1b1e5567d4e69a7a2b301c6b76ad039c8f28",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "357f4599338906c213e3f453422d6f414a8ac97d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "toAtulArora/msThesisLyX",
"max_forks_repo_path": "Chapter4/Chapter4.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "357f4599338906c213e3f453422d6f414a8ac97d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "toAtulArora/msThesisLyX",
"max_issues_repo_path": "Chapter4/Chapter4.tex",
"max_line_length": 271,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "357f4599338906c213e3f453422d6f414a8ac97d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "toAtulArora/msThesisLyX",
"max_stars_repo_path": "Chapter4/Chapter4.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 10152,
"size": 32825
} |
\chapter*{Extended Abstract}
\begin{center}
\begingroup
\renewcommand*{\arraystretch}{1}
\rowcolors{2}{white}{white}
{\makeatletter
\begin{tabular}{p{3.2cm}p{9.6cm}}
Topic: & \thema \\
& \\
Team members: & \verfasserA, \verfasserB, \verfasserC, \verfasserD \\
& \\
Advisor: & \hoschschule \newline \institut \newline \prueferA \\
& \\
\end{tabular}
\makeatother}
\endgroup
\end{center}
\bigskip
In this paper we predict precipitation for a forecast horizon of 35 minutes in an area around Constance.
To this end we use machine learning techniques and train a U-Net on radar images.
We present the precipitation prediction results both with regression and with classification.
Both approaches provide good results. Source code and full-length documentation in German can be found on GitHub: \url{https://github.com/thgnaedi/DeepRain}.
\printbibliography[title={References}, heading=subbibliography]
| {
"alphanum_fraction": 0.7352941176,
"avg_line_length": 30.7096774194,
"ext": "tex",
"hexsha": "da720e5ad646dcb36ce2bb215d4af650d0c719ab",
"lang": "TeX",
"max_forks_count": 9,
"max_forks_repo_forks_event_max_datetime": "2022-01-09T02:48:44.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-08-29T10:00:21.000Z",
"max_forks_repo_head_hexsha": "4cf9323901f38898a3b119faf07e2869630046c3",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "thgnaedi/DeepRain",
"max_forks_repo_path": "Docs/paper/extended_abstract.tex",
"max_issues_count": 30,
"max_issues_repo_head_hexsha": "4cf9323901f38898a3b119faf07e2869630046c3",
"max_issues_repo_issues_event_max_datetime": "2020-01-08T04:39:24.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-10-23T09:06:15.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "thgnaedi/DeepRain",
"max_issues_repo_path": "Docs/paper/extended_abstract.tex",
"max_line_length": 157,
"max_stars_count": 29,
"max_stars_repo_head_hexsha": "4cf9323901f38898a3b119faf07e2869630046c3",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "thgnaedi/DeepRain",
"max_stars_repo_path": "Docs/paper/extended_abstract.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-04T07:23:32.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-10-15T09:55:45.000Z",
"num_tokens": 282,
"size": 952
} |
\chapter*{Vita}
\addtocontents{toc}{
\unexpanded{\unexpanded{\renewcommand{\cftchapdotsep}{\cftnodots}}}%
}
\addcontentsline{toc}{chapter}{Curriculum Vitae}
\doublespacing
Vita may be provided by doctoral students only. The length of the vita is preferably one page. It may include the place of birth and should be written in third person. This vita is similar to the author biography found on book jackets.
\pagenumbering{gobble} | {
"alphanum_fraction": 0.7862068966,
"avg_line_length": 43.5,
"ext": "tex",
"hexsha": "0a106f83560cfe0fc51b75e9f1008af8d15e1fb4",
"lang": "TeX",
"max_forks_count": 23,
"max_forks_repo_forks_event_max_datetime": "2022-03-23T15:44:09.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-07-06T06:33:09.000Z",
"max_forks_repo_head_hexsha": "dcb8dc3a7b4747dd66837fb7e1310d2cd6d77ded",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "atreyee-m/PhD-Thesis",
"max_forks_repo_path": "vita.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "dcb8dc3a7b4747dd66837fb7e1310d2cd6d77ded",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "atreyee-m/PhD-Thesis",
"max_issues_repo_path": "vita.tex",
"max_line_length": 235,
"max_stars_count": 20,
"max_stars_repo_head_hexsha": "dcb8dc3a7b4747dd66837fb7e1310d2cd6d77ded",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "atreyee-m/PhD-Thesis",
"max_stars_repo_path": "vita.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-18T21:22:19.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-07-14T20:06:24.000Z",
"num_tokens": 115,
"size": 435
} |
% Created 2021-07-20 Tue 10:14
% Intended LaTeX compiler: pdflatex
\documentclass[presentation,aspectratio=169]{beamer}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\usepackage{grffile}
\usepackage{longtable}
\usepackage{wrapfig}
\usepackage{rotating}
\usepackage[normalem]{ulem}
\usepackage{amsmath}
\usepackage{textcomp}
\usepackage{amssymb}
\usepackage{capt-of}
\usepackage{hyperref}
\usepackage{khpreamble}
\usepackage{amssymb}
\DeclareMathOperator{\shift}{q}
\DeclareMathOperator{\diff}{p}
\usetheme{default}
\author{Kjartan Halvorsen}
\date{\today}
\title{Polynomial pole placement - part 2}
\hypersetup{
pdfauthor={Kjartan Halvorsen},
pdftitle={Polynomial pole placement - part 2},
pdfkeywords={},
pdfsubject={},
pdfcreator={Emacs 26.3 (Org mode 9.4.6)},
pdflang={English}}
\begin{document}
\maketitle
\section{Intro}
\label{sec:org8bb9cb2}
\begin{frame}[label={sec:orgef2154c}]{Goal of today's lecture}
Understand the design procedure of polynomial pole placement
\end{frame}
\section{2-dof controller}
\label{sec:orga4e419a}
\begin{frame}[label={sec:org6dfc545}]{Two-degree-of-freedom controller}
\begin{center}
\includegraphics[width=0.7\linewidth]{../../figures/2dof-block-explicit}
\end{center}
\begin{align*}
Y(z) &= \frac{F_f(z)H(z)}{1 + z^{-d}F_b(z)H(z)}U_c(z) + \overbrace{\frac{1}{1 + z^{-d}F_b(z)H(z)}}^{S_s(z)}V(z) - \overbrace{\frac{z^{-d}F_b(z)H(z)}{1 + z^{-d}F_b(z)H(z)}}^{T_s(z)}N(z)\\
\end{align*}
\alert{Evidently} \(S_s(z) + T_s(z) = 1\). \alert{Conclusion:} One must find a balance between disturbance rejection and noise attenuation.
\end{frame}
\section{Sensitivity, revisited}
\label{sec:orgaa3ab2a}
\begin{frame}[label={sec:org523ac99}]{The sensitivity function}
\[S_s(z) = \frac{1}{1 + z^{-d}F_b(z)H(z)} = \frac{1}{1 + G_o(z)}= \frac{1}{G_o(z) - (-1)}\]
\begin{columns}
\begin{column}{0.45\columnwidth}
\[|S_s(\mathrm{e}^{i\omega h})| = |S_s(i\omega)| = \frac{1}{| G_o(i\omega) - (-1)|}\]
\alert{The magnitude of the sensitivity function is inverse proportional to the distance of the Nyquist curve to the critical point -1}
\end{column}
\begin{column}{0.65\columnwidth}
\begin{center}
\includegraphics[width=0.6\linewidth]{../../figures/implane-nyquist-margins}
\end{center}
\end{column}
\end{columns}
\end{frame}
\section{RST}
\label{sec:orgb0db44e}
\begin{frame}[label={sec:org73a5512}]{The design procedure}
\end{frame}
\begin{frame}[label={sec:org07f51fc}]{The design procedure}
Given plant model \(H(z)=\frac{B(z)}{A(z)}\) and specifications on the desired closed-loop poles \(A_{cl}(z)\)
\begin{enumerate}
\item Find polynomials \(R(z)\) and \(S(z)\) with \(n_R \ge n_S\) such that
\[ A(z)R(z)z^{d} + B(z)S(z) = A_{cl}(z) \]
\item Factor the closed-loop polynomials as \(A_{cl}(z) = A_c(z)A_o(z)\), where \(n_{A_o} \le n_R\). Choose
\[T(z) = t_0 A_o(z),\] where \(t_0 = \frac{A_c(1)}{B(1)}\).
\end{enumerate}
The control law is then
\[ R(q) u(k) = T(q)u_c(k) - S(q)y(k). \]
The closed-loop response to the command signal is given by
\[ A_c(q)y(k) = t_0 B(q) u_c(k). \]
\end{frame}
\begin{frame}[label={sec:org1a53433}]{Determining the order of the controller}
With Diophantine equation
\[ A(z)R(z)z^{d} + B(z)S(z) = A_{cl}(z) \qquad (*) \]
and feedback controller
\[F_b(z) = \frac{S(z)}{R(z)} = \frac{s_0z^n + s_1z^{n-1} + \cdots + s_n}{z^n + r_1 z^{n-1} + \cdots + r_n}\]
\alert{How should we choose the order of the controller?} Note:
\begin{itemize}
\item the controller has \(n+n+1 = 2\deg R + 1\) unknown parameters
\item the LHS of \((*)\) has degree \(\deg \big(A(z)R(z)z^d + B(z)S(z)\big) = \deg A + \deg R + d\)
\item The Diophantine equation gives as many (nontrivial) equations as the degree of the polynomials on each side when we set the coefficients equal.
\alert{\(\Rightarrow\;\)Choose \(\deg R\) so that \(2\deg R + 1 = \deg A + \deg R + d\)}
\end{itemize}
\end{frame}
\begin{frame}[label={sec:org2ebbb1d}]{Determining the order of the controller - Exercise}
With the plant model \[H(z) = \frac{B(z)}{A(z)} = \frac{b}{z + a}\] and \(d=0\) (no delay), what is the appropriate degree of the controller
\[F_b(z) = \frac{S(z)}{R(z)} = \frac{s_0z^n + s_1z^{n-1} + \cdots + s_n}{z^n + r_1 z^{n-1} + \cdots + r_n}\]
so that all parameters can be determined from the diophantine equation
\[ A(z)R(z) + B(z)S(z) = A_c(z)A_o(z)?\]
\begin{center}
\begin{tabular}{ll}
1. \(n = 0\) & 2. \(n = 1\)\\
3. \(n=2\) & 4. \(n=3\)\\
\end{tabular}
\end{center}
\end{frame}
\begin{frame}[label={sec:orgefa8757}]{Determining the order of the controller - Exercise - Solution}
With the plant model \[H(z) = \frac{B(z)}{A(z)} = \frac{b}{z + a}\] and \(d=0\) (no delay), what is the appropriate degree of the controller \[F_b(z) = \frac{S(z)}{R(z)} = \frac{s_0z^n + s_1z^{n-1} + \cdots + s_n}{z^n + r_1 z^{n-1} + \cdots + r_n}\]
so that all parameters can be determined from the diophantine equation
\[ A(z)R(z) + B(z)S(z) = A_c(z)A_o(z)?\]
\begin{center}
\begin{tabular}{rr}
1. \(n = 0\) & 2.\\
3. & 4.\\
\end{tabular}
\end{center}
\end{frame}
\begin{frame}[label={sec:orgcbd8cc2}]{Two-degree-of-freedom controller, the importance of the observer poles}
\begin{center}
\includegraphics[width=0.7\linewidth]{../../figures/2dof-block-explicit}
\end{center}
\begin{align*}
Y(z) &= \frac{t_0B(z)z^d}{A_c(z)}U_c(z) + \frac{A(z)R(z)z^d}{A_c(z)A_o(z)}V(z)- \frac{S(z)B(z)}{A_c(z)A_o(z)}N(z)
\end{align*}
\alert{Conclusions} 1) There is a partial separation between designing for reference tracking and designing for disturbance rejection. 2) The observer poles (the roots of \(A_o(z)\)) can be used to determine a balance between disturbance rejection and noise attenuation.
\end{frame}
\section{Example}
\label{sec:orgabd7f07}
\begin{frame}[label={sec:org2ecba8c}]{Example - Level control of a dam}
\begin{center}
\includegraphics[width=0.5\linewidth]{../../figures/kraftverk}
\end{center}
\alert{Objective} Design a control system to maintain the water level under the influence of disturbances.
\end{frame}
\begin{frame}[label={sec:org3c2cd7f}]{Example - Level control of a dam}
\begin{center}
\includegraphics[width=0.3\linewidth]{../../figures/kraftverk}
\end{center}
\alert{The process dynamics}
\begin{center}
\begin{tikzpicture}
\node at (0,0) {$y(k) = y(k-1) -v(k-1) + u(k-2)$};
\node[coordinate, pin=140:{Change in the water level}] at (-2.6,0.2) {};
\node[coordinate, pin=-140:{Change in uncontrolled flows}] at (0.8,-0.2) {};
\node[coordinate, pin=60:{Change in the controlled flow}] at (2,0.2) {};
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]
\node[coordinate] (input) {};
\node[block, right of=input, node distance=20mm] (delay) {$z^{-1}$};
\node[sumnode, right of=delay, node distance=16mm] (sum) {\tiny $\Sigma$};
\node[block, right of=sum, node distance=20mm] (plant) {$H_p(z)$};
\node[coordinate, above of=sum, node distance=12mm] (disturbance) {};
\node[coordinate, right of=plant, node distance=20mm] (output) {};
\draw[->] (input) -- node[above, pos=0.3] {$u(k)$} (delay);
\draw[->] (sum) -- node[above] {} (plant);
\draw[->] (plant) -- node[above, near end] {$y(k)$} (output);
\draw[->] (disturbance) -- node[right, pos=0.2] {$v(k)$} node[left, pos=0.8] {$-$} (sum);
\draw[->] (delay) -- (sum);
\end{tikzpicture}
\end{center}
\end{frame}
\begin{frame}[label={sec:org61013f5}]{Example - Level control of a dam}
\alert{The process dynamics}
\begin{center}
\begin{tikzpicture}
\node at (0,0) {$y(k) = y(k-1) -v(k-1) + u(k-2)$};
\node[coordinate, pin=140:{Change in the water level}] at (-2.6,0.2) {};
\node[coordinate, pin=-140:{Change in uncontrolled flows}] at (0.8,-0.2) {};
\node[coordinate, pin=60:{Change in the controlled flow}] at (2,0.2) {};
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]
\node[coordinate] (input) {};
\node[block, right of=input, node distance=20mm] (delay) {$z^{-1}$};
\node[sumnode, right of=delay, node distance=16mm] (sum) {\tiny $\Sigma$};
\node[block, right of=sum, node distance=20mm] (plant) {$H_p(z)$};
\node[coordinate, above of=sum, node distance=12mm] (disturbance) {};
\node[coordinate, right of=plant, node distance=20mm] (output) {};
\draw[->] (input) -- node[above, pos=0.3] {$u(k)$} (delay);
\draw[->] (sum) -- node[above] {} (plant);
\draw[->] (plant) -- node[above, near end] {$y(k)$} (output);
\draw[->] (disturbance) -- node[right, pos=0.2] {$v(k)$} node[left, pos=0.8] {$-$} (sum);
\draw[->] (delay) -- (sum);
\end{tikzpicture}
\end{center}
\alert{Activity} What is the transfer function from \(u(k)\) to \(y(k)\)?
\begin{center}
\begin{tabular}{lll}
1: \(H(z) = \frac{z}{z-1}\) & 2: \(H(z)=\frac{1}{z-1}\) & 3: \(H(z)=\frac{1}{z(z-1)}\)\\
\end{tabular}
\end{center}
\end{frame}
\begin{frame}[label={sec:orgd0cf498}]{Example - Level control of a dam}
Given the process \(H(z) = \frac{B(z)}{A(z)} = \frac{1}{z(z-1)}\) and desired closed-loop poles at \(z=0.9\).
\begin{enumerate}
\item The Diophantine equation \(A(z)R(z)z^d + B(z)S(z) = A_{cl}(z)\)
\[ z(z-1)R(z) + S(z) = A_{cl}(z)\]
The order of the controller is
\[\deg R = \deg A + d - 1 = 2-1 = 1, \quad \Rightarrow \quad F_b(z)=\frac{S(z)}{R(z)} = \frac{s_0z + s_1}{z + r_1}\]
\item Resulting Diophantine equation
\[ z(z-1)(z+r_1) + s_0z + s_1 = A_{cl}(z)\]
The degree of \(A_{cl}(z)\) is 3. Choose \(A_o(z) = z\), ( \(\deg A_o = \deg R\))
\[ A_{cl}(z) = A_o(z) A_c(z) = z(z-0.9)^2\]
\end{enumerate}
\end{frame}
\begin{frame}[label={sec:orgc75df91}]{Example - Level control of a dam}
\begin{enumerate}
\setcounter{enumi}{2}
\item From the Diophantine equation \[ z(z-1)(z+r_1) + s_0z + s_1 = z(z-0.9)^2\]
\[ z^3 + (r_1-1)z^2 - r_1z + s_0z + s_1 = z^3 -1.8z^2 + 0.81z\]
we obtain the equations
\begin{align*}
\begin{cases} z^2 &: \quad r_1-1 = -1.8\\
z^1 &: \quad -r_1 + s_0 = 0.81\\
z^0 &: \quad s_1 = 0
\end{cases}
\quad \Rightarrow \quad
\begin{cases} r_1 &= -0.8\\ s_0 &= 0.01\\ s_1 &=0 \end{cases}
\end{align*}
\[F_b(z) = \frac{0.01z}{z - 0.8}\]
\end{enumerate}
\end{frame}
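\begin{frame}{Example - Level control of a dam}
\alert{Check by substitution}
\[ z(z-1)(z-0.8) + 0.01z = z^3 - 1.8z^2 + 0.81z = z(z-0.9)^2 \]
so $R(z) = z - 0.8$ and $S(z) = 0.01z$ indeed place the closed-loop poles at $z=0.9$ (double) and $z=0$.
\end{frame}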
\begin{frame}[label={sec:org2dcdae7}]{Example - Level control of a dam}
\begin{enumerate}
\setcounter{enumi}{3}
\item We have \(A_o(z) = z\), so
\[T(z) = t_0A_o(z) = t_0z\]
\[G_c(z) = \frac{T(z)B(z)}{A_o(z)A_c(z)} = \frac{t_0 B(z)}{A_c(z)}, \quad \text{we want}\, G_c(1)=1\]
\[ t_0 = \frac{A_c(1)}{B(1)} = \frac{(1-0.9)^2}{1} = 0.01\]
\end{enumerate}
\alert{Control law}
\[R(\shift) u(kh) = T(\shift)u_c(kh) - S(\shift)y(kh)\]
\[ (\shift - 0.8)u(kh) = 0.01\shift u_c(kh) - 0.01\shift y(kh)\]
\[ u(kh+h) = 0.8u(kh) + 0.01 u_c(kh+h) - 0.01y(kh+h)\]
\end{frame}
\end{document} | {
"alphanum_fraction": 0.6404153648,
"avg_line_length": 37.9163763066,
"ext": "tex",
"hexsha": "af2c642a40365fa451fb9349d724b6c65276a140",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-03-14T03:55:27.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-03-14T03:55:27.000Z",
"max_forks_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "kjartan-at-tec/mr2007-computerized-control",
"max_forks_repo_path": "polynomial-design/slides/lecture-polynomial-design-2.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa",
"max_issues_repo_issues_event_max_datetime": "2020-06-12T20:49:00.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-06-12T20:44:41.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "kjartan-at-tec/mr2007-computerized-control",
"max_issues_repo_path": "polynomial-design/slides/lecture-polynomial-design-2.tex",
"max_line_length": 271,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "kjartan-at-tec/mr2007-computerized-control",
"max_stars_repo_path": "polynomial-design/slides/lecture-polynomial-design-2.tex",
"max_stars_repo_stars_event_max_datetime": "2020-12-22T09:46:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-11-07T05:20:37.000Z",
"num_tokens": 4208,
"size": 10882
} |
\vsssub
\subsubsection{~$S_{bot}$: \showex\ bottom friction} \label{sec:BT4}
\vsssub
\opthead{BT4}{Crest model}{F. Ardhuin}
\noindent
A more realistic parameterization for sandy bottoms is based on the eddy
viscosity model by \cite{art:GM79} and a roughness parameterization that
includes the formation of ripples and transition to sheet flow. The
parameterization of \cite{tol:JPO94}, was adjusted by \cite{art:Aea03a} to
field measurements from the DUCK'94 and SHOWEX experiments on the North
Carolina continental shelf. The parameterization has been adapted to \ws\ by
also including a sub-grid parameterization for the variability of the water
depth, as given by \cite{tol:CE95}. This parameterization is activated by the
switch BT4.
The source term can be written as
%------------------------%
% SHOWEX bottom friction %
%------------------------%
% eq:SHOWEX_bot
\begin{equation}
\cS_{bot}(k,\theta) = - f_e u_b \: \frac{\sigma^2}{2 g \sinh^2(kd)} \: N(k,\theta)
\: , \label{eq:SHOWEX_bot}
\end{equation}
\noindent
where $f_e$ is a dissipation factor that is a function of the r.m.s. bottom orbital
displacement amplitude $a_b$ and the Nikuradse roughness length $k_N$, and
$u_b$ is the r.m.s. of the bottom orbital
velocity.
The present bed roughness parameterization
(\ref{eq:SHOWEX_kr})--(\ref{eq:SHOWEX_krr}) contains seven empirical
coefficients listed in Table \ref{tab:BT4}.
\begin{table} \begin{center}
\begin{tabular}{|l|c|c|c|c|c|} \hline \hline
Par. & WWATCH var. & namelist & SHOWEX & \cite{tol:JPO94} \\
\hline
$A_1$ & RIPFAC1 & BT4 & 0.4 & 1.5 \\
$A_2$ & RIPFAC2 & BT4 & -2.5 & -2.5 \\
$A_3$ & RIPFAC3 & BT4 & 1.2 & 1.2 \\
$A_4$ & RIPFAC4 & BT4 & 0.05 & 0.0 \\
$\sigma_d$ & SIGDEPTH & BT4 & 0.05 & user-defined \\
$A_5$ & BOTROUGHMIN & BT4 & 0.01 & 0.0 \\
$A_6$ & BOTROUGHFAC & BT4 & 1.00 & 0.0 \\
\hline
\end{tabular} \end{center}
\caption{Parameter values for the SHOWEX bottom friction (default values) and the original
parameter values used by \cite{tol:JPO94}. Source term
parameters can be modified via the BT4 namelist. Please
note that the name of the variables only apply to the namelists. In the source
term module the seven variables are contained in the array SBTCX. } \label{tab:BT4}
\botline
\end{table}
The roughness $k_{N}$ is decomposed in a ripple roughness $k_{r}$ and
a sheet flow roughness $k_{s}$,
\begin{eqnarray}
k_{r} &=&a_{b}\times A_{1}\left( \frac{\psi }{\psi _{c}}\right) ^{A_{2}}, \label{eq:SHOWEX_kr}\\
k_{s} &=&0.57\frac{u_{b}^{2.8}}{\left[ g\left( s-1\right) \right] ^{1.4}}%
\frac{a_{b}^{-0.4}}{\left( 2\pi \right) ^{2}}\label{eq:SHOWEX_ks}.
\end{eqnarray}
In Eqs. (\ref{eq:SHOWEX_kr}) and (\ref{eq:SHOWEX_ks}) $A_1$ and $A_2$ are empirical constants, $s$ is the
sediment specific density, $\psi$ is the Shields number determined from $u_b$
and the median sand grain diameter $D_{50}$,
\begin{equation}
\psi =f_{w}^{\prime }u_{b}^{2}/\left[g\left( s-1\right) D_{50}\right],
\end{equation}
with $f_{w}^{\prime }$ the friction factor of sand grains (determined in the
same way as $f_e$ with $D_{50}$ instead of $k_r$ as the bottom roughness), and
$\psi _{c}$ is the critical Shields number for the initiation of sediment
motion under sinusoidal waves on a flat bed. We use an analytical fit
\citep{bk:Soul97}
\begin{eqnarray}
\psi _{c} &=&\frac{0.3}{1+1.2D_{*}}+0.055\left[ 1-\exp \left(
-0.02D_{*}\right) \right]\label{Soulsby_psic} , \\
D_{*} &=&D_{50}\left[ \frac{g\left( s-1\right) }{\nu ^{2}}\right] ^{1/3},
\end{eqnarray}
where $\nu $ is the kinematic viscosity of water.
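For reference, a minimal Python sketch of the relations above is given below; it
assumes $k_N = k_r + k_s$ in the rippled regime, leaves the grain friction factor
$f_w^{\prime}$ to be supplied by the caller (its computation is not reproduced here),
assumes a kinematic viscosity of about $10^{-6}$~m$^2$s$^{-1}$, and uses the SHOWEX
constants of Table \ref{tab:BT4}. The relic-ripple branch discussed next is omitted.
\begin{verbatim}
import numpy as np

def critical_shields(d50, s, g=9.81, nu=1.0e-6):
    """Critical Shields number from the Soulsby (1997) fit; nu is an assumed
    kinematic viscosity of water in m^2/s."""
    dstar = d50 * (g * (s - 1.0) / nu ** 2) ** (1.0 / 3.0)
    return 0.3 / (1.0 + 1.2 * dstar) + 0.055 * (1.0 - np.exp(-0.02 * dstar))

def roughness(a_b, u_b, fw_prime, d50, s=2.65, g=9.81, A1=0.4, A2=-2.5):
    """Ripple + sheet-flow roughness k_N = k_r + k_s (rippled regime only);
    fw_prime is the sand-grain friction factor, supplied by the caller."""
    psi = fw_prime * u_b ** 2 / (g * (s - 1.0) * d50)      # Shields number
    psi_c = critical_shields(d50, s, g)
    k_r = a_b * A1 * (psi / psi_c) ** A2
    k_s = 0.57 * u_b ** 2.8 / (g * (s - 1.0)) ** 1.4 * a_b ** (-0.4) / (2.0 * np.pi) ** 2
    return k_r + k_s
\end{verbatim}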
When the wave motion is not strong enough to generate vortex ripples, i.e.
for values of the Shields number less than a threshold $\psi _{{\rm rr}}$,
$k_{N}$ is given by a relic ripple roughness $k_{{\rm rr}}$. The threshold is
\begin{equation}
\psi _{{\rm rr}}=A_{3}\psi _{c}.
\end{equation}
Below this threshold, $k_{N}$ is given by
\begin{equation}
k_{{\rm rr}}=\max \left\{ A_5 {\rm m,} A_6 D_{50}, A_{4}a_{b}\right\}
{\rm for\ }\psi <\psi _{{\rm rr}}.\label{eq:SHOWEX_krr}
\end{equation}
| {
"alphanum_fraction": 0.652184124,
"avg_line_length": 43.0101010101,
"ext": "tex",
"hexsha": "5ffe997ea8131c94a9658df5b0c31dbee625a6b4",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-06-01T09:29:46.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-06-01T09:29:46.000Z",
"max_forks_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803",
"max_forks_repo_licenses": [
"Apache-2.0",
"CC0-1.0"
],
"max_forks_repo_name": "minsukji/ci-debug",
"max_forks_repo_path": "WW3/manual/eqs/BT4.tex",
"max_issues_count": 5,
"max_issues_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803",
"max_issues_repo_issues_event_max_datetime": "2021-06-04T14:17:45.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-05-31T15:49:26.000Z",
"max_issues_repo_licenses": [
"Apache-2.0",
"CC0-1.0"
],
"max_issues_repo_name": "minsukji/ci-debug",
"max_issues_repo_path": "WW3/manual/eqs/BT4.tex",
"max_line_length": 105,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803",
"max_stars_repo_licenses": [
"Apache-2.0",
"CC0-1.0"
],
"max_stars_repo_name": "minsukji/ci-debug",
"max_stars_repo_path": "WW3/manual/eqs/BT4.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1498,
"size": 4258
} |
\chapter{Parallel computing architectures}\label{chap:parallelArchitectures}
\section{Introduction}
Parallel computing has a tremendous impact on various areas, ranging from
scientific computation and simulation to commercial and industrial applications and
data mining. A lot of effort has been put in over the years in order to try to
mitigate and overcome the limits of the sequential computer architecture.
In particular, a sequential architecture consists of three main components:
\begin{enumerate}
\item Processor
\item Memory
\item Communication system (datapaths, usually buses)
\end{enumerate}
All three components present bottlenecks that limit the overall computing rate
of a system. Caches, for example (low-latency, high-bandwidth, small-capacity storage),
can hide the latency of DRAM chips by storing fetched data and serving
subsequent requests for the same memory location\footnote{The fraction of the
data satisfied by the cache is called the \textit{\textbf{hit rate}}.}. But one of
the most important innovations that address these bottlenecks is multiplicity
(in processors, memories and datapaths), which allows the class of tractable
problems to be extended with larger instances and more cases that can be handled. It
is so popular that even smartphones are multicore; the iPhone 4S is 2-core and the Nexus 4 is 4-core.
This multiplicity has been organized in several manners during
the years giving birth to a variety of architectures. Here a brief classification of the
most important ones.
\section{Architectures}
Classifying a parallel system is not a trivial task. Many definitions and
classifications have been proposed over the years, mostly based on the hardware
configuration or on the logical approach to handling and implementing parallelism.
\subsection{Classical classification - Flynn's taxonomy}
The classification is based on the notion of \textit{stream of information}.
Two types of information flow into the processor: instructions and data.
Conceptually they can be separated into two independent streams. A coarse
classification can be made by taking into account only the number of instruction and
data streams that a parallel machine can manage (see figure \ref{fig:parallelClassification1}).
That's how Flynn's taxonomy\cite{Flynn1972} classifies machines: according to
whether they have one or more streams of each type.
\begin{figure}
\includegraphics[scale=0.28]{./images/parallelClassification}
\caption[Parallel architecture classification]{Parallel architecture
classification.}
\label{fig:parallelClassification1}
\end{figure}
\begin{description}
\item[SISD:] \textit{\textbf{Single}} instruction \textit{\textbf{Single}}
data.\hfill\\
No parallelism in either the instruction or the data stream. Each arithmetic
instruction initiates an operation on a data item taken from a single stream of data elements (e.g.\ mainframes).
\item[SIMD:] \textit{\textbf{Single}} instruction
\textit{\textbf{Multiple}} data. \hfill \\ Data parallelism. The same
instruction is executed on a batch of different data. The control unit is
responsible for fetching and interpreting instructions. When it encounters an
arithmetic or other data processing instruction, it broadcasts the instruction
to all processing elements (PE), which then all perform the same operation. For
example, the instruction might be \textit{add R3,R0}. Each PE would add the
contents of its own internal register R3 to its own R0 (e.g.\ stream
processors\footnote{Vector processing is performed on an SIMD machine by distributing
elements of vectors across all data memories.}); a brief vectorized-loop sketch of this pattern is given after this list.
\item[MISD:] \textit{\textbf{Multiple}} instruction
\textit{\textbf{Single}} data. \hfill \\ Multiple instructions operating on the
same data stream. This class of systems is very unusual and is used mostly for fault-tolerance reasons.
\item[MIMD:] \textit{\textbf{Multiple}} instruction
\textit{\textbf{Multiple}} data. \hfill \\ Multiple instructions operating
independently on multiple data streams
(e.g.\ most modern computers).
\end{description}
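To make the SIMD idea more concrete, the fragment below is an illustrative sketch (added here, not part of the original examples): the very same addition is applied to every element of two arrays, and on SIMD hardware a vectorizing compiler can map such a loop to vector instructions so that one instruction processes several elements at once.
\begin{verbatim}
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    std::vector<float> a(8, 1.0f), b(8, 2.0f), c(8);

    // The same operation ("add") is applied to a whole batch of data:
    // this is the SIMD pattern, independent of how the hardware
    // implements it (vector registers, GPU lanes, ...).
    for (std::size_t i = 0; i < c.size(); ++i)
        c[i] = a[i] + b[i];

    std::printf("c[0] = %f\n", c[0]);
    return 0;
}
\end{verbatim}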
\subsection{Memory classification}
Architectures can be further organized by memory architecture and software
models.
A first rough categorization can be obtained by analyzing the memory layout:
\begin{figure}
\centering
\caption{Distributed memory architecture.}
\label{fig:distribuiteMemory}
\setlength{\fboxrule}{1pt}%
\includegraphics[scale=0.25]{./images/distribuitedMemory}
\end{figure}
\begin{description}
\item[Shared memory] \hfill \\
All the processors have in common the ability to access
all memory locations as a global space, usually sharing them via buses. Changes
in memory made by one processor are visible to all the others. Historically
this class can be divided into:
\begin{itemize}
	\item UMA (Uniform Memory Access): Identical processors with equal
	access time to memory (see figure \ref{fig:UMA_NUMA}),
	sometimes called CC-UMA, an acronym for Cache Coherent UMA, because the
	hardware ensures that all the processors can see a memory modification
	performed by any one of them.
	\item NUMA (Non Uniform Memory Access): Usually different groups
	of processors (SMPs, symmetric multiprocessors\footnote{Groups of processors
	connected via buses. Usually they consist of no more than 32 processors.})
	are connected, and processors belonging to different SMPs can access each
	other's memory spaces. If a cache coherence mechanism is present,
	this architecture is called CC-NUMA.
\end{itemize}
This memory architecture provides a user-friendly perspective on memory and
data sharing across processors and is fast due to the proximity of memory to
the CPUs, but it is not scalable, because adding more CPUs geometrically
increases the traffic on the bus and the cache-management overhead. It is up
to the programmer to ensure correct accesses to global memory in order to
avoid race conditions.
Coupled with this architecture, many software solutions can be used to program
shared memory machines. The most used are:
\begin{itemize}
  \item Threads: lightweight processes that share the same PID (e.g.\ pthreads).
  \item A standard language with preprocessor directives to the compiler that is
capable of converting the serial program into a parallel program without any (or
very little) intervention by the programmer (e.g.\ OpenMP\footnote{Built on top
of pthreads}; see example code \ref{code:OpenMPFOR} and
\ref{code:OpenMPREDUCTION} at page \pageref{code:OpenMPFOR} for complete examples).
\end{itemize}
\begin{figure}
\centering
\includegraphics[scale=0.8]{./images/hybrid_model}
\caption{Hybrid memory architecture (each processor is multi-core).}
\label{fig:hybridMemory}
\end{figure}
\item[Distributed Memory]
Different systems, and hence different processors, are connected via some
kind of network (see figure \ref{fig:distribuiteMemory}), usually a high speed
network, and the memory space of one processor does not map to another
processor. Each of them operates independently on its own memory space,
so changes are not reflected in the memory spaces of the others. Explicit
communication between processors is required and, like synchronization,
is the programmer's responsibility.
This architecture is very scalable and there is no overhead for maintaining
cache coherency, but all the communication work relies on the programmer.
The most used paradigm for programming distributed memory machines is
message passing\footnote{MPI is the \textit{de facto} industry standard for
message passing. Visit \url{http://www.mpi-forum.org/} for further
information.}.
\item[Hybrid Systems] As the name suggests, this is a mix of the two architectures seen
before. Only a limited number of processors, say N, have
access to a common pool of shared memory. These N processors are connected to the
others via a network, and each processor can consist of many cores.
A common example of a programming model for hybrid systems is the combination
of the message passing model (MPI) with the threads model (OpenMP), in which
\begin{inparaenum}[\itshape a\upshape)]
\item threads perform the computationally intensive tasks, using the local
\textbf{on-node} memory space, and
\item communication between processes on different nodes occurs over the network
using MPI (see figure~\ref{fig:hybridMemory}); a minimal sketch of this model is
given after this list.
\end{inparaenum}
\end{description}
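The fragment below is a minimal sketch of the hybrid model just described (it is not one of the complete examples referenced above): each node runs one MPI process, OpenMP threads work on the node-local shared memory, and the partial results are combined across nodes with a message-passing call.
\begin{verbatim}
#include <mpi.h>
#include <omp.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                  // one MPI process per node
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // a) Threads perform the computationally intensive part on the
    //    on-node shared memory.
    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; ++i)
        local += 0.5 * i;

    // b) Communication between nodes happens over the network via MPI.
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("global sum = %g\n", global);

    MPI_Finalize();
    return 0;
}
\end{verbatim}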
\begin{figure}
\centering
\setlength{\fboxrule}{0.5pt}%
\subfigure[UMA]{ \fbox{\includegraphics[scale=0.47]{./images/shared_mem}}}
\hfill
\subfigure[NUMA]{\fbox{\includegraphics[scale=0.47]{./images/numa}}}
\hfill
\caption{Shared memory architectures.}
\label{fig:UMA_NUMA}
\end{figure}
\section{Network Topologies}
An important role is played, as seen before, by the interconnection network,
because it provides the mechanisms for data transfer between processing nodes.
Typically such networks consist of \textit{n} inputs and \textit{m} outputs and are built
from switches and links (sets of wires or fibers capable of carrying
information\footnote{Links may have different characteristics depending on
the material they are made of, which can limit the propagation speed of the signals
or the maximum length of the wire itself}).
\begin{figure}
\begin{center}
	\caption{Star network topology}
\includegraphics[scale=0.30]{./images/StartMesh}
\label{fig:starMesh}
\end{center}
\end{figure}
A first classification can be made by considering only whether or not the nodes are connected
directly to each other. In the first case the entire network consists of
point-to-point links, and such networks are called \textbf{\textit{static}} or
\textbf{\textit{direct}} networks. On the other hand, when the nodes are
connected to each other using switches, we talk about \textbf{\textit{indirect}}
networks. Because communication is an important task in parallel computing,
the way the nodes are connected to each other matters and can influence
the overall performance of the system, which is determined by the capabilities of
the network access devices, the level of control or fault tolerance desired, and
the cost associated with cabling or telecommunications circuits.
Network topologies try to trade off cost and scalability against performance.
\subsection{Bus-based networks}\label{bus-based}
This is the simplest family of topologies and gives birth to the bus-based
networks. Each node is connected to a single shared medium that is common to all
the nodes. The most important advantage of a bus is that the distance between two nodes is constant, \textit{O}(1), and the cost scales linearly
with the number of nodes \textit{p}\footnote{The cost is usually associated with
the bus interface coupled with each node and is inexpensive to implement
compared to other topologies}.
A message from the source is broadcast to all machines connected to the bus,
and every machine ignores the message except the one the message is addressed
to, which accepts the data. The low cost of implementing this topology is
traded off by the difficulties in managing it; moreover, because of the bounded
bandwidth of buses, typical bus-based machines are limited to a few dozen nodes.
\subsection{Completely connected networks}
In a completely-connected network, each node has a direct communication link to every other
node in the network. This kind of network does not need any switching or
broadcasting mechanism, because a node can send a message to another in a single
step, and interference during communication is completely prevented.
But completely connected networks are not suitable for practical use because of
the cost of their implementation. The number of connections grows
quadratically, \(O(p^2)\), with the number of nodes (see figure
\ref{fig:completelyConnected}).
\[
c=\frac{n(n-1)}{2}
\]
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{./images/FullMesh}
\caption{Fully connected mesh network. 6 nodes and c=15 links}
\label{fig:completelyConnected}
\end{center}
\end{figure}
\subsection{Star-Connected Networks}
In a star topology (see figure \ref{fig:starMesh}) every node is connected to
a central node that acts as the central processor (as the server, if we consider the
analogy with a LAN). It is similar to the bus-based network (see section \ref{bus-based}), because communication
between any pair of nodes is routed through the central processor (which plays
the role of the shared medium that all the other nodes share) and because the
central processor is the bottleneck of this topology as well as its single point
of failure (if this node goes down, the entire network stops working).
\subsection{K-Meshes}
\begin{wrapfigure}{r}{0.4\textwidth}
\centering
\includegraphics[scale=0.3]{./images/ring}
\caption{1-mesh ring topology}
\setlength{\fboxrule}{1pt}%
\label{fig:ring}
\end{wrapfigure}
Due to the large number of links in fully connected networks, sparser networks
are typically used to build parallel computers. A family of such networks spans the space of linear arrays and
hypercubes.
Linear arrays and rings are static networks in which each node is connected to
two others (called neighbors, see figure \ref{fig:ring}).
The 2-D version of the ring topology is the 2-D torus (see figure
\ref{fig:torus}).
It consists of \(\sqrt{p}\) processors per dimension, and each processor is connected to
four neighbors (the ones whose index differs by one in exactly one dimension). 2-D or 3-D
meshes are very often used to build parallel machines because they are attractive from a
wiring standpoint (a 2-D mesh can be laid out in 2-D space) and because they naturally
map a variety of computations (matrices, 3-D weather modeling, structural
modeling, etc.).
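The wrap-around connectivity of the torus can be expressed with simple index arithmetic. The function below is an illustrative sketch (not part of the original text): for a \(\sqrt{p}\times\sqrt{p}\) grid it prints the four neighbors of node \((i,j)\), i.e.\ the nodes whose index differs by one in exactly one dimension, modulo the grid size.
\begin{verbatim}
#include <cstdio>

// Neighbors of node (i, j) on a q x q torus (q = sqrt(p)):
// one step in each direction, with wrap-around at the edges.
void torus_neighbors(int i, int j, int q) {
    std::printf("(%d,%d) -> (%d,%d)\n", i, j, (i + 1) % q, j);
    std::printf("(%d,%d) -> (%d,%d)\n", i, j, (i - 1 + q) % q, j);
    std::printf("(%d,%d) -> (%d,%d)\n", i, j, i, (j + 1) % q);
    std::printf("(%d,%d) -> (%d,%d)\n", i, j, i, (j - 1 + q) % q);
}

int main() {
    torus_neighbors(0, 3, 4);   // a 4 x 4 torus, i.e. p = 16 processors
    return 0;
}
\end{verbatim}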
\begin{figure}
\caption{2D (a) and 3D (b) torus topologies.}
\label{fig:torus}
\centering
\setlength{\fboxrule}{0.5pt}%
\subfigure[2D]{ \includegraphics[scale=0.47]{./images/torus2D}}
\hfill
\subfigure[3D]{\includegraphics[scale=0.47]{./images/torus3D}}
\end{figure}
\subsection{Tree based}
This type of network topology is based on a hierarchy of nodes. The highest
level of any tree network consists of a single node, the ``\textit{root}'' that
connects nodes (a fixed number referred to as the ``branching
factor''\footnote{If the branching factor is \(1\) the topology is called
linear.} of the tree) in the level below by point-to-point links. These lower
level nodes are also connected to nodes in the next level down. Tree networks
are not constrained to any number of levels, but as tree networks are a variant
of the bus network topology, they are prone to crippling network failures should
a connection in a higher level of nodes fail/suffer damage.
This topology exhibits good scalability and, thanks to the different levels,
makes fault identification and isolation easier, but maintenance may be an issue
when the network spans a large area.
| {
"alphanum_fraction": 0.6537187763,
"avg_line_length": 58.4098360656,
"ext": "tex",
"hexsha": "1c328cf1c5793037f7afe46f17097a83883df9f9",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "021634dbf93b3189c9d94a1691459cf1aea0563c",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "knotman90/MSc-Thesis",
"max_forks_repo_path": "parallelComputing_architectures.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "021634dbf93b3189c9d94a1691459cf1aea0563c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "knotman90/MSc-Thesis",
"max_issues_repo_path": "parallelComputing_architectures.tex",
"max_line_length": 2847,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "021634dbf93b3189c9d94a1691459cf1aea0563c",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "knotman90/MSc-Thesis",
"max_stars_repo_path": "parallelComputing_architectures.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3490,
"size": 17815
} |
A number of initiatives address the issue of reusability of research objects and replicability of science, some of them through proposed metadata standards. None of these efforts can completely provide the information and benefits that our proposed \metajelo package (described in more detail below) provides. Nevertheless, we have endeavoured to leverage these efforts when possible (i.e., when semantics of tags overlap with our goals and when their XML schema can be cloned for interoperability).%
\footnote{We originally attempted to re-use other schemata by reference, import, and use of name spaces. However, we encountered multiple problems. Name spaces were not handled consistently across parsers. The schemas we intended to re-use were not designed for that purpose. We thus reverted to "re-use by cloning", for lack of robust alternatives.}
Our hope is that this makes both interoperability with those efforts as easy and possible, and that the use of already established and perhaps familiar tags, attributes, and controlled vocabularies decreases the learning curve for use of our proposed schema. In the remainder of this section we describe related initiatives and the influence they have on our metadata design.
\subsection{DataCite}
The most related metadata vocabulary comes from \urlcite{https://www.datacite.org/}{DataCite}, which provides infrastructure to locate, identify, and cite research data. Identification is done via the DOI infrastructure for persistent identification, which has emerged as the standard for naming scholarly objects. The DataCite metadata schema \parencite{DataCiteMetadataWorkingGroup2017a,DataCiteMetadataWorkingGroup2017} specifies elements and attributes to describe data resources for the purpose of citation, location and retrieval. Because of the notable overlap in the purpose of DataCite and our proposal, we make use of multiple parts of this schema. Note, however, that DataCite is targeted at describing the data products themselves, whereas our concern is to register the placement of those products in a repository and ancillary information about that placement. While the DataCite schema has a \texttt{license} field, it is optional, and often empty. There is no information on more complex access policies, and no information on preservation.
\subsection{Re3data}
The Re3data initiative \parencite{Rucknagel2015,Re3data.Org2015} addresses the goal of describing repositories via an online registry of research data repositories based on a common metadata standard describing such repositories.
%The goal of describing repositories and archives for data curation is directly addressed by the Re3data initiative \parencite{RucknagelMetadataSchemaDescription2015,Re3data.Orgre3dataorgMetadata2015}. The goal of Re3Data is to support an online registry of research data repositories. The mechanics underlying this is to establish a common metadata standard for describing such repositories,
This metadata is then used to power a search interface. The registry and search interface are targeted at researchers searching for the appropriate repository in which to store their data.
A primary technical output of the work of re3data is a ``Metadata Schema for Description of Research Data Repositories'' now in its 3rd version and expressed as an XML schema. The schema addresses repository characteristics such as identification, language, administrative contacts, subject focus, funding basis and the like. Our work addresses repository characteristics and reuses semantics from the Re3data schema where appropriate and possible. We will describe the details of this reuse later in this paper.
\subsection{CrossRef}
\urlcite{https://www.crossref.org/}{CrossRef} sits functionally between our work and the two initiatives described above. It was conceived by publishers as a DOI registry that, in addition to providing the resolution of those DOIs, stores metadata for the corresponding scholarly object. An important aspect of this metadata is the set of cross-references (citations) among the named objects \parencite{CrossRef}. In that sense, CrossRef acts as a ``switchboard'', documenting linkages between scholarly objects. Originally, the linkages were citations between journals, but with increasing interest in data these linkages have been expanded to include such supplementary materials. In this context, CrossRef collaborates and interoperates with DataCite, with the former focusing on registration and description of journal articles and conference papers, and the latter on data and other supplementary artifacts. The CrossRef schema is a relatively complex tag set for describing articles.
%As our intention is to promote a lightweight approach (not necessarily exclusive but perhaps in tandem with CrossRef), we have not directly borrowed from their schema. Also, our focus is linking to repositories or archives that contain supplementary material, as opposed to the object itself.
\subsection{Scholix}
The Scholix effort \parencite{Burton2017} is also closely related to our proposed package. However, while it may lay the groundwork for the information here, it fundamentally does not have rich enough information about the linked objects to fulfill our core purpose.
\subsection{CoreTrustSeal}
Two additional related initiatives are worthy of mention. The Core Trustworthy Data Repository Requirements \citep{CoreTrustSealCoreTrustSeal2017} are the result of work within the Research Data Alliance to establish standards for so-called ``trustworthy'' repositories. These are repositories that meet a set of criteria that deem them dependable for the long-term curation of data. The criteria are a mixture of technical, administrative, financial, and personnel characteristics. The criteria are not as of yet encoded in a machine-readable schema, nor is this planned. Instead, repositories apply for trusted status through a form that is reviewed by a human board of review. Our proposed metadata format allows for the attribution of a repository as ``trusted'' and thus integrates minimally with the CoreTrustSeal effort. However, as the CoreTrustSeal does not provide an \ac{API}, the information embedded within the certification cannot be re-used. Furthermore, as noted for re3data, an institution may have multiple policies, and it may not always be easy to attribute a particular policy to a particular object.
\subsection{JATS}
The \urlcite{https://jats.nlm.nih.gov/}{JATS (Journal Article Tag Suite)}, led by the NCBI (National Center for Biotechnology Information) aims to develop specifications for standardized (XML) markup for scholarly articles. The effort grows out of work done on so-called ``NLM DTDS'', which modelled tag sets for scholarly document structuring. \urlcite{https://jats4r.org/}{JATS4R} (JATS for reuse) is a follow-on effort, designed to reuse and extend XML models defined by JATS, with the primary goal of facilitating reuse of existing scholarly material (publications and supplementary data). The result is a set of models specifying document structure, rather than simply metadata. The structural elements address issues such as how to mark-up authors and affiliations, citations, data citations and the like.
\subsection{Data Accessibility Statements}
The Belmont Forum published a template ``Data Availability Policy and Statement'' \citep{murphy_fiona_2018_1476871}, with similar goals as our project, though the focus seems to be primarily on human-readable statements.
| {
"alphanum_fraction": 0.8195828351,
"avg_line_length": 242.8064516129,
"ext": "tex",
"hexsha": "bba930415efd21c204b78042ed1ba7fd275a7aaa",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2019-05-08T01:12:17.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-02-25T16:48:24.000Z",
"max_forks_repo_head_hexsha": "78674fbaf9c00c8ba0cbe669400afa1e77ad27f9",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "labordynamicsinstitute/metajelo",
"max_forks_repo_path": "text/idcc2019-related-metadata.tex",
"max_issues_count": 13,
"max_issues_repo_head_hexsha": "78674fbaf9c00c8ba0cbe669400afa1e77ad27f9",
"max_issues_repo_issues_event_max_datetime": "2021-01-20T19:18:00.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-03-19T20:56:50.000Z",
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "labordynamicsinstitute/metajelo",
"max_issues_repo_path": "text/idcc2019-related-metadata.tex",
"max_line_length": 1129,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "78674fbaf9c00c8ba0cbe669400afa1e77ad27f9",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "labordynamicsinstitute/metajelo",
"max_stars_repo_path": "text/idcc2019-related-metadata.tex",
"max_stars_repo_stars_event_max_datetime": "2019-01-10T16:01:26.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-01-09T01:35:57.000Z",
"num_tokens": 1521,
"size": 7527
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{hyperref}
\usepackage{multirow}
\hypersetup{
colorlinks,
citecolor=black,
filecolor=black,
linkcolor=black,
urlcolor=black
}
\usepackage{listings}
\usepackage[T1]{fontenc}
\usepackage{xcolor}
\usepackage{textcomp}
\usepackage{graphicx}
%New colors defined below
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}
\definecolor{codebg}{HTML}{EEEEEE}
\definecolor{codeframe}{HTML}{CCCCCC}
%Code listing style named "mystyle"
\lstdefinestyle{mystyle}{
backgroundcolor=\color{backcolour}, commentstyle=\color{codegreen},
keywordstyle=\color{magenta},
numberstyle=\tiny\color{codegray},
stringstyle=\color{codepurple},
rulecolor=\color{codeframe},
basicstyle=\ttfamily\footnotesize,
frame=single,
framesep=10pt,
breakatwhitespace=false,
upquote=true,
breaklines=true,
captionpos=b,
keepspaces=true,
numbers=none,
numbersep=5pt,
showspaces=false,
showstringspaces=false,
showtabs=false,
tabsize=2,
columns=flexible
}
\makeatletter
\def\lst@outputspace{{\ifx\lst@bkgcolor\empty\color{white}\else\lst@bkgcolor\fi\lst@visiblespace}}
\makeatother
%"mystyle" code listing set
\lstset{style=mystyle}
\title{Programmes}
\date{ }
\begin{document}
\begin{center}
\rule{\textwidth}{1.6pt}\vspace*{-\baselineskip}\vspace*{2pt} % Thick horizontal line
\rule{\textwidth}{0.4pt}\\[\baselineskip] % Thin horizontal line
{\LARGE Mathematical Physics Lab Practicals \\[0.2\baselineskip] }% Title
\rule{\textwidth}{0.4pt}\vspace*{-\baselineskip}\vspace{3.2pt} % Thin horizontal line
\rule{\textwidth}{1.6pt}\\[2\baselineskip] % Thick horizontal line
\textbf{\Large \\[\baselineskip] SGTB Khalsa College, University of Delhi}\\[\baselineskip]
\textbf{\Large Preetpal Singh(2020PHY1140)}\\[\baselineskip]
\vspace*{\baselineskip}
\textbf{\Large University Roll No: 20068567043}\\[\baselineskip]
\vspace*{\baselineskip}
\textbf{\Large Unique Paper Code: 32221301}\\[\baselineskip]
\vspace*{\baselineskip}
\textbf{\Large Paper Title: Mathematical Physics Lab}\\[\baselineskip]
\vspace*{\baselineskip}
\textbf{\Large \\[\baselineskip] Submitted to: Dr. Savinder Kaur}\\[\baselineskip]
\textbf{\Large Sushil Kumar Singh}\\[\baselineskip]
\end{center}
\newpage
\maketitle
\tableofcontents
\begin{table}[]
\begin{tabular}{lrrll}
\textbf{Practical} & \multicolumn{1}{l}{\textbf{Submission}} & \multicolumn{1}{l}{\textbf{Page No.}} & & \\
Trapezoidal and Simpson Method & Sep 20, 2021 & 4 & & \\
Legendre Polynomial & Sep 21, 2021 & 9 & & \\
Lagrange Interpolation & Sep 27, 2021 & 16 & & \\
Radioactive Decay, RC Circuit and Stokes Law by Euler, RK2, RK4 Method & Oct 16, 2021 & 20 & & \\
2nd Order Coupled differential equations using Euler, RK2, RK4 Method & Oct 26, 2021 & 28 &  &  \\
RK4 Method for Simultaneous Differential Equations & Nov 2, 2021 & 34 &  &  \\
Gauss Elimination Method & Nov 16, 2021 & 39 & & \\
Gauss Seidel Method & Nov 22, 2021 & 42 & &
\end{tabular}
\end{table}
\clearpage
\begin{figure}[h]
\centering
\includegraphics[width=15cm,height=10cm]{Capture.PNG}
\caption{Submission of all Practicals}
\end{figure}
\clearpage
\begin{figure}[h]
\centering
\includegraphics[width=15cm,height=7cm]{1.PNG}
\caption{Trapezoidal and Simpson Method}
\end{figure}
\section{Trapezoidal and Simpson Method}
%Importing code from file
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\lstinputlisting[language=python,
caption=Trapezoidal and Simpson Method
]{Trapezoidal_and_simpson/trep_simp.py}
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=15cm,height=5cm]{Trapezoidal_and_simpson/2.PNG}
\caption{Trapezoidal and Simpson Method Output}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=15cm,height=12cm]{Trapezoidal_and_simpson/Capture.PNG}
\caption{Trapezoidal and Simpson Method Output}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
\begin{figure}[h]
\centering
\includegraphics[width=16cm,height=10cm]{2.PNG}
\caption{Legendre}
\end{figure}
\section{Legendre Polynomial}
\lstinputlisting[language=python,
caption=Legendre Polynomial
]{Legendre/Legendre.py}
\begin{figure}[h]
\centering
\includegraphics[width=16cm,height=7cm]{Legendre/3.PNG}
\caption{Legendre}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=16cm,height=7cm]{Legendre/4.PNG}
\caption{Legendre}
\end{figure}
\clearpage
\begin{figure}[h]
\centering
\includegraphics[width=12cm,height=10cm]{Legendre/Figure_1.png}
\caption{Legendre}
\end{figure}
\clearpage
\begin{figure}[h]
\centering
\includegraphics[width=12cm,height=10cm]{Legendre/Figure_2.png}
\caption{Legendre}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=15cm,height=12cm]{3.PNG}
\caption{Lagrange Interpolation}
\end{figure}
\section{Lagrange Interpolation}
\lstinputlisting[language=python,
caption=Lagrange Interpolation
]{Lagrange_Interpolation/Lagrange.py}
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=15cm,height=12cm]{Lagrange_Interpolation/Capture.PNG}
\caption{Lagrange Interpolation}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=14cm,height=12cm]{4.PNG}
\caption{Radioactive Decay, RC Circuit and Stokes Law by Euler, RK2, RK4 Method}
\end{figure}
\section{Radioactive Decay, RC Circuit and Stokes Law by Euler, RK2, RK4 Method}
\lstinputlisting[language=python,
caption={Radioactive Decay, RC Circuit and Stokes Law by Euler, RK2, and RK4 Method}
]{Euler/euler.py}
\newpage
\clearpage
\begin{figure}[h]
\centering
\includegraphics[width=14cm,height=12cm]{Euler/radioactive.png}
\caption{Radioactive Decay}
\end{figure}
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=14cm,height=12cm]{Euler/rc.png}
\caption{RC Circuit}
\end{figure}
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=14cm,height=12cm]{Euler/stokes.png}
\caption{Stokes Law}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=15cm,height=13cm]{5.PNG}
\caption{2nd Order Coupled differential equations using Euler, RK2, RK4 Method}
\end{figure}
\section{2nd Order Coupled differential equations using Euler, RK2, RK4 Method}
\lstinputlisting[language=python,
caption={2nd Order coupled differential equations using Euler, RK2 and RK4 Method}
]{2nd_order_diff_using_rk2/rk2.py}
\clearpage
\begin{figure}[h]
\centering
\includegraphics[width=15cm,height=13cm]{2nd_order_diff_using_rk2/Figure_1.png}
\caption{Simple Harmonic Oscillator}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=15cm,height=13cm]{2nd_order_diff_using_rk2/Figure_3.png}
\caption{Simple Pendulum}
\end{figure}
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=15cm,height=13cm]{2nd_order_diff_using_rk2/Figure_2.png}
\caption{Damped Harmonic Oscillator}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
\begin{figure}[h]
\centering
\includegraphics[width=14cm,height=12cm]{6.PNG}
\caption{RK4 Method}
\end{figure}
\section{RK4 Method for Simultaneous Differential Equations}
\lstinputlisting[language=python,
caption=RK4 Method for Simultaneous Differential Equations
]{rk4/rk4.py}
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=14cm,height=12cm]{rk4/Figure_1.png}
\caption{RK4 Method}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=14cm,height=12cm]{rk4/Figure_2.png}
\caption{RK4 Method}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
\begin{figure}[h]
\centering
\includegraphics[width=14cm,height=5cm]{7.PNG}
\caption{Gauss Elimination Method Output}
\end{figure}
\section{Gauss Elimination Method}
\lstinputlisting[language=python,
caption=Gauss Elimination
]{Gauss_Elimination/gauss_elim.py}
\begin{figure}[h]
\centering
\includegraphics[width=14cm,height=5cm]{Gauss_Elimination/Capture.PNG}
\caption{Gauss Elimination Method Output}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
\begin{figure}[h]
\centering
\includegraphics[width=15cm,height=8cm]{8.PNG}
\caption{Gauss Seidel Method Output}
\end{figure}
\section{Gauss Seidel Method}
\lstinputlisting[language=python,
caption=Gauss Seidel Method
]{Gauss_Seidel/gaussseidel.py}
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=15cm,height=8cm]{Gauss_Seidel/Capture.PNG}
\caption{Gauss Seidel Method Output}
\end{figure}
\end{document}
| {
"alphanum_fraction": 0.6465203268,
"avg_line_length": 31.746875,
"ext": "tex",
"hexsha": "90d5ff632759075d46cf92ce2ffc4c7f08983491",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "fabe34d0fb1492ad177c9e7be99e1dbe718fda69",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "hinton024/Mathematical-Physics",
"max_forks_repo_path": "ppt/MP Lab Practicals/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "fabe34d0fb1492ad177c9e7be99e1dbe718fda69",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "hinton024/Mathematical-Physics",
"max_issues_repo_path": "ppt/MP Lab Practicals/main.tex",
"max_line_length": 170,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "fabe34d0fb1492ad177c9e7be99e1dbe718fda69",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "hinton024/Mathematical-Physics",
"max_stars_repo_path": "ppt/MP Lab Practicals/main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2897,
"size": 10159
} |
% template-v1.tex: LaTeX2e template for Usenix papers.
% Version: usetex-v1, 31-Oct-2002
% Revision history at end.
\documentclass[XXX,endnotes]{usetex-v1}
% Choose the appropriate option:
%
% 1. workingdraft:
%
% For initial submission and shepherding. Features prominent
% date, notice of draft status, page numbers, and annotation
% facilities. The three supported annotation macros are:
% \edannote{text} -- anonymous annotation note
% \begin{ednote}{who} -- annotation note attributed
% text to ``who''
% \end{ednote}
% \HERE -- a marker that can be left
% in the text and easily
% searched for later
% 2. proof:
%
% A galley proof identical to the final copy except for page
% numbering and proof date on the bottom. Annotations are
% removed.
%
% 3. webversion:
%
% A web-publishable version, uses \docstatus{} to indicate
% publication information (where and when paper was published),
% and page numbers.
%
% 4. finalversion:
%
% The final camera-ready-copy (CRC) version of the paper.
% Published in conference proceedings. This doesn't include
% page numbers, annotations, or draft status (Usenix adds
% headers, footers, and page numbers onto the CRC).
%
% If several are used, the last one in this list wins
%
%
% In addition, the option "endnotes" permits the use of the
% otherwise-disabled, Usenix-deprecated footnote{} command in
% documents. In this case, be sure to include a
% \makeendnotes command at the end of your document or
% the endnotes will not actually appear.
%
% These packages are optional, but useful
\usepackage{epsfig} % postscript figures
\usepackage{url} % \url{} command with good linebreaks
\begin{document}
\title{Wonderful: A Terrific Application and Fascinating Paper}
% document status: submitted to foo, published in bar, etc.
\docstatus{Submitted to Cool Stuff Conference 2002}
% authors. separate groupings with \and.
\author{
\authname{Your N.\ Here}
\authaddr{Your Department}
\authaddr{Your Institution}
\authaddr{ Your City, State, ZIP}
\authurl{\url{[email protected]}}
\authurl{\url{http://host.dom/yoururl}}
\and
\authname{Name Two}
\authaddr{Two's Institution}
\authurl{\url{[email protected]}}
%
} % end author
\maketitle
\begin{abstract}
Your abstract text goes here. Just a few facts. Whet our
appetites.
\end{abstract}
\section{Introduction}
A paragraph of text goes here. Lots of text. Plenty of interesting
text. Lots of text. Lots of text. Lots. \edannote{We can make
notes here on the workingdraft, after which they must be removed.
This one is anonymous.}
More fascinating text. Features galore, plethora of promises.
\begin{ednote}{BCM}
This ednote is marked as mine.
\end{ednote}
\section{This is Another Section}
Some embedded literal typeset code is shown below. Note that line or
page breaks can occur in the middle of code typeset this way. To
avoid such line or page breaks, put the code inside a figure
environment instead.
\begin{small}
\begin{verbatim}
int wrap_fact(ClientData clientData,
Tcl_Interp *interp,
int argc, char *argv[]) {
int result;
int arg0;
if (argc != 2) {
interp->result = "wrong # args";
return TCL_ERROR;
}
arg0 = atoi(argv[1]);
result = fact(arg0);
sprintf(interp->result,"%d",result);
return TCL_OK;
}
\end{verbatim}
\end{small}
Now we're going to cite somebody. Watch for the cite tag. Here it
comes~\cite{heidrich,perl5,otcl}. And a bit later we will cite
another one. Stay tuned~\cite{ousterhout}.
\section{This Section has Sub-Sections}
\label{sec:secs}
This text is the introduction to Section~\ref{sec:secs}.
\begin{figure}[htbp]
\begin{centering}
\epsfig{file=sample.eps, width=2.50in}
\small\itshape
\caption{\small\itshape This figure was created with \texttt{xfig}. If you
want it to span two columns, use \texttt{figure*} in the LaTeX source file.}
\label{fig-sample}
\end{centering}
\end{figure}
\subsection{First Sub-Section}
Here's a typical figure reference. Figure~\ref{fig:flowchart} is
centered at the top of the column. It may be scaled. If so, you may
have to tweak the numbers to get the size you want. It may be
hard to do this.
\begin{figure}[tb]
\begin{center}
% \psfig{file=figure.eps,scale=0.45} % PostScript figure
\texttt{<insert figure here>} % remove this line
\caption{Wonderful flowchart}
\label{fig:flowchart}
\end{center}
\end{figure}
This text came after the figure, so we'll casually refer to
Figure~\ref{fig:flowchart} as we go on our merry way.
% you need to work on the workingdraft document right here soon
\HERE
\subsection{Footnotes}
For the Usenix style, footnotes are not allowed: endnotes
are, although they are
deprecated.\ifhasendnotes\footnote{Thus, this is not a
footnote}\fi\ Try to avoid both footnotes and endnotes in
technical writing. It is best to use parenthetical or
subordinate clauses instead. If you want endnotes anyhow,
use the "endnotes" documentstyle option and include a
\verb+\makeendnotes+ command at the end of your document.
You will still be whined at.
\subsection{Tables and Code}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table*}[htbp]
\centering
\begin{tabular}{|c||c|c|c|c||c|c|c|c|c|c|l|}\hline
{\bf Cloak}
& \multicolumn{4}{c||}{\textbf{User \texttt{ezk}}}
& \multicolumn{6}{c|}{\textbf{User \texttt{joe}}}
& \multicolumn{1}{c|}{\textbf{Meaning}}
\\\cline{2-11}
{\bf Mask}
&{J1}
&{J2}
&{J3}
&{J4}
&{E5}
&{E6}
&{E7}
&{E8}
&{E9}
&{E10}
&\multicolumn{1}{c|}{\textbf{for files J1--E10}}
\\\hline
{+000}
&
&
&
&
&
&
&
&
&
&
& Show files to owners only
\\\hline
{+007}
&
&
&{A}
&
&
&
&{A}
&{A}
&
&
& Show files to owners and others
\\\hline
{+070}
&
&{A}
&{A}
&
&{A}
&
&{A}
&{A}
&
&
& Show files to owners and group members
\\\hline
\end{tabular}
\small\itshape
\caption{\small\itshape Here is a complex table that spans two columns. It
shows how also to straddle the table cells.}
\label{tab-sample}
\end{table*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
It can get tricky typesetting Tcl and C code in LaTeX because they
share a lot of mystical feelings about certain magic characters. You
will have to do a lot of escaping to typeset curly braces and percent
signs, for example, like this: ``The \verb@%module@ directive sets
the name of the initialization function. This is optional, but is
recommended if building a Tcl 7.5 module. Everything inside the
\verb@%{@, \verb@%}@ block is copied directly into the
output. allowing the inclusion of header files and additional C code.''
Sometimes you want to really call attention to a piece of text. You
can center it in the column like this:
\begin{center}
\verb@_1008e614_Vector_p@
\end{center}
and people will really notice it.
Now this is an ingenious way to get a forced space. \texttt{Real~$*$}
and \texttt{double~$*$} are equivalent.
\subsection{Lists}
You can make lists using LaTeX's listing environments
(\texttt{itemize}, \texttt{enumerate}, and \texttt{description}).
These environments can be nested (e.g. an itemized list can be an
element of an enumerated list).
An \texttt{itemize} list looks like this:
\begin{itemize}
\item The map structure defines an address space.
\item The page structure manages a page of physical memory.
\end{itemize}
An \texttt{enumerate} list is like an itemized list, except that it is
numbered:
\begin{enumerate}
\item The map structure defines an address space.
\item The page structure manages a page of physical memory.
\end{enumerate}
A \texttt{description} list uses words rather bullets or numbers:
\begin{description}
\item[\textbf{map structure:}] defines an address space.
\item[\textbf{page structure:}] manages a page of physical memory.
\end{description}
\subsection{Last Sub-Section}
Well, it's getting boring isn't it. This is the last subsection
before we wrap it up.
\section{Acknowledgments}
A polite author always includes acknowledgments. You
should thank everyone,
especially those who funded the work.
\section{Availability}
It is great news if this section can say that your
app, WonderfulApp is free
software, available via anonymous FTP from
\url{ftp://ftp.dom/pub/myname/Wonderful}. Also, it's even better
when you can write that information is also available on the Wonderful
homepage at \url{http://www.dom/~myname/SWIG}.
Now we get serious and fill in those references. Remember you will
have to run latex twice on the document in order to resolve those
cite tags you met earlier. This is where they get resolved.
We've preserved some real ones in addition to the template-speak.
After the bibliography you are DONE.
% This is where the endnotes (see the ``footnote'' above)
% are filled in. Use this only if you have endnotes.
\ifhasendnotes\makeendnotes\fi
\begin{thebibliography}{99}
\bibitem{beazley} D.~M.~Beazley and P.~S.~Lomdahl,
\emph{Message-Passing Multi-Cell Molecular Dynamics on the Connection
Machine 5}, Parall.~Comp.~ 20 (1994) p. 173-195.
\bibitem{CitePetName} A.~N.~Author and A.~N.~Other,
\emph{Title of Riveting Article}, JournalName VolNum (Year) p. Start-End
\bibitem{embed} Embedded Tk, \url{ftp://ftp.vnet.net/pub/users/drh/ET.html}
\bibitem{expect} Don Libes, \emph{Exploring Expect}, O'Reilly \& Associates, Inc. (1995).
\bibitem{heidrich} Wolfgang Heidrich and Philipp Slusallek, \emph{
Automatic Generation of Tcl Bindings for C and C++ Libraries.},
USENIX 3rd Annual Tcl/Tk Workshop (1995).
\bibitem{ousterhout} John K. Ousterhout, \emph{Tcl and the Tk Toolkit}, Addison-Wesley Publishers (1994).
\bibitem{perl5} Perl5 Programmers reference,
\url{http://www.metronet.com/perlinfo/doc}, (1996).
\bibitem{otcl} D. Wetherall, C. J. Lindblad, ``Extending Tcl for
Dynamic Object-Oriented Programming'', Proceedings of the USENIX 3rd Annual Tcl/Tk Workshop (1995).
\end{thebibliography}
\end{document}
% Revision History:
% designed specifically to meet requirements of
% TCL97 committee.
% originally a template for producing IEEE-format articles using LaTeX.
% written by Matthew Ward, CS Department, Worcester Polytechnic Institute.
% adapted by David Beazley for his excellent SWIG paper in Proceedings,
% Tcl 96
% turned into a smartass generic template by De Clarke, with thanks to
% both the above pioneers
% use at your own risk. Complaints to /dev/null.
% make it two column with no page numbering, default is 10 point
% Munged by Fred Douglis <[email protected]> 10/97 to separate
% the .sty file from the LaTeX source template, so that people can
% more easily include the .sty file into an existing document. Also
% changed to more closely follow the style guidelines as represented
% by the Word sample file.
% This version uses the latex2e styles, not the very ancient 2.09 stuff.
%
% Revised July--October 2002 by Bart Massey, Chuck Cranor, Erez
% Zadok and the FREENIX Track folks to ``be easier to use and work
% better''. Hah. Major changes include transformation into a
% latex2e class file, better support for drafts, and some
% layout improvements.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% for Ispell:
% LocalWords: workingdraft BCM ednote SubSections xfig SubSection joe
| {
"alphanum_fraction": 0.6771319121,
"avg_line_length": 31.6833773087,
"ext": "tex",
"hexsha": "129d23e92036c8ba84c8e38268059e55e4b37ad8",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "79f5bd6d67258912e63d248732add0d257446225",
"max_forks_repo_licenses": [
"LPL-1.02"
],
"max_forks_repo_name": "Plan9-Archive/vdiskfs",
"max_forks_repo_path": "doc/wiov08/template-v1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "79f5bd6d67258912e63d248732add0d257446225",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"LPL-1.02"
],
"max_issues_repo_name": "Plan9-Archive/vdiskfs",
"max_issues_repo_path": "doc/wiov08/template-v1.tex",
"max_line_length": 105,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "79f5bd6d67258912e63d248732add0d257446225",
"max_stars_repo_licenses": [
"LPL-1.02"
],
"max_stars_repo_name": "Plan9-Archive/vdiskfs",
"max_stars_repo_path": "doc/wiov08/template-v1.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3238,
"size": 12008
} |
\chapter{Main Entities} \label{chap:entities}
In the following, the main entities of ViennaGrid are explained.
The nomenclature essentially follows the convention from topology and can partly be found in other mesh libraries.
Note that the purpose of this manual is not to give exact definitions from the field of geometry or topology, but rather to establish the link between abstract concepts and their representation in code within {\ViennaGrid}.
First, geometrical objects are discussed, then topological objects and finally complexes of topological objects.
\begin{figure}[tb]
\centering
\fbox{ \includegraphics[width=0.95\textwidth]{figures/entities.eps} }
\caption{Overview of the main entities in {\ViennaGrid} for a triangular mesh. A point refers to any location in the geometric space and does not carry topological information.}
\label{fig:entities}
\end{figure}
\section{Points (Geometrical Objects)}
The underlying space in {\ViennaGrid} is the $m$-dimensional Euclidean space $\mathbb{E}^m$, which is identified with the real coordinate space $\mathbb{R}^m$ in the following.
A \emph{point} refers to an element $\vector x$ in $\mathbb{R}^m$ and does not carry any topological information.
On the other hand, a point equivalently refers to the vector from the origin pointing to $\vector x$.
Given a configuration class \lstinline|Config| for {\ViennaGrid} (cf.~Chap.~\ref{chap:meshsetup}), a point is defined and manipulated as follows:
\begin{lstlisting}
using namespace viennagrid;
// obtain the point type from a meta-function
typedef result_of::point<Config>::type PointType;
// For a three-dimensional Cartesian space (double precision),
// the type of the point is returned as
// spatial_point<double, cartesian_cs<3> >
// Instantiate two points:
PointType p1(0, 1, 2);
PointType p2(2, 1, 0);
// Add/Subtract points:
PointType p3 = p1 + p2;
std::cout << p1 - 2.0 * p3 << std::endl;
std::cout << "x-coordinate of p1: " << p1[0] << std::endl;
\end{lstlisting}
The operators \lstinline|+, -, *, /, +=, -=, *=| and \lstinline|/=| can be used in the usual mnemonic manner. \lstinline|operator[]| grants access to the individual coordinate entries and allows for a direct manipulation.
Aside from the standard Cartesian coordinates, {\ViennaGrid} can also handle polar, spherical and cylindrical coordinate systems.
This is typically defined globally within the configuration class \lstinline|Config| for the whole mesh, and the meta-function in the previous snippet creates the correct point type. However, if no global configuration class is available, the point types can be obtained as
\begin{lstlisting}
typedef spatial_point<double, cartesian_cs<1> > CartesianPoint1d;
typedef spatial_point<double, cartesian_cs<2> > CartesianPoint2d;
typedef spatial_point<double, polar_cs> PolarPoint2d;
typedef spatial_point<double, cartesian_cs<3> > CartesianPoint3d;
typedef spatial_point<double, spherical_cs> SphericalPoint3d;
typedef spatial_point<double, cylindrical_cs> CylindricalPoint3d;
\end{lstlisting}
Conversions between the coordinate systems are carried out implicitly whenever a point is assigned to a point with a different coordinate system:
\begin{lstlisting}
CylindricalPoint3d p1(1, 1, 5);
CartesianPoint3d p2 = p1; //implicit conversion
\end{lstlisting}
An explicit conversion to the Cartesian coordinate system is provided by the free function \lstinline|to_cartesian()|, which allows for the implementation of generic algorithms based on Cartesian coordinate systems without tedious dispatches based on the coordinate systems involved.
\TIP{For details on the coordinate systems, refer to the reference documentation in \texttt{doc/doxygen/}.}
Since all coordinate systems refer to a common underlying Euclidean space, the operator overloads remain valid even if operands are given in different coordinate systems. In such a case, the coordinate system of the resulting point is given by the coordinate system of the left hand side operand:
\begin{lstlisting}
CylindricalPoint3d p1(1, 1, 5);
CartesianPoint3d p2 = p1; //implicit conversion
// the result of p1 + p2 is in cylindrical coordinates
CylindricalPoint3d p3 = p1 + p2;
// the result of p2 + p1 is in Cartesian coordinates,
// but implicitly converted to cylindrical coordinates:
CylindricalPoint3d p4 = p2 + p1;
\end{lstlisting}
For additional algorithms acting on points, e.g.~\lstinline|norm()| for computing the norm/length of a vector, please refer to Chapter \ref{chap:algorithms}.
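As a quick illustration (a minimal usage sketch only, assuming that \lstinline|norm()| is called directly on a point and returns its Euclidean length; the exact set of overloads is documented in Chapter~\ref{chap:algorithms}):
\begin{lstlisting}
PointType p(1, 2, 2);
double len = viennagrid::norm(p);  // Euclidean length, here 3
\end{lstlisting}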
\TIP{ViennaGrid is not restricted to one, two or three geometric dimensions! Cartesian coordinate systems for arbitrary dimensions are available.}
\section{Elements (Topological Objects)} \label{sec:ncells}
While the point type defines the underlying geometry, elements define the topological connections among distinguished points. Each of these distinguished points is called a \emph{vertex} and describes the corners or intersection of geometric shapes. Vertices are often also referred to as the \emph{nodes} of a mesh.
An \emph{edge} or \emph{line} is a line segment joining two vertices.
Note that this is a topological characterization -- the underlying geometric space can have arbitrary dimension.
A \emph{cell} is an element of maximum topological dimension $N$ within the set of elements considered.
The topological dimension of cells can be smaller than the underlying geometric space, which is for example the case in triangular surface meshes in three dimensions.
Note that the nomenclature used among scientists is not entirely consistent:
some refer to topologically three-dimensional objects as cells independently of the topological dimension of the full mesh, which is not the convention used here.
The surface of a cell consists of \emph{facets}, which are objects of topological dimension $N-1$.
Some authors prefer the name \emph{face}, which is by other authors used to refer to objects of topological dimension two.
Again, the definition of a facet refers in {\ViennaGrid} to topological dimension $N-1$.
Boundary elements are elements which represent a boundary of another element.
For example a triangle is a boundary element of a tetrahedron.
In {\ViennaGrid}, not only the direct boundaries are boundary elements; a boundary element of a boundary element of an element is also a boundary element of that element:
for example, a vertex and a line are both boundary elements of a tetrahedron.
\begin{table}[tbp]
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|l|l|l|l|l|}
\hline
& $1$-d & $2$-d & $3$-d & $n$-d \\
\hline
\textbf{Vertex} & Point & Point & Point & Point \\
\textbf{Edge} & Line & Line & Line & Line \\
\textbf{Facet} & Point & Line & Triangle, etc. & $n-1$-Simplex, etc. \\
\textbf{Cell} & Line & Triangle, etc. & Tetrahedron, etc. & $n$-Simplex \\
\hline
\end{tabular}
\caption{Examples for the vertices, edges, facets and cells for various topological dimensions.}
\label{tab:vertex-edge-facet-cell}
\end{table}
A brief overview of the corresponding meanings of vertices, edges, facets and cells is given in Tab.~\ref{tab:vertex-edge-facet-cell}. Note that edges have higher topological dimension than facets in the one-dimensional case, while they coincide in two dimensions. Refer also to Fig.~\ref{fig:entities}.
{\ViennaGrid} supports three element families: simplices, hypercubes and special elements.
Conceptually, {\ViennaGrid} is able to deal with simplices and hypercubes of arbitrary dimension,
yet the explicit template instantiations are only defined up to three spatial dimensions.
Special elements are polygons and piecewise linear complexes (PLCs).
To fully enable compiler optimizations, element types are identified during compilation time by so-called tags.
Tab.~\ref{tab:element-type-and-tags} gives an overview of all supported element types and their tags.
Internally, these tags are used for accessing static information such as the number of vertices of the respective element.
Where possible, these numbers are accessed as constants at compile time, thus enabling full loop unrolling.
\begin{table}[tbp]
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|l|c|l|l|}
\hline
& Dimension & Generic Tag & Element Tag \\
\hline
\textbf{Simplex} & $n$ & \lstinline|simplex_tag<n>| & \lstinline|simplex_tag<n>| \\
\textbf{Hypercube} & $n$ & \lstinline|hypercube_tag<n>| & \lstinline|hypercube_tag<n>| \\
\textbf{Vertex} & $0$ & \lstinline|simplex_tag<0>| & \lstinline|vertex_tag| \\
\textbf{Line} or \textbf{Edge} & $1$ & \lstinline|simplex_tag<1>| & \lstinline|line_tag|, \lstinline|edge_tag| \\
\textbf{Triangle} & $2$ & \lstinline|simplex_tag<2>| & \lstinline|triangle_tag| \\
\textbf{Tetrahedron} & $3$ & \lstinline|simplex_tag<3>| & \lstinline|tetrahedron_tag| \\
\textbf{Quadrilateral} & $2$ & \lstinline|hypercube_tag<2>| & \lstinline|quadrilateral_tag| \\
\textbf{Hexahedron} & $3$ & \lstinline|hypercube_tag<3>| & \lstinline|hexahedron_tag| \\
\textbf{Polygon} & $2$ & \lstinline|polygon_tag| & \lstinline|polygon_tag| \\
\textbf{PLC} & $2$ & \lstinline|plc_tag| & \lstinline|plc_tag| \\
\hline
\end{tabular}
\caption{Element types and their tags}
\label{tab:element-type-and-tags}
\end{table}
\section{Mesh}
A \emph{mesh} $\Omega$ is the top level object in {\ViennaGrid} and is a container for its topological elements.
There are no topological restrictions on the elements inside a mesh.
A typical use-case for a mesh is to store a cell complex.
We characterize a cell complex as a collection of topological elements, such that the intersection of two elements (maybe of different type) $e_0$ and $e_1$ is another element $e_i$ from the cell complex.
{\ViennaGrid} fully supports conforming complexes and has partial support for non-conforming complexes.
Here, a \emph{conforming} complex is characterized by the property that the intersection element $e_i$ from above is a boundary element from both of the element $e_0$ and the element $e_1$.
If this is not the case, the complex is denoted \emph{non-conforming}, cf.~Fig.~\ref{fig:conformity}.
\begin{figure}[tb]
\centering
\subfigure[Conforming cell complex.]{
\includegraphics[width=0.23\textwidth]{figures/conforming.eps}
} \hspace*{2cm}
\subfigure[Non-conforming cell complex.]{ \label{subfig:non-conforming}
\includegraphics[width=0.23\textwidth]{figures/non-conforming}
}
\caption{Illustration of conforming and non-conforming cell complexes. The vertex in the center of \subref{subfig:non-conforming} intersects an edge in the interior, violating the conformity criterion.}
\label{fig:conformity}
\end{figure}
\pagebreak
The instantiation of a {\ViennaGrid} mesh object requires a configuration class \lstinline|Config|.
Table \ref{tab:mesh-configs} provides an overview of built-in configurations in namespace \lstinline|viennagrid::config|,
which can also serve as a starting point for user-defined configurations.
Given such a class, the mesh type is retrieved and the mesh object constructed as
\begin{lstlisting}
using namespace viennagrid;
// Type retrieval, method 1: use meta-function (recommended)
typedef result_of::mesh<Config>::type MeshType;
// Type retrieval, method 2: direct (discouraged, may be changed)
typedef mesh<Config> MeshType;
MeshType my_mesh; //create the mesh object
\end{lstlisting}
\begin{table}[tbp]
\centering
\renewcommand{\arraystretch}{1.3}
 \begin{tabular}{|l|c|l|}
\hline
Configuration class & Spatial Dim & Cell Type \\
\hline
\lstinline|vertex_1d| & $1$ & Vertex \\
\lstinline|vertex_2d| & $2$ & Vertex \\
\lstinline|vertex_3d| & $3$ & Vertex \\
\lstinline|line_1d| & $1$ & Line \\
\lstinline|line_2d| & $2$ & Line \\
\lstinline|line_3d| & $3$ & Line \\
\lstinline|triangular_2d| & $2$ & Triangle \\
\lstinline|triangular_3d| & $3$ & Triangle \\
\lstinline|quadrilateral_2d| & $2$ & Quadrilateral \\
\lstinline|quadrilateral_3d| & $3$ & Quadrilateral \\
\lstinline|polygonal_2d| & $2$ & Polygon \\
\lstinline|polygonal_3d| & $3$ & Polygon \\
\lstinline|plc_2d| & $2$ & PLC \\
\lstinline|plc_3d| & $3$ & PLC \\
\lstinline|tetrahedral_3d| & $3$ & Tetrahedron \\
\lstinline|hexahedral_3d| & $3$ & Hexahedron \\
\hline
\end{tabular}
\caption{Predefined default configurations in \lstinline|viennagrid::config|.}
\label{tab:mesh-configs}
\end{table}
\section{Segmentation and Segment}
A \emph{segment} $\Omega_i$ refers to a subset of the elements in a mesh $\Omega$. Unlike a mesh, a segment is not a container for its elements. Instead, only references (pointers) to the elements in the mesh are stored. In C++ terminology, a \emph{segment} represents a so-called \emph{view} on the mesh.
A \emph{segmentation} represents a collection of segments. The typical use-case for a segmentation is the decomposition of the mesh into pieces sharing a common property. For example, a solid consisting of different materials can be set up in {\ViennaGrid} such that all regions made of the same material are represented by a common segment.
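The segmentation type can be obtained with the same meta-function pattern as the mesh type above. The following lines are only a sketch of the idea; the exact names (\lstinline|result_of::segmentation|, \lstinline|result_of::segment_handle|, \lstinline|make_segment()|) are assumptions for illustration and should be checked against the reference documentation:
\begin{lstlisting}
using namespace viennagrid;
// Sketch only -- meta-function and member names are assumed:
typedef result_of::segmentation<MeshType>::type           SegmentationType;
typedef result_of::segment_handle<SegmentationType>::type SegmentHandleType;

SegmentationType my_segmentation(my_mesh);  // views on elements of my_mesh
SegmentHandleType seg0 = my_segmentation.make_segment(); // e.g. material A
SegmentHandleType seg1 = my_segmentation.make_segment(); // e.g. material B
\end{lstlisting}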
\chapter{Colours} \label{sec:colours}
The design guide defines 3 primary colours (dtured, white and black) and 10 secondary colours \url{https://www.designguide.dtu.dk/#stnd-colours}. Below are the codes for the various colour modes. RGB is used for web and Office programmes. CMYK is used for print. HTML is used for HTML coding. If you know anything about colour codes you might notice that the RGB codes range from 0-1 instead of the usual 0-255.
\begin{testcolors}[rgb,cmyk,HTML]
\testcolor{dtured}
\testcolor{white}
\testcolor{black}
\testcolor{blue}
\testcolor{brightgreen}
\testcolor{navyblue}
\testcolor{yellow}
\testcolor{orange}
\testcolor{pink}
\testcolor{red}
\testcolor{green}
\testcolor{purple}
\end{testcolors}
The default colour model for this template is cmyk. The current colour model is \targetcolourmodel~which is also illustrated by the underlined numbers in the colour test table above. If you wish to change the colour model to rgb, go to Setup/Settings.tex and change \texttt{targetcolourmodel} to rgb. In Setup/Settings.tex it is also possible to change the background colour of the front and back page. The colours are primarily used for diagrams (the plotcyclelist DTU) and the front and back page.
Lighter colours can be achieved as written in the \LaTeX{} code below. For example, to get a tint of 50\% you would write \texttt{colourname!50}. \newline
{\raggedright
\textcolor{dtured}{Normal dtured} \qquad
\textcolor{dtured!80}{80\% dtured} \qquad
\textcolor{dtured!70}{70\% dtured} \qquad
\textcolor{dtured!60}{60\% dtured} \qquad
\textcolor{dtured!50}{50\% dtured}
}
\newline
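A reusable named tint can also be defined once with the \texttt{xcolor} command \texttt{\textbackslash colorlet} and then used like any other colour (a minimal sketch; the name \texttt{dturedlight} is only an example):
\begin{verbatim}
\colorlet{dturedlight}{dtured!50}
\textcolor{dturedlight}{This text is printed in a 50% tint of dtured}
\end{verbatim}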
For more information about colours in \LaTeX{} read the \texttt{xcolor} manual.
\section{Nodal Basis and Nodal Space}
\label{sec:21nodalSpaces}
\minitoc{62mm}{4}
\mbox{}\vspace{-14mm}
\disableornamentsfornextheadingtrue
\subsection{Univariate Case}
\label{sec:211nodalUV}
\paragraph{Grid and basis functions}
In this thesis, we consider univariate functions
that are defined on the unit interval $\clint{0, 1}$.
\usenotation{l}
We discretize this domain by splitting it into $2^l$ equally-sized segments,
where $l \in \natz$ is the \term{level.}
\usenotation{i}
The resulting $2^l + 1$ \term{grid points} $\gp{l,i}$ are given by
\begin{equation}
\gp{l,i} \ceq i \cdot \ms{l},\quad
i = 0, \dotsc, 2^l,
\end{equation}
where $i$ is the \term{index} and $\ms{l} \ceq 2^{-l}$ is the \term{mesh size.}%
\footnote{%
Note that from a strict formal perspective,
    this equation defines $\gp{l,i}$ only for $i = 0, \dotsc, 2^l$,
but we will later need $\gp{l,i}$ also for $i < 0$ or $i > 2^l$.
The convention in this thesis is that all definitions are
implicitly generalized whenever needed.%
}
Every grid point is associated with a \term{basis function}
\begin{equation}
\basis{l,i}\colon \clint{0, 1} \to \real.
\end{equation}
We allow $\basis{l,i}$ to be arbitrary;
any required assumptions are stated where they are needed.
However, it helps for both the theory and the intuition to have a
specific example of basis functions in mind.
\usenotation{Ë1}
The so-called \term{hat functions} (linear B-splines)
are the most common choice for $\basis{l,i}$:
\begin{equation}
\label{eq:hatFunctionUV}
\bspl{l,i}{1}(x)
\ceq \max(1 - \abs{\tfrac{x}{\ms{l}} - i}, 0).
\end{equation}
Here and in the following,
the superscript ``1'' stands for the degree of the linear B-spline and
is not to be read as an exponent.
We generalize this notation to B-splines $\bspl{l,i}{p}$ of
arbitrary degrees $p$ in \cref{chap:30BSplines}.
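For instance, for $l = 3$ and $i = 2$ we have $\ms{3} = 1/8$ and therefore
$\bspl{3,2}{1}(x) = \max(1 - \abs{8x - 2}, 0)$:
the function vanishes outside $\clint{\gp{3,1}, \gp{3,3}} = \clint{1/8, 3/8}$ and
attains its maximum value of one exactly at the grid point $\gp{3,2} = 1/4$.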
\paragraph{Nodal space}
The \term{nodal space} $\ns{l}$ of level $l$
is defined as the linear span of all basis functions
$\basis{l,i}$:
\begin{equation}
\ns{l} \ceq \spn\{\basis{l,i} \mid i = 0, \dotsc, 2^l\}.
\end{equation}
We assume that the functions $\basis{l,i}$ form a basis of $\ns{l}$, i.e.,
they are linearly independent.
Consequently, every linear combination of these functions is unique.
This ensures that for every objective function $\objfun\colon \clint{0, 1} \to \real$,
there is a unique function $\fgintp{l}\colon \clint{0, 1} \to \real$ such that
\begin{equation}
\label{eq:interpFullGridUV}
\fgintp{l}
= \sum_{i=0}^{2^l} \interpcoeff{l,i} \basis{l,i},\quad
\falarge{i = 0, \dotsc, 2^l}{\fgintp{l}(\gp{l,i}) = \objfun(\gp{l,i})},
\end{equation}
for some $\interpcoeff{l,i} \in \real$.
In this case, $\fgintp{l}$ is called \term{interpolant} of $\objfun$ in $\ns{l}$.
The nodal space $\nsbspl{l}{1}$ is defined analogously to $\ns{l}$
as the span of the hat functions $\bspl{l,i}{1}$.
It is the space of all linear splines,
that is, the space of all continuous functions on $\clint{0, 1}$ that are
piecewise linear polynomials on $\clint{\gp{l,i}, \gp{l,i+1}}$ for
$i = 0, \dotsc, 2^l - 1$ \cite{Hoellig13Approximation}.
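For the hat functions, the interpolation problem \eqref{eq:interpFullGridUV} is particularly simple:
since $\bspl{l,i}{1}(\gp{l,i'})$ equals one for $i = i'$ and zero otherwise,
the coefficients are just the function values, $\interpcoeff{l,i} = \objfun(\gp{l,i})$.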
The nodal hat function basis of level~$l = 3$
and a linear combination are shown in \cref{fig:nodalHat}.
\begin{figure}
\subcaptionbox{%
Basis functions $\bspl{l,i}{1}$ ($i = 0, \dotsc, 2^l$)
and grid points $\gp{l,i}$ \emph{(dots).}%
}[72mm]{%
\includegraphics{hierarchicalBasis_1}%
}%
\hfill%
\subcaptionbox{%
Piecewise linear interpolant $\fgintp{l}$
of some function data $\objfun(\gp{l,i})$
as a weighted sum of the nodal hat functions.%
}[72mm]{%
\includegraphics{interpolant_1}%
}%
\caption[%
Univariate nodal hat functions%
]{%
Univariate nodal hat functions of level $l = 3$.%
}%
\label{fig:nodalHat}%
\end{figure}
\subsection{Multivariate Case}
\label{sec:212nodalMV}
\paragraph{Cartesian and tensor products}
\usenotation{d}
For the multivariate case with $d \in \nat$ dimensions,
we employ a tensor product approach,
for which we replace all indices, points, and functions with
multi-indices, Cartesian products, and tensor products, respectively.
\usenotation{@0}
\usenotation{@1}
Therefore, the domain is now $\clint{\*0, \*1} \ceq \clint{0, 1}^d$,
which can be partitioned into
$\prod_{t=1}^d 2^{l_t} = 2^{\normone{\vec{l}}}$ equally-sized hyper-rectangles,
where $\*l = (l_1, \dotsc, l_d) \in \natz^d$ is the $d$-dimensional level
and $\normone{\vec{l}} \ceq \sum_{t=1}^d \abs{l_t}$ is the level sum.
The corners of the hyper-rectangles are given by the grid points
\begin{equation}
\label{eq:gridPointMultivariate}
\gp{\*l,\*i} \ceq \*i \cdot \ms{\*l},\quad
\*i = \*0, \dotsc, \*2^{\*l}.
\end{equation}
Relations and operations with vectors in bold face
are to be read coordinate-wise in this thesis, unless stated otherwise.
Bold-faced numbers like $\*0$ are defined to be the vector $(0, \dotsc, 0)$
in which every entry is equal to that number.
This is to allow a somewhat intuitive and suggestive notation.
For example, \eqref{eq:gridPointMultivariate} is equivalent to
the much longer formula
\begin{equation}
\gp{\*l,\*i}
\ceq (i_1 \ms{l_1},\; \dotsc,\; i_d \ms{l_d}),\quad
i_t = 0, \dotsc, 2^{l_t},\quad
t = 1, \dotsc, d,
\end{equation}
with the $d$-dimensional mesh size
$\ms{\*l} \ceq \*2^{-\*l} = (\ms{l_1}, \dotsc, \ms{l_d})$.
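For instance, for $\*l = (2, 1)$ we have $\ms{\*l} = (1/4, 1/2)$,
and the index $\*i = (1, 1)$ yields the grid point $\gp{\*l,\*i} = (1/4, 1/2)$
(cf.\ the basis function in \cref{fig:nodalHat2D}).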
Again, every grid point is associated with a basis function that is defined
as the tensor product of the univariate functions:%
\footnote{%
Note that,
although \cref{eq:tensorProduct} does not cover it,
one could employ basis functions of different types in
each dimension, for example B-splines of different degrees.
All remaining considerations in this thesis
regarding tensor product basis functions are independent
of whether we use the same function type or
different types in each dimension.%
}
\begin{equation}
\label{eq:tensorProduct}
\basis{\*l,\*i}\colon \clint{\*0, \*1} \to \real,\quad
\basis{\*l,\*i}(\*x)
\ceq \prod_{t=1}^d \basis{l_t,i_t}(x_t).
\end{equation}
\cref{fig:nodalHat2D} shows an example of a bivariate nodal hat function
$\bspl{\*l,\*i}{1}$.
\begin{SCfigure}
\includegraphics{nodalHat2D_1}%
\caption[%
Bivariate nodal hat function%
]{%
    Bivariate nodal hat function of level $\*l = (2, 1)$ and
index $i = (1, 1)$ as the tensor product of two univariate
nodal hat functions.%
}%
\label{fig:nodalHat2D}%
\end{SCfigure}
\vspace*{\fill}
\pagebreak
\paragraph{Multivariate nodal space}
The multivariate nodal space $\ns{\*l}$ is defined analogously to
the univariate case:
\begin{equation}
\ns{\*l}
\ceq \spn\{\basis{\*l,\*i} \mid \*i = \*0, \dotsc, \*2^{\*l}\}.
\end{equation}
In the case of hat functions $\bspl{\*l,\*i}{1}$,
the nodal space $\nsbspl{\*l}{1}$ is the $d$-linear spline space
\cite{Hoellig13Approximation}, i.e.,
the space of all continuous functions
on $\clint{\*0, \*1}$ that are piecewise $d$-linear polynomials on
all hyper-rectangles
\begin{equation}
\clint{\gp{\*l,\*i}, \gp{\*l,\*i+\*1}}
\ceq \clint{\gp{l_1,i_1}, \gp{l_1,i_1+1}} \times \dotsb \times
\clint{\gp{l_d,i_d}, \gp{l_d,i_d+1}},\quad
\*i = \*0, \dotsc, \*2^\*l - \*1.
\end{equation}
Analogously to \eqref{eq:interpFullGridUV},
we can interpolate objective functions $\objfun\colon \clint{\*0, \*1} \to \real$
in the nodal space $\ns{\*l}$ with $\fgintp{\*l}\colon \clint{\*0, \*1} \to \real$ satisfying
\begin{equation}
\label{eq:interpFullGridMV}
\fgintp{\*l}
= \sum_{\*i=\*0}^{\*2^\*l} \interpcoeff{\*l,\*i} \basis{\*l,\*i},\quad
\falarge{\*i = \*0, \dotsc, \*2^\*l}{\fgintp{\*l}(\gp{\*l,\*i}) = \objfun(\gp{\*l,\*i})},
\end{equation}
where $\interpcoeff{\*l,\*i} \in \real$ and
the sum is over all $\*i = \*0, \dotsc, \*2^\*l$
(i.e., $i_t = 0, \dotsc, 2^{l_t}$, $t = 1, \dotsc, d$).
To ensure that the coefficients $\interpcoeff{\*l,\*i}$
exist for every objective function $\objfun$ and are uniquely determined by
the values at the grid points
\begin{equation}
\fgset{\*l}
\ceq \{\gp{\*l,\*i} \mid \*i = \*0, \dotsc, \*2^{\*l}\},
\end{equation}
we prove the following statement:
\vspace*{\fill}
\pagebreak
\begin{lemma}[linear independence of tensor products]
\label{lemma:tensorProductLinearIndependence}
The functions $\basis{\*l,\*i}$ ($\*i = \*0, \dotsc, \*2^\*l$)
form a basis of $\ns{\*l}$, if the univariate functions
$\basis{l_t,i_t}$ ($i_t = 0, \dotsc, 2^{l_t}$)
form a basis of the univariate nodal space $\ns{l_t}$
for $t = 1, \dotsc, d$.
\end{lemma}
\begin{proof}
Assume that $\interpcoeff{\*l,\*i} \in \real$ are chosen in \eqref{eq:interpFullGridMV}
such that $\fgintp{\*l} \equiv 0$.
Then for all $\*i' = \*0, \dotsc, \*2^\*l$,
we can evaluate \eqref{eq:interpFullGridMV} at $\gp{\*l,\*i'}$ to obtain
\begin{equation}
\sum_{i_1=0}^{2^{l_1}}
\paren*{
\sum_{i_2=0}^{2^{l_2}} \dotsb \paren*{
\sum_{i_d=0}^{2^{l_d}}
\interpcoeff{\*l,\*i} \basis{l_d,i_d}(\gp{l_d,i_d'})
} \dotsb \basis{l_2,i_2}(\gp{l_2,i_2'})
} \basis{l_1,i_1}(\gp{l_1,i_1'})
= 0.
\end{equation}
We apply the univariate linear independence ($x_1$ direction) to infer
that the sum over $i_2$ must vanish for all $i_1 = 0, \dotsc, 2^{l_1}$.
Repeating this argument for all dimensions, we have
$\interpcoeff{\*l,\*i} = 0$ for all~$\*i = \*0, \dotsc, \*2^\*l$,
implying the linear independence of the functions $\basis{\*l,\*i}$.
\end{proof}
\usenotation{n10}
A common choice for the level $\*l$ is $n \cdot \*1$ for some $n \in \natz$.
\usenotation{Vnd}
In this case, we replace ``$\*l$'' in the subscripts with ``$n{,}d$''
(for example, $\ns{n,d} \ceq \ns{n \cdot \*1}$).
For the hat function basis $\bspl{\*l,\*i}{1}$,
it can be shown that the $\Ltwo$ interpolation error of the interpolant
$\fgintp{n,d} \in \ns{n,d}$ is given by
\begin{equation}
\normLtwo{\objfun - \fgintp{n,d}} = \landauO{\ms{n}^2},
\end{equation}
i.e., the order of the interpolation error is quadratic in the mesh size
\multicite{Hoellig13Approximation,Bungartz04Sparse}.
\subsection{The Importance of Asynchrony}
\label{sec-conc-lessons-asynchrony}
The benefits of asynchrony have long been known in the software
world~\cite{michelson2006event, AdyaEtAl02-Tasks}. For example, when
asynchronous Javascript was introduced, it enabled the creation of
exciting, responsive web applications such as Google Suggest and Google
Maps~\cite{garrett2005ajax}.
Yet, the role of asynchrony in storage systems has been limited. Until
the 1990s, storage systems were essentially synchronous, performing
one operation at time~\cite{Seltzer90-SchedRevisit,
McKusickEtAl-FFS-84}. With the introduction of tag queuing in SCSI
disks~\cite{AndersonEtAl03-SCSIvATA, RidgeField00-SCSI, Weber04-SCSI},
disks could accept sixteen simultaneous requests. Unfortunately, many
devices do not implement tag queuing
correctly~\cite{marshall2012disks}.
In this dissertation, we have shown that asynchronous, orderless I/O
has significant advantages. First, not constraining the order of I/O
allows large performance gains, especially on today's multi-tenant
systems~\cite{thereska2013ioflow}. Recent work on non-volatile memory
has shown that removing ordering constraints on I/O can increase
performance by 30$\times$~\cite{pelley2014memory}.
Second, using interfaces such as asynchronous durability notifications
allows each layer to introduce optimizations such as delaying,
batching, or re-ordering I/O, without affecting the correctness of the
file system or the application. Increasing the independence of each
layer in the storage stack leads to a more robust storage system.
\PassOptionsToPackage{unicode=true}{hyperref} % options for packages loaded elsewhere
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[]{book}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provides euro and other symbols
\else % if luatex or xelatex
\usepackage{unicode-math}
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\usepackage{hyperref}
\hypersetup{
pdftitle={EEB 297: Population genomics of structural variants and transposable elements},
pdfauthor={Jesse Garcia},
pdfborder={0 0 0},
breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\usepackage{longtable,booktabs}
% Fix footnotes in tables (requires footnote package)
\IfFileExists{footnote.sty}{\usepackage{footnote}\makesavenoteenv{longtable}}{}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
% set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\usepackage{booktabs}
\usepackage{amsthm}
\makeatletter
\def\thm@space@setup{%
\thm@preskip=8pt plus 2pt minus 4pt
\thm@postskip=\thm@preskip
}
\makeatother
\usepackage[]{natbib}
\bibliographystyle{apalike}
\title{EEB 297: Population genomics of structural variants and transposable elements}
\author{Jesse Garcia}
\date{2021-12-15}
\begin{document}
\maketitle
{
\setcounter{tocdepth}{1}
\tableofcontents
}
\hypertarget{description}{%
\chapter{Description}\label{description}}
This is my study guide for the class EEB 297 and its associated articles.
\hypertarget{week2}{%
\chapter{Week 2 Biology of CNVs \& CNVs in Drosophila}\label{week2}}
\hypertarget{hastings-et-al.mechanisms-of-change-in-gene-copy-number-2009-nat-rev-genet}{%
\section{Hastings et al.~Mechanisms of change in gene copy number (2009) Nat Rev Genet}\label{hastings-et-al.mechanisms-of-change-in-gene-copy-number-2009-nat-rev-genet}}
\hypertarget{abstract}{%
\subsection{Abstract}\label{abstract}}
\begin{itemize}
\tightlist
\item
  Deletions and duplications underlie human phenotypes and form at much higher rates than other kinds of mutations
\item
Repair of broken replication forks might promote CNV production
\end{itemize}
\hypertarget{introduction}{%
\subsubsection{Introduction}\label{introduction}}
\begin{itemize}
\tightlist
\item
(Somatic/Meiotically Generated) Identical twins differ in CNV and different organs and tissues vary in copy number in the same individual (Woah)
\item
CNV is at LEAST as important in the differences between humans as SNPs
\item
Can change protein structure
\item
  CNV variation is disadvantageous and involved in cancer formation and progression. Contributes to cancer proneness
\item
  \emph{Question}: Wouldn't this be adaptive for the cancer cells and not necessarily deleterious?
\item
Although we are looking at other species, it is probably ok to extrapolate what we get from bacteria
\end{itemize}
\hypertarget{characteristics-of-copy-number-variants}{%
\subsubsection{Characteristics of copy number variants}\label{characteristics-of-copy-number-variants}}
\begin{itemize}
\tightlist
\item
Change in copy number == change in chromosome structure
\item
Low Copy Repeats (LCRs): recurrent CNVs whose end-points are confined to few genomic positions
\begin{itemize}
\tightlist
\item
Probably come from homologous recombination between repeated sequences
\end{itemize}
\item
\emph{Question} Paragraph 2: ``Most non-recurrent CNVs occur at sites of very limited homology of 2 to 15 base pairs (bp)''
\begin{itemize}
\tightlist
\item
How do you measure homology within yourself?
\end{itemize}
\item
Recurrent CNVs
\begin{itemize}
\tightlist
\item
Short
\item
Tend to be in regions where LCRs (recurrent CNVs) are
\end{itemize}
\end{itemize}
\hypertarget{mechanisms-of-structural-change}{%
\subsubsection{Mechanisms of structural change}\label{mechanisms-of-structural-change}}
\begin{itemize}
\tightlist
\item
Mechansims of all structural changes are the same as those that cause CNV
\item
Homologous Recombination (HR) requires \emph{extensive sequence identity} (to what? sister chromatid?)
\begin{itemize}
\tightlist
\item
Important in accurate DNA repair
\item
    HR causes CNVs not because the mechanism is inaccurate, but because genomes have tracts of low copy repeats or segmental duplications
\end{itemize}
\item
Rad51: Strand exchange protein important in most homologous recombination
\item
Nonhomology: use only ``microhomology'' of a few bases or no homology
\item
NAHR: Non-allelic or ectopic homologous recombination
\begin{itemize}
\tightlist
\item
when a damaged sequence is repaired by a homologous sequence in different chromosomal positions
\end{itemize}
\end{itemize}
\hypertarget{homologous-recombination-mechanisms}{%
\subsubsection{Homologous recombination mechanisms}\label{homologous-recombination-mechanisms}}
\begin{itemize}
\tightlist
\item
Homologous recombination underlies many DNA repair processes.
\begin{itemize}
\tightlist
\item
    repair of DNA breaks and gaps
  \item
    DSB: double-strand-break-induced recombination
  \item
    Spontaneous mitotic recombination is probably initiated by single-strand DNA gaps
\end{itemize}
\end{itemize}
\hypertarget{double-holliday-junction-and-synthesis-dependent-strand-annealing-models-of-double-strand-break-repair}{%
\paragraph{Double Holliday junction and synthesis-dependent strand annealing models of double-strand break repair}\label{double-holliday-junction-and-synthesis-dependent-strand-annealing-models-of-double-strand-break-repair}}
\begin{itemize}
\tightlist
\item
  Double-Holliday-junction double-strand break repair is a mechanism that can lead to gene conversion and crossing over
\item
Synthesis-dependent strand-annealing (SDSA)
\begin{itemize}
\tightlist
\item
does not generate crossovers
\item
mechanism for avoiding crossing-over, and loss of Heterozygosity (\emph{Question}: Would LOH events contribute to ROH computations?)
\item
can still produce CNVs
\end{itemize}
\item
If chromatids carrying the same allele segregate together at mitosis then you can get LOH
\item
Repeats can cause NAHR (non-allelic or ectopic homologous recombination)
\item
\emph{Question}: What's a vegetative cell?
\item
  Length of repeats might affect the probability of homologous recombination. Too short == less HR because of the physical constraints of the loop
\item
  Break-induced replication (BIR): homologous recombination is also used to repair collapsed/broken replication forks
  \begin{itemize}
  \tightlist
  \item
    Can induce LOH if it uses the homologue instead of the sister chromatid
\item
suggested major mechanism for change in copy number
\item
Can cause small deletions
\end{itemize}
\end{itemize}
\hypertarget{correct-choice-of-recombination-partner-prevents-chromosomal-structural-change}{%
\paragraph{Correct choice of recombination partner prevents chromosomal structural change}\label{correct-choice-of-recombination-partner-prevents-chromosomal-structural-change}}
\begin{itemize}
\tightlist
\item
Just don't pick a nonallelic partner for repair and you might be ok
\item
MutS and MutL work together to undo base-paired DNA molecules that are imperfectly matched
\item
  Homeologous sequences: sequences that share less than about 95\% identity.
\item
  Cohesins are proteins that literally bind two sister chromatids together
\begin{itemize}
\tightlist
\item
loss of cohesion may induce copy number change at other loci
\end{itemize}
\item
  Proteins even hold the ends of a single double-strand break together
\item
  Homologous recombination is dope for repair but can cause CNVs.
\end{itemize}
\hypertarget{nonhomologous-repair}{%
\subsubsection{Nonhomologous repair}\label{nonhomologous-repair}}
\begin{itemize}
\tightlist
\item
  Mechanisms of DNA repair that use very limited or NO homology.
\item
  Can cause CNVs
\item
  Two types: replicative and non-replicative mechanisms
\end{itemize}
\hypertarget{nonhomologous-repair-non-replicative-mechanisms}{%
\subsubsection{Nonhomologous repair: non-replicative mechanisms}\label{nonhomologous-repair-non-replicative-mechanisms}}
\hypertarget{nonhomologous-end-joining}{%
\paragraph{Nonhomologous end joining}\label{nonhomologous-end-joining}}
\begin{itemize}
\tightlist
\item
  Two pathways of double-strand break repair that require no homology or only very short microhomologies for repair
\item
Nonhomologous end joining (NHEJ)
\begin{itemize}
\tightlist
\item
NHEJ rejoins DSB ends accurately or leads to small 1-4 bp deletions, and also in some cases to insertion of free DNA, often from mitochondria or retrotransposons
\item
\emph{Question}: What is free DNA?
\end{itemize}
\item
  Microhomology-mediated end joining (MMEJ)
\begin{itemize}
\tightlist
\item
    uses 5 to 25 bp long homologies to anneal the ends of double-strand breaks and leads to deletions of the sequences between the annealed microhomologies.
\end{itemize}
\item
  Likely to cause some chromosomal rearrangement by joining nonhomologous sequences
\end{itemize}
\hypertarget{breakage-fusion-bridge-cycle}{%
\paragraph{Breakage-fusion-bridge cycle}\label{breakage-fusion-bridge-cycle}}
\begin{itemize}
\tightlist
\item
If a chromosome loses its telomere due to a double strand break, there will be two sister chromatids that lack telomeres (during replication)
\begin{itemize}
\tightlist
\item
    Sister chromatids that lack telomeres will fuse, creating a dicentric chromosome
  \item
    The fused chromosomes will be ripped apart during anaphase
\item
this will happen over and over again
\item
will lead to large inverted duplications/repeats
\item
seen a lot in cancer
\item
Barbara McClintock proposed this!
\end{itemize}
\end{itemize}
\hypertarget{nonhomologous-repair-replicative-mechanisms}{%
\subsubsection{Nonhomologous repair: replicative mechanisms}\label{nonhomologous-repair-replicative-mechanisms}}
\begin{itemize}
\tightlist
\item
  If you see microhomology at a site of nonhomologous recombination, it's probably because the CNV was created by non-homologous end joining
\item
  However, this might instead be a consequence of DNA replication and break-induced repair, rather than NHEJ
\item
  Replicative stress might induce CNV
\item
  Aphidicolin: an inhibitor of replicative DNA polymerases; it induces CNVs
\item
  This suggests that replication can cause CNVs
\item
  You see little homology at these endpoints, which suggests NOT HR
\end{itemize}
\hypertarget{replication-slippage-or-template-switching}{%
\paragraph{Replication slippage or template switching}\label{replication-slippage-or-template-switching}}
\begin{itemize}
\tightlist
\item
Single-stranded sequences that appear during replication (think Okazaki fragments) are often deleted or duplicated
\end{itemize}
\hypertarget{fork-stalling-and-template-switching}{%
\paragraph{Fork stalling and template switching}\label{fork-stalling-and-template-switching}}
\begin{itemize}
\tightlist
\item
During replication, forks can be stalled and the 3' primer end uses a single-stranded DNA template of another replication fork
\item
  Microhomology suggests that homologous recombination is not involved
\item
  They messed with the concentration of certain exonucleases and were able to find out which exonucleases are involved
\end{itemize}
\hypertarget{microhomology-mediated-break-induced-replication}{%
\subsubsection{Microhomology-mediated break-induced replication}\label{microhomology-mediated-break-induced-replication}}
\begin{itemize}
\tightlist
\item
Break induced replication can be mediated by microhomology
\item
  Pol32, which is a non-essential DNA polymerase, is needed for break-induced replication
\item
  Some authors suggest that this probably causes non-recurrent copy number changes in humans
\item
This author disagrees
\end{itemize}
\hypertarget{effects-of-chromosome-architecture-on-cnv}{%
\subsection{Effects of chromosome architecture on CNV}\label{effects-of-chromosome-architecture-on-cnv}}
\begin{itemize}
\tightlist
\item
CNVs are not randomly distributed in the human genome
\begin{itemize}
\tightlist
\item
Clustered in regions of complex genomic architecture
\item
Complex patterns of direct/inverted Low copy repeats
\begin{itemize}
\tightlist
\item
      Can cause stalling of DNA replication
\end{itemize}
\item
heterochromatin near telomeres/centromeres
\item
replication origins and terminators
\item
scaffold attachment sequences
\item
    occurrence of nonrecurrent changes in regions carrying multiple LCRs
\item
inverted repeats and palindromic sequences
\item
highly repeated sequences
\item
LINEs \& SINE
\begin{itemize}
\tightlist
\item
Cause CNV by Non-allelic or ectopic homologous recombination
\end{itemize}
\item
    DNA able to adopt non-B conformations
\item
specific consensus sequences associated with CNVs
\end{itemize}
\item
  \emph{THEME}: Multiple genomic features can affect the probability of their occurrence
  \begin{itemize}
  \tightlist
  \item
    \emph{Question}: What are the genomic features that can affect the probability of SNPs?
  \end{itemize}
\end{itemize}
\hypertarget{conclusions-and-ramifications}{%
\subsection{Conclusions and ramifications}\label{conclusions-and-ramifications}}
\begin{itemize}
\tightlist
\item
  At least two mechanisms for change in copy number
  \begin{itemize}
  \tightlist
  \item
    Non-allelic homologous recombination
    \begin{itemize}
    \tightlist
    \item
      Formed by classical HR-mediated double-strand break repair via a double Holliday junction
    \item
      Restarts broken replication forks by homologous recombination
    \end{itemize}
  \item
    Microhomology-mediated events
    \begin{itemize}
    \tightlist
    \item
      underlie most copy-number change
    \end{itemize}
  \item
    The breakage-fusion-bridge cycle operates and is maybe important in amplification in some cancers
  \end{itemize}
\item
Don't think that only one mechanism causes one event. There's mediation/interference/synergy in these methods
\item
  CNV could stem from a stress response.
\begin{itemize}
\tightlist
\item
``evolvability''
\item
stressed cells can fuel CNV formation and therefore genetic diversity upon which natural selection acts
\end{itemize}
\item
  In cancer cells, loss of heterozygosity drives tumor progression and resistance to therapies
\begin{itemize}
\tightlist
\item
    \emph{Question}: Will a cancer cell with runs of homozygosity be more fit than other cells?
\end{itemize}
\item
  There are probably variants associated with CNVs
\begin{itemize}
\tightlist
\item
    \emph{Question}: Can we do a GWAS on the phenotype: total length of genome in runs of homozygosity
\end{itemize}
\end{itemize}
\hypertarget{emerson-et-al.natural-selection-shapes-genome-wide-patterns-of-copy-number-polymorphism-in-drosophila-melanogaster.-2008-science}{%
\section{Emerson et al.~Natural Selection Shapes Genome-Wide Patterns of Copy-Number Polymorphism in Drosophila melanogaster. (2008) Science}\label{emerson-et-al.natural-selection-shapes-genome-wide-patterns-of-copy-number-polymorphism-in-drosophila-melanogaster.-2008-science}}
\hypertarget{abstract-1}{%
\subsection{Abstract}\label{abstract-1}}
\begin{itemize}
\tightlist
\item
We don't really know how selection affects the distribution/density of CNVs.
\item
  This paper identifies CNPs (copy-number polymorphisms) in Drosophila and concludes that the locations and frequencies of CNPs are shaped by purifying selection
\item
\emph{Strength of Purifying Selection}: Deletions \textgreater{} Duplications. Exon and Intron overlapping duplications and X chromosome duplications \textgreater{} random duplication
\end{itemize}
\hypertarget{paragraph-1}{%
\subsubsection{Paragraph 1}\label{paragraph-1}}
\begin{itemize}
\tightlist
\item
``CNPs can create new genes, change gene dosage, reshape gene structures, and/or modify the elements that regulate gene expression, understanding their evolution is at the very heart of understanding how such structural changes in the genome contribute to the phenotypic evolution of organisms''
\end{itemize}
\hypertarget{paragraph-2}{%
\subsubsection{Paragraph 2}\label{paragraph-2}}
\begin{itemize}
\tightlist
\item
  Identify CNPs with a custom tiling array and use an HMM trained on data from a line known to contain specific CNPs.
\end{itemize}
\hypertarget{paragraph-3}{%
\subsubsection{Paragraph 3}\label{paragraph-3}}
\begin{itemize}
\tightlist
\item
They validated their model with wet-lab procedures.
\item
  Deletions have a relatively high false-positive rate (47\%) because deletions are often near SNPs. This leads to DNA not binding well to the arrays.
\item
  \emph{Question}: Wonder what they'd estimate their positive-predictive value to be?
\end{itemize}
\hypertarget{paragraph-4}{%
\subsubsection{Paragraph 4}\label{paragraph-4}}
\begin{itemize}
\tightlist
\item
  They compare predicted and ``true'' boundaries of CNPs and claim their model can detect small CNPs and estimate CNP boundaries with precision
\item
They detect a lot more CNPs than in human with a smaller genome/sample size.
\item
  Human CNPs might include a class that is larger than \emph{anything} found in Drosophila. Current studies are missing small-scale variations.
\end{itemize}
\hypertarget{paragraph-5}{%
\subsubsection{Paragraph 5:}\label{paragraph-5}}
\begin{itemize}
\tightlist
\item
  Duplications outnumbered deletions 2.5:1 (sign test, P value $< 2.22 \times 10^{-16}$; Fig.~1) and were significantly larger (Wilcoxon rank sum test, P value $< 2.22 \times 10^{-16}$; Table 1).
\item
  Nonallelic homologous recombination should either generate a 1:1 ratio of duplications:deletions OR more deletions than duplications.
\item
There is deletion bias.
\item
Suggests that a large proportion of deletions are removed from the population by purifying selection. In this context, the dearth of deletions observed in our data, as well as the smaller size of the deleted variants, suggest that they are far more deleterious than duplications and that larger mutations are more deleterious than smaller ones.
\item
Deletions == More Deleterious. Larger Mutations More Deleterious than small ones.
\end{itemize}
\hypertarget{paragraph-6}{%
\subsubsection{Paragraph 6:}\label{paragraph-6}}
\begin{itemize}
\tightlist
\item
Every region of the genome harbors at least low levels of CNPs. The median distance between two events was 12.6 kb (fig. S5).
\item
Pericentromeric regions were enriched in duplications, though not in deletions (fig. S5)
\item
Pericentromeric regions are also characterized by extremely low rates of crossing-over, leading to a lower effective population size as a result of linkage (14). Therefore, the higher density of CNPs observed in these regions may be a consequence of the reduced effectiveness of selection in purging deleterious mutations (14). Alternatively, the mutation rate may simply be higher in such regions (15).
\item
My favorite paragraph so far
\item
\emph{Question}: Didn't they design the tiling array? So the median distance/density is biased by themselves?
\item
  \emph{Question}: Could positive selection/interference also cause this high density/enrichment of duplications?
\end{itemize}
\hypertarget{paragraph-7}{%
\subsubsection{Paragraph 7:}\label{paragraph-7}}
\begin{itemize}
\tightlist
\item
More duplications in general in all categories of the genome
\item
  Deletions relatively depleted in coding regions
\end{itemize}
\hypertarget{paragraph-8}{%
\subsubsection{Paragraph 8:}\label{paragraph-8}}
\begin{itemize}
\tightlist
\item
8\% of genes partially duplicated
\item
2\% of genes partially deleted
\item
Transposable elements and CNPs are arranged similarly with respect to the ends of genes
\end{itemize}
\hypertarget{paragraph-9}{%
\subsubsection{Paragraph 9:}\label{paragraph-9}}
\begin{itemize}
\tightlist
\item
Estimated demographic parameters, then used the parameters to not reject the standard neutral model, then estimated selection coefficients
\end{itemize}
\hypertarget{paragraph-10}{%
\subsubsection{Paragraph 10:}\label{paragraph-10}}
\begin{itemize}
\tightlist
\item
Notably, selection differentially influenced CNP evolution among different genomic features as well as among different chromosomes. We compared the patterns of variation between the different classes of variants: both correcting for bias and error and with no corrections.
\item
Intronic is the most deleterious (splicing?)
\end{itemize}
\hypertarget{paragraph-11}{%
\subsubsection{Paragraph 11:}\label{paragraph-11}}
\begin{itemize}
\tightlist
\item
Fail to reject neutrality for complete gene duplications
\end{itemize}
\hypertarget{paragraph-12}{%
\subsubsection{Paragraph 12:}\label{paragraph-12}}
\begin{itemize}
\tightlist
\item
We also found that the autosomes have higher selection coefficients (less deleterious) than the X chromosome (Fig. 2). This observation is compatible with the following models:
\begin{itemize}
\item
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Duplicate mutations on the X chromosome are more deleterious than those on autosomes (X-linked genes may be more sensitive to changes in dosage)\\
\end{enumerate}
\item
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Duplicate polymorphisms tend to be slightly deleterious and recessive
\end{enumerate}
\end{itemize}
\end{itemize}
\hypertarget{paragraph-13}{%
\subsubsection{Paragraph 13:}\label{paragraph-13}}
\begin{itemize}
\tightlist
\item
  Genes overlapping toxin responses and known to be under positive selection because of increased rates of gene expression
\end{itemize}
\hypertarget{paragraph-14}{%
\subsubsection{Paragraph 14:}\label{paragraph-14}}
\begin{itemize}
\tightlist
\item
  The distribution of CNPs is shaped by natural selection
\end{itemize}
\bibliography{book.bib,packages.bib}
\end{document}
% Document Type: LaTeX
% Master File: ps7.tex
\input ../6001mac
\def\fbox#1{%
\vtop{\vbox{\hrule%
\hbox{\vrule\kern3pt%
\vtop{\vbox{\kern3pt#1}\kern3pt}%
\kern3pt\vrule}}%
\hrule}}
\begin{document}
\psetheader{Sample Problem Set}{Simulation and Concurrency}
\medskip
The programming assignment for this week explores two ideas: the
simulation of a world in which objects are characterized by
collections of state variables and the characteristics of systems
involving concurrency. These ideas are presented in the context of a
market game. In order not to waste too much of your own time, it is
important to study the system and plan your work before coming to the
lab.
This problem set begins by describing the overall structure of the
simulation. The exercises will help you to master the ideas involved.
\begin{center}
{\bf How England lost her Barings } \\
or \\
an abject Leeson for the banking community
\end{center}
Earlier this year the English banking community was stunned by the
failure of the 233-year-old Baring Brothers investment banking
company. A 28-year-old trader in Singapore, Nick Leeson, lost over
\$1G in a rather short time. Mr. Leeson, who was originally involved
in arbitrage trading on the differences between the prices of the
Nikkei-225 average in the Osaka and Singapore stock exchanges incurred
massive losses from an extremely leveraged position in the Nikkei
average.
\section{Markets}
In our simplification, we model a market, such as the Osaka market in
Nikkei-225 derivatives or the Singapore market in Nikkei-225
derivatives, as a message acceptor with internal state variables, {\tt
price} and {\tt pending-orders}. The message acceptor can handle
requests to get the current price, to update the price, to accept an
order, and to process pending orders. We make these markets with a
market-constructor procedure as follows:
\beginlisp
;;; The initial price
(define nikkei-fundamental 16680.0)
\null
(define Osaka
(make-market "Osaka:Nikkei-225" nikkei-fundamental))
\null
(define singapore
(make-market "Singapore:Nikkei-225" nikkei-fundamental))
\endlisp
There are several messages accepted by a market. For example, one may
obtain the current price on the Singapore market as follows:
\beginlisp
(singapore 'get-price)
;;; Value: 16673.23
\endlisp
Traders interact with markets. A trader is modeled by an object that
holds two kinds of assets: a monetary balance and a number of
contracts. In this simulation there is only one kind of contract: A
Nikkei-225 derivative contract (whatever that is! However, we may
watch Mr. Leeson lose money with or without knowing what these
contracts are about.)
The message {\tt get-price} will be used by traders to obtain quotes
of the current market price of a contract. The trader bases his
decisions on this price.
Every so often, a trader may place a new order by sending a {\tt
new-order!} message to a market. An order is a procedure (of no
arguments) supplied by the trader to be executed by the market in the
near future. An order may modify the assets of the trader.
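For example, a toy order that only prints a message when the market
eventually executes it could be queued as follows (it touches no
trader's assets; it merely illustrates the protocol used by {\tt transact}
below):
\beginlisp
;; Queue a toy order on the Osaka market. It runs only when the market
;; later receives an 'execute-an-order message from the system.
((Osaka 'new-order!)
 (lambda () (display "an order was executed on Osaka")))
\endlisp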
Our simulation system for the interaction of traders and markets sends
certain system messages to the objects to model the flow of time.
Thus, every so often a market receives, from the system, a message to
change the price or to process orders. The markets are implemented as
follows:
\beginlisp
(define (make-market name initial-price)
(let ((price initial-price)
(price-serializer (make-serializer))
(pending-orders '())
(orders-serializer (make-serializer)))
(define (the-market m)
(cond ((eq? m 'new-price!)
(price-serializer
(lambda (update)
(set! price (update price)))))
((eq? m 'get-price) price)
((eq? m 'new-order!)
(orders-serializer
(lambda (new-order)
(set! pending-orders
(append pending-orders (list new-order))))))
((eq? m 'execute-an-order)
(((orders-serializer
(lambda ()
(if (not (null? pending-orders))
(let ((outstanding-order (car pending-orders)))
(set! pending-orders (cdr pending-orders))
outstanding-order)
(lambda () 'nothing-to-do)))))))
((eq? m 'get-name) name)
(else (error "Wrong message" m))))
the-market))
\endlisp
We can instantiate the general market to make a particular market, say
the Osaka market in Nikkei-225 derivatives:
\beginlisp
(define nikkei-fundamental 16680.0)
\null
(define Osaka
(make-market "Osaka:Nikkei-225" nikkei-fundamental))
\endlisp
Notice that in {\tt make-market} the {\tt price-serializer} is applied
to a procedure that takes a procedure {\tt update} and uses it to
compute the new price from the old one. The {\em critical region}
guarded by the serializer contains both the assignment to the
protected variable and a read access of that variable. The acceptance
of a new order is similarly serialized. Execution of an order is also
serialized with the {\tt orders-serializer}, but this is a much more
complicated situation.
\paragraph{Exercise 1:}
Why do we need two different serializers to implement a market? What
bad result could we expect if we made only one serializer and used it
to serialize all three of the guarded regions? Why must the {\tt
orders-serializer} be used for guarding two regions?
The message {\tt new-price!} will be issued to each market every so
often by a process that models random market forces. There are many
factors that influence the price of a market commodity, and we cannot
begin to model those factors. However, we can imagine that the
commodity price will take a random walk starting with its fundamental
value, and drifting. The procedure {\tt nikkei-update} implements
just such a strategy, whose details are probably not important, except
in the way that it updates the prices when called (see the listing).
The {\tt execute-an-order} message is used by the system to cause the
market to execute one of the pending orders. If there are orders
pending then the first is selected and executed, and removed from the
list of orders.
\paragraph{Exercise 2:}
The code for handling an {\tt
execute-an-order} message is quite complicated. Louis Reasoner
suggests that it could be simplified as follows:
\beginlisp
((eq? m 'execute-an-order)
((orders-serializer
(lambda ()
(if (not (null? pending-orders))
(begin ((car pending-orders))
(set! pending-orders (cdr pending-orders))))))))
\endlisp
Unfortunately, Mr. Reasoner's suggestion is (as usual) not completely
correct; it will work in the simple cases we have included in the
problem set, but not in general. A slightly better idea, which is
still not quite correct is:
\beginlisp
((eq? m 'execute-an-order)
(let ((current-order (lambda () 'nothing-to-do)))
(if (not (null? pending-orders))
((orders-serializer
(lambda ()
(begin (set! current-order (car pending-orders))
(set! pending-orders (cdr pending-orders)))))))
(current-order)))
\endlisp
Explain in no more than three short, clear sentences each, what is
wrong with Louis's idea, and why the second try is better but still
not quite right. Watch out -- this question is subtle -- the answer
is not obvious. Try to draw timing diagrams showing how these methods
may fail.
\bigskip
The code supplied defines a particular kind of trader, an arbitrager,
Nick Leeson, who tries to make money on the difference of the value of
a commodity on different exchanges. He buys on the low exchange and
sells on the high one, pocketing the difference. The idea works, if
the orders can be processed faster than the exchange moves. The
arbitrager is implemented as follows:
\beginlisp
(define (make-arbitrager name balance contracts authorization)
(let ((trader-serializer (make-serializer)))
(define (change-assets delta-money delta-contracts)
((trader-serializer
(lambda ()
(set! balance (+ balance delta-money))
(set! contracts (+ contracts delta-contracts))))))
(define (a<b low-place low-price high-place high-price)
(if (> (- high-price low-price) transaction-cost)
(let ((amount-to-gamble (min authorization balance)))
(let ((ncontracts
(round (/ amount-to-gamble (- high-price low-price)))))
(buy ncontracts low-place change-assets)
(sell ncontracts high-place change-assets)))))
(define (consider-a-trade)
(let ((nikkei-225-Osaka (Osaka 'get-price))
(nikkei-225-singapore (singapore 'get-price)))
(if (< nikkei-225-Osaka nikkei-225-singapore)
(a<b Osaka nikkei-225-Osaka
singapore nikkei-225-singapore)
(a<b singapore nikkei-225-singapore
Osaka nikkei-225-Osaka))))
(define (me message)
(cond ((eq? message 'name) name)
((eq? message 'balance) balance)
((eq? message 'contracts) contracts)
((eq? message 'consider-a-trade) (consider-a-trade))
(else
(error "Unknown message -- ARBITRAGER" message))))
me))
\endlisp
We can instantiate a particular trader as an instance of the
arbitrager:
\beginlisp
(define nick-leeson
(make-arbitrager "Nick Leeson" 1000000000. 0.0 10000.))
\endlisp
So Nick is represented as a message acceptor that answers to a few
messages. One may ask for his name, his monetary balance, his stock
of contracts, and one may poke him to consider a trade. The
simulation system will do this aperiodically, as part of the model of
the flow of time.
Traders buy and sell contracts at a market using the procedure {\tt
transact}. A trader gives the market permission to subtract from the
trader's monetary balance the cost of the contracts purchased and to
add to the trader's stash the contracts he purchased. A sell order is
just a buy of a negated number of contracts. The {\tt permission}
argument is just a procedure supplied by the trader that takes an
amount of money and a number of contracts. (In the {\tt arbitrager}
trader above it is just the procedure {\tt change-assets}.) {\tt
Permission} performs the action of modifying the trader's assets when
the trade is executed.
\beginlisp
(define (buy ncontracts market permission)
(transact ncontracts market permission))
\null
(define (sell ncontracts market permission)
(transact (- ncontracts) market permission))
\endlisp
\beginlisp
(define (transact ncontracts market permission)
((market 'new-order!)
(lambda ()
(permission (- (* ncontracts (market 'get-price)))
ncontracts))))
\endlisp
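For illustration, here is a toy permission procedure that merely reports
the changes it is asked to make, together with a call to {\tt buy} that
queues an order for ten contracts on the Osaka market. This is only a
sketch for experimenting at the Scheme prompt, not part of the
simulation:
\beginlisp
;; A toy permission procedure: report the requested changes instead of
;; updating any trader's assets.
(define (report-only delta-money delta-contracts)
  (newline)
  (display "Money change: ") (display delta-money)
  (display "  Contract change: ") (display delta-contracts))
\null
;; Queue an order to buy 10 contracts on the Osaka market; it is carried
;; out when the market processes its pending orders.
(buy 10 Osaka report-only)
\endlisp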
\paragraph{Exercise 3:}
What shared variables are protected by the
{\tt trader-serializer} in the {\tt make-arbitrager} definition?
Describe, in a few concise sentences, an example of a problem that is
prevented by this serializer.
In this simulated market world there are a few other autonomous
agents. There is a ticker for each market, which periodically prints
the current price of a contract on that market, and there is an
auditor, which aperiodically prints the assets of a trader.
These minor players are just the following procedures:
\beginlisp
(define (ticker market)
(newline)
(display (market 'get-name))
(display " ")
(display (market 'get-price)))
\endlisp
\beginlisp
(define (audit trader)
(newline)
(display (trader 'name))
(display " Balance: ")
(display (trader 'balance))
(display " Contracts: ")
(display (trader 'contracts)))
\endlisp
Finally, there is the system that we use to run our simulation.
It is implemented by the procedure {\tt start-world}, which you may
find in the listing attached. {\tt Start-world} uses a procedure {\tt
parallel-execute}, which starts up any number of independent
processes. We will not try to tell you how {\tt parallel-execute}
works --- it is not pretty. The system also provides a procedure {\tt
sleep-current-thread} that returns to its caller after waiting a
number of milliseconds indicated by its argument. You will have to
understand how to use this procedure, but you need not try to figure
out how it works either.
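For example, a process that wants to pause for about one second before
continuing would simply call
\beginlisp
(sleep-current-thread 1000)   ; wait roughly 1000 milliseconds
\endlisp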
\paragraph{Exercise 4:} If you run the system long enough, you will
see that the {\tt audit} procedure occasionally prints out anomalous
results. Sometimes, the balance/contracts for Nick are way out of
line, but they get corrected soon after that:
\beginlisp
Nick Leeson Balance: 1000098661.9071354 Contracts: 0.
Nick Leeson Balance: 1000135769.2921371 Contracts: 0.
Singapore:Nikkei-225 16723.26212008945
Tokyo:Nikkei-225 16719.39333592154
Singapore:Nikkei-225 16731.04299412824
Tokyo:Nikkei-225 16730.877484891316
Nick Leeson Balance: 1000179972.9031762 Contracts: -5207.
Nick Leeson Balance: 1145160434.4739444 Contracts: -8662.
Nick Leeson Balance: 1000235215.9529605 Contracts: 0.
Nick Leeson Balance: 1000584254.2787036 Contracts: 0.
...
Tokyo:Nikkei-225 16802.379854988434
Singapore:Nikkei-225 16805.91475593874
Tokyo:Nikkei-225 16803.939209184016
Singapore:Nikkei-225 16807.359095567597
Nick Leeson Balance: 922712662.5753176 Contracts: 0.
Tokyo:Nikkei-225 16805.338357625566
\endlisp
For tutorial, be prepared to explain, in a few simple sentences, what
is causing this problem and to describe what must be done to fix the
problem. Do {\em not} try to implement your changes.
\paragraph{Exercise 5:} Do exercise 3.42 on page 289 of the notes.
Is there any reason why our market simulation might have problems with
deadlock? If not, why not? If so, explain how it might happen.
\section{To do in the lab}
When you load the problem set the market system will be ready to
start. You may start it by executing {\tt (start-world)}. It will
produce a periodic time-sequence of market prices and aperiodic audits
of Mr. Leeson, the arbitrager. You may stop the system by executing
{\tt (stop-world)}. To restart the system execute {\tt (start-world)}
again. If you are into macho programming you can patch the code while
it is running (without stopping the system). This is commonly done by
``real programmers'' in debugging live operating systems.
\paragraph{Lab Exercise 1:}
Run the world for a bit. Does Mr. Leeson seem to lose money, gain
money, or break even on the average? The time constants for the way
the world runs are the numbers (of milliseconds) occurring in the
procedure {\tt start-world}. Also, Mr. Leeson's strategy depends on
the value of the constant {\tt transaction-cost}. Which of these
constants could you change to make arbitrage extremely profitable?
How do you think {\tt transaction-cost} interacts with the timing constants?
To support your argument, make a change and demonstrate the improved
profits.
\paragraph{Lab Exercise 2:}
Make another arbitrager, say Bob Citron, who recently lost about
\$1.7G of money invested by the taxpayers of Orange County, CA.
Install him and demonstrate a system where both Mr. Leeson and Mr.
Citron are executing trades on the Osaka and Singapore markets.
\paragraph{Lab Exercise 3:} You will probably see a case where the
parallel interleaving of the process threads fouls up the I/O, mixing
the characters that are output by a ticker and the auditor. Can you
explain this? Can you figure out a way to fix it? Write the code
required to fix this bug. Be prepared to argue to your tutor that
your code fixes exactly this bug and has no other consequences (such
as preventing two markets from processing orders simultaneously).
(Hint: You can use a serializer, carefully.)
\paragraph{Lab Exercise 4:}
Your job is to invent another kind of trader, such as one who
tries to predict the future value of the commodity from analysis of
its past behavior. You may try whatever strategy you think is
effective, but you may not cheat by changing the code of any other
component of the system or by diddling with the parameters of the
nikki-update, or by setting the price of a market. Prizes will be
awarded for the most interesting trader invented. Implement your
trader, install the trader as a process, and demonstrate that the
program works. Show output demonstrating that interesting trades are
being made.
\end{document}
\documentclass[a4paper,12pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{tabularx}
\setlength\parindent{0pt}
\title{}
\author{}
\date{}
\begin{document}
\thispagestyle{empty}
\section*{Statement of Contribution}
The undersigned declare that \textbf{Laurens R. Krol} ``has substantially contributed to the concept, content, or methodology'', as intended by the Doctoral Regulations of the Technische Universität Berlin as of 2014-02-05, of the work that has been published as: \\
\textbf{Krol, L. R., Haselager, P., \& Zander, T. O. (2020). Cognitive and affective probing: a tutorial and review of active learning for neuroadaptive technology. \emph{Journal of Neural Engineering, 17}(1), 012001.} \\
Krol has been primarily responsible for formulating and formalising the concept, reviewing the literature, and writing the paper. \\
\begin{tabularx}{\textwidth}{XXX}
\vspace{2cm} \hrule Laurens R. Krol & \vspace{2cm} \hrule Pim Haselager & \vspace{2cm} \hrule Thorsten O. Zander \\
\end{tabularx}
\end{document}
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage[margin=1in]{geometry}
\title{An inner product space of sampled fission sources}
\author{James Holloway and Jeremy Conlin}
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\begin{document}
\maketitle
\section{Source vectors}
We need a vector space that can contain all possible sampled fission sources. The purpose of this document is to define such a vector space. A vector space provides a notion of scalar multiplication and a notion of vector addition, with certain properties. The key properties are that addition is commutative and associative, there is a zero vector, for every vector there is an additive inverse that when summed with that vector will yield zero, and the scalar multiplication is associative and distributive over vector addition.
\subsection{Source vectors}
First, we define a source point as a location in space combined with a weight, which can be positive, negative or zero.
\begin{definition} A \emph{source point} $s$ is a pair $s = (w, \mathbf{x})$ where $w \in \mathbf{R}\backslash0$ is a non-zero real number called the \emph{weight}, and $\mathbf{x} \in \Gamma$ is a point in a subset of 3-space, $\Gamma \subset \mathbf{R}^3$. Note that the weight cannot be zero, but can be positive or negative. We denote by $w(s)$ the weight of a source point, $s$, and by $\mathbf{x}(s)$ the location of the source point.
\end{definition}
We will need a notion of equality for source points; two source points are equal if they are at the same location and have equal weight.
\begin{definition}
Two source points, $s_1, s_2$, are equal, written as $s_1 = s_2$ if and only if $w(s_1) = w(s_2)$ as real numbers and $\mathbf{x}(s_1) = \mathbf{x}(s_2)$ as points in $\Gamma$.
\end{definition}
For convenience of notation it will be useful to define multiplication of a source point $s$ by a non-zero real number $\alpha$ as
\begin{definition}
For $\alpha \in \mathbf{R}\backslash 0$ and source point $s$, multiplication $\alpha s$ is defined as producing the new source point $\alpha s = (\alpha w(s), \mathbf{x}(s))$ at the same location, but with weight scaled by $\alpha$.
\end{definition}
Note that there is no meaning to adding source points together. A notion of this sort will be the heart of defining a vector space whose elements are lists of source points. In addition, we will need a sense of scalar multiplication that includes multiplication by zero.
Now we define a collection, $S$, of source points that can represent a fission source as a finite collection of source points.
\begin{definition}
Let $N$ be a non-negative integer. A \emph{source} $S$ is a set of $N$ non-zero-weight source points, $S = \{s_1, s_2, \ldots, s_N\}$, $w(s_i) \ne 0$ $i = 1, 2, \ldots N$, with distinct locations $\mathbf{x}(s_i) \ne \mathbf{x}(s_j)$, $i \ne j$. $N$ is called the number of source points, and will also be written as $N(S)$.
\end{definition}
It is very important that $S$ is a set, and so we do not allow repeated source point locations $\mathbf{x}(s_i) \ne \mathbf{x}(s_j)$. Note that sets are un-ordered (by definition), so the order of the source points does not matter; $\{s_1, s_2\}$ is the same source as $\{s_2, s_1\}$. Note also that a source does not just have distinct source points, but rather has source points with distinct locations.
There is a very interesting source, namely the source with $0$ source points. There is only one such source, since it is simply the empty set. This special source is very important to defining a vector space of sources, and it is physically important too.
\begin{definition}
The unique source with no source points is called the \emph{zero source}, and will be denoted $0$.
\end{definition}
Let's discuss equality of any two sources $S_1, S_2$. This is simply the set equality imposed by our previous definition of equality of source points. For $S_1$ and $S_2$ to be equal they must:
\begin{enumerate}
\item Have the same number of source points, $N = N(S_1) = N(S_2)$
\item If $N > 0$, then for each source point $s_1 \in S_1$ there must exist a source point $s_2 \in S_2$ (and there can be only one because $S_1$ and $S_2$ are sets) such that $s_1 = s_2$.
\end{enumerate}
We are finally ready to define a vector space of sources.
Let $\mathcal{S}$ be the set of all sources (including sources of any number of source points from zero on up).
We can give $\mathcal{S}$ a vector space structure by defining scalar multiplication (over the real numbers) and by defining the addition of sources, with the appropriate properties. Let's define scalar multiplication first, basically as scaling the source weights by a scalar.
\begin{definition}
Let $S \in \mathcal{S}$, and $\alpha \in \mathbf{R}$ be a real number. If $S$ has no source points, or if $\alpha = 0$, then $\alpha S = 0$. Otherwise, with $M = N(S)$ the number of source points, there exist source points $s_i$, $i = 1, \ldots, M$ such that $S = \{s_1, \ldots, s_M\}$, and $\alpha S$ is defined to be the source $\alpha S = \{\alpha s_1, \ldots, \alpha s_M\}$.
\end{definition}
Note that $\alpha S$ is still a good source. It's either a zero source, or else a set of non-zero-weight source points all at distinct locations. Scalar multiplication is properly associative, $(\alpha \beta) S = \alpha (\beta S)$, and $\alpha = 1$ is the identity. Note also that multiplication by a non-zero scalar does not change the number of source points, but multiplication by zero does.
Next we must define vector addition, and in doing so we wish to capture the physical notion of adding together two sources.
\begin{definition}
Given any two sources $S_1, S_2 \in \mathcal{S}$, the sum $S_1 + S_2 \in \mathcal{S}$ is defined as the source consisting of all source points from $S_1$ and $S_2$ that are at distinct locations, and for every pair of source points $s_1 \in S_1$ and $s_2 \in S_2$ that share a common location, $\mathbf{x}(s_1) = \mathbf{x}(s_2)$, the sum $S_1 + S_2$ will contain only the single source point $(w(s_1) + w(s_2), \mathbf{x}(s_1))$ at the same location but with weight equal to the sum total weight of the two originals. If this combined weight is zero, there is no source point at that location, and that location is not included in the source points of the final sum vector.
\end{definition}
This definition is well posed; by construction $S_1 + S_2$ will contain a finite number of source points all of which are at distinct locations and none of which has zero weight. Note that the number of source points in $S_1 + S_2$ will be between 0 and $N(S_1) + N(S_2)$, inclusive; the sum will have fewer source points if there were points in $S_1$ and $S_2$ at common locations, and will have no source points if every point in $S_1$ has a partner of opposite weight in $S_2$.
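As a purely illustrative sketch of these operations, one convenient computer representation of a source is a mapping from locations to non-zero weights, so that the zero source is the empty mapping and source points whose weights cancel simply disappear from a sum. The dictionary representation and the function names below are our own choices, not part of the formal development.
\begin{verbatim}
# A source as a mapping from location (a 3-tuple) to a non-zero weight.
# The empty dict plays the role of the zero source.

def scale(alpha, source):
    """Scalar multiplication alpha * S; multiplication by zero gives the zero source."""
    if alpha == 0:
        return {}
    return {x: alpha * w for x, w in source.items()}

def add(s1, s2):
    """Vector addition: weights at a common location are summed;
    locations whose weights cancel are dropped."""
    total = dict(s1)
    for x, w in s2.items():
        combined = total.get(x, 0.0) + w
        if combined == 0.0:
            total.pop(x, None)
        else:
            total[x] = combined
    return total

s1 = {(0.0, 0.0, 0.0): 1.0, (1.0, 0.0, 0.0): 2.0}
s2 = {(0.0, 0.0, 0.0): -1.0}
assert add(s1, s2) == {(1.0, 0.0, 0.0): 2.0}   # cancelling weights disappear
assert scale(0, s1) == {}                      # the zero source
assert add(s1, scale(-1, s1)) == {}            # -1*S is the additive inverse
\end{verbatim}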
We want now to show that this definition of addition makes $\mathcal{S}$ into a vector space.
\begin{theorem}
With the scalar multiplication and addition just defined, $\mathcal{S}$ is a vector space.
\end{theorem}
\begin{proof}
Note first that the zero source is the additive identity element, $S + 0 = S$ because $0$ contains no source points. Further, note that $-1 S$ is an additive inverse because every source point from $-1 S$ is at the same location as a source point in $S$, but the sum of the weights of these paired points will be zero. Hence $-1 S + S = 0$.
Next, we must check the distributive property $\alpha (S_1 + S_2) = \alpha S_1 + \alpha S_2$. If $\alpha$ is zero this is trivially true, and if either $S_1$ or $S_2$ is zero, it's also trivially true. Otherwise, we must show that $\alpha(S_1 + S_2)$ and $\alpha S_1 + \alpha S_2$ contain the same source points. Scalar multiplication does not alter any source point locations, it just changes the source point weights; therefore for points in $S_1$ and $S_2$ at distinct locations $\alpha S_1 + \alpha S_2$ contains the same source points as $\alpha (S_1 + S_2)$. For points in $S_1$ and $S_2$ at the same source locations there are two cases to consider: 1) either they sum to zero weight and are removed, or 2) they do not. In the first case, corresponding points in $\alpha S_1 + \alpha S_2$ will also sum to zero (if $w(s_1) + w(s_2) = 0$ then $\alpha w(s_1) + \alpha w(s_2) = 0$), and in both $\alpha (S_1 + S_2)$ and $\alpha S_1 + \alpha S_2$ the point in the sum will be eliminated. In the second case, $\alpha(S_1 + S_2)$ will contain a point at the common location with weight $\alpha(w(s_1) + w(s_2)) = \alpha w(s_1) + \alpha w(s_2)$ and this is the same weight and location as a point in $\alpha S_1 + \alpha S_2$.
Now we must show that $(\alpha + \beta) S = \alpha S + \beta S$. If $\alpha + \beta = 0$ then both sides are the zero source, and the statement is true. Similarly if either $\alpha$ or $\beta$ is zero. So we assume now that $\alpha \ne0$, $\beta \ne 0$, and $\alpha + \beta \ne 0$ and note that $(\alpha + \beta)S$ will contain the same source locations as $S$, and that $S$, $\alpha S$, $\beta S$ and hence $\alpha S + \beta S$ will also all contain the same source locations. The weight for the source point $s$ in $S$ becomes $(\alpha + \beta) w(s)$ in $(\alpha + \beta) S$, and this same source point generates a point with weight $\alpha w(s) + \beta w(s) = (\alpha + \beta) w(s)$ in the sum $\alpha S + \beta S$. This establishes that $(\alpha + \beta) S = \alpha S + \beta S$ in all cases.
Next we must show the commutative property $S_1 + S_2 = S_2 + S_1$. The commutative property is obvious if either source is zero, so we now focus on the non-zero case. If $s \in S_1 + S_2$ then there exists a point $s'$ in either $S_1$ or $S_2$ or both, such that $\mathbf{x}(s) = \mathbf{x}(s')$. Suppose this point $s'$ appears only in $S_1$; then $s = s'$ and this point is also in $S_2 + S_1$. Similarly if the point $s'$ appears only in $S_2$. Finally if there is an $s' \in S_1$ and $s'' \in S_2$ such that $\mathbf{x}(s) = \mathbf{x}(s') = \mathbf{x}(s'')$ then $w(s) = w(s') + w(s'') = w(s'') + w(s')$ and this point also appears in $S_2 + S_1$. Thus, addition is commutative.
Finally, we must show that addition is associative, $(S_1 + S_2) + S_3 = S_1 + (S_2 + S_3)$. The thinking that leads to this is identical to that showing that addition is commutative. It does not matter in what order we collect source points into the sum, and if multiple source points share a common location it does not matter in what order we add up their weights.
\end{proof}
\subsection{Mapping sources to functions}
Let $S \in \mathcal{S}$ be a source vector. We can, in a non-unique way, map this vector to a function of position over $\Gamma$. Let $h(\mathbf{x}, \mathbf{y})$ be any non-negative function (a kernel) from $\Gamma \times \Gamma \to \mathbf{R}$ with the property
\begin{equation}
1 = \int_{\Gamma} h(\mathbf{x}, \mathbf{y}) \, d\mathbf{x} \,.
\end{equation}
Let $\{s_1, \ldots, s_M\}$ be the source points in $S$. Then
\begin{equation}
Q(S, \mathbf{x}) = \sum_{i=1}^M w(s_i) h(\mathbf{x}, \mathbf{x}(s_i))
\end{equation}
is a physical representation of the source as a function of space. Normally we would also want the function $h$ to be zero outside of $\Gamma$ (so for example, there is no source outside $\Gamma$). Obviously we define $Q(0, \mathbf{x})$ as the zero function. Most importantly,
\begin{equation}
Q(\alpha S, \mathbf{x}) = \sum_{i=1}^M \alpha w(s_i) h(\mathbf{x}, \mathbf{x}(s_i)) = \alpha Q(S, \mathbf{x})
\end{equation}
so the mapping from $\mathcal{S}$ to the space of functions is linear over scalar multiplication, and indeed, scalar multiplication in $\mathcal{S}$ maps to scalar multiplication of functions. (Exercise for the reader: show that the mapping is a vector space isomorphism, that is, the vector addition of vectors in $\mathcal{S}$ maps to the vector addition of functions. This is needed in order to have a well defined inner product below.)
This construction is fairly general. Note for example that if we want a histogram (in 1-D) then we can define a set of bins and
\begin{equation}
h(x,y) = \begin{cases}
1/\Delta & \text{if $x$ and $y$ are in the same bin}\\
0 & \text{otherwise}
\end{cases}
\end{equation}
where $\Delta$ is the width of the bin in which $x$ and $y$ lie.
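For illustration, the following sketch evaluates $Q(S, \mathbf{x})$ in one dimension with the uniform-bin histogram kernel just described; the bin width and the sample source are arbitrary choices made only for the example.
\begin{verbatim}
def histogram_kernel(x, y, delta):
    """h(x, y) = 1/delta if x and y lie in the same bin of width delta, else 0."""
    return 1.0 / delta if int(x // delta) == int(y // delta) else 0.0

def Q(source, x, delta):
    """Q(S, x) = sum_i w(s_i) h(x, x_i); `source` maps 1-D locations to weights."""
    return sum(w * histogram_kernel(x, xi, delta) for xi, w in source.items())

src = {0.1: 1.0, 0.2: 1.0, 0.7: -1.0}
print(Q(src, 0.15, delta=0.5))   # 4.0: two unit weights in the first bin, 2 / 0.5
\end{verbatim}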
Note: It might have been easier to start here and work out the properties needed to add vectors in $\mathcal{S}$ in order to create this vector space isomorphism.
\subsection{An inner product space}
Finally, we want to discuss equipping $\mathcal{S}$ with an inner product; with this in hand $\mathcal{S}$ becomes an inner product space and we can define a norm and orthogonality. We can also carry out the Arnoldi process, although we should note that $\mathcal{S}$ will be an infinite dimensional inner product space (a Hilbert space), and we don't know much about Arnoldi for infinite dimensional problems.
There is no unique way to attach an inner product to $\mathcal{S}$, but a general approach is to map $\mathcal{S}$ to a space of functions, and use the natural $L^2$ inner product. So we pick a kernel function $h$ as in the previous section and define
\begin{equation}
\langle S_1, S_2 \rangle = \int_{\Gamma} Q(S_1, \mathbf{x}) Q(S_2, \mathbf{x}) \, d\mathbf{x}
\end{equation}
This inner product is symmetric, and linear in each argument because mapping $\mathcal{S}$ to the space of finite expansions in kernels is a vector space isomorphism.
The inner product is positive-definite
\begin{equation}
\langle S, S \rangle = \int_{\Gamma} Q(S, \mathbf{x}) Q(S, \mathbf{x}) \, d\mathbf{x} \geq 0
\end{equation}
with equality if and only if $S = 0$.
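For the uniform-bin histogram kernel of the previous section the integral reduces to a sum over bins: bin $b$ contributes $(W^{(1)}_b/\Delta)(W^{(2)}_b/\Delta)\Delta = W^{(1)}_b W^{(2)}_b/\Delta$, where $W^{(i)}_b$ is the total weight of $S_i$ in bin $b$. The following sketch (again with names of our own choosing) computes the inner product this way in one dimension.
\begin{verbatim}
from collections import defaultdict

def bin_weights(source, delta):
    """Total weight of the source in each bin (bin index -> summed weight)."""
    totals = defaultdict(float)
    for x, w in source.items():
        totals[int(x // delta)] += w
    return totals

def inner_product(s1, s2, delta):
    """<S1, S2> for the uniform histogram kernel: sum over shared bins of
    (W1_b / delta) * (W2_b / delta) * delta = W1_b * W2_b / delta."""
    w1, w2 = bin_weights(s1, delta), bin_weights(s2, delta)
    return sum(w1[b] * w2[b] / delta for b in set(w1) & set(w2))

src = {0.1: 1.0, 0.2: 1.0, 0.7: -1.0}
print(inner_product(src, src, delta=0.5))   # (2**2 + (-1)**2) / 0.5 = 10.0
\end{verbatim}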
\section{Practical issues}
Because $\mathcal{S}$ is an inner product space, all the steps in Arnoldi can be done by working on source vectors $S \in \mathcal{S}$. Doing this in practice is problematic because the vectors will contain more and more source points as we orthogonalize. However, maybe we do not need to explicitly construct vectors of the form $S_1 + \beta S_2$. Rather, we simply need to keep track of $S_1$, $\beta$, and $S_2$. A simple data structure can keep this information. Then we need a way to sample from the vector $S_1 + \beta S_2$ without explicitly constructing it. Is there a way to do this?
This is important, because explicitly adding two vectors $S_1$ and $S_2$ could be a very expensive process: to accomplish it we must find any points in the two vectors that share a single source location (of course the probability of this is so small we might neglect it).
\end{document}
\section{Syntax and Semantics}\label{sec:abstractrefinements:check}
Next, we present a core calculus \corelan that formalizes the notion
of abstract refinements. We start with the syntax (\S~\ref{sec:syntax}),
present the typing rules (\S~\ref{sec:abstractrefinements:typing}), show soundness
via a reduction to contract calculi \cite{Knowles10,Greenberg11}
(\S~\ref{sec:soundness}), and inference via Liquid types (\S~\ref{sec:abstractrefinements:infer}).
\subsection{Syntax}\label{sec:syntax}
Figure~\ref{fig:abstractrefinements:syntax} summarizes the syntax of our core
calculus \corelan which is a polymorphic $\lambda$-calculus
extended with abstract refinements.
%
We write
$b$,
$\tref{b}{\reft}$ and
$\tpp{b}{\areft}$
to abbreviate
$\tpref{b}{\true}{\true}$,
$\tpref{b}{\true}{\reft}$, and
$\tpref{b}{\areft}{\true}$ respectively.
We say a type or schema is \emph{non-refined} if all the
refinements in it are $\true$. We write $\overline{z}$
to abbreviate a sequence $z_1 \ldots z_n$.
\mypara{Expressions}
\corelan\ expressions include the standard variables $x$,
primitive constants $c$, $\lambda$-abstraction $\efunt{x}{\tau}{e}$,
application $\eapp{e}{e}$, type abstraction $\etabs{\alpha}{e}$,
and type application $\etapp{e}{\tau}$. The parameter $\tau$ in
the type application is a \emph{refinement type}, as described shortly.
The two new additions to \corelan are the refinement abstraction
$\epabs{\rvar}{\tau}{e}$, which introduces a refinement variable
$\rvar$ (together with its type $\tau$), which can appear in refinements
inside $e$, and the corresponding refinement application $\epapp{e}{e}$.
%
%where the argument, is of the form $\efun{\bar{x}{e}}$ which is
%an abbreviation for $\efun{x_1 \ldots x_n}{e}$.
%which is of the form $\ptype{\bar{\tau}}$ which is an
%abbreviation for $\ptype{tau_1 \rightarrow \ldots \tau_n}$,
%where each $\tau_i$ is a simple (non-function) type.
\mypara{Refinements}
A \emph{concrete refinement} \reft is a boolean-valued expression
drawn from a strict subset of the language of expressions which
includes only terms that
(a)~neither diverge nor crash and
(b)~can be embedded into an SMT decidable refinement logic including
the theory of linear arithmetic and uninterpreted functions.
%
An \emph{abstract refinement} \areft is a conjunction of refinement
variable applications of the form $\rvapp{\pi}{e}$.
\mypara{Types and Schemas}
The basic types of \corelan include the base types $\tbint$ and $\tbbool$
and type variables $\alpha$. An \emph{abstract refinement type} $\tau$ is
either a basic type refined with an abstract and concrete refinements,
$\tpref{b}{\areft}{\reft}$, or
a dependent function type where the parameter $x$ can appear in the
refinements of the output type.
We include refinements for functions, as refined type variables can be
replaced by function types. However, typechecking ensures these refinements
are trivially true.
%
%type application
%Type application consists of a type constructor, its type arguments
%and its refinement arguments
%that are used to describe properties between its elements.
%
Finally, types can be quantified over refinement variables and type
variables to yield abstract refinement schemas.
\begin{figure}[t!]
\centering
\captionsetup{justification=centering}
$$
\begin{array}{rrcl}
\emphbf{Expressions} \quad
& e
& ::=
& x
\spmid c
\spmid \efunt{x}{\tau}{e}
\spmid \eapp{e}{e} \\
&&\spmid &\etabs{\alpha}{e}
\spmid \etapp{e}{\tau}
\spmid \epabs{\rvar}{\tau}{e}
\spmid \epapp{e}{e}
\\[0.05in]
\emphbf{Abstract Refinements} \quad
& \areft
& ::=
& \true
\spmid \areft \land \rvapp{\rvar}{e}
\\[0.05in]
\emphbf{Basic Types} \quad
& b
& ::=
& \tbint
\spmid \tbbool
\spmid \alpha
\\[0.05in]
\emphbf{Abstract Refinement Types} \quad
& \tau
& ::=
& \tpref{b}{\areft}{\reft}
\spmid \trfun{x}{\tau}{\tau}{\reft}
\\[0.05in]
\emphbf{Abstract Refinement Schemas} \quad
& \sigma
& ::=
& \tau
\spmid \ttabs{\alpha}{\sigma}
\spmid \tpabs{\rvar}{\tau}{\sigma}
\\[0.05in]
\end{array}
$$
\caption[Syntax of \corelan.]{\textbf{Syntax of Expressions, Refinements, Types and Schemas of \corelan.}}
\label{fig:abstractrefinements:syntax}
\end{figure}
\subsection{Static Semantics}\label{sec:abstractrefinements:typing}
\input{text/abstractrefinements/rules}
Next, we describe the static semantics of \corelan by describing the typing
judgments and derivation rules. Most of the rules are
standard~\cite{Ou2004,LiquidPLDI08,Knowles10,GordonTOPLAS2011}; we
discuss only those pertaining to abstract refinements.
%
\mypara{Judgments}
A type environment $\Gamma$ is a sequence of type bindings $x:\sigma$.
We use environments to define three kinds of typing judgments.
\mypara{Wellformedness judgments (\isWellFormed{\Gamma}{\sigma})}
state that a type schema $\sigma$ is well-formed under environment
$\Gamma$, that is, the refinements in $\sigma$ are boolean
expressions in the environment $\Gamma$.
%
The wellformedness rules check that the concrete and abstract
refinements are indeed $\tbbool$-valued expressions in the
appropriate environment.
The key rule is \wtBase, which checks, as usual, that the (concrete)
refinement $\reft$ is boolean and additionally, that the abstract
refinement $\areft$ applied to the value $\vref$ is also boolean.
This latter fact is established by \wtRVApp which checks that
each refinement variable application $\rvapp{\rvar}{e}\ \vref$
is also of type \tbbool in the given environment.
\mypara{Subtyping judgments}
(\isSubType{\Gamma}{\sigma_1}{\sigma_2})
state that the type schema $\sigma_1$ is a subtype of the type schema
$\sigma_2$ under environment $\Gamma$, that is, when the free variables
of $\sigma_1$ and $\sigma_2$
are bound to values described by $\Gamma$, the set of values described
by $\sigma_1$ is contained in the set of values described by $\sigma_2$.
%
The rules are standard except for \tsubVar, which encodes the base types'
abstract refinements $\areft_1$ and $\areft_2$ with conjunctions of
\emph{uninterpreted predicates}
$\inter{\areft_1\ \vref}$ and $\inter{\areft_2\ \vref}$ in the
refinement logic as follows:
\begin{align*}
\inter{\true\ \vref} & \defeq \true\\
\inter{(\areft \land \rvapp{\rvar}{e})\ \vref} & \defeq \inter{\areft\
\vref} \land \rvar(\inter{e_1},\ldots,\inter{e_n},\vref)
\end{align*}
where $\rvar(\overline{e})$ is a term in the refinement logic corresponding
to the application of the uninterpreted predicate symbol $\rvar$ to the
arguments $\overline{e}$.
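For instance, a single refinement variable application with one argument is encoded as
\[
\inter{(\true \land \rvapp{\rvar}{y})\ \vref} \;=\; \true \land \rvar(\inter{y}, \vref)
\]
so a subtyping obligation between base types carrying such abstract refinements reduces to a validity query over these uninterpreted terms.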
% $\text{Valid}(p)$ (\tsubBase) holds if an SMT determines the formula $p$
% is \emph{valid}~\cite{Nelson81}.
\mypara{Typing judgments}
(\hastype{\Gamma}{e}{\sigma}) state that
the expression $e$ has the type schema $\sigma$ under environment $\Gamma$,
that is, when the free variables in $e$ are bound to values described by
$\Gamma$, the expression $e$ will evaluate to a value described by $\sigma$.
%
The type checking rules are standard except for \tpgen and \tpinst, which
pertain to abstraction and instantiation of abstract refinements.
%
The rule \tpgen is the same as \tfunction: we simply check the body
$e$ in the environment extended with a binding for the refinement
variable $\rvar$.
%
The rule \tpinst checks that the concrete refinement is of the appropriate
(unrefined) type $\tau$, and then replaces all (abstract) applications of
$\rvar$ inside $\sigma$ with the appropriate (concrete) refinement $\reft'$
with the parameters $\overline{x}$ replaced with arguments at that application.
Formally, this is represented as $\rpinst{\sigma}{\rvar}{\efunbar{x:\tau}{\reft'}}$
which is $\sigma$ with each base type transformed as
\begin{align*}
\rpinst{\tpref{b}{\areft}{\reft}}{\rvar}{z}
& \defeq \tpref{b}{\areft''}{\reft \land \reft''} \\
\mbox{where} \quad (\areft'', \reft'')
& \defeq \rpapply{\areft}{\rvar}{z}{\true}{\true}
\intertext{$\mathsf{Apply}$ replaces each application of $\rvar$ in
$\areft$ with the corresponding conjunct in $\reft''$, as}
\rpapply{\true}{\cdot}{\cdot}{\areft'}{\reft'}
& \defeq (\areft', \reft') \\
\rpapply{\areft \wedge \rvapp{\rvar'}{e}}{\rvar}{z}{\areft'}{\reft'}
& \defeq \rpapply{\areft}{\rvar}{z}{\areft' \land \rvapp{\rvar'}{e}}{\reft'} \\
\rpapply{\areft \wedge \rvapp{\rvar}{e}}{\rvar}{\efunbar{x:\tau}{\reft''}}{\areft'}{\reft'}
& \defeq
\rpapply{\areft}{\rvar}{\efunbar{x:\tau}{\reft''}}{\areft'}{\reft' \wedge \SUBST{\reft''}{\overline{x}}{\overline{e},\vref}}
\end{align*}
In other words, the instantiation can be viewed as two symbolic
reduction steps: first replacing the refinement variable with the
concrete refinement, and then ``beta-reducing'' the concrete refinement
with the refinement variable's arguments. For example,
$$\rpinst{\tpref{\tbint}{\rvar\ y}{\vref > 10}}
{\rvar}
{\efunt{x_1}{\tau_1}{\efunt{x_2}{\tau_2}{x_1 < x_2}}}
\defeq \tref{\tbint}{\vref > 10 \land y < \vref}$$
%%rp(x:tx->t , \rvar, z) = x:tx' -> t'
%% where tx' = rp(tx, \rvar, z)
%% t' = rp(t , \rvar, z)
%%
%%rp(\a.t, \rvar, z) = \a.t'
%% where t' = rp(t, \rvar, z)
%%
%%rp(\p.t, \rvar, z) = \p.t'
%% where t' = rp(t, \rvar, z)
%%The other rule that handles abstract refinements is \tcase.
%%This rule initially checks that the expression to be analyzed
%%has a type application type $T = \tcon{\chi}{e_\chi}{\listOf{T}}{\listOf{e}}$.
%%Then for all cases, the case expression is typechecked in the initial environment,
%%extended with case binders \listOf{x_i} and the initial expression binder $x$.
%%The types of these binders are gained after unfolding data constructor's type \tc{K_i}.
%%The unfolding is done by replacing its type variables with actual type arguments
%%of $T$, ie. \listOf{T}
%%its abstract refinements with actual inferred refinements \listOf{e},
%%and its binders with actual binders \listOf{x_i}.
\subsection{Soundness}\label{sec:abstractrefinements:soundness}
As hinted by the discussion about refinement variable instantiation,
we can intuitively think of abstract refinement variables as
\emph{ghost} program variables whose values are boolean-valued
functions. Hence, abstract refinements are a special case of
higher-order contracts, that can be statically verified using
uninterpreted functions. (Since we focus on static checking,
we don't care about the issue of blame.)
We formalize this notion by translating \corelan programs into
the contract calculus \conlan of \cite{Greenberg11} and use this
translation to define the dynamic semantics and establish soundness.
\mypara{Translation}
We translate \corelan schemes $\sigma$ to \conlan schemes $\tx{\sigma}$
by translating abstract refinements into contracts,
and refinement abstraction into function types:
\[\arraycolsep=0.5pt
\begin{array}{rclcrcl}
\tx{\true\ \vref} & \defeq
& \true
& \quad \quad &
\tx{\tpabs{\rvar}{\tau}{\sigma}} & \defeq
& \tfun{\rvar}{\tx{\tau}}{\tx{\sigma}} \\
\tx{(\areft \land \rvapp{\rvar}{e})\ \vref} & \defeq
& \tx{\areft\ \vref} \land \eapp{\eapp{\rvar}{\overline{e}}}{\vref}
& \quad \quad &
\tx{\ttabs{\alpha}{\sigma}} & \defeq
& \ttabs{\alpha}{\tx{\sigma}} \\
\tx{\tpref{b}{\areft}{\reft}} & \defeq
& \tref{b}{\reft \land \tx{\areft\ \vref}}
& \quad \quad &
\tx{\tfun{x}{\tau_1}{\tau_2}} & \defeq
& \tfun{x}{\tx{\tau_1}}{\tx{\tau_2}}
%\tx{\trfun{x}{\tau_1}{\tau_2}{\reft}} \defeq
% & \trfun{x}{\tx{\tau_1}}{\tx{\tau_2}}{\tx{\reft}} \\
\end{array}\]
Similarly, we translate \corelan terms $e$ to \conlan
terms $\tx{e}$ by converting refinement abstraction and application
to $\lambda$-abstraction and application
\[\arraycolsep=0.5pt
\begin{array}{rclcrcl}
\tx{x} & \defeq & x & \quad \quad \quad & \tx{c} & \defeq & c \\
\tx{\efunt{x}{\tau}{e}} & \defeq & \efunt{x}{\tx{\tau}}{\tx{e}} & \quad & \tx{\eapp{e_1}{e_2}} & \defeq & \eapp{\tx{e_1}}{\tx{e_2}} \\
\tx{\etabs{\alpha}{e}} & \defeq & \etabs{\alpha}{\tx{e}} & \quad & \tx{\etapp{e}{\tau}} & \defeq & \eapp{\tx{e}}{\tx{\tau}} \\
\tx{\epabs{\rvar}{\tau}{e}} &\defeq & \efunt{\rvar}{\tx{\tau}}{\tx{e}} & \quad & \tx{\epapp{e_1}{e_2}} &\defeq & \eapp{\tx{e_1}}{\tx{e_2}}
\end{array}\]
%%\begin{align*}
%%\tx{\true\ \vref} \defeq
%% & \true\\
%%\tx{(\areft \land \rvapp{\rvar}{e})\ \vref} \defeq
%% & \tx{\areft\ \vref} \land \eapp{\eapp{\rvar}{\overline{e}}}{\vref}\\
%%\tx{\tpref{b}{\areft}{\reft}} \defeq
%% & \tref{b}{\reft \land \tx{\areft\ \vref}} \\
%%\tx{\tfun{x}{\tau_1}{\tau_2}} \defeq
%% & \tfun{x}{\tx{\tau_1}}{\tx{\tau_2}} \\
%%%\tx{\trfun{x}{\tau_1}{\tau_2}{\reft}} \defeq
%%% & \trfun{x}{\tx{\tau_1}}{\tx{\tau_2}}{\tx{\reft}} \\
%%\tx{\ttabs{\alpha}{\sigma}} \defeq
%% & \ttabs{\alpha}{\tx{\sigma}} \\
%%\tx{\tpabs{\rvar}{\tau}{\sigma}} \defeq
%% & \tfun{\rvar}{\tx{\tau}}{\tx{\sigma}}
%%\end{align*}
%%\tx{x} \defeq & x \\
%%\tx{c} \defeq & c \\
%%\tx{\efunt{x}{\tau}{e}} \defeq & \efunt{x}{\tx{\tau}}{\tx{e}} \\
%%\tx{\eapp{e_1}{e_2}} \defeq & \eapp{\tx{e_1}}{\tx{e_2}} \\
%%\tx{\etabs{\alpha}{e}} \defeq & \etabs{\alpha}{\tx{e}} \\
%%\tx{\etapp{e}{\tau}} \defeq & \eapp{\tx{e}}{\tx{\tau}} \\
%%\tx{\epabs{\rvar}{\tau}{e}} \defeq & \efunt{\rvar}{\tx{\tau}}{\tx{e}} \\
%%\tx{\epapp{e_1}{e_2}} \defeq & \eapp{\tx{e_1}}{\tx{e_2}}
\mypara{Translation Properties}
We can show by induction on the derivations that the
type derivation rules of \corelan \emph{conservatively approximate}
those of \conlan. Formally,
\begin{itemize}
\item If $\isWellFormed{\Gamma}{\tau}$ then $\isWellFormedH{\Gamma}{\tau}$,
\item If $\isSubType{\Gamma}{\tau_1}{\tau_2}$ then $\isSubTypeH{\Gamma}{\tau_1}{\tau_2}$,
\item If $\hastype{\Gamma}{e}{\tau}$ then
$\hastypeH{\tx{\Gamma}}{\tx{e}}{\tx{\tau}}$.
\end{itemize}
\mypara{Soundness} Thus rather than re-prove preservation and progress
for \corelan, we simply use the fact that the type derivations are
conservative to derive the following preservation and progress
corollaries from \cite{Greenberg11}:
%
\begin{itemize}
\item{\textbf{Preservation: }}
If $\hastype{\emptyset}{e}{\tau}$
and $\tx{e} \longrightarrow e'$
then $\hastypeH{\emptyset}{e'}{\tx{\tau}}$
\item{\textbf{Progress: }}
If $\hastype{\emptyset}{e}{\tau}$, then either
$\tx{e} \longrightarrow e'$ or
$\tx{e}$ is a value.
\end{itemize}
%
Note that, in a contract calculus like \conlan, subsumption is encoded
as an \emph{upcast}. However, if the subtyping relation can be statically
guaranteed (as is done by our conservative SMT-based subtyping)
then the upcast is equivalent to the identity function and can
be eliminated. Hence, \conlan terms $\tx{e}$ translated from well-typed
\corelan terms $e$ have no casts.
\subsection{Refinement Inference}\label{sec:abstractrefinements:infer}
Our design of abstract refinements makes it particularly easy to
perform type inference via Liquid typing, which is crucial for
making the system usable by eliminating the tedium of instantiating
refinement parameters all over the code. (With value-dependent
refinements, one cannot simply use, say, unification to determine
the appropriate instantiations, as is done for classical type systems).
We briefly recall how Liquid types work, and sketch how they are
extended to infer refinement instantiations.
\mypara{Liquid Types}
The Liquid Types method infers refinements in three steps.
%
First, we create refinement \emph{templates} for the unknown,
to-be-inferred refinement types. The \emph{shape} of the template
is determined by the underlying (non-refined) type it corresponds to,
which can be determined from the language's underlying (non-refined)
type system.
The template is just the shape refined with fresh refinement variables
$\kappa$ denoting the unknown refinements at each type position.
For example, from a type ${\tfun{x}{\tbint}{\tbint}}$ we create
the template ${\tfun{x}{\tref{\tbint}{\kappa_x}}{\tref{\tbint}{\kappa}}}$.
%
Second, we perform type checking using the templates (in place of the
unknown types). Each wellformedness check becomes a wellformedness
constraint over the templates, and hence over the individual $\kvar$,
constraining which variables can appear in $\kvar$.
Each subsumption check becomes a subtyping constraint
between the templates, which can be further simplified, via syntactic
subtyping rules, to a logical implication query between the variables
$\kappa$.
%
Third, we solve the resulting system of logical implication constraints
(which can be cyclic) via abstract interpretation --- in particular,
monomial predicate abstraction over a set of logical qualifiers
\cite{Houdini,LiquidPLDI08}. The solution is a map from $\kvar$ to
conjunctions of qualifiers, which, when plugged back into the templates,
yields the inferred refinement types.
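As a small illustration of the pipeline (the types and the qualifier here are chosen only for exposition): a subsumption check between the template $\tref{\tbint}{\kappa_x}$ and the type $\tref{\tbint}{0 \leq \vref}$ under an environment $\Gamma$ yields the subtyping constraint $\isSubType{\Gamma}{\tref{\tbint}{\kappa_x}}{\tref{\tbint}{0 \leq \vref}}$, which syntactic subtyping reduces to the implication $\kappa_x \Rightarrow 0 \leq \vref$ under the hypotheses of $\Gamma$; predicate abstraction then searches for a conjunction of qualifiers for $\kappa_x$ that validates every such implication simultaneously.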
\mypara{Inferring Refinement Instantiations}
The key to making abstract refinements practical is a means of
synthesizing the appropriate arguments $\reft'$ for each refinement
application $\epapp{e}{\reft'}$.
Note that for such applications, we can, from $e$, determine the
non-refined type of $\reft'$, which is of the form
${\tau_1 \rightarrow \ldots \rightarrow \tau_n \rightarrow \tbbool}$.
Thus, $\reft'$ has the template
${\efunt{x_1}{\tau_1}{\ldots \efunt{x_n}{\tau_n}{\kvar}}}$
where $\kvar$ is a fresh, unknown refinement variable that
must be solved to a boolean valued expression over $x_1,\ldots,x_n$.
Thus, we generate a \emph{wellformedness} constraint
${\isWellFormed{x_1:\tau_1, \ldots, x_n:\tau_n}{\kvar}}$
and carry out typechecking with template, which, as before, yields
implication constraints over $\kvar$, which can, as before, be
solved via predicate abstraction.
Finally, in each refinement template, we replace each $\kvar$ with its
solution $e_\kvar$ to get the inferred refinement instantiations.
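For example (the qualifier shown is purely illustrative): if a refinement argument has the non-refined type $\tbint \rightarrow \tbint \rightarrow \tbbool$, its template is $\efunt{x_1}{\tbint}{\efunt{x_2}{\tbint}{\kvar}}$ with the wellformedness constraint $\isWellFormed{x_1:\tbint,\ x_2:\tbint}{\kvar}$; if predicate abstraction solves $\kvar$ to $x_1 \leq x_2$, the inferred instantiation is
\[
\efunt{x_1}{\tbint}{\efunt{x_2}{\tbint}{x_1 \leq x_2}}.
\]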
%%To infer appropabstract refinements we used the liquid type variables
%%$\kappa$ with explicit arguments to avoid inferring function
%%expressions:
%%To infer an expression to replace a predicate of type
%%$\listOf{x_i:\tau_i; \tau}$ in an environment $\Gamma$,
%%we create a fresh liquid variable $\kappa$ on $\tau$ which is
%%wellformed in the environment $\Gamma$ extended with the bindings
%%\listOf{x_i:\tau_i}.
%%When $\kvar$ is solved via predicate abstraction to an expression
%%$e_\kvar$, we simply replace $\kvar$ with
%%the set as the inferred expression
%%$e =\efun{x_1}{\dots\efun{x_n}{\efun{v}{e_\kappa}}}$.
%add a refinement variable in place of expressions.
%\mypara{Constants}\jhala{constant-refinements and soundness guarantees?}
%%TODO:
%%[SKIP] add paragraph on constants to opsem
%%[SKIP] redefine \reft to r? (to emphasize not arbitrary expression?)
% When generating HTML, use:
% mkhowto --html --iconserver . --split 4 --link 2 xbel
%
% The catch:
% You have to be running the version of mkhowto from the Python
% 1.5.2 (post-alpha2) tree, since that's when I added bibtex
% support. ;-)
\documentclass{howto}
\usepackage{verbatim}
% define some local macros:
\newcommand{\element}[1]{\texttt{<#1>}}
\newcommand{\attribute}[1]{\texttt{#1}}
\newcommand{\nmtoken}[1]{\texttt{#1}}
\newcommand{\paramentity}[1]{\texttt{\char`\%#1;}}
\newenvironment{longexample}
{\begingroup\small}
{\endgroup}
\newcommand{\contributor}[2]{\term{#1 \textnormal{(\email{#2})}}}
\newenvironment{contributorlist}
{\begin{definitions}}
{\end{definitions}}
\title{The XML Bookmark Exchange Language}
\author{Fred L. Drake, Jr.}
\authoraddress{
PythonLabs at Digital Creations \\
E-mail: \email{[email protected]}
}
\date{\today} % XXX update before release!
\release{1.1}
\setshortversion{1.1}
\begin{document}
\maketitle
\begin{abstract}
\noindent
The XML Bookmark Exchange Language (XBEL) is a rich interchange
format for ``bookmark'' data as used by most Internet browsers. This
document describes the origin of the design, the requirements which
drove the design process, and the resulting document type.
\end{abstract}
\tableofcontents
\section{Introduction
\label{intro}}
The XML Bookmark Exchange Language, or XBEL, is an interchange
format for the hierarchical bookmark data used by current Internet
browsers. It is defined as an application of the Extensible Markup
Language, or XML \cite{w3c-xml}.
This section describes the origin of the effort which created the XML
Bookmark Exchange Language (XBEL), identifies the contributors, and
provides information on the availability of the public text of the
DTD and additional documentation on the applications which support
XBEL.
\subsection{Origins
\label{origins}}
The XML Bookmark Exchange Language is a product of the Python XML
Special Interest Group (XML-SIG). The initial intent of the XBEL
effort was to create a demonstration of XML facilities available
to Python programmers which would also be useful.
\subsection{Contributors
\label{contrib}}
The initial idea for XBEL was contributed by Mark Hammond. Mark
sent his idea to the Python XML-SIG mailing list. This was closely
followed by discussions and additional ideas by many of the list
participants. The following people contributed to the design of
the DTD and the related software (listed in alphabetical order by
last name):
\begin{contributorlist}
\contributor{Fred L. Drake, Jr.}{[email protected]}
Documentation. Design input on DTD and the storage of
metadata. Implemented direct support for XBEL in Grail.
\contributor{David Faure}{[email protected]}
Suggested adding the \attribute{icon} attribute to
\element{folder} and \element{bookmark} elements, and
\attribute{icon} to \element{folder}. Implemented XBEL in the
Konqueror file manager for the K Desktop Envionment (KDE).
\contributor{Stefane Fermigier}{[email protected]}
Modified implementation of software for Internet Explorer
Favorites conversion using his original Python DOM
implementation.
\contributor{Lars Marius Garshol}{[email protected]}
Extended the concept to cover all Internet browsers bookmarks
and came up with the name and acronym. Implemented support
for Navigator and Opera bookmark formats.
\contributor{Geir Ove Gr{\o}nmo}{[email protected]}
General input on XML and the desired level of complexity.
\contributor{Marc van Grootel}{[email protected]}
Design input on the DTD, storage of metadata, and comments on
the use of XBEL with architectural forms.
\contributor{Mark Hammond}{[email protected]}
Original concept and DTD for an archival storage format for
Internet Explorer ``Favorites.''
\contributor{Jack Jansen}{[email protected]}
General input on potential advanced applications.
\contributor{Andrew M. Kuchling}{[email protected]}
Implemented conversion software between XBEL and Lynx
bookmarks.
\contributor{Fredrik Lundh}{[email protected]}
Initial software implementation for Internet Explorer.
\contributor{Sean McGrath}{[email protected]}
General input on XML and document type definitions.
\contributor{Greg Stein}{[email protected]}
General input on XML Namespaces and moderator of complexity.
\contributor{Walter R. Underwood}{[email protected]}
General input on the use of XML character entities instead of
adding general entities, and discussion on date/time values in
XML.
\end{contributorlist}
\subsection{Availability
\label{availability}}
Information on XBEL, including the public text and this document,
is available on the Python XML-SIG Web site at
\url{http://pyxml.sourceforge.net/topics/xml/xbel/} \cite{xbel-home}.
Please refer to this Web resource for information on new versions,
DTD variants, and supporting software.
The public text for XBEL will be made available through a SOCAT
catalog available at:
\url{http://pyxml.sourceforge.net/topics/xml/dtds/catalog}. This
catalog may be used by including a DELEGATE entry in a catalog
already used by XML processing software. The DELEGATE entry
should be:
\begin{verbatim}
DELEGATE "+//IDN python.org" "http://www.python.org/topics/xml/dtds/catalog"
\end{verbatim}
\subsection{Formal Identification
\label{formal-ident}}
The XBEL DTD documented in this report has the Formal Public
Identifier:
\begin{verbatim}
+//IDN python.org//DTD XML Bookmark Exchange Language 1.1//EN//XML
\end{verbatim}
Valid instances of this document type may use the following document
type declaration:
\begin{verbatim}
<!DOCTYPE xbel
PUBLIC "+//IDN python.org//DTD XML Bookmark Exchange Language 1.1//EN//XML"
"http://www.python.org/topics/xml/dtds/xbel-1.1.dtd">
\end{verbatim}
\section{Requirements
\label{requirements}}
This section describes the functional capabilities which this
document type supports. There are three categories of
functionality supported: basic bookmark exchange between browsers,
data storage for advanced Internet resource management tools, and
simplicity in extending the DTD if needed for specific
applications.
\subsection{Relation to Browser Functionality
\label{req-browser}}
XBEL instances must be able to describe sufficient data to
represent the bookmarks of all major Internet browsers.
It must be possible to convert browser-specific bookmark data to
XBEL in a lossless manner, though specific conversions may remove
data for application-specific reasons. It is especially important
to consider privacy issues when exchanging bookmark data.
Conversion from XBEL to a browser-specific format may lose
information when the data originates from a browser supporting
bookmark features not available in all browsers. It is expected
that software implementing the conversion be able to warn the user
if conversion will cause the loss of information, as appropriate.
\subsection{Advanced Application Support
\label{req-applications}}
XBEL must be able to support interchange requirements for
applications not currently implemented as part of typical Internet
browsers, including (but not limited to!) application-specific
preference and history information which only pertains to specific
bookmarks, metadata information, and alternate sources or formats
for the documents.
It must be possible for applications to operate on subsets of the
information stored in an XBEL instance without affecting private
information stored by other applications. Application-specific
data stored in an XBEL instance may be simple text or may be
heavily structured.
\subsection{Extensibility
\label{req-extensibility}}
Some ability to extend the document type definition is required to
encourage reuse of the existing design. Due to the use of XML,
only a minimum of inherent flexibility is required, as new
document types may be formed using namespaces or by allowing the
use of well-formed but possibly invalid markup \cite{w3c-xml-names}.
\section{XBEL Document Structure
\label{document-structure}}
This section describes the structure of XBEL documents. This
includes information on each element and attribute defined in the
DTD. Some descriptions include references to the parameter entities
used to construct the DTD; these are described in Section
\ref{parameter-entities}, ``Use of Parameter Entities.''
\subsection{Date/time Attribute Values
\label{date-time}}
Several attributes defined in this document type require date/time
values stored as CDATA. For these attributes, the value must be
formatted as an ISO 8601:1988 value containing a date
\cite{iso8601,iso8601-houston,iso8601-kuhn}. A time value
should be supplied whenever the information is available to the
application which set the value. The format of the values is
restricted to the forms specified in the profile defined in
\emph{Date and Time Formats} \cite{w3c-datetime}. Attributes
which require this form of value are described below as having a
\dfn{date/time value} rather than a CDATA value.
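For illustration only (the attribute shown is the \attribute{added}
attribute described below), conforming values combine a complete date
with an optional time and time zone designator:
\begin{verbatim}
added="2000-01-31"
added="2000-01-31T09:30:00Z"
\end{verbatim}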
\subsection{Top-level Information
\label{top-level}}
This section describes the top-level element type of XBEL
documents.
\subsubsection{The \element{xbel} Element
\label{element-xbel}}
The \element{xbel} element defines the top-level data structure
stored in an XBEL instance. It may contain optional
\element{title}, \element{info}, and \element{desc} elements,
followed by any number of elements from
\paramentity{nodes.mix}. This is similar to the
\element{folder} element, but it may not be nested and carries
different attributes.
\paragraph*{Attributes}
The \element{xbel} element carries a \attribute{version}
attribute which has a fixed value that specifies the version
of the XBEL DTD. Other attributes indicate the similarity to
the \element{folder} element.
\begin{definitions}
\term{\attribute{version}, \emph{fixed}}
Fixed value which specifies the version of the DTD in use.
\term{\attribute{id}}
ID value to allow linking to this element; only the
\element{alias} element's \attribute{ref} attribute supports
a corresponding IDREF value.
\term{\attribute{added}}
Date/time value which can be used to record when the
collection of bookmarks was created.
\end{definitions}
\paragraph*{Processing Expectations}
The \element{xbel} element is in many ways similar to a
\element{folder} element, but may not be ``folded.''
Auxiliary information, such as an optional \element{title}
element, may be shown in a substantially different way than
for \element{folder} in a user interface.
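To make the structure concrete, the following fragment sketches a
minimal instance; the titles and URL are placeholders, and the
\element{folder} and \element{bookmark} elements are described later
in this report:
\begin{verbatim}
<xbel version="1.1">
  <title>Sample bookmark collection</title>
  <folder folded="no">
    <title>Python resources</title>
    <bookmark href="http://www.python.org/">
      <title>Python Language Website</title>
    </bookmark>
  </folder>
</xbel>
\end{verbatim}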
\subsection{Common Elements
\label{common-elements}}
Elements described in this section may occur in different contexts
within an XBEL instance, but share fundamental semantic
interpretation in each case.
\subsubsection{The \element{title} Element
\label{element-title}}
The \element{title} element is used to mark the title associated
with the immediately enclosing element. It is used for
the \element{xbel}, \element{folder}, and \element{bookmark}
elements. This element is always optional and may contain
only character data.
\paragraph*{Processing Expectations}
Software which presents bookmark information to the user in
any form should use the content of this element to identify
the resource to the user. Additional information may be
needed to make the identification unambiguous.
Applications may use the text of the \element{title}
during search operations.
\paragraph*{Rationale}
Many Internet resources are described by a short title, often
displayed by the bookmarking facilities. Storing the title
allows a significant improvement in user interface
responsiveness when compared to retrieving the resource to
reload the title. Title storage is the approach taken by all
browsers known to the XBEL designers.
\subsubsection{The \element{desc} Element
\label{element-desc}}
The \element{desc} element is used to store a human-readable
description of the enclosing element. For a \element{folder} or
the \element{xbel} element,
this may be used to more thoroughly explain the subject of the
bookmarks stored in the collection and why they may be
interesting. For a \element{bookmark}, a summary of the
resource pointed to by the bookmark may be more appropriate.
This element is always optional and may contain only character
data.
\paragraph*{Processing Expectations}
The content of this element may be displayed to a user
requesting more information on the folder or bookmark
containing the description. In the case of a
\element{bookmark}, this can be used before actually making a
request over the network to retrieve the resource.
Applications may use the text of the \element{desc}
during search operations.
\paragraph*{Rationale}
Many Internet browsers support simple annotation of bookmark
data with human readable text. This element is required to
support exchange of this data.
\subsubsection{The \element{info} Element
\label{element-info}}
The \element{info} element is used to store metadata related to
the immediately enclosing element. The intended use is for
\element{info} to store a series of \element{metadata} elements,
each of which ``belongs'' to some application. An
``application'' in this sense may be either a program, such as a
specific Internet browser, or a more general metadata scheme,
such as the Dublin Core \cite{dublin-core}.
The \element{info} element is always optional. If present, it
must contain one or more \element{metadata} elements.
\paragraph*{Processing Expectations}
Applications are expected to ignore \element{info} elements if
they are not able to deal with the contents of constituent
\element{metadata} elements. Whether or not \element{info}
elements should be ``passed through'' transparently or removed
depends on the purpose of the processing application, but an
effort should be made to retain the information whenever the
enclosing element is retained, even in a modified form.
\paragraph*{Rationale}
This element provides a clean way of isolating
application-specific metadata from more generally supported
constructs within the bookmark data.
\subsubsection{The \element{metadata} Element
\label{element-metadata}}
The \element{metadata} element is used as a container for all
auxiliary information related to a node which belongs to a
single metadata scheme. The specific contents of
\element{metadata} is highly dependent on the metadata scheme
which applies; XML namespaces should be used to identify
explicit markup used within the element.
The DTD for XBEL specifies the content model for
\element{metadata} as \code{EMPTY}, but any content should be
considered acceptable so long as the XBEL document is
well-formed. The use of \code{EMPTY} avoids making the DTD too
loose; applications which do not validate need not be
concerned. Derivative DTDs can define the parameter entity
\paramentity{metadata.mix} to be the appropriate content model
for the application.
\element{metadata} elements are always optional. Note that an
\element{info} element which contains no \element{metadata}
elements must be removed.
\paragraph*{Attributes}
\begin{definitions}
\term{\attribute{owner}, \emph{required}}
CDATA value specifies the application which ``owns'' the
content of the element. The value of this attribute
should be a URI which refers to a definition of the
application and content structure in either human- or
machine-processible form. It is not required that the URI
be addressable through the network.
\end{definitions}
It is expected that namespace attributes will be added to
the element to specify the markup defined for the content of
the \element{metadata} element.
\paragraph*{Processing Expectations}
Within an \element{info} element, each \element{metadata}
element is required to have a unique value for the
\attribute{owner} attribute. Programs which modify the
contents of \element{metadata} elements should ensure that
only one \element{metadata} exists for any \attribute{owner}
value normally modified by the application within affected
\element{info} elements. \element{metadata} elements for
other owners should remain unaffected.
Specific interpretation of \element{metadata} content is
highly dependent on both the \attribute{owner} and the
application, and is not otherwise within the scope of this
document.
\paragraph*{Rationale}
The \element{metadata} element is required to support owner
identification. It is entirely reasonable for multiple owners
of data to share a document type for their information, but
otherwise require separate processing. The Resource
Description Framework provides an example of an approach which
would require multiple applications to share a namespace
\cite{w3c-rdf-syntax,w3c-rdf-schema}. Some additional form of
ownership identification is required to ensure processors can
avoid destroying each other's data.
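For illustration, a hypothetical \element{info} element carrying
\element{metadata} for two different owners might look like the
following; the owner URIs are invented for this example, and any
namespace attributes would be added as described above.
\begin{verbatim}
<info>
  <metadata owner="http://example.org/browsers/acme-browser"/>
  <metadata owner="http://example.org/schemes/subject-keywords"/>
</info>
\end{verbatim}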
\subsection{Data Organization
\label{data-organization}}
The elements described in this section are used to impose
organization on a collection of \element{bookmark} nodes.
\element{folder} is used to support hierarchical organization and
\element{separator} is used to support non-hierarchical
organization.
\subsubsection{The \element{folder} Element
\label{element-folder}}
The \element{folder} element is the element used to support
hierarchical data organization. It is the only element type
which is allowed to nest within itself.
This element may contain optional \element{title},
\element{info} and \element{desc} elements. After this, any
number of elements from \paramentity{nodes.mix} are allowed.
\paragraph*{Attributes}
\begin{definitions}
\term{\attribute{id}}
ID value to allow linking to this element; only the
\element{alias} element's \attribute{ref} attribute supports
a corresponding IDREF value.
\term{\attribute{added}}
Date/time value which records when the folder was added to
the bookmark collection represented by the instance.
\term{\attribute{folded}}
Token which records whether the contents of the folder
should be displayed by default in a user interface. The
value may be \nmtoken{yes} or \nmtoken{no}.
\term{\attribute{icon}}
The value of this attribute is a name which identifies an
icon that the user agent can use to mark the folder for the
user. The value must be mapped to an actual icon; it is not
a URI which points to an image file, since the names should
(in theory) be usable in multiple user agents which have
differing capabilities for image display, and may require
different formats or sizes of icon. Mechanisms to do this
are outside the scope of this specification.
\term{\attribute{toolbar}}
Token which records whether the contents of the folder
should be used for the ``Personal Toolbar'' provided by some
user agents. The value may be \nmtoken{yes} or \nmtoken{no}.
\end{definitions}
\paragraph*{Processing Expectations}
User interfaces should display \element{folder} elements as
collapsing lists, allowing the user to display or hide the
contents of the element on demand. Appropriate behavior
outside of user interfaces is expected to be application
specific.
\paragraph*{Rationale}
The \element{folder} element may be used to represent
hierarchical relationships within a collection of bookmarks,
as deployed in current Internet browsers.
The \attribute{toolbar} attribute is needed to support the
``Personal Toolbar'' information from Netscape Navigator.
\subsubsection{The \element{separator} Element
\label{element-separator}}
The \element{separator} element can be used to separate
bookmarks within a collection in a non-hierarchical fashion. It
may be used within a \element{folder} or the \element{xbel}
element.
\paragraph*{Processing Expectations}
The presence of this element may be represented by displaying
a horizontal line or vertical whitespace in an interactive
user interface or printed representation.
\paragraph*{Rationale}
A simple separator is required to support the bookmark
structures of existing Internet browsers.
\subsection{Bookmark Data
\label{bookmark-data}}
Only one element type is used to encapsulate information specific
to an individual bookmark. No need for alternate elements has
been demonstrated.
\subsubsection{The \element{bookmark} Element
\label{element-bookmark}}
A \element{bookmark} element is used to store information about
a specific resource. This element may contain the optional
elements \element{title}, \element{info} and \element{desc}.
\paragraph*{Attributes}
The \element{bookmark} element carries more attributes than
other elements defined in XBEL. These attributes are used to
carry much of the common information maintained on bookmarks
by the major browsers.
\begin{definitions}
\term{\attribute{href}, \emph{required}}
URI which specifies the resource described by the
\element{bookmark} element.
\term{\attribute{id}}
ID value to allow linking to this element; only the
\element{alias} element's \attribute{ref} attribute supports
a corresponding IDREF value.
\term{\attribute{icon}}
This is identical to the \attribute{icon} attribute of the
\element{folder} element; refer to that description for
information.
\term{\attribute{added}}
Date/time value which indicates when the \element{bookmark}
element was added to the bookmark collection.
\term{\attribute{modified}}
Date/time value which records the time of the
last known change to the resource identified by the
\element{bookmark}.
\term{\attribute{visited}}
Date/time value which represents the time of the user's last
``visit'' to the resource. Note that the value for
\attribute{modified} may be more recent than the value for
\attribute{visited} if software is used that checks for
resources which have changed since the user last visited the
resource. This feature is increasingly common in browsers.
\end{definitions}
\paragraph*{Processing Expectations}
In a user interface, \element{bookmark} should typically be
represented by the contents of the \element{title} element, if
present. The representation of the bookmark should be
``hot,'' allowing traversal to the referenced resource by the
user. Additional information on the resource, such as the
description given in a \element{desc} element, should be
available to the user on demand.
Outside a user interface, processing may be too
application-specific to discuss here.
\paragraph*{Rationale}
The use of a single structured element type to represent
external resources simplifies processing while allowing a rich
set of information to be maintained on each resource.
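As an illustrative sketch (the URI, dates, and icon name are invented,
and the date format shown is only an example), a \element{bookmark}
using the attributes and optional children described above might look
like this:
\begin{verbatim}
<bookmark href="http://www.example.org/weekly/"
          added="1998-07-01" visited="1998-09-12"
          icon="document">
  <title>Example Weekly</title>
  <desc>A weekly news digest, used here only as an example.</desc>
</bookmark>
\end{verbatim}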
\subsection{Internal References
\label{internal-references}}
A single element is provided to support internal references to
other elements within an XBEL instance.
\subsubsection{The \element{alias} Element
\label{element-alias}}
\paragraph*{Attributes}
Only one attribute is needed for \element{alias}, and is
required to identify the link referent.
\begin{definitions}
\term{\attribute{ref}, \emph{required}}
IDREF value which refers to a \element{bookmark} or
\element{folder} element, or the document \element{xbel}
element.
\end{definitions}
\paragraph*{Processing Expectations}
Software which presents bookmarks in a user interface should
distinguish aliases from other bookmarks visually, but
otherwise allow examination of the referent transparently.
Netscape Navigator does this by presenting the bookmark title
in an italic font; the appropriate visual distinction is
likely to be dependent on other aspects of the user
interface.
Outside of user interface considerations, treatment of aliases
is application-specific. However, some guidance may prove
useful. When encountering an \element{alias}, an application
should only need to traverse the \element{alias} and process
the referent if that referent would not otherwise be
processed; otherwise, the \element{alias} may usually be
ignored. This should become an issue only when the
application is processing a portion of the bookmark hierarchy
rather than the complete tree.
\paragraph*{Rationale}
Netscape Navigator and Microsoft Internet Explorer bookmarks
can include ``aliases'' to other nodes in the hierarchical
structure. Navigator supports only aliases to bookmark nodes,
while Internet Explorer also supports aliases to folders.
Navigator's format simply adds the attribute
\attribute{aliasid} to nodes which are referred to by aliases,
and the attribute \attribute{aliasof} to the actual alias.
All other information is duplicated for each alias of the
primary bookmark entry. XBEL uses a distinct element and the
ID/IDREF mechanism provided by XML to avoid redundancy and
support validation.
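To illustrate how these constructs combine, a minimal hypothetical
fragment of an XBEL instance is shown below; the identifiers and URI
are invented for this example.
\begin{verbatim}
<folder id="news" folded="no">
  <title>News</title>
  <bookmark id="daily" href="http://www.example.org/daily/">
    <title>Example Daily</title>
  </bookmark>
  <separator/>
  <alias ref="daily"/>
</folder>
\end{verbatim}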
\section{DTD Structure
\label{dtd-structure}}
This section discusses how the DTD itself is organized. This is
mostly of interest to the maintainers of XBEL and any descendent
document types that may be defined in the future.
\subsection{Use of Parameter Entities
\label{parameter-entities}}
Limited use of parameter entities is made in the XBEL DTD. The
suffix-notation is adopted from the ``XMLspec'' DTD report
\cite{w3c-xmlspec}. Specifically, the \samp{.mix} suffix is used
for entities which define repeatable-or groups of elements, and
\samp{.att} is used for entities which define attributes.
\subsubsection{The \paramentity{metadata.mix} Entity
  \label{entity-metadata.mix}}
The \paramentity{metadata.mix} entity is the hook used to parameterize
the content model of the \element{metadata} element, which the XBEL
public text specifies as \code{EMPTY} (see Section
\ref{element-metadata}). Derivative document types may redefine this
entity to provide a content model appropriate for their application.
\subsubsection{The \paramentity{nodes.mix} Entity
\label{entity-nodes.mix}}
The \paramentity{nodes.mix} entity lists the element types which
may be used to form the nodes of the hierarchical data structure
described by an XBEL instance. This entity specifies a mixture of
\element{bookmark}, \element{folder}, \element{separator} and
\element{alias} elements.
\subsubsection{The \paramentity{node.att} Entity
\label{entity-node.att}}
This entity is used to define attributes for element types which
hold the real content of the bookmark data. It is used on the
\element{bookmark} and \element{folder} elements. It defines
the optional \attribute{added} and \attribute{id} attributes.
\subsubsection{The \paramentity{url.att} Entity
\label{entity-url.att}}
This entity defines the attributes which are available on
elements which refer to specific resources. In XBEL 1.0 and
1.1, this is only used on the \element{bookmark} element. It
defines a required \attribute{href} attribute and the optional
attributes \attribute{modified} and \attribute{visited}.
\subsection{Extending the DTD
\label{extending}}
Extensibility of XBEL relies on three foundations: XML namespaces
and the acceptability of well-formed instances, localized
parameter entities, and the simplicity of the DTD itself.
The primary expectation for DTD extensions is that new elements
and attributes will be introduced and defined using XML
namespaces. Though still in the stage of a working draft within
the W3C, namespaces offer the most flexible extension mechanism
available for XML-based markup languages used in wide-spread
deployment. Until validation requirements in the context of
namespaces are more clearly defined, XBEL instances using
namespaces can apply well-formedness rules as a vehicle for
partial validation.
More traditional document type extension uses parameter entities
reserved for localization. The XBEL public text provides three
such entities as ``hooks'' to allow local customization. For each
of the parameter entities described in Section
\ref{parameter-entities}, ``Use of Parameter Entities,'' a
\paramentity{local.\var{name}} variant is declared and used in the
definition of each of the entities described above. This is less
flexible than the namespace approach, but allows a new document
type to be created which can be used for validation with current
tools without having to create a new public text from scratch.
The third foundation for extensibility, the simplicity of the DTD,
can be effectively used only by taking a ``steal this code''
approach to reuse. XBEL is sufficiently simple that it can easily
be understood in its entirety, and a variant document type created
by crafting a new public text.
\subsection{General Entities
\label{general-entities}}
The XBEL DTD defines no general entities.
\paragraph*{Rationale}
Since XBEL is intended as an interchange format for software and
not as an authoring format, there is no need to support typical
entities used to enter special characters. Entities which do
not correspond to Unicode characters are too
application-specific to predict meaningfully
\cite{unicode20,unicode21}.
\appendix
\section{Public Text
\label{public-text}}
This section contains the entire public text of the XBEL DTD
corresponding to the Formal Public Identifier presented in Section
\ref{formal-ident}. No additional external entities are
referenced.
\begin{longexample}
\verbatiminput{../xbel.dtd}
\end{longexample}
\nocite{*}
\bibliographystyle{alpha}
\bibliography{xbel}
\end{document}
| {
"alphanum_fraction": 0.706609795,
"avg_line_length": 42.0490322581,
"ext": "tex",
"hexsha": "9999d8c83f739e48fbcc20709297a47e97e340ca",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "de15b3ad0fe095f6ce36b1c5ad7438046aae8d3d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jkloth/pyxml",
"max_forks_repo_path": "demo/xbel/doc/xbel.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "de15b3ad0fe095f6ce36b1c5ad7438046aae8d3d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jkloth/pyxml",
"max_issues_repo_path": "demo/xbel/doc/xbel.tex",
"max_line_length": 77,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "de15b3ad0fe095f6ce36b1c5ad7438046aae8d3d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jkloth/pyxml",
"max_stars_repo_path": "demo/xbel/doc/xbel.tex",
"max_stars_repo_stars_event_max_datetime": "2018-05-29T03:59:38.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-05-28T23:01:20.000Z",
"num_tokens": 7092,
"size": 32588
} |
%=========================================================================
% sec-opts-attr
%=========================================================================
\section{Experimenting with Attributes}
\label{sec-opts-attr}
% Reason for optimization
\subsection{Reason for Optimization}
Oftentimes the compiler will choose to be conservative with optimization
passes because there is not enough information from static analysis to
determine whether or not certain optimizations are safe. If the
programmer can give hints to the compiler about the intent of the code,
we can force the compiler to make aggressive optimizations to generate
higher performance binaries.
\smallskip
% Details of optimization
\subsection{Details of Optimization}
One significant compiler optimization is the inlining of
functions. Inlined functions are not explicitly called (i.e., no jumping
and linking in assembly), but appear to be embedded into the point from which
they are called. This eliminates the overhead of the function call,
which can be especially useful when there are many arguments which need
to be communicated through the stack in memory, instead allowing us to
keep arguments in faster, local registers. Although function inlining is
a common pass in compilers, sometimes we need to force the compiler to do
so by using the {\tt{always\_inline}} attribute. Such C attributes can be
prepended to any function declaration.
\smallskip
Another way to give hints to the compiler is the {\tt{restrict}} keyword
in C. Usually, the compiler tends to be conservative about pointer
optimizations if it cannot be sure whether pointers alias the same
locations in memory. However, if we know that certain pointers do not
alias, as in the input matrix pointers A/B/C in DGEMM, we can explicitly
mark such pointers as not aliasing in memory. By doing so, we can help
the compiler perform more aggressive pointer optimizations.
\smallskip
We can also use pragmas to mark loops that should be vectorized (i.e.,
{\tt{\#pragma simd}}), or give hints about when to issue prefetch requests
to saturate the memory bandwidth (i.e., {\tt{\#pragma prefetch A}}).
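As a minimal sketch (not taken from the actual submission code; the
kernel and variable names here are ours), the fragment below shows how
these hints might be attached to a simple micro-kernel. GCC/ICC
attribute syntax, the C99 {\tt{restrict}} qualifier, and Intel-style
pragmas are assumed.
\begin{verbatim}
/* Hypothetical micro-kernel illustrating the compiler hints above. */
__attribute__((always_inline))
static inline void dgemm_kernel(int n,
                                const double *restrict A,
                                const double *restrict B,
                                double *restrict C)
{
  for (int i = 0; i < n; ++i)
    for (int k = 0; k < n; ++k) {
      const double aik = A[i*n + k];
      #pragma simd     /* assert the j-loop is safe to vectorize */
      #pragma prefetch B
      for (int j = 0; j < n; ++j)
        C[i*n + j] += aik * B[k*n + j];
    }
}
\end{verbatim}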
\smallskip
% Results and analysis
\subsection{Results}
\input{fig-opts-attr-results}
Figure~\ref{fig-opts-attr-results} compares the performance of the blocked
DGEMM implementation with the three previous optimizations (AVX, loop
ordering, and copy optimization), with and without the additional compiler
hints described above.
\medskip
| {
"alphanum_fraction": 0.7543434343,
"avg_line_length": 42.6724137931,
"ext": "tex",
"hexsha": "f075ad4e412a9c19ac65d26d9a91b8ab92db8d4a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d672a084a6a2390714ede6ae8ae20003a07ee56b",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "jyk46/cs5220-hw1-report",
"max_forks_repo_path": "src/sec-opts-attr.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d672a084a6a2390714ede6ae8ae20003a07ee56b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "jyk46/cs5220-hw1-report",
"max_issues_repo_path": "src/sec-opts-attr.tex",
"max_line_length": 74,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d672a084a6a2390714ede6ae8ae20003a07ee56b",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "jyk46/cs5220-hw1-report",
"max_stars_repo_path": "src/sec-opts-attr.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 511,
"size": 2475
} |
\subsection{G{\"o}del’s completeness theorem}
\subsubsection{Completeness of first-order logic}
We previously showed that zero-order logic was complete. What about first-order logic?
G{\"o}del’s completeness theorem says that for first-order logic, a theory can include all tautologies (the first category).
If the completeness theorem is true and a formula is not in the theory, then the formula is either refutable or satisfiable under some, but not all, interpretations.
That is, either the theory will contain \(\theta \) or \(\neg \theta \), or \(\theta \) will be satisfiable in some but not all interpretations and neither \(\theta \) nor \(\neg \theta \) will be in the theory.
To prove this we look for a proof that every formula is either refutable or true under some structure. So for an arbitrary formula \(\theta \) we want to show it is either refutable or satisfiable under some interpretation.
\subsubsection{Part 1: Converting the form of the formula}
We begin by removing free variables and functions from the formula.
Note that if this is true, all valid formulae of the form below are provable:
\(\neg \theta \)
This means that there is no interpretation where the following is true:
\(\theta \)
Conversely, if \(\neg \theta \) is not in the theory, then \(\theta \) must be true under some interpretation.
That is, if all valid formulae are provable, then every formula is either refutable or satisfiable in some interpretation.
Reformulating the question:
This is the most basic form of the completeness theorem. We immediately restate it in a form more convenient for our purposes:
Theorem 2. Every formula \(\theta \) is either refutable or satisfiable in some structure.
``\(\theta \) is refutable'' means, by definition, ``\(\neg \theta \) is provable''.
\subsubsection{Decidability}
Given a formula, can we find out whether it can be derived from the axioms? We can follow a process for doing so which would inform us whether or not the formula is a theorem. Alternatively, the process could carry on forever.
If the process always terminates, the system is decidable: there is a finite procedure to determine whether the formula is in or out. If the process halts for true formulas, but can carry on forever for false formulas, the system is semi-decidable. If the process takes a long time, we do not know whether it is looping infinitely or approaching its halting point.
Intuitively, use of axioms can make an existing formula shorter or longer, so finding all short formulas can require going forwards and backwards an infinite number of times.
| {
"alphanum_fraction": 0.772913257,
"avg_line_length": 50.9166666667,
"ext": "tex",
"hexsha": "9f8569117981cd35ebc73c74f8db26b18c96a991",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/logic/godelCompleteness/01-01-firstCompleteness.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/logic/godelCompleteness/01-01-firstCompleteness.tex",
"max_line_length": 361,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/logic/godelCompleteness/01-01-firstCompleteness.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 551,
"size": 2444
} |
\def\tableHeader{\begin{tabular}{|c|c|p{5cm}|}\hline
\bf name & \bf type & \multicolumn{1}{c|}{\bf description} \\\hline}
\def\entry#1#2#3{{#1} & {\tt#2} & {#3} \\\hline}
\def\tableFooter{\hline\end{tabular}}
\section{{\tt mlp}, the metalua parser}
The metalua parser is built on top of \verb|gg|, and cannot be understood
without some knowledge of it. Basically, \verb|gg| allows one not only to
build parsers, but to build {\em extensible} parsers. Depending on a
parser's type (sequence, sequence set, list, expression\ldots),
different extension methods are available, which are documented in
the \verb|gg| reference. The current section will give the information
needed to extend Metalua syntax:
\begin{itemize}
\item what \verb|mlp| entries are accessible for extension;
\item what do they parse;
\item what is the underlying parser type (and therefore, what
extension methods are supported)
\end{itemize}
\vfill\pagebreak
\subsection{Parsing expressions}
\tableHeader
\entry{mlp.expr}{gg.expr}{Top-level expression parser, and the main
extension point for Metalua expression. Supports all of the methods
defined by {\tt gg.expr}.}
\entry{mlp.func\_val}{gg.sequence}{Read a function definition
from the arguments' opening parenthesis to the final {\tt end}, but
excluding the initial {\tt function} keyword, so that it can be used
both for anonymous functions, for {\tt function some\_name(...) end}
and for {\tt local function some\_name(...) end}.}
% \entry{mlp.func\_params\_content}{gg.list}{Read a potentially empty
% (``{\tt)}''- or ``{\tt|}''-terminated) list of function definition
% parameters, i.e. identifiers or ``{\tt ...}'' varargs. Surrounding
% parentheses are excluded. Don't get confused by parameters versus
% arguments: parameters are the variable names used in a function
% definition; arguments are the values passed in a function call.}
% \entry{mlp.func\_args\_content}{gg.list}{Read a potentially emtpy list
% of function call arguments. Surrounding parentheses are excluded.}
% \entry{mlp.func\_args}{gg.sequence\_set}{Read function arguments: a
% list of expressions between parenthses, or a literal table, or a
% literal string.}
%\entry{mlp.func\_params}{}{}
\entry{mlp.expr\_list}{}{}
%\entry{mlp.adt}{\rm custom function}{Read an algebraic datatype
% without its leading backquote.}
\entry{mlp.table\_content}{gg.list}{Read the content of a table,
excluding the surrounding braces}
\entry{mlp.table}{gg.sequence}{Read a literal table,
including the surrounding braces}
\entry{mlp.table\_field}{\rm custom function}{Read a table entry: {\tt
[foo]=bar}, {\tt foo=bar} or {\tt bar}.}
\entry{mlp.opt\_id}{\rm custom function}{Try to read an identifier, or
an identifier splice. On failure, returns false.}
\entry{mlp.id}{\rm custom function}{Read an identifier, or
an identifier splice. Cause an error if there is no identifier.}
\tableFooter
\vfill\pagebreak
\subsection{Parsing statements}
\tableHeader
\entry{mlp.block}{gg.list}{Read a sequence of statements, optionally
separated by semicolons. When introducing syntax extensions, it's
often necessary to add block terminators with {\tt
mlp.block.terminators:add().}}
\entry{mlp.for\_header}{\rm custom function}{Read a {\tt for} header,
from just after the ``{\tt for}'' to just before the ``{\tt do}''.}
\entry{mlp.stat}{gg.multisequence}{Read a single statement.}
\tableFooter
Actually, {\tt mlp.stat} is an extended version of a multisequence: it
supports easy addition of new assignment operators. It has a field {\tt
assignments}, whose keys are assignment keywords, and values are
assignment builders taking left-hand-side and right-hand-side as
parameters. For instance, C's ``+='' operator could be added as:
\begin{verbatim}
mlp.lexer:add "+="
mlp.stat.assignments["+="] = function (lhs, rhs)
assert(#lhs==1 and #rhs==1)
local a, b = lhs[1], rhs[1]
return +{stat: (-{a}) = -{a} + -{b} }
end
\end{verbatim}
\subsection{Other useful functions and variables}
\begin{itemize}
\item{\tt mlp.gensym()} generates a unique identifier. The uniqueness
is guaranteed, therefore this identifier cannot capture another
variable; it is useful to write hygienic\footnote{Hygienic macros
are macros which take care not to use names that might interfere
with user-provided names. The typical non-hygienic macro in C
is {\tt \#define SWAP( a, b) \{ int c=a; a=b; b=c; \}}: this macro
will misearbly fail if you ever call it with an argument named
{\tt c}. There are well-known techniques to automatically make a
macro hygienic. Without them, you'd have to generate a unique name
for the temporary variable, if you had a {\tt gensym()} operator
in C's preprocessor} macros.
\end{itemize}
| {
"alphanum_fraction": 0.7306242145,
"avg_line_length": 41.1551724138,
"ext": "tex",
"hexsha": "dcdf02d159123b4cb066e40e7234971be7c1fb35",
"lang": "TeX",
"max_forks_count": 23,
"max_forks_repo_forks_event_max_datetime": "2022-02-10T07:34:04.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-02-04T23:52:59.000Z",
"max_forks_repo_head_hexsha": "1fec8ed671542b24af7d36c890757b85797ed250",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "fab13n/metalua",
"max_forks_repo_path": "doc/manual/mlp-ref.tex",
"max_issues_count": 12,
"max_issues_repo_head_hexsha": "1fec8ed671542b24af7d36c890757b85797ed250",
"max_issues_repo_issues_event_max_datetime": "2021-03-31T14:04:55.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-03-02T03:30:49.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "fab13n/metalua",
"max_issues_repo_path": "doc/manual/mlp-ref.tex",
"max_line_length": 73,
"max_stars_count": 178,
"max_stars_repo_head_hexsha": "1fec8ed671542b24af7d36c890757b85797ed250",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "fab13n/metalua",
"max_stars_repo_path": "doc/manual/mlp-ref.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-01T19:48:32.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-22T12:47:44.000Z",
"num_tokens": 1306,
"size": 4774
} |
\documentclass[a4paper]{scrreprt}
%% Language and font encodings and page settings
\usepackage[english]{babel}
\usepackage[utf8x]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[a4paper,top=2cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=2cm]{geometry}
\usepackage{float}
%% Packages
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage[colorinlistoftodos]{todonotes}
\usepackage[colorlinks=true, allcolors=blue]{hyperref}
\graphicspath{{./img/}}
%% Title page settings
\title{Brexit Vs Good Friday}
\subtitle{"Defend the Good Friday Agreement"}
\author{Cian Gannon}
\titlehead{\centering\includegraphics[width=10cm]{header}}
%% Page settings
\pagestyle{headings}
%% Start of document
\begin{document}
%% Render title settings
\maketitle
%% Start abstract page
\begin{abstract}
%% Center everything on this page
\centering
%% Image of Gerry Adams
\includegraphics[width=2.7cm]{gerrylad}
%% Quick run down of the game concept
Brexit Vs. Good Friday is a satirical representation of the current Brexit debacle and Ireland's core involvement in it due to the Good Friday Agreement.
Brexit Vs. Good Friday is a top-down shooter that will expand the genre and also involve the most loved elements of other shooters.
%% Note in bottom left of abstract page
\begin{flushleft}
\noindent
\null\vfill
Game Design Document: \\
Brexit Vs Good Friday \\
Created by - Cian Gannon \\
Student ID - G00337022 \\
\today
\end{flushleft}
\end{abstract}
%% Render table of contents
\tableofcontents
%% Chapter 1
\chapter{Overview}
\section{Main Concept}
Brexit Vs. Good Friday is a satirical representation of Brexit and the hysteria surrounding it.
It's a top-down shooter where the user plays Gerry Adams, a politician who returns from retirement in order to save what many hold dear.
\section{Core Selling Points}
\subsection{Survival Elements}
\begin{description}
\item[$\bullet$] The player must dodge incoming fire.
\item[$\bullet$] The aim of the game is to protect the Good Friday Agreement during a countdown until the UK leaves the EU and, therefore, the single market.
If the player is hit, the Good Friday Agreement is lost and the player fails.
If the player survives the countdown, the Good Friday Agreement is upheld.
\end{description}
\subsection{Humor}
\begin{description}
\item[$\bullet$] The game is made to be humorous. Its take on Brexit, currently a core issue in the EU, is used to make players on both sides laugh.
\item[$\bullet$] The game is created to be topical, games that are topical tend to be a hit with players.
\end{description}
\subsection{Top Down Shooter}
\begin{description}
\item[$\bullet$] Top-down shooter which needs basic input so the game will be comfortable to play on mobile or desktop.
\item[$\bullet$] Top-down games have been around for generations of consoles and have evolved over time. Brexit Vs. Good Friday aims to take the general style and improve upon it.
\end{description}
%% Chapter 2
\chapter{References}
\section{Games Examined}
I analysed three games as part of my investigation as to what makes a shooter fun and what is unique about a top-down one.
\subsection{GTA}
\begin{figure}[H]
\centering
\includegraphics[width=0.70\textwidth]{gta1.jpg}
\caption{\label{fig:art} GTA 1}
\end{figure}
\subsubsection{Core Features}
\begin{description}
\item[$\bullet$] Single Player
\item[$\bullet$] Top-down Shooter
\item[$\bullet$] Open world
\item[$\bullet$] Action/Adventure
\item[$\bullet$] 2D
\end{description}
\subsubsection{Note}
Grand Theft Auto is a top-down action shooter developed by Rockstar Games, set in an open-world approximation of New York City.
It is a basic top-down game that revolutionized the genre by adding an open world.
What truly made Grand Theft Auto stand out from other games of the period is its easy-to-use controls.
The pick-up-and-play controls allowed users of all experience levels to get to grips with the game. \\\\
I will be adapting GTA's ease of control and fluid movement as part of my development.
\subsection{Doom}
\begin{figure}[H]
\centering
\includegraphics[width=0.70\textwidth]{doom.png}
\caption{\label{fig:art} Doom}
\end{figure}
\subsubsection{Core Features}
\begin{description}
\item[$\bullet$] Single Player
\item[$\bullet$] First Person Shooter
\item[$\bullet$] Linear/Semi Open World
\item[$\bullet$] Action
\item[$\bullet$] 'Fake' 3D/ 2.5D
\end{description}
\subsubsection{Note}
Doom is a first-person shooter developed in 1993.
Although my game is going to be top-down, I will take inspiration from the first-person shooter genre and adapt it for Brexit Vs. Good Friday.
Doom consists of a constant stream of enemies in a fast-paced environment that always keeps the player busy.
Doom has no story and keeps the player entertained with gameplay alone. \\\\
I will be adapting Doom's enemy variety and enemy frequency.
\subsection{Hotline Miami}
\begin{figure}[H]
\centering
\includegraphics[width=0.70\textwidth]{hotline-miami.jpg}
\caption{\label{fig:art} Hotline Miami}
\end{figure}
\subsubsection{Core Features}
\begin{description}
\item[$\bullet$] Single Player
\item[$\bullet$] Top-Down Shooter
\item[$\bullet$] Semi Open World
\item[$\bullet$] Action
\item[$\bullet$] 2D
\end{description}
\subsubsection{Note}
Hotline Miami is a top-down action shooter that throws more and more enemies at the player in order to increase the difficulty.
The player has a choice of weaponry to deal with the onslaught of ever-increasing enemies.
%% Chapter 3
\chapter{Specification}
\section{Target Group}
\begin{description}
\item[$\bullet$] The target group is a younger generation, 35 and under.
\item[$\bullet$] The target platform is mobile devices, specifically Windows phones.
\item[$\bullet$] Interested in the current Brexit negotiations and just want to get away from the seriousness of it all.
\item[$\bullet$] People who understand Irish politics and the humor around it.
\end{description}
\section{Genre}
Brexit Vs. Good Friday is a top-down satirical shooter based on the current Brexit debate.
\section{Art Style}
\begin{center}
\includegraphics[width=4.5cm]{celtic-knot}
\includegraphics[width=5cm]{celtic-knot-arrow}
\includegraphics[width=4.5cm]{celtic-knot}
\begin{figure}[H]
\centering
\includegraphics[width=4.5cm]{start-game}
\caption{\label{fig:art} Game Menu Concept Art}
\end{figure}
\end{center}
I'll be going for a traditional Celtic art style as this game will portray the Irish viewpoint of Brexit.
It will mix Celtic symbols for user interface 'fluff' and use Gaelic font type for menus.
\begin{figure}[H]
\begin{addmargin}[13.5em]{0em}
\includegraphics[width=1.5cm]{gerry-right}
\includegraphics[width=1.5cm]{arlene}
\includegraphics[width=1.5cm]{tess}
\end{addmargin}
\caption{\label{fig:art} Character Concept Art}
\end{figure}
\begin{flushleft}
The game will use satirical representations of politicians throughout the game as player characters and non-player characters (NPCs).
\end{flushleft}
\begin{flushleft}
The game will heavily lean on Irish humor surrounding our history and politics.
\end{flushleft}
%% Chapter 4
\chapter{Gameplay and Setting}
\begin{center}
\includegraphics[width=7cm]{top-down}
\end{center}
The game is centered around the player, who moves around the screen and defends against incoming enemies.
The player will have special abilities that they may use in addition to their main offensive weapon.
\section{Mood and Emotions}
The mood of the game is to be humorous.
The characters are over-the-top representations of real politicians, and the main menu will feature Irish in-jokes.
The whole premise of the game is to make the user laugh while also delivering compelling gameplay.
\section{Characters in the Game}
\begin{figure}[H]
\begin{addmargin}[13.5em]{0em}
\includegraphics[width=1.5cm]{gerry-right}
\includegraphics[width=1.5cm]{arlene}
\includegraphics[width=1.5cm]{tess}
\end{addmargin}
\caption{\label{fig:art} Character Concept Art}
\end{figure}
The game will feature real-world politicians in an over the top satirical outlook.
The game will feature Gerry Adams as the main protagonist/antagonist (depending on the user's outlook).
The game will use politicians from the UK as enemies that are trying to undermine the Good Friday Agreement.
\section{Main Objective}
Survive an onslaught of enemies while a counter is on screen.
The player will have to dodge oncoming fire while also firing back at the oncoming enemies.
\section{Core Mechanics}
\begin{description}
\item[$\bullet$] Tactical Maneuvering \\
The player will have to dodge incoming fire from enemies. While also trying to line up shots on enemies.
\item[$\bullet$] Resource Management \\
The player will have to manage their ammo and other abilities. The player will also have to manage their time as the clock counts down.
\item[$\bullet$] Survival \\
The player must survive hordes of enemies that will fire at them and try to eliminate the player. The player must survive while the clock runs down.
\end{description}
\section{Controls}
\subsection{Mobile}
\begin{center}
\includegraphics[width=7cm]{mobile}
\includegraphics[width=7cm]{mobile-controls}
\end{center}
The player will be controlled using the touchscreen on a mobile device. \\\\
The directional pad is used to control player movement.\\
Left and right buttons are used to fire and to use the secondary ability.
\subsection{PC}
\begin{center}
\includegraphics[width=7cm]{pc-gaming}
\includegraphics[width=7cm]{pc-controls}
\end{center}
The game will be developed on PC, so controls will be tested with keyboard and mouse and as such will work in the final release. \\\\
W, A, S, D will be used as directional buttons, as they are the standard movement keys in PC gaming. \\
Space will be the primary ability, which is to shoot at enemies. \\
E will be used to execute the player's secondary ability.
\subsection{Xbox One}
\begin{center}
\includegraphics[width=7cm]{Xbox-one}
\includegraphics[width=7cm]{pc-controls}
\end{center}
The game will be compatible with Xbox One by using a keyboard and mouse that are connected to the console. \\\\
Just as above with the PC version, the Xbox One version will use a keyboard layout. \\
W, A, S, D will be used as directional buttons, as they are the standard movement keys in PC gaming. \\
Space will be the primary ability, which is to shoot at enemies. \\
E will be used to execute the player's secondary ability.
%% Section 5
\chapter{Front End}
\section{Start Screen}
The start screen will feature traditional Irish/Celtic symbols and font.\\
The start screen will feature background images of Ireland and its politicians while ambient music plays waiting for the player to interact with the game.\\
Once the player interacts with the game they will transition to the main menu.
\section{Menus}
There will be two menu types in the game for the user to interact with:\\
the main menu and the pause menu.
\subsection{Main Menu}
\begin{center}
\includegraphics[width=4.5cm]{celtic-knot}
\includegraphics[width=5cm]{celtic-knot-arrow}
\includegraphics[width=4.5cm]{celtic-knot}
\begin{figure}[H]
\centering
\includegraphics[width=4.5cm]{start-game}
\caption{\label{fig:art} Game Menu Concept Art}
\end{figure}
\end{center}
The main menu is where the user will have multiple options.
\begin{description}
\item[$\bullet$] Start Game \\
Start Game option will start a new game from the start.
\item[$\bullet$] Chapter Select \\
Chapter Select option allows the user to select a chapter.
\item[$\bullet$] Options \\
Options allow the user to change game options such as audio volume.
\item[$\bullet$] Quit \\
Quit exits the game and closes it.
\end{description}
\subsection{Pause Menu}
\begin{figure}[H]
\centering
\includegraphics[width=0.70\textwidth]{pause}
\caption{\label{fig:art} Pause Menu}
\end{figure}
The pause menu will be available when the user pauses the game while they are playing. When paused the user will have an option to quit or resume.
Quitting will bring the user back to the menu and resuming will bring the user back to the game when they left off.
\section{End Screen}
The end screen will transition the user from one level to another using a 'cut-scene'. If the user finishes the game they will be greeted by a ``You won'' message.
Music will play to congratulate the player on their completion of the journey.
%% Chapter 6
\chapter{Technology}
\includegraphics[width=0.50\textwidth]{Unity}
The game will be developed in Unity for UWP (Universal Windows Platform).
\section{Target Systems}
\begin{center}
\includegraphics[width=12cm]{uwp}
\end{center}
UWP (Universal Windows Platform) devices will be the target devices. The game will use UWP as it is compatible with Windows phones, Xbox and PC.
\section{Development Systems/Tools}
A computer will be used as the development system.\\
Unity will be the game engine that will be used to develop the game.\\
\subsection{Art Development}
\begin{description}
\item[$\bullet$] For developing the logo and all character sprites I will use \href{https://www.photopea.com/}{Photopea}.
\item[$\bullet$] I will source images from \href{https://images.google.com/}{Google Image Search}.
\end{description}
\subsection{Programming/Development}
\begin{description}
\item[$\bullet$] \href{https://unity3d.com/}{Unity} will be the game developement environment I will use to develop the game.
\item[$\bullet$] \href{https://visualstudio.microsoft.com/}{Visual Studio} is the IDE (Integrated development environment) used to edit C\# files used by the game.
\end{description}
\subsection{Source Control}
\begin{description}
\item[$\bullet$] I will use \href{https://git-scm.com/}{git} as my source control.
\item[$\bullet$] I will use \href{https://github.com}{Github} to store my source code using \href{https://git-scm.com/}{git}.
\item[$\bullet$] I will use Git Bash which is included with a \href{https://git-scm.com/downloads}{git}.
\end{description}
\subsection{Documentation}
\begin{description}
\item[$\bullet$] A game design document (this document) available on the \href{https://github.com/cian2009/UnityGame}{project Github}. The game design document will use \href{https://www.latex-project.org/}{Latex} as the editor and then I will render it in a pdf format.
\item[$\bullet$] A read me will be available on github detailing what was used in development, how to open the source code in unity and other important information for other developers to pick up.
\end{description}
\end{document} | {
"alphanum_fraction": 0.7689237545,
"avg_line_length": 35.8814814815,
"ext": "tex",
"hexsha": "bd8f97db4766151a50e4f3dafe1f8526cb6c0161",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "45976533be6e586c8130154a46fc23ed96ca8e29",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "smcguire56/BrexitVsGoodFridayUnityGame",
"max_forks_repo_path": "Design Document/Design.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "45976533be6e586c8130154a46fc23ed96ca8e29",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "smcguire56/BrexitVsGoodFridayUnityGame",
"max_issues_repo_path": "Design Document/Design.tex",
"max_line_length": 274,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "45976533be6e586c8130154a46fc23ed96ca8e29",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "smcguire56/BrexitVsGoodFridayUnityGame",
"max_stars_repo_path": "Design Document/Design.tex",
"max_stars_repo_stars_event_max_datetime": "2018-10-16T12:29:20.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-10-16T12:29:20.000Z",
"num_tokens": 3681,
"size": 14532
} |
\chapter{Multimodal Hashing for Aligned Data}
\label{chap:smh}
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
\section{Introduction}
As of now, almost all existing \mbox{HFL} methods assume that the data are unimodal, meaning that both the queries and the candidates are of the same modality. They cannot be adapted easily for multimodal search which is often encountered in many multimedia retrieval, image analysis and data mining applications. Take crossmodal multimedia retrieval for example, using an image about a historic event as query, one may want to retrieve relevant text articles that can provide more detailed information about the event. Obviously, existing unimodal methods cannot be applied directly to multimodal similarity search because they assume that all the data are of the same modality. Designing \mbox{HFL} methods for multimodal data is thus a very worthwhile direction to pursue.
Recently, \mbox{Bronstein} \etal~\cite{bronstein2010cvpr} proposed a general framework which is referred to as \textit{multimodal hashing} (\mbox{MH}). As illustrated in Figure~\ref{smh:fig:framework} for the bimodal case, \mbox{MH} functions hash documents of different modalities into a common Hamming space so that fast similarity search can be performed. The key challenge of \mbox{MH} is to learn effective hash functions for multiple modalities efficiently from the provided information.
\begin{figure}[htb]
\centering
\epsfig{figure=fig/mh_illustration, width=0.7\textwidth}
\caption{Illustration of the multimodal hashing framework. Under this framework, similar documents (with bounding boxes of the same color) of different modalities are hashed to nearby points in the Hamming space whereas dissimilar documents (with bounding boxes of different colors) are hashed to points far apart.}
\label{smh:fig:framework}
\end{figure}
In this chapter, we study a simple case of multimodal hashing, in which the data from different modalities are aligned. For example, suppose there are two modalities, i.e., image and text, if each image has been aligned with one and only one text article and vice versa, we consider the data to be aligned. The alignment can be determined by applications at hand, e.g, an image and a text can be paired if they are referring to the same object. To learn \mbox{MH} functions for these data, we first give a basic model which learns hash functions through spectral analysis of the correlation between modalities. Besides the basic method which can only handle vectorial data and is linear, we provide a kernel extension to handle nonvectorial data and incorporate nonlinearity. We further incorporate \textit{Laplacian} regularization for situations in which side information is also available in the data.
%In the second method, the hash functions are learned by minimizing the multimodal reconstruction error. More specifically, given a collection of pairwise distance within each modality and pairwise relations across modalities, the model aims at minimizing the squared error between the pairwise reconstructive distance (normalized Hamming distance) computed based on the hash codes and the original distance as well as relations.
%The third method is based on latent factor models.
The rest of this chapter is organized as follows. In Section~\ref{smh:relatedwork}, we introduce some related work. We then present our model, \textit{spectral multimodal hashing}, in Section~\ref{smh:SMH}. Empirical studies conducted on real-world data sets are presented in Section~\ref{smh:exps}, before we conclude this chapter in Section~\ref{smh:conclusion}.
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
\section{Related Work}
\label{smh:relatedwork}
%\footnote{It is straightforward to extend the problem formulation to more than two modalities, but we focus on the bimodel case in this paper for notational simplicity.}
Under the framework of multimodal hashing, we first introduce a recent work called cross modal similarity sensitive hashing (\mbox{CMSSH})~\cite{bronstein2010cvpr}, which is, to the best of our knowledge, the first work on multimodal hashing.
Suppose we have two sets of $N$ data points each from two modalities (\aka feature spaces), $\mathcal{X} = \{\x_i\in\mathbb{R}^{D_x}\}_{i=1}^{N}$ and $ \mathcal{Y} = \{\y_i\in\mathbb{R}^{D_y}\}_{i=1}^{N}$, and the corresponding points $(\x_i,\y_i)$ are paired. For applications studied in this paper, a pair $(\x_i,\y_i)$ may represent a multimedia document where $\x_i$ is an image and $\y_i$ is the corresponding text article. For notational convenience, we denote the data sets as matrices $\X\in\mathbb{R}^{D_x\times N}$ and $\Y\in\mathbb{R}^{D_y\times N}$ where each column corresponds to a data point. Without loss
of generality, we assume that $\X,\Y$ have been normalized to have zero mean.
CMSSH works as follows: Given a set of similar pairs $\{(\x_i,\y_i)\}$ and a set of dissimilar pairs $\{(\x_j,\y_j)\}$, where $\x\in\mathcal{X}$ and $\y\in\mathcal{Y}$ belong to two different modalities, \mbox{CMSSH} constructs two mapping functions $\xi:\mathcal{X}\rightarrow\mathbb{H}^{M}$ and $\eta:\mathcal{Y}\rightarrow\mathbb{H}^{M}$ such that, with high probability, the Hamming distance is small for similar points and large for dissimilar points. Specifically, the $m$th bit of Hamming representation $\mathbb{H}^{M}$ for $\mathcal{X}$ and $\mathcal{Y}$ can be defined by two functions, $\xi_ {m}$ and $\eta_{m}$, which are parameterized by projections $p_{m}$ and $q_{m}$, respectively. In their paper, $\xi_{m}$ and $\eta_{m}$ are assumed to have the form $\xi_{m}(\x) = \sgn(\p_{m}^{T}\x+a_{m})$ and $\eta_{m}(\y) =\sgn(\q_{m}^{T}\y+b_{m})$, where $\p_{m}$ and $\q_{m}$ are $D_{x}$- and $D_{y}$-dimensional unit vectors, $a_{m}$ and $b_{m}$ are scalars.
A method based on boosting is used to learn the mapping functions. The algorithm is briefly described here. First, it initializes the weight of each point pair to $w_{m}(k) = 1/K$ where $K$ is the total number of point pairs. Then, for the ${m}$th bit, it selects $\xi_{m}$ and $\eta_{m}$ that maximize the following objective function:
$$\sum\nolimits_{k=1}\nolimits^{K}\left(w_{m}(k)s_k\sgn(\p_{m}^{T}\x_k+a_{m})\sgn(\q_{m}^{T}\y_k+b_{m})\right),\nonumber$$
where $s_k=1$ if the $k$th pair is a similar pair and $s_k = -1$ otherwise. Since maximizing the objective function above is difficult, the $\sgn(\cdot)$ operator and bias terms $a_{m},b_{m}$ are dropped to give the following approximate objective function for maximization:
$$
\sum\nolimits_{k=1}\nolimits^{K}w_{m}(k)s_k(\p_{m}^{T}\x_k)(\q_{m}^{T}\y_k) = \p_{m}^{T}\left(\sum\nolimits_{k=1}\nolimits^{K}w_{m}(k)s_k\x_k\y_k^{T}\right)\q_{m}.
$$
It is easy to see that the $\p_{m}$ and $\q_{m}$ that maximize the above objective are the largest left and right singular vectors of $\C = \sum_{k=1}^{K}w_{m}(k)s_k\x_k\y_k^{T}$. After obtaining $\p_{m}$ and $\q_{m}$, the algorithm updates the weights with the update rule $w_{m+1}(k) = w_{m}(k)\exp(-s_{k}\xi_{m}(\x)\eta_{m}(\y))$ and then proceeds to learn $\p_{m+1},\q_{m+1}$ for the $(m+1)$st bit.
Roughly speaking, \mbox{CMSSH} tries to map similar points to similar codes and dissimilar points to different codes by exploiting pairwise relations across different modalities. However, it ignores relational information within the same modality which could be very useful for hash function learning~\cite{weiss2008nips,he2010kdd}. Furthermore, \mbox{CMSSH} can only handle vectorial data which might not be available in many applications.
Recently, Kumar \etal extended spectral hashing~\cite{weiss2008nips} to the multi-view case, leading to a method called cross-view hashing (\mbox{CVH})~\cite{kumar2011ijcai}. The objective of \mbox{CVH} is to minimize the inter-view and intra-view Hamming distances for similar points and maximize those for dissimilar points. The optimization problem is relaxed to several generalized eigenvalue problems which can be solved by off-the-shelf methods.
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
\section{Spectral Multimodal Hashing}
\label{smh:SMH}
In this section, we first formulate the multimodal hashing problem as a discrete embedding problem and show that it can be approximately solved by spectral decomposition followed by thresholding, which is similar to spectral hashing~\cite{weiss2008nips} for unimodal data. But unlike spectral hashing, our focus is multimodal data, which are often encountered in a vast range of multimedia applications. Therefore, we call the proposed method \textit{spectral multimodal hashing} (\mbox{SMH}). In the following, we first give a basic \mbox{SMH} model in Section~\ref{smh:Ssmh:FORMULATION} and then present the other two models as extensions in Section~\ref{smh:Ssmh:EXT}.
%present a basic multimodal hashing model in Section~\ref{sec:ssmh:smh}. In Section~\ref{sec:ssmh:ksmh}, we introduce its kernel extension (\mbox{KSMH}) which enables us to deal with nonvectorial data as well as nonlinearity. At last, to exploit side information in the form of labels or pairwise similarity relations between points, we further extend \mbox{KSMH} by introducing two novel regularizers and propose a regularized \mbox{KSMH} (\mbox{RKSMH}) model in Section~\ref{sec:ssmh:rksmh}.
\subsection{Formulation}
\label{smh:Ssmh:FORMULATION}
Let there be two data matrices $\X\in\mathbb{R}^{D_x\times N}$ and $\Y\in\mathbb{R}^{D_y\times N}$ from different modalities and the corresponding points $(\x_i,\y_i)$ be paired. For applications studied in this paper, a pair $(\x_i,\y_i)$ may represent a multimedia document where $\x_i$ is an image and $\y_i$ is the corresponding text article. Without loss of generality, we assume that $\X,\Y$ have been normalized to have zero mean. We want to learn two sets of hash functions $\{h_{m}\}_{m=1}^{M}$ and $\{g_{m}\}_{m=1}^{M}$ to give $M$-bit binary codes of $\X$ and $\Y$, respectively.
In this paper, we use thresholded linear projection to define the hash functions. More specifically, the $m$th hash functions for both modalities are defined as follows:
\begin{align}
h_{m}(\x)=\sgn(\x^T\w_{x}^{(m)}+t_x),\ \ %\mbox{or}
g_{m}(\y)=\sgn(\y^T\w_{y}^{(m)}+t_y)\nonumber,
\end{align}
where $\w_{x}^{(m)}\in\mathbb{R}^{D_{x}},\w_{y}^{(m)}\in\mathbb{R}^{D_{y}}$ correspond to two projection directions. The corresponding Hamming bits can be obtained as
\begin{align}
\label{eqn:bit}
b_{m}(\x) = \frac{1+h_{m}(\x)}{2}, \ \
%\mbox{or}
b_{m}(\y)= \frac{1+g_{m}(\y)}{2}.
\end{align}
%$(1+h_{m}(\x))/2$ and $(1+g_{m}(\y))/2$.
Let the binary vectors $\h(\x) = (h_{1}(\x),\dots,h_{M}(\x))^T$ and $\g(\y) = (g_{1}(\y)\dots,g_{M}(\y))^T$ denote the projections of points $\x$ and $\y$.
The goal of our basic \mbox{SMH} model is to seek the projections that maximize the correlation between variables in the projected space (Hamming space). Intuitively, two hash codes are more correlated in the Hamming space if the corresponding points in the original space are similar and less correlated otherwise. Moreover, the hash codes should be balanced in the sense that each bit has equal chance of being 1 and $-1$ and the hash bits should be independent of each other~\cite{weiss2008nips}. As a result, \mbox{SMH} can be formulated as the following constrained optimization problem:
\begin{eqnarray}
\max_{\{\w_{x}^{(m)},\w_{y}^{(m)}\}_{m=1}^{M}}& \frac{\mathbb{E}(\h^{T}\g)}{\sqrt{\mathbb{E}(\h^{T}\h)\mathbb{E}(\g^{T}\g)}}\\
\subto& \sum_{i=1}^N h_{m}(\x_i) =0, \ m=1,\dots,M\nonumber\\
&\sum_{i=1}^N g_{m}(\y_i) =0, \ m=1,\dots,M\nonumber\\
&\sum_{i=1}^{N}h_{m}(\x_i)h_{n}(\x_i) =0, \ \forall m\neq n\nonumber\\
&\sum_{i=1}^{N}g_{m}(\y_i)g_{n}(\y_i) =0, \ \forall m\neq n,\nonumber
\label{eqn:cmh1}
\end{eqnarray}
where the expectation is taken with respect to the data distribution in the corresponding feature space. This problem is difficult to solve even without the constraints since the objective function is non-differentiable. Moreover, the balancing constraints make the problem NP-hard~\cite{weiss2008nips}.
Similar to~\cite{wang2010cvpr}, we relax the problem by dropping the $\sgn(\cdot)$ operator, the thresholds $ t_x $ and $ t_y $ and the balancing constraints. Instead, we implicitly enforce the constraints by preprocessing the data through mean-centering. Hence we arrive at the following optimization problem for one bit:\footnote{For notational simplicity, we omit the indices of the hash functions.}
\begin{eqnarray}
\label{eqn:cca1}
\max_{\w_{x},\w_{y}}& \mathbb{E}(\w_{x}^{T}\x \, \w_{y}^{T}\y)\\
\subto& \mathbb{E}((\w_{x}^{T}\x)^2)=1, \, \mathbb{E}((\w_{y}^{T}\y)^2)=1,\nonumber
\end{eqnarray}
which in fact is the standard form of \emph{canonical correlation analysis} (\mbox{CCA})~\cite{hotelling1936cca}. Approximating the expectation by empirical expectation, we rewrite Problem~(\ref{eqn:cca1}) as follows:
\begin{eqnarray}
\label{eqn:csmh:optprob}
\max_{\w_{x},\w_{y}}& \w_{x}\C_{xy}\w_{y}\\
\subto& \w_{x}\C_{xx}\w_{x}=1, \, \w_{y}\C_{yy}\w_{y}=1, \nonumber
\end{eqnarray}
where $\C_{xy} = \frac{1}{N}\X\Y^{T}$, $\C_{xx} = \frac{1}{N}\X\X^{T}$, and $\C_{yy} = \frac{1}{N}\Y\Y^{T}$.
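To see how the corresponding eigenproblem arises, we give a brief sketch of the standard Lagrangian derivation of \mbox{CCA}, written in our notation (the multipliers $\lambda_x$ and $\lambda_y$ are introduced only for this sketch). The Lagrangian of Problem~(\ref{eqn:csmh:optprob}) is
\begin{align}
\mathcal{L}(\w_{x},\w_{y},\lambda_x,\lambda_y) = \w_{x}^{T}\C_{xy}\w_{y} - \frac{\lambda_x}{2}\left(\w_{x}^{T}\C_{xx}\w_{x}-1\right) - \frac{\lambda_y}{2}\left(\w_{y}^{T}\C_{yy}\w_{y}-1\right).\nonumber
\end{align}
Setting the derivatives with respect to $\w_{x}$ and $\w_{y}$ to zero gives $\C_{xy}\w_{y} = \lambda_x\C_{xx}\w_{x}$ and $\C_{xy}^{T}\w_{x} = \lambda_y\C_{yy}\w_{y}$. Premultiplying these two conditions by $\w_{x}^{T}$ and $\w_{y}^{T}$, respectively, and using the two constraints shows that $\lambda_x = \lambda_y = \lambda$. Eliminating $\w_{y}$ from the first condition then leads to the eigenproblem stated next.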
This problem is equivalent to the following generalized eigenvalue problem:
\begin{align}
\label{eqn:csmh:wx}
\C_{xy}\C_{yy}^{-1}\C_{xy}^{T}\w_{x} = \lambda^2\C_{xx}\w_{x}.
\end{align}
The solution $\w_{x}$ is the eigenvector that corresponds to the largest eigenvalue. With the $\w_{x}$ thus computed, we can compute $\w_{y}$ as
\begin{align}
\label{eqn:csmh:wy}
\w_{y} = \frac{1}{\lambda}\C_{yy}^{-1}\C_{yx}\w_{x}.
\end{align}
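For concreteness, the following Python sketch (not part of the original derivation; the function name, the small ridge term, and the use of \texttt{scipy.linalg.eigh} are our own choices) illustrates how the generalized eigenvalue problem~(\ref{eqn:csmh:wx}) and Equation~(\ref{eqn:csmh:wy}) could be solved numerically:
\begin{verbatim}
# Sketch: SMH projection step via CCA (assumes NumPy/SciPy; names are ours).
import numpy as np
from scipy.linalg import eigh

def smh_projections(X, Y, M, ridge=1e-6):
    # X: D_x x N, Y: D_y x N, both mean-centered.
    N = X.shape[1]
    Cxx = X @ X.T / N + ridge * np.eye(X.shape[0])  # ridge keeps C_xx invertible
    Cyy = Y @ Y.T / N + ridge * np.eye(Y.shape[0])
    Cxy = X @ Y.T / N
    # C_xy C_yy^{-1} C_xy^T w_x = lambda^2 C_xx w_x
    A = Cxy @ np.linalg.solve(Cyy, Cxy.T)
    vals, vecs = eigh(A, Cxx)                 # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:M]          # keep the M largest
    Wx = vecs[:, idx]
    lam = np.sqrt(np.maximum(vals[idx], 1e-12))
    Wy = np.linalg.solve(Cyy, Cxy.T @ Wx) / lam   # column-wise version of Eq. for w_y
    return Wx, Wy
\end{verbatim}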
With the projection vectors $ \w_x $ and $ \w_y $ computed, one common approach to obtaining the binary codes is to simply apply the $ \sgn(\cdot) $ operator. However, this may separate points located near the decision boundary, impairing the model especially when the data distribution is dense in that area. To overcome this shortcoming, we use two thresholds, a fixed threshold of zero and a learned threshold, to generate the binary codes.
The learned threshold can be obtained as follows. For each projection, we first divide the range of projected values into $ N_b $ bins and then calculate the relative data density of each bin as $ P_{t} = N_t/N, t=1,\dots,N_b, $ where $ N_t $ denotes the number of points in the $ t $th bin. The cost of cutting the $ t $th bin is defined as follows,
\begin{align}
C_t = \left(\sum\nolimits_{\hat{t}=1}^{t-1}P_{\hat{t}}\right)^2 + \left(\sum\nolimits_{\hat{t}=t+1}^{N_b}P_{\hat{t}}\right)^2 + P_t,\nonumber
\end{align}
which measures the relative density of the $ t $th bin together with the relative densities on its two sides. Intuitively, if $ C_{t} $ is small, a boundary cutting through the $ t $th bin separates a sparse area and leaves the points distributed evenly on both of its sides. In fact, $ C_t $ is an adapted surrogate for the expected size of a hash bucket, which should be as small as possible for nearest neighbor search~\cite{cayton2007nips}. We then use the center of the bin with the smallest $ C_t $ as the threshold and denote it by $ t_x $ or $ t_y $.
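As an illustration only (the histogram-based discretization and the function name below are our own assumptions), the cut cost $ C_t $ and the resulting threshold can be computed as:
\begin{verbatim}
# Sketch: learned threshold from the bin-cut cost C_t (names are ours).
import numpy as np

def learn_threshold(proj, n_bins=32):
    # proj: 1-D array of projected values w^T x over the training set.
    counts, edges = np.histogram(proj, bins=n_bins)
    P = counts / counts.sum()          # relative density P_t of each bin
    left = np.cumsum(P) - P            # mass strictly to the left of bin t
    right = 1.0 - np.cumsum(P)         # mass strictly to the right of bin t
    C = left**2 + right**2 + P         # cut cost C_t
    t = np.argmin(C)
    return 0.5 * (edges[t] + edges[t + 1])   # center of the cheapest bin
\end{verbatim}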
Now we are ready to generate binary bits using the two thresholds $0$ and $ t_x $ (or $ t_y $). For example, given $ \w_x $ and $ \x^{*} $, we have
\begin{align}
\label{eqn:csmh:hg1}
h_1(\x^{*}) = \sgn(\w_{x}^{T}\x^{*}), \ \ h_2(\x^{*}) = \sgn(\w_{x}^{T}\x^{*}-t_x),
\end{align}
and given $ \w_y $ and $ \y^{*} $, we have
\begin{align}
\label{eqn:csmh:hg2}
g_1(\y^{*}) = \sgn(\w_{y}^{T}\y^{*}), \ \ g_2(\y^{*}) = \sgn(\w_{y}^{T}\y^{*}-t_y).
\end{align}
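Combining Equations~(\ref{eqn:csmh:hg1}) and~(\ref{eqn:bit}), each projection direction thus contributes two Hamming bits per point; a minimal sketch (variable names are ours) is:
\begin{verbatim}
# Sketch: two Hamming bits per projection direction (names are ours).
import numpy as np

def two_bits(w, x, t):
    p = float(w @ x)                         # projected value w^T x
    h1 = np.sign(p) if p != 0 else 1.0       # threshold at zero
    h2 = np.sign(p - t) if p != t else 1.0   # learned threshold t
    return (1 + h1) / 2, (1 + h2) / 2        # map {-1,+1} to {0,1} as in Eq. (bit)
\end{verbatim}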
%We adopt a alternating algorithm to find the best threshold. First initialize $t_y=0$, we change the value of $t_x$ gradually from the centers.
%
%We first sort the values $\w_x^T\x $, then we get a vector of threshold values $[\w_x^T\x_1-0.1,(\w_x^T\x_1+\w_x^T\x_2)/2, (\w_x^T\x_2+\w_x^T\x_3)/2, \dots, (\w_x^T\x_{n-1}+\w_x^T\x_n)/2, \w_x^T\x_n+0.001]$. For the first threshold, the initial precision is evaluated. For a new threshold $t_x^{*}$, only check those affected point pairs with one end point $\x_{*}$, the precision can be changed to $\frac{total \# of point pairs * previous precision +2(\# of correct pair-\# of incorrect pair)}{total \# of point pairs}$. Given each point is involved in only a small number of points, the algorithm can be very efficient with complexity $O(Nd+P)$, where $P$ is the total number of pairs and $d$ is the average number of pairs a point is involved in.
%Another extension is to generate multiple bits using one eigenvector. One bit uses threshold 0 and the other uses the threshold learned. This combination is also possible and might be useful.
%
%CCA may also depend on the first few eigenvectors. Let's try.
%
%We can compare these two approaches to see which is better.
The basic \mbox{SMH} algorithm is summarized in Algorithm~\ref{algorithm:cmh}.
\begin{algorithm}[ht]
\caption{Algorithm of \mbox{SMH}}
\label{algorithm:cmh}
\begin{algorithmic}
\STATE {\bfseries Input:} \\
$\X$, $\Y$ -- data matrices
\\ $M$ -- number of hash functions
\STATE {\bfseries Procedure:}
\STATE Compute $\C_{xx},\C_{xy},\C_{yy}$.
\STATE Obtain $M$ eigenvectors corresponding to the $M$ largest eigenvalues of the generalized eigenvalue problem~(\ref{eqn:csmh:wx}) as $\w_{x}$'s.
\STATE Obtain the corresponding $\w_{y}$'s using Equation~(\ref{eqn:csmh:wy}).
\STATE Learn thresholds $ t_x $ and $ t_y $.
\STATE Obtain the hash codes of points $\x^{*}$ and $\y^{*}$ using Equations~(\ref{eqn:csmh:hg1}),~(\ref{eqn:csmh:hg2}) \& (\ref{eqn:bit}).
\end{algorithmic}
\end{algorithm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Extensions}
\label{smh:Ssmh:EXT}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Kernel \mbox{SMH}}
\label{smh:Ssmh:EXT:KSMH}
The \mbox{SMH} model presented in the previous subsection has two limitations. First, it can only handle vectorial data. Second, the projection before thresholding is linear. In this subsection, we propose a kernel extension of \mbox{SMH}, hereafter abbreviated as \mbox{KSMH}, to overcome these limitations. % by taking the kernel approach~\cite{shawe2004book}.
Let $\mathcal{K}(\cdot,\cdot)$ be a valid kernel function and $\phi(\cdot)$ be the corresponding function that maps data points in the original input space to the kernel-induced feature space. In the sequel, we use $\Ph(\X) = [\phi(\x_1),\dots,\phi(\x_{N})]$ and $\Ph(\Y) = [\phi(\y_1),\dots,\phi(\y_{N})]$ to denote the data matrices in the kernel-induced feature space.
%Suppose we have a set of $P$ landmark points\footnote{*** You should explain what landmark points are and why they are needed.} $\hat{\mathcal{X}} \subset\mathcal{X}$, which can also be represented by $\hat{\X}$ in the original input space and $\Ph(\hat{\X})$ in the kernel-induced feature space. For the other modality, we define $\hat{\mathcal{Y}},\hat{\Y}$ and $\Ph(\hat{\Y})$ similarly.
Taking the kernel approach~\cite{scholkopf2001colt,kulis2009nips}, we represent $\w_{x}$ and $\w_{y}$ as linear combinations of two groups of landmark points in the kernel-induced feature space, i.e.,
\begin{align}
\w_{x} = \Ph(\hat{\X})^{T}\alpp, \ \
\w_{y} = \Ph(\hat{\Y})^{T}\bett,\nonumber
\end{align}
where $ \hat{\X}\in\mathbb{R}^{D_x\times P}$ and $\hat{\Y}\in\mathbb{R}^{D_y\times P} $ are two landmark sets whose points are randomly chosen from $ \X $ and $ \Y $, respectively. We note that although the landmark points should be sampled from the corresponding data distribution and be sufficiently representative, in practice it suffices to select the landmarks randomly from the training set. Here $\alpp\in\mathbb{R}^{P\times 1}$ and $\bett\in\mathbb{R}^{P\times 1}$ are combination coefficient vectors. To reduce the computational cost, $P$ is usually small compared to $N$.
The objective function of Problem~(\ref{eqn:cmh1}) can now be rewritten as
\begin{align}
\frac{\alpp^{T}\K_{\hat{x}x}\K_{y\hat{y}}\bett}{\sqrt{\alpp^{T}\K_{\hat{x}x}\K_{x\hat{x}}\alpp\bett^{T}\K_{\hat{y}y}\K_{y\hat{y}}\bett}},
\label{eqn:kcsmh:obj1}
\end{align}
where $\K_{\hat{x}x} = \K_{x\hat{x}}^{T} = \Ph(\hat{\X})^{T}\Ph(\X)$ and $\K_{\hat{y}y} = \K_{y\hat{y}}^{T} = \Ph(\hat{\Y})^{T}\Ph(\Y)$.
Since the objective function above can lead to degenerate solutions as discussed in~\cite{hardoon2004nc}, we penalize the norms of $\w_{x}$ and $\w_{y}$ in the denominator of (\ref{eqn:kcsmh:obj1}) and arrive at the following alternative form:
\begin{align}
\frac{\alpp^{T}\K_{\hat{x}x}\K_{y\hat{y}}\bett}{\sqrt{\alpp^{T}(\K_{\hat{x}x}\K_{x\hat{x}}+\kappa\K_{\hat{x}\hat{x}})\alpp\bett^{T}(\K_{\hat{y}y}\K_{y\hat{y}}+\kappa\K_{\hat{y}\hat{y}})\bett}},
\label{eqn:kcsmh:obj2}
\end{align}
where $\K_{\hat{x}\hat{x}} = \Ph(\hat{\X})^{T}\Ph(\hat{\X}),\K_{\hat{y}\hat{y}}= \Ph(\hat{\Y})^{T}\Ph(\hat{\Y})$ and $\kappa>0$ is a regularization parameter.
%
%We are now ready to formulate \mbox{KSMH} as the following constrained optimization problem:
%\begin{eqnarray}
%\max_{\alpp,\bett}& \alpp^{T}\K_{\hat{x}x}\K_{y\hat{y}}\bett\\
%\subto& \alpp^{T}(\K_{\hat{x}x}\K_{x\hat{x}}+\kappa\K_{\hat{x}\hat{x}})\alpp=1\nonumber\\
%& \bett^{T}(\K_{\hat{y}y}\K_{y\hat{y}}+\kappa\K_{\hat{y}\hat{y}})\bett=1.\nonumber
%\end{eqnarray}
After some simple relaxations and manipulations similar to \mbox{SMH}, $\alpp$ can be obtained by solving the following generalized eigenvalue problem:
\begin{align}
\label{eqn:kcsmh:alpha}
\K_{\hat{x}x}\K_{y\hat{y}}(\K_{\hat{y}y}\K_{y\hat{y}} + \kappa\K_{\hat{y}\hat{y}})^{-1}\K_{\hat{y}y}\K_{x\hat{x}}\alpp =\lambda^2(\K_{\hat{x}x}\K_{x\hat{x}} + \kappa\K_{\hat{x}\hat{x}})\alpp.
\end{align}
After obtaining $\alpp$, we compute
\begin{align}
\label{eqn:kcsmh:beta}
\bett =\frac{1}{\lambda} (\K_{\hat{y}y}\K_{y\hat{y}}+\kappa\K_{\hat{y}\hat{y}})^{-1}\K_{\hat{y}y}\K_{x\hat{x}}\alpp.
\end{align}
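A corresponding sketch for the kernel case (again purely illustrative; the function name and the tiny jitter term are our own assumptions) solves the generalized eigenvalue problem~(\ref{eqn:kcsmh:alpha}) and Equation~(\ref{eqn:kcsmh:beta}) with the same routine as before:
\begin{verbatim}
# Sketch: KSMH coefficients from landmark kernel matrices (names are ours).
import numpy as np
from scipy.linalg import eigh

def ksmh_coefficients(Kxhx, Kyhy, Kxhxh, Kyhyh, M, kappa=1e-4):
    # Kxhx, Kyhy: P x N landmark-to-data kernels; Kxhxh, Kyhyh: P x P landmark kernels.
    P = Kxhx.shape[0]
    jitter = 1e-10 * np.eye(P)                 # keeps the right-hand sides definite
    Rx = Kxhx @ Kxhx.T + kappa * Kxhxh + jitter
    Ry = Kyhy @ Kyhy.T + kappa * Kyhyh + jitter
    Cxy = Kxhx @ Kyhy.T                        # cross term of the objective
    A = Cxy @ np.linalg.solve(Ry, Cxy.T)
    vals, vecs = eigh(A, Rx)
    idx = np.argsort(vals)[::-1][:M]
    alpha = vecs[:, idx]
    lam = np.sqrt(np.maximum(vals[idx], 1e-12))
    beta = np.linalg.solve(Ry, Cxy.T @ alpha) / lam
    return alpha, beta
\end{verbatim}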
We can also learn the thresholds $ t_x $ and $ t_y $ using the same approach presented earlier. For any new point $\x^{*}$, two bits of binary code can be obtained as
\begin{align}
\label{eqn:kcsmh:hg1}
h_1(\x^{*}) = \sgn(\k_{x^{*}}^{T}\alpp), \ \ h_2(\x^{*}) = \sgn(\k_{x^{*}}^{T}\alpp-t_x)
\end{align}
and for $ \y^{*} $, we have
\begin{align}
\label{eqn:kcsmh:hg2}
g_1(\y^{*}) = \sgn(\k_{y^{*}}^{T}\bett), \ \ g_2(\y^{*}) = \sgn(\k_{y^{*}}^{T}\bett-t_y),
\end{align}
where $\k_{x^{*}} = \Ph(\hat{\X})^{T}\phi(\x^{*})$ and $\k_{y^{*}} = \Ph(\hat{\Y})^{T}\phi(\y^{*})$.
The algorithm of \mbox{KSMH} is summarized in Algorithm~\ref{algorithm:kcmh}.
\begin{algorithm}[ht]
\caption{Algorithm of \mbox{KSMH}}
\label{algorithm:kcmh}
\begin{algorithmic}
\STATE {\bfseries Input:} \\
$\X$, $\Y$ -- data matrices
\\$\mathcal{K}(\cdot,\cdot)$ -- kernel function
\\ $M$ -- number of hash functions
\\ $\kappa$ -- regularization parameter
\STATE {\bfseries Procedure:} \\
\STATE Compute $\K_{\hat{x}x}, \K_{\hat{y}y}, \K_{\hat{x}\hat{x}}, \K_{\hat{y}\hat{y}}$.
\STATE Obtain $M$ eigenvectors corresponding to the $M$ largest eigenvalues of the generalized eigenvalue problem~(\ref{eqn:kcsmh:alpha}) as $\alpp$'s.
\STATE Obtain the corresponding $\bett$'s using Equation~(\ref{eqn:kcsmh:beta}).
\STATE Learn thresholds $ t_x $ and $ t_y $.
\STATE Obtain the hash codes of points $\x^{*}$ and $\y^{*}$ using Equations~(\ref{eqn:kcsmh:hg1}), (\ref{eqn:kcsmh:hg2}) \& (\ref{eqn:bit}).
\end{algorithmic}
\end{algorithm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Regularized kernel \mbox{SMH}}
\label{smh:Ssmh:EXT:RKSMH}
Both \mbox{SMH} and \mbox{KSMH} proposed above aim at maximizing the correlation between variables in different modalities while ignoring the relational information within each modality. Moreover, it is unclear how the two models could make use of side information, such as labels, when it is available in the data.
Inspired by~\cite{blaschko2008ecml}, we further extend \mbox{KSMH} by adding two \textit{Laplacian} regularization terms to the objective~(\ref{eqn:kcsmh:obj2}). We name this new model \mbox{RKSMH}, whose objective is:
\begin{align}
\frac{\alpp^{T}\K_{\hat{x}x}\K_{y\hat{y}}\bett}{\sqrt{\alpp^{T}(\K_{\hat{x}x}\R_{x}\K_{x\hat{x}}+\kappa\K_{\hat{x}\hat{x}})\alpp\bett^{T}(\K_{\hat{y}y}\R_{y}\K_{y\hat{y}}+\kappa\K_{\hat{y}\hat{y}})\bett}},\nonumber
\end{align}
where $\R_{x} = (\I+\gamma\mathcal{L}_{x}),\R_{y} = (\I+\gamma\mathcal{L}_{y})$, $\gamma>0$ is a parameter controlling the impact of regularization, and $\mathcal{L}_{x},\mathcal{L}_{y}$ are graph \textit{Laplacians}~\cite{chung1997spectral} that incorporate information about $\X$ and $\Y$, respectively. We note that the \textit{Laplacian} matrix is defined as $\mathcal{L} = \D-\W$, where $\D$ is a diagonal matrix with $D(i,i) = \sum_{j=1}^{N}W(i,j)$.\footnote{To avoid clutter, we omit the subscripts here.} We also note that the \textit{Laplacian} matrix can be computed efficiently by using an anchor graph~\cite{liu2010icml}.
The regularizers $ \R_{x} $ and $ \R_{y} $ can not only exploit relational information within a single modality but also incorporate side information into the model when it is available. For example, $\mathcal{L}_{x}$ can incorporate structural or geometric information in the input space $ \X $ by defining $\W_{x}$ as
\begin{align}
\label{eqn:wfeature}
W_{x}(i,j) = \left\{ \begin{array}{ll}
\exp\left(-\frac{d^2(\x_i,\x_j)}{\sigma^2}\right) & \textrm{if $\x_i,\x_j$ are neighbors}\\
0 & \textrm{otherwise}
\end{array} \right.
\end{align}
where $d(\cdot,\cdot)$ is the Euclidean distance between two points and $\sigma$ is a user-specified width parameter. In our experiments, we regard two points as neighbors if either one is among the $K$ nearest neighbors of the other one in the feature space. We call this type of \textit{Laplacian} the feature-based \textit{Laplacian}.
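For illustration (the brute-force distance computation and the function name are our own; an anchor graph would be used at scale), the feature-based \textit{Laplacian} of Equation~(\ref{eqn:wfeature}) can be built as:
\begin{verbatim}
# Sketch: feature-based graph Laplacian L = D - W (names are ours).
import numpy as np

def feature_laplacian(X, k=5, sigma=1.0):
    # X: D x N data matrix; returns the N x N Laplacian of a kNN Gaussian graph.
    N = X.shape[1]
    sq = np.sum(X**2, axis=0)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (X.T @ X), 0.0)
    W = np.zeros((N, N))
    for i in range(N):
        nn = np.argsort(d2[i])[1:k + 1]              # k nearest neighbours, skip self
        W[i, nn] = np.exp(-d2[i, nn] / sigma**2)
    W = np.maximum(W, W.T)    # neighbours if either point is among the other's kNN
    return np.diag(W.sum(axis=1)) - W
# For the label-based Laplacian, set W[i, j] = 1 whenever x_i and x_j share a label.
\end{verbatim}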
$\mathcal{L}_{x}$ can also be used to incorporate side information such as labels by defining $\W_{x}$ as
\begin{align}
\label{eqn:wlabel}
W_{x}(i,j) = \left\{ \begin{array}{ll}
1 & \textrm{if $\x_i$ and $\x_j$ have the same label}\\
0 & \textrm{otherwise}
\end{array} \right.
\end{align}
Hereafter we call this \textit{Laplacian} the label-based \textit{Laplacian}. Note that $\mathcal{L}_{y}$ can be defined similarly.
In \mbox{RKSMH}, $\alpp$ can be obtained by solving the following generalized eigenvalue problem:
\begin{align}
\label{eqn:lrkcsmh:alpha}
\K_{\hat{x}x}\K_{y\hat{y}}(\K_{\hat{y}y}\R_{y}\K_{y\hat{y}} + \kappa\K_{\hat{y}\hat{y}})^{-1}\K_{\hat{y}y}\K_{x\hat{x}}\alpp= \lambda^2(\K_{\hat{x}x}\R_{x}\K_{x\hat{x}} + \kappa\K_{\hat{x}\hat{x}})\alpp,
\end{align}
and $\bett$ can be computed as
\begin{align}
\label{eqn:lrkcsmh:beta}
\bett =\frac{1}{\lambda} (\K_{\hat{y}y}\R_{y}\K_{y\hat{y}}+\kappa\K_{\hat{y}\hat{y}})^{-1}\K_{\hat{y}y}\K_{x\hat{x}}\alpp.
\end{align}
The thresholding procedure of \mbox{RKSMH} is the same as that of \mbox{KSMH} and \mbox{SMH}. The algorithm is summarized in Algorithm~\ref{algorithm:lrkcmh}.
\begin{algorithm}[htb]
\caption{Algorithm of \mbox{RKSMH}}
\label{algorithm:lrkcmh}
\begin{algorithmic}
%\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\STATE {\bfseries Input:} \\
$\X$, $\Y$ -- data matrices
\\$\mathcal{K}(\cdot,\cdot)$ -- kernel function
\\ $M$ -- number of hash functions
\\ $\kappa,\gamma$ -- regularization parameters
\STATE {\bfseries Procedure:} \\
\STATE Compute $\K_{\hat{x}x}, \K_{\hat{y}y}, \K_{\hat{x}\hat{x}}, \K_{\hat{y}\hat{y}}, \R_{x},\R_{y}$.
\STATE Obtain $M$ eigenvectors corresponding to the $M$ largest eigenvalues of the generalized eigenvalue problem~(\ref{eqn:lrkcsmh:alpha}) as $\alpp$'s.
\STATE Obtain the corresponding $\bett$'s using Equation~(\ref{eqn:lrkcsmh:beta}).
\STATE Learn thresholds $ t_x $ and $ t_y $.
\STATE Obtain the hash codes of points $\x^{*}$ and $\y^{*}$ using Equations~(\ref{eqn:kcsmh:hg1}), (\ref{eqn:kcsmh:hg2}) \& (\ref{eqn:bit}).
\end{algorithmic}
\end{algorithm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Beyond two modalities}
\label{smh:Ssmh:EXT:BEYOND}
Our \mbox{SMH} models can easily accommodate more than two modalities, using the corresponding extensions of \mbox{CCA} and \mbox{KCCA}~\cite{hardoon2004nc,blaschko2008ecml}.
Taking \mbox{SMH} as an example, suppose we have $ K $ modalities and want to learn $ K $ projection vectors $ \{\w_1,\cdots,\w_K\}$. We then solve the following generalized eigenvalue problem:
\begin{align}
\begin{pmatrix}
\C_{11} & \cdots &\C_{1K} \\
\vdots & \ddots &\vdots \\
\C_{K1}&\cdots &\C_{KK} \end{pmatrix}\begin{pmatrix}\w_1\\\vdots\\\w_K\end{pmatrix} = \lambda \begin{pmatrix}
\C_{11} & \cdots &\0 \\
\vdots & \ddots &\vdots \\
\0&\cdots &\C_{KK} \end{pmatrix}\begin{pmatrix}
\w_{1}\\
\vdots \\
\w_{K}\end{pmatrix},\nonumber
\end{align}
where $ \C_{ij} $ is the covariance matrix between modalities $ i $ and $ j $, and $ \C_{ij} = \C_{ji}^T $.
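The block structure above translates directly into code; the following sketch (our own, assuming all covariance blocks fit in memory and adding a small ridge for numerical stability) assembles and solves the multi-modality problem:
\begin{verbatim}
# Sketch: SMH beyond two modalities (names and the ridge term are ours).
import numpy as np
from scipy.linalg import eigh

def multiview_smh(Xs, M, ridge=1e-6):
    # Xs: list of K mean-centered data matrices, each of shape D_i x N.
    K = len(Xs)
    N = Xs[0].shape[1]
    dims = [X.shape[0] for X in Xs]
    off = np.concatenate(([0], np.cumsum(dims)))
    A = np.zeros((off[-1], off[-1]))           # full covariance block matrix
    B = np.zeros_like(A)                       # block-diagonal right-hand side
    for i in range(K):
        for j in range(K):
            Cij = Xs[i] @ Xs[j].T / N
            A[off[i]:off[i+1], off[j]:off[j+1]] = Cij
            if i == j:
                B[off[i]:off[i+1], off[i]:off[i+1]] = Cij + ridge * np.eye(dims[i])
    vals, vecs = eigh(A, B)
    top = vecs[:, np.argsort(vals)[::-1][:M]]
    return [top[off[i]:off[i+1], :] for i in range(K)]  # one D_i x M block per modality
\end{verbatim}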
For \mbox{KSMH}, we select landmark points from each modality and index them with $ \{\hat{1},\cdots,\hat{K}\} $. The corresponding eigenvalue problem, for projection vectors $ \{\alpha_1,\cdots,\alpha_K\} $, is:
\begin{align}\footnotesize
\begin{pmatrix}
\K_{\hat{1}1}\K_{1\hat{1}} & \cdots &\K_{\hat{1}1}\K_{K\hat{K}} \\
\vdots & \ddots &\vdots \\
\K_{\hat{K}K}\K_{1\hat{1}}&\cdots &\K_{\hat{K}K}\K_{K\hat{K}} \end{pmatrix}\begin{pmatrix}\alpha_1\\\vdots\\\alpha_K\end{pmatrix} =
\lambda \begin{pmatrix}
\K_{\hat{1}1}\K_{1\hat{1}}+\kappa\K_{\hat{1}\hat{1}} & \cdots &\0 \\
\vdots & \ddots &\vdots \\
\0&\cdots &\K_{\hat{K}K}\K_{K\hat{K}}+\kappa\K_{\hat{K}\hat{K}} \end{pmatrix}\begin{pmatrix}
\alpha_{1}\\
\vdots \\
\alpha_{K}\end{pmatrix},\nonumber
\end{align}
where $ \K_{\hat{i}i} $ is the kernel matrix between the landmark points and the training data points and $ \K_{\hat{i}\hat{i}} $ is the kernel matrix for the landmark points, for the $ i $th modality. The generalized eigenvalue problems for \mbox{RKSMH} are similar, and we omit them here due to space limitations.
With the projection vectors learned, we can apply the same thresholding process to each modality and get the binary codes easily.
%For multiple views, we should talk about it here. At least two ways.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\section{Multimodal Binary Reconstructive Embedding}
%\label{smh:MBRE}
%The \mbox{SMH} model introduced in last subsection requires the data points in different modalities to be paired, which might not be the case in some applications. In this section, we extend a unimodal hashing method \mbox{BRE} to multimodal settings to given a novel method called \textit{multimodal binary reconstructive embedding} (\mbox{MBRE}), the inputs of which is pairwise distance or relations.
%
%%+++++++++++++++++++++++++++++++++++++++++++++++++++++++
%\subsection{Model}
%Let $M$ be the number of hash functions (\aka code length), $N$ be the number of data points, and $Q$ be the number of landmark points. Given two kinds of data points $\mathcal{X}$ and $\mathcal{Y}$,\footnote{Without loss of generality, here we assume there are two modalities and each modality has $N$ points.} similar to \mbox{BRE}, we define hash functions \wrt the $m$th bit for $\x\in\mathcal{X}$ and $\y\in\mathcal{Y}$, respectively, as follows,
%\begin{align}
%h_{m}(\x) = \frac{1+\sgn\left(\sum_{q=1}^{Q}W_{x}(m,q)\kappa(\x_q,\x)\right)}{2} \ \ \mbox{or} \ \ g_{m}(\x) = \frac{1+\sgn\left(\sum_{q=1}^{Q}W_{y}(m,q)\kappa(\y_q,\y)\right)}{2}\nonumber,
%\end{align}
%where $\W_{x}$ and $\W_{y}$ are two $M\times Q$ projection matrices, $\{\x_q\}_{q=1}^{Q}\subset\mathcal{X}$ and $\{\y_q\}_{q=1}^{Q}\subset\mathcal{Y}$ are landmark points for $\mathcal{X}$ and $\mathcal{Y}$ respectively, and $\kappa(\cdot,\cdot)$ is a kernel function. Note that defining hash functions this way is very common in kernel methods and brings us flexibility to work on a wide variety of data types. Therefore, given two points $\x\in\mathcal{X}$ and $\y\in\mathcal{Y}$, we denote their corresponding binary representations as $\tilde{\x}$ and $\tilde{\y}$ such that their $m$th bits can be evaluated by $\tilde{x}(m) = h_{m}(\x)$ and $\tilde{y}(m) = g_{m}(\y)$.
%
%Since in many real-world applications, it is much easier to obtain binary pairwise relationships rather than real-valued distance, here we simply define the \textit{original} distance between two points $\x_i,\x_j$ as follows,
%\begin{align}
%d(\x_i,\x_j) = \left\{ \begin{array}{ll}
%0 & \textrm{if $\x_i$ and $\x_j$ belong to the same class};\\
%1 & \textrm{otherwise},
%\end{array} \right. \nonumber%\\
%%d(\y_k,\y_l) = \left\{ \begin{array}{ll}
%%0 & \textrm{if $\y_k$ and $\y_l$ are similar};\\
%%1 & \textrm{if $\y_k$ and $\y_l$ are dissimilar},
%%\end{array} \right. \nonumber\\
%%%d(\x_i,\x_j) = \frac{1}{2}\|\x_i-\x_j\|^{2}_2, &\tilde{d}(\x_i,\x_j) = \frac{1}{M}\|\tilde{\x}_i-\tilde{\x}_j\|^{2}_2,\nonumber\\
%%%d(\y_k,\y_l) = \frac{1}{2}\|\y_k-\y_l\|^{2}_2, &\tilde{d}(\y_k,\y_l) = \frac{1}{M}\|\tilde{\y}_k-\tilde{\y}_l\|^{2}_2,\nonumber
%%d(\x_i,\y_k) = \left\{ \begin{array}{ll}
%%0 & \textrm{if $\x_i$ and $\y_k$ are similar};\\
%%1 & \textrm{if $\x_i$ and $\y_k$ are dissimilar},
%%\end{array} \right. \nonumber
%\end{align}
%$d(\y_k,\y_l)$ and $d(\x_i,\y_k)$ are defined similarly. Note that using binary values here to define distance is just a special case, and our model can accept other definitions of distance.
%
%We define the \textit{reconstructive} distance between two points as follows,
%\begin{align}
%\tilde{d}(\x_i,\x_j) = \frac{1}{M}\|\tilde{\x}_i-\tilde{\x}_j\|^{2}_2, \ \
%\tilde{d}(\y_k,\y_l) = \frac{1}{M}\|\tilde{\y}_k-\tilde{\y}_l\|^{2}_2, \ \
%\tilde{d}(\x_i,\y_k) = \frac{1}{M}\|\tilde{\x}_i-\tilde{\y}_k\|^{2}_2.\nonumber
%\end{align}
%
%Intuitively speaking, we try to find $\W_{x},\W_{y}$ such that the reconstructive distance are close to the original distance. More specifically, the goal of \mbox{MBRE} is to minimize the following objective,
%\begin{align}
%\mathcal{O}\left(\W_{x},\W_{y}\right)&=\sum_{(\x_i,\x_j)\in\mathcal{N}_{x}}\left(d(\x_i,\x_j)-\tilde{d}(\x_i,\x_j)\right)^{2}+\sum_{(\y_k,\y_l)\in\mathcal{N}_{y}}\left(d(\y_k,\y_l)-\tilde{d}(\y_k,\y_l)\right)^{2}\nonumber\\
%&+\sum_{(\x_i,\y_k)\in\mathcal{N}_{xy}}\left(d(\x_i,\y_k)-\tilde{d}(\x_i,\y_k)\right)^{2},
%\label{eqn:totalobj}
%\end{align}
%where $\mathcal{N}_{x}$ is a set of point pairs in $\mathcal{X}$, $\mathcal{N}_{y}$ is a set of point pairs in $\mathcal{Y}$ and $\mathcal{N}_{xy}$ is a set of pairs with one point in $\mathcal{X}$ and the other point in $\mathcal{Y}$. In our experiments, there are $k$ pairs for each point and so each set has size upper-bounded by $Nk$.\footnote{The total number of pairs might be smaller than $Nk$, since there might be some duplicate pairs. Besides, different sets may have different $k$ values.} We note that the objective function of \mbox{BRE} is just the first term of that in Eqn.~(\ref{eqn:totalobj}).
%
%
%%######################################
%\subsection{Algorithm}
%To solve the above optimization problem, we adapt the coordinate descent algorithm used in~\cite{kulis2009nips} for our model. The major difference between the adapted algorithm and the original one is threefold: 1) we update all parameters sequentially but the original algorithm randomly updates only a small subset of them;\footnote{Note that original algorithm is slow to converge because of random update.} 2) we use a warm-start approach to improve the convergence rate and obtain better performance; 3) our algorithm involves more updating terms.
%
%We first introduce Lemma~\ref{lemma:updatex} as follows.
%\begin{mylem}
%Let $\bar{D}_{x}(i,j)=d(\x_i,\x_j)-\tilde{d}(\x_i,\x_j),\bar{D}_{xy}(i,k)=d(\x_i,\y_k)-\tilde{d}(\x_i,\y_k)$. Consider updating one hash function of $\mathcal{X}$ from $h_{o}$ to $h_{n}$, and let $\h_{o}$ and $\h_{n}$ be the $N\times 1$ vectors obtained by applying the old and new hash functions to each data point in $\mathcal{X}$. Furthermore, we denote the hash function of $\mathcal{Y}$ with the same bit index as $g$ and the corresponding binary vector as $\g$. Then the objective function of using $h_{n}$ instead of $h_{o}$ can be expressed as
%\begin{align}
%\mathcal{O} &= \sum_{(\x_i,\x_j)\in\mathcal{N}_{x}}\left(\bar{D}_{x}(i,j)+\frac{1}{M}(h_{o}(i)-h_{o}(j))^2-\frac{1}{M}(h_{n}(i)-h_{n}(j))^2\right)^2\nonumber\\
%&+ \sum_{(\x_i,\y_k)\in\mathcal{N}_{xy}}\left(\bar{D}_{xy}(i,k)+\frac{1-2g(k)}{M}(h_{o}(i)-h_{n}(i))\right)^2+C,
%\end{align}
%where $C$ is a constant independent of $h_{o}$ and $h_{n}$.
%\label{lemma:updatex}
%\end{mylem}
%\begin{myproof}
%Let $\tilde{\D}_{x}^{o}$ and $\tilde{\D}_{x}^{n}$ be the matrices of reconstructive distance using $h_{o}$ and $h_{n}$ respectively, $\H_{o}$ and $\H_{n}$ be the $N\times M$ matrices of old and new hash codes of $\mathcal{X}$ respectively, and $\G$ be the hash codes of $\mathcal{Y}$. Moreover, we use $\1_{t}$ to denote the $t$th standard basis vector and $\1$ to denote a vector of all ones, and their dimensionalities will be clear in the context.
%
%We can express $\tilde{\D}_{x}^{o}$ as follows,
%\begin{align}
%\tilde{\D}_{x}^{o} = \frac{1}{M}\left(\Ell_{xo}\1^{T}+\1\Ell^T_{o}-2\H_{o}\H_{o}^{T}\right)\nonumber,
%\end{align}
%where $\Ell_{xo}$ is the vector of squared norms of the rows of $\H_{o}$. Accordingly, we can express $\Ell_{xn}$ for $\H_{n}$ as $\Ell_{xn} = \Ell_{xo} - \h_{o}+\h_{n}$, since $\h_{o}$ and $\h_{n}$ are binary vectors.
%Moreover, we can easily obtain $\H_{n} = \H_{o} +(\h_{n}-\h_{o})\1^{T}_{m}$, where $m$ is the index of the hash function being updated. Therefore,
%\begin{align}
%\tilde{\D}_{x}^{n}
%&= \frac{1}{M}\left(\Ell_{xn}\1^{T}+\1\Ell_{xn}^T-2\H_{n}\H_{n}^{T}\right)\nonumber\\
%&= \frac{1}{M}\left((\Ell_{xo} - \h_{o}+\h_{n})\1^{T}+\1(\Ell_{xo} - \h_{o}+\h_{n})^{T}-2(\H_{o} +(\h_{n}-\h_{o})\1^{T}_{m})(\H_{o} +(\h_{n}-\h_{o})\1^{T}_{m})^{T}\right)\nonumber\\
%&= \tilde{\D}_{x}^{o}-\frac{1}{M}\left((\h_{o}\1^{T}+\1\h_{o}^{T}-2\h_{o}\h_{o}^{T})-(\h_{n}\1^{T}+\1\h_{n}^{T}-2\h_{n}\h_{n}^{T})\right).\nonumber
%\end{align}
%
%Similarly, we have the following crossmodel reconstructive distance matrix,
%\begin{align}
%\tilde{\D}_{xy}^{o} = \frac{1}{M}\left(\Ell_{xo}\1^{T}+\1\Ell_{y}^T-2\H_{o}\G^{T}\right)\nonumber,
%\end{align}
%where $\Ell_{y}$ is the vector of squared norms of the rows of $\G$. Therefore,
%\begin{align}
%\tilde{\D}_{xy}^{n}
%&= \frac{1}{M}\left(\Ell_{xn}\1^{T}+\1\Ell_{y}^T-2\H_{n}\G^{T}\right)\nonumber\\
%&= \frac{1}{M}\left((\Ell_{xo} - \h_{o}+\h_{n})\1^{T}+\1\Ell_{y}^{T}-2(\H_{o} +(\h_{n}-\h_{o})\1^{T}_{m})\G^{T}\right)\nonumber\\
%&= \tilde{\D}_{x}^{o}-\frac{1}{M}\left((\h_{o}\1^{T}-2\h_{o}\g^{T})-(\h_{n}\1^{T}-2\h_{n}\g^{T})\right).\nonumber
%\end{align}
%
%Thus we can write the objective function of using $h_{n}$ instead of $h_{o}$ as
%\begin{align}
%\mathcal{O} &= \sum_{(\x_i,\x_j)\in\mathcal{N}_{x}}\left(\bar{D}_{x}(i,j)+\tilde{D}_{x}^{o}(i,j)-\tilde{D}_{x}^{n}(i,j)\right)^2+\sum_{(\x_i,\y_k)\in\mathcal{N}_{xy}}\left(\bar{D}_{xy}(i,k)+\tilde{D}_{xy}^{o}(i,k)-\tilde{D}_{xy}^{n}(i,k)\right)^2\nonumber\\
%&=\sum_{(\x_i,\x_j)\in\mathcal{N}_{x}}\left(\bar{D}_{x}(i,j)+\frac{1}{M}(h_{o}(i)-h_{o}(j))^2-\frac{1}{M}(h_{n}(i)-h_{n}(j))^2\right)^2\nonumber\\
%&+\sum_{(\x_i,\y_k)\in\mathcal{N}_{xy}}\left(\bar{D}_{xy}(i,k)+\frac{1-2g(k)}{M}(h_{o}(i)-h_{n}(i))\right)^2+C,
%\end{align}
%where we have made use of $h_{o}(i)^2 = h_{o}(i)$ and $h_{n}(i)^2 = h_{n}(i)$ and grouped terms irrelevant to $h_{o},h_{n}$ into $C$. This completes the proof.
%\end{myproof}
%
%Now we move to the details of updating one element of $\W_{x}$, e.g., $W_{x}(m,q_0)$, with all the other elements in $\W_{x}$ fixed. Given a point $\x_i$, the $m$th hash code can be obtained by computing
%\begin{align}
%W_{x}(m,q_0)\kappa(\x_{q_0},\x_i)+\sum\nolimits_{q\neq q_0}W_{x}(m,q)\kappa(\x_{q},\x_i).
%\label{eqn:threshold-1bit}
%\end{align}
%Equating (\ref{eqn:threshold-1bit}) to zero, we can easily obtain the incremental value for $W_{x}(m,q_0)$ that can change the current bit of $\x_i$ as
%\begin{align}
% \delta_{i} = \left(\sum\nolimits_{q\neq q_0}W_{x}(m,q)\kappa(\x_{q},\x_{i})\right)/\kappa(\x_{q_0},\x_{i}) - W_{x}(m,q_0).
%\end{align}
%
%If $h_m(\x_i)>0$, we should decrease $W_{x}(m,q_0)$ to flip the hash code, in another words, $\delta_i<0$. On the contrary, if $h_m(\x_i)<0$, we should increase $W_{x}(m,q_0)$ to flip the hash code, that is, $\delta_i>0$. As a result, we first find all the $\delta_{i}$'s for all $\x_{i}$'s. Then we sort $\{\delta_i\mid\delta_i>0\}$ in ascending order and $\{\delta_i\mid\delta_i<0\}$ in descending order, and thus obtain two sets of intervals. It is easy to observe that, in a fixed interval, changing $W_{x}(m,q_0)$ will not affect the hash code of any point. However, if we go across intervals, the hash code of exactly one point will be changed. As a result, starting from the current value of $W_{x}(m,q_0)$, we first increase it by adding $\delta_i+\epsilon>0$ from the smallest one to the largest one to obtain a set of possible values of objective function~(\ref{eqn:totalobj}). Note that $\epsilon$ is a very small positive number ensuring that only the $i$th bit is flipped. We then decrease $W_{x}(m,q_0)$ by adding $\delta_i-\epsilon<0$ to the starting value from the largest one to the smallest one to obtain another set of possible objective values. In total, we obtain a set of $N$ possible objective values. After getting all these values, we update $W_x(m,q_{0})$ by adding $\delta_i$ corresponding to the smallest objective $\mathcal{O}_i$ if it is smaller than original objective $\mathcal{O}$ before updating, or skip this iteration otherwise.
%
%The main idea of updating $W_{x}(m,q_0)$ is to find $\delta_i$ leading to the smallest objective function value. We can compute the values sequentially in an efficient way based on Lemma~\ref{lemma:updateh}.
%\begin{mylem}
%Given two hash vectors $\h_{t}$ and $\h_{t-1}$ for $\mathcal{X}$ which are different in only one position, the objective w.r.t. $\h_{t}$ can be computed from that w.r.t. $\h_{t-1}$ in $O(k)$ time.
%\label{lemma:updateh}
%\end{mylem}
%\begin{myproof}
%Let the index of the point in which $\h_{t}$ and $\h_{t-1}$ are different be $a$. The only terms that change in the objective are $(\x_a,\x_j)\in\mathcal{N}_{x},(\x_i,\x_a)\in\mathcal{N}_{x}$, and $(\x_a,\y_k)\in\mathcal{N}_{xy}$. Let $f_a = 1$ if $h_{t-1}(a)=0,h_{t}(a)=1$, and $f_a=-1$ otherwise. Therefore the relevant terms in the objective function as given in Lemma~\ref{lemma:updatex} may be written as
%\begin{align}
%\mathcal{O}'&=\sum_{(\x_a,\x_j)\in\mathcal{N}_{x}}\left(\bar{D}_{x}(a,j)-\frac{f_a}{M}(1-2h_{t}(j))\right)^2+\sum_{(\x_i,\x_a)\in\mathcal{N}_{x}}\left(\bar{D}_{x}(i,a)-\frac{f_a}{M}(1-2h_{t}(i))\right)^2\nonumber\\
%&+\sum_{(\x_a,\y_k)\in\mathcal{N}_{xy}}\left(\bar{D}_{xy}(a,k)-\frac{f_a}{M}(1-2g(k))\right)^2.
%\label{eqn:updateO-1bit}\end{align}
%
%Since $\x_{a}$ has $k$ nearest neighbors and lives in the neighborhood of $k$ points on average, it costs $O(k)$ time to update the objective.
%\end{myproof}
%
%We can update each element of $\W_{y}$ similarly with the help of the following two lemmas. %Due to lack of space, we omit the proof here.
%
%\begin{mylem}
%Let $\bar{D}_{y}(k,l)=d(\y_k,\y_l)-\tilde{d}(\y_k,\y_l),\bar{D}_{xy}(i,k)=d(\x_i,\y_k)-\tilde{d}(\x_i,\y_k)$. Consider updating one hash function of $\mathcal{Y}$ from $g_{o}$ to $g_{n}$, and let $\g_{o}$ and $\g_{n}$ be the $N\times 1$ vectors obtained by applying the old and new hash functions to each data point in $\mathcal{Y}$. We further denote the hash function of $\mathcal{X}$ with the same index as $h$ and the corresponding binary vector of $\mathcal{X}$ as $\h$. Then the objective function of using $g_{n}$ instead of $g_{o}$ can be expressed as
%\begin{align}
%\mathcal{O} &= \sum_{(\y_k,\y_l)\in\mathcal{N}_{y}}\left(\bar{D}_{y}(k,l)+\frac{1}{M}(g_{o}(k)-g_{o}(l))^2-\frac{1}{M}(g_{n}(k)-g_{n}(l))^2\right)^2\nonumber\\
%&+ \sum_{(\x_i,\y_k)\in\mathcal{N}_{xy}}\left(\bar{D}_{xy}(i,k)+\frac{1-2h(i)}{M}(g_{o}(k)-g_{n}(k))\right)^2+C',
%\end{align}
%where $C'$ is a constant independent of $g_{o}$ and $g_{n}$.
%\label{lemma:updatey}
%\end{mylem}
%
%\begin{mylem}
%Given two hash vectors $\g_{t}$ and $\g_{t-1}$ for $\mathcal{Y}$ which are different in only one position, the objective w.r.t. $\g_{t}$ can be computed from that w.r.t. $\g_{t-1}$ in $O(k)$ time.
%\label{lemma:updateg}
%\end{mylem}
%
%As a result, the general procedure of our algorithm can be summarized as follows. We first initialize model parameters $\W_{x}, \W_{y}$. Then we update each element of $\W_{x}$ based on Lemma~\ref{lemma:updatex}\&\ref{lemma:updateh}, and each element of $\W_{y}$ based on Lemma~\ref{lemma:updatey}\&\ref{lemma:updateg}. This updating procedure iterates until $\W_{x}, \W_{y}$ converge. We then use current values of $\W_{x}, \W_{y}$ as initialization and retrain the model to get better $\W_{x}, \W_{y}$. In our experiments, this warm-start approach is very effective, $\W_{x}, \W_{y}$ will converge very fast to a better local optimum. To update one element of $\W_{x}$ or $\W_{y}$, sorting $N$ incremental values $\delta_i$'s needs $O(N\log N)$ time, obtaining all objective function values needs $O(Nk)$ time and finding the smallest $\mathcal{O}_i$'s needs $O(N)$ time. Putting everything together, the time complexity of updating one element is $O(N\log N+Nk)$. As a result, one full iteration of updating $\W_{x}$ or $\W_{y}$ requires $O(MQN(\log N+k))$ time.
%
%%\begin{algorithm}
%%%\DontPrintSemicolon
%%%\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
%%%\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
%%\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
%%
%%\Input{$\mathcal{N}_{x}, \mathcal{N}_{y}, \mathcal{N}_{xy}$.}
%%\Output{$\W_{x}, \W_{y}$.}
%%\Begin{
%%Initialize $\W_{x}, \W_{y}$.
%%\While{NOT Converge}{
%%\For{$m=1$ to $M$}{ \For{$q=1$ to $Q$}{ Update $W_{x}(m,q)$.} }
%%\For{$m=1$ to $M$}{ \For{$q=1$ to $Q$}{ Update $W_{y}(m,q)$.} }
%%}}
%%\caption{General procedure of coordinate descent}
%%\label{algo:cmh}
%%\end{algorithm}
%
%Note that local convergence in a finite number of updates is guaranteed since each update will never increase the objective function value which is lower-bounded by zero. Therefore, the algorithm is efficient and can scale well even for large high-dimensional data sets.
%
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
%\section{Multimodal Latent Binary Embeddings}
%\label{smh:MLBE}
%
%Up to now, we have presented two models, namely, \mbox{SMH} and \mbox{MBRE}. To evaluate the data correlation, \mbox{SMH} requires paired or aligned input data which might not be easy to obtain. \mbox{MBRE} eliminates this constraint by directly finding a discrete embedding, so that the Hamming distance in the embedded space maximally approximates the original distance. In this section, we introduce an alternative multimodal hashing model to improve \mbox{SMH}, which is called \textit{multimodal latent binary embeddings} (\mbox{MLBE}), based on latent factor models. \mbox{MLBE} relates hash codes and observations of similarity, i.e., intramodel similarity and intermodel similarity, in a probabilistic model, and the hash codes can be learned easily by \mbox{MAP} estimation of the latent factors. %Among other things, \mbox{MLBE} can be easily extended to determine the proper length of hash codes.
%
%% the intramodel and intermodel similarities are generated based on latent binary factors and weighting matrices.
%
%\subsection{Model}
%
%In the following, we focus on the bimodel case but it is easy to extend \mbox{MLBE} to support multiple modalities. Assume we have binary latent factors for each modality, for example, $ \U \in \{+1,-1\}^{N\times K} $ for $ \X $ and $ \V \in \{+1,-1\}^{M\times K} $ for $ \Y $. Correspondingly, we also have two weighting matrices, $\W^{x} \in \mathbb{R}^{K\times K}$ and $ \W^{y} \in \mathbb{R}^{K\times K}$. The basic assumption of our model is that the observations of intramodel and intermodel similarities are determined by the latent factors and weighting matrices. The graphical representation of \mbox{MLBE} is depicted in Figure~\ref{fig:model}.
%
%\begin{figure}[tb]
%\centering
%\epsfig{figure=fig/mlbe/graphmodel, width=0.4\textwidth}
%\caption{Graphical representation of the model of multimodal latent binary embeddings. The shaded circles are observed variables and the empty ones are latent variables.}
%\label{fig:model}
%\end{figure}
%
%
%Given $ \U , \V , \W^{x} $ and $ \W^{y} $, the two symmetric intramodel similarity matrices $ \S^{x} \in \mathbb{R}^{N\times N}$ for $ \X $ and $ \S^{y} \in \mathbb{R}^{M\times M}$ for $ \Y $ are generated from the following distributions, respectively:
%$$S^{x}_{ij} \mid \U, \W^{x} \sim \mathcal{N}(\u_i^T\W^{x}\u_j,\theta_x^2 ), \ \ \forall i \ge j, \ i,j\in\{1,\cdots,N\}, $$
%$$S^{y}_{ij} \mid \V, \W^{y} \sim \mathcal{N}(\v_i^T\W^{y}\v_j,\theta_y^2 ), \ \ \forall i \ge j, \ i,j\in\{1,\cdots,M\}, $$
%where $ \u_i $ and $ \u_j $ denote the $ i $th row and $ j $th row of $ \U $. Similarly, $ \v_i $ and $ \v_j $ denote the $ i $th row and $ j $th row of $ \V $.
%
%We also observe a intermodel similarity matrix $ \S^{xy} \in \{1,0\}^{N\times M}$, where 1 and 0 stand for similar and dissimilar, respectively. For example, if an image and a text document are both for a historic event, we label them with 1. If they are irrelevant, we label them with 0. Note that it is quite common and easy to define intermodel similarity using binary values $ \{1,0\} $ in practice, but our model can also accommodate other values by simply changing the distribution. We further assume only a subset of the intermodel similarity values are observed and use an indicator matrix $ \O\in \{0,1\}^{N\times M} $ to denote this, i.e., $ O_{ij}=1 $ if $ S_{ij}^{xy} $ is observed and $ O_{ij}=0 $ otherwise. Given $ \U $ and $ \V $, the observed elements in $ \S^{xy} $ are generated by
%$$S^{xy}_{ij} \mid \U, \V \sim \mbox{Bernoulli}(\sigma(\u_i^{T}\v_j)),\ \ \forall i,j, \ O_{ij}=1,$$
%where $ \sigma(x) = 1/(1+\exp(-x))$ is the logistic sigmoid function.
%
%Assume each element in $ \U\in\{+1,-1\}^{N\times K} $ is determined identically and independently the following way,\footnote{Conventional the Bernoulli distribution is for $ \{0,1\} $ valued variables. Here, without loss of generality, we can map them to $ \{-1,+1\} $ by linear transformation.}
%\begin{align}
%\pi \mid \alpha_u,\beta_u &\sim \mbox{Beta}(\alpha_u,\beta_u),\nonumber\\
%U_{ik} \mid \pi &\sim \mbox{Bernoulli}(\pi),\nonumber
%\end{align}
%where $ \alpha_u $ and $ \beta_u $ are hyperparameters, we can integrate out $ \pi $ to give the following prior on $ \U $:
%\begin{align}
%U_{ik} \mid \alpha_u,\beta_u \sim \mbox{Bernoulli}(\frac{\alpha_u}{\alpha_u+\beta_u}), \ \ \forall i\in\{1,\cdots,N\}, \ k\in\{1,\cdots,K\}.\nonumber
%\end{align}
%
%Similarly, we define the prior on $ \V \in\{+1,-1\}^{M\times K} $ as
%\begin{align}
%V_{ik} \mid \alpha_v,\beta_v \sim \mbox{Bernoulli}(\frac{\alpha_v}{\alpha_v+\beta_v}), \ \
%\forall i\in\{1,\cdots,M\}, \ k\in\{1,\cdots,K\}.\nonumber
%\end{align}
%
%%The prior terms for $ \U \in \{+1,-1\}^{N\times K}$ and $ \V \in \{+1,-1\}^{M\times K} $ are from ~\cite{griffiths2006nips}:
%%$$\Pr(\U) = \prod_{k=1}^K\frac{\frac{\alpha}{K}\Gamma(N_k+\frac{\alpha}{K})\Gamma(N-N_k+1)}{\Gamma(N+1+\frac{\alpha}{K})}$$
%%and
%%$$\Pr(\V) = \prod_{k=1}^K\frac{\frac{\beta}{K}\Gamma(M_k+\frac{\beta}{K})\Gamma(M-M_k+1)}{\Gamma(M+1+\frac{\beta}{K})},$$
%%where $ N_k = \sum_{i=1}^{N}\delta(U_{ik}=1) $ and $ M_k = \sum_{i=1}^{M}\delta(V_{ik}=1) $ are the number of $ 1 $'s in the $ k $th column of $ \U $ and $ \V $, respectively.
%
%For $ \X $, the entries of the symmetric weight matrix $ \W^{x}\in\mathbb{R}^{K\times K} $ are generated identically and independently by a standard Gaussian distribution:
%$$\W^{x}_{ij} \mid \phi_x^2 \sim \mathcal{N}(0,\phi_x^2 ), \ \ \forall i\ge j, \ i,j\in\{1,\cdots,K\}.$$
%We put a similar prior on $ \W^{y}\in\mathbb{R}^{K\times K} $ for $ \Y $:
%$$\W^{y}_{ij} \mid \phi_y^2 \sim \mathcal{N}(0,\phi_y^2 ), \ \ \forall i\ge j, \ i,j\in\{1,\cdots,K\}.$$
%%We put simple matrix Gaussian prior on $ \W_x $ and $ \W_y $, which can be written as:
%%$$\Pr(\w_{x}) = \mathcal{N}(\0,\phi_x\I ), \w_{x} = \W_x(:)$$
%%$$\Pr(\w_y) = \mathcal{N}(\0,\phi_y\I ), \w_y = \W_y(:)$$
%
%\subsection{Algorithm}
%
%Based on the observations, we can learn the parameters $ \U $ and $ \V $ to give the hash codes. But finding exact posterior distributions of $ \U $ and $ \V $ is intractable, as a result, we adopt an alternating algorithm to find an \mbox{MAP} estimation of $ \U , \V ,\W^x $ and $ \W^y $.
%
%We first update $ U_{ik} $ while fixing the others. To decide the \mbox{MAP} estimation of $ U_{ik} $, we first define a loss function with respect to $ U_{ik}$ as in Definition~\ref{def:lossu}:
%
%% $ Let $ \u_i $ be the $ i $th row of $ \U $, $\w_{x} = \W_x(:) $ and $ \s^{x}_{i} = \S_x(:,i) $, we denote $ \A_i = \mbox{kron}(\u_i, \U) $ and have $ \Pr(\s^{x}_{i}\mid \A,\w_{x}) = \mathcal{N}(\A_i\w_{x},\theta_x\I) $. The loss function of updating one element $ \U_{ik} $:
%
%\begin{mydef}
%\begin{align}
%\mathcal{L}_{U_{ik}} &=\log\frac{\alpha_u}{\beta_u}-\frac{1}{2\theta_{x}^2}\sum_{j\neq i}^{N}\left[-2 S^{x}_{ij} \u_j^T\W^x(\u_{i}^{+} - \u_{i}^{-}) - \u_j^T\W^x(\u_{i}^{+}{\u_{i}^{+}}^{T}-\u_{i}^{-}{\u_{i}^{-}}^{T})\W^x\u_j\right]\nonumber\\
%&+\sum_{j=1}^{M}O_{ij}\left[S_{ij}^{xy}\log \frac{\sigma_{ij}^{+}}{\sigma_{ij}^{-}} + (1-S_{ij}^{xy})\log \frac{1-\sigma_{ij}^{+}}{1-\sigma_{ij}^{-}}\right],
%\end{align}
%where $ U_{-ik} $ denotes all the elements in $ \U $ but $ U_{ik} $, $ \s^{x}_i $ denotes the $ i $th row of $ \S^{x} $, $ \u^{+}_i $ is the $ i $th row of $ \U $ with $ U_{ik}=1 $ and $ \u^{-}_i $ is the $ i $th row of $ \U $ with $ U_{ik}=-1 $. We further define $ \sigma^{+}_{ij} = \sigma(\v_j^T\u_i^{+}) $ and $ \sigma^{-}_{ij} = \sigma(\v_j^T\u_i^{-}) $.
%\label{def:lossu}\end{mydef}
%
%Then we have the following lemma:
%\begin{mylem}
%The \mbox{MAP} solution of $ U_{ik} $ is $ U_{ik}=1 $ if $ \mathcal{L}_{U_{ik}}>0 $ and $ U_{ik}=-1 $ otherwise.
%\label{lemma:updateu}\end{mylem}
%
%\begin{myproof}
%To get the \mbox{MAP} estimation of $ U_{ik} $, we only need to compare the two posterior probabilities $ \Pr(U_{ik}=1) $ and $ \Pr(U_{ik}=-1) $ conditioned on the observations and all the other model parameters. Specifically, we compute the log ratio of the two probabilities which is larger than zero if $ \Pr(U_{ik}=1) > \Pr(U_{ik}=-1) $ and smaller than zero otherwise. The log ratio can be evaluated as follows:
%\begin{align}
% & \log \frac{\Pr(U_{ik} = 1\mid U_{-ik},\V , \W_x, \S^{x}, \S^{xy})}{\Pr(U_{ik} = -1\mid U_{-ik},\V , \W_x, \S^{x}, \S^{xy})}\nonumber\\
%=& \log \frac{\Pr(U_{ik} = 1\mid \alpha,\beta)}{\Pr(U_{ik} = -1\mid \alpha,\beta)}
%+\log \frac{\Pr(\s^{x}_i\mid U_{ik}=1, U_{-ik}, \W^{x})}{\Pr(\s^{x}_i\mid U_{ik}=-1, U_{-ik}, \W^{x})}\nonumber\\
%+&\log \frac{\Pr(\S^{xy}\mid U_{ik}=1, U_{-ik}, \V)}{\Pr(\S^{xy}\mid U_{ik}=-1, U_{-ik}, \V)}\nonumber\\
%=&\log\frac{\alpha_u}{\beta_u}-\frac{1}{2\theta_{x}^2}\sum_{j\neq i}^{N}\left[-2 S^{x}_{ij} \u_j^T\W^x(\u_{i}^{+} - \u_{i}^{-})\right]\nonumber\\
%-&\frac{1}{2\theta_{x}^2}\sum_{j\neq i}^{N}\left[\u_j^T\W^x(\u_{i}^{+}{\u_{i}^{+}}^{T}-\u_{i}^{-}{\u_{i}^{-}}^{T})\W^x\u_j\right]\nonumber\\
%+&\sum_{j=1}^{M}O_{ij}\left[S_{ij}^{xy}\log \frac{\sigma_{ij}^{+}}{\sigma_{ij}^{-}} + (1-S_{ij}^{xy})\log \frac{1-\sigma_{ij}^{+}}{1-\sigma_{ij}^{-}}\right],
%%-\frac{1}{\theta_x}\left[{\s^{x}_i}^T (\A^{-}_{i} - \A^{+}_{i} )\w_{x}\right]\nonumber\\
%%&-\frac{1}{2\theta_x}\left[\w_{x}^T({\A^{+}_{i}}^{T}\A^{+}_{i} - {\A^{-}_{i}}^{T}\A^{-}_{i})\w_{x}\right]\nonumber\\
%%&-\frac{1}{\mu}\sum_{i,j}I_{ij}\left[S^{xy}_{ij}(\sigma^{-}_{ij}-\sigma^{+}_{ij})+\frac{1}{2}({\sigma^{+}_{ij}}^2-{\sigma^{-}_{ij}}^2)\right],
%\label{eqn:lossu}\end{align}
%where $ U_{-ik} $ denotes all the elements in $ \U $ but $ U_{ik} $, $ \s^{x}_i $ denotes the $ i $th row of $ \S^{x} $, $ \u^{+}_i $ is the $ i $th row of $ \U $ with $ U_{ik}=1 $ and $ \u^{-}_i $ is the $ i $th row of $ \U $ with $ U_{ik}=-1 $. We further define $ \sigma^{+}_{ij} = \sigma(\v_j^T\u_i^{+}) $ and $ \sigma^{-}_{ij} = \sigma(\v_j^T\u_i^{-}) $.
%
%The log ratio computed in Eqn.~(\ref{eqn:lossu}) gives exactly $ \mathcal{L}_{U_{ik}} $, hence the proof is completed.
%\end{myproof}
%
%%The details can be found in Appendix.
%
%%We group all the terms irrelevant to $ U_{ik} $ in $ C $.
%
%%$ N_{-ik} = \sum_{j\neq i}\delta(U_{jk}=1)$ is the number of $ +1 $ in $ k $th column and all rows but the $ i $th row and $ I_{ij}=1 $ if $ \S^{xy}_{ij} $ is observed and $ I_{ij}= 0 $ otherwise. We define $ \A^{+}_{i} = \mbox{kron}(\u_i,\U ) $ and $ \sigma^{+}_{ij} = \sigma(\u_i^T\v_j) $ with $ U_{ik}=1 $. We define $ \A^{-}_{i} = \mbox{kron}(\u_i,\U ) $ and $ \sigma^{-}_{ij} = \sigma(\u_i^T\v_j) $ with $ U_{ik}=-1 $.
%
%%
%%$ (\hat{\u}_1-\hat{\u}_2)\w_x\S_x\nonumber\\&+(\hat{\u}_1(\W_x^T\W_x))(\hat{\u}_1-\hat{\u}_2)\nonumber\\& + (\hat{\u}_2(\W_x^T\W_x))(\hat{\u}_1-\hat{\u}_2)\nonumber\\& +\sum_{j}I_{ij}(S_{ij}-\sigma(\u_i^{T}\v_j)q)^2 $
%%We can easily evaluate loss function~(\ref{eqn:loss_u}) and set
%%\begin{align}
%%U_{ik} = \left\{ \begin{array}{ll}
%%+1 & \mathcal{L}_{U_{ik}}>0\\
%%-1 & \mbox{otherwise}
%%\end{array} \right.
%%\end{align}
%
%Similarly, we have Definition~\ref{def:lossv} and Lemma~\ref{lemma:updatev} for \mbox{MAP} estimation of $ \V $. %Due to space limitations, we omit the proof here.
%
%\begin{mydef}
%\begin{align}
%\mathcal{L}_{V_{ik}} &=\log\frac{\alpha_v}{\beta_v}-\frac{1}{2\theta_{y}^2}\sum_{j\neq i}^{N}\left[-2 S^{y}_{ij} \v_j^T\W^y(\v_{i}^{+} - \v_{i}^{-}) - \v_j^T\W^y(\v_{i}^{+}{\v_{i}^{+}}^{T}-\v_{i}^{-}{\v_{i}^{-}}^{T})\W^y\v_j\right]\nonumber\\
%&+\sum_{j=1}^{N}O_{ji}\left[S_{ji}^{xy}\log \frac{\sigma_{ji}^{+}}{\sigma_{ji}^{-}} + (1-S_{ji}^{xy})\log \frac{1-\sigma_{ji}^{+}}{1-\sigma_{ji}^{-}}\right],
%\end{align}
%where $ V_{-ik} $ denotes all the elements in $ \V $ but $ V_{ik} $, $ \s^{y}_i $ denotes the $ i $th row of $ \S^{y} $, $ \v^{+}_i $ is the $ i $th row of $ \V $ with $ V_{ik}=1 $ and $ \v^{-}_i $ is the $ i $th row of $ \V $ with $ V_{ik}=-1 $. We further define $ \sigma^{+}_{ji} = \sigma(\u_j^T\v_i^{+}) $ and $ \sigma^{-}_{ji} = \sigma(\u_j^T\v_i^{-}) $.
%\label{def:lossv}\end{mydef}
%
%
%\begin{mylem}
%The \mbox{MAP} solution of $ V_{ik} $ is $ V_{ik}=1 $ if $ \mathcal{L}_{V_{ik}}>0 $ and $ V_{ik}=-1 $ otherwise.
%\label{lemma:updatev}\end{mylem}
%
%When fixing $ \U , \V $ and $ \W^{y} $, we compute the \mbox{MAP} estimation of $ \W^{x} $ by maximizing the following loss function:
%\begin{align}
%\mathcal{L}_{\W^{x}}&= \log P(\W^{x}) + \log P(\S^{x}_{h}\mid \U ,\W^{x})\nonumber\\
%&=\sum_{ i\ge j}^{K}\sum_{ j=1}^{K}-\frac{{W^{x}_{ij}}^2}{2\phi_x^2} + \sum_{ i > j}^{N}\sum_{ j=1}^{N}-\frac{1}{2\theta_x^2}(S^{x}_{ij}-\u_i^T\W^x\u_j)^2\nonumber\\
%&=-\frac{1}{4\phi_x^2}\w_{x}^T(\I + \mbox{diag}(\m) )\w_{x}
%-\frac{1}{2\theta_x^2}\left[(\s^{x}_h-\A_h\w_x)^T(\s^{x}_h-\A_h\w_x)\right]\nonumber\\
%&= -\frac{1}{2}\w_{x}^T \left(\A_h^T\A_h +\frac{\theta^2_x}{4\phi^2_x}\left(\I + \mbox{diag}(\m) \right) \right)\w_{x}
% + \w_{x}^T\A^T_h \s_h^x+C'
%\label{eqn:loss_wx}\end{align}
%where $ \w_x $ is a $ K^2 $-dimensional column vector taken column-wise from $ \W^x $, $ \m $ is a $ K^2 $-dimensional indicator vector in which the value should be 1 if the index corresponds to $ W^{x}_{ii},i=1,\cdots,K $ and 0 otherwise.
%Let $ \S^{x}_{h} $ denote the left-lower half of $ \S^{x} $ and its vector form be $ \s^{x}_h $. We define $ \A = \U\otimes \U $ and $ \A_h$ consists of the rows corresponding to $ S^x_{ij}, i>j $. We group all the terms irrelevant to $ \W^{x} $ in $ C' $.\footnote{Here we have used a property of Kronnecker multiplication: $ \u^T \W \v = \w^T(\u\otimes\v) $ where $ \w $ is a column-wise vector of $ \W $ if $ \W $ is a symmetric matrix.}
%
%% $ \s^x = \S_x(:) $, we have $ \Pr(\s^x\mid \A,\w_{x}) = \mathcal{N}(\A\w_{x},\theta_x\I) $. The loss function of updating $ \w_{x} $ is:
%%\begin{align}
%%\mathcal{L}_x = -\frac{1}{2}\w_{x} (\A^T\A +\frac{\theta_x}{\phi_x}\I )\w_{x} + \s^x\A \w_{x},
%%\end{align}
%\begin{mylem}
%The \mbox{MAP} estimation of $\W^{x}$ can be evaluated by:
%\begin{align}
%\w_{x} =\left(\A_h^T\A_h +\frac{\theta^2_x}{4\phi^2_x}\left(\I + \mbox{diag}(\m) \right)\right)^{-1}\A_h^T \s^x. \nonumber
%\end{align}
%\label{lemma:updatewx}
%\end{mylem}
%
%Note that Lemma~\ref{lemma:updatewx} can be easily proved by setting the derivative of $ \mathcal{L}_{\W^{x}} $ with respect to $ \w_{x} $ to zero.\footnote{We can adopt gradient-based algorithms to find this global maximum, which may be much faster.} Similarly, we have Lemma~\ref{lemma:updatewy} for $ \W^{y} $.
%
%\begin{mylem}
%The \mbox{MAP} estimation of $\W^{y}$ can be evaluated by:
%\begin{align}
%\w_{y} =\left(\B_h^T\B_h +\frac{\theta^2_y}{4\phi^2_y}\left(\I + \mbox{diag}(\m) \right)\right)^{-1}\B_h^T \s^y,\nonumber
%\end{align}
%where $ \w_y $ is a $ k^2 $-dimensional column vector taken column-wise from $ \W^y $, $ \m $ is a $ k^2 $-dimensional indicator vector in which the value should be 1 if the index corresponds to $ W^{y}_{ii},i=1,\cdots,K $ and 0 otherwise.
%Let $ \S^{y}_{h} $ denote the left-lower half of $ \S^{y} $ and its vector form be $ \s^{y}_h $. We define $ \B = \V\otimes \V $ and $ \B_h$ consists of the rows corresponding to $ S^y_{ij}, i>j $.
%\label{lemma:updatewy}
%\end{mylem}
%
%%We can update $ \W^{y} $ similarly.\footnote{Updating $ \W^{x} $ and $ \W^{y} $ needs playing with a very large matrix $ \A_h$ which might not be handled in Matlab, so we use a small but sufficient reference set in $ \X $ and $ \Y $ to learn $ \W^{x} $ and $ \W^{y} $ and fix them to learn $ \U $ and $ \V $ for the whole database.}
%
%%Similarly, we have $ \w_y =(\B^T\B +\frac{\theta_y}{\phi_y}\I )^{-1}\B^T \s_y $, where $ \B = \mbox{kron}(\V, \V) , \w_y = \W_y(:) $ and $ \s^y = \S_y(:) $.
%
%
%%The algorithm should work as follows:
%%1 use training data to get Wx, Wy and U, V.
%%2 fix Wx, Wy and U, V for a reference set, we then paralelly update the U and V in the test set. Each update is conducted iteratively for the elements in U or V, should be converge very fast.
%%3 use the code to do retrieval.
%
%We summarize the algorithm of \mbox{MLBE} in Algorithm~\ref{algorithm:mlbe}. In our experiments, we use the log likelihood to determine the convergence.
%
%\begin{algorithm}[!t]
%%\DontPrintSemicolon
%%\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
%%\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
%\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
%\Input{$\S_{x}$, $\S_{y}$, $\S_{xy}$ -- similarity matrices
%\\ $\O_{xy}$ -- similarity matrices
%\\ $M$ -- number of hash functions
%\\ $\theta_x,\theta_y, \phi_x,\phi_y, \alpha_u,\alpha_v, \beta_u, \beta_v$ -- regularization parameters}
%\Begin{
%\textit{Training phase}:\\
% Initialize $ \U $ and $ \V $ with $ \{-1,+1\}$ of equal probability.
% \While{not converge}{
% Update each element of $ \W_x $ sequentially using Lemma~\ref{lemma:updatewx}.
% Update $ \U $ using Lemma~\ref{lemma:updateu}.
% Update each element of $ \W_y $ sequentially using Lemma~\ref{lemma:updatewy}.
% Update $ \V $ using Lemma~\ref{lemma:updatev}.
% }
%\textit{Testing phase}:\\
% Obtain hash codes of points $\x^{*}$ and $\y^{*}$ using Lemma~\ref{lemma:updateu} and Lemma~\ref{lemma:updatev}, respectively.
%}
%\caption{Algorithm of \mbox{MLBE}}
%\label{algorithm:mlbe}
%\end{algorithm}
%
%%\subsection{Complexity Analysis}
%%-------------------------------------------------------------------------
%
%%\section{Max margin multimodal hashing}
%%\label{smh:MMMH}
%%
%%In this section, we introduce a new model that utilize the idea of margin, which is equivalent to hinge loss. The key challenge is how to define margin in multimodal setting. And how to optimize. It will be the best if we can find some convex formulation. Nevertheless, we can use CCCP to achieve some global optimality. This should also be inspired from other embedding algorithms.
%
\section{Experiments}
\label{smh:exps}
We conduct several experiments to compare \mbox{SMH} and its extensions with other related methods. Through the experiments, we want to answer the following questions:
\begin{enumerate}
\item How does \mbox{SMH} perform when compared with other state-of-the-art hashing models on the crossmodal retrieval task?
\item How does \mbox{SMH} perform when compared with other state-of-the-art hashing models on the unimodal retrieval task?
\end{enumerate}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Data Sets}
\label{smh:exps:data}
In our experiments, we use two publicly available data sets that are, to the best of our knowledge, the only up-to-date large-scale public data sets involving multiple modalities.
The first data set, named \textit{Wiki}, is based on a set of Wikipedia featured articles provided by~\cite{rasiwasia2010mm}.\footnote{\url{http://www.svcl.ucsd.edu/projects/crossmodal/}} It contains a total of 2,866 documents (image-text pairs), each of which consists of an image and a text article. Each document is annotated with a label chosen from ten semantic classes. The data set has been split into a training set of 2,173 documents and a test set of 693 documents. The image representation scheme is based on the popular \textit{scale-invariant feature transform} (\mbox{SIFT})~\cite{lowe2004ijcv} with a codebook of 128 words. Each text article is represented by its probability distribution over the topics derived from a \textit{latent Dirichlet allocation} (\mbox{LDA}) model~\cite{blei2003jmlr} with ten topics.
The second data set, hereafter named \textit{\mbox{Flickr}}, is a subset of the NUS-WIDE database\footnote{\url{http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm}} which is based on images from \mbox{Flickr.com}~\cite{nus-wide-civr09}. We prune the original data set and keep only the points belonging to at least one of the ten largest classes. The data set contains a total of 186,577 image-text pairs, each of which belongs to at least one of ten possible labels (\aka concepts). The data set has been split into a training set of 185,577 pairs and a test set of 1,000 pairs. The images are represented by a $500$-dimensional \mbox{SIFT} representation. The text is represented by the occurrence counts of the 1,000 most frequently used tags associated with the image.
Some characteristics of the two data sets are summarized in Table~\ref{table:data}.
\begin{table}[!t]
% increase table row spacing, adjust to taste
% \renewcommand{\arraystretch}{1.3}
% if using array.sty, it might be a good idea to tweak the value of \extrarowheight as needed to properly center the text within the cells
\caption{Characteristics of Data Sets}\vspace{0.5cm}
\label{table:data}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Data set & $D_{x}$ & $D_{y}$ & \# of points &\# of classes\\
\hline
Wiki& 128& 10& 2,866 & 10\\
\hline
\mbox{Flickr}& 500& 1000& 186,577 &10\\
\hline
\end{tabular}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Experimental Settings}
\label{smh:exps:settings}
To mimic real multimedia retrieval systems, we consider two tasks in our experiments: crossmodal and unimodal retrieval. In crossmodal retrieval, the query and the database belong to different modalities; for example, an image is used as a query against a database of text articles. In unimodal retrieval, the query and the database belong to the same modality. For each task, we first train the models on the training set, and then use documents in the test set as queries and the training set as the database.
We use two evaluation measures in both cases, namely, \textit{mean average precision} (\mbox{MAP}) and precision at a fixed Hamming radius. \mbox{MAP} is a measure widely used by the information retrieval community~\cite{baeza1999book,rasiwasia2010mm}. Specifically, the \mbox{MAP} for a set of queries is the mean of the \textit{average precision} (\mbox{AP}) scores for each query, with
$$\textrm{AP} = \frac{1}{L}\sum\nolimits_{r=1}\nolimits^{N}P(r)\times\delta(r),$$
where $r$ is the rank position, $N$ is the number of retrieved documents, $\delta(r)$ is a binary function that returns 1 if the document at rank position $r$ is relevant\footnote{In the experiments, an image is relevant to a text if they share the same class label and vice versa.} to the query and 0 otherwise, $P(r)$ is the precision at position $r$, and $L$ is the total number of relevant documents in the retrieved set. Geometrically, \mbox{MAP} is often regarded as the area under the precision-recall curve for a set of queries~\cite{turpin2006sigir}, so a larger value of \mbox{MAP} indicates better performance. To obtain the precision at Hamming radius $d$, we first retrieve all the documents whose Hamming distance to the query is at most $d$ and then compute the precision of the retrieved set. As with \mbox{MAP}, larger precision values indicate better performance. In all experiments, we evaluate \mbox{AP} up to rank position $r=100$ and set the Hamming radius to $d=2$.
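For completeness, the two measures can be computed as in the following sketch (function and variable names are ours; the ranking is assumed to be sorted by Hamming distance to the query). \mbox{MAP} is then the mean of the \mbox{AP} values over all queries.
\begin{verbatim}
# Sketch: AP at rank r and precision within Hamming radius d (names are ours).
import numpy as np

def average_precision(relevant, ranking, r=100):
    # relevant: boolean array over the database; ranking: indices sorted by distance.
    rel = relevant[ranking[:r]].astype(float)
    if rel.sum() == 0:
        return 0.0
    prec_at = np.cumsum(rel) / (np.arange(len(rel)) + 1)   # P(r) at each position
    return float((prec_at * rel).sum() / rel.sum())        # divide by L

def precision_at_radius(relevant, hamming_dist, d=2):
    hit = hamming_dist <= d                # all documents within radius d
    return float(relevant[hit].mean()) if hit.any() else 0.0
\end{verbatim}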
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Results} % of \mbox{SMH}
\label{smh:exps:results_smh}
In the following experiments, we randomly select $P=500$ data points from the training set as landmarks for \mbox{KSMH} and \mbox{RKSMH} and repeat the process ten times. Hence for these two methods, we report the average results with the corresponding standard deviations. Moreover, a linear kernel is used, the Laplacians are defined based on labels, and the parameters are set to $\kappa = 10^{-4}$, with $\gamma = 0.1$ for the \mbox{Wiki} data set and $\gamma=100$ for the \mbox{Flickr} data set. Besides, to reduce the computational cost on the \mbox{Flickr} data set, we use a subset of 5,000 instances from the training set to train the models, while the retrieval tasks are still conducted on the whole training set.
%%%%%%%%%%%%%%%%%%%
\subsubsection{Comparison for crossmodal retrieval}
\label{smh:exps:results:cross}
We first compare the four multimodal hashing methods, i.e., \mbox{CMSSH}, \mbox{SMH}, \mbox{KSMH} and \mbox{RKSMH}, for crossmodal retrieval. The results for different code lengths $M$ on the \mbox{Wiki} data set are reported in Tables~\ref{table:comp-wiki-cross-it}~\&~\ref{table:comp-wiki-cross-ti}, and those on the \mbox{Flickr} data set are reported in Tables~\ref{table:comp-flickr-cross-it}~\&~\ref{table:comp-flickr-cross-ti}.
\begin{table}[htb]\small
\caption{Performance comparison for Image-Text retrieval on \mbox{Wiki}}\label{table:comp-wiki-cross-it}\vspace{-0.5cm}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\toprule[1pt]\addlinespace[0pt]
\multirow{2}{*}{Method}& \multirow{2}{*}{Measure} & \multicolumn{3}{|c|}{Image query -- Text database} \\
\cline{3-5}%\addlinespace[0pt]\midrule[1pt]\addlinespace[0pt]
&&$M=4$&$M=8$&$M=16$\\
\hline
\multirow{2}{*}{CMSSH}&{MAP} & $0.1660 $ & $ 0.1640 $ &$ 0.1751$ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.1150 $ & $ 0.1501 $ & $ {\bf0.3487} $ \\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{SMH}&MAP & $0.1937 $ & $ 0.2290 $ & $ 0.2140$ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.1252 $ & $ {\bf 0.1640} $ & $ 0.2200$ \\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{KSMH}&MAP & $0.1909\pm 0.0032$ & ${\bf0.2194\pm0.0052}$ & $0.2177\pm 0.0058$ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & ${\bf0.1253\pm0.0008}$ & $0.1635\pm 0.0045$ & $0.2186\pm0.0130$\\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{RKSMH}&MAP & ${\bf 0.1918\pm0.0021}$ & $0.2189\pm0.0036$ & ${\bf0.2200\pm0.0046}$ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.1219\pm 0.0008 $ & $0.1633\pm0.0032$ & $0.2172\pm0.0084$ \\
\addlinespace[0pt]\bottomrule[1pt]
\end{tabular}
\end{center}
\end{table}
\begin{table}[htb]\small
\caption{Performance comparison for Text-Image retrieval on \mbox{Wiki}}\label{table:comp-wiki-cross-ti}\vspace{-0.5cm}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\toprule[1pt]\addlinespace[0pt]
\multirow{2}{*}{Method}& \multirow{2}{*}{Measure} & \multicolumn{3}{|c|}{Text query -- Image database}\\
\cline{3-5}%\addlinespace[0pt]\midrule[1pt]\addlinespace[0pt]
&&$M=4$&$M=8$&$M=16$\\
\hline
\multirow{2}{*}{CMSSH}&{MAP} &$ 0.1928 $&$ 0.1746 $& $ 0.1950 $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.1142$&$ 0.1389 $&$ 0.1320 $\\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{SMH}&MAP &${\bf 0.2208} $&$ {\bf0.2784} $&$ {\bf0.3494} $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} &$0.1258 $&$ {\bf0.1800} $&$ {\bf0.3047} $\\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{KSMH}&MAP & $0.2207\pm0.0039$& $ 0.2765\pm 0.0052 $&$ 0.3108\pm 0.0059 $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & ${\bf 0.1264\pm0.0013}$&$ 0.1782\pm 0.0071 $&$ 0.2780\pm 0.0118 $\\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{RKSMH}&MAP & $0.2088\pm0.0038$&$ 0.2559\pm 0.0065 $&$ 0.3171\pm 0.0075 $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} &$0.1228 \pm0.0008 $&$ 0.1740\pm 0.0045 $&$ 0.2774\pm 0.0106 $\\
\addlinespace[0pt]\bottomrule[1pt]
\end{tabular}
\end{center}
\end{table}
\begin{table}[htb]\small
\caption{Performance comparison for Image-Text retrieval on \mbox{Flickr}}\label{table:comp-flickr-cross-it}\vspace{-0.5cm}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\toprule[1pt]\addlinespace[0pt]
\multirow{2}{*}{Method}& \multirow{2}{*}{Measure} & \multicolumn{3}{|c|}{Image query -- Text database} \\
\cline{3-5}%\addlinespace[0pt]\midrule[1pt]\addlinespace[0pt]
&&$M=4$&$M=8$&$M=16$\\
\hline
\multirow{2}{*}{CMSSH}&{MAP} & $0.3723 $ & $ 0.3822 $ &$ 0.4100$ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.3458 $ & $ 0.3503 $ & $ 0.4104$ \\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{SMH}&MAP & $0.3463 $ & $ 0.3872 $ & $ 0.4159$ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.3451 $ & $ 0.4130 $ & $ 0.4100$ \\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{KSMH}&MAP & ${\bf0.4604 \pm 0.0116}$ & $ 0.4747 \pm 0.0136 $ & $ 0.4718\pm 0.0066$\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & ${\bf 0.3778 \pm0.0052} $ & $ 0.4148 \pm 0.0066 $ & $ 0.4390\pm 0.0111$ \\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{RKSMH}&MAP & $0.4482 \pm 0.0125 $ & $ {\bf 0.4782 \pm 0.0052} $ & $ {\bf 0.4865\pm 0.0037} $ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.3709 \pm0.0038 $ & $ {\bf 0.4185 \pm 0.0065}$ & $ {\bf 0.4690\pm 0.0094} $\\
\addlinespace[0pt]\bottomrule[1pt]
\end{tabular}
\end{center}
\end{table}
\begin{table}[htb]\small
\caption{Performance comparison for Text-Image retrieval on \mbox{Flickr}}\label{table:comp-flickr-cross-ti}\vspace{-0.5cm}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\toprule[1pt]\addlinespace[0pt]
\multirow{2}{*}{Method}& \multirow{2}{*}{Measure} & \multicolumn{3}{|c|}{Text query -- Image database}\\
\cline{3-5}%\addlinespace[0pt]\midrule[1pt]\addlinespace[0pt]
&&$M=4$&$M=8$&$M=16$\\
\hline
\multirow{2}{*}{CMSSH}&{MAP} &${\bf 0.4824} $&$ {0.4829} $& $ 0.4712 $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.3467 $&$ 0.3616 $&$ {\bf 0.5286 } $\\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{SMH}&MAP &$0.3590 $&$ 0.4056 $&$ 0.4520 $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} &$0.3447 $&$ {\bf 0.4346 } $&$ 0.4460 $\\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{KSMH}&MAP & $0.4683 \pm0.0146$&$ 0.4849 \pm 0.0119 $&$ 0.4860\pm 0.0102 $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $ {\bf 0.3839 \pm0.0054} $&$ 0.4260\pm 0.0073 $&$ 0.4537\pm 0.0114 $\\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{RKSMH}&MAP & $0.4610 \pm0.0066 $&$ {\bf 0.5013 \pm 0.0081} $&$ {\bf 0.5098 \pm 0.0050}$\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} &$0.3750 \pm0.0057 $&$ 0.4241 \pm 0.0082 $&$ 0.4751\pm 0.0141 $\\
\addlinespace[0pt]\bottomrule[1pt]
\end{tabular}
\end{center}
\end{table}
From the tables, we can see that all three \mbox{SMH} models outperform \mbox{CMSSH} by a large margin on both data sets. Among our three models, \mbox{RKSMH} performs the best on both data sets, indicating the effectiveness of \textit{Laplacian} regularization. We also note that \mbox{KSMH} achieves performance similar to that of \mbox{SMH} on the \mbox{Wiki} data set and better performance than \mbox{SMH} on the \mbox{Flickr} data set, showing that the kernel extension is quite useful.
%%%%%%%%%%%%%%%%%%%
\subsubsection{Comparison for uni-modal retrieval}
\label{smh:exps:results:uni}
In this section, we compare the four multimodal hashing methods and a well-known uni-modal hashing method, i.e., \mbox{SH}, for uni-modal retrieval. It should be noted that multimodal hashing algorithms learn hash functions from both modalities, whereas uni-modal hashing algorithms learn hash functions from only one modality. The results are summarized in Tables~\ref{table:comp-wiki-uni-ii} and~\ref{table:comp-wiki-uni-tt} for the \mbox{Wiki} data set, and Tables~\ref{table:comp-flickr-uni-ii} and~\ref{table:comp-flickr-uni-tt} for the \mbox{Flickr} data set.
\begin{table}[htb]\small
\caption{Performance comparison for image retrieval on \mbox{Wiki}}\label{table:comp-wiki-uni-ii}\vspace{-0.5cm}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\toprule[1pt]\addlinespace[0pt]
\multirow{2}{*}{Method}& \multirow{2}{*}{Measure} & \multicolumn{3}{|c|}{Image query -- Image database} \\
\cline{3-5}%\addlinespace[0pt]\midrule[1pt]\addlinespace[0pt]
&&$M=4$&$M=8$&$M=16$\\
\hline
\multirow{2}{*}{SH}&{MAP} & $0.1559$ & $0.1545$&$ 0.1552 $ \\
\cline{2-5}%
&{Precision} & $0.1084$ & $0.1084$ & $ 0.1083 $\\
\hline %\addlinespace[0pt]\midrule[0.8pt]\addlinespace[0pt]
\multirow{2}{*}{CMSSH}&{MAP} & $0.1640 $ & $ 0.1683 $ &$ 0.1743$ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.1139 $ & $ 0.1171 $ & $ 0.1294 $ \\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{SMH}&MAP & ${\bf 0.1773} $ & $ {\bf 0.1930} $ & $ {\bf 0.1903}$ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $ {\bf 0.1178} $ & $ {\bf 0.1352} $ & $ {\bf 0.1536}$ \\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{KSMH}&MAP & $0.1759 \pm 0.0014 $ & $0.1912\pm 0.0031 $ & $0.1892\pm 0.0019$ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.1173 \pm0.0004 $ & $ 0.1343 \pm 0.0011 $ & $ 0.1533\pm 0.0029$ \\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{RKSMH}&MAP & $0.1747 \pm0.0014 $ & $0.1848\pm 0.0020$ & $0.1884\pm0.0019$ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.1155 \pm0.0006 $ & $0.1324\pm 0.0007$ & $0.1512\pm0.0022$ \\
\addlinespace[0pt]\bottomrule[1pt]
\end{tabular}
\end{center}
\end{table}
\begin{table}[htb]\small
\caption{Performance comparison for text retrieval on \mbox{Wiki}}\label{table:comp-wiki-uni-tt}\vspace{-0.5cm}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\toprule[1pt]\addlinespace[0pt]
\multirow{2}{*}{Method}& \multirow{2}{*}{Measure} & \multicolumn{3}{|c|}{Text query -- Text database}\\
\cline{3-5}%\addlinespace[0pt]\midrule[1pt]\addlinespace[0pt]
&&$M=4$&$M=8$&$M=16$\\
\hline
\multirow{2}{*}{SH}&{MAP} & ${\bf 0.3068}$ & $0.3986$&$ 0.5590 $\\
\cline{2-5}%
&{Precision} & $0.1084$ &$0.1086$&$ 0.1721 $\\
\hline %\addlinespace[0pt]\midrule[0.8pt]\addlinespace[0pt]
\multirow{2}{*}{CMSSH}&{MAP} &$0.3024 $&$ 0.4737 $& $ 0.5364 $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.1739 $&$ 0.2752 $&$ 0.4179 $\\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{SMH}&MAP &$0.2850 $&$ 0.4461 $&$ 0.5563 $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} &$0.1358 $&$ 0.2648 $&$ 0.5704 $\\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{KSMH}&MAP & $0.3066 \pm0.0220 $&$ 0.4627\pm 0.0173 $&$ 0.5590\pm 0.0068 $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $ {\bf 0.1397 \pm0.0041 } $&$ 0.2742 \pm 0.0226 $&$ 0.5741\pm 0.0217 $\\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{RKSMH}&MAP & $0.2891 \pm0.0046 $&$ {\bf 0.5078\pm 0.0046} $&$ {\bf 0.5697\pm0.0041 } $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} &$0.1328 \pm0.0013 $&$ {\bf 0.2786 \pm 0.0144 } $&$ {\bf 0.5927 \pm 0.0143} $\\
\addlinespace[0pt]\bottomrule[1pt]
\end{tabular}
\end{center}
\end{table}
\begin{table}[htb]\small
\caption{Performance comparison for image retrieval on \mbox{Flickr}}\label{table:comp-flickr-uni-ii}\vspace{-0.5cm}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\toprule[1pt]\addlinespace[0pt]
\multirow{2}{*}{Method}& \multirow{2}{*}{Measure} & \multicolumn{3}{|c|}{Image query -- Image database} \\
\cline{3-5}%\addlinespace[0pt]\midrule[1pt]\addlinespace[0pt]
&&$M=4$&$M=8$&$M=16$\\
\hline
\multirow{2}{*}{SH}&{MAP} & $0.3743$ & $0.3780$ &$0.3793 $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.3449$ & $0.3449$ &$0.3449$ \\
\hline %\addlinespace[0pt]\midrule[0.8pt]\addlinespace[0pt]
\multirow{2}{*}{CMSSH}&{MAP} & $0.4064 $ & $ 0.4262 $ &$ 0.4304$ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.3396 $ & $ 0.3376 $ & $ 0.3424$ \\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{SMH}&MAP & $0.3753 $ & $ 0.4326 $ & $ 0.4388$ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.3450 $ & $ 0.3786 $ & $ 0.4075$ \\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{KSMH}&MAP & ${\bf 0.4498 \pm 0.0061 } $ & $ 0.4642 \pm 0.0051 $ & $ 0.4606\pm 0.0034$ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & ${\bf 0.3746 \pm0.0059 } $ & $ {\bf 0.4095 \pm 0.0103} $ & $ 0.4342\pm 0.0126$ \\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{RKSMH}&MAP & $0.4390 \pm0.0062 $ & $ {\bf 0.4726 \pm 0.0055} $ & $ {\bf 0.4783\pm 0.0029}$ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.3668 \pm0.0050 $ & $ 0.4020 \pm 0.0070 $ & $ {\bf 0.4403\pm 0.0082}$ \\
\addlinespace[0pt]\bottomrule[1pt]
\end{tabular}
\end{center}
\end{table}
\begin{table}[htb]\small
\caption{Performance comparison for text retrieval on \mbox{Flickr}}\label{table:comp-flickr-uni-tt}\vspace{-0.5cm}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\toprule[1pt]\addlinespace[0pt]
\multirow{2}{*}{Method}& \multirow{2}{*}{Measure} & \multicolumn{3}{|c|}{Text query -- Text database}\\
\cline{3-5}%\addlinespace[0pt]\midrule[1pt]\addlinespace[0pt]
&&$M=4$&$M=8$&$M=16$\\
\hline
\multirow{2}{*}{SH}&{MAP} & $0.3753$ & $0.3756$&$ 0.3752 $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.3449$ &$0.3449$&$0.3449$\\
\hline %\addlinespace[0pt]\midrule[0.8pt]\addlinespace[0pt]
\multirow{2}{*}{CMSSH}&{MAP} &$0.4762 $&$ 0.5197 $&$ {\bf 0.5832 } $ \\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & $0.3824 $&$ 0.3962 $&$ 0.4112 $\\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{SMH}&MAP &$0.3769 $&$ 0.4650 $&$ 0.5031 $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} &$ 0.3449 $&$ 0.3838 $&$ 0.4356 $\\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{KSMH}&MAP & $ {\bf 0.4866 \pm0.0135}$&$ 0.5132 \pm 0.0098 $&$ 0.5177\pm 0.0095 $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} & ${\bf 0.3839 \pm0.0062 } $&$ 0.4342 \pm 0.0121 $& $ 0.4760\pm 0.0112 $\\
\hline%\addlinespace[0pt]\midrule[0.5pt]\addlinespace[0pt]
\multirow{2}{*}{RKSMH}&MAP & $0.4723 \pm0.0153 $&$ {\bf 0.5245 \pm 0.0103 } $&$ 0.5441\pm 0.0068 $\\
\cline{2-5}%\addlinespace[0pt]\cmidrule[0.5pt]{2-5}\addlinespace[0pt]
&{Precision} &$0.3777 \pm 0.0036 $&$ {\bf 0.4394 \pm 0.0073 } $&$ {\bf 0.5117\pm 0.0127 } $\\
\addlinespace[0pt]\bottomrule[1pt]
\end{tabular}
\end{center}
\end{table}
Similar to the results for cross-modal retrieval, our models outperform \mbox{CMSSH} on both data sets, and the performance gap is larger on the \mbox{Flickr} data set. As expected, \mbox{RKSMH} achieves the best performance among our methods, and \mbox{KSMH} is better than \mbox{SMH}. Note that our methods also perform better than a state-of-the-art uni-modal hashing method, namely \textit{spectral hashing}, indicating that information from other modalities can help to learn good hash codes for uni-modal retrieval. As a result, \mbox{SMH}, and especially \mbox{RKSMH}, is also very useful for uni-modal retrieval systems.
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
\section{Conclusion}
\label{smh:conclusion}
In this chapter, we have proposed spectral multimodal hashing (\mbox{SMH}) under the framework of multimodal hashing, the goal of which is to perform similarity search on data of multiple modalities. \mbox{SMH} learns the hash codes through spectral analysis of the modality correlation. Experimental results show that our \mbox{SMH} model outperforms the state-of-the-art methods. %However, \mbox{SMH} has an apparent limitation, that is, it is only for the aligned data which may not be available in some applications.
In the future, we wish to relax the data alignment assumption of SMH and develop more general multimodal hashing methods. In addition, we would like to apply SMH to other applications such as multimodal medical image registration.%In the next chapter, we propose a new multimodal hashing model for graph data which is more general than aligned data.
| {
"alphanum_fraction": 0.6583904206,
"avg_line_length": 82.3485130112,
"ext": "tex",
"hexsha": "bbda188b19e357871dfa11146bb894e1c7b35066",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4043ea31f634669c46cc46318778e1a8317ca761",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "zhenyisx/paper",
"max_forks_repo_path": "thesis_zhen/TexFile/4_smh.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "4043ea31f634669c46cc46318778e1a8317ca761",
"max_issues_repo_issues_event_max_datetime": "2020-05-19T07:15:40.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-05-19T06:22:05.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "yzhen-li/paper",
"max_issues_repo_path": "thesis_zhen/TexFile/4_smh.tex",
"max_line_length": 1465,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4043ea31f634669c46cc46318778e1a8317ca761",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "yzhen-li/paper",
"max_stars_repo_path": "thesis_zhen/TexFile/4_smh.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 32296,
"size": 88607
} |
\input{preamble}
\title{Hardware Review}
\subtitle{Project: }
\author{Wacher Tim}
\begin{document}
\ \\[0cm]
\begin{tikzpicture}
\draw(0,0)--(\textwidth,0)--(\textwidth,-2cm)--(0,-2cm)--(0,0);
\node at (0.32\textwidth,-1){%
\begin{minipage}[t]{0.6\textwidth}
\Huge{\textsf{Hardware Review}}\\[0.8ex]
\large{\textsf{Project:}}
\end{minipage}
};
\node at (\textwidth+3mm,-1.25){%
\begin{minipage}[t]{0.8\textwidth}
\textsf{HW-Version:}\\[1.8mm]
\textsf{Reviewer:}\\[1.8mm]
\textsf{Date:}\\
\end{minipage}
};
\AddToShipoutPicture{%
\put(80,30){%
\begin{tikzpicture}
\draw(0,0.8cm)--(\textwidth,0.8);
\node at(140mm,4mm)[anchor=west]{\textsf{page \arabic{page}}};
\end{tikzpicture}
}}
\AddToShipoutPicture{%
\put(80,760){%
\begin{tikzpicture}
\draw(0,-0.8cm)--(\textwidth,-0.8);
\node[inner sep=0pt] (russell) at (1,0)
{\includegraphics[width=.08\textwidth]{img/icon.png}};
\end{tikzpicture}
}}
\draw (0.6\textwidth,0)--(0.6\textwidth,-2cm);
\draw (0.6\textwidth,-0.666cm)--(\textwidth,-0.666cm);
\draw (0.6\textwidth,-1.333cm)--(\textwidth,-1.333cm);
\draw (0.77\textwidth,0)--(0.77\textwidth,-2cm);
\end{tikzpicture}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Introduction
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
\subsection{Material}
\subsubsection{Comment}
\kommentarfeld[25]
\tickbox[Tickbox]
\requirement[1.1, {Lorem Ipsum is simply dummy text of the printing and
typesetting industry. Lorem Ipsum has been the industry's standard
dummy text ever since the 1500s, when an unknown printer took a galley
of type and scrambled it to make a type specimen book.}, Okay]
\newpage
\section{Lorem Ipsum}
Lorem Ipsum is simply dummy text of the printing and typesetting industry.
Lorem Ipsum has been the industry's standard dummy text ever since the 1500s,
when an unknown printer took a galley of type and scrambled it to make a type
specimen book. It has survived not only five centuries, but also the leap into
electronic typesetting, remaining essentially unchanged. It was popularised in
the 1960s with the release of Letraset sheets containing Lorem Ipsum passages,
and more recently with desktop publishing software like Aldus PageMaker
including versions of Lorem Ipsum.
\newpage
\section{Lorem Ipsum I}
Lorem Ipsum is simply dummy text of the printing and typesetting industry.
Lorem Ipsum has been the industry's standard dummy text ever since the 1500s,
when an unknown printer took a galley of type and scrambled it to make a type
specimen book. It has survived not only five centuries, but also the leap into
electronic typesetting, remaining essentially unchanged. It was popularised in
the 1960s with the release of Letraset sheets containing Lorem Ipsum passages,
and more recently with desktop publishing software like Aldus PageMaker
including versions of Lorem Ipsum.
\end{document}
| {
"alphanum_fraction": 0.7058020478,
"avg_line_length": 30.8421052632,
"ext": "tex",
"hexsha": "d2011068f7bfb1aea286a39f94f84a5d4c658ce0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cb47ff1d559942874bad6119e13c6059ca5f2be6",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "wht4/latex_templates",
"max_forks_repo_path": "review/template/review.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cb47ff1d559942874bad6119e13c6059ca5f2be6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "wht4/latex_templates",
"max_issues_repo_path": "review/template/review.tex",
"max_line_length": 79,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "cb47ff1d559942874bad6119e13c6059ca5f2be6",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "wht4/latex_templates",
"max_stars_repo_path": "review/template/review.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 866,
"size": 2930
} |
\PassOptionsToPackage{unicode=true}{hyperref} % options for packages loaded elsewhere
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[]{article}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provides euro and other symbols
\else % if luatex or xelatex
\usepackage{unicode-math}
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\usepackage{hyperref}
\hypersetup{
pdftitle={TadoConnector \textbar{} bfmb-tado-connector},
pdfborder={0 0 0},
breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{0}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
% set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\title{TadoConnector \textbar{} bfmb-tado-connector}
\date{}
\begin{document}
\maketitle
\hypertarget{class-tadoconnector}{%
\section{Class TadoConnector}\label{class-tadoconnector}}
Class TadoConnector. Extends the Connector class from the bfmb-base-connector
module.
\hypertarget{hierarchy}{%
\subsubsection{Hierarchy}\label{hierarchy}}
\begin{itemize}
\tightlist
\item
{Connector}
\begin{itemize}
\tightlist
\item
{TadoConnector}
\end{itemize}
\end{itemize}
\hypertarget{constructors-1}{%
\subsection{Constructors}\label{constructors-1}}
\protect\hypertarget{constructor}{}{}
\hypertarget{constructor}{%
\subsubsection{constructor}\label{constructor}}
\begin{itemize}
\tightlist
\item
new TadoConnector{(}{)}{:
}\href{_connector_.tadoconnector.html}{TadoConnector}
\end{itemize}
\begin{itemize}
\item
Overrides Connector.\_\_constructor
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L49}{connector.ts:49}
\end{itemize}
The constructor only calls the parent class constructor, passing the network
identification.
\hypertarget{returns-tadoconnector}{%
\paragraph{\texorpdfstring{Returns
\href{_connector_.tadoconnector.html}{TadoConnector}}{Returns TadoConnector}}\label{returns-tadoconnector}}
\end{itemize}
\hypertarget{properties-1}{%
\subsection{Properties}\label{properties-1}}
\protect\hypertarget{connections}{}{}
\hypertarget{protected-connections}{%
\subsubsection{\texorpdfstring{{Protected}
connections}{Protected connections}}\label{protected-connections}}
connections{:} {Array}{\textless{}}{Connection}{\textgreater{}}
Inherited from Connector.connections
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/ext/bfmb-base-connector.d.ts\#L5}{ext/bfmb-base-connector.d.ts:5}
\end{itemize}
\protect\hypertarget{deviceidrequiredmethods}{}{}
\hypertarget{private-deviceidrequiredmethods}{%
\subsubsection{\texorpdfstring{{Private}
deviceIdRequiredMethods}{Private deviceIdRequiredMethods}}\label{private-deviceidrequiredmethods}}
deviceIdRequiredMethods{:} {string}{{[}{]}}{ =~{[}"getMobileDevice",
"getMobileDeviceSettings"{]}}
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L28}{connector.ts:28}
\end{itemize}
This array is required for knowing which functions require the existence
of the device\_id attribute.
\protect\hypertarget{homeidrequiredmethods}{}{}
\hypertarget{private-homeidrequiredmethods}{%
\subsubsection{\texorpdfstring{{Private}
homeIdRequiredMethods}{Private homeIdRequiredMethods}}\label{private-homeidrequiredmethods}}
homeIdRequiredMethods{:} {string}{{[}{]}}{ =~{[}"getHome", "getWeather",
"getDevices", "getInstallations","getUsers", "getState",
"getMobileDevices", "getMobileDevice","getMobileDeviceSettings",
"getZones", "getZoneState","getZoneCapabilities","getZoneOverlay",
"getTimeTables","getAwayConfiguration", "getTimeTable",
"clearZoneOverlay","setZoneOverlay"{]}}
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L16}{connector.ts:16}
\end{itemize}
This array is required for knowing which functions require the existence
of the home\_id attribute.
\protect\hypertarget{name}{}{}
\hypertarget{protected-name}{%
\subsubsection{\texorpdfstring{{Protected}
name}{Protected name}}\label{protected-name}}
name{:} {string}
Inherited from Connector.name
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/ext/bfmb-base-connector.d.ts\#L4}{ext/bfmb-base-connector.d.ts:4}
\end{itemize}
\protect\hypertarget{powertemprequiredmethods}{}{}
\hypertarget{private-powertemprequiredmethods}{%
\subsubsection{\texorpdfstring{{Private}
powerTempRequiredMethods}{Private powerTempRequiredMethods}}\label{private-powertemprequiredmethods}}
powerTempRequiredMethods{:} {string}{{[}{]}}{ =~{[}"setZoneOverlay"{]}}
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L49}{connector.ts:49}
\end{itemize}
This array is required for knowing which functions require the existence
of the power, temperature and termination attributes.
\protect\hypertarget{timetableidrequiredmethods}{}{}
\hypertarget{private-timetableidrequiredmethods}{%
\subsubsection{\texorpdfstring{{Private}
timetableIdRequiredMethods}{Private timetableIdRequiredMethods}}\label{private-timetableidrequiredmethods}}
timetableIdRequiredMethods{:} {string}{{[}{]}}{ =~{[}"getTimeTable"{]}}
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L44}{connector.ts:44}
\end{itemize}
This array is required for knowing which functions require the existence
of the timetable\_id attribute.
\protect\hypertarget{zoneidrequiredmethods}{}{}
\hypertarget{private-zoneidrequiredmethods}{%
\subsubsection{\texorpdfstring{{Private}
zoneIdRequiredMethods}{Private zoneIdRequiredMethods}}\label{private-zoneidrequiredmethods}}
zoneIdRequiredMethods{:} {string}{{[}{]}}{ =~{[}"getZoneState",
"getZoneCapabilities","getZoneOverlay","getTimeTables",
"getAwayConfiguration", "getTimeTable","setZoneOverlay",
"clearZoneOverlay"{]}}
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L35}{connector.ts:35}
\end{itemize}
This array is required for knowing which functions require the existence
of the zone\_id attribute.
\hypertarget{methods-1}{%
\subsection{Methods}\label{methods-1}}
\protect\hypertarget{addconnection}{}{}
\hypertarget{addconnection}{%
\subsubsection{addConnection}\label{addconnection}}
\begin{itemize}
\tightlist
\item
addConnection{(}options{: }{any}, callback{: }{Function}{)}{: }{void}
\end{itemize}
\begin{itemize}
\item
Overrides Connector.addConnection
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L63}{connector.ts:63}
\end{itemize}
This method adds a TadoConnection object to the connector.
\hypertarget{parameters}{%
\paragraph{Parameters}\label{parameters}}
\begin{itemize}
\item ~
\hypertarget{options-any}{%
\subparagraph{\texorpdfstring{options:
{any}}{options: any}}\label{options-any}}
An untyped object. It requires the attributes \textbf{username}
and \textbf{password} to be valid. Those values are the login credentials
of Tadoº.
\item ~
\hypertarget{callback-function}{%
\subparagraph{\texorpdfstring{callback:
{Function}}{callback: Function}}\label{callback-function}}
Callback function which receives the results or the failure of the
task.
\end{itemize}
\hypertarget{returns-void}{%
\paragraph{\texorpdfstring{Returns
{void}}{Returns void}}\label{returns-void}}
\end{itemize}
\protect\hypertarget{callhttpapigetmethod}{}{}
\hypertarget{private-callhttpapigetmethod}{%
\subsubsection{\texorpdfstring{{Private}
callHttpApiGetMethod}{Private callHttpApiGetMethod}}\label{private-callhttpapigetmethod}}
\begin{itemize}
\tightlist
\item
callHttpApiGetMethod{(}connection{:
}\href{_connector_.tadoconnection.html}{TadoConnection}, options{:
}{any}, callback{: }{Function}{)}{: }{void}
\end{itemize}
\begin{itemize}
\item
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L147}{connector.ts:147}
\end{itemize}
This method performs the call to the Tado API.
\hypertarget{parameters-1}{%
\paragraph{Parameters}\label{parameters-1}}
\begin{itemize}
\item ~
\hypertarget{connection-tadoconnection}{%
\subparagraph{\texorpdfstring{connection:
\href{_connector_.tadoconnection.html}{TadoConnection}}{connection: TadoConnection}}\label{connection-tadoconnection}}
The connection to Tado api.
\item ~
\hypertarget{options-any-1}{%
\subparagraph{\texorpdfstring{options:
{any}}{options: any}}\label{options-any-1}}
An untyped object. It contains the parameters that the API
endpoint requires.
\item ~
\hypertarget{callback-function-1}{%
\subparagraph{\texorpdfstring{callback:
{Function}}{callback: Function}}\label{callback-function-1}}
Function which returns any result or error.
\end{itemize}
\hypertarget{returns-void-1}{%
\paragraph{\texorpdfstring{Returns
{void}}{Returns void}}\label{returns-void-1}}
Error May return an error object if there are issues with the
options object.
\end{itemize}
\protect\hypertarget{callhttpapiputmethod}{}{}
\hypertarget{private-callhttpapiputmethod}{%
\subsubsection{\texorpdfstring{{Private}
callHttpApiPutMethod}{Private callHttpApiPutMethod}}\label{private-callhttpapiputmethod}}
\begin{itemize}
\tightlist
\item
callHttpApiPutMethod{(}connection{:
}\href{_connector_.tadoconnection.html}{TadoConnection}, options{:
}{any}, callback{: }{Function}{)}{: }{void}
\end{itemize}
\begin{itemize}
\item
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L232}{connector.ts:232}
\end{itemize}
This method performs the call to the Tado API.
\hypertarget{parameters-2}{%
\paragraph{Parameters}\label{parameters-2}}
\begin{itemize}
\item ~
\hypertarget{connection-tadoconnection-1}{%
\subparagraph{\texorpdfstring{connection:
\href{_connector_.tadoconnection.html}{TadoConnection}}{connection: TadoConnection}}\label{connection-tadoconnection-1}}
The connection to Tado api.
\item ~
\hypertarget{options-any-2}{%
\subparagraph{\texorpdfstring{options:
{any}}{options: any}}\label{options-any-2}}
An untyped object. It contains the parameters that the API
endpoint requires.
\item ~
\hypertarget{callback-function-2}{%
\subparagraph{\texorpdfstring{callback:
{Function}}{callback: Function}}\label{callback-function-2}}
Function which returns any result or error.
\end{itemize}
\hypertarget{returns-void-2}{%
\paragraph{\texorpdfstring{Returns
{void}}{Returns void}}\label{returns-void-2}}
Error May return an error object if there are issues with the
options object.
\end{itemize}
\protect\hypertarget{getconnection}{}{}
\hypertarget{getconnection}{%
\subsubsection{getConnection}\label{getconnection}}
\begin{itemize}
\tightlist
\item
getConnection{(}id{: }{string}{)}{: }{Connection}
\end{itemize}
\begin{itemize}
\item
Inherited from Connector.getConnection
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/ext/bfmb-base-connector.d.ts\#L15}{ext/bfmb-base-connector.d.ts:15}
\end{itemize}
\hypertarget{parameters-3}{%
\paragraph{Parameters}\label{parameters-3}}
\begin{itemize}
\item ~
\hypertarget{id-string}{%
\subparagraph{\texorpdfstring{id:
{string}}{id: string}}\label{id-string}}
\end{itemize}
\hypertarget{returns-connection}{%
\paragraph{\texorpdfstring{Returns
{Connection}}{Returns Connection}}\label{returns-connection}}
\end{itemize}
\protect\hypertarget{getconnectionindex}{}{}
\hypertarget{getconnectionindex}{%
\subsubsection{getConnectionIndex}\label{getconnectionindex}}
\begin{itemize}
\tightlist
\item
getConnectionIndex{(}id{: }{string}{)}{: }{number}
\end{itemize}
\begin{itemize}
\item
Inherited from Connector.getConnectionIndex
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/ext/bfmb-base-connector.d.ts\#L13}{ext/bfmb-base-connector.d.ts:13}
\end{itemize}
\hypertarget{parameters-4}{%
\paragraph{Parameters}\label{parameters-4}}
\begin{itemize}
\item ~
\hypertarget{id-string-1}{%
\subparagraph{\texorpdfstring{id:
{string}}{id: string}}\label{id-string-1}}
\end{itemize}
\hypertarget{returns-number}{%
\paragraph{\texorpdfstring{Returns
{number}}{Returns number}}\label{returns-number}}
\end{itemize}
\protect\hypertarget{getme}{}{}
\hypertarget{getme}{%
\subsubsection{getMe}\label{getme}}
\begin{itemize}
\tightlist
\item
getMe{(}id{: }{string}, options{?: }{any}, callback{: }{Function}{)}{:
}{void}
\end{itemize}
\begin{itemize}
\item
Overrides Connector.getMe
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L84}{connector.ts:84}
\end{itemize}
This method calls the /me endpoint of the Tadoº API.
\hypertarget{parameters-5}{%
\paragraph{Parameters}\label{parameters-5}}
\begin{itemize}
\item ~
\hypertarget{id-string-2}{%
\subparagraph{\texorpdfstring{id:
{string}}{id: string}}\label{id-string-2}}
The uuid of the connection to do the call.
\item ~
\hypertarget{default-value-options-any}{%
\subparagraph{\texorpdfstring{{Default value} options: {any}{
=~\{\}}}{Default value options: any =~\{\}}}\label{default-value-options-any}}
An untyped object. Currently it is empty.
\item ~
\hypertarget{callback-function-3}{%
\subparagraph{\texorpdfstring{callback:
{Function}}{callback: Function}}\label{callback-function-3}}
Function which returns the response or an error from the connection.
\end{itemize}
\hypertarget{returns-void-3}{%
\paragraph{\texorpdfstring{Returns
{void}}{Returns void}}\label{returns-void-3}}
\end{itemize}
\protect\hypertarget{getname}{}{}
\hypertarget{getname}{%
\subsubsection{getName}\label{getname}}
\begin{itemize}
\tightlist
\item
getName{(}{)}{: }{string}
\end{itemize}
\begin{itemize}
\item
Inherited from Connector.getName
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/ext/bfmb-base-connector.d.ts\#L9}{ext/bfmb-base-connector.d.ts:9}
\end{itemize}
\hypertarget{returns-string}{%
\paragraph{\texorpdfstring{Returns
{string}}{Returns string}}\label{returns-string}}
\end{itemize}
\protect\hypertarget{receivemessage}{}{}
\hypertarget{receivemessage}{%
\subsubsection{receiveMessage}\label{receivemessage}}
\begin{itemize}
\tightlist
\item
receiveMessage{(}id{: }{string}, options{?: }{any}, callback{:
}{Function}{)}{: }{void}
\end{itemize}
\begin{itemize}
\item
Overrides Connector.receiveMessage
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L103}{connector.ts:103}
\end{itemize}
This method is the universal method for calling the get methods of the Tado
client module.
\hypertarget{parameters-6}{%
\paragraph{Parameters}\label{parameters-6}}
\begin{itemize}
\item ~
\hypertarget{id-string-3}{%
\subparagraph{\texorpdfstring{id:
{string}}{id: string}}\label{id-string-3}}
The uuid of the connection to do the call.
\item ~
\hypertarget{default-value-options-any-1}{%
\subparagraph{\texorpdfstring{{Default value} options: {any}{
=~\{\}}}{Default value options: any =~\{\}}}\label{default-value-options-any-1}}
An untyped object. It contains the parameters that the API
endpoint requires.
\item ~
\hypertarget{callback-function-4}{%
\subparagraph{\texorpdfstring{callback:
{Function}}{callback: Function}}\label{callback-function-4}}
Function which returns the response or an error from the connection.
\end{itemize}
\hypertarget{returns-void-4}{%
\paragraph{\texorpdfstring{Returns
{void}}{Returns void}}\label{returns-void-4}}
\end{itemize}
\protect\hypertarget{removeconnection}{}{}
\hypertarget{removeconnection}{%
\subsubsection{removeConnection}\label{removeconnection}}
\begin{itemize}
\tightlist
\item
removeConnection{(}id{: }{string}, callback{: }{Function}{)}{: }{void}
\end{itemize}
\begin{itemize}
\item
Inherited from Connector.removeConnection
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/ext/bfmb-base-connector.d.ts\#L17}{ext/bfmb-base-connector.d.ts:17}
\end{itemize}
\hypertarget{parameters-7}{%
\paragraph{Parameters}\label{parameters-7}}
\begin{itemize}
\item ~
\hypertarget{id-string-4}{%
\subparagraph{\texorpdfstring{id:
{string}}{id: string}}\label{id-string-4}}
\item ~
\hypertarget{callback-function-5}{%
\subparagraph{\texorpdfstring{callback:
{Function}}{callback: Function}}\label{callback-function-5}}
\end{itemize}
\hypertarget{returns-void-5}{%
\paragraph{\texorpdfstring{Returns
{void}}{Returns void}}\label{returns-void-5}}
\end{itemize}
\protect\hypertarget{sendmessage}{}{}
\hypertarget{sendmessage}{%
\subsubsection{sendMessage}\label{sendmessage}}
\begin{itemize}
\tightlist
\item
sendMessage{(}id{: }{string}, options{?: }{any}, callback{:
}{Function}{)}{: }{void}
\end{itemize}
\begin{itemize}
\item
Overrides Connector.sendMessage
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L189}{connector.ts:189}
\end{itemize}
This method is the universal method for calling the put/post methods of
the Tado client module.
\hypertarget{parameters-8}{%
\paragraph{Parameters}\label{parameters-8}}
\begin{itemize}
\item ~
\hypertarget{id-string-5}{%
\subparagraph{\texorpdfstring{id:
{string}}{id: string}}\label{id-string-5}}
The uuid of the connection to do the call.
\item ~
\hypertarget{default-value-options-any-2}{%
\subparagraph{\texorpdfstring{{Default value} options: {any}{
=~\{\}}}{Default value options: any =~\{\}}}\label{default-value-options-any-2}}
An untyped object. It contains the parameters that the API
endpoint requires.
\item ~
\hypertarget{callback-function-6}{%
\subparagraph{\texorpdfstring{callback:
{Function}}{callback: Function}}\label{callback-function-6}}
Function which returns the response or an error from the connection.
\end{itemize}
\hypertarget{returns-void-6}{%
\paragraph{\texorpdfstring{Returns
{void}}{Returns void}}\label{returns-void-6}}
\end{itemize}
\protect\hypertarget{verifyreceivemessagebaseoptions}{}{}
\hypertarget{private-verifyreceivemessagebaseoptions}{%
\subsubsection{\texorpdfstring{{Private}
verifyReceiveMessageBaseOptions}{Private verifyReceiveMessageBaseOptions}}\label{private-verifyreceivemessagebaseoptions}}
\begin{itemize}
\tightlist
\item
verifyReceiveMessageBaseOptions{(}options{: }{any}{)}{: }{Error}
\end{itemize}
\begin{itemize}
\item
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L121}{connector.ts:121}
\end{itemize}
This method verifies that the options object has the required attributes.
\hypertarget{parameters-9}{%
\paragraph{Parameters}\label{parameters-9}}
\begin{itemize}
\item ~
\hypertarget{options-any-3}{%
\subparagraph{\texorpdfstring{options:
{any}}{options: any}}\label{options-any-3}}
An untyped object. It contains the parameters that the API
endpoint requires.
\end{itemize}
\hypertarget{returns-error}{%
\paragraph{\texorpdfstring{Returns
{Error}}{Returns Error}}\label{returns-error}}
Error May return an error object if there are issues with the
options object.
\end{itemize}
\protect\hypertarget{verifysendmessagebaseoptions}{}{}
\hypertarget{private-verifysendmessagebaseoptions}{%
\subsubsection{\texorpdfstring{{Private}
verifySendMessageBaseOptions}{Private verifySendMessageBaseOptions}}\label{private-verifysendmessagebaseoptions}}
\begin{itemize}
\tightlist
\item
verifySendMessageBaseOptions{(}options{: }{any}{)}{: }{Error}
\end{itemize}
\begin{itemize}
\item
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L207}{connector.ts:207}
\end{itemize}
This method verifies that the options object has the required attributes.
\hypertarget{parameters-10}{%
\paragraph{Parameters}\label{parameters-10}}
\begin{itemize}
\item ~
\hypertarget{options-any-4}{%
\subparagraph{\texorpdfstring{options:
{any}}{options: any}}\label{options-any-4}}
An untyped object. It contains the parameters that the API
endpoint requires.
\end{itemize}
\hypertarget{returns-error-1}{%
\paragraph{\texorpdfstring{Returns
{Error}}{Returns Error}}\label{returns-error-1}}
Error May return an error object if there are issues with the
options object.
\end{itemize}
\hypertarget{class-tadoconnection}{%
\section{Class TadoConnection}\label{class-tadoconnection}}
TadoConnection is the class which holds the required information and the API
client. Extends the Connection class from the bfmb-base-connector module.
\hypertarget{hierarchy}{%
\subsubsection{Hierarchy}\label{hierarchy}}
\begin{itemize}
\tightlist
\item
{Connection}
\begin{itemize}
\tightlist
\item
{TadoConnection}
\end{itemize}
\end{itemize}
\hypertarget{constructors-1}{%
\subsection{Constructors}\label{constructors-1}}
\protect\hypertarget{constructor}{}{}
\hypertarget{constructor}{%
\subsubsection{constructor}\label{constructor}}
\begin{itemize}
\tightlist
\item
new TadoConnection{(}options{: }{any}{)}{:
}\href{_connector_.tadoconnection.html}{TadoConnection}
\end{itemize}
\begin{itemize}
\item
Overrides Connection.\_\_constructor
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L269}{connector.ts:269}
\end{itemize}
The constructor of TadoConnection. Username and password are given by
options object.
\hypertarget{parameters}{%
\paragraph{Parameters}\label{parameters}}
\begin{itemize}
\item ~
\hypertarget{options-any}{%
\subparagraph{\texorpdfstring{options:
{any}}{options: any}}\label{options-any}}
\end{itemize}
\hypertarget{returns-tadoconnection}{%
\paragraph{\texorpdfstring{Returns
\href{_connector_.tadoconnection.html}{TadoConnection}}{Returns TadoConnection}}\label{returns-tadoconnection}}
\end{itemize}
\hypertarget{properties-1}{%
\subsection{Properties}\label{properties-1}}
\protect\hypertarget{id}{}{}
\hypertarget{protected-id}{%
\subsubsection{\texorpdfstring{{Protected}
id}{Protected id}}\label{protected-id}}
id{:} {string}
Inherited from Connection.id
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/ext/bfmb-base-connector.d.ts\#L27}{ext/bfmb-base-connector.d.ts:27}
\end{itemize}
\protect\hypertarget{password}{}{}
\hypertarget{private-password}{%
\subsubsection{\texorpdfstring{{Private}
password}{Private password}}\label{private-password}}
password{:} {string}
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L265}{connector.ts:265}
\end{itemize}
Password of the Tadoº service.
\protect\hypertarget{tadoclient}{}{}
\hypertarget{private-tadoclient}{%
\subsubsection{\texorpdfstring{{Private}
tadoClient}{Private tadoClient}}\label{private-tadoclient}}
tadoClient{:} {any}
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L269}{connector.ts:269}
\end{itemize}
The node-tado-client class. Defined as \texttt{any} because TypeScript module
definitions do not exist for it.
\protect\hypertarget{username}{}{}
\hypertarget{private-username}{%
\subsubsection{\texorpdfstring{{Private}
username}{Private username}}\label{private-username}}
username{:} {string}
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L261}{connector.ts:261}
\end{itemize}
Username of the Tadoº service.
\hypertarget{methods-1}{%
\subsection{Methods}\label{methods-1}}
\protect\hypertarget{getclient}{}{}
\hypertarget{getclient}{%
\subsubsection{getClient}\label{getclient}}
\begin{itemize}
\tightlist
\item
getClient{(}{)}{: }{any}
\end{itemize}
\begin{itemize}
\item
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L301}{connector.ts:301}
\end{itemize}
This method retrieves the Tado client object.
\hypertarget{returns-any}{%
\paragraph{\texorpdfstring{Returns
{any}}{Returns any}}\label{returns-any}}
any Returns the client class.
\end{itemize}
\protect\hypertarget{getid}{}{}
\hypertarget{getid}{%
\subsubsection{getId}\label{getid}}
\begin{itemize}
\tightlist
\item
getId{(}{)}{: }{string}
\end{itemize}
\begin{itemize}
\item
Inherited from Connection.getId
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/ext/bfmb-base-connector.d.ts\#L31}{ext/bfmb-base-connector.d.ts:31}
\end{itemize}
\hypertarget{returns-string}{%
\paragraph{\texorpdfstring{Returns
{string}}{Returns string}}\label{returns-string}}
\end{itemize}
\protect\hypertarget{getpassword}{}{}
\hypertarget{getpassword}{%
\subsubsection{getPassword}\label{getpassword}}
\begin{itemize}
\tightlist
\item
getPassword{(}{)}{: }{string}
\end{itemize}
\begin{itemize}
\item
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L293}{connector.ts:293}
\end{itemize}
This method retrieves the password.
\hypertarget{returns-string-1}{%
\paragraph{\texorpdfstring{Returns
{string}}{Returns string}}\label{returns-string-1}}
string Returns the password.
\end{itemize}
\protect\hypertarget{getusername}{}{}
\hypertarget{getusername}{%
\subsubsection{getUsername}\label{getusername}}
\begin{itemize}
\tightlist
\item
getUsername{(}{)}{: }{string}
\end{itemize}
\begin{itemize}
\item
\begin{itemize}
\tightlist
\item
Defined in
\href{https://github.com/BFMBFramework/TadoConnector/blob/f05932b/src/connector.ts\#L285}{connector.ts:285}
\end{itemize}
This method retrieves the username.
\hypertarget{returns-string-2}{%
\paragraph{\texorpdfstring{Returns
{string}}{Returns string}}\label{returns-string-2}}
string Returns the username.
\end{itemize}
\end{document} | {
"alphanum_fraction": 0.7409196699,
"avg_line_length": 26.453358209,
"ext": "tex",
"hexsha": "0da0a829b1b67091a774fed43196ac20bba8e30c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "640c1682a7e468dda6cd84d0ee4f2e562845229a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "BFMBFramework/Docs",
"max_forks_repo_path": "Redaction/Template TFM/bfmb-tado-connector.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "640c1682a7e468dda6cd84d0ee4f2e562845229a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "BFMBFramework/Docs",
"max_issues_repo_path": "Redaction/Template TFM/bfmb-tado-connector.tex",
"max_line_length": 141,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "640c1682a7e468dda6cd84d0ee4f2e562845229a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "BFMBFramework/Docs",
"max_stars_repo_path": "Redaction/Template TFM/bfmb-tado-connector.tex",
"max_stars_repo_stars_event_max_datetime": "2018-02-24T16:37:52.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-02-24T16:37:52.000Z",
"num_tokens": 8465,
"size": 28358
} |
\input assets/380pre
\usepackage{minted}
\usepackage{hyperref}
\usepackage{cleveref}
\usepackage{lmodern}
\usepackage{placeins}
\begin{document}
\MYTITLE{Final Project: Advanced Topics in Data Management}
\MYHEADERS{}
\PURPOSE{DeeBee: Implementation of A Relational Database Management System}
\PLEDGE{}
\HANDIN{Friday, December 12th, 2014}
\ABSTRACT{DeeBee is a small relational database management system implemented for educational purposes. It implements a subset of the structured query language, enough to support simple database operations; and is designed for modularity, so that additional advanced database features can be added in the future.}
\tableofcontents
\listoflistings
\vfill
\pagebreak
\section{Introduction}
Relational database management systems (RDBMSs) are everywhere. The relational model is a model of data storage in which data is stored in \textit{tuples}, or rows, which are grouped together to form \textit{relations}, or tables~\cite{silberschatz2010database,harrington2009relational,garcia2000database}. The relational model is perhaps the most popular models of data storage currently in use, with Silberschatz, Korth, and Sudarshan calling it ``[t]he primary data model for commercial data-processing applications''~\cite[page 39]{silberschatz2010database}.
A majority of modern relational database management systems, from the SQLite embedded database in every Android phone and iPhone~\cite{sqliteFamous} to the MySQL databases used in many web applications~\cite{onLamp}, implement the \textit{Structured Query Language}, or SQL.\@ SQL is a \textit{query language}: a domain-specific declarative programming language that is used by database administrators, users, and application software to interact with a database~\cite{silberschatz2010database}.
In order to learn more about how such SQL databases function `under the hood', I have implemented my own small RDBMS, called DeeBee. DeeBee implements a small subset of SQL, chosen to be expressive enough to allow basic database operations to be performed but minimal enough to allow DeeBee to be implemented within the constraints of the Computer Science 380 Final Project. Implementing DeeBee has yielded many insights into the challenges, techniques, and patterns involved in the design and implementation of an RDBMS.
At the time of writing, the DeeBee codebase comprises almost 1700 lines of Scala source code and over 500 lines of comments. While this is small compared to many `real-world' systems, it still represents a significant undertaking for a single individual over a 24-day period. Therefore, I have made use of a number of software engineering tools and practices in order to best maximize my productivity. While these techniques are not the focus of this assignment, I will touch on them briefly as well.
DeeBee is released as open-source software under the MIT license. Current and past releases are available for download at \url{https://github.com/hawkw/deebee/releases}. The provided JAR file can be included as a library in projects which use the DeeBee API to connect to a DeeBee database; or it may be executed using the \textit{java -jar} command to interact with a DeeBee database from the command line. Finally, DeeBee's ScalaDoc API documentation is available at \url{http://hawkw.github.io/deebee/api/index.html#deebee.package}.
\section{Implementation}
\subsection{The Scala Programming Language}
DeeBee was implemented using the Scala programming language, an object-oriented functional programming language which runs on the Java virtual machine~\cite{odersky2004scala,odersky2004overview,odersky2008programming}. Scala was designed by Martin Odersky of the Programming Methods Laboratory at \'Ecole Polytechnique F\'ed\'erale de Lausanne with the intention of developing a highly scalable programming language, ``in the sense that the same concepts can describe small as well as large parts''~\cite{odersky2004scala} and in the sense that Scala should be applicable to tasks of various sizes and complexities, and provide high performance at large scales~\cite{odersky2008programming}.
Scala was inspired by criticisms of Java and by the recent popularity of functional programming languages such as Haskell. It aims to provide a syntax that is more expressive than that of Java but is still readily accessible to Java programmers. It is a statically-typed language and was developed with a focus on type safety, immutable data structures, and pure functions~\cite{odersky2004scala,odersky2004overview,odersky2008programming}. Because it compiles to Java bytecode and runs on the JVM, Scala is mutually cross-compatible with Java, meaning that Scala code can natively call Java methods and use Java libraries, and vice versa~\cite{odersky2008programming}.
A key concept in the Scala design is the idea of an `embedded domain-specific language (DSL)'. Essentially, this concept suggests that Scala's syntax should be modifiable to the extent that code for specific tasks can be expressed with its own syntax within Scala. These DSLs are still Scala code and can still be compiled by the Scala compiler, but their syntax differs based on the task they are intended for~\cite{ghosh2010dsls,hofer2008polymorphic,odersky2008programming}. The Scala parsing library (\Cref{sec:parsing}) and the ScalaTest testing framework (\Cref{par:test}) both provide examples of embedded DSLs.
Scala was chosen as the ideal language for DeeBee's implementation due to the expressiveness of its syntax, which allows complex systems to be implemented in few lines of code; its performance at scale; the existence of powerful libraries for text parsing (\Cref{sec:parsing}) and concurrent programming using the actors model (\Cref{sec:arch,sec:query}); and the cross-platform capabilities of the JVM.
\subsection{Tools}
DeeBee builds were conducted using SBT, the Simple Build Tool. SBT is a build tool similar to Maven or Gradle that is configured using a Scala-based domain-specific language. Like Maven and Gradle, it is capable of automatically managing dependencies using Ivy repositories~\cite{saxena2013getting}. SBT was chosen due to improved compatibility with Scala-focused plugins, its high-performance incremental Scala compiler, and my desire to try out a new build tool.
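To give a concrete impression of the SBT configuration DSL, the following is a minimal buildfile sketch. The version numbers and the exact dependency list are illustrative assumptions and do not reproduce DeeBee's actual buildfile; they merely show how a project of this kind would declare Akka, the parser-combinator library, and ScalaTest as dependencies.
\begin{minted}{scala}
// A minimal build.sbt sketch; names and versions are illustrative,
// not copied from DeeBee's actual buildfile.
name := "deebee"

version := "0.1.0"

scalaVersion := "2.11.4"

libraryDependencies ++= Seq(
  "com.typesafe.akka"      %% "akka-actor"                % "2.3.7",
  "org.scala-lang.modules" %% "scala-parser-combinators"  % "1.0.2",
  "org.scalatest"          %% "scalatest"                 % "2.2.1" % "test"
)
\end{minted}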
\paragraph{Continuous Integration}
Each push to the DeeBee GitHub repository was built using the Travis continuous integration service (\url{https://travis-ci.org/hawkw/deebee}), which ran tests, collected coverage information, generated and archived API documentation, and deployed to GitHub releases (when a new Git tag was pushed). Codacy (\url{https://www.codacy.com/public/hawkweisman/deebee}), a service which performs static analysis of a codebase and assesses it according to various quality criteria such as code complexity, code style, performance, and compatibility, was used to ensure that the codebase maintained an overall high quality.
\subsection{Testing}
\label{par:test}
Testing was conducted using the ScalaTest testing framework. ScalaTest provides an embedded DSL for conducting many different types of testing, including configuring test reports that automatically output specification information~\cite{vennersscalatest}. Three specifications were written to describe DeeBee's behaviour: an integration testing specification that deals with database-level behaviour, a specification for the SQL parser, and a specification for query processing at the table level.
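As an illustration of the ScalaTest DSL, a specification reads almost like prose: each clause names a behaviour and provides an assertion that verifies it. The example below is a deliberately simple sketch and is not taken from DeeBee's actual test suite.
\begin{minted}{scala}
import org.scalatest.{FlatSpec, Matchers}

// A minimal ScalaTest specification in the FlatSpec style; the subject
// under test (an immutable Vector) is chosen only for illustration.
class CollectionSpec extends FlatSpec with Matchers {

  "A Vector" should "preserve insertion order when appended to" in {
    (Vector(1, 2) :+ 3) shouldEqual Vector(1, 2, 3)
  }

  it should "report its size" in {
    Vector(1, 2, 3) should have size 3
  }
}
\end{minted}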
Code coverage data was collected using the Scoverage SBT plugin, and was archived on Coveralls (\url{https://coveralls.io/r/hawkw/deebee?branch=master}), a service which tracks coverage for open-source projects. Working with Coveralls caused some issues, as files that were not targets for code coverage analysis were tracked by Coveralls despite being excluded from coverage in the build file. This led to coverage reports that were consistently lower than the actual recorded coverage percentage.
\section{Architecture}
\label{sec:arch}
DeeBee makes use of the actor model for concurrent programming. In this model, actors are independent objects capable of carrying out specific operations and communicating through asynchronous, anonymous passing of immutable message objects. When an actor receives a message, that message enters its queue (called a \textit{mailbox}), and it is capable of responding to the message by sending a message to the sender, sending messages to other actors, changing its internal state, or carrying out some process~\cite{agha1985actors,haller2012integration,odersky2004scala}. Essentially, actors are finite-state machines with mailboxes.
The asynchronous and anonymous nature of actor message-passing makes actors a highly useful way to compose concurrent systems. The anonymity (an actor cannot access any fields or methods of another actor) and decoupled nature of actors allows for a high level of fault tolerance: if the process corresponding to one actor crashes, it has little to no effect on the rest of the system. Furthermore, since actors communicate through immutable message objects, actors can run on separate computers on a network, and communication overhead (both over the network and locally) is as low as possible. Finally, the actors model is almost entirely reactive in nature --- since computation happens only as a response to messages, the system only uses computational resources when fulfilling a request, and does not use CPU time while `idle'~\cite{agha1985actors,haller2012integration,odersky2004scala}. This property makes the actors model ideal for databases. In Scala, functionality related to actors is provided by the Akka framework.
DeeBee models each database as an actor, and each table in the database as a child actor. The database actor is responsible for receiving queries and handling them by dispatching DML statements to the target tables or by creating and deleting table actors in response to DDL statements. The table actors can respond to queries by updating their state, or by sending result sets to the querying connection.
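A heavily simplified sketch of this arrangement using Akka is shown below; the message types and actor behaviour here are hypothetical stand-ins for DeeBee's actual protocol.
\begin{verbatim}
import akka.actor.{Actor, ActorRef, Props}

// Hypothetical message protocol: immutable case classes passed between actors.
case class CreateTable(name: String)
case class Query(table: String, sql: String)

// One actor per table; it reacts to queries by updating state or replying.
class TableActor(name: String) extends Actor {
  def receive = {
    case Query(_, sql) => sender() ! s"result set for: $sql"
  }
}

// The database actor creates table actors and routes queries to them.
class DatabaseActor extends Actor {
  private var tables = Map.empty[String, ActorRef]
  def receive = {
    case CreateTable(name) =>
      tables += name -> context.actorOf(Props(new TableActor(name)), name)
    case q @ Query(table, _) =>
      tables.get(table).foreach(_ forward q)
  }
}
\end{verbatim}
Because the table actors are children of the database actor, creating and deleting tables in response to DDL statements amounts to creating and stopping child actors.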
\section{Query Processing}
DeeBee processes SQL queries by parsing the input query strings and generating an abstract syntax tree (AST) which represents the query. The AST is then interpreted against a context (typically a database or table) to determine the actions necessary to carry out the query. The query is then executed using the API of the target relations to carry out the necessary operations.
\subsection{Query Parsing}
\label{sec:parsing}
DeeBee's query parser was implemented using the Scala standard library's parser-combinators~\cite{moors2008parser} package. Combinator parsing represents a functional programming approach to text parsing. In this approach, a parser combinator is a higher-order function which takes as parameters two parsers (here defined as functions which accept some strings and reject others) and produces a new parser that combines the two input parsers according to some rule, such as sequencing, repetition, or disjunction. The repeated combination of smaller primitive parsers through various combinators constructs a recursive-descent parser for the specified language~\cite{moors2008parser,swierstra2001combinator,fokker1995functional,frost2008parser}.
Following the Scala philosophy of embedded domain-specific languages~\cite{ghosh2010dsls,hofer2008polymorphic,moors2008parser}, the Scala parsing library represents these combinators as symbols similar to those found in the Backus-Naur Form, a common symbolic representation of a language grammar. Using the Scala parser-combinators, then, is almost as simple as constructing the BNF for the language to be parsed.
The `Packrat Parser' class contained within the Scala parsing library enhances parser combinators with the addition of a memoization facility. This allows recursive-descent parsing of left-recursive grammars. It also improves performance, providing linear-time parsing for most grammars~\cite{jonnalagedda2009packrat}. I make liberal use of packrat parsers in order to take advantage of their improved performance.
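The following toy example, far smaller than DeeBee's actual grammar, illustrates the combinator style together with packrat parsers; the rule names and the \texttt{Select} AST node are illustrative, while the combinators themselves come from the Scala parsing library.
\begin{verbatim}
import scala.util.parsing.combinator.{JavaTokenParsers, PackratParsers}

// A toy recursive-descent parser for "SELECT a, b FROM t" built from combinators.
object TinySQL extends JavaTokenParsers with PackratParsers {

  case class Select(columns: List[String], table: String)

  // ~> sequences parsers and drops the left result, ~ sequences and keeps both,
  // | is alternation, repsep handles comma-separated lists, and ^^ maps the
  // parse result into an AST node. PackratParser adds memoization.
  lazy val select: PackratParser[Select] =
    "SELECT" ~> columns ~ ("FROM" ~> ident) ^^ {
      case cols ~ table => Select(cols, table)
    }

  lazy val columns: PackratParser[List[String]] =
    "*" ^^^ List("*") | repsep(ident, ",")

  def parseQuery(sql: String) = parseAll(select, sql)
}
\end{verbatim}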
Some difficulties were encountered in parsing, mostly related to the parsing of nested predicate expressions in SQL \texttt{WHERE} clauses. These issues were eventually resolved by separating the productions for `leaf' predicates (those consisting of a comparison between an attribute and an expression) and predicates consisting of multiple predicate expressions, and using the longest-match parser combinator to back-track in order to parse the entire \texttt{WHERE} clause.
\subsection{Query Interpretation}
\label{sec:query}
Queries are interpreted against a table using the \texttt{Relation} API. DeeBee defines a core trait for all tables, \texttt{Relation}, which defines a number of `primitive' operations on a table, such as filtering rows by a predicate, projecting a relation by selecting specific columns, and taking a fixed number of rows. These in turn rely on two abstract methods for accessing the table's attribute definitions and the set of the table's rows. These methods are implemented by each concrete class that implements \texttt{Relation}.
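A schematic sketch of this core trait is shown below; the member names and types paraphrase the description above rather than reproduce DeeBee's source.
\begin{verbatim}
// Schematic sketch of the Relation API (names and types are illustrative).
trait Relation {
  type Row = Seq[Any]

  // Abstract members supplied by each concrete storage backend.
  def attributes: Seq[String]
  def rows: Set[Row]

  // "Primitive" operations built on top of the two abstract members.
  def filter(predicate: Row => Boolean): Set[Row] = rows.filter(predicate)
  def project(columns: Seq[String]): Set[Row] = {
    val indices = columns.map(attributes.indexOf(_))
    rows.map(row => indices.map(row(_)))
  }
  // Takes an arbitrary subset of n rows (sets are unordered).
  def take(n: Int): Set[Row] = rows.take(n)
}
\end{verbatim}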
\texttt{Relation} is extended by two other traits, \texttt{Selectable} and \texttt{Modifyable}, which provide polymorphic functions for actually processing queries. These can be mixed in as needed to represent tables which support these functionalities.
Queries involving predicates, such as \texttt{SELECT} and \texttt{DELETE} statements with \texttt{WHERE} clauses, must go through an additional step of predicate construction, which converts SQL ASTs for \texttt{WHERE} clauses into Scala partial functions which take as a parameter a row from a table and return a Boolean. These functions are emitted by the AST node for \texttt{WHERE} clauses, using the target relation as a context for accessing the attributes corresponding to names in the SQL predicate. A similar process is performed for \texttt{INSERT} statements, which must be checked against the type and integrity constraints in the attribute corresponding to each value in the statement.
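As an illustration of this translation step, the sketch below compiles a tiny hypothetical predicate AST into a function on rows; DeeBee's actual AST is richer and, as described above, uses partial functions rather than the plain functions shown here.
\begin{verbatim}
// Illustrative predicate compilation for a miniature WHERE-clause AST.
object Predicates {
  type Row = Seq[Any]

  sealed trait Predicate
  case class Comparison(attribute: String, value: Any) extends Predicate
  case class And(left: Predicate, right: Predicate) extends Predicate

  // The target relation's attribute list is the context used to resolve
  // attribute names to column positions.
  def compile(p: Predicate, attributes: Seq[String]): Row => Boolean = p match {
    case Comparison(attr, value) =>
      val i = attributes.indexOf(attr)
      row => row(i) == value
    case And(l, r) =>
      val (fl, fr) = (compile(l, attributes), compile(r, attributes))
      row => fl(row) && fr(row)
  }
}
\end{verbatim}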
\section{Storage}
\label{sec:storage}
DeeBee currently provides one storage type, \texttt{CSVRelation}, which stores each table as a comma-separated values file on disk. While this storage mechanism is not ideal for a relational database, as it provides poor performance, it was used for the first DeeBee storage backend due to ease of implementation and testing. However, the DeeBee storage system is designed to be modular, making it possible to implement other storage backends in the future. I am considering the implementation of a B+ tree backend, similar to those used in `real-world' RDBMSs, and/or some form of hash bucket based storage.
\section{Further Research}
While DeeBee currently implements the entirety of the planned subset of SQL, there are a number of additional features that could be implemented. As discussed in \Cref{sec:storage}, CSV storage is not as performant as the storage methods used by real RDBMSs, and I am considering implementing a more advanced storage system.
Furthermore, DeeBee SQL does not currently implement a large portion of SQL syntax, such as \texttt{JOIN}s, \texttt{UPDATE} statements, \texttt{VIEW}s, or aggregate functions such as \texttt{SUM} and \texttt{COUNT}. DeeBee also does not implement all of the data types available in other SQL databases, such as BLOBs (binary large objects) and CLOBs (character large objects). An implementation of the SQL \texttt{DATE} type based on \texttt{java.util.date} was prototyped but is not yet complete.
Finally, DeeBee's connections API, used by other programs to interact with the system, could be expanded with support for non-blocking connections and additional options to configure the connected database, similar to those available in the Java Database Connectivity (JDBC) driver. I have considered implementing a JDBC-compatible API for DeeBee, but am not currently certain whether this is possible, due to the differences between DeeBee and other SQL databases.
\pagebreak
\bibliography{assets/final}{}
\bibliographystyle{plain}
\end{document} | {
"alphanum_fraction": 0.8178399704,
"avg_line_length": 155.9807692308,
"ext": "tex",
"hexsha": "c05344e21e71bc60585391c7ef0050c666eeb4f1",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-08-17T22:20:18.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-08-17T22:20:18.000Z",
"max_forks_repo_head_hexsha": "1edad39d3d69f82b71339735c76c334e59beacc4",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "hawkw/deebee",
"max_forks_repo_path": "doc/cs380-final.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1edad39d3d69f82b71339735c76c334e59beacc4",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "hawkw/deebee",
"max_issues_repo_path": "doc/cs380-final.tex",
"max_line_length": 1031,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "1edad39d3d69f82b71339735c76c334e59beacc4",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "hawkw/deebee",
"max_stars_repo_path": "doc/cs380-final.tex",
"max_stars_repo_stars_event_max_datetime": "2021-08-18T00:08:40.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-09-19T06:10:51.000Z",
"num_tokens": 3441,
"size": 16222
} |
\chapter{Chapter name}
\section{Section name}
\label{secname}
\subsection{Subsection name}
\label{subsecname}
Let's write your text here... e.g. link to \ref{secname} or cite some papers (e.g. \cite{krizhevsky2012imagenet}) | {
"alphanum_fraction": 0.7264957265,
"avg_line_length": 39,
"ext": "tex",
"hexsha": "5c71ecbaf9d809d6aaf0a81dbaf062a51fcf9232",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a6799abe37fb32a37738d60eb168270527e67471",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "antoniolanza1996/latex_thesis_template",
"max_forks_repo_path": "chapter1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a6799abe37fb32a37738d60eb168270527e67471",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "antoniolanza1996/latex_thesis_template",
"max_issues_repo_path": "chapter1.tex",
"max_line_length": 116,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "a6799abe37fb32a37738d60eb168270527e67471",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "antoniolanza1996/latex_thesis_template",
"max_stars_repo_path": "chapter1.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-29T13:05:20.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-29T13:05:20.000Z",
"num_tokens": 78,
"size": 234
} |
% $Id$
\section{Architectural Overview}
\label{sec:ArchOver}
The ESMF architecture is characterized by the layering strategy shown in
Figure \ref{fig:TheESMFwich}. User code components that implement the
{\it science} portions of an application, for example a sea ice or land model,
are sandwiched between two layers. The upper layer is denoted the
{\bf superstructure} layer and the lower layer the {\bf infrastructure} layer.
The role of the superstructure layer is to provide a shell which encompasses
user code and provides a context for interconnecting input and output data
streams between components. The key elements of the superstructure are described
in Section \ref{sec:superstructure}. These elements include classes that wrap
user code, ensuring that all components present consistent interfaces. The
infrastructure layer provides a foundation that developers of science components
can use to speed construction and to ensure consistent, guaranteed behavior.
The elements of the infrastructure include constructs to support parallel
processing with data types tailored to Earth science applications, specialized
libraries to support consistent time and calendar management and performance,
error handling and scalable I/O tools. The infrastructure layer is described in
Section \ref{sec:infrastructure}.
A hierarchical combination of superstructure, user code components, and
infrastructure are joined together to form an ESMF application.
\subsection{Key Concepts}
The ESMF architecture and programming paradigm are based upon
five key concepts: modularity, flexibility, hierarchical
organization, communication within components, and a uniform
communication API.
\subsubsection{Modularity}
The ESMF design is based upon modular Components. There
are two types of Components, one of which represents models
(Gridded Components) and one which represents couplers (Coupler Components).
Data are always passed between Components using a data structure
called a State, which can store Fields, FieldBundles of Fields,
Arrays, and other States. A Gridded Component stores no information about
the internals of the Gridded Components that it interacts with; this information
is passed in through the argument lists of the initialize, run,
and finalize methods. The information that is
passed in through the argument list can be a State from
another Gridded Component, or it can be a function pointer that performs
a computation or communication on a State. These function
pointers are called Transforms, and they are available as AttachableMethods
created by Coupler Components. They are called inside the
Gridded Component they are passed into. Although Transforms add
some complexity to the framework (and their use is not required), they are what
will enable ESMF to accommodate virtually any model of communication
between Components.
{\bf Modularity means that an ESMF component stores nothing about
the internals of other components. This allows components to be
used more easily in multiple contexts.}
\subsubsection{Flexibility}
The ESMF does not dictate how models should be coupled; it
simply provides tools for creating couplers. For example,
both a hub-and-spokes type coupling strategy and
pairwise strategies are supported. The ESMF also allows model
communications to occur mid-timestep, if desired. Sequential,
concurrent, and mixed modes of execution are supported.
{\bf The ESMF does not impose restrictions on how data flows through
an application. This accommodates scientific innovation - if you
want your atmospheric model to communicate with your sea ice model
mid-timestep, ESMF will not stop you.}
\subsubsection{Hierarchical organization}
\label{sec:principles-hierarchy}
ESMF allows applications to be composed hierarchically.
For example, physics and dynamics modules can be defined as
separate Gridded Components, coupled together with a Coupler Component, and all
of these nested within a single atmospheric Gridded Component.
The atmospheric Gridded Component can be run standalone, or can be included
in a larger climate or data assimilation application. See Figure
\ref{fig:appunit} for an illustrative example.
The data structure that enables scalability in ESMF is the
derived type Gridded Component. Fortran alone does not allow you to create
generic components - you'd have to create derived types for
PhysComp, and DynComp, and PhysDynCouplerComp, and AtmComp. In
ESMF, these are always of type GridComp or CplComp, so they
can be called by the same drivers (whether that driver is a
standard ESMF driver or another model), and use the same methods
without having to overload them with many specific derived
types. It is the same idea when you want to support different
implementations of the same component, like multiple dynamics.
{\bf The ESMF defines a hierarchical, scalable architecture
that is natural for organizing very complex applications, and
for allowing exchangeable Components.}
\begin{figure}
\caption{A typical building block for an ESMF application consists
of a parent Gridded Component, two or more child Gridded Components, and
a Coupler Component. The parent Gridded Component is called by an
application driver. All ESMF Components have initialize, run, and
finalize methods. The diagram shows that when the application driver calls
initialize on a parent Gridded Component, the call cascades down to
all of its children, so that the result is that the entire ``tree''
of Components is initialized. The run and finalize methods work the
same way. In this example a hurricane simulation is built
from ocean and atmosphere Gridded Components. The data exchange between
the ocean and atmosphere is handled by an ocean-atmosphere Coupler Component.
Since the whole hurricane simulation is a Gridded Component,
it could be easily be treated as a child and coupled to another
Gridded Component, rather than being driven directly by the application
driver. A similar diagram could be drawn for an atmospheric model containing
physics and dynamics components, as described in Section
\ref{sec:principles-hierarchy}.}
\label{fig:appunit}
\scalebox{1.0}{\includegraphics{ESMF_appunit}}
\end{figure}
\subsubsection{Communication within Components}
Communication in ESMF always occurs within a Component. It
can occur internal to a Gridded Component, and have nothing to do
with interactions with other Components (setting aside
synchronization issues), or it can occur within a Coupler Component
or a transform generated by a Coupler Component. A result of the rule
that all communication happens within a Component is that
Coupler Components must always be defined on the union of all the
Components that they couple together. Models can choose to
use whatever mechanism they want for intra-model communications.
{\bf The point is that although the ESMF defines some simple rules
for communication, the communication mechanism that the
framework uses is not hardwired into its architecture -
the sends and receives or puts and gets are enclosed within
Gridded Components, Coupler Components and Transforms. The intent
is to accommodate multiple models of communication and technical
innovations.}
\subsubsection{Uniform communication API}
ESMF has a single API for shared and distributed
memory that, unlike MPI, accounts for NUMA architectures and
does not treat all processes as being identical. It is possible for
users to set ESMF communications to a strictly message passing
mode and put in their own OpenMP commands.
{\bf The goal is to create a programming paradigm
that is performance sensitive to the architecture beneath it
without being discouragingly complicated.}
\subsection{Superstructure}
\label{sec:superstructure}
The ESMF superstructure layer is a unifying context within which user
components are interconnected. Classes called {\bf Gridded Components},
{\bf Coupler Components}, and {\bf States} are used within the superstructure
to achieve this flexibility.
\subsubsection{Import and export State classes}
User code components under ESMF use special interface objects for Component to
Component data exchanges. These objects are of type import State and export
State. These special types support a variety of methods that allow user code
components to do things like fill an export State object with data to be shared
with other components or query an import State object to determine its contents.
In keeping with the overall requirements for high-performance it is permitted
for import State and export State contents to use references or pointers to
Component data, so that costly data copies of potentially large data structures
can be avoided where possible. The content of an import State and an export
State can be made self-describing.
\subsubsection{Interface standards}
The import State and export State abstractions are designed to be flexible
enough so that ESMF does not need to mandate a single format for fields. For
example, ESMF does not prescribe the units of quantities exported or imported.
However, ESMF does provide mechanisms to describe units, memory layout, and
grid coordinates. This allows the ESMF software to support a range of different
policies for physical fields. The interoperability experiments that we are using
to demonstrate ESMF make use of the emerging CF conventions \cite{ref:CF} for
describing physical fields. This is a policy choice for that set of experiments.
The ESMF software itself can support arbitrary conventions for labeling and
characterizing the contents of States.
\subsubsection{Gridded Component class}
The Gridded Component class describes a user component that takes in one import State and produces one
export State. Examples of Gridded Components are major Earth system
model components such as land surface models, ocean models, atmospheric models and sea ice models. Components
used for linear algebra manipulations in a state estimation or data assimilation optimization procedure are also
created as Gridded Components. In general the fields within an import State and export State of a Gridded Component will
use the same discrete grid.
\subsubsection{Coupler Component class}
The other top-level Component class supported in the ESMF architecture is a
Coupler Component. This class is used for Components that take one or more
import States as input and map them through spatial and temporal interpolation
or extrapolation onto one or more output export States. In a Coupler Component
it is often the case that the export State(s) is on a different discrete grid
to that of the import State(s). For example, in a coupled ocean-atmosphere
simulation a Coupler Component might be used to map a set of sea-surface fields
in an ocean model to appropriate planetary boundary layer fields in an
atmospheric model.
\subsubsection{Flexible data and control flow}
Import States, export States, Gridded Components and Coupler Components can
be arrayed flexibly within a superstructure layer. Using these constructs, it
is possible to configure a set of Components with multiple
pairwise Coupler Components, Figure \ref{fig:point2point}. It is also
possible to configure a set of concurrently
executing Gridded Components joined through a single Coupler Component of the
style shown in Figure \ref{fig:hubspoke}.
\begin{figure}
\caption{ESMF supports configurations with a single central Coupler Component.
In this case inputs from all Gridded
Components are transferred and regridded through the central coupler.}
\label{fig:hubspoke}
\scalebox{1.0}{\includegraphics{ESMF_hubandspokes}}
\end{figure}
\begin{figure}
\caption{ESMF also supports configurations with multiple point to point Coupler
Components. These take inputs from one Gridded Component and transfer and regrid
the data before passing it to another Gridded Component. This schematic shows a
flow of data between two Coupler Components that connect three Gridded Components:
an atmosphere model with a land model, and the same atmosphere model with a data
assimilation system.}
\label{fig:point2point}
\scalebox{1.0}{\includegraphics{ESMF_pairwise}}
\end{figure}
The set of superstructure abstractions allows flexible data flow and control
between components. However, components will often use different discrete grids,
and time-stepping components may march forward with different time intervals.
In a parallel compute environment different components may be distributed in a
different manner on the underlying compute resources. The ESMF infrastructure
layer provides elements to manage this complexity.
\subsection{Infrastructure}
\label{sec:infrastructure}
Figure \ref{fig:threecomponents} illustrates three Gridded Components,
each with a different Grid, being coupled together. In
order to achieve this coupling several steps beyond defining import State and
export State objects to act as data conduits are required. Coupler Components
are needed that can interpolate between the different Grids. The necessary
transformations may also involve mapping between different units and/or memory
layout conventions for the Fields that pass between Components. In a parallel
compute environment the Coupler Components may also be required to map between
different domain decompositions. In order to advance in time correctly the
separate Gridded Components must have compatible notions of time. Approaches to
parallelism within the Gridded Components must also be compatible. The
{\bf Infrastructure} layer contains a set of classes that address these issues
and assist in managing overall system complexity.
\begin{figure}
\caption{Schematic showing the coupling of components that use different discrete
Grids and different time-stepping. In this example, Component {\it NCAR Atmosphere}
might use a spectral Grid based on spherical harmonics, Component
{\it GFDL Ocean} might use a latitude-longitude Grid but with a patched decomposition
that does not include land masses, and Component {\it NSIPP Land} might use a
mosaic-based Grid for representing vegetation patchiness and a catchment area based
Grid for river routings. The ESMF infrastructure layer contains tools to help develop
software for coupling between Components on different Grids, mapping between Components
with different distributions in a multi-processor compute environment and synchronizing
events between Components with different time-stepping intervals
and algorithms. }
\label{fig:threecomponents}
\scalebox{0.5}{\includegraphics{regrid}}
\end{figure}
\subsubsection{FieldBundle, Field and Array classes}
FieldBundle, Field and Array classes contain data together with descriptive
physical and computational attribute information. The physical attributes
include information that describes the units of the data. The computational
attributes include information on the layout in memory of the field data. The
Field class is primarily geared toward structured data. A comparable class,
called Location Stream, provides a self-describing
container for unstructured observational data streams.
\subsubsection{Grid class}
The {\it Grid} class is an extensible class that holds discrete grid information. It has subtypes that allow
it to serve as a container for the full range of different physical grids that might arise in a coupled system.
In the example in figure \ref{fig:threecomponents} objects of type Grid would hold grid information for
each of the spectral grid, the latitude-longitude grid, the mosaic grid and the catchment grid.
The Grid class is also used to represent the decomposition of a data structure into subdomains, typically for
parallel processing purposes. The class is designed to support a
generalized ``ghosting'' for tiled
decompositions of finite difference, finite volume and finite element codes.
\subsubsection{Time and Calendar management}
To support synchronization between Components, several time and calendar
management classes are provided. These capabilities are provided in the Time,
Time Interval, Calendar, Clock, and Alarm classes. These classes allow Gridded
and Coupler Component processing to be latched to a common controlling Clock,
and to schedule notification of
regular events, such as coupling intervals, and unique events.
\subsubsection{Config resource file manager}
The Config class is a utility for accessing configuration files that are in
ASCII format. This utility enables configuration files to be prepared using
more flexible formatting than Fortran namelists - for example, it permits the
input of tables of data.
\subsubsection{DELayout and virtual machine}
To provide a mechanism for ensuring performance portability, ESMF defines
DELayout and virtual machine (VM) classes. These classes provide a set of
high-level and platform independent interfaces to performance critical parallel
processing communication routines. These routines can be tuned
to specific platforms to ensure optimal parallel performance on many platforms.
\subsubsection{Logging and error handling}
The LogErr class is designed to aid in managing the complexity of
multi-Component applications. It provides ESMF with a unified mechanism
for managing logs and error reporting.
\subsubsection{File input and output}
The infrastructure layer will define a set of {\it IO} classes for storing and
retrieving Array, Field, and Grid information to and from persistent storage.
| {
"alphanum_fraction": 0.81561983,
"avg_line_length": 56.0031948882,
"ext": "tex",
"hexsha": "bd1c34e584d6405697ac2f34758d8b29e00f74de",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3",
"max_forks_repo_licenses": [
"NCSA",
"Apache-2.0",
"MIT"
],
"max_forks_repo_name": "joeylamcy/gchp",
"max_forks_repo_path": "ESMF/src/doc/ESMF_archoverview.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3",
"max_issues_repo_issues_event_max_datetime": "2022-03-04T16:12:02.000Z",
"max_issues_repo_issues_event_min_datetime": "2022-03-04T16:12:02.000Z",
"max_issues_repo_licenses": [
"NCSA",
"Apache-2.0",
"MIT"
],
"max_issues_repo_name": "joeylamcy/gchp",
"max_issues_repo_path": "ESMF/src/doc/ESMF_archoverview.tex",
"max_line_length": 121,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3",
"max_stars_repo_licenses": [
"NCSA",
"Apache-2.0",
"MIT"
],
"max_stars_repo_name": "joeylamcy/gchp",
"max_stars_repo_path": "ESMF/src/doc/ESMF_archoverview.tex",
"max_stars_repo_stars_event_max_datetime": "2018-07-05T16:48:58.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-07-05T16:48:58.000Z",
"num_tokens": 3645,
"size": 17529
} |
Galaxy Wars (GW) is an RTS game published in 2012, inspired by the popular board game Risk. Galaxy Wars has been used as a case study in related research \cite{papers_galaxy_wars}. The gameplay revolves around the player's strategic choices, where timing, battles, and resource management are the key elements to prevail against the opponents.
The map is a connected graph where nodes represent planets and links are the physical connections between planets. Planets produce fleets that are controllable by the players. A fleet can either defend the planet that contains it or move to battle other planets. Fleets move only through links. Every player is assigned a set of global statistics. Global statistics influence fleets' attack/defence capabilities as well as fleet speed and planet production. A few special planets have the special ability to update the global statistics of their owners.
\subsection{Galaxy Wars with REA}
We give an informal sketch of an implementation of Galaxy Wars with the REA pattern. The elements of Galaxy Wars that follow the REA pattern are the \textit{fleet}, \textit{planet}, \textit{statistic}, and \textit{link}. The resources are \textit{statistics}; the entities are \textit{fleets}, \textit{planets}, and \textit{links}. The possible actions are movement, fighting, and upgrading. In GW most of the entities are static. The only entity that can be created and deleted is the fleet. A fleet is spawned after a player decides to send some units to a planet, and it is disposed of after it has either reached its destination or lost a battle. Moreover, the fleet entity is the only entity which might change its strategy/behavior during its lifetime (a fleet can either travel along a link or fight in a battle).
In the next section we show how to implement the REA pattern for Galaxy Wars in Casanova 2. | {
"alphanum_fraction": 0.8044198895,
"avg_line_length": 226.25,
"ext": "tex",
"hexsha": "d0ea9604e9c1c7fb90c43f43e6c206a8afade984",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "vs-team/Papers",
"max_forks_repo_path": "15. Making RTS games in Casanova/Sections/case_study.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "vs-team/Papers",
"max_issues_repo_path": "15. Making RTS games in Casanova/Sections/case_study.tex",
"max_line_length": 799,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "vs-team/Papers",
"max_stars_repo_path": "15. Making RTS games in Casanova/Sections/case_study.tex",
"max_stars_repo_stars_event_max_datetime": "2019-08-19T07:16:23.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-04-06T08:46:02.000Z",
"num_tokens": 385,
"size": 1810
} |
\documentclass[letterpaper, 12pt]{article}
%% Language and font encodings
\usepackage[english]{babel}
\usepackage[utf8x]{inputenc}
\usepackage[T1]{fontenc}
%% Sets page size and margins
\usepackage[letterpaper,top=2.5cm,bottom=2cm,left=2cm,right=2cm,marginparwidth=1.75cm]{geometry}
%% Useful packages
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsfonts}
\usepackage{graphicx}
\usepackage{physics}
\usepackage{bbold}
\usepackage[colorinlistoftodos]{todonotes}
\usepackage[colorlinks=true, allcolors=blue]{hyperref}
\usepackage{listings}
\usepackage{multicol}
\usepackage{float}
\usepackage{enumitem}
\usepackage{bm}
\date{\today}
\title{CSC411 Assignment 3}
\author{Yue Guo}
\begin{document}
\maketitle
%%%%%%%%%%%Q1 20 NEWS GROUP %%%%%%%%%%%%%%
\section{20 News Group}
\subsection{Top 3 Algorithms}
Note: I have added a list of the names of all 20 categories in the load\_data( ) function for debugging purposes. This does not affect my classification algorithm.
\subsubsection{Neural Network}
\begin{enumerate}
\item Hyperparameter: number of layers
Method: K-fold cross validation
I have tried a single-layer neural network vs.\ a multi-layered neural network. It turns out that the single-layer network is the fastest and also the most accurate.
\item Train/test loss
\begin{itemize}
\item Train accuracy 0.974721583878
\item Test accuracy 0.663303239511
\end{itemize}
\item Reason to pick this algorithm
Neural networks are commonly used in NLP, according to our lecture on neural networks. They are good at handling a large data set with many features and produce a non-linear decision boundary.
\item My expectations
This meets my expectation because NNs are good at working with a large dataset with many features. When I increase the number of hidden layers, accuracy decreases. I think this is because it overfits.
\end{enumerate}
\subsubsection{Random forest}
\begin{enumerate}
\item Hyperparameter: number of estimators
Method: K-fold cross validation
I have tried ensembles with 10 to 150 estimators, and 150 works best after comparing results from cross-validation.
\item Train/test loss
\begin{itemize}
\item Train accuracy 0.974721583878
\item Test accuracy 0.59346787042
\end{itemize}
\item Reason to pick Random forest
I have tried decision trees, with different parameters (entropy vs.\ gini) and reduced features, but they all end up overfitting. Random forest introduces more randomness and also scales the weights of misclassified data in each iteration. To see the decision tree, please uncomment lines 272 to 290 in q1.py.
\item My expectations
This meets my expectations because Random Forest adds more randomness in each step, and ensemble methods are more resistant to overfitting because they assign weights to different features in each step. I also tried decision\_tree, and it does not generalize well.
The model also learns better with a larger group of weak learners, which is why more estimators help.
\end{enumerate}
\subsubsection{SVM - Best Classifier}
\begin{enumerate}
\item Hyperparameter: rand\_state
Method: K-fold cross validation
In the code, I have tried different values of rand\_state and cross-validated each. It turns out $rand\_state = 0$ is the best.
\item Train/test loss
\begin{itemize}
\item Train accuracy 0.972511932119
\item Test accuracy 0.691980881572
\end{itemize}
\item Reason to pick SVM
Because of the nature of the given problem, it is still a classification problem, and using neural nets might be overkill. SVM produces a linear decision boundary with a margin for two classes. It can be extended to multi-class using algorithms such as one-vs-all.
\item My expectations
SVM is the best out of all. I think this is because it has a linear decision boundary with a margin and does not overfit on the training data. The test accuracy is close to that of the single-layer neural net, but still higher.
\item Test confusion matrix
The most confused classes are class 16 (talk.politics.guns) and class 18 (talk.politics.misc).
\resizebox{\columnwidth}{!}{%
\setcounter{MaxMatrixCols}{20}
$\begin{matrix}
157.0 & 6.0 & 4.0 & 0.0 & 2.0 & 0.0 & 1.0 & 6.0 & 2.0 & 3.0 & 1.0 & 4.0 & 3.0 & 7.0 & 6.0 & 19.0 & 5.0 & 22.0 & 12.0 & 31.0 \\
2.0 & 278.0 & 19.0 & 12.0 & 3.0 & 47.0 & 3.0 & 1.0 & 3.0 & 2.0 & 2.0 & 7.0 & 9.0 & 9.0 & 10.0 & 2.0 & 2.0 & 1.0 & 1.0 & 4.0 \\
3.0 & 18.0 & 242.0 & 37.0 & 10.0 & 36.0 & 6.0 & 4.0 & 4.0 & 0.0 & 1.0 & 7.0 & 12.0 & 2.0 & 3.0 & 2.0 & 5.0 & 1.0 & 1.0 & 1.0 \\
2.0 & 8.0 & 37.0 & 254.0 & 35.0 & 9.0 & 14.0 & 3.0 & 1.0 & 4.0 & 0.0 & 3.0 & 26.0 & 1.0 & 2.0 & 1.0 & 2.0 & 4.0 & 2.0 & 2.0 \\
1.0 & 6.0 & 18.0 & 22.0 & 267.0 & 4.0 & 12.0 & 2.0 & 3.0 & 1.0 & 2.0 & 4.0 & 10.0 & 2.0 & 3.0 & 0.0 & 1.0 & 0.0 & 0.0 & 0.0 \\
0.0 & 24.0 & 14.0 & 9.0 & 4.0 & 273.0 & 0.0 & 2.0 & 0.0 & 1.0 & 0.0 & 4.0 & 8.0 & 1.0 & 2.0 & 1.0 & 0.0 & 0.0 & 0.0 & 1.0 \\
2.0 & 5.0 & 3.0 & 13.0 & 9.0 & 2.0 & 311.0 & 12.0 & 4.0 & 7.0 & 1.0 & 6.0 & 16.0 & 3.0 & 3.0 & 1.0 & 2.0 & 0.0 & 0.0 & 2.0 \\
7.0 & 3.0 & 3.0 & 2.0 & 6.0 & 0.0 & 6.0 & 282.0 & 21.0 & 3.0 & 3.0 & 3.0 & 9.0 & 6.0 & 8.0 & 2.0 & 7.0 & 1.0 & 6.0 & 3.0 \\
6.0 & 3.0 & 2.0 & 0.0 & 5.0 & 1.0 & 6.0 & 15.0 & 302.0 & 6.0 & 2.0 & 3.0 & 10.0 & 4.0 & 5.0 & 1.0 & 8.0 & 5.0 & 3.0 & 1.0 \\
12.0 & 10.0 & 18.0 & 8.0 & 14.0 & 6.0 & 11.0 & 28.0 & 16.0 & 329.0 & 23.0 & 19.0 & 14.0 & 15.0 & 18.0 & 15.0 & 13.0 & 12.0 & 9.0 & 9.0 \\
1.0 & 0.0 & 2.0 & 1.0 & 1.0 & 0.0 & 2.0 & 2.0 & 1.0 & 20.0 & 345.0 & 3.0 & 2.0 & 4.0 & 3.0 & 0.0 & 0.0 & 1.0 & 4.0 & 3.0 \\
2.0 & 6.0 & 2.0 & 2.0 & 4.0 & 3.0 & 1.0 & 1.0 & 0.0 & 1.0 & 2.0 & 282.0 & 12.0 & 1.0 & 1.0 & 0.0 & 8.0 & 4.0 & 3.0 & 2.0 \\
8.0 & 8.0 & 2.0 & 27.0 & 15.0 & 4.0 & 7.0 & 13.0 & 8.0 & 1.0 & 1.0 & 10.0 & 230.0 & 8.0 & 14.0 & 2.0 & 1.0 & 3.0 & 2.0 & 1.0 \\
8.0 & 0.0 & 4.0 & 0.0 & 0.0 & 2.0 & 0.0 & 3.0 & 7.0 & 4.0 & 3.0 & 2.0 & 12.0 & 302.0 & 7.0 & 7.0 & 6.0 & 2.0 & 5.0 & 7.0 \\
11.0 & 8.0 & 9.0 & 2.0 & 4.0 & 5.0 & 0.0 & 6.0 & 7.0 & 1.0 & 3.0 & 4.0 & 9.0 & 4.0 & 289.0 & 2.0 & 8.0 & 2.0 & 11.0 & 6.0 \\
44.0 & 1.0 & 1.0 & 0.0 & 2.0 & 1.0 & 2.0 & 1.0 & 5.0 & 6.0 & 2.0 & 4.0 & 2.0 & 7.0 & 5.0 & 319.0 & 11.0 & 11.0 & 3.0 & 63.0 \\
6.0 & 2.0 & 2.0 & 0.0 & 3.0 & 0.0 & 4.0 & 3.0 & 4.0 & 1.0 & 4.0 & 12.0 & 3.0 & 4.0 & 6.0 & 0.0 & 243.0 & 6.0 & 86.0 & 20.0 \\
13.0 & 2.0 & 3.0 & 0.0 & 0.0 & 0.0 & 1.0 & 5.0 & 2.0 & 0.0 & 1.0 & 3.0 & 2.0 & 6.0 & 2.0 & 1.0 & 9.0 & 283.0 & 7.0 & 6.0 \\
8.0 & 1.0 & 6.0 & 3.0 & 1.0 & 0.0 & 2.0 & 6.0 & 6.0 & 7.0 & 0.0 & 12.0 & 2.0 & 7.0 & 7.0 & 3.0 & 21.0 & 15.0 & 145.0 & 10.0 \\
26.0 & 0.0 & 3.0 & 0.0 & 0.0 & 2.0 & 1.0 & 1.0 & 2.0 & 0.0 & 3.0 & 4.0 & 2.0 & 3.0 & 0.0 & 20.0 & 12.0 & 3.0 & 10.0 & 79.0 \\
\end{matrix}%
$}
\end{enumerate}
\subsubsection{Bernoulli Baseline}
\begin{enumerate}
\item Train/test loss
\begin{itemize}
\item Train accuracy 0.598727240587
\item Test accuracy 0.457912904939
\end{itemize}
\end{enumerate}
%%%%%%%%%%%Q2 SVM %%%%%%%%%%%%%%
\section{SVM}
\subsection{SVM test}
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{q2part1plot.png}
\caption{\label{}Plot of test SVM}
\end{figure}
\subsection{SVM code}
see code
\subsection{SVM on MNIST}
\subsubsection{without momentum}
\begin{enumerate}
\item Train loss$=0.400699029921$
\item Test loss$=0.37243523202$
\item classification accuracy on training set $= 0.826985854189$
\item classification accuracy on testing set $=0.818503401361$
\item Plot of w
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{q2_3_1.png}
\caption{\label{}Plot momentum = 0 }
\end{figure}
\end{enumerate}
\subsubsection{with momentum}
\begin{enumerate}
\item Train loss = 0.424045274211
\item Test loss = 0.394698671058
\item classification accuracy on training set $= 0.817555313747$
\item classification accuracy on testing set $=0.809977324263$
\item Plot of w
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{q_2_3_2.png}
\caption{\label{}Plot momentum = 0.1 }
\end{figure}
\end{enumerate}
%%%%%%%%%%%Q3%%%%%%%%%%%%%%
\section{Kernels}
%%%%%%%%%% q 3.1%%%%%%%%%%%%%%
\subsection{Positive semidefinite and quadratic form}
Assume $K$ is symmetric; then we can decompose $K$ as $U \Lambda U^T$:
\begin{equation*}
\begin{split}
x^T K x &= x^T (U \Lambda U^T) x = (x^T U) \Lambda (U^T x)\\
\end{split}
\end{equation*}
$\Lambda$ has the eigenvalues $\lambda_{i}$ on its diagonal. If $K$ is positive semidefinite, then all $\lambda_{i} \geq 0$, so
\begin{equation*}
\begin{split}
x^T K x &= \sum_{i=1}^{d} \lambda_{i} (x^T U_{i})^2 \geq 0\\
\end{split}
\end{equation*}
Conversely, if $x^T K x \geq 0$ for all $x$, then taking $x = U_{i}$ (the $i$-th eigenvector) gives $\lambda_{i} = U_{i}^T K U_{i} \geq 0$. Therefore $x^T K x \geq 0$ for all $x \in \mathbb{R}^{d}$ iff $K$ is positive semidefinite.
%%%%%%%%%Q 3.2 %%%%%%%%%%%%%%
\subsection{Kernel properties}
%%%%%%%% Q 3.2 q2%%%%%%%%%%%%%
\subsubsection{$\alpha$}
Define the mapping $\phi (x) = \sqrt{\alpha}$ for $\alpha > 0$; then the kernel is $\langle \phi(x), \phi(y) \rangle = \alpha$.
The resulting Gram matrix $K$ is square with every entry $K_{ij} = \alpha$. For any $x$, $x^T K x = \alpha \left(\sum_{i} x_{i}\right)^2 \geq 0$ since $\alpha > 0$, so $K$ is positive semidefinite.
%%%%%%%Q 3.2 q3 %%%%%%%%%%%%%%
\subsubsection{$f(x), f(y)$}
$K_{ij} = \langle \phi (x), \phi (y) \rangle$, \\
define $\phi(x) = f(x)$ for the given $f: \mathbb{R}^{d} \rightarrow \mathbb{R}$.\\
Since $f(x)$ and $f(y)$ are scalars, $\langle \phi (x), \phi (y) \rangle = f(x) \cdot f(y)$, so $k(x, y) = f(x)f(y)$ is a kernel.
%%%%%%%%Q3.2 part 3%%%%%%%%%%%
\subsubsection{k1 and k2}
If the Gram matrix $K_{1}$ of kernel $k_{1}$ and the Gram matrix $K_{2}$ of kernel $k_{2}$ are positive semidefinite, then the Gram matrix of $a \cdot k_{1}(x, y) + b \cdot k_{2}(x, y)$, call it $K$, is $K = a K_{1} + b K_{2}$, and for any $x$ we have $x^T K x = a\, x^T K_{1} x + b\, x^T K_{2} x \geq 0$ since $a, b > 0$.\\
$K$ is also symmetric because $K_{1}$ and $K_{2}$ are symmetric with the same dimension, and element-wise addition and scaling preserve symmetry.\\
%%%%%%%%Q3.2 part 4%%%%%%%%%%%
\subsubsection{$k(x, y) = \frac{k_{1}(x ,y) }{\sqrt{k_{1}(x, x)} \sqrt{k_{1}(y, y)} }$}
Let $\phi_{1}$ be the mapping defined by $k_{1}$\\
We define a new mapping, $\phi$ for $k(x ,y)$\\
We let $\phi(x) = \frac{\phi_{1} (x)}{\norm{\phi_{1}(x)}}$\\
\begin{equation*}
\begin{split}
k(x, y) &= \langle \phi (x), \phi (y) \rangle \\
&= \frac{\phi_{1}(x)}{\norm{\phi_{1}(x)}} \cdot \frac{\phi_{1}(y)}{\norm{\phi_{1}(y)}}\\
&= \frac{\phi_{1}(x)}{\sqrt{\phi_{1}(x) \cdot \phi_{1}(x)}} \cdot \frac{\phi_{1}(y)}{\sqrt{\phi_{1}(y) \cdot \phi_{1}(y)}}\\
&= \frac{\phi_{1}(x) \cdot \phi_{1}(y)}{\sqrt{\phi_{1}(x) \cdot \phi_{1}(x)}\, \sqrt{\phi_{1}(y) \cdot \phi_{1}(y)}}\\
k(x, y) &= \frac{k_{1}(x ,y) }{\sqrt{k_{1}(x, x)} \sqrt{k_{1}(y, y)} }
\end{split}
\end{equation*}
Therefore, there is a feature mapping $\phi(x)$ that realizes $k(x, y)$, so $k(x, y)$ is a kernel.
\end{document} | {
"alphanum_fraction": 0.6071492086,
"avg_line_length": 42.1198501873,
"ext": "tex",
"hexsha": "06c04818d21324af5d7c6faa6a0773d98716029c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "744b7bfc586a8d629086c92248b9b9c2aa1eb071",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "violetguos/intro_machine_learning",
"max_forks_repo_path": "a3/Untitled.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "744b7bfc586a8d629086c92248b9b9c2aa1eb071",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "violetguos/intro_machine_learning",
"max_issues_repo_path": "a3/Untitled.tex",
"max_line_length": 306,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "744b7bfc586a8d629086c92248b9b9c2aa1eb071",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "violetguos/intro_machine_learning",
"max_stars_repo_path": "a3/Untitled.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4658,
"size": 11246
} |
\chapter{Conclusion}
I am done with this thesis now. | {
"alphanum_fraction": 0.7735849057,
"avg_line_length": 17.6666666667,
"ext": "tex",
"hexsha": "754dd3efa38c293db52f3b2ba18e5d8ae87df9c4",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a87684c7e9c1d7250922d00f37f31ae242dcc363",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "Malificiece/Leap-Motion-Thesis",
"max_forks_repo_path": "docs/Thesis tools/ch5.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a87684c7e9c1d7250922d00f37f31ae242dcc363",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "Malificiece/Leap-Motion-Thesis",
"max_issues_repo_path": "docs/Thesis tools/ch5.tex",
"max_line_length": 31,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a87684c7e9c1d7250922d00f37f31ae242dcc363",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "Malificiece/Leap-Motion-Thesis",
"max_stars_repo_path": "docs/Thesis tools/ch5.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 13,
"size": 53
} |
\documentclass{article}
\usepackage[final]{style}
\usepackage[utf8]{inputenc} % allow utf-8 input
\usepackage[T1]{fontenc} % use 8-bit T1 fonts
\usepackage{hyperref} % hyperlinks
\usepackage{url} % simple URL typesetting
\usepackage{booktabs} % professional-quality tables
\usepackage{amsfonts} % blackboard math symbols
\usepackage{nicefrac} % compact symbols for 1/2, etc.
\usepackage{microtype} % microtypography
\usepackage{verbatim}
\usepackage{graphicx} % for figures
\title{Lecture \#X: NAME OF LECTURE}
\author{
Student1 name, student2 name, etc. \\
Department of Computer Science\\
Stanford University\\
Stanford, CA 94305 \\
\texttt{\{STUDENT1, STUDENT2, etc.\}@cs.stanford.edu} \\
}
\begin{document}
\maketitle
\section{Introduction}
Copy this template and use it to write up the lecture notes. Also copy over
bibliography.bib and add any references you use to that file.
% References
\small
\bibliographystyle{plain}
\bibliography{bibliography}
\end{document}
| {
"alphanum_fraction": 0.7237728585,
"avg_line_length": 26.641025641,
"ext": "tex",
"hexsha": "e79a7a629c66705d531993f3bbe6a33d0dbe4b24",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "98fd6511b684cd14f99d3a8d280385d36732e568",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "suryadheeshjith/CS_131",
"max_forks_repo_path": "cs131_notes/template/template.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "98fd6511b684cd14f99d3a8d280385d36732e568",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "suryadheeshjith/CS_131",
"max_issues_repo_path": "cs131_notes/template/template.tex",
"max_line_length": 75,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "98fd6511b684cd14f99d3a8d280385d36732e568",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "suryadheeshjith/CS_131",
"max_stars_repo_path": "cs131_notes/template/template.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 300,
"size": 1039
} |
%% start of file `new.tex'.
%% Copyright 2006-2012 Xavier Danaux ([email protected]).
%
% This work may be distributed and/or modified under the
% conditions of the LaTeX Project Public License version 1.3c,
% available at http://www.latex-project.org/lppl/.
\documentclass[11pt,a4paper,roman]{moderncv} % possible options include font size ('10pt', '11pt' and '12pt'), paper size ('a4paper', 'letterpaper', 'a5paper', 'legalpaper', 'executivepaper' and 'landscape') and font family ('sans' and 'roman')
\renewcommand{\listitemsymbol}{- }
% moderncv themes
\moderncvstyle{classic} % style options are 'casual' (default), 'classic', 'oldstyle' and 'banking'
\moderncvcolor{purple} % color options 'blue' (default), 'orange', 'green', 'red', 'purple', 'grey' and 'black'
%\renewcommand{\familydefault}{\rmdefault} % to set the default font; use '\sfdefault' for the default sans serif font, '\rmdefault' for the default roman one, or any tex font name
%\nopagenumbers{} % uncomment to suppress automatic page numbering for CVs longer than one page
% character encoding
%\usepackage[utf8]{inputenc} % if you are not using xelatex ou lualatex, replace by the encoding you are using
%\usepackage{CJKutf8} % if you need to use CJK to typeset your resume in Chinese, Japanese or Korean
% adjust the page margins
\usepackage{verbatim}
\usepackage[top=1.9cm, bottom=1.7cm, left=2cm, right=2cm]{geometry}
\setlength{\hintscolumnwidth}{29mm} % if you want to change the width of the column with the dates
%\setlength{\makecvtitlenamewidth}{10cm} % for the 'classic' style, if you want to force the width allocated to your name and avoid line breaks. be careful though, the length is normally calculated to avoid any overlap with your personal info; use this at your own typographical risks...
% personal data
\firstname{Alperen}
\familyname{Karan}
\title{Curriculum Vitae} % optional, remove the line if not wanted
%\address{İTÜ Fen-Edebiyat Fakültesi \\ 34469, Maslak, İstanbul, Turkey \\} % optional, remove the line if not wanted
\mobile{+90~(505)~702~2281} % optional, remove the line if not wanted
%\phone{+2~(345)~678~901} % optional, remove the line if not wanted
%\fax{+3~(456)~789~012} % optional, remove the line if not wanted
\email{[email protected]} % optional, remove the line if not wanted
\homepage{alperenkaran.github.io} % optional, remove the line if not wanted
%\extrainfo{ORCID ID: 0000-0002-8682-7054} % optional, remove the line if not wanted
%\photo[110pt][0.01pt]{picture.jpg} % '64pt' is the height the picture must be resized to, 0.4pt is the thickness of the frame around it (put it to 0pt for no frame) and 'picture' is the name of the picture file; optional, remove the line if not wanted
%\quote{Some quote (optional)} % optional, remove the line if not wanted
% to show numerical labels in the bibliography (default is to show no labels); only useful if you make citations in your resume
%\makeatletter
%\renewcommand*{\bibliographyitemlabel}{\@biblabel{\arabic{enumiv}}}
%\makeatother
% bibliography with mutiple entries
%\usepackage{multibib}
%\newcites{book,misc}{{Books},{Others}}
%----------------------------------------------------------------------------------
% content
%----------------------------------------------------------------------------------
\begin{document}
%\begin{CJK*}{UTF8}{gbsn} % to typeset your resume in Chinese using CJK
%----- resume ---------------------------------------------------------
\makecvtitle
\vspace{-.7cm}
\section{Research Interests}
\cvitem{-}{Machine learning, Deep learning}
\cvitem{-}{Topological data analysis, Persistent homology}
\cvitem{-}{Cognitive psychology, Music cognition}
\section{Computer skills}
\cvitem{-}{Python (\emph{fluent}) - hands-on experience in several machine/deep learning libraries}
\cvitem{-}{PyCharm, DataGrip, MATLAB, SPSS}
\cvitem{-}{(Postgre)SQL, \LaTeX}
\cvitem{-}{Tableau}
\cvitem{-}{AWS (Redshift, S3, Step Functions)}
\section{Education}
\cventry{Present}{PhD}{Mathematical Engineering}{Istanbul Technical University, Turkey}{}{}{}
\cventry{2019}{M.A.}{Psychology}{Boğaziçi University, Turkey}{}{}{}
\cventry{2015}{M.S.}{Mathematics}{Boğaziçi University, Turkey}{}{}{}
\cventry{2013}{B.S.}{Mathematics}{Boğaziçi University, Turkey}{}{}{}
%\section{Certificates}
%\cvitem{-}{Inzva - Advanced Algorithm Program (2021)}
%\cvitem{-}{Inzva - Applied AI Study Group (2021)}
\section{Work Experience}
\cvitem{09.2021 - Present}{Data Scientist, \textit{Getir}, Turkey.}
\cvitem{12.2013 - 09.2021}{Research assistant, \textit{Istanbul Technical University}, Turkey.}
%\section{Courses Assisted}
%\cvitem{-}{Advanced Scientific and Engineering Computing (MATLAB)}
%\cvitem{-}{Mathematics I,II, III (for Mathematics students)}
%\cvitem{-}{Mathematics I, II (for Engineering students)}
%\section{Data Science Projects}
%\cvitem{-}{J.S.Bach's chorales (link)}
% etc etc ....
\section{Publications}
\cvitem{1.}{Karan, A., \& Kaygun, A. (2021). {Time Series Classification via Topological Data Analysis.} \textit{Expert Systems with Applications, 115326.}}
\cvitem{2.}{Karan, A., \& Mungan, E. (2018). In Further Search of Tonal Grounds in Short Term Memory of Melodies. In
R. Parncutt \& A. Schiavio (Ed.), \textit{Proceedings of the Fifteenth International Conference on Music
Perception and Cognition} (p. 237-243), Karl-Franzens Universitaet Graz.}
\cvitem{3.}{Gillam, W. D., \& Karan, A. (2017). The Hausdorff topology as a moduli space. \textit{Topology and its Applications, 232}, 102-111.}
\begin{comment}
\section{Seminars and Posters}
\cvitem{1.}{Karan A., \& Mungan, E., (2018 July). \textit{In Further Search of Tonal Grounds in Short Term Memory of Melodies.} Poster presented at ICMPC15 - 15th International Conference on Music Perception and Cognition, Graz, Austria.}
\cvitem{2.}{Karan A., \& Mungan, E., (2018 May). \textit{Tonal Grounds for Short Term Melody Recognition.} Poster presented at ISBCS 2018 – International Symposium on Brain and Cognitive Science, Istanbul, Turkey.}
\cvitem{3.}{Karan, A., \& Mungan, E., (2017 November). \textit{Tonal Grounds for Short Term Melody Recognition.} Poster presented at the annual meeting of Psychonomic Society, Vancouver, BC, Canada.}
\cvitem{4.}{Karan A., (2015 April). \textit{Topologies on families of closed subsets.} Seminar presented at Istanbul Algebraic and Arithmetic Geometry Meetings, Istanbul, Turkey.}
\end{comment}
\section{Languages}
\cvitem{-}{Turkish (\textit{native})}
\cvitem{-}{English (\textit{fluent})}
\cvitem{-}{French (\textit{beginner})}
\end{document}
%% end of file `template.tex'.
| {
"alphanum_fraction": 0.6800115207,
"avg_line_length": 56.9180327869,
"ext": "tex",
"hexsha": "7ae2a83c51a164f07f8dd2b31e9df02c3699caf0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "60fcee74d4205cc2dae45871754e35080f23ee5b",
"max_forks_repo_licenses": [
"CC-BY-3.0"
],
"max_forks_repo_name": "alperenkaran/alperenkaran.github.io",
"max_forks_repo_path": "files/alperen_cv.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "60fcee74d4205cc2dae45871754e35080f23ee5b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-3.0"
],
"max_issues_repo_name": "alperenkaran/alperenkaran.github.io",
"max_issues_repo_path": "files/alperen_cv.tex",
"max_line_length": 292,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "60fcee74d4205cc2dae45871754e35080f23ee5b",
"max_stars_repo_licenses": [
"CC-BY-3.0"
],
"max_stars_repo_name": "alperenkaran/alperenkaran.github.io",
"max_stars_repo_path": "files/alperen_cv.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1909,
"size": 6944
} |
\subsection{L'Hôpital's rule}
\subsubsection{L'Hôpital's rule}
If two functions both tend to \(0\) at a point, the limit of their quotient cannot be evaluated directly. In that case we can use L'Hôpital's rule.
We want to calculate:
\(\lim_{x\rightarrow c}\dfrac{f(x)}{g(x)}\)
This is:
\(\lim_{x\rightarrow c}\dfrac{f(x)}{g(x)}=\lim_{x\rightarrow c}\dfrac{\dfrac{f(x)-0}{\delta}}{\dfrac{g(x)-0}{\delta}}\)
If:
\(\lim_{x\rightarrow c}f(x)=\lim_{x\rightarrow c}g(x)=0\)
Then
\(\lim_{x\rightarrow c}\dfrac{f(x)}{g(x)}=\lim_{x\rightarrow c}\dfrac{\dfrac{f(x)-f(c)}{\delta}}{\dfrac{g(x)-f(c)}{\delta}}\)
\(\lim_{x\rightarrow c}\dfrac{f(x)}{g(x)}=\dfrac{f'(x)}{g'(x)}\)
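For example, \(\sin x\) and \(x\) both tend to \(0\) as \(x\rightarrow 0\), so the rule applies:
\(\lim_{x\rightarrow 0}\dfrac{\sin x}{x}=\lim_{x\rightarrow 0}\dfrac{\cos x}{1}=1\)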
| {
"alphanum_fraction": 0.6364985163,
"avg_line_length": 25.9230769231,
"ext": "tex",
"hexsha": "055f0123631e1bb7142492feb28d2964473764f7",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/analysis/calculus/01-06-lhopital.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/analysis/calculus/01-06-lhopital.tex",
"max_line_length": 144,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/analysis/calculus/01-06-lhopital.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 265,
"size": 674
} |
%-------------------------
% Resume in Latex
% Author : Bingpeng Xiang
% License : MIT
% Description: Modified from `https://github.com/sb2nov/resume`
%------------------------
\documentclass[letterpaper,11pt]{article}
\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage[hidelinks]{hyperref}
\usepackage{fancyhdr}
\usepackage[english]{babel}
\pagestyle{fancy}
\fancyhf{} % clear all header and footer fields
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
% Adjust margins
\addtolength{\oddsidemargin}{-0.5in}
\addtolength{\evensidemargin}{-0.5in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-.5in}
\addtolength{\textheight}{1.0in}
\urlstyle{same}
\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}
% Sections formatting
\titleformat{\section}{
\vspace{-4pt}\scshape\raggedright\large
}{}{0em}{}[\color{black}\titlerule \vspace{-5pt}]
%-------------------------
% Custom commands
\newcommand{\name}[2]{
\centerline{
\textbf{\Large\scshape{\href{#1}{#2}}}
}
\vspace{1pt}
}
\newcommand{\contactInfo}[4]{
\centerline{\large{\ {#1} \textperiodcentered\ \ {#2} \textperiodcentered\ \ {#3}}}
\vspace{-5pt}
}
\newcommand{\generalEntry}[1]{
\item\small{
{#1 \vspace{-2pt}}
}
}
\newcommand{\entryWithKeyword}[2]{
\item\small{
\textbf{#1}{: #2 \vspace{-2pt}}
}
}
\newcommand{\generalEvent}[2]{
\vspace{-1pt}\item
\begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r}
\textbf{#1} & \textit{\small #2}
\end{tabular*}\vspace{-5pt}
}
\newcommand{\eventWithDetail}[4]{
\vspace{-1pt}\item
\begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r}
\textbf{#1} & #2 \\
\textit{\small#3} & \textit{\small #4} \\
\end{tabular*}\vspace{-5pt}
}
\newcommand{\eventWithMoreDetail}[6]{
\vspace{-1pt}\item
\begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r}
\textbf{#1} & #2 \\
\textit{\small#3} & \textit{\small #4} \\
\textit{\small#5} & \textit{\small #6} \\
\end{tabular*}\vspace{-5pt}
}
\newcommand{\eventListStart}{\begin{itemize}[leftmargin=*]}
\newcommand{\eventListEnd}{\end{itemize}}
\newcommand{\entryListStart}{\begin{itemize}[leftmargin=*]}
\newcommand{\entryListEnd}{\end{itemize}\vspace{-5pt}}
\renewcommand{\labelitemii}{$\circ$}
%-------------------------------------------
%%%%%% CV STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%----------HEADING-----------------
\name{https://www.linkedin.com/in/bingpeng-xiang/}{Bingpeng Xiang}
\contactInfo{Brooklyn, NY}{929-319-1779}{[email protected]}
%-----------EDUCATION-----------------
\section{Education}
\eventListStart
\eventWithMoreDetail
{New York University}{Brooklyn, NY}
{Master in Computer Science; GPA: 3.78}{Sep. 2018 -- May. 2020}
{TA for CS6083 Principles of Database Systems}{Feb. 2019 -- Present}
\eventWithDetail
{Tianjin University of Technology}{Tianjin, China}
{Bachelor of Science in Information and Computing Science; GPA: 3.80}{Sep. 2014 -- Jul. 2018}
\eventListEnd
%-----------EXPERIENCE-----------------
\section{Experience}
\eventListStart
\eventWithDetail
% good
{Tencent}{Shenzhen, China}
{Software Engineer Intern, Continuous Integration(CI) Team}{Jun. 2019 -- Aug. 2019}
\entryListStart
\generalEntry
      {Optimized the run-time environment of the CI runner to support custom languages, containers, and operating systems.}
      \generalEntry
      {Implemented multiple plugins to integrate other internal platforms, including the git platform, deployment platforms, the Windows and iOS software signing service, and the Android and iOS closed-alpha service.}
      \generalEntry
      {Collaborated with the Tencent Cloud Virtual Machine Team to develop CI workflows that shortened the release cycle time from 5 hours to 1 hour. Also developed a release workflow for Tencent Video software.}
\entryListEnd
\eventWithDetail
{Yuantek}{Beijing, China}
{Software Engineer Intern, Traffic Analysis Team}{Apr. 2018 -- Jun. 2018}
\entryListStart
\generalEntry
      {Implemented a declarative packet manipulation library using metaclasses in Python, capable of encoding/decoding all company-wide Type-Length-Value (TLV) format communication protocols.}
      \generalEntry
      {Designed a Telnet command interface to provide a run-time monitor with command completion, command history, and subcommand support.}
      \generalEntry
      {Refactored the legacy IP extraction program in Python to support run-time monitoring and crash recovery, resulting in a 30\% reduction in the program's crash rate.}
      \generalEntry
      {Launched a 24/7 service with over 99.9\% reliability that extracts URLs matching specific patterns from email and transfers them to partner companies.}
\entryListEnd
\eventWithDetail
{Tianjin University of Technology}{Tianjin, China}
{Research Assistant, Data Management Software for Smart Agriculture}{Dec. 2017 -- May. 2018}
\entryListStart
\generalEntry
      {Designed a fully managed service with two researchers, capable of connecting to, managing, and ingesting data from agriculture sensors.}
      \generalEntry
      {Built RESTful APIs using the Flask framework in Python that provide authentication, filtering, and CRUD endpoints.}
      \generalEntry
      {Developed a desktop app using the Qt framework in C++ that enables users to analyze and visualize IoT data in real time and to configure custom alerts and triggers to monitor environmental changes.}
      \generalEntry
      {Improved the extensibility and control granularity of the sensors by integrating a special description file.}
\entryListEnd
\eventListEnd
%-----------PROJECTS-----------------
\section{Project}
\eventListStart
\generalEvent
{Oingo}{}
% consistent
\entryListStart
\generalEntry
      {Designed a note-sharing website using the Laravel framework in PHP. Users can publish notes linked to a location, date-time, or tag, and their friends receive notes based on custom filters. The website also supports comments, friend management, and note search.}
      \generalEntry
      {Increased query efficiency by using Geohash to store locations and designing a special scheme to store repeated-event information.}
      \generalEntry
      {Visualized notes on a map using the Google Maps API to enhance the user experience.}
\entryListEnd
    \entryWithKeyword{AIS Message Parser}{A Python library to encode/decode AIS messages; the parsing core is written in C for acceleration.}
    \entryWithKeyword{Web Search Engine}{A search engine that indexed over 4 million pages with a retrieval time under 0.5s per query.}
    \entryWithKeyword{TV Show Tracker}{A serverless application that tracks what you're watching and finds where to watch it.}
\eventListEnd
% --------PROGRAMMING SKILLS------------
\section{Programming Skills}
\eventListStart
\entryWithKeyword{Languages}{Python, Java, C++, PHP, SQL, JavaScript, HTML}
\entryWithKeyword{Technologies}{Git, Vue.js, React, Vim, Bash, Regex, Docker, Markdown, MySQL, AWS}
\eventListEnd
%-------------------------------------------
\end{document} | {
"alphanum_fraction": 0.670249017,
"avg_line_length": 36.8599033816,
"ext": "tex",
"hexsha": "4d26fd267414893ecab62c94e7efea8389ef8a29",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c05924e6511dfdb109778b90643e6de81d21c69c",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "macrovve/resume",
"max_forks_repo_path": "resume.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c05924e6511dfdb109778b90643e6de81d21c69c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "macrovve/resume",
"max_issues_repo_path": "resume.tex",
"max_line_length": 300,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "c05924e6511dfdb109778b90643e6de81d21c69c",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "macrovve/resume",
"max_stars_repo_path": "resume.tex",
"max_stars_repo_stars_event_max_datetime": "2020-02-19T13:55:08.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-11-14T07:04:21.000Z",
"num_tokens": 2040,
"size": 7630
} |
\chapter*{Introduction}\addcontentsline{toc}{title}{Introduction}
\bgroup
\egroup
| {
"alphanum_fraction": 0.8048780488,
"avg_line_length": 20.5,
"ext": "tex",
"hexsha": "088e930574c9d67d7c3939ffe4b4a495703c8577",
"lang": "TeX",
"max_forks_count": 5,
"max_forks_repo_forks_event_max_datetime": "2021-09-10T21:19:02.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-01-08T16:48:01.000Z",
"max_forks_repo_head_hexsha": "edb1d0b607c5aa00ffbcf403f2403961c6d6083a",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "d-torrance/Macaulay2-web-site",
"max_forks_repo_path": "Book/ComputationsBook/chapters/introduction/chapter.tex",
"max_issues_count": 19,
"max_issues_repo_head_hexsha": "edb1d0b607c5aa00ffbcf403f2403961c6d6083a",
"max_issues_repo_issues_event_max_datetime": "2021-02-07T01:08:10.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-04-17T19:52:43.000Z",
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "d-torrance/Macaulay2-web-site",
"max_issues_repo_path": "Book/ComputationsBook/chapters/introduction/chapter.tex",
"max_line_length": 65,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "edb1d0b607c5aa00ffbcf403f2403961c6d6083a",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "d-torrance/Macaulay2-web-site",
"max_stars_repo_path": "Book/ComputationsBook/chapters/introduction/chapter.tex",
"max_stars_repo_stars_event_max_datetime": "2018-11-27T08:01:17.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-11-27T08:01:17.000Z",
"num_tokens": 23,
"size": 82
} |
\documentclass[11pt]{article}
\title{\textbf{Gitsome:\\Github Recommender Project Proposal}}
\author{Alejandra Vigil\\Benjamin Waters\\}
\usepackage{hyperref}
\usepackage{graphicx}
\date{}
\begin{document}
\maketitle
\section{Proposal}
\subsection{Summary}
We will generate repository recommendations for a user by applying the k-means algorithm and computing user similarities with the Jaccard coefficient. GitHub does not currently have a recommendation system implemented; it only surfaces ``trending'' repositories, i.e., those with the highest activity within a time frame. A recommendation system is straightforward to build because a user ``stars'' code repositories: a starred repository corresponds to a 1 and an unstarred repository to a 0, and the Jaccard coefficient is a useful similarity measure for such binary vectors. We chose to create a website that allows a user to log in with their GitHub credentials and receive repository recommendations. We chose to use MeteorJS because we are familiar with its technologies.
\subsection{Mockup}
\includegraphics[scale=0.2]{site.png}
\subsection{Technologies}
\begin{enumerate}
\item NodeJS
\item MeteorJS
\item MongoDB
\item Redis Server
\item Github v3.0 API \ \url{https://developer.github.com/v3/}
\item Recommendation Racoon \ \url{https://github.com/guymorita/recommendationRaccoon}
\end{enumerate}
\subsection{Process}
\begin{enumerate}
\item Get user's starred values
\item Load into Raccoon to create binary vector
	\item Get user's friends' stars
\item Load into Raccoon to create binary vector
\item Calculate Jaccard coefficient between friends
\item Use K-means to determine recommendations
\item Present recommendations to user through website medium
\end{enumerate}
\subsection{Evaluation}
We want to look for Jaccard index values closest to 1. Raccoon returns the top values, but it does not provide a way to filter by a threshold (i.e., return only pairs with $x > t$).
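As a rough illustration (a minimal sketch, not part of the Raccoon library; the function and repository names below are hypothetical), the Jaccard coefficient \(J(A,B)=|A\cap B|/|A\cup B|\) between two users can be computed directly from their sets of starred repositories:
\begin{verbatim}
// Hypothetical sketch (TypeScript): Jaccard similarity between two users'
// binary star vectors, represented as sets of starred repository names.
function jaccard(a: Set<string>, b: Set<string>): number {
  let intersection = 0;
  for (const repo of a) {
    if (b.has(repo)) intersection++;
  }
  const union = a.size + b.size - intersection;
  return union === 0 ? 0 : intersection / union;
}

// Example: two users who share two of their four distinct starred repos.
const user = new Set(["meteor/meteor", "nodejs/node", "d3/d3"]);
const friend = new Set(["meteor/meteor", "nodejs/node", "facebook/react"]);
console.log(jaccard(user, friend)); // 0.5
\end{verbatim}
Pairs of users whose coefficients are closest to 1 would be treated as most similar when clustering with k-means and generating recommendations.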
\end{document}
| {
"alphanum_fraction": 0.7890541977,
"avg_line_length": 43.7674418605,
"ext": "tex",
"hexsha": "991cbcf9c1a5ff7e3d55bd76fe6f894da1de6d99",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "3c1b72085b426093fb6d386d82afb98ff640a0f3",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "thebenwaters/gitrecd",
"max_forks_repo_path": "docs/proposal.tex",
"max_issues_count": 9,
"max_issues_repo_head_hexsha": "3c1b72085b426093fb6d386d82afb98ff640a0f3",
"max_issues_repo_issues_event_max_datetime": "2018-01-03T11:31:06.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-01-03T11:31:05.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "thebenwaters/gitrecd",
"max_issues_repo_path": "docs/proposal.tex",
"max_line_length": 674,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "3c1b72085b426093fb6d386d82afb98ff640a0f3",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "thebenwaters/gitrecd",
"max_stars_repo_path": "docs/proposal.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 471,
"size": 1882
} |
\PassOptionsToPackage{unicode=true}{hyperref} % options for packages loaded elsewhere
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[]{book}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provides euro and other symbols
\else % if luatex or xelatex
\usepackage{unicode-math}
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\usepackage{hyperref}
\hypersetup{
pdftitle={Brain and Body Lab},
pdfborder={0 0 0},
breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\usepackage{longtable,booktabs}
% Fix footnotes in tables (requires footnote package)
\IfFileExists{footnote.sty}{\usepackage{footnote}\makesavenoteenv{longtable}}{}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
% set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\usepackage{booktabs}
\usepackage{amsthm}
\makeatletter
\def\thm@space@setup{%
\thm@preskip=8pt plus 2pt minus 4pt
\thm@postskip=\thm@preskip
}
\makeatother
\usepackage[]{natbib}
\bibliographystyle{apalike}
\title{Brain and Body Lab}
\author{}
\date{\vspace{-2.5em}}
\begin{document}
\maketitle
{
\setcounter{tocdepth}{1}
\tableofcontents
}
\hypertarget{introduction}{%
\chapter{Introduction}\label{introduction}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
Welcome to the BABLab Wiki!
In the Brain and Body Lab we are interested in how early experiences influence interactions between the brain and body, contributing to mental and physical health. We hope to use this information to improve the wellbeing of children, adolescents, and adults across the world. In other words, in the BABLab, we aim to do good science that makes a difference to people's lives, today and tomorrow.
The BABLab is directed by Dr.~Bridget Callaghan, Assistant Professor of Psychology at UCLA.
\textbf{Current Projects:}
\begin{itemize}
\tightlist
\item
\href{https://bablab.github.io/wiki_mind_brain_body/}{Mind, Brain, Body (MBB)}
\item
\href{https://bablab.github.io/wiki_parenting_under_pressure/}{Parenting Under Pressure (PUP)}
\item
\href{https://bablab.github.io/wiki_inside_out/}{Inside Out}
\item
Transfer Mental Health
\end{itemize}
\hypertarget{finding-the-lab}{%
\section{Finding the Lab}\label{finding-the-lab}}
We're located in the Psychology Department at UCLA!
5581 Pritzker Hall
This is in the tower building, 5th floor.
\hypertarget{contact-info}{%
\section{Contact Info}\label{contact-info}}
If you have any questions about the lab please contact the lab's managers.
Emily Towner - \href{mailto:[email protected]}{\nolinkurl{[email protected]}}.
Kristen Chu - \href{mailto:[email protected]}{\nolinkurl{[email protected]}}
Alternatively, you can reach out to our lab email, \href{mailto:[email protected]}{\nolinkurl{[email protected]}}
\hypertarget{other-information}{%
\section{Other Information}\label{other-information}}
Feel free to contribute any relevant sections or information - best lunch spots on campus, tips and tricks, or anything else helpful to your fellow lab members!
\hypertarget{onboarding}{%
\chapter{Onboarding}\label{onboarding}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{first-steps}{%
\section{First Steps}\label{first-steps}}
If you are a new member of the BAB Lab, there are a few basic things you will want to set up before or on your first day.
\textbf{Here are some tips:}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Read the \href{https://bablab.github.io/lab_manual/}{Lab Manual}
\item
Ask the lab manager to be added to the following
\end{enumerate}
\begin{itemize}
\tightlist
\item
\href{https://slack.com/}{Slack} (download the desktop and mobile \href{https://slack.com/downloads/mac}{apps})
\item
\href{https://trello.com/emilyanntowner/boards}{Trello} (download the desktop and mobile \href{https://trello.com/en-US/platforms}{apps} - watch this \href{https://www.youtube.com/watch?v=_Ry-SnJygy8\&feature=youtu.be}{tutorial})
\item
Box
\item
Google calendars
\item
Email list
\item
Dropbox Paper
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Send (via Slack) the lab manager your information including your
\end{enumerate}
\begin{itemize}
\tightlist
\item
Preferred name
\item
Preferred pronoun
\item
Preferred email address
\item
Phone number
\item
Photo(s) for lab website
\item
Brief bio for the lab website
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
Complete the onboarding process for your position below
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{onboarding---staff-research-associate}{%
\section{Onboarding - Staff Research Associate}\label{onboarding---staff-research-associate}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Submit to Bridget
\begin{itemize}
\tightlist
\item
Signed employment contract
\end{itemize}
\item
Contact HR
\begin{itemize}
\tightlist
\item
Human Resources Coordinator
\item
1283A Franz Hall
\item
(310) 206-9720
\end{itemize}
\item
Submit to HR
\begin{itemize}
\tightlist
\item
\href{https://ucla.app.box.com/s/z58tkq6l13qwl0zw5gtqhdxq0cne1wcu}{Union overtime/comp form}
\item
\href{https://ucla.app.box.com/s/7jmouwl8fbp5039qq4tfmreo5r5ydmjq}{Personal data form}
\item
\href{https://ucla.app.box.com/s/tothrcm0zcz50hj829bkcj1idclcudzr}{Background check authorization form}
\end{itemize}
\item
Schedule with HR
\begin{itemize}
\tightlist
\item
Background check phone call
\item
Hiring meeting
\item
Bring employment verification \href{https://ucla.app.box.com/s/iwguajwkedo2zf2lfr5ie3z5tc4vnz7m}{documents} to meeting (i.e.~passport)
\item
Sign \href{https://ucnet.universityofcalifornia.edu/forms/pdf/upay-585.pdf}{state oath of allegiance/patent policy/patent acknowledgment} (in person)
\end{itemize}
\item
Pick-up
\begin{itemize}
\tightlist
\item
Location: Psychology Main Office (1285 Psychology Building -- See Tyler Tuione)
\begin{itemize}
\tightlist
\item
Parking permits
\end{itemize}
\end{itemize}
\item
Respond
\begin{itemize}
\tightlist
\item
To the tracker I-9 email on or before your first day of work
\end{itemize}
\item
Create (once you have your employee ID)
\begin{itemize}
\tightlist
\item
Create a \href{https://accounts.iam.ucla.edu/\#/}{UCLA logon ID}
\item
Create a \href{https://idpproxy-ucpath.universityofcalifornia.edu/simplesaml/module.php/ucpathdiscovery/disco.php?entityID=https://ucpath.universityofcalifornia.edu\&return=https://idpproxy-ucpath.universityofcalifornia.edu/simplesaml/module.php/saml/sp/discoresp.php?AuthID=_6a3d8a7c8144ccec21e8fff0206d805c7b4d0beb08\%253Ahttps\%253A\%252F\%252Fidpproxy-ucpath.universityofcalifornia.edu\%252Fsimplesaml\%252Fsaml2\%252Fidp\%252FSSOService.php\%253Fspentityid\%253Dhttps\%25253A\%25252F\%25252Fucpath.universityofcalifornia.edu\%25253A443\%25252Fsimplesaml\%25252Fmodule.php\%25252Fsaml\%25252Fsp\%25252Fmetadata.php\%25252Fdefault-sp\%2526cookieTime\%253D1563390873\%2526RelayState\%253Dhttps\%25253A\%25252F\%25252Fucpath.universityofcalifornia.edu\%25252Fsaml_login\&returnIDParam=idpentityid}{UCPath account} (payroll, benefits, etc.)
\item
Create an At Your Service Online (\href{https://atyourserviceonline.ucop.edu/ayso/}{AYSO}) account (retirement)
\end{itemize}
\item
Visit (once you have your employee ID)
\begin{itemize}
\tightlist
\item
Location: UCLA BruinCard Center (Kerckhoff Hall, Room 123)
\begin{itemize}
\tightlist
\item
Bring ID and completed \href{https://secure.bruincard.ucla.edu/BCW/BruinCard_Web/Docs/BC\%20Terms\%20\%20Signature\%2006.pdf}{form}
\end{itemize}
\end{itemize}
\item
Complete (once you have your UCLA logon ID)
\begin{itemize}
\tightlist
\item
Sign-up and complete the required employee training \href{https://ucla.app.box.com/s/mizhokn39tq7z6odvnvutvaoko11n823}{courses}
\item
Sign-up and attend \href{https://www.chr.ucla.edu/training-and-development/new-employee-orientation}{orientation}
\item
Upload orientation training certificates to Box (BABLAB/Lab/Training)
\end{itemize}
\item
Select
\begin{itemize}
\tightlist
\item
    Health insurance plan (within 30 days)
  \item
    Retirement plan (within 90 days)
  \item
    Union membership requires pension plan
\end{itemize}
\item
Pritzker Access
\begin{itemize}
\tightlist
\item
Email Tyler Tuione -- \href{mailto:[email protected]}{\nolinkurl{[email protected]}}
\item
Include your name and Bruincard \# to be granted weekend swipe card access as well as B and C level access for freezer storage
\item
The swipe access reader is located on the right hand side of door to the right courtyard of the tower entrance.
\end{itemize}
\item
IRB Trainings
\begin{itemize}
\tightlist
\item
Create a \href{http://ora.research.ucla.edu/OHRPP/Documents/Education/SSO_CITI_New_Acct.pdf}{UCLA SSO for CITI Program}
\item
Add and complete the following courses:
\begin{itemize}
\tightlist
\item
Human Research -- Social \& Behavioral Researchers \& Staff
\item
Human Research- Biomedical Researchers \& Staff
\item
UCLA HIPAA
\end{itemize}
\item
Add certificates to the training folder on Box (BABLAB/Lab/Training)
\item
Get a WebIRB account
\begin{itemize}
\tightlist
\item
Email your faculty sponsor/advisor the following information:
\begin{itemize}
\tightlist
\item
Your UCLA Logon ID -- (Verify your \href{https://accounts.iam.ucla.edu/lookup}{UCLA Logon ID})
\item
Your UCLA UID \# (9-digit)
\item
Your full name
\begin{itemize}
\tightlist
\item
First
\item
Middle
\item
Last
\end{itemize}
\item
Your email address
\item
Your department and division
\end{itemize}
\item
Bridget to email this information to \href{mailto:[email protected]}{\nolinkurl{[email protected]}} to request the account.
\item
Ask the lab manager to be added to all IRB protocols
\end{itemize}
\end{itemize}
\item
IBC Trainings
\begin{itemize}
\tightlist
\item
Sign up for the following courses via UCLA \href{https://worksafe.ucla.edu/Ability/Programs/Standard/Control/elmLearner.wml?PortalID=LearnerWeb}{WorkSafe}
\begin{itemize}
\tightlist
\item
NIH Guidelines for UCLA Researchers IBC Compliance Training (online)
\item
Laboratory Safety Fundamentals (online)
\item
Blood-borne Pathogens Training (online)
\item
Medical Waste Management (online)
\item
Biosafety ABC's -- Biosafety Level 2 Training (in-person)
\item
Biosafety Cabinet (online)
\end{itemize}
\item
Add certificates to your user folder on Box (BABLAB/Lab/Training)
\item
Record completion for \href{https://docs.google.com/document/d/1hCYg4hYJ7wi4nsLl1vDs1a-7Of3Tnzvocx58bdcj2cc/edit}{HPL}
\item
Submit certificates to Arielle Radin (\href{mailto:[email protected]}{\nolinkurl{[email protected]}}) at HPL
\item
Read the \href{https://ucla.box.com/s/igqe24fbhh6cjqqbysdjye482tuw0axd}{Lab Specific Biosafety Manual} and sign off
\item
Complete Lab Specific Training and sign off
\begin{itemize}
\tightlist
\item
This must be updated annually
\end{itemize}
\item
Get vaccinations (suggested)
\begin{itemize}
\tightlist
\item
Visit OHF at 67-120 CHS x56771
\item
Recommended vaccines
\begin{itemize}
\tightlist
\item
Hepatitis B
\item
Flu (Influenza)
\item
MMR (Measles, Mumps \& Rubella)
\item
Varicella (Chickenpox)
\item
Tdap (Tetanus, Diptheria, Pertussis)
\item
Meningococcal
\end{itemize}
\end{itemize}
\end{itemize}
\item
Shipping of Biological Materials Training
\begin{itemize}
\tightlist
\item
Take the training survey to determine which trainings need to be completed \href{https://www.ehs.ucla.edu/training-support/courses/biosafety/bioshipping}{EHS Shipping of Biological Materials Trainings}. It should direct you to two separate trainings:
\begin{itemize}
\tightlist
\item
Login to worksafe to take the UCLA training \href{https://worksafe.ucla.edu/ucla/Programs/Standard/Control/elmLearner.wml}{UCLA Worksafe}
\item
The survey should also take you to the CDC website to take another training
\end{itemize}
\end{itemize}
\item
REDCap
\begin{itemize}
\tightlist
\item
Complete and send REDCap access form to Martin Lai (\href{mailto:[email protected]}{\nolinkurl{[email protected]}}) (BABLAB/Lab/Lab\_protocols/REDCap/Access/Template/)
\end{itemize}
\item
Website admin access
\begin{itemize}
\tightlist
\item
Contact Jun Wan (\href{mailto:[email protected]}{\nolinkurl{[email protected]}}) for access to the \href{https://sites.lifesci.ucla.edu/}{life sciences Wordpress multisite server}
\end{itemize}
\item
MRI Trainings
\begin{itemize}
\tightlist
\item
TBD
\end{itemize}
\item
Department of Psychology Printing Acess
\begin{itemize}
\tightlist
\item
\href{https://support.lifesci.ucla.edu/hc/en-us}{Login} (upper right hand corner)
\item
Click submit a request
\item
Inquire about gaining printer access- include your UID number and email
\item
Access printing at Franz Hall
\item
\href{https://ucla.app.box.com/s/db0zzvgrydw1yz99yo1nlooq1j4n7jos}{Instructions} - sending print jobs via email
\end{itemize}
\item
\href{https://ucla.app.box.com/v/Psych-Directory-List}{Departmental Email Distributions}
\end{enumerate}
Important links:
\begin{itemize}
\tightlist
\item
  \href{https://uctrs.it.ucla.edu/}{UCLA time reporting system}
\end{itemize}
Review:
\begin{itemize}
\tightlist
\item
\href{https://www.chr.ucla.edu/new-employee/getting-started}{Getting started at UCLA}
\item
\href{https://ucnet.universityofcalifornia.edu/forms/pdf/welcome-kit.pdf}{Welcome Kit}
\item
\href{https://www.centralresourceunit.ucla.edu/s/}{How-to access UCPath portal}
\item
\href{https://ucla.app.box.com/s/jyzoag8v9qw6katuvgegjil8an2tsx2j}{Workers' Comp
Pamphlet}
\item
\href{https://ucla.app.box.com/s/nua4ypfpjlt1226fusney4zyvo6qzzhj}{When an injury occurs}
\item
\href{https://ucla.app.box.com/s/qrj4j7bnca1r8fy9n1bdfdf6orf1g0dq}{Substance Abuse Brochure}
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{onboarding---volunteer-research-assistant-ucla-students}{%
\section{Onboarding - Volunteer Research Assistant (UCLA Students)}\label{onboarding---volunteer-research-assistant-ucla-students}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Website \& Information
\begin{itemize}
\tightlist
\item
Please send us your preferred name, photo, and a brief bio for the lab website
\item
Lab Manager to add to Lab News
\item
Please provide the Lab Manager with your birthday, UID, phone number, and preferred email
\end{itemize}
\item
IRB Trainings- IMPORTANT
\begin{itemize}
\tightlist
\item
No one can access our Box or data until all IRB trainings are complete!
\item
Create a \href{https://ora.research.ucla.edu/OHRPP/Documents/Education/SSO_CITI_New_Acct.pdf}{UCLA SSO} for CITI Program or \href{https://www.research.ucla.edu/CITIProgram/}{login} through UCLA
\item
Add and complete the following courses:
\begin{itemize}
\tightlist
\item
UCLA HIPAA
\item
Human Research - Social \& Behavioral Researchers \& Staff
\item
Human Research - Biomedical Researchers \& Staff
\end{itemize}
\item
Slack message these certificates to the lab manager
\item
Ask to be added to any relevant IRB protocols
\end{itemize}
\item
Accessing Trello
\begin{itemize}
\tightlist
\item
\href{https://trello.com}{Trello} is our task management software!
\begin{itemize}
\tightlist
\item
      Download the desktop and mobile \href{https://trello.com/en-US/platforms}{apps}
\item
Please watch this \href{https://youtu.be/_Ry-SnJygy8}{tutorial}
\end{itemize}
\end{itemize}
\item
HR Requirements
\begin{itemize}
\tightlist
\item
Lab Manager must consult with Bridget prior to hiring. If for any reason a non-UCLA student is brought on, a background check must be run (\textasciitilde{}\$50 each). Unless otherwise specified, the Lab Manager should refer to the RA application pool within UCLA students.
\item
This also applies to volunteers who have graduated from UCLA. Once a student volunteer has graduated, their status must be switched to staff volunteer and all HR paperwork should be resubmitted to reflect this change. A Background Check must also now be run, since the graduated student is technically no longer affiliated with UCLA on the date after their graduation.
\item
Please print, read, and complete the following forms located in Box (BABLAB/Lab/Documents/RA\_hiring\_documents/RA\_hiring\_forms\_templates/HR documents)
\begin{itemize}
\tightlist
\item
RA\_volunteer\_agreement
\item
RA\_volunteer\_application
\item
RA\_volunteer\_assignment
\item
RA\_volunteer\_workerscomp
\item
RA\_volunteer\_authorization (only if a Background Check is required)
\item
PAF form (obtained from HR)
\end{itemize}
\item
Each form MUST be completed thoroughly. Some persons omit information such as social security information, but if any area is left vacant, we cannot accept the forms and the volunteer will not be able to work on the UCLA campus. There must be a clear start date and a clear end date. The majority of the forms are filled out by the volunteer, but there are a few areas where the UCLA professor whose lab is overseeing the volunteer must sign and date as well.
\item
The volunteer may not work on the UCLA campus until all forms are filled out, signed, and submitted back to HR.
\end{itemize}
\item
Franz Access
\begin{itemize}
\tightlist
\item
Email the lab manager to request weekend access if you will be running participants on the evenings/weekends
\begin{itemize}
\tightlist
\item
Student volunteers: Include your name and Bruincard \#
\item
Non-student volunteers: We will determine if you are eligible for an access card.
\end{itemize}
\item
The swipe access reader is located on the right hand side of door to the right courtyard of the tower entrance.
\end{itemize}
\item
BMC Requirements
\begin{itemize}
\tightlist
\item
TBD
\end{itemize}
\item
Ask the Lab Manager to be added to REDCap
\begin{itemize}
\tightlist
\item
Complete and send REDCap access form to Lab Manager to be submitted to Martin Lai (\href{mailto:[email protected]}{\nolinkurl{[email protected]}}) (BABLAB/Lab/Lab\_protocols/REDCap/Access/Template/)
\end{itemize}
\item
Ask Lab Manager to be added to Slack, Box, Google Calendars, Google Group, OSF
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{onboarding---postdoctoral-scholar}{%
\section{Onboarding - Postdoctoral Scholar}\label{onboarding---postdoctoral-scholar}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Schedule hiring orientation meeting with someone from Human Resources (HR) (someone from HR will reach out to you once the office receives approval to hire). Prior to this meeting, you will need to submit:
\begin{itemize}
\tightlist
\item
Forms (Personal Data Form; Postdoc Personal Data Form; Postdoc Union Form)
\item
Your CV
\item
A brief (2-3 sentences) description of your postdoc research and goals
\item
A copy of your PhD diploma, your official transcript, or certificate of completion from Registrar's office
\end{itemize}
\item
At the hiring orientation, HR will go over how to:
\begin{itemize}
\tightlist
\item
Set up \href{https://ucpath.universityofcalifornia.edu/}{UC Path} for viewing and updating personal information; viewing paychecks; signing up for direct deposit; updating tax witholdings; viewing and printing W-2s; signing up for Benefits (period of eligibility to enroll in benefits is 31 days)
\item
Submit monthly timesheets through \href{https://uctrs.it.ucla.edu/}{Time Reporting System}
\item
Acquire a parking permit and office keys through the Psychology Main Office
\item
Get a BruinCard (campus ID card)
\item
Submit I-9 Tracker
\end{itemize}
\item
Create a \href{https://accounts.iam.ucla.edu/\#/}{UCLA logon ID}
\item
Attend the \href{https://www.postdoc.ucla.edu/resources/new-postdocs/}{Postdoctoral Scholar Orientation} and sign up for relevant listservs
\item
Complete CITI Trainings and submit certificates to Lab Manager:
\begin{itemize}
\tightlist
\item
Create a \href{http://ora.research.ucla.edu/OHRPP/Documents/Education/SSO_CITI_New_Acct.pdf}{UCLA SSO for CITI Program}
\item
Add and complete the following courses:
\begin{itemize}
\tightlist
\item
Human Research -- Social \& Behavioral Researchers \& Staff
\item
Human Research- Biomedical Researchers \& Staff
\item
UCLA HIPAA
\end{itemize}
\end{itemize}
\item
Get a WebIRB account:
\begin{itemize}
\tightlist
\item
Email your faculty sponsor/advisor the following information:
    \begin{itemize}
    \tightlist
    \item
      Your UCLA Logon ID -- (Verify your \href{https://accounts.iam.ucla.edu/lookup}{UCLA Logon ID})
    \item
      Your UCLA UID \# (9-digit)
    \item
      Your full name
      \begin{itemize}
      \tightlist
      \item
        First
      \item
        Middle
      \item
        Last
      \end{itemize}
    \item
      Your email address
    \item
      Your department and division
    \end{itemize}
\item
Bridget to email this information to \href{mailto:[email protected]}{\nolinkurl{[email protected]}} to request the account.
\item
Ask the lab manager to be added to all IRB protocols
\end{itemize}
\item
Have Lab Manager set you up with the lab's:
\begin{itemize}
\tightlist
\item
Slack (need UCLA logon to access)
\item
Trello
\item
Google Calendars
\item
Box (need UCLA logon to access)
\item
GitHub
\item
OSF
\item
IRBs (need UCLA logon to access)
\item
REDcap
\end{itemize}
\item
Send the following information to Lab Manager:
\begin{itemize}
\tightlist
\item
Preferred name
\item
Preferred pronoun
\item
Preferred e-mail address
\item
Phone number
\item
Photo(s) for lab website
\item
Brief bio for lab website
\end{itemize}
\item
Read \href{https://bablab.github.io/lab_manual/}{Lab Manual}
\item
Set up \href{https://www.library.ucla.edu/use/computers-computing-services/connect-campus}{access} to papers \& databases off-campus through UCLA
\item
Create a \href{https://www.hoffman2.idre.ucla.edu/}{Hoffman2 Cluster} account
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{onboarding---graduate-student}{%
\section{Onboarding - Graduate Student}\label{onboarding---graduate-student}}
For lab manager to do:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Add new graduate student to:
\end{enumerate}
\begin{itemize}
\tightlist
\item
BABLab calendars
\item
GitHub
\item
Slack workspace
\item
Google group
\item
Box
\item
Relevant REDcap projects
\item
OSF
\item
Trello
\item
IRB(s)
\end{itemize}
For graduate student to do:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Make your UCLA ID (instructions sent when accept admissions offer)
\item
Submit your statement of intent to register (SIR) and statement of legal residence (SLR) forms
\item
Check your UCLA email regularly and add it to your mail app
\item
Once you are given access to Box through UCLA (you will receive an email notification), \href{https://www.box.com/resources/downloads}{download box drive} and get set up
\item
Once lab manager requests that you be added to REDcap and you get an email about it, fill out the access form and go through the steps to set up an account
\item
Set up \href{https://www.library.ucla.edu/use/computers-computing-services/connect-campus}{access} to papers \& databases off-campus through UCLA.
\item
Watch the BABLab Trello how-to \href{https://www.youtube.com/watch?v=_Ry-SnJygy8\&feature=youtu.be\&ab_channel=BABLab}{video} and download \href{https://trello.com/en-US/platforms}{Trello}.
\item
Make an account on \href{https://accounts.osf.io/login?service=https://osf.io/myprojects/}{OSF}
\item
Do the following IRB trainings:
\end{enumerate}
\begin{itemize}
\tightlist
\item
These are done on the \href{https://www.research.ucla.edu/CITIProgram/}{CITI program} by creating a \href{https://ora.research.ucla.edu/OHRPP/Documents/Education/SSO_CITI_New_Acct.pdf}{UCLA SSO}, then add and complete the following
\item
UCLA HIPAA
\item
Human Research - Social \& Behavioral Researchers \& Staff
\item
Human Research- Biomedical Researchers \& Staff
\item
Once you have done them, please slack the lab manager the certificates for our records
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{9}
\tightlist
\item
Get an account on the IRB site (required before you can be added to IRBs!)
\end{enumerate}
\begin{itemize}
\tightlist
\item
Email Bridget the following info so that she can get you an account:
\begin{itemize}
\tightlist
\item
Your UCLA logon ID
\item
Your UCLA ID \# (9 digits)
\item
Your full name (first, middle, last)
\item
Your email address
\item
Your department and division
\end{itemize}
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{10}
\tightlist
\item
Train to collect data for whichever large study is currently going on in the lab
\end{enumerate}
\begin{itemize}
\tightlist
\item
Meet with lab manager \& Bridget to discuss study
\item
Read study wiki \& data collection protocols
\item
Meet with lab manager to address questions regarding protocols
\item
Shadow several data collection sessions
\item
Do a pilot session \& pass
\item
Run your first session reverse-shadowed by someone who is trained
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{11}
\tightlist
\item
Get your UCLA BruinCard
\end{enumerate}
\begin{itemize}
\tightlist
\item
Submit a message to the BruinCard center to request a new bruincard
\begin{itemize}
\tightlist
\item
Go to my.ucla.edu and log in
\item
Click the yellow ``Need help?'' in the top right corner, then click on ``message center''
\item
Submit a question to the BruinCard center, with the topic ``requesting new bruincard''
\end{itemize}
\item
In your message, include the following info:
\begin{itemize}
\tightlist
\item
UCLA ID number, contact phone, contact email, department, requested pickup date and time (Monday, Wednesday, Friday 10am-2pm), have you had a previous BruinCard? Was your online photo submission approved? (you will be prompted to upload a photo in the admissions process, if that was approved you can say yes)
\item
Attach a scanned copy of your govt issued photo ID
\end{itemize}
\item
Go pick up your card at the specified location at the time you were scheduled for
\begin{itemize}
\tightlist
\item
Make sure to bring the same photo ID you provided a copy of!
\end{itemize}
\item
Once you have your bruincard, let the lab manager know so that they can request lab access for you!
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{12}
\tightlist
\item
Learn how to make IRB amendments and navigate the UCLA IRB portal
\end{enumerate}
\begin{itemize}
\tightlist
\item
If you haven't already requested an IRB account, do that ASAP
\item
Once you have an account, meet with the lab manager to go over how to navigate the UCLA \href{https://webirb.research.ucla.edu/WEBIRB/Rooms/DisplayPages/LayoutInitial?Container=com.webridge.entity.Entity\%5BOID\%5BAC482809EC03C442A46F2C8EEC4D75D3\%5D\%5D}{IRB site}
\end{itemize}
\hypertarget{lab-protocols}{%
\chapter{Lab Protocols}\label{lab-protocols}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{meetings-and-training}{%
\section{Meetings and Training}\label{meetings-and-training}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{lab-meetings}{%
\subsection{Lab Meetings}\label{lab-meetings}}
We are happy to have a range of students join us for weekly lab meeting, whether you are an official member of the lab, or are just visiting -- we want a diversity of perspectives in the lab, so join in and make your voice heard.
You might be wondering why we need a protocol for a lab meeting? The answer is simple -- to make the meetings as time efficient, cohesive, and productive as possible. To achieve that goal, we follow a structured template for weekly lab meetings:
\hypertarget{meeting-blocks}{%
\subsubsection{Meeting Blocks}\label{meeting-blocks}}
The first layer to the lab meeting structure is to have `Meeting Blocks' which focus the content of our lab meetings for set periods of time (typically 3-5 weeks) on a particular topic. The topics of the Meeting Blocks are decided as a group and will be chosen for strategic purposes (e.g., if we are writing a grant or paper on a particular topic area we might assign a meeting block to that topic, likewise -- if we are exploring measures for a new study, we could assign a meeting block to searching for a range of measures and deciding on the best available). You can find a list of potential Meeting Block topics at the end of this document. If there is a topic of high general interest to the lab, we can also schedule a meeting block on it (even if we don't directly research that topic). At the end of each meeting block we will discuss the next block assignment as a group. If you have an idea for a meeting block, feel free to bring it up at the end of the current block (and add it to the list in this bookdown project).
\hypertarget{syllabus-development}{%
\subsubsection{Syllabus Development}\label{syllabus-development}}
The first step in a meeting block will be to develop a syllabus for the coming weeks. The syllabus can either be worked on as a group (e.g., in the first meeting of a new block), or one person can be in charge of developing the syllabus.
\emph{Roles \& Responsibilities}
\begin{itemize}
\tightlist
\item
  Select a sub-topic or research question for each meeting within the block.
\item
  Select a set of readings/materials (these can be movie clips, podcasts, etc.) to go through each week. Keep in mind that people have limited time to review the material for lab meeting, so assign one primary reading/material and place additional materials in a supplement, in case people wish to review further.
\item
  Make a document for the meeting and share it with all meeting attendees.
\item
  Make sure that people are signed up to lead each meeting in the block.
\item
  Be in charge of sending reminders for the meetings in the block.
\item
  Make any meeting notes at the end of each meeting, and make sure the paper doc is up to date at the close of the meeting.
\item
  Make a post on the BABLab Twitter for each meeting so people know what we are talking and thinking about.
\end{itemize}
\hypertarget{meeting-leaders}{%
\subsubsection{Meeting Leaders}\label{meeting-leaders}}
Each meeting will be assigned a meeting leader. The leader is the person who has chosen or been assigned the primary reading or media material for that week.
\emph{Roles and Responsibilities}
\begin{itemize}
\tightlist
\item
  Read/watch/listen to the media assigned for that week in detail.
\item
  Think about themes that can be brought up in the lab meeting to discuss as a group.
\item
  Be ready to facilitate the meeting and stimulate conversation.
\item
  Keep the meeting on track (practice those assertive conversation steering techniques!).
\end{itemize}
The meeting leader does NOT need to make slides, prepare food, or do anything else beyond the roles and responsibilities outlined above.
\hypertarget{meeting-attendee}{%
\subsubsection{Meeting Attendee}\label{meeting-attendee}}
It is not always possible to read/watch/listen to the media for every lab meeting in detail. That is why we assign one person (the meeting leader) to do a deep dive into the material each week. While a deep dive is not necessary, all meeting attendees are expected to be familiar with the media and topic of conversation each week so that they may contribute meaningfully to discussions.
\emph{Roles and Responsibilities}
\begin{itemize}
\tightlist
\item
  Familiarize yourself with the media being presented that week. If you have time, do a deep dive too.
\item
  Be thoughtful in the lab meetings and try to make constructive comments.
\item
  If you come across additional material that you think would be good to include in the lab meeting supplement, add it to the paper doc (on Dropbox).
\item
  Try to connect the discussions in lab meeting with the past meetings in the current meeting block, as well as with discussions in past blocks.
\end{itemize}
\hypertarget{potential-lab-meeting-block-topics}{%
\subsubsection{Potential Lab Meeting Block Topics}\label{potential-lab-meeting-block-topics}}
(in no particular order)
\begin{itemize}
\tightlist
\item
Sensitive periods in learning and memory
\item
Mind Brain Body Study: Questionnaires
\item
Role of the hippocampus in learning and memory across development
\item
Nutritional Psychiatry
\item
Nutrition and cognitive development
\item
How does early adversity or lifetime stress affect the microbiome?
\item
Bottom up: microbiome influences on brain and behavior
\item
Top down: brain and behavioral influences on microbiome
\item
Mind Brain Body Study: In lab task review
\item
Multivariate analytical techniques in fMRI
\item
Microbiome methods
\item
Electrogastrograph -- what do we know about the signal?
\item
Heart Rate Variability and early life stress
\item
Integrating physiological measures to enrich our understanding of behavior
\item
  A ``kind of crazy ideas, but wouldn't it be cool if they worked'' session.
\item
Research group highlight - we pick a research group (or even a general research topic) and review the body of work they engage in, or in the case of the research topic, who the big research groups in the field are.
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{trainee-tuesdays-thursdays}{%
\subsection{Trainee Tuesdays \& Thursdays}\label{trainee-tuesdays-thursdays}}
In order to encourage ``deep work'' time, we are implementing \emph{Trainee Tuesdays and Thursdays}!
All trainings, meetings, questions/concerns that will take longer than 10 minutes (unless URGENT) should be scheduled on Tuesdays and Thursdays if possible.
Please feel free to schedule a meeting if you'd like to discuss your research/work more deeply or learn a new skill.
If you are simply having an issue with an assignment, before you schedule a meeting with a lab manager we ask that you try the following steps in this order:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Check the OSF protocol - there might be step-by-step instructions for your issue in the BABLab OSF or study specific OSF protocols
\item
Watch a training video - if one exists for the issue/task at hand
\item
Consult a fellow RA - they may know what to do
\item
Consult a senior RA
\item
Make a list of notes in your RA notebook about the problems you are having and present them for discussion at Thursday's RA meeting
\item
Finally, schedule a one-on-one meeting with Emily or Kristen
\end{enumerate}
To do so, please create an event on the BABLab calendar.
Please create this event on the blue BABLab calendar using the template below during a time the lab manager is free. Invite yourself and the lab manager you'd like to meet with!
\emph{Title: Meeting - ``Meeting topic''}\\
\emph{Description: ``Brief meeting description''}\\
\emph{Guests: Individuals invited to the meeting}
Example:
\begin{figure}
\centering
\includegraphics{images/lab_protocols/trainee_tuesdays_thursdays/1.png}
\caption{}
\end{figure}
I (Emily) have also shared my personal calendar with the BABLab account, so you can see when I am available to meet with you. You can access it by selecting ``Emily Towner'' from ``Other calendars'' in the BABLab calendar. The off-white ``busy'' slots are times I am unavailable (doctor's appointments, non lab-related meetings etc.).
\begin{figure}
\centering
\includegraphics{images/lab_protocols/trainee_tuesdays_thursdays/2.png}
\caption{}
\end{figure}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{clinical-meetings}{%
\subsection{Clinical Meetings}\label{clinical-meetings}}
Purpose
The purpose of clinical meetings is to discuss and review ongoing clinical interviews (KSADS), troubleshoot any recent difficulties, and learn helpful interviewing tactics for future clinical interviews. During the meeting, you will present the team with background information from your clinical interview and walk through each supplement.
What To Prepare
Using a shared Dropbox Paper document, please prepare the following:
\begin{itemize}
\tightlist
\item
Who you are presenting
\begin{itemize}
\tightlist
\item
Participant's KSADS file
\item
Date/time of session
\end{itemize}
\item
Brief background
\begin{itemize}
\tightlist
\item
Was the child bio/adopted?
\begin{itemize}
\tightlist
\item
Age of adoption
\end{itemize}
\item
Was there any prenatal exposure?
\item
Any trouble in school?
\end{itemize}
\item
Supplements
\begin{itemize}
\tightlist
\item
      Your thought process on why you did or did not go through each supplement, and the diagnoses you have assigned
\end{itemize}
\item
Personal opinions
\begin{itemize}
\tightlist
\item
What was the child participant like in the session? (note relevant behaviors for context)
\end{itemize}
\item
WASI/WIAT
\begin{itemize}
\tightlist
\item
      A quick overview of the participant's WASI/WIAT (administration and scores)
\end{itemize}
\item
Questions for the team
\begin{itemize}
\tightlist
\item
Any situations you feel may have been difficult to address during the clinical interview
\end{itemize}
\end{itemize}
\emph{These meetings are also a safe space to debrief potentially difficult interviews.}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{mail}{%
\section{Mail}\label{mail}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{usps}{%
\subsection{USPS}\label{usps}}
When sending things out via USPS, you can place your recharge ID under the sender's address, circle it, \& drop it in the outgoing mail bin in 1282 (faculty mailroom).
\hypertarget{how-to-check-your-charges}{%
\subsubsection{How to check your Charges}\label{how-to-check-your-charges}}
\begin{itemize}
\tightlist
\item
  Go to the MDDS UCLA Mail Services site
\item
Click on MDDS Billing Data and sign in with your UCLA logon ID
\item
Navigate to the ``Billing Activity Review'' page
\item
  In the search bar, enter the Financial Services department code (0875) along with the appropriate recharge ID and the month and year
\item
  Under the Outgoing Mail Billing Activity, there will be a total cost of charges as well as the number of pieces of mail for this FAU during the month/year you selected
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{recycling-waste}{%
\section{Recycling \& Waste}\label{recycling-waste}}
We can leave small items outside our door for recycling/trash pickup. For large items we should bring them to the A-level loading dock to be recycled.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{purchasing}{%
\section{Purchasing}\label{purchasing}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{pac-orders}{%
\subsection{PAC Orders}\label{pac-orders}}
PAC forms are used for most purchasing requests (besides Amazon which we can order from directly with our Amazon business account). Please consult the \href{http://staff.purchasing.ucla.edu/Portal/app/agreements/agreementsummary.aspx}{UCLA preferred vendors list} first before submitting a PAC form for an outside vendor.
\begin{itemize}
\tightlist
\item
Save any quote to (BABLAB/Lab/Finances/Purchasing/)
\item
Check Trello purchasing board for existing item
\item
If no existing item, create one and add description based on templates
\item
Fill out blank PAC form located in (BABLAB/Lab/Documents/Financial\_templates/Purchasing/)
\item
Save to (BABLAB/Lab/Finances/Purchasing)
\item
Email the completed PAC order form to \href{mailto:[email protected]}{\nolinkurl{[email protected]}}
\begin{itemize}
\tightlist
\item
\textbf{Subject} - CB, {[}Fill in Vendor{]} Request, Bridget Callaghan
\item
CC' Bridget (\href{mailto:[email protected]}{\nolinkurl{[email protected]}}) - do not need signature if PI is cc'd
\end{itemize}
\item
Complete item order information on Trello purchasing board
\item
Save PO (purchase order) and CONF (confirmation) if received
\item
  Once the item is received, the lab manager logs the amount in the funds spreadsheet
\begin{itemize}
\tightlist
\item
Add in any tax/shipping/expense that wasn't accounted for on Trello to most expensive item
\item
Mark as ``Logged'' on Trello
\end{itemize}
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{amazon-orders}{%
\subsection{Amazon Orders}\label{amazon-orders}}
Instructions for checking out via our Amazon Business Account.
\begin{itemize}
\tightlist
\item
Check for existing item on Trello
\item
If existing item, move to ``To Order'' list, change label to not logged, and create new instance of purchase in description box
\item
To checkout via Amazon, Choose a Group
\begin{itemize}
\tightlist
\item
Upon clicking ``Proceed to Checkout'' you will arrive to the screen below. Select your fund manager's group and click continue:
\item
Be sure to select the correct group to avoid your order being rejected or sitting in a queue that is not being reviewed. In the event that your fund manager is out of the office, please check with the Business Office before starting your Amazon Business order so that we can add you to another group temporarily. Otherwise, the order will remain in your fund manager's queue until they are back in the office and able to approve orders.
\end{itemize}
\item
Business Order Information
\begin{itemize}
\tightlist
\item
Enter the Full Accounting Unit (FAU) or Recharge ID in the Purchase Order (PO) Number field and enter a business justification in the Comments for Approver field. These fields are required for the Psychology Department. If this information is not provided, your fund manager will reject the order.
\item
NOTE: Business justifications must describe the purpose of items being purchased, how and where the items will be used. Please be sure to be as detailed and specific as possible. If you are purchasing an item flagged as restricted your fund manager may reach out to you for additional information.\\
\item
Restricted items are not necessarily unallowable, but may require additional levels of approval from the Pcard Administrator in Purchasing before we can charge it to a Pcard.
\end{itemize}
\item
Next, select the appropriate shipping address
\item
Next, you will select the method of payment. This should be a VISA with your fund manager's name on the card. You do not have the option to edit this page and it is not necessary to include a reference number. Click continue.
\item
Review your order details and once confirmed, click on submit order for approval.
\item
Complete item order information on Trello and move to ``Submitted'' list
\item
Once placed, move item to ``Placed'' list on Trello
\item
Once item is received, lab manager to log amount in funds spreadsheet
\begin{itemize}
\tightlist
\item
Add any tax/shipping/expense that wasn't accounted for on Trello to the most expensive item
\item
Mark as ``Logged'' on Trello
\end{itemize}
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{reimbursement}{%
\subsection{Reimbursement}\label{reimbursement}}
For reimbursement:
\begin{itemize}
\tightlist
\item
Fill out a blank reimbursement form found in (BABLAB/Lab/Documents/Financial\_templates/Reimbursement/)
\item
Save reimbursement form to (BABLAB/Lab/Finances/Reimbursement)
\item
Email the completed reimbursement form to \href{mailto:[email protected]}{\nolinkurl{[email protected]}}
\begin{itemize}
\tightlist
\item
Subject - CB, {[}Fill in Vendor{]} Reimbursement, Bridget Callaghan
\item
CC' Bridget (\href{mailto:[email protected]}{\nolinkurl{[email protected]}}) - do not need signature if PI is cc'd
\end{itemize}
\item
Lab manager log reimbursement amount in funds spreadsheet
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{guest-parking-passes}{%
\subsection{Guest Parking Passes}\label{guest-parking-passes}}
\begin{itemize}
\tightlist
\item
Email Tyler Tuione (\href{mailto:[email protected]}{\nolinkurl{[email protected]}}) saying you would like to purchase guest parking passes.
\item
Information to include in this email:
\begin{itemize}
\tightlist
\item
Number of passes to order
\item
Recharge ID for fund to charge
\end{itemize}
\item
Wait for Parking Services to call the lab (about a week), record the confirmation code they give you.
\item
Pick up the passes with the confirmation code at 555 Westwood Plaza, Suite 100.
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{petty-cash}{%
\subsection{Petty Cash}\label{petty-cash}}
\begin{itemize}
\tightlist
\item
Fill out a blank IRB research payment request form (for cash or card) (BABLAB/Lab/Documents/Financial\_templates/Petty\_cash/)
\item
Send it to Brian Hoang (\href{mailto:[email protected]}{\nolinkurl{[email protected]}}) for a signature
\item
Submit the form at this \href{https://sa.ucla.edu/MessageCenter/OneStop/Home/PostMessage?topicId=293}{site}
\item
It can take up to 10 business days for them to reply.
\item
When they contact you with a delivery time, ensure that either of the people who signed the form (Bridget or an RA) is in the lab at the time of delivery to sign off on the order.
\item
They will not deliver the cash if one of the signers is not present
\item
Once the disbursement is received, log it on the study specific payment log
\item
Ask the lab manager to log the petty cash amount in the funds spreadsheet
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{vendor-specific-protocols}{%
\subsection{Vendor specific protocols}\label{vendor-specific-protocols}}
Some vendors have special requirements or instructions to make purchases from them.
\textbf{Biopac}
\begin{itemize}
\tightlist
\item
Email \href{mailto:[email protected]}{\nolinkurl{[email protected]}} and \href{mailto:[email protected]}{\nolinkurl{[email protected]}}
\end{itemize}
\textbf{Uprinting}
\begin{itemize}
\tightlist
\item
Go to Uprinting.com and log in.
\item
Select the items you want to purchase and add them to the cart.
\begin{itemize}
\tightlist
\item
Note that you need to have the pdf or image files on-hand and make sure they match the dimensions of what they will be printed on
\end{itemize}
\item
When checking out, select ``Terms'' as the payment method
\item
Create and submit a PAC form to purchasing as usual, but also cc' \href{mailto:[email protected]}{\nolinkurl{[email protected]}} and request that purchasing get in touch with her to pay for the order
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{logging-purchases-on-trello}{%
\subsection{Logging purchases on Trello}\label{logging-purchases-on-trello}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Go to the ``Purchasing'' board on Trello. It should be green. There are different tabs:
\end{enumerate}
\begin{itemize}
\tightlist
\item
\textbf{To Return}: items that will be returned
\item
\textbf{Maybe}: items that may be bought
\item
\textbf{To Order}: items to order/ buy
\item
\textbf{Submitted}: orders that have been submitted
\item
\textbf{Placed}: orders that have been placed
\item
\textbf{In Stock}: items that have arrived and are in lab
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\item
Add a card to ``To Order'' - name it with this format: \textbf{item being bought - \$price}
\item
Add the following labels:
\end{enumerate}
\begin{itemize}
\tightlist
\item
\textbf{Budget: Nonlogged} (always log this by default)
\item
\textbf{Fund} (ask lab manager whether it's Startup, R00, or other fund)
\item
\textbf{Category} (ask lab manager which category)
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\item
Add the link of the item on `add an attachment'. Rename the link the exact name of the item as written on Amazon (or whatever website).
\item
Add a description with this format:
\end{enumerate}
\begin{itemize}
\tightlist
\item
Units: (insert amount of item, ex. 20 pencils)
\item
Orders: (insert how many orders placed, ex. 1 order of 20 pencils)
\item
Date submitted: (insert date we submitted order)
\item
Date placed: (insert date vendor has placed order)
\item
Date received: (insert date we got it in lab)
\item
If the card is something that may run out eventually (ex. granola bars, notebooks) add an approximate due date.
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{5}
\tightlist
\item
Whenever an item has been submitted, placed, and in stock, move the card into its respective tab.
\end{enumerate}
Watch the video for a detailed explanation.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{fund-log---lab-manager}{%
\subsection{Fund Log - Lab Manager}\label{fund-log---lab-manager}}
\textbf{Items to add to the Fund Log}
\begin{itemize}
\tightlist
\item
Trello - Amazon purchases
\item
reimbursements
\item
purchases in Box (uPrinting should be stored on Box)
\item
petty cash
\item
staff researcher salaries
\item
USPS charges through MDDS (participant payments, magic boxes)
\item
FedEx charges through the financial report/receipts Brian sends
\item
lamination - must email them for receipts
\item
DNA Genotek on the financial report (and they should send receipts for each PO)
\item
guest permits through emailing Tyler
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{technology}{%
\section{Technology}\label{technology}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{slack}{%
\subsection{Slack}\label{slack}}
If you haven't already found this out for yourself, emails are a clunky way of communicating for most lab needs. Moreover, most people will find that they have a backlog of emails awaiting their attention. For this reason, we will use Slack for the primary means of lab communication.
The beauty of Slack is that you only subscribe to the channels that concern you. For messages to one person or a small group, use direct messages. If you have to include out-of-lab recipients, use e-mail. If you have a paper you want to share, download it and then upload it to Slack in the \#papers channel.
Full-time lab members should install Slack on their computers and/or phones and check it regularly (during working hours). Part-time lab members should also check Slack when they are working in the lab as there may be important messages in there for them.
Of course, if there is an emergency, and you need to contact Bridget, use her email or phone or drop into her office.
\begin{longtable}[]{@{}lll@{}}
\toprule
\begin{minipage}[b]{0.18\columnwidth}\raggedright
Slack Channel\strut
\end{minipage} & \begin{minipage}[b]{0.04\columnwidth}\raggedright
Type\strut
\end{minipage} & \begin{minipage}[b]{0.70\columnwidth}\raggedright
Purpose\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#bablab\_core\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Private\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
For private communication between the core team - this includes the PI, Lab Managers, Postdocs, and Grad Students\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#bablab\_ra\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Private\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
For private communication between the lab managers and all the research assistants\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#bablab\_senior\_ra\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Private\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
For private communication between the lab managers and the senior research assistants\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#diversity\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Public\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
For lab-wide communication regarding lab commitment to diversity, inclusivity, and allyship\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#general\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Public\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
For lab-wide communication and announcements\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#meetings\_lab\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Public\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
For notes or communication related to lab meetings\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#methods\_fmri\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Public\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
Sharing wisdom on fMRI data collection / analysis or asking (and answering) the fMRI questions of others\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#methods\_mb\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Public\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
Sharing wisdom on microbiome data collection / analysis or asking and answering the microbiome questions of others\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#notes\_conferences\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Public\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
For taking notes at conferences\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#papers\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Public\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
Sharing links to lab-relevant papers and discussing them\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#random\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Public\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
Non-work-related chatting -- e.g., pics of pets, funny cartoons etc.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#recruitment\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Public\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
Any ideas you have for recruiting youth into our study\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#rejection\_collection\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Public\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
A collection of rejections and reflections!\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#stats\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Public\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
To ask and answer questions about statistical analyses\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#study\_inside\_out\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Private\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
To discuss issues related to the EGG and Emotionality study\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#study\_pup\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Private\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
To discuss issues related to the Parenting Under Pressure Study\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#study\_mbb\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Private\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
To discuss issues related to the Mind, Brain, Body study\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#study\_transfer\_mental\_health\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Private\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
To discuss issues related to the Transfer Mental Health Study\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#tips\_coding\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Public\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
Sharing wisdom on code writing or asking (and answering) the coding questions of others\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.18\columnwidth}\raggedright
\#writing\_group\strut
\end{minipage} & \begin{minipage}[t]{0.04\columnwidth}\raggedright
Private\strut
\end{minipage} & \begin{minipage}[t]{0.70\columnwidth}\raggedright
For writing accountability and motivation\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{box}{%
\subsection{Box}\label{box}}
We have moved over to Box for our file storage service. This works very similarly to Google Drive or Dropbox, but is more secure. Additionally, each lab member can have their own account; it's free and great for collaboration!
Please download \href{https://www.box.com/drive}{Box Drive} to use.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Click download for your operating system
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/box/1.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
After installing, you may need to click allow in your security preferences
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/box/2.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Log in with your UCLA email (make sure to accept the Box sharing request first)
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/box/3.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
Now you can use Box on your desktop.
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/box/4.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{4}
\tightlist
\item
On the web version, change your notification preferences so that you don't get an email every time someone uploads a file by unchecking the boxes below
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/box/5.png}
\caption{}
\end{figure}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{using-trello}{%
\subsection{Using Trello}\label{using-trello}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
There are multiple lists on the Tasks Board!
These include: Doing, To-do, Later and Done.
Depending on the task, simply move it to the right list once you progress with it.
\end{enumerate}
\begin{itemize}
\tightlist
\item
\textbf{To Do:} Current tasks to complete.
\item
\textbf{Doing:} Tasks currently being done.
\item
\textbf{Later}: Tasks not as pressing, but still must be done.
\item
\textbf{Done}: Completed tasks.
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
\textbf{To add a card}: Click `+ Add Another Card' under the appropriate list. There are multiple functions within this:
\end{enumerate}
\begin{itemize}
\tightlist
\item
You can add members, labels (useful for studies), checklist, attachment, due date and more to the back of the card. This information will show when you click on the card.
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
\textbf{Show Menu function:} This is a great way to search for specific items, such as your own name to find your tasks, the study for which there are tasks, or tasks with upcoming due dates.
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{server}{%
\subsection{Server}\label{server}}
In addition to Box, we make biweekly backups to a dedicated Psychology Department server (as well as to two external drives).
To connect to the CallaghanLab server:
Contact the lab manager first to set up your credentials.
On a Mac --
\begin{itemize}
\tightlist
\item
From the dropdown menu under ``Go'', select ``Connect to Server\ldots{}'' (Apple + K)
\item
Enter the network/server address: \texttt{smb://pythia.psych.ucla.edu/Users/CallaghanLab/}
\item
Click on ``Connect''.
\item
A dialogue box will prompt you for your credentials. Enter your credentials obtained from Psychology IT and click on ``OK''.
\item
If everything was entered correctly from above, the mapped drive will appear under ``Shared'' in the Mac's Finder.
\end{itemize}
On a PC --
\begin{itemize}
\tightlist
\item
From the Windows file explorer, right mouse click on ``Computer'' for Windows 7 or ``This PC'' on Windows 8/10.
\item
Select ``Map network drive''.
\item
Specify an available ``Drive'' letter from the dropdown menu.
\item
Enter the network/server location for the ``Folder'' field and click on ``Finish''.
\begin{itemize}
\tightlist
\item
Network/server location: \texttt{\textbackslash{}\textbackslash{}pythia.psych.ucla.edu\textbackslash{}Users\textbackslash{}CallaghanLab\textbackslash{}}
\end{itemize}
\item
Enter your username and password that was provided by Psychology IT in the ``network credentials'' popup dialogue box and click on OK.
\item
If everything was entered correctly from above, the mapped drive will appear under ``Network locations'' when you click on ``Computer/This PC''.
\item
After the drive has been mapped, log out of Windows to ``log out'' of the network drive.
\item
Don't right mouse click on the mapped drive and select ``Disconnect''. This will only unmap the network drive and you will have to go through the process all over again.
\end{itemize}
To connect from off campus, connect to the UCLA/BOL VPN and let it run in the background before logging into the mapped drive you configured on your computer.
How to download/install the \href{https://help.bol.ucla.edu/kb_view.do?sysparm_article=kb0010923}{Cisco VPN client}.
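If you want to double-check the connection from a script (for example, before copying backup folders over by hand), the short Python sketch below verifies that the share is reachable. The mount point is an assumption -- on a Mac the share above usually appears under \texttt{/Volumes/CallaghanLab} once connected -- so adjust the path to whatever your machine shows.
\begin{verbatim}
# check_server_mount.py - verify the lab share is mounted before copying backups
# NOTE: the mount point below is an assumption; adjust it to match your machine.
from pathlib import Path

MOUNT_POINT = Path("/Volumes/CallaghanLab")

if MOUNT_POINT.is_dir():
    print("Server share is mounted at", MOUNT_POINT)
    # List the top-level folders to confirm you are looking at the right share
    for entry in sorted(MOUNT_POINT.iterdir()):
        print("  ", entry.name)
else:
    print("Share not mounted - connect via Finder (Go > Connect to Server) first.")
\end{verbatim}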
\begin{quote}
Every night the server is backed up to the Life Sciences data center in Hershey Hall. That's always been the case. To make those nightly backups more safe, there is another copy of the backups stored offsite (i.e.~to prevent losing both the server AND the backups in a fire, earthquake, etc.)
Once we have Shadow Copy enabled, we'll also have more direct access to backups, so we won't need to work with Life Sciences to retrieve backups. Psych IT will be able to grab a recent copy of your files/folders ourselves. We'll also have access to incremental backups (i.e.~yesterday's copy, two day old copy, three day old copy\ldots{}up to two weeks back).
So at that point we'll have 3 forms of backup, and plenty of safety net.
\begin{itemize}
\tightlist
\item
Dave (Psych IT)
\end{itemize}
\end{quote}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{dropbox-paper}{%
\subsection{Dropbox Paper}\label{dropbox-paper}}
The lab has a shared Dropbox Paper account --- which is slightly different than regular Dropbox file storage. On Dropbox Paper, we will place collaborative documents. We will grant you access permission to various folders in the Dropbox Paper account; you may need to set up an account with the email address to which we grant access.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{github}{%
\subsection{GitHub}\label{github}}
The lab's GitHub should be used to share code and data with people outside of the lab (i.e., people not on our IRB). Not all data can be shared (because of IRB restrictions) and not all data that can be shared should be shared immediately. Speak with Bridget about when to share data, and what needs to be done to the data (e.g., the level of de-identification required) before we share it. Ask the lab manager to get access to the lab's GitHub.
Our lab manual, lab wiki, and study wikis are also hosted on our GitHub.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{google-calendars}{%
\subsection{Google Calendars}\label{google-calendars}}
The lab has many Google calendars and you should subscribe to those that make sense for your unique situation.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
\textbf{BABLab:} Used for lab meetings, out of schedule meetings, birthdays, formal lab events etc.
\item
\textbf{Availability:} If you are part time, please place the hours you plan to come into the lab on this calendar. If you are going to be away, please place the dates and times on this calendar. This is critical as the lab manager will use this information when scheduling people to run participants for our studies. Bridget and the core team will also put their out-of-office times on this calendar to help people with scheduling.\\
\item
\textbf{MBB:} Used for booking sessions and reminders for the Mind, Brain, Body study
\item
\textbf{The Bear's Den:} used to reserve time in experimental room 1
\item
\textbf{The Rainbow Room:} used to reserve time in experimental room 2
\item
\textbf{A180 Testing Computer:} the SAND Lab room that can be used for blood spots
\item
\textbf{HPL1333:} The Health Psychology Lab room that can be used for blood spots
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{e-mail}{%
\subsection{E-mail}\label{e-mail}}
We have an email listserv for communicating with the whole lab and individuals who subscribe to our list - including visitors and students from other labs who attend our meetings, visiting scholars, etc.
The email is: \textbf{\href{mailto:[email protected]}{\nolinkurl{[email protected]}}}
If you are thinking about joining the lab and would like to be notified about upcoming lab meetings, please request to join the listserv.
There is also a lab email account which people use to contact the lab to participate in studies (\href{mailto:[email protected]}{\nolinkurl{[email protected]}}). This email account will be staffed by the lab manager/s and they will sort the emails in specific folders within the Gmail account. If you are running a study, it is your responsibility to check your study's folder on the lab Gmail every few days and respond to participant inquiries in the ``potential'' participants folder in relation to your study.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{gmail-organization}{%
\subsection{Gmail Organization}\label{gmail-organization}}
The \href{mailto:[email protected]}{\nolinkurl{[email protected]}} email is the lab email for all lab-related correspondence. The email is organized with different folders and tabs for all lab and lab studies correspondence.
\begin{figure}
\centering
\includegraphics{images/lab_protocols/gmail/gmail_1.png}
\caption{}
\end{figure}
\textbf{Lab}: Overarching tab for all lab functioning related emails
\begin{itemize}
\item
\textbf{Calendar}: All emails about calendar events go in this tab (invitations to sessions, automatic emails Google sends about calendar events, etc)
\item
\textbf{Other}: Google Voice emails about missed calls or texts, anything not important that doesn't belong in any other email tab
\item
\textbf{Purchasing}: Emails about purchases for lab, Amazon purchase confirmation emails
\item
\textbf{RA\_app\_archive}: Tab for RA applications that will no longer be reviewed / are not being considered, because a long time has passed since the person initially expressed interest or the application is otherwise outdated
\item
\textbf{RA\_app\_ongoing}: Tab for emails and app materials of individuals currently being interviewed for a position in the lab. Emails only stay in this tab during the hiring process and are then moved to either RA\_app\_archive or RA\_applications
\item
\textbf{RA\_applications}: Tab for recent RA applications that can be reviewed if we're looking to hire someone
\item
\textbf{Save}: Tab for \emph{important} emails that don't fit anywhere else
\item
\textbf{Security}: Tab for emails from Zoom about accessing calendar, security alert emails from Google, and any other security related emails
\end{itemize}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/gmail/gmail_2.png}
\caption{}
\end{figure}
\textbf{Recruitment}: Overarching tab for all emails about recruitment
\begin{itemize}
\item
\textbf{Ads}: Tab for emails about any ads we run -- Facebook, Instagram, Craigslist, Patch, Nextdoor, etc.
\item
\textbf{Community\_contacts}: Tab for emails with organizations or individuals we've reached out to for potential collaborations (e.g.~influencers, community partners)
\end{itemize}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/gmail/gmail_4.png}
\caption{}
\end{figure}
\textbf{Studies}: Overarching label for the folders of different studies in the lab
\begin{itemize}
\item
\textbf{Inside\_Out}: Tab that has all correspondence related to the Inside Out study previously completed in the lab
\item
\textbf{Mind\_brain\_Body}: Tab for all correspondence related to the ongoing longitudinal Mind Brain Body study
\item
\textbf{Parenting\_Under\_Pressure}: Tab for correspondence related to the Parenting Under Pressure study previously completed in the lab
\end{itemize}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/gmail/gmail_3.png}
\caption{}
\end{figure}
\textbf{Mind\_Brain\_Body}: Tab with all emails related to the Mind Brain Body study
\begin{itemize}
\item
\textbf{Added}: Tab for emails with individuals we've added to the participant database but that aren't enrolled in MBB; interest forms that participants fill out on website
\item
\textbf{MBB\_online\_scheduled}: Emails with participants that are enrolled in MBB Wave 1 online
\item
\textbf{MBB\_Wave\_2}: Emails with all the people that are now in Wave 2 of MBB (either already enrolled in Wave 2 or we are in contact about enrolling them in Wave 2)
\item
\textbf{Potential}: Tab where correspondence goes for anyone that \emph{has not been added to REDCap or our participant database} but is interested in the Mind Brain Body study.
\item
\textbf{Testimonials}: We add this label to any email thread that has a testimonial about MBB (we sometimes ask participants if they'd like to give a testimonial for website)
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{listserv}{%
\subsection{ListServ}\label{listserv}}
\textbf{Protocol: Adding MBB Participants to the BabLab Newsletter Google Group}
The Brain and Body Lab has a Newsletter Google Group where we send previous study participants newsletters and updates regarding the research within our lab. It is important to consistently add new study participant emails to this list so they can receive our newsletters!
The BabLab's Google Groups can be found here.
Find the group called ``Brain and Body Lab News!'' -- this group contains previous study participants within the lab.
Click on the ``People'' tab to view and add new members to the group.
\includegraphics{images/lab_protocols/listserv/listserv_1.png}
Filter the list by ``join date'' to view the newest members added to the group -- and locate these members on the Participant Database. This makes it clear when the Newsletter Google Group was last updated and which names from the Participant Database need to be added to the Google Group.
\begin{figure}
\centering
\includegraphics{images/lab_protocols/listserv/listserv_2.png}
\caption{}
\end{figure}
Click ``Add Members'' and enter any new emails from the Participant Database one at a time in the ``Group Members'' box.
\begin{figure}
\centering
\includegraphics{images/lab_protocols/listserv/listserv_3.png}
\caption{}
\end{figure}
Enter the following message in the ``Welcome Message'' box before sending.
\textbf{``Hi there!
You are receiving this message because you may have indicated that you are interested in opting in to the UCLA Brain and Body Lab's newsletter, updates, and findings! Thanks for signing up to receive our emails- we're excited to welcome you to the BAB Lab community! As a part of our community, you will be hearing about all things new and exciting in our team and our research. We couldn't be more excited to have you! (Note: If you would like to be removed from this group, please feel free to ``unsubscribe'' below.)
Sincerely,
The BAB Lab Team''}
Select ``Add Members'' and all new emails will be directly added to the BabLab Newsletter Google Group.
\textbf{Protocol: Adding all RA applicants to the BabLab Google Group}
The Brain and Body Lab also has a Google Group for students who have applied to be RAs in the lab -- so that anyone who is interested in our lab can get lab updates and join in on lab meetings.
The BabLab's Google Groups can be found here.
Find the group called ``BabLab'' -- this group includes: PI (Bridget), postdocs, graduate students, lab managers, paid research assistants/technicians, undergraduate research assistants and volunteers, students from other labs who attend our meetings, visiting scholars, etc.
Click on the ``People'' tab to view and add new members to the group.
\includegraphics{images/lab_protocols/listserv/listserv_4.png}
Filter the list by ``join date'' to view the newest members added to the group -- and locate these emails in the BabLab Gmail, in the folders ``RA\_applications'' and ``RA\_app\_archive''. This makes it clear when the BabLab Google Group was last updated and which names/emails from the BabLab Gmail need to be added to the Google Group.
\includegraphics{images/lab_protocols/listserv/listserv_5.png}
Click ``Add Members'' and enter any new emails from the RA application folders within Gmail, one at a time in the ``Group Members'' box.
\includegraphics{images/lab_protocols/listserv/listserv_6.png}
Enter the following message in the ``Welcome Message'' box before sending.
\textbf{``Hello there!
We hope you are well! You are receiving this invitation to join our Brain and Body Lab google group, as you may have indicated on our google form that you were interested in hearing about BABLab news and about our lab meetings. We use this google group to send BABLab updates, newsletters, lab meeting information, and more! If you have changed your mind, please follow the unsubscribe link below. Thank you!
Best Regards,
The Brain and Body Lab Team''}
Select ``Add Members'' and all new emails from interested RAs will be directly added to the BabLab Google Group.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{mac-os---catalina}{%
\subsection{Mac OS - Catalina}\label{mac-os---catalina}}
If you upgrade your Mac operating system to Catalina and wish to run tasks in PsychoPy, you must enable the settings shown in the image below.
\begin{figure}
\centering
\includegraphics{images/lab_protocols/catalina/1.png}
\caption{}
\end{figure}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{redcap}{%
\subsection{REDCap}\label{redcap}}
\hypertarget{creating-events}{%
\subsubsection{Creating Events}\label{creating-events}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Click into Project Setup
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/redcap/2.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Click to either Define Your (New) Event or to Designate Instruments to your Events
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/redcap/3.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Select which Arm you want to designate instruments to
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/redcap/4.png}
\caption{}
\end{figure}
\hypertarget{entering-instruments}{%
\subsubsection{Entering Instruments}\label{entering-instruments}}
\hypertarget{using-the-test-logic-feature}{%
\subsubsection{Using the test logic feature}\label{using-the-test-logic-feature}}
You can use the ``test logic with a record'' feature to see whether a question will be shown for a specific participant.
\begin{figure}
\centering
\includegraphics{images/lab_protocols/redcap/1.png}
\caption{}
\end{figure}
\hypertarget{custom-record-dashboards}{%
\subsubsection{Custom Record Dashboards}\label{custom-record-dashboards}}
\begin{itemize}
\tightlist
\item
To create a custom record dashboard on the Record Status Dashboard that filters participants based on certain characteristics, open your REDCap project, navigate to the Record Status Dashboard, and click ``Create custom dashboard''
\end{itemize}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/redcap/5.png}
\caption{}
\end{figure}
\begin{itemize}
\tightlist
\item
Refer to the following custom record dashboard settings. Fill out the dashboard title, header orientation, group columns by event, filter logic (based on your qualifying variable; see the example after the figure below), filter by arm (indicate the arm this dashboard applies to), and sort by (how the items in the dashboard should be sorted):
\end{itemize}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/redcap/6.png}
\caption{}
\end{figure}
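As an example of filter logic, suppose the project had a yes/no field named \texttt{wave\_2\_eligible} (a hypothetical field name -- substitute your own qualifying variable). The filter logic to show only eligible participants would then look something like \texttt{{[}wave\_2\_eligible{]} = '1'}: REDCap logic wraps field names in square brackets and compares them against the coded values defined in the data dictionary.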
\hypertarget{backend-of-instument-settings}{%
\subsubsection{Backend of Instrument Settings}\label{backend-of-instument-settings}}
CHECKLIST for REDCap Survey backends:
\begin{itemize}
\tightlist
\item
title of survey
\item
delete survey instructions
\item
color: BABLAB
\item
allow respondents to return without needing a return code
\item
auto continue to next survey
\item
delete survey completion text
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{r}{%
\subsection{R}\label{r}}
To install R:
\begin{itemize}
\tightlist
\item
Go to \href{http://www.r-project.org}{R Project}
\item
Click the ``download R'' link in the middle of the page under ``Getting Started''
\item
Select a CRAN location (a mirror site) and click the corresponding link
\item
Click on the ``Download R for (Mac) OS X'' link at the top of the page
\item
Click on the file containing the latest version of R under ``Files''
\item
Save the .pkg file, double-click it to open, and follow the installation instructions
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{rstudio}{%
\subsection{RStudio}\label{rstudio}}
To install RStudio:
\begin{itemize}
\tightlist
\item
Go to \href{http://www.rstudio.com}{RStudio} and click on the ``Download RStudio'' button
\item
Click on ``Download RStudio Desktop''
\item
Click on the version recommended for your system, or the latest Mac version, save the .dmg file on your computer, double-click it to open, and then drag and drop it into your Applications folder
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{python}{%
\subsection{Python}\label{python}}
To install Python:
\begin{itemize}
\tightlist
\item
Download the \href{https://www.anaconda.com/distribution/}{Anaconda Distribution}
\item
Be sure that you download the Python 3.7 version. (Note, this can take upwards of 1-2 hours depending on your internet connection)
\item
Helpful instructions for these checks can be found on the Anaconda User Guide website: \href{https://docs.anaconda.com/anaconda/user-guide/getting-started/}{``Getting Started''}
\item
Any issues are most likely due to incorrect installation, which is addressed in the \href{https://docs.anaconda.com/anaconda/user-guide/faq/}{FAQ page}; a quick verification sketch follows below
\end{itemize}
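To confirm the installation worked, you can run a short script with the newly installed interpreter. This is only a minimal sketch -- the exact executable path and package versions will differ from machine to machine.
\begin{verbatim}
# check_install.py - minimal sanity check after installing Anaconda
import sys

# The executable path should point into your anaconda3 installation
print("Python executable:", sys.executable)
print("Python version:", sys.version.split()[0])

# Anaconda bundles common scientific packages; confirm one imports cleanly
import numpy
print("numpy version:", numpy.__version__)
\end{verbatim}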
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{vpn-to-lab-computers}{%
\subsection{VPN to Lab Computers}\label{vpn-to-lab-computers}}
We have set up a VPN on the Mac mini in the Bear Den (on the left side of the room).
Because this computer has the AcqKnowledge software installed on it, and has the USB key for that program, you need to use that computer for processing any physiology data. If you are off campus, you can VPN to the computer using the following steps. NB: you will need to have downloaded Cisco AnyConnect - you can access that by clicking \href{https://www.it.ucla.edu/it-support-center/services/virtual-private-network-vpn-clients}{here}
Step 1. Open Cisco and type `ssl.vpn.ucla.edu' and press `connect'
\begin{figure}
\centering
\includegraphics{images/lab_protocols/cisco/1.png}
\caption{}
\end{figure}
Step 2. Type in your UCLA ID and Password (same as you use to get into email)
Step 3. Accept the SSO push to your second device
Step 4. Click `Accept'
Step 5. Go to the magnifying glass at the top right of your screen and search for Screen Sharing.
Step 6. Type in the IP address for the Bear Den computer: 164.67.125.42
Step 7. Type in the username and pw for the Bear Den computer:
Username - Brain \& Body Lab
PW - BaBLaB
Step 8. You will see a new window pop up, with a desktop, which is the desktop of the Bear Den computer.
A second option would be to go to the Finder on your Mac, click `Go' on the top menu bar, and then click `Connect to Server'. Type in vnc://164.67.125.42. This will take you straight to the screen sharing page, where you can then perform Steps 7 \& 8 from above.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{google-voice}{%
\subsection{Google Voice}\label{google-voice}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi})}
\tightlist
\item
Go to the app store and search for Google Voice and download the app. The app icon should look like the picture.
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/google_voice/pic1.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi})}
\setcounter{enumi}{1}
\tightlist
\item
When you open the app you will be given the option to choose what account to sign into for Google Voice. Click on add another account to add the BaB lab account. If you are already signed into the BaB lab email on your phone then it will likely already show the account as a sign-in option.
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/google_voice/pic2.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi})}
\setcounter{enumi}{2}
\tightlist
\item
That will open this window in the app and you'll sign into the BaB lab account.
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/google_voice/pic3.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi})}
\setcounter{enumi}{3}
\tightlist
\item
Once you're signed in it will show you this page to choose your phone number. Choose skip since your phone number won't be an option yet.
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/google_voice/pic4.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi})}
\setcounter{enumi}{4}
\tightlist
\item
Now go to settings by clicking the three bars on the top left of the main page of Google Voice.
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/google_voice/pic5.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi})}
\setcounter{enumi}{5}
\tightlist
\item
Click on devices and numbers to add your phone number to send and receive calls from your phone.
\end{enumerate}
Note: There are some pre-chosen settings, like ``forward messages to email''; do not change those. You can customize your notification settings to your preference.
\begin{figure}
\centering
\includegraphics{images/research_protocols/google_voice/pic6.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi})}
\setcounter{enumi}{6}
\tightlist
\item
Add your phone number by choosing ``new linked number''. It will ask you to input your phone number and then send you a confirmation code. Once you input the code you're all set up!
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/google_voice/pic7.png}
\caption{}
\end{figure}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{burning-cds}{%
\subsection{Burning CDs}\label{burning-cds}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Connect an external DVD drive if your Mac doesn't have a built-in optical drive.
\item
Insert a blank disc into your drive. This pop-up will appear. Click OK. The disc icon will appear on your desktop.
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/cd_burning/1.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Create a new folder with the content that you want to burn. Right-click on the folder and select ``Burn {[}Folder Name{]} to Disc''
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/cd_burning/2.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
This box will pop up. Rename the file if you wish. Then select ``Burn''.
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/cd_burning/3.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{4}
\item
Once the burning process is complete, the disc icon will appear on your desktop with the name you gave it.
\item
Right-click on the CD and click ``Eject CD''. That's it!
\end{enumerate}
To watch this in a video: \url{https://www.youtube.com/watch?v=ZCBKSkfnqX8}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{high-to-low-res-video}{%
\subsection{High to low res video}\label{high-to-low-res-video}}
Step 1. Open the file you wish to convert in QuickTime. QuickTime automatically comes with all Mac devices, so no need to download it if you own a Mac. Usually when you open the video after you download it, the video automatically opens in QuickTime.
Step 2. Choose File, then mouse down to~Export, and choose an option from the Export menu.
Here, you will see 3 video resolution options and 3 other options:
\begin{itemize}
\tightlist
\item
1080p: QuickTime movie using H.264 or HEVC (H.265), up to 1920x1080 resolution.
\item
720p: QuickTime movie using H.264, up to 1280x720 resolution.
\item
480p: QuickTime movie using H.264, up to 640x480 resolution.
\end{itemize}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/video_resolution/1.png}
\caption{}
\end{figure}
Step 3. Choose size 480p. Then Save to your computer.
Step 4. Go to Box: Studies -\textgreater{} MBB -\textgreater{} Data -\textgreater{} Wave\_1\_online -\textgreater{} Wave\_1\_online\_parent\_child\_interactions
Step 5. Go to the folder with the old video. Click the three dots shown below. Select ``Upload New Version''. That's it!
\begin{figure}
\centering
\includegraphics{images/lab_protocols/video_resolution/2.png}
\caption{}
\end{figure}
\hypertarget{equipment}{%
\section{Equipment}\label{equipment}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{biopac}{%
\subsection{Biopac}\label{biopac}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{printer}{%
\subsection{Printer}\label{printer}}
\begin{itemize}
\item
Make sure you are connected to eduroam wifi
\item
Open up Printer \& Scanners in System Preferences
\begin{itemize}
\tightlist
\item
If the current printer is not working, right-click the printer and click Reset Printing System
\includegraphics{images/lab_protocols/printer/1.png}
\end{itemize}
\item
Reset Computer
\item
Open up System Preferences -- Printers \& Scanners
\item
Click on + sign to add a printer
\item
Enter IP Address from Printer: 164.67.125.35
\includegraphics{images/lab_protocols/printer/2.png}
\item
Make sure the ``Use'' field displays: EPSON WF-7710 Series
\item
Click ADD
\item
Reset your Printer Presets if needed
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{social-media}{%
\section{Social Media}\label{social-media}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{instagram}{%
\subsection{Instagram}\label{instagram}}
The pride and joy of the lab. There is a lot of content to keep track of and to ensure is posted weekly.
General important rules:
\begin{itemize}
\tightlist
\item
Keep posts short, family-friendly, and accessible.
\item
If you post on the story, it should also likely be added to a story highlight.
\item
Stick to the color scheme and aesthetic (this includes matching the text in story highlights to the story highlight cover color).
\item
Maintain the integrity of the main feed grid (will be elaborated on further down).
\item
Maintain the consistency of the Lab's hashtags (will be elaborated on further down).
\end{itemize}
Feed Content:
\begin{itemize}
\tightlist
\item
The feed grid is an important part of the aesthetic of the lab's social media. We can divide the grid into ``A Week'' and ``B Week'' rows. Because there are 3 posts horizontally in the grid, there should be 3 pieces of content posted each week (or with relative consistency).
\end{itemize}
``A Week'':
\begin{itemize}
\tightlist
\item
Biome Bites! ad post: This is simply a post saying to check the story for this week's Biome Bites! installment. The caption for this should be brief and maybe reference the content in the actual story post.
\item
Lab Meeting ad post OR Email List ad post: Post this on lab meeting day in ``A Week''. If there is a speaker or specific topic for the week, discuss that briefly in the caption.
\item
In ``B Week'', don't post this even if there is a lab meeting. Instead, post a previous Lab Meeting ad post on the story. If there is not a lab meeting during that ``A Week'', you should post the Email List ad post instead.
\item
Brain Bites! ad post: This is simply a post saying to check the story for this week's Brain Bites! installment. The caption for this should be brief and maybe reference the content in the actual story post.
\end{itemize}
``B Week'':
\begin{itemize}
\tightlist
\item
3 random posts: post pictures from around the lab, from events, or advertisements for upcoming events. Check the existing feed for ideas, and try to stay current with seasons, trends, etc.
\item
If there is an upcoming event, use the following template to advertise it.
\end{itemize}
Important notes for all feed content:
\begin{itemize}
\tightlist
\item
Post all posts to both Instagram and Facebook (integrated feature on Instagram)
\item
End every single post with the following: \#brain
\item
Add 2-3 topical hashtags on the new line afterwards, and then follow that with the following block of hashtags: \#funscience \#psychology \#neuroscience \#research \#lablife \#ucla \#gutbiome \#dev \#psych \#brain \#body \#adolescence \#childhood \#ela \#losangeles \#scientist
\end{itemize}
Story Content:
\begin{itemize}
\tightlist
\item
\textbf{Q\&A Monday:} every Monday, post the Q\&A Monday story with Instagram's questions feature attached. Check periodically throughout the day to see if there are any questions worth responding to. Post any responses on the Q\&A story highlight.
\item
\textbf{Biome Bites!:} A weekly fun fact about the microbiome. Try to stay scientific (with citations) and avoid product/treatment recommendations that might be trendy or controversial. Post the bite itself on the story, and every other week advertise it with a main feed post. Add to the Weekly Bites story highlight.
\item
\textbf{Lab Meeting ad:} on lab meeting days, post one of the ad posts on the story.
\item
\textbf{Brain Bites!:} A weekly fun fact about the brain/developmental psych. Try to stay scientific (with citations) and avoid product/treatment recommendations that might be trendy or controversial. Post the bite itself on the story, and every other week advertise it with a main feed post. Add to the Weekly Bites story highlight.
\item
\textbf{Contact Story ad:} on Fridays, post the contact story post.
\end{itemize}
When events are coming up, be sure to post frequently on the story about the date, time, and what activities we will be doing.
There is a whole series of story templates made to show off different activities and share information regarding the event.
Where to Find the Designs:
\begin{itemize}
\tightlist
\item
All of the above designs for social media posts are on our Canva site.
\item
If you need to adjust any of the designs, feel free to do so.
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{facebook}{%
\subsection{Facebook}\label{facebook}}
Most of the content will carry over from Instagram because the accounts are linked.
Just ensure that you stay active with checking notifications and responding to comments.
Consistency across all of the lab materials is the most important thing to maintain for our online footprint.
If you are unsure of what a post should look like, check out previous posts and highlights for ideas!
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{recruitment}{%
\section{Recruitment}\label{recruitment}}
\hypertarget{facebook-ads}{%
\subsection{Facebook Ads}\label{facebook-ads}}
One way we recruit participants for MBB is by running ads on Facebook through our lab page. Below is all the relevant information for running a Facebook ad. Facebook has been successful for recruiting parents.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
First go to the Brain and Body Facebook page and click on the Ad Center tab.
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/fb_ads/1.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Once on the Ad Center, click on Create Ad
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/fb_ads/2.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\item
Choose ``create new ad'' for the ad type.
\begin{itemize}
\tightlist
\item
Do \emph{not} choose the automated ad option because the Facebook algorithm doesn't understand our ad needs.
\end{itemize}
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/fb_ads/3.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
For the goal, choose ``get more website visitors''; if promoting a post, choose ``get more page likes''
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/fb_ads/4.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{4}
\item
Fill in the ad description
\begin{itemize}
\item
keep the description short and general
\item
do not mention payment or other words that could cause the ad to be flagged (mentions of money, ELS-related words like adopted or guardianship children)
\item
typically the BABLab email is included so they can contact us
\end{itemize}
\end{enumerate}
5.1. Add the ad picture by clicking ``choose image''
\begin{itemize}
\item
the picture should also have minimal writing so it runs successfully
\item
it is okay to say adopted / guardianship in the picture since picture words won't get flagged
\item
include the IRB approval line at the bottom of the ad picture when making it on Canva
\end{itemize}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/fb_ads/5.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{5}
\tightlist
\item
Put the URL for the lab website page you want to direct participants to.
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/fb_ads/6.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{6}
\item
For the audience, choose ``People you choose through targeting'' and click the pencil to edit the audience.
\includegraphics{images/lab_protocols/fb_ads/7.png}
\item
Choose a location for the ad to run and add appropriate filters to further target individuals
\begin{itemize}
\item
examples of filters used: adoption, parents, parents with young children, international adoption
\item
it is best to start with a bigger audience and then fine tune filters as needed
\end{itemize}
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/fb_ads/8.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{8}
\item
Choose how long you want the ad to run
\begin{itemize}
\tightlist
\item
let it run for at least three days with a budget of at least \$3 a day to maximize the chances of it running well.
\end{itemize}
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/fb_ads/9.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{9}
\tightlist
\item
Look at the ad previews to make sure everything is correct and the picture is not cropped incorrectly
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/fb_ads/10.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{10}
\tightlist
\item
This gives an estimate of the number of people we will reach and how much it will cost. Once done looking through it, click the submit button at the bottom of the page.
\includegraphics{images/lab_protocols/fb_ads/11.png}
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{instagram-ads}{%
\subsection{Instagram Ads}\label{instagram-ads}}
Another way we recruit participants for MBB is by running ads on Instagram through our lab page. Below is all the relevant information for running an Instagram ad. Instagram has been successful for recruiting teens and young adults but not very successful for recruiting lots of parents thus far.
To run an Instagram ad you need to promote a post.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Click the plus sign in the top right corner of the screen, then choose post.
\includegraphics{images/lab_protocols/ig_ads/6.png}
\item
Choose a picture and click next. On the next page it gives you options for filters; just skip that.
\includegraphics{images/lab_protocols/ig_ads/7.png}
\item
Write the caption for the post you'll be promoting. Click on the ``create a promotion'' button to make a promotion from the post. Click share to post the picture to our IG account.
\begin{itemize}
\tightlist
\item
keep the caption short and try to avoid words that could get the ad flagged
\includegraphics{images/lab_protocols/ig_ads/1.png}
\end{itemize}
\item
Once the post has been shared to our IG page, it should take you directly to the promotion settings. On this page you'll choose the goal for the ad. Choose ``more website visits'' and make sure the link is correct.
\includegraphics{images/lab_protocols/ig_ads/2.png}
\item
Here you will define your audience. If the audience is ELS, you can choose the ELS filter we previously made.
\end{enumerate}
5.1. If making a new audience filter, choose ``Create your own.''
\includegraphics{images/lab_protocols/ig_ads/3.png}
5.2. If creating your own filter, add the location you want for the ad and some filters for who you are trying to target.
\begin{itemize}
\item
Examples: parents, parents with young children, adoption
\item
it is best to start with a bigger audience and then fine tune filters as needed
\includegraphics{images/lab_protocols/ig_ads/4.png}
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{5}
\item
Choose how long you want the ad to run
\begin{itemize}
\tightlist
\item
let it run for at least three days with a budget of at least \$3 a day to maximize the chances of it running well
\includegraphics{images/lab_protocols/ig_ads/5.png}
\end{itemize}
\item
The last page gives you a summary of the ad. Look it over before submitting the promotion. The Lab Manager may need to provide payment info to complete the transaction.
\end{enumerate}
\hypertarget{getting-ad-receipts}{%
\subsubsection{\texorpdfstring{\textbf{Getting Ad Receipts}}{Getting Ad Receipts}}\label{getting-ad-receipts}}
\textbf{Getting Facebook Ad Receipt}
\begin{enumerate}
\def\labelenumi{\arabic{enumi})}
\tightlist
\item
Go to the Business Suite from the BAB Lab page
\includegraphics{images/lab_protocols/fb_pay/pay1.png}
\end{enumerate}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Click on more tools.
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/fb_pay/pay2.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\item
Go to the billing option, which should open up in a separate tab.
\includegraphics{images/lab_protocols/fb_pay/pay3.png}
\item
If there is an outstanding balance, pay it by clicking ``Pay Now.'' After paying, the transaction should show up at the top of the list and you can download the receipt using the arrow button.
\includegraphics{images/lab_protocols/fb_pay/pay4.png}
\end{enumerate}
\textbf{Getting IG Ad receipt}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
In the IG app, click on the plus sign.
\includegraphics{images/lab_protocols/ig_pay/ig_1.png}
\item
Click on Settings in the popup options that appear.
\includegraphics{images/lab_protocols/ig_pay/ig_2.png}
\item
Click on promotion payments
\includegraphics{images/lab_protocols/ig_pay/ig_3.png}
\item
If you have already been charged/paid for the latest ad, click on the Transaction History tab.
\includegraphics{images/lab_protocols/ig_pay/ig_4.png}
\end{enumerate}
4.1. If there is an outstanding balance at the top, pay it by clicking the ``Pay Now'' button. After paying, you can click the Transaction History button.
\includegraphics{images/lab_protocols/ig_pay/ig_7.png}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{4}
\item
Once on Transaction History, click on the date when the latest ad was paid for.
\includegraphics{images/lab_protocols/ig_pay/ig_5.png}
\item
Click the download button to download the receipt. In most cases you can redirect it so it opens on a different page, where you can email it to yourself or save it as a PDF.
\includegraphics{images/lab_protocols/ig_pay/ig_6.png}
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{craigslist-posting}{%
\subsection{Craigslist posting}\label{craigslist-posting}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Log in to BABLab's Craigslist account. See the internal section of the lab website for login info.
\item
Click on ``create a posting.''
\includegraphics{images/lab_protocols/craigslist/1.png}
\item
Click the option ``community'': DO NOT choose ``gig offered'' because you have to pay for those advertisements. The community option is free.
\includegraphics{images/lab_protocols/craigslist/2.png}
\item
Choose ``volunteers''.
\includegraphics{images/lab_protocols/craigslist/3.png}
\item
Fill out this page based on the research study. Type ``Los Angeles'' for the city and the postal code 90095. Then press continue and post it!
\includegraphics{images/lab_protocols/craigslist/4.png}
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{sand-lab-collaboration}{%
\subsection{SAND Lab Collaboration}\label{sand-lab-collaboration}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
How the Collaboration Works
\end{enumerate}
\begin{itemize}
\tightlist
\item
Once a month, Lab Managers from SAND Lab and BABLab meet and/or check in on cross-collaboration recruitment status
\item
If eligible participants become available for cross-collaboration, Lab Managers should first check to make sure these participants have consented to be contacted by other labs at UCLA
\item
If they have consented, Lab Managers of each lab should send their own participants {[}template\_email{]} 1 month post-session
\item
At the next cross-collaboration meeting, Lab Managers transfer the contact information for eligible participants
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Transferring Over Contact Information
\end{enumerate}
\begin{itemize}
\tightlist
\item
Lab Managers should pass contact information in an Excel sheet detailing nothing other than participant names, contact information, and notes on prior contact
\item
  The contact information Excel file should be passed through a secure Box link upload
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Calling referred participants directly
\end{enumerate}
\begin{itemize}
\tightlist
\item
After contact information is received, and 2 weeks after the email template has been sent, Lab Managers may contact referred participants directly via phone call using the {[}phone\_template{]}
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{research-assistant-hiring}{%
\section{Research Assistant Hiring}\label{research-assistant-hiring}}
\hypertarget{not-hiring}{%
\subsection{Not hiring}\label{not-hiring}}
If we are not looking for research assistants, please respond to any inquiries with the following template.
\begin{itemize}
\tightlist
\item
{[}BAB - NOT HIRING{]}
\end{itemize}
\hypertarget{hiring}{%
\subsection{Hiring}\label{hiring}}
If we are looking for research assistants, please follow the protocol below.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Qualified candidates should be invited to fill out our form using the following template.
\end{enumerate}
\begin{itemize}
\tightlist
\item
{[}BAB - INVITE APPLY{]}
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Candidates to be interviewed should be invited to interview using the following template.
\end{enumerate}
\begin{itemize}
\tightlist
\item
{[}BAB - INTERVIEW{]}
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Candidates we wish to extend an offer to should be emailed using the following template.
\end{enumerate}
\begin{itemize}
\tightlist
\item
{[}BAB - OFFER{]}
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
Once a candidate is hired, there are several email templates to welcome/onboard them to the team. Please send the email and follow the instructions in the prompt.
\end{enumerate}
\begin{itemize}
\tightlist
\item
{[}BAB - ONBOARDING STUDENT{]}
\item
{[}BAB - ONBOARDING NON-STUDENT{]}
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{covid-19}{%
\section{COVID-19}\label{covid-19}}
\hypertarget{transition-back-in-person}{%
\subsection{Transition back in-person}\label{transition-back-in-person}}
\textbf{Before coming into the lab\ldots{}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Check-in with the Lab Manager to ensure you can come into the lab
\item
Get familiar with the \href{https://ucla.app.box.com/s/96on1j6ynhy5c91jdpxxqyi39pl8x796}{UCLA Requirements for COVID-19 Symptom Monitoring}
\item
Try out the \href{https://www.adminvc.ucla.edu/covid-19/ucla-employee-faq/symptom-monitoring}{UCLA COVID-19 Symptom Monitoring Survey}
\item
Sign up for your shift on BABLab's shared google calendar
\end{enumerate}
\begin{itemize}
\item
Log into your google calendar
\item
Enable the COVID-19 Protocol calendar shared by the BABLab gmail
\includegraphics{images/lab_protocols/covid_protocols/1.png}
\item
Sign up for a ``shift'' on this shared calendar. Add an event during the hours you wish to be present in the lab, using the following format:
\end{itemize}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/covid_protocols/2.png}
\caption{}
\end{figure}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/covid_protocols/3.png}
\caption{}
\end{figure}
\begin{itemize}
\item
Only \textbf{3 people maximum} can be present in the lab at any time. Postdocs, Graduate Students, and the Lab Manager have priority over shifts in the lab.
\item
All parties should sign up for a shift at least \emph{3 days prior to arrival}
\item
If you are an RA or a Senior RA, your first shift in the lab should always be a time when the Lab Manager is also there. After this, if you are an RA or a Senior RA, you should only go into the lab after having spoken to the Lab Manager and received tasks for in-person work.
\end{itemize}
\textbf{Every time you come into the lab\ldots{}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
A mask is required to be worn at all times while in the lab
\item
Try to remain at least 6 feet apart from other folks in the lab
\item
Fill out the \href{https://www.adminvc.ucla.edu/covid-19/ucla-employee-faq/symptom-monitoring}{UCLA COVID-19 Symptom Monitoring Survey} to receive clearance prior to coming to the lab and send a screenshot or download of your clearance to the Lab Manager
\item
Wipe down and sanitize your work station after you complete your shift
\item
The last person of the day should wipe down all the shared tables
\end{enumerate}
\hypertarget{research-protocols}{%
\chapter{Research Protocols}\label{research-protocols}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{data-management}{%
\section{Data Management}\label{data-management}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{storing-active-datasets}{%
\subsection{Storing Active Datasets}\label{storing-active-datasets}}
Lab data can be stored on Box, the psychology department server, and on external hard drives and CDs. Any data with personally identifying information can only be stored on non-networked, encrypted, external hard drives, flash drives, and CDs.
Although the data is routinely backed up, the backup is only on-site -- so make extra backups! Each lab member should back up raw data on an external hard drive, as well as the code needed to reproduce all analyses. You should not store data locally on your computer (but logging into your Box/server account on your computer is ok).
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{data-organization}{%
\subsection{Data Organization}\label{data-organization}}
General notes on file naming:
\begin{itemize}
\tightlist
\item
It is very important that files are named in a clear and consistent way to avoid confusion
\item
Generally, versions of data files, scripts, etc. that are kept in study folders are final versions, meaning that they are ``master'' copies rather than versions that are being edited by individual lab members.
\item
If you would like to edit a script for some reason, please copy it to your user folder and rename it by adding \_\textless{}your initials\textgreater{}, e.g., \_fq, to mark it as being edited by you. If we would like to use it for general study use, it will need to be checked and approved. If the check is successful and it is approved, only then can it replace the ``master'' script and the initials be removed.
\item
Data should generally be named in the following way: \textless{}measure\textgreater{}\_\textless{}status\textgreater{}\_\textless{}if relevant, date in form YYYYMMDD\textgreater{}\_\textless{}if personal copy, initials\textgreater{} (e.g., cbcl\_raw\_20201118\_fq). Date is the date the data file was generated. If it was generated by a script, date needs to be updated each time the script is run.
\end{itemize}
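As an illustration only, the following minimal Python sketch builds a file name according to the naming convention above; the measure, status, and initials simply mirror the cbcl example given in the list.
\begin{verbatim}
from datetime import date

def data_filename(measure, status, when=None, initials=None, ext="csv"):
    """Build <measure>_<status>_<YYYYMMDD>_<initials> per the lab convention."""
    parts = [measure, status]
    if when is not None:
        parts.append(when.strftime("%Y%m%d"))  # date the file was generated
    if initials is not None:
        parts.append(initials)                 # only for personal working copies
    return "_".join(parts) + "." + ext

# A personal working copy of raw CBCL data generated today, e.g. cbcl_raw_20201118_fq.csv
print(data_filename("cbcl", "raw", date.today(), "fq"))
\end{verbatim}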
If you have already run several independent projects and have a data organization structure that works well for you, feel free to use it. If not (or if you are looking for a change), the following structure is recommended (based on Neuropipe):
\begin{itemize}
\tightlist
\item
projectName/subjects
\begin{itemize}
\tightlist
\item
individual directories for each of your participants
\item
projectName/subjects/\{subj\}/analysis
\begin{itemize}
\tightlist
\item
subject-specific analyses (e.g., 1st and 2nd level analysis -- at the run level and experiment level)
\end{itemize}
\item
projectName/subjects/\{subj\}/data
\begin{itemize}
\tightlist
\item
raw data for that participant, with the following directories\ldots{}
\begin{itemize}
\tightlist
\item
behavioralData (for, well, behavioral data)
\item
eyetrackingData (if applicable)
\item
nifti (raw nifti files / raw MRI and fMRI data)
\item
rois (participant-specific ROIs)
\end{itemize}
\end{itemize}
\item
projectName/subjects/\{subj\}/design
\begin{itemize}
\tightlist
\item
timing files for that participant, with different directories for the different GLMs you're running (and the different runs in the experiment)
\end{itemize}
\item
projectName/subjects/\{subj\}/fsf
\begin{itemize}
\tightlist
\item
if you're using FSL, put the .fsf files here. If you're using SPM or something else, save the files for setting up preprocessing and GLMs here
\end{itemize}
\item
projectName/subjects/\{subj\}/scripts
\begin{itemize}
\tightlist
\item
Matlab, Python, R, or bash scripts that you used for that participant. You should keep the `template' scripts elsewhere, but you can store scripts you modified specifically for that participant here
\end{itemize}
\end{itemize}
\item
projectName/scripts
\begin{itemize}
\tightlist
\item
template scripts that you may modify for each participant, as well as scripts and functions used for all participants and group analyses
\item
recommend making subdirectories for each type of analysis (e.g., behavior, pattern analysis, functional connectivity, univariate)
\item
if you have scripts that are the same for each participant, you can have symbolic links for them in your participant-specific scripts directories
\item
\textbf{naming convention for scripts}: Note that BABLab has decided on the following naming convention for scripts - \textless{}measure or study\textgreater{}\_\textless{}purpose\textgreater{}\_\textless{}if personal copy or editing, initials\textgreater{} (examples: mbb\_cleaning\_fq, cbcl\_scoring\_fq)
\end{itemize}
\item
projectName/results
\begin{itemize}
\tightlist
\item
figures with main results, powerpoint or keynote presentations, manuscripts if you wish
\end{itemize}
\item
projectName/notes
\begin{itemize}
\tightlist
\item
detailed notes about the design, analysis pipeline, relevant papers, etc
\end{itemize}
\item
projectName/group
\begin{itemize}
\tightlist
\item
group analyses
\item
recommend making subdirectories for each type of analysis (e.g., behavior, pattern analysis, functional connectivity, univariate)
\end{itemize}
\item
projectName/task
\begin{itemize}
\tightlist
\item
code for your behavioral experiment, stimuli, piloting information
\item
if you are running your presentation code off of the server, it will still be good to have a copy of the code here (but you can keep the stimuli only on the server if you'd like)
\end{itemize}
\end{itemize}
When you leave the lab, your projects directories should be set up like this, or something similarly transparent, so that other people can look at your data and code. You must do this, otherwise your analysis pipeline and data structure will be uninterpretable to others once you leave, and this will slow everyone down (and cause us to bug you repeatedly to clean up your project directory or answer questions about it).
Agreed upon organizational structure specific to MBB:
\begin{itemize}
\tightlist
\item
/../Studies/Mind\_Brain\_Body/Scripts/\textless{}wave\textgreater{}/Data: This contains all data that are used in the scripts (including raw data, inputs, and outputs of the scripts). Each file needs to be named clearly (see naming convention for data above).
\item
/../Studies/Mind\_Brain\_Body/Scripts/\textless{}wave\textgreater{}/\textless{}purpose, eg. scoring\textgreater{}/\textless{}if necessary, sub-folder for measure\textgreater{}: scripts go here (e.g., /Scripts/Wave1/scoring/cbcl would contain script to score the cbcl)
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{archiving-inactive-datasets}{%
\subsection{Archiving Inactive Datasets}\label{archiving-inactive-datasets}}
Before you leave, or upon completion of a project, you must archive old datasets and back them up. We will develop the instructions for this when we reach our first inactive dataset.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{ethics}{%
\section{Ethics}\label{ethics}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{irb}{%
\subsection{IRB}\label{irb}}
\textbf{Consent, Assent, and Screening}
Links to \href{https://ohrpp.research.ucla.edu/consent-templates/}{templates} from the UCLA research administration group.
\hypertarget{how-to-request-an-irb-account}{%
\subsubsection{How to request an IRB account}\label{how-to-request-an-irb-account}}
Click \href{https://webirb.research.ucla.edu/WEBIRB/Rooms/DisplayPages/LayoutInitial?Container=com.webridge.entity.Entity\%5BOID\%5B8990716E5E076B40BFE2D4D479617FD3\%5D\%5D}{here} for steps on how to request of an IRB account
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{ibc}{%
\subsection{IBC}\label{ibc}}
\textbf{What is the IBC?}
The IBC is the Institutional Biosafety Committee, which has the same purpose as the IRB but is specific to research involving biohazardous materials. The IBC is an arm of the UCLA Environment Health \& Safety office (EH\&S).
\textbf{How to Apply for Approval}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
DBS approval and approval to collect any other biological samples is processed through UCLA SafetyNet, the IBC online system, which is the IBC's equivalent to webIRB. SafetyNet is accessible \href{https://safetynet.research.ucla.edu/}{here} with UCLA logon ID.
\begin{itemize}
\tightlist
\item
IBC approval IS needed for blood samples
\item
IBC approval IS NOT needed for saliva, stool, or hair samples unless ---
\begin{itemize}
\tightlist
\item
Saliva is collected from dental procedures
\item
Stool or hair samples are contaminated with blood or infected with pathogens (e.g.~HBV, HIV)
\end{itemize}
\end{itemize}
\item
Once signed in, a new protocol is created by clicking `Create BUA'. A BUA is a Biological Use Authorization, which is synonymous with IBC protocol. Completing the BUA is just like completing an IRB protocol, but with a focus on the collection of biological samples.
\item
A BUA (or IBC protocol) requires the following document in addition to information supplied in the online form:
\begin{itemize}
\tightlist
\item
Lab Specific Biosafety Manual (includes the following)
\begin{itemize}
\tightlist
\item
Laboratory Specific SOPs (based on general template available \href{https://ucla.app.box.com/v/ehs-bio-lab-biomanual}{here})
\item
Bloodborne Pathogens Exposure Control Plan (based on general template available \href{https://ucla.app.box.com/v/ehs-bbp-ecp-template}{here})
\end{itemize}
\end{itemize}
\end{enumerate}
\emph{NOTE:}
\begin{itemize}
\tightlist
\item
Consultation with EH\&S is likely necessary to complete the BUA protocol. Contact EH\&S or IBC employees with questions at \href{mailto:[email protected]}{\nolinkurl{[email protected]}} or \href{mailto:[email protected]}{\nolinkurl{[email protected]}}.
\item
All EH\&S documents are available \href{https://www.ehs.ucla.edu/documents}{here}.
\item
Additional documents may be required depending on the kind of biological material that's going to be collected.
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
Once a BUA is completed, it will appear under `Submissions.'
\item
IBC staff may require that modifications be made to the protocol, just as the IRB would. You may reply to modification requests and make modifications in the same way that you would for an IRB protocol, by logging your response to a reviewer's comment and then making the necessary change in the protocol itself.
\item
Once all modifications are made, there are two more requirements before a BUA can be approved:
\end{enumerate}
\begin{itemize}
\tightlist
\item
Staff involved in collecting biological samples must acquire necessary training
\begin{itemize}
\tightlist
\item
Training may be completed via the UCLA \href{https://worksafe.ucla.edu/Ability/Programs/Standard/Control/elmLearner.wml?PortalID=LearnerWeb}{WorkSafe} portal accessible with UCLA logon ID.
\begin{itemize}
\tightlist
\item
For Dried Blood Spot collection, the following trainings are required of any staff working directly with samples:
\begin{itemize}
\tightlist
\item
NIH Guidelines for UCLA Researchers IBC Compliance Training (online)
\item
Laboratory Safety Fundamentals (online)
\item
Blood-borne Pathogens Training (online)
\item
Medical Waste Management (online)
\item
Biological Safety Cabinet (BSC) (online)
\item
Biosafety ABC's - Biosafety Level 2 Training (in-person)
\end{itemize}
\item
The PI is required to complete two courses:
\begin{itemize}
\tightlist
\item
NIH Guidelines for UCLA Researchers: IBC Compliance Training (online)
\item
Laboratory Safety for PIs and Lab Supervisors (in-person)
\end{itemize}
\end{itemize}
\item
Training must be up to date. Training certificates are maintained on the BAB Lab Box at BABLAB/Lab/Training/IBC
\item
A room inspection must be done to approve the use of physical space for sample collection and storage.
\end{itemize}
\item
The room inspection is arranged directly with EH\&S staff.
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{questionnaire-database}{%
\section{Questionnaire Database}\label{questionnaire-database}}
In Box you can find a questionnaire database for the BABLab. This is different from the study specific questionnaire folders! This database is a repository for all of the questionnaires we have used or thought about using in our research. Organizing them here makes it easy for future BABLab members to plan, organize, and reproduce studies!
This includes the questionnaires used in all of our studies, including source material. In addition, the questionnaire database excel file contains information such as a brief description and reference (needed for IRB protocols and the like).
You can find the questionnaire database at the following path:
\begin{itemize}
\tightlist
\item
Box/BABLAB/Lab/Questionnaires
\end{itemize}
You can find the questionnaire database spreadsheet at the following path:
\begin{itemize}
\tightlist
\item
Box/BABLAB/Lab/Questionnaires/Questionnaire\_database.xlsx
\end{itemize}
When making a new study, please add your questionnaires to the database, including a category and a reference! Adding a category makes it easy to filter this sheet by category when exploring measures.
\begin{figure}
\centering
\includegraphics{images/lab_protocols/questionnaire_database/1.png}
\caption{}
\end{figure}
Please create a folder for each questionnaire within the database to allow for the organization of source material. For example, the scq (social cravings questionnaire) was adapted from the fcq (food cravings questionnaire). Therefore, in the scq folder I included the original measure for the fcq, and a paper in which it is described and validated. In addition, if you have created this questionnaire as an instrument in REDCap, please upload the zipped file of the instrument to this folder! This will save a great deal of time for future researchers!
\begin{figure}
\centering
\includegraphics{images/lab_protocols/questionnaire_database/2.png}
\caption{}
\end{figure}
\begin{figure}
\centering
\includegraphics{images/lab_protocols/questionnaire_database/3.png}
\caption{}
\end{figure}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{interviews}{%
\section{Interviews}\label{interviews}}
\hypertarget{ksads}{%
\subsection{KSADS}\label{ksads}}
\begin{itemize}
\tightlist
\item
Align expectations from the start (semi-structured interview)
\item
Encourage brief responses
\item
Can write down details later
\item
Dive in and direct participant
\item
Read the threshold criteria
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{tests}{%
\section{Tests}\label{tests}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{wasi}{%
\subsection{WASI}\label{wasi}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{wasi-administration}{%
\subsubsection{WASI Administration}\label{wasi-administration}}
Ensure you have all necessary materials (WASI/WIAT administration instruction sheet, WASI score sheet, pencil with NO eraser, WASI administration booklet, WASI score book)
\textbf{Part I: Vocabulary}
\emph{General Instruction}: You will be pointing to each item in the WASI administration booklet and asking the child/adolescent what the item is or to describe what it means
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Start audio recording
\item
Flip the WASI administration booklet to page 41, item \#4 (what is a shirt?)
\item
Flip WASI scoring booklet to page 74, beginning with item \#4 (what is a shirt?)
\begin{itemize}
\tightlist
\item
Use the WASI scoring booklet to determine if child/adolescent's description of each item shall be categorized as score 0, 1, or 2
\item
\emph{Note}: Q indicated on the scoring booklet refers to prompting/querying the child further -- ``Can you tell me more?''
\item
  Provide queries as often as necessary for marginal responses, generalized responses, functional responses, and hand gestures, but NOT for answers that are clearly incorrect
\end{itemize}
\item
Note score on the WASI score sheet: Vocabulary
\item
If the child does not obtain a perfect score on either item 4 or item 5, administer the preceding items in reverse order until two consecutive perfect scores are obtained
\item
Stop administering when the child/adolescent receives 3 consecutive Zeros \emph{OR} participant hits max score for age group (age 6: item 22; age 7-11: item 25; age 12-14: item 28)
\item
Keep audio recording for Part II: Matrix Reasoning
\end{enumerate}
\emph{Note: These will be audio recorded and can sometimes move quickly -- they can be scored later}
\textbf{Part II: Matrix Reasoning}
\emph{General Instruction}: You will be pointing to each matrix reasoning question in the WASI administration booklet and asking the child/adolescent which item belongs in the missing box
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Flip the WASI administration booklet to page 57- Practice Questions
\begin{itemize}
\tightlist
\item
Explain that you will do a few practice questions first, then walk through the 2 practice questions
\item
You may acknowledge correct responses/explain why answers may be incorrect
\end{itemize}
\item
Flip to correct start page/item to begin (age 6-8: item 1; age 9+: item 4)
\begin{itemize}
\tightlist
\item
Do NOT give verbal acknowledgement to their answers (e.g.~Correct! That's right!)
\end{itemize}
\item
If adolescents age 9+ do not obtain a perfect score on either item 4 or item 5, administer the preceding items in reverse order until two consecutive perfect scores are obtained
\item
Note score on the WASI score sheet: Matrix Reasoning
\item
Stop administering when the child/adolescent receives 3 consecutive Zeros \emph{OR} participant hits max score for age group (age 6-8: item 24)
\item
Stop audio recording
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{wasi-scoring}{%
\subsubsection{WASI Scoring}\label{wasi-scoring}}
\textbf{Part I}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Examiner writes ``scored by: NAME'' at the top of the sheet
\item
Fill in any missing scores in the Vocabulary or Matrix Reasoning tests using the audio file if questions are missing (i.e., scores continue before and after the missing question, NOT questions the administrator left blank because the test had been stopped)
\item
Add up the Vocabulary total raw score:
\begin{itemize}
\tightlist
\item
\emph{Note}: Even if a participant begins at item 4 due to age, the total raw score should still include items 1-3
\end{itemize}
\item
Add up Matrix Reasoning total raw score
\item
Transfer both total raw scores to front sheet under ``Total Raw Score to T-Score Conversion'' chart in column titled ``Raw Score''
\end{enumerate}
\textbf{Part II}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Ensure you have the participant's correct age at day of testing in the upper right corner
\item
Open WASI-II Manual book \textgreater{} page 151 for T-Score Conversions
\begin{itemize}
\tightlist
\item
Flip to correct chart by age group (age group indicated at top of chart using year:month format)
\item
Under the correct chart by age group of the participant, view VC column for Vocabulary and MR column for Matrix Reasoning
\begin{itemize}
\tightlist
\item
Scroll down VC column for correct Vocabulary total raw score and acquire T-Score equivalent (horizontally)
\item
Scroll down MR column for correct Matrix Reasoning total raw score and acquire T-Score equivalent (horizontally)
\end{itemize}
\item
Write T-Score number in the boxes under ``Total Raw Score to T-Score Conversion'' chart in column titled ``T-Scores''
\begin{itemize}
\tightlist
\item
Add the T-Score totals in the box titled ``Full Scale-2''
\item
Copy this total number to ``Sum of T-Scores to Composite Score Conversion'' chart in column titled ``Sum of T-Scores''
\end{itemize}
\end{itemize}
\item
Flip WASI-II Manual book \textgreater{} page 188 for FSIQ, Percentile Rank, and Confidence Interval
\begin{itemize}
\tightlist
\item
\textbf{To obtain FSIQ}: Scroll down Sum of T-Scores column and compare horizontally to FSIQ-2 column
\item
\textbf{To obtain Percentile Rank}: Scroll down Sum of T-Scores column and compare horizontally to Percentile Rank column
\item
\textbf{To obtain Confidence Interval (always circle/indicate 95\%)}: Scroll down Sum of T-Scores column and compare horizontally to 95\% column in correct age group
\end{itemize}
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{wiat}{%
\subsection{WIAT}\label{wiat}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{wiat-administration}{%
\subsubsection{WIAT Administration}\label{wiat-administration}}
Ensure you have all necessary materials (WASI/WIAT administration instruction sheet, WIAT score sheet, 2 pencils with NO erasers, WIAT word reading list, WIAT Math problems sheet)
\textbf{Part I: Word Reading}
\emph{General Instruction}: You will be asking the child/adolescent to read off the WIAT word reading list left to right, top to bottom until they can no longer read the words
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Start audio recording
\item
Note in scoring sheet what grade participant is in
\item
Note the following basic scoring instructions:
\begin{itemize}
\tightlist
\item
\textbf{(1)} if fluent/correct
\item
\textbf{(DK)} if the child does not know
\item
\textbf{(\textgreater{}3)} if it took the child longer than 3 seconds to say
\item
\textbf{(SC)} if the child said the word wrong but self-corrected
\end{itemize}
\item
If multiple attempts are made to read a word, score only the last attempt
\item
If the child is sounding the word out/verbalizes the word in a choppy manner, ask the child to ``read the word altogether'' immediately after
\begin{itemize}
\tightlist
\item
If the next attempt is not fluent, score as 0 and say ``try the next one''
\end{itemize}
\item
If the child skips a word or row, redirect the child to the appropriate place immediately after and make a note in the scoring sheet
\item
If the child was unclear when reading a word/you did not hear the child correctly, ask the child to repeat the whole row of words where this particular word was located at the very end after they have finished reading all they can
\item
Discontinue after the child has reached 4 consecutive Zeros
\end{enumerate}
\textbf{Part II: Numerical Operations}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
You will be asking the child to fill out the ``math worksheet'' him/herself
\begin{itemize}
\tightlist
\item
Indicate where to begin based on age (Grades K-1: item 1; grades 2-4: item 14; grades 5-12+: item 18)
\item
Explain to the child/adolescent to work on problems from left to right, top to bottom in order and if they do not know a question they may skip it
\item
If beginning at item 1, refer to WIAT scoring sheet for specific verbal administration instructions
\end{itemize}
\item
If the child does not reach 3 consecutive scores of 1, reverse backward until the child has reached a correct response
\item
Be sure to pay attention to the child's responses -- if the numbers they write are illegible or mirrored, ask the child to verbally indicate the response they meant
\begin{itemize}
\tightlist
\item
Note that you obtained a verbal response and note the actual response in your WIAT score sheet
\end{itemize}
\item
Discontinue this task when they have reached 4 consecutive Zeros
\item
Stop audio recording
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{wiat-scoring}{%
\subsubsection{WIAT Scoring}\label{wiat-scoring}}
\textbf{Part I}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Examiner writes ``scored by: NAME'' at the top of the sheet
\item
Fill in any missing scores using the audio file if questions are missing (i.e., scores continue before and after the missing question, NOT questions the administrator left blank because the test had been stopped)
\item
Add up the Word Reading Total Raw Score:
\begin{itemize}
\tightlist
\item
Add up Word Reading Total Score Box
\item
Add up Total \textgreater{}3 Box
\item
Add up Total SC Box
\end{itemize}
\item
Word Reading Speed Total Raw Score:
\begin{itemize}
\tightlist
\item
Listen to the audio file and note time participant began to read words
\item
Count 30 seconds forward
\item
Note the word the participant completed at 30 seconds; write item number of this word in box
\end{itemize}
\item
Add up the Numerical Operations Total Raw Score
\begin{itemize}
\tightlist
\item
\emph{Note}: Even if a participant begins at item 8 due to age, the total raw score should still include items 1-7
\end{itemize}
\item
Transfer both Word Reading Total Raw Score and Numerical Operations Total Raw Score to front page under ``Composite Score Summary'' chart
\end{enumerate}
\textbf{Part II}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Ensure you have the participant's correct age at day of testing in the upper right corner
\item
To obtain the Composite Standard Score:
\begin{itemize}
\tightlist
\item
Flip WIAT-III Manual book \textgreater{} page 252-387 for Table C.1 based on age of participant (noted at top of chart by year, month, days range)
\item
Scroll down Word Reading column and compare horizontally to Standard Score column; write standard score in ``Composite Standard Score'' box
\item
Scroll down Numerical Operations column and compare horizontally to Standard Score column; write standard score in ``Composite Standard Score'' box
\end{itemize}
\item
To obtain the Confidence Interval (always at 95\%):
\begin{itemize}
\tightlist
\item
Flip WIAT-III Manual book \textgreater{} page 392 for Table C.3
\item
Follow column for correct age \textgreater{} 95\% \textgreater{} Word Reading
\begin{itemize}
\tightlist
\item
Add and subtract this number to/from the Composite Standard Score: Word Reading to create highest and lowest numbers for the Confidence Interval
\end{itemize}
\item
Follow column for correct age \textgreater{} 95\% \textgreater{} Numerical Operations
\begin{itemize}
\tightlist
\item
Add and subtract this number to/from the Composite Standard Score: Numerical Operations to create highest and lowest numbers for the Confidence Interval
\end{itemize}
\end{itemize}
\item
To obtain GRADE-LEVEL equivalents of score: (\emph{Note: No longer doing percentile})
\begin{itemize}
\tightlist
\item
Flip WIAT-III Manual book \textgreater{} page 398 for Table D.2
\begin{itemize}
\tightlist
\item
Scroll down through Word Reading column and look for raw score, view to left column for grade equivalent
\end{itemize}
\item
Flip WIAT-III Manual book \textgreater{} page 402 for Table D.2
\begin{itemize}
\tightlist
\item
Scroll down through Numerical Operations column and look for raw score, view to left column for grade equivalent
\end{itemize}
\end{itemize}
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{behavioral-coding}{%
\section{Behavioral Coding}\label{behavioral-coding}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{fims}{%
\subsection{FIMS}\label{fims}}
\begin{itemize}
\tightlist
\item
Always code the positive video first (coding could otherwise be colored by the negative video)
\item
When not obvious use the process of elimination
\item
Make notes while coding
\item
Consider the child's maturity for their age
\item
Attunement = harmonious
\end{itemize}
\href{https://docs.google.com/document/d/1oLEg1gAdpcrDWg1Vlh0z_esq9fBy9CKBrNuU-ZokaKg/edit?usp=sharing}{FIMS Behavioral Training Protocol}
\href{https://docs.google.com/document/d/1zd4BD7-yQZxle4bH_cXrEaqFAd3MCVKATnVE1PnWn_4/edit?usp=sharing}{FIMS Behavioral Coding Protocol}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{physiological-measurement}{%
\section{Physiological Measurement}\label{physiological-measurement}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{ecg}{%
\subsection{ECG}\label{ecg}}
Electrocardiogram (ECG) measures a subject's heart rate and waveform pattern. With each heartbeat, an electrical signal travels through the heart. This electrical wave causes the muscle to squeeze and pump blood from the heart. ECG measures this wave through electrodes placed across the torso. By collecting ECG, you can detect changes in heart function due to certain stimuli. Things like stress, excitement, fear, and other emotional responses can be physiologically measured based on changes in the ECG readouts.
\textbf{Biopac Setup}
\begin{itemize}
\tightlist
\item
In our ECG setup, we have one transmitter with one channel.
\item
The red and white leads are the signal, the black lead is the ground.
\item
Because we are using a wireless setup, there needs to be a clear line of sight between the transmitter and the receiver.
\end{itemize}
\textbf{Electrode Placement}
\begin{itemize}
\tightlist
\item
We will be placing 2 electrodes just below the collarbones and one electrode on the lowest left rib.
\end{itemize}
\textbf{Filtering and Signal Frequency}
\begin{itemize}
\tightlist
\item
We will sample ECG at a rate of 2 kHz, or 2000 samples/second. This gives us a resolution high enough to catch all of the important parts of the heartbeat waveform (see the sketch after this list).
\item
Noise is not much of an issue when collecting ECG with a 3-electrode setup.
\end{itemize}
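Acquisition itself happens in AcqKnowledge, but as a rough sketch of what the 2 kHz recording supports downstream, the Python snippet below detects R-peaks and computes heart rate. The exported file name is hypothetical and this is not the lab's analysis pipeline, only an illustration.
\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks

fs = 2000                                # sampling rate in Hz (2 kHz, as above)
ecg = np.loadtxt("ecg_channel.txt")      # hypothetical export of the ECG channel

# Detect R-peaks: at least 0.4 s apart (< 150 bpm) and well above baseline noise
peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                      height=np.mean(ecg) + 2 * np.std(ecg))

rr = np.diff(peaks) / fs                 # seconds between successive beats
print(f"Mean heart rate: {60.0 / rr.mean():.1f} bpm")
\end{verbatim}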
\textbf{Subject Position}
\begin{itemize}
\tightlist
\item
Ensure that the subject is in a comfortable position, so that body movement can be completely avoided or reduced to a minimum. The subject should be asked not to talk, move, read, or make phone calls during the procedure.
\item
Ensure that the position of the subject is the same if there are multiple sessions. Timing of unavoidable body movement or motion artifacts should be noted and the recording periods with motion artifacts must be removed before analysis.
\end{itemize}
\textbf{Gathering ECG Data}
\begin{itemize}
\tightlist
\item
Lightly abrade the skin at the electrode sites with EL-Prep Gel
\item
Wipe off excess with a wet wipe or tissue
\item
After prepping the electrodes with Gel-100, attach electrodes to the skin at the three positions indicated above
\begin{itemize}
\tightlist
\item
Let these sit as long as possible to adhere and for the gel to soak in
\end{itemize}
\item
Ask the participant to put on the module like a belt around their torso
\begin{itemize}
\tightlist
\item
Make sure the electrode lead inputs are pointed up towards their head
\end{itemize}
\item
Connect the white lead to the Right Collarbone electrode, connect the black lead to the Left Collarbone electrode, and connect the red lead to the Left Rib electrode
\item
Turn on the transmitter and ensure that both the light on the Biopac receiver module and the transmitter are green (the transmitter should be flashing, whereas the receiver should be solid)
\item
Make sure there is a clear, unobstructed line of sight between the transmitter and receiver antenna
\item
Open AcqKnowledge by selecting the template file on the desktop
\begin{itemize}
\tightlist
\item
If the system is not connected to the hardware, make sure the wifi is turned off and restart AcqKnowledge
\end{itemize}
\item
Ensure all devices are connected and lead wires attached properly
\item
Hit the green ``Start'' button and click through all of the dialog boxes that you're prompted with
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{egg}{%
\subsection{EGG}\label{egg}}
\textbf{Biopac Setup}
\begin{itemize}
\tightlist
\item
In our EGG setup we have one transmitter with two channels (A and B).
\item
The white leads are the reference, the red are the signal, and the black is the ground.
\item
Each transmitter needs to have a ground.
\item
Because we are using a wireless setup, there needs to be a clear line of sight between the transmitter and the receiver.
\end{itemize}
\textbf{Electrode Placement}
\begin{itemize}
\tightlist
\item
We will place the two white electrodes side-by-side on the xiphoid process (the lower part of the sternum).
\item
We will place the two red electrodes in position 1 and 4 in the diagram above.
\item
We will place the black electrode (the ground) on the second from bottom rib on the left. Try to get it over the bone as much as possible.
\item
Position 1 should be in line with the reference electrode, and position 4 should be in line with the ground.
\item
Next, we need to have the transmitter up high on the participant's chest so there is a line of sight between the transmitter and receiver (antenna).
\end{itemize}
\emph{Alternative}:
\begin{itemize}
\tightlist
\item
Regular electrocardiogram (ECG) electrodes can be used for EGG recordings.
\item
The most commonly used configuration for recording 1-channel EGG is to place one electrode at the midpoint on a line connecting the xiphoid and umbilicus, and the other electrode 5 cm away (up and 45 degree) to the patient's left.
\item
The ground electrode is placed on the left costal margin horizontal to the first active electrode.
\end{itemize}
\textbf{Filtering and signal frequency}
\begin{itemize}
\tightlist
\item
\emph{Amplification}: the EGG signal is usually in the range of 50-500 μV, and adequate amplification needs to be provided by the recording device so that the amplified signal is in an appropriate range for display and analysis
\item
\emph{Filter setting}: determines the frequency range of the EGG signal to be maximally amplified. The range of interest for the EGG signal is 0.5-9.0 cpm, or 0.0083 to 0.15 Hz, which is much lower than that of most extracellular recordings.
\begin{itemize}
\tightlist
\item
In addition to the basic fundamental frequencies of 0.5-9.0 cpm, it is also important to record certain harmonics (multiples of the fundamental frequency). Accordingly, an appropriate frequency setting is in the range of 0.0083 to 1 Hz.
\end{itemize}
\end{itemize}
\textbf{Skin Preparation}
\begin{itemize}
\tightlist
\item
First, the abdominal skin where the electrodes are to be positioned should be thoroughly cleaned to ensure that the impedance between the pair of electrodes is below 10 kΩ.
\begin{itemize}
\tightlist
\item
To do so, it is advised to abrade the skin until it turns pinkish using some sandy skin-preparation jelly, and then apply a thin layer of electrode jelly for 1 minute for the jelly to penetrate into the skin.
\end{itemize}
\item
Before placing the electrode, the excess jelly must be completely wiped off.
\end{itemize}
\textbf{Subject Position}
\begin{itemize}
\tightlist
\item
Ensure that the subject is in a comfortable position, most commonly supine, so that body movement can be completely avoided or reduced to a minimum. The subject should be asked not to talk, move, read, or make phone calls during the procedure.
\item
Ensure that the position of the subject is the same if there are multiple sessions. Timing of unavoidable body movement or motion artifacts should be noted and the recording periods with motion artifacts must be removed before analysis.
\end{itemize}
\textbf{Duration of Recording}
\begin{itemize}
\tightlist
\item
A common mistake in recording the EGG is that the recording is too short. Unlike the ECG in which there are about 60 waves every minute, the EGG is composed of only 3 waves every minute. That is, if the recording is of a short duration of 5 minutes, there are only 15 waves which are obviously insufficient for analysis and interpretation.
\item
Ideally, at least a 30-minute period is needed to ensure an accurate measure of gastric slow waves in a particular state, such as fasting, fed, baseline or after intervention.
\end{itemize}
\textbf{Meals}
\begin{itemize}
\tightlist
\item
We will ask participants to eat something about 1 hour before they come into the lab.
\item
Then we will give them water (as a test meal) immediately before performing the EGG.
\item
The subjects should all drink the same amount of water.
\end{itemize}
\textbf{Analysis}
\begin{itemize}
\tightlist
\item
The EGG also contains respiration artifact that is between 12-25 cpm and sometimes the ECG artifacts (< 60 cpm). Occasionally, the slow wave of the small intestine may also be recorded in the EGG (9-12 cpm).
\item
Although these interferences distort gastric slow waves in the EGG, their frequencies do not overlap with that of the gastric slow waves. Consequently, spectral analysis can be performed to separate the gastric slow waves from interferences.
\item
Before spectral analysis is performed, any periods with motion artifacts must be identified and deleted, because motion artifacts cannot be separated from the gastric slow waves even with spectral analysis. So we will need to record the participant's motion during the task.
\end{itemize}
\textbf{Dominant Frequency and Power}
\begin{itemize}
\tightlist
\item
The dominant frequency and power of the EGG can be derived from the power spectral density assessed by the periodogram method (see the sketch after this list). The normal range of the dominant frequency of the EGG is between 2 and 4 cpm.
\item
The EGG is called bradygastria if its dominant frequency is lower than 2 cpm, tachygastria if its dominant frequency is higher than 4 cpm but lower than 9 cpm, and arrhythmia if there is no dominant peak power in the spectrum
\end{itemize}
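As a minimal sketch of how the dominant frequency could be obtained with a standard periodogram, assuming a pre-processed, artifact-free EGG trace that has already been downsampled to a few Hz (the file name and rate are hypothetical):
\begin{verbatim}
import numpy as np
from scipy.signal import periodogram

fs = 4.0                                # Hz; hypothetical downsampled EGG
egg = np.loadtxt("egg_channel.txt")     # hypothetical artifact-free recording

f, pxx = periodogram(egg, fs=fs)
cpm = f * 60.0                          # convert Hz to cycles per minute

band = (cpm >= 0.5) & (cpm <= 9.0)      # physiologically meaningful band
dominant = cpm[band][np.argmax(pxx[band])]

if 2.0 <= dominant <= 4.0:
    label = "normal"
elif dominant < 2.0:
    label = "bradygastria"
else:
    label = "tachygastria"  # arrhythmia (no clear dominant peak) needs a separate check
print(f"Dominant frequency: {dominant:.2f} cpm ({label})")
\end{verbatim}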
\textbf{Power Ratio or Relative Electrogastrography}
\begin{itemize}
\tightlist
\item
\emph{Power Change}: The ratio of dominant EGG powers after and before an intervention is a commonly used parameter that is associated with alteration in gastric contractions. It is generally accepted that a ratio of > 1 reflects an increase in gastric contractility due to the intervention, whereas a ratio of < 1 reflects a decrease in gastric contractility.
\begin{itemize}
\tightlist
\item
If the decibel (dB) unit is used, the ratio should be replaced by the difference between the baseline and after intervention.
\end{itemize}
\item
\emph{Percentage of Normal Gastric Slow Waves}: The percentage of normal slow waves is a quantitative assessment of the regularity of the gastric slow wave measured from the EGG. It is defined as the percentage of time during which normal gastric slow waves are observed in the EGG. The percentage of normal slow waves can be computed from the running power spectra of the EGG.
\begin{itemize}
\tightlist
\item
In this method, 1 spectrum is derived from every 1 minute (or some other short period) of EGG data; the minute is considered normal if its EGG spectrum exhibits a dominant power in the range of 2-4 cpm (see the sketch after this list). In humans, the normal percentage of gastric slow waves is defined as 70\%.
\end{itemize}
\item
\emph{Percentage of Gastric Dysrhythmia}: The percentage of gastric dysrhythmia is defined as the percentage of time during which gastric dysrhythmia is observed in the EGG. It is computed in the same way as that for the percentage of normal slow waves.
\begin{itemize}
\tightlist
\item
It is further classified into the percentage of bradygastria, the percentage of tachygastria and the percentage of arrhythmia.
\end{itemize}
\end{itemize}
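Following the same approach as the periodogram sketch above, the percentage of normal slow waves could be estimated minute by minute; again this is a hedged illustration with hypothetical inputs, not the lab's actual pipeline.
\begin{verbatim}
import numpy as np
from scipy.signal import periodogram

fs = 4.0                                  # Hz; hypothetical downsampled EGG
egg = np.loadtxt("egg_channel.txt")       # hypothetical artifact-free recording

win = int(60 * fs)                        # one spectrum per one-minute window
n_win = len(egg) // win
normal = 0
for i in range(n_win):
    f, pxx = periodogram(egg[i * win:(i + 1) * win], fs=fs)
    cpm = f * 60.0
    band = (cpm >= 0.5) & (cpm <= 9.0)
    dom = cpm[band][np.argmax(pxx[band])]
    if 2.0 <= dom <= 4.0:                 # minute is normal if dominant power is 2-4 cpm
        normal += 1

print(f"Percentage of normal slow waves: {100.0 * normal / n_win:.0f}%")
\end{verbatim}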
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{gsr}{%
\subsection{GSR}\label{gsr}}
Electrodermal response (EDR) measurements (including Galvanic Skin Response, GSR) show the activity of the eccrine sweat glands. Typically, one will place electrodes where the concentration of these glands is the highest: namely, the fingertips. The activity of the eccrine sweat glands as a response to physiological excitation (stress, fear, etc.) serves to increase the conductivity of the skin when activated. When one applies a very small electric voltage (0.5 V) between two electrodes, the manifested electrical conductance varies in direct proportion to the electric current flowing between the electrodes. For instance, if a subject is presented a stimulus and the palms start to sweat, this response indicates a highly-stimulated state. The EDR of this subject will then be higher than the subject's baseline. If another subject receives the same stimulus and the palms remain as ``cool as a cucumber,'' the EDA reading will remain unchanged with respect to the baseline. EDR undergoes relatively fast habituation (decrease of amplitude) in the event the same stimulus is repeated over and over to the same subject.
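In other words, with the applied voltage held fixed, conductance follows directly from Ohm's law; the numbers below are purely illustrative:
\[
G = \frac{I}{V}, \qquad \text{e.g.}\quad I = 5\,\mu\mathrm{A} \text{ at } V = 0.5\,\mathrm{V}
\;\Rightarrow\; G = \frac{5\,\mu\mathrm{A}}{0.5\,\mathrm{V}} = 10\,\mu\mathrm{S}.
\]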
\textbf{Biopac Setup}
\begin{itemize}
\tightlist
\item
In our GSR setup, we have one transmitter with one channel.
\item
The red lead is the signal, the black lead is the ground.
\item
Because we are using a wireless setup, there needs to be a clear line of sight between the transmitter and the receiver.
\end{itemize}
\textbf{Electrode Placement}
\begin{itemize}
\tightlist
\item
We will be placing the signal electrode on the middle finger of the child's non-dominant hand.
\begin{itemize}
\tightlist
\item
This has been noted to be the region of the hand with the most concentrated and reactive eccrine sweat glands, and using the non-dominant hand ensures that the participants will be able to continue with other activities that they may be tasked with while hooked up to the GSR module.
\end{itemize}
\item
The ground electrode can be attached anywhere within reach of the transmitter leads. While the above paper grounded to a position on the participant's arm, for the sake of consistency and simplicity, we are attaching the second electrode to the participants' index finger (on the same hand).
\begin{itemize}
\tightlist
\item
This provides an effective ground, consolidates the leads into one area (preventing potential interference or having the electrodes pulled off by strain on the leads), and also standardizes the placement across all the participants.
\end{itemize}
\end{itemize}
\textbf{Filtering and Signal Frequency BioPac Recommendation}
\begin{itemize}
\tightlist
\item
The sample rate can be set quite low for long-term ambulatory measurements or experiments that do not require a high level of temporal precision (i.e., 1-5 samples per second). However, lower sample rates cannot ensure that specific events are accurately represented in what is relayed and graphed, and a degree of timing error might occur.
\begin{itemize}
\tightlist
\item
To avoid this, BioPac recommends the sampling rate be set to a minimum of 2000 samples/sec (2 kHz). Higher sample rates are useful for a number of methodological reasons and for improvements in precision.
\end{itemize}
\item
For EDA/GSR measurements, it is typical to filter the data at 35 Hz. Some recommendations are that a sample rate of 200 Hz-400 Hz is a minimum to ensure enough samples for accurate separation of phasic waveforms from tonic signals and a more accurate representation of signal shape.
\item
A general approach is to always err on the side of caution and sample higher than you really need. As a general rule, 1000 Hz-2000 Hz sample rates are more than sufficient and easily achievable. The decision for the present study is to collect at a sampling rate of 2000 Hz (see the sketch after this list).
\end{itemize}
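As a minimal sketch of those numbers in practice (hypothetical file name; not the lab's actual processing script), data acquired at 2000 Hz can be low-pass filtered at 35 Hz as follows:
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000.0                              # acquisition rate chosen above (2 kHz)
eda = np.loadtxt("gsr_channel.txt")      # hypothetical export of the GSR channel

b, a = butter(4, 35.0, btype="low", fs=fs)   # 35 Hz low-pass, typical for EDA/GSR
eda_filtered = filtfilt(b, a, eda)           # zero-phase filtering
\end{verbatim}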
\textbf{Subject Position}
\begin{itemize}
\tightlist
\item
Ensure that the subject is in a comfortable position, so that body movement can be completely avoided or reduced to a minimum. The subject should be asked not to talk, move, read, or make phone calls during the procedure.
\item
Ensure that the position of the subject is the same if there are multiple sessions. Timing of unavoidable body movement or motion artifacts should be noted and the recording periods with motion artifacts must be removed before analysis.
\end{itemize}
\textbf{Gathering GSR Data}
\begin{itemize}
\tightlist
\item
Don't abrade the skin
\item
After prepping 2 electrodes with a dab of Gel101, attach them to the child's middle and index fingers
\begin{itemize}
\tightlist
\item
Let these rest to let the adhesive set and gel soak in for 5 minutes
\end{itemize}
\item
Ask the child to put on the PPGED transmitter like a wrist watch. Assist if needed.
\item
Attach the transmitter leads to the electrodes (red to middle, black to pointer)
\begin{itemize}
\tightlist
\item
These should be the only two leads connected to the device
\item
If there are more leads present, ensure you have the correct transmitter and that the correct lead set is plugged in
\end{itemize}
\item
Turn on the transmitter and ensure that both the light on the Biopac receiver module and the transmitter are green (the transmitter should be flashing, whereas the receiver should be solid)
\item
Make sure there is a clear, unobstructed line of sight between the transmitter and receiver antenna
\item
Open AcqKnowledge by selecting the template file on the desktop
\begin{itemize}
\tightlist
\item
If the system is not connected to the hardware, make sure the wifi is turned off and restart AcqKnowledge
\end{itemize}
\item
Ensure all devices are connected and lead wires attached properly
\item
Hit the green ``Start'' button and click through all of the dialog boxes that you're prompted with
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{asa}{%
\section{ASA}\label{asa}}
\textbf{Loading Users into ASA Protocol}
ASA is a 24-hour dietary assessment tool that we use as part of MBB for participants to enter information regarding their diet and generate a nutrition report. Each MBB participant requires their own unique ASA username and password to complete this task.
To load new users into ASA:
Go to the following website: \url{https://asa24.nci.nih.gov/}
Log in using username and password found on the BabLab Internal Site.
\begin{figure}
\centering
\includegraphics{images/research_protocols/asa/asa_1.png}
\caption{}
\end{figure}
Click on ``Respondent Accounts'' to add new accounts to ASA
\includegraphics{images/research_protocols/asa/asa_2.png}
Click ``Start Wizard'' to set up additional account usernames and passwords.
\includegraphics{images/research_protocols/asa/asa_3.png}
Respondent usernames will automatically begin with MBB and an ID number of your choice. Confirm with the Lab Manager what the MBB ID numbers should be for the study, and how many participant accounts should be created. Because participants complete ASA for different waves, the MBB ID numbers need to be different for each Wave.
Ex: Wave 1 MBB participant usernames are MBB001-MBB150
Wave 2 MBB participant usernames are MBB2001-MBB2150
Note: it is possible that Wave 3 participants will be MBB3001-MBB3150, BUT confirm with the Lab Manager.
Enter the Starting ID Number, and ensure that the Example contains MBBXXXX with the starting ID number you have entered.
\includegraphics{images/research_protocols/asa/asa_4.png}
Next, enter the number of respondent accounts you need to add for the study. (Previous studies have added 150 respondent accounts).
\includegraphics{images/research_protocols/asa/asa_5.png}
Next, the number of recalls will automatically be entered as specified in the original study details. You can leave this unchanged and click next.
\includegraphics{images/research_protocols/asa/asa_6.png}
Confirm Study details before clicking next.
\includegraphics{images/research_protocols/asa/asa_7.png}
Next, we will be autogenerating passwords for each respondent account. Select the option to provide a root word for each password and enter ``Bablab'' as the root word. (This will autogenerate passwords such as ``Bablab@194''). Click Finish.
\includegraphics{images/research_protocols/asa/asa_8.png}
ASA will generate two files with the date of creation: one file with the Usernames and Passwords, and one Template File. Download and save the Username and Password File, which will contain the autogenerated username and password information for each participant.
Save this file on Box, under Bablab \textgreater{} studies \textgreater{} Mind\_Brain\_Body \textgreater{} ID\_Drive as ``MBB\_asa\_usernames\_passwords.csv''
This file should remain confidential and not be paired with any identifying information.
\includegraphics{images/research_protocols/asa/asa_9.png}
These usernames and passwords can now be shared with participants to allow them to complete the ASA task under their own respondent accounts.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{gorilla}{%
\section{Gorilla}\label{gorilla}}
\textbf{Creating Tasks on Gorilla Protocol}
Gorilla is an online experiment builder that allows scientists to collect behavioral data with an easy-to-use graphical interface. It allows for the building of tasks and questionnaires to be administered online to participants. The BABLab has commonly used Gorilla for the Halloween task as part of the Mind, Brain, Body Study.
Go to the following website: \url{https://gorilla.sc/}
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla1.png}
\caption{}
\end{figure}
Log in using username and password found on the BabLab Internal Site.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla2.png}
\caption{}
\end{figure}
Once logged in, click on ``Projects'' to access the lab's current studies.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla3.png}
\caption{}
\end{figure}
If you are working on a current study or a new wave of a current study, click on that study. If not, create a new project in the upper right-hand corner.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla4.png}
\caption{}
\end{figure}
For example, the project page for MBB contains ``experiments,'' which are the finished experiments that participants complete, as well as ``tasks \& questionnaires,'' which must be created first and then added to the final experiment.
\textbf{IF you are updating a current project:}
\textbf{IMPORTANT} It is imperative that you do NOT edit any current tasks, questionnaires, or experiments that are in use within the lab, UNLESS explicitly instructed to do so.
IF you are building a new experiment within a current study such as MBB, you may CLONE the experiment or task, so that it stays separate from previous tasks/experiments. (As seen by the naming conventions below)
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla5.png}
\caption{}
\end{figure}
To clone an experiment or task, select the item you'd like to clone, and under settings, select ``clone'', and rename it, following the previous naming conventions.
Ex: ``MBB\_wave\_x'' or ``task\_wave\_x''
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla6.png}
\caption{}
\end{figure}
\textbf{IF you are creating a new project:}
Select ``Create a New Project'' and name it accordingly. Your project will contain all of your tasks, questionnaires, and experiments. Tasks and Questionnaires are components of Experiments.
\textbf{To create a Questionnaire}, select the ``Create'' button, and select the type you'd like to create.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla7.png}
\caption{}
\end{figure}
Enter a name for your questionnaire and select OK. If you'd like to clone a previous task, you have the option to do so here as well.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla8.png}
\caption{}
\end{figure}
Gorilla offers many different options for types of Questionnaires that can be included in your online experiments, as seen below. In this example, we will select \textbf{``Rating Scale/Likert''}
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla9.png}
\caption{}
\end{figure}
This is the format you will see when creating a task. On the left, you enter the questions that make up your questionnaire, as well as any variables and titles. On the right side, you will see a live preview of how your question will appear during your experiment.
For example, we have created a Likert scale where a participant can rate their experience from 1 to 5. We included the rating options 1, 2, 3, 4, 5, and labeled 1 as worst and 5 as best.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla10.png}
\caption{}
\end{figure}
To add additional questions within your questionnaire, you can select ``Add Widget Here'', and continue your questionnaire.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla11.png}
\caption{}
\end{figure}
When you complete your questionnaire, select ``Commit Version x'' to save it as a completed task. You can go back and edit this questionnaire as necessary and commit the questionnaire to a newer version as well.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla12.png}
\caption{}
\end{figure}
To create a task, select the ``Create'' button, and select the ``Task builder task'' and name it accordingly.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla13.png}
\caption{}
\end{figure}
We first begin with task structure. Depending on the nature of your study, you will most likely begin with an instructions screen. Select the first + symbol, and title your first display ``instructions''.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla14.png}
\caption{}
\end{figure}
The following steps are specific to your current study, but generally the next displays will be the trials in your experiment, followed by an end/debrief screen. Select the next + symbols, and enter names for each of the displays you require for your study. In this example, we've added a display for trials, and a display for the ending of the experiment.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla15.png}
\caption{}
\end{figure}
The instruction and end pages can be customized by clicking the ``+'' icon, where you can choose a template for these displays. In this example, we will select ``rich text'' as this is the most common display we use for providing instructions and debriefing.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla16.png}
\caption{}
\end{figure}
For your trials, you can also choose between a variety of templates, depending on the nature of your research. You can select multiple different screens in your preferred order, such as a fixation screen, followed by a screen with an image, and three buttons.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla17.png}
\caption{}
\end{figure}
Now your task will look something like this, and next you can click each display to edit them for your experiment.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla18.png}
\caption{}
\end{figure}
To customize the displays you have chosen, click on the display and select the red box to enter the instructions for your study. You can also customize the button the participant presses to continue through the study.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla19.png}
\caption{}
\end{figure}
Similarly, your trials page can be tailored to your study, depending on your stimuli and answer choices.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla20.png}
\caption{}
\end{figure}
The number of trials, and the content within each trial will be determined by a spreadsheet you create. Under the spreadsheet tab, click ``Download Spreadsheet'' to download a template for which you will enter your trial information. This spreadsheet drives the information that will be shown to the participant.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla21.png}
\caption{}
\end{figure}
This will download an xlsx file, in which you can specify your displays, stimuli, answers, and the randomization of the trials. Below is an example of an experiment with instructions, three randomized trials, and an ending page.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla22.png}
\caption{}
\end{figure}
You can then upload this spreadsheet to Gorilla. Note: the displays must match exactly as they are formatted in your task structure page (e.g., entries are case sensitive). Any items in green have been identified and connected to your task structure -- if an item is not green, Gorilla has not recognized it, and it should be double-checked.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla23.png}
\caption{}
\end{figure}
Once your task looks complete, preview it to test its functionality. When you are happy with it, commit the task to finalize it.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla24.png}
\caption{}
\end{figure}
Once your task and questionnaires are completed, you can then create your experiment. Under your project, create a new experiment and name it accordingly.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla25.png}
\caption{}
\end{figure}
Your experiment page will begin with start and finish nodes. You can then select the option ``+ Add New Node'' to add any task or questionnaire to your study.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla26.png}
\caption{}
\end{figure}
Select the type of node(s) you'd like to include in your experiment, and drag the arrow bar to and from each node in the order you'd like your experiment to function.
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla27.png}
\caption{}
\end{figure}
For example, you can include your test task in your experiment with connecting arrows, and your experiment will appear as seen below. At this point, you can preview and commit your experiment, and it will be ready to run!
\begin{figure}
\centering
\includegraphics{images/research_protocols/gorilla/gorilla28.png}
\caption{}
\end{figure}
For additional instructions on creating a task in Gorilla, see the following instructional video: \url{https://www.youtube.com/watch?v=syw-7XKLCM4}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{git}{%
\section{Git}\label{git}}
To use GitHub and other Git applications, you will need to install Git on your local computer. Git runs in the background and performs tracking and version control for you. This is similar to the way that RStudio runs on top of R.
To install Git:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Go \href{https://git-scm.com/downloads}{here}
\item
Use the \href{https://brew.sh/}{homebrew} method
\begin{itemize}
\tightlist
\item
Type this line of code into your terminal and press enter
\item
  When you see the prompt to press RETURN, press enter again
\item
  You may have to enter your password; if it looks like you aren't entering text, you really are, so just type it and press enter
\end{itemize}
\end{enumerate}
\begin{verbatim}
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
\end{verbatim}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Once that has installed, type this code into your terminal and press enter
\end{enumerate}
\begin{verbatim}
brew install git
\end{verbatim}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
Git should now be installed
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{github}{%
\section{GitHub}\label{github}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Download \href{https://desktop.github.com/}{GitHub Desktop}
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/1.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Create new repository on GitHub Desktop
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/2.png}
\caption{}
\end{figure}
\begin{itemize}
\tightlist
\item
Make sure to select the correct parent folder
\end{itemize}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/3.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Initialize the repository
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/4.png}
\caption{}
\end{figure}
\begin{itemize}
\tightlist
\item
If you press command + shift + . you can see the hidden git files
\end{itemize}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/5.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\item
Add your files to the folder
\item
Open back to GitHub Desktop
\end{enumerate}
\begin{itemize}
\tightlist
\item
The blue dot means there are changes in that folder to commit
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{5}
\tightlist
\item
Publish the repository (this sets up the repository online - choose which organization it should go to)
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/6.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{6}
\tightlist
\item
Create the first commit and publish the repository (add a comment and select your organization)
\end{enumerate}
\begin{itemize}
\tightlist
\item
  Committing saves the changes to your local Git history (your computer's local record of changes)
\end{itemize}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/11.png}
\caption{}
\end{figure}
\includegraphics{images/research_protocols/github/7.png}
\includegraphics{images/research_protocols/github/8.png}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{7}
\tightlist
\item
Push the repository
\end{enumerate}
\begin{itemize}
\tightlist
\item
Push the changes that git has catalogued to the online GitHub
\end{itemize}
\includegraphics{images/research_protocols/github/9.png}
\includegraphics{images/research_protocols/github/10.png}
\includegraphics{images/research_protocols/github/12.png}
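If you prefer the command line, the rough equivalent of the commit and push steps above is shown below. The commit message is a placeholder, and the default branch may be called \texttt{master} or \texttt{main} depending on the repository.
\begin{verbatim}
git add -A
git commit -m "Describe your changes here"
git push -u origin main
\end{verbatim}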
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{8}
\tightlist
\item
Go to GitHub online and you can see your repository with all of your files and latest commits
\end{enumerate}
\begin{itemize}
\tightlist
\item
Important to note that the README file will display as the main page for the repository
\end{itemize}
\includegraphics{images/research_protocols/github/13.png}
\includegraphics{images/research_protocols/github/14.png}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{9}
\tightlist
\item
  If you click on the commits button, you can browse your entire history of commits, explore the files, and download the version from any point in its history
\end{enumerate}
\includegraphics{images/research_protocols/github/15.png}
\includegraphics{images/research_protocols/github/16.png}
\includegraphics{images/research_protocols/github/17.png}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{10}
\tightlist
\item
Adding a .gitignore file will allow you to ``skip'' over certain types of files that you don't want to commit
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/18.png}
\caption{}
\end{figure}
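A .gitignore can also be created from the terminal; the patterns below are only hypothetical examples and should be adapted to your own project.
\begin{verbatim}
cat > .gitignore <<'EOF'
.DS_Store
.Rhistory
.RData
.Rproj.user/
EOF
\end{verbatim}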
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{11}
\tightlist
\item
You can change the name of the root folder on your local directory - just be sure to use the ``locate'' function in GitHub Desktop to locate it
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/20.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{12}
\tightlist
\item
  Make your repository public
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/19.png}
\caption{}
\end{figure}
\textbf{For a more thorough description of Git, see this \href{https://vuorre.netlify.app/publication/2018/06/01/curating-research-assets-a-tutorial-on-the-git-version-control-system/vuorre-curating-research-assets-2018.pdf}{article}.}
\textbf{Vuorre, M., \& Curley, J. P. (2018). Curating Research Assets: A Tutorial on the Git Version Control System. Advances in Methods and Practices in Psychological Science, 1(2), 219--236. \url{https://doi.org/10.1177/2515245918754826}}
\hypertarget{how-to-use-github-like-a-software-developer}{%
\subsection{How to use GitHub like a Software Developer}\label{how-to-use-github-like-a-software-developer}}
The above method is great if you are working alone in your own repository, but it doesn't work well for collaboration, because only one git repository can be linked to a folder at one time. GitHub has an integrated workflow for this.
\textbf{Collaborative Workflow}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Fetch origin
\item
Create branch
\item
Code (Make sure you are working in the correct project on R)
\item
  Build / Knit (Build is for Wikis/Books - Knit is for regular R Markdown files)
\item
Commit
\item
Push (and/or publish branch)
\item
Pull
\item
Merge
\item
Delete branch on GitHub online AND on local computer GitHub Desktop
\item
Repeat
\end{enumerate}
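For reference, the same cycle can also be run from the command line. The sketch below is only an outline: \texttt{yourname} is a placeholder branch name, the commit message is a placeholder, and the default branch may be called \texttt{master} or \texttt{main} depending on the repository.
\begin{verbatim}
git fetch origin                       # 1. fetch the latest changes
git checkout -b yourname               # 2. create your own branch
#  3-4. edit files, then build/knit in RStudio
git add -A                             # stage the changes
git commit -m "Describe your changes"  # 5. commit
git push -u origin yourname            # 6. push / publish the branch
#  7-8. open a pull request on GitHub.com and let the owner merge it
git checkout master                    # return to the default branch
git branch -d yourname                 # 9. delete the branch locally...
git push origin --delete yourname      #    ...and on the remote
#  10. repeat: fetch origin again before starting new work
\end{verbatim}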
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
The first step is to fetch origin (or clone down if this is your first time working in this repository)
\end{enumerate}
If fetching:
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/30.png}
\caption{}
\end{figure}
If cloning: clone the repository from your organization into a local directory on your computer using GitHub Desktop
\begin{itemize}
\tightlist
\item
  Anywhere is fine as long as it's not a shared folder
\item
  Box, Dropbox, Google Drive folders are fine as long as you are the ONLY user (i.e.~your personal Google Drive etc.)
\end{itemize}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/22.png}
\caption{}
\end{figure}
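If you prefer to clone from the command line instead of GitHub Desktop, the equivalent is roughly the following (the URL is a placeholder for your organization's repository):
\begin{verbatim}
git clone https://github.com/your-organization/your-repository.git
\end{verbatim}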
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
The next step in a collaborative repository is to create your own branch on the GitHub Desktop app
\begin{itemize}
\tightlist
\item
NEVER work directly in the master branch - you might break something
\item
Name it after yourself (this will be your workspace)
\end{itemize}
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/29.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\item
Now you can begin coding! (make/add/remove things etc. directly in the folders on your computer)
\begin{itemize}
\tightlist
\item
Make sure you are working in the correct project on R
\end{itemize}
\item
Build/Knit - when you are done making your desired changes, click build or knit
\item
Commit - Commit your changes using GitHub Desktop
\begin{itemize}
\tightlist
\item
This is kind of like saving the changes to your local computer change tracker
\item
Make sure you put a comment in the box (no description necessary)
\item
Make your comment useful to those reviewing it
\end{itemize}
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/25.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{5}
\tightlist
\item
Push - now push your changes to GitHub.com - this is like saving your local changes online (or publish branch)
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/26.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{6}
\tightlist
\item
  Pull - Now, in order to integrate your changes with the master copy (which hosts the public-facing website), you need to submit a pull request.
\begin{itemize}
\tightlist
\item
This means that you want the owner of the repository to pull in your changes to the master branch
\item
This will take you to GitHub.com, follow the instructions to create your pull request
\item
Make as detailed notes as possible on what changes you have created in this round of updates
\end{itemize}
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/27.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{7}
\tightlist
\item
Merge - on GitHub.com, the owner of the repository will then be able to review your pull request, fix any conflicts, and merge the branch into the master. Usually, this is pretty simple if there aren't any conflicts!
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/28.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{8}
\tightlist
\item
After the merge - delete your branch on your local computer AND on the remote (Github.com)
\end{enumerate}
\begin{itemize}
\tightlist
\item
You can do both simultaneously by navigating to your GitHub Desktop app and pressing
\end{itemize}
Mac: COMMAND + SHIFT + D
Windows: WINDOWS + SHIFT + D
\begin{itemize}
\tightlist
\item
Check the box to delete branch on remote
\end{itemize}
\begin{figure}
\centering
\includegraphics{images/research_protocols/github/31.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{9}
\tightlist
\item
Repeat - now before you start new work, make sure to fetch origin and repeat the process.
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{osf}{%
\section{OSF}\label{osf}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Create a new project on OSF
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/osf/1.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Title it and choose storage location (US)
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/osf/2.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Navigate to the new project and click Add Ons
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/osf/3.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
Enable GitHub
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/osf/4.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{4}
\tightlist
\item
Link OSF to your GitHub account
\end{enumerate}
\includegraphics{images/research_protocols/osf/5.png}
\includegraphics{images/research_protocols/osf/6.png}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{5}
\tightlist
\item
Select the repository you want to link
\end{enumerate}
\includegraphics{images/research_protocols/osf/7.png}
\includegraphics{images/research_protocols/osf/8.png}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{6}
\tightlist
\item
Now all of your files are visible in the OSF project
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/osf/9.png}
\caption{}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{7}
\tightlist
\item
Make your OSF project public
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/osf/10.png}
\caption{}
\end{figure}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{wiki-creation}{%
\section{Wiki Creation}\label{wiki-creation}}
\textbf{In order to properly build the wiki you will need to install \href{https://www.latex-project.org/}{LaTeX}}
\emph{This is a huge installation, so leave plenty of time}
\textbf{You will also need to have the bookdown package installed in R-Studio}
\emph{To install and load bookdown in R run the following code}
\texttt{install.packages(\textquotesingle{}bookdown\textquotesingle{})}
\texttt{library(bookdown)}
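For reference, the book can also be built from the terminal with a one-line call to bookdown; this is only a sketch and assumes the book's entry point is \texttt{index.Rmd}.
\begin{verbatim}
Rscript -e 'bookdown::render_book("index.Rmd")'
\end{verbatim}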
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Create a new project based on the wiki template (duplicate and rename for your project/study)
\end{enumerate}
\includegraphics{images/research_protocols/wiki/1.png}
\includegraphics{images/research_protocols/wiki/2.png}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Rename any instance of ``Template'' to your project's title (open using RStudio)
\end{enumerate}
\begin{itemize}
\tightlist
\item
.Rproj itself (YOU MUST OPEN THIS PROJECT FILE TO GET STARTED)
\item
\_bookdown.yml file
\item
\_output.yml file
\item
index.rmd file
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Each .Rmd file creates a section
\end{enumerate}
\begin{itemize}
\tightlist
\item
Index is always the home page
\item
  You can create subsections by creating new .Rmd files
\item
These files use markdown syntax
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\item
Create a new repository using GitHub Desktop for the wiki
\item
Move all the files from your draft into the repository folder, commit, and push
\end{enumerate}
\begin{itemize}
\tightlist
\item
Make sure to always BUILD before you commit and push so that all the necessary files are updated
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{5}
\tightlist
\item
Go into settings in GitHub online
\end{enumerate}
\begin{itemize}
\tightlist
\item
Scroll down to GitHub pages
\item
Select master/branch/docs folder to set your GitHub Pages site to the docs folder within your bookdown files
\end{itemize}
\includegraphics{images/research_protocols/wiki/3.png}
\includegraphics{images/research_protocols/wiki/4.png}
\includegraphics{images/research_protocols/wiki/5.png}
\includegraphics{images/research_protocols/wiki/6.png}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{6}
\tightlist
\item
  Add the link to the wiki on OSF
\end{enumerate}
\begin{figure}
\centering
\includegraphics{images/research_protocols/wiki/7.png}
\caption{}
\end{figure}
\hypertarget{offboarding}{%
\chapter{Offboarding}\label{offboarding}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{offboarding---volunteer-research-assistant}{%
\section{Offboarding - Volunteer Research Assistant}\label{offboarding---volunteer-research-assistant}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{offboarding-tasks---volunteer-research-assistant}{%
\subsection{Offboarding Tasks - Volunteer Research Assistant}\label{offboarding-tasks---volunteer-research-assistant}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Ensure all tasks on Trello have been reassigned to another active member of the lab, or completed prior to departure.
\item
Any departing Research Assistant should train another active member of the lab on any processes/tasks they have been in charge of (if nobody else in the lab carries this knowledge).
\item
Lab manager should remove this member from all necessary Lab applications (Slack, Box, Trello, Shared Google Calendar, etc.).
\item
  Lab Manager to update the Lab website and remove their information from the internal website.
\item
Social Media Manager to update status to Lab alumni.
\item
Submit PAF form with updated end date to HR for all volunteers (see specific instructions below)
\item
Lab Manager to send RA \href{https://docs.google.com/spreadsheets/d/1BDrPZkQR2k0A2yIzCgjrRj-ve8a-tWYQdN2QSSsppbQ/edit?usp=sharing}{BABLAB academic activity} list to fill out before they depart
\end{enumerate}
\hypertarget{paf}{%
\subsection{PAF}\label{paf}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
If the Volunteer Research Assistant is leaving prior to their previously specified end date, lab manager must submit a PAF form to HR.
\begin{itemize}
\tightlist
\item
If the previously specified end date is not known, email HR to confirm
\end{itemize}
\item
Request a Personnel Action Form (PAF) from HR (Michelle Claudio, \href{mailto:[email protected]}{\nolinkurl{[email protected]}}).
\item
Fill out the Personnel Action Form (PAF) with effective date, new official end date, and Bridget's signature.
\begin{itemize}
\tightlist
\item
Note: fund manager signature is not needed, as volunteer positions do not require funding
\end{itemize}
\item
Once submitted, HR will update the Volunteer Research Assistant's UCPath.
\item
Save the PAF to Box (BabLab\textgreater{}Lab\textgreater{}Documents\textgreater{}RA\_hiring\_documents\textgreater{}RA\_offboarding\_PAF)
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{offboarding---lab-manager}{%
\section{Offboarding - Lab Manager}\label{offboarding---lab-manager}}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{offboarding-tasks---lab-manager}{%
\subsection{Offboarding Tasks - Lab Manager}\label{offboarding-tasks---lab-manager}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Ensure all tasks on Trello have been reassigned to another active member of the lab, or completed prior to departure.
\item
Lab Manager to transfer ownership on the following applications:
\end{enumerate}
\begin{itemize}
\tightlist
\item
GitHub (all wikis and projects)
\item
Slack Ownership
\item
Psychology website
\item
Trello
\item
Google voice forwarding phone
\item
Security email/information for bab gmail
\item
ALBMC account
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
Remove from all necessary Lab applications (Slack, Box, Trello, Shared Google Calendar, etc.).
\item
  Update the Lab website and remove information from the internal website.
\item
Social Media Manager to update status to Lab alumni.
\item
Contact HR for all offboarding paperwork/tasks.
\item
  Fill out the \href{https://docs.google.com/spreadsheets/d/1BDrPZkQR2k0A2yIzCgjrRj-ve8a-tWYQdN2QSSsppbQ/edit?usp=sharing}{BABLAB academic activity} list before departing
\item
Lab Manager to write any outstanding protocols needed.
\item
Lab Manager to pass over financial log password.
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\textbf{ECG}
McLaughlin, K. A., Sheridan, M. A., Tibu, F., Fox, N. A., Zeanah, C. H., \& Nelson, C. A. (2015). Causal effects of the early caregiving environment on development of stress response systems in children. \emph{Proceedings of the National Academy of Sciences}, \emph{112}(18), 5637--5642.\\
\url{https://doi.org/10.1073/pnas.1423363112}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\textbf{EGG}
Yin, J., \& Chen, J. D. Z. (2013). Electrogastrography: methodology, validation, and applications. \emph{Journal of Neurogastroenterology and Motility}, \emph{19}(1), 5-17. \url{http://dx.doi.org/10.5056/jnm.2013.19.1.5}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\textbf{GSR}
Braithwaite, J. J., Watson, D. G., Jones, R., \& Rowe, M. (2013). Guide for analysing electrodermal activity (EDA) \& skin conductance responses (SCRs) for psychological experiments. Technical report: selective attention \& awareness laboratory (SAAL) Behavioural Brain Sciences Centre, University of Birmingham, UK. 1-42. \url{https://www.biopac.com/wp-content/uploads/EDA-SCR-Analysis.pdf}
Martin, I. (1963). Delayed GSR conditioning and the effect of electrode placement on measurements of skin resistance. \emph{Journal of Psychosomatic Research}, \emph{7}(1), 15-22.\\
\url{https://www.sciencedirect.com/science/article/abs/pii/0022399963900473}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\textbf{GitHub}
Vuorre, M., \& Curley, J. P. (2018). Curating Research Assets: A Tutorial on the Git Version Control System. Advances in Methods and Practices in Psychological Science, 1(2), 219--236. \url{https://doi.org/10.1177/2515245918754826}
\bibliography{book.bib,packages.bib}
\end{document}
| {
"alphanum_fraction": 0.7637105295,
"avg_line_length": 35.2094437489,
"ext": "tex",
"hexsha": "a573687a8afd17dee97f31216d160e0e46418e18",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "7f8ff9db5c1ea00322c43d1dfe07258fd44475f6",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "bablab/wiki_bablab",
"max_forks_repo_path": "docs/BABLab.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7f8ff9db5c1ea00322c43d1dfe07258fd44475f6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "bablab/wiki_bablab",
"max_issues_repo_path": "docs/BABLab.tex",
"max_line_length": 1125,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "7f8ff9db5c1ea00322c43d1dfe07258fd44475f6",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "bablab/wiki_bablab",
"max_stars_repo_path": "docs/BABLab.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 53780,
"size": 196856
} |
\section{Observations}
This experiment aims to obtain an optical spectrum of Earthshine and test for
the presence of biomarker features,\eg O$_2$, O$_3$, H$_2$O, and the presence of
the vegetation red edge through fits of several models' components to the
observed spectrum. The observations were performed on the {\bf Mt. Stony Brook
14-inch telescope}, with the {\bf DADOS optical spectrograph} \cite{dados}, and
the {\bf SBIG STL-402 CCD camera} \cite{CCD}, on the date of April 17th, from
1:30 am to 5:30 am. The sky was clear and the temperature was around
55$^{\circ}$F. The overall cloud cover from archival satellite imagery and
the Earth as seen from the Moon are shown in Figures \ref{sat} and
\ref{frommoon} in the appendix.
\subsection{Setting up the Spectrometer}
A spectrometer disperses the incoming Earthshine light by wavelength after it
has been focused by the optical section of the instrument. The intensity of
light at each wavelength is measured and recorded by the detector, and the
resulting spectrum of intensity versus wavelength is analyzed for a number of
features. In order to pick up the key features of oxygen, water, and the
vegetation red edge, we investigate the visible and near-infrared sections of
the electromagnetic spectrum, at wavelengths between 500 and 800 nm.
Earthshine is brightest when most of the Earth as seen from the Moon is
illuminated, {\it i.e.}, when the Moon is only a thin crescent. However, if the
Moon is too close to the Sun, there are difficulties separating
the glow of Earthshine from the bright twilight sky \cite{woolf_etal02}.
We select a night of waxing crescent moon when the angular separation from
the Sun to the moon (from the moonrise to the sunrise) is suitable for the
experiment. This means that the moon is sufficiently high above the
horizon ($>20^{\circ}$), yet sufficiently close to the Sun
($<90^{\circ}$) so that the Earthshine signal is bright. The observations
cease before sunrise: once the Sun reaches about 5$^{\circ}$ below the
horizon, the sky becomes too bright to measure Earthshine.
We set the optical wavelength range of the spectrograph to cover our chosen set
of Earthshine absorption by molecular O$_2$, O$_3$, H$_2$O. This was done before
mounting the spectrograph on the telescope with the help of the Neon light
source. We looked up the wavelengths of the strongest Neon gas transitions
in the optical, adjusting the wavelength range of the spectrograph. We use the
DADOS spectrograph with long integration times. To maximize the spectral
resolution, the narrowest slit ($25\,\mu\mathrm{m}$) is used. This gives a
spectral resolution of $\lambda / \Delta \lambda \sim 500$.
Setting the telescope tracking rate to lunar (\texttt{Autostar II}
keypad), we obtain sequences of spectra of the {\bf bright} (moonshine) and {\bf dark} (earthshine) sides
of the facing Moon, each of them together with the adjacent {\bf sky} (in the
adjacent slit). We also take
calibration exposures of the Neon light source and darks with duration matched
to the duration of all of the science exposures.
\subsection{Estimate of Exposure Times}
Earthshine observations usually have a low S/N (signal-to-noise) ratio because
they are obtained with the Moon low above the horizon, and consequently with a
high airmass and a low Earthshine flux with respect to the sky. Moreover, the
detector can quickly saturate when recording the spectrum of the sunlit Moon
crescent. When it comes to the vegetation signal, the Earthshine data reduction
becomes even more difficult: past works \cite{seager_etal05} have
shown that it is only a few percent (less than $5\%$) above
the continuum. Two reasons are pointed out in the literature: (i) the variable amplitude, induced by the variable cloud cover and the Earth phase, and (ii) the strong atmospheric bands, which need to be removed to access the surface reflectance.
Assuming the measurement is photon-noise limited, the signal-to-noise ratio
for each science exposure can be written as
\begin{equation}
S/N = \sqrt{N_{\gamma}t},
\end{equation}
where $t$ is the integration time and $N_{\gamma}$ the photon count rate.
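As a purely illustrative example (the numbers here are placeholders, not measured rates): a photon rate of $N_{\gamma} \sim 100$ counts per second would require an integration time of $t \sim 100$ s to reach $S/N \sim 100$, since $\sqrt{100 \times 100} = 100$.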
First, we make an estimate of the exposure time for the dark and
the bright side of the waxing crescent moon, supposing that their S/N are
similar. Considering that the {\it full moon} has magnitude
$m_{full} = -12.7$ and the {\it new moon} has magnitude $m_{new}= -2.5$, the
ratio of {\it photon fluxes} for the bright and dark sides of the moon \cite{CCD}
is
\begin{eqnarray}
R_{b/d} &\sim&\frac{F_{bright}}{F_{dark}}, \nonumber \\
 &\sim& 0.1 \times 10^{(m_{new} - m_{full}) / 2.5}, \nonumber \\
 &\sim& 10^{3}. \nonumber
\end{eqnarray}
We can then relate the exposure times for each side: requiring a similar number
of collected photons on both sides gives
\begin{eqnarray}
t_{bright} \times N_{\gamma \, bright} &\sim& t_{dark} \times N_{\gamma \,
dark},
\nonumber
\end{eqnarray}
and since $N_{\gamma \, dark} / N_{\gamma \, bright} \sim R_{b/d}^{-1}$, this
implies $t_{dark} \sim R_{b/d} \times t_{bright}$.
The last result means that, to keep the number of counts comparable for the dark
and bright sides, we should aim for dark-side exposure times around 1000 times
larger than those for the bright side, \eg at least 90 minutes of net
observation of the dark side for a 5 second exposure of the bright side.
However, due to observational constraints, we ended up collecting only about 1/4 of
this value, as shown in Table \ref{looo} of the
appendix.
\subsection{Steps of Data Acquisition}
The data acquisition was performed in the following steps, based on the
exposure times in Table \ref{looo} in the appendix:
\begin{enumerate}
\item We record the spectrum of the Neon lamp on all the slits.
\item We record a non-saturated spectrum of a {\it bright stellar point
source} (Altair) at {\it two distinct positions on the slit}. We trace these
spectra, which give the direction along which we extract the spectrum of the
Earthshine.
\item We track the {\it dark side of the Moon}, positioning the {\it dark limb}
in one slit and the adjacent sky in the other, and recording the spectra.
\item We measure the {\it illuminated limb of the Moon} in the same way,
with a much shorter exposure time to avoid saturation.
\item We repeat the above three procedures while the Sun is still far
enough from the Moon.
\item We obtain sets of dark frames for all the exposure times of
our science frames.
\end{enumerate}
| {
"alphanum_fraction": 0.7647702407,
"avg_line_length": 43.5238095238,
"ext": "tex",
"hexsha": "19fb5ee1744d8f866e6a79609f68d664e8580288",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "956fc5391a2a2a8b08fe531b09e9f29e7d119522",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "bt3gl/Spectra_of_Earthshine",
"max_forks_repo_path": "tex/obser.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "956fc5391a2a2a8b08fe531b09e9f29e7d119522",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "bt3gl/Spectra_of_Earthshine",
"max_issues_repo_path": "tex/obser.tex",
"max_line_length": 245,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "956fc5391a2a2a8b08fe531b09e9f29e7d119522",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "bt3gl/Spectra_of_Earthshine",
"max_stars_repo_path": "tex/obser.tex",
"max_stars_repo_stars_event_max_datetime": "2020-03-08T23:54:18.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-10-28T03:13:04.000Z",
"num_tokens": 1632,
"size": 6398
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
\title{Distilling the knowledge in a neural network}
\author{}
\date{}
\usepackage{natbib}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage[left=2.5cm,right=2.5cm,top=1cm,bottom=1.25cm]{geometry}
\usepackage{hyperref}
\hypersetup{colorlinks=true,urlcolor=blue}
\pagenumbering{gobble}
\begin{document}
\maketitle
\section*{Link}
\href{https://arxiv.org/abs/1503.02531}{arxiv}
\section*{Summary}
\begin{itemize}
    \item We can train very cumbersome models (an ensemble of separately trained models or a single very large model with a strong regularizer) to extract structure from very large, highly redundant datasets. But during deployment we have more stringent requirements on latency and computational resources. To address this we use a different kind of training (distillation) to transfer knowledge from the cumbersome model to a small model that is more suitable for deployment.
    \item When we use the maximum log likelihood criterion as the training objective, the trained model assigns probabilities to all the classes. Even when the probabilities for the wrong classes are very small, their relative magnitudes tend to be significant. For example, a model may assign a very small probability to an image of a BMW being a garbage truck, but it will still usually be much higher than the probability of it being a carrot. This tells us a garbage truck is more similar to a BMW than a carrot is to a BMW. This kind of information shows how a cumbersome model tends to generalize, which is missing when we use hard targets (like one-hot encodings) for training.
    \item One way to transfer the generalization ability of the cumbersome model to a small model is to use the class probabilities produced by the cumbersome model as soft targets for training the small model. When these soft targets have higher entropy they provide more information per training case than hard targets and much less variance in the gradient between training cases, so the small model can often be trained on much less data than the original cumbersome model and with a much higher learning rate. The problem with this approach is that the assigned probabilities are usually too low (especially for tasks like MNIST) for the wrong classes to have a significant effect on the cross entropy error (remember the cross entropy loss is $-\sum_{y} y_{true} \log y_{pred}$, where $y_{true}$ is the soft target from the cumbersome model and $y_{pred}$ the probability obtained from the simpler model). One way to circumvent this is to use the logits (the inputs to the final softmax) as the targets for the simple model, where small values contribute relatively more than their softmax outputs would.
    \item In this paper they propose a solution that raises the temperature term $T$ in the general softmax equation
\begin{equation*}
q_i = \dfrac{exp(z_i/T)}{\sum_{j}exp(z_j/T)}
\end{equation*}
    Raising the temperature produces a softer output, as can be seen in \autoref{fig:Figure 1}. This means the smaller logits contribute relatively more to the cross entropy error.
\begin{figure}
\centering
\includegraphics{softmax_temperature.png}
\caption{Effect of temperature on softmax output}
\label{fig:Figure 1}
\end{figure}
\item For training they use two objective functions:
\begin{enumerate}
\item Cross entropy with the soft target from cumbersome model, using the same high temperature as the complex model.
\item Cross entropy with correct labels(one hot encoding) but with temperature 1.
\end{enumerate}
    The total loss is then a weighted average of those two losses, with the second loss given considerably lower weight than the first. The magnitude of the gradients produced by the first objective scales as $1/T^2$, so they are multiplied by $T^2$ to keep the relative contributions of the soft and hard targets roughly unchanged if $T$ is changed (a short sketch of this scaling is given after this list).
    \item Experiments are done on MNIST using a neural net with two hidden layers of 1200 rectified linear units and dropout regularization as the cumbersome model, and a neural net with two hidden layers of 800 rectified linear units and no regularization as the smaller model. The larger model achieved 67 test errors. The smaller model achieved 146 test errors when trained using hard targets, but when the objective function included the soft targets as well, the smaller model achieved 74 test errors. This showed that the soft targets can transfer a great deal of generalization knowledge to the distilled model.
    \item An experiment on an Automatic Speech Recognition (ASR) system shows that the distilled model is able to achieve much better performance than a model trained on hard labels and can approximate the performance gained by using an ensemble of 10 models.
    \item When the dataset is very large or the individual models are large, we can train several specialist models that each focus on a different confusable subset of the classes. Using soft targets for the specialists can reduce overfitting. Specialist models are much faster to train than large ensembles and improve performance.
    \item Soft targets can act as a regularizer and are a very effective way of communicating the regularities discovered by a model trained on all of the data to another model (possibly trained on less data).
\end{itemize}
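As a brief sketch of the $1/T^2$ scaling mentioned above (following the derivation in the original paper; $z_i$ are the student logits, $v_i$ the teacher logits, and $N$ the number of classes), the gradient of the soft-target cross entropy with respect to a logit is
\begin{equation*}
\frac{\partial C}{\partial z_i} = \frac{1}{T}\left(q_i - p_i\right) = \frac{1}{T}\left(\frac{e^{z_i/T}}{\sum_{j}e^{z_j/T}} - \frac{e^{v_i/T}}{\sum_{j}e^{v_j/T}}\right).
\end{equation*}
In the high-temperature limit, using $e^{x/T} \approx 1 + x/T$ and assuming the logits are roughly zero-mean, this becomes approximately
\begin{equation*}
\frac{\partial C}{\partial z_i} \approx \frac{1}{NT^2}\left(z_i - v_i\right),
\end{equation*}
which is why the soft-target term is rescaled by $T^2$ so that its contribution stays comparable to the hard-target term when $T$ is changed.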
\end{document}
| {
"alphanum_fraction": 0.7874118084,
"avg_line_length": 103.5769230769,
"ext": "tex",
"hexsha": "d51e1e1edb6ba9efe616592b188926d2c32f2ad1",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2020-11-17T06:46:54.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-03-10T14:42:40.000Z",
"max_forks_repo_head_hexsha": "3b787710ee49dd53f5db7a62d91046356d6c5a9e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "xashru/deep-learning-distilled",
"max_forks_repo_path": "source/paper_summary/Distilling the knowledge in a neural network/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "3b787710ee49dd53f5db7a62d91046356d6c5a9e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "xashru/deep-learning-distilled",
"max_issues_repo_path": "source/paper_summary/Distilling the knowledge in a neural network/main.tex",
"max_line_length": 1087,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "3b787710ee49dd53f5db7a62d91046356d6c5a9e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "xashru/deep-learning-distilled",
"max_stars_repo_path": "source/paper_summary/Distilling the knowledge in a neural network/main.tex",
"max_stars_repo_stars_event_max_datetime": "2021-03-01T13:48:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-10-27T16:40:02.000Z",
"num_tokens": 1153,
"size": 5386
} |
\section{Motivation} \label{sec:intro/motivation}
\subsection{Eye-Tracking as an Emerging Technology} \label{sec:intro/motivation/eye_tracking}
Eye-tracking hardware has been available for a long time. Studies dating as far back as the 80s show the use of eye trackers together with computers, which achieved accuracies up to half a degree of visual angle \cite{colin1986}. Even before cameras and computers were advanced enough to measure anything accurately, mirrors have been used in reading exercises to observe gaze patterns and cognitive behavior \cite{vanGog2013}. The current state-of-the-art eye trackers provide massive accuracy improvements, higher sampling frequencies, better data quality, stability, and ease of use.
Most eye trackers in wide adoption today are used by researchers for research purposes, with data quality and price tags fit only for large research budgets. As an effect, manufacturers have had little incentive to promote the commercial availability of the hardware and its accompanying software. However, recent trends in the market have enabled the emergence of much more affordable eye-tracking.
% Only recently have we seen emerging eye trackers with price tags below thousands of dollars.
This trend promotes its use even for the casual user, which has remodeled the commercial approach to eye-tracking applications.
Traditionally, the leading value proposition for eye-tracking has been as an assistive technology for people with disabilities, offering an improvement to the autonomy and quality of life for those in need of alternate input devices \cite{barry1994, corno2002}. More recently, the video-game industry has caught wind of the technology through a series of very promising studies over the past decades \cite{leyba2004, smith2006, tobii2017}. These studies suggest that eye-tracking hardware need not directly substitute existing control input devices but could instead serve to complement them. For example, game developers can make game graphics more immersive by letting the user's gaze point determine camera focus, depth of field, or light exposure. Game characters may interact differently with the user depending on whether they maintain eye contact. Eye-tracking provides a more challenging and immersive experience \cite{antunes2018}, and its adoption is only going to increase as the technology and its applications advance in the future.
\subsection{Eye-Tracking in \acrlong{esports}} \label{sec:intro/motivation/esports}
The ever-expanding competitive gaming environment drives another compelling use case for eye-tracking. As the video game industry is already worth more than the music and movie industries combined \cite{mangeloja2019}, all estimates show a positive trend in the interest for \acrfull{esports}. In fact, market reports show that the \acrshort{esports} audience reached 474 million people in 2018, with a year-on-year growth of +8.7\% \cite{newzoo2021}. With this impressive growth comes the ever-increasing demand for competitive performance analytics.
There is always an incentive to be better at whatever game one plays, especially if the competitive scene is attractive. Naturally, the most effective method of improving performance is through direct feedback. Many amateur and pro players tend to subscribe to software services that provide targeted match feedback, as is evident by the success of companies such as Mobalytics and Shadow Esports. At Osirion AS, we aim to complement existing applications of match feedback with that which can be inferred from the analysis of eye-tracking data.
\subsection{Cognitive Load}
To reliably give targeted feedback to the user, we first need ways of distinguishing the good players from the great. Only then can we begin considering the aspects that separate them and help novice players reach higher levels of performance.
As the French poet Guillaume de Salluste so eloquently portrays them, our eyes can be considered ``windows of the soul'' \cite{hess1965} for their broad implications on cognition. As such, when eye-tracking is available as a data source, cognitive load is a natural step towards performance distinction. Tamara Van Gog, professor of educational sciences at the department of education at Utrecht University, states the following. ``Eye tracking is not only a useful tool to study cognitive processes and cognitive load in computer-based learning environments but can also be used indirectly or directly in the design of components of such environments to enhance cognitive processes and foster learning.'' \cite{vanGog2013}. In a report, she refers to several studies where eye-tracking implementations have increased successful problem-solving.
It is safe to assume that a given task becomes decreasingly demanding with continued practice and increased experience. World-famous psychologist and Nobel Prize winner Daniel \textcite{kahneman2013} published a best-selling pop-science book in 2013, where he depicts two systems that drive the way we think. According to \textcite{kahneman2013}, the fast-thinking ``System 1'' is the most efficient actor when complex tasks are to be executed, as it is guided by intuition. This intuition serves to ease task execution such that capacity is freed from the slow-thinking conscious self. The catch, however, is that intuition needs to be trained. The field of psychology concerned with these ideas is commonly known as \acrfull{clt} and will be explained in detail in section \ref{sec:bt/CLT}. In short, cognitive load is a vastly complex metric that is subject to many confounding variables. Therefore, it is challenging to measure directly, and the development of accurate methods is an open problem.
As will become apparent in section \ref{sec:bt/cognitive_impacts}, ocular measures made available by modern eye-tracking have clear correlations with cognition. If a classification model could accurately predict levels of cognitive load from eye-tracking data, there is great potential for further research in user-targeted feedback in esports. Moreover, combining cognitive load with performance metrics may allow for the calculation of an index of cognitive capacity, mental efficiency, task expertise, or even intellect \cite{sweller1998}.
% If one were to accurately measure cognitive load in an environment where information and presentation can be controlled, there is reason to believe that individual differences in performance can be induced at later stages.
% Mental effort, memory workload, and other intrinsic processing demands are all factors which can be commonly attributed as cognitive load, and this is where the potential of modern eye-tracking equipment becomes apparent. Decades of research in psychology and neuroscience prove a significant correlation between these factors and various ocular events, such as pupil dilation and spontaneous eyeblink rate, just to name a few.
% As Tamara Van Gog, professor of educational sciences at the department of education at Utrecht University states, "Eye tracking is not only a useful tool to study cognitive processes and cognitive load in computer-based learning environments but can also be used indirectly or directly in the design of components of such environments to enhance cognitive processes and foster learning." \cite{vanGog2013}. In a report, she refers to several studies where eye tracking implementations have led to an increase in successful problem solving. Such implementations are outside the scope of this thesis, however they serve as a motivational end to which further research may seek inspiration. | {
"alphanum_fraction": 0.8214943436,
"avg_line_length": 230.3636363636,
"ext": "tex",
"hexsha": "ca5831cd8b38d46d02f430da53a992fe25d538cc",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c0a9631f89a0112b2ade27d05c22818745706fb8",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "JLysberg/thesis-NTNU",
"max_forks_repo_path": "chapters/introduction/motivation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c0a9631f89a0112b2ade27d05c22818745706fb8",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "JLysberg/thesis-NTNU",
"max_issues_repo_path": "chapters/introduction/motivation.tex",
"max_line_length": 1045,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "c0a9631f89a0112b2ade27d05c22818745706fb8",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "JLysberg/thesis-NTNU",
"max_stars_repo_path": "chapters/introduction/motivation.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1476,
"size": 7602
} |
\section*{Hashlists (\textit{hashlist})}
Used to access all functions around hashlists.
\subsection*{\textit{listHashlists}}
List all hashlists (excluding superhashlists).
{
\color{blue}
\begin{verbatim}
{
"section": "hashlist",
"request": "listHashlists",
"accessKey": "mykey"
}
\end{verbatim}
}
{
\color{OliveGreen}
\begin{verbatim}
{
"section": "hashlist",
"request": "listHashlists",
"response": "OK",
"hashlists": [
{
"hashlistId": 1,
"hashtypeId": 0,
"name": "Hashcat Example",
"format": 0,
"hashCount": 6494
},
{
"hashlistId": 3,
"hashtypeId": 14800,
"name": "iTunes test for splitting",
"format": 0,
"hashCount": 1
},
{
"hashlistId": 4,
"hashtypeId": 6242,
"name": "truecrypt test",
"format": 2,
"hashCount": 1
}
]
}
\end{verbatim}
}
\subsection*{\textit{getHashlist}}
Get information about a specific hashlist.
{
\color{blue}
\begin{verbatim}
{
"section": "hashlist",
"request": "getHashlist",
"hashlistId": 1,
"accessKey": "mykey"
}
\end{verbatim}
}
{
\color{OliveGreen}
\begin{verbatim}
{
"section": "hashlist",
"request": "getHashlist",
"response": "OK",
"hashlistId": 1,
"hashtypeId": 0,
"name": "Hashcat Example",
"format": 0,
"hashCount": 6494,
"cracked": 3382,
"accessGroupId": 1,
"isHexSalt": false,
"isSalted": false,
"isSecret": false,
"saltSeparator": ":",
"notes": "This hashlist is from blahblah...",
"useBrain": false
}
\end{verbatim}
}
\subsection*{\textit{createHashlist}}
Create a new hashlist. Please note that it is not ideal to create large hashlists with the API as you have to send the full data. The hashlist data should always be base64 (using UTF-8) encoded. Hashcat brain can only be used if it is activated in the server config.
{
\color{blue}
\begin{verbatim}
{
"section": "hashlist",
"request": "createHashlist",
"name": "API Hashlist",
"isSalted": false,
"isSecret": true,
"isHexSalt": false,
"separator": ":",
"format": 0,
"hashtypeId": 3200,
"accessGroupId": 1,
"data": "JDJ5JDEyJDcwMElMNlZ4TGwyLkEvS2NISmJEYmVKMGFhcWVxYUdrcHhlc0FFZC5jWFBQUU4vWjNVN1c2",
"useBrain": false,
"brainFeatures": 0,
"accessKey": "mykey"
}
\end{verbatim}
}
{
\color{OliveGreen}
\begin{verbatim}
{
"section": "hashlist",
"request": "createHashlist",
"response": "OK"
}
\end{verbatim}
}
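For illustration, the request above can also be assembled programmatically. The following Python sketch is not part of Hashtopolis itself; the server URL and access key are placeholders, and JSON over HTTP POST is assumed to match the user API configuration of your server.
\begin{verbatim}
import base64, requests

# one hash per line, matching the hashlist's separator/format
hashes = "0021ca52049c734ac0d3d6f92042abf7\n"

payload = {
    "section": "hashlist",
    "request": "createHashlist",
    "name": "API Hashlist",
    "isSalted": False, "isSecret": True, "isHexSalt": False,
    "separator": ":", "format": 0, "hashtypeId": 0,
    "accessGroupId": 1,
    # data must be base64 encoded (UTF-8)
    "data": base64.b64encode(hashes.encode("utf-8")).decode("ascii"),
    "useBrain": False, "brainFeatures": 0,
    "accessKey": "mykey",
}

# placeholder URL; adjust to your server's user API endpoint
reply = requests.post("https://hashtopolis.example/api/user.php",
                      json=payload).json()
assert reply["response"] == "OK", reply
\end{verbatim}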
\subsection*{\textit{setHashlistName}}
Set the name of a hashlist.
{
\color{blue}
\begin{verbatim}
{
"section": "hashlist",
"request": "setHashlistName",
"name": "BCRYPT easy",
"hashlistId": 5,
"accessKey": "mykey"
}
\end{verbatim}
}
{
\color{OliveGreen}
\begin{verbatim}
{
"section": "hashlist",
"request": "setHashlistName",
"response": "OK"
}
\end{verbatim}
}
\subsection*{\textit{setSecret}}
Set if a hashlist is secret or not.
{
\color{blue}
\begin{verbatim}
{
"section": "hashlist",
"request": "setSecret",
"isSecret": false,
"hashlistId": 5,
"accessKey": "mykey"
}
\end{verbatim}
}
{
\color{OliveGreen}
\begin{verbatim}
{
"section": "hashlist",
"request": "setSecret",
"response": "OK"
}
\end{verbatim}
}
\subsection*{\textit{importCracked}}
Add some cracked hashes from an external source for this hashlist. The data must be base64 (using UTF-8) encoded.
{
\color{blue}
\begin{verbatim}
{
"section": "hashlist",
"request": "importCracked",
"hashlistId": 5,
"separator": ":",
"data": "JDJ5JDEyJDcwMElMNlZ4TGwyLkEvS2NISmJEYmVKMGFhcWVxYUdrcHhlc0FFZC5jWFBQUU4vWjNVN1c2OnRlc3Q=",
"accessKey": "mykey"
}
\end{verbatim}
}
{
\color{OliveGreen}
\begin{verbatim}
{
"section": "hashlist",
"request": "importCracked",
"response": "OK",
"linesProcessed": 1,
"newCracked": 1,
"alreadyCracked": 0,
"invalidLines": 0,
"notFound": 0,
"processTime": 0,
"tooLongPlains": 0
}
\end{verbatim}
}
\subsection*{\textit{exportCracked}}
Exports the cracked hashes in hash:plain format to a new file. The response includes information about the created file.
{
\color{blue}
\begin{verbatim}
{
"section": "hashlist",
"request": "exportCracked",
"hashlistId": 5,
"accessKey": "mykey"
}
\end{verbatim}
}
{
\color{OliveGreen}
\begin{verbatim}
{
"section": "hashlist",
"request": "exportCracked",
"response": "OK",
"fileId": 7567,
"filename": "Pre-cracked_5_19-07-2018_14-45-52.txt"
}
\end{verbatim}
}
\subsection*{\textit{generateWordlist}}
Generates a wordlist of all plaintexts of the cracked hashes of this hashlist. The response includes information about the created file.
{
\color{blue}
\begin{verbatim}
{
"section": "hashlist",
"request": "generateWordlist",
"hashlistId": 5,
"accessKey": "mykey"
}
\end{verbatim}
}
{
\color{OliveGreen}
\begin{verbatim}
{
"section": "hashlist",
"request": "generateWordlist",
"response": "OK",
"fileId": 7568,
"filename": "Wordlist_5_19.07.2018_14.47.20.txt"
}
\end{verbatim}
}
\subsection*{\textit{exportLeft}}
Generates a left list with all hashes which are not cracked. The response includes information about the created file. This only works for plaintext hashlists!
{
\color{blue}
\begin{verbatim}
{
"section": "hashlist",
"request": "exportLeft",
"hashlistId": 1,
"accessKey": "mykey"
}
\end{verbatim}
}
{
\color{OliveGreen}
\begin{verbatim}
{
"section": "hashlist",
"request": "exportLeft",
"response": "OK",
"fileId": 7569,
"filename": "Leftlist_1_19-07-2018_14-49-02.txt"
}
\end{verbatim}
}
\subsection*{\textit{deleteHashlist}}
Delete a hashlist and all associated hashes. This will also remove the hashlist from any superhashlists it is a member of.
{
\color{blue}
\begin{verbatim}
{
"section": "hashlist",
"request": "deleteHashlist",
"hashlistId": 5,
"accessKey": "mykey"
}
\end{verbatim}
}
{
\color{OliveGreen}
\begin{verbatim}
{
"section": "hashlist",
"request": "deleteHashlist",
"response": "OK"
}
\end{verbatim}
}
\subsection*{\textit{getHash}}
Search if a hash is found on the server. This searches on all hashlists which the user has access to.
{
\color{blue}
\begin{verbatim}
{
"section": "hashlist",
"request": "getHash",
"hash": "0021ca52049c734ac0d3d6f92042abf7",
"accessKey": "mykey"
}
\end{verbatim}
}
{
\color{OliveGreen}
\begin{verbatim}
{
"section": "hashlist",
"request": "getHash",
"response": "ERROR",
"message": "Hash was not found or is not cracked!"
}
\end{verbatim}
}
{
\color{blue}
\begin{verbatim}
{
"section": "hashlist",
"request": "getHash",
"hash": "00428d94d9482d8c7037b6865521b3fd",
"accessKey": "mykey"
}
\end{verbatim}
}
{
\color{OliveGreen}
\begin{verbatim}
{
"section": "hashlist",
"request": "getHash",
"response": "OK",
"hash": "00428d94d9482d8c7037b6865521b3fd",
"crackpos": 12467,
"plain": "wellgetthem"
}
\end{verbatim}
}
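Putting the \textit{getHash} call together, a small illustrative Python sketch (again with a placeholder URL and access key) could check a batch of hashes:
\begin{verbatim}
import requests

URL = "https://hashtopolis.example/api/user.php"  # placeholder

def lookup(h, key="mykey"):
    answer = requests.post(URL, json={
        "section": "hashlist",
        "request": "getHash",
        "hash": h,
        "accessKey": key,
    }).json()
    # "OK" means the hash exists and is cracked
    return answer.get("plain") if answer.get("response") == "OK" else None

for h in ("0021ca52049c734ac0d3d6f92042abf7",
          "00428d94d9482d8c7037b6865521b3fd"):
    print(h, "->", lookup(h))
\end{verbatim}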
| {
"alphanum_fraction": 0.6105462925,
"avg_line_length": 21.0170940171,
"ext": "tex",
"hexsha": "612708613ef4b54e856579f851ee045aeb61de16",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2021-09-11T19:32:06.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-06-15T17:38:57.000Z",
"max_forks_repo_head_hexsha": "0e2d33ca22910876194773f0380649aff776d9ef",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "elreydetoda/htpclientapi",
"max_forks_repo_path": "tex/sections/hashlist.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0e2d33ca22910876194773f0380649aff776d9ef",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "elreydetoda/htpclientapi",
"max_issues_repo_path": "tex/sections/hashlist.tex",
"max_line_length": 267,
"max_stars_count": 11,
"max_stars_repo_head_hexsha": "0e2d33ca22910876194773f0380649aff776d9ef",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "elreydetoda/htpclientapi",
"max_stars_repo_path": "tex/sections/hashlist.tex",
"max_stars_repo_stars_event_max_datetime": "2021-10-29T18:49:52.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-01-23T15:20:18.000Z",
"num_tokens": 2610,
"size": 7377
} |
\documentclass[a4paper]{article}
\usepackage{amssymb,amsmath,amsthm,amsfonts}
\usepackage{multicol,multirow}
\usepackage{calc}
\usepackage{ifthen}
\usepackage{graphicx}
\usepackage{float}
\usepackage[landscape]{geometry}
\usepackage[colorlinks=true,citecolor=blue,linkcolor=blue]{hyperref}
\ifthenelse{\lengthtest { \paperwidth = 11in}}
{ \geometry{top=.5in,left=.5in,right=.5in,bottom=.5in} }
{\ifthenelse{ \lengthtest{ \paperwidth = 297mm}}
{\geometry{top=1cm,left=1cm,right=1cm,bottom=1cm} }
{\geometry{top=1cm,left=1cm,right=1cm,bottom=1cm} }
}
\pagestyle{empty}
\makeatletter
\renewcommand{\section}{\@startsection{section}{1}{0mm}%
{-1ex plus -.5ex minus -.2ex}%
{0.5ex plus .2ex}%x
{\normalfont\large\bfseries}}
\renewcommand{\subsection}{\@startsection{subsection}{2}{0mm}%
{-1explus -.5ex minus -.2ex}%
{0.5ex plus .2ex}%
{\normalfont\normalsize\bfseries}}
\renewcommand{\subsubsection}{\@startsection{subsubsection}{3}{0mm}%
{-1ex plus -.5ex minus -.2ex}%
{1ex plus .2ex}%
{\normalfont\small\bfseries}}
\makeatother
\setcounter{secnumdepth}{0}
\setlength{\parindent}{0pt}
\setlength{\parskip}{0pt plus 0.5ex}
% -----------------------------------------------------------------------
\title{CheatSheet}
\begin{document}
\raggedright
\footnotesize
\begin{multicols*}{3}
\setlength{\premulticols}{1pt}
\setlength{\postmulticols}{1pt}
\setlength{\multicolsep}{1pt}
\setlength{\columnsep}{2pt}
\section{Probability Review}
\subsection{Identities}
\begin{flalign*}
& P(A \cap B) = P(A)P(B) \text{ [if independent]} \\
& P(A|B) = \frac{P(A \cap B)}{P(B)}
& P(A) = P(A|B)P(B) + P(A|\bar B)P(\bar B) \\
& f(x) = \frac{dF(x)}{dx}, \int_{-\infty}^\infty f(x)dx=1
& P(a < X < b) = \int_a^{b} f(x) dx \\
& F(y) = \int_{-\infty}^y f(x)dx
& P(X \leq x) = F(x) \\
& \mathbb{E}[X^n] = \int_{-\infty}^\infty x^n f(x) dx
& Var(X) = \mathbb{E}[X^2] - \mathbb{E}[X]^2
\end{flalign*}
\subsection{Uniform Distribution: $X \sim uniform(a,b)$}
\begin{flalign*}
& f(x)=
\begin{cases}
\frac{1}{b-a}, & \text{if } a < x < b\\
0, & \text{otherwise}
\end{cases}& \tag{pdf} \\
& F(x)=
\begin{cases}
0, & \text{if } x < a \\
\frac{x-a}{b-a}, & \text{if } a \leq x \leq b\\
1, & \text{if } x > b
\end{cases} \tag{cdf} \\
& \mathbb{E}[X] = \frac{a+b}{2}
& Var[X] = \frac{(b-a)^2}{12} \\
\end{flalign*}
\subsection{Exponential Distribution: $ X \sim exp(\lambda)$}
\begin{flalign*}
& f(x)= \lambda e ^{-\lambda x} \tag{pdf} \\
& F(x)= 1 - e ^{-\lambda x} \tag{cdf} \\
& \mathbb{E}[X] = \frac{1}{\lambda}
& Var[X] = \frac{1}{\lambda^2} \\
& P(X > s+t | X>s) = P(X >t)
& \text{[Memoryless]} \\
\end{flalign*}
\subsection{Poisson Distribution: $ X \sim pois(\lambda)$}
\begin{flalign*}
& P(N(t) = n) = \frac{(\lambda t)^n}{n!} e^{-\lambda t}
& \text{[pmf]} \\
& \mathbb{E}[X] = Var(X) \\
\end{flalign*}
\subsection{Poisson Process}
\begin{flalign*}
& N(0) = 0 \\
& f(x)= \lambda e ^{-\lambda x}
& \text{[Interarrival time density]} \\
& \text{Poisson Arrivals See Time Averages}
& \text{[PASTA]} \\
& P(X > s+t | X>s) = P(X >t)
& \text{[Memoryless]} \\
& \lambda = \sum _{i=1}^{n}\lambda _{i}, Y = \left(\sum _{i=1}^{n}X_{i}\right)\sim \operatorname {pois}(\lambda)
& \text{[Merge Poisson Processes]} \\
& X \sim pois(\lambda), X = [X_1, X_2] \\
& \implies X_{1,2} \sim pois(\frac{\lambda}{2})
& \text{[Split Poisson Processes]}
\end{flalign*}
\columnbreak
\section{Performance Analysis}
\subsection{Identities}
\begin{flalign*}
& A_i(t) \text{ [Number of arrivals]}
& \lambda_i(t) = \frac{A_i(t)}{t} \text{ [Arrival Rate]} \\
& C_i(t) \text{ [Completions] }
& X_i(t) = \frac{C_i(t)}{t} \text{ [Throughput]} \\
& B_i(t) \text{ [Busy time]}
& \rho_i(t) = \frac{B_i(t)}{t} \text{ [Utilization]} \\
& S_i(t) = \frac{B_i(t)}{C_i(t)}
\text{ [Avg process time]}
& S_i(t) = \mathbb{E}[S] \\[4pt]
& D_i \text{ [Processing time of cycle]}
& \mathbb{E}[D_i] = \mathbb{E}[S_i]\mathbb{E}[Vi] \\[6pt]
& V_i(t) \text{ [Visits to device]}
& V_{user} = V_0 = 1 \\[4pt]
& \lim_{t \to \infty} \frac{A_i(t)}{t} = \lim_{t \to \infty} \frac{C_i(t)}{t}
& \lambda_i = X_i \text{ [Steady state]} \\[4pt]
& N(t) = A(t) - C(t)
& \text{ [Number of jobs in system]} \\
& R(t) \approx \int_0^t \frac{A(s) - C(s)}{A(t)}ds
& \text{ [Avg response time]} \\
& \bar N(t) \approx \int_0^t \frac{A(s)-C(s)}{t} ds
& \text{ [Avg number of jobs in system]} \\
& \bar N(t) = \frac{R(t)A(t)}{t} \\
& Z
& \text{ [Think time] } \\
& \mathbb{E}[N] = N, \lambda = X, R = R + Z
& \text{[Closed System]}
\end{flalign*}
\subsection{Operation Laws}
\begin{flalign*}
& \mathbb{E}[N] = \lambda \mathbb{E}[R]
& \text{[Little's Law]} \\
& \rho_i = \mathbb{E}[S_i] X_i = \frac{\lambda_i}{\mu_i}
& \text{[Utilization Law]} \\
& \rho_i = \mathbb{E}[S_i]\mathbb{E}[Vi]X = \mathbb{E}[D_i] X
& \text{[Bottleneck Law]} \\[4pt]
& X_i = \mathbb{E}[V_i]X
& \text{[Forced Flow Law]}\\
& \mathbb{E}[R] = \frac{N}{X} - \mathbb{E}[Z]
& \text{[Closed System Response Time Law] } \\
\end{flalign*}
\subsection{Bottleneck Analysis}
\begin{flalign*}
& D_{max} \text{ [Bottleneck Device]}
& D = \sum D_i \\
& \mathbb{E}[R] \geq D
& X = \frac{\rho_{max}}{D_{max}} \\
& \mathbb{E}[R] \geq max(D, ND_{max} - \mathbb{E}[Z])
& X \leq min(\frac{1}{D_{max}}, \frac{N}{D+\mathbb{E}[Z]}) \\
& N^* = \frac{D + \mathbb{E}[Z]}{D_{max}}
& \implies \text{optimal } X \text{ and } \mathbb{E}[R] \\
\end{flalign*}
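Worked example (illustrative numbers, not from the original notes): $D_1=2$, $D_2=3$, $D_3=1$, $\mathbb{E}[Z]=4$ (seconds) give $D=6$, $D_{max}=3$, $N^* \approx 3.3$; with $N=10$ jobs, $X \leq \min(\tfrac{1}{3}, 1) = \tfrac{1}{3}$ jobs/s and $\mathbb{E}[R] \geq \max(6, 30-4) = 26$\,s.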
\columnbreak
\section{Queuing Models}
(Arrivals / Service Times / Number of servers / Room in queue)
\subsection{M/M/1}
\begin{flalign*}
& \rho = \lambda/\mu
& \mu > \lambda \text{ [Stability condition]}\\
& \pi_0 = 1 - \frac{\lambda}{\mu} = 1 - \rho
& \pi_i = \pi_0 (\frac{\lambda}{\mu})^i = (1-\rho)\rho^i \\
& \mathbb{E}[N] = \frac{\lambda}{\mu - \lambda} = \frac{\rho}{1-\rho}
& \mathbb{E}[N_Q] = \mathbb{E}[N] - \rho \\
& \mathbb{E}[R] = \frac{1}{\mu - \lambda}
& \mathbb{E}[R_Q] = \frac{1}{\mu - \lambda} - \frac{1}{\mu} \\
\end{flalign*}
\begin{figure}[H]
\vspace{-1cm}
\centering
\includegraphics[scale=0.35]{MM1-queue.png}
\end{figure}
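Quick worked example (illustrative numbers): with $\lambda = 8$ and $\mu = 10$ jobs/s, $\rho = 0.8$, $\mathbb{E}[N] = \frac{0.8}{0.2} = 4$ and $\mathbb{E}[R] = \frac{1}{10-8} = 0.5$\,s, consistent with Little's Law ($4 = 8 \cdot 0.5$).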
\subsection{M/M/c}
\begin{flalign*}
& \rho = \frac{\lambda}{c\mu}
& c\mu > \lambda \text{ [Stability condition]} \\
& \pi_0 = \left[ \sum_{k=0}^{c-1} \frac{(\lambda/\mu)^k}{k!} + \frac{(\lambda/\mu)^c}{c!}\frac{1}{1-\rho} \right]^{-1}
& \pi_i =
\begin{cases}
\frac{\lambda^i}{i!\mu^i} \pi_0, & \text{if } i < c \\
\frac{\lambda^i}{c!\mu^ic^{i-c}} \pi_0, & \text{if } i \geq c
\end{cases}& \\
& \mathbb{E}[N] = \lambda \mathbb{E}[R]
& \mathbb{E}[N_Q] = \lambda \mathbb{E}[R_Q] \\
& \mathbb{E}[R] = \mathbb{E}[R_Q] + \mathbb{E}[S] = \mathbb{E}[R_Q] + \frac{1}{\mu} \\
& \mathbb{E}[R_Q] = \frac{(\frac{\lambda}{\mu})^c \mu}{(c-1)! (c\mu - \lambda)^2} \\
& P(\text{job is queued}) = \sum_{i=c}^\infty \pi_i = \frac{1}{c!}(\frac{\lambda}{\mu})^c \frac{1}{1-\rho} \pi_0
& \text{[Erlang C Formula]}
\end{flalign*}
\begin{figure}[H]
\vspace{-0.25cm}
\centering
\includegraphics[scale=0.35]{MMC-queue.png}
\end{figure}
\vspace{-0.25cm}
\subsection{M/M/$\infty$}
\begin{flalign*}
& \rho = \lambda/\mu
& \text{[Always stable]}\\
& \pi_0 = e^{-\frac{\lambda}{\mu}} = e^{-\rho}
& \pi_i = \frac{(\lambda/\mu)^i}{i!} e^{-\frac{\lambda}{\mu}} = \frac{\rho^i}{i!} e^{-\rho} \\
& \mathbb{E}[N] = \frac{\lambda}{\mu} = \rho
& \mathbb{E}[N_Q] = 0 \\
& \mathbb{E}[R] = \frac{1}{\mu} = \mathbb{E}[S]
& \mathbb{E}[R_Q] = 0 \\
\end{flalign*}
\begin{figure}[H]
\vspace{-1cm}
\centering
\includegraphics[scale=0.4]{MMinfinity-queue.png}
\end{figure}
\subsection{Birth-Death Process}
CTMC where state transitions increase or decrease by a constant factor.
\[
\pi_0 = \frac{1}{1 + \sum_{k=1}^{\infty} \prod_{i=1}^k \frac {\lambda _{i-1}}{\mu_i} }
\]
\[
\pi_i = \frac{\prod_{j=0}^{i-1} \lambda_j}{ \prod_{j=1}^{i} \mu_j}\pi_0
\]
\subsection{Threshold System}
$T>0$, arrival rate $s$, processing rate $r$. If $r > s$, $N \to 0$. If $s > r$, $N \to \infty$.
\[
\pi_0 = \frac{1}{1 - \frac{r}{s}} (\frac{s}{r})^T-1
\]
\[
\pi_i =
\begin{cases}
(\frac{s}{r})^i \pi_0, & \text{if } i < T \\
(\frac{s}{r})^{i-T} (\frac{r}{s})^2 \pi_0, & \text{if } i \geq T
\end{cases}
\]
\subsection{Jackson Networks}
\begin{enumerate}
\item
External arrivals form a Poisson process
\item
All service times are exponentially distributed and the service discipline at all queues is first-come, first-served
\item
internal routing of jobs between servers is probabilistic
\item
The utilization of all of the queues is less than one
\end{enumerate}
Solved via markov model
\begin{enumerate}
\item \textbf{Markov Chain:} We may solve the corresponding Discrete Time Markov Chain to find its steady state distribution, $\mathbb{E}[N]$, and $\mathbb{E}[R]$. If there are $N$ jobs and $k$ nodes, we will have a lower bound of $\Omega(\binom{N+k-1}{k-1}^2)$ when solving the system of equations.
\item \textbf{Product form:} Using a temporary value for each node's arrival rate, $\bar \lambda$, determine the ratios between the balance equations and then recover the real values using the actual arrival rate, $\lambda$, finding the steady-state distribution, $\mathbb{E}[N]$, and $\mathbb{E}[R]$. Still suffers from a combinatorial explosion in complexity with a lower bound of $\Omega(\binom{N+k-1}{k-1})$.
\item \textbf{Mean Value Analysis:} Uses the Arrival Theorem in a recursive algorithm to analyse specific nodes when there are $N$ jobs in the system. We only have access to expectations and utilization of specific nodes, i.e. $\mathbb{E}[R_i]$, $\mathbb{E}[N_i]$, $\rho_i$, but the method is more performant, with an upper bound of $\mathcal{O}(Nk)$ (see the sketch below).
\end{enumerate}
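A minimal sketch of the MVA recursion mentioned above (illustrative Python; it assumes load-independent single-server nodes with total demands $D_i$ and a single think time $Z$):
\begin{verbatim}
def mva(D, Z, N):
    # D[i]: demand at node i, Z: think time
    k = len(D)
    nbar = [0.0] * k   # E[N_i] with 0 jobs
    for n in range(1, N + 1):
        # arrival theorem: an arriving job
        # sees the network with n-1 jobs
        R = [D[i] * (1 + nbar[i])
             for i in range(k)]
        X = n / (Z + sum(R))   # throughput
        nbar = [X * r for r in R]  # Little
    return X, R, nbar

# e.g. mva([2.0, 3.0, 1.0], Z=4.0, N=10)
\end{verbatim}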
\subsection{M/G/1}
\begin{itemize}
\item Markovian (modulated by a Poisson process), service times have a General distribution and there is a single server
\item $\mathbb{E}[S] = \frac{1}{\mu}$
\item high variance in service distribution $\implies$ high response time
\item Has equal $\mathbb{E}[N]$ for all blind non-pre-emptive service policies
\end{itemize}
\subsection{Pollaczek–Khinchine formula}
\[
\mathbb{E}[N] = \rho + \frac{\rho^2 + \lambda^2 \sigma_s^2}{2(1-\rho)}
\]
\section{Service Policies}
Blind and non-blind policy relates to knowledge of job size on arrival. If service times that jobs require are known, then the optimal scheduling policy is shortest remaining processing time (SRPT).
\begin{itemize}
\item first-come, first-served (FCFS)
\item processor sharing (PS) where all jobs in the queue share the service capacity between them equally
\item last-come, first served (LCFS) with/without preemption where a job in service may or may not be interrupted with work being conserved
\item generalized foreground-background (FB) scheduling also known as least-attained-service where the jobs which have received least processing time so far are served first and jobs which have received equal service time share service capacity using processor sharing
\item shortest job first (SJF) with/without preemption, where the job with the smallest size receives service
\item shortest remaining processing time (SRPT) where the next job to serve is that with the smallest remaining processing requirement
\end{itemize}
\section{Failure/Hazard Rate}
\begin{itemize}
\item Increasing Failure Rate (IFR): h(t) is non-decreasing in t, the expected remaining work is decreasing, non pre-emptive is preferable.
\item Decreasing Failure Rate (DFR): h(t) is non-increasing in t, the expected remaining work is increasing, pre-emptive policy is preferable.
\end{itemize}
\begin{flalign*}
& \quad \quad h(t) = \frac{f(t)}{1 - F(t)}
& \mathbb{E}[\text{Remaining time}] = \frac{1}{h(t)} \qquad \\
& \quad \quad X \sim uniform(a,b) \text{ (IFR)} \implies
& h(t) = \frac{1}{b-t} \qquad \\
& \quad \quad X \sim exp(\lambda) \text{ (IFR and DFR)} \implies
& h(t) = \frac{\lambda e^{-\lambda t}}{1-(1-e^{-\lambda t})} \qquad
\end{flalign*}
\[
\text{Time average Excess} = \mathbb{E}[S_c] = \frac{\mathbb{E}[S^2]}{2\mathbb{E}[S]}
\]
\[
\mathbb{E}[R_Q] = \frac{\rho}{1-\rho} \mathbb{E}[S_c]
\]
\section{Pareto Distribution}
\begin{itemize}
\item popular DFR, "80-20 rule", pre-emptive policy is preferable
\item 50\% of the load on the system comes from 1\% of the jobs
\item $\alpha$ shape parameter; for $\alpha = 1$, $P(X>2t \mid X>t) = \frac{1}{2}$
\item $0 < \alpha < 1$: $Var(X) = \infty$, $\mathbb{E}[X] = \infty$
\item Survival Function:
\end{itemize}
\[
{\displaystyle {\overline {F}}(x)=\Pr(X>x)={\begin{cases}\left({\frac {x_{\mathrm {m} }}{x}}\right)^{\alpha }&x\geq x_{\mathrm {m} },\\1&x<x_{\mathrm {m} },\end{cases}}}
\]
\section{Misc}
\begin{itemize}
\item
$\sum_{i=0}^\infty \alpha^i = \frac{1}{1-\alpha}, |\alpha| < 1 $.
\item
$h = \frac{f}{g} \implies h' = \frac{f'g -fg'}{g^2}$
\item
Max system utilization $\implies$ only bottleneck utilization is 100\%
\item
Want to minimize $\mathbb{E}[R]$ and maximize $X$.
\item
Operation Laws work regardless of distributions of random variables
\item
exponential distributions are a very good assumption for modeling arrivals, but only moderately good for modelling processing times
\end{itemize}
\end{multicols*}
\end{document}
| {
"alphanum_fraction": 0.5843807372,
"avg_line_length": 38.693989071,
"ext": "tex",
"hexsha": "a226e5399d3de71eb9c9257f89775195a1a32650",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4390a2da344ec00a3f651f464c79b7e097cbabe6",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "lukepereira/latex-ci",
"max_forks_repo_path": "documents/2020-performance-analysis/main.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "4390a2da344ec00a3f651f464c79b7e097cbabe6",
"max_issues_repo_issues_event_max_datetime": "2020-07-13T02:09:19.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-07-13T01:21:22.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "lukepereira/latex-ci",
"max_issues_repo_path": "documents/2020-performance-analysis/main.tex",
"max_line_length": 416,
"max_stars_count": 7,
"max_stars_repo_head_hexsha": "4390a2da344ec00a3f651f464c79b7e097cbabe6",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "lukepereira/latex-ci",
"max_stars_repo_path": "documents/2020-performance-analysis/main.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-25T21:30:32.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-09-04T20:32:18.000Z",
"num_tokens": 5117,
"size": 14162
} |
\section{Solving the encoding problem}
\label{introduction:encodingSolution}
I have painted an admittedly bleak picture of the difficulties in studying
cell signaling. It should be clear from this discussion that experimental
design in cell signaling is quite difficult, and interpretation of experiments
must be done with extreme care. But can we remove, or at least reduce,
some of the difficulties discussed above?
\subsubsection{Obtaining meaningful $S$ and $R$}
Perhaps the most problematic of the issues discussed is that
of not knowing which signals $\vec{S}$ a cell is interested in
nor which responses $\vec{R}$ encode those signals. One can begin to address
this by considering different variations of the inputs (e.g. changing
concentrations or treatment durations) and outputs
(e.g. phosphorylation states, live-cell measurements of the same
cell over time, fold-change in concentration, total change in concentration).
Further discussion of different output measurements, in the particular
context of single-cell fluorescence microscopy, can
be found in \ar{imaging:introduction}.
Assuming that one could identify a series of potential input signals
and response metrics, it is not obvious how to go about identifying
which are the more accurate estimates of $\vec{S}$ and $\vec{R}$.
An interesting
approach would be to perform measurements of information content
between putative combinations of $S$ and $R$, for example using the
mutual information metric \cite{Cheong2011}.
Under the assumption
that the cell is encoding signals in the most informative way
possible, the $R$ and $S$ choices that maximizes mutual information can then
be considered to be the best approximation
of the encoding that the cell uses. This would require either high
accuracy in measurement or precise knowledge of measurement error.
In \ar{insulation:introduction} I test multiple reagents and readouts
to ensure that they have similar information content, and I discuss
this approach in more detail in
\ar{imaging:introduction}.
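For concreteness, the mutual information between a candidate signal $S$ and response $R$ is
\[
I(S;R) = \sum_{s,r} p(s,r)\,\log_2\!\frac{p(s,r)}{p(s)\,p(r)},
\]
where the joint distribution $p(s,r)$ would be estimated from the single-cell measurements; the $(S,R)$ pairing with the largest $I(S;R)$ is then taken as the best available approximation of the cell's encoding. (This is the standard definition of mutual information, reproduced here only for reference.)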
In an example of this approach, research groups measured the
information content of transcription factor gradients across
\fly\ embryos, with the question of whether enough information
was present to specify the location of all nuclei along the embryo.
While each transcription factor had low positional information content when taken alone,
in combination the factors did have enough information to specify each nucleus
position with high accuracy \cite{Gregor2007,Dubuis2011,Dubuis2013}.
As with the example of human speech above,
it is important to remember that low information content of a
single $S$ does not
necessarily mean that it is an \i{incorrect}
encoding. It may alternatively signify an \i{incomplete} encoding,
and that that other unmeasured signal properties need to also be considered.
Importantly, incomplete encodings can be good enough for many
experimental biology questions.
\subsubsection{Compensating for cellular variability}
To address issues stemming from cellular variability, the
straightforward solution is to directly measure its effects on the
experimental relationship between the chosen $S$ and $R$. Measurements
of $R$ can be performed on a single-cell basis, for example by
microscopy (as in this dissertation) or by flow sorting.
The distributions of single-cell values can then be checked for properties,
such as multi-modality, that would suggest the presence of multiple
cellular states. (In my work (\ar{insulation:introduction}),
I verified that each measurement generated unimodal single-cell distributions.)
In the case that different cellular states do exist, each cell
can be grouped into statistically distinct subpopulations by measured
phenotype \cite{Loo2010,Singh2010}. Each subpopulation can then be
tested separately to see if each has the same $f(S)=R$ relationship.
For example, in \ar{insulation:introduction} I test how cell cycle phase
affects measurements of inter-pathway crosstalk. Further, if live-cell markers
are able to differentiate subpopulations, then cells can be physically
sorted and experimented on separately.
Importantly, the absence of subpopulations
along the dimension of the measured response $R$ does not imply that
all cells are the same. Rather, it implies that we cannot claim that they
are different. Conversely, the presence of subpopulations in one
dimension does not imply existence of the same subpopulations with respect
to other dimensions (see \ar{fig:introduction:subpops}). In other
words, the presence or absence of cellular subpopulations as measured
by one readout is insufficient evidence to make a claim about whether $R$
is being distorted by the presence of subpopulations. Such a connection
must be explicitly tested.
\begin{figure}[!bt]
\centering
\includegraphics[width=2.5in]{FIGS/introduction/subpops.pdf}
{\singlespacing
\caption[Cellular subpopulations are phenotype-dependent]
{ The presence of subpopulations in one measurement dimension
does not imply subpopulations in another dimension;
subpopulations are a phenotype-dependent property.
(\b{a}) Cells show bi-modality in their total DNA content
(as measured by total Hoechst fluorescence), but
(\b{b}) not in the coefficient of variation in DNA content.
These measurements are addressed fully in \ar{imaging:introduction}.}
\label{fig:introduction:subpops}}
\end{figure}
\subsubsection{Determining context-dependency}
Finally, how can we deal with the issue of context-dependency?
First, it is important to verify that the context-dependency truly exists.
As I discuss in \ar{pathways:introduction} and implied in this chapter,
context-dependency of biological phenomena is often inferred by
the fact that different labs produce different results when asking
the same questions. However, interpretation of cell signaling results
are incredibly complicated, which may
simply mean that the labs were not, in fact, asking the same questions.
Because cells
may be encoding information differently than we expect, we should
take care when comparing interpretations of results obtained by different
experimental methods, as each method will approximate $\vec{S}$ and $\vec{R}$
differently.
However, some (perhaps much) of context-dependency is undoubtedly
real, and can be absorbed into the simple model of cells as functions.
While we could allow the function to vary from context to context,
it is more useful to say that $f$ does not change but that
subsets of the inputs $\vec{S}$ and outputs $\vec{R}$ can vary.
To get around this parameter variation, we can first make sure that
all controllable conditions are kept the same and that all measurements
are the same from experiment to experiment. Thus, the experimentally-defined
subset of values in $\vec{S}$ and $\vec{R}$ do not change. Experiments can
then be repeated identically across multiple cell types, so that the only
varying parameters are those inherent to cell type differences.
Any consistent aspects of the relationship between experimental
$S$ and $R$ across diverse cell types can then be used to infer the
general properties of $f$. Indeed, this approach is common in cell biology,
as it is widely believed that any given cell line may have a myriad of
idiosyncratic properties.
When using such a multiple cell-type approach, one may find a case
where context-dependency is so dramatic that no
general properties of $f$ can be uncovered. The first aspect
of this problem to tackle would be to carefully ask if the cell types
are truly being treated ``identically.'' As suggested above,
experiments typically use an absolute set of conditions across cell types
(e.g. identical ligand or drug concentrations). But it may be the case that two cell
types simply vary in sensitivity to the conditions, such that one type
is effectively receiving a half-maximal dose while the other is saturated.
Because we do not know which property $R$ encodes the treatment condition,
it is also difficult to know if we are measuring an ``identical'' readout.
Perhaps some of the apparent context-dependency of signaling is due to
incorrectly interpreting what it means to treat different cell types
identically.
Instead of relying on constant treatment concentrations
derived from the literature and applying such conditions generally
across experiments, another approach would be to measure dose-response and time-response
curves for all cell types that are under experimentation. Conditions could then be
calibrated on a cell type-specific basis so that, for example, all cell
lines receive a half-maximal input concentration.
\subsubsection{The encoding problem is unsolved}
Part of the intention of this chapter was to make it clear that
cellular signaling is an incredibly difficult phenomenon to understand,
and that experimental designs are making many
assumptions that are either going unnoticed or are not
being made explicit. Some of these assumptions, if made explicit, might
dramatically affect how we interpret our experimental results.
There is no general solution to the encoding problem but,
as I have outlined here, steps can be taken to minimize its effects. Perhaps more
importantly, an awareness of the assumptions allows for them to be tested
in some cases or, at minimum, allows for results to be interpreted cautiously
in the light of those assumptions.
| {
"alphanum_fraction": 0.8022431489,
"avg_line_length": 48.7164948454,
"ext": "tex",
"hexsha": "e5d26fcb82e9256332f5658abad3231f69dbac48",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4ddf35426f6ce2d83d7193dd94bd36f21b68c27d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adam-coster/dissertation",
"max_forks_repo_path": "TEXT/introduction/encodingSolution.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4ddf35426f6ce2d83d7193dd94bd36f21b68c27d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adam-coster/dissertation",
"max_issues_repo_path": "TEXT/introduction/encodingSolution.tex",
"max_line_length": 88,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4ddf35426f6ce2d83d7193dd94bd36f21b68c27d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adam-coster/dissertation",
"max_stars_repo_path": "TEXT/introduction/encodingSolution.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1992,
"size": 9451
} |
\chapter{Introduction}
This document describes the RISC-V privileged architecture, which
covers all aspects of RISC-V systems beyond the unprivileged ISA,
including privileged instructions as well as additional functionality
required for running operating systems and attaching external devices.
\begin{commentary}
Commentary on our design decisions is formatted as in this paragraph,
and can be skipped if the reader is only interested in the
specification itself.
\end{commentary}
\begin{commentary}
We briefly note that the entire privileged-level design described in
this document could be replaced with an entirely different
privileged-level design without changing the unprivileged ISA, and
possibly without even changing the ABI. In particular, this
privileged specification was designed to run existing popular
operating systems, and so embodies the conventional level-based
protection model. Alternate privileged specifications could embody
other more flexible protection-domain models. For simplicity of
expression, the text is written as if this was the only possible
privileged architecture.
\end{commentary}
\section{RISC-V Privileged Software Stack Terminology}
This section describes the terminology we use to describe components
of the wide range of possible privileged software stacks for RISC-V.
Figure~\ref{fig:privimps} shows some of the possible software stacks
that can be supported by the RISC-V architecture. The left-hand side
shows a simple system that supports only a single application running
on an application execution environment (AEE). The application is
coded to run with a particular application binary interface (ABI).
The ABI includes the supported user-level ISA plus a set of ABI calls to
interact with the AEE. The ABI hides details of the AEE from the
application to allow greater flexibility in implementing the AEE. The
same ABI could be implemented natively on multiple different host OSs,
or could be supported by a user-mode emulation environment running on
a machine with a different native ISA.
\begin{figure}[th]
\centering
\includegraphics[width=\textwidth]{figs/privimps.pdf}
\caption{Different implementation stacks supporting various forms of
privileged execution.}
\label{fig:privimps}
\end{figure}
\begin{commentary}
Our graphical convention represents abstract interfaces using black
boxes with white text, to separate them from concrete instances of
components implementing the interfaces.
\end{commentary}
The middle configuration shows a conventional operating system (OS)
that can support multiprogrammed execution of multiple
applications. Each application communicates over an ABI with the OS,
which provides the AEE. Just as applications interface with an AEE
via an ABI, RISC-V operating systems interface with a supervisor
execution environment (SEE) via a supervisor binary interface (SBI).
An SBI comprises the user-level and supervisor-level ISA together with
a set of SBI function calls. Using a single SBI across all SEE
implementations allows a single OS binary image to run on any SEE.
The SEE can be a simple boot loader and BIOS-style IO system in a
low-end hardware platform, or a hypervisor-provided virtual machine in
a high-end server, or a thin translation layer over a host operating
system in an architecture simulation environment.
\begin{commentary}
Most supervisor-level ISA definitions do not separate the SBI from the
execution environment and/or the hardware platform, complicating
virtualization and bring-up of new hardware platforms.
\end{commentary}
The rightmost configuration shows a virtual machine monitor
configuration where multiple multiprogrammed OSs are supported by a
single hypervisor. Each OS communicates via an SBI with the
hypervisor, which provides the SEE. The hypervisor communicates with
the hypervisor execution environment (HEE) using a hypervisor binary
interface (HBI), to isolate the hypervisor from details of the
hardware platform.
\begin{commentary}
The ABI, SBI, and HBI are still a work-in-progress, but we are now
prioritizing support for Type-2 hypervisors where the SBI is provided
recursively by an S-mode OS.
\end{commentary}
Hardware implementations of the RISC-V ISA will generally require
additional features beyond the privileged ISA to support the various
execution environments (AEE, SEE, or HEE).
\section{Privilege Levels}
At any time, a RISC-V hardware thread ({\em hart}) is running at some
privilege level encoded as a mode in one or more CSRs (control and
status registers). Three RISC-V privilege levels are currently defined
as shown in Table~\ref{privlevels}.
\begin{table*}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Level & Encoding & Name & Abbreviation \\ \hline
0 & \tt 00 & User/Application & U \\
1 & \tt 01 & Supervisor & S \\
2 & \tt 10 & {\em Reserved} & \\
3 & \tt 11 & Machine & M \\
\hline
\end{tabular}
\end{center}
\caption{RISC-V privilege levels.}
\label{privlevels}
\end{table*}
Privilege levels are used to provide protection between different
components of the software stack, and attempts to perform operations
not permitted by the current privilege mode will cause an exception to
be raised. These exceptions will normally cause traps into an
underlying execution environment.
\begin{commentary}
In the description, we try to separate the privilege level for which
code is written, from the privilege mode in which it runs, although
the two are often tied. For example, a supervisor-level operating
system can run in supervisor-mode on a system with three privilege
modes, but can also run in user-mode under a classic virtual machine
monitor on systems with two or more privilege modes. In both cases,
the same supervisor-level operating system binary code can be used,
coded to a supervisor-level SBI and hence expecting to be able to use
supervisor-level privileged instructions and CSRs. When running a
guest OS in user mode, all supervisor-level actions will be trapped
and emulated by the SEE running in the higher-privilege level.
\end{commentary}
The machine level has the highest privileges and is the only mandatory
privilege level for a RISC-V hardware platform. Code run in
machine-mode (M-mode) is usually inherently trusted, as it has
low-level access to the machine implementation. M-mode can be used to
manage secure execution environments on RISC-V. User-mode (U-mode)
and supervisor-mode (S-mode) are intended for conventional application
and operating system usage respectively.
Each privilege level has a core set of privileged ISA extensions with optional
extensions and variants. For example, machine-mode supports an optional
standard extension for memory protection. Also, supervisor mode can be
extended to support Type-2 hypervisor execution as described in
Chapter~\ref{hypervisor}.
Implementations might provide anywhere from 1 to 3 privilege modes
trading off reduced isolation for lower implementation cost, as shown
in Table~\ref{privcombs}.
\begin{table*}[h!]
\begin{center}
\begin{tabular}{|c|l|l|}
\hline
Number of levels & Supported Modes & Intended Usage \\ \hline
1 & M & Simple embedded systems \\
2 & M, U & Secure embedded systems \\
3 & M, S, U & Systems running Unix-like operating systems\\
\hline
\end{tabular}
\end{center}
\caption{Supported combinations of privilege modes.}
\label{privcombs}
\end{table*}
All hardware implementations must provide M-mode, as this is the only
mode that has unfettered access to the whole machine. The simplest
RISC-V implementations may provide only M-mode, though this will
provide no protection against incorrect or malicious application code.
\begin{commentary}
The lock feature of the optional PMP facility can provide some
limited protection even with only M-mode implemented.
\end{commentary}
Many RISC-V implementations will also support at least user mode
(U-mode) to protect the rest of the system from application code.
Supervisor mode (S-mode) can be added to provide isolation between a
supervisor-level operating system and the SEE.
A hart normally runs application code in U-mode until some trap (e.g.,
a supervisor call or a timer interrupt) forces a switch to a trap
handler, which usually runs in a more privileged mode. The hart will
then execute the trap handler, which will eventually resume execution
at or after the original trapped instruction in U-mode. Traps that
increase privilege level are termed {\em vertical} traps, while traps
that remain at the same privilege level are termed {\em horizontal}
traps. The RISC-V privileged architecture provides flexible routing
of traps to different privilege layers.
\begin{commentary}
Horizontal traps can be implemented as vertical traps that
return control to a horizontal trap handler in the less-privileged mode.
\end{commentary}
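As a purely illustrative sketch (not part of this specification), the vertical trap flow described above might look as follows on a hart whose traps are taken in M-mode; the register and CSR usage here is only an example.
\begin{verbatim}
    # U-mode code requests a service from its execution environment
    ecall                  # raises an environment-call exception

    # M-mode handler, reached via the address configured in mtvec
trap_handler:
    csrr  t0, mcause       # identify the cause of the trap
    # ... service or emulate the request ...
    mret                   # resume the interrupted U-mode context
\end{verbatim}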
\section{Debug Mode}
Implementations may also include a debug mode to support off-chip
debugging and/or manufacturing test. Debug mode (D-mode) can be
considered an additional privilege mode, with even more access than
M-mode. The separate debug specification proposal describes operation
of a RISC-V hart in debug mode. Debug mode reserves a few CSR
addresses that are only accessible in D-mode, and may also reserve
some portions of the physical address space on a platform.
| {
"alphanum_fraction": 0.7899978809,
"avg_line_length": 44.7298578199,
"ext": "tex",
"hexsha": "f909ea65e3d6e288b2b3b5ede14497fe59a335fd",
"lang": "TeX",
"max_forks_count": 457,
"max_forks_repo_forks_event_max_datetime": "2022-03-27T18:09:43.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-03-01T03:53:07.000Z",
"max_forks_repo_head_hexsha": "b9a642c963f2ee222ce96178f93135dcbcfa71cd",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "superhaocheng/riscv-isa-manual",
"max_forks_repo_path": "src/priv-intro.tex",
"max_issues_count": 702,
"max_issues_repo_head_hexsha": "b9a642c963f2ee222ce96178f93135dcbcfa71cd",
"max_issues_repo_issues_event_max_datetime": "2022-03-29T02:14:18.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-02-07T18:29:00.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "superhaocheng/riscv-isa-manual",
"max_issues_repo_path": "src/priv-intro.tex",
"max_line_length": 78,
"max_stars_count": 1991,
"max_stars_repo_head_hexsha": "b9a642c963f2ee222ce96178f93135dcbcfa71cd",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "superhaocheng/riscv-isa-manual",
"max_stars_repo_path": "src/priv-intro.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-31T13:05:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-02-02T05:09:24.000Z",
"num_tokens": 2096,
"size": 9438
} |
\documentclass{beamer}
% to typeset the presentation as a handout uncomment:
%\documentclass{article}
%\usepackage{beamerarticle}
\usepackage{graphicx,hyperref,url}
\usepackage{color,colortbl}
\usepackage{hyperref}
\newcommand{\myhref}[2]{{\color{blue}\href{#1}{#2}}}
\usecolortheme{beaver}
\usetheme{Goettingen}
\beamertemplatenavigationsymbolsempty
\usefonttheme[onlymath]{serif}
\makeatletter
\setbeamertemplate{sidebar canvas right}%
[vertical shading][top=red,bottom=gray]
\setbeamertemplate{footline}{\hfill\insertframenumber/\inserttotalframenumber}
\makeatother
\title{Remote and Real-Time Monitoring of the Urban Noise Environment}
\author[Richert \& Leung]{Dean Richert \inst{1} \\ \scriptsize{with H. Leung \inst{1}, N. Xie \inst{2}, C. Adderley \inst{2}, and K. Hussein \inst{2}}}
\institute[University of Calgary]
{
\inst{1} Department of Electrical and Computer Engineering\\
Schulich School of Engineering\\University of Calgary \\ \vspace{5mm}
\inst{2} Information Technology, City of Calgary
}
\logo{%
\includegraphics[width=2cm,height=2cm,keepaspectratio]{figures/uc_logo.jpg} \hspace*{1cm} {\color{black} December 7, 2017} \hspace*{1cm} \includegraphics[width=2cm,height=2cm,keepaspectratio]{figures/schulich.png}
}
\date{\scalebox{1}{\insertlogo}}
%\AtBeginSection[]
%{
% \begin{frame}<beamer>{Outline}
% \tableofcontents[currentsection,currentsubsection]
% \end{frame}
%}
\begin{document}
\begin{frame}
\titlepage
\end{frame}
\begin{frame}{Outline}
\tableofcontents
\end{frame}
\section{Introduction}
\begin{frame}{Introduction: motivation}
\begin{itemize}
\item Unwanted noise in urban environments has negative health effects \begin{itemize}
\item loss of sleep, disruption to relaxation and social gatherings, hearing loss, high blood pressure, and more
\end{itemize}
\item City noise codes aim to reduce noise pollution, but violations of the code are difficult to catch
\item Continuous monitoring of noise is difficult. Many noise assessments are complaint driven
\item Noise data contains information about the happenings within a city
\begin{itemize}
\item traffic noise, construction noise, persons in distress, car accidents, etc.
\end{itemize}
\item Acoustic monitoring promises to be a good application for the CoC's recently installed low-power wide-area network.
\end{itemize}
\end{frame}
\begin{frame}{Introduction: case study}
{\bf Sounds of New York City (SONYC) -} Objectives
\vfill
``to create technological solutions for
\begin{itemize}
\item the systematic, constant monitoring of noise pollution at the city scale
\item the accurate description of acoustic environments in terms of its composing sources
\item broadening citizen participation in noise reporting and mitigation
\item enabling city agencies to take effective, information-driven action for noise mitigation."
\end{itemize}
\end{frame}
\begin{frame}{Introduction: case study}
{\bf Sounds of New York City (SONYC) - } Overview
\vfill
\begin{center}
\begin{figure}
\includegraphics[scale=0.4]{figures/sonyc.png}
\caption{SONYC project overview (image taken from the project \myhref{https://wp.nyu.edu/sonyc}{website})}
\end{figure}
\end{center}
\end{frame}
\begin{frame}{Introduction: case study}
{\bf Sounds of New York City (SONYC) - } Prototype
\vfill
\begin{center}
\begin{figure}
\includegraphics[scale=0.5]{figures/sonyc_prototype}
\caption{SONYC sensor unit prototype (image taken from the project \myhref{https://wp.nyu.edu/sonyc}{website})}
\end{figure}
\end{center}
\end{frame}
\section{Technology background}
\begin{frame}{Technology background}
{\bf LoRaWAN -} a type of wireless telecommunication that:
\begin{itemize}
\item allows long range communication
\item has limited bit rate (amount of information that can be transmitted per packet)
\item transmitting and receiving data drains very little current from power source
\end{itemize}
\begin{center}
\begin{figure}
\includegraphics[scale=0.5]{figures/lorawan.png}
\caption{A typical LoRaWAN based application}
\end{figure}
\end{center}
\end{frame}
\begin{frame}{Technology background}
{\bf As a consequence of the LoRaWAN properties -}
\vfill
Sensor node features:
\begin{itemize}
\item battery operated
\item easy to deploy
\item low maintenance
\item pervasive deployment
\item low cost
\item discrete
\item secure
\end{itemize}
\vfill
Application features:
\begin{itemize}
\item low data throughput
\item delay tolerant
\item periodic sensing and/or event based notifications
\end{itemize}
\end{frame}
\section{Project goals}
\begin{frame}{Project goals}
{\bf Business goals -} (i) investigate how acoustic sensing can be used to improve the quality of life for Calgarians, (ii) create a platform to test other LoRaWAN-based applications
\vfill
{\bf Research goals -} investigate the limits of LoRaWAN-based applications by incorporating \alert{in-network processing}
\vfill
\begin{center}
\begin{figure}
\includegraphics[scale=0.31]{figures/figure_1.png}
\end{figure}
\end{center}
\end{frame}
\section{Current project status}
\begin{frame}{Current project status}
We expect a working prototype by the end of the year.
\vfill
{\bf Microphone specs:} tolerance limits of $\pm 1$dB, noise floor of 30dB (\alert{exceeds} the specifications of a Class 2 sound level meter)
\vfill
{\bf Microcontroller:} best possible computing power for the price and power consumption
\vfill
{\bf Algorithm:} Spectral decomposition. Future implementation: sound source classification.
\vfill
{\bf Battery life:} 2 months with 2 D-cell Lithium Ion batteries
\vfill
{\bf Physical dimensions:} 8x8x4cm (enclosure only)
\vfill
{\bf Price per unit:} \$150
\end{frame}
\begin{frame}{Current project status}
\begin{itemize}
\item Gateways and network server provided by Tektelic
\item Application server is developed
\end{itemize}
\vfill
\begin{center}
\begin{figure}
\centering
\includegraphics[scale=0.25]{figures/gui.PNG}
\end{figure}
\end{center}
\end{frame}
\section{Acoustic sensing basics}
\begin{frame}{Acoustic sensing basics}
\begin{center}
\begin{figure}
\includegraphics[scale=0.25]{figures/sound_pressure_wave.png}
\end{figure}
\end{center}
\begin{itemize}
\item Sound is an oscillating pressure wave - a microphone is basically a pressure sensor
\item A decibel reading (dB) is a measure of the power of the sound pressure wave in a window of time
\item dBA is another common noise level unit that models the perceived loudness of a sound source by humans
\end{itemize}
\end{frame}
\begin{frame}{Acoustic sensing basics}
A typical CoC bylaw reads:
\begin{center} \fbox{\begin{minipage}{0.75\linewidth}
\emph{No person shall cause continuous sound that exceeds 75dBA during the day-time (60dBA during the night-time)}
\end{minipage}}
\end{center}
\begin{itemize}
\item Continuous sound = continuous duration over a 3 minute period, or sporadically for a total of 3 minutes over a 15 minute period.
\item Sound level must exceed 5dBA over ambient before it becomes an offence
\end{itemize}
For enforcement purposes, a class 2 sound level meter must be used.
\end{frame}
\begin{frame}{Acoustic sensing basics}
The classification of a sound level meter is governed by the IEC 61672-1:2002 standard.
\vfill
The standard defines (i) specifications of the device, (ii) tests to verify conformance, and (iii) a schedule for periodic testing of the device.
\vfill
Disclaimer: I have not read the standard!
\vfill
Class 2 devices have tolerance limits of $\pm 1.4$dB and measurement ranges from 35dBA-130dBA
\end{frame}
\begin{frame}{Acoustic sensing basics}
How ``far away" can a sensor measure a sound source?
\vfill
A sensor {\bf does not} measure sound at a distance. It measures air pressure {\bf at the sensor location}.
\vfill
Two concepts are relevant:
\begin{enumerate}
\item sound {\bf propagation} - sound is attenuated further from the source
\item {\bf noise floor} - the minimum sound level that can be measured by a sensor (\alert{independent} of the distance to the sound source).
\end{enumerate}
\vfill
Example 1: A 100dB noise source could be detected by a sensor with a 35dB noise floor 500m away.
\vfill
Example 2: A 60dB noise source can be detected by the same sensor at most 5m away.
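Under a free-field point-source assumption (an idealisation; urban propagation also involves reflections, barriers and air absorption), the level at distance $d$ is roughly
\[
L(d) \approx L(d_0) - 20\log_{10}\!\left(\frac{d}{d_0}\right),
\]
so a 100dB source referenced at $d_0 = 1$m loses about $20\log_{10}(500) \approx 54$dB over 500m, leaving roughly 46dB, still above a 35dB noise floor.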
\end{frame}
\section{Questions/feedback}
\begin{frame}{Questions/feedback}
\begin{center}
\includegraphics[scale=0.2]{figures/36601.png}
\end{center}
\end{frame}
\end{document} | {
"alphanum_fraction": 0.6351127159,
"avg_line_length": 36.0809859155,
"ext": "tex",
"hexsha": "1615030e26d7b57dd7fa943ee169f89b8e195a3e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "fde4e0156f7bf8477d28827647e7032457e8fce3",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "deanmrichert/urbanNoiseMonitoring",
"max_forks_repo_path": "presentations/CoC_managersMeeting/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "fde4e0156f7bf8477d28827647e7032457e8fce3",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "deanmrichert/urbanNoiseMonitoring",
"max_issues_repo_path": "presentations/CoC_managersMeeting/main.tex",
"max_line_length": 217,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "fde4e0156f7bf8477d28827647e7032457e8fce3",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "deanmrichert/urbanNoiseMonitoring",
"max_stars_repo_path": "presentations/CoC_managersMeeting/main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2497,
"size": 10247
} |
\chapter{NetCDF and ESDM Functionalities}
\label{ch:func}
This chapter compares the NetCDF functionalities with the current version of ESDM.
\section{Error Handling}
{\itshape
Each netCDF function in the C, Fortran 77, and Fortran 90 APIs returns 0 on success, in the tradition of C.
When programming with netCDF in these languages, always check return values of every netCDF API call.
The return code can be looked up in netcdf.h (for C programmers) or netcdf.inc (for Fortran programmers), or you can use the strerror function to print out an error message.
In general, if a function returns an error code, you can assume it did not do what you hoped it would.
NetCDF functions return a non-zero status codes on error.
If the returned status value indicates an error, you may handle it in any way desired, from printing an associated error message and exiting to ignoring the error indication and proceeding (not recommended!).
Occasionally, low-level I/O errors may occur in a layer below the netCDF library.
For example, if a write operation causes you to exceed disk quotas or to attempt to write to a device that is no longer available, you may get an error from a layer below the netCDF library, but the resulting write error will still be reflected in the returned status value.
}\footnote{Adapted from \url{https://www.unidata.ucar.edu/software/netcdf/docs/group__error.html}}
\subsection{ESDM}
NetCDF has an extensive classification for the possible errors that might happen.
ESDM does not share this classification, and it is something that their developers are not considering to include in the final version.
This decision does not affect the performance of ESDM, but it is critical when NetCDF tests are evaluated.
NetCDF tests introduce invalid conditions and, because ESDM does not produce the expected error, the test fails.
To provide a fair comparison with ESDM, the code in the NetCDF tests that considers invalid parameters as input was removed.
\section{NetCDF Data Models}
{\itshape
There are two netCDF data models, the \textbf{Classic Model} (Section \ref{sec:classic}) and the \textbf{Common Data Model} (Section \ref{sec:common}) (also called the netCDF-4 data model or enhanced model).
The Classic Model is the simpler of the two, and is used for all data stored in classic CDF-1 format, 64-bit offset CDF-2 format, 64-bit data CDF-5 format, or netCDF-4 classic model format.
The Common Data Model (sometimes also referred to as the netCDF-4 data model) is an extension of the Classic Model that adds more powerful forms of data representation and data types at the expense of some additional complexity.
Although data represented with the Classic Model can also be represented using the Common Data Model, datasets that use Common Data Model features, such as user-defined data types, cannot be represented with the Classic Model. Use of the Common Data Model requires storage in the netCDF-4 format.
}\footnote{Adapted from \url{https://www.unidata.ucar.edu/software/netcdf/docs/faq.html}}
\subsection{Classic Data Model}
\label{sec:classic}
{\itshape
The \textbf{Classic Data Model} consists of variables, dimensions, and attributes. This way of thinking about data was introduced with the very first NetCDF release and is still the core of all NetCDF files.
\begin{description}
\item[Variables] $N$-dimensional arrays of data. Variables in NetCDF files can be one of six types (char, byte, short, int, float, double).
\item[Dimensions] describe the axes of the data arrays. A dimension has a name and a length. An unlimited dimension has a length that can be expanded at any time, as more data are written to it. NetCDF files can contain at most one unlimited dimension.
\item[Attributes] annotate variables or files with small notes or supplementary metadata. Attributes are always scalar values or 1D arrays, which can be associated with either a variable or the file as a whole. Although there is no enforced limit, the user is expected to keep attributes small.
\end{description}
\subsection{Common Data Model}
\label{sec:common}
With NetCDF-4, the NetCDF data model has been extended, in a backwards-compatible way. The new data model, which is known as the \textbf{Common Data Model}, is part of an effort here at Unidata to find a common engineering language for the development of scientific data solutions. It contains the variables, dimensions, and attributes of the classic data model, but adds:
\begin{description}
\item[Groups] A way of hierarchically organising data, similar to directories in a Unix file system.
\item[User-defined Types] The user can now define compound types (like C structures), enumeration types, variable-length arrays, and opaque types.
\end{description}
These features may only be used when working with a NetCDF-4/HDF5 file.
Files created in classic formats cannot support groups or user-defined types.
}\footnote{Adapted from \url{https://www.unidata.ucar.edu/software/netcdf/docs/netcdf_data_model.html}}
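To make the three Classic Model constructs concrete, the sketch below defines an unlimited dimension, a variable, a variable attribute, and a global attribute through the NetCDF C API; all names and values are placeholders.
\begin{verbatim}
#include <netcdf.h>

int main(void) {
  int ncid, dimid, varid;

  nc_create("classic.nc", NC_CLOBBER, &ncid);

  /* Dimension: an axis with a name and a (possibly unlimited) length. */
  nc_def_dim(ncid, "time", NC_UNLIMITED, &dimid);

  /* Variable: a 1-D float array over the "time" dimension. */
  nc_def_var(ncid, "temperature", NC_FLOAT, 1, &dimid, &varid);

  /* Attributes: small metadata attached to a variable or to the file. */
  nc_put_att_text(ncid, varid, "units", 7, "celsius");
  nc_put_att_text(ncid, NC_GLOBAL, "title", 12, "example file");

  nc_close(ncid);
  return 0;
}
\end{verbatim}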
\subsection{ESDM}
NetCDF includes tests with both Classic and Common Data Models.
While the ESDM data model basically supports all features of the Classic Model, it does not support the setting of the mode.
For the Common Data Model, ESDM does not support groups and user-defined types.
% I think here is the place to explain the ideia of ESDM being like NetCDF 3.5. It has to be clear what is supported or not.
\section{Data Modes}
{\itshape
There are two modes associated with accessing a NetCDF file \footnote{Adapted from \url{https://northstar-www.dartmouth.edu/doc/idl/html_6.2/NetCDF_Data_Modes.html}}:
\begin{description}
\item[Define Mode] In define mode, dimensions, variables, and new attributes can be created, but variable data cannot be read or written.
\item[Data Mode] In data mode, data can be read or written, and attributes can be changed, but new dimensions, variables, and attributes cannot be created.
\end{description}
}
\subsection{ESDM}
The current version of ESDM does not have restrictions regarding the modes.
Once the file is open, the user can make any modifications they want.
Tables \ref{tab_modes_create} and \ref{tab_modes_open} compare the options for creating and opening a file using NetCDF and ESDM.
ESDM maps each NetCDF flag to an internal flag, provided the mode is supported.
\begin{table}[H]
\centering
\begin{tabular}{|l|m{6cm}|l|}
\hline
\multicolumn{1}{|c|}{NetCDF Flag} & \multicolumn{1}{c|}{Description} & \multicolumn{1}{c|}{ESDM Support} \\ \hline \hline
NC\_CLOBBER & Overwrite existing file & ESDM\_CLOBBER \\ \hline
NC\_NOCLOBBER & Do not overwrite existing file & ESDM\_NOCLOBBER \\ \hline
NC\_SHARE & Limit write caching - netcdf classic files only & NOT SUPPORTED \\ \hline
NC\_64BIT\_OFFSET & Create 64-bit offset file & NOT SUPPORTED \\ \hline
NC\_64BIT\_DATA & Create CDF-5 file (alias NC\_CDF5) & NOT SUPPORTED \\ \hline
NC\_NETCDF4 & Create NetCDF-4/HDF5 file & NOT SUPPORTED \\ \hline
NC\_CLASSIC\_MODEL & Enforce NetCDF classic mode on NetCDF-4/HDF5 files & NOT SUPPORTED \\ \hline
NC\_DISKLESS & Store data in memory & NOT SUPPORTED \\ \hline
NC\_PERSIST & Force the NC\_DISKLESS data from memory to a file & NOT SUPPORTED \\ \hline
\hline
\end{tabular}
\caption{\label{tab_modes_create} Modes -- Creating a file.}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|l|m{6.8cm}|l|}
\hline
\multicolumn{1}{|c|}{NetCDF Flag} & \multicolumn{1}{c|}{Description} & \multicolumn{1}{c|}{ESDM Support} \\ \hline \hline
NC\_NOWRITE & Open the dataset with read-only access & ESDM\_MODE\_FLAG\_READ \\ \hline
NC\_WRITE & Open the dataset with read-write access & ESDM\_MODE\_FLAG\_WRITE \\ \hline
NC\_SHARE & Share updates, limit caching & NOT SUPPORTED \\ \hline
NC\_DISKLESS & Store data in memory & NOT SUPPORTED \\ \hline
NC\_PERSIST & Force the NC\_DISKLESS data from memory to a file & NOT SUPPORTED \\ \hline
\hline
\end{tabular}
\caption{\label{tab_modes_open} Modes -- Opening a file.}
\end{table}
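The sketch below illustrates the two modes and some of the flags listed above: the file is created in define mode, switched to data mode with \texttt{nc\_enddef}, and later reopened read-only. As noted, ESDM accepts the corresponding flags but does not enforce the mode restrictions. All names and values are placeholders.
\begin{verbatim}
#include <netcdf.h>

int main(void) {
  int ncid, dimid, varid;
  float values[3] = {1.0f, 2.0f, 3.0f};

  /* Define mode: create the file and its metadata. */
  nc_create("modes.nc", NC_CLOBBER, &ncid);
  nc_def_dim(ncid, "x", 3, &dimid);
  nc_def_var(ncid, "v", NC_FLOAT, 1, &dimid, &varid);

  /* Data mode: writing is only allowed after nc_enddef. */
  nc_enddef(ncid);
  nc_put_var_float(ncid, varid, values);
  nc_close(ncid);

  /* Reopen read-only; nc_redef would be needed to add new metadata. */
  nc_open("modes.nc", NC_NOWRITE, &ncid);
  nc_close(ncid);
  return 0;
}
\end{verbatim}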
\section{Data Types}
Data in a NetCDF file may be one of the \textbf{atomic types} (Section \ref{ed-type}), or one of the \textbf{user-defined types} (Section \ref{ud-type}).
\subsection{Atomic Types}
\label{ed-type}
{\itshape
Atomic types are those which can not be further subdivided.
All six classic model types (BYTE, CHAR, SHORT, INT, FLOAT, DOUBLE) are atomic, and fully supported in netCDF-4.
The following new atomic types have been added in netCDF-4: UBYTE, USHORT, UINT, INT64, UINT64, STRING.
The string type will efficiently store arrays of variable length strings.
}\footnote{Adapted from \url{https://www.unidata.ucar.edu/software/netcdf/workshops/2007/nc4features/AtomicTypes.html}}
Table \ref{tab_atomic_types} shows the definitions of the atomic types supported by the NetCDF interface.
\footnote{This table is no longer available on Unidata website. To reconstruct it, the information is in \url{https://www.unidata.ucar.edu/software/netcdf/docs/netcdf_8h.html}, in which the definition is now made using bytes, instead of bits. For example, NC\_USHORT is now defined as \textit{unsigned 2-byte int}.}
\begin{table}[H]
\centering
\begin{tabular}{|l|l|}
\hline
\multicolumn{1}{|c|}{Type} & \multicolumn{1}{c|}{Description} \\ \hline \hline
NC\_BYTE & 8-bit signed integer \\ \hline
NC\_UBYTE & 8-bit unsigned integer \\ \hline
NC\_CHAR & 8-bit character \\ \hline
NC\_SHORT & 16-bit signed integer \\ \hline
NC\_USHORT & 16-bit unsigned integer \\ \hline
NC\_INT (or NC\_LONG) & 32-bit signed integer \\ \hline
NC\_UINT & 32-bit unsigned integer \\ \hline
NC\_INT64 & 64-bit signed integer \\ \hline
NC\_UINT64 & 64-bit unsigned integer \\ \hline
NC\_FLOAT & 32-bit floating-point \\ \hline
NC\_DOUBLE & 64-bit floating-point \\ \hline
NC\_STRING & variable length character string \\ \hline
\hline
\end{tabular}
\caption{\label{tab_atomic_types} NetCDF4 atomic types.}
\end{table}
\subsection{ESDM}
ESDM supports all NetCDF4 atomic types but NC\_STRING.
It is worth mentioning that, although ESDM has a type SMD\_DTYPE\_STRING, this type does not behave like the NC\_STRING type.
Table \ref{datatypes-netcdf} summarizes the available NetCDF data types and the corresponding support from ESDM.
\begin{table}[H]
\centering
\begin{tabular}{|l|m{4.7cm}|l|l|}
\hline
\multicolumn{1}{|c|}{NetCDF} & \multicolumn{1}{c|}{Definition} & \multicolumn{1}{c|}{ESDM} & \multicolumn{1}{c|}{ESDM} \\
\multicolumn{1}{|c|}{Type} & & \multicolumn{1}{c|}{Type} & \multicolumn{1}{c|}{Representation} \\ \hline \hline
\scriptsize{NC\_NAT} & \small{NAT = Not A Type (c.f. NaN)} & \scriptsize{SMD\_TYPE\_AS\_EXPECTED} & \small{as expected} \\ \hline
\scriptsize{NC\_BYTE} & \small{signed 1 byte integer} & \scriptsize{SMD\_DTYPE\_INT8} & \small{int8\_t} \\ \hline
\scriptsize{NC\_CHAR} & \small{ISO/ASCII character} & \scriptsize{SMD\_DTYPE\_CHAR} & \small{char} \\ \hline
\scriptsize{NC\_SHORT} & \small{signed 2 byte integer} & \scriptsize{SMD\_DTYPE\_INT16} & \small{int16\_t} \\ \hline
\scriptsize{NC\_INT} & \small{signed 4 byte integer} & \scriptsize{SMD\_DTYPE\_INT32} & \small{int32\_t} \\ \hline
\scriptsize{NC\_LONG} & \small{deprecated, but required for backward compatibility} & \scriptsize{SMD\_DTYPE\_INT32} & \small{int32\_t} \\ \hline
\scriptsize{NC\_FLOAT} & \small{single precision floating-point number} & \scriptsize{SMD\_DTYPE\_FLOAT} & \small{32 bits} \\ \hline
\scriptsize{NC\_DOUBLE} & \small{double precision floating-point number} & \scriptsize{SMD\_DTYPE\_DOUBLE} & \small{64 bits} \\ \hline
\scriptsize{NC\_UBYTE} & \small{unsigned 1 byte int} & \scriptsize{SMD\_DTYPE\_UINT8} & \small{uint8\_t} \\ \hline
\scriptsize{NC\_USHORT} & \small{unsigned 2-byte int} & \scriptsize{SMD\_DTYPE\_UINT16} & \small{uint16\_t} \\ \hline
\scriptsize{NC\_UINT} & \small{unsigned 4-byte int} & \scriptsize{SMD\_DTYPE\_UINT32} & \small{uint32\_t} \\ \hline
\scriptsize{NC\_INT64} & \small{signed 8-byte int} & \scriptsize{SMD\_DTYPE\_INT64} & \small{int64\_t} \\ \hline
\scriptsize{NC\_UINT64} & \small{unsigned 8-byte int} & \scriptsize{SMD\_DTYPE\_UINT64} & \small{uint64\_t} \\ \hline
\scriptsize{NC\_STRING} & \small{variable length character string} & \scriptsize{NOT SUPPORTED YET} & \scriptsize{NOT SUPPORTED YET} \\ \hline
\end{tabular}
\caption{\label{datatypes-netcdf} Data Types Compatibility}
\end{table}
\subsection{User-Defined Types}
\label{ud-type}
{\itshape
User defined types allow for more complex data structures. NetCDF-4 has added support for four different user defined data types.
\begin{description}
\item[Compound Type]
Like a C struct, a compound type is a collection of types, including other user defined types, in one package.
  \item[Opaque Type]
  This type has only a size per element, and no other type information.
  \item[Variable Length Array Type]
  Used to store ragged arrays.
\item[Enum Type]
Like an enumeration in C, this type lets you assign text values to integer values, and store the integer values.
\end{description}
Users may construct user-defined types with the various \texttt{nc\_def\_*} functions described in this section.
They may learn about user defined types by using the \texttt{nc\_inq\_} functions defined in this section.
\footnote{Adapted from \url{https://www.unidata.ucar.edu/software/netcdf/docs/group__user__types.html}}
}
\subsection{ESDM}
The current version of ESDM does not support user-defined data types, but the developers intend to support this feature in the final version.
\begin{table}[H]
\centering
\begin{tabular}{|l|m{6cm}|l|}
\hline
\multicolumn{1}{|c|}{NetCDF Type} & \multicolumn{1}{c|}{Definition} & \multicolumn{1}{c|}{ESDM Support} \\ \hline \hline
\scriptsize{NC\_VLEN} & used internally for vlen types & \scriptsize{NOT SUPPORTED YET} \\ \hline
\scriptsize{NC\_OPAQUE} & used internally for opaque types & \scriptsize{NOT SUPPORTED YET} \\ \hline
\scriptsize{NC\_COMPOUND} & used internally for compound types & \scriptsize{NOT SUPPORTED YET} \\ \hline
\scriptsize{NC\_ENUM} & used internally for enum types & \scriptsize{NOT SUPPORTED YET} \\ \hline \hline
\end{tabular}
\caption{\label{ud-datatypes} User-Defined Types}
\end{table}
\section{Compression}
{\itshape
The NetCDF-4 libraries inherit the capability for data compression from the HDF5 storage layer underneath the NetCDF-4 interface.
Linking a program that uses NetCDF to a NetCDF-4 library allows the program to read compressed data without changing a single line of the program source code.
Writing NetCDF compressed data only requires a few extra statements.
And the nccopy utility program supports converting classic NetCDF format data to or from compressed data without any programming.
}\footnote{Adapted from \url{https://www.unidata.ucar.edu/blogs/developer/entry/netcdf_compression}}
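As a sketch of the ``few extra statements'' mentioned above, the example below enables zlib compression for a single variable of a NetCDF-4/HDF5 file; as explained next, this path is not available through ESDM. Names and parameter values are placeholders.
\begin{verbatim}
#include <netcdf.h>

int main(void) {
  int ncid, dimid, varid;

  /* Compression requires the NetCDF-4/HDF5 format. */
  nc_create("compressed.nc", NC_CLOBBER | NC_NETCDF4, &ncid);
  nc_def_dim(ncid, "x", 1000, &dimid);
  nc_def_var(ncid, "v", NC_FLOAT, 1, &dimid, &varid);

  /* shuffle = 1, deflate = 1, compression level 0-9 (here 4). */
  nc_def_var_deflate(ncid, varid, 1, 1, 4);

  nc_close(ncid);
  return 0;
}
\end{verbatim}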
\subsection{ESDM}
ESDM does not support compression yet.
Because of that, all functions and tests related to chunking, deflate, and Fletcher32 checksums will not work when using ESDM.
We will integrate a compression library in the future and support quantification of error tolerance levels for different variables.
The Scientific Compression Library (SCIL) is not yet integrated with the current version of ESDM, but it can be found in the following Git repository:
\begin{center}
\url{https://github.com/JulianKunkel/scil/}
\end{center}
\section{Endianness}
{\itshape
The endianness is defined as the order of bytes in multi-byte numbers: numbers encoded in big-endian have their most significant bytes written first, whereas numbers encoded in little-endian have their least significant bytes first.
Little-endian is the native endianness of the IA32 architecture and its derivatives, while big-endian is native to SPARC and PowerPC, among others.
The native-endianness procedure returns the native endianness of the machine it runs on.
}\footnote{Adapted from \url{https://www.gnu.org/software/guile/manual/html_node/Bytevector-Endianness.html}}
{\itshape
NetCDF-4 uses \textbf{reader-makes-right} approach, in which:
\begin{itemize}
\item Writer always uses native representations, so no conversion is necessary on writing
\item Reader is responsible for detecting what representation is used and applying a conversion, if necessary, to reader's native representation
\item No conversion is necessary if reader and writer use same representation
\end{itemize}
NetCDF-4 also lets writer control endianness explicitly, if necessary.
}\footnote{Reference: \url{https://www.unidata.ucar.edu/software/netcdf/workshops/2008/netcdf4/ReaderMakesRight.html}}
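The explicit control mentioned above is exposed by \texttt{nc\_def\_var\_endian} in the NetCDF-4 C API; the sketch below forces big-endian storage for one variable. Names are placeholders.
\begin{verbatim}
#include <netcdf.h>

int main(void) {
  int ncid, dimid, varid;

  nc_create("endian.nc", NC_CLOBBER | NC_NETCDF4, &ncid);
  nc_def_dim(ncid, "x", 10, &dimid);
  nc_def_var(ncid, "v", NC_INT, 1, &dimid, &varid);

  /* NC_ENDIAN_NATIVE (default), NC_ENDIAN_LITTLE, or NC_ENDIAN_BIG. */
  nc_def_var_endian(ncid, varid, NC_ENDIAN_BIG);

  nc_close(ncid);
  return 0;
}
\end{verbatim}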
\subsection{ESDM}
ESDM only supports the native endianness of the machine it runs on.
The developers believe that the native endianness of the machine is enough to demonstrate the benefits of using ESDM to improve efficiency in the system.
The rationale behind this design choice is that ESDM will be deployed in data centres and will be used to store data optimally in the data centre partitioned across available storage solutions.
It is not intended to be stored in a portable fashion.
Therefore, data can be imported/exported between, e.g., a NetCDF format and the ESDM native format.
\section{Groups}
{\itshape
NetCDF-4 files can store attributes, variables, and dimensions in hierarchical groups.
This allows the user to create a structure much like a Unix file system.
In NetCDF, each group gets an ncid.
Opening or creating a file returns the ncid for the root group (which is named ``/'').
Dimensions are scoped such that they are visible to all child groups. For example, you can define a dimension in the root group, and use its dimension id when defining a variable in a sub-group.
Attributes defined as NC\_GLOBAL apply to the group, not the entire file.
The degenerate case, in which only the root group is used, corresponds exactly with the classic data model, before groups were introduced.
}\footnote{Adapted from \url{https://www.unidata.ucar.edu/software/netcdf/docs/groups.html}}
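A minimal sketch of the group hierarchy and of the dimension scoping described above, using the NetCDF-4 C API; group, dimension, and variable names are placeholders.
\begin{verbatim}
#include <netcdf.h>

int main(void) {
  int ncid, grpid, dimid, varid;

  nc_create("groups.nc", NC_CLOBBER | NC_NETCDF4, &ncid);

  /* A dimension defined in the root group ... */
  nc_def_dim(ncid, "time", NC_UNLIMITED, &dimid);

  /* ... is visible when defining a variable in a child group. */
  nc_def_grp(ncid, "measurements", &grpid);
  nc_def_var(grpid, "temperature", NC_FLOAT, 1, &dimid, &varid);

  nc_close(ncid);
  return 0;
}
\end{verbatim}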
\subsection{ESDM}
In general, ESDM does not support groups from NetCDF.
When only the root group is used, ESDM can work adequately and assumes the group and the file are the same entity.
The ability to work with groups is a functionality that ESDM developers may implement depending on future requirements.
% Mention here the option to use \ (os something like it, I don't remember) to access groups.
\section{Fill Values}
{\itshape
Sometimes there are missing values in the data, and some value is needed to represent them.
For example, what value do you put in a sea-surface temperature variable for points over land?
In NetCDF, you can create an attribute for the variable (and of the same type as the variable) called \_FillValue that contains a value that you have used for missing data.
Applications that read the data file can use this to know how to represent these values.
}\footnote{Adapted from \url{https://www.unidata.ucar.edu/software/netcdf/docs/fill_values.html}}
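For illustration, the sketch below attaches a \texttt{\_FillValue} attribute of the variable's own type, as described above; names and the chosen fill value are placeholders.
\begin{verbatim}
#include <netcdf.h>

int main(void) {
  int ncid, dimid, varid;
  float fill = -9999.0f;   /* marker for missing data */

  nc_create("fill.nc", NC_CLOBBER, &ncid);
  nc_def_dim(ncid, "x", 100, &dimid);
  nc_def_var(ncid, "sst", NC_FLOAT, 1, &dimid, &varid);

  /* The attribute has the same type as the variable. */
  nc_put_att_float(ncid, varid, "_FillValue", NC_FLOAT, 1, &fill);

  nc_close(ncid);
  return 0;
}
\end{verbatim}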
\subsection{ESDM}
ESDM supports fill values.
There are some specific details in the implementation of fill values inside ESDM that are worth noting.
% Talk about the differences between the approach. If I'm not mistaken, the fill value is the real value in NetCDF and in ESDM is just something like a flag indicating the position has a fill value.
\section{Type Conversion}
{\itshape
With the new interface, users need not be aware of the external type of numeric variables, since automatic conversion to or from any desired numeric type is now available.
You can use this feature to simplify code, by making it independent of external types.
The elimination of void* pointers provides detection of type errors at compile time that could not be detected with the previous interface.
Programs may be made more robust with the new interface, because they need not be changed to accommodate a change to the external type of a variable.
If conversion to or from an external numeric type is necessary, it is handled by the library.
This automatic conversion and separation of external data representation from internal data types will become even more important in netCDF version 4, when new external types will be added for packed data for which there is no natural corresponding internal type (for example, arrays of 11-bit values).
Converting from one numeric type to another may result in an error if the target type is not capable of representing the converted value.
For example, a short may not be able to hold data stored externally as an NC\_FLOAT (an IEEE floating-point number).
When accessing an array of values, an NC\_ERANGE error is returned if one or more values are out of the range of representable values, but other values are converted properly.
Note that mere loss of precision in type conversion does not return an error.
Thus, if you read double precision values into a long, for example, no error results unless the magnitude of the double precision value exceeds the representable range of longs on your platform.
Similarly, if you read a large integer into a float incapable of representing all the bits of the integer in its mantissa, this loss of precision will not result in an error.
If you want to avoid such precision loss, check the external types of the variables you access to make sure you use an internal type that has a compatible precision.
The new interface distinguishes arrays of characters intended to represent text strings from arrays of 8-bit bytes intended to represent small integers.
The interface supports the internal types text, uchar, and schar, intended for text strings, unsigned byte values, and signed byte values.
}\footnote{Adapted from \url{https://www.unidata.ucar.edu/software/netcdf/release-notes-3.3.html}}
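The sketch below illustrates this automatic conversion: data written as \texttt{float} is read back into a \texttt{double} buffer, with the conversion handled by the library; reading into a type that cannot represent a value would instead return \texttt{NC\_ERANGE}. Names and values are placeholders.
\begin{verbatim}
#include <stdio.h>
#include <netcdf.h>

int main(void) {
  int ncid, dimid, varid, status;
  float  written[3] = {1.5f, 2.5f, 3.5f};
  double readback[3];

  nc_create("convert.nc", NC_CLOBBER, &ncid);
  nc_def_dim(ncid, "x", 3, &dimid);
  nc_def_var(ncid, "v", NC_FLOAT, 1, &dimid, &varid);
  nc_enddef(ncid);
  nc_put_var_float(ncid, varid, written);

  /* Read the float data into doubles; the library converts on the fly. */
  status = nc_get_var_double(ncid, varid, readback);
  if (status == NC_ERANGE)
    fprintf(stderr, "some values were out of range\n");

  nc_close(ncid);
  return 0;
}
\end{verbatim}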
\subsection{ESDM}
ESDM supports most of the data conversions but may return a slightly different error.
ESDM deals with type conversion the same way as NetCDF.
However, ESDM only accepts conversions for attributes, and not for variables.
% The reason behind this choice is ...
% In particular, conversions for attributes are not working in the tests.
% This part has to be rewritten. Or maybe removed for now. All the tests with the conversion using SMD are not here either.
\section{HDF5 Format}
{\itshape
NetCDF-4 allows some interoperability with HDF5.
The HDF5 files produced by netCDF-4 are perfectly respectable HDF5 files, and can be read by any HDF5 application.
NetCDF-4 relies on several new features of HDF5, including dimension scales.
The HDF5 dimension scales feature adds a bunch of attributes to the HDF5 file to keep track of the dimension information.
It is not just wrong, but wrong-headed, to modify these attributes except with the HDF5 dimension scale API.
If you do so, then you will deserve what you get, which will be a mess.
Additionally, netCDF stores some extra information for dimensions without dimension scale information.
(That is, a dimension without an associated coordinate variable).
So HDF5 users should not write data to a netCDF-4 file which extends any unlimited dimension, or change any of the extra attributes used by netCDF to track dimension information.
Also there are some types allowed in HDF5, but not allowed in netCDF-4 (for example the time type).
Using any such type in a netCDF-4 file will cause the file to become unreadable to netCDF-4.
So do not do it.
NetCDF-4 ignores all HDF5 references.
Can not make head nor tail of them.
Also netCDF-4 assumes a strictly hierarchical group structure.
No looping, you weirdo!
Attributes can be added (they must be one of the netCDF-4 types), modified, or even deleted, in HDF5.
}\footnote{Adapted from \url{https://www.unidata.ucar.edu/software/netcdf/docs/interoperability_hdf5.html}}
\subsection{ESDM}
ESDM does not support the HDF5 format.
% Again, if I'm not mistaken, I remember comments about this being tried in the past for a long time and not working properly. Maybe it's worth a comment about why it was not implemented (at least in this version of ESDM) and the consequences of this choice.
% Using transitivity, one can argue that ESDM is compatible with NetCDF which is compatible with HDF5. Therefore, ESDM is compatible with HDF5. Does it make sense?
We evaluate the proposed approach with four experiments.
A toy problem is first used to study the transfer of learning between two related processes via the shared inducing points.
We then compare the model with existing multi-output models on the tasks of predicting foreign exchange rate and air temperature.
In the final experiment, we show that joint learning under sparsity can yield significant performance gain on a large scale dataset of inverse dynamics of a robot arm.
Since we are using stochastic optimization, the learning rates need to be chosen carefully.
We found that the rates used in \cite{hensmangaussian} also work well for our model.
Specifically, we used the learning rates of $0.01$ for the variational parameters, $1 \times 10^{-5}$ for the covariance hyperparameters, and $1 \times 10^{-4}$ for the weights, noise precisions, and inducing inputs.
We also included a momentum term of $0.9$ for all of the parameters except the variational parameters and the inducing inputs.
All of the experiments are executed on an Intel(R) Core(TM) i7-2600 3.40GHz CPU with 8GB of RAM using Matlab R2012a.
%
\subsection{TOY PROBLEM}
In this toy problem, two related outputs are simulated from the same latent function $\sin(x)$ and corrupted by independent noise: $y_1(x) = \sin(x) + \epsilon$ and $y_2(x) = -\sin(x) + \epsilon$, $\epsilon \sim \Normal(0,0.01)$.
Each output is given 200 observations with missing values in the $(-7,-3)$ interval for the first output and the $(4,8)$ interval for the second output.
We used $Q = 1$ latent sparse process with squared exponential kernel, $h_1(x) = h_2(x) = 0$, and $M = 15$ inducing inputs for our model.
Figure \ref{fig:toy} shows the predictive distributions by our model (COGP) and independent GPs with stochastic variational inference (SVIGP, one for each output).
The locations of the inducing inputs are fixed and identical for both methods.
It is apparent from the figure that the independent GPs fail to predict the functions in the unobserved regions, especially for output 1.
In contrast, by using information from the observed intervals of one output to interpolate the missing signal of the other, COGP makes perfect prediction for both outputs.
This confirms the effectiveness of collaborative learning of sparse processes via the shared inducing variables.
Additionally, we note that the inference procedure learned that the weights are $w_{11} = 1.07$ and $w_{21} = -1.06$ which accurately reflects the correlation between the two outputs.
\begin{figure*}
\centering
\begin{tabular}{cccc}
\includegraphics[scale=0.2]{figures/toy-slfm-y1.eps} &
\includegraphics[scale=0.2]{figures/toy-svigp-y1.eps} &
\includegraphics[scale=0.2]{figures/toy-slfm-y2.eps} &
\includegraphics[scale=0.2]{figures/toy-svigp-y2.eps}
\end{tabular}
\caption{Simulated data and predictive distributions by COGP (first and third figure) and independent GPs using stochastic variational inference (second and last figure) for the toy problem. Solid black line: predictive mean; grey bar: two standard deviations; magenta dots: real observations; blue dots: missing data. The black crosses show the locations of the inducing inputs. By sharing inducing points across the outputs, COGP accurately interpolates the missing function values.}
\label{fig:toy}
\end{figure*}
\subsection{FOREIGN EXCHANGE RATE PREDICTION}
\begin{figure*}
\centering
\begin{tabular}{ccc}
\includegraphics[scale=0.28]{figures/fxCAD.eps} &
\includegraphics[scale=0.28]{figures/fxJPY.eps} &
\includegraphics[scale=0.28]{figures/fxAUD.eps}
\end{tabular}
\caption{Real observations and predictive distributions for CAD (left), JPY (middle), and AUD (right). The model used information from other currencies to effectively extrapolate the exchange rates of AUD. The color coding scheme is the same as in Figure \ref{fig:toy}.}
\label{fig:fx}
\end{figure*}
The first real world application we consider is to predict the foreign exchange rate w.r.t the US dollar of the top 10 international currencies (CAD, EUR, JPY, GBP, CHF, AUD, HKD, NZD, KRW, and MXN) and 3 precious metals (gold, silver, and platinum)\footnote{Data is available at http://fx.sauder.ubc.ca}.
The setting of our experiment described here is identical to that in \citet{alvarez2010efficient}.
The dataset consists of all the data available for the 251 working days in the year of 2007.
There are 9, 8, and 42 days of missing values for gold, silver, and platinum, respectively.
We remove from the data the exchange rate of CAD on days 50--100, JPY on days 100--150, and AUD on days 150--200.
Note that these 3 currencies are from very different geographical locations, making the problem more interesting.
The 153 points are used for testing, and the remaining 3051 data points are used for training.
Since the missing data corresponds to long contiguous sections, the objective here is to evaluate the capacity of the model to impute the missing currency values based on other currencies.
%todo: batch size, learn rate (maybe at the beginning of experiments)
For preprocessing we normalized the outputs to have zero mean and unit variance.
Since the exchange rates are driven by a small number of latent market forces \citep[see e.g.][]{alvarez2010efficient}, we tried different values of $Q \in \{1,2,3\}$ and selected $Q = 2$, which gave the best model evidence (ELBO).
We used the squared-exponential covariance function for the shared processes and the noise covariance function for the individual process of each output.
$M = 100$ inducing inputs (per sparse process) were randomly selected from the training data and fixed throughout training.
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\caption{Performance comparison on the foreign exchange rate dataset. Results are averages of the 3 outputs over 5 repetitions. Smaller figures are better.}
\label{tab:fx}
\begin{center}
\begin{tabular}{ccc}
\toprule
\textbf{METHOD} & \textbf{SMSE} & \textbf{NLPD} \\ \hline
COGP & \textbf{0.2125} & -0.8394 \\
CGP & 0.2427 & \textbf{-2.9474} \\
IGP & 0.5996 & 0.4082 \\
%CGPVAR & 0.2795 & NA \\
%ICM & 0.3927 &
\bottomrule
\end{tabular}
\end{center}
\end{table}
The real data and predictive distributions by our model are shown in Figure \ref{fig:fx}.
They exhibit similar behaviors to those by the convolved model with inducing kernels in \citet{alvarez2010efficient}.
In particular, both models perform better at capturing the strong depreciation of the AUD than the fluctuations of the CAD and JPY currency.
Further analysis of the dataset found that 4 other currencies (GBP, NZD, KRW, and MXN) also experienced the same trend during days 150--200.
This information from these currencies was effectively used by the model to extrapolate the values of the AUD.
%todo: result for independent gps
We also report in Table \ref{tab:fx} the predictive performance of our model compared to the convolved GPs model with exact inference \citep[CGP,][]{alvarez-lawrence-nips-08} and independent GPs (IGP, one for each output).
Our model outperforms both of CGP and IGP in terms of the standardized mean squared error (SMSE).
CGP has lower negative log predictive density (NLPD), mainly due to the less conservative predictive variance of the exact CGP for the CAD currency.
% no std because there are 3 outputs but variance is small
For reference, the convolved GP model with approximation via the variational inducing kernels \citep[CGPVAR,][]{alvarez2010efficient}
has an SMSE of 0.2795, while the NLPD was not provided.
Training took only 10 minutes for our model compared to 1.4 hours for the full CGP model.
\subsection{AIR TEMPERATURE PREDICTION}
%
\setlength{\tabcolsep}{4pt}
\begin{table}[t]
\caption{Performance comparison on the air temperature dataset. Results are averages of 2 outputs over 5 repetitions. }
\label{tab:air}
\begin{center}
\begin{tabular}{ccc}
\toprule
\textbf{METHOD} & \textbf{SMSE} & \textbf{NLPD} \\
\hline
COGP & \textbf{0.1077} & \textbf{2.1712} \\
CGP & 0.1125 & 2.2219 \\
IGP & 0.8944 & 12.5319 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
%
Next we consider the task of predicting air temperature at 4 different locations in the south coast of England.
The air temperatures are recorded by a network of weather sensors (named Bramblemet, Sotonmet, Cambermet, and Chimet) during the period from July 10 to July 15, 2013.
Measurements were taken every 5 minutes, resulting in a maximum of 4320 observations.
%The data is gathered from a network of weather sensors (named Bramblemet, Sotonmet, Cambermet, and Chimet), each of which measures several environmental variables \citep{osborne2008towards}\footnote{Data is available at seperate web pages, see e.g. http://www.bramblemet.co.uk}.
%The sensors are close geographically so we can expect correlation in air temperature.
%We selected the sensor signal for air temperature during the period from July 10 to July 15, 2013.
%The sensors record measurements every 5 minutes, resulting in a maximum of 4320 observations.
There are missing data for Bramblemet (100 points), Chimet (15 points), and Sotonmet (1002 points), possibly due to network outages or hardware failures.
We further simulated failure of the sensors by removing the observations from the time periods [10.2 - 10.8] for Cambermet and [13.5 - 14.2] for Chimet.
The removed data comprises 375 data points and is used for testing.
The remaining data consisting of 15,788 points is used for training.
Similar to the previous experiment, the objective is to evaluate the ability of the model to use the signals from the functioning sensors to extrapolate the missing signals.
We normalized the outputs to have zero mean and unit variance.
We used $Q = 2$ sparse processes with the squared exponential covariance function and individual processes with the noise covariance function.
$M=200$ inducing inputs were randomly selected from the training set and fixed throughout training.
%
\begin{figure*}
\centering
\begin{tabular}{ccc}
\includegraphics[scale=0.3]{figures/cogp-weatherCambermet.eps} &
\includegraphics[scale=0.3]{figures/cgp-weatherCambermet.eps} &
\includegraphics[scale=0.3]{figures/weatherCambermet.eps}
\\
\includegraphics[scale=0.3]{figures/cogp-weatherChimet.eps} &
\includegraphics[scale=0.3]{figures/cgp-weatherChimet.eps} &
\includegraphics[scale=0.3]{figures/weatherChimet.eps}
\end{tabular}
\caption{Real data and predictive distributions by our method (COGP, left figures), the convolved GP method with exact inference (CGP, middle figures), and full independent GPs (right figures) for the air temperature problem. The color coding scheme is the same as in Figure \ref{fig:toy}.}
\label{fig:weather}
\end{figure*}
The real data and the predictive distributions by our model, CGP with exact inference \citep{alvarez-lawrence-nips-08}, and independent GPs are shown in Figure \ref{fig:weather}.
It is clear that the independent GP model is clueless in the test regions and thus simply uses the average temperature as its prediction.
For Cambermet, both COGP and CGP can capture the rising in temperature from the morning until the afternoon and the fall afterwards.
%For Chimet, both models perform poorly but CGP more so as it falsely predicts wild fluctuations that are non-existent in the data.
The performance of the models is summarized in Table \ref{tab:air}, which shows that our model outperforms CGP in terms of both SMSE and NLPD.
%All results are averaged of the 2 outputs over 5 repetitions.
It took 5 minutes on average to train our model compared to 3 hours of CGP with exact inference.
It is also worth noting the characteristics of the sparse processes learned by our model as they correspond to different patterns in the data.
In particular, one process has an inverse lengthscale of 136 which captures the global increase in temperature during the training period while the other has an inverse lengthscale of 0.5 to model the local variations within a single day.
\subsection{ROBOT INVERSE DYNAMICS}
Our last experiment is with a dataset relating to an inverse dynamics model of a 7-degree-of-freedom anthropomorphic robot arm \citep{vijayakumar2000locally}.
The data consists of 48,933 datapoints mapping from a 21-dimensional input space (7 joints positions, 7 joint velocities, 7 joint accelerations) to the corresponding 7 joint torques.
%The problem is strongly nonlinear due to complex superpositions of sine and cosine functions in robot dynamics \citep{vijayakumar2000locally}.
%Furthermore, exploratory analysis using standard GP with automatic relevance determination suggests that all of the 21 dimensions are relevant.
It has been used in previous work \citep[see e.g.][]{rasmussen-williams-book,vijayakumar2000locally} but only for single task learning.
\citet{chai2008multi} considered multitask learning of robot inverse dynamics but on a different and much smaller dataset.
Here we consider joint learning for the 4th and 7th torques, where the former has 2,000 points
while the latter has 44,484 points for training.
The test set consists of 8,898 observations equally divided between the two outputs.
%Note that the data was collected at 100Hz for 7.5 minutes in total, hence ideal for this experiment which evaluates the effectiveness of joint learning under the assumption of sparsity in the output spaces.
Since none of the existing multi-output models are applicable to problems of this scale, we compare with independent models that learn each output separately.
Standard GP is applied to the first output as it has only 2,000 observations for training.
For the second output, we used two baselines.
The first is the subset of data (SOD) approach where 2,000 data points are randomly selected for training with a standard GP model.
The second is the sparse GP with stochastic variational inference (SVIGP) using 500 inducing inputs and a batch size of 1,000.
In case of COGP, we also used a batch size of 1,000 and 500 inducing points for the shared process ($Q = 1$) and each of the individual processes.
%The squared exponential with automatic relevance determination (SEard) covariance function is used for all processes of all methods.
%Finally, the outputs are normalized to have zero mean and unit variance.
\begin{table}[t]
\caption{Performance comparison on the robot inverse dynamics dataset. In the last two lines, standard GP is applied to output 1 and the other method is applied to output 2. Results are averaged over 5 repetitions. }
\label{tab:robotarm}
\begin{center}
\begin{tabular}{lcccc}
\toprule
& \multicolumn{2}{c}{\textbf{OUTPUT 1}} & \multicolumn{2}{c}{\textbf{OUTPUT 2}} \\ \cmidrule(r){2-5}
\textbf{METHOD} & \textbf{SMSE} & \textbf{NLPD} & \textbf{SMSE} & \textbf{NLPD}\\
\midrule
COGP, learn z & \textbf{0.2631} & \textbf{3.0600} & 0.0127 & \textbf{0.8302} \\
COGP, fix z & 0.2821& 3.2281 & 0.0131 & 0.8685 \\
GP, SVIGP & 0.3119 & 3.2198 & \textbf{0.0101} & 1.1914 \\
GP, SOD & 0.3119 & 3.2198 & 0.0104 & 1.9407 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
The performance of all methods in terms of SMSE and NLPD is given in Table \ref{tab:robotarm}.
The benefits of learning the two outputs jointly are evident, as can be seen by the significantly lower SMSE and NLPD of COGP compared to the full GP for the first output (4th torque).
While the SMSE of the second output is essentially the same for all methods, the NLPD of COGP is substantially better than that of the independent SVIGP model which has the same amount of training data for this torque.
These results validate the impact of collaborative learning under sparsity assumptions, opening up new opportunities for improvement over single task learning with independent sparse processes.
Finally, we see on Table \ref{tab:robotarm} that optimizing the inducing inputs can yield better performance than fixing them.
More importantly, the overhead in computation is small, as demonstrated by the training times shown in Figure \ref{fig:time}.
For instance, the total training time is only 1.9 hours when learning with 500 inducing inputs compared to 1.6 hours when fixing them.
As this dataset is 21-dimensional, this small difference in training time confirms that learning of the inducing inputs is a practical option even when dealing with problems of high dimensions.
%As this dataset is 21-dimensional, the small difference in training time our analysis that the cost of optimizing the inducing inputs only scales sublinearly with the dimensionality of the problem.
%Thus, for datasets of high dimension, learning of the inducing inputs is a practical option.
% table for smses, nlpds, training tiem
\begin{figure}
\includegraphics[width=0.7\linewidth]{figures/sarcosTime.eps}
\caption{Learning of the inducing inputs is a practical option as the overhead in training time is small.}
\label{fig:time}
\end{figure}
\documentclass{article}
\usepackage{xcolor}
\usepackage{mathpartir}
\usepackage{amsthm}
\usepackage{mathtools}
\usepackage{amssymb}
\usepackage{latexsym}
\usepackage{stmaryrd}
\usepackage{fullpage}
\usepackage{subcaption}
\usepackage{tikz}
\input{macros}
\newcommand{\mypar}[1]{\vspace{0.2cm}\paragraph{#1:} \hfill\vspace{0.1cm}}
\newtheorem{theorem}{Theorem}
\begin{document}
\section{Syntax}
\mypar{Syntax of source language, $F_{MP}^{+}$}
\noindent\begin{tabular}{l r r l}
Types & $A, B$ & $::=$ & $\tau~\mid~A \to B~\mid~A \& B~\mid~\textcolor{magenta}{\Tabs{\alpha}{A}{B}}~\mid~\trecord{l}{A}$\vspace{0.3cm}\\
Monotypes & $\tau$ & $::=$ & $\nat~\mid~\top~\mid~\textcolor{magenta}{\alpha}~\mid~\tau_1\to\tau_2~\mid~\tau_1\&\tau_2~\mid~\trecord{l}{\tau}$ \vspace{0.3cm}\\
Expressions & $E$ & $::=$ & $i~\mid~()~\mid~x~\mid~\abs{x}{E}~\mid~E_1\,E_2~\mid~E_1,,E_2~\mid~E : A~\mid~\record{l}{E}~\mid~E.l$\vspace{0.1cm}\\
& & & $\textcolor{magenta}{\tabs{\alpha}{A}{E}}~\mid~\textcolor{magenta}{E\,A}$\vspace{0.3cm}\\
Type context & $\Delta$ & $::=$ & $\bullet~\mid~\Delta,\tentry{\alpha}{A}$
\end{tabular}
%% \klara{Is there a reason to exclude intersection types from monotypes?}
\mypar{Syntax of target language}
\noindent\begin{tabular}{l r r l}
Types & $\rho$ & $::=~$ & $\nat~\mid~\top~\mid~\alpha~\mid~\rho_1 \to \rho_2~\mid~\rho_1 \times \rho_2~\mid~\alpha~\mid~\forall{\alpha}.{\rho}~\mid~\trecord{l}{\rho}$ \vspace{0.3cm}\\
Expressions & $e$ & $::=~$ & $i~\mid~()~\mid~x~\mid~\abs{x}{e}~\mid~e_1\,e_2~\mid~e_1 , , e_2~\mid~c\,e~\mid~\Lambda{\alpha}.{e}~\mid~e\,\rho~\mid~\record{l}{e}~\mid~e.l$ \vspace{0.3cm}\\
Coercions & $c$ & $::=~$ & $\idC{\rho}~\mid~\topC{\rho}~\mid~\topArrC~\mid~\topAllC~\mid~\distArrC{\rho_1}{\rho_2}{\rho_3}~\mid~\distRecC{l}{\rho_1}{\rho_2}~\mid~\projlC{\rho_1}{\rho_2}~\mid~\projrC{\rho_1}{\rho_2}~\mid$\vspace{0.1cm}\\
& & & $\mpC{c_1}{c_2}~\mid~\compC{c_1}{c_2}~\mid~\pairC{c_1}{c_2}~\mid~\arrC{c_1}{c_2}~\mid~\textcolor{magenta}{\alllC{c}{\rho}}~\mid~\textcolor{magenta}{\allrC{\alpha}{c}}~\mid~\trecord{l}{c}$\vspace{0.3cm}\\
Type context & $\Phi$ & $::=~$ & $\bullet~\mid~\Phi,\alpha$\vspace{0.1cm}\\
Term context & $\Psi$ & $::=~$ & $\bullet~\mid~\Psi,\eentry{x}{\rho}$
\end{tabular}
\vspace{1cm}
\paragraph{Helper definitions:}
Desugaring of new coercion operators into target functions.
\begin{align*}
\trecord{l}{c} &= \abs{x}{\record{l}{c\,x}}\\
\alllC{c}{\rho} &= \abs{x}{(c\,(x\,\rho))}\\
\allrC{\alpha}{c} &= \abs{x}{\tabst{\alpha}{c\, x}}
\end{align*}
\section{Declarative Specification}
\subsection{Declarative bidirectional typing}
%% \subsubsection{Declarative bidirectional typing}
\fbox{
\begin{mathpar}
\inferrule*[right=TS-top]{\wfContext{\Gamma}}{\synthmode{\Gamma}{()}{\top}{()}} \and
\inferrule*[right=TS-nat]{\wfContext{\Gamma}}{\synthmode{\Gamma}{i}{\nat}{i}} \and
\inferrule*[right=TS-var]{\wfContext{\Gamma}\\(\eentry{x}{A})\in\Gamma}{\synthmode{\Gamma}{x}{A}{x}} \and
\inferrule*[right=TS-Rcd]{\synthmode{\Gamma}{E}{A}{e}}{\synthmode{\Gamma}{\record{l}{E}}{\trecord{l}{A}}{\record{l}{e}}}\and
\inferrule*[right=TS-Proj]{\synthmode{\Gamma}{E}{\trecord{l}{A}}{e}}{\synthmode{\Gamma}{E.l}{A}{e.l}}\and
\inferrule*[right=TS-app]{\synthmode{\Gamma}{E_1}{A\to B}{e_1}\\ \checkmode{\Gamma}{E_2}{A}{e_2}}{\synthmode{\Gamma}{E_1\,E_2}{B}{e_1\,e_2}} \and
\inferrule*[right=TS-anno]{\checkmode{\Gamma}{E}{A}{e}}{\synthmode{\Gamma}{E:A}{A}{e}} \and
\inferrule*[right=TS-merge]{\synthmode{\Gamma}{E_1}{A_1}{e_1}\\ \synthmode{\Gamma}{E_2}{A_2}{e_2}\\ \udisjoint{\Gamma}{A\& B}}{\synthmode{\Gamma}{E_1,,E_2}{A_1\& A_2}{\pairC{e_1}{e_2}}}\and
%% \nrule{TS-abs}{\checkmode{\Gamma,\tentry{\hat\alpha}{\top},\tentry{\hat\beta}{\top},\eentry{x}{\hat\alpha}}{E}{\hat\beta}{e}{\Gamma',\eentry{x}{\hat\alpha},\Theta}\\\fresh{\hat\alpha,\hat\beta}}{\synthmode{\Gamma}{\abs{x}{E}}{\hat\alpha\to \hat\beta}{\abs{x}{e}}{\Gamma'}}\and
\inferrule*[right=TS-Tabs]{\synthmode{\Gamma,\tentry{\alpha}{A}}{E}{B}{e}}{\synthmode{\Gamma}{\tabs{\alpha}{A}{E}}{\Tabs{\alpha}{A}{B}}{\Lambda\alpha.e}}\and
\inferrule*[right=TS-Tapp]{\synthmode{\Gamma}{E}{\Tabs{\alpha}{A}{B}}{e}\\ \disjoint{\Gamma}{A}{A'}}{\synthmode{\Gamma}{E\,A'}{[\substitution{\alpha}{A'}]B}{e\,\toTarget{A'}}}
\end{mathpar}
}
\fbox{
\begin{mathpar}
\inferrule*[right=TC-abs]{\wfT{\Gamma}{A}\\\checkmode{\Gamma,\eentry{x}{A}}{E}{B}{e}}{\checkmode{\Gamma}{\abs{x}{E}}{A\to B}{\abs{x}{e}}} \and
\inferrule*[right=TC-sub]{\synthmode{\Gamma}{E}{A}{e}\\ \Subtype{\Gamma}{A}{B}{c}}{\checkmode{\Gamma}{E}{B}{c\,e}}\and
\end{mathpar}
}
%% \fbox{
%% \begin{mathpar}
%% \inferrule*[right=T-..]{\checkmode{\Gamma,\eentry{x}{A_1}}{E}{A_2}{e}{\Gamma',\eentry{x}{A_1},\Theta}}{\checkmode{\Gamma}{\abs{x}{E}}{A_1\to A_2}{\abs{x}{e}}{\Gamma'}} \and
%% \inferrule*[right=T-sub]{\synthmode{\Gamma}{E}{A}{e}{\Theta}\\ \ldots }{\checkmode{\Gamma}{E}{B}{c\,e}{\Gamma'}}
%% \end{mathpar}
%% }
\subsection{Declarative Disjointness}
%% \mypar{Disjointness - Declarative}
\fbox{
\begin{mathpar}
\inferrule*[right=D-TopL]{ }{\disjoint{\Delta}{\top}{A}} \and
% \inferrule*[right=D-TopR]{ }{\disjoint{\Delta}{A}{\top}} \and
%% \inferrule*[right=D-Arr]{\disjoint{\Delta}{A_2}{B_2}}{\disjoint{\Delta}{A_1\to A_2}{B_1\to B_2}} \and
\inferrule*[right=D-ArrL]{\disjoint{\Delta}{A_2}{B}}{\disjoint{\Delta}{A_1\to A_2}{B}} \and
% \inferrule*[right=D-ArrR]{\disjoint{\Delta}{A}{B_2}}{\disjoint{\Delta}{A}{B_1\to B_2}} \and
\inferrule*[right=D-AndL]{\disjoint{\Delta}{A_1}{B} \\ \disjoint{\Delta}{A_2}{B}}{\disjoint{\Delta}{A_1\& A_2}{B}} \and
% \inferrule*[right=D-AndR]{\disjoint{\Delta}{A}{B_1} \\ \disjoint{\Delta}{A}{B_2}}{\disjoint{\Delta}{A}{B_1\& B_2}} \and
\inferrule*[right=D-VarL]{\tentry{\alpha}{A}\in{\Delta} \\ \subt{A}{B}}{\disjoint{\Delta}{\alpha}{B}} \and
% \inferrule*[right=D-VarR]{\tentry{\beta}{B}\in{\Delta} \\ \subt{B}{A}}{\disjoint{\Delta}{A}{\beta}} \and
\mprset{sep=1em}
\inferrule*[right=D-AllL]{(\forall \tau,\,\disjoint{\Delta}{\tau}{A}\implies\disjoint{\Delta}{[\alpha\mapsto\tau]B_1}{B_2})}{\disjoint{\Delta}{\Tabs{\alpha}{A}{B_1}}{B_2}}\and
% \inferrule*[right=D-AllR]{(\forall \tau,\,\disjoint{\Delta}{\tau}{A}\implies\disjoint{\Delta}{B_1}{[\alpha\mapsto\tau]B_2})}{\disjoint{\Delta}{B_1}{\Tabs{\alpha}{A}{B_2}}}\and
\inferrule*[right=D-Rec]{\disjoint{\Delta}{A}{B}}{\disjoint{\Delta}{\trecord{l}{A}}{\trecord{l}{B}}} \and
\inferrule*[right=D-NRec]{l_1\neq l_2}{\disjoint{\Delta}{\trecord{l_1}{A}}{\trecord{l_2}{B}}} \and
\inferrule*[right=D-NatRcd]{ }{\disjoint{\Delta}{\nat}{\trecord{l}{A}}}\and
\inferrule*[right=D-Sym]{\disjoint{\Delta}{A}{B}}{\disjoint{\Delta}{B}{A}}
%% \inferrule*[right=D-ax]{\starax{A}{B}}{\disjoint{\Delta}{A}{B}}
\end{mathpar}
}
\subsubsection{Notes}
\mypar{Subsumption of \textsc{D-Forall} from $F_{i}^{+}$}
Rule \textsc{D-Forall} of $F_{i}^{+}$ is subsumed by our rules:
\[
\inferrule*[right=D-forall]
{\disjoint{\Delta,\tentry{\alpha}{A_1\& B_1}}{A_2}{B_2}}
{\disjoint{\Delta}{\Tabs{\alpha}{A_1}{A_2}}{\Tabs{\alpha}{B_1}{B_2}}}
\]
For any well-formed substitution $\wfSubst{\Delta,\tentry{\alpha}{A_1\& B_1}}{\theta\circ[{\alpha}\mapsto{\tau_0}]}$,
it follows by case analysis that $\wfT{\Delta}{\tau_0}$, $\disjoint{\Delta}{\tau_0}{A_1}$ and $\disjoint{\Delta}{\tau_0}{B_1}$.
Applying $\theta\circ[{\alpha}\mapsto{\tau_0}]$ to the premise of \textsc{D-forall} yields $\disjoint{\theta(\Delta)}{[{\alpha}\mapsto{\tau_0}]{A_2}}{[{\alpha}\mapsto{\tau_0}]{B_2}}$. In our system,
this behaviour is recovered by:
\[
\inferrule*[right=D-allL($\tau_0$)]
{ \inferrule*{\text{given}}{\disjoint{\Delta}{\tau_0}{A_1}}
\\
\inferrule*[right=D-allR($\tau_0$)]
{
\inferrule*{\text{given}}{\disjoint{\Delta}{\tau_0}{B_1}}
\\
\inferrule*
{\text{given}}
    {\disjoint{\Delta}{[\alpha\mapsto\tau_0]A_2}{[\alpha\mapsto\tau_0]{B_2}}}
}
  {\disjoint{\Delta}{[\alpha\mapsto\tau_0]A_2}{\Tabs{\alpha}{B_1}{B_2}}}
}
{\disjoint{\Delta}{\Tabs{\alpha}{A_1}{A_2}}{\Tabs{\alpha}{B_1}{B_2}}}
\]
\subsection{Declarative Subtyping}
\fbox{
\begin{mathpar}
\inferrule*[right=S-refl]{ }{\Subtype{\Delta}{A}{A}{\idC{\toTarget{A}}}} \and
\inferrule*[right=S-trans]{\Subtype{\Delta}{A_1}{A_2}{c} \\ \Subtype{\Delta}{A_2}{A_3}{c'}}{\Subtype{\Delta}{A_1}{A_3}{\compC{c'}{c}}} \\
\inferrule*[right=S-top]{ }{\Subtype{\Delta}{A}{\top}{\topC{\toTarget{A}}}} \and
\inferrule*[right=S-topArr]{ }{\Subtype{\Delta}{\top}{\top\to\top}{\topArrC}} \and
\inferrule*[right=S-topAll]{ }{\Subtype{\Delta}{\top}{\Tabs{\alpha}{A}{\top}}{\topAllC}} \and
\inferrule*[right=S-and]{\Subtype{\Delta}{A}{B_1}{c_1} \\ \Subtype{\Delta}{A}{B_2}{c_2}}{\Subtype{\Delta}{A}{B_1 \& B_2}{\pairC{c_1}{c_2}}} \and
\inferrule*[right=S-andL]{ }{\Subtype{\Delta}{A_1\& A_2}{A_1}{\projlC{\toTarget{A_1}}{\toTarget{A_2}}}} \and
\inferrule*[right=S-andR]{ }{\Subtype{\Delta}{A_1\& A_2}{A_2}{\projrC{\toTarget{A_1}}{\toTarget{A_2}}}} \and
\inferrule*[right=S-arr]{\Subtype{\Delta}{B_1}{A_1}{c_1}\\ \Subtype{\Delta}{A_2}{B_2}{c_2}}{\Subtype{\Delta}{A_1\to A_2}{B_1\to B_2}{\arrC{c_1}{c_2}}} \and
\inferrule*[right=S-rcd]{\Subtype{\Delta}{A}{B}{c}}{\Subtype{\Delta}{\trecord{l}{A}}{\trecord{l}{B}}{\trecord{l}{c}}} \and
\inferrule*[right=S-mp]{\Subtype{\Delta}{A}{B_1\to B_2}{c_1} \\ \Subtype{\Delta}{A}{B_1}{c_2}}{\Subtype{\Delta}{A}{B_2}{\mpC{c_1}{c_2}}} \and
\inferrule*[right=S-distArr]{ }{\Subtype{\Delta}{(A\to B_1)\&(A\to B_2)}{A\to{B_1\& B_2}}{\distArrC{\toTarget{A}}{\toTarget{B_1}}{\toTarget{B_2}}}} \and
\inferrule*[right=S-distRcd]{ }{\Subtype{\Delta}{\trecord{l}{A}\&\trecord{l}{B}}{\trecord{l}{A\& B}}{\distRecC{l}{\toTarget{A}}{\toTarget{B}}}} \and
\inferrule*[right=S-allL]{\disjoint{\Delta}{\tau}{A} \\ \Subtype{\Delta}{[\alpha\mapsto\tau]B}{B'}{c}}{\Subtype{\Delta}{\Tabs{\alpha}{A}{B}}{B'}{\alllC{c}{\tau}}} \and
\inferrule*[right=S-allR]{\Subtype{\Delta,\tentry{\alpha}{B_1}}{A}{B_2}{c}}{\Subtype{\Delta}{A}{\Tabs{\alpha}{B_1}{B_2}}{\allrC{\alpha}{c}}}
\end{mathpar}\hfill
}\\
\subsubsection{Notes}
\mypar{Predicative instantiation}
We use predicative type instantiation (rule \textsc{S-allL}) to avoid the following kind of non-termination, where $\mathcal{A} := \Tabs{\alpha}{A}{\alpha\to\alpha}$.
In the premise of rule \textsc{S-allL} below, we use substitution $[\alpha\mapsto\mathcal{A}]$.
\begin{mathpar}
\inferrule*[right=S-MP]
{
\inferrule*[right=S-allL]
{\disjoint{\Delta}{\mathcal{A}}{A}
\\
\inferrule*
{\vdots}
{\Subtype{\Delta}{\mathcal{A}\to\mathcal{A}}{\mathcal{A}\to\mathcal{A}}{?}}
}
{\Subtype{\Delta}{\mathcal{A}}{\mathcal{A}\to\mathcal{A}}{\alllC{?}{\toTarget{\mathcal{A}}}}}
\\
\inferrule*[right=S-refl]
{ }
{\Subtype{\Delta}{\mathcal{A}}{\mathcal{A}}{\idC{\toTarget{\mathcal{A}}}}}
}
{\Subtype{\Delta}{\mathcal{A}}{\mathcal{A}}{\mpC{(\alllC{?}{\toTarget{\mathcal{A}}})}{\idC{\toTarget{\mathcal{A}}}}}}
\end{mathpar}
\mypar{Subsumption of \textsc{S-DistAll} and \textsc{S-topAll} from $F_{i}^{+}$}
Rule \textsc{S-DistAll} of $F_{i}^{+}$ is subsumed by our rules:
\begin{mathpar}
\inferrule*
{
\inferrule*
{
\inferrule*
{
\inferrule*
      {\disjoint{\bullet,\tentry{\alpha}{A}}{\alpha}{A}
\\
      \inferrule*{ }{\Subtype{\bullet,\tentry{\alpha}{A}}{[\alpha\mapsto\alpha]B_1}{B_1}{\idC{\toTarget{B_1}}}}
}
      {\Subtype{\bullet,\tentry{\alpha}{A}}{\Tabs{\alpha}{A}{B_1}}{B_1}{\alllC{\idC{\toTarget{B_1}}}{\alpha}}}
\\
\vdots
}
    {\Subtype{\bullet,\tentry{\alpha}{A}}{(\Tabs{\alpha}{A}{B_1})\&(\Tabs{\alpha}{A}{B_2})}{B_1}{\compC{(\alllC{\idC{\toTarget{B_1}}}{\alpha})}{\projlC{\Tabst{\alpha}{\toTarget{B_1}}}{\Tabst{\alpha}{\toTarget{B_2}}}}}}
\\
\vdots
%% \inferrule*
%% { }
%% {\Subtype{\bullet,\tentry{a}{A}}{(\Tabs{\alpha}{A}{B_1})\&(\Tabs{\alpha}{A}{B_2})}{B_2}{ }}
}
  {\Subtype{\bullet,\tentry{\alpha}{A}}{(\Tabs{\alpha}{A}{B_1})\&(\Tabs{\alpha}{A}{B_2})}{B_1\&B_2}{\pairC{\compC{(\alllC{\idC{\toTarget{B_1}}}{\alpha})}{\projlC{\Tabst{\alpha}{\toTarget{B_1}}}{\Tabst{\alpha}{\toTarget{B_2}}}}}{\compC{(\alllC{\idC{\toTarget{B_2}}}{\alpha})}{\projrC{\Tabst{\alpha}{\toTarget{B_1}}}{\Tabst{\alpha}{\toTarget{B_2}}}}}}}
}
{\Subtype{\bullet}{(\Tabs{\alpha}{A}{B_1})\&(\Tabs{\alpha}{A}{B_2})}{\Tabs{\alpha}{A}{B_1\&B_2}}{\allrC{\alpha}{\pairC{\compC{(\alllC{\idC{\toTarget{B_1}}}{\alpha})}{\projlC{\Tabst{\alpha}{\toTarget{B_1}}}{\Tabst{\alpha}{\toTarget{B_2}}}}}{\compC{(\alllC{\idC{\toTarget{B_2}}}{\alpha})}{\projrC{\Tabst{\alpha}{\toTarget{B_1}}}{\Tabst{\alpha}{\toTarget{B_2}}}}}}}}
\end{mathpar}
\clearpage
\section{Algorithmic Specification}
\begin{table}[h]
\begin{tabular}{l r r l}
Types & $A, B$ & $::=$ & $\tau~\mid~A \to B~\mid~A \& B~\mid~\textcolor{magenta}{\Tabs{\alpha}{A}{B}}~\mid~\trecord{l}{A}$\vspace{0.1cm}\\
Monotypes & $\tau$ & $::=$ & $\xi~\mid~\tau_1\to\tau_2~\mid~\tau_1\&\tau_2~\mid~\trecord{l}{\tau}$ \vspace{0.1cm}\\
Base types & $\xi$ & $::=$ & $\nat~\mid~\bool~\mid~\top~\mid~\textcolor{magenta}{\alpha}~\mid~\textcolor{magenta}{\hat{\alpha}}$ \vspace{0.3cm}\\
Expressions & $E$ & $::=$ & $i~\mid~\True~\mid~\False~\mid~()~\mid~x~\mid~\abs{x}{E}~\mid~E_1\,E_2~\mid~E_1,,E_2~\mid$\vspace{0.1cm}\\
& & & $E : A~\mid~\textcolor{magenta}{\tabs{\alpha}{A}{E}}~\mid~\textcolor{magenta}{E\,A}~\mid\record{l}{E}~\mid~E.l$\vspace{0.3cm}\\
Type context & $\Delta$ & $::=$ & $\bullet~\mid~\Delta,\tentry{\alpha}{A}~\mid~\Delta,\tentry{\hat{\alpha}}{A}$\vspace{0.3cm}\\
Queue & $\mathcal{L},\mathcal{M}$ & $::=$ & $\bullet~\mid~\mathcal{L},A~\mid~\mathcal{L},l$\vspace{0.3cm}\\
Coercion context & $\mathcal{C}$ & $::=$ & $\bullet~\mid~\arrCC{\mathcal{C}}{c}~\mid~\projlCC{\mathcal{C}}{A}{B}~\mid~\projrCC{\mathcal{C}}{A}{B}~\mid~\mpCC{\mathcal{C}}{\mathcal{M}}{c_1}{A}{B}~\mid~\alllCC{\mathcal{C}}{\tau}~\mid~\recCC{\mathcal{C}}{l}$\vspace{0.3cm}\\
Type variable substitutions & $\theta$ & $::=$ & $\bullet~\mid~\unification{\alpha}{A},\theta~\mid~\substitution{\alpha}{A},\theta$\vspace{0.3cm}
\end{tabular}
\caption{The language of the algorithm}
\end{table}
\subsubsection{Coercion context completion}
\begin{minipage}[t]{0.7\textwidth}
\begin{align*}
\bullet [c] &= c\\
(\arrCC{\mathcal{C}}{c'})[c] &= \mathcal{C}[c'\to c]\\
\projlCC{\mathcal{C}}{A}{B}[c] &= \mathcal{C}[\compC{c}{\projlC{\toTarget{A}}{\toTarget{B}}}]\\
\projrCC{\mathcal{C}}{A}{B}[c] &= \mathcal{C}[\compC{c}{\projrC{\toTarget{A}}{\toTarget{B}}}]\\
\mpCC{\mathcal{C}}{\mathcal{M}}{c_1}{A}{B}[c] &= \compC{\distarrowqueueC{\mathcal{M}}{\left(\compC{c}{(\mpC{\projlC{\toTarget{A\to B}}{\toTarget{A}}}{\projrC{\toTarget{A\to B}}{\toTarget{A}}})}\right)}{\toTarget{A\to B}}{\toTarget{A}}}{\pairC{\mathcal{C}[\idC{\toTarget{A\to B}}]}{c_1}}\\
\alllCC{\mathcal{C}}{\tau}[c] &= \mathcal{C}[\alllC{c}{\toTarget{\tau}}]\\
\recCC{\mathcal{C}}{l}[c] &= \mathcal{C}[\trecord{l}{c}]
\end{align*}
\end{minipage}
\subsection{Substitutions}
\subsubsection{Applying unification variable substitution...}
\begin{minipage}[t]{0.5\textwidth}
\mypar{{...on types}}
\begin{align*}
[\unification{\alpha}{\tau}]\nat &= \nat\\
[\unification{\alpha}{\tau}]\top &= \top\\
[\unification{\alpha}{\tau}]\alpha &= \alpha\\
[\unification{\alpha}{\tau}]\hat\alpha &= \tau\\
[\unification{\alpha}{\tau}]\hat\beta &= \hat\beta\\
[\unification{\alpha}{\tau}](A\to B) &= ([\unification{\alpha}{\tau}]A)\to([\unification{\alpha}{\tau}]B)\\
[\unification{\alpha}{\tau}](A\& B) &= ([\unification{\alpha}{\tau}]A)\&([\unification{\alpha}{\tau}]B)\\
[\unification{\alpha}{\tau}](\Tabs{\alpha}{A}{B}) &= \Tabs{\alpha}{[\unification{\alpha}{\tau}]A}{[\unification{\alpha}{\tau}]B}\\
[\unification{\alpha}{\tau}]\trecord{l}{A} &= \trecord{l}{[\unification{\alpha}{\tau}]A}
\end{align*}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\mypar{{...on terms}}
\begin{align*}
[\unification{\alpha}{\tau}]i &= i\\
[\unification{\alpha}{\tau}]() &= ()\\
[\unification{\alpha}{\tau}]x &= x\\
[\unification{\alpha}{\tau}]\abs{x}{E} &= \abs{x}{[\unification{\alpha}{\tau}]E}\\
[\unification{\alpha}{\tau}](E_1\,E_2) &= ([\unification{\alpha}{\tau}]E_1)\,([\unification{\alpha}{\tau}]E_2)\\
[\unification{\alpha}{\tau}](E : A) &= [\unification{\alpha}{\tau}]E : [\unification{\alpha}{\tau}]A\\
[\unification{\alpha}{\tau}]\tabs{\alpha}{A}{E} &= \tabs{\alpha}{[\unification{\alpha}{\tau}]A}{[\unification{\alpha}{\tau}]E}\\
[\unification{\alpha}{\tau}](E\,A) &= ([\unification{\alpha}{\tau}]E)\, ([\unification{\alpha}{\tau}]A)\\
[\unification{\alpha}{\tau}]\record{l}{E} &= \record{l}{[\unification{\alpha}{\tau}]E}\\
[\unification{\alpha}{\tau}](E.l) &= ([\unification{\alpha}{\tau}]E).l
\end{align*}
\end{minipage}\\
\noindent
\begin{minipage}[t]{0.47\textwidth}
\mypar{{...on type contexts}}
\begin{align*}
[\unification{\alpha}{\tau}]\bullet &= \bullet\\
[\unification{\alpha}{\tau}](\Delta,\tentry{\alpha}{A}) &= [\unification{\alpha}{\tau}]\Delta,\tentry{\alpha}{[\unification{\alpha}{\tau}]A}\\
[\unification{\alpha}{\tau}](\Delta,\tentry{\hat\alpha}{A}) &= [\unification{\alpha}{\tau}]\Delta\\
[\unification{\alpha}{\tau}](\Delta,\tentry{\hat\beta}{A}) &= [\unification{\alpha}{\tau}]\Delta,\tentry{\hat\beta}{[\unification{\alpha}{\tau}]A}
\end{align*}
\end{minipage}
\begin{minipage}[t]{0.47\textwidth}
\mypar{{...on queues}}
\begin{align*}
[\unification{\alpha}{\tau}]\bullet &= \bullet\\
[\unification{\alpha}{\tau}](\mathcal{M},A) &= [\unification{\alpha}{\tau}]\mathcal{M},([\unification{\alpha}{\tau}]A)\\
[\unification{\alpha}{\tau}](\mathcal{M},l) &= [\unification{\alpha}{\tau}]\mathcal{M},l
\end{align*}
\end{minipage}\\
\noindent
\begin{minipage}[t]{0.49\textwidth}
\mypar{...on coercions}
\begin{align*}
[\unification{\alpha}{\rho}]\idC{\rho'} &= \idC{[\unification{\alpha}{\rho}]\rho'}\\
[\unification{\alpha}{\rho}]\topC{\rho'} &= \topC{[\unification{\alpha}{\rho}]\rho'}\\
[\unification{\alpha}{\rho}]\topArrC &= \topArrC\\
[\unification{\alpha}{\rho}]\topAllC &= \topAllC\\
[\unification{\alpha}{\rho}]\distArrC{\rho_1}{\rho_2}{\rho_3} &= \distArrC{[\unification{\alpha}{\rho}]\rho_1}{[\unification{\alpha}{\rho}]\rho_2}{[\unification{\alpha}{\rho}]\rho_3}\\
[\unification{\alpha}{\rho}]\distRecC{l}{\rho_1}{\rho_2} &= \distRecC{l}{[\unification{\alpha}{\rho}]\rho_1}{[\unification{\alpha}{\rho}]\rho_2}\\
[\unification{\alpha}{\rho}]\projlC{\rho_1}{\rho_2} &= \projlC{[\unification{\alpha}{\rho}]\rho_1}{[\unification{\alpha}{\rho}]\rho_2}\\
[\unification{\alpha}{\rho}]\projrC{\rho_1}{\rho_2} &= \projrC{[\unification{\alpha}{\rho}]\rho_1}{[\unification{\alpha}{\rho}]\rho_2}\\
[\unification{\alpha}{\rho}](\mpC{c_1}{c_2}) &= \mpC{([\unification{\alpha}{\rho}]c_1)}{([\unification{\alpha}{\rho}]c_2)}\\
[\unification{\alpha}{\rho}](\compC{c_1}{c_2}) &= \compC{([\unification{\alpha}{\rho}]c_1)}{([\unification{\alpha}{\rho}]c_2)}\\
[\unification{\alpha}{\rho}]\pairC{c_1}{c_2} &= \pairC{[\unification{\alpha}{\rho}]c_1}{[\unification{\alpha}{\rho}]c_2}\\
[\unification{\alpha}{\rho}](\arrC{c_1}{c_2}) &= \arrC{([\unification{\alpha}{\rho}]c_1)}{([\unification{\alpha}{\rho}]c_2)}\\
[\unification{\alpha}{\rho}](\alllC{c}{\rho'}) &= \alllC{([\unification{\alpha}{\rho}]c)}{([\unification{\alpha}{\rho}]\rho')}\\
[\unification{\alpha}{\rho}](\allrC{\alpha}{c}) &= \allrC{\alpha}{[\unification{\alpha}{\rho}]c}\\
[\unification{\alpha}{\rho}]\trecord{l}{c} &= \trecord{l}{[\unification{\alpha}{\rho}]c}
\end{align*}
\end{minipage}
\begin{minipage}[t]{0.49\textwidth}
\mypar{...on coercion contexts}
\begin{align*}
[\unification{\alpha}{\rho}]\bullet &= \bullet\\
[\unification{\alpha}{\rho}](\arrCC{\mathcal{C}}{c'}) &= \arrCC{([\unification{\alpha}{\rho}]\mathcal{C})}{[\unification{\alpha}{\rho}]c'}\\
[\unification{\alpha}{\rho}](\projlCC{\mathcal{C}}{A}{B}) &= \projlCC{([\unification{\alpha}{\rho}]\mathcal{C})}{[\unification{\alpha}{\rho}]A}{[\unification{\alpha}{\rho}]B}\\
[\unification{\alpha}{\rho}](\projrCC{\mathcal{C}}{A}{B}) &= \projrCC{([\unification{\alpha}{\rho}]\mathcal{C})}{[\unification{\alpha}{\rho}]A}{[\unification{\alpha}{\rho}]B}\\
[\unification{\alpha}{\rho}](\mpCC{\mathcal{C}}{\mathcal{M}}{c_1}{A}{B}) &= \mpCC{([\unification{\alpha}{\rho}]\mathcal{C})}{([\unification{\alpha}{\rho}]\mathcal{M})}{[\unification{\alpha}{\rho}]c_1}{[\unification{\alpha}{\rho}]A}{[\unification{\alpha}{\rho}]B}\\
[\unification{\alpha}{\rho}](\alllCC{\mathcal{C}}{\rho'}) &= \alllCC{([\unification{\alpha}{\rho}]\mathcal{C})}{[\unification{\alpha}{\rho}]\rho'}\\
[\unification{\alpha}{\rho}](\recCC{\mathcal{C}}{l}) &= \recCC{([\unification{\alpha}{\rho}]\mathcal{C})}{l}
\end{align*}
\end{minipage}
\subsubsection{Applying type variable substitution...}
\begin{minipage}[t]{0.5\textwidth}
\mypar{{...on types}}
\begin{align*}
[{\alpha}\mapsto{\tau}]\nat &= \nat\\
[{\alpha}\mapsto{\tau}]\top &= \top\\
[{\alpha}\mapsto{\tau}]\alpha &= \tau\\
[{\alpha}\mapsto{\tau}]\beta &= \beta\\
[{\alpha}\mapsto{\tau}]\hat\alpha &= \hat\alpha\\
[{\alpha}\mapsto{\tau}](A\to B) &= ([\substitution{\alpha}{\tau}]A)\to([\substitution{\alpha}{\tau}]B)\\
[{\alpha}\mapsto{\tau}](A\& B) &= ([\substitution{\alpha}{\tau}]A)\&([\substitution{\alpha}{\tau}]B)\\
[{\alpha}\mapsto{\tau}](\Tabs{\alpha}{A}{B}) &= \Tabs{\alpha}{[\substitution{\alpha}{\tau}]A}{[\substitution{\alpha}{\tau}]B}\\
[{\alpha}\mapsto{\tau}]\trecord{l}{A} &= \trecord{l}{[\substitution{\alpha}{\tau}]A}
\end{align*}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\mypar{{...on terms}}
\begin{align*}
[\substitution{\alpha}{\tau}]i &= i\\
[\substitution{\alpha}{\tau}]() &= ()\\
[\substitution{\alpha}{\tau}]x &= x\\
[\substitution{\alpha}{\tau}]\abs{x}{E} &= \abs{x}{[\substitution{\alpha}{\tau}]E}\\
[\substitution{\alpha}{\tau}](E_1\,E_2) &= ([\substitution{\alpha}{\tau}]E_1)\,([\substitution{\alpha}{\tau}]E_2)\\
[\substitution{\alpha}{\tau}](E : A) &= [\substitution{\alpha}{\tau}]E : [\substitution{\alpha}{\tau}]A\\
[\substitution{\alpha}{\tau}]\tabs{\beta}{A}{E} &= \tabs{\beta}{[\substitution{\alpha}{\tau}]A}{[\substitution{\alpha}{\tau}]E}\\
[\substitution{\alpha}{\tau}](E\,A) &= ([\substitution{\alpha}{\tau}]E)\, ([\substitution{\alpha}{\tau}]A)\\
[\substitution{\alpha}{\tau}]\record{l}{E} &= \record{l}{[\substitution{\alpha}{\tau}]E}\\
[\substitution{\alpha}{\tau}](E.l) &= ([\substitution{\alpha}{\tau}]E).l
\end{align*}
\end{minipage}\\
\noindent
\begin{minipage}[t]{0.47\textwidth}
\mypar{{...on type contexts}}
\begin{align*}
[\substitution{\alpha}{\tau}]\bullet &= \bullet\\
[\substitution{\alpha}{\tau}](\Delta,\tentry{\alpha}{A}) &= [\substitution{\alpha}{\tau}]\Delta\\
[\substitution{\alpha}{\tau}](\Delta,\tentry{\beta}{A}) &= [\substitution{\alpha}{\tau}]\Delta,\tentry{\beta}{[\substitution{\alpha}{\tau}]A}\\
[\substitution{\alpha}{\tau}](\Delta,\tentry{\hat\alpha}{A}) &= [\substitution{\alpha}{\tau}]\Delta,\tentry{\hat\alpha}{[\substitution{\alpha}{\tau}]A}
\end{align*}
\end{minipage}
\begin{minipage}[t]{0.47\textwidth}
\mypar{{...on queues}}
\begin{align*}
[\substitution{\alpha}{\tau}]\bullet &= \bullet\\
[\substitution{\alpha}{\tau}](\mathcal{M},A) &= [\substitution{\alpha}{\tau}]\mathcal{M},([\substitution{\alpha}{\tau}]A)\\
[\substitution{\alpha}{\tau}](\mathcal{M},l) &= [\substitution{\alpha}{\tau}]\mathcal{M},l
\end{align*}
\end{minipage}\\%\vspace{3pt}
\noindent
\begin{minipage}[t]{0.49\textwidth}
\mypar{...on coercions}
\begin{align*}
[\substitution{\alpha}{\rho}]\idC{\rho'} &= \idC{[\substitution{\alpha}{\rho}]\rho'}\\
[\substitution{\alpha}{\rho}]\topC{\rho'} &= \topC{[\substitution{\alpha}{\rho}]\rho'}\\
[\substitution{\alpha}{\rho}]\topArrC &= \topArrC\\
[\substitution{\alpha}{\rho}]\topAllC &= \topAllC\\
[\substitution{\alpha}{\rho}]\distArrC{\rho_1}{\rho_2}{\rho_3} &= \distArrC{[\substitution{\alpha}{\rho}]\rho_1}{[\substitution{\alpha}{\rho}]\rho_2}{[\substitution{\alpha}{\rho}]\rho_3}\\
[\substitution{\alpha}{\rho}]\distRecC{l}{\rho_1}{\rho_2} &= \distRecC{l}{[\substitution{\alpha}{\rho}]\rho_1}{[\substitution{\alpha}{\rho}]\rho_2}\\
[\substitution{\alpha}{\rho}]\projlC{\rho_1}{\rho_2} &= \projlC{[\substitution{\alpha}{\rho}]\rho_1}{[\substitution{\alpha}{\rho}]\rho_2}\\
[\substitution{\alpha}{\rho}]\projrC{\rho_1}{\rho_2} &= \projrC{[\substitution{\alpha}{\rho}]\rho_1}{[\substitution{\alpha}{\rho}]\rho_2}\\
[\substitution{\alpha}{\rho}](\mpC{c_1}{c_2}) &= \mpC{([\substitution{\alpha}{\rho}]c_1)}{([\substitution{\alpha}{\rho}]c_2)}\\
[\substitution{\alpha}{\rho}](\compC{c_1}{c_2}) &= \compC{([\substitution{\alpha}{\rho}]c_1)}{([\substitution{\alpha}{\rho}]c_2)}\\
[\substitution{\alpha}{\rho}]\pairC{c_1}{c_2} &= \pairC{[\substitution{\alpha}{\rho}]c_1}{[\substitution{\alpha}{\rho}]c_2}\\
[\substitution{\alpha}{\rho}](\arrC{c_1}{c_2}) &= \arrC{([\substitution{\alpha}{\rho}]c_1)}{([\substitution{\alpha}{\rho}]c_2)}\\
[\substitution{\alpha}{\rho}](\alllC{c}{\rho'}) &= \alllC{([\substitution{\alpha}{\rho}]c)}{([\substitution{\alpha}{\rho}]\rho')}\\
[\substitution{\alpha}{\rho}](\allrC{\alpha}{c}) &= \allrC{\alpha}{[\substitution{\alpha}{\rho}]c}\\
[\substitution{\alpha}{\rho}]\trecord{l}{c} &= \trecord{l}{[\substitution{\alpha}{\rho}]c}
\end{align*}
\end{minipage}
\begin{minipage}[t]{0.49\textwidth}
\mypar{{...on coercion contexts}}
\begin{align*}
[\substitution{\alpha}{\rho}]\bullet &= \bullet\\
[\substitution{\alpha}{\rho}](\arrCC{\mathcal{C}}{c'}) &= \arrCC{([\substitution{\alpha}{\rho}]\mathcal{C})}{[\substitution{\alpha}{\rho}]c'}\\
[\substitution{\alpha}{\rho}](\projlCC{\mathcal{C}}{A}{B}) &= \projlCC{([\substitution{\alpha}{\rho}]\mathcal{C})}{[\substitution{\alpha}{\rho}]A}{[\substitution{\alpha}{\rho}]B}\\
[\substitution{\alpha}{\rho}](\projrCC{\mathcal{C}}{A}{B}) &= \projrCC{([\substitution{\alpha}{\rho}]\mathcal{C})}{[\substitution{\alpha}{\rho}]A}{[\substitution{\alpha}{\rho}]B}\\
[\substitution{\alpha}{\rho}](\mpCC{\mathcal{C}}{\mathcal{M}}{c_1}{A}{B}) &= \mpCC{([\substitution{\alpha}{\rho}]\mathcal{C})}{([\substitution{\alpha}{\rho}]\mathcal{M})}{[\substitution{\alpha}{\rho}]c_1}{[\substitution{\alpha}{\rho}]A}{[\substitution{\alpha}{\rho}]B}\\
[\substitution{\alpha}{\rho}](\alllCC{\mathcal{C}}{\rho'}) &= \alllCC{([\substitution{\alpha}{\rho}]\mathcal{C})}{[\substitution{\alpha}{\rho}]\rho'}\\
[\substitution{\alpha}{\rho}](\recCC{\mathcal{C}}{l}) &= \recCC{([\substitution{\alpha}{\rho}]\mathcal{C})}{l}
\end{align*}
\end{minipage}
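
For instance, the two substitutions act differently on the type $\hat\alpha\to\alpha$: the unification variable substitution replaces only the unification variable, so $[\unification{\alpha}{\nat}](\hat\alpha\to\alpha) = \nat\to\alpha$, whereas the type variable substitution replaces only the type variable, so $[\substitution{\alpha}{\nat}](\hat\alpha\to\alpha) = \hat\alpha\to\nat$.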
\subsection{Judgments}
\mypar{Well-formed substitution (algorithmic)}
\fbox{
\begin{mathpar}
\inferrule*[right=WFS-nil]{ }{\wfSubst{\Delta}{\bullet}{\bullet}} \and
\inferrule*[]{\wfSubst{\Delta}{\theta,\unification{\alpha}{A}}{\theta'}}{\wfSubst{\Delta,\tentry{\alpha}{B}}{\theta,\unification{\alpha}{A}}{\theta'}} \and
\inferrule*{\wfSubst{\Delta}{\theta,\unification{\beta}{A}}{\theta'}}{\wfSubst{\Delta,\tentry{\hat{\alpha}}{B}}{\theta,\unification{\beta}{A}}{\theta'}} \and
\inferrule*[right=WFS-next]{\wfSubst{\Delta}{\theta}{\theta_1} \\ \theta_1\circ\theta(\algdisjoint{\Delta}{A}{B)}{\theta_2}}{\wfSubst{\Delta,\tentry{\hat{\alpha}}{B}}{\theta,\unification{\alpha}{A}}{\theta_2\circ\theta_1}}
\end{mathpar}
}
\mypar{Unification algorithm}
\fbox{
\begin{mathpar}
\inferrule*[right=U-refl]{ }{\unifyB{\Delta}{\xi}{\xi}{\bullet}} \\
\inferrule*[right=U-VVL]{\wfSubst{\Delta}{[\unification{\alpha}{\hat\beta}]}{\theta}}{\unifyB{\Delta}{\hat\alpha}{\hat\beta}{\theta,{\hat\alpha}\mapsto{\hat\beta}}} \and
\inferrule*[right=U-VVR]{\wfSubst{\Delta}{[\unification{\beta}{\hat\alpha}]}{\theta}}{\unifyB{\Delta}{\hat\alpha}{\hat\beta}{\theta,{\hat\beta}\mapsto{\hat\alpha}}} \and
\inferrule*[right=U-NatV]{\wfSubst{\Delta}{[\unification{\alpha}{\nat}]}{\theta}}{\unifyB{\Delta}{\nat}{\hat{\alpha}}{\theta,\unification{\alpha}{\nat}}} \and
\inferrule*[right=U-VNat]{\wfSubst{\Delta}{[\unification{\alpha}{\nat}]}{\theta}}{\unifyB{\Delta}{\hat{\alpha}}{\nat}{\theta,\unification{\alpha}{\nat}}} \and
\inferrule*[right=U-CV]{\wfSubst{\Delta}{[\unification{\alpha}{\alpha}]}{\theta}}{\unifyB{\Delta}{\alpha}{\hat{\alpha}}{\theta,\hat{\alpha}\mapsto\alpha}}\and
\inferrule*[right=U-VC]{\wfSubst{\Delta}{[\unification{\alpha}{\alpha}]}{\theta}}{\unifyB{\Delta}{\hat{\alpha}}{\alpha}{\theta,\hat{\alpha}\mapsto\alpha}}
\end{mathpar}
}\\
\fbox{
\begin{mathpar}
\inferrule{\unifyB{\Delta}{\xi_1}{\xi_2}{\theta}}{\unifyM{\Delta}{\xi_1}{\xi_2}{\theta}}
\end{mathpar}
}
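
For instance, with $\Delta = \tentry{\hat\alpha}{\top}$ we can derive $\unifyB{\Delta}{\nat}{\hat\alpha}{\bullet,\unification{\alpha}{\nat}}$ by rule \textsc{U-NatV}: its premise $\wfSubst{\Delta}{[\unification{\alpha}{\nat}]}{\bullet}$ follows from \textsc{WFS-next}, whose own premises hold by \textsc{WFS-nil} and by \textsc{AD-TopR} for the disjointness check $\algdisjoint{\bullet}{\nat}{\top}{\bullet}$.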
\mypar{Algorithmic disjointness}
The formula $\notarrow{A}$ used in the rules below means that $A$ is not a function type.\\
\fbox{
\begin{mathpar}
\inferrule*[right=AD-TopL]{ }{\algdisjoint{\Delta}{\top}{A}{\bullet}} \and
\inferrule*[right=AD-TopR]{ }{\algdisjoint{\Delta}{A}{\top}{\bullet}} \\
\inferrule*[right=AD-VarL]{\tentry{\alpha}{A}\in\Delta \\ \algSubRight{\Delta}{\bullet}{A}{B}{c}{\theta}}{\algdisjoint{\Delta}{\alpha}{B}{\theta}} \and
\inferrule*[right=AD-VarR]{\tentry{\beta}{B}\in{\Delta} \\ \algSubRight{\Delta}{\bullet}{B}{A}{c}{\theta}}{\algdisjoint{\Delta}{A}{\beta}{\theta}} \and
\inferrule*[right=AD-UVarL]{\tentry{\hat\alpha}{A}\in\Delta \\ \algSubRight{\Delta}{\bullet}{A}{B}{c}{\theta}}{\algdisjoint{\Delta}{\hat\alpha}{B}{\theta}} \and
\inferrule*[right=AD-UVarR]{\tentry{\hat\beta}{B}\in{\Delta} \\ \algSubRight{\Delta}{\bullet}{B}{A}{c}{\theta}}{\algdisjoint{\Delta}{A}{\hat\beta}{\theta}} \and
\inferrule*[right=AD-Rcd]{\algdisjoint{\Delta}{A}{B}{\theta}}{\algdisjoint{\Delta}{\trecord{l}{A}}{\trecord{l}{B}}{\theta}} \and
\inferrule*[right=AD-Nrcd]{l_1 \neq l_2}{\algdisjoint{\Delta}{\trecord{l_1}{A}}{\trecord{l_2}{B}}{\bullet}} \and
\inferrule*[right=AD-Arr]{\algdisjoint{\Delta}{A_2}{B_2}{\theta}}{\algdisjoint{\Delta}{A_1\to A_2}{B_1\to B_2}{\theta}} \and
\inferrule*[right=AD-ArrL]{\algdisjoint{\Delta}{A_2}{B}{\theta} \\ \notarrow{B}}{\algdisjoint{\Delta}{A_1\to A_2}{B}{\theta}} \and
\inferrule*[right=AD-ArrR]{\algdisjoint{\Delta}{A}{B_2}{\theta} \\ \notarrow{A}}{\algdisjoint{\Delta}{A}{B_1\to B_2}{\theta}} \and
\inferrule*[right=AD-AndL]{\algdisjoint{\Delta}{A_1}{B}{\theta_1} \\ \theta_1(\algdisjoint{\Delta}{A_2}{B)}{\theta_2} \\ \notarrow{B}}{\algdisjoint{\Delta}{A_1\& A_2}{B}{\theta_2\circ\theta_1}} \and
\inferrule*[right=AD-AndR]{\algdisjoint{\Delta}{A}{B_1}{\theta_1} \\ \theta_1(\algdisjoint{\Delta}{A}{B_2)}{\theta_2} \\ \notarrow{A}}{\algdisjoint{\Delta}{A}{B_1\& B_2}{\theta_2\circ\theta_1}} \and
\inferrule*[right=AD-All]{\algdisjoint{\Delta,\tentry{\hat\alpha}{A_1\& B_1}}{[\alpha\mapsto\hat\alpha]B_1}{[\alpha\mapsto\hat\alpha]B_2}{\theta}}{\algdisjoint{\Delta}{\Tabs{\alpha}{A_1}{A_2}}{\Tabs{\alpha}{B_1}{B_2}}{\theta}}\and
\inferrule*[right=AD-AllL]{\algdisjoint{\Delta,\tentry{\hat\alpha}{A}}{[\alpha\mapsto\hat\alpha]B_1}{B_2}{\theta}}{\algdisjoint{\Delta}{\Tabs{\alpha}{A}{B_1}}{B_2}{\theta}}\and
\inferrule*[right=AD-AllR]{\algdisjoint{\Delta,\tentry{\hat\alpha}{A}}{B_1}{[\alpha\mapsto\hat\alpha]B_2}{\theta}}{\algdisjoint{\Delta}{B_1}{\Tabs{\alpha}{A}{B_2}}{\theta}}\and
\inferrule*[right=AD-AX]{\starax{A}{B}}{\algdisjoint{\Delta}{A}{B}{\bullet}}
\end{mathpar}
}\\
\fbox{
\begin{mathpar}
\inferrule*[right=AX-NatBool]{ }{\starax{\nat}{\bool}}\and
\inferrule*[right=AX-BoolNat]{ }{\starax{\bool}{\nat}}\and
\inferrule*[right=AX-RcdNat]{ }{\starax{\trecord{l}{A}}{\nat}}\and
\inferrule*[right=AX-NatRcd]{ }{\starax{\nat}{\trecord{l}{A}}}\and
\inferrule*[right=AX-RcdBool]{ }{\starax{\trecord{l}{A}}{\bool}}\and
\inferrule*[right=AX-BoolRcd]{ }{\starax{\bool}{\trecord{l}{A}}}
%% \inferrule*{}
\end{mathpar}
}
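
For instance, $\algdisjoint{\Delta}{\nat}{\bool}{\bullet}$ follows directly from \textsc{AD-AX} together with \textsc{AX-NatBool}, and $\algdisjoint{\Delta}{\trecord{l}{\nat}}{\trecord{l}{\bool}}{\bullet}$ is then obtained by \textsc{AD-Rcd}.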
\mypar{Internal disjointness}
\fbox{
\begin{mathpar}
\inferrule*[right=UD-Nat]{ }{\algUdisjoint{\Delta}{\nat}{\bullet}}\and
\inferrule*[right=UD-Bool]{ }{\algUdisjoint{\Delta}{\bool}{\bullet}}\and
\inferrule*[right=UD-Top]{ }{\algUdisjoint{\Delta}{\top}{\bullet}}\and
\inferrule*[right=UD-Var]{\tentry{\alpha}{A}\in\Delta}{\algUdisjoint{\Delta}{\alpha}{\bullet}}\and
\inferrule*[right=UD-UVar]{\tentry{\hat\alpha}{A}\in\Delta}{\algUdisjoint{\Delta}{\hat\alpha}{\bullet}}\and
\inferrule*[right=UD-Rcd]{\algUdisjoint{\Delta}{A}{\theta}}{\algUdisjoint{\Delta}{\trecord{l}{A}}{\theta}}\and
\inferrule*[right=UD-Arr]{\algUdisjoint{\Delta}{B}{\theta}}{\algUdisjoint{\Delta}{A\to B}{\theta}}\and
\inferrule*[right=UD-And]{\algdisjoint{\Delta}{A}{B}{\theta}\\\ \theta(\algUdisjoint{\Delta}{A)}{\theta_1}\\\theta(\algUdisjoint{\Delta}{B)}{\theta_2}}{\algUdisjoint{\Delta}{A\& B}{\theta_1\circ\theta_2\circ\theta}}\and
\inferrule*[right=UD-All]{\algUdisjoint{\Delta,\tentry{\alpha}{A}}{B}{\theta}}{\algUdisjoint{\Delta}{\Tabs{\alpha}{A}{B}}{\theta}}
\end{mathpar}
}
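
For instance, $\algUdisjoint{\Delta}{\nat\&\bool}{\bullet}$ holds by \textsc{UD-And}: the disjointness premise $\algdisjoint{\Delta}{\nat}{\bool}{\bullet}$ follows from \textsc{AD-AX} and \textsc{AX-NatBool}, and the two internal premises hold by \textsc{UD-Nat} and \textsc{UD-Bool}, each returning $\bullet$.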
\begin{figure}[h]
\caption{Algorithmic subtyping}
\begin{subfigure}{\textwidth}
\begin{mathpar}
\inferrule{\algSubRight{\bullet}{\bullet}{A}{B}{c}{\theta}}{\algSubMain{A}{B}{c}{\theta}}
\end{mathpar}
\caption{Main judgment}
\end{subfigure}
\begin{subfigure}{\textwidth}
\begin{mathpar}
\inferrule*[right=AR-top]
{ }
{\algSubRight{\Delta}{\mathcal{L}}{A}{\top}{\compC{\toparrowqueue{\mathcal{L}}}{\topC{\toTarget{A}}}}{\bullet}}
\and
\inferrule*[right=AR-rcd]
{\algSubRight{\Delta}{\mathcal{L},l}{A}{B}{c}{\theta}}
{\algSubRight{\Delta}{\mathcal{L}}{A}{\trecord{l}{B}}{c}{\theta}}
\and
\inferrule*[right=AR-and]
{\algSubRight{\Delta}{\mathcal{L}}{A}{B_1}{c_1}{\theta} \\ \algSubRight{\Delta}{\mathcal{L}}{A}{B_2}{c_2}{\theta}}
{\algSubRight{\Delta}{\mathcal{L}}{A}{B_1\& B_2}{\compC{\distarrowqueue{\mathcal{L}}{B_1}{B_2}}{\pairC{c_1}{c_2}}}{\theta}}
\and
\inferrule*[right=AR-arr]
{\algSubRight{\Delta}{\mathcal{L},B_1}{A}{B_2}{c}{\theta}}
{\algSubRight{\Delta}{\mathcal{L}}{A}{B_1\to B_2}{c}{\theta}}
\and
\inferrule*[right=AR-all]
{\algSubRight{\Delta,\tentry{\alpha}{B_1}}{\mathcal{L}}{A}{B_2}{c}{\theta}}
{\algSubRight{\Delta}{\mathcal{L}}{A}{\Tabs{\alpha}{B_1}{B_2}}{\allrC{\alpha}{c}}{\theta}}
\and
\inferrule*[right=AR-base]
{\algSubLeft{\Delta}{\mathcal{L}}{\bullet}{A}{\bullet}{A}{\xi}{\mathcal{C}}{\theta}}
{\algSubRight{\Delta}{\mathcal{L}}{A}{\xi}{\mathcal{C}[\idC{\toTarget{A}}]}{\theta}}
\end{mathpar}
\caption{Right focus}
\end{subfigure}
\begin{subfigure}{\textwidth}
\begin{mathpar}
\inferrule*[right=AL-Base]
{\unifyB{\Delta}{\xi_1}{\xi_2}{\theta}}
{\algSubLeft{\Delta}{\bullet}{\mathcal{M}}{A_0}{\mathcal{C}}{\xi_1}{\xi_2}{\mathcal{C}}{\theta}}
\and
\mprset{vskip=2pt}
\inferrule*[right=AL-VarArr]
{\theta= [\unification{\alpha}{(\hat\alpha_1\to\hat\alpha_2)}] \\ \fresh{\hat\alpha_1, \hat\alpha_2}\\ \Delta = \Delta_1,\tentry{\hat\alpha}{A},\Delta_2\\\\ %(\tentry{\hat\alpha}{A})\in\Delta\\\\
\algSubRight{\Delta_1,\tentry{\hat\alpha_1}{\top},\tentry{\hat\alpha_2}{A},\theta(\Delta_2)}{\bullet}{\theta(B_1)}{\hat\alpha_1}{c_1}{\theta_1}\\
\theta_1\circ\theta(\algSubLeft{\Delta}{\mathcal{L}}{\mathcal{M},B_1}{A_0}{\arrCC{\mathcal{C}}{c_1}}{\hat\alpha_2}{\xi)}{\mathcal{C'}}{\theta_2}
}
{\algSubLeft{\Delta}{B_1,\mathcal{L}}{\mathcal{M}}{A_0}{\mathcal{C}}{\hat\alpha}{\xi}{\mathcal{C'}}{\theta_2\circ\theta_1\circ\theta}}
\mprset{vskip=}
\and
\inferrule*[right=AL-AndL]
{\algSubLeft{\Delta}{\mathcal{L}}{\mathcal{M}}{A_0}{\projlCC{\mathcal{C}}{\toTarget{A_1}}{\toTarget{A_2}}}{A_1}{\xi}{\mathcal{C'}}{\theta}}
{\algSubLeft{\Delta}{\mathcal{L}}{\mathcal{M}}{A_0}{\mathcal{C}}{A_1\& A_2}{\xi}{\mathcal{C'}}{\theta}}
\and
\inferrule*[right=AL-AndR]
{\algSubLeft{\Delta}{\mathcal{L}}{\mathcal{M}}{A_0}{\projrCC{\mathcal{C}}{\toTarget{A_1}}{\toTarget{A_2}}}{A_2}{\xi}{\mathcal{C'}}{\theta}}
{\algSubLeft{\Delta}{\mathcal{L}}{\mathcal{M}}{A_0}{\mathcal{C}}{A_1\& A_2}{\xi}{\mathcal{C'}}{\theta}}
\and
\inferrule*[right=AL-Rcd]
{\algSubLeft{\Delta}{\mathcal{L}}{\mathcal{M},l}{A_0}{\recCC{\mathcal{C}}{l}}{A}{\xi}{\mathcal{C'}}{\theta}}
{\algSubLeft{\Delta}{l,\mathcal{L}}{\mathcal{M}}{A_0}{\mathcal{C}}{\trecord{l}{A}}{\xi}{\mathcal{C'}}{\theta}}
\and
\inferrule*[right=AL-Arr]
{\algSubRight{\Delta}{\bullet}{B_1}{A_1}{c_1}{\theta_1}\\ \theta_1(\algSubLeft{\Delta}{\mathcal{L}}{\mathcal{M},B_1}{A_0}{\arrCC{\mathcal{C}}{c_1}}{A_2}{\xi)}{\mathcal{C'}}{\theta_2}}
{\algSubLeft{\Delta}{B_1,\mathcal{L}}{\mathcal{M}}{A_0}{\mathcal{C}}{A_1\to A_2}{\xi}{\mathcal{C'}}{\theta_2\circ\theta_1}}
\and
\inferrule*[right=AL-MP]
{\algSubRight{\Delta}{\bullet}{A_0}{\arrowqueue{\mathcal{M}}{A_1}}{c_1}{\theta_1}\\
\theta_1(\algSubLeft{\Delta}{\mathcal{L}}{\mathcal{M}}{A_0}{\mpCC{\mathcal{C}}{\mathcal{M}}{c_1}{\toTarget{A_1}}{\toTarget{A_2}}}{A_2}{\xi)}{\mathcal{C'}}{\theta_2}
%% \elab{\mathcal{C'} = \abs{c}{\compC{\distarrowqueueC{\mathcal{M}}{\left(\compC{c}{(\mpC{\projlC{\toTarget{A_1\to A_2}}{\toTarget{A_1}}}{\projrC{\toTarget{A_1\to A_2}}{\toTarget{A_1}}})}\right)}{(A_1\to A_2)}{A_1}}{\pairC{\mathcal{C}[\idC{\toTarget{A_1\to A_2}}]}{c_1}}}}
}
{\algSubLeft{\Delta}{\mathcal{L}}{\mathcal{M}}{A_0}{\mathcal{C}}{A_1\to A_2}{\xi}{\mathcal{C'}}{\theta_2\circ\theta_1}}
\and
\inferrule*[right=AL-Forall]
{\algSubLeft{\Delta,\tentry{\hat{\alpha}}{A}}{\mathcal{L}}{\mathcal{M}}{A_0}{\alllCC{\mathcal{C}}{\hat\alpha}}{[\alpha\mapsto\hat{\alpha}]B}{\xi}{\mathcal{C'}}{\theta}\\
\fresh{\hat\alpha}}
{\algSubLeft{\Delta}{\mathcal{L}}{\mathcal{M}}{A_0}{\mathcal{C}}{\Tabs{\alpha}{A}{B}}{\xi}{\mathcal{C'}}{\theta}}
\end{mathpar}
\caption{Left focus}
\end{subfigure}
\end{figure}
\section{Guide}
\begin{theorem}
\newcommand*{\dom}[1]{\mathsf{dom}(#1)}
If $\wfSubst{\Delta}{\theta}{\theta'}$, then $\dom{\theta}\cap\dom{\theta'}=\emptyset$.
\end{theorem}
\begin{proof}
\newcommand*{\dom}[1]{\mathsf{dom}(#1)}
The proof proceeds by induction on the well-formed-substitution judgment. The only interesting case is the following rule:
\begin{itemize}
\item \(\inferrule{\wfSubst{\Delta}{\theta}{\theta_1} \\ \theta_1\circ\theta(\algdisjoint{\Delta}{A}{B)}{\theta_2}}{\wfSubst{\Delta,\tentry{\hat{\alpha}}{B}}{\theta,\unification{\alpha}{A}}{\theta_2\circ\theta_1}}\)
\end{itemize}
From the fact that $\hat\alpha\not\in\Delta$ and the two premises of the rule, it follows that $\hat\alpha\not\in\dom{\theta_1}$ and $\hat\alpha\not\in\dom{\theta_2}$.
From the definition of substitution, we have $\forall\beta,\,\beta\in\dom{\theta_1\circ\theta}\implies\beta\not\in\dom{(\theta_1\circ\theta)(\Delta)}$.
Therefore, it follows from the second premise that no such $\beta$ can be in $\dom{\theta_2}$, and hence $\dom{\theta}\cap\dom{\theta_2}=\emptyset$.\\
Also, from the induction hypothesis and the first premise of the rule, it follows that $\dom{\theta}\cap\dom{\theta_1}=\emptyset$.\\
Altogether, we get that $(\dom{\theta}\cup\{\hat\alpha\})\cap\dom{\theta_2\circ\theta_1}=\emptyset$.
\end{proof}
\subsection{Examples}
With
\begin{align*}
\mathcal{A} &:= \Tabs{\alpha}{\top}{\Tabs{\beta}{\alpha}{\Tabs{\gamma}{\alpha\&\beta}{\gamma\to\alpha\&\beta}}}\\
\Delta_1 &:= \tentry{\alpha}{\top},\tentry{\beta}{\alpha\&\nat}\\
\Delta_2 &:= \tentry{\hat\alpha}{\top},\tentry{\hat\beta}{\hat\alpha},\tentry{\hat\gamma}{\hat{\alpha}\&\hat\beta}
\end{align*}
we have
\[
\mprset{vskip=0ex}
\inferrule*[right=AR-all*]
{
\inferrule*[Right=AR-arr]
{
\inferrule*[Right=AR-and]
{
\inferrule*
{\vdots}
{\algSubRight{\Delta_1}{\beta}{\mathcal{A}}{\alpha}{?_1}{?}}
\\
\inferrule*[Right=AR-base]
{
\inferrule*[Right=AL-all*]
{
\inferrule*[Right=AL-arr]
{
\inferrule*[right=AR-base,leftskip=2cm]
{
\inferrule*[Right=AL-base]
{
\inferrule*
{
\inferrule*
{
\inferrule*[Right=AD-andR]
{
\inferrule*[right=AD-varL,leftskip=2cm]
{
\inferrule*[]
{ }
{\algSubRight{\Delta_1,\Delta_2}{}{}{}{}{}}
}
{\algdisjoint{\Delta_1,\Delta_2}{\beta}{\hat\alpha}{?}}
\\
\inferrule*[right=AD-varL,rightskip=3cm]
{\vdots}
{\algdisjoint{\Delta_1,\Delta_2}{\beta}{\hat{\beta}}{?}}
%% \vdots
}
{\algdisjoint{\Delta_1,\Delta_2}{\beta}{\hat{\alpha}\&\hat\beta}{?}}
}
{\wfSubst{\Delta_1,\Delta_2}{[\unification{\gamma}{\beta}]}{?}}
}
{\unifyB{\Delta_1,\Delta_2}{\beta}{\hat\gamma}{?}}
}
{\algSubLeft{\Delta_1,\Delta_2}{\bullet}{\bullet}{\beta}{\abs{c}{c}}{\beta}{\hat\gamma}{?}{?}}
}
{\algSubRight{\Delta_1,\Delta_2}{\bullet}{\beta}{\hat\gamma}{?}{?}}
\\
\inferrule*[rightskip=2cm]
{ }
{\algSubLeft{}{}{\beta}{\mathcal{A}}{}{}{\nat}{?}{?}}
}
{\algSubLeft{\Delta_1,\Delta_2}{\beta}{\bullet}{\mathcal{A}}{\abs{c}{c}}{\hat\gamma\to\hat{\alpha}\&\hat\beta}{\nat}{?}{?}}
}
{\algSubLeft{\Delta_1}{\beta}{\bullet}{\mathcal{A}}{\abs{c}{c}}{\mathcal{A}}{\nat}{?}{?}}
}
{\algSubRight{\Delta_1}{\beta}{\mathcal{A}}{\nat}{?_2}{?}}
}
{\algSubRight{\Delta_1}{\beta}{\Tabs{\alpha}{\top}{\Tabs{\beta}{\alpha}{\Tabs{\gamma}{\alpha\&\beta}{\gamma\to\alpha\&\beta}}}}{\alpha\&\nat}{\pairC{?_1}{?_2}}{?}}
}
{\algSubRight{\Delta_1}{\bullet}{\Tabs{\alpha}{\top}{\Tabs{\beta}{\alpha}{\Tabs{\gamma}{\alpha\&\beta}{\gamma\to\alpha\&\beta}}}}{\beta\to\alpha\&\nat}{?}{?}}
}
{\algSubRight{\bullet}{\bullet}{\Tabs{\alpha}{\top}{\Tabs{\beta}{\alpha}{\Tabs{\gamma}{\alpha\&\beta}{\gamma\to\alpha\&\beta}}}}{\Tabs{\alpha}{\top}{\Tabs{\beta}{\alpha\&\nat}{\beta\to\alpha\&\nat}}}{?}{?}}
\]
\end{document}
\chapter{Noun-verb predicates}\label{noun-verb}
This chapter deals with idiomatic combinations of a noun and a verb. These predicates occupy a position somewhat between word and phrase. Lexically, a \isi{noun-verb predicate} always constitutes one word, as its meaning is not directly predictable from its individual components (with varying degrees of metaphoricity and abstraction). But since the nouns enjoy considerable morphosyntactic freedom, speaking of noun incorporation here would be misleading. About 80 noun-verb predicates are attested so far, with roughly two thirds referring to experiential events.
There are two main morphologically defined patterns for noun-verb predicates. In the first pattern (Simple noun-verb predicates) the predicate consists of a noun and a verb that are juxtaposed in N-V order, such as \emph{lam phakma} \rede{open way, give turn} or \emph{tukkhuʔwa lamma} \rede{doze off} (discussed in \sectref{simple-noun-verb}).\footnote{The same pattern is also used as a strategy to incorporate \ili{Nepali} nouns into the Yakkha morphology, with a very small class of light verbs, namely \emph{cokma} \rede{make}, \emph{wama} \rede{exist, be} and \emph{tokma} \rede{get}, cf. \sectref{cop}. } The second pattern (the \isi{experiencer-as-possessor construction}) is semantically more restricted and also different morphologically. It expresses experiential events, with the \isi{experiencer} coded as possessor, as for instance \emph{hakamba keʔma} \rede{yawn} (literally \rede{someone's yawn to come up}). This pattern is discussed in \sectref{nv-comp-poss}.
\section{Simple noun-verb predicates}\label{simple-noun-verb}
Most of the \isi{simple noun-verb predicates} are relatively transparent but fixed collocations. They denote events from the semantic domains of natural phenomena, e.g., \emph{nam phemma} \rede{shine [sun]}, \emph{taŋkhyaŋ kama} \rede{thunder} (literally \rede{the sky shouts}), some culturally significant actions like \emph{kei lakma} \rede{dance the drum dance} and also verbs that refer to experiential events and bodily functions, such as \emph{whaŋma tukma} \rede{feel hot} (literally \rede{the sweat (or heat) hurts}). Experiential concepts are, however, more frequently expressed by the \isi{experiencer-as-possessor construction}.\footnote{There is no clear explanation why some verbs expressing bodily functions, like \emph{chipma chima} \rede{urinate} belong to the \isi{simple noun-verb predicates}, while most of them belong to the experiencer-as-possessor frame. Some verbs show synonymy across these two classes, e.g., the two lexemes with the meaning \rede{sweat}: \emph{whaŋma lomma} (literally \rede{(someone's) sweat comes out}, an experiencer-as-possessor predicate, with the \isi{experiencer} coded as possessor of \emph{whaŋma}) and \emph{whaŋmaŋa lupma} (literally \rede{the sweat disperses}, a simple \isi{noun-verb predicate}).}
\tabref{simple-nv-tab} provides some examples of \isi{simple noun-verb predicates}. Lexemes in square brackets were not found as independent words beyond their usage in these compounds. Some verbs, like weather verbs (e.g., \emph{nam phemma} \rede{shine [sun]} and \emph{wasik tama} \rede{rain}) and some \isi{experiential predicates} (e.g., \emph{wepma sima} \rede{be thirsty} and \emph{whaŋma tukma} \rede{feel hot}), for instance, do not allow the expression of additional arguments; their \isi{valency} is zero (under the assumption that the nouns belonging to the predicates are different from full-fledged arguments). If overt arguments are possible, they behave like the arguments of standard intransitive or standard transitive verbs. They trigger agreement on the verb, and they take \isi{nominative} or \isi{ergative} \isi{case} marking (see \sectref{frames}).
\begin{table}[htp]
\resizebox{\textwidth}{!}{
\begin{tabular}{lll}
\lsptoprule
{\sc predicate} & {\sc gloss} &{\sc literal translation}\\
\midrule
\emph{cabhak lakma} &\rede{do the paddy dance} &(paddy – dance)\\
\emph{chakma pokma} & \rede{troubled times to occur} &(hardship – strike)\\
\emph{chipma chima}& \rede{urinate} &(urine – urinate)\\
\emph{cuŋ tukma}& \rede{feel cold} &(cold – hurt)\\
\emph{himbulumma cama} &\rede{swing} &(swing – eat)\\
\emph{hiʔwa phemma} &\rede{wind blow} &(wind – be activated [weather])\\
\emph{hoŋga phaŋma} & \rede{crawl} &([{\sc stem}] – [{\sc stem}])\\
\emph{kei lakma} &\rede{do the drum dance} &(drum – dance)\\
\emph{laŋ phakma} &\rede{make steps} &(foot/leg – apply) \\
\emph{lam phakma} &\rede{open way, give turn} &(way – apply/build)\\
\emph{lambu lembiʔma} & \rede{let pass} &(way – let–give)\\
\emph{muk phakma} & \rede{help, serve} &(hand – apply)\\
\emph{nam ama} &\rede{sit around all day} &(sun – make set)\\
\emph{nam phemma} &\rede{be sunny} &(sun – be activated [weather])\\
\emph{phiʔma phima}& \rede{fart} &(fart – fart)\\
\emph{sak tukma}& \rede{be hungry} &(hunger – hurt)\\
\emph{setni keʔma}& \rede{stay awake all night} &(night – bring up)\\
\emph{sokma soma}& \rede{breathe} &(breath – breathe)\\
\emph{susuwa lapma}& \rede{whistle} &([whistle] – call)\\
\emph{tukkhuʔwa lapma} &\rede{doze off} &([{\sc stem}] – call)\\
\emph{(\ti lamma)} & & \\
\emph{taŋkhyaŋ kama} &\rede{thunder} &(sky – call)\\
\emph{uwa cama} &\rede{kiss} &(nectar/liquid – eat)\\
\emph{wa lekma} &\rede{rinse} &(water – turn)\\
\emph{wasik tama} &\rede{rain} &(rain – come)\\
\emph{wepma sima} & \rede{be thirsty} &(thirst – die)\\ %: ka wepma siangna
\emph{wepma tukma}& \rede{be thirsty} &(thirst – hurt)\\
\emph{wha pokma} &\rede{septic wounds to occur} &(septic wound – infest)\\
\emph{whaŋma tukma} &\rede{feel hot} &(heat/sweat – hurt)\\
\emph{yak yakma} &\rede{stay over night} &([{\sc stem}] – stay over night)\\
\emph{yaŋchan chiʔma} &\rede{regret} &([{\sc stem}] – get conscious)\\
\midrule
\emph{chemha=ŋa sima} &\rede{be intoxicated, be drunken} &(be killed by alcohol)\\
\emph{cuŋ=ŋa sima} &\rede{freeze} &(die of cold)\\
\emph{sak=ŋa sima } &\rede{be hungry} &(die of hunger) \\ %: ka saknga siangna
\emph{whaŋma=ŋa lupma} &\rede{sweat} &(heat/sweat – disperse/strew)\\
\lspbottomrule
\end{tabular}
}
\caption{Simple noun-verb predicates}\label{simple-nv-tab}
\end{table}
%= 34
The predicates vary as to whether the noun or the verb carries the semantic weight of the predicate, or whether both parts play an equal role in establishing the meaning of the construction. In verbs like \emph{wepma sima} \rede{be thirsty} (lit. \rede{thirst – die}) or \emph{wasik tama} \rede{rain} (lit. \rede{rain – come}), the noun carries the semantic weight,\footnote{This is the reason why noun-verb collocations have also become known as \isi{light verb} constructions (after \citet{Jespersen1965_Modern}, who used this term for English collocations like \emph{have a rest}).} while in verbs like \emph{kei lakma} \rede{drum – dance}, \emph{cabhak lakma} \rede{paddy – dance}, the nouns merely modify the verbal meaning. The nouns may stand in various thematic relations to the verb: in \emph{wepma sima} \rede{be thirsty}, the noun has the role of an effector, in predicates like \emph{hiʔwa phemma} \rede{wind – blow} it is closer to an agent role. In \emph{saya pokma} \rede{head-soul – raise},\footnote{\rede{Raising the head soul} is a ritual activity undertaken by specialists to help individuals whose physical or psychological well-being is in danger.} it is a patient.
There are also a few constructions in which the noun is etymologically related to the verb, such as \emph{chipma chima} \rede{urinate}, \emph{sokma soma} \rede{breathe} and \emph{phiʔma phima} \rede{fart} (cognate object constructions). The nouns in these constructions do not contribute to the overall meaning of the predicate.\footnote{Semantically empty nouns are also attested in the \isi{experiencer-as-possessor construction} (cf. below). They are called “eidemic” in \citet{Bickel1995In-the-vestibule, Bickel1997The-possessive}; and “morphanic” (morpheme orphans) in \citet{Matisoff1986Hearts}.}
Concerning \isi{stress assignment} and the \isi{voicing} rule (see \sectref{stress} and \sectref{morphophon}), noun and verb do not constitute a unit. Both the noun and the verb carry equal stress, even if the noun is monosyllabic, resulting in adjacent stress, as in \emph{ˈsak.ˈtuk.ma}. As for \isi{voicing}, if the initial stop of the verbal stem is preceded by a nasal or a vowel, it remains voiceless. This stands in contrast to the verb-verb predicates (see Chapter \ref{verb-verb}), which are more tightly fused in other respects, too. Compare, for instance, \emph{cuŋ tukma} \rede{be cold} (N+V, \rede{cold—hurt}) with \emph{ham-biʔma} \rede{distribute among people} (V+V, \rede{distribute—give}).
There are different degrees of morphological fusion of noun and verb, and some nouns may undergo operations that are not expected if they were incorporated. They can be topicalized by means of the particle \emph{=ko} (see \Next[a]), and two nouns selecting the same \isi{light verb} may also be coordinated (see \Next[b]). Such examples are rare, though. Note that \Next[a] is from a collection of proverbs and sayings, in which rhythm and rhyming constraints could lead to the insertion of particles such as \emph{=ko}. In most cases the noun and verb occur without any intervening material. The noun may also be modified independently, as the spontaneously uttered sentence in \Next[c] shows. Typically, the predicates are modified as a whole by adverbs, but here one can see that the noun may also be modified independently by adnominal modifiers. The modifying phrase is marked by a \isi{genitive}, which is never found on adverbial modifiers.
\ex.\ag.makkai=ga cama, chiʔwa=ga khyu, cabhak=ko lak-ma, a-ŋoʈeŋma=jyu.\\
maize{\sc =gen} cooked\_grains nettle{\sc =gen} curry\_sauce paddy{\sc =top} dance{\sc -inf} {\sc 1sg.poss-}female\_in-law{\sc =hon}\\
\rede{Corn mash, nettle sauce, let us dance the paddy dance, dear sister-in-law.} \source{12\_pvb\_01.008}
\bg.kei=nuŋ cabhak lak-saŋ ucun n-joŋ-me.\\
drum{\sc =com} paddy dance{\sc -sim} nice {\sc 3pl-}do{\sc -npst}\\
\rede{They have a good time, dancing the drum dance and the paddy dance.}\footnote{The interpretation of \rede{dancing the paddy dance with drums} can be ruled out here, because the drums are not played in the paddy dance.} \source{01\_leg\_07.142}
\bg. a-phok tuk=nuŋ=ga sak tug-a=na.\\
{\sc 1sg.poss}-stomach hurt{\sc =com=gen} hunger hurt{\sc -pst[3sg]=nmlz.sg} \\
\rede{I am starving.} (literally \rede{A hunger struck (me) that makes my stomach ache.})
Some of the nouns may even trigger agreement on the verb, something which is also unexpected from the traditional definition of compounds, which entails that compounds are one unit lexically and thus morphologically opaque (see, e.g., \citealt{Fabb2001Compounding}). Examples \Next[a] and \Next[b] are different in this respect:\footnote{Many Yakkha verbs have inchoative-stative Aktionsart so that the past inflection refers to a state that still holds true at the time of speaking.} while predicates that contain the verb \emph{tukma} \rede{hurt} are invariably inflected for third person singular (in other words, the noun \emph{whaŋma} triggers agreement; overt arguments are not possible), predicates containing \emph{sima} \rede{die} show agreement with the overtly expressed (\isi{experiencer}) subject in the unmarked \isi{nominative} \Next[b].\footnote{The same \isi{argument realization} is found in the Belhare cognates of these two verbs \citep{Bickel1997The-possessive}. For the details of \isi{argument realization} in Yakkha see Chapter \ref{verb-val}.} Some meanings can be expressed by either frame (compare \Next[b] and \Next[c]), but this is not a regular and productive alternation.
\ex.\ag.whaŋma tug-a=na.\\
sweat hurt{\sc [3sg]-pst=nmlz.sg}\\
\rede{I/you/he/she/it/we/they feel(s) hot.}
\bg.ka wepma sy-a-ŋ=na.\\
{\sc 1sg} thirst die{\sc -pst-1sg=nmlz.sg}\\
\rede{I am thirsty.}
\bg.wepma tug-a=na.\\
thirst hurt{\sc [3sg]-pst=nmlz.sg}\\
\rede{I/you/he/she/it/we/they is/are thirsty.}
Note that if \emph{wepma} in \Last[b] were a regular verbal argument, an \isi{instrumental} \isi{case} would be expected, since it is an effector with respect to the verbal meaning. And indeed, some noun-verb predicates require an \isi{instrumental} or an \isi{ergative} \isi{case} on the noun (see \Next).\footnote{Yakkha has an \isi{instrumental}/\isi{ergative} syncretism. Therefore, in intransitive predicates \emph{=ŋa} is interpreted as instrumental; in transitive predicates it is interpreted as \isi{ergative}.}
\ex.\ag. (chemha=ŋa) sis-a-ga=na=i?\\
(liquor{\sc =erg}) kill{\sc -pst-2.P=nmlz.sg=q}\\
\rede{Are you drunk?}
\bg. sak=ŋa n-sy-a-ma-ŋa-n=na.\\
hunger{\sc =ins} {\sc neg-}die{\sc -pst-prf-1sg-neg=nmlz.sg}\\
\rede{I am not hungry.}
Some verbs participating in noun-verb predicates have undergone semantic changes. Note that in \Last the nouns \emph{sak} and \emph{chemha} do not have the same status with regard to establishing the semantics of the whole predicate. The verbal stem \emph{sis} \rede{kill} in (a) has already acquired a metaphorical meaning of \rede{be drunk, be intoxicated} (with the \isi{experiencer} coded like a standard object). The noun is frequently omitted in natural speech, and if all arguments are overt, the \isi{experiencer} precedes the stimulus, just like in Experiencer-as-Object constructions (see \Next and Chapter \ref{tr-objex}).\footnote{The same development has taken place in Belhare \citep[151]{Bickel1997The-possessive}.} In contrast to this, the stem \emph{si} in (b) is not polysemous; the noun is required to establish the meaning of the construction.
\exg. ka macchi=ŋa haŋd-a-ŋ=na.\\
{\sc 1sg} pickles/chili{\sc =erg} taste\_hot{\sc -pst-1sg.P=nmlz.sg}\\
\rede{The pickles/chili tasted hot to me.}
Despite a certain \isi{degree} of morphosyntactic freedom, the nouns are not full-fledged arguments. It is not possible to demote or promote the noun via \isi{transitivity operations} such as the causative or the passive, or to extract it from the noun-verb complex via \isi{relativization} (see ungrammatical \Next).
\ex.\ag.*lakt-i=ha cabhak\\
dance{\sc -1pl[pst]=nmlz.nsg} paddy\\
Intended: \rede{the paddy (dance) that we danced}
\bg.*tug-a=ha sak\\
hurt{\sc [3sg]-pst=nmlz.nsg} hunger\\
Intended: \rede{the hunger that was perceivable}
To sum up, \isi{simple noun-verb predicates} behave like one word with respect to lexical semantics, adjacency (in the overwhelming majority of examples), and extraction possibilities for the noun (i.e., the lack thereof). They behave like two words as far as \isi{clitic} placement (including \isi{case}), coordination, modifiability, stress and \isi{voicing} are concerned. Thus, they are best understood as lexicalized phrases.
\section{Experiencer-as-possessor constructions}\label{nv-comp-poss}
Following a general tendency of languages of South and \isi{Southeast Asia}, Yakkha has a dedicated construction for the expression of experiential concepts, including emotional and cognitive processes, bodily functions, but also human character traits and their moral evaluation. In Yakkha, such concepts are expressed by predicates that are built from a noun and a verb, whereby the noun is perceived as the location of this concept, i.e., the “arena” where a physiological or psychological experience unfolds \citep[8]{Matisoff1986Hearts}. These nouns are henceforth referred to as psych-nouns, but apart from referring to emotions and sensations, they can also refer to body parts and excreted substances. Example \Next illustrates the basic pattern:
\exg.u-niŋwa tug-a=na.\\
{\sc 3sg.poss-}mind hurt{\sc [3sg]-pst=nmlz.sg}\\
\rede{He was/became sad.}
The verbs come from a rather small class; they denote the manner in which the \isi{experiencer} is affected by the event, and many of them refer to \isi{motion} events. The \isi{experiencer} is morphologically treated like the possessor of the \isi{psych-noun}; it is indexed by \isi{possessive prefixes}. The expression of experiential concepts by means of a possessive metaphor is a characteristic and robust feature of Kiranti languages (cf. the “possessive of experience” in \citet{Bickel1997The-possessive}, “emotive predicates” in \citet[72]{Ebert1994The-structure}, and “body part emotion verbs” in \citealt[219]{Doornenbal2009A-grammar}), but this is also found beyond Kiranti in South-East Asian languages, including Hmong-Mien, Mon-Khmer and Tai-Kadai languages \citep{Matisoff1986Hearts, Bickel2004The-syntax}. In other \isi{Tibeto-Burman} languages, such as \ili{Newari}, Balti and Tibetan, for instance, experiencers are marked by a dative \citep{Beyer1992_Tibetan, Genetti2007_Newari, Read1934Balti}, an option which is not available, at least not by native morphology, in most Kiranti languages.
Experiencer-as-possessor constructions are not the only option to express experiential events. The crosslinguistic variation that can be found within \isi{experiential predicates} is also reflected in the language-internal variation of Yakkha. We have seen simple noun-verb predicates in \sectref{simple-noun-verb} above. Other possibilities are simple verbal stems like \emph{haŋma} \rede{taste hot/have a spicy sensation} (treating the \isi{experiencer} like a standard P argument), \emph{eʔma} \rede{perceive, like, have an impression, have opinion} (treating it like a standard A argument) and the historically complex verb \emph{kisiʔma} \rede{be afraid} (treating it like a standard S argument). Verbs composed of several verbal stems may also encode experiential notions, such as \emph{yoŋdiʔma} \rede{be scared} (a compound consisting of the roots for \rede{shake} and \rede{give}). It is the \isi{experiencer-as-possessor construction} though that constitutes the biggest class of \isi{experiential predicates}. About fifty verbs have been found so far (cf. Tables \ref{tab-exp1} through \ref{tab-exp2c}), but probably this list is far from exhaustive.
This section is organized as follows: \sectref{poss-e1} introduces the various possibilities of \isi{argument realization} within the experiencer-as-possessor frame, \sectref{poss-e3} looks at the principles behind the semantic composition of possessive \isi{experiential predicates}, and \sectref{poss-e2} deals with the morphosyntax of these predicates and with the behavioral properties of experiencers as non-canonically marked S or A arguments.
\subsection{Subframes of argument realization}\label{poss-e1}
A basic distinction can be drawn between predicates of intransitive \isi{valency} and transitive or labile\footnote{See also \sectref{labile}.} \isi{valency}. Within this basic distinction, the verbs can be further divided into various subframes of \isi{argument realization} (see Tables \ref{tab-exp1} through \ref{tab-exp2c} at the end of the section). In all classes, the \isi{experiencer} is marked as possessor of the \isi{psych-noun}, i.e., as possessor of a sensation or an affected body part.
In the class of intransitive verbs, the \isi{psych-noun} triggers third \isi{person marking} on the verb, as in \Last and \Next. Intransitive verbs usually do not have an overt \isi{noun phrase} referring to the \isi{experiencer}; only the possessive prefix identifies the reference of the \isi{experiencer}. When the \isi{experiencer} has a special pragmatic status, and is thus marked by a discourse particle, it can be overtly expressed either in the \isi{nominative} or in the \isi{genitive} (compare examples \ref{kacaayupma} and \ref{ukkaseopomma} below). As this is quite rare, the reasons for this alternation are not clear yet.
In some cases, the noun is conceptualized as nonsingular, triggering the corresponding \isi{number} markers on the verb as well (see \Next[a]). One verb in this group is special in consisting of two nouns and a verb (see \Next[b]). Both nouns take the possessive prefix. Their respective full forms would be \emph{niŋwa} and \emph{lawa}. It is not uncommon that the nouns get reduced to one \isi{syllable} in noun-verb predicates.
\ex.\ag.a-pomma=ci ŋ-gy-a=ha=ci.\\
{\sc 1sg.poss-}laziness{\sc =nsg} {\sc 3pl-}come\_up{\sc -pst=nmlz.nsg=nsg}\\
\rede{I feel lazy.}
\bg.a-niŋ a-la sy-a=na.\\
{\sc 1sg.poss-}mind {\sc 1sg.poss-}spirit die{\sc [3sg]-pst=nmlz.sg}\\
\rede{I am fed up/annoyed.}
The transitive group can be divided into five classes (cf. Tables \ref{tab-exp2}, \ref{tab-exp2b} and \ref{tab-exp2c} on pages \pageref{tab-exp2}--\pageref{tab-exp2c}). In all classes, the \isi{experiencer} is coded as the possessor of the \isi{psych-noun} (via \isi{possessive prefixes}), and hence this does not need to be explicitly stated in the schematic representation of \isi{argument realization} in the tables.
In class (a) the \isi{experiencer} is realized like a standard transitive subject (in addition to being indexed by \isi{possessive prefixes}): it triggers transitive subject agreement and has \isi{ergative} \isi{case} marking (only overtly marked if it has third person reference and is overt, which is rare). The stimulus is unmarked and triggers object agreement (see \Next[a]).
Class (b) differs from class (a) in that the \isi{psych-noun} triggers object agreement, invariably third person and in some cases, third person plural (see \Next[b]). No stimulus is expressed in class (b). This class has the highest number of members.
\ex.\ag. uŋ=ŋa u-ppa u-luŋma tukt-uks-u=na.\\
{\sc 3sg=erg} {\sc 3sg.poss-}father {\sc 3sg.poss-}liver pour{\sc -prf-3.P=nmlz.sg}\\
\rede{He loved his father.} (literally \rede{He poured his father his liver.})
\bg.\label{ex-yupmaci}a-yupma=ci cips-u-ŋ-ci-ŋ=ha.\\
{\sc 1sg.poss-}sleepiness{\sc =nsg} complete{\sc -3.P[pst]-1sg.A-3nsg.P-1sg.A=nmlz.nsg}\\
\rede{I am well-rested.} (literally \rede{I completed my sleep(s).})
Predicates of class (c) show three possibilities of \isi{argument realization}. One possibility is an unexpected pattern where the stimulus triggers object agreement, while the \isi{psych-noun} triggers subject agreement, which leads, oddly enough, to a literal translation \rede{my disgust brings up bee larvae} in \Next[a]. Despite the subject agreement on the verb, the psych-nouns in this class do not host an \isi{ergative} \isi{case} marker, an option that is available, however, for verbs of class (d). The \isi{experiencer} is indexed only by the possessive prefix in this frame; overt \isi{experiencer} arguments were not found. The stimulus can be in the \isi{nominative} or in the \isi{ablative} in class (c), but if it is in the \isi{ablative}, the verb is blocked from showing object agreement with the stimulus, showing 3>3 agreement instead (see \Next[b]). The third option of \isi{argument realization} in class (c) is identical to class (a) (cf. the comments in \Next[a] and (b)). Reasons or conditions for these alternations, for instance in different configurations of the referential properties of the arguments, could not be detected.
\ex.\ag.thaŋsu=ga u-chya=ci a-chippa ket-wa-ci=ha.\\
bee{\sc =gen} {\sc 3sg.poss-}child{\sc =nsg} {\sc 1sg.poss-}disgust bring\_up{\sc -npst-3nsgP=nmlz.nsg}\\
\rede{I am disgusted by the bee larvae.} \\
(same: \emph{thaŋsuga ucyaci achippa ketwaŋciŋha} - (1{\sc sg}>3{\sc pl}, class (a)))
\bg.njiŋda=bhaŋ a-sokma hips-wa=na!\\
{\sc 2du=abl} {\sc 1sg.poss-}breath whip-{\sc npst[3A>3.P]=nmlz.sg}\\
\rede{I get fed up by you.} \\
(same: \emph{njiŋda asokma himmeʔnencinhaǃ} - 1{\sc sg}>2{\sc du}, class (a))
\newpage
In class (d), the \isi{psych-noun} also triggers transitive subject agreement, and it exhibits \isi{ergative} marking. The object agreement slot can be filled either by the stimulus or by the \isi{experiencer} argument (see \Next[a]).\footnote{There are (at least) two concepts, \emph{saya} and \emph{lawa}, that are related to or similar to \rede{soul} in Yakkha and the Kiranti metaphysical world in general. \citet{Gaenszle2000Origins} writes about these two (and other) concepts in Mewahang (also Eastern Kiranti, Upper Arun branch):
\begin{quote}
The concept of \emph{saya} is understood to be a kind of “vital force” that must be continually renewed (literally “bought”) by means of various sacrificial rites. [...] The vital force \emph{saya} makes itself felt [...] not only in subjective physical or psychic states but also, and in particular, in the social, economic, religious and political spheres - that is, it finds expression in success, wealth, prestige and power. The third concept, \emph{lawa} (cf. \citet[165]{Hardman1981The-psychology}, \citealt[299]{Hardman_phd_Conformity}) is rendered by the \ili{Nepali} word \emph{sāto} (\rede{soul}). This is a small, potentially evanescent substance, which is compared to a mosquito, a butterfly or a bee, and which, if it leaves the body for a longer period, results in loss of consciousness and mental illness. The shaman must then undertake to summon it back or retrieve it. \citep[119]{Gaenszle2000Origins}
\end{quote}
}
Class (e) is exemplified by \Next[b]. Here, the \isi{experiencer} is the possessor of a body part which triggers object agreement on the verb. Some verbs may express an effector or stimulus overtly. Others, like \emph{ya limma} \rede{taste sweet}, cannot express an overt A argument, despite being inflected transitively (see \Next[c]). This pattern is reminiscent of the transimpersonal verbs (treated in \sectref{tr-imp}).
\ex.\ag.\label{ex-lawa}a-lawa=ŋa naʔ-ya-ŋ=na.\\
{\sc 1sg.poss-}spirit{\sc =erg} leave{\sc -V2.leave-pst-1sg.P=nmlz.sg}\\
\rede{I was frozen in shock.} (literally \rede{My spirit left me.})
\bg. (cuŋ=ŋa) a-muk=ci khokt-u-ci=ha.\\
(cold{\sc =erg}) {\sc 1sg.poss-}hand{\sc =nsg} chop{\sc -3.P[pst]-3nsg.P=nmlz.nsg}\\
\rede{My hands are tingling/freezing (from the cold).} (literally \rede{The cold chopped off my hands.})
\bg.a-ya limd-u=na.\\
{\sc 1sg.poss-}mouth taste\_sweet{\sc -3.P[pst]=nmlz.sg}\\
\rede{It tastes sweet to me.}
Many of the transitive verbs are attested also with intransitive inflection without further morphological marker of decreased \isi{transitivity}, i.e., they show a lability alternation (see \Next).
\ex.\ag.n-lok khot-a-ŋ-ga=na=i?\\
{\sc 2sg.poss}-anger scratch{\sc -pst-1sg.P-2.A=nmlz.sg=q} \\
\rede{Are you angry at me?}
\bg. o-lok khot-a=na.\\
{\sc 3sg.poss-}anger scratch{\sc [3sg]-pst=nmlz.sg}\\
\rede{He/she got angry.}
For two verbs, namely \emph{nabhuk-lemnhaŋma} \rede{dishonor (self/others)} (literally \rede{throw away one's nose}) and \emph{nabhuk-yuŋma} \rede{uphold moral} (literally \rede{keep one's nose}), there is one more constellation of participants, due to their particular semantics. The \isi{experiencer} can either be identical to the agent or different from it, as the social consequences of morally transgressive behavior usually affect more people than just the agent (e.g., illegitimate sexual contacts, or an excessive use of swearwords).\footnote{This concept is particularly related to immoral behavior of women. It is rarely, if ever, heard that a man \rede{threw away his nose}.} The morphosyntactic consequences of this are that the verbal agreement and the possessive prefix on the noun may either have the same conominal or two different conominals. Taken literally, one may \rede{throw away one's own nose} or \rede{throw away somebody else's nose} (see \Next). Note that due to the possessive \isi{argument realization} it is possible to have partial coreference, which is impossible in the standard transitive verbal inflection (cf. \sectref{verb-infl}).
\ex. \ag. u-nabhuk lept-haks-u=na.\\
{\sc 3sg.poss-}nose throw{\sc -V2.send-3.P[pst]=nmlz.sg}\\
\rede{She dishonored herself.}
\bg. nda eŋ=ga nabhuk(=ci) lept-haks-u-ci-g=haǃ\\
{\sc 2sg[erg]} {\sc 1pl.incl.poss=gen} nose({\sc =nsg}) throw{\sc -V2.send-3.P[pst]-3nsg.P-2.A=nmlz.nsg}\\
\rede{You dishonored us all (including yourself)!}
In \sectref{simple-noun-verb}, cognate object constructions like \emph{chipma chima} \rede{urinate} were discussed. In these cases, the noun is cognate to the verb and does not actually make a semantic contribution to the predicate. Such developments are also found in the experiencer-as-possessor frame. Examples \Next[a] and \Next[b] are two alternative ways to express the same propositional content. Note the change of \isi{person marking} to third person in (b). The noun \emph{phok} \rede{belly} is, of course, not etymologically related to the verb in this case, but it also does not make a semantic contribution. Further examples are \emph{ya limma} \rede{taste sweet} (\emph{ya} means \rede{mouth}) and \emph{hi ema} \rede{defecate} (\emph{hi} means \rede{stool}).
\ex.\ag.ka khas-a-ŋ=na.\\
{\sc 1sg} be\_full{\sc -pst-1sg=nmlz.sg}\\
\rede{I am full.}
\bg.a-phok khas-a=na.\\
{\sc 1sg.poss-}belly be\_full{\sc [3sg]-pst=nmlz.sg}\\
\rede{I am full.}
All frames of \isi{argument realization} with examples are provided in Tables \ref{tab-exp1} through \ref{tab-exp2c}.\footnote{Stems in square brackets in the tables were not found as independent words beyond their use in these collocations.}
\begin{table}[p]
\begin{tabularx}{\textwidth}{lll}
\lsptoprule
{\sc predicate} & {\sc gloss} &{\sc literal translation}\\
\midrule
\multicolumn{3}{l}{\{(S{\sc [exp]-nom/gen}) V-s[3]\}}\\
\midrule
\emph{chipma lomma}&\rede{have to pee}&(urine – come out)\\
\emph{hakamba keʔma}&\rede{yawn}&(yawn – come up)\\
\emph{hakchiŋba keʔma}&\rede{sneeze}&(sneeze – come up)\\
\emph{heli lomma}&\rede{bleed}&(blood – come out)\\
\emph{hi lomma}&\rede{have to defecate}&(shit – come out)\\
\emph{laŋ miŋma}&\rede{twist/sprain leg}&(leg – sprain)\\ %other limbs?
\emph{laŋ sima}&\rede{have paraesthetic leg}&(leg – die)\\ %other limbs?
\emph{miʔwa uŋma}&\rede{cry, shed tears}&(tear – come down)\\
\emph{niŋ-la sima}&\rede{be fed up}&([mind] – [spirit] – die)\\
\emph{niŋwa kaŋma}&\rede{give in, surrender}&(mind – fall)\\
\emph{niŋwa khoŋdiʔma}&a)\rede{be mentally ill}& (mind – break down)\\
&b)\rede{be disappointed/sad}&\\
\emph{niŋwa ima}&\rede{feel dizzy}&(mind – revolve)\\
\emph{niŋwa tama}&\rede{be satisfied, content}&(mind – come)\\%nniŋda jacpe pas leksighabhoŋ aniŋwa tayana, nniŋdanuŋ aniŋwa tayana
\emph{niŋwa tukma}&\rede{be sad, be offended}&(mind – be ill/hurt)\\
\emph{niŋwa wama}&\rede{hope}&(mind – exist)\\
\emph{phok kama}&\rede{be full}&(stomach – be full/saturated)\\
\emph{pomma keʔma}&\rede{feel lazy}&(laziness – come up)\\
\emph{saklum phemma}&\rede{be frustrated}&(frustration – be activated)\\
\emph{ʈaŋ pokma}&\rede{be arrogant, naughty}& (horn – rise)\\
\emph{yuncama keʔma}&\rede{have to laugh, chuckle}&(laugh – come up)\\
\emph{yupma yuma}&\rede{be tired}&(sleepiness – be full)\\
\lspbottomrule
\end{tabularx}\\
\caption{Intransitive experiencer-as-possessor predicates}\label{tab-exp1}
\end{table}
%\pagestyle{empty}
\begin{table}%[p]
\begin{tabularx}{\textwidth}{lll}
\lsptoprule
{\sc predicate} & {\sc gloss }& {\sc literal translation}\\
\midrule
\multicolumn{3}{l}{Class (a): \{A{\sc [exp]-erg} P{\sc [stim]-nom} V-a[A].p[P]\}}\\
\midrule
\emph{chik ekma}&\rede{hate}&(hate – make break) \\%(\ti intr.)
\emph{lok khoʔma}&\rede{be angry at}&(anger – scratch) \\%(\ti intr.)
\emph{luŋma kipma}&\rede{be greedy}&(liver – cover tightly) \\%(\ti intr.)
\emph{luŋma tukma}&\rede{love, have compassion}&(liver – pour)\\
\emph{na hemma}&\rede{be jealous}& ([jealousy] – [feel]) \\
\lspbottomrule
\end{tabularx}
\caption{Transitive experiencer-as-possessor predicates, Class (a)}\label{tab-exp2}
\end{table}
\begin{table}%[p]
\begin{tabularx}{\textwidth}{lXX}
\lsptoprule
{\sc predicate} & {\sc gloss }& {\sc literal translation}\\
\midrule
\multicolumn{3}{l}{Class (b): \{A{\sc [exp]-erg} P{\sc [noun]-nom} V-a[A].p[3]\}}\\
\midrule
\emph{hi ema}&\rede{defecate}&(stool – defecate)\\
\emph{iklam saŋma}&\rede{clear throat, harrumph}&(throat – brush)\\
\emph{khaep cimma}&\rede{be satisfied, lose interest}& ([interest] –\newline be completed)\\
\emph{miʔwa saŋma}&\rede{mourn (ritually)}&(tear – brush)\\
\emph{nabhuk lemnhaŋma}&\rede{dishonor self/others}&(nose – throw away)\\
\emph{nabhuk yuŋma}&\rede{uphold own/\newline others' moral}&(nose – keep)\\
\emph{niŋwa chiʔma}&\rede{see reason, get grown up}&(mind – [get conscious])\\
\emph{niŋwa cokma}&\rede{pay attention}&(mind – do)\\
\emph{niŋwa hupma}&\rede{unite minds, decide together}&(mind – tighten, unite)\\
\emph{niŋwa lapma}&\rede{pull oneself together}&(mind – hold)\\
\emph{niŋwa lomma}&\rede{have/apply an idea}&(mind – take out)\\
\emph{niŋwa piʔma}&\rede{trust deeply}&(mind – give)\\
\emph{niŋwa yuŋma}&\rede{be careful}&(mind – put)\\
\emph{saya pokma}&\rede{raise head soul (ritually)}&(head soul – raise)\\ %need examples
\emph{semla saŋma}&\rede{clear throat, clear voice}&(voice – brush)\\
\emph{sokma soma}&\rede{breathe}&(breath – breathe)\\
\emph{yupma cimma}&\rede{be well-rested}&(sleepiness –\newline be completed)\\
\lspbottomrule
\end{tabularx}
\caption{Transitive experiencer-as-possessor predicates, Class (b)}\label{tab-exp2b}
\end{table}
\begin{table}%[p]
\begin{tabularx}{\textwidth}{lp{3.5cm}l}
\lsptoprule
{\sc predicate} & {\sc gloss }& {\sc literal translation}\\
\midrule
\multicolumn{3}{l}{Class (c): \{P{\sc [stim]-nom} V-a[3].p[P]\} \ti \{P{\sc [stim]-abl} V-a[3].p[3]\} \ti \{Class (a)\} }\\
\midrule
\emph{chippa keʔma}&\rede{be disgusted}&(disgust – bring up) \\
\emph{niŋsaŋ puŋma}&\rede{lose interest, have enough}&([interest] – [lose])\\
\emph{sokma himma}&\rede{be annoyed, be bored}&(breath – whip/flog) \\
\emph{sap thakma}&\rede{like}& ([{\sc stem}] – send up)\\
\midrule
\multicolumn{3}{l}{Class (d): \{A{\sc [noun]-erg} P{\sc [stim]-nom} V-a[3].p[A/P]\}}\\
\midrule
\emph{niŋwa=ŋa cama}&\rede{feel sympathetic}&(mind=\textsc{erg} – eat)\\
\emph{niŋwa=ŋa mundiʔma}&\rede{forget}&(mind=\textsc{erg} – forget) \\
\emph{hop=ŋa khamma}&\rede{trust}&([{\sc stem}]-\textsc{erg} – chew)\\
\emph{niŋwa=ŋa apma}&\rede{be clever, be witty}&(mind=\textsc{erg} – bring)\\
\emph{lawa=ŋa naʔnama}&\rede{be frozen in shock, be scared stiff }&(spirit=\textsc{erg} – leave)\\
\midrule
\multicolumn{3}{l}{Class (e): \{P{\sc [stim]-erg} V-a[3].p[3]\}}\\
\midrule
\emph{muk khokma}&\rede{freezing/stiff hands}&(hand – chop) \\
\emph{miʔwa saŋma} & (part of the death ritual) &(tear – brush off)\\
\emph{ya limma} (transimp.)& \rede{taste good} &(mouth – taste sweet)\\
\lspbottomrule
\end{tabularx}
\caption{Transitive experiencer-as-possessor predicates, Classes (c)--(e)}\label{tab-exp2c}
\end{table}
\pagestyle{scrheadings}
\subsection{Semantic properties}\label{poss-e3}
The experiencer-as-possessor predicates are far less transparent and predictable than the \isi{simple noun-verb predicates}. The nouns participating in this structure refer to abstract psychological or moral concepts like \emph{lok} \rede{anger}, \emph{yupma} \rede{sleepiness} and \emph{pomma} \rede{laziness}, or they refer to body parts or inner organs which are exploited for experiential metaphors. The lexeme \emph{luŋma} \rede{liver}, for instance, is used in the expression of love and greed, and \emph{nabhuk} \rede{nose} is connected to upholding (or eroding) moral standards. The human body is a very common source for psychological metaphors, or as Matisoff observed:
\begin{quote}
[...] it is a universal of human metaphorical thinking to equate mental operations and states with bodily sensations and movements, as well as with physical qualities and events in the outside world. \citep[9]{Matisoff1986Hearts}
\end{quote}
In Yakkha, too, psychological concepts are treated as concrete tangible entities that can be possessed, moved or otherwise manipulated. Many verbs employed in experiencer-as-possessor predicates are verbs of \isi{motion} and caused \isi{motion}, like \emph{keʔma} (both \rede{come up} and \rede{bring up}, distinguished by different stem behavior), \emph{kaŋma} \rede{fall}, \emph{haŋma} \rede{send}, \emph{lemnhaŋma} \rede{throw}, \emph{pokma} \rede{raise} or \emph{lomma} (both \rede{take out} and \rede{come out}). Other verbs refer to physical change (both spontaneous and caused), such as \emph{khoŋdiʔma} \rede{break down}, \emph{himma} \rede{whip/flog} or \emph{kipma} \rede{cover tightly}. Most of the predicates acquire their experiential semantics only in the particular idiomatic combinations. Only a few verbs have intrinsic experiential semantics, like \emph{tukma} \rede{hurt/be ill}.
\subsection{Morphosyntactic properties}\label{poss-e2}
\subsubsection{Wordhood vs. phrasehood}
Experiencer-as-possessor predicates host both nominal and verbal morphology. A possessive prefix (referring to the \isi{experiencer}) attaches to the noun, and the verbal inflection attaches to the verb. The verbal inflection always attaches to the verbal stem, so that the verbal prefixes stand between the noun and the verb (see \Next). It has been shown above that some of the psych-nouns can be inflected for \isi{number} as well as trigger plural morphology on the verb, and that others may show \isi{case} marking (see \ref{ex-yupmaci} and \ref{ex-lawa}).
\largerpage
\exg. a-luŋma n-duŋ-meʔ-nen=na.\\
{\sc 1sg.poss}-liver {\sc neg-}pour{\sc -npst-1>2=nmlz.sg}\\
\rede{I do not love you./I do not have compassion for you.}
The \isi{experiencer} argument, which is always indexed by the possessive prefix on the noun, is rarely expressed overtly. It may show the following properties: it is in the \isi{nominative} or in the \isi{genitive} when the \isi{light verb} is intransitive, and in the \isi{ergative} in predicates that show transitive subject agreement with the \isi{experiencer} argument (class (a) and (b)).
Noun and verb have to be adjacent, as shown by the following examples. Constituents like \isi{degree} adverbs and \isi{quantifiers} (see \Next[a] and \Next[b]) or question words (see \Next[c]) may not intervene.
\ex.\ag. tuknuŋ u-niŋwa (*tuknuŋ) tug-a-ma, {\hspace{-.4cm}\ob\dots\cb}\\
completely {\sc 3sg.poss}-mind (*completely) hurt{\sc [3sg]-pst-prf}\\
\rede{She was so sad, ...} \source{38\_nrr\_07.009}
\bg. ka khiŋ pyak a-ma=ŋa u-luŋma (*khiŋ pyak) tuŋ-me-ŋ=na!\\
{\sc 1sg} so\_much much {\sc 1sg.poss-}mother{\sc =erg} {\sc 3sg.poss-}liver (*so\_much much) pour{\sc [3sg.A]-npst-1sg.P=nmlz.sg}\\
\rede{How much my mother loves me!}\source{01\_leg\_07.079}
\bg. ijaŋ n-lok (*ijaŋ) khot-a-ŋ-ga=na=i?\\
why {\sc 2sg.poss}-anger (*why) scratch{\sc -pst-1sg.P-2sg.A=nmlz.sg=q}\\
\rede{Why are you angry at me?}
Information-structural clitics, usually attaching to the rightmost element of the phrase, may generally stand between noun and verb,
but some combinations were judged better than others (compare \Next[a] with \Next[b]). Compare also the impossible \isi{additive focus} particle \emph{=ca} in \Next[a] with the \isi{restrictive focus} particle \emph{=se} and the contrastive particle \emph{=le} in \NNext. Overtly expressed \isi{experiencer} arguments may naturally also host topic and focus particles, just like any other constituent can. This is shown, e.g., by \Next[b], \NNext[c] and \NNext[d].
%*** \Next[a] how then?? kaca ayupmaci ....? -yes
\ex. \ag. a-yupma=ci(*=ca) n-yus-a=ha=ci.\\
{\sc 1sg.poss}-sleepiness{\sc =nsg(*=add)} {\sc 3pl.A}-be\_full{\sc -pst=nmlz.nsg=nsg}\\
Intended: \rede{I am tired, too (in addition to being in a bad \isi{mood}).}
\bg.u-ʈaŋ=ca pog-a-by-a=na.\\
{\sc 3sg.poss-}horn{\sc =add} rise{\sc [3sg]-pst-v2.give-pst=nmlz.sg}\\
\rede{She is also naughty.}
\bg. ka=ca a-yupma=ci n-yus-a=ha=ci.\label{kacaayupma}\\
{\sc 1sg=add} {\sc 1sg.poss}-sleepiness{\sc =nsg} {\sc 3pl.A}-be\_full{\sc -pst=nmlz.nsg=nsg}\\
Only: \rede{I am also tired (in addition to you being tired).} (not, e.g., \rede{I am tired in addition to being hungry.})
\ex.\ag. a-saklum=ci=se m-phen-a-sy-a=ha=ci.\\
{\sc 1sg.poss}-need{\sc =nsg=restr} {\sc 3pl}-be\_activated-{\sc pst-mddl-pst=nmlz.nsg=nsg} \\
\rede{I am just pining for it.}
\bg. uŋ=ŋa u-ma u-chik=se ekt-uks-u-sa.\\
{\sc 3sg=erg} {\sc 3sg.poss-}mother {\sc 3sg.poss-}hate{\sc =restr} make\_break{\sc -prf-3.P[pst]-pst.prf}\\
\rede{He had nothing but hate for his mother.}\source{01\_leg\_07.065}
\bg.ka=go a-sap=le thakt-wa-ŋ=na.\\
{\sc 1sg=top} {\sc 1sg.poss-}[stem]{\sc =ctr} send\_up{\sc -npst[3.P]-1sg.A=nmlz.sg}\\
\rede{But I like it.} (said in contrast to another speaker)
\bg. uk=ka=se o-pomma=ci ŋ-gy-a=ha=ci.\label{ukkaseopomma}\\
{\sc 3sg=gen=restr} {\sc 3sg.poss-}laziness{\sc =nsg} {\sc 3pl.A}-come\_up{\sc -pst=nmlz.nsg=nsg}\\
\rede{Only he was lazy (not the others).}
The noun can even be omitted, in \isi{case} it was already active in discourse, such as in the question-answer pair in \Next. It is, however, not possible to extract the noun from the predicate to relativize on it, neither with the \isi{nominalizer} \emph{-khuba} nor with the nominalizers \emph{=na} and \emph{=ha} as shown in \NNext (cf. Chapter \ref{ch-nmlz}). Furthermore, in my corpus there is not a single example of a noun in a possessive experiential construction that is modified independently. The predicate is always modified as a whole, by adverbial modification. A certain \isi{degree} of morphological freedom does not imply that the noun is a full-fledged argument.
\ex.\ag. ŋkha mamu=ci n-sap thakt-u-ci-g=ha=i?\\
those girl{\sc =nsg} {\sc 2sg.poss-[stem]} send\_up-{\sc 3.P[pst]-3nsg.P-2.A=nmlz.nsg=q}\\
\rede{Do you like those girls?}
\bg. thakt-u-ŋ-ci-ŋ=ha!\\
send\_up-{\sc 3.P[pst]-1sg.A-3nsg.P-1sg.A=nmlz.nsg}\\
\rede{I do!}
\ex.\ag.*kek-khuba (o-)pomma\\
come\_up-{\sc nmlz[S/A]} ({\sc 3sg.poss-})laziness\\
Intended: \rede{the laziness that comes up}
\bg.*ky-a=na (o-)pomma\\
come\_up{\sc -pst=nmlz.sg} ({\sc 3sg.poss-})laziness\\
Intended: \rede{the laziness that came up}
The noun-verb complex as a whole may serve as input to derivational processes, such as the creation of \isi{adjectives} by means of a \isi{reduplication} and the \isi{nominalizer} \emph{=na} or \emph{=ha}, shown in \Next.
\ex.\ag.uŋ tuknuŋ luŋma-tuk-tuk=na sa-ya-ma.\\
{\sc 3sg} completely liver-{\sc redupl-}pour{\sc =nmlz.sg} be{\sc [3]-pst-prf}\\
\rede{She was such a kind (loving, caring) person.}\source{01\_leg\_07.061}
\bg.ikhiŋ chippa-ke-keʔ=na takabaŋ!\\
how\_much disgust-{\sc redupl-}come\_up{\sc =nmlz.sg} spider\\
\rede{What a disgusting spider!}
\bg.nna chik-ʔek-ʔek=na babu\\
that hate-{\sc redupl-}make\_break{\sc =nmlz.sg} boy\\
\rede{that outrageous boy}
Wrapping up, just as we have seen above for the \isi{simple noun-verb predicates}, the noun and the verb build an inseparable unit for some processes, but not for others; the predicates show both word-like and phrasal properties. Semantically, of course, noun and verb build one unit, but they can be targeted by certain morphological and syntactic processes: the nonsingular marking on psych-nouns, psych-nouns triggering agreement, the possibility of hosting phrasal clitics, and the partial ellipsis. The ambiguous status of these predicates is also reflected in their phonology: noun and verb are two units with respect to stress and \isi{voicing}.
\largerpage
Another feature distinguishes the possessive \isi{experiencer} predicates from compounds: nouns in compounds are typically generic (\citealt[66]{Fabb2001Compounding}, \citealt[156]{Haspelmath2002Understanding}). As the noun in the possessive \isi{experiential predicates} hosts the possessive prefix, its reference is made specific. The contiguity of noun and verb, the derivation of \isi{adjectives} and the restrictions on extraction and modification also clearly show that noun and verb are one unit. All these conflicting properties of Yakkha add further support to approaches that question the notion of the word as opaque to morphosyntactic processes (as, e.g., stated in the Lexical Integrity Principle). The possessive \isi{experiential predicates} may best be understood as lexicalized phrases, such as the predicates discussed in \sectref{simple-noun-verb} above.
\subsubsection{Behavioral properties of the experiencer arguments}\label{poss-e4}
Experiencers as morphologically downgraded, non-canonically marked subjects do not necessarily have to be downgraded in other parts of the grammar. As observed by \citet{Bickel2004The-syntax}, \isi{Tibeto-Burman} languages, in contrast to Indo-Aryan languages, show a strong tendency to treat experiencers as full-fledged arguments syntactically. Yakkha confirms this generalization. In syntactic constructions that select pivots, the \isi{experiencer} argument is chosen, regardless of the fact that it is often blocked from triggering verbal agreement. The \isi{nominalizer} \emph{-khuba} (S/A arguments) selects the \isi{experiencer}, because it is the most agent-like argument in the clause (see \Next). As the ungrammatical \Next[c] shows, the stimulus cannot be nominalized by \emph{-khuba}.
%***checked this from notes 2011: chippa kekkhuba camyoŋba not possibleǃǃ ( rather chippakekeʔna?), hangkhuba machi, *hangkhuba mamu (from KS, MM judged this as ungrammatical)
\ex.\ag.takabaŋ u-chippa kek-khuba mamu\\
spider {\sc 3sg.poss-}disgust come\_up{\sc -nmlz} girl\\
\rede{the girl who is disgusted by spiders}
\bg.o-pomma kek-khuba babu\\
{\sc 3sg.poss-}laziness come\_up{\sc -nmlz} boy\\
\rede{the lazy fellow}
\bg.*chippa kek-khuba camyoŋba \\
disgust come\_up{\sc -nmlz} food \\
Intended: \rede{disgusting food} (only: \emph{chippakekeʔna})
\largerpage
Another process that exclusively selects S and A arguments is the converbal \isi{clause linkage}, which is marked by the suffix \emph{-saŋ}. It implies that two (or more) events happen simultaneously, and it requires the referential identity of the S and A arguments in both clauses. Example \Next illustrates that this also holds for \isi{experiencer} arguments.
\ex.\ag. o-pomma kes-saŋ kes-saŋ kam cog-wa.\\
{\sc 3sg.poss-}laziness come\_up{\sc -sim} come\_up{\sc -sim} work do{\sc -npst[3.P]}\\
\rede{He does the work lazily.}
\bg.uŋ lok khos-saŋ lukt-a-khy-a=na.\\
{\sc 3sg} anger scratch{\sc -sim} run{\sc [3sg]-pst-V2.go-pst=nmlz.sg}\\
\rede{He ran away angrily.}
In causatives, the \isi{experiencer} is the causee, as is evidenced by the verbal marking in \Next. There is no overt marking for 1.P, but the reference is retrieved from the opposition to the other forms in the paradigm — with third person object agreement, the inflected form would have to be \emph{himmetugha}.
\exg. khem=nuŋ manoj=ŋa a-sokma him-met-a-g=haǃ\\
Khem{\sc =com} Manoj{\sc =erg} {\sc 1sg.poss-}breath whip{\sc -caus-pst-2.A[1.P]=nmlz.nsg}\\
\rede{Khem and Manoj (you) annoy me!}
The last syntactic property discussed here is the agreement in complement-taking verbs that embed infinitives, as for instance \emph{yama} \rede{be able} or \emph{tarokma} \rede{begin}, shown in \Next. Basically, the complement-taking verb mirrors the agreement that is found in the embedded verb. Those predicates whose \isi{experiencer} arguments do not trigger agreement in the verb do not show agreement in the complement-taking verb either. Other restrictions are semantic in nature, so that, for instance, \rede{I want to get lazy} is not possible, because being lazy is not conceptualized as something one can do on purpose. Thus, the agreement facts neither confirm nor contradict the generalization made above. A more interesting \isi{case} is the periphrastic \isi{progressive} construction, with the lexical verb in the \isi{infinitive} and an intransitively inflected auxiliary \emph{-siʔ} (infinitial form and auxiliary got fused into one word). The auxiliary selects the \isi{experiencer} as agreement triggering argument (see \Next[b]).
\largerpage
\ex.\ag.ka nda a-luŋma tuk-ma n-ya-meʔ-nen=na.\\
{\sc 1sg[erg]} {\sc 2sg} {\sc 1sg.poss-}liver pour{\sc -inf} {\sc neg-}be\_able-{\sc npst-1>2=nmlz.sg}\\
\rede{I cannot love you./I cannot have pity for you.}
\bg.nda ka ijaŋ n-lok khoʔ-ma-si-me-ka=na?\\
{\sc 2sg[erg]} {\sc 1sg} why {\sc 2sg.poss-}anger scratch{\sc -inf-aux.prog-npst-2=nmlz.sg}\\
\rede{Why are you being angry at me?}
%!TEX root = thesis.tex
\chapter{Cloud Migration}
\section{Fuzzy Inference}
A simplified Sugeno-style fuzzy inference system is developed with functionalities for fuzzy rule / set parsing, input fuzzification, fuzzy inferencing and output defuzzification.
\subsection{Determining Fuzzy sets}
As mentioned in the previous section, the noise in the image inevitably introduces uncertainty into our system. Therefore \textbf{the feature values referenced in our rules have to be represented as fuzzy sets}.
To determine the \textbf{best fuzzy sets capturing the appropriate degree of uncertainty}, we use statistics from the training set to find the optimal thresholds.
For $Extent$, we use 3 fuzzy sets for ellipse-like, rectangle-like and triangle-like $Extent$ ratios. For $Thinness$, we use 2 fuzzy sets for circle-like and square-like $Thinness$ ratios. The shapes and boundaries of these fuzzy sets are determined based on the \textbf{distribution and percentiles of feature values} in the training set. We consider the area between the 25th percentile and the 75th percentile as the \textbf{high confidence range}, while the remaining area plus a small portion of area adjacent to the extreme values (to tolerate some outliers) is considered the \textbf{low confidence range}. The corresponding polygons are then constructed based on these two ranges to represent the fuzzy sets. The final fuzzy sets are illustrated in Figure 3 (figures not drawn to scale).
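A minimal Python sketch of how such a trapezoidal membership function could be derived from the training-set percentiles is given below; the function name, the linear fall-off and the outlier padding factor are illustrative assumptions rather than the exact construction used for the figures.
\begin{verbatim}
import numpy as np

def trapezoid_from_percentiles(vals, pad=0.1):
    # Membership 1.0 between the 25th and 75th
    # percentiles (high confidence); linear
    # fall-off towards the padded extremes
    # (low confidence, tolerating outliers).
    lo, q1, q3, hi = np.percentile(
        vals, [0, 25, 75, 100])
    span = hi - lo
    a, d = lo - pad * span, hi + pad * span
    b, c = q1, q3

    def member(x):
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)

    return member

# e.g. circle_like = trapezoid_from_percentiles(
#          thinness_of_training_circles)
\end{verbatim}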
\begin{figure}
\begin{subfigure}[b]{\columnwidth}
\includegraphics[width=\columnwidth]{Figure_3_Fuzzy_Sets_1.png}
\caption{Fuzzy Set of Extent}\end{subfigure}
\begin{subfigure}[b]{\columnwidth}
\includegraphics[width=\columnwidth]{Figure_4_Fuzzy_Sets_2.png}
\caption{Fuzzy Set of Thinness}
\end{subfigure}
\caption{Fuzzy Sets}
\end{figure}
\subsection{Implementing Inference Engine}
Now that the fuzzy sets are determined, we have to implement the inference engine. However, the specific nature of our task requires us to make some adaptations to the usual fuzzy inference engine.
Firstly, unlike the example from the textbook, the set of possible outputs in our task is \textbf{NOT} a set of linguistic values which can be represented by a series of contiguous intervals. We can't compute a COG-like output and decide which category it falls in, because it's not possible to assign a reasonable order to the shapes.
Secondly, rules derived in the previous section are all of the format:
\textit{IF X IS LIKE A AND Y IS NOT LIKE B THEN Shape IS C}
Note that the consequent of each rule is an assertion about the shape of the input figure and that \textbf{each rule corresponds to the recognition process for one specific shape}. In fact a fuzzy inference system of this kind is essentially \textbf{a flattened decision tree}, with \textbf{each rule acting as a filter to calculate the possibility of one specific shape}. The only difference is that in our expert system rules are separated from inference, making it easily maintainable.
Thus, in order to gain the advantages brought by fuzzy inference without over-complicating our task, we choose to implement \textbf{a simplified Sugeno-style inference engine}. In the final defuzzification stage, we don't compute a weighted average of the rule outputs. Instead we examine all the possible values for the target variable $Shape$ and \textbf{choose the one with the highest possibility as the output}.
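A minimal sketch of this final stage, assuming rules are stored as (antecedent, shape) pairs whose antecedents return a firing strength in $[0,1]$, could look as follows; the data layout and names are illustrative only.
\begin{verbatim}
# Each rule is (antecedent, shape), where
# antecedent(features) returns a firing
# strength in [0, 1], computed with fuzzy
# AND = min and NOT = 1 - membership.
def infer_shape(rules, features):
    poss = {}
    for antecedent, shape in rules:
        s = antecedent(features)
        poss[shape] = max(
            poss.get(shape, 0.0), s)
    # Pick the shape with the highest
    # possibility instead of a weighted
    # average of rule outputs.
    return max(poss, key=poss.get)
\end{verbatim}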
\documentclass[11pt,fancychapters]{article}
\usepackage[a4paper, total={6in, 8in}]{geometry}
\usepackage{cite}
\usepackage{color}
\usepackage{xcolor}
\usepackage{empheq}
\usepackage{setspace}
\usepackage{hyperref}
\usepackage{minted}
\usepackage{acro}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amssymb}
\usepackage{multirow}
\usepackage{graphicx}
\usepackage{geometry}
\usepackage{subcaption}
\usepackage{cancel}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{tcolorbox}
\usepackage{hyperref}
\usepackage{cleveref}
\usepackage{parskip}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{pgfplots}
\geometry{
a4paper,
total={170mm,257mm},
left=20mm,
top=20mm,
}
\pgfplotsset{width=8cm,compat=1.9}
\newcommand{\dbar}{{d\mkern-7mu\mathchar'26\mkern-2mu}}
\newcommand{\boxedeq}[2]{\begin{empheq}[box={\fboxsep=6pt\fbox}]{align}\label{#1}#2\end{empheq}}
\def\*#1{\mathbf{#1}}
\def\ab{ab}
\usepackage{tikz}
\usetikzlibrary{calc,trees,positioning,arrows,chains,shapes.geometric,%
decorations.pathreplacing,decorations.pathmorphing,shapes,%
matrix,shapes.symbols}
\geometry{top=1.3in,bottom=1.3in}
\begin{document}
\centerline{\huge{2D Project --- 50.004 Introduction to Algorithms}}
\begin{table}[ht]
\centering
\footnotesize
\begin{tabular}{c c c c c c}
V S Ragul Balaji&James Raphael Tiovalen&Anirudh Shrinivason&Jia Shuyi&Gerald Hoo&Shoham Chakraborty
\end{tabular}
\end{table}
\section{Part A --- Deterministic Graph-Based Algorithm}
\subsection{Overview of Algorithm}
A solver for a 2-SAT problem can follow many methods. For a 2-SAT formula in Conjunctive Normal Form (CNF) to be SATISFIABLE, every clause must be \texttt{true}. Since each clause of a 2-SAT problem contains only 2 literals, every clause can be rewritten as two implications, converting the formula into Implicative Normal Form (INF): if one literal of a clause is \texttt{false}, the other literal must be \texttt{true} for the clause as a whole to be \texttt{true}. In other words, each clause forbids exactly one of the four possible joint assignments of its pair of literals. This defines constraints between the variables, which can be propagated throughout the whole implication graph. Each clause therefore contributes a pair of implication edges between literals in an implication graph for the boolean satisfiability problem.\newline
To create the implication graph, we first implement a directed graph using an adjacency list. In the \texttt{Graph} class defined in the \texttt{dfs/kosaraju.py} file, we create the adjacency list by defining the graph's vertices, as well as a function to add edges. Since each variable produces two literals (the variable itself and its negation), we create $2n$ vertices, where $n$ is the number of variables. These vertices are added as the keys of the dictionary $G$ defined in the \texttt{Graph} class. The value stored for each key is a list of that vertex's outgoing edges, which records the connections from each vertex to the others.
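As a rough illustration of this construction (a sketch only, not the project's actual \texttt{dfs/kosaraju.py}), each literal can be encoded as $+v$ or $-v$, so that a clause $(a \lor b)$ contributes the two implication edges $\lnot a \Rightarrow b$ and $\lnot b \Rightarrow a$:
\begin{verbatim}
from collections import defaultdict

# Literals are encoded as +v / -v for variable v >= 1,
# so negation is arithmetic negation.
def build_implication_graph(clauses):
    graph = defaultdict(list)      # literal -> list of literals
    for a, b in clauses:           # clause (a OR b)
        graph[-a].append(b)        # not a  implies  b
        graph[-b].append(a)        # not b  implies  a
    return graph

# Example: (x1 OR x2) AND (NOT x1 OR x3)
# build_implication_graph([(1, 2), (-1, 3)])
\end{verbatim}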
\vspace{4mm}
\begin{figure}[h]\label{fig}
\centering
\includegraphics[width=.9\textwidth]{diagrams/example_cnf_solved.png}
\caption{Guided example of solving 2SAT by hand in polynomial time}
\label{fig:example_problem}
\end{figure}
\vspace{4mm}
In Figure~\ref{fig:example_problem}, we demonstrate this process using an example. This example is also defined in the \texttt{cnf/example.cnf} file. Steps 3 and 4 in Figure~\ref{fig:example_problem} are defined in lines 110-111 in the \texttt{kosaraju.py} file. Using the implication graph, we find the Strongly-Connected Components (SCCs) of the graph, grouping together vertices that can all reach one another, using Kosaraju's Algorithm. We first create a stack and run a depth-first search (DFS) over the graph, pushing each vertex onto the stack once all of its outgoing edges have been explored. We then construct the inverse of the graph, where the direction of all of the edges is reversed. After that, while the stack is not empty, we pop a vertex and, if it has not yet been assigned to a component, run a DFS from it on the inverse graph; every newly reached vertex belongs to the same SCC. Using this algorithm, we can find all the SCCs of the graph.\newline
Next, we check each SCC for a contradiction. If a variable and its corresponding negation are in the same SCC, the whole 2-SAT formula is deemed UNSATISFIABLE, as no single truth value can then be assigned to that variable without violating an implication. This is because within a single SCC, you can traverse from any vertex to any other vertex, so the variable and its negation would each imply the other. Otherwise, the formula is deemed SATISFIABLE.\newline
If the formula is SATISFIABLE, we can output a possible solution for the formula. We can do this by grouping the SCCs together and connecting them into a directed acyclic graph. Using the graph in Figure~\ref{fig:example_problem} as an example, going in topological order, we assign $0$ to the first SCC (which is the left SCC in this specific case) and assign $1$ to the second SCC (the one highlighted by the red rectangle). We then obtain the output value of each variable by equating the literals in an SCC to the value assigned to that SCC.
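The following sketch puts these steps together: the two DFS passes of Kosaraju's Algorithm, the contradiction check, and the assignment read off from the order in which the SCCs are discovered. It is an illustrative reimplementation of the procedure described above rather than the code in \texttt{kosaraju.py}.
\begin{verbatim}
from collections import defaultdict

def solve_2sat(clauses, n_vars):
    # Implication graph, as in the sketch above.
    graph = defaultdict(list)
    for a, b in clauses:
        graph[-a].append(b)
        graph[-b].append(a)
    lits = [l for v in range(1, n_vars + 1) for l in (v, -v)]

    # Pass 1: DFS on the graph, recording finish order.
    order, visited = [], set()
    for s in lits:
        if s in visited:
            continue
        visited.add(s)
        stack = [(s, iter(graph[s]))]
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(node)     # node is finished
                stack.pop()
            elif nxt not in visited:
                visited.add(nxt)
                stack.append((nxt, iter(graph[nxt])))

    # Pass 2: DFS on the transpose graph in reverse finish order.
    transpose = defaultdict(list)
    for u in list(graph):
        for v in graph[u]:
            transpose[v].append(u)
    comp = {}
    for s in reversed(order):
        if s in comp:
            continue
        comp[s], stack = s, [s]
        while stack:
            u = stack.pop()
            for v in transpose[u]:
                if v not in comp:
                    comp[v] = s
                    stack.append(v)

    # Contradiction: a variable and its negation share an SCC.
    if any(comp[v] == comp[-v] for v in range(1, n_vars + 1)):
        return None                    # UNSATISFIABLE
    # Otherwise a literal is true iff its SCC comes later in
    # topological order, i.e. its root finished earlier in pass 1.
    finish = {lit: i for i, lit in enumerate(order)}
    return {v: finish[comp[v]] < finish[comp[-v]]
            for v in range(1, n_vars + 1)}
\end{verbatim}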
\subsection{Time Complexity Analysis}
In general, we know that 2-SAT is in P and is tractable due to the mechanism whereby assigning one literal within a clause as \texttt{false} forces the other literal of that same clause to be \texttt{true}. When we pick an assignment, 3 cases could happen:
\begin{enumerate}
\item We reach a contradiction. In this case, it means that there can only be a satisfying assignment if we use the other truth value for that specific variable. Thus, we can simplify the formula using this new assigned value for that variable and repeat the process.
\item The ``forcing" of the value assignment for a specific variable does not affect other variables and clauses. In this case, adopt these truth values, eliminate the clauses that they satisfy and continue the process.
\item We find a satisfying assignment.
\end{enumerate}
In Cases 1 and 2, we have spent at most $O(n^2)$ time and have reduced the length of the formula by $\geq 1$. Thus, in total, we have spent at most $O(n^3)$ time.\newline
In fact, our specific implementation method of using a DFS traversal on an implication graph using Kosaraju's Algorithm would take even less time:
\begin{enumerate}
\item To create the implication graph, we set up the vertices and edges in $O(V + E)$ time, where $V$ is the number of vertices and $E$ is the number of edges in the graph.
\item We implement DFS to traverse through the implication graph using Kosaraju's Algorithm in $O(V + E)$ time.
\item We set up the inverse/transpose implication graph in $O(V + E)$ time.
\item We implement the DFS again through the inverse implication graph in $O(V + E)$ time.
\end{enumerate}
Thus, this reduction of the 2-SAT problem to finding SCCs implemented in our deterministic algorithm would cause the algorithm to take only linear time.\newline
Meanwhile, $k$-SAT problems for $k \geq 3$ are NP-complete (following from the Cook-Levin Theorem), and thus no algorithm with polynomial asymptotics is known for them. This is because clauses with three or more literals cannot be decomposed into simple binary implications, so the reduction to an implication graph used above no longer applies.\newline
A possible improvement that could be made would be to implement Tarjan's Algorithm to conduct the search for the Strongly-Connected Components. While both Kosaraju's Algorithm and Tarjan's Algorithm would take $O(V+E)$ time, Tarjan's Algorithm has a lower constant factor to the runtime since it would need to go through the whole graph and execute DFS only once (instead of two times for Kosaraju's Algorithm, once for the normal graph and another instance of the DFS traversal for the inverse graph).
\newpage
\section{Part B --- Randomised Algorithm}
\subsection{Overview of Algorithm}
\begin{algorithm}[H]
\caption*{\textbf{function} RANDOM\_WALK($\mathbb{F}, L$)}
// $\mathbb{F}: \textit{a list of clauses}$\\
// $L: \textit{a list of all variables used in } \mathbb{F}$
\begin{algorithmic}[1]
\State Store all variables as keys in a dictionary $\mathbb{D}$ with initial values \textbf{false}
\For {$i \gets 1 \text{ to } 100\times(L.\textit{length})^2$}
\State Using $\mathbb{D}$, assign boolean values to $\mathbb{F}$
\State $\mathbb{C}\gets $ all invalid clauses in $\mathbb{F}$
\If {$\mathbb{C}.length \ne 0$}
\State $V\gets$ a random variable from a random clause in $\mathbb{C}$
\State $V\gets \neg V$
\State Update $\mathbb{D}$ with $V$
\Else
\State \textbf{return} SATISFIABLE
\EndIf
\EndFor
\State \textbf{return} UNSATISFIABLE
\end{algorithmic}
\end{algorithm}
First, we arbitrarily assign Boolean value \texttt{false} to all $n$ variables. In each of the $100n^2$ steps, we randomly choose a variable from a randomly selected invalid clause (that is, the clause evaluates to \texttt{false}) and negate its assignment. We then check if the resultant formula is satisfied or not. If no solution is found after $100n^2$ steps, we return UNSATISFIABLE. If at any point during the $100n^2$ steps the formula is satisfied, we return SATISFIABLE.
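A compact Python rendering of this procedure is shown below; the clause and assignment representations are assumptions made for illustration and do not mirror the project's source files.
\begin{verbatim}
import random

def random_walk_2sat(clauses, n_vars, factor=100):
    # Clauses are pairs of literals (+v / -v); start all False.
    assign = {v: False for v in range(1, n_vars + 1)}
    def value(lit):
        return assign[abs(lit)] == (lit > 0)
    for _ in range(factor * n_vars * n_vars):
        unsat = [c for c in clauses
                 if not (value(c[0]) or value(c[1]))]
        if not unsat:
            return assign              # SATISFIABLE
        lit = random.choice(random.choice(unsat))
        assign[abs(lit)] = not assign[abs(lit)]
    return None                        # declared UNSATISFIABLE
\end{verbatim}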
\subsection{Time Complexity Analysis}
Let $X_i$ be the number of correct assignments at step $i$. Assuming worst-case initialization, in which all variables are assigned incorrectly, we have $X_0 = 0$. This forces $X_1 = 1$, since flipping any variable would give us a correct assignment.\newline
For $1\le i \le n-1$, the probability for $X_i$ transiting to $X_{i+1}$ is at least $\frac{1}{2}$ while that to $X_{i-1}$ is at most $\frac{1}{2}$. This can be easily seen from the table below:
\begin{table}[H]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
\multicolumn{2}{|l|}{Wrong Clause $A+B$} & $A$ & $B$ \\ \hline
\multicolumn{2}{|l|}{Actual Value} & T & T \\ \hline
\multicolumn{2}{|l|}{Both Wrong Assignment} & F & F \\ \hline
\multicolumn{2}{|l|}{One Wrong Assignment} & F & T \\ \hline
\multirow{2}{*}{$P(X_i \text{ to } X_{i+1})$} & Both Wrong & \multicolumn{2}{l|}{1} \\ \cline{2-4}
& One Wrong & \multicolumn{2}{l|}{0.5} \\ \hline
\multirow{2}{*}{$P(X_i \text{ to } X_{i-1})$} & Both Wrong & \multicolumn{2}{l|}{0} \\ \cline{2-4}
& One Wrong & \multicolumn{2}{l|}{0.5} \\ \hline
\end{tabular}
\end{table}
Let us suppose the worst case – that the probability $X_i$ goes up is $\frac{1}{2}$, and down is $\frac{1}{2}$. This process is similar to a random walk.\newline
Let $h_i$ be the expected number of steps to reach $n$ on our random walk when we start at step $i$. We have
\begin{equation}\label{eqn2.1}
h_i = \frac{h_{i-1}}{2} + \frac{h_{i+1}}{2} + 1 \quad \Rightarrow \quad h_i - h_{i+1} = h_{i-1} -h_i + 2.
\end{equation}
Using the base case $h_0 = h_1 + 1$, the next 2 steps are
\begin{align}
h_1 - h_2 &= h_0 - h_1 + 2 = h_1 + 1 - h_1 + 2 = 3,\label{h1h2} \\
h_2 - h_3 &= h_1 - h_2 + 2 = 3 +2 = 5,\label{h2h3}
\end{align}
where Eqn.~(\ref{h1h2}) is substituted into Eqn.~(\ref{h2h3}). By careful observation, we formulate the following expression:
\begin{equation}\label{mi}
h_i - h_{i+1} = 2i+1.
\end{equation}
As shown above, this expression holds for $i=1$. Assume the expression is \texttt{true} for $h_k - h_{k+1} = 2k+1$ for some positive integer $k$, we want to prove that the expression holds for $i = k + 1$.
\begin{align*}
h_{k+1} - h_{k+2} &= h_k - h_{k+1} + 2 \qquad \text{By Eqn.~(\ref{eqn2.1})}\\
&= 2k+1+2\\
&= 2(k+1) + 1.
\end{align*}
Thus, by mathematical induction, Eqn.~(\ref{mi}) is true for all $k \in \mathbb{Z}^+$. Using Eqn.~(\ref{mi}), we sum all the steps from $i = 0$ to $i = n$:
\begin{align*}
h_0 &= h_n + \sum^{n-1}_{i=0}\left(h_i - h_{i+1}\right)\\
&= \sum^{n-1}_{i=0}(2i+1)\\
&= n+2\left(\frac{n^2-n}{2}\right)\\
&= n^2,
\end{align*}
where $h_n = 0$.\newline
Therefore, the average time complexity of the randomised algorithm is $O(n^2)$. In other words, we will find a solution in $n^2$ steps \textbf{on average}. If we decide to run the algorithm for $2n^2$ steps, the probability of not finding a solution is at most $\frac{1}{2}$. Thus, if we run the algorithm for $100n^2$ steps (as it is in the pseudo-code), the probability of not finding a solution is $\left(\frac{1}{2}\right)^{50} = 2^{-50}$.
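As an informal sanity check on this derivation, the worst-case $\pm 1$ walk can be simulated directly. The following throwaway script (illustrative only) produces an empirical average close to $n^2$:
\begin{verbatim}
import random

def walk_steps(n):
    # Worst case: start at 0 correct assignments; from 0 the
    # next flip is always correct, otherwise move +1 or -1
    # with probability 1/2 each, until n correct assignments.
    x, steps = 0, 0
    while x < n:
        x += 1 if (x == 0 or random.random() < 0.5) else -1
        steps += 1
    return steps

n = 20
trials = [walk_steps(n) for _ in range(2000)]
print(sum(trials) / len(trials), "vs", n * n)
\end{verbatim}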
\newpage
\section{Performance Comparison}
\begin{figure}[h]
\centering
\includegraphics[width=.95\textwidth]{diagrams/test0.png}
\caption{Benchmarking script used to compare the speed of the two implementations in Part A and Part B. Kosaraju's Algorithm runs faster than the randomised algorithm, as predicted by the algorithmic analysis.}
\end{figure}
\vspace{4mm}
Even though the probability of not finding a solution is very small ($2^{-50}$ for $100n^2$ steps), the randomised algorithm is not a practical substitute for the deterministic one since it takes longer than the deterministic algorithm. There is also some chance that a solution is not found, and hence an incorrect statement about the problem's satisfiability could be made. The running time of the randomised algorithm also grows quickly with the number of variables.\newline
However, we should not dismiss the usefulness of the idea of randomised local search entirely. We should be aware that the strategy of using randomised local search is useful to improve over naive brute-force search for NP-complete $k$-SAT problems, such as the $3$-SAT problem. In fact, for a 3-SAT problem, the naive brute-force method would take $O(2^n)$ time, while a version of the randomised local search with a clever twist (such as Sch\"{o}ning's stochastic local search algorithm) would take $O\left(\left(\tfrac{4}{3}\right)^n\right)$ time, which is significantly better than $O(2^n)$.
\end{document}
% ------------------------------------------------------------------------------
% One Page
% - Problem
% - Objective(s)
% - Approach and method
% - Results, Solutions (This part should have the most prominence)
% - Conclusion
% ------------------------------------------------------------------------------
\chapter*{Abstract}
\addcontentsline{toc}{chapter}{Abstract} % adds an entry to the table of contents
% -- Your text goes here --
\lipsum[1-2]
\vspace{0.5cm}
\textbf{Key words:}
\Keywords
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%% ICML 2013 EXAMPLE LATEX SUBMISSION FILE %%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Use the following line _only_ if you're still using LaTeX 2.09.
%\documentstyle[icml2013,epsf,natbib]{article}
% If you rely on Latex2e packages, like most moden people use this:
\documentclass{article}
% For figures
\usepackage{graphicx} % more modern
%\usepackage{epsfig} % less modern
\usepackage{subfigure}
% For citations
\usepackage{natbib}
% For algorithms
\usepackage{algorithm}
\usepackage{algorithmic}
% As of 2011, we use the hyperref package to produce hyperlinks in the
% resulting PDF. If this breaks your system, please commend out the
% following usepackage line and replace \usepackage{icml2013} with
% \usepackage[nohyperref]{icml2013} above.
\usepackage{hyperref}
% Packages hyperref and algorithmic misbehave sometimes. We can fix
% this with the following command.
\newcommand{\theHalgorithm}{\arabic{algorithm}}
% Employ the following version of the ``usepackage'' statement for
% submitting the draft version of the paper for review. This will set
% the note in the first column to ``Under review. Do not distribute.''
\usepackage{icml2013}
% Employ this version of the ``usepackage'' statement after the paper has
% been accepted, when creating the final version. This will set the
% note in the first column to ``Proceedings of the...''
% \usepackage[accepted]{icml2013}
% The \icmltitle you define below is probably too long as a header.
% Therefore, a short form for the running title is supplied here:
\icmltitlerunning{Submission and Formatting Instructions for ICML 2013}
\begin{document}
\twocolumn[
\icmltitle{Using K-Nearest Neighbors on the MNIST Dataset}
% It is OKAY to include author information, even for blind
% submissions: the style file will automatically remove it for you
% unless you've provided the [accepted] option to the icml2013
% package.
\icmlauthor{Carl Cortright}{[email protected]}
% You may provide any keywords that you
% find helpful for describing your paper; these are used to populate
% the "keywords" metadata in the PDF but will not be shown in the document
\icmlkeywords{boring formatting information, machine learning, ICML}
\vskip 0.3in
]
\section{Introduction}
In this assignment, I use a K-Nearest-Neighbors (KNN) model, implemented using scikit-learn and numpy, to classify the MNIST handwriting dataset. I then show how the number of training datapoints as well as the value of K contribute to accuracy. Afterwards I analyze which numbers get confused with each other most easily.
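Since the paper does not reproduce the code, the following sketch shows how such a pipeline could be set up with scikit-learn; the OpenML loader, the train/test split, the training-example limit and the choice of $k=3$ are illustrative assumptions rather than the exact settings behind the reported figures.
\begin{verbatim}
from sklearn.datasets import fetch_openml
from sklearn.neighbors import (
    KNeighborsClassifier)
from sklearn.metrics import (
    accuracy_score, confusion_matrix)

# 70,000 MNIST digits from OpenML
X, y = fetch_openml("mnist_784", version=1,
                    return_X_y=True,
                    as_frame=False)
limit = 5000   # training example limit
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X[:limit], y[:limit])
pred = knn.predict(X[60000:])
print(accuracy_score(y[60000:], pred))
print(confusion_matrix(y[60000:], pred))
\end{verbatim}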
\section{Analysis}
After completing the model, I ran a series of trials comparing the number of training examples with the accuracy while holding k constant. Before running the test I hypothesized that as we gave the model more training examples we would have higher accuracy. By graphing the results it is clear that the accuracy is asymptotically approaching somewhere around 95-96 percent.
\begin{figure}[!ht]
\caption{Graph of KNN Accuracy vs. Training example limit.}
\includegraphics[width=\columnwidth]{limit.png}
\end{figure}
Next I plotted accuracy vs. k. I kept the training-example limit at 500 because, at that size, the effect of changing k on accuracy is easier to see. The results were interesting; as k grew, the accuracy oscillated while approaching somewhere around 79 percent.
\begin{figure}[!ht]
\caption{Graph of KNN Accuracy vs. k.}
\includegraphics[width=\columnwidth]{k.png}
\end{figure}
Another interesting metric to consider with the MNIST dataset is which numbers get confused with other numbers. This is easy to analyze using the confusion matrix printed at the end of the program. By analyzing this matrix, you can see that the numbers 5 and 8 get confused with other numbers most often. The number 5 mostly gets confused with the numbers 3 and 6, whereas 8 gets confused with 3 and 5. Another outlier is how the number 4 gets confused with 9, an occurrence that happened 19 times in our test. All of these confusions make sense because the digits being confused look somewhat similar. It would be strange if 1 got confused with 5 because they look nothing alike. The vectors representing 8 and 3, on the other hand, must look very similar due to their curved nature.
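The most frequently confused pairs can also be extracted from the confusion matrix programmatically; the helper below is a small illustrative sketch rather than part of the submitted code.
\begin{verbatim}
import numpy as np

def most_confused(cm, top=5):
    # cm[i, j]: true digit i predicted as j
    off = cm.copy()
    np.fill_diagonal(off, 0)
    flat = np.argsort(off, axis=None)
    flat = flat[::-1][:top]
    rows, cols = np.unravel_index(
        flat, off.shape)
    return [(int(r), int(c), int(cm[r, c]))
            for r, c in zip(rows, cols)]
\end{verbatim}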
\begin{figure}[!ht]
\caption{Confusion Matrix with final accuracy of 97 percent}
\includegraphics[width=\columnwidth]{ConfusionMatrix.png}
\end{figure}
\section{Conclusion}
In this assignment I analyzed how accuracy was affected by both the k value and the limit of training examples. I then analyzed which numbers were getting confused the most. The final accuracy achieved was slightly above 97 percent, which makes it acceptable in a research context but still less accurate than methods like deep learning.
\end{document}
% This document was modified from the file originally made available by
% Pat Langley and Andrea Danyluk for ICML-2K. This version was
% created by Lise Getoor and Tobias Scheffer, it was slightly modified
% from the 2010 version by Thorsten Joachims & Johannes Fuernkranz,
% slightly modified from the 2009 version by Kiri Wagstaff and
% Sam Roweis's 2008 version, which is slightly modified from
% Prasad Tadepalli's 2007 version which is a lightly
% changed version of the previous year's version by Andrew Moore,
% which was in turn edited from those of Kristian Kersting and
% Codrina Lauth. Alex Smola contributed to the algorithmic style files.
%
% Copyright 2016, Data61
% Commonwealth Scientific and Industrial Research Organisation (CSIRO)
% ABN 41 687 119 230.
%
% This software may be distributed and modified according to the terms of
% the BSD 2-Clause license. Note that NO WARRANTY is provided.
% See "LICENSE_BSD2.txt" for details.
%
% @TAG(D61_BSD)
%
This chapter explains the interfaces which make up the \refOS protocol. These interfaces set the language with which components in the system interact. Interfaces are intended to provide abstractions to manage conceptual objects such as a processes, files and devices.
Further implementations beyond \refOS may choose to extend these interfaces to provide extra system functionality and to attach additional interfaces.
% ----------------------------------------------------------------------
\section{Objects}
Servers implement interfaces which provide management (creation, destruction, monitoring, sharing and manipulation) of objects. For example, an audio device may implement the dataspace interface for UNIX \texttt{/dev/audio} which gives the abstraction to manage audio objects.
In \refOS, the access to an object (the permission to invoke the methods which manage the object) is implemented using a badged endpoint capability. The badge number is used to uniquely identify the instance of an object. Since most expected implementations of these protocols will likely use a similar method to track access permissions, the term capability may be used interchangeably with object in this document. For example, ``window capability'' means a capability which represents the access to a window object.
Note that in implementations, objects may be merged in some cases. For example, the process server's \obj{liveness}, \obj{anon} or \obj{process} object capabilities may be merged for simplification.
\begin{description}
\item[\obj{process}] is an object which is most likely maintained by the process server (\srv{procserv}) and represents a process. If something has access to a process object, it may perform operations involving that process such as killing that process and calling methods on behalf of that process.
\item[\obj{liveness}] is an object representing the identity of a process. If something has access to a liveness object of a process, it can be notified of the process's death and can request an ID to uniquely identify the process, but it cannot kill the process or pretend to be the process.
\item[\obj{anon}] is an object representing the "address" of a server, and it is used to establish a session connection to the server.
\item[\obj{session}] is an object representing an active connection session to a server. If something has access to a session object, it can invoke methods on the server on behalf of the session client.
\item[\obj{dataspace}] is an object that represents a dataspace. The dataspace itself may represent anything that may be modeled as a series of bytes including devices, RAM and files. If something has access to a dataspace object, it may read from the dataspace object, write to the dataspace object and execute the dataspace object by mapping a memory window to the dataspace, closing the dataspace, or deleting the dataspace. Performing these operations is dependent on the dataspace permissions.
\item[\obj{memwindow}] is an object that represents an address space range (i.e. a memory window) segment in a process's virtual memory. If something has access to a memory window object, it may perform operations on the memory window object such as mapping the memory window to a dataspace and mapping a frame into the memory window.
\end{description}
% ----------------------------------------------------------------------
\section{Protocols}
\label{lProtocols}
This section describes a number of important protocols that \refOS employs. As noted in \autoref{mNotation} of this document, each protocol description consists of the server that is receiving and handling the method invocation via an endpoint, the name of the interface that the server implements, the name of the method call and the arguments that are passed to the method call and the return values, output variables and/or reply capabilities of the method invocation. Note that for simplification some method names differ slightly between this document and \refOS's implementation.
% ------------------
\subsection{Process Server Interface}
The process server interface provides the abstraction for managing processes and threads. The abstraction includes management of processes' virtual memory, memory window creation and deletion, process creation and deletion, death notification of client processes and thread management. Note that in implementations \cp{procserv}{session} could be the same capability as \cp{procserv}{process} in which case the process server is connectionless. \cp{procserv}{session} may also be shared with \cp{procserv}{anon} for simplification.
\begin{description}
\item \pro{procserv}{session}{watch\_client(\cp{procserv}{liveness}, death\_notify\_ep)}
{(Errorcode, death\_id, principle\_id)}
Authenticate a new client of a server against the \srv{procserv} and register for death
notification.
\begin{description}
\item [procserv\_liveness\_C] The new client's liveness capability, which the client has given to the server through session connection
\item [death\_notify\_ep] The asynchronous endpoint through which death notification occurs
\item [death\_id] The unique client ID that the server will receive on death notification
\item [principle\_id] The ID used for permission checking (optional)
\end{description}
\item \pro{procserv}{session}{unwatch\_client(\cp{procserv}{liveness})}{(Errorcode)}
Stop watching a client and remove its death notifications.
\item \pro{procserv}{session}{create\_mem\_window(base\_vaddr, size, permissions, flags)}
{(ErrorCode, \cp{procserv}{window})}
Create a new memory window segment for the calling client. Note that clients may only create memory windows for their own address space, and alignment restrictions may apply here due to implementation and/or hardware restrictions. In the \refOS client environment, a virtual address range must be covered by a valid memory window segment before any mapping (including dataspace and device frame mappings) can be performed on it.
\begin{description}
\item [base\_vaddr] The window base address in the calling client's VSpace
\item [size] The size of the memory window
\item [permissions] The read and write permission flags (optional)
\item [flags] The extra flags bitmask - cached versus uncached window for example (optional)
\end{description}
\item \pro{procserv}{session}{resize\_mem\_window(\cp{procserv}{window}, size)}
{ErrorCode}
Resize a memory window segment. This is an optional feature, which may be useful for implementing dynamic heap memory allocation on clients.
\begin{description}
\item [size] The size of the memory window to resize to
\end{description}
\item \pro{procserv}{session}{delete\_mem\_window(\cp{procserv}{window})}
{ErrorCode}
Delete a memory window segment.
\item \pro{procserv}{session}{register\_as\_pager(\cp{procserv}{window}, fault\_notify\_ep)}
{(ErrorCode, window\_id)}
Register to receive faults for the presented window. The returned \ty{window\_id} is an integer used during notification in order for the pager to be able to identify which window faulted. \ty{window\_id} must be unique for each pager, although each \ty{window\_id} may also be implemented to be unique across the entire system.
\begin{description}
\item [fault\_notify\_ep] The asynchronous endpoint which fault notifications are to be sent through
\item [window\_id] The unique ID of the window which is used to identify which window faulted. The server most likely has to record this ID to handle faults correctly
\end{description}
\item \pro{procserv}{session}{unregister\_as\_pager(\cp{procserv}{window})}{(ErrorCode)}
Unregister to stop being the pager for a client process's memory window
\item \pro{procserv}{session}{window\_map(\cp{procserv}{window}, window\_offset, src\_addr)}
{ErrorCode}
Map the frame at the given VSpace into a client's faulted window and then resolve the fault and resume execution of the faulting client. This protocol is most commonly used in response to a prior fault notification from the process server, and it also may be used to eagerly map frames into clients before they VMfault.
\begin{description}
\item [window\_offset] The offset into the window to map the frame into
\item [src\_addr] The address of the source frame in the calling client process's VSpace. This address should contain a valid frame, and page-alignment restrictions may apply for this parameter
\end{description}
\item \pro{procserv}{session}{new\_proc(name, params, block, priority)}{(ErrorCode, status)}
Start a new process, blocking or non-blocking.
\begin{description}
\item [name] The executable file name of the process to start
\item [params] The parameters to pass onto the new process
\item [block] The flag stating to block or not to block until the child process exits
\item [priority] The priority of the new child process
\item [status] The exit status of the process (only applicable if blocking)
\end{description}
\item \pro{procserv}{session}{exit(status)}{(ErrorCode)}
Exit and delete the calling process.
\begin{description}
\item [status] The exit status of the calling client process
\end{description}
\item \pro{procserv}{session}{clone(entry, stack, flags, args)}{(ErrorCode, thread\_id)}
Start a new thread, sharing the current client process's address space. The child thread will have the same priority as the parent process.
\begin{description}
\item [entry] The entry point vaddr of the new thread
\item [stack] The stack vaddr of the new thread
\item [flags] Any thread-related flags
\item [args] The thread arguments
\item [thread\_id] The thread ID of the cloned thread
\end{description}
\end{description}
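
To illustrate how these calls compose, the fragment below sketches a self-paging task that covers a region of its own VSpace with a memory window, registers to receive fault notifications for it, and later resolves a fault with \texttt{window\_map}. All C names, types and signatures here are hypothetical placeholders chosen for readability; they are not the actual \refOS client library API.
\begin{verbatim}
/* Hypothetical sketch of the memory window / pager calls above.
 * All names, types and signatures are illustrative placeholders,
 * not the actual refOS client library API.                          */
typedef unsigned long cptr_t;               /* capability reference  */

extern int procserv_create_mem_window(cptr_t session, unsigned long base,
                                       unsigned long size, int perms,
                                       int flags, cptr_t *window);
extern int procserv_register_as_pager(cptr_t session, cptr_t window,
                                       cptr_t notify_ep, int *window_id);
extern int procserv_window_map(cptr_t session, cptr_t window,
                               unsigned long offset, unsigned long src_addr);

/* A self-paging task covers part of its own VSpace with a window,
 * registers for fault notifications on it, and later resolves a fault
 * at fault_offset by mapping one of its own frames into the window.  */
void self_pager_example(cptr_t session, cptr_t notify_ep,
                        unsigned long fault_offset,
                        unsigned long frame_vaddr)
{
    cptr_t window;
    int window_id;

    procserv_create_mem_window(session, 0x20000000, 0x100000,
                               /* read|write */ 3, 0, &window);
    procserv_register_as_pager(session, window, notify_ep, &window_id);

    /* ... wait for a pagefault_notify(window_id, offset, op) event ... */

    procserv_window_map(session, window, fault_offset, frame_vaddr);
}
\end{verbatim}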
% ------------------
\subsection{Server Connection Interface}
The server connection interface enables client session connection and disconnection. It may be a good idea during implementation to extend this interface to any other operating system functionality that is common across servers, such as debug ping, parameter buffer setup and notification buffer setup.
\begin{description}
\item \pro{serv}{anon}{connect(\cp{procserv}{liveness})}{(ErrorCode, \cp{serv}{session})}
Connect to a server and establish a session.
\item \pro{serv}{session}{disconnect()}{(ErrorCode)}
Disconnect from a server and delete the session.
\end{description}
% ------------------
\subsection{Data Server Interface}
The data server interface provides the abstraction for management of dataspaces including dataspace creation, dataspace access, dataspace sharing and dataspace manipulation.
\begin{description}
\item \pro{dataserv}{session}{open(char *name, flags, mode, size)}
{(ErrorCode, \cp{dataserv}{dataspace})}
Open a new dataspace, representing a series of bytes, at the dataspace server. Dataspace mapping methods such as datamap() and init\_data() directly or indirectly map the contents of a dataspace into a memory window, after which the contents can be read and written. The concept of a dataspace in \refOS is similar to a file in UNIX: what a dataspace represents depends on the server that is implementing the interface.
\begin{description}
\item [name] The name of the dataspace to open
\item [flags] The read, write and create flags to open the dataspace with
\item [mode] The mode to create a new file with in the case that a new one is created
\item [size] The size of dataspace to open - some data servers may ignore this
\end{description}
\item \pro{dataserv}{session}{close(\cp{dataserv}{dataspace})}{ErrorCode}
Close a dataspace belonging to the data server.
\item \pro{dataserv}{session}{expand(\cp{dataserv}{dataspace}, size)}{ErrorCode}
Expand a given dataspace. Note that some dataspaces may not support this method as sometimes the size of a dataspace makes no sense (serial input for instance).
\begin{description}
\item [size] The size to expand the dataspace to
\end{description}
\item \pro{dataserv}{session}{datamap(\cp{dataserv}{dataspace}, \cp{procserv}{window}, offset)}
{ErrorCode}
Request that the data server back the specified window with the specified dataspace. Offset is the offset into the dataspace to be mapped to the start of the window. Note that the dataspace has to be provided by the session used to request the backing of the window.
\begin{description}
\item [procserv\_window\_C] Capability to the memory window to map the dataspace contents into
\item [offset] The offset in bytes from the beginning of the dataspace
\end{description}
\item \pro{dataserv}{session}{dataunmap(\cp{procserv}{window})}
{ErrorCode}
Unmap the contents of the dataspace from the given memory window.
\begin{description}
\item [procserv\_window\_C] Capability to the memory window to unmap the dataspace from
\end{description}
\item \pro{dataserv}{session}{init\_data(\cp{dest}{dataspace}, \cp{dataserv}{dataspace},
offset)}{ErrorCode}
Initialise a dataspace with the contents of a source dataspace. The source dataspace holds the content and must originate from the invoked dataserver. Whether the destination dataspace and the source dataspace may originate from the same dataserver depends on the dataserver implementation: refer to the dataserver documentation. One example use case is a memory manager that implements the dataspace for RAM having a block of RAM initialised by an external data source such as a file from a file server.
\begin{description}
\item [dest\_dataspace\_C] The dataspace to be initialised with content from the source dataspace
\item [dataserv\_dataspace\_C] The dataserver's own dataspace (where the content is)
\item [offset] The content offset into the source dataspace
\end{description}
\item \pro{dataserv}{session}{have\_data(\cp{dataserv}{dataspace}, fault\_ep)}
{ErrorCode, data\_id}
Call a remote dataspace server to have the remote dataspace server initialised by the contents of the local dataspace server. The local dataspace server must bookkeep the remote dataspace server's ID. The remote dataspace server will then request content initialisation through the given endpoint, providing its ID in the notification.
\begin{description}
\item [fault\_ep] The asynchronous endpoint to ask for content initialisation with
\item [data\_id] The remote endpoint's unique ID number
\end{description}
\item \pro{dataserv}{session}{unhave\_data(\cp{dataserv}{dataspace})}
{ErrorCode}
Inform the dataserver to stop providing content initialisation data for its dataspace.
\item \pro{dataserv}{session}{provide\_data(\cp{dataserv}{dataspace}, offset, content\_size)}
{ErrorCode}
Give the content from the local dataserver to the remote dataserver in response to the remote dataserver's earlier notification asking for content. The content is assumed to be in the previously set up parameter buffer. This call implicitly requires a parameter buffer to be set up, and how this is done is up to the implementation. Even though the notification from the dataserver asking for content uses an ID to identify the dataspace, the reply, for security reasons, gives the actual dataspace capability. The ID may be used securely if the dataserver implementation supports per-client ID checking, and in this situation a version of this method with an ID instead of a capability could be added.
\begin{description}
\item [offset] The offset into the remote dataspace to provide content for
\item [content\_size] The size of the content
\end{description}
\item \pro{dataserv}{process}{datashare(\cp{dataserv}{dataspace})}
{ErrorCode}
Share a dataspace of a dataserver with another process. The exact implementation of this is context-based. For example, a server sharing a parameter buffer may implement this method as \texttt{share\_param\_buffer(dataspace\_cap)}, implicitly stating the context and implementing multiple share calls for more contexts. Although this method of stating the sharing context is strongly encouraged, the exact method of passing context is left up to the implementation. \\
There are also a few assumptions here:
\begin{itemize}
\item The dataspace is backed by somebody the receiving process trusts (the process server for instance)
\item The dataspace is not revocable, so the receiving process does not need to protect itself from an untrusted fault handler on that memory
\end{itemize}
\item \pro{dataserv}{event}{pagefault\_notify(window\_id, offset, op)}
{ErrorCode}
Send a notification event to a data server indicating which window a page fault occurred in, the offset within that window and the operation attempted (either read or write).
\item \pro{dataserv}{event}{initdata\_notify(data\_id, offset)}
{ErrorCode}
Send a notification event to a data server indicating that a dataspace needs its initial data.
\item \pro{dataserv}{event}{death\_notify(death\_id)}
{ErrorCode}
Send a notification event to a data server indicating that a client has died and that the resources associated with the client can be cleaned up.
\end{description}
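
As an illustration of how a client might combine the process server and data server interfaces, the sketch below opens a dataspace and maps it into a freshly created memory window. Again, the C function names and signatures are hypothetical stand-ins for the protocol calls described above, not the actual \refOS client library API.
\begin{verbatim}
/* Hypothetical sketch of opening a dataspace and mapping it through a
 * memory window.  Names and signatures are placeholders only, not the
 * actual refOS client library API.                                    */
typedef unsigned long cptr_t;

extern int procserv_create_mem_window(cptr_t session, unsigned long base,
                                       unsigned long size, int perms,
                                       int flags, cptr_t *window);
extern int dataserv_open(cptr_t session, const char *name, int flags,
                         int mode, unsigned long size, cptr_t *dataspace);
extern int dataserv_datamap(cptr_t session, cptr_t dataspace,
                            cptr_t window, unsigned long offset);
extern int dataserv_dataunmap(cptr_t session, cptr_t window);
extern int dataserv_close(cptr_t session, cptr_t dataspace);

void map_file_example(cptr_t procserv_session, cptr_t dataserv_session)
{
    cptr_t dataspace, window;
    unsigned long vaddr = 0x30000000, size = 0x1000;

    /* Open the dataspace (a file on the file server, for example).   */
    dataserv_open(dataserv_session, "hello.txt", /* read */ 1, 0, size,
                  &dataspace);

    /* Cover a range of our own VSpace with a window, then ask the
     * data server to back that window with the dataspace contents.   */
    procserv_create_mem_window(procserv_session, vaddr, size,
                               /* read */ 1, 0, &window);
    dataserv_datamap(dataserv_session, dataspace, window, 0);

    /* ... the file contents are now readable at vaddr ...            */

    dataserv_dataunmap(dataserv_session, window);
    dataserv_close(dataserv_session, dataspace);
}
\end{verbatim}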
% ------------------
\subsection{Name Server Interface}
The name server interface provides a naming protocol. \refOS employs a simple hierarchical distributed naming scheme where each server is responsible for a particular prefix under another name server. This allows for simple distributed naming while permitting prefix caching. In \refOS, the process server acts as the root name server.
\begin{description}
\item \pro{nameserv}{session}{register(name, \cp{dataserv}{anon})}
{ErrorCode}
Create an endpoint and register it with the root name server so that clients are able to find the registering server and connect to it. The anon capability is given to clients looking for the server, and clients then make their connection calls through the anon capability to establish a session. Re-registering replaces the current server anon capability.
\begin{description}
\item [name] The name to register under
\item [dataserv\_anon\_C] The anonymous endpoint to register with
\end{description}
\item \pro{nameserv}{session}{unregister(name)}
{ErrorCode}
Unregister a server under a given name so clients are no longer able to find the server under that name. This method invalidates existing anon capabilities.
\begin{description}
\item [name] The name to unregister for
\end{description}
\item\pro{nameserv}{session}{resolve\_segment(path)}
{(ErrorCode, \cp{dataserv}{anon}, resolved\_bytes)}
Return an anon capability if the server is found. This method resolves part of the path string and returns the number of resolved bytes as an offset into the path string. The rest of the path string may be resolved by other name servers until the system reaches the endpoint of the server that contains the file it is searching for. This allows for a simple hierarchical namespace with distributed naming.
\begin{description}
\item [path] The path to resolve
\item [resolved\_bytes] The number of bytes resolved from the start of the path string
\end{description}
\end{description}
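
The segment-by-segment resolution lends itself to a simple client loop: ask the current name server to resolve as much of the remaining path as it can, then continue with the anon capability it returns. The sketch below is again hypothetical C with placeholder names, not the actual \refOS client library API.
\begin{verbatim}
/* Hypothetical sketch of segment-by-segment name resolution using
 * resolve_segment(); names are placeholders, not the actual API.     */
typedef unsigned long cptr_t;

extern int    nameserv_resolve_segment(cptr_t session, const char *path,
                                       cptr_t *server_anon, int *resolved);
extern cptr_t serv_connect(cptr_t server_anon);   /* returns a session */

/* Resolve a path such as "fileserv/hello.txt", descending through the
 * name server hierarchy until the server owning the file is found.   */
cptr_t lookup_server(cptr_t root_nameserv_session, const char *path)
{
    cptr_t session = root_nameserv_session;
    cptr_t anon = 0;
    int resolved = 0;

    while (*path != '\0') {
        if (nameserv_resolve_segment(session, path, &anon, &resolved) != 0
                || resolved == 0) {
            return 0;                      /* name not found            */
        }
        path += resolved;                  /* drop the resolved prefix  */
        if (*path != '\0') {
            session = serv_connect(anon);  /* descend one level further */
        }
    }
    return anon;       /* anon endpoint of the server holding the file */
}
\end{verbatim}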
% ----------------------------------------------------------------------
\section{Server Components}
\refOS follows the component-based multi-server operating system design. The two main components in the design are the process server and the file server. \refOS implements additional operating system servers, the console server and the timer server, which are dataservers responsible for basic device-related tasks.
% ------------------
\subsection{Process Server}
The process server is the most trusted component in the system. It runs as the initial kernel thread and does not depend on any other component (this avoids deadlock). The process server implementation is single-threaded. The process server also implements the dataspace interface for anonymous memory and acts as the memory manager.
In \refOS, the process server implements the following interfaces:
\begin{itemize}
\item Process server interface (naming, memory windows, processes and so on)
\item Dataspace server interface (for anonymous memory)
\item Name server interface (in \refOS, the process server acts as the root name server)
\end{itemize}
% ------------------
\subsection{File Server}
The file server is more trusted than clients, but it is less trusted than the process server (this avoids deadlock). In \refOS, the file server does not use a disk driver and the actual file contents are compiled into the file server executable itself using a cpio archive. The file server acts as the main data server in \refOS.
In \refOS, the file server implements the following interfaces:
\begin{itemize}
\item Dataspace server interface (for stored file data)
\item Server connection interface (for clients to connect to it)
\end{itemize}
% ------------------
\subsection{Console Server}
The console server provides serial and EGA input and output functionality, which is exposed through the dataspace interface. The console server also provides terminal emulation for EGA screen output.
In \refOS, the console server implements the following interfaces:
\begin{itemize}
\item Dataspace server interface (for serial input and output and EGA screen devices)
\item Server connection interface (for clients to connect to it)
\end{itemize}
% ------------------
\subsection{Timer Server}
The timer server provides get-time and sleep functionality, which is exposed through the dataspace interface.
In \refOS, the timer server implements the following interfaces:
\begin{itemize}
\item Dataspace server interface (for timer devices)
\item Server connection interface (for clients to connect to it)
\end{itemize}
| {
"alphanum_fraction": 0.7497474193,
"avg_line_length": 70.0461538462,
"ext": "tex",
"hexsha": "68d810e962ec424a7813643a6b6f31e6cb7b2b67",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2020-09-17T07:34:42.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-08-13T01:48:40.000Z",
"max_forks_repo_head_hexsha": "7b0ad5afb339e9bebc65142ee89bded5344cedbe",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "Nexusoft/LX-OS",
"max_forks_repo_path": "projects/refos/design/interface.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7b0ad5afb339e9bebc65142ee89bded5344cedbe",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "Nexusoft/LX-OS",
"max_issues_repo_path": "projects/refos/design/interface.tex",
"max_line_length": 696,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "7b0ad5afb339e9bebc65142ee89bded5344cedbe",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "Nexusoft/LLL-OS",
"max_stars_repo_path": "projects/refos/design/interface.tex",
"max_stars_repo_stars_event_max_datetime": "2020-10-16T20:05:17.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-08-07T19:48:01.000Z",
"num_tokens": 4944,
"size": 22765
} |
\section{Adverb}
\begin{table}[h]
\caption{Adverb characteristics}
\begin{tabular}{ll}
\textbf{Title} & \textbf{Value} \\
Semantic value & Attribute \\
Category & Independent \\
Subcategory & Nominal \\
Alteration & Comparison \\
Alteration parameters & Degree \\
Differentiation parameters & Type, Group
\end{tabular}
\end{table}
There are words that cannot be altered. The largest class of such words is the adverb. It unites a number of groups into which adverbs can be divided.
First of all, you should know that adverbs\index{adverb} can be divided into three types, depending on the form they have: \textit{primary} (P), \textit{secondary} (S) and \textit{derivative} (D). When we speak about adverb groups, these letters will be written next to the adverb to help you differentiate these types more easily.
\underline{Primary\index{adverb!primary}} adverbs were formed long ago and are studied as whole words, without an easily identifiable root.
\textbf{Examples:}
\textit{Nyne} - Now
\textit{Dy} - When
\textit{De} - Where
\underline{Secondary\index{adverb!secondary}} adverbs were formed from two words (or a phrase) or from a frozen word form.
\textbf{Examples:}
\textit{Vdomu} - At home
\textit{Nazad} - Backwards
\underline{Derivative\index{adverb!derivative}} adverbs are shifted semantically from an adjective. This is the most common type of adverb. This form usually corresponds to -LY adverbs in English.
\textbf{Examples:}
\textit{Hlådno} - Cold, Coldly
\textit{Věčno} - Eternally
Now let us talk about groups of adverbs. One distinguishes two main types of adverbs: \textit{Significant} and \textit{Demonstrative} \cite{belorussian}. Significant adverbs name concrete property or process attributes, while demonstrative adverbs only refer to these attributes or indicate their general nature.
\begin{figure}
\includegraphics[width=\linewidth]{./sources/adverbs.jpg}
\caption{Categories of adverbs}
\label{fig:adverbs}
\end{figure}
\subsection{Demonstrative adverbs}
There are six categories of demonstrative\index{adverb!demonstrative} adverbs. They are adverbs of place, time, cause, goal, mean and quantity.
1. \textbf{Demonstrative adverbs of place.}
Let’s speak about the subcategories of these adverbs.
A. \textit{Adverbs indicating the place of action.}
Question: \textit{Where? (Kųde?)}
Here - \textit{Tut, zdě}
There - \textit{Tamo, tųde}
Everywhere - \textit{Vsëde, vsüdu, vsëkųde}
Nowhere - \textit{Nide, nikųde}
Somewhere - \textit{Něde, někųde}
B. \textit{Adverbs indicating the place where action is directed. }
Question: \textit{Whither? (Kųda?)}
Here - \textit{Nazdě, süda}
There - \textit{Tųda, natųde, natamo}
Everywhere - \textit{Navsękųde, vsękųda}
Nowhere - \textit{Nanikųde, nikųda}
Somewhere - \textit{Naněkųde, někųda}
C. \textit{Adverbs indicating the place of action’s start.}
Question: \textit{Whence? (Odkųde?)}
Here - \textit{Odzdě, odsüda}
There - \textit{Odtųda}
Everywhere - \textit{Odvsękųda}
Nowhere - \textit{Odnikųda}
Somewhere - \textit{Odněkųda}
D. \textit{Adverbs indicating the place of action’s end.}
Question: \textit{Where? How far? (Dokųde?)}
(Up to) here - \textit{Dozdě}
There - \textit{Dotųde, dotamo}
Everywhere - \textit{Dovsękųde}
Nowhere - \textit{Donikųde}
Somewhere - \textit{Doněkųde}
2. \textbf{Demonstrative adverbs of time}
Question: \textit{When? (Koĝda?)}
Now - \textit{Nyně, sëdy}
Afterwards - \textit{Poslě, potym}
Later - \textit{Pozdě, po-pozdno}
%TODO: Pozdne??%
Then - \textit{Poslě, potym}
Once - \textit{Jednađy}
Sometimes - \textit{Něĝda, nědy}
Ever - \textit{Něĝda, nědy}
Never - \textit{Niĝda, nidy}
Always - \textit{Vsëĝda, vsëdy}
3. \textbf{Demonstrative adverbs of cause}
Question: \textit{Why? (Čomu?)}
Therefore - \textit{Slědno}
Because - \textit{Bo, tomu če}
Thus - \textit{Tako}
Somehow - \textit{Nějako}
4. \textbf{Demonstrative adverbs of goal}
Question: \textit{For what? (Za čto?)}
For - \textit{Za da, dlä}
In order to - \textit{Dlä}
So as to - \textit{Za da}
5. \textbf{Demonstrative adverbs of mean}
Question: \textit{How? (Kako?)}
So - \textit{Tako}
Likewise - \textit{Podobno}
Somehow - \textit{Nějako}
Otherwise - \textit{Drugo}
Nohow - \textit{Nijako}
Like that - \textit{Kakto (Kako to)}
That way - \textit{Tako}
Anyhow - \textit{Nějako}
Differently - \textit{Ïno}
6.\textbf{ Demonstrative adverbs of quantity}
Question: \textit{How much? (Kolïko?)}
So much - \textit{Tako mnogo}
Few - \textit{Několïko}
Some - \textit{Několïko}
Several - \textit{Několïko}
Nary - \textit{Malo}
\subsection{Significant adverbs}
Significant\index{adverb!significant} adverbs are often divided into two large subcategories: conditional and attributive adverbs. \textit{Conditional} adverbs show the conditions under which the action took place: place, time, cause and goal. \textit{Attributive} adverbs show the attributes of the action: the mean of the action, its quality and its quantity.
1. \textbf{Significant adverbs of place.}
A. \textit{Adverbs indicating the place of action.}
Question: \textit{Where? (Kųde?)}
Next to (close to) - \textit{Blïzko do}
Ahead - \textit{Upredï}
Opposite - \textit{Naprotï}
Around - \textit{Okolo}
Far - \textit{Dalïko}
Not far - \textit{Nedalïko}
Among - \textit{Među, sredï}
Between - \textit{Među}
At home -\textit{ Doma, vdomu}
Upstairs - \textit{Uvòrhu, vòrhu}
Downstairs - \textit{Unïzu, nïzu}
B. \textit{Adverbs indicating the place where action is directed. }
Question:\textit{ Whither? (Kųda?)}
Ahead - \textit{Upred}
To the right - \textit{Uděsno}
Upwards - \textit{Uvòrh}
Sideways - \textit{Uboku}
Downwards - \textit{Unïz}
Opposite - \textit{Naprotiv}
Home - \textit{Dodomu}
C. \textit{Adverbs indicating the place of action’s start.}
Question: \textit{Whence? (Odkųde?)}
From upstairs - \textit{Zvòrhu}
From downstairs - \textit{Znïzu}
From the right - \textit{Zděsnu}
From the left - \textit{Zlěvu}
D. \textit{Adverbs indicating the place of action’s end.}
Question: \textit{Where? How far? (Dokųde?)}
Up to the top - \textit{Dovòrha}
Down to the bottom - \textit{Donïza}
2. \textbf{Significant adverbs of time}
Question: \textit{When? (Koĝda?)}
\textit{A. With relative time value}
Now - \textit{Nyně}
Already - \textit{Juž}
Soon - \textit{Skoro}
Always - \textit{Vsëĝda}
Long ago - \textit{Davno}
Recently - \textit{Nedavno}
Earlier - \textit{Ráno}
\textit{B. With the value of the period in which the action is executed.}
Today - \textit{Dnësj}
Yesterday - \textit{Včera}
The day before yesterday - \textit{Zadvčera}
In the afternoon - \textit{Dnëm}
In the night - \textit{Nočïü}
In the morning - \textit{Jutrom}
Before the dawn - \textit{Predutrom}
In the evening - \textit{Večorom}
In spring - \textit{Věsnoju}
In autumn - \textit{Jesenïü}
In winter - \textit{Zimoju}
In summer - \textit{Lětom}
3. \textbf{Significant adverbs of cause}
Question: \textit{Why? (Čomu?)}
Accidentally - \textit{Slučaǐno}
Intentionally - \textit{Umysëlno}
4. \textbf{Significant adverbs of goal}
Question: For what? (Za čto?)
Cattily - \textit{Nazlo}
For memory - \textit{Napamętj}
Further let’s speak about attributive adverbs.
1. \textbf{Significant adverbs of mean}
Question: \textit{How? (Kako?)}
\textit{A. With value of forming an action}
Again - \textit{Znovu}
Firstly - \textit{Pòrvo}
Gradually - \textit{Postepno}
Immediately - \textit{Zrazu}
\textit{B. With value of executing the action}
Sometimes - \textit{Něĝda}
Annually - \textit{Vsękoročno}
Continuously - \textit{Bezostanno}
\textit{C. With value of action subject state}
Grudgingly - \textit{Nehòtno}
Seriously - \textit{Považno}
Silently - \textit{Mòlčno}
\textit{D. With value of action result}
For long - \textit{Nadôlgo}
Past - \textit{Mïmo}
\textit{E. With value of the mean of execution}
Afoot - \textit{Pěhom}
Running - \textit{Běgom}
Aloud - \textit{Naglås, glåsno}
\textit{F. With value of similarity}
Fraternally - \textit{Po-bratóvu}
Our way - \textit{Po našomu}
\textit{G. With value of connection between subjects}
Together - \textit{Zajedno}
Jointly - \textit{Splåtno}
2. \textbf{Significant adverbs of quality}
Question: \textit{How? (Kako?)}
\textit{A. With value of color}
\textit{Bělo} - Whitely
\textit{Modro} - Bluely
\textit{B. With value of length}
\textit{Kråtko} - Shortly
\textit{Dôlgo} - Longly
\textit{C. With value of action/state expressing measure}
\textit{Odlično} - Excellent
\textit{Hudo} - Bad
\textit{D. With value of action/state expressing estimation}
\textit{Spokôǐno} - Calmly
\textit{Tųžno} - Heavily
\textit{E. With value of state}
\textit{Možno} - Possible
\textit{Nuđno} - Necessary
3.\textbf{ Significant adverbs of quantity}
Question: \textit{How much? (Kolïko?)}
\textit{A. With value of intensity}
Very - \textit{Vëlïmi}
Almost - \textit{Blïzko}
A bit - \textit{Malko}
\textit{B. With value of measure}
Many - \textit{Mnogo}
Little - \textit{Malo}
More - \textit{Ješto}
\textit{C. With value of growth limit}
Forever - \textit{Navsëdy}
To the end - \textit{Dokonca}
Too long - \textit{Predôlgo}
\subsection{Degrees of comparison}
Like adjectives, adverbs have three degrees of comparison\index{comparison}. Moreover, there are synthetic and analytic forms too.
Remember (see the paragraph about adjective degrees of comparison) that there are three degrees: positive, comparative and superlative.
\textbf{Synthetic forms}
The comparative\index{comparison!synthetic} form is made by adding the suffix “ěǐ” to the word base. Unlike adjectives, there is only one suffix to create the comparative form (compare with the suffixes “ëǐ” and “aǐ” for hard and soft bases in adjectives).
\underline{Examples:}
\textit{Mnogo - Množěǐ}
\textit{Sïnë - Sïněǐ}
The superlative form is made in the same way as for adjectives: by adding the prefix “naǐ-” to the comparative form.
\textbf{Analytic forms}
Analytic\index{comparison!analytic} forms provide simple ways of creating comparative and superlative forms without modifying the word itself. There are two types of adverb analytic comparison.
\textbf{Using prefixes}
The comparative form is created by adding the prefix “po-” with a hyphen to the positive form. The superlative form is created by adding the prefix “naǐ-” with a hyphen to the positive form.
\underline{Examples:}
\textit{Mnogo - po-mnogo - naǐ-mnogo}
\textbf{Using an auxiliary adverb}
To the positive form you should add an auxiliary adverb in comparative or superlative form.
\begin{table}[!htb]
\caption{Auxiliary adverb for analytic comparison}
\begin{tabular}{lll}
Auxiliary adverb
& Comparative form
& Superlative form \\
more & bolěǐ & naǐbolěǐ \\
less & mëněǐ & naǐmëněǐ \\
\end{tabular}
\end{table}
\underline{Examples:}
\textit{Mnogo - bolěǐ mnogo - naǐbolěǐ mnogo}
You can use both synthetic and analytic forms at the same time.
| {
"alphanum_fraction": 0.7241722453,
"avg_line_length": 21.8891089109,
"ext": "tex",
"hexsha": "1ee7780430f5f2aaa7b07de6e9ac8831cb71ff9b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8086752f2d6679f31e0f3b924ca02f670bcf9db9",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "SlavDom/novoslovnica-book",
"max_forks_repo_path": "content/adverb3.9.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8086752f2d6679f31e0f3b924ca02f670bcf9db9",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "SlavDom/novoslovnica-book",
"max_issues_repo_path": "content/adverb3.9.tex",
"max_line_length": 346,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8086752f2d6679f31e0f3b924ca02f670bcf9db9",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "SlavDom/novoslovnica-book",
"max_stars_repo_path": "content/adverb3.9.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3746,
"size": 11054
} |
\documentclass[11pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage{setspace}
\onehalfspacing
\usepackage{graphicx}
\graphicspath{{report_images/}}
\usepackage{appendix}
\usepackage{listings}
\usepackage{float}
\usepackage{multirow}
\usepackage{amsthm}
% The next three lines make the table and figure numbers also include section number
\usepackage{chngcntr}
\counterwithin{table}{section}
\counterwithin{figure}{section}
% Needed to make titling page without a page number
\usepackage{titling}
% DOCUMENT INFORMATION =================================================
\font\titleFont=cmr12 at 11pt
\title {{\titleFont ECEN 429: Introduction to Digital Systems Design Laboratory \\ North Carolina Agricultural and Technical State University \\ Department of Electrical and Computer Engineering}} % Declare Title
\author{\titleFont Reporter: Nikiyah Beulah \\ \titleFont Partner: Chris Cannon} % Declare authors
\date{\titleFont February 15, 2018}
% ======================================================================
\begin{document}
\begin{titlingpage}
\maketitle
\begin{center}
Lab 4
\end{center}
\end{titlingpage}
\section{Introduction}
The objective of this lab is to introduce students to the topic of memory and show how we might represent and handle memory in VHDL. This lab also reiterated important concepts about components that will be utilized in this project. By the end of this lab, we will be able to implement a basic memory module in VHDL with the ability to select specific values from a memory address.
\section{Background, Design Solution, and Results}
\subsection{Problem 1 ROM Implementation}
\subsubsection{Background}
We were instructed to implement a ROM module with a 4-bit input and a 3-bit output. The 4-bit input corresponds to the available addresses in this memory, and 4 bits correspond to 2\textsuperscript{4}, or 16, possible values. Therefore, we were able to derive that there are 16 addresses in this memory module. Because the output is only 3 bits, we knew that our memory word size was 3 bits, meaning that each memory address held 3 bits of data.
\subsubsection{Design Solution}
The design we came up with for this ROM is a simple case statement that will retrieve a different value for each address. To populate our ROM for testing purposes, we decided to simply start with output "000" and iterate to "111", twice. It should be noted that the output values are trivial in this assignment. The point is to return a value stored at a given address, but the actual data stored there does not matter for this assignment. Students attempting to recreate this lab are encouraged to come up with their own values for output if they wish. The truth table for the ROM is shown in Table ~\ref{tab:romTruthTable} and the port assignments are summarized in Table ~\ref{tab:romPorts}.
\begin{table}[H]
\begin{center}
\begin{tabular}{| l | l | l | l | l |}
\hline
a3 & a2 & a1 & a0 & output \\ \hline
0 & 0 & 0 & 0 & 000 \\ \hline
0 & 0 & 0 & 1 & 001 \\ \hline
0 & 0 & 1 & 0 & 010 \\ \hline
0 & 0 & 1 & 1 & 011 \\ \hline
0 & 1 & 0 & 0 & 100 \\ \hline
0 & 1 & 0 & 1 & 101 \\ \hline
0 & 1 & 1 & 0 & 110 \\ \hline
0 & 1 & 1 & 1 & 111 \\ \hline
1 & 0 & 0 & 0 & 000 \\ \hline
1 & 0 & 0 & 1 & 001 \\ \hline
1 & 0 & 1 & 0 & 010 \\ \hline
1 & 0 & 1 & 1 & 011 \\ \hline
1 & 1 & 0 & 0 & 100 \\ \hline
1 & 1 & 0 & 1 & 101 \\ \hline
1 & 1 & 1 & 0 & 110 \\ \hline
1 & 1 & 1 & 1 & 111 \\ \hline
\end{tabular}
\caption{\label{tab:romTruthTable}Truth table for our ROM implementation.}
\end{center}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{| l | l | l |}
\hline
Bit & Label & Port \\ \hline
a3 & Switch 3 & W17 \\ \hline
a2 & Switch 2 & W16 \\ \hline
a1 & Switch 1 & V16 \\ \hline
a0 & Switch 0 & V17 \\ \hline
o2 & LED 2 & U19 \\ \hline
o1 & LED 1 & E19 \\ \hline
o0 & LED 0 & U16 \\ \hline
\end{tabular}
\caption{\label{tab:romPorts}Port assignments for ROM implementation.}
\end{center}
\end{table}
\subsubsection{Results}
The ROM was successfully implemented and tested on our Basys3 board. We were able to successfully demonstrate returning every value specified in our truth table. Figures ~\ref{fig:p1img1} through ~\ref{fig:p1img9} show a sample of our results for chosen inputs.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part1/IMG_3076.jpg}
\caption{\label{fig:p1img1}The input given by the switches is "0000" and the output shown by the LEDs is "000".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part1/IMG_3081.jpg}
\caption{\label{fig:p1img2}The input given by the switches is "0100" and the output shown by the LEDs is "100".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part1/IMG_3082.jpg}
\caption{\label{fig:p1img3}The input given by the switches is "0101" and the output shown by the LEDs is "101".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part1/IMG_3083.jpg}
\caption{\label{fig:p1img4}The input given by the switches is "0111" and the output shown by the LEDs is "111".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part1/IMG_3086.jpg}
\caption{\label{fig:p1img6}The input given by the switches is "1001" and the output shown by the LEDs is "001".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part1/IMG_3088.jpg}
\caption{\label{fig:p1img7}The input given by the switches is "1011" and the output shown by the LEDs is "011".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part1/IMG_3089.jpg}
\caption{\label{fig:p1img8}The input given by the switches is "1100" and the output shown by the LEDs is "100".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part1/IMG_3090.jpg}
\caption{\label{fig:p1img9}The input given by the switches is "1101" and the output shown by the LEDs is "101".}
\end{center}
\end{figure}
\subsection{Problem 2 ROM Multiplexer}
\subsubsection{Background}
This assignment builds completely on the previous assignment by feeding the output of the ROM module into a multiplexer that will output only one of the three bits. This module could be used to work with only specific bits from a given memory word.
\theoremstyle{definition}
\newtheorem{definition}{Definition}
\begin{definition}
Multiplexer: A multiplexer is a module that will pass only specific parts of the given input signal, based on a selector signal from the user.
\end{definition}
\subsubsection{Design Solution}
Our approach was to use the complete entity and architecture from the previous portion of the lab as a component, and feed that information to the multiplexer. Since there are 3 outputs that can be selected, we needed to use a 2-bit input for our multiplexer. However, with 4 possible select inputs and only 3 used, we do have an invalid "sel" input, "11". For this input, we decided to turn on all lights on the 7-segment display, showing an "8". Because 8 is obviously not a value that can be represented by one bit, this shows a clear error state to the user. Valid selections will display the value of the chosen bit, '1' or '0', on the 7-segment display. For reference, a key to 7-segment displays is shown in Figure ~\ref{fig:sevenSeg}. The truth table for this system is shown in Tables ~\ref{tab:romMuxTruthTable1} and ~\ref{tab:romMuxTruthTable2}. Instead of describing every output to the 7-segment display, these tables show the output as 1 or 0. Table ~\ref{tab:sevenSegTruthTable} describes how these outputs are handled with respect to the 7-segment display. Finally, Table ~\ref{tab:romMuxPorts} describes the port assignments for our design.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.2\textwidth]{../../Lab2/report-images/img1.png}
\caption{\label{fig:sevenSeg}Seven segment display diagram.}
\end{center}
\end{figure}
\begin{minipage}[t]{0.4\textwidth}
\begin{table}[H]
\begin{center}
\begin{tabular}{| l | l | l | l | l | l |}
\hline
x & sel & z \\ \hline
\multirow{3}{*}{0000} & 00 & 0 \\
& 01 & 0 \\
& 10 & 0 \\ \hline
\multirow{3}{*}{0001} & 00 & 1 \\
& 01 & 0 \\
& 10 & 0 \\ \hline
\multirow{3}{*}{0010} & 00 & 0 \\
& 01 & 1 \\
& 10 & 0 \\ \hline
\multirow{3}{*}{0011} & 00 & 1 \\
& 01 & 1 \\
& 10 & 0 \\ \hline
\multirow{3}{*}{0100} & 00 & 0 \\
& 01 & 0 \\
& 10 & 1 \\ \hline
\multirow{3}{*}{0101} & 00 & 1 \\
& 01 & 0 \\
& 10 & 1 \\ \hline
\multirow{3}{*}{0110} & 00 & 0 \\
& 01 & 1 \\
& 10 & 1 \\ \hline
\multirow{3}{*}{0111} & 00 & 1 \\
& 01 & 1 \\
& 10 & 1 \\ \hline
\end{tabular}
\caption{\label{tab:romMuxTruthTable1}Truth Table for addresses 0000 through 0111.}
\end{center}
\end{table}
\end{minipage}
\begin{minipage}[t]{0.2\textwidth}
\end{minipage}
\begin{minipage}[t]{0.4\textwidth}
\begin{table}[H]
\begin{center}
\begin{tabular}{| l | l | l |}
\hline
x & sel & z \\ \hline
\multirow{3}{*}{1000} & 00 & 0 \\
& 01 & 0 \\
& 10 & 0 \\ \hline
\multirow{3}{*}{1001} & 00 & 1 \\
& 01 & 0 \\
& 10 & 0 \\ \hline
\multirow{3}{*}{1010} & 00 & 0 \\
& 01 & 1 \\
& 10 & 0 \\ \hline
\multirow{3}{*}{1011} & 00 & 1 \\
& 01 & 1 \\
& 10 & 0 \\ \hline
\multirow{3}{*}{1100} & 00 & 0 \\
& 01 & 0 \\
& 10 & 1 \\ \hline
\multirow{3}{*}{1101} & 00 & 1 \\
& 01 & 0 \\
& 10 & 1 \\ \hline
\multirow{3}{*}{1110} & 00 & 0 \\
& 01 & 1 \\
& 10 & 1 \\ \hline
\multirow{3}{*}{1111} & 00 & 1 \\
& 01 & 1 \\
& 10 & 1 \\ \hline
\end{tabular}
\caption{\label{tab:romMuxTruthTable2}Truth table for addresses 1000 through 1111.}
\end{center}
\end{table}
\end{minipage}
\begin{table}[H]
\begin{center}
\begin{tabular}{| l | l | l | l | l | l | l | l |}
\hline
Output Bit & z0 & z1 & z2 & z3 & z4 & z5 & z6 \\ \hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \hline
1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 \\ \hline
\end{tabular}
\caption{\label{tab:sevenSegTruthTable}Truth table for seven segment displayed based on the output.}
\end{center}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{| l | l | l |}
\hline
Bit & Label & Port \\ \hline
x3 & Switch 3 & W17 \\ \hline
x2 & Switch 2 & W16 \\ \hline
x1 & Switch 1 & V16 \\ \hline
x0 & Switch 0 & V17 \\ \hline
z0 & CA & W7 \\ \hline
z1 & CB & W6 \\ \hline
z2 & CC & U8 \\ \hline
z3 & CD & V8 \\ \hline
z4 & CE & U5 \\ \hline
z5 & CF & V5 \\ \hline
z6 & CG & U7 \\ \hline
\end{tabular}
\caption{\label{tab:romMuxPorts}Port assignments for ROM and multiplexer design.}
\end{center}
\end{table}
\subsubsection{Results}
For our ROM and multiplexer circuit to operate as intended, we did have to rotate the output bits for '1' to the right by 1. A misread of the 7-segment diagram led to an initial output of "0011111" for the seven segment display, which was incorrect. However, once that adjustment was made our circuit functioned exactly as expected. A set of the results is displayed in Figures ~\ref{fig:p2img1} through ~\ref{fig:p2img13}.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part2/IMG_3095.jpg}
\caption{\label{fig:p2img1}The input given by the switches is "0000", select is "00", therefore the output shown by the 7-segment display is "0".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part2/IMG_3096.jpg}
\caption{\label{fig:p2img2}The input given by the switches is "0000", select is "01", therefore the output shown by the 7-segment display is "0".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part2/IMG_3097.jpg}
\caption{\label{fig:p2img3}The input given by the switches is "0000", select is "10", therefore the output shown by the 7-segment display is "0".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part2/IMG_3098.jpg}
\caption{\label{fig:p2img4}The input given by the switches is "0001", select is "00", therefore the output shown by the 7-segment display is "1".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part2/IMG_3099.jpg}
\caption{\label{fig:p2img5}The input given by the switches is "0001", select is "01", therefore the output shown by the 7-segment display is "0".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part2/IMG_3100.jpg}
\caption{\label{fig:p2img6}The input given by the switches is "0001", select is "10", therefore the output shown by the 7-segment display is "0".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part2/IMG_3101.jpg}
\caption{\label{fig:p2img7}The input given by the switches is "0101", select is "00", therefore the output shown by the 7-segment display is "1".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part2/IMG_3102.jpg}
\caption{\label{fig:p2img8}The input given by the switches is "0101", select is "01", therefore the output shown by the 7-segment display is "0".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part2/IMG_3105.jpg}
\caption{\label{fig:p2img9}The input given by the switches is "0110", select is "01", therefore the output shown by the 7-segment display is "1".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part2/IMG_3108.jpg}
\caption{\label{fig:p2img10}The input given by the switches is "0111", select is "00", therefore the output shown by the 7-segment display is "1".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part2/IMG_3112.jpg}
\caption{\label{fig:p2img11}The input given by the switches is "1011", select is "01", therefore the output shown by the 7-segment display is "1".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part2/IMG_3114.jpg}
\caption{\label{fig:p2img12}The input given by the switches is "1101", select is "00", therefore the output shown by the 7-segment display is "1".}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part2/IMG_3116.jpg}
\caption{\label{fig:p2img13}The input given by the switches is "1101", select is "10", therefore the output shown by the 7-segment display is "1".}
\end{center}
\end{figure}
\section{Conclusion}
In this lab, we were able to quickly and effectively implement each of the design challenges. As a result, we now have a firmer understanding of VHDL systems and components, as well as a firmer conceptual understanding of read-only memory than we previously had. The most difficult technical challenge in this lab was the set of design decisions around the use of the 7-segment display. Eventually, I think the design and the error case that we set up for the 7-segment display are highly functional and intuitive to the user.
\pagebreak
\textbf{Appendices}
\begin{appendices}
\section{Problem 1 VHDL Code}
\begin{lstlisting}[language=VHDL]
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
--Declares an entity that will represent
--read-only memory
entity rom is
Port ( a : in STD_LOGIC_VECTOR(3 downto 0);
o : out STD_LOGIC_VECTOR(2 downto 0));
end entity rom;
architecture rom_arch of rom is
begin
process(a)
begin
--Will return a preset value for each address "a"
case a is
when "0000" => o <= "000";
when "0001" => o <= "001";
when "0010" => o <= "010";
when "0011" => o <= "011";
when "0100" => o <= "100";
when "0101" => o <= "101";
when "0110" => o <= "110";
when "0111" => o <= "111";
when "1000" => o <= "000";
when "1001" => o <= "001";
when "1010" => o <= "010";
when "1011" => o <= "011";
when "1100" => o <= "100";
when "1101" => o <= "101";
when "1110" => o <= "110";
when "1111" => o <= "111";
when others => o <= "000"; -- default covers the non-binary std_logic values
end case;
end process;
end rom_arch;
\end{lstlisting}
\section{Problem 1 Constraints File}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part1Const.png}
\caption{\label{fig:Part1ConstFile}Constraints file for Problem 1.}
\end{center}
\end{figure}
\section{Problem 2 VHDL Code}
\begin{lstlisting}[language=VHDL]
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
--Declares an entity that will represent
--read-only memory
entity rom is
Port ( a : in STD_LOGIC_VECTOR(3 downto 0);
o : out STD_LOGIC_VECTOR(2 downto 0));
end entity rom;
architecture rom_arch of rom is
begin
process(a)
begin
--Will return a preset value for each address "a"
case a is
when "0000" => o <= "000";
when "0001" => o <= "001";
when "0010" => o <= "010";
when "0011" => o <= "011";
when "0100" => o <= "100";
when "0101" => o <= "101";
when "0110" => o <= "110";
when "0111" => o <= "111";
when "1000" => o <= "000";
when "1001" => o <= "001";
when "1010" => o <= "010";
when "1011" => o <= "011";
when "1100" => o <= "100";
when "1101" => o <= "101";
when "1110" => o <= "110";
when "1111" => o <= "111";
when others => o <= "000"; -- default covers the non-binary std_logic values
end case;
end process;
end rom_arch;
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
--Declares a ROM unit whose output is read through
--a multiplexer, one bit at a time
entity rom_mux is
Port ( x : in STD_LOGIC_VECTOR(3 downto 0);
sel : in STD_LOGIC_VECTOR(1 downto 0);
z : out STD_LOGIC_VECTOR(6 downto 0));
end rom_mux;
architecture Behavioral of rom_mux is
--Declares a component of the ROM entity above
component rom is port(a : in STD_LOGIC_VECTOR(3 downto 0);
o : out STD_LOGIC_VECTOR(2 downto 0));
end component rom;
--This is the returned ROM value at the given address
signal val : STD_LOGIC_VECTOR(2 downto 0);
begin
rom_com : rom port map(x, val);
process(sel, val) -- include val so the output also updates when the address changes
begin
--selects which bit of "val" to return
case sel is
when "00" =>
if(val(0) = '1') then z <= "1001111";
else z <= "0000001";
end if;
when "01" =>
if(val(1) = '1') then z <= "1001111";
else z <= "0000001";
end if;
when "10" =>
if(val(2) = '1') then z <= "1001111";
else z <= "0000001";
end if;
when others => z <= "0000000";
end case;
end process;
end Behavioral;
\end{lstlisting}
\section{Problem 2 Constraints File}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth]{../report-images/Part2Const.png}
\caption{\label{fig:Part2ConstFile}Constraints file for Problem 2.}
\end{center}
\end{figure}
\end{appendices}
\end{document}
| {
"alphanum_fraction": 0.6649898374,
"avg_line_length": 37.4857142857,
"ext": "tex",
"hexsha": "0118c431df2a67dc245e594d073722f2a6dc7179",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "7a7be7becb73d0f2ec8db52213b7dd8961a32e5b",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "ccannon94/ncat-ecen429-repository",
"max_forks_repo_path": "Lab4/Lab4Report/Lab4Report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7a7be7becb73d0f2ec8db52213b7dd8961a32e5b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "ccannon94/ncat-ecen429-repository",
"max_issues_repo_path": "Lab4/Lab4Report/Lab4Report.tex",
"max_line_length": 1179,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "7a7be7becb73d0f2ec8db52213b7dd8961a32e5b",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "ccannon94/ncat-ecen429-repository",
"max_stars_repo_path": "Lab4/Lab4Report/Lab4Report.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6335,
"size": 19680
} |
\chapter{Abstract}\label{Abstract}
\lipsum[1]
| {
"alphanum_fraction": 0.7446808511,
"avg_line_length": 11.75,
"ext": "tex",
"hexsha": "91756cd67dad4fb3e157fcb26110d84826c13aae",
"lang": "TeX",
"max_forks_count": 12,
"max_forks_repo_forks_event_max_datetime": "2021-11-08T11:49:05.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-03-15T08:13:40.000Z",
"max_forks_repo_head_hexsha": "070d796715a2965460bf2f3d9c14addf2c019ecd",
"max_forks_repo_licenses": [
"WTFPL"
],
"max_forks_repo_name": "MichaelGrupp/TTT__TUM-Thesis-Template",
"max_forks_repo_path": "content/Abstract.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "070d796715a2965460bf2f3d9c14addf2c019ecd",
"max_issues_repo_issues_event_max_datetime": "2020-10-28T11:29:53.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-10-28T11:29:53.000Z",
"max_issues_repo_licenses": [
"WTFPL"
],
"max_issues_repo_name": "MichaelGrupp/TTT__TUM-Thesis-Template",
"max_issues_repo_path": "content/Abstract.tex",
"max_line_length": 34,
"max_stars_count": 26,
"max_stars_repo_head_hexsha": "070d796715a2965460bf2f3d9c14addf2c019ecd",
"max_stars_repo_licenses": [
"WTFPL"
],
"max_stars_repo_name": "MichaelGrupp/TTT__TUM-Thesis-Template",
"max_stars_repo_path": "content/Abstract.tex",
"max_stars_repo_stars_event_max_datetime": "2021-10-05T18:20:06.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-05-28T10:49:09.000Z",
"num_tokens": 15,
"size": 47
} |
\vsssub
\subsubsection{~$S_{db}$: Battjes and Janssen 1978} \label{sec:DB1}
\vsssub
\opthead{DB1 / MLIM}{Pre-WAM}{J. H. G. M. Alves}
\noindent
The implementation in \ws\ of depth-induced breaking algorithms is intended to
extend the applicability of the model to within shallow water environments,
where wave breaking, among other depth-induced transformation processes,
becomes important.
For this reason the approach of \citet[][henceforth denoted as
BJ78]{pro:BJ78} is adopted, which is based on the assumption that all waves in
a random field exceeding a threshold height, defined as a function of bottom
topography parameters, will break.
satisfying this criterion is determined by a statistical description of
surf-zone wave heights (i.e., a Rayleigh-type distribution, truncated at a
depth-dependent wave-height maximum).
The bulk rate $\delta$ of spectral energy density dissipation of the fraction
of breaking waves, as proposed by BJ78, is estimated using an analogy with
dissipation in turbulent bores as
%-------------------------------%
% Battjes Janssen surf breaking %
%-------------------------------%
% eq:BJ78_base
\begin{equation}
\delta = 0.25 \: Q_b \: f_m \: H_{\max}^2 , \label{eq:BJ78_base}
\end{equation}
\noindent
where $Q_b$ is the fraction of breaking waves in the random field, $f_m$ is
the mean frequency and $H_{\max}$ is the maximum individual height a component
in the random wave field can reach without breaking (conversely, above which
all waves would break). In BJ78 the maximum wave height $H_{\max}$ is defined
using a Miche-type criterion \citep{art:Miche44},
% eq:BJ78_Miche
\begin{equation}
\bar{k} H_{\max} = \gamma_M \tanh ( \bar{k} d )
, \label{eq:BJ78_Miche}
\end{equation}
\noindent
where $\gamma_M$ is a constant factor. This approach also removes energy in
deep-water waves exceeding a limiting steepness. This can potentially result
in double counting of dissipation in deep-water waves. Alternatively,
$H_{\max}$ can be defined using a McCowan-type criterion, which consists of
a simple constant ratio
% eq:BJ78_McC
\begin{equation}
H_{\max} = \gamma \: d , \label{eq:BJ78_McC}
\end{equation}
\noindent
where $d$ is the local water depth and $\gamma$ is a constant derived from
field and laboratory observation of breaking waves. This approach will
exclusively represent depth-induced breaking. Although more general breaking
criteria for $H_{\max}$ as a simple function of local depth exist
\citep[e.g.,][]{art:TG83}, it should be noted that the coefficient $\gamma$
refers to the maximum height of an individual breaking wave within the random
field. \cite{art:M1894} calculated the limiting wave-height-to-depth ratio for
a solitary wave propagating on a flat bottom to be 0.78, which is still used
presently as a conservative criteria in engineering applications. The average
value found by \cite{pro:BJ78} was $\gamma = 0.73$. More recent analyses of
waves propagating over reefs by \cite{art:Nel94, art:Nel97} suggest a ratio of
0.55.
The fraction of breaking waves $Q_b$ is determined in terms of a Rayleigh-type
distribution truncated at $H_{\max}$ (i.e., all broken waves have a height
equal to $H_{max}$), which results in the following expression:
% eq:BJ78_Qb
\begin{equation}
\frac{1 - Q_b}{-\ln Q_b} = \left ( \frac{H_{rms}}{H_{\max}} \right ) ^{2}
, \label{eq:BJ78_Qb}
\end{equation}
\noindent
where $H_{rms}$ is the root-mean-square wave height. In the current
implementation, the implicit equation (\ref{eq:BJ78_Qb}) is solved for $Q_b$
iteratively. With the assumption that the total spectral energy dissipation
$\delta$ is distributed over the entire spectrum so that it does not change
the spectral shape \citep{art:EB96} the following depth-induced breaking
dissipation source function is obtained
% eq:BJ78
\begin{equation}
\cS_{db} (k,\theta) = - \alpha \frac{\delta}{E} F(k,\theta)
= - 0.25 \: \alpha \: Q_b \: f_m \frac{H_{\max}^2}{E} F(k,\theta)
, \label{eq:BJ78}
\end{equation}
\noindent
where $E$ is the total spectral energy, and $\alpha = 1.0$ is a tunable
parameter. The user can select between Eqs.~(\ref{eq:BJ78_Miche}) and
(\ref{eq:BJ78_McC}), and adjust $\gamma$ and $\alpha$. Defaults are
Eq.~(\ref{eq:BJ78_McC}), $\gamma = 0.73$ and $\alpha = 1.0$.
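
The iterative solution of equation (\ref{eq:BJ78_Qb}) mentioned above can be obtained, for instance, by rewriting the relation as the fixed point $Q_b = \exp\left[ (Q_b - 1) / (H_{rms}/H_{\max})^2 \right]$ and iterating to convergence. The fragment below is a minimal C sketch of such an iteration, given for illustration only; it is not taken from the \ws\ source code.
\begin{verbatim}
#include <math.h>

/* Fraction of breaking waves Q_b from (1-Q_b)/(-ln Q_b) = (Hrms/Hmax)^2,
 * iterated as the fixed point Q_b = exp((Q_b-1)/B^2).  Illustration
 * only; not the model source code.                                    */
double breaking_fraction(double h_rms, double h_max)
{
    double b2 = (h_rms / h_max) * (h_rms / h_max);
    if (b2 >= 1.0)    return 1.0;   /* saturated: all waves break      */
    if (b2 <= 1.0e-6) return 0.0;   /* negligible breaking             */

    double q = 0.5;                 /* any starting value in (0,1)     */
    for (int i = 0; i < 1000; i++) {   /* convergence slows as B -> 1  */
        double qn = exp((q - 1.0) / b2);
        if (fabs(qn - q) < 1.0e-8) return qn;
        q = qn;
    }
    return q;
}
\end{verbatim}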
| {
"alphanum_fraction": 0.7386759582,
"avg_line_length": 40.2336448598,
"ext": "tex",
"hexsha": "e99399be2a78126d1dcf575b40c374653954af2d",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-06-01T09:29:46.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-06-01T09:29:46.000Z",
"max_forks_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803",
"max_forks_repo_licenses": [
"Apache-2.0",
"CC0-1.0"
],
"max_forks_repo_name": "minsukji/ci-debug",
"max_forks_repo_path": "WW3/manual/eqs/DB1.tex",
"max_issues_count": 5,
"max_issues_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803",
"max_issues_repo_issues_event_max_datetime": "2021-06-04T14:17:45.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-05-31T15:49:26.000Z",
"max_issues_repo_licenses": [
"Apache-2.0",
"CC0-1.0"
],
"max_issues_repo_name": "minsukji/ci-debug",
"max_issues_repo_path": "WW3/manual/eqs/DB1.tex",
"max_line_length": 78,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803",
"max_stars_repo_licenses": [
"Apache-2.0",
"CC0-1.0"
],
"max_stars_repo_name": "minsukji/ci-debug",
"max_stars_repo_path": "WW3/manual/eqs/DB1.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1226,
"size": 4305
} |
\chapter{Characteristic functions}
| {
"alphanum_fraction": 0.8108108108,
"avg_line_length": 9.25,
"ext": "tex",
"hexsha": "3870c7ae97ac13a6724794c063759e89c639414a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/probability/probabilityMomentsCharacteristic/00-00-Chapter_name.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/probability/probabilityMomentsCharacteristic/00-00-Chapter_name.tex",
"max_line_length": 34,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/probability/probabilityMomentsCharacteristic/00-00-Chapter_name.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7,
"size": 37
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Beamer Presentation
% LaTeX Template
% Version 1.0 (10/11/12)
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%----------------------------------------------------------------------------------------
% PACKAGES AND THEMES
%----------------------------------------------------------------------------------------
\documentclass{beamer}
\mode<presentation> {
% The Beamer class comes with a number of default slide themes
% which change the colors and layouts of slides. Below this is a list
% of all the themes, uncomment each in turn to see what they look like.
%\usetheme{default}
%\usetheme{AnnArbor}
%\usetheme{Antibes}
%\usetheme{Bergen}
%\usetheme{Berkeley}
%\usetheme{Berlin}
%\usetheme{Boadilla}
%\usetheme{CambridgeUS}
%\usetheme{Copenhagen}
%\usetheme{Darmstadt}
%\usetheme{Dresden}
\usetheme{Frankfurt}
%\usetheme{Goettingen}
%\usetheme{Hannover}
%\usetheme{Ilmenau}
%\usetheme{JuanLesPins}
%\usetheme{Luebeck}
%\usetheme{Madrid}
%\usetheme{Malmoe}
%\usetheme{Marburg}
%\usetheme{Montpellier}
%\usetheme{PaloAlto}
%\usetheme{Pittsburgh}
%\usetheme{Rochester}
%\usetheme{Singapore}
%\usetheme{Szeged}
%\usetheme{Warsaw}
% As well as themes, the Beamer class has a number of color themes
% for any slide theme. Uncomment each of these in turn to see how it
% changes the colors of your current slide theme.
%\usecolortheme{albatross}
\usecolortheme{beaver}
%\usecolortheme{crane}
\usecolortheme{dove}
\usecolortheme{wolverine}
%\setbeamertemplate{footline} % To remove the footer line in all slides uncomment this line
%\setbeamertemplate{footline}[page number] % To replace the footer line in all slides with a simple slide count uncomment this line
%\setbeamertemplate{navigation symbols}{} % To remove the navigation symbols from the bottom of all slides uncomment this line
}
\usepackage{graphicx} % Allows including images
\usepackage{booktabs} % Allows the use of \toprule, \midrule and \bottomrule in tables
%----------------------------------------------------------------------------------------
% TITLE PAGE
%----------------------------------------------------------------------------------------
\title{Predictive Analysis of Bike-share Rental Demand}
\author{Benjamin Fillmore} % Your name
\institute[OU] % Your institution as it will appear on the bottom of every slide, may be shorthand to save space
{
University of Oklahoma \\ % Your institution for the title page
\medskip
\textit{[email protected]} % Your email address
}
\date{\today} % Date, can be changed to a custom date
\begin{document}
\begin{frame}
\titlepage % Print the title page as the first slide
\end{frame}
\begin{frame}
\frametitle{Overview} % Table of contents slide, comment this block out to remove it
\tableofcontents % Throughout your presentation, if you choose to use \section{} and \subsection{} commands, these will automatically be printed on this slide as an overview of your presentation
\end{frame}
%----------------------------------------------------------------------------------------
% PRESENTATION SLIDES
%----------------------------------------------------------------------------------------
\section{Data Source}
\begin{frame}
\frametitle{Data Source}
The data for this project came from a Kaggle competition prompting the application of machine learning tools to predict bike-share rental demand.
\includegraphics[width=\textwidth]{Images/sumtable.png}
\end{frame}
%------------------------------------------------
\section{Background}
%------------------------------------------------
\begin{frame}
As urban centers inevitably continue to expand, we need to adopt more sustainable commuting methods. Cycling is a growing means of transportation in urban areas, which raises the question: how much practical demand is there for bike rentals in metro areas? Through predictive analysis, this project quantifies demand for bike rentals as part of a Washington, D.C. bike-share program designed to target commuting groups.
\end{frame}
%------------------------------------------------
\begin{frame}
\frametitle{Bullet Points}
\begin{itemize}
\item Lorem ipsum dolor sit amet, consectetur adipiscing elit
\item Aliquam blandit faucibus nisi, sit amet dapibus enim tempus eu
\item Nulla commodo, erat quis gravida posuere, elit lacus lobortis est, quis porttitor odio mauris at libero
\item Nam cursus est eget velit posuere pellentesque
\item Vestibulum faucibus velit a augue condimentum quis convallis nulla gravida
\end{itemize}
\end{frame}
%------------------------------------------------
\begin{frame}
\frametitle{Existing Research}
\begin{block}{Block 1}
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer lectus nisl, ultricies in feugiat rutrum, porttitor sit amet augue. Aliquam ut tortor mauris. Sed volutpat ante purus, quis accumsan dolor.
\end{block}
\begin{block}{Block 2}
Pellentesque sed tellus purus. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Vestibulum quis magna at risus dictum tempor eu vitae velit.
\end{block}
\begin{block}{Block 3}
Suspendisse tincidunt sagittis gravida. Curabitur condimentum, enim sed venenatis rutrum, ipsum neque consectetur orci, sed blandit justo nisi ac lacus.
\end{block}
\begin{block}{Block 4}
Suspendisse tincidunt sagittis gravida. Curabitur condimentum, enim sed venenatis rutrum, ipsum neque consectetur orci, sed blandit justo nisi ac lacus.
\end{block}
\end{frame}
%------------------------------------------------
\begin{frame}
\frametitle{Multiple Columns}
\begin{columns}[c] % The "c" option specifies centered vertical alignment while the "t" option is used for top vertical alignment
\column{.45\textwidth} % Left column and width
\textbf{Heading}
\begin{enumerate}
\item Statement
\item Explanation
\item Example
\end{enumerate}
\column{.5\textwidth} % Right column and width
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer lectus nisl, ultricies in feugiat rutrum, porttitor sit amet augue. Aliquam ut tortor mauris. Sed volutpat ante purus, quis accumsan dolor.
\end{columns}
\end{frame}
%------------------------------------------------
\section{Equations}
%------------------------------------------------
\begin{frame}
\frametitle{Table}
\begin{table}
\begin{tabular}{l l l}
\toprule
\textbf{Treatments} & \textbf{Response 1} & \textbf{Response 2}\\
\midrule
Treatment 1 & 0.0003262 & 0.562 \\
Treatment 2 & 0.0015681 & 0.910 \\
Treatment 3 & 0.0009271 & 0.296 \\
\bottomrule
\end{tabular}
\caption{Table caption}
\end{table}
\end{frame}
%------------------------------------------------
\begin{frame}
\frametitle{Theorem}
\begin{theorem}[Mass--energy equivalence]
$E = mc^2$
\end{theorem}
\end{frame}
%------------------------------------------------
\begin{frame}[fragile] % Need to use the fragile option when verbatim is used in the slide
\frametitle{Verbatim}
\begin{example}[Theorem Slide Code]
\begin{verbatim}
\begin{frame}
\frametitle{Theorem}
\begin{theorem}[Mass--energy equivalence]
$E = mc^2$
\end{theorem}
\end{frame}\end{verbatim}
\end{example}
\end{frame}
%------------------------------------------------
\begin{frame}
\frametitle{Figure}
Uncomment the code on this slide to include your own image from the same directory as the template .TeX file.
%\begin{figure}
%\includegraphics[width=0.8\linewidth]{test}
%\end{figure}
\end{frame}
%------------------------------------------------
\section{Findings}
\begin{frame}[fragile] % Need to use the fragile option when verbatim is used in the slide
\frametitle{Citation}
An example of the \verb|\cite| command to cite within the presentation:\\~
This statement requires citation \cite{p1}.
\end{frame}
%------------------------------------------------
\begin{frame}
\frametitle{References}
\footnotesize{
\begin{thebibliography}{99} % Beamer does not support BibTeX so references must be inserted manually as below
\bibitem[Smith, 2012]{p1} John Smith (2012)
\newblock Title of the publication
\newblock \emph{Journal Name} 12(3), 45 -- 678.
\end{thebibliography}
}
\end{frame}
%------------------------------------------------
\begin{frame}
\Huge{\centerline{The End}}
\end{frame}
%----------------------------------------------------------------------------------------
\end{document} | {
"alphanum_fraction": 0.6455420582,
"avg_line_length": 32.8045112782,
"ext": "tex",
"hexsha": "e9d6819b521a98f9af49f7afd004148b38842b69",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "24adf0b6b4197aaac058699234db78102474a9c7",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "bfillmoreou/DScourseS21",
"max_forks_repo_path": "FinalProject/dscourse21_finalpresentation_bf.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "24adf0b6b4197aaac058699234db78102474a9c7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "bfillmoreou/DScourseS21",
"max_issues_repo_path": "FinalProject/dscourse21_finalpresentation_bf.tex",
"max_line_length": 424,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "24adf0b6b4197aaac058699234db78102474a9c7",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "bfillmoreou/DScourseS21",
"max_stars_repo_path": "FinalProject/dscourse21_finalpresentation_bf.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2177,
"size": 8726
} |
\subsection{vbcc}
The MARK II toolchain also contains an ISO C compiler. This compiler was written by
Dr. Volker Barthelmann; for full documentation please refer to the original vbcc
documentation, which can be found in the folder /sw/vbcc/doc.
The compiler translates C programs into assembler sources. These sources can then
be translated into object files, linked together and loaded into MARK II memory.
The homepage of vbcc can be found at this link: \url{http://www.compilers.de/vbcc.html}.
The purpose of this section isn't to explain vbcc usage; for that please refer to the
original documentation. Its purpose is to describe register usage, calling
conventions and other backend-related details.
\subsubsection{Register usage}
The CPU has sixteen registers, three of which are special registers. Almost all
of these registers are used for compiler purposes. The function of each register
can be found in table \ref{tab:registers_list_ussage}.
\begin{table}[h]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{Register name} & \textbf{Purpose} & \textbf{Register name} & \textbf{Purpose} \\ \hline
R0 & zero register & R8 & tmp register \\ \hline
R1 & compiler reserved & R9 & tmp register \\ \hline
R2 & compiler reserved & R10 & tmp register \\ \hline
R3 & compiler reserved & R11 & tmp register \\ \hline
R4 & condition flag & R12 & tmp register \\ \hline
R5 & return value & R13 & frame pointer \\ \hline
R6 & tmp register & R14 & program counter \\ \hline
R7 & tmp register & R15 & stack pointer \\ \hline
\end{tabular}
\caption{Register usage}
\label{tab:registers_list_ussage}
\end{table}
The compiler-reserved registers $R1$ - $R3$ are used by the compiler for loading
variables, calculating addresses, comparisons and so on.
The condition flag register $R4$ is used for conditional jumps. The CMP instruction
always stores its result in this register. The following instruction should
be a branch instruction that uses this register.
Registers $R1$ to $R4$ are always pushed onto the stack in the function head and then
popped off at the function bottom.
Register $R5$ is used as the return register. When a function has to return a value,
the value is returned in this register.
The frame pointer and stack pointer, registers $R13$ and $R15$, are used for manipulating
the stack. All local variables are stored on the stack. For more information please refer
to the section about stack usage.
The program counter is maintained by the CPU; the vbcc backend doesn't manipulate it directly.
There are seven tmp registers, $R6$ to $R12$. These registers can be
used freely by the vbcc backend or by the assembler programmer, but at the head of each
function/subroutine the used registers have to be pushed onto the stack.
\subsubsection{Stack usage by functions}
When a function is called, a new stack frame is created. The frame pointer and
stack pointer registers are used for this. A stack frame consists of the arguments passed
to the called function, the return address and the old frame pointer.
The whole calling sequence looks like this:
\begin{lstlisting}[language={[markII]Assembler}, frame=single]
PUSH R6 ;push two arguments into stack
PUSH R7
CALL foo ;call function foo
\end{lstlisting}
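For reference, a hypothetical C-level call that would produce a sequence like the one
above (two arguments pushed, then CALL, result returned in $R5$) might look as follows;
the function names are illustrative only.
\begin{lstlisting}[language={C}, frame=single]
/* Illustrative only: the two arguments are pushed onto the stack,
   foo is called, and its result (returned in R5) becomes the C
   return value. */
extern int foo(int a, int b);
int bar(int x, int y)
{
    return foo(x, y);
}
\end{lstlisting}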
The head of function foo will look like this:
\begin{lstlisting}[language={[markII]Assembler}, frame=single]
.EXPORT foo
foo:
PUSH R13 ;store old frame pointer
MOV SP R13 ;create new frame pointer
;make space for auto class variables by pushing R0
;store all used registers
\end{lstlisting}
First, the name of the function is exported, then the label is generated. In the new
function, the first thing to do is back up the frame pointer by pushing $R13$, then create
a new frame pointer by copying the value from $SP$ into $R13$. Finally, the function stores
all registers it will use with the PUSH instruction.
At the end of the function, the return value is moved into $R5$ and the function bottom is
generated.
The function bottom consists of popping the used registers off the stack, restoring the frame pointer,
and executing the RET instruction. It looks like this:
\begin{lstlisting}[language={[markII]Assembler}, frame=single]
; move return value into R5
; pop all used registers
MOV R13 SP ; restore SP
POP R13 ; restore FP
RET
\end{lstlisting}
The stack is also the place where all local variables are stored. They are addressed
using the frame pointer and are stored right after the saved old frame pointer.
\subsubsection{Simple optimization tips}
\begin{itemize}
\item
Reorder local variable declarations. The first declared variable should be the
variable that is used most often. The second most often used variable should be
declared right after the first declaration. The order of the remaining variables does not
matter. Also make sure that the first and second variables are not arrays or
structures.
\item
Avoid multiplication whenever possible. Use shifts instead; they are
always faster than multiplication (see the sketch after this list).
\end{itemize}
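As a hedged illustration of the second tip, replacing a multiplication by a power of
two with a shift might look like this (the function is purely illustrative):
\begin{lstlisting}[language={C}, frame=single]
/* Illustrative only: scale() multiplies by 8 using a shift,
   avoiding the slower multiplication on this target. */
unsigned int scale(unsigned int x)
{
    return x << 3;    /* same result as x * 8 for unsigned x */
}
\end{lstlisting}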
\subsubsection{Libraries}
The C standard library is not yet implemented, but SPL - the Standard
Peripheral Library for MARK II - is available. This library consists only of header files
defining useful macros and constants for reading and writing registers,
accessing the RAM, ROM and VRAM memories, and bit masks for accessing various bits in
registers. For more details please refer to the SPL reference manual, or see the examples
in the sw directory.
Usage is really simple. When the toolchain is installed, the install script emits the path to
SPL. This path has to be passed to vbcc at compile time with the -I argument.
Then you can include the header file spl.h as usual.
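A minimal sketch of a program using SPL might look as follows; any register macros are
assumptions, so consult the SPL reference manual and the examples in the sw directory
for the real symbols.
\begin{lstlisting}[language={C}, frame=single]
/* Minimal SPL usage sketch (illustrative only). The SPL include
   path must be passed to vbcc with -I as described above; spl.h
   is then available as a normal header. */
#include <spl.h>
int main()
{
    /* Peripheral register macros and bit masks from spl.h would
       be used here; see the SPL reference manual for the real
       names. */
    while (1) { }
    return 0;
}
\end{lstlisting}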
\subsubsection{Interrupts}
Interrupt service routines are a bit different from normal functions, so the
compiler has to be told which functions are used as interrupt routines. This is done by
simply adding a special keyword before the function return type. Also, the function return
type has to be void.
For example, the declaration of an ISR function should look like this:
\begin{lstlisting}[language={C}, frame=single]
__interrupt void swi_isr();
\end{lstlisting}
The name of the function doesn't matter, but remember that the address of this function has
to be stored in the interrupt controller register by hand.
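A minimal sketch of such an interrupt routine is shown below; the commented-out register
store uses a placeholder name, since the actual interrupt controller register is defined
by the MARK II memory map and SPL.
\begin{lstlisting}[language={C}, frame=single]
/* Sketch only: ISR definition for vbcc on MARK II. */
__interrupt void swi_isr()
{
    /* handle the software interrupt here */
}
int main()
{
    /* The address of swi_isr has to be stored into the interrupt
       controller register by hand. ISR_VECTOR_REG is a placeholder,
       not a real SPL symbol -- see the SPL reference manual. */
    /* ISR_VECTOR_REG = (unsigned long) &swi_isr; */
    while (1) { }
    return 0;
}
\end{lstlisting}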
\subsubsection{Inline assembler}
The vbcc frontend and MARK-II backend support inline assembler. You can use it for
special CPU features like software interrupts or for hand-optimized functions.
Usage is simple: just define a function and, after the argument brackets, put a string
containing your assembler instructions. This string will be emitted directly into the
output assembler file by vbcc.
For example, the following is the declaration of an inline assembler function for calling a software interrupt.
\begin{lstlisting}[language={C}, frame=single]
void intrq() = "\tSWI";
\end{lstlisting}
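A short usage sketch is shown below; the surrounding main() is illustrative only.
\begin{lstlisting}[language={C}, frame=single]
/* Sketch: declaring and calling an inline assembler function.
   The string body is emitted directly by vbcc, so calling intrq()
   executes the SWI instruction. */
void intrq() = "\tSWI";
int main()
{
    intrq();    /* trigger the software interrupt */
    return 0;
}
\end{lstlisting}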
For more information please see the vbcc documentation. It is also possible to specify
the register numbers used for passing arguments. For this feature please refer to the vbcc
documentation too, and keep in mind the compiler register usage described above.
| {
"alphanum_fraction": 0.7264634475,
"avg_line_length": 41.6516853933,
"ext": "tex",
"hexsha": "77d42067f3997a939bfe4a67aa9848cf02cfc406",
"lang": "TeX",
"max_forks_count": 5,
"max_forks_repo_forks_event_max_datetime": "2020-04-01T10:48:10.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-11-20T14:46:22.000Z",
"max_forks_repo_head_hexsha": "58a441675729d4036b503c2a4743fd181daaf5af",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "C-ArenA/MARK_II",
"max_forks_repo_path": "doc/refman/tex/toolchain/vbcc.tex",
"max_issues_count": 30,
"max_issues_repo_head_hexsha": "58a441675729d4036b503c2a4743fd181daaf5af",
"max_issues_repo_issues_event_max_datetime": "2018-06-03T14:08:07.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-07-12T22:12:58.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "C-ArenA/MARK_II",
"max_issues_repo_path": "doc/refman/tex/toolchain/vbcc.tex",
"max_line_length": 104,
"max_stars_count": 25,
"max_stars_repo_head_hexsha": "58a441675729d4036b503c2a4743fd181daaf5af",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "VladisM/MARK_II-SoC",
"max_stars_repo_path": "doc/refman/tex/toolchain/vbcc.tex",
"max_stars_repo_stars_event_max_datetime": "2021-05-09T22:12:15.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-07-21T08:56:19.000Z",
"num_tokens": 1628,
"size": 7414
} |
% This LaTeX was auto-generated from MATLAB code.
% To make changes, update the MATLAB code and republish this document.
\documentclass{article}
\usepackage{graphicx}
\usepackage{color}
\sloppy
\definecolor{lightgray}{gray}{0.5}
\setlength{\parindent}{0pt}
\begin{document}
\subsection*{Contents}
\begin{itemize}
\setlength{\itemsep}{-1ex}
\item Allow images to be added by doing:
\item This case adapted from addmatrix. Thanks to
\item Stephen Eglen \begin{verbatim}[email protected]\end{verbatim} for this idea.
\item Check to see that the image is the correct size. Do
\item this by reading in the image and then checking its size.
\end{itemize}
\begin{verbatim}
function MakeQTMovie(cmd,arg, arg2)
% function MakeQTMovie(cmd, arg, arg2)
% Create a QuickTime movie from a bunch of figures (and an optional sound).
%
% Syntax: MakeQTMovie cmd [arg]
% The following commands are supported:
% addfigure - Add snapshot of current figure to movie
% addaxes - Add snapshot of current axes to movie
% addmatrix data - Add a matrix to movie (convert to jpeg with imwrite)
% addmatrixsc data - Add a matrix to movie (convert to jpeg with imwrite)
% (automatically scales image data)
% addsound data [sr] - Add sound to movie (only monaural for now)
% (third argument is the sound's sample rate.)
% cleanup - Remove the temporary files
% demo - Create a demonstration movie
% finish - Finish movie, write out QT file
% framerate fps - Set movies frame rate [Default is 10 fps]
% quality # - Set JPEG quality (between 0 and 1)
% size [# #] - Set plot size to [width height]
% start filename - Start creating a movie with this name
% The start command must be called first to provide a movie name.
% The finish command must be called last to write out the movie
% data. All other commands can be called in any order. Only one
% movie can be created at a time.
%
% This code is published as Interval Technical Report #1999-066
% The latest copy can be found at
% http://web.interval.com/papers/1999-066/
% (c) Copyright Malcolm Slaney, Interval Research, March 1999.
% This is experimental software and is being provided to Licensee
% 'AS IS.' Although the software has been tested on Macintosh, SGI,
% Linux, and Windows machines, Interval makes no warranties relating
% to the software's performance on these or any other platforms.
%
% Disclaimer
% THIS SOFTWARE IS BEING PROVIDED TO YOU 'AS IS.' INTERVAL MAKES
% NO EXPRESS, IMPLIED OR STATUTORY WARRANTY OF ANY KIND FOR THE
% SOFTWARE INCLUDING, BUT NOT LIMITED TO, ANY WARRANTY OF
% PERFORMANCE, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
% IN NO EVENT WILL INTERVAL BE LIABLE TO LICENSEE OR ANY THIRD
% PARTY FOR ANY DAMAGES, INCLUDING LOST PROFITS OR OTHER INCIDENTAL
% OR CONSEQUENTIAL DAMAGES, EVEN IF INTERVAL HAS BEEN ADVISED OF
% THE POSSIBLITY THEREOF.
%
% This software program is owned by Interval Research
% Corporation, but may be used, reproduced, modified and
% distributed by Licensee. Licensee agrees that any copies of the
% software program will contain the same proprietary notices and
% warranty disclaimers which appear in this software program.
% This program uses the Matlab imwrite routine to convert each image
% frame into JPEG. After first reserving 8 bytes for a header that points
% to the movie description, all the compressed images and the sound are
% added to the movie file. When the 'finish' method is called then the
% first 8 bytes of the header are rewritten to indicate the size of the
% movie data, and then the movie header ('moov structure') is written
% to the output file.
%
% This routine creates files according to the QuickTime file format as
% described in the appendix of
% "Quicktime (Inside MacIntosh)," Apple Computer Incorporated,
% Addison-Wesley Pub Co; ISBN: 0201622017, April 1993.
% I appreciate help that I received from Lee Fyock (MathWorks) and Aaron
% Hertzmann (Interval) in debugging and testing this work.
% Changes:
% July 5, 1999 - Removed stss atom since it upset PC version of QuickTime
% November 11, 1999 - Fixed quality bug in addmatrix. Added addmatrixsc.
% March 7, 2000 - by Jordan Rosenthal ([email protected]), Added truecolor
% capability when running in Matlab 5.3 changed some help comments, fixed
% some bugs, vectorized some code.
% April 7, 2000 - by Malcolm. Cleaned up axis/figure code and fixed(?) SGI
% playback problems. Added user data atom to give version information.
% Fixed sound format problems.
% April 10, 2000 - by Malcolm. Fixed problem with SGI (at least) and B&W
% addmatrix.
if nargin < 1
fprintf('Syntax: MakeQTMovie cmd [arg]\n')
fprintf('The following commands are supported:\n');
fprintf(' addfigure - Add snapshot of current figure to movie\n')
fprintf(' addaxes - Add snapshot of current axes to movie\n')
fprintf(' addmatrix data - Add a matrix to movie ');
fprintf('(convert to jpeg)\n')
fprintf(' addmatrixsc data - Add a matrix to movie ');
fprintf('(scale and convert to jpeg)\n')
fprintf(' addsound data - Add sound samples ');
fprintf('(with optional rate)\n')
fprintf(' demo - Show this program in action\n');
fprintf(' finish - Finish movie, write out QT file\n');
fprintf(' framerate # - Set movie frame rate ');
fprintf('(default is 10fps)\n');
fprintf(' quality # - Set JPEG quality (between 0 and 1)\n');
fprintf(' size [# #] - Set plot size to [width height]\n');
fprintf(' start filename - Start making a movie with ');
fprintf('this name\n');
return;
end
global MakeQTMovieStatus
MakeDefaultQTMovieStatus; % Needed first time, ignored otherwise
switch lower(cmd)
case {'addframe','addplot','addfigure','addaxes'}
\end{verbatim}
\begin{verbatim}
switch lower(cmd)
case {'addframe','addfigure'}
hObj = gcf; % Add the entire figure (with all axes)
otherwise
hObj = gca; % Add what's inside the current axis
end
frame = getframe(hObj);
[I,map] = frame2im(frame);
if ImageSizeChanged(size(I)) > 0
return;
end
if isempty(map)
% RGB image
imwrite(I,MakeQTMovieStatus.imageTmp, 'jpg', 'Quality', ...
MakeQTMovieStatus.spatialQual*100);
else
% Indexed image
writejpg_map(MakeQTMovieStatus.imageTmp, I, map);
end
[pos, len] = AddFileToMovie;
n = MakeQTMovieStatus.frameNumber + 1;
MakeQTMovieStatus.frameNumber = n;
MakeQTMovieStatus.frameStarts(n) = pos;
MakeQTMovieStatus.frameLengths(n) = len;
\end{verbatim}
\subsection*{Allow images to be added by doing:}
\begin{verbatim}
%% MakeQTMovie('addimage', '/path/to/file.jpg');
\end{verbatim}
\subsection*{This case adapted from addmatrix. Thanks to}
\subsection*{Stephen Eglen \texttt{[email protected]} for this idea.}
\begin{verbatim}
case 'addimage'
\end{verbatim}
\begin{verbatim}
if nargin < 2
fprintf('MakeQTMovie error: Need to specify a filename with ');
fprintf('the image command.\n');
return;
end
\end{verbatim}
\subsection*{Check to see that the image is the correct size. Do}
\subsection*{this by reading in the image and then checking its size.}
\begin{verbatim}
%% tim - temporary image.
tim = imread(arg); tim_size = size(tim);
fprintf('Image %s size %d %d\n', arg, tim_size(1), tim_size(2));
if ImageSizeChanged(tim_size) > 0
return;
end
[pos, len] = AddFileToMovie(arg);
n = MakeQTMovieStatus.frameNumber + 1;
MakeQTMovieStatus.frameNumber = n;
MakeQTMovieStatus.frameStarts(n) = pos;
MakeQTMovieStatus.frameLengths(n) = len;
\end{verbatim}
\begin{verbatim}
case 'addmatrix'
if nargin < 2
fprintf('MakeQTMovie error: Need to specify a matrix with ');
fprintf('the addmatrix command.\n');
return;
end
if ImageSizeChanged(size(arg)) > 0
return;
end
% Work around a bug, at least on the
% SGIs, which causes JPEGs to be
% written which can't be read with the
% SGI QT. Turn the B&W image into a
% color matrix.
if ndims(arg) < 3
arg(:,:,2) = arg;
arg(:,:,3) = arg(:,:,1);
end
imwrite(arg, MakeQTMovieStatus.imageTmp, 'jpg', 'Quality', ...
MakeQTMovieStatus.spatialQual*100);
[pos, len] = AddFileToMovie;
n = MakeQTMovieStatus.frameNumber + 1;
MakeQTMovieStatus.frameNumber = n;
MakeQTMovieStatus.frameStarts(n) = pos;
MakeQTMovieStatus.frameLengths(n) = len;
case 'addmatrixsc'
if nargin < 2
fprintf('MakeQTMovie error: Need to specify a matrix with ');
fprintf('the addmatrix command.\n');
return;
end
if ImageSizeChanged(size(arg)) > 0
return;
end
arg = arg - min(min(arg));
arg = arg / max(max(arg));
% Work around a bug, at least on the
% SGIs, which causes JPEGs to be
% written which can't be read with the
% SGI QT. Turn the B&W image into a
% color matrix.
if ndims(arg) < 3
arg(:,:,2) = arg;
arg(:,:,3) = arg(:,:,1);
end
imwrite(arg, MakeQTMovieStatus.imageTmp, 'jpg', 'Quality', ...
MakeQTMovieStatus.spatialQual*100);
[pos, len] = AddFileToMovie;
n = MakeQTMovieStatus.frameNumber + 1;
MakeQTMovieStatus.frameNumber = n;
MakeQTMovieStatus.frameStarts(n) = pos;
MakeQTMovieStatus.frameLengths(n) = len;
case 'addsound'
if nargin < 2
fprintf('MakeQTMovie error: Need to specify a sound array ');
fprintf('with the addsound command.\n');
return;
end
% Do stereo someday???
OpenMovieFile
MakeQTMovieStatus.soundLength = length(arg);
arg = round(arg/max(max(abs(arg)))*32765);
negs = find(arg<0);
arg(negs) = arg(negs) + 65536;
sound = mb16(arg);
MakeQTMovieStatus.soundStart = ftell(MakeQTMovieStatus.movieFp);
MakeQTMovieStatus.soundLen = length(sound);
fwrite(MakeQTMovieStatus.movieFp, sound, 'uchar');
if nargin < 3
arg2 = 22050;
end
MakeQTMovieStatus.soundRate = arg2;
case 'cleanup'
if isstruct(MakeQTMovieStatus)
if ~isempty(MakeQTMovieStatus.movieFp)
fclose(MakeQTMovieStatus.movieFp);
MakeQTMovieStatus.movieFp = [];
end
if ~isempty(MakeQTMovieStatus.imageTmp) & ...
exist(MakeQTMovieStatus.imageTmp,'file') > 0
delete(MakeQTMovieStatus.imageTmp);
MakeQTMovieStatus.imageTmp = [];
end
end
MakeQTMovieStatus = [];
case 'debug'
fprintf('Current Movie Data:\n');
fprintf(' %d frames at %d fps\n', MakeQTMovieStatus.frameNumber, ...
MakeQTMovieStatus.frameRate);
starts = MakeQTMovieStatus.frameStarts;
if length(starts) > 10, starts = starts(1:10);, end;
lens = MakeQTMovieStatus.frameLengths;
if length(lens) > 10, lens = lens(1:10);, end;
fprintf(' Start: %6d Size: %6d\n', [starts; lens]);
fprintf(' Movie Image Size: %dx%d\n', ...
MakeQTMovieStatus.imageSize(2), ...);
MakeQTMovieStatus.imageSize(1));
if length(MakeQTMovieStatus.soundStart) > 0
fprintf(' Sound: %d samples at %d Hz sampling rate ', ...
MakeQTMovieStatus.soundLength, ...
MakeQTMovieStatus.soundRate);
fprintf('at %d.\n', MakeQTMovieStatus.soundStart);
else
fprintf(' Sound: No sound track\n');
end
fprintf(' Temporary files for images: %s\n', ...
MakeQTMovieStatus.imageTmp);
fprintf(' Final movie name: %s\n', MakeQTMovieStatus.movieName);
fprintf(' Compression Quality: %g\n', ...
MakeQTMovieStatus.spatialQual);
case 'demo'
clf
fps = 10;
movieLength = 10;
sr = 22050;
fn = 'test.mov';
fprintf('Creating the movie %s.\n', fn);
MakeQTMovie('start',fn);
MakeQTMovie('size', [160 120]);
MakeQTMovie('quality', 1.0);
theSound = [];
for i=1:movieLength
plot(sin((1:100)/4+i));
MakeQTMovie('addaxes');
theSound = [theSound sin(440/sr*2*pi*(2^(i/12))*(1:sr/fps))];
end
MakeQTMovie('framerate', fps);
MakeQTMovie('addsound', theSound, sr);
MakeQTMovie('finish');
case {'finish','close'}
AddQTHeader;
MakeQTMovie('cleanup') % Remove temporary files
%MakeDefaultQTMovieStatus;
case 'framerate'
if nargin < 2
fprintf('MakeQTMovie error: Need to specify the ');
fprintf('frames/second with the framerate command.\n');
return;
end
MakeQTMovieStatus.frameRate = arg;
case 'help'
MakeQTMovie % To get help message.
case 'size'
% Size is off by one on the
% Mac.
if nargin < 2
fprintf('MakeQTMovie error: Need to specify a vector with ');
fprintf('the size command.\n');
return;
end
if length(arg) ~= 2
error('MakeQTMovie: Error, must supply 2 element size.');
end
oldUnits = get(gcf,'units');
set(gcf,'units','pixels');
cursize = get(gcf, 'position');
cursize(3) = arg(1);
cursize(4) = arg(2);
set(gcf, 'position', cursize);
set(gcf,'units',oldUnits);
case 'start'
if nargin < 2
fprintf('MakeQTMovie error: Need to specify a file name ');
fprintf('with start command.\n');
return;
end
MakeQTMovie('cleanup');
MakeDefaultQTMovieStatus;
MakeQTMovieStatus.movieName = arg;
case 'test'
clf
MakeQTMovieStatus = [];
MakeQTMovie('start','test.mov');
MakeQTMovie('size', [320 240]);
MakeQTMovie('quality', 1.0);
subplot(2,2,1);
for i=1:10
plot(sin((1:100)/4+i));
MakeQTMovie('addfigure');
end
MakeQTMovie('framerate', 10);
MakeQTMovie('addsound', sin(1:5000), 22050);
MakeQTMovie('debug');
MakeQTMovie('finish');
case 'quality'
if nargin < 2
fprintf('MakeQTMovie error: Need to specify a quality ');
fprintf('(between 0-1) with the quality command.\n');
return;
end
MakeQTMovieStatus.spatialQual = arg;
otherwise
fprintf('MakeQTMovie: Unknown method %s.\n', cmd);
end
%%%%%%%%%%%%%%% MakeDefaultQTMovieStatus %%%%%%%%%%%%%%%%%
% Make the default movie status structure.
function MakeDefaultQTMovieStatus
global MakeQTMovieStatus
if isempty(MakeQTMovieStatus)
MakeQTMovieStatus = struct(...
'frameRate', 10, ... % frames per second
'frameStarts', [], ... % Starting byte position
'frameLengths', [], ...
'timeScale', 10, ... % How much faster does time run?
'soundRate', 22050, ... % Sound Sample Rate
'soundStart', [], ... % Starting byte position
'soundLength', 0, ...
'soundChannels', 1, ... % Number of channels
'frameNumber', 0, ...
'movieFp', [], ... % File pointer
'imageTmp', tempname, ...
'movieName', 'output.mov', ...
'imageSize', [0 0], ...
'trackNumber', 0, ...
'timeScaleExpansion', 100, ...
'spatialQual', 1.0); % Between 0.0 and 1.0
end
%%%%%%%%%%%%%%% ImageSizeChanged %%%%%%%%%%%%%%%%%
% Check to see if the image size has changed. This m-file can't
% deal with that, so we'll return an error.
function err = ImageSizeChanged(newsize)
global MakeQTMovieStatus
newsize = newsize(1:2); % Don't care about RGB info, if present
oldsize = MakeQTMovieStatus.imageSize;
err = 0;
if sum(oldsize) == 0
MakeQTMovieStatus.imageSize = newsize;
else
if sum(newsize ~= oldsize) > 0
fprintf('MakeQTMovie Error: New image size');
fprintf('(%dx%d) doesn''t match old size (%dx%d)\n', ...
newsize(1), newsize(2), oldsize(1), oldsize(2));
fprintf(' Can''t add this image to the movie.\n');
err = 1;
end
end
%%%%%%%%%%%%%%% AddFileToMovie %%%%%%%%%%%%%%%%%
% OK, we've saved out an image file. Now add it to the end of the movie
% file we are creating.
% We'll copy the JPEG file in 16kbyte chunks to the end of the movie file.
% Keep track of the start and end byte position in the file so we can put
% the right information into the QT header.
function [pos, len] = AddFileToMovie(imageTmp)
global MakeQTMovieStatus
OpenMovieFile
if nargin < 1
imageTmp = MakeQTMovieStatus.imageTmp;
end
fp = fopen(imageTmp, 'rb');
if fp < 0
error('Could not reopen QT image temporary file.');
end
len = 0;
pos = ftell(MakeQTMovieStatus.movieFp);
while 1
data = fread(fp, 1024*16, 'uchar');
if isempty(data)
break;
end
cnt = fwrite(MakeQTMovieStatus.movieFp, data, 'uchar');
len = len + cnt;
end
fclose(fp);
%%%%%%%%%%%%%%% AddQTHeader %%%%%%%%%%%%%%%%%
% Go back and write the atom information that allows
% QuickTime to skip the image and sound data and find
% its movie description information.
function AddQTHeader()
global MakeQTMovieStatus
pos = ftell(MakeQTMovieStatus.movieFp);
header = moov_atom;
cnt = fwrite(MakeQTMovieStatus.movieFp, header, 'uchar');
fseek(MakeQTMovieStatus.movieFp, 0, -1);
cnt = fwrite(MakeQTMovieStatus.movieFp, mb32(pos), 'uchar');
fclose(MakeQTMovieStatus.movieFp);
MakeQTMovieStatus.movieFp = [];
%%%%%%%%%%%%%%% OpenMovieFile %%%%%%%%%%%%%%%%%
% Open a new movie file. Write out the initial QT header. We'll fill in
% the correct length later.
function OpenMovieFile
global MakeQTMovieStatus
if isempty(MakeQTMovieStatus.movieFp)
fp = fopen(MakeQTMovieStatus.movieName, 'wb');
if fp < 0
error('Could not open QT movie output file.');
end
MakeQTMovieStatus.movieFp = fp;
cnt = fwrite(fp, [mb32(0) mbstring('mdat')], 'uchar');
end
%%%%%%%%%%%%%%% writejpg_map %%%%%%%%%%%%%%%%%
% Like the imwrite routine, but first pass the image data through the indicated
% RGB map.
function writejpg_map(name,I,map)
global MakeQTMovieStatus
[y,x] = size(I);
% Force values to be valid indexes. This fixes a bug that occasionally
% occurs in frame2im in Matlab 5.2 which incorrectly produces values of I
% equal to zero.
I = max(1,min(I,size(map,1)));
rgb = zeros(y, x, 3);
t = zeros(y,x);
t(:) = map(I(:),1)*255; rgb(:,:,1) = t;
t(:) = map(I(:),2)*255; rgb(:,:,2) = t;
t(:) = map(I(:),3)*255; rgb(:,:,3) = t;
imwrite(uint8(rgb),name,'jpeg','Quality',MakeQTMovieStatus.spatialQual*100);
%%%%%%%%%%%%%%% SetAtomSize %%%%%%%%%%%%%%%%%
% Fill in the size of the atom
function y=SetAtomSize(x)
y = x;
y(1:4) = mb32(length(x));
%%%%%%%%%%%%%%% mb32 %%%%%%%%%%%%%%%%%
% Make a vector from a 32 bit integer
function y = mb32(x)
if size(x,1) > size(x,2)
x = x';
end
y = [bitand(bitshift(x,-24),255); ...
bitand(bitshift(x,-16),255); ...
bitand(bitshift(x, -8),255); ...
bitand(x, 255)];
y = y(:)';
%%%%%%%%%%%%%%% mb16 %%%%%%%%%%%%%%%%%
% Make a vector from a 16 bit integer
function y = mb16(x)
if size(x,1) > size(x,2)
x = x';
end
y = [bitand(bitshift(x, -8),255); ...
bitand(x, 255)];
y = y(:)';
%%%%%%%%%%%%%%% mb8 %%%%%%%%%%%%%%%%%
% Make a vector from a 8 bit integer
function y = mb8(x)
if size(x,1) > size(x,2)
x = x';
end
y = [bitand(x, 255)];
y = y(:)';
%
% The following routines all create atoms necessary
% to describe a QuickTime Movie. The basic idea is to
% fill in the necessary data, all converted to 8 bit
% characters, then fix it up later with SetAtomSize so
% that it has the correct header. (This is easier than
% counting by hand.)
%%%%%%%%%%%%%%% mbstring %%%%%%%%%%%%%%%%%
% Make a vector from a character string
function y = mbstring(s)
y = double(s);
%%%%%%%%%%%%%%% dinf_atom %%%%%%%%%%%%%%%%%
function y = dinf_atom()
y = SetAtomSize([mb32(0) mbstring('dinf') dref_atom]);
%%%%%%%%%%%%%%% dref_atom %%%%%%%%%%%%%%%%%
function y = dref_atom()
y = SetAtomSize([mb32(0) mbstring('dref') mb32(0) mb32(1) ...
mb32(12) mbstring('alis') mb32(1)]);
%%%%%%%%%%%%%%% edts_atom %%%%%%%%%%%%%%%%%
function y = edts_atom(add_sound_p)
global MakeQTMovieStatus
fixed1 = bitshift(1,16); % Fixed point 1
if add_sound_p > 0
duration = MakeQTMovieStatus.soundLength / ...
MakeQTMovieStatus.soundRate * ...
MakeQTMovieStatus.timeScale;
else
duration = MakeQTMovieStatus.frameNumber / ...
MakeQTMovieStatus.frameRate * ...
MakeQTMovieStatus.timeScale;
end
duration = ceil(duration);
y = [mb32(0) ... % Atom Size
mbstring('edts') ... % Atom Name
SetAtomSize([mb32(0) ... % Atom Size
mbstring('elst') ... % Atom Name
mb32(0) ... % Version/Flags
mb32(1) ... % Number of entries
mb32(duration) ... % Length of this track
mb32(0) ... % Time
mb32(fixed1)])]; % Rate
y = SetAtomSize(y);
%%%%%%%%%%%%%%% hdlr_atom %%%%%%%%%%%%%%%%%
function y = hdlr_atom(component_type, sub_type)
if strcmp(sub_type, 'vide')
type_string = 'Apple Video Media Handler';
elseif strcmp(sub_type, 'alis')
type_string = 'Apple Alias Data Handler';
elseif strcmp(sub_type, 'soun')
type_string = 'Apple Sound Media Handler';
end
y = [mb32(0) ... % Atom Size
mbstring('hdlr') ... % Atom Name
mb32(0) ... % Version and Flags
mbstring(component_type) ... % Component Name
mbstring(sub_type) ... % Sub Type Name
mbstring('appl') ... % Component manufacturer
mb32(0) ... % Component flags
mb32(0) ... % Component flag mask
mb8(length(type_string)) ... % Type Name byte count
mbstring(type_string)]; % Type Name
y = SetAtomSize(y);
%%%%%%%%%%%%%%% mdhd_atom %%%%%%%%%%%%%%%%%
function y = mdhd_atom(add_sound_p)
global MakeQTMovieStatus
if add_sound_p
data = [mb32(MakeQTMovieStatus.soundRate) ...
mb32(MakeQTMovieStatus.soundLength)];
else
data = [mb32(MakeQTMovieStatus.frameRate * ...
MakeQTMovieStatus.timeScaleExpansion) ...
mb32(MakeQTMovieStatus.frameNumber * ...
MakeQTMovieStatus.timeScaleExpansion)];
end
y = [mb32(0) mbstring('mdhd') ... % Atom Header
mb32(0) ...
mb32(round(now*3600*24)) ... % Creation time
mb32(round(now*3600*24)) ... % Modification time
data ...
mb16(0) mb16(0)];
y = SetAtomSize(y);
%%%%%%%%%%%%%%% mdia_atom %%%%%%%%%%%%%%%%%
function y = mdia_atom(add_sound_p)
global MakeQTMovieStatus
if add_sound_p
hdlr = hdlr_atom('mhlr', 'soun');
else
hdlr = hdlr_atom('mhlr', 'vide');
end
y = [mb32(0) mbstring('mdia') ... % Atom Header
mdhd_atom(add_sound_p) ...
hdlr ... % Handler Atom
minf_atom(add_sound_p)];
y = SetAtomSize(y);
%%%%%%%%%%%%%%% minf_atom %%%%%%%%%%%%%%%%%
function y = minf_atom(add_sound_p)
global MakeQTMovieStatus
if add_sound_p
data = smhd_atom;
else
data = vmhd_atom;
end
y = [mb32(0) mbstring('minf') ... % Atom Header
data ...
hdlr_atom('dhlr','alis') ...
dinf_atom ...
stbl_atom(add_sound_p)];
y = SetAtomSize(y);
%%%%%%%%%%%%%%% moov_atom %%%%%%%%%%%%%%%%%
function y=moov_atom
global MakeQTMovieStatus
MakeQTMovieStatus.timeScale = MakeQTMovieStatus.frameRate * ...
MakeQTMovieStatus.timeScaleExpansion;
if MakeQTMovieStatus.soundLength > 0
sound = trak_atom(1);
else
sound = [];
end
y = [mb32(0) mbstring('moov') ...
mvhd_atom udat_atom sound trak_atom(0) ];
y = SetAtomSize(y);
%%%%%%%%%%%%%%% mvhd_atom %%%%%%%%%%%%%%%%%
function y=mvhd_atom
global MakeQTMovieStatus
fixed1 = bitshift(1,16); % Fixed point 1
frac1 = bitshift(1,30); % Fractional 1
if length(MakeQTMovieStatus.soundStart) > 0
NumberOfTracks = 2;
else
NumberOfTracks = 1;
end
% Need to make sure its longer
% of movie and sound lengths
MovieDuration = max(MakeQTMovieStatus.frameNumber / ...
MakeQTMovieStatus.frameRate, ...
MakeQTMovieStatus.soundLength / ...
MakeQTMovieStatus.soundRate);
MovieDuration = ceil(MovieDuration * MakeQTMovieStatus.timeScale);
y = [mb32(0) ... % Size
mbstring('mvhd') ... % Movie Data
mb32(0) ... % Version and Flags
mb32(0) ... % Creation Time (unknown)
mb32(0) ... % Modification Time (unknown)
mb32(MakeQTMovieStatus.timeScale) ... % Movie's Time Scale
mb32(MovieDuration) ... % Movie Duration
mb32(fixed1) ... % Preferred Rate
mb16(255) ... % Preferred Volume
mb16(0) ... % Fill
mb32(0) ... % Fill
mb32(0) ... % Fill
mb32(fixed1) mb32(0) mb32(0) ... % Transformation matrix (identity)
mb32(0) mb32(fixed1) mb32(0) ...
mb32(0) mb32(0) mb32(frac1) ...
mb32(0) ... % Preview Time
mb32(0) ... % Preview Duration
mb32(0) ... % Poster Time
mb32(0) ... % Selection Time
mb32(0) ... % Selection Duration
mb32(0) ... % Current Time
mb32(NumberOfTracks)]; % Video and/or Sound?
y = SetAtomSize(y);
%%%%%%%%%%%%%%% raw_image_description %%%%%%%%%%%%%%%%%
function y = raw_image_description()
global MakeQTMovieStatus
fixed1 = bitshift(1,16); % Fixed point 1
codec = [12 'Photo - JPEG '];
y = [mb32(0) mbstring('jpeg') ... % Atom Header
mb32(0) mb16(0) mb16(0) mb16(0) mb16(1) ...
mbstring('appl') ...
mb32(1023) ... % Temporal Quality (perfect)
mb32(floor(1023*MakeQTMovieStatus.spatialQual)) ...
mb16(MakeQTMovieStatus.imageSize(2)) ...
mb16(MakeQTMovieStatus.imageSize(1)) ...
mb32(fixed1 * 72) mb32(fixed1 * 72) ...
mb32(0) ...
mb16(0) ...
mbstring(codec) ...
mb16(24) mb16(65535)];
y = SetAtomSize(y);
%%%%%%%%%%%%%%% raw_sound_description %%%%%%%%%%%%%%%%%
function y = raw_sound_description()
global MakeQTMovieStatus
y = [mb32(0) mbstring('twos') ... % Atom Header
mb32(0) mb16(0) mb16(0) mb16(0) mb16(0) ...
mb32(0) ...
mb16(MakeQTMovieStatus.soundChannels) ...
mb16(16) ... % 16 bits per sample
mb16(0) mb16(0) ...
mb32(round(MakeQTMovieStatus.soundRate*65536))];
y = SetAtomSize(y);
%%%%%%%%%%%%%%% smhd_atom %%%%%%%%%%%%%%%%%
function y = smhd_atom()
y = SetAtomSize([mb32(0) mbstring('smhd') mb32(0) mb16(0) mb16(0)]);
%%%%%%%%%%%%%%% stbl_atom %%%%%%%%%%%%%%%%%
% Removed the stss atom since it seems to upset the PC version of QT
% and it is empty so it doesn't add anything.
% Malcolm - July 5, 1999
function y = stbl_atom(add_sound_p)
y = [mb32(0) mbstring('stbl') ... % Atom Header
stsd_atom(add_sound_p) ...
stts_atom(add_sound_p) ...
stsc_atom(add_sound_p) ...
stsz_atom(add_sound_p) ...
stco_atom(add_sound_p)];
y = SetAtomSize(y);
%%%%%%%%%%%%%%% stco_atom %%%%%%%%%%%%%%%%%
function y = stco_atom(add_sound_p)
global MakeQTMovieStatus
if add_sound_p
y = [mb32(0) mbstring('stco') mb32(0) mb32(1) ...
mb32(MakeQTMovieStatus.soundStart)];
else
y = [mb32(0) mbstring('stco') mb32(0) ...
mb32(MakeQTMovieStatus.frameNumber) ...
mb32(MakeQTMovieStatus.frameStarts)];
end
y = SetAtomSize(y);
%%%%%%%%%%%%%%% stsc_atom %%%%%%%%%%%%%%%%%
function y = stsc_atom(add_sound_p)
global MakeQTMovieStatus
if add_sound_p
samplesperchunk = MakeQTMovieStatus.soundLength;
else
samplesperchunk = 1;
end
y = [mb32(0) mbstring('stsc') mb32(0) mb32(1) ...
mb32(1) mb32(samplesperchunk) mb32(1)];
y = SetAtomSize(y);
%%%%%%%%%%%%%%% stsd_atom %%%%%%%%%%%%%%%%%
function y = stsd_atom(add_sound_p)
if add_sound_p
desc = raw_sound_description;
else
desc = raw_image_description;
end
y = [mb32(0) mbstring('stsd') mb32(0) mb32(1) desc];
y = SetAtomSize(y);
%%%%%%%%%%%%%%% stss_atom %%%%%%%%%%%%%%%%%
function y = stss_atom()
y = SetAtomSize([mb32(0) mbstring('stss') mb32(0) mb32(0)]);
%%%%%%%%%%%%%%% stsz_atom %%%%%%%%%%%%%%%%%
function y = stsz_atom(add_sound_p)
global MakeQTMovieStatus
if add_sound_p
y = [mb32(0) mbstring('stsz') mb32(0) mb32(2) ...
mb32(MakeQTMovieStatus.soundLength)];
else
y = [mb32(0) mbstring('stsz') mb32(0) mb32(0) ...
mb32(MakeQTMovieStatus.frameNumber) ...
mb32(MakeQTMovieStatus.frameLengths)];
end
y = SetAtomSize(y);
%%%%%%%%%%%%%%% stts_atom %%%%%%%%%%%%%%%%%
function y = stts_atom(add_sound_p)
global MakeQTMovieStatus
if add_sound_p
count_duration = [mb32(MakeQTMovieStatus.soundLength) mb32(1)];
else
count_duration = [mb32(MakeQTMovieStatus.frameNumber) ...
mb32(MakeQTMovieStatus.timeScaleExpansion)];
end
y = SetAtomSize([mb32(0) mbstring('stts') mb32(0) mb32(1) count_duration]);
%%%%%%%%%%%%%%% trak_atom %%%%%%%%%%%%%%%%%
function y = trak_atom(add_sound_p)
global MakeQTMovieStatus
y = [mb32(0) mbstring('trak') ... % Atom Header
tkhd_atom(add_sound_p) ... % Track header
edts_atom(add_sound_p) ... % Edit List
mdia_atom(add_sound_p)];
y = SetAtomSize(y);
%%%%%%%%%%%%%%% tkhd_atom %%%%%%%%%%%%%%%%%
function y = tkhd_atom(add_sound_p)
global MakeQTMovieStatus
fixed1 = bitshift(1,16); % Fixed point 1
frac1 = bitshift(1,30); % Fractional 1 (CHECK THIS)
if add_sound_p > 0
duration = MakeQTMovieStatus.soundLength / ...
MakeQTMovieStatus.soundRate * ...
MakeQTMovieStatus.timeScale;
else
duration = MakeQTMovieStatus.frameNumber / ...
MakeQTMovieStatus.frameRate * ...
MakeQTMovieStatus.timeScale;
end
duration = ceil(duration);
y = [mb32(0) mbstring('tkhd') ... % Atom Header
mb32(15) ... % Version and flags
mb32(round(now*3600*24)) ... % Creation time
mb32(round(now*3600*24)) ... % Modification time
mb32(MakeQTMovieStatus.trackNumber) ...
mb32(0) ...
mb32(duration) ... % Track duration
mb32(0) mb32(0) ... % Offset and priority
mb16(0) mb16(0) mb16(255) mb16(0) ... % Layer, Group, Volume, fill
mb32(fixed1) mb32(0) mb32(0) ... % Transformation matrix (identity)
mb32(0) mb32(fixed1) mb32(0) ...
mb32(0) mb32(0) mb32(frac1)];
if add_sound_p
y = [y mb32(0) mb32(0)]; % Zeros for sound
else
y = [y mb32(fliplr(MakeQTMovieStatus.imageSize)*fixed1)];
end
y= SetAtomSize(y);
MakeQTMovieStatus.trackNumber = MakeQTMovieStatus.trackNumber + 1;
%%%%%%%%%%%%%%% udat_atom %%%%%%%%%%%%%%%%%
function y = udat_atom()
atfmt = [64 double('fmt')];
atday = [64 double('day')];
VersionString = 'Matlab MakeQTMovie version April 7, 2000';
y = [mb32(0) mbstring('udta') ...
SetAtomSize([mb32(0) atfmt mbstring(['Created ' VersionString])]) ...
SetAtomSize([mb32(0) atday ' ' date])];
y = SetAtomSize(y);
%%%%%%%%%%%%%%% vmhd_atom %%%%%%%%%%%%%%%%%
function y = vmhd_atom()
y = SetAtomSize([mb32(0) mbstring('vmhd') mb32(0) ...
mb16(64) ... % Graphics Mode
mb16(0) mb16(0) mb16(0)]); % Op Color
\end{verbatim}
\color{lightgray} \begin{verbatim}Syntax: MakeQTMovie cmd [arg]
The following commands are supported:
addfigure - Add snapshot of current figure to movie
addaxes - Add snapshot of current axes to movie
addmatrix data - Add a matrix to movie (convert to jpeg)
addmatrixsc data - Add a matrix to movie (scale and convert to jpeg)
addsound data - Add sound samples (with optional rate)
demo - Show this program in action
finish - Finish movie, write out QT file
framerate # - Set movie frame rate (default is 10fps)
quality # - Set JPEG quality (between 0 and 1)
size [# #] - Set plot size to [width height]
start filename - Start making a movie with this name
\end{verbatim} \color{black}
\end{document}
| {
"alphanum_fraction": 0.6616410941,
"avg_line_length": 30.5918367347,
"ext": "tex",
"hexsha": "13935c035d4fa18b02bf9da8a8ed1caebb3c7926",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "24dc8f732ef28acfa1b3594fdd9bbd61b5439d18",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "hirowgit/1B0_matlab_optmization_course",
"max_forks_repo_path": "MakeQTMovie/MakeQTMovie.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "24dc8f732ef28acfa1b3594fdd9bbd61b5439d18",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "hirowgit/1B0_matlab_optmization_course",
"max_issues_repo_path": "MakeQTMovie/MakeQTMovie.tex",
"max_line_length": 96,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "24dc8f732ef28acfa1b3594fdd9bbd61b5439d18",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "hirowgit/1B0_matlab_optmization_course",
"max_stars_repo_path": "MakeQTMovie/MakeQTMovie.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 8911,
"size": 29980
} |
\documentclass{beamer}
\usepackage{fontspec}
\usepackage{xeCJK}
\setCJKmainfont[BoldFont=Noto Serif CJK TC Bold]{Noto Serif CJK TC}
\XeTeXlinebreaklocale "zh"
\XeTeXlinebreakskip = 0pt plus 1pt
\linespread{1.3}
\allowdisplaybreaks
%\newcommand{\weib}{\CJKfamily{weib}}
%\newcommand{\hkss}{\CJKfamily{hkss}}
%\newcommand{\hksy}{\CJKfamily{hksy}}
%\newcommand{\lth}{\CJKfamily{lth}}
\usepackage{color}
\usepackage{booktabs}
\usepackage{tabularx}
\usepackage{caption}
\usepackage{tikz}
\usepackage{verbatim}
\usepackage{pgfplotstable}
\pgfplotsset{width=12cm}
\pgfplotsset{height=7cm}
\pgfplotsset{compat=1.13}
\usetheme{EastLansing}
\usetikzlibrary{positioning}
\useinnertheme{rectangles}
\usefonttheme{professionalfonts}
\newcommand{\lw}{0.8mm}
\setbeamercovered{transparent}
%\AtBeginSection[]
%{
%\begin{frame}<beamer>
%\frametitle{報告大綱}
%%\frametitle{RoadMap}
%\tableofcontents[currentsection]
%\end{frame}
%}
\title{Paper Report}
\subtitle{\textcolor[rgb]{0.00,0.50,1.00}{{Speech Processing \& Machine Learning Laboratory}}}
\author{徐瑞陽}
\date{2019/09/12}
\begin{document}
\begin{frame}
\maketitle
\end{frame}
\begin{frame}
\includegraphics[width=\textwidth]{fig/CAVIA.png}
\center ICML 2019
\end{frame}
\begin{frame}
\includegraphics[width=\textwidth]{fig/title.png}
\center ICML 2018
\end{frame}
\begin{frame}{Trends in Meta Learning}
\begin{itemize}
\item Task conditioning
\item Parameter space warping
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Outline}
\tableofcontents
\end{frame}
\section{Quick recap of meta learning}
\begin{frame}{Objective}
\[ \theta^\star = \arg \max_\theta \mathbb{E}_{\mathcal{D} \sim p(\mathcal{D}) }[\mathcal{A}_\theta(\mathcal{D})]\]
$\mathcal{A}$ depends on which application we want to learn \\
(e.g classification, regression... and MORE!!)
\end{frame}
\begin{frame}{Common Approaches\footnote{source: \href{http://metalearning-symposium.ml/files/vinyals.pdf}{Vinyals' talk at NIPS 2017}}}
\begin{itemize}
\item \textbf{Metric-based}
\item Model-based: won't talk today
\item Optimization-based
\begin{itemize}
\item Update rule: e.g optimizer
\item \textbf{Initial weight} $\theta_0$
\end{itemize}
\end{itemize}
\end{frame}
\section{Task Conditioning}
\begin{frame}
\begin{center}
\LARGE{Task Conditioning}
\end{center}
\end{frame}
\begin{frame}{Motivation}
For different tasks, we still use
\begin{itemize}
\item Same structure: e.g feature encoder in metric-based approach
\item Same parameter: e.g same $\theta_0$ in optimization-based approach
\end{itemize}
to adapt
\begin{center}
Can we exploit prior knowledge about tasks,\\ and adapt to the current task more effectively?
\end{center}
\end{frame}
\begin{frame}{Consider 2 extremes}
Every task
\begin{itemize}
\item share same parameter $\theta$ to start
\item have individual parameter (training from scratch)
\end{itemize}
\end{frame}
\begin{frame}{Implementations of Task Conditioning}
\begin{itemize}
\item Learn \textbf{another} feature encoder as \textbf{condition of tasks} to get more useful representation before meta learning
\begin{itemize}[<+->]
\item TADAM: Use $\mu$ of class prototypes as task representation (NIPS 2018)
\item CAML: Use ProtoNet offline to learn an encoder that scales and shifts $x_i$, then MAML (ICLR 2019)
\item TAFE-Net: Use task embedding (sample image/ description) to generate weights of network used for prediction (CVPR 2019)
\item Category Traversal
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{Example: Overview of CAML}
\center \includegraphics[width=1.0\textwidth]{fig/caml.png}
\end{frame}
\begin{frame}{Implementations of Task Conditioning}
\begin{itemize}
\item Cross-modality: \textbf{zero-shot concept} \\
e.g use text embedding as additional input to learn prototype in ProtoNet (ICLR 2019)
\end{itemize}
\center \includegraphics[width=0.7\textwidth]{fig/cross-modal.png}
\end{frame}
\begin{frame}{Implementations of Task Conditioning}
\begin{itemize}
\item Mask some parameter based on task representation
\end{itemize}
\center \includegraphics[width=\textwidth]{fig/hierarchical-meta.png}
\end{frame}
\begin{frame}{Results on 5-way 1-shot/5-shot on MiniImageNet}
\center \includegraphics[width=0.6\textwidth]{fig/task-conditioning-result.png}
\end{frame}
\begin{frame}{Some Thoughts}
\begin{itemize}
\item Use \textbf{pre-trained} feature encoder as task representation extractor?
\item Apply to Speech/NLP? It depends on the task definition!
\end{itemize}
%\begin{center}
%\LARGE{What about applications in Speech/NLP} ?
%\end{center}
\end{frame}
\section{Parameter Space Warping}
\begin{frame}
\begin{center}
\LARGE{Parameter Space Warping}
\end{center}
\end{frame}
\begin{frame}{Motivation}
Intuitively, the meta learner model should have \textbf{more} capacity than the learner model, but current optimization-based approaches (e.g MAML) use the same model/parameters for both of them\\
\center Is operating in the same parameter space reasonable?
\end{frame}
\begin{frame}{Idea}
Limit the freedom of task learner by fixing some param during adaptation
\center \includegraphics[width=1.0\textwidth]{fig/space-warp.png}
\end{frame}
\begin{frame}{Implementations of Parameter Space Warping}
We have 2 sets of parameters
\begin{itemize}
\item $\phi$: params shared across tasks and fixed during adaptation,\\ updated in the outer loop (\textbf{meta})
\item $\theta$: param for adaptation (but initial value shared across task), updated in inner loop
\end{itemize}
\end{frame}
\begin{frame}{Key Questions for Implementations}
\begin{itemize}
\item How to determine which set each param belongs to?
\begin{itemize}
\item pre-defined
\item learned
\end{itemize}
\item Which \textbf{warp} module should use?
\begin{itemize}
\item simple feed-forward
\item complicated structure (like recurrent structure)
\end{itemize}
\end{itemize}
\end{frame}
\subsection{CAVIA}
\begin{frame}{CAVIA}
\includegraphics[width=\textwidth]{fig/CAVIA.png}
\center ICML 2019
\end{frame}
\begin{frame}{Architecture of CAVIA}
\begin{itemize}
\item pre-defined meta param set
\item feed-forward warp module
\end{itemize}
\center \includegraphics[width=0.7\textwidth]{fig/CAVIA-idea.png}
\end{frame}
\begin{frame}{Update Rule of CAVIA}
\begin{itemize}
\item prediction given network: $f_{\phi,\theta_0}(x)$
\item loss of task $i$: $\mathcal{L}_{T_i}(f_{\phi,\theta_0}(x),y)$
\item training set of task $i$: $\mathcal{D}_i^{train}$, testing set: $\mathcal{D}_i^{test}$
\item $\theta_0 = \mathbf{0}$
\end{itemize}
\end{frame}
\begin{frame}{Update Rule of CAVIA}
\begin{block}{Inner loop update}
\[\theta_i = \theta_0 - \alpha \nabla_{\theta} \frac{1}{|\mathcal{D}_i^{train}|}\sum_{(x,y) \in \mathcal{D}_i^{train}}\mathcal{L}_{T_i}(f_{\phi,\theta_0}(x),y)\]
\end{block}
\begin{block}{Outer loop update}
\[ \phi \leftarrow \phi - \beta \nabla_{\phi}\frac{1}{N} \sum_{T_i \in \mathcal{T}} \frac{1}{|\mathcal{D}_i^{test}|} \sum_{(x,y) \in \mathcal{D}_i^{test}} \mathcal{L}_{T_i}(f_{\phi,\theta_i}(x),y)\]
\end{block}
$N$: meta batch size of tasks $\mathcal{T}$
\end{frame}
\begin{frame}{Experiment: Sine Curves Regression}
A task
\begin{itemize}
\item defined by amplitude ($\sim U(0.1,0.5)$) and phase ($\sim U(0, \pi)$)
\item $10$ labeled data points given during adaptation
\end{itemize}
Model
\begin{itemize}
\item NN with 2 layers with 40 nodes
\item Number of additional input params $\theta$: $2 \sim 50$
\end{itemize}
\end{frame}
\begin{frame}{Experiment: Sine Curves Regression}
\center \includegraphics[width=1.0\textwidth]{fig/caml-sine-result.png}
Green line: also meta-learn $\theta_0$ (rather than set to $\mathbf{0}$)
\end{frame}
\subsection{MT-Net}
\begin{frame}{MT-Net}
\includegraphics[width=\textwidth]{fig/title.png}
\center ICML 2018
\end{frame}
\begin{frame}{Architecture of MT-Net}
\begin{itemize}
\item \textbf{learned meta param set}: see next slide
\item feed-forward warp module
\end{itemize}
\center \includegraphics[width=0.8\textwidth]{fig/MT-idea.png}
\end{frame}
\begin{frame}{Architecture of MT-Net}
\begin{itemize}
\item T-net: last slide, learns a metric in activation space
\item MT-net: additionally learns which subset of params should be adapted
\end{itemize}
\center \includegraphics[width=0.8\textwidth]{fig/MT-idea2.png}
\end{frame}
\begin{frame}{More on mask $M$}
\begin{itemize}
\item $W$ is $m \times n$ matrix, T is $n \times n$ matrix
\item $M = [\mathbf{m_1},\cdots,\mathbf{m_n}]^\top$
\item $\mathbf{m_j}^\top \sim Bern\Big( \frac{\exp(\zeta_j)}{\exp(\zeta_j) + 1} \Big) \mathbf{1}^\top$
\end{itemize}
\end{frame}
\begin{frame}{Update Rule of MT-Net}
\begin{itemize}
\item meta param $\phi = \lbrace \underbrace{W^1,\cdots,W^L}_{\theta_W},\underbrace{T^1,\cdots,T^L}_{\theta_T},\underbrace{\mathbf{\zeta^1},\cdots,\mathbf{\zeta^L}}_{\theta_\zeta} \rbrace$
\item param for adaptation $\theta = \lbrace \underbrace{W^1,\cdots,W^L}_{\theta_W} \rbrace$
\end{itemize}
Note
\begin{itemize}
\item same rule as CAVIA
\item use \texttt{Gumbel-Softmax} estimator to differentiate through sampling of masks
\end{itemize}
%\begin{block}{Outer loop update}
%\[ \phi \leftarrow \phi - \beta \nabla_{\phi}\frac{1}{N} \sum_{T_i \in \mathcal{T}} \frac{1}{|\mathcal{D}_i^{test}|} \sum_{(x,y) \in \mathcal{D}_i^{test}} \mathcal{L}_{T_i}(f_{\phi,\theta_i}(x),y)\]
%\end{block}
\end{frame}
\begin{frame}{More on MT-Net}
Consider the output changes as follows, denote $A=TW$
\begin{block}{Output change}
\[ \mathbf{y}^{\text{new}} = \mathbf{y} - \alpha (T \odot M_T^{\top}) (M_T \odot T^{\top}) \nabla_A \mathcal{L}_{\mathcal{T}}\mathbf{x} \]
\end{block}
\begin{itemize}
\item $T$ controls the step size (make model more lr-agnostic)
\item $M$ defines the task-specific \& task-mutual neurons
\end{itemize}
\end{frame}
\begin{frame}{Experiment: Sine Curves Regression}
Same setting as CAVIA
\center \includegraphics[width=0.8\textwidth]{fig/MT-sine.png}
\end{frame}
\begin{frame}{Experiment: Polynomial Regression}
A task
\begin{itemize}
\item defined by $\sum_{i=0}^n c_i x^i$
\item $n \in \lbrace 0, 1, 2 \rbrace$
\item $c_0, c_1, c_2 \sim U(-1,1)$
\item $10$ labeled data points given during adaptation
\end{itemize}
\end{frame}
\begin{frame}{Experiment: Polynomial Regression}
\center \includegraphics[width=0.7\textwidth]{fig/MT-poly.png}
\center Observation: the dimension of the learned subspace reflects the underlying complexity of the tasks
\end{frame}
\subsection{Warped Gradient Descent}
\begin{frame}{Warped Gradient Descent}
\includegraphics[width=\textwidth]{fig/Warp-meta.png}
\center 8/30 on Arxiv
\end{frame}
\begin{frame}{Architecture of Warped-GD}
\begin{itemize}
\item predefined/learned meta param set
\item complicated warp module
\item define $\mathcal{L}_{meta}, \mathcal{L}_{task}$ individually
\end{itemize}
%\center \includegraphics[width=0.8\textwidth]{fig/MT-idea.png}
\end{frame}
%\section{MISC}
\begin{frame}
\begin{center}
%\weib{\LARGE{謝謝聆聽!}}
\LARGE{Questions?}
\end{center}
\end{frame}
\subsection{Appendix}
\end{document}
\section{Mikmatch}
\label{sec:mikmatch}
Mikmatch is directly supported in the toplevel.
Regular expressions \emph{share} their own namespace.
\begin{enumerate}
\item compile
\begin{bluetext}
"test.ml" : pp(camlp4o -parser pa_mikmatch_pcre.cma)
<test.{cmo,byte,native}> : pkg_mikmatch_pcre
-- myocamlbuild.ml use default
\end{bluetext}
\item toplevel
\begin{ocamlcode}
ocaml
#camlp4o ;;
#require "mikmatch_pcre" ;; (* make sure to follow the order strictly *)
\end{ocamlcode}
\item debug
\begin{bluetext}
camlp4of -parser pa_mikmatch_pcre.cma -printer o test.ml
(* -no_comments does not work *)
\end{bluetext}
\item structure \\
Regular expressions can be used to match strings; a regexp pattern must be preceded by
the RE keyword, or placed between slashes (/../).
\begin{ocamlcode}
match ... with pattern -> ...
function pattern -> ...
try ... with pattern -> ...
let /regexp/ = expr in expr
let try (rec) let-bindings in expr with pattern-match
(only handles exception raised by let-bindings)
MACRO-NAME regexp -> expr ((FILTER | SPLIT) regexp)
\end{ocamlcode}
\begin{alternate}
let x = (function (RE digit+) -> true | _ -> false) "13232";;
val x : bool = true
# let x = (function (RE digit+) -> true | _ -> false) "1323a2";;
val x : bool = true
# let x = (function (RE digit+) -> true | _ -> false) "x1323a2";;
val x : bool = false
\end{alternate}
\begin{ocamlcode}
let get_option () = match Sys.argv with
[| _ |] -> None
|[| _ ; RE (lower+ as key) "=" (_* as data) |] -> Some(key,data)
|_ -> failwith "Usage: myprog [key=val]";;
val get_option : unit -> (string * string) option = <fun>
\end{ocamlcode}
\begin{alternate}
let option = try get_option () with Failure (RE "usage"~) -> None ;;
val option : (string * string) option = None
\end{alternate}
\item \textbf{sample regex}
built-in regexes
\begin{bluetext}
lower, upper, alpha(lower|upper), digit, alnum, punct
graph(alnum|punct), blank,cntrl,xdigit,space
int,float
bol(beginning of line)
eol
any(except newline)
bos, eos
\end{bluetext}
\begin{alternate}
let f = (function (RE int as x : int) -> x ) "132";;
val f : int = 132
let f = (function (RE float as x : float) -> x ) "132.012";;
val f : float = 132.012
let f = (function (RE lower as x ) -> x ) "a";;
val f : string = "a"
let src = RE_PCRE int ;;
val src : string * 'a list = ("[+\\-]?(?:0(?:[Xx][0-9A-Fa-f]+|(?:[Oo][0-7]+|[Bb][01]+))|[0-9]+)", [])
let x = (function (RE _* bol "haha") -> true | _ -> false) "x\nhaha";;
val x : bool = true
\end{alternate}
\begin{ocamlcode}
RE hello = "Hello!"
RE octal = ['0'-'7']
RE octal1 = ["01234567"]
RE octal2 = ['0' '1' '2' '3' '4' '5' '6' '7']
RE octal3 = ['0'-'4' '5'-'7']
RE octal4 = digit # ['8' '9'] (* digit is a predefined set of characters *)
RE octal5 = "0" | ['1'-'7']
RE octal6 = ['0'-'4'] | ['5'-'7']
RE not_octal = [ ^ '0'-'7'] (* this matches any character but an octal digit *)
RE not_octal' = [ ^ octal] (* another way to write it *)
\end{ocamlcode}
\begin{ocamlcode}
RE paren' = "(" _* Lazy ")"
(* _ is wild pattern, paren is built in *)
let p = function (RE (paren' as x )) -> x ;;
\end{ocamlcode}
\begin{alternate}
p "(xx))";;
- : string = "(xx)"
# p "(x)x))";;
- : string = "(x)"
\end{alternate}
\begin{ocamlcode}
RE anything = _* (* any string, as long as possible *)
RE anything' = _* Lazy (* any string, as short as possible *)
RE opt_hello = "hello"? (* matches hello if possible, or nothing *)
RE opt_hello' = "hello"? Lazy (* matches nothing if possible, or hello *)
RE num = digit+ (* a non-empty sequence of digits, as long as possible;
shortcut for: digit digit* *)
RE lazy_junk = _+ Lazy (* match one character then match any sequence
of characters and give up as early as possible *)
RE at_least_one_digit = digit{1+} (* same as digit+ *)
RE at_least_three_digits = digit{3+}
RE three_digits = digit{3}
RE three_to_five_digits = digit{3-5}
RE lazy_three_to_five_digits = digit{3-5} Lazy
let test s = match s with
RE "hello" -> true
| _ -> false
\end{ocamlcode}
It is important to know that the matching process will try \textit{any} possible combination until
the pattern is matched. The combinations are tried from left to right, and
repeats are either greedy or lazy (greedy is the default); laziness is triggered by the presence
of the Lazy keyword.
\item fancy features of regex
\begin{enumerate}[(a)]
\item normal
\begin{ocamlcode}
let x = match "hello world" with
RE "world" -> true
| _ -> false;;
\end{ocamlcode}
\begin{ocamlcode}
val x : bool = false
\end{ocamlcode}
\item pattern match syntax
(the let constructs can be used directly with a
regexp pattern, but since \textbf{let RE ... = ...} does not look nice, the
sandwich notation (/.../) has been introduced)
\begin{alternate}
Sys.ocaml_version;;
- : string = "3.12.1"
# RE num = digit + ;;
\end{alternate}
\begin{ocamlcode}
RE num = digit + ;;
let /(num as major : int ) "." (num as minor : int)
( "." (num as patchlevel := fun s -> Some (int_of_string s))
| ("" as patchlevel := fun s -> None ))
( "+" (_* as additional_info := fun s -> Some s )
| ("" as additional_info := fun s -> None )) eos
/ = Sys.ocaml_version ;;
\end{ocamlcode}
We always use \textbf{as} to extract the matched information.
\begin{ocamlcode}
val additional_info : string option = None
val major : int = 3
val minor : int = 12
val patchlevel : int option = Some 1
\end{ocamlcode}
\item File processing (Mikmatch.Text)
\begin{ocamlcode}
val iter_lines_of_channel : (string -> unit) -> in_channel -> unit
val iter_lines_of_file : (string -> unit) -> string -> unit
val lines_of_channel : in_channel -> string list
val lines_of_file : string -> string list
val channel_contents : in_channel -> string
val file_contents : ?bin:bool -> string -> string
val save : string -> string -> unit
val save_lines : string -> string list -> unit
exception Skip
val map : ('a -> 'b) -> 'a list -> 'b list
val rev_map : ('a -> 'b) -> 'a list -> 'b list
val fold_left : ('a -> 'b -> 'a) -> 'a -> 'b list -> 'a
val fold_right : ('a -> 'b -> 'b) -> 'a list -> 'b -> 'b
val map_lines_of_channel : (string -> 'a) -> in_channel -> 'a list
val map_lines_of_file : (string -> 'a) -> string -> 'a list
\end{ocamlcode}
\item \textbf{Mikmatch.Glob} (pretty useful)
\begin{ocamlcode}
val scan :
?absolute:bool ->
?path:bool ->
?root:string ->
?nofollow:bool -> (string -> unit) -> (string -> bool) list -> unit
val lscan :
?rev:bool ->
?absolute:bool ->
?path:bool ->
?root:string list ->
?nofollow:bool ->
(string list -> unit) -> (string -> bool) list -> unit
val list :
?absolute:bool ->
?path:bool ->
?root:string ->
?nofollow:bool -> ?sort:bool -> (string -> bool) list -> string list
val llist :
?rev:bool ->
?absolute:bool ->
?path:bool ->
?root:string list ->
?nofollow:bool ->
?sort:bool -> (string -> bool) list -> string list list
\end{ocamlcode}
Here we want to get the files matching \verb|~/.*/*.conf|.
\texttt{X.list} takes a list of predicates, one for each layer of the path.
\begin{alternate}
let xs = let module X = Mikmatch.Glob in X.list ~root:"/Users/bob" [FILTER "." ; FILTER _* ".conf" eos ] ;;
val xs : string list = [".libfetion/libfetion.conf"]
\end{alternate}
\begin{ocamlcode}
let xs =
let module X = Mikmatch.Glob in
X.list ~root:"/Users/bob" [const true; FILTER _* ".pdf" eos ]
in print_int (List.length xs) ;;
\end{ocamlcode}
\begin{ocamlcode}
455
\end{ocamlcode}
\item Lazy or Greedy
\begin{ocamlcode}
match "acbde (result), blabla... " with
RE _* "(" (_* as x) ")" -> print_endline x | _ -> print_endline "Failed";;
\end{ocamlcode}
\begin{ocamlcode}
result
\end{ocamlcode}
\begin{ocamlcode}
match "acbde (result),(bla)bla... " with
RE _* Lazy "(" (_* as x) ")" -> print_endline x | _ -> print_endline "Failed";;
\end{ocamlcode}
\begin{ocamlcode}
result),(bla
\end{ocamlcode}
\begin{alternate}
let / "a"? ("b" | "abc" ) as x / = "abc" ;; (* or patterns, the same as before*)
val x : string = "ab"
# let / "a"? Lazy ("b" | "abc" ) as x / = "abc" ;;
val x : string = "abc"
\end{alternate}
In place conversions of the substrings can be performed, using
either the predefined converters \textit{int, float}, or custom converters
\begin{alternate}
let z = match "123/456" with RE (digit+ as x : int ) "/" (digit+ as y : int) -> x ,y ;;
val z : int * int = (123, 456)
\end{alternate}
Mixed pattern
\begin{alternate}
let z = match 123,45, "6789" with i,_, (RE digit+ as j : int) | j,i,_ -> i * j + 1;;
val z : int = 835048
\end{alternate}
\item Backreferences \\
Previously matched substrings can be matched again using backreferences.
\begin{alternate}
let z = match "abcabc" with RE _* as x !x -> x ;;
val z : string = "abc"
\end{alternate}
\item Possessiveness prevents backtracking
\begin{alternate}
let x = match "abc" with RE _* Possessive _ -> true | _ -> false;;
val x : bool = false
\end{alternate}
\item macros
\begin{enumerate}
\item FILTER macro
\begin{alternate}
let f = FILTER int eos;;
val f : ?share:bool -> ?pos:int -> string -> bool = <fun>
# f "32";;
- : bool = true
# f "32a";;
- : bool = false
\end{alternate}
\item REPLACE macro
\begin{alternate}
let remove_comments = REPLACE "#" _* Lazy eol -> "" ;;
val remove_comments : ?pos:int -> string -> string = <fun>
# remove_comments "Hello #comment \n world #another comment" ;;
- : string = "Hello \n world "
let x = (REPLACE "," -> ";;" ) "a,b,c";;
val x : string = "a;;b;;c"
\end{alternate}
\item REPLACE\_FIRST macro
\item SEARCH(\_FIRST) COLLECT COLLECTOBJ MACRO
\begin{alternate}
let search_float = SEARCH_FIRST float as x : float -> x ;;
val search_float : ?share:bool -> ?pos:int -> string -> float = <fun>
search_float "bla bla -1.234e12 bla";;
- : float = -1.234e+12
let get_numbers = COLLECT float as x : float -> x ;;
val get_numbers : ?pos:int -> string -> float list = <fun>
get_numbers "1.2 83 nan -inf 5e-10";;
- : float list = [1.2; 83.; nan; neg_infinity; 5e-10]
let read_file = Mikmatch.Text.map_lines_of_file (COLLECT float as x : float -> x );;
val read_file : string -> float list list = <fun>
(** Negative assertions *)
let get_only_numbers = COLLECT < Not alnum . > (float as x : float) < . Not alnum > -> x
let list_words = COLLECT (upper | lower)+ as x -> x ;;
val list_words : ?pos:int -> string -> string list = <fun>
# list_words "gshogh sghos sgho ";;
- : string list = ["gshogh"; "sghos"; "sgho"]
RE pair = "(" space* (digit+ as x : int) space* "," space* ( digit + as y : int ) space* ")";;
# let get_objlist = COLLECTOBJ pair;;
val get_objlist : ?pos:int -> string -> < x : int; y : int > list =
\end{alternate}
\item SPLIT macro
\begin{alternate}
let ys = (SPLIT space* [",;"] space* ) "a,b,c, d, zz;";;
val ys : string list = ["a"; "b"; "c"; "d"; "zz"]
let f = SPLIT space* [",;"] space* ;;
val f : ?full:bool -> ?pos:int -> string -> string list = <fun>
\end{alternate}
\texttt{full} is false by default. When true, the regexp is considered
a separator between substrings even if the first or the last one
is empty, so empty trailing substrings are kept:
\begin{alternate}
f ~full:true "a,b,c,d;" ;;
- : string list = ["a"; "b"; "c"; "d"; ""]
\end{alternate}
\item MAP macro (a weak lexer) (MAP regexp -> expr ) \\
splits the given string into fragments: the fragments that do not match the pattern are returned as \textit{`Text s}. Fragments that match the pattern are replaced by the result of expr
\begin{alternate}
let f = MAP ( "+" as x = `Plus ) -> x ;;
val f : ?pos:int -> ?full:bool -> string -> [> `Plus | `Text of string ] list =
let x = (MAP ',' -> `Sep ) "a,b,c";;
val x : [> `Sep | `Text of string ] list = [`Text "a"; `Sep; `Text "b"; `Sep; `Text "c"]
\end{alternate}
\begin{ocamlcode}
let f = MAP ( "+" as x = `Plus ) | ("-" as x = `Minus) | ("/" as x = `Div)
| ("*" as x = `Mul) | (digit+ as x := fun s -> `Int (int_of_string s))
| (alpha [alpha digit] + as x := fun s -> `Ident s) -> x ;;
\end{ocamlcode}
\begin{ocamlcode}
val f :
?pos:int ->
?full:bool ->
string ->
[> `Div
| `Ident of string
| `Int of int
| `Minus
| `Mul
| `Plus
| `Text of string ]
list = <fun>
\end{ocamlcode}
\begin{ocamlcode}
# f "+-*/";;
\end{ocamlcode}
\begin{ocamlcode}
- : [> `Div
| `Ident of string
| `Int of int
| `Minus
| `Mul
| `Plus
| `Text of string ]
list
=
[`Text ""; `Plus; `Text ""; `Minus; `Text ""; `Mul; `Text ""; `Div; `Text ""]
\end{ocamlcode}
\begin{ocamlcode}
let xs = Mikmatch.Text.map (function `Text (RE space* eos) -> raise Mikmatch.Text.Skip | token -> token) (f "+-*/");;
val xs :
[> `Div
| `Ident of string
| `Int of int
| `Minus
| `Mul
| `Plus
| `Text of string ]
list = [`Plus; `Minus; `Mul; `Div]
\end{ocamlcode}
\item lexer (ulex is faster and more elegant)
\begin{ocamlcode}
let get_tokens = f |- Mikmatch.Text.map (function `Text (RE space* eos)
-> raise Mikmatch.Text.Skip | `Text x -> invalid_arg x | x
-> x) ;;
val get_tokens :
string ->
[> `Div
| `Ident of string
| `Int of int
| `Minus
| `Mul
| `Plus
| `Text of string ]
list = <fun>
get_tokens "a1+b3/45";;
- : [> `Div
| `Ident of string
| `Int of int
| `Minus
| `Mul
| `Plus
| `Text of string ]
list
= [`Ident "a1"; `Plus; `Ident "b3"; `Div; `Int 45]
\end{ocamlcode}
\item SEARCH macro (location)
\begin{alternate}
let locate_arrows = SEARCH %pos1 "->" %pos2 -> Printf.printf "(%i-%i)" pos1 (pos2-1);;
val locate_arrows : ?pos:int -> string -> unit = <fun>
# locate_arrows "gshogho->ghso";;
(7-8)- : unit = ()
let locate_tags = SEARCH "<" "/"? %tag_start (_* Lazy as tag_contents) %tag_end ">" -> Printf.printf "%s %i-%i" tag_contents tag_start (tag_end-1);;
\end{alternate}
\end{enumerate}
\item debug
\begin{alternate}
let src = RE_PCRE <Not alnum . > (float as x : float ) < . Not alnum > in print_endline (fst src);;
(?<![0-9A-Za-z])([+\-]?(?:(?:[0-9]+(?:\.[0-9]*)?|\.[0-9]+)(?:[Ee][+\-]?[0-9]+)?|(?:[Nn][Aa][Nn]|[Ii][Nn][Ff])))(?![0-9A-Za-z])
\end{alternate}
\item ignore the case
\begin{alternate}
match "OCaml" with RE "O" "caml"~ -> print_endline "success";;
success
\end{alternate}
\item zero-width assertions
\begin{ocamlcode}
RE word = < Not alpha . > alpha+ < . Not alpha>
RE word' = < Not alpha . > alpha+ < Not alpha >
\end{ocamlcode}
\begin{ocamlcode}
RE triplet = <alpha{3} as x>
let print_triplets_of_letters = SEARCH triplet -> print_endline x
print_triplets_of_letters "helhgoshogho";;
\end{ocamlcode}
\begin{ocamlcode}
hel
elh
lhg
hgo
gos
osh
sho
hog
ogh
gho
- : unit = ()
\end{ocamlcode}
\begin{ocamlcode}
(SEARCH alpha{3} as x -> print_endline x ) "hello world";;
\end{ocamlcode}
\begin{ocamlcode}
hel
wor
\end{ocamlcode}
\begin{ocamlcode}
(SEARCH <alpha{3} as x> -> print_endline x ) "hello world";;
\end{ocamlcode}
\begin{ocamlcode}
hel
ell
llo
wor
orl
rld
\end{ocamlcode}
\begin{ocamlcode}
(SEARCH alpha{3} as x -> print_endline x ) ~pos:2 "hello world";;
\end{ocamlcode}
\begin{ocamlcode}
llo
wor
\end{ocamlcode}
\item dynamic regexp
\begin{alternate}
let get_fild x = SEARCH_FIRST @x "=" (alnum* as y) -> y;;
val get_fild : string -> ?share:bool -> ?pos:int -> string -> string = <fun>
# get_fild "age" "age=29 ghos";;
- : string = "29"
\end{alternate}
\item reuse \\
using macro INCLUDE
\item view patterns
\begin{ocamlcode}
let view XY = fun obj -> try Some (obj#x, obj#y) with _ -> None ;;
val view_XY : < x : 'a; y : 'b; .. > -> ('a * 'b) option = <fun>
# let test_orign = function
%XY (0,0) :: _ -> true
|_ -> false
;;
val test_orign : < x : int; y : int; .. > list -> bool = <fun>
let view Positive = fun x -> x > 0
let view Negative = fun x -> x <= 0
let test_positive_coords = function
%XY ( %Positive, %Positive ) -> true
| _ -> false
(** lazy pattern is already supported in OCaml *)
let test x = match x with
lazy v -> v
type 'a lazy_list = Empty | Cons of ('a * 'a lazy_list lazy_t)
let f = fun (Cons (_ , lazy (Cons (_, lazy (Empty)) ) )) -> true ;;
let f = fun %Cons (x1, %Cons (x2, %Empty)) -> true (* simpler *)
\end{ocamlcode}
Implementation: \texttt{let view X = f} is translated into
\texttt{let view\_X = f}.
Similarly, we have local views:
\texttt{let view X = f in ...}
Given the nature of camlp4, this is the simplest solution that allows us to make views available to other modules, since they are just functions with a standard name. When a view X is encountered in a pattern, it uses the view\_X function. The compiler will complain if it doesn't have the right type, but the preprocessor will not.
About inline views: since views are simple functions, we could insert functions directly in patterns. However, that would make the pattern really difficult to read, especially since views are expected to be most useful in already complex patterns.
About completeness checking: our definition of views doesn't allow the compiler to warn against incomplete or redundant pattern-matching. We have the same situation with regexps. What we define here are incomplete or overlapping views, which have a broader spectrum of applications than views defined as sum types.
\item tiny use
\begin{alternate}
se (FILTER _* "map_lines_of_file" ) "Mikmatch";;
val map_lines_of_file : (string -> 'a) -> string -> 'a list
\end{alternate}
\begin{ocamlcode}
let _ = Mikmatch.map_lines_of_file
(function x ->
match x with
  | RE "\\begin{ocamlcode}" -> "\n" ^ x
  | RE "\\end{ocamlcode}" -> x ^ "\n"
| _ -> x )
"/Users/bob/SourceCode/Notes/ocaml-hacker.tex"
|> List.enum
|> File.write_lines "/Users/bob/SourceCode/Notes/ocaml-hacker-back-up.tex";;
\end{ocamlcode}
\end{enumerate}
\end{enumerate}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../master"
%%% End:
%!TEX root = ../thesis.tex
\chapter{Scheduling}
\ifpdf
\graphicspath{{Chapters/Figs/Raster/}{Chapters/Figs/PDF/}{Chapters/Figs/}}
\else
\graphicspath{{Chapters/Figs/Vector/}{Chapters/Figs/}}
\fi
%********************************** % Intro *****************************************
In this chapter a scheduling framework for mixed-critical applications with precedence constraints and communication costs is presented.
%********************************** % Section **************************************
\section{Introduction}
\par It has been shown \cite{MCSNPhard} that the mixed-criticality schedulability problem (preemptive or non-preemptive) is strongly NP-hard even with only two criticality levels (High and Low). Nevertheless, different approaches have been proposed in the literature. The first research was presented in 2009 by Anderson et al.\ \cite{Anderson09} and extended in 2010 \cite{Mollison10}. The mechanism they presented is based on assigning higher Worst-Case Execution Times to high-criticality tasks and using different scheduling policies per level (e.g.\ level A tasks are statically assigned and cyclically released, level B uses a Partitioned-EDF scheduler, levels C and D use G-EDF). Kritikakou et al.\ \cite{Kritikakou14} provided a new scheduling approach for tasks distinguished into only two levels, HI-criticality and LO-criticality, the same distinction proposed by S.~Baruah et al.\ \cite{Baruah2012EDFVD}, who describe EDF-VD (Earliest Deadline First with Virtual Deadlines) for mixed-criticality tasks (see \cite{Zhang2014} for a detailed analysis).
%\paragraph{} In the following sections is presented a model to schedule real-time, mixed-critical task-sets with precedence and periodicity through an off-line scheduling algorithm.
\paragraph{}To properly treat the problem, a formal abstraction of real-time scheduling of mixed-critical tasks with precedence and periodicity is presented.
%********************************** % Section **************************************
\section{Problem Formulation}
As stated in the previous chapter, tasks (or threads) to be scheduled are represented as an acyclic directed graph. Every node in the graph represents one task and the edges between nodes, the communications. The node cost is the time required for the task to complete (WCET) and the edge cost is the communication cost.
\paragraph{} More formally, the tasks are defined by $G=(\Gamma,E,C,T,K)$ where $\tau_i\in\Gamma$ represents a task and $\Gamma$ the task-set. The set $E=\{e_{ij}:\forall\tau_i\to\tau_j\}$ represents the precedence constraints between $\tau_i$ and $\tau_j$ (meaning that $\tau_i$ must be completed before $\tau_j$), with the associated communication cost expressed in time. The Worst-Case Execution Times are expressed in $C=\{c_i:\forall\tau_i\in\Gamma\}$. The periods (or rates) are $T=\{T_i:\forall\tau_i\in\Gamma\}$ and it is assumed that every $T_i$ is an integer multiple of some base-period $\beta_T$. The criticality levels are $K=\{\chi_i:\forall\tau_i\in\Gamma\}$. Moreover, each task has a priority $\rho_i$, which is assigned by the scheduling algorithm, and a set of accessed resources $\mathbb{R}$, manually assigned by the system designer to each task.
\par The Direct Acyclic Graph made by all the partitions (which are the group of tasks) is denoted by $\mathbb{P}=(\Pi, H, L, R)$ and called \emph{P-DAG}, where $\pi_i\in\Pi$ is a partition. The inter-partition communications are represented in $H$, $\lambda_i\in L$ and $\delta_i\in R$ are respectively the duration and the periodicity of the partition $\pi_i$. So we can define a map $\Psi:\Gamma\to\mathbb{P}$ as the partitioning algorithm. The subgraph of $G$ made by all the tasks assigned to a given partition is called T-DAG.
\par The behavioral parameters for the task $\tau_i$ that the scheduling process must define are: the starting time $s_i$ (\emph{when} the task should execute), and the core $\mu_i$ on which it will execute (\emph{where} it should execute), also called \emph{affinity mask}.
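\paragraph{} As an illustration only, the model above can be represented by data structures such as the following (Python sketch; all names are ours and do not refer to any existing tool):
\begin{verbatim}
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass
class Task:                      # tau_i
    name: str
    wcet: float                  # c_i
    period: float                # T_i, an integer multiple of beta_T
    criticality: int             # chi_i
    resources: Set[str] = field(default_factory=set)  # accessed resources
    priority: int = 0            # rho_i, assigned by the scheduler
    start: float = 0.0           # s_i, assigned by the scheduler
    core: int = 0                # mu_i, assigned by the scheduler

@dataclass
class TaskGraph:                 # G = (Gamma, E, C, T, K)
    tasks: List[Task]
    edges: Dict[Tuple[str, str], float]  # (tau_i, tau_j) -> comm. cost
\end{verbatim}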
%********************************** % Section **************************************
\section{Assumptions}
It is assumed that the COTS board is a connected network of processors with identical communication links (the Unified Memory Model shown in figure \ref{fig:unifiedmemorymodel}) and a relatively small number of processors. This simplifies the mathematical formulation of the optimization problem and limits its computational complexity.
\par It is also assumed that the partitioning addresses the security and safety requirements. This mechanism relies on the Hypervisor, which is trusted code (certified by the authorities) and is the only code executing in the highest privileged mode. It ensures time and spatial isolation among partitions, so the partitioning algorithm should map each task to one partition such that a fault in one partition does not affect another partition, while considering the criticality as a decision variable. Moreover, interferences and inter-partition communications should be minimized.
\paragraph{} Once partitions are determined, they need to be scheduled. The problem can be split into two parts: \emph{intra-partition} scheduling and \emph{inter-partition} scheduling. The following sections present a detailed description of each phase.
%********************************** % Section **************************************
\section{Partitioning}
Determining a way to measure safety is complex; hence, deriving an optimization problem is not easy. In order to simplify the intra-partition scheduling and enforce determinism it is important that all the tasks inside a partition have the same period (or possibly an integer multiple of the partition rate). To understand the rationale, assume that a task $\tau_i$ assigned to \TP{1} needs to be activated at time $t_1<t_{L_1}$ and $t_2>t_{L_2}$ as shown in figure \ref{fig:PartitionRationale}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{PartitionRationale}
\caption{Non rate-base partitioning}
\label{fig:PartitionRationale}
\end{figure}
To allow this behavior, two approaches are possible:
\begin{enumerate}
\item Introduce preemption among time partitions, losing determinism and the control of safety states.
\item Push the execution of $\tau_i$ to the next activation of \TP{1}. This approach leads to a lower level of determinism because $\tau_i$ now has to interrupt any task assigned to \TP{2} that is executing at time $t_2$. Moreover, the worst-case execution time of \TP{2} will change considerably between different executions, leading to an over-estimation of it.
\end{enumerate}
The solution adopted in this work does not pretend to solve the partitioning problem in general; it is intended as an example. The algorithm simply groups all the tasks with the same rate into partitions and then splits them according to some criticality threshold, creating smaller sub-partitions. A more complex partitioning algorithm is under development.
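\par A sketch of this simple grouping, assuming the illustrative \texttt{Task} representation introduced earlier (the criticality threshold is an arbitrary design parameter):
\begin{verbatim}
from collections import defaultdict

def partition_by_rate_and_criticality(tasks, crit_threshold):
    """Group tasks sharing the same period, then split each group into a
    low-criticality and a high-criticality sub-partition."""
    groups = defaultdict(list)
    for t in tasks:
        key = (t.period, t.criticality >= crit_threshold)
        groups[key].append(t)
    # each entry is the task set (T-DAG nodes) of one partition
    return list(groups.values())
\end{verbatim}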
\par As stated before, the result of the partitioning is another DAG, called P-DAG, each of them with its T-DAG. Now the inter-partition and intra-partition scheduling can be introduced.
%********************************** % Section **************************************
\section{Tasks Allocation and Scheduling}
\subsection{Intra-Partition}
In order to schedule partitions the execution time of the partition itself must be estimated. This optimization schedules a T-DAG on all the available $|M|$ processors. The solution space of this problem is spanned by all possible processor assignments combined with all possible task orderings that satisfy the partial order expressed by the T-DAG. The tasks are to be assigned in such a way as to minimize the total computation time required to execute that partition. This is also referred to as reducing the makespan. The optimization problem presented below is based on the one proposed by S.~Venugopalan and O.~Sinnen \cite{ILP}.
\paragraph{} For each task $\tau_i\in\Gamma$, let $s_i$ be the starting time, $\mu_i$ the core on which it will be executed and $\gamma_i$ the cost of all outgoing communications. Let $W$ be the makespan and $|M|$ the number of available cores. Moreover, let $\delta^-(i)$ be the set of tasks that need to be completed before task $\tau_i$. Some tasks cannot execute in parallel with one another due to the shared resources they are going to use; $\mathcal{I}$ is a matrix that represents \emph{parallel incompatibilities}: the component $\mathcal{I}_{ij}$ is equal to one if $\tau_i$ and $\tau_j$ cannot execute in parallel, formally:
\[
\mathcal{I}_{i,j}=
\begin{cases}
1\quad \text{if } \tau_i \text{ and } \tau_j \text{ share at least one resource}\\
0\quad\text{otherwise}
\end{cases}
\]
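For concreteness, the matrix $\mathcal{I}$ can be derived mechanically from the per-task resource sets (Python sketch, using the illustrative \texttt{Task} representation above):
\begin{verbatim}
def incompatibility_matrix(tasks):
    """I[i][j] = 1 iff tau_i and tau_j share at least one resource."""
    n = len(tasks)
    I = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and tasks[i].resources & tasks[j].resources:
                I[i][j] = 1
    return I
\end{verbatim}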
Let the variable $x_{ik}$ be one if task $\tau_i$ is assigned to processor $k$, zero otherwise. In order to control the scheduling behavior, define the following set of binary variables:
\[
\forall \tau_i,\tau_j\in\Gamma \quad \sigma_{ij}=
\begin{cases}
1\quad s_i+c_i\leq s_j \\
0\quad\text{otherwise}
\end{cases}
\]
\[
\forall \tau_i,\tau_j\in\Gamma \quad \epsilon_{ij}=
\begin{cases}
1\quad\mu_i<\mu_j \\
0\quad\text{otherwise}
\end{cases}
\]
The resulting MILP problem is
\begin{align}
\min & \quad & W \label{eq:milp1}\\
\forall\tau_i\in\Gamma & & s_i+c_i\leq W \label{eq:milp2}\\
\forall\tau_i\neq\tau_j\in\Gamma & & s_j-s_i-c_i-\gamma_i-(\sigma_{ij}-1)W_{\max}\geq 0 \label{eq:milp3}\\
\forall\tau_i\neq\tau_j\in\Gamma & & \mu_j-\mu_i-1-(\epsilon_{ij}-1)M\geq 0 \label{eq:milp4}\\
\forall\tau_i\neq\tau_j\in\Gamma & & \sigma_{ij}+\sigma_{ji}+\epsilon_{ij}+\epsilon_{ji}\geq 1 \label{eq:milp5}\\
\forall(i,j):\mathcal{I}_{ij}=1 & & s_i+c_i+\gamma_i-s_j\leq W_{\max}(1-\sigma_{ij}) \label{eq:milp6}\\
\forall(i,j):\mathcal{I}_{ij}=1 & & s_j+c_j+\gamma_j-s_i\leq W_{\max}\sigma_{ij} \label{eq:milp7}\\
\forall\tau_i\neq\tau_j\in\Gamma & & \sigma_{ij}+\sigma_{ji}\leq 1 \label{eq:milp8}\\
\forall\tau_i\neq\tau_j\in\Gamma & & \epsilon_{ij}+\epsilon_{ji}\leq 1 \label{eq:milp9}\\
\forall\tau_j\in\Gamma:\tau_i\in\delta^-(j) & & \sigma_{ij}=1 \label{eq:milp10}\\
\forall\tau_i\in\Gamma & & \sum_{k\in |M|} kx_{ik}=\mu_i \label{eq:milp11}\\
\forall\tau_i\in\Gamma & & \sum_{k\in |M|} x_{ik}=1 \label{eq:milp12}\\
& & 0\leq W \leq W_{\max} \label{eq:milp13}\\
\forall\tau_i\in\Gamma & & s_i\geq 0 \label{eq:milp14}\\
\forall\tau_i\in\Gamma & & \mu_i\in \{1,...,|M|\} \label{eq:milp15}\\
\forall\tau_i\in\Gamma,k\in |M| & & x_{ik}\in\{0,1\} \label{eq:milp16}\\
\forall\tau_i,\tau_j\in\Gamma & & \sigma_{ij},\epsilon_{ij} \in\{0,1\} \label{eq:milp17}
\end{align}
Where $W_{\max}$ is an upper bound for the makespan $W$. It can be computed as if all the tasks were executed on a single core (so it is the sum of computational costs and communication costs) or with some heuristics.
\par The formulation is a min-max problem: this is achieved by minimizing the makespan $W$ while introducing the constraint (\ref{eq:milp2}). Constraint (\ref{eq:milp3}) imposes the partial order on the tasks in terms of the $\sigma$ variables. Constraint (\ref{eq:milp4}) imposes the multi-core usage. Constraint (\ref{eq:milp5}) imposes that at least one of the following is true: $\tau_i$ must finish before $\tau_j$ starts and/or $\mu_i<\mu_j$. Constraints (\ref{eq:milp6}) and (\ref{eq:milp7}) prevent two tasks that share a common resource from executing in parallel. By (\ref{eq:milp8}) and (\ref{eq:milp9}) a task cannot be before and after another task in both time and cores. Constraint (\ref{eq:milp10}) enforces the task precedences defined by the T-DAG. Constraints (\ref{eq:milp11}) link the assignment variables $x$ with the core variables $\mu$ and finally (\ref{eq:milp12}) ensures that any given task runs only on one core.
\par The complexity in terms of constraints and variables depends on $|G|$, $|E|$, $|M|$ and $|\mathcal{I}|$. Assuming that the number of processors $|M|$ and the number of shared resources $|\mathcal{I}|$ are small, the MILP complexity is dominated by (\ref{eq:milp10}), which generates $O(|G||E|)$ constraints. In the worst case scenario $|E|=|G|(|G|-1)/2$; however, for task-sets representing real applications, we usually have $O(|E|)=O(|G|)$, hence the overall complexity is $O(|G|^2)$.
\paragraph{} Once a T-DAG related to a partition is scheduled the makespan of the schedule is the Worst-Case Execution Time of the partition itself. Moreover, the variables $s_i$ and $\mu_i$ for each task $\tau_i\in\Gamma$ are known so the priorities can be computed.
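\par As an illustration, the MILP (\ref{eq:milp1})--(\ref{eq:milp17}) can be transcribed almost verbatim with an off-the-shelf modelling library. The following Python sketch uses the PuLP interface; it is not the implementation used in this work, the data structures are the illustrative ones introduced earlier, and error handling is omitted.
\begin{verbatim}
from pulp import (LpProblem, LpVariable, LpMinimize,
                  LpBinary, LpInteger, lpSum)

def intra_partition_milp(tasks, edges, gamma, incompat, n_cores, w_max):
    """tasks: list of Task; edges: (i, j) index pairs for tau_i -> tau_j;
    gamma[i]: outgoing communication cost; incompat: (i, j) pairs that
    share a resource; returns (makespan, start times, core assignment)."""
    n = len(tasks)
    cores = range(1, n_cores + 1)
    prob = LpProblem("intra_partition", LpMinimize)
    W = LpVariable("W", lowBound=0, upBound=w_max)
    s = [LpVariable(f"s_{i}", lowBound=0) for i in range(n)]
    mu = [LpVariable(f"mu_{i}", lowBound=1, upBound=n_cores,
                     cat=LpInteger) for i in range(n)]
    x = {(i, k): LpVariable(f"x_{i}_{k}", cat=LpBinary)
         for i in range(n) for k in cores}
    sig = {(i, j): LpVariable(f"sig_{i}_{j}", cat=LpBinary)
           for i in range(n) for j in range(n) if i != j}
    eps = {(i, j): LpVariable(f"eps_{i}_{j}", cat=LpBinary)
           for i in range(n) for j in range(n) if i != j}

    prob += W                                   # objective: minimize makespan
    for i in range(n):
        prob += s[i] + tasks[i].wcet <= W       # makespan bound
        prob += lpSum(x[i, k] for k in cores) == 1        # one core per task
        prob += lpSum(k * x[i, k] for k in cores) == mu[i]  # link x and mu
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # sigma semantics: if sig[i,j] = 1, tau_i finishes before tau_j
            prob += (s[j] >= s[i] + tasks[i].wcet + gamma[i]
                     + w_max * sig[i, j] - w_max)
            # epsilon semantics: if eps[i,j] = 1, mu_i < mu_j
            prob += (mu[j] >= mu[i] + 1
                     + n_cores * eps[i, j] - n_cores)
            # separation in time and/or cores, plus antisymmetry
            prob += sig[i, j] + sig[j, i] + eps[i, j] + eps[j, i] >= 1
            prob += sig[i, j] + sig[j, i] <= 1
            prob += eps[i, j] + eps[j, i] <= 1
    for (i, j) in incompat:     # resource-sharing pairs never run in parallel
        prob += (s[i] + tasks[i].wcet + gamma[i] - s[j]
                 <= w_max - w_max * sig[i, j])
        prob += s[j] + tasks[j].wcet + gamma[j] - s[i] <= w_max * sig[i, j]
    for (i, j) in edges:        # precedence edges of the T-DAG
        prob += sig[i, j] == 1
    prob.solve()
    return W.value(), [v.value() for v in s], [v.value() for v in mu]
\end{verbatim}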
\subsection{Inter-Partition}\label{interpartition}
The inter-partition schedule is analogous to the problem of scheduling the P-DAG on a single core. Indeed, each Resource Partition is assigned to a Time Partition that is as big as the total Worst-Case Execution Time of the Resource Partition it contains (this amount of time can be estimated after the intra-partition schedule). In PikeOS, Time Partitions are scheduled according to a statically-assigned schedule scheme as if they were on a single core.
\par The problem of scheduling tasks on a single core has received substantial attention and many algorithms are available in the literature; for a complete review see \cite{buttazzoRT} and \cite{blazewiczScheduling}.
\paragraph{} When scheduling the P-DAG, the partial order expressed by it must be satisfied. Let us introduce some concepts (as in \cite{blazewiczScheduling}). For the sake of notational simplicity, let us treat a partition like a task, so that the same notation as before can be used. In addition to the previous notation, let us introduce the \emph{arrival time} of a partition $\pi_i$ as $r_i$, which represents the moment in time at which a partition can start its execution, and the \emph{due date} $\widetilde{d}_i$ as the moment in time by which the partition must be completed. These parameters, together with the periodicity, represent the real-time requirements for a given partition.
\subsubsection{Factorization}
Considering all nodes in the P-DAG, it is common to find different periodicities. In the general problem formulation it is assumed that the periodicity of each task is an integer multiple of a base-period $\beta_T$, so when tasks are grouped into a partition, the partition itself inherits the rate of the tasks it contains, and so does this property. If $T_i=k_i\beta_T$, the \emph{Hyper-Period} or \emph{Major Time Frame} can be defined as
\begin{equation}
\Delta = \mathrm{lcm}(k_1,k_2,\dots,k_{|\Pi|})\,\beta_T
\end{equation}
Inside the hyper-period some partitions $\pi_i$ should execute more than once, in general exactly $\Delta/T_i$ times. In order to generalize this behavior, a \emph{factorized P-DAG} can be defined. Let us denote it as $\widetilde{\mathbb{P}}=(\widetilde{\Pi},\widetilde{H},\widetilde{L},\widetilde{R})$; it is a \emph{finite repetitive precedence} in which each partition $\pi_i$ is repeated exactly $\Delta/T_i$ times, in a direct precedence relation. The factorization process is depicted in figure \ref{fig:Factorization}.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\textwidth]{Factorization}
\caption{Factorization example}
\label{fig:Factorization}
\end{figure}
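\par A small sketch of the hyper-period and of the per-partition repetition counts used by the factorization (Python; purely illustrative):
\begin{verbatim}
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def hyper_period(periods, base):
    """Major Time Frame Delta for periods T_i = k_i * base."""
    ks = [round(T / base) for T in periods]
    return reduce(lcm, ks, 1) * base

def repetitions(periods, base):
    """How many times each partition appears in the factorized P-DAG."""
    delta = hyper_period(periods, base)
    return [round(delta / T) for T in periods]
\end{verbatim}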
\subsubsection{Partitions precedence constraints}
\par Given a schedule, it is called \emph{normal} if, for any two partitions $\pi_i,\pi_j\in\Pi$, $s_i<s_j$ implies that $\widetilde{d}_i\leq\widetilde{d}_j$ or $r_j>s_i$. Release times and deadlines are called \emph{consistent with the precedence relation} if $\pi_i\to\pi_j$ implies that $r_i+\delta t\leq r_j$ and $\widetilde{d}_i\leq\widetilde{d}_j-\delta t$, where $\delta t$ represents a small amount of time (basically the scheduling decision tick-time). The following lemma proves that the precedence constraints are not of essential relevance if there is only one processor.
\begin{lemma}\label{eq:precedenceLemma}
If the release times and deadlines are consistent with the precedence relation, then any normal one-processor schedule that satisfies the release times and deadlines must also obey the precedence relation.
\end{lemma}
\begin{proof}
Consider a normal schedule, and suppose that $\pi_i\to\pi_j$ but $s_i>s_j$. By the consistency assumption we have $r_i<r_j$ and $\widetilde{d}_i<\widetilde{d}_j$. However, these, together with $r_j\leq s_j$, cause a violation of the assumption that the schedule is normal, a contradiction from which the result follows.
\end{proof}
This lemma ensures that release times and deadlines can be made consistent with the precedence relation if they are redefined by:
\begin{align}\label{eq:precedence}
r^{'}_{j} = & \max\big(\{r_j\}\cup\{r^{'}_i+\delta t:\pi_i\to\pi_j\} \big) \\
\widetilde{d}^{'}_j = & \min\big(\{\widetilde{d}_j\}\cup\{\widetilde{d}^{'}_i-\delta t:\pi_j\to\pi_i\} \big)
\end{align}
%\begin{align}
%r^{'}_{\alpha_j} = & \max\big(\{r_{\alpha_j}\}\cup\{r^{'}_{\alpha_i}+\delta t:\pi_{\alpha_i}\to\pi_{\alpha_j}\} \big) \\
%\widetilde{d}^{'}_{\alpha_j} = & \min\big(\{\widetilde{d}_{\alpha_j}\}\cup\{\widetilde{d}^{'}_{\alpha_i}-\delta t:\pi_{\alpha_j}\to\pi_{\alpha_i}\} \big)
%\end{align}
These changes do not alter the feasibility of any schedule. Furthermore, from lemma \ref{eq:precedenceLemma} it follows that a precedence relation is essentially irrelevant when scheduling on one processor.
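\par The redefinition (\ref{eq:precedence}) amounts to a forward pass over the P-DAG in topological order for the release times and a backward pass for the due dates, e.g.\ (Python sketch, names illustrative):
\begin{verbatim}
def make_consistent(release, due, preds, succs, topo_order, dt):
    """Adjust release times (forward pass) and due dates (backward pass)
    so that they are consistent with the precedence relation."""
    r, d = dict(release), dict(due)
    for j in topo_order:            # r'_j >= r'_i + dt for every pi_i -> pi_j
        for i in preds[j]:
            r[j] = max(r[j], r[i] + dt)
    for i in reversed(topo_order):  # d'_i <= d'_j - dt for every pi_i -> pi_j
        for j in succs[i]:
            d[i] = min(d[i], d[j] - dt)
    return r, d
\end{verbatim}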
\subsubsection{Bratley algorithm}
Scheduling partitions with precedence constraints (or adapted arrival times and due dates) is NP-hard in the strong sense, even for integer release times and deadlines \cite{LRKB77}. Only if all tasks have unit processing times is an optimization algorithm of polynomial time complexity available. However, Bratley et al. \cite{bratleyScheduling} proposed a branch-and-bound algorithm which solves this class of problems. Their algorithm is shortly described below.
%Scheduling Computer and Manufacturing: Bratley et al. [BFR71] , page 74
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\textwidth]{Bratley}
\caption{Search tree example for Bratley et al. algorithm}
\label{fig:bratley}
\end{figure}
\paragraph{} All possible partition schedules are implicitly enumerated by a search tree as shown in Figure \ref{fig:bratley} (for three partitions). Each node $v_{ij}$ of the tree represents the assignment of $i-1$ partitions and a new unscheduled one to the $i$-th position of the schedule scheme, with $i = \{1,...,N\}$, $N=|\widetilde{\mathbb{P}}|$. On each level $i$, there are $N-i+1$ new nodes generated from each node of the preceding level. Hence, all the $N!$ possible schedules will be enumerated. To each node is associated the completion time of the corresponding partial schedule.
\par The order in which the nodes of the tree are examined is based on a \emph{backtracking
search strategy}. Moreover, the algorithm uses two criteria to bound the solution space.
\begin{enumerate}
\item Exceeding deadlines. Consider a node $v_{ij}$ of the tree where one of the $i$ partitions exceeds its due date; it will certainly exceed its deadline if other partitions are scheduled after it. Therefore, the node with all its sub-tree may be excluded from further consideration.
\item Problem decomposition. Consider a node $v_{ij}$ where an unscheduled partition is assigned to the $i$-th position of the schedule scheme. If the completion time of this partial schedule is less than or equal to the smallest release time among the yet unscheduled partitions, then the problem decomposes at level $i$, and there is no need to backtrack beyond level $i$. This follows from the fact that the best schedule for the remaining $N-i$ partitions may not be started before the smallest of their release times.
\end{enumerate}
After enumerating all the $N!$ possibilities (pruned by the two criteria above), the best schedule according to an objective function can be selected. A common objective function is the makespan minimization.
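\par A compact sketch of the enumeration is given below (Python; only the deadline-pruning rule and the usual incumbent bound on the makespan are implemented, the decomposition rule is omitted for brevity):
\begin{verbatim}
def bratley(release, wcet, due):
    """Branch-and-bound enumeration of single-processor schedules for
    partitions with release times and due dates (after Bratley et al.)."""
    n = len(wcet)
    best = {"order": None, "makespan": float("inf")}

    def explore(order, finish_time, remaining):
        if not remaining:
            if finish_time < best["makespan"]:
                best["order"], best["makespan"] = list(order), finish_time
            return
        for j in remaining:
            start = max(finish_time, release[j])
            finish = start + wcet[j]
            if finish > due[j]:             # rule 1: due date exceeded
                continue
            if finish >= best["makespan"]:  # incumbent bound
                continue
            order.append(j)
            explore(order, finish, remaining - {j})
            order.pop()

    explore([], 0.0, frozenset(range(n)))
    return best["order"], best["makespan"]
\end{verbatim}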
%If the objective function is the makespan minimization, a sufficient but not necessary condition for optimality can be derived. Let define a \emph{block} as a group of partitions such that the first partition starts at its release time and all the following partitions to the end of the schedule are processed without idle times. Thus the length of a block is the sum of processing times of the partition within that block. If a block has the property that the release times of all the partitions within the block are greater than or equal to the release time of the first partition in the block (in that case we will say that \emph{"the block satisfies the release time property"}), then the schedule found for this block is clearly optimal. A block satisfying the release time property may be found by scanning the given schedule, starting from the last partition and attempting to find a group of tasks of the described property. Then, from the definition follow the lemma
%\begin{lemma}
%If a schedule for for a single-core, with starting time and due date, satisfies the release time property then it is one optimal solution for the makespan minimization.
%\end{lemma}
%********************************** % Section **************************************
\section{Priority assignment}\label{sec:priorityassignment}
Priority assignment is required to allow the operating system scheduler to execute tasks according to the optimal schedule.
\par Let us assume that each thread has its affinity mask, meaning that it can execute only on the core specified by it, and that the scheduler is a priority-based FIFO queue. To enforce the non-preemptive behavior for the tasks inside a partition, threads on the same core must have \emph{strictly monotonically decreasing} priorities. Here, to derive a correct assignment algorithm, an assumption on the implementation is required. Priorities alone cannot ensure mutual exclusion on communication memory locations. These shared memory regions are accessed only by the communicating threads, and they can be placed:
\begin{itemize}
\item On the same core: priorities can ensure that the inputs are fulfilled; indeed, the lower-priority task will not execute before the higher-priority one.
\item On different core: so spinlocks can be used.
\end{itemize}
The use of spinlocks for inter-core synchronization is suggested because they avoid overhead from operating system process rescheduling or context switching. Moreover, spinlocks are efficient if tasks are likely to be blocked for only short periods, which is true to a certain degree that depends on the worst-case timing analysis.
\paragraph{} A simple yet effective way to achieve this result is through a Linear Programming optimization problem:
\begin{equation}
\begin{cases}
\min \sum\rho_i \\
\rho_i - \rho_j \leq -1 \quad
\begin{matrix}
\text{for each consecutive task } \tau_i,\tau_j \text{ on the same core} \\
\text{for each communication edge } e_{ij} \text{ between cores}
\end{matrix} \\
\rho_{\min} \leq \rho_i \leq \rho_{\max}
\end{cases}
\end{equation}
where $\rho_i$ is the priority assigned to task $\tau_i\in\Gamma$. This class of problems can be solved in polynomial time \cite{polyLP}.
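\paragraph{} A direct transcription of this LP (Python sketch using PuLP; the pair list and task names are whatever the previous scheduling steps produced):
\begin{verbatim}
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

def assign_priorities(task_names, ordered_pairs, rho_min, rho_max):
    """ordered_pairs contains (i, j) for consecutive tasks on the same
    core and for communication edges between cores; enforces rho_i < rho_j."""
    prob = LpProblem("priority_assignment", LpMinimize)
    rho = {t: LpVariable(f"rho_{t}", lowBound=rho_min, upBound=rho_max)
           for t in task_names}
    prob += lpSum(rho.values())            # minimize the sum of priorities
    for (i, j) in ordered_pairs:
        prob += rho[i] - rho[j] <= -1      # strict ordering constraint
    prob.solve()
    return {t: rho[t].value() for t in task_names}
\end{verbatim}
Since all constraints are difference constraints, the component-wise minimal feasible point is also the optimum; when the constraint graph is acyclic it can be computed by a single pass in topological order, so a dedicated LP solver is not strictly required.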
\paragraph{} Usually an operating system can only handle a finite set of priority values; for this reason the variable $\rho$ is bounded. However, if the schedule priority assignment does not use all the possible priority values, it is possible to create a gap below and above the partition to allow the execution of sporadic tasks. For example, this behavior can be easily implemented utilizing the background PikeOS partition \TP{0}. The result is depicted in figure \ref{fig:PriorityAssignment}.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\textwidth]{PriorityAssignment}
\caption{Priority Assignment }
\label{fig:PriorityAssignment}
\end{figure}
\chapter{Chapter 10: Too Warm a Welcome}
\begin{enumerate}
\item \cs\, Walk right and \pickup{Elixir} from on top of the rock (Need to be big)
\item Run forward to next area
\item \cs\ then run to gate and travel to Nine Wood Hills
\item Go to North Promenade and shop with Chocolatte
\end{enumerate}
\begin{shop}
\textbf{Sell:}
\begin{itemize}
\item Everything except Fish Scales
\end{itemize}
\textbf{Buy:}
\begin{itemize}
\item 14x Lightning Marble
\item 20x Dragon Scale
\item Confirm Purchase
\item 6x Fish Scales
\item 3x Solid Frigicite
\item 4x Frigicite
\end{itemize}
\end{shop}
\begin{enumerate}[resume]
\item Go to Sylver Park and use the gate to travel to Dragon Scars.
\item Walk up, right at the fork, down the cliff and down to the next area.
\item Go right and follow the path round.
\item Jump down the first cliff. \Pickup{3x Ether} after jumping down to the next level. Keep jumping down to the bottom then run right to the next area.
\item Run right at Gimme Golem then follow the path to the end and fight the Red Dragons
\end{enumerate}
\begin{battle}[]{Red Dragon x3}
\begin{itemize}
\item 3x Dragon Scale (Auto-battle after first 2)
\end{itemize}
\end{battle}
\begin{enumerate}[resume]
\item Run forward. Skip tutorial on entering next area.
\item Engage Cerberus and immediately escape.
\item Run through Cerberus and \pickup{Fluffiflower} behind it.
\item Escape from Cerberus again.
\item Run back and complete Gimme Golem.
\item Run up and jump down the leftmost cliffs, then run down to the next area.
\item Run straight ahead up to the Boss.
\end{enumerate}
\begin{battle}[]{Mama Dragon}
\begin{itemize}
\item Use Frigicite
\item Use Solid Frigicite x3 (Very important to use 1x Frigicite first to avoid Flare Star attack)
\end{itemize}
\end{battle}
\documentclass[twocolumn,numberedappendix,trackchanges]{../aastex62}
% these lines seem necessary for pdflatex to get the paper size right
\pdfpagewidth 8.5in
\pdfpageheight 11.0in
% for the red MarginPars
\usepackage{color}
% some extra math symbols
\usepackage{mathtools}
% allows Greek symbols to be bold
\usepackage{bm}
% allows us to force the location of a figure
\usepackage{float}
% allows comment sections
\usepackage{verbatim}
% Override choices in \autoref
\def\sectionautorefname{Section}
\def\subsectionautorefname{Section}
\def\subsubsectionautorefname{Section}
% MarginPars
\setlength{\marginparwidth}{0.75in}
\newcommand{\MarginPar}[1]{\marginpar{\vskip-\baselineskip\raggedright\tiny\sffamily\hrule\smallskip{\color{red}#1}\par\smallskip\hrule}}
\newcommand{\msolar}{\mathrm{M}_\odot}
% Software names
\newcommand{\amrex}{\texttt{AMReX}}
\newcommand{\boxlib}{\texttt{BoxLib}}
\newcommand{\castro}{\texttt{CASTRO}}
\newcommand{\maestro}{\texttt{Maestro}}
\newcommand{\microphysics}{\texttt{Microphysics}}
\newcommand{\wdmerger}{\texttt{wdmerger}}
\newcommand{\python}{\texttt{Python}}
\newcommand{\matplotlib}{\texttt{matplotlib}}
\newcommand{\yt}{\texttt{yt}}
\newcommand{\vode}{\texttt{VODE}}
\newcommand{\isoseven}{\texttt{iso7}}
\newcommand{\aproxthirteen}{\texttt{aprox13}}
\newcommand{\aproxnineteen}{\texttt{aprox19}}
\newcommand{\aproxtwentyone}{\texttt{aprox21}}
\begin{document}
%==========================================================================
% Title
%==========================================================================
\title{Numerical Stability of Detonations in White Dwarf Simulations}
\shorttitle{Detonation Stability}
\shortauthors{Katz and Zingale (2019)}
\author{Max P. Katz}
\affiliation
{
NVIDIA Corporation, 2788 San Tomas Expressway, Santa Clara, CA, 95051, USA
}
\author{Michael Zingale}
\affiliation
{
Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY, 11794-3800, USA
}
%==========================================================================
% Abstract
%==========================================================================
\begin{abstract}
Some simulations of Type Ia supernovae feature self-consistent thermonuclear
detonations. However, these detonations are not meaningful if the simulations
are not resolved, so it is important to establish the requirements for achieving
a numerically converged detonation. In this study we examine a test detonation
problem inspired by collisions of white dwarfs. This test problem demonstrates
that achieving a converged thermonuclear ignition requires spatial resolution
much finer than 1 km in the burning region. Current computational resource
constraints place this stringent resolution requirement out of reach for
multi-dimensional supernova simulations. Consequently, contemporary simulations
that self-consistently demonstrate detonations are possibly not converged
and should be treated with caution.
\end{abstract}
\keywords{supernovae: general - white dwarfs}
%==========================================================================
% Introduction
%==========================================================================
\section{Introduction}
\label{sec:introduction}
Thermonuclear detonations are common to all current likely models of Type Ia
supernovae (SNe Ia), but how they are actually generated in progenitor systems
is still an open question. Different models predict different locations for
the detonation and different mechanisms for initiating the event. Common to all
of the cases is a severe lack of numerical resolution in the location where the
detonation is expected to occur. The length and time scale at which a detonation
forms is orders of magnitude smaller than the resolution that typical multi-dimensional
hydrodynamic simulations can achieve. The mere presence of a detonation (or lack thereof)
in a simulation is therefore only weak evidence regarding whether a detonation would truly occur.
In this study we examine the challenges associated with simulating thermonuclear detonations.
The inspiration for this work comes from the literature on head-on collisions of WDs,
which can occur, for example, in certain triple star systems \citep{thompson:2011,hamers:2013}.
WD collisions rapidly convert a significant amount of kinetic energy into thermal energy and
thus set up conditions ripe for a thermonuclear detonation. Since they are easy to set up in a simulation,
they are a useful vehicle for studying the properties of detonations.
Early studies on WD collisions \citep{rosswog:2009,raskin:2010,loren-aguilar:2010,
hawley:2012,garcia-senz:2013} typically had effective spatial resolutions in the burning region of
100--500 km for the grid codes, and 10--100 km for the SPH codes, and observed
detonations that convert a large amount of carbon/oxygen material into iron-group elements.
These studies varied in methodology (Lagrangian versus Eulerian evolution, nuclear network
used) and did not closely agree on the final result of the event (see Table 4 of
\cite{garcia-senz:2013} for a summary).
There is mixed evidence for simulation convergence presented in these studies.
\cite{raskin:2010} claim that their simulations are converged in nickel yield up to 2 million
(constant mass) particles, but the nickel yield still appears to be trending slightly upward
with particle count. The earlier simulations of \cite{raskin:2009} are not converged up to
800,000 particles, where the smoothing length was kept constant instead of the particle mass.
\cite{hawley:2012} do not achieve convergence over a factor of 2 in spatial resolution.
\cite{garcia-senz:2013} claim at least qualitative (though not strict absolute) convergence, but
their convergence test is only over a factor of 2 in particle count, which is a factor of $2^{1/3} = 1.3$
in spatial resolution (for constant mass particles). \cite{kushnir:2013} test convergence over an
order of magnitude in spatial resolution, and find results that appear to be reasonably well
converged for one of the two codes used (VULCAN2D), and results that are not converged for the
other code used (FLASH). \cite{papish:2015} claim convergence in nuclear burning up to 10\% at a
resolution of 5--10 km, but do not present specific data demonstrating this claim or precisely define
what is being measured. \cite{loren-aguilar:2010} and \cite{rosswog:2009} do not present convergence
studies for their work.
\cite{kushnir:2013} argued that many of these simulations featured numerically unstable evolution,
ultimately caused by the zone size being significantly larger than the length scale over which
detonations form. The detonation length scale can vary widely based on physical conditions
\citep{seitenzahl:2009,garg:2017} but is generally not larger than 10 km. \citeauthor{kushnir:2013}
argue that this numerically unstable evolution is the primary cause of convergence difficulties.
They further argue that it is possible to apply a burning limiter to achieve converged results,
which was used in their work and later the simulations of \cite{papish:2015}. We investigate
this hypothesis in \autoref{sec:unstable_burning}.
In this paper, we attempt to find what simulation length scale is required to achieve
converged thermonuclear ignitions. The inspiration for this work comes from our
simulations of WD collisions using the reactive hydrodynamics code \castro\
\citep{castro, astronum:2017}. We have done both 2D axisymmetric and 3D simulations
of collisions of $0.64\ \msolar$ carbon/oxygen WDs, and we were unable to achieve converged
simulations at any resolution we could afford to run (the best was an effective zone size of
0.25 km, using adaptive mesh refinement, for the 2D case). We were therefore forced to
turn to 1D simulations, where we can achieve much higher resolution (at the cost, of course,
of not being able to do a test that can be directly compared to multi-dimensional simulations).
We believe the simulations presented below help show why we and others had difficulty
achieving convergence at the resolutions achievable in multi-dimensional WD collision simulations.
%==========================================================================
% 1D collision test problem
%==========================================================================
\section{Test Problem}
\label{sec:collisions}
Our test problem is inspired by \cite{kushnir:2013}, and very loosely approximates the
conditions of two $0.64\ \msolar$ WDs colliding head-on. The simulation domain is 1D with a
reflecting boundary at $x = 0$. For $x > 0$ there is a uniform fluid composed (by mass)
of $50\%\, ^{12}$C, $45\%\, ^{16}$O, and $5\%\, ^{4}$He. The fluid is relatively cold,
$T = 10^7$ K, has density $\rho = 5 \times 10^6$ g/cm$^3$, and is traveling toward the
origin with velocity $-2 \times 10^8$ cm/s. A uniform constant gravitational acceleration
is applied, $g = -1.1 \times 10^8$ cm/s$^{2}$. This setup causes a sharp initial release
of energy at $x = 0$, and the primary question is whether a detonation occurs promptly
near this contact point, or occurs later (possibly at a distance from the contact point).
The simulated domain has width $1.6384 \times 10^9$ cm, and we apply inflow boundary conditions
that keep feeding the domain with material that has the same conditions as the initial fluid.
Simulations are performed with the adaptive mesh refinement (AMR) code \castro.
For the burning we use the alpha-chain nuclear network \texttt{aprox13}.
Release 18.12 of the \castro\ code was used. The \amrex\ and \microphysics\ repositories
that \castro\ depends on were also on release 18.12. The problem is located in the
\texttt{Exec/science/Detonation} directory, and we used the \texttt{inputs-collision} setup.
The simulation is terminated when the peak temperature on the domain first reaches
$4 \times 10^9$ K, which we call a thermonuclear ignition (for reference, the
density at the location where the ignition occurs is approximately $1.4\times 10^7\ \text{g / cm}^3$). This stopping criterion is a
proxy for the beginning of a detonation. Reaching this temperature does not guarantee
that a detonation will begin, and in this study we do not directly address the question
of whether an ignition of this kind always leads to a detonation. Nor are we commenting
on the physics of the ignition process itself. Rather, the main question
we investigate here is whether this ignition is numerically converged, and for this purpose
this arbitrary stopping point is sufficient, since in a converged simulation the stopping point
should be reached at the same time independent of resolution. A converged ignition
is a prerequisite to having a converged detonation. We measure two diagnostic quantities:
the time since the beginning of the simulation required to reach this ignition criterion,
and the distance from the contact point of the peak temperature.
The only parameter we vary in this study is the spatial resolution used for this problem.
For low resolutions we vary only the base resolution of the grid, up to a resolution of
0.25 km. For resolutions finer than this, we fix the base grid at a resolution of 0.25 km,
and use AMR applied on gradients of the temperature. We tag zones for refinement if the temperature
varies by more than 50\% between two zones. Timesteps are limited only by the hydrodynamic
stability constraint, with CFL number 0.5. Although this leads to Strang splitting error
in the coupling of the burning and hydrodynamics for low resolution, we have verified that
the incorrect results seen at low resolution do not meaningfully depend on the timestep constraint
(both by applying a timestep limiter based on nuclear burning, and by using the spectral deferred
corrections driver in \castro, which directly couples the burning and hydrodynamics). At very high
resolution, the splitting error tends to zero as the CFL criterion decreases the timestep.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.30]{{{plots/amr_ignition_self-heat}}}
\caption{Distance from the contact point of the ignition (solid blue), and time of the
ignition (dashed green), as a function of finest spatial resolution.
\label{fig:self-heat-distance}}
\end{figure}
\autoref{fig:self-heat-distance} shows our main results. The lowest resolution we consider,
256 km, is typical of the early simulations of white dwarf collisions, and demonstrates a
prompt ignition near the contact point. As the (uniform) resolution increases, the ignition
tends to occur earlier and nearer to the contact point. This trend is not physically meaningful:
all simulations with resolution worse than about 1 km represent the same prompt central ignition,
and as the resolution increases, there are grid points physically closer to the center that can ignite.
However, when the resolution is better than 1 km, the situation changes dramatically: the prompt
central ignition does not occur, but rather the ignition is delayed and occurs further from the contact
point. When we have finally reached the point where the curves start to flatten and perhaps begin to converge,
the ignition occurs around 900 km from the contact point, about 1 second after contact (contrast to less than
0.05 seconds for the simulation with 1 km resolution). Even at this resolution, it is not clear if
the simulation is converged. We were unable to perform higher resolution simulations to check convergence
due to the length of time that would be required.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.30]{{{plots/amr_ignition_co}}}
\caption{Similar to \autoref{fig:self-heat-distance}, but with pure C/O material.
Note the different vertical axis scale.
\label{fig:self-heat-distance-co}}
\end{figure}
We also tested a similar configuration made of pure carbon/oxygen material (equal fraction by mass).
This is closer to the configuration used in the $0.64\ \msolar$ WD collision simulations that previous
papers have focused on. However, for the setup described above, pure carbon/oxygen conditions do not
detonate at all. This is not particularly surprising, since the 1D setup is a very imperfect representation
of the real multi-dimensional case, and is missing multi-dimensional hydrodynamics that could substantially
alter the dynamical evolution. So the small amount of helium we added above ensured that the setup ignited.
(Of course, there will likely be a small amount of helium present in C/O white dwarfs as a remnant of the prior
stellar evolution.) However, we can prompt the C/O setup to ignite by starting the initial temperature
at $10^9$ K instead of $10^7$ K. This loosely mimics the effect from the first test where helium burning
drives the temperature to the conditions necessary to begin substantial burning in C/O material. But since
no helium is present in this case, it allows us to test whether it is easier to obtain convergence for pure
C/O burning, even though the test itself is artificial. The only other change relative to the prior test is
that we refined on relative temperature gradients of 25\% instead of 50\%. The results for this case are shown in
\autoref{fig:self-heat-distance-co}. In this case, the ignition is central at all resolutions, but the
simulation is still clearly unconverged at resolutions worse than 100 m, as the ignition becomes significantly
delayed at high resolution.
This story contains two important lessons. First, the required resolution for even a qualitatively converged
simulation, less than 100 m, is out of reach for an analogous simulation done in 3D. Second, the behavior for
resolutions worse than 1 km qualitatively appears to be converged, and one could perhaps be misled into
thinking that there was no reason to try higher resolutions, which is reason for caution in interpreting
reacting hydrodynamics simulations. With that being said, our 1D tests are not directly comparable to
previous multi-dimensional WD collision simulations. The 1D tests should not be substituted for understanding the
actual convergence properties of the 2D/3D simulations, which may have different resolution requirements for
convergence. Our tests suggest only that it is plausible that simulations at kilometer-scale (or worse) resolution
are unconverged. This observation is, though, consistent with the situation described in \autoref{sec:introduction},
where our 2D WD collision simulations (not shown here) are unconverged, and many of the previous collision simulations
presented in the literature have relatively weak evidence for convergence.
\section{Numerically Unstable Burning}
\label{sec:unstable_burning}
\citet{kushnir:2013} observe an important possible failure mode
for reacting hydrodynamics simulations. Let us define $\tau_{\rm e} = e / \dot{e}$
as the nuclear energy injection timescale, and $\tau_{\rm s} = \Delta x / c_{\rm s}$
as the sound-crossing time in a zone (where $\Delta x$ is the grid
resolution and $c_{\rm s}$ is the speed of sound). When the sound-crossing
time is too long, energy is built up in a zone faster than it can be
advected away by pressure waves. This effect generalizes to
Lagrangian simulations as well, where $\tau_{\rm s}$ should be understood
as the timescale for transport of energy to a neighboring fluid element.
This is of course a problem inherent
only to numerically discretized systems as the underlying fluid equations
are continuous. This can lead to a numerically seeded detonation
caused by the temperature building up too quickly in the zone. The
detonation may be spurious in this case. If $\tau_{\rm s} \ll \tau_{\rm e}$,
we can be confident that a numerically seeded detonation has not
occurred. In practice, we quantify this requirement as:
\begin{equation}
\tau_{\rm s} \leq f_{\rm s}\, \tau_{\rm e} \label{eq:burning_limiter}
\end{equation}
and require that $f_{\rm s}$ is sufficiently smaller than one.
\citet{kushnir:2013} state that $f_{\rm s} = 0.1$ is a sufficient
criterion for avoiding premature ignitions. \citeauthor{kushnir:2013}
enforced this criterion on their simulations by artificially limiting
the magnitude of the energy release after a burn, and claimed that
this resulted in more accurate WD collision simulations.
We find that for our test problem (and also the WD collisions we have simulated)
we do observe $\tau_{\rm s} > \tau_{\rm e}$; typically the ratio is a factor of 2--5 at
low resolution (see \autoref{fig:self-heat-ts_te}). This means that an ignition is
very likely to occur for numerical reasons, regardless of whether it would occur for physical reasons.
At low resolution, adding more resolution does not meaningfully improve the ratio of
$\tau_s$ to $\tau_e$ at the point of ignition. The ignition timescale is so short
that almost all of the energy release occurs in a single timestep even though the
timestep gets shorter due to the CFL limiter. It is only when the resolution gets
sufficiently high that we can simultaneously resolve the energy release over multiple
timesteps and the advection of energy across multiple zones. Even at the highest resolution
we could achieve for the test including helium, about 50 cm, $\tau_s / \tau_e$ was 0.8 at ignition, which is not
sufficiently small to be confident of numerical stability. Note
that merely decreasing the timestep (at fixed resolution) does not help here either, as the
instability criterion is, to first order, independent of the size of the timestep.
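For concreteness, the diagnostic plotted in \autoref{fig:self-heat-ts_te} can be written
down in a few lines. The sketch below is purely illustrative (it is not the language of the
actual \castro\ implementation, and all names are invented for this example); it evaluates
the two timescales zone by zone and flags the zones that violate \autoref{eq:burning_limiter}:
\begin{verbatim}
// Illustrative sketch: zone-by-zone check of the instability criterion.
// Names are invented here and do not correspond to CASTRO internals.
interface Zone {
  dx: number;    // zone width [cm]
  cs: number;    // sound speed [cm/s]
  eInt: number;  // specific internal energy [erg/g]
  eDot: number;  // nuclear energy injection rate [erg/g/s]
}

function unstableZones(zones: Zone[], fs: number = 0.1): number[] {
  const flagged: number[] = [];
  zones.forEach((z, i) => {
    const tauS = z.dx / z.cs;                        // sound-crossing time
    const tauE = z.eInt / Math.max(z.eDot, 1e-300);  // energy injection time
    if (tauS > fs * tauE) flagged.push(i);           // criterion violated
  });
  return flagged;
}
\end{verbatim}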
\begin{figure}[ht]
\centering
\includegraphics[scale=0.30]{{{plots/amr_ignition_self-heat_ts_te}}}
\caption{Ratio of the sound-crossing timescale to the energy injection timescale
for the simulations in \autoref{fig:self-heat-distance}.
\label{fig:self-heat-ts_te}}
\end{figure}
We thus investigate whether limiting the energy release of the burn (we will term this
``suppressing'' the burn), as proposed by \citeauthor{kushnir:2013},
is a useful technique for avoiding the prompt detonation. Since the limiter ensures
the inequality in \autoref{eq:burning_limiter} holds by construction, the specific
question to ask is whether the limiter achieves the correct answer and is converged
in cases where the simulation would otherwise be incorrect or unconverged.
Before we examine the results, consider a flaw in the application of the limiter:
a physical detonation may \textit{also} occur with the property that, in the detonating
zone, $\tau_s > \tau_e$. For example, consider a region of WD material at uniformly
high temperature, say $5 \times 10^9\ \text{K}$, with an arbitrarily large size,
say a cube with side length 100 km. This region will very likely ignite,
even if it is surrounded by much cooler material. By the time the material on
the edges can advect heat away, the material in the center will have long since
started burning carbon, as the sound crossing time scale is sufficiently large
compared to the energy injection time scale. This is true regardless of whether
the size of this cube corresponds to the spatial resolution in a simulation.
Suppression of the burn in this case is unphysical: if we have a zone matching
these characteristics, the zone should ignite.
When the resolution is low enough, there is a floor on the size of a hotspot,
possibly making such a detonation more likely. This is an unavoidable consequence
of the low resolution; yet, it may be the correct result of the simulation that
was performed. That is, even if large hotspots are unphysical because in reality
the temperature distribution would be smoother, if such a large hotspot \textit{were}
to develop (which is the implicit assumption of a low resolution simulation), then
it would likely ignite. If the results do not match what occurs at higher
resolution, then the simulation is not converged and the results are not reliable.
However, it may also be the case that a higher resolution simulation will yield
similar results, for example because even at the higher resolution, the physical
size of the hotspot stays the same. For this reason, an appeal to the numerical
instability criterion alone is insufficient to understand whether a given ignition
is real.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.30]{{{plots/amr_ignition_suppressed}}}
\caption{Similar to \autoref{fig:self-heat-distance}, but for simulations with the
suppressed burning limiter applied (\autoref{eq:burning_limiter}).
\label{fig:suppressed-distance}}
\end{figure}
\autoref{fig:suppressed-distance} shows the results we obtain for our implementation of
a ``suppressed'' burning mode. In a suppressed burn, we limit the changes to the state
so that \autoref{eq:burning_limiter} is always satisfied. This is done by rescaling the
energy release and species changes from a burn by a common factor such that the equality
in \autoref{eq:burning_limiter} is satisfied. (If the inequality is already satisfied,
then the integration vector is not modified.) We find that the suppressed burn
generally does not yield correct results for low resolutions. The 64 km resolution
simulation happens to yield approximately the correct ignition distance, but it does
not occur at the right time, and in any case the incorrectness of the results at neighboring
resolutions suggests that this is not a robust finding. The suppressed burning simulation
reaches qualitative convergence at around the same 100 m resolution as the normal self-heating
burn. Because of both the theoretical reasons discussed above, and this empirical finding that
the burning suppression does not make low resolution simulations any more accurate, we do not
believe that the suppressed burning limiter should be applied in production simulations.
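For reference, the rescaling step used in the suppressed burning mode can be summarised
in code. The sketch below is only illustrative (the state layout and names are invented,
and it is not the language of the actual \castro\ implementation); it multiplies the energy
release and the species changes by a common factor whenever the unlimited burn would violate
\autoref{eq:burning_limiter}:
\begin{verbatim}
// Illustrative sketch of the suppressed-burning rescaling step.
interface BurnUpdate {
  dEnergy: number;  // specific energy released by the burn [erg/g]
  dX: number[];     // changes of the species mass fractions
}

function suppressBurn(update: BurnUpdate, eInt: number, dt: number,
                      tauS: number, fs: number = 0.1): BurnUpdate {
  const eDot = update.dEnergy / dt;            // effective energy injection rate
  const tauE = eInt / Math.max(eDot, 1e-300);  // energy injection timescale
  if (tauS <= fs * tauE) return update;        // inequality already holds
  // common factor enforcing tau_s = f_s * tau_e after the rescaling
  const factor = (fs * eInt) / (tauS * eDot);
  return {
    dEnergy: update.dEnergy * factor,
    dX: update.dX.map((dx) => dx * factor),
  };
}
\end{verbatim}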
\newline % For better formatting
%==========================================================================
% Conclusion
%==========================================================================
\section{Conclusion}\label{Sec:Conclusion}
\label{sec:conclusion}
Our example detonation problem demonstrates, at least for this class of
hydrodynamical burning problem, a grid resolution requirement much more stringent
than 1 km. This test does not, of course, represent all possible WD burning conditions.
However, the fact that it is even possible for burning in white dwarf material to require a
resolution better than 100 m should suggest that stronger demonstrations of convergence are
required. This is especially true bearing in mind our observation that the numerical
instability can result in simulations that appear qualitatively converged when the
resolution is increased by one or two orders of magnitude but not by three.
This study does not directly address the problem of how, in the detailed
microphysical sense, a detonation wave actually begins to propagate, as
we cannot resolve this length scale even in our highest resolution simulations.
Rather, we are making the point that for simulations in which a macroscopic
detonation wave appears self-consistently, this is only a valid numerical result
if the resolution is sufficiently high. This convergence requirement does
not imply that the detonation itself is physically realistic; but, it does
imply that we are not even correctly solving the fluid equations we intend
to solve when the convergence requirement is not met. We believe that our
test case can be useful in the future for testing algorithmic innovations
that hope to improve the realism of burning at low resolutions.
\acknowledgments
This research was supported by NSF award AST-1211563 and DOE/Office of
Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook. An award of
computer time was provided by the Innovative and Novel Computational
Impact on Theory and Experiment (INCITE) program. This research used
resources of the Oak Ridge Leadership Computing Facility located in
the Oak Ridge National Laboratory, which is supported by the Office of
Science of the Department of Energy under Contract
DE-AC05-00OR22725. Project AST106 supported use of the ORNL/Titan
resource. This research used resources of the National Energy
Research Scientific Computing Center, which is supported by the Office
of Science of the U.S. Department of Energy under Contract
No. DE-AC02-05CH11231. The authors would like to thank Stony Brook
Research Computing and Cyberinfrastructure, and the Institute for
Advanced Computational Science at Stony Brook University for access
to the high-performance LIred and SeaWulf computing systems, the latter
of which was made possible by a \$1.4M National Science Foundation grant (\#1531492).
The authors thank Chris Malone and Don Willcox for useful discussions
on the nature of explosive burning, and Doron Kushnir for providing
clarification on the nature of the burning limiter used in \cite{kushnir:2013}.
This research has made use of NASA's Astrophysics Data System
Bibliographic Services.
\facilities{OLCF, NERSC}
\software{\castro\ \citep{castro, astronum:2017},
\amrex\ \citep{boxlib-tiling},
\yt\ \citep{yt},
\matplotlib\ \citep{matplotlib}
}
\bibliographystyle{../aasjournal}
\bibliography{../refs}
\end{document}
| {
"alphanum_fraction": 0.7753798723,
"avg_line_length": 60.6815144766,
"ext": "tex",
"hexsha": "80e8f791a2ed5b95cc574cf69e24211d771d5723",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2020-12-28T10:01:59.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-12-25T01:05:59.000Z",
"max_forks_repo_head_hexsha": "9f575efacc8d373b6d2961f731e30bf59ee15ffd",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "AMReX-Astro/wdmerger",
"max_forks_repo_path": "papers/ignition/paper.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "9f575efacc8d373b6d2961f731e30bf59ee15ffd",
"max_issues_repo_issues_event_max_datetime": "2017-08-05T06:25:41.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-08-05T06:25:41.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "AMReX-Astro/wdmerger",
"max_issues_repo_path": "papers/ignition/paper.tex",
"max_line_length": 137,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "9f575efacc8d373b6d2961f731e30bf59ee15ffd",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "AMReX-Astro/wdmerger",
"max_stars_repo_path": "papers/ignition/paper.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-14T07:34:38.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-01-23T21:12:02.000Z",
"num_tokens": 6237,
"size": 27246
} |
\chapter{Signal processing}
| {
"alphanum_fraction": 0.7666666667,
"avg_line_length": 7.5,
"ext": "tex",
"hexsha": "f557ef5b8356bb8df95ab2936ab1b1f790b1d02f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/statistics/signal/00-00-Chapter_name.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/statistics/signal/00-00-Chapter_name.tex",
"max_line_length": 27,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/statistics/signal/00-00-Chapter_name.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7,
"size": 30
} |
\section{Modeling with ordinary differential equations}
\begin{example}[Exponential growth]
  Bacteria are living on a substrate with ample nutrients. Each
  bacterium splits into two after a certain time $\Delta t$. The time
  span for splitting is fixed and independent of the individual. Then,
given the amount $u_0$ of bacteria at time $t_0$, the amount at
$t_1 = t_0+\Delta t$ is $u_1 = 2 u_0$. Generalizing, we obtain
\begin{gather*}
u_n = u(t_n) = 2^n u_0, \qquad t_n = t_0 + n\Delta t.
\end{gather*}
After a short time, the number of bacteria will be huge, such that
counting is not a good idea anymore. Also, the cell division does
not run on a very sharp clock, such that after some time, divisions
will not only take place at the discrete times $t_0+n\Delta t$, but
at any time between these as well. Therefore, we apply the continuum
hypothesis, that is, $u$ is not a discrete quantity anymore, but a
continuous one that can take any real value. In order to accommodate
for the continuum in time, we make a change of variables:
\begin{gather*}
u(t) = 2^{\frac{t-t_0}{\Delta t}} u_0.
\end{gather*}
Here, we have already written down the solution of the problem,
which is hard to generalize. The original description of the problem
involved the change of $u$ from one point in time to the next. In
the continuum description, this becomes the derivative, which we can
now compute from our last formula:
\begin{gather*}
\tfrac{d}{dt} u(t) = \frac{\ln 2}{\Delta t} 2^{\frac{t-t_0}{\Delta t}} u_0
= \frac{\ln 2}{\Delta t} u(t).
\end{gather*}
We see that the derivative of $u$ at a certain time depends on $u$
itself at the same time and a constant factor, which we call the
growth rate $\alpha$. Thus, we have arrived at our first
differential equation
\begin{gather}
\label{eq:models:1}
u'(t) = \alpha u(t).
\end{gather}
What we have seen as well is, that we had to start with some
bacteria to get the process going. Indeed, any function of the form
\begin{gather*}
u(t) = c e^{\alpha t}
\end{gather*}
is a solution to equation~\eqref{eq:models:1}. It is the initial
value $u_0$, which anchors the solution and makes it unique.
\end{example}
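A quick numerical check of this model is easily done. The following sketch, which is not
part of the original notes, integrates $u' = \alpha u$ with the forward Euler method for
$\alpha = \ln 2$ and compares the result with the exact solution $u_0 e^{\alpha t}$:
\begin{verbatim}
// Forward Euler check that u' = alpha*u reproduces u0 * exp(alpha*t).
const alpha = Math.log(2);  // doubling once per unit time
const u0 = 1.0;
const h = 1e-4;
let u = u0;
for (let t = 0; t < 1; t += h) {
  u += h * alpha * u;       // u_{n+1} = u_n + h * alpha * u_n
}
console.log(u, u0 * Math.exp(alpha));  // both close to 2 (up to O(h) error)
\end{verbatim}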
\begin{example}[Predator-prey systems]
We add a second species to our bacteria example. Let's say, we
replace the bacteria by sardines living in a nutrient rich sea, and
we add tuna eating sardines. The amount of sardines eaten depends on
the likelyhood that a sardine and a tuna are in the same place, and
on the hunting efficiency $\beta$ of the tuna. Thus,
equation~\eqref{eq:models:1} is augmented by a negative change in
population depending on the product of sardines $u$ and tuna $v$:
\begin{gather*}
u' = \alpha u - \beta u v.
\end{gather*}
In addition, we need an equation for the amount of tuna. In this
simple model, we will make two assumptions: first, tuna die of
natural causes at a death rate of $\gamma$. Second, tuna procreate
if there is enough food (sardines), and the procreation rate is
proportional to the amount of food. Thus, we obtain
\begin{gather*}
v' = \delta u v - \gamma v.
\end{gather*}
Again, we will need initial populations at some point in time to
compute ahead from there.
\end{example}
\begin{remark}
  The Lotka-Volterra equations have periodic solutions. Even though
  none of these exists in closed form, the solutions can be simulated:
\begin{figure}[tp]
\begin{center}
\includegraphics[width=.6\textwidth]{fig/lotkavolterra}
\caption{Plot of a solution to the Lotka-Volterra equation with
parameters $\alpha = \frac 23$, $\beta = \frac 43$, $\delta = \gamma = 1$
and initial values $u(0) = 3$, $v(0) = 1$. Solved with a Runge-Kutta
method of order five and step size $h = 10^{-5}$.}
\end{center}
\label{fig:lotkavolterra}
\end{figure}
Lotka and Volterra became interested in this system as they had
found that the amount of predatory fish caught had increased
during World War I. During the war years there was a strong
decrease of fishing effort. In conclusion, they thought, there had
to be more prey fish.
A (far too rarely) applied consequence is that in order to diminish
the amount of e.g. foxes one should hunt rabbits as foxes feed
on rabbits.
\end{remark}
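To reproduce a picture like the one in Figure~\ref{fig:lotkavolterra}, the system can be
integrated numerically. The sketch below, which is not part of the original notes, uses the
classical fourth-order Runge-Kutta method (the figure itself was produced with a fifth-order
method) together with the parameters from the caption:
\begin{verbatim}
// Lotka-Volterra with alpha = 2/3, beta = 4/3, delta = gamma = 1, u(0) = 3, v(0) = 1.
type State = [number, number];
const [alpha, beta, delta, gamma] = [2 / 3, 4 / 3, 1, 1];

const f = ([u, v]: State): State =>
  [alpha * u - beta * u * v, delta * u * v - gamma * v];

function rk4Step(y: State, h: number): State {
  const add = (a: State, b: State, s: number): State =>
    [a[0] + s * b[0], a[1] + s * b[1]];
  const k1 = f(y);
  const k2 = f(add(y, k1, h / 2));
  const k3 = f(add(y, k2, h / 2));
  const k4 = f(add(y, k3, h));
  return [
    y[0] + (h / 6) * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
    y[1] + (h / 6) * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]),
  ];
}

let y: State = [3, 1];
const h = 1e-3;
for (let t = 0; t < 20; t += h) y = rk4Step(y, h);  // integrate up to t = 20
console.log(y);
\end{verbatim}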
\begin{example}[Gravitational two-body systems]
According to Newton's law of universal gravitation, two bodies of
masses $m_1$ and $m_2$ attract each other with a force
\begin{gather*}
\vec F_1 = G \frac{m_1m_2}{r^3} \vec r_1,
\end{gather*}
where $\vec F_1$ is the force vector acting on $m_1$ and $\vec r_1$
is the vector pointing from $m_1$ to $m_2$ and $r = \lvert\vec r_1\rvert = \lvert\vec r_2\rvert$.
Newton's second law of motion on the other hand relates forces and
acceleration:
\begin{gather*}
\vec F = m \vec x'',
\end{gather*}
where $\vec x$ is the position of a body in space.
Combining these, we obtain equations for the positions of the two bodies:
\begin{gather*}
    \vec x''_i = G \frac{m_{3-i}}{r^3} (\vec x_{3-i} - \vec x_i), \qquad i=1,2.
\end{gather*}
This is a system of 6 independent variables. Nevertheless, it can be
reduced to three by using that the center of mass moves
inertially. Then, the distance vector is the only variable to be
computed for:
\begin{gather*}
\vec r'' = - G \frac{m}{r^3} \vec r.
\end{gather*}
  Intuitively, it is clear that we need an initial position and an initial
velocity for the two bodies. Later on, we will see that this can
actually be justified mathematically.
\end{example}
\begin{example}[Celestial mechanics]
Now we extend the two-body system to a many-body system. Again, we
subtract the center of mass, such that we obtain $n$ sets of 3
equations for an $n+1$-body system. Since forces simply add up, this
system becomes
\begin{gather}
\label{eq:celestial}
    \vec x''_i = G \sum_{j\neq i} \frac{m_j}{r_{ij}^3} \vec r_{ij}.
\end{gather}
  Here, $\vec r_{ij} = \vec x_j - \vec x_i$ and $r_{ij} = \lvert \vec r_{ij}\rvert$.
Initial data for the solar system can be obtained from
\begin{center}
\texttt{https://ssd.jpl.nasa.gov/?horizons}
\end{center}
\end{example}
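The right hand side of equation~\eqref{eq:celestial} is also straightforward to evaluate
numerically. The following sketch, which is not part of the original notes, computes the
accelerations of all bodies from their positions and masses; units and the value of $G$
are left to the caller:
\begin{verbatim}
// Pairwise gravitational accelerations for an n-body system.
type Vec3 = [number, number, number];

function accelerations(x: Vec3[], m: number[], G: number): Vec3[] {
  return x.map((xi, i) => {
    const a: Vec3 = [0, 0, 0];
    x.forEach((xj, j) => {
      if (j === i) return;
      const r: Vec3 = [xj[0] - xi[0], xj[1] - xi[1], xj[2] - xi[2]];
      const d = Math.hypot(r[0], r[1], r[2]);
      const s = (G * m[j]) / (d * d * d);
      a[0] += s * r[0];
      a[1] += s * r[1];
      a[2] += s * r[2];
    });
    return a;
  });
}
\end{verbatim}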
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "notes"
%%% End:
| {
"alphanum_fraction": 0.6995748701,
"avg_line_length": 41.5098039216,
"ext": "tex",
"hexsha": "9d76710c8dda1cfc09d87490e9672caa9b859663",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2020-11-05T19:07:29.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-05-15T19:28:53.000Z",
"max_forks_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "ahumanita/notes",
"max_forks_repo_path": "ode/models.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7",
"max_issues_repo_issues_event_max_datetime": "2018-08-31T12:58:14.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-05-24T07:31:37.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "ahumanita/notes",
"max_issues_repo_path": "ode/models.tex",
"max_line_length": 99,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "ahumanita/notes",
"max_stars_repo_path": "ode/models.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1920,
"size": 6351
} |
\chapter{Appendix: IBiD Proofs}
\label{chap:appendix-ibid-proofs}
\begin{proof}[Proof of Theorem~\ref{thm:ibid-relaxation-notension}]
Consider any vertex $x$.
If $d^*(x) = \infty$,
then by (\ref{eqn:ibid-relaxation-props-nounder}) we must have that
$d(x) = \infty$.
Otherwise,
by (\ref{eqn:ibid-distance-function-global}),
there exists a path $p^*$ of length $d^*(x)$;
consider this path.
The first vertex on $p^*$ is $s$ with $d^*(s) = 0$,
and by (\ref{eqn:ibid-relaxation-props}),
$d(s) = d^*(s)$.
For each edge $e_{uv}$ on $p^*$ with $d(u) = d^*(u)$,
we will show that $d(v) = d^*(v)$.
By definition of the shortest path,
$d^*(u) + w(e_{uv}) = d^*(v)$.
Therefore $d(u) + w(e_{uv}) = d^*(v)$,
and by (\ref{eqn:ibid-relaxation-props-tens}),
we have $d^*(v) \geq d(v)$,
and by (\ref{eqn:ibid-relaxation-props-nounder})
we have $d^*(v) = d(v)$.
By induction along the path $p^*$,
we have that $d(x) = d^*(x)$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:ibid-relaxation-sound}]
The proof proceeds as follows.
First, if $d' = \infty$,
then no edges can be in tension,
and so $d = d^*$ everywhere
as shown in Section~\ref{subsec:ibid-tension}.
Otherwise,
suppose that $d(x) \neq d^*(x)$
for some vertex $x$ with $d(x) \leq d'$.
By (\ref{eqn:ibid-relaxation-props}),
it would have to be that $d^*(x) < d(x)$.
Consider a true shortest path $p$ from $s$ to $x$;
by (\ref{eqn:ibid-relaxation-props})
such a path exists and has finite length $d^*(x)$.
By (\ref{eqn:ibid-relaxation-props}),
we have that $d^*(s) = d(s) = 0$
(and so $s$ and $x$ must be distinct).
Let $e_{uv}$ be the first edge along $p$ such that
$d^*(u) = d(u)$
but $d^*(v) < d(v)$.
Since $p$ is a shortest path,
edge $e_{uv}$ must therefore be in tension.
Since $w \geq 0$,
it must be that $d^*(u) \leq d^*(x)$;
further, since $d^*(u) = d(u)$,
$d^*(x) < d(x)$,
and $d(x) \leq d'$
it must be that $d(u) < d'$.
But then edge $e_{uv}$ is in tension with lower $d(u)$!
This contradiction implies that every vertex $x$
with $d(x) \leq d'$
must have $d(x) = d^*(x)$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:ibid-relaxation-reconstruct}]
For any vertex $x$ with $d(x) \leq k$,
consider the following construction of a path from $s$ to $x$.
Initialize the path $p \leftarrow \{ x \}$;
note that by Theorem~\ref{thm:ibid-relaxation-sound},
the first vertex $v$ on $p$ has $d(v) = d^*(v)$.
At each iteration, terminate construction if $v=s$.
Otherwise,
let $u^*$ be the predecessor of $v$ which minimizes
$d(u) + w(e_{uv})$,
and prepend $u^*$ to $p$.
By (\ref{eqn:ibid-relaxation-props-nottoogood}),
it follows that $d(u^*) + w(e_{uv}^*) \leq d(v)$,
and since $d(v) = d^*(v)$ and
$d^*(v) \leq d^*(u) + w(e_{uv}^*)$
by definition of the distance function,
it follows that $d(u^*) \leq d^*(u^*)$.
In combination with (\ref{eqn:ibid-relaxation-props-nounder}),
this implies $d(u^*) = d^*(u^*)$,
and we can iterate.
The result of this construction is a path $p$ from $s$ to $x$
on which all vertices have $d(v) = d^*(v)$.
Therefore $p$ is a shortest path of length $d^*(x)$.
\end{proof}
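The construction used in this proof is itself a small algorithm, and for reference it can be
written out directly. The sketch below is only an illustration: the graph representation
(predecessor lists and a weight function) is invented here, and no particular data structures
are prescribed by the preceding results.
\begin{verbatim}
// Path reconstruction: starting from x, repeatedly prepend the predecessor
// u* minimising d(u) + w(e_uv) until s is reached.
function reconstructPath(
  x: string, s: string,
  d: Map<string, number>,
  preds: Map<string, string[]>,         // preds.get(v): vertices u with an edge e_uv
  w: (u: string, v: string) => number,  // edge weight w(e_uv)
): string[] {
  const path = [x];
  let v = x;
  while (v !== s) {
    const us = preds.get(v) ?? [];
    if (us.length === 0) throw new Error("no predecessor found");
    let best = us[0];
    for (const u of us) {
      if ((d.get(u) ?? Infinity) + w(u, v) <
          (d.get(best) ?? Infinity) + w(best, v)) {
        best = u;
      }
    }
    path.unshift(best);
    v = best;
  }
  return path;
}
\end{verbatim}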
\begin{proof}[Proof of Theorem~\ref{thm:ibid-bidir-sound}]
Note first that since $d_s(u) \leq D_s$,
by Theorem~\ref{thm:ibid-relaxation-sound},
$d_s(u) = d^*_s(u)$,
and so there exists a path from $s$ to $u$ of length $d_s(u)$.
Similarly, there exists a path from $v$ to $t$ of length $d_t(v)$,
and as a result,
there exists a path through $e^*_{uv}$ with length $\ell_e(e^*_{uv})$.
We will prove that this constitutes a shortest path by contradiction.
Suppose that a different shortest path $p'$ exists with
$\mbox{len}(p') < \ell_e(e^*_{uv})$.
Then it must also be that
$\mbox{len}(p') < D_s + D_t$.
Note that since $s \neq t$, $p'$ contains at least one edge.
We will consider two cases.
First, consider the case where $\mbox{len}(p') < D_s$,
so that $d_s^*(t) < D_s$.
In this case,
the last edge $e_{ut}'$ on $p'$
has $d_s^*(u') \leq d_s^*(t) < D_s$;
by Theorem~\ref{thm:ibid-relaxation-sound},
$u'$ therefore has $d_s(u') = d_s^*(u')$.
In addition,
since $D_t > 0$,
$t$ must be $t$-consistent with $d_t(t) = 0$.
Therefore,
it follows that $d_s^*(u') + w(e_{ut}') < d_s(u') + w(e_{ut}')$,
which contradicts the supposition.
In the second case with $D_s \leq \mbox{len}(p')$,
identify on $p'$ the edge $e'_{uv}$ adjoining the vertices $u'$, $v'$
such that $d_s^*(u') < D_s \leq d_s^*(v')$.
(Since $D_s > 0$, this edge will exist.)
Since $d_s^*(u') < D_s$,
by Theorem~\ref{thm:ibid-dynamicswsffp-sound},
$u'$ is therefore $s$-consistent with $d_s(u') = d_s^*(u')$.
Consider our supposition that
$d_s^*(u') + w(e_{uv}') + d_t^*(v') < D_s + D_t$.
Since $d_s^*(u') + w(e_{uv}') = d_s^*(v')$
and $D_s \leq d_s^*(v')$,
it follows that
$d_t^*(v') < D_t$.
Therefore,
by Theorem~\ref{thm:ibid-dynamicswsffp-sound},
$v'$ is $t$-consistent with $d_t(v') = d_t^*(v')$.
As a consequence,
the edge $e'_{uv}$ must be in $E_{\ms{conn}}$.
Therefore,
$d_s^*(u') + w(e_{uv}') + d_t^*(v')
< d_s(u') + w(e_{uv}') + d_t(v')$,
which is a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:ibid-dynamicswsffp-sound}]
The proof relies upon a path construction from $s$ to $x$
described by Lemma~\ref{lemma:ibid-dynamicswsffp-sound-conpath}.
Lemma~\ref{lemma:ibid-dynamicswsffp-sound-geq} then demonstrates
that the length of the path is $d(x)$,
and therefore that $d(x)$ can be no less than $d^*(x)$.
Finally,
Lemma~\ref{lemma:ibid-dynamicswsffp-sound-leq}
shows that $d(x)$ can be no greater than $d^*(x)$.
As a result,
the value $d(x)$ is correct,
and the path constructed
via Lemma~\ref{lemma:ibid-dynamicswsffp-sound-conpath}
is a shortest path.
\end{proof}
\begin{lemma}
For any consistent vertex $x$ with $d(x) \leq k_{\ms{min}}$,
there exists a path $p$ from $s$ to $x$
in which each vertex is consistent
and each edge $e_{uv}$ satisfies $d(u) + w(e_{uv}) = d(v)$.
\label{lemma:ibid-dynamicswsffp-sound-conpath}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:ibid-dynamicswsffp-sound-conpath}]
Construct the path $p$ as follows.
Initialize the path with the single vertex $x$.
Iteratively consider the first vertex $v$ on the path,
which is known to be consistent.
In the first case, if $v \neq s$,
then there exists a predecessor vertex $u$ and edge $e_{uv}$
with $d(u) + w(e_{uv}) = r(v)$.
Since $w > 0$ and $d(v) = r(v)$,
we have $d(u) < d(v) \leq d(x)$.
As a consequence,
$u$ is consistent;
prepend to the path the vertex $u$ and the edge $e_{uv}$,
and iterate.
In the second case, if $v = s$,
then we finish our construction of $p$.
Since the values $d(u)$ decrease monotonically
for all inserted vertices,
this process will terminate with a path $p$ beginning at $s$.
\end{proof}
\begin{lemma}
Any consistent vertex $x$ with $d(x) \leq k_{\ms{min}}$
has $d(x) \geq d^*(x)$.
\label{lemma:ibid-dynamicswsffp-sound-geq}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:ibid-dynamicswsffp-sound-geq}]
This follows directly from
Lemma~\ref{lemma:ibid-dynamicswsffp-sound-conpath}.
Since all vertices on the path are known consistent,
we must have $d(s) = 0$.
Further,
since the $d$-values across each edge in $p$
satisfy $d(u) + w(e_{uv}) = d(v)$,
it follows that $d(x) = \sum_{e \in p} w(e)$.
Therefore,
a path exists from $s$ to $x$ of length $d(x)$,
and so the true distance $d^*(x)$ must be upper-bounded by $d(x)$.
\end{proof}
\begin{lemma}
  Any consistent vertex $x$ with $d(x) \leq k_{\ms{min}}$
has $d(x) \leq d^*(x)$.
\label{lemma:ibid-dynamicswsffp-sound-leq}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:ibid-dynamicswsffp-sound-leq}]
We demonstrate that $d(x) \leq d^*(x)$ by contradiction.
Suppose a vertex $x$ exists for which $d^*(x) < d(x)$.
Then there must exist a path $p'$ from $s$ to $x$ of length $d^*(x)$,
with $d^*(s) = 0$ and $d^*(u) + w(e_{uv}) = d^*(v)$ for each edge
in $p'$;
as a consequence,
we must have $d^*(v) < d(x)$ for all vertices $v$ on $p'$.
By Lemma~\ref{lemma:ibid-dynamicswsffp-sound-conpath},
we know that $s$ is consistent,
so $d(s) = 0$.
We will show that walking along
each edge $e_{uv}$ on $p'$ starting at $s$,
if $d(u) \leq d^*(u)$,
then $d(v) \leq d^*(v)$.
By definition,
we have $d(u) + w(e_{uv}) \geq r(v)$,
so that $d(u) - d^*(u) \geq r(v) - d^*(v)$.
Therefore,
it follows that $r(v) \leq d^*(v)$.
Since $k(v) \leq d^*(v)$,
it follows that $v$ is consistent,
so $d(v) \leq d^*(v)$.
We can replicate this logic down the path.
As a result,
it follows that $d(x) \leq d^*(x)$.
But this contradicts our supposition that $d^*(x) < d(x)$,
and therefore such a vertex $x$ cannot exist.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:ibid-sound}]
We will prove this by contradiction.
Suppose that a path $p'$ exists with
$\mbox{len}(p') < \min_{e \in E_{\ms{conn}}} \left( d_s(u) + w(e_{uv}) + d_t(v) \right)$.
Then it must also be that
$\mbox{len}(p') < K_s + K_t$.
We will consider two cases.
First, consider the case where $K_s > \mbox{len}(p')$,
so that $K_s > d_s^*(t)$.
In this case,
the last edge $e_{ut}'$ on $p'$
has $d_s^*(u') < d_s^*(t) < K_s$;
by Theorem~\ref{thm:ibid-dynamicswsffp-sound},
$u'$ is therefore $s$-consistent with $d_s(u') = d_s^*(u')$.
In addition,
since $K_t > 0$,
$t$ must be $t$-consistent with $d_t(t) = 0$.
Therefore,
it follows that $d_s^*(u') + w(e_{ut}') < d_s(u') + w(e_{ut}')$,
which contradicts the supposition.
In the second case with $K_s \leq \mbox{len}(p')$,
identify on $p'$ the edge $e'_{uv}$ adjoining the vertices $u'$, $v'$
such that $d_s^*(u') < K_s \leq d_s^*(v')$.
(Since $K_s > 0$, this edge will exist.)
Since $d_s^*(u') < K_s$,
by Theorem~\ref{thm:ibid-dynamicswsffp-sound},
$u'$ is therefore $s$-consistent with $d_s(u') = d_s^*(u')$.
Consider our supposition that
$d_s^*(u') + w(e_{uv}') + d_t^*(v') < K_s + K_t$.
Since $d_s^*(u') + w(e_{uv}') = d_s^*(v')$
and $K_s \leq d_s^*(v')$,
it follows that
$d_t^*(v') < K_t$.
Therefore,
by Theorem~\ref{thm:ibid-dynamicswsffp-sound},
$v'$ is $t$-consistent with $d_t(v') = d_t^*(v')$.
As a consequence,
the edge $e'_{uv}$ must be in $E_{\ms{conn}}$.
Therefore,
$d_s^*(u') + w(e_{uv}') + d_t^*(v')
< d_s(u') + w(e_{uv}') + d_t(v')$,
which is a contradiction.
\end{proof}
| {
"alphanum_fraction": 0.6430124531,
"avg_line_length": 33.9530201342,
"ext": "tex",
"hexsha": "a821ed55afcd5a5aa97e28f009fa209b3ef70dac",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "62ca559db0ad0a6285012708ef718f4fde4e1dcd",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "siddhss5/phdthesis-dellin",
"max_forks_repo_path": "thesis-ch04-ibid-proofs.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "62ca559db0ad0a6285012708ef718f4fde4e1dcd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "siddhss5/phdthesis-dellin",
"max_issues_repo_path": "thesis-ch04-ibid-proofs.tex",
"max_line_length": 89,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "62ca559db0ad0a6285012708ef718f4fde4e1dcd",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "siddhss5/phdthesis-dellin",
"max_stars_repo_path": "thesis-ch04-ibid-proofs.tex",
"max_stars_repo_stars_event_max_datetime": "2018-09-06T21:45:42.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-09-06T21:45:42.000Z",
"num_tokens": 3817,
"size": 10118
} |
\chapter{Introduction}
This document describes the usage and features of the TUHH Telematics Thesis Class for \LaTeX. While the intention of this work is to explain the class and its functions to you, it is far from being complete or exhaustive. You are most welcome to contribute to this class and the attached packages by sending enhancements or feature proposals to \texttt{[email protected]}. In case of any questions, feel also free to send an E-Mail.
| {
"alphanum_fraction": 0.8,
"avg_line_length": 67.8571428571,
"ext": "tex",
"hexsha": "241f0f6c81c79863e8c2b9aa6d16297716702f32",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5c351afff6447f16dfce885636ee44ac41762396",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "bone4/Bachelorarbeit-JuraCoffeeThesis",
"max_forks_repo_path": "tuhhthesis/chapter_Introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5c351afff6447f16dfce885636ee44ac41762396",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "bone4/Bachelorarbeit-JuraCoffeeThesis",
"max_issues_repo_path": "tuhhthesis/chapter_Introduction.tex",
"max_line_length": 447,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5c351afff6447f16dfce885636ee44ac41762396",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "bone4/Bachelorarbeit-JuraCoffeeThesis",
"max_stars_repo_path": "tuhhthesis/chapter_Introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 103,
"size": 475
} |
% \newpage
% \section{Database design}
This project uses Firebase Cloud Firestore as a database. It is a NoSQL, document-oriented database. Unlike a SQL database, there are no tables or rows, instead, the data is stored in documents, which are organized into collections.
Each document contains a set of key-value pairs. Cloud Firestore is optimized for storing large collections of small documents. All documents must be stored in collections. Documents can contain subcollections and nested objects, both of which can include primitive fields like strings or complex objects like lists. \cite{firebase-datamodel}
The point of using Firebase is that it makes reading and writing very easy directly from the client app and you don't have to deal with any server configuration. The standard for a project like this one is to have a server with a SQL database installed, code a middleware that exposes the data through a REST API, and access the data from the client. But all these steps require a lot of time. With Firebase you just access the data from the client. Firebase makes setup extremely fast, scaling simple, and maintenance easy.
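To give an idea of what this looks like in practice, here is a minimal sketch that reads and writes a user's document directly from the client with the Firestore web SDK (the namespaced, v8-style API). It is only an illustration and not the project's actual code: the \texttt{firebaseConfig} credentials object is assumed to exist, and \texttt{User} refers to the interface defined in the data model below.
\begin{minted}[
    baselinestretch=1,
]{typescript}
import firebase from "firebase/app";
import "firebase/firestore";

declare const firebaseConfig: Record<string, string>; // project credentials (assumed)

firebase.initializeApp(firebaseConfig);
const db = firebase.firestore();

// Read the document of one user from the "users" collection.
async function loadUser(uid: string): Promise<User | undefined> {
  const snapshot = await db.collection("users").doc(uid).get();
  return snapshot.data() as User | undefined;
}

// Overwrite the document of one user.
async function saveUser(uid: string, user: User): Promise<void> {
  await db.collection("users").doc(uid).set(user);
}
\end{minted}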
\vfill
\begin{figure}[ht!]
\center
\includegraphics[width=0.9\textwidth]{media/firebase-console.png}
\caption{Screenshot of Firebase Console}
\label{fig:firebase-console}
\end{figure}
\vfill
\clearpage\newpage
\subsection{Data model}
\label{sec:data-model}
% \subsection{Data model in TypeScript for Firebase}
The Firebase database has two collections, users and subjects, that contain \texttt{User} and \texttt{Subject} objects respectively. The data model is defined in TypeScript, and its definition is given below.
The attributes \texttt{Evaluation.name} and \texttt{Exam.name} must be unique in the array. The attribute \texttt{Exam.grade} can be undefined when the user hasn't done the exam. And, the attribute \texttt{Evaluation.selected} can be undefined when the subject is not assigned to a user.
% \subsubsection{User}
% \label{sec:user-data-model}
\vfill
\begin{minted}[
baselinestretch=1,
]{typescript}
interface User {
subjects: Array<Subject>
}
\end{minted}
\vfill
% \subsubsection{Subject}
% \label{sec:subject-data-model}
\begin{minted}[
baselinestretch=1,
]{typescript}
interface Subject {
color: number,
course: string,
creationDate: Date,
creator: string,
creatorId: string,
evaluations: Array<Evaluation>,
faculty: string,
fullName: string,
shortName: string,
uni: string,
}
interface Evaluation {
name: string,
selected?: boolean,
exams: Array<Exam>
}
interface Exam {
name: string,
type: string,
weight: number,
grade?: number
}
\end{minted}
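To illustrate how these types are meant to be consumed, the following helper computes the weighted grade of the evaluation the user has selected, skipping exams that don't have a grade yet. This is only an example: the exact grading rule of the application is not specified in this section, so the averaging policy below is an assumption.
\begin{minted}[
    baselinestretch=1,
]{typescript}
// Example only: weighted grade of the selected evaluation.
// Exams without a grade are skipped; the exact weighting rule of the real app is assumed.
function currentGrade(subject: Subject): number | undefined {
  const evaluation = subject.evaluations.find((e) => e.selected);
  if (!evaluation) return undefined;
  let total = 0;
  let weightDone = 0;
  for (const exam of evaluation.exams) {
    if (exam.grade === undefined) continue;
    total += exam.weight * exam.grade;
    weightDone += exam.weight;
  }
  return weightDone > 0 ? total / weightDone : undefined;
}
\end{minted}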
\clearpage\newpage\noindent
% \subsubsection{Data model in UML}
For a clearer explanation, here is a UML diagram for the database. Although TypeScript represents it better. % noSQL is difficult to represent with UML
\vfill
\begin{figure}[ht!]
\center
\includegraphics[height=19.5cm]{media/diagrams/database-uml.pdf}
\caption{Data model's UML Diagram}
\label{updated-gantt}
\end{figure}
\vfill
% textual restrictions: 0 <= weight <= 1
\clearpage\newpage
\subsection{Example objects}
These examples will help in understanding the object's structure.
\subsubsection{Example of a subject}
This is an example of a subject, in the subjects collection. It has basic information like \texttt{shortName}, \texttt{fullName} or \texttt{course}, and also two evaluations. Each evaluation has exams that have a \texttt{name}, \texttt{weight} and \texttt{type}. Notice that some exams appear in both evaluations; this means that they represent the same available item but can be weighted differently in each evaluation.
\vfill
\begin{minted}[
baselinestretch=1,
]{typescript}
{
shortName: "EDA",
fullName: "Estructures de Dades i Algorismes",
course: "Q2 2019-2020",
uni: "UPC",
faculty: "FIB",
color: 2,
evaluations: [
{
name: "Continua",
exams: [
{ name: "P1", weight: 0.3, type: "Exàmens" },
{ name: "PC", weight: 0.3, type: "Exàmens" }
{ name: "F", weight: 0.3, type: "Exàmens" },
{ name: "Joc", weight: 0.2, type: "Joc" }
]
}, {
name: "Final",
exams: [
{ name: "PC", weight: 0.3, type: "Exàmens" },
{ name: "F", weight: 0.6, type: "Exàmens" },
{ name: "Joc", weight: 0.2, type: "Joc" }
]
}
],
creationDate: "February 28, 2020 at 9:39:50 PM UTC+1",
creator: "Maurici Abad Gutierrez",
creatorId: "4wUPZqVqt1Y9K6CAWLBNlZwe3b12"
}
\end{minted}
\vfill
\newpage
\subsubsection{Example of a user}
This user has only one subject saved. He changed its color (from color 2 to color 7) and saved some grades (8.33 in P1 and 5.5 in PC). Notice that because he hasn't done the exam "Joc", its grade is not stored. Also, he has the \textit{Continua} evaluation selected.
The subject's entire information is duplicated because if the original subject is edited, the data inside the user doesn't change, to prevent unexpected changes.
This object structure is optimized for NoSQL databases because it contains all the information needed to load the screen.
\vfill
\begin{minted}[
baselinestretch=0.85,
]{typescript}
{
subjects: [
{
shortName: "EDA",
fullName: "Estructures de Dades i Algorismes",
course: "Q2 2019-2020",
uni: "UPC",
faculty: "FIB",
color: 7,
evaluations: [
{
name: "Continua",
selected: true,
exams: [
{ name: "P1", weight: 0.3, type: "Exàmens", grade: 8.33 },
{ name: "PC", weight: 0.3, type: "Exàmens", grade: 5.5 }
{ name: "F", weight: 0.3, type: "Exàmens" },
{ name: "Joc", weight: 0.2, type: "Joc" }
]
}, {
name: "Final",
selected: false,
exams: [
{ name: "PC", weight: 0.3, type: "Exàmens", grade: 5.5 },
{ name: "F", weight: 0.6, type: "Exàmens" },
{ name: "Joc", weight: 0.2, type: "Joc" }
]
}
],
creationDate: "February 28, 2020 at 9:39:50 PM UTC+1",
creator: "Maurici Abad Gutierrez",
creatorId: "4wUPZqVqt1Y9K6CAWLBNlZwe3b12"
}
]
}
\end{minted}
\vfill
| {
"alphanum_fraction": 0.6821926489,
"avg_line_length": 34.4918032787,
"ext": "tex",
"hexsha": "05c58fcd57cfc66e3ee33270cb87fa4abfb4a633",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0fa78d5709b31024bafdfd0428c972cf0cec3ffb",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "mauriciabad/TFG",
"max_forks_repo_path": "sections/db.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0fa78d5709b31024bafdfd0428c972cf0cec3ffb",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "mauriciabad/TFG",
"max_issues_repo_path": "sections/db.tex",
"max_line_length": 538,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "0fa78d5709b31024bafdfd0428c972cf0cec3ffb",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mauriciabad/TFG",
"max_stars_repo_path": "sections/db.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1776,
"size": 6312
} |
% !TEX root=../main.tex
\section{Introduction}
In the next sections we will gradually develop a language to model interactive processes.
Each section will introduce one of the following basic constructs,
making our language more powerful after every addition:
\begin{itemize}
\item interactive \emph{editors} with a value ($\Edit v$) and without a value ($\Enter \tau$);
\item the \emph{failure} task ($\Fail$);
\item sequential composition or \emph{stepping} of tasks ($t_1 \Then e_2$);
\item parallel composition or \emph{pairing} of tasks ($t_1 \And t_2$);
\item making a \emph{choice} between tasks ($t_1 \Or t_2$);
\end{itemize}
Additionally, we will discuss two convenience constructs and one extension for shared data:
\begin{itemize}
\item continuing to the next task upon users' requests ($t_1 \Next e_2$);
\item giving users an explicit choice ($t_1 \Xor t_2$);
\item watching and changing shared data ($\Update l$).
\end{itemize}
Compared to previous attempts \cite{conf/ifl/KoopmanPA08,conf/ppdp/PlasmeijerLMAK12,theses/radboud/VinterHviid18},
our approach distinguishes itself by the following points:
\begin{itemize}
\item
    There is a clear distinction between the underlying \emph{host language}
and the task layer (\emph{object language}) on top of it.
% For example, \textcite{conf/ppdp/PlasmeijerLMAK12} present a reference implementation of task oriented programming
% as an embedded domain specific language in Clean \cite{manuals/PlasmeijerE98} which blurs the lines between the host and object language.
\item
There is no notion of values which can be stable or unstable.
Our language has editors which can be valued ($\Edit v$) or unvalued ($\Enter \tau$),
but values themselves do not have special distinguishing features and are just values,
as in every other lambda calculus or functional programming language.
\item
Tasks are never done!
A value lifted into the task world can always interactively be changed by end users.
\item
    All tasks are, in the end, about interaction with end users:
    not because every construct introduced in our language has a way to interact with the user,
    but because the leaves do.\footnote{
    There are two leaves in our language: editors ($\Edit v$ and $\Enter \tau$) and fail ($\Fail$).
Editors are the main focus of user interaction.
Users can enter and change information using an editor.
Fail is a special case.
It acts like a black hole regarding interaction and will swallow every event users will throw at it.
All other language constructs are nodes.
}
\item
Entering some information into a system is not a one-shot action.
Editors keep asking for input continuously.
Thus repeatedly asking for information does not need to be modelled with recursion.
The next task is only executed under preprogrammed internal conditions or external actions by users.
\item
Tasks do not need to be identified by a task identifier.
Sending events is based on the \emph{structure} of the task at hand.
\item
Events are not accessible by users, they are built into the semantics.
\item
Semantically, tasks do not return a value \emph{and} a continuation
but \emph{only} a continuation.
Obtaining the current value of a task is an \emph{observation},
implemented by an additional semantic function.\footnote{
Looking at tasks in this way mitigates the problem of a stream-like type which can not be made into a monad
(as described in \textcite{theses/radboud/VinterHviid18})
and make sure some tasks, such as a step, do not have a value.
}
\item
We make use of multiple semantic functions that describe different aspects of the language.
At the core, we assume a big step \emph{evaluation} semantics for the host language.\footnote{
We use a $\lambda$-calculus with some primitive types and simple extensions like $\If{}{}{}$, pairs, and references.}
On top of this there is \emph{normalisation} of tasks
and \emph{handling} of events.
Normalisation is done before starting the main event loop
and after handling an event.
Next to these three core semantics,
there are also semantic functions querying the current \emph{value} of a task,
giving enabled \emph{actions},
or producing a (rudimentary) \emph{user interface}.
\end{itemize}
The remainder of this report is structured as follows.
In the first section we set up our host language,
the $\lambda$-calculus extended with primitive types and operations.
Thereafter, in the next \fixme{seven} sections, we will gradually define our object language,
extending the properties of our host language when needed.
After some sections,
we will make a small intermezzo,
showing the current properties and laws of our language.
We strive to keep the host language as small as possible.
At the end of the report,
we will see an overview of our language,
summarising all static and dynamic rules defined along the way.
\todo{Consistent capitalization style of section headings}
| {
"alphanum_fraction": 0.7489719992,
"avg_line_length": 52.6494845361,
"ext": "tex",
"hexsha": "b251e37dfd3d5a3051c3195dfebcfeab90cc5607",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e7b846338d5da59ed5d00aef81f9874cadbbdd9f",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "mklinik/task-semantics",
"max_forks_repo_path": "doc/report/introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e7b846338d5da59ed5d00aef81f9874cadbbdd9f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "mklinik/task-semantics",
"max_issues_repo_path": "doc/report/introduction.tex",
"max_line_length": 143,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "e7b846338d5da59ed5d00aef81f9874cadbbdd9f",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "mklinik/task-semantics",
"max_stars_repo_path": "doc/report/introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1238,
"size": 5107
} |
\section{Modelling articulated subjects}
% Brief introduction and discuss 'non-shape/weak shape' methods. E.g. faces and hands.
% Discuss methods like dolphins or Angjoo birds which start
% face model paper: https://arxiv.org/pdf/1909.01815.pdf
The design of 3D morphable models (3DMMs) has a significant recent history in computer vision research. A 3DMM is a statistical model designed to represent the structure, deformation and appearance space of a particular object category. Such a model can be constructed for any object category for which a dense point-to-point correspondence can be established between instances. For example, a 3DMM can be designed to represent medium-sized quadrupeds but perhaps not general animal categories. How, for instance, would one sensibly determine correspondences between a dog and an octopus? 3DMMs have been used extensively as a strong 3D prior to aid various 3D reconstruction algorithms. They are, however, most influential for problems with the most ambiguity: particularly when dealing with articulated objects (e.g. animals or humans), when only a single monocular RGB image is available or when no paired 3D training data is available.
% cars:
% ears: https://core.ac.uk/download/pdf/158370989.pdf
% human bodies: http://grail.cs.washington.edu/projects/digital-human/pub/allen03space-submit.pdf
% human bodies:
Blanz and Vetter~\cite{blanz-vetter} presented the first 3DMM, which expressed a low-dimensional face space learnt by aligning various face scans. This work, presented over two decades ago, has been recognized with an impact paper award for the continued application of the ideas presented. Indeed, the approach introduced has found applications far beyond faces~\cite{face-warehouse, basel-old, basel-new}, including for cars~\cite{deformable-cars}, other human body parts including the hands~\cite{Khamis_2015_CVPR} and ears~\cite{deformable-ears}, the human body surface~\cite{anguelov05scape,loper15smpl} and a restricted set of animal categories~\cite{cafm, zuffi2017menagerie}.
This section will cover methods for modelling articulated subjects, focussing primarily on methods for human bodies and animals.
% \subsection{Building 3D morphable models}
% Of primary concern to this thesis are methods which represent articulated structures, such as human bodies or animals, with a 3D polygon mesh. A polygon mesh $M = (V, T)$ is a collection of vertices, edges bound by vertex pairs, and polygons bound by sequences of edges and vertices~\cite{smith2006vertex}. Although other convex shapes are allowed, this thesis only has need to discuss triangular mesh polygons, which henceforth will be referred to as \emph{triangles}. An example mesh is shown in Figure~\ref{fig:polygon_mesh}.
% \begin{figure}[H] % Example image
% \center{\includegraphics[width=0.5\linewidth]{dolphin_mesh}}
% \caption{A polygon mesh~\cite{polygon_mesh}.}
% \label{fig:polygon_mesh}
% \end{figure}
% \subsection{Mesh deformation}
% The process of adapting a 3D mesh is known as \textit{mesh deformation} and has relevance to a multitude of computer graphics applications, particuarly those in which models are designed to represent dynamic objects. To constrain an optimization function (or simplify the animation process), it is useful to introduce priors that prevent unnatural mesh movement. Two methods for achieving this are discussed:
% \subsubsection{As Rigid as Possible}
% As Rigid as Possible (ARAP) surface deformation~\cite{sorkine2007rigid} is a distance function that measures similarity between two meshes with corresponding vertices. For two vertex sets~$V_{1}$ and~$V_{2}$, ARAP minimizes over~$N = |V|$ rotation matrices. Note~$j \sim i$ indicates vertex indices~$j$ adjacent to vertex index~$i$:
% \begin{equation}
% D(V_{1}, V_{2}) = \min_{R_{1..N}}\sum_{i=1}^{N}\sum_{j \sim i}|| (V_{1i} - V_{1j}) - R_{i}(V_{2i} - V_{2j}) ||^{2}
% \end{equation}
% This distance function can be incorporated into an energy-based optimizer as a regularization function. By considering how small vertex regions overlap, the function can be used to discourage `unnatural movement', e.g.\ shearing effects, over mesh faces. ARAP regularizers are particularly useful in cases in which there is no prior knowledge of the mesh. Figure~\ref{fig:arap_dino} shows an example of a dinosaur mesh undergoing ARAP deformation, obtained by translating the highlighted yellow vertex.
% \begin{figure}[H] % Example image
% \center{\includegraphics[width=0.35\linewidth]{dino_arap}}
% \caption{Dinosaur mesh undergoing ARAP deformation, obtained by translating the highlighted yellow vertex. Reprinted from~\cite{sorkine2007rigid}.}
% \label{fig:arap_dino}
% \end{figure}
% \subsubsection{Skeletal Rigging and Linear Blend Skinning}
% In cases that the mesh shape is known in advance, it is common to follow a process known as \textit{rigging}, in which the mesh is augmented with a hierarchical bone structure. The point at which two bones meet is called a \emph{joint}, and these can be used to define acceptable centres of rotation for mesh deformation. It is possible to describe a distribution of joint configurations, which could be used to constrain the mesh to (in the case of human / animal subjects) anatomically achievable poses. It is also simple to define conceptual `body parts' from a rigged mesh, by considering regions between pairs of joints; for example a lower leg region can be defined between a knee and ankle joint. A simple example of a rigged 2D mesh with joints indicated by green diamonds is shown in Figure~\ref{fig:finger_model}. Note how the mesh surface deforms naturally as the joints are displaced.
% \begin{figure}[H]
% \centering
% \begin{subfigure}{0.48\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{finger/finger1}
% \caption{Default joint positions.}
% \end{subfigure}
% \begin{subfigure}{0.48\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{finger/finger2}
% \caption{Right-most joint displaced.}
% \end{subfigure}
% \begin{subfigure}{0.48\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{finger/finger3}
% \caption{Central joint displaced and right-most joint displaced and rotated.}
% \end{subfigure}%
% \caption{Web application demonstrating LBS on a 2D finger mesh. Joints are denoted as green diamonds.}
% \label{fig:finger_model}
% \end{figure}
% \clearpage
% Formally, a skinned mesh consists of a set of rigged vertices $V \subseteq \mathbb{R}^3 \times \mathbb{R}^{|J|}$, a set of faces $F \subseteq V^3$ and joints $J \subseteq R^{3\times3}$. Each vertex $v = (x, s) \in V$ consists of positional coordinate $x \in \mathbb{R}^{3}$ and a weight vector $s \in \mathbb{R}^{|J|}$ which describes the level of influence each joint $j \in J$ has over its movement. Many approaches exist for assigning weights, but perhaps the simplest is to build a vector with entries corresponding to the distance from the vertex to each joint centre. Skinning weight vectors are normalized such that their entries sum to one, and for computational reasons, the number of non-zero elements is typically limited to 2 or 4. The weakness of such models is that artifacts and other unrealistic deformations can occur around the model joints, particularly for meshes that model non-linear structures such as humans. However, the technique is frequently used in computer graphics and game design when a character's shape is known ahead of time.
% To assist in explanation, Figure \ref{fig:rigged_cylinder} shows skinning weight influences from three joints within a rigged cylinder mesh. Here, $|J| = 3$ and each vertex $v_{i} = (x_{i}, s_{i}) \in V$ has a skinning weight vector $s_{i} \in \mathbb{R}^{3}$. Each model joint is assigned a distinct RGB value, shown separately in (a), (b) and (c), and together in (d) by linearly combining the colours. This linear blend colorization scheme will be frequently used in later sections of this report.
% \begin{figure}[H]
% \centering
% \begin{subfigure}{0.25\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{wonky_pole/lower_bone}
% \caption{Lower joint.}
% \end{subfigure}%
% \begin{subfigure}{0.25\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{wonky_pole/middle_bone}
% \caption{Middle joint.}
% \end{subfigure}%
% \begin{subfigure}{0.25\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{wonky_pole/upper_bone}
% \caption{Upper joint.}
% \end{subfigure}%
% \begin{subfigure}{0.25\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{wonky_pole/linear_blend}
% \caption{Linear blend.}
% \end{subfigure}%
% \caption{A rigged cylinder with $|J| = 3$ and where each vertex $v_{i} = (x_{i}, s_{i}) \in V$ has a skinning weight vector $s_{i} \in \mathbb{R}^{3}$.}
% \label{fig:rigged_cylinder}
% \end{figure}
% Figure \ref{fig:rigged_quadruped} shows a more complex rigged quadruped mesh with $|J| = 25$ with skinning weight influences again shown by the linear blend colorization scheme. Again, each joint is assigned a unique RGB value and a vertex's colour is calculated by linearly combining joint colours with skinning weight vectors given by the $\{s_{i}\}$. A triangle's colour is then generated by averaging the colours given for the three surrounding vertices.
% \begin{figure}[H] % Example image
% \center{\includegraphics[width=0.5\linewidth]{linear_blend_bold_bones}}
% \caption{A rigged quadruped with $|J| = 25$ and where each vertex $v_{i} = (x_{i}, s_{i}) \in V$ has a skinning weight vector $s_{i} \in \mathbb{R}^{3}$. Visualization uses the linear blend colorization scheme in which each joint is assigned a unique RGB value.}
% \label{fig:rigged_quadruped}
% \end{figure}
% Once a mesh has been suitably rigged, there are a number of options (e.g. Linear Blend Skinning (LBS), Dual Quaternions~\cite{kavan2007skinning} etc.) for applying a particular mesh deformation. Typically, a user assigns a transformation (in this case comprising a rotation and transformation) to each `joint' and the updated positions $\bar{x_{i}}$ of the remaining vertices $v_{i}$ with original positions $x_{i}$ are then calculated. The original transformation for each joint (i.e. before the deformation) is expressed as a matrix $U_{j}$. The transformation after the deformation has been applied is captured by $D_{j}$. Note that $s_{ij}$ denotes the skinning weight influence of joint~$j \in J$ on vertex $v_{i} \in V$.
% The updated positions $\bar{x_{i}}$ can then be calculated by LBS:
% \begin{equation}
% \bar{x_{i}} = \sum_{j=1}^{|J|}s_{ij}D_{j}U_{j}^{-1}x_{i}
% \end{equation}
% \subsection{Modelling human body parts}
% % Hands, faces etc.
% Given the availability of strong shape and pose priors, articulated hand tracking aptly demonstrates the advantage of model fitting approaches. Again, it is first necessary to decide how the human hand should be parameterized, i.e. what an optimizer should specifically aim to learn. Similar to the case with the full human body, the aim is again to adapt a mesh (although this time of a hand) to reproduce a performance given by a real human hand either in still frames or from an input video sequence. Many modern approaches follow a hand parameterization given by Khamis et al.~\cite{Khamis_2015_CVPR} using a pose vector $\theta \in \mathbb{R}^{28}$ that includes global translation and rotation, one adbuction and three flexion variables for each finger digit, and one abduction and flexion parameter for the wrist and forearm. An example hand tracking result can be seen in Figure~\ref{fig:hand_tracking}.
\subsection{Constructing 3D morphable models}
% As it relates to modelling rigged articulated objects, it is important to factor deformation into two categories. Firstly \emph{pose} deformations govern the positioning of articulated parts, for example arms and legs. Consequently, parameters for controlling pose will typically vary when reconstructing a sequence exhibiting an articulated subject in motion. \emph{Shape} deformations, however, control the relative lengths and sizes of articulated parts and indeed the global structure. In general, parameters for controlling shape should be invariant to motion and remain consistent for a single individual.
% A frustrating exception to this occurs in the literature discussing face reconstruction. Since models typicaly since without a define face skeleton, all shape variations are handling face reconstruction, since without a defined face skeleton, all
% remain consistent In general, control the relative length and sizing of body parts. In general, only one sizing and length of the model limbs (e.g. arms and legs). typically used to govern the positions of the limbs. are used to govern global and local body proportions (e.g. body part sizes and lengths), and are mostly consistent for a single subject. \emph{Pose deformations} on the other hand are used to govern the positioning of limbs. In order to consolidate various definitions used in the literature, this thesis will define \emph{pose} to be any mesh deformation affected by the movement of an internal skeleton. \emph{Shape} will therefore be any other kind deformation. As an example, then, of most concern to 3D morphable models representing human faces will be \emph{shape} parameters, since face models rarely have an internal skeletal structure. an internal skeleton constitute any deformation change affected by the typically which govern body proportions (e.g. body part sizes, lengths) and \emph{pose deformation} which governs the positioning of limbs. Of course, a typical image of a human or animal will comprise
3D deformable models are typically represented by a polygon mesh. A polygon mesh $M = (V, T)$ is a collection of vertices, edges bound by vertex pairs, and polygons bound by sequences of edges and vertices~\cite{smith2006vertex}. Although other convex shapes are allowed, this thesis only has need to discuss triangular mesh polygons, which henceforth will be referred to as \emph{triangles}. An example mesh is shown in Figure~\ref{fig:polygon_mesh}.
\begin{figure}[t] % Example image
\center{\includegraphics[width=0.5\linewidth]{dolphin_mesh}}
\caption{A polygon mesh. Reprinted from~\cite{polygon_mesh}.}
\label{fig:polygon_mesh}
\end{figure}
A 3D morphable model can then be constructed by deforming a template mesh $M$ with $n$ vertices to a set of 3D training examples. Generally, this optimization will have a significant number of degrees of freedom, so it is necessary to employ regularization techniques. One such regularizer is known as the \emph{As Rigid as Possible} scheme:
\begin{definition}[As Rigid as Possible]
As Rigid as Possible (ARAP) surface deformation~\cite{sorkine2007rigid} is a distance function that measures similarity between two meshes with corresponding vertices. For two vertex sets~$V$ and~$W$, ARAP minimizes over~$N = |V|$ rotation matrices. Note~$j \sim i$ indicates vertex indices~$j$ adjacent to vertex index~$i$:
\begin{equation}
D(V, W) = \min_{R_{1..N}}\sum_{i=1}^{N}\sum_{j \sim i}|| (V_{i} - V_{j}) - R_{i}(W_{i} - W_{j}) ||^{2}
\end{equation}
This distance function can be incorporated into an energy-based optimizer as a regularization function. By considering how small vertex regions overlap, the function can be used to discourage `unnatural movement', e.g.\ shearing effects, over mesh faces. ARAP regularizers are particularly useful in cases in which there is no prior knowledge of the mesh. Figure~\ref{fig:arap_dino} shows an example of a dinosaur mesh undergoing ARAP deformation, obtained by translating the highlighted yellow vertex.
\begin{figure}[t] % Example image
\center{\includegraphics[width=0.35\linewidth]{dino_arap}}
\caption{Dinosaur mesh undergoing ARAP deformation, obtained by translating a vertex on the nose. Reprinted from~\cite{sorkine2007rigid}.}
\label{fig:arap_dino}
\end{figure}
\end{definition}
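To make the regularizer concrete, the following minimal NumPy sketch (illustrative only; the variable names are ours) evaluates the ARAP energy for a fixed candidate set of per-vertex rotations. In a full optimizer the rotations $R_i$ would themselves be re-estimated, typically from the SVD of a per-vertex covariance matrix, and alternated with a solve for the vertex positions.
\begin{verbatim}
import numpy as np

def arap_energy(V, W, adjacency, rotations):
    """Evaluate the ARAP distance between two corresponding vertex sets
    V and W (each an n x 3 array), given one candidate 3x3 rotation per
    vertex (an n x 3 x 3 array). adjacency[i] lists the vertex indices j
    adjacent to vertex i."""
    energy = 0.0
    for i, neighbours in enumerate(adjacency):
        for j in neighbours:
            # Compare each edge of the first mesh with the rotated edge
            # of the second mesh.
            residual = (V[i] - V[j]) - rotations[i] @ (W[i] - W[j])
            energy += residual @ residual
    return energy
\end{verbatim}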
Once aligned, a $d$-dimensional \emph{shape space} can then be defined (with $d \ll n$), where each $w \in \R{d}$ gives rise to a configuration of the $n$ vertices in $\R{3}$ (with unchanged triangulation). In this way, every plausible 3D example has a parameter vector $w \in \R{d}$ that generates it. This construction can then be interpreted as a \emph{generative} model. However, very few selections of $w \in \R{d}$ will generate a plausible-looking 3D mesh. This can be handled probabilistically, by defining a density function $f(w)$ that gives the likelihood that a realistic 3D example would be represented by $w$ in shape space.
\subsection{Modelling shapes (e.g. faces)}
The concepts raised above were first introduced in the seminal work of Blanz and Vetter~\cite{blanz-vetter}. They define a linear generator function based on principal component analysis (PCA) in order to map $d$-dimensional parameter vectors to the set of $n$ vertex coordinates. In particular they use the mapping:
\begin{equation}
g(\alpha) = \bar{c} + E\alpha
\end{equation}
where $g: \R{d} \to \R{3n}$ is the generator function, $\bar{c} \in \R{3n}$ is the mean 3D face in the training dataset and $E \in \R{3n \times d}$ is a matrix containing the $d$ most dominant eigenvectors computed over the shape residuals $\{c_i - \bar{c}\}$.
This construction assumes 3D faces in this $d$-dimensional parameter space follow a multivariate normal distribution (a design decision explored further in \Cref{chap:wldo} of this thesis). The function $f(\alpha)$, which gives the likelihood that a shape space vector $\alpha$ represents a plausible face, is therefore determined by the Mahalanobis distance of $\alpha$ from the origin.
Note that this formulation additionally enables the definition of facial expressions. For example, Blanz and Vetter defined an expression (e.g. surprise) according to the difference in shape space between an expressive and a neutral face of the same subject. This then enabled the formulation above to be factored into identity and expression components:
\begin{equation}
g(\alpha_{idt}, \alpha_{exp}) = \bar{c} + E_{idt}\alpha_{idt} + E_{exp}\alpha_{exp}
\end{equation}
where $E_{idt}, E_{exp}$ are the basis vectors of the identity and expression spaces and $\alpha_{idt}, \alpha_{exp}$ are the corresponding coefficients. As noted by Lewis et al.~\cite{xxx}, the basis vectors of the expression space above can be interpreted as a data-driven \emph{blendshape model}: a standard approach in the animation industry for representing facial expressions as a linear combination of target faces. This concept will later reemerge in a section discussing the corrective blendshapes used in SMPL~\cite{loper15smpl}.
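As a minimal illustrative sketch (not code from~\cite{blanz-vetter}; the array names are hypothetical), the PCA basis and the factored identity/expression generator can be written in a few lines of NumPy:
\begin{verbatim}
import numpy as np

def fit_shape_basis(C, d):
    """Learn a d-dimensional PCA shape basis from m aligned training faces,
    given as an (m, 3n) array C of stacked vertex coordinates."""
    c_mean = C.mean(axis=0)
    # Rows of Vt are the eigenvectors of the residual covariance, ordered
    # by decreasing singular value; keep the d most dominant ones.
    _, _, Vt = np.linalg.svd(C - c_mean, full_matrices=False)
    E = Vt[:d].T                      # shape (3n, d)
    return c_mean, E

def generate_face(c_mean, E_idt, alpha_idt, E_exp=None, alpha_exp=None):
    """g(alpha) = c_mean + E_idt @ alpha_idt (+ E_exp @ alpha_exp)."""
    face = c_mean + E_idt @ alpha_idt
    if E_exp is not None:
        face = face + E_exp @ alpha_exp
    return face
\end{verbatim}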
%Blendshape Facial Animation
As identified by Blanz and Vetter, improved modelling of finer details (particularly around the eye and nose regions) can be obtained through local modelling. Various authors~\cite{xxx} began manually segmenting the face into parts and learning individual PCA representations for them. Later, segmentations were learned automatically based on displacement patterns found in the training dataset. Next, approaches were adopted based on hierarchical, multi-scale frameworks~\cite{xxx, xxx}. Possibly the closest to later sections, which require a focus on \emph{pose deformation}, is the work of Wu et al.~\cite{xxx}, who combine a local shape space model with an anatomical bone structure that helps regularize deformation.
% booth et al 2017
% pascal 2010
% egger et al 2016b
% zhou et al.
% xu et al (deep 3d portrait from a single image)
A standard challenge in face modelling is reconstructing appearance, typically incorporating albedo and illumination (although frequently these are not factored, in which case appearance is generally referred to as \emph{texture}). Early work modelled shape and texture independently~\cite{xxx, xxx}, although recent techniques show that solving for these factors jointly enables constraints to be applied due to the correlations present. Perhaps most interesting are the recent techniques among these~\cite{xxx, xxx}, which propose methods based on deep convolutional models to jointly model shape and texture.
% PROBLEMS
% the statistics of most models are limited to the face and do not include information on eyes, mouth interior or hair
% Second, the interpretability of the representations would benefit from being improved. PCA is the most commonly used method to perform statistics on 3D faces, and as it is an unsupervised method, the principal components do not coincide with attributes that humans would use to describe a face
\subsection{Modelling articulation (e.g. hands)}
% Perhaps the biggest difference with hand modelling are the complex modes of articulation among subjects.
3D morphable models have also influenced work in 3D hand tracking and modelling. Human hands serve multiple purposes in everyday life, acting as a mechanism for handling tools and objects, expressing emotion and aiding (or even serving as the primary tool for) communication. As a result, hands (and particularly fingers) exhibit complex articulation patterns which are best characterized as 3D \emph{rotations}. Compared to the previous section, in which face shape variation could be represented as an abstract linear basis learnt from scans, an advantage to modelling hands is that the modes of articulation can be defined in advance.
In particular, human hand motion is controlled by a hierarchical bone structure referred to as a \emph{skeleton}. The point at which two bones meet is referred to as a \emph{joint} and can be used to define acceptable centres of articulation. The direction and magnitude of the articulation can then be neatly expressed as a 3D rotation.
This formulation helps provide insight into why the abstract linear basis (shape space) introduced in the previous section would be a poor choice for modelling hands. Deformation is here characterized in terms of 3D rotations, and 3D rotations are non-linear with respect to the input angle. This is easily shown:
\begin{definition}[3D Rotations]
The simplest kind of 3D rotation is an \emph{elementary rotation} and involves a rotation around a single axis of a coordinate system. For example, the following matrix represents a rotation by an angle $\gamma$ around the $x$ axis:
\begin{equation}
R_{x}(\gamma) = \begin{bmatrix}
1 & 0 & 0 \\
0 & \cos(\gamma) & -\sin(\gamma) \\
0 & \sin(\gamma) & \cos(\gamma)
\end{bmatrix}
\end{equation}
One can then apply this matrix to an input point $p \in \R{3}$ to compute the new position $p'$ after rotating $\gamma$ around the $x$ axis:
\begin{equation}
p' = R_{x}(\gamma)p
\end{equation}
This formulation can then be extended to represent any 3D rotation as the composition of elementary rotations. For example, a 3D rotation can be decomposed into a $\gamma$ rotation around the $x$-axis (pitch), followed by a $\beta$ rotation around the $y$-axis (yaw) and finally by an $\alpha$ rotation around the $z$-axis (roll).
\begin{equation}
R = R_{z}(\alpha) R_{y}(\beta) R_{x}(\gamma)
\end{equation}
One can see immediately that applying a 3D rotation (i.e. computing the new position of the points) is a non-linear function of the input angle. It is necessary, therefore, to describe an alternative technique for low-dimensional and efficient mesh deformation, which relies on \emph{rigging} and \emph{linear blend skinning}.
\end{definition}
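The following short NumPy sketch (illustrative only) constructs the elementary rotations and their composition; note that the resulting point positions are trigonometric, and hence non-linear, functions of the input angles.
\begin{verbatim}
import numpy as np

def rot_x(g):
    return np.array([[1, 0, 0],
                     [0, np.cos(g), -np.sin(g)],
                     [0, np.sin(g),  np.cos(g)]])

def rot_y(b):
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [ 0,         1, 0        ],
                     [-np.sin(b), 0, np.cos(b)]])

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0,          0,         1]])

def rotate(p, alpha, beta, gamma):
    """Apply the composition R = Rz(alpha) Ry(beta) Rx(gamma) to point p."""
    return rot_z(alpha) @ rot_y(beta) @ rot_x(gamma) @ p
\end{verbatim}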
% Compared to facesCompared to faces, hands have a simpler space of shape variation (typically restricted to palm size, finger lengths etc.) but the modes of articulation are extremely complex. In the class of 3DMMs, hand models arguably have the strongest synergy to their biological counterparts; in many cases each of the 27 hand bones (occasionally except the capal bones which join the fingers to the wrists) are explicitly represented in the model skeleton.
\begin{definition}[Skeletal Rigging and Linear Blend Skinning (LBS)]
In cases where the articulated object is known in advance, it is common to augment a representative 3D mesh model with an internal skeleton that approximates the biological counterpart. This is achieved through a process known as \emph{rigging}.
% In general, the mesh skeleton approximates the real, biometric skeleton by allowing the important modes of deformation but often is a rough approximation of the real, biometric skeleton be detailed enough to allow the primary modes of deformation, but often it is not necessary that the 3D mesh skeletthat the articulated object is known In cases that the mesh shape is known in advance, it is common to follow a process known as \textit{rigging}, in which the mesh is augmented with a hierarchical bone structure. The point at which two bones meet is called a \emph{joint}, and these can be used to define acceptable centres of rotation for mesh deformation. It is possible to describe a distribution of joint configurations, which could be used to constrain the mesh to (in the case of human / animal subjects) anatomically achievable poses. It is also simple to define conceptual `body parts' from a rigged mesh, by considering regions between pairs of joints; for example a lower leg region can be defined between a knee and ankle joint. A simple example of a rigged 2D mesh with joints indicated by green diamonds is shown in Figure~\ref{fig:finger_model}. Note how the mesh surface deforms naturally as the joints are displaced.
% \begin{figure}[H]
% \centering
% \begin{subfigure}{0.48\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{finger/finger1}
% \caption{Default joint positions.}
% \end{subfigure}
% \begin{subfigure}{0.48\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{finger/finger2}
% \caption{Right-most joint displaced.}
% \end{subfigure}
% \begin{subfigure}{0.48\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{finger/finger3}
% \caption{Central joint displaced and right-most joint displaced and rotated.}
% \end{subfigure}%
% \caption{Web application demonstrating LBS on a 2D finger mesh. Joints are denoted as green diamonds.}
% \label{fig:finger_model}
% \end{figure}
Formally, a skinned mesh consists of a set of rigged vertices $V \subseteq \R{3} \times \R{|J|}$, a set of faces $F \subseteq V^3$ and joint transformation matrices $J \subseteq \RR{3}{4}$. Each vertex $v = (x, s) \in V$ consists of positional coordinate $x \in \R{3}$ and a weight vector $s \in \R{|J|}$ which describes the level of influence each joint $j \in J$ has over its movement. Many approaches exist for assigning weights, but perhaps the simplest is to build a vector with entries corresponding to the distance from the vertex to each joint centre. Skinning weight vectors are normalized such that their entries sum to one, and for computational reasons, the number of non-zero elements is typically limited to 2 or 4. The weakness of such models is that artifacts and other unrealistic deformations can occur around the model joints, particularly for meshes that model non-linear structures such as humans. However, the technique is frequently used in computer graphics and game design when a character's shape is known ahead of time.
% To assist in explanation, Figure \ref{fig:rigged_cylinder} shows skinning weight influences from three joints within a rigged cylinder mesh. Here, $|J| = 3$ and each vertex $v_{i} = (x_{i}, s_{i}) \in V$ has a skinning weight vector $s_{i} \in \mathbb{R}^{3}$. Each model joint is assigned a distinct RGB value, shown separately in (a), (b) and (c), and together in (d) by linearly combining the colours. This linear blend colorization scheme will be frequently used in later sections of this report.
% \begin{figure}[H]
% \centering
% \begin{subfigure}{0.25\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{wonky_pole/lower_bone}
% \caption{Lower joint.}
% \end{subfigure}%
% \begin{subfigure}{0.25\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{wonky_pole/middle_bone}
% \caption{Middle joint.}
% \end{subfigure}%
% \begin{subfigure}{0.25\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{wonky_pole/upper_bone}
% \caption{Upper joint.}
% \end{subfigure}%
% \begin{subfigure}{0.25\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{wonky_pole/linear_blend}
% \caption{Linear blend.}
% \end{subfigure}%
% \caption{A rigged cylinder with $|J| = 3$ and where each vertex $v_{i} = (x_{i}, s_{i}) \in V$ has a skinning weight vector $s_{i} \in \mathbb{R}^{3}$.}
% \label{fig:rigged_cylinder}
% \end{figure}
% Figure \ref{fig:rigged_quadruped} shows a more complex rigged quadruped mesh with $|J| = 25$ with skinning weight influences again shown by the linear blend colorization scheme. Again, each joint is assigned a unique RGB value and a vertex's colour is calculated by linearly combining joint colours with skinning weight vectors given by the $\{s_{i}\}$. A triangle's colour is then generated by averaging the colours given for the three surrounding vertices.
% \begin{figure}[H] % Example image
% \center{\includegraphics[width=0.5\linewidth]{linear_blend_bold_bones}}
% \caption{A rigged quadruped with $|J| = 25$ and where each vertex $v_{i} = (x_{i}, s_{i}) \in V$ has a skinning weight vector $s_{i} \in \mathbb{R}^{3}$. Visualization uses the linear blend colorization scheme in which each joint is assigned a unique RGB value.}
% \label{fig:rigged_quadruped}
% \end{figure}
Once a mesh has been suitably rigged, Linear Blend Skinning (LBS) can be used to apply the mesh deformation. Typically, a user assigns a transformation (e.g. a rotation and translation) to each joint and LBS computes an updated position for each vertex $v = (x, s)$, where $x$ is the original location and $s = \{s_j\}$ are the skinning weight influences of the joints $j$. The matrix $U_{j}$ (occasionally referred to as the \emph{reference pose}) is the mapping from the bone's default coordinate system to world coordinates. As a result, it need only be computed (and inverted) once, since it is invariant to changing joint angles. In general, a user will supply the matrices $D_{j}$ to define the mapping from the bone's deformed coordinate system to world coordinates.
The updated position of an input point $x$ can then be calculated by LBS:
\begin{equation}
LBS(D, x; U, s) = \sum_{j}s_{j}D_{j}U_{j}^{-1}x
\end{equation}
% TODO: Explain LBS for kinematic trees
Note this formulation is made slightly more complicated in the case of kinematic trees, since the world transformation of each joint is obtained by composing its local transformation with those of its ancestors along the chain.
Similarly to the approach mentioned above, this formulation can again be seen as a generative 3D model. In particular, an output vertex position $v'$ can be computed from a vector of 3D joint rotations $\theta$ and an input vertex position $v$ as follows:
\begin{equation}
v' = LBS(R(\theta), v)
\end{equation}
\end{definition}
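A minimal NumPy sketch of this computation for a single vertex is given below (the variable names are ours; joint transformations are assumed to be expressed as $4 \times 4$ homogeneous matrices so that they can be inverted and composed).
\begin{verbatim}
import numpy as np

def lbs_vertex(x, skin_weights, D, U_inv):
    """Linear blend skinning of a single vertex (illustrative sketch).
    x: (3,) rest-pose position; skin_weights: (J,) weights summing to one;
    D: (J, 4, 4) deformed joint-to-world transforms;
    U_inv: (J, 4, 4) inverted reference-pose transforms U_j^{-1}."""
    x_h = np.append(x, 1.0)                            # homogeneous coords
    blended = sum(w * (D[j] @ U_inv[j])
                  for j, w in enumerate(skin_weights))  # (4, 4)
    return (blended @ x_h)[:3]
\end{verbatim}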
With the above formulation, it is now possible to give an overview of recent hand tracking literature. In many cases, 3D morphable hand models are built to reflect the 27 biological hand bones (occasionally excepting the carpal bones which join the fingers to the wrist). Of course, while the major source of hand deformation is 3D pose, some variation is also present due to hand size and finger proportions.
Allen et al.~\cite{xxx} handle this variation by adapting a 3D surface with displacement maps, with various constraints designed to avoid self-intersections, before adopting the linear blend skinning formulation defined above. Rhee et al.~\cite{xxx} learn a shape deformation space (with a similar technique to that used for faces) and user-specific skinning weights for LBS. Albrecht et al. undergo a laborious process to create extremely detailed hand models by laser scanning plaster casts~\cite{xxx}. A significant advance was presented by Taylor et al.~\cite{xxx}, who present a method to learn a personalized hand model (although not a shape basis) given an input video sequence (with depth) of a user slowly articulating their fingers. Ballan et al.~\cite{xxx} follow a similar process by making use of multi-view input.
% TODO: Allen et al. B. Allen, B. Curless, and Z. Popovic. Articulated body de-formation from range scan data. In ACM Transactions on Graphics (TOG). ACM, 2002. 2
% Represent the model as an adaptation of a standard subdivision surface model with linear blend skinning. Cruicially their adaptations are displacement maps on top of a base surface. The displacements must be limited in magnitude to avoid self-intersections and their shape basis is forced to coincide with the input scans.
% % TODO: Rhee et al. T. Rhee, U. Neumann, and J. P. Lewis. Human hand modeling from surface anatomy. In Proceedings of the 2006 symposium on Interactive 3D graphics and games, pages 27–34. ACM, 2006. 2
% Early work included the work of Rhee et al.~\cite{xxx} who learn a 3D shape deformation space from hands by fitting 3D model with user-specific skinning.
% % TODO: Albrecht et al. I. Albrecht, J. Haber, and H.-P. Seidel. Construction and animation of anatomically based human hand models. In Proc. Eurographics, 2003. 2
% Albrecht et al. go to the other extreme creating very detailed, physically-realistic hand models. However, the process is laborious requiring plaster casting of human hands, performing laser scans, and manually creating a physics-enabled hand model.
% % TODO: Taylor et al. J. Taylor, R. Stebbing, V. Ramakrishna, C. Keskin, J. Shotton, S. Izadi, A. Hertzmann, and A. Fitzgibbon. User-specific hand modeling from monocular depth sequences. In Proc.CVPR, 2014. 1, 2, 4, 5, 7
% A more automatic technique is presented by Taylor et al., which generates personalized hand models given noisy input depth sequences where the user’s hand rotates 180 degrees whilst articulating fingers. A continuous optimization that jointly solves for correspondences and model parameters across a smooth subdivision surface with as rigid as possible (ARAP) regularization leads to high-quality userspecific rigged hand models, though not a shape basis. Whilst the process is automatic, the hands are required to cover the full range of articulations, and longer sequences are required, leading to more complex capture requirements and more costly optimization.
% % TODO: Ballan et al. L. Ballan, A. Taneja, J. Gall, L. V. Gool, and M. Pollefeys. Motion capture of hands in action using discriminative salient points. In Proc. ECCV, 2012. 1, 2
% Ballan et al. construct a personalized hand
% mesh using a multiview camera rig and Poisson surface reconstruction, which is then manually skinned. They demonstrate high-quality results with complex two-handed and
% hand-object interactions, closely fitting the detailed mesh
% model to the data. However, this system focuses on pose
% estimation as opposed to the shape construction, which is
% performed in an time consuming manual manner.
% TODO: Khamis et al. Learning an efficient model of hand shape variation from depth images
% TODO: Fits Like a Glove: Tan et al. https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/FitsLikeAGlove.pdf
% TODO: Taylor et al. Efficient and Precise Interactive Hand Tracking Through Joint, Continuous Optimization of Pose and Correspondences
\subsection{Modelling the human body surface}
% hasler et al. 2010
Of all the human modelling work discussed, the methods of most relevance to this thesis are those which represent the entire human body surface. It is first important to characterize the two deformation modes which modelling algorithms must handle. Firstly, there is considerable variation in \emph{shape} characteristics between different human subjects: humans vary not only in their heights and weights, but also in their body part proportions, muscle density, fat distribution, etc. Secondly, humans exhibit significant \emph{pose} variation, characterized by the range of motion of body parts (e.g. arms and legs). In general, pose is likely to change for an individual subject over a sequence.
The earliest deformable 3D model of the human body was presented by Allen et al.~\cite{xxx} (although ~\cite{xxx} came soon after with similar ideas). Allen et al. learnt a PCA shape space model from 250 registered body scans found in the CAESAR dataset. The model was articulated through a set of pose parameters, which use linear blend skinning to interpolate rotation matrices assigned to the joints and so transform the model vertices. Unfortunately, this approach suffers from artefacts around joint locations due to a loss of volume. For this reason, it is important to note that pose and shape are not entirely independent; in fact, body shape does change due to pose variation. Imagine, for example, how a fatty stomach region would deform during a walking sequence. SCAPE~\cite{anguelov05scape} improved over this by introducing a model equipped with both body shape variation and pose-dependent shape changes, expressed in terms of triangle deformations (rather than vertex displacements; see~\cite{loper15smpl} for a comprehensive overview). An important advance was made by Hasler et al.~\cite{xxx}, who learn two linear blend rigs: one for pose and one for body shape. In this model, shape change was controlled through the introduction of abstract bones that further deform the vertices.
Perhaps the most significant advance, however, was the introduction of the Skinned Multi-Person Linear (SMPL) model of Loper et al.~\cite{loper15smpl}. SMPL follows a similar design philosophy to SCAPE by decomposing shape into identity-dependent and pose-dependent components. However, unlike SCAPE, SMPL adopts a vertex-based skinning approach based on corrective blend shapes. The model is first taught how human bodies deform through pose changes using 1786 high-resolution 3D scans of different subjects in a wide variety of poses. Following alignment to a template mesh, a linear shape model for each biological gender is created from the CAESAR dataset \cite{robinette2002civilian} using principal component analysis (PCA). SMPL can then be viewed as a function which makes use of the shape basis and linear blend skinning to map a set of pose and shape parameters to a set of vertex locations. Precisely, \emph{pose} is given as a set of 3D rotations (per-joint and global) in axis-angle form $\pose \in \RR{24}{3}$. \emph{Shape} is then given as coefficients for the learned shape basis $\shape \in \R{10}$. The SMPL function can then be written as:
\begin{equation}
v = \SMPL(\pose, \shape) + \trans
\end{equation}
where $v \in \RR{6890}{3}$ and $\trans \in \R{3}$ is a global translation parameter. Further details on SMPL have been left to \Cref{chap:3dmulti} of this thesis, which makes use of the model to examine uncertainty when deriving 3D reconstructions of ambiguous input imagery.
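The following deliberately simplified sketch (illustrative only; it omits the pose-corrective blend shapes and assumes a user-supplied skinning function over the kinematic tree) shows the overall structure of such a forward pass; see~\cite{loper15smpl} for the exact formulation.
\begin{verbatim}
import numpy as np

def smpl_like_forward(betas, pose, trans, template, shape_dirs,
                      joint_regressor, skin_weights, lbs_fn):
    """Simplified SMPL-style forward pass (pose blend shapes omitted).
    betas: (10,) shape coefficients; pose: (24, 3) per-joint axis-angle
    rotations (index 0 is the global rotation); trans: (3,) translation;
    template: (6890, 3) mean mesh; shape_dirs: (6890, 3, 10) shape basis;
    lbs_fn: user-supplied skinning function over the kinematic tree."""
    v_shaped = template + shape_dirs @ betas   # identity-dependent offsets
    joints = joint_regressor @ v_shaped        # (24, 3) joint locations
    v_posed = lbs_fn(v_shaped, joints, pose, skin_weights)
    return v_posed + trans                     # global translation last
\end{verbatim}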
% Another key ambition for the SMPL model was the motivatio to create a realistic data-driven human body model which can be rendered in real-time using standard engines, such as Unity~\cite{unity2017} or Blender~\cite{blender2017}. Having been designed for animation, SMPLs base template has a number of useful qualities for this work; the underlying mesh is a clean structure and comprises relatively few polygons. A novelty of this model is that it encodes explicit and meaningful body joint positions. Some sample SMPL meshes are shown in Figure \ref{fig:smpl_model}.
% TODO: GHUML & GHUM model
More recently, SMPL has been combined with face and hand models to add expressive capabilities~\cite{xiang19monocular, joo18total, pavlakos19expressive}. CAPE~\cite{CAPE:CVPR:20} shows how to add a clothing parameter to effectively model humans in clothing, a challenge generally handled by allowing SMPL model vertices to vary independently of the provided blend shapes; CAPE instead learns a shape prior over these freeform vertex deformations. SMPL has also been improved upon by STAR~\cite{STAR:2020}, which constructs a part-based shape space (closely related to the local PCA spaces discussed in the earlier shape section of this literature review). The authors show this new parameterization is much more efficient (using approximately 20\% of the model parameters of SMPL) and avoids capturing spurious long-range correlations present in the training dataset. They also show a method for learning shape-dependent pose-corrective blendshapes, which better model how individuals with different body shapes deform with motion. The tangential work of Xu et al.~\cite{ghum-ghuml} trains an end-to-end network to learn 3D human body model parameters (including faces and hands) for an input artist model using variational auto-encoders and normalizing flows. This work will be further explored in \Cref{chap:3dmulti}, in which these generative models will be fully examined.
\begin{figure}[t] % Example image
\center{\includegraphics[width=0.5\linewidth]{smpl_wbg}}
\caption{SMPL model showing pose-invariant shape changes, reprinted from~\cite{loper15smpl}.}
\label{fig:smpl_model}
\end{figure}
\subsection{Modelling animals}
There is still relatively little work specifically focusing on the 3D scanning~\cite{xxx} and modelling of animal categories. The variation in animal shapes and sizes, combined with the practical challenges associated with scanning live animal subjects (particularly in attaching traditional motion capture equipment), makes scanning a difficult task. As a result, there is a significant lack of real 3D animal training data available in the public domain which could otherwise have been employed to build 3D deformable models. As with humans, animal deformations can again be factored into shape (e.g. variation mostly due to identity) and pose (variation due to articulated motion). However, the enormous diversity among animal species and even between individual breeds results in a much more complex shape space.
Some early work by Favreau et al.~\cite{xxx} describes a method for animating an artist-designed rigged 3D model by tracking a 2D sequence. Chen et al.~\cite{xxx} learn a shape space by registering 11 3D shark models downloaded from the Internet. Cashman et al.~\cite{xxx} learn a morphable model of dolphin shapes by adapting a representative 3D model to 2D images. Ntouskos et al.~\cite{xxx} fit geometric primitives to manually-segmented animal parts generated from an input collection. Reinert et al.~\cite{xxx} present an effective method for fitting generalized cylinders to an input video sequence supplied with sketched limb tracks. They demonstrate reconstructed results with 3D texture on a few quadruped sequences. So far, none of these techniques for animal reconstruction explicitly factor shape and pose.
\subsubsection{SMAL}
A similar technique to that used to build the SMPL model has recently been used to build the Skinned Multi-Animal Linear model (SMAL)~\cite{zuffi2017menagerie}, a generative animal model exhibiting realistic 3D shape (see Figure \ref{fig:smal_model_shape}) and pose (see Figure \ref{fig:smal_model_poses}). Due to the lack of available motion capture data for animal subjects, the SMAL model is learnt from a set of $41$ 3D scans of toy figurines in arbitrary poses. The figurines span five quadruped families and include examples of lions, cats, tigers, dogs, horses, and many more, although notably for this work no rodent toys were included. The paper introduces a new technique to accurately align each toy scan to a common template, allowing the shape space to be learnt.
\begin{figure}[t]
\centering
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width=1\linewidth]{smal/default}
\caption{Default SMAL mesh.}
\end{subfigure}%
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width=1\linewidth]{smal/horse}
\caption{SMAL in horse shape.}
\end{subfigure}%
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width=1\linewidth]{smal/lion}
\caption{SMAL in lion shape.}
\end{subfigure}%
\caption{SMAL with varying shape parameters.}
\label{fig:smal_model_shape}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width=1\linewidth]{smal/pose_1}
\end{subfigure}%
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width=1\linewidth]{smal/pose_2}
\end{subfigure}%
\begin{subfigure}{0.3\linewidth}
\centering
\includegraphics[width=1\linewidth]{smal/pose_3}
\end{subfigure}%
\caption{SMAL with varying pose parameters.}
\label{fig:smal_model_poses}
\end{figure}
From the paper, SMAL is defined as a function $\SMAL(\pose, \shape)$ parameterized by pose-invariant shape $\shape \in \R{41}$ (again, coefficients of a low-dimensional shape space) and pose $\pose \in \RR{32}{3}$ (including global rotation). There are three pose parameters for each of the $32$ body joints and an additional three to express the global rotation. Global translation $\gamma$ is expressed by a further three parameters. The $\SMAL$ function returns a triangulated surface comprising $6890 \times 3$ vertex locations. \Cref{chap:cgas} and \Cref{chap:wldo} of this thesis make use of the SMAL model in order to reconstruct various quadruped categories.
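As a purely illustrative sketch (the names are hypothetical and the values arbitrary; a SMAL-like forward function is assumed to be available), the parameter shapes involved when driving such a model are simply:
\begin{verbatim}
import numpy as np

def random_smal_params(rng):
    """Draw illustrative SMAL-style parameters (shapes only)."""
    beta = rng.normal(size=41) * 0.1          # shape coefficients
    theta = rng.normal(size=(32, 3)) * 0.05   # axis-angle pose (incl. global rotation)
    gamma = np.zeros(3)                       # global translation
    return beta, theta, gamma

# vertices = smal(theta, beta) + gamma        # smal() assumed to be provided
\end{verbatim}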
| {
"alphanum_fraction": 0.7570211762,
"avg_line_length": 104.5044843049,
"ext": "tex",
"hexsha": "09ea02768e5c84c80c9515e6d9c5001732ae6e66",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2fd86bb807b830c06944d9c59962939d9a95ca7a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "benjiebob/phd-thesis-template",
"max_forks_repo_path": "Chapter3/2_modelling-articulated.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2fd86bb807b830c06944d9c59962939d9a95ca7a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "benjiebob/phd-thesis-template",
"max_issues_repo_path": "Chapter3/2_modelling-articulated.tex",
"max_line_length": 1443,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2fd86bb807b830c06944d9c59962939d9a95ca7a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "benjiebob/phd-thesis-template",
"max_stars_repo_path": "Chapter3/2_modelling-articulated.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 11284,
"size": 46609
} |
\documentclass[runningheads]{llncs}
\begin{document}
\subsection{Reliable Totally Ordered Multicast} \label{multicast}
Application messages inside the group will be sent via reliable totally ordered
Multicast. In the following, we will describe our implementation and explain
how we handled some corner cases we stumbled across.
\subsubsection{Procedure} \label{multicastprocedure}
Reliable Totally Ordered Multicast is built upon the IP multicast protocol.
Each server uses two sockets to realize it: a listener socket which listens on
our defined multicast address, and a second socket which is used to send
messages, so that every message has a unique sender address. The second socket
is needed so that we know exactly which address sent a message and can answer
that server directly.
When a server sends a message to the group, we add several fields to the
message. First we give the message a type, so we can distinguish between
messages that carry content and messages that are needed for the protocol. We
also add a unique identifier for the message, the unique identifier of the
sender, and the sender's sequence number $S$. $S$ is needed for reliability: it
tells us which messages a server has missed, so that these can be requested for
redelivery.
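A minimal sketch of such a message (the field and type names are illustrative, not taken verbatim from our implementation) could look as follows:
\begin{verbatim}
from dataclasses import dataclass
from enum import Enum, auto
import uuid

class MsgType(Enum):
    APPLICATION = auto()   # content messages
    PROPOSAL = auto()      # proposed sequence number (ISIS)
    AGREEMENT = auto()     # agreed sequence number (ISIS)
    NACK = auto()          # request for redelivery of missed messages

@dataclass
class GroupMessage:
    type: MsgType
    msg_id: str            # unique identifier of this message
    sender_id: str         # unique identifier of the sending server
    s: int                 # sender's sequence number S, used for reliability
    payload: object = None

msg = GroupMessage(MsgType.APPLICATION, str(uuid.uuid4()),
                   "server-1", s=42, payload="...")
\end{verbatim}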
Once the new message has been sent, each server listening on the multicast
address may receive it. First we check whether we have already received this
message, by comparing its unique message identifier with those of our received
messages. If we have already received the message we ignore it; if we have not,
we put it in our received backlog and, if we are not the sender, send a copy of
the message to the group. Then we check the piggybacked $S$ value and compare
it with the current $R$ for that sender. If $S = R + 1$ the new message is the
next message from this sender; if $S \leq R$ the message was already delivered
and we can ignore it; and if $S > R + 1$ we have missed messages, which either
need to be requested from this address or may already be found in the holdback
queue of the server.
We first look at the case where the received message is the next expected one
($S = R + 1$). For this case we decided to implement ISIS total ordering.
% TODO: Maybe we need to explain here why we need total ordering. Maybe we also
% do this earlier
First we set $R$ to $R + 1$, and then we start processing the message by
looking at its type. If it is a new application message we need to propose an
order to the original sender by answering with $pq = \max(aq, pq) + 1$, where
$aq$ is the largest agreed sequence number and $pq$ is the largest proposed
sequence number. This answer is sent to the sender's second socket address, not
via multicast, so it only reaches the server which needs it.
The sending server then collects these answers from all active members of the
group (we discuss later how this interacts with servers dynamically joining and
leaving, see Section~\ref{multicastchanges}). When all answers are collected,
the server broadcasts a new message with a different type. The content of this
message is the agreed sequence number $a =
\max(all\_proposed\_sequence\_numbers)$. We also add a unique ID, the sender ID
and the next $S$ to this message, so that it is also reliable.
Upon receiving this message, we compare the $S$ value with the current $R$ in
the same way as described above, but we handle the message differently if
$S = R + 1$: now we need to reorder the delivery queue based on the received
$a$. We keep two queues to make this easier to handle: a holdback queue of
messages that cannot be processed yet because of missing messages, and the
delivery queue, which holds the next messages that will be delivered.
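The core of this ordering step can be sketched as follows (illustrative Python; networking, delivery checks and error handling are omitted):
\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class IsisState:
    aq: int = 0                                  # largest agreed number seen
    pq: int = 0                                  # largest number we proposed
    pending: dict = field(default_factory=dict)  # msg_id -> (msg, agreed or None)

def propose_order(state):
    """Receiver side: answer a new application message with a proposal."""
    state.pq = max(state.aq, state.pq) + 1
    return state.pq                  # sent back to the sender one-to-one

def agree_order(proposals):
    """Sender side: once all active members answered, agree on the maximum."""
    return max(proposals)

def apply_agreement(state, msg_id, agreed):
    """Receiver side: record the agreed number; the delivery queue is kept
    sorted by it."""
    state.aq = max(state.aq, agreed)
    message, _ = state.pending[msg_id]
    state.pending[msg_id] = (message, agreed)
\end{verbatim}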
Now we look at what we can do if we notice that we have missed a message. If
$S > R + 1$ we know we have missed at least one message from the sender. We put
the new message in our holdback queue for later processing and then check
whether the holdback queue already contains the next message in the order
defined by $S$ and $R$. If it does, we process these messages and remove them
from the holdback queue until $S = R + 1$ no longer holds. We collect all
missing $S$ numbers and send the sender a one-to-one message listing them. The
sender then collects the corresponding messages and resends them. Neither of
these messages is protected with an $S$ value, because if one of them is lost
we simply run into the next failure of $S = R + 1$ when the next broadcast
message arrives, and we end up running the algorithm again.
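The sequence number handling can be sketched as follows (illustrative Python with hypothetical names, using the message fields sketched earlier; the returned lists would drive delivery and the one-to-one redelivery request):
\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class ReliabilityState:
    r: dict = field(default_factory=dict)         # sender -> last processed S
    holdback: dict = field(default_factory=dict)  # sender -> {S: message}

def on_receive(state, msg):
    """Process a multicast message carrying piggybacked sequence number msg.s.
    Returns (messages now processable in order, missing S numbers to request)."""
    queue = state.holdback.setdefault(msg.sender_id, {})
    r = state.r.get(msg.sender_id, 0)
    if msg.s <= r:
        return [], []               # already delivered: ignore the duplicate
    queue[msg.s] = msg              # hold back until it can be processed in order
    in_order = []
    while r + 1 in queue:           # drain consecutive messages while S = R + 1
        r += 1
        in_order.append(queue.pop(r))
    state.r[msg.sender_id] = r
    # Any remaining gap means we missed messages; request them one-to-one.
    missing = [s for s in range(r + 1, max(queue))
               if s not in queue] if queue else []
    return in_order, missing
\end{verbatim}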
\subsubsection{Server leaving and joining the group} \label{multicastchanges}
If a new server joins, the group management makes sure that all servers set up
the new member correctly and set $R$ for this server to 0. When the leader
welcomes the new server, it also shares its current $R$ numbers for all members
of the group and all messages in the delivery queue, so that the new server can
handle the next message, which might be an order proposal.
When waiting for proposal messages, the server will not wait for a proposal
from a newly joined server if that server was not active when the original
message was sent. We realize this by keeping a snapshot of the group view at
the time the message was sent and then only waiting for messages from members
in $members\_at\_time\_of\_sending \cap current\_members$. This also solves the
issue of waiting for messages from servers which have already left the group.
\end{document}
| {
"alphanum_fraction": 0.7812723374,
"avg_line_length": 57.6907216495,
"ext": "tex",
"hexsha": "b9f8a64807d0b0db499a1251fe1aea41f38cd3f6",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6ea2a696100c046fd5d5ede468febd9072f3763f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Conni2461/admission_handler",
"max_forks_repo_path": "reports/report3/multicast.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6ea2a696100c046fd5d5ede468febd9072f3763f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Conni2461/admission_handler",
"max_issues_repo_path": "reports/report3/multicast.tex",
"max_line_length": 83,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "6ea2a696100c046fd5d5ede468febd9072f3763f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Conni2461/admission_handler",
"max_stars_repo_path": "reports/report3/multicast.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-11T04:29:18.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-02-11T04:29:18.000Z",
"num_tokens": 1297,
"size": 5596
} |
%#!rm -f tigerpsdfmt4* && lualatex -shell-escape graphicxpsd
\documentclass[luatex]{article}
\usepackage{shortvrb}\MakeShortVerb{\|}
\usepackage{hyperref}
\usepackage{graphicx}
\usepackage{graphicxpsd}
\title{\textsf{graphicxpsd} Package}
\author{Munehiro Yamamoto}
\date{2021/01/07 v1.2}
\begin{document}
\maketitle
\begin{abstract}
This package provides Adobe Photoshop Data format (PSD) support
for the \textsf{graphicx} package
using the \texttt{sips} (Darwin/macOS) or \texttt{magick} (ImageMagick) command.
\end{abstract}
\section{Motivation}
The \textsf{graphicx} package already supports many graphics image formats, as listed below.
\begin{itemize}
\item non-vector formats: jpg, png, bmp, and so on
\item PostScript-style formats: eps, ps
\item PDF-style formats: pdf, ai
\end{itemize}
However, it currently does not support Adobe Photoshop Data format (PSD).
To address this, we developed the \textsf{graphicxpsd} package
to support the PSD format via PSD-to-PDF conversion
using one of two image converters.
\begin{itemize}
\item \texttt{sips}:
pre-installed command in Darwin/macOS
\item \texttt{magick}:
bundled command in \href{https://www.imagemagick.org/}{ImageMagick}
\end{itemize}
\section{Loading \textsf{graphicxpsd} Package}
Load \textsf{graphicxpsd} package after loading \textsf{graphicx} package.
\begin{quote}
\begin{verbatim}
\usepackage{graphicx}
\usepackage[<options>]{graphicxpsd}
\end{verbatim}
\end{quote}
The list of available options is the following.
\begin{itemize}
\item |dvipdfmx|, |xetex|, |pdftex|, |luatex|:
supported driver options;
You can also give the driver option as a global option.
\item |sips| (default),
|magick| (same as |imagemagick|), |convert|\footnotemark: % ,
% |graphicsmagick|:
supported image converters;
\begin{itemize}
\item
Darwin/macOS users do not have to do anything
unless they choose ImageMagick as the PSD-to-PDF converter.
\item
If you use ImageMagick~7, you may choose |magick|.
\item
If you use ImageMagick~6 or a lower version, choose |convert|.
% \item
% If you use GraphicsMagick, you may choose |graphicsmagick|.
\end{itemize}
\item |cache=true|: supports including cached images for all PSD files.
If no cached image exists for a PSD file,
\textsf{graphicxpsd} attempts a PSD-to-PDF conversion of that file.
\end{itemize}
\footnotetext{When the ImageMagick project released ImageMagick~7,
they changed \texttt{convert} to \texttt{magick}
to avoid the long-standing name conflict
between ImageMagick's \texttt{convert.exe} and
the Windows ``\texttt{convert.exe}'' program,
which complains about invalid parameters when invoked by mistake;
renaming the ImageMagick program avoids this conflict.}
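For example, assuming the options described above, a XeTeX document using ImageMagick~7 with caching enabled could load the package as follows:
\begin{quote}
\begin{verbatim}
\usepackage{graphicx}
\usepackage[xetex,magick,cache=true]{graphicxpsd}
\end{verbatim}
\end{quote}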
\section{Example}
Typeset the following {\LaTeX} document with Lua{\TeX}, enabling shell escape,
that is, run |lualatex -shell-escape|.
\begin{quote}
\small
\begin{verbatim}
%#!lualatex -shell-escape
\documentclass[luatex]{article}%%set luatex driver as global option
\usepackage{graphicx}
\usepackage{graphicxpsd}
\begin{document}
\includegraphics{tigerpsdfmt.psd}
\end{document}
\end{verbatim}
\end{quote}
Then, the result is as below.
\begin{center}
\includegraphics{tigerpsdfmt.psd}
\end{center}
Incidentally, the above \texttt{tigerpsdfmt.psd} file is converted from
the \texttt{tiger.eps} file (a.k.a.~``cubic spline tiger''),
which comes with Ghostscript.
\begin{quote}
\small
\begin{verbatim}
$ file tigerpsdfmt.psd
tigerpsdfmt.psd: Adobe Photoshop Image, 550 x 568, RGBA, 4x 8-bit channels
\end{verbatim}
\end{quote}
\end{document}
| {
"alphanum_fraction": 0.7738595018,
"avg_line_length": 30.8017241379,
"ext": "tex",
"hexsha": "fed24f3af964f6dec93e9da23d4975b33338a759",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c6c1426cd0230e6a388916a81c710e4bbd10e2cb",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "munepi/graphicxpsd",
"max_forks_repo_path": "graphicxpsd.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "c6c1426cd0230e6a388916a81c710e4bbd10e2cb",
"max_issues_repo_issues_event_max_datetime": "2018-01-26T09:02:08.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-01-26T09:01:45.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "munepi/graphicxpsd",
"max_issues_repo_path": "graphicxpsd.tex",
"max_line_length": 81,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "c6c1426cd0230e6a388916a81c710e4bbd10e2cb",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "munepi/graphicxpsd",
"max_stars_repo_path": "graphicxpsd.tex",
"max_stars_repo_stars_event_max_datetime": "2019-10-15T02:43:53.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-01-28T00:53:14.000Z",
"num_tokens": 1076,
"size": 3573
} |
\chapter{Code Generation}
This chapter describes the code generation process. The examples of
generated code presented here are produced by the compiler, but some
cosmetic changes have been made to enhance readability.
Code generation is done one procedure at a time. An Icon procedure is,
in general, translated into several C functions. There is always an
\textit{outer function}
for the procedure. This is the function that is seen as implementing
the procedure. In addition to the outer function, there may be several
functions for success continuations that are used to implement
generative expressions.
The outer function of a procedure must have features that support the
semantics of an Icon call, just as a function implementing a run-time
operation does. In general, a procedure must have a procedure block at
run time. This procedure block references the outer function. All
functions referenced through a procedure block must conform to the
compiler system's standard calling conventions. However, invocation
optimizations usually eliminate the need for procedure variables and
their associated procedure blocks. When this happens, the calling
conventions for the outer function can be tailored to the needs of the
procedure.
As explained in Chapter 14, the standard calling convention requires
four parameters: the number of arguments, a pointer to the beginning
of an array of descriptors holding the arguments, a pointer to a
result location, and a success continuation to use for suspension. The
function itself is responsible for dereferencing and argument list
adjustment. In a tailored calling convention for an outer function of
a procedure, any dereferencing and argument list adjustment is done at
the call site. This includes creating an Icon list for the end of a
variable-sized argument list. The compiler produces code to do this
that is optimized to the particular call. An example of an
optimization is eliminating dereferencing when type inferencing
determines that an argument cannot be a variable reference.
The number of arguments is never needed in these tailored calling
conventions because the number is fixed for the procedure. Arguments
are still passed via a pointer to an array of descriptors, but if
there are no arguments, no pointer is needed. If the procedure returns
no value, no result location is needed. If the procedure does not
suspend, no success continuation is needed.
In addition to providing a calling interface for the rest of the
program, the outer function must provide local variables for use by
the code generated for the procedure. These variables, along with
several other items, are located in a procedure frame. An Icon
procedure frame is implemented as a C structure embedded within the
frame of its outer C function (that is, as a local struct
definition). Code within the outer function can access the procedure
frame directly. However, continuations must use a pointer to the
frame. A global C variable, \texttt{pfp}, points to the frame of the currently
executing procedure. For efficiency, continuations load this pointer
into a local register variable. The frame for a main procedure might
have the following declaration.
\goodbreak
\begin{iconcode}
struct PF00\_main \{\\
\>struct p\_frame old\_pfp;\\
\>dptr old\_argp;\\
\>dptr rslt;\\
\>continuation succ\_cont;\\
\>struct \{\\
\>\>struct tend\_desc *previous;\\
\>\>int num;\\
\>\>struct descrip d[5];\\
\>\>\} tend;\\
\>\};\\
\end{iconcode}
\noindent with the definition
\iconline{ \>struct PF00\_main frame; }
\noindent in the procedure's outer function. A procedure frame always
contains the following five items: a pointer to the frame of the
caller, a pointer to the argument list of the caller, a pointer to the
result location of this call, a pointer to the success continuation of
this call, and an array of tended descriptors for this procedure. It
may also contain C integer variables, C double variables, and string
and cset buffers for use in converting values. If debugging is
enabled, additional information is stored in the frame. The structure
\texttt{p\_frame} is a generic procedure frame containing a single
tended descriptor. It is used to define the pointer \texttt{old\_pfp}
because the caller can be any procedure.
The argument pointer, result location, and success continuation of the
call must be available to the success continuations of the
procedure. A global C variable, \texttt{argp}, points to the argument list for the
current call. This current argument list pointer could have been put
in the procedure frame, but it is desirable to have quick access to
it. Quick access to the result location and the success continuation
of the call is less important, so they are accessed indirectly through
the procedure frame.
The array of descriptors is linked onto the chain used by the garbage
collector to locate tended descriptors. These descriptors are used for
Icon variables local to the procedure and for temporary variables that
hold intermediate results. If the function is responsible for
dereferencing and argument list adjustment (that is, if it does not
have a tailored calling convention), the modified argument list is
constructed in a section of these descriptors.
The final thing provided by the outer function is a \textit{control
environment} in which code generation starts. In particular, it
provides the bounding environment for the body of the procedure and
the implicit failure at the end of the procedure. The following C
function is the tailored outer function for a procedure named
\texttt{p}. The procedure has arguments and returns a result. However,
it does not suspend, so it needs no success continuation.
\goodbreak
\begin{iconcode}
static int P01\_p(args, rslt)\\
dptr args;\\
dptr rslt;\\
\{\\
\>struct PF01\_p frame;\\
\>register int signal;\\
\>int i;\\
\>frame.old\_pfp = pfp;\\
\>pfp = (struct p\_frame *)\&frame;\\
\>frame.old\_argp = argp;\\
\>frame.rslt = rslt;\\
\>frame.succ\_cont = NULL;\\
\\
\>for (i = 0; i < 3; ++i)\\
\>\>frame.tend.d[i].dword = D\_Null;\\
\>argp = args;\\
\>frame.tend.num = 3;\\
\>frame.tend.previous = tend;\\
\>tend = (struct tend\_desc *)\&frame.tend;\\
\\
\>\textit{translation of the body of procedure p}\\
\\
L10: /* bound */\\
L4: /* proc fail */\\
\>tend = frame.tend.previous;\\
\>pfp = frame.old\_pfp;\\
\>argp = frame.old\_argp;\\
\>return A\_Resume;\\
L8: /* proc return */\\
\>tend = frame.tend.previous;\\
\>pfp = frame.old\_pfp;\\
\>argp = frame.old\_argp;\\
\>return A\_Continue;\\
\>\}\\
\end{iconcode}
\noindent
The initialization code reflects the fact that this function has three
tended descriptors to use for local variables and intermediate
results. \texttt{L10} is both the bounding label and the failure label for the
body of the procedure. Code to handle procedure failure and return
(except for setting the result value) is at the end of the outer
function. As with bounding labels, the labels for these pieces of code
have associated signals. If a procedure fail or return occurs in a
success continuation, the continuation returns the corresponding
signal which is propagated to the outer function where it is converted
into a goto. The code for procedure failure is located after the body
of the procedure, automatically implementing the implicit failure at
the end of the procedure.
\section{Translating Icon Expressions}
Icon's goal-directed evaluation makes the implementation of control
flow an important issue during code generation. Code for an expression
is generated while walking the expression's syntax tree in forward
execution order. During code generation there is always a
\textit{current failure action}. This action is either ``branch to a
label'' or ``return a signal''. When the translation of a procedure
starts, the failure action is to branch to the bounding label of the
procedure body. The action is changed when generators are encountered
or while control structures that use failure are being translated.
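The two failure actions correspond to two concrete forms of failure
code in the generated C. The label number and the signal in the sketch
below are placeholders, but both forms appear throughout the examples
in this chapter.
\goodbreak
\begin{iconcode}
/* failure action: branch to a label */\\
\>goto L5 /* else */;\\
\\
/* failure action: return a signal */\\
\>return A\_Resume;\\
\end{iconcode}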
The allocation of temporary variables to intermediate results is
discussed in more detail later. However, some aspects of it will be
addressed before presenting examples of generated code. The result
location of a subexpression may be determined when the parent
operation is encountered on the way down the syntax tree. This is
usually a temporary variable, but does not have to be. If no location
has been assigned by the time the code generator needs to use it, a
temporary variable is allocated for it. This temporary variable is
used in the code for the parent operation.
The code generation process is illustrated below with examples that
use a number of control structures and operations. Code generation
for other features of the language is similar.
Consider the process of translating the following Icon expression:
\iconline{return if a = (1 | 2) then "yes" else "no" }
\noindent
When this expression is encountered, there is some current failure
action, perhaps a branch to a bounding label. The \texttt{return}
expression produces no value, so whether a result location has been
assigned to it is of no consequence. If the argument of a
\texttt{return} fails, the procedure fails. To handle this
possibility, the current failure action is set to branch to the label
for procedure failure before translating the argument (in this
example, that action is not used). The code for the argument is then
generated with its result location set to the result location of the
procedure itself. Finally the result location is dereferenced and
control is transferred to the procedure return label. The
dereferencing function, \texttt{deref}, takes two arguments: a pointer
to a source descriptor and a pointer to a destination descriptor.
\goodbreak
\begin{iconcode}
\>\>\textit{code for the if expression }\\
\>\>deref(rslt, rslt);\\
\>\>goto L7 /* proc return */;\\
\end{iconcode}
The control clause of the \texttt{if} expression must be bounded. The
code implementing the \texttt{then} clause must be generated following
the bounding label for the control clause. A label must also be set up
for the \texttt{else} clause with a branch to this label used as the
failure action for the control clause. Note that the result location
of each branch is the result location of the \texttt{if} expression
which is in turn the result location of the procedure. Because neither
branch of the \texttt{if} expression contains operations that suspend,
the two control paths can be brought together with a branch to a
label.
\goodbreak
\begin{iconcode}
\>\>\textit{code for control clause}\\
\>L4: /* bound */\\
\>\>rslt->vword.sptr = "yes";\\
\>\>rslt->dword = 3;\\
\>\>goto L6 /* end if */;\\
\>L5: /* else */\\
\>\>rslt->vword.sptr = "no";\\
\>\>rslt->dword = 2;\\
\>L6: /* end if */\\
\end{iconcode}
\noindent
Using a branch and a label to bring together the two control paths of
the \texttt{if} expression is an optimization. If the \texttt{then} or
the \texttt{else} clauses contain operations that suspend, the general
continuation model must be used. In this model, the code following the
\texttt{if} expression is put in a success continuation, which is then
called at the end of both the code for the \texttt{then} clause and
the code for the \texttt{else} clause.
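The following fragment sketches that alternative shape for the example
above. \texttt{P09\_main} is an invented name for a continuation
holding the translation of whatever follows the \texttt{if}; the
dispatch on its returned signal is elided because the cases depend on
the surrounding context, and every case would end in a \texttt{goto}
or a \texttt{return}.
\goodbreak
\begin{iconcode}
\>L4: /* bound */\\
\>\>rslt->vword.sptr = "yes";\\
\>\>rslt->dword = 3;\\
\>\>switch (P09\_main()) \{\\
\>\>\>/* signal dispatch elided */\\
\>\>\>\}\\
\>L5: /* else */\\
\>\>rslt->vword.sptr = "no";\\
\>\>rslt->dword = 2;\\
\>\>switch (P09\_main()) \{\\
\>\>\>/* signal dispatch elided */\\
\>\>\>\}\\
\end{iconcode}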
Next consider the translation of the control clause. The numeric
comparison operator takes two operands. In this translation, the
standard calling conventions are used for the library routine
implementing the operator. Therefore, the operands must be in an array
of descriptors. This array is allocated as a sub-array of the tended
descriptors for the procedure. In this example, tended location 0 is
occupied by the local variable, \texttt{a}. Tended locations 1 and 2 are free
to be allocated as the arguments to the comparison operator. The code
for the first operand simply builds a variable reference.
\goodbreak
\begin{iconcode}
\>frame.tend.d[1].dword = D\_Var;\\
\>frame.tend.d[1].vword.descptr = \&frame.tend.d[0] /* a */;\\
\end{iconcode}
\noindent
However, the second operand is alternation. This is a generator and
requires a success continuation. In this example, the continuation is
given the name \texttt{P02\_main} (the Icon expression is part of the main
procedure). The continuation contains the invocation of the run-time
function implementing the comparison operator and the end of the
bounded expression for the control clause of the \texttt{if}. The function
\texttt{O0o\_numeq} implements the comparison operator. The \texttt{if} expression
discards the operator's result. This is accomplished by using the
variable \texttt{trashcan} as the result location for the call. The compiler
knows that this operation does not suspend, so it passes a null
continuation to the function. The end of the bounded expression
consists of a transfer of control to the bounding label. This is
accomplished by returning a signal. The continuation is
\goodbreak
\begin{iconcode}
static int P02\_main()\\
\{\\
register struct PF00\_main *rpfp;\\
rpfp = (struct PF00\_main *)pfp;\\
switch (O0o\_numeq(2, \&(rpfp->tend.d[1]), \&trashcan, (continuation)NULL))\\
\>\{\\
\>case A\_Continue:\\
\>\>break;\\
\>case A\_Resume:\\
\>\>return A\_Resume;\\
\>\}\\
\ return 4; /* bound */\\
\}\\
\end{iconcode}
\noindent
Each alternative of the alternation must compute the value of its
subexpression and call the success continuation. The failure action
for the first alternative is to branch to the second alternative. The
failure action of the second alternative is the failure action of the
entire alternation expression. In this example, the failure action is
to branch to the \texttt{else} label of the \texttt{if} expression. In
each alternative, a bounding signal from the continuation must be
converted into a branch to the bounding label. Note that this bounding
signal indicates that the control expression succeeded.
\goodbreak
\begin{iconcode}
frame.tend.d[2].dword = D\_Integer;\\
frame.tend.d[2].vword.integr = 1;\\
switch (P02\_main()) \{\\
\>case A\_Resume:\\
\>\>goto L2 /* alt */;\\
\>case 4 /* bound */:\\
\>\>goto L4 /* bound */;\\
\>\}\\
L2: /* alt */\\
\>frame.tend.d[2].dword = D\_Integer;\\
\>frame.tend.d[2].vword.integr = 2;\\
\>switch (P02\_main()) \{\\
\>\>case A\_Resume:\\
\>\>\>goto L5 /* else */;\\
\>\>case 4 /* bound */:\\
\>\>\>goto L4 /* bound */;\\
\>\>\}\\
\end{iconcode}
The code for the entire \texttt{return} expression is obtained by putting
together all the pieces. The result is the following code (the code
for \texttt{P02\_main} is not repeated).
\goodbreak
\begin{iconcode}
frame.tend.d[1].dword = D\_Var;\\
frame.tend.d[1].vword.descptr = \&frame.tend.d[0] /* a */;\\
frame.tend.d[2].dword = D\_Integer;\\
frame.tend.d[2].vword.integr = 1;\\
switch (P02\_main()) \{\\
\>case A\_Resume:\\
\>\>goto L2 /* alt */;\\
\>case 4 /* bound */:\\
\>\>goto L4 /* bound */;\\
\>\}\\
L2: /* alt */\\
\>frame.tend.d[2].dword = D\_Integer;\\
\>frame.tend.d[2].vword.integr = 2;\\
\>switch (P02\_main()) \{\\
\>\>case A\_Resume:\\
\>\>\>goto L5 /* else */;\\
\>\>case 4 /* bound */:\\
\>\>\>goto L4 /* bound */;\\
\>\>\}\\
L4: /* bound */\\
\>rslt->vword.sptr = "yes";\\
\>rslt->dword = 3;\\
\>goto L6 /* end if */;\\
L5: /* else */\\
\>rslt->vword.sptr = "no";\\
\>rslt->dword = 2;\\
L6: /* end if */\\
\>deref(rslt, rslt);\\
\>goto L7 /* proc return */;\\
\end{iconcode}
\section{Signal Handling}
In order to produce signal handling code, the code generator must know
what signals may be returned from a call. These signals may be either
directly produced by the operation (or procedure) being called or they
may originate from a success continuation. Note that either the
operation or the continuation may be missing from a call, but not
both. The signals produced directly by an operation are
\texttt{A\_Resume}, \texttt{A\_Continue}, and \texttt{A\_FallThru}
(this last signal is only used internally within in-line code).
The signals produced by a success continuation belong to one of three
categories: \texttt{A\_Resume}, signals corresponding to labels within the
procedure the continuation belongs to, and signals corresponding to
labels in procedures farther down in the call chain. The last category
only occurs when the procedure suspends. The success continuation for
the procedure call may return a signal belonging to the calling
procedure. This is demonstrated in the following example (the
generated code has been ``cleaned-up'' a little to make it easier to
follow). The Icon program being translated is
\goodbreak
\begin{iconcode}
procedure main()\\
\>write(p())\\
end\\
procedure p()\\
\>suspend 1 to 10\\
end\\
\end{iconcode}
The generative procedure \texttt{p} is called in a bounded context. The code
generated for the call is
\goodbreak
\begin{iconcode}
switch (P01\_p(\&frame.tend.d[0], P05\_main)) \{\\
\>case 7 /* bound */:\\
\>\>goto L7 /* bound */;\\
\>case A\_Resume:\\
\>\>goto L7 /* bound */;\\
\>\}\\
L7: /* bound */\\
\end{iconcode}
\noindent
This call uses the following success continuation. The continuation
writes the result of the call to \texttt{p} then signals the end of the bounded
expression.
\goodbreak
\begin{iconcode}
static int P05\_main() \{\\
\>register struct PF00\_main *rpfp;\\
\\
\>rpfp = (struct PF00\_main *)pfp;\\
\>F0c\_write(1, \&rpfp->tend.d[0], \&trashcan, (continuation)NULL);\\
\>return 7; /* bound */\\
\}\\
\end{iconcode}
\noindent
The \texttt{to} operator in procedure \texttt{p} needs a success
continuation that implements procedure suspension. Suspension is
implemented by switching to the old procedure frame pointer and old
argument pointer, then calling the success continuation for the call
to \texttt{p}. The success continuation is accessed with the
expression \texttt{rpfp-{\textgreater}succ\_cont}. In this example,
the continuation will only be the function \texttt{P05\_main}. The
suspend must check the signal returned by the procedure call's success
continuation. However, the code generator does not try to determine
exactly what signals might be returned by a continuation belonging to
another procedure. Such a continuation may return an
\texttt{A\_Resume} signal or a signal belonging to some procedure
farther down in the call chain. In this example, bounding signal 7
will be returned and it belongs to \texttt{main}.
If the call's success continuation returns \texttt{A\_Resume}, the procedure
frame pointer and argument pointer must be restored, and the current
failure action must be executed. In this case, that action is to
return an \texttt{A\_Resume} signal to the \texttt{to} operator. If the call's success
continuation returns any other signal, that signal must be propagated
back through the procedure call. The following function is the success
continuation for the \texttt{to} operator.
\goodbreak
\begin{iconcode}
static int P03\_p()\\
\{\\
\>register int signal;\\
\>register struct PF01\_p *rpfp;\\
\\
\>rpfp = (struct PF01\_p *)pfp;\\
\>deref(rpfp->rslt, rpfp->rslt);\\
\>pfp = rpfp->old\_pfp;\\
\>argp = rpfp->old\_argp;\\
\\
\>signal = (*rpfp->succ\_cont)();\\
\>if (signal != A\_Resume) \{\\
\>\>return signal;\\
\>\>\}\\
\>pfp = (struct p\_frame *)rpfp;\\
\>argp = NULL;\\
\>return A\_Resume;\\
\}\\
\end{iconcode}
The following code implements the call to the \texttt{to}
operator. The signal handling code associated with the call must pass
along any signal from the procedure call's success continuation. These
signals are recognized by the fact that the procedure frame for the
calling procedure is still in effect. At this point, the signal is
propagated out of the procedure \texttt{p}. Because the procedure
frame is about to be removed from the C stack, the descriptors it
contains must be removed from the tended list.
\goodbreak
\begin{iconcode}
frame.tend.d[0].dword = D\_Integer;\\
frame.tend.d[0].vword.integr = 1;\\
frame.tend.d[1].dword = D\_Integer;\\
frame.tend.d[1].vword.integr = 10;\\
signal = O0k\_to(2, \&frame.tend.d[0], rslt, P03\_p);\\
if (pfp != (struct p\_frame *)\&frame) \{\\
\>tend = frame.tend.previous;\\
\>return signal;\\
\>\}\\
switch (signal) \{\\
\>case A\_Resume:\\
\>\>goto L2 /* bound */;\\
\>\}\\
L2: /* bound */\\
\end{iconcode}
So far, this discussion has not addressed the question of how the code
generator determines what signals might be returned from a
call. Because code is generated in execution order, a call involving a
success continuation is generated before the code in the continuation
is generated. This makes it difficult to know what signals might
originate from the success continuation. This problem exists for
direct calls to a success continuation and for calls to an operation
that uses a success continuation.
The problem is solved by doing code generation in two parts. The first
part produces incomplete signal handling code. At this time, code to
handle the signals produced directly by an operation is generated. The
second part of code generation is a fix-up pass that completes the
signal handling code by determining what signals might be produced by
success continuations.
The code generator constructs a call graph of the continuations for a
procedure. Some of these calls are indirect calls to a continuation
through an operation. However, the only effect of an operation on
signals returned by a continuation is to intercept \texttt{A\_Resume}
signals. All other signals are just passed along. This is true even if
the operation is a procedure. This call graph of continuations does
not contain the procedure call graph nor does it contain continuations
from other procedures.
Forward execution order imposes a partial order on continuations. A
continuation only calls continuations strictly greater in forward
execution order than itself. Therefore the continuation call graph is
a DAG.
The fix-up pass is done with a bottom-up walk of the continuation call
DAG. This pass determines what signals are returned by each
continuation in the DAG. While processing a continuation, the fix-up
pass examines each continuation call in that continuation. At the
point it processes a call, it has determined what signals might be
returned by the called continuation. It uses this information to
complete the signal handling code associated with the call and to
determine what signals might be passed along to continuations higher
up the DAG. If a continuation contains code for a \texttt{suspend}, the fix-up
pass notes that the continuation may return a \textit{foreign} signal
belonging to another procedure call. As explained above, foreign
signals are handled by special code that checks the procedure frame
pointer.
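In outline, the fix-up pass can be pictured as a memoized bottom-up
walk over the continuation call DAG. The sketch below is a schematic
reconstruction, not the compiler's own code: the structures, the
bit-set representation of signal sets, the constant
\texttt{SIG\_FOREIGN}, and the helpers \texttt{drop\_resume},
\texttt{complete\_switch}, \texttt{passed\_up}, and
\texttt{own\_signals} are all invented for illustration.
\goodbreak
\begin{iconcode}
struct cont \{\\
\>struct call *calls; /* continuation calls made from this continuation */\\
\>int has\_suspend; /* contains code for a suspend */\\
\>int done;\\
\>long sigs; /* set of signals this continuation may return */\\
\>\};\\
struct call \{\\
\>struct cont *callee;\\
\>int through\_op; /* the call goes through an operation */\\
\>struct call *next;\\
\>\};\\
\\
static long fixup(c)\\
struct cont *c;\\
\{\\
\>struct call *cl;\\
\>long s, sigs = 0;\\
\\
\>if (c->done)\\
\>\>return c->sigs; /* shared DAG node already processed */\\
\>for (cl = c->calls; cl != NULL; cl = cl->next) \{\\
\>\>s = fixup(cl->callee);\\
\>\>if (cl->through\_op)\\
\>\>\>s = drop\_resume(s); /* an operation intercepts only A\_Resume */\\
\>\>complete\_switch(cl, s); /* finish the signal-handling code */\\
\>\>sigs = sigs | passed\_up(cl, s);\\
\>\>\}\\
\>if (c->has\_suspend)\\
\>\>sigs = sigs | SIG\_FOREIGN; /* may return a caller's signal */\\
\>c->sigs = sigs | own\_signals(c);\\
\>c->done = 1;\\
\>return c->sigs;\\
\}\\
\end{iconcode}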
\section{Temporary Variable Allocation}
The code generator uses the liveness information for an intermediate
value when allocating a temporary variable to hold the value. As
explained in Chapter 16, this information consists of the furthest
program point, represented as a node in the syntax tree, through which
the intermediate value must be retained. When a temporary variable is
allocated to a value, that variable is placed on a
\textit{deallocation list} associated with the node beyond which its
value is not needed. When the code generator passes a node, all the
temporary variables on the node's deallocation list are deallocated.
The code generator maintains a \textit{status array} for temporary
variables while it is processing a procedure. The array contains one
element per temporary variable. This array is expandable, allowing a
procedure to use an arbitrary number of temporary variables. In a
simple allocation scheme, the status of a temporary variable is either
\textit{free} or \textit{in-use}. The entry for a temporary variable
is initially marked free; it is marked in-use when the variable is
allocated, and it is marked free again when the variable is
deallocated.
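The bookkeeping behind this simple scheme can be sketched as follows.
The declarations are a hypothetical reconstruction rather than the
compiler's own: \texttt{struct node} stands for a syntax-tree node,
\texttt{add\_dealloc} records an entry on a node's deallocation list,
and \texttt{expand\_status\_array} grows the array with free entries.
\goodbreak
\begin{iconcode}
enum \{ Free = -2, InUse = -1 \}; /* node numbers are added below */\\
static int *status; /* expandable status array */\\
static int n\_status;\\
\\
static int alloc\_tmp(dealloc\_at)\\
struct node *dealloc\_at;\\
\{\\
\>int i;\\
\\
\>for (i = 0; i < n\_status; ++i)\\
\>\>if (status[i] == Free) \{\\
\>\>\>status[i] = InUse;\\
\>\>\>add\_dealloc(dealloc\_at, i, Free); /* restore to free later */\\
\>\>\>return i;\\
\>\>\>\}\\
\>i = n\_status;\\
\>expand\_status\_array(); /* no free entry: grow the array */\\
\>status[i] = InUse;\\
\>add\_dealloc(dealloc\_at, i, Free);\\
\>return i;\\
\}\\
\end{iconcode}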
The simple scheme works well when temporary variables are allocated
independently. It does not work well when arrays of contiguous
temporary variables are allocated. This occurs when temporary
variables are allocated to the arguments of a procedure invocation or
any invocation conforming to the standard calling conventions; under
these circumstances, the argument list is implemented as an array. All
of the contiguous temporary variables must be reserved before the
first one is used, even though many operations may be performed before
the last one is needed. Rather than mark a temporary variable in-use
before it actually is, the compiler uses the program point where the
temporary variable will be used to mark the temporary variable's entry
in the status array as \textit{reserved}. A contiguous array of
temporary variables is marked reserved at the same time, with each
having a different reservation point. A reserved temporary variable
may be allocated to other intermediate values as long as it will be
deallocated before the reservation point. In this scheme, an entry in
a deallocation list must include the previous status of the temporary
variable, because that status might be reserved rather than free.
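Continuing the sketch from above, the reservation of a block of
temporaries can be pictured as a scan of the status array in which a
reserved entry holds the node number of its reservation point. The
helper \texttt{conflicts} stands for the lifetime check against
existing allocations and reservations and, like the rest of the
sketch, is invented for illustration.
\goodbreak
\begin{iconcode}
/* reserve n contiguous temporaries; pts[i] is the node number */\\
/* of the reservation point for slot base + i */\\
static int reserve\_tmps(n, pts)\\
int n;\\
int *pts;\\
\{\\
\>int base, i, ok;\\
\\
\>for (base = 0; ; ++base) \{ /* first fit; the array grows as needed */\\
\>\>ok = 1;\\
\>\>for (i = 0; i < n; ++i)\\
\>\>\>if (base + i < n\_status \&\& conflicts(base + i, pts[i]))\\
\>\>\>\>ok = 0;\\
\>\>if (ok) \{\\
\>\>\>while (n\_status < base + n)\\
\>\>\>\>expand\_status\_array();\\
\>\>\>for (i = 0; i < n; ++i)\\
\>\>\>\>status[base + i] = pts[i]; /* reserved for node pts[i] */\\
\>\>\>return base;\\
\>\>\>\}\\
\>\>\}\\
\}\\
\end{iconcode}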
The compiler allocates a contiguous subarray of temporary variables
for the arguments of an invocation when it encounters the invocation
on the way down the syntax tree during its tree walk. It uses a
first-fit algorithm to find a large enough subarray that does not have
a conflicting allocation. Consider the problem of allocating temporary
variables to the expression
\iconline{ \>f1(f2(f3(x, f4())), y) }
\noindent where \texttt{f1} can fail and \texttt{f4} is a
generator. The syntax tree for this expression is shown below. Note
that invocation nodes show the operation as part of the node label and
not as the first operand to general invocation. This reflects the
direct invocation optimization that is usually performed on
invocations. Each node in the graph is given a numeric label. These
labels increase in value in forward execution order.
% define a TeX macro for the box label (a LaTeX newcommand doesn't work)
\def\tmpLabel#1#2{\shortstack{#1\\ \\{\scriptsize\em #2}}}
%define a TeX macro for the basic tree diagram (which is used several times)
\def\tmpTree#1{
\node(f1)[rbx] at (#1) {\tmpLabel{f1()}{6}}
child {
node(f2)[rbx] {\tmpLabel{f2()}{4}}
child {
node(f3)[rbx] {\tmpLabel{f3()}{3}}
child { node(x)[rbx] {\tmpLabel{x}{1}}}
child { node(f4)[rbx] {\tmpLabel{f4()}{2}}}
}
}
child { node(y)[rbx] {\tmpLabel{y}{5}}};
}
% a slanted F
\def\F{{\sl F\/}}
% annotation at top right of a node
\def\TR#1#2{\node[anchor=north west] at (#1.north east) {#2}}
% annotation at bottom right of a node
\def\BR#1#2{\node[anchor=south west] at (#1.south east) {#2}}
% status boxes
\def\stBox#1#2#3#4#5{%
\coordinate (o) at (#1);
\foreach \x in {0,1,2,3}
{
\draw (o) rectangle ($ (o) + (0.5,0.5) + 0.5*(\x,0)$);
\draw ($ (o) + (0.25,-0.3) + 0.5*(\x,0)$) node {\x};
}
\draw ($ (o) + (0.25,0.25)$) node{{\sl #2}};
\draw ($ (o) + (0.75,0.25)$) node{{\sl #3}};
\draw ($ (o) + (1.25,0.25)$) node{{\sl #4}};
\draw ($ (o) + (1.75,0.25)$) node{{\sl #5}};
}
\begin{center}
\begin{tikzpicture}[draw,very thick,font=\small\tt,
rbx/.style={rectangle,draw,rounded corners=0.3cm,minimum size=1.25cm},
level distance=2cm, sibling distance=3cm
]
\tmpTree{4,7}
\end{tikzpicture}
\end{center}
The following figure shows the operations in forward execution order
with lines on the left side of the diagram showing the lifetime of
intermediate values. This represents the output of the liveness
analysis phase of the compiler. Because \texttt{f4} can be resumed by
\texttt{f1}, the value of the expression \texttt{x} has a lifetime
that extends to the invocation of \texttt{f1}. The extended portion of
the lifetime is indicated with a dotted line.
\begin{center}
\begin{tikzpicture}[draw,font=\small\tt, very thick]
% \draw[help lines] (0,0) grid (16,5);
\foreach \y in {4.2,3.4,2.6,1.8,1}
{
\draw (2,\y) -- ($ (1,\y) - 0.2*(\y ,0) + (0.84,0)$) -- ++(0,-0.8);
}
\node(x)[anchor=west] at (2,4.2) {x};
\draw ($ (x.west) - (1,1.6)$) -- ++(0,0.8);
\draw[loosely dotted] ($ (x.west) - (1.0,1.7) $) -- (1,0.2);
\node(f4)[anchor=west] at (2,3.4) {f4()};
\node(f3)[anchor=west] at (2,2.6) {f3()};
\node(f2)[anchor=west] at (2,1.8) {f2()};
\draw ($ (f2.west) - (0.52,1.6)$) -- ++(0,0.8);
\node(y)[anchor=west] at (2,1.0) {y};
\node(f1)[anchor=west] at (2,0.2) {f1()};
\end{tikzpicture}
\end{center}
The following series of diagrams illustrate the process of allocating
intermediate values. Each diagram includes an annotated syntax tree
and a status array for temporary variables. An arrow in the tree shows
the current location of the tree walk. A deallocation list is located
near the upper right of each node. An element in the list consists of
a temporary variable number and the status with which to restore the
variable's entry in the status array. If a temporary variable has been
allocated to an intermediate value, the variable's number appears near
the lower right of the corresponding node.
The status array is shown with four elements. The elements are
initialized to \textit{F} which indicates that the temporary variables
are free. A reserved temporary variable is indicated by placing the
node number of the reservation point in the corresponding
element. When a temporary variable is actually in use, the
corresponding element is set to \textit{I}.
Temporary variables are reserved while walking down the syntax
tree. The tree illustrated below on the left shows the state of
allocation after temporary variables have been allocated for the
operands of \texttt{f1}. Two contiguous variables are needed. All
variables are free, so the first-fit algorithm allocates variables 0
and 1. The status array is updated to indicate that these variables
are reserved for nodes \textit{4} and \textit{5} respectively, and the
nodes are annotated with these variable numbers. The lifetime
information in the previous figure indicates that these variables
should be deallocated after \texttt{f1} is executed, so the
deallocation array for node \textit{6} is updated.
The next step is the allocation of a temporary variable to the operand
of \texttt{f2}. The intermediate value has a lifetime extending from node
\textit{3} to node \textit{4}. This conflicts with the allocation of
variable 0, but not the allocation of variable 1. Therefore, variable
1 is allocated to node \textit{3} and the deallocation list for node
\textit{4} is updated. This is illustrated in the tree on the right:
\makebox[0.25in]{~}
\begin{tikzpicture}[draw,very thick,font=\small\tt,
rbx/.style={rectangle,draw,rounded corners=0.3cm,minimum size=1.25cm},
level distance=2cm, sibling distance=3cm
]
\tmpTree{4,9}
\stBox{3,1}{4}{5}{F}{F};
% annotate the nodes
\TR{f1}{\{0:\F, 1:\F\}};
\TR{f2}{\{~\}};
\BR{f2}{0};
\TR{y}{\{~\}};
\BR{y}{1};
\TR{f3}{\{~\}};
\TR{x}{\{~\}};
\TR{f4}{\{~\}};
\coordinate (arrow) at ($ (f1.north west) - (0.3,0.4)$);
\draw[-Latex] (arrow) -- ($ (arrow) - (0,0.75)$);
\end{tikzpicture}
\makebox[0.75in]{~}
\begin{tikzpicture}[draw,very thick,font=\small\tt,
rbx/.style={rectangle,draw,rounded corners=0.3cm,minimum size=1.25cm},
level distance=2cm, sibling distance=3cm
]
\tmpTree{4,9}
\stBox{3,1}{4}{3}{F}{F};
% annotate the nodes
\TR{f1}{\{0:\F, 1:\F\}};
\TR{f2}{\{1:{\sl 5\/}\}};
\BR{f2}{0};
\TR{y}{\{~\}};
\BR{y}{1};
\TR{f3}{\{~\}};
\BR{f3}{1};
\TR{x}{\{~\}};
\TR{f4}{\{~\}};
\coordinate (arrow) at ($ (f2.north west) - (0.3,0.4)$);
\draw[-Latex] (arrow) -- ($ (arrow) - (0,0.75)$);
\end{tikzpicture}
The final allocation requires a contiguous pair of variables for nodes
\textit{1} and \textit{2}. The value from node \textit{1} has a
lifetime that extends to node \textit{6}, and the value from node
\textit{2} has a lifetime that extends to node \textit{3}. The current
allocations for variables 0 and 1 conflict with the lifetime of the
intermediate value of node \textit{1}, so the variables 2 and 3 are
used in this allocation. This is illustrated in the tree:
\begin{center}
\begin{tikzpicture}[draw,very thick,font=\small\tt,
rbx/.style={rectangle,draw,rounded corners=0.3cm,minimum size=1.25cm},
level distance=2cm, sibling distance=3cm
]
\tmpTree{4,9}
\stBox{3,1}{4}{3}{1}{2};
% annotate the nodes
\TR{f1}{\{0:\F, 1:\F, 2:\F\}};
\TR{f2}{\{1:{\sl 5\/}\}};
\BR{f2}{0};
\TR{y}{\{~\}};
\BR{y}{1};
\TR{f3}{\{3:\F\}};
\BR{f3}{1};
\TR{x}{\{~\}};
\BR{x}{2};
\TR{f4}{\{~\}};
\BR{f4}{3};
\coordinate (arrow) at ($ (f3.north west) - (0.3,0.4)$);
\draw[-Latex] (arrow) -- ($ (arrow) - (0,0.75)$);
\end{tikzpicture}
\end{center}
The remaining actions of the allocator in this example mark temporary
variables in-use when the code generator uses them and restore
the previously saved statuses when temporary variables are
deallocated. This is done in the six steps illustrated in the
following diagram. The annotations on the graph do not change. Only
the node of interest is shown for each step. These steps are performed
in node-number order.
\begin{center}
\begin{tikzpicture}[draw,very thick,font=\small\tt,
rbx/.style={rectangle,draw,rounded corners=0.3cm,minimum size=1.25cm}
]
% \draw[help lines] (0,0) grid (16,8);
\node(x)[rbx] at (1,7) {\tmpLabel{x}{1}};
\TR{x}{\{~\}};
\BR{x}{2};
\stBox{0,5}{4}{3}{I}{2};
\draw[-Latex] ($ (x.south west) - (0.3, -0.2)$) -- ++(0,0.8);
\node(f4)[rbx] at (6,7) {\tmpLabel{f4()}{2}};
\TR{f4}{\{~\}};
\BR{f4}{3};
\stBox{5,5}{4}{3}{I}{I};
\draw[-Latex] ($ (f4.south west) - (0.3, -0.2)$) -- ++(0,0.8);
\node(f3)[rbx] at (11,7) {\tmpLabel{f3()}{3}};
\TR{f3}{\{3:\F\}};
\BR{f3}{1};
\stBox{10,5}{4}{I}{I}{F};
\draw[-Latex] ($ (f3.south west) - (0.3, -0.2)$) -- ++(0,0.8);
\node(f2)[rbx] at (1,3) {\tmpLabel{f2()}{4}};
\TR{f2}{\{1:5\}};
\BR{f2}{0};
\stBox{0,1}{I}{5}{I}{F};
\draw[-Latex] ($ (f2.south west) - (0.3, -0.2)$) -- ++(0,0.8);
\node(y)[rbx] at (6,3) {\tmpLabel{y}{5}};
\TR{y}{\{~\}};
\BR{y}{1};
\stBox{5,1}{I}{I}{I}{F};
\draw[-Latex] ($ (y.south west) - (0.3, -0.2)$) -- ++(0,0.8);
\node(f1)[rbx] at (11,3) {\tmpLabel{f1()}{6}};
\TR{f1}{\{0:\F, 1:\F, 2:\F\}};
\stBox{10,1}{F}{F}{F}{F};
\draw[-Latex] ($ (f1.south west) - (0.3, -0.2)$) -- ++(0,0.8);
\end{tikzpicture}
\end{center}
In general, the tree walk will alternate up and down the syntax
tree. For example, if node \textit{5} had children, the allocation
status after the deallocation associated with node \textit{4},
\begin{center}
\begin{tikzpicture}[draw,very thick,font=\small\tt]
\stBox{0,1}{I}{5}{I}{F};
\end{tikzpicture}
\end{center}
\noindent is used to allocate temporary variables to those
children. If this requires more than four temporary variables, the
status array is extended with elements initialized to \textit{F}.
This allocation algorithm is not guaranteed to produce an allocation
that uses a minimal number of temporary variables. Indeed, a smaller
allocation for the previous example is illustrated in the tree:
\begin{center}
\begin{tikzpicture}[draw,very thick,font=\small\tt,
rbx/.style={rectangle,draw,rounded corners=0.3cm,minimum size=1.25cm},
level distance=2cm, sibling distance=3cm
]
\tmpTree{4,9}
% annotate the nodes
\BR{f2}{1};
\BR{y}{2};
\BR{f3}{2};
\BR{x}{0};
\BR{f4}{1};
\end{tikzpicture}
\end{center}
While the non-optimality of this algorithm is unlikely to have a
measurable effect on the performance of any practical program, the
problem of finding an efficient optimal solution is of theoretical
interest. Classical results in the area of register allocation do not
apply. It is possible to allocate a minimum number of registers from
expression trees for conventional languages in polynomial time
[.dragon.]. The algorithm to do this depends on the fact that
registers (temporary variables) are dead as soon as the value they
contain is used. This is not true for Icon temporary variables.
The result of Prabhala and Sethi stating that register allocation is
NP-complete even in the presence of an infinite supply of registers
also does not apply [.prabhala subexp.]. Their complexity result
derives from performing register allocation in the presence of common
subexpression elimination (that is, from performing register
allocation on expression DAGS rather than trees) on a
2-address-instruction machine with optimality measured as the minimum
number of instructions needed to implement the program. Goal-directed
evaluation imposes more structure on lifetimes than common
subexpression elimination, the machine model used here is the C
language, and optimality is measured as the minimum number of
temporary variables needed.
The Icon temporary variable allocation problem is different from the
Prolog variable allocation problem. Prolog uses explicit variables
whose lifetimes can have arbitrary overlaps even in the absence of
goal-directed evaluation. The Prolog allocation problem is equivalent
to the classical graph coloring problem which is NP-complete [.debray
apr91, dragon.].
If the allocation of a subarray of temporary variables is delayed
until the first one is actually needed in the generated code, an
optimum allocation results for the preceding example. It is not
obvious whether this is true for the general case of expression trees
employing goal-directed evaluation. This problem is left for future
work.
In addition to holding intermediate values, temporary variables are
used as local tended variables within in-line code. This affects the
pattern of allocations, but not the underlying allocation technique.
| {
"alphanum_fraction": 0.7380414863,
"avg_line_length": 40.0116896918,
"ext": "tex",
"hexsha": "0db091b52faf05f6c5ed3356247c2c1e308c0b38",
"lang": "TeX",
"max_forks_count": 16,
"max_forks_repo_forks_event_max_datetime": "2022-03-01T06:01:00.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-10-14T04:32:36.000Z",
"max_forks_repo_head_hexsha": "df79234dc1b8a4972f3908f601329591c06bd141",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "jschnet/unicon",
"max_forks_repo_path": "doc/ib/p2-codeGen.tex",
"max_issues_count": 83,
"max_issues_repo_head_hexsha": "29f68fb05ae1ca33050adf1bd6890d03c6ff26ad",
"max_issues_repo_issues_event_max_datetime": "2022-03-22T11:32:35.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-11-03T20:07:12.000Z",
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "MatthewCLane/unicon",
"max_issues_repo_path": "doc/ib/p2-codeGen.tex",
"max_line_length": 86,
"max_stars_count": 35,
"max_stars_repo_head_hexsha": "29f68fb05ae1ca33050adf1bd6890d03c6ff26ad",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "MatthewCLane/unicon",
"max_stars_repo_path": "doc/ib/p2-codeGen.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-01T06:00:40.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-11-29T13:19:55.000Z",
"num_tokens": 10503,
"size": 37651
} |