\subsubsection{\stid{1.16} SICM}
\paragraph{Overview} The goal of this project is to create a universal interface for discovering, managing, and sharing memory within complex memory hierarchies. The result will be a memory API and a software library that implements it. These will allow operating system, runtime, and application developers and vendors to access emerging memory technologies. The impact of the project will be immediate and potentially wide reaching, as developers in all areas are struggling to add support for new memory technologies, each of which offers its own programming interface. The problem we are addressing is how to program the deluge of existing and emerging complex memory technologies on HPC systems, including MCDRAM (on Intel Knights Landing), NV-DIMM, PCI-E NVM, SATA NVM, 3D-stacked memory, PCM, memristor, and 3D XPoint. Near-node technologies, such as PCI-switch-accessible memory or network-attached memories, have also been proposed in exascale memory designs. Current practice depends on ad hoc solutions rather than a uniform API that provides the needed specificity and portability. This approach is already insufficient, and future memory technologies will only exacerbate the problem by adding further proprietary APIs. Our solution is to provide a unified, two-tier, node-level complex memory API. The target users for the low-level interface are system and runtime developers, as well as expert application developers who prefer full control over which memory types the application uses. The high-level interface is designed for application developers who would rather define coarser-level constraints on the types of memories the application needs and leave the details of memory management to the library. The low-level interface is primarily an engineering and implementation project. The solution it provides is urgently needed by the HPC community; as developers work independently to support these novel memory technologies, time and effort are wasted on redundant solutions and overlapping implementations. The high-level interface starts with a co-design effort involving application developers and may evolve into further research as intelligent allocators, migrators, and profiling tools are developed. We can achieve success due to our team's extensive experience with runtimes and applications. Our willingness to work with and accept feedback from multiple hardware vendors and ECP developers differentiates our project from existing solutions and will ultimately determine the scale of adoption and deployment.
\begin{figure}
\begin{center}
{
\includegraphics[width=.5\textwidth]{projects/2.3.1-PMR/2.3.1.16-SICM/mike-excelent.pdf}
\caption{Interface for complex memory that is abstract, portable, extensible to future hardware; including a mechanism-based low-level interface that reins in heterogeneity and an intent-based high-level interface that makes reasonable decisions for applications}
}
\end{center}
\end{figure}
\paragraph{Key Challenges}
This effort is a new start and relies on interaction with, and acceptance by, both vendors and the various runtime libraries. Defining a common mechanism to expose heterogeneous memory devices is important, but vendors have yet to adopt a standard in this area and will have to be led to a solution.
\paragraph{Solution Strategy}
The deliverables of this project are userspace libraries and modifications to the Linux kernel~\cite{Williams:2017:NDH:3145617.3145620} that allow better use of heterogeneous memories. The low-level interface provides support for discovery, allocation/de-allocation, partitioning, and configuration of the memory hierarchy, along with information on the properties of each specific device (capacity, latency, bandwidth, volatility, power usage, etc.). The interface will be quickly prototyped for a small set of currently available memory technologies (both volatile and non-volatile) and delivered to runtime and application developers, as well as vendors, for feedback on its functionality, performance, and features. The high-level interface will leverage the low-level library in order to further decouple applications from hardware configurations. Specifically, it emphasizes ease of use by application developers through a policy/intent-driven syntax enabled by run-time intelligence and system support. Applications specify which attributes (such as fast, low latency, high capacity, persistent, or reclaimable) are a priority for each allocation, and the interface provides the most appropriate configuration. Finally, the missing element is a common set of functions that would constitute a minimal hardware interface to the various types of memory. The goal is to allow fast porting to new hardware while still leaving room for innovation by vendors.
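To make the intended division of labor concrete, the toy Python sketch below contrasts the two usage styles described above. It is only an illustration of the concept: the device list, numbers, and function names are hypothetical stand-ins and do not correspond to the actual SICM library, which is implemented in C.
\begin{verbatim}
# Illustrative sketch of the two-tier idea; names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    bandwidth_gbs: float   # advertised bandwidth
    persistent: bool       # survives power loss

devices = [Device("DRAM", 90, False), Device("MCDRAM", 400, False),
           Device("NVDIMM", 30, True)]

# Low-level style: the caller discovers devices and picks a target explicitly.
fast = max(devices, key=lambda d: d.bandwidth_gbs)

# High-level style: the caller states an intent; the allocator chooses for it.
def choose_device(attrs):
    if "persistent" in attrs:
        return next(d for d in devices if d.persistent)
    return max(devices, key=lambda d: d.bandwidth_gbs)

target = choose_device(["high_capacity", "persistent"])
print(fast.name, target.name)
\end{verbatim}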
\paragraph{Recent Progress}
\begin{itemize}
\item Low-Level Interface: Progressing on the refactoring of the initial SICM userspace library. Developed the data structure and implementation to track memory pages in different arenas.
\item High-Level Interface: Completed the initial API design for the new high-level SICM persistent memory meta-allocator, now called ``Metall''. The design heavily leverages the open-source Boost.Interprocess library and retains some compatibility with it.
\item Analysis: Ongoing meetings with OMPI-X team members at the University of Tennessee to discuss memory needs and SICM requirements.
\item Analysis: Evaluating new static analysis capabilities in the Portland Group compiler that generate comprehensive relationship data for both data structures and functions. The results for ACME are stored in a database. As of this quarter, we can reliably tie trace references back to a line of code via the database.
\item Analysis: Evaluating ACME using gem5. We are currently resolving some compatibility issues between the ACME build environment and our gem5 virtual machine. We now have traces from E3SM and several mini-apps.
\item Analysis: Analyzing E3SM and mini-apps using tools from Cray and Intel. These results will be combined with previous results to help characterize barriers and gaps based on feedback from the E3SM team.
\item Cost Models: We have extracted a substantial amount of experimental data on application memory use and layout. This was done with full applications on hardware, using the memory-throttling-based emulation methodology. Additionally, we have finer-grained measurements for smaller C benchmarks in which we isolated individual data structures and applied different tiering decisions to assess metrics such as sensitivity and benefit. This is primarily a tooling and profiling process to inform the design of a simple API or API extension that could be integrated with the running application~\cite{Doudali:2017:CTE:3132402.3132418}.
\item Cost Models: Development of a tool, Mnemo, which provides automated recommendations for capacity sizing of heterogeneous memories for object-store workloads. Given a platform with a specific configuration of different memories and a (representative) workload, we can quickly extract the relevant memory usage metrics and produce cost-benefit estimation curves as a function of the capacity allocated to each type of memory. The output of Mnemo is a set of estimates that gives its users the information needed to make informed decisions about capacity allocations. This can have practical use in shared/capacity platforms, or to right-size capacity allocations for collocated workloads.
\end{itemize}
\paragraph{Next Steps}
\begin{itemize}
\item Low-Level Interface: Finish implementation of refactor and test with proxy applications for functionality and correctness. Investigate Linux kernel modifications for page migration in collaboration with ECP project Argo 2.3.5.05 and RIKEN research center in Japan. Verify support of emerging OpenMP standards.
\item High-Level Interface: Continuing to work on completing first working prototype; performance testing to follow after prototype is complete. Initial performance benchmarks have been selected, including a key-value store with skewed key frequency distribution. These benchmarks also serve as part of a correctness validation test set.
\item Document the needs of MPI and the ACME climate application for hybrid memory analysis (with ORNL collaborators; related to UNITY).
\item Understand capabilities of hwloc and netloc with respect to OMPI-X needs.
\item Future planned reports will include assessments of what the new SICM tools provide compared to current tools, using ACME as a benchmark. The remaining gaps will be used by the SICM team to inform R\&D directions.
\end{itemize}
| {
"alphanum_fraction": 0.8245250432,
"avg_line_length": 197.3863636364,
"ext": "tex",
"hexsha": "74100a232045f8baf86e7890ad3128d9657c96c8",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC",
"max_forks_repo_path": "projects/2.3.1-PMR/2.3.1.16-SICM/2.3.1.16-SICM.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC",
"max_issues_repo_path": "projects/2.3.1-PMR/2.3.1.16-SICM/2.3.1.16-SICM.tex",
"max_line_length": 2551,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC",
"max_stars_repo_path": "projects/2.3.1-PMR/2.3.1.16-SICM/2.3.1.16-SICM.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1678,
"size": 8685
} |
\section{Related Work}
\label{sec:related-work}
\begin{figure}[t]
\centering
\vspace*{-0.3cm}
\hspace*{-0.4cm}
\begin{minipage}[t]{0.3\textwidth}
\includegraphics[width=0.9\textwidth]{fig_main_illustration3}
\end{minipage}
\begin{minipage}[t]{0.12\textwidth}
\includegraphics[width=1.2\textwidth]{fig_main_illustration2}
\end{minipage}
\vspace*{-10px}
\caption{\textbf{Measuring Flatness.} \textbf{Left:} Illustration of measuring flatness in a random (\ie, average-case, {\color{colorbrewer2}blue}) direction by computing the difference between \RCE $\tilde{\mathcal{L}}$ \emph{after} perturbing weights (\ie, $w + \nu$) and the ``reference'' \RCE $\mathcal{L}$ given a local neighborhood $B_\xi(w)$ around the found weights $w$, see \secref{subsec:main-flatness}. In practice, we average across/take the worst of several random/adversarial directions.
\textbf{Right:} Large changes in \RCE around the ``sharp'' minimum cause poor generalization from training ({\color{colorbrewer0}black}) to test examples ({\color{colorbrewer1}red}).
}
\label{fig:main-illustration}
\vspace*{-6px}
\end{figure}
\textbf{Adversarial Training (AT):}
Despite a vast amount of work on adversarial robustness, \eg, see \cite{SilvaARXIV2020,YuanARXIV2017,AkhtarACCESS2018,BiggioCCS2018,XuARXIV2019}, adversarial training (AT) has become the de-facto standard for (empirical) robustness. Originally proposed in different variants in \cite{SzegedyICLR2014,MiyatoICLR2016,HuangARXIV2015}, it received considerable attention in \cite{MadryICLR2018,robustness} and has been extended in various ways:
\cite{LambAISEC2019,CarmonNIPS2019,UesatoNIPS2019} utilize interpolated or unlabeled examples, \cite{TramerNIPS2019,MainiICML2020} achieve robustness against multiple threat models, \cite{StutzICML2020,LaidlawARXIV2019,WuICML2018} augment AT with a reject option, \cite{YeNIPS2018,LiuICLR2019b} use Bayesian networks, \cite{TramerICLR2018,GrefenstetteARXIV2018} build ensembles, \cite{BalajiARXIV2019,DingICLR2020} adapt the threat model for each example, \cite{Wong2020ICLR,AndriushchenkoNIPS2020,VivekCVPR2020} perform AT with single-step attacks, \cite{HendrycksNIPS2019} uses self-supervision and \cite{PangNIPS2020} additionally regularizes features -- to name a few directions. However, AT is slow \cite{ZhangNIPS2020} and suffers from increased sample complexity \cite{SchmidtNIPS2018} as well as reduced (clean) accuracy \cite{TsiprasICLR2019,StutzCVPR2019,ZhangICML2019,RaghunathanARXIV2019}. Furthermore, progress is slowing down. In fact, ``standard'' AT is shown to perform surprisingly well on recent benchmarks \cite{CroceICML2020,CroceARXIV2020b} when tuning hyper-parameters properly \cite{PangARXIV2020b,GowalARXIV2020}. In our experiments, we consider several popular variants \cite{WuNIPS2020,WangICLR2020,ZhangICML2019,CarmonNIPS2019,HendrycksNIPS2019}.
\textbf{Robust Overfitting:} Recently, \cite{RiceICML2020} identified \emph{robust} overfitting as a crucial problem in AT and proposed early stopping as an effective mitigation strategy. This motivated work \cite{SinglaARXIV2021,WuNIPS2020} trying to mitigate robust overfitting. While \cite{SinglaARXIV2021} studies the use of different activation functions, \cite{WuNIPS2020} proposes AT with \emph{adversarial weight perturbations} (AT-AWP) explicitly aimed at finding flatter minima in order to reduce overfitting. While the results are promising, early stopping is still necessary. Furthermore, flatness is merely assessed visually, leaving open whether AT-AWP \emph{actually} improves flatness in adversarial weight directions. We consider both average- and worst-case flatness, \ie, random and adversarial weight perturbations, to answer this question.
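Concretely, using the notation of Figure~\ref{fig:main-illustration}, the two quantities we have in mind are, roughly (this is only a sketch of the measures, not their exact normalization),
\[
\text{average-case:}\;\; \mathbb{E}_{\nu}\big[\tilde{\mathcal{L}}(w + \nu)\big] - \mathcal{L}(w),
\qquad
\text{worst-case:}\;\; \max_{\nu \in B_\xi(w)} \tilde{\mathcal{L}}(w + \nu) - \mathcal{L}(w),
\]
where the random or adversarial weight perturbation $\nu$ is restricted to the local neighborhood $B_\xi(w)$ around the found weights $w$.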
\textbf{Flat Minima} in the loss landscape, \wrt changes in the weights, are generally assumed to improve \emph{standard} generalization \cite{HochreiterNC1997}. \cite{LiNIPS2018} shows that residual connections in ResNets \cite{HeCVPR2016} or weight decay lead to \emph{visually} flatter minima. \cite{NeyshaburNIPS2017,KeskarICLR2017} formalize this concept of flatness in terms of \emph{average-case} and \emph{worst-case} flatness. \cite{KeskarICLR2017,JiangICLR2020} show that worst-case flatness correlates well with better generalization, \eg, for small batch sizes, while \cite{NeyshaburNIPS2017} argues that generalization can be explained using both an average-case flatness measure and an appropriate capacity measure. Similarly, batch normalization is argued to improve generalization by making it possible to find flatter minima \cite{SanturkarNIPS2018,BjorckNIPS1018}. These insights have been used to explicitly regularize flatness \cite{ZhengARXIV2020c}, improve semi-supervised learning \cite{CicekICCVWOR2019} and develop novel optimization algorithms such as Entropy-SGD \cite{ChaudhariICLR2017}, local SGD \cite{TinICLR2020} or weight averaging \cite{IzmailovUAI2018}.
\cite{DinhICML2017}, in contrast, criticizes some of these flatness measures as not being scale-invariant.
We transfer the intuition of flatness to the \emph{robust} loss landscape, showing that flatness is desirable for adversarial robustness, while using scale-invariant measures.
| {
"alphanum_fraction": 0.8124759338,
"avg_line_length": 173.1333333333,
"ext": "tex",
"hexsha": "ae976985e55c2b17496244dff0115638f311e128",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d63daf8fc0221d07d8cfc8b7a5bcdc213403a17b",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "davidstutz/iccv2021-robust-flatness",
"max_forks_repo_path": "paper/sec_related_work.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d63daf8fc0221d07d8cfc8b7a5bcdc213403a17b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "davidstutz/iccv2021-robust-flatness",
"max_issues_repo_path": "paper/sec_related_work.tex",
"max_line_length": 1274,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "d63daf8fc0221d07d8cfc8b7a5bcdc213403a17b",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "davidstutz/iccv2021-robust-flatness",
"max_stars_repo_path": "paper/sec_related_work.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-10T19:09:04.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-11-08T21:27:33.000Z",
"num_tokens": 1555,
"size": 5194
} |
\documentclass[11pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
%\usepackage[ddmmyyyy]{datetime}
\usepackage[short,nodayofweek,level,12hr]{datetime}
%\usepackage{cite}
%\usepackage{wrapfig}
%\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}
\newcommand{\e}{\epsilon}
\newcommand{\dl}{\delta}
\newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}}
\newcommand{\vect}[1]{\underline{#1}}
\newcommand{\uvect}[1]{\hat{#1}}
\newcommand{\1}{\vect{1}}
\newcommand{\grad}{\nabla}
\newcommand{\lc}{l_c}
\title{Surface Tension}
\date{\displaydate{date}}
\newdate{date}{22}{09}{2018}
\author{}
\begin{document}
\maketitle
If a \textit{thin} tube is half-dipped in water, we know that water rises in the tube to a height greater than that of the surrounding fluid. But what do we mean by a thin tube? There must be a critical thickness of the tube beyond which we will expect gravity to dominate and below which surface tension will be important.
Since there is a competition between gravity ($g$) and surface tension ($\Gamma$), we can obtain the critical length ($\lc$) by comparing the pressures exerted by each of these effects:
\begin{align*}
&\rho g \lc \sim \frac{\Gamma}{\lc}\\
\Rightarrow &\lc \sim \bigg(\frac{\Gamma}{\rho g}\bigg)^{1/2}
\end{align*}
This is the length scale over which the effects of surface tension are comparable with those of gravity. At lengths much smaller than this, surface tension will dominate gravity. For water, $\Gamma = 0.07\,\mathrm{N/m}$ and hence $l_c \approx 2.7$\,mm. Therefore a \textit{thin} tube, for water, refers to a tube whose diameter is roughly 2.7\,mm or less.
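As a quick check of this estimate, taking $\rho \approx 1000\,\mathrm{kg/m^3}$ and $g \approx 9.8\,\mathrm{m/s^2}$ for water at room temperature,
\begin{align*}
\lc \sim \bigg(\frac{\Gamma}{\rho g}\bigg)^{1/2} = \bigg(\frac{0.07}{1000 \times 9.8}\bigg)^{1/2}\,\mathrm{m} \approx 2.7\times10^{-3}\,\mathrm{m}.
\end{align*}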
But why is the surface of a fluid under tension? A fluid consists of a bulk and a surface. The surface, though usually idealized as a sheet with zero thickness, is actually a few molecules thick. For a depth of the order of a few molecules near the water surface, the potential energy of the molecules is substantially higher than that in the bulk. The surface molecules are more energetic, jiggling about more rapidly, often escaping from the liquid to the air. If we wanted to calculate the total potential energy of the liquid by evaluating the potential energy per molecule in the bulk and then multiplying it by the total number of molecules, we will incur an error because potential energy for the surface molecules is much larger. This surface energy is usually incorporated by considering the surface to be an area (even though it is a volume few molecules thick). The excess energy associated with the area is proportional to the area and the coefficient of proportionality is the surface tension.
Increasing the surface area causes more molecules to move from the bulk to the surface, which increases their potential energy. We know that force is a derivative of the potential energy ($F=-dU/dx$), and therefore pushing the molecules up the potential energy curve gives rise to a force. In order to minimize its potential energy, then, a fluid must minimize its surface area, which is what we observe in nature.
\section{The Young-Laplace Equation}
The fact that the surface of a fluid is under tension leads to a pressure jump across the interface between the two fluids. This can be seen from a force balance across the surface. Since the mass of a surface element is zero, the net force on it must be zero. Therefore, the pressures acting on the two sides plus the surface tension force must add up to zero.
\begin{align*}
&\int(-p\vect n + \hat p \vect n)\, dA + \oint \Gamma \vect t' \, ds = 0
\end{align*}
where $\vect n$ is the normal to the surface, $\vect t$ is the tangent to the curve bounding the surface, and $\vect t' = \vect n \wedge \vect t$ is a vector perpendicular to the curve. Using Stokes' theorem, this can be written as
\begin{align*}
&\underbrace{\bigg[(\hat p - p)- \Gamma \pd{n_k}{x_k} \bigg]n_i}_{\text{Normal force balance}} + \underbrace{(\dl_{ik} - n_in_k)\pd{\Gamma}{x_k}}_{\text{tangential force balance}} = 0
\end{align*}
If the gradient of surface tension is zero, then the second term vanishes and we get
\begin{align*}
&\hat p - p = \Gamma \pd{n_k}{x_k}\\
&\hat p - p = \Gamma (\grad\cdot \vect{n})
\end{align*}
which is the Young-Laplace equation. This equation gives us the pressure jump across an interface due to surface tension, provided $\Gamma$ is constant everywhere. If $\Gamma$ varies with position, then its gradient will not be zero and there will be flow driven by the surface tension gradient. Such flows are called Marangoni flows.
We can see a quick application of the Young-Laplace equation by evaluating the pressure jump across a spherical bubble (say, an air bubble in water). For a sphere, the normal is given by $\vect n = x_i/r$. Hence
\begin{align*}
\grad\cdot n &= \pd{}{x_i}\frac{x_i}{r}\\
&= \frac{1}{r}\pd{x_i}{x_i} - \frac{x_i x_i}{r^3} \tag{writing $r= (x_ix_i)^{1/2}$ }\\
&= \frac{\dl_{ii}}{r} - \frac{x_i x_i}{r^3}
= \frac{3}{r} - \frac{1}{r}
= \frac{2}{r}
\end{align*}
Therefore, for a spherical bubble, the pressure balance gives us
\begin{align*}
&\hat p - p = \frac{2 \Gamma}{r}
\end{align*}
\section{Shape of a 2D static meniscus}
The Young-Laplace equation can be used to obtain the shape of a static meniscus. In this section we look at a meniscus close to a plane wall. Just next to the wall, the rise in the water level is highest, and it tapers off as we move away from the wall. We want to find the functional form of the interface $z \equiv z(x)$, where $x$ is the distance from the wall and $z$ is the height of the meniscus at a given $x$.
Evaluating the Young-Laplace equation requires us to find the divergence of the unit normal. The normal to any surface $F = 0$ is given by:
\begin{align*}
& \vect n = \frac{\grad F}{|\grad F|}\\
&\Rightarrow \pd{}{x_i}n_i = \pd{}{x_i}\frac{\pd{F}{x_i}}{|\grad F|}\\
&\Rightarrow \pd{n_i}{x_i} = \frac{1}{|\grad F|}\pd{^2F}{x_i^2} - \frac{1}{|\grad F|^3}\pd{F}{x_i}\pd{F}{x_k}\pd{^2F}{x_ix_k}
\end{align*}
For the 2D static meniscus $z=f(x)$ and hence $F(x,z) = z-f(x) = 0$. Evaluating the necessary derivatives:
\begin{align*}
&\pd{F}{x} = -\pd{f}{x}\\
&\pd{F}{z} = 1\\
&\pd{^2F}{x^2} = -\pd{^2f}{x^2}\\
\end{align*}
and noting that all other second derivatives of $F$ vanish, substituting into the Young-Laplace equation gives:
\begin{align*}
&\hat p - p = -\Gamma \bigg(\frac{\pd{^2f}{x^2}}{\big(1+(\pd{f}{x})^2\big)^{3/2}} \bigg)\\
\Rightarrow &\hat p - p = -\rho g z = -\Gamma \bigg(\frac{\pd{^2f}{x^2}}{\big(1+(\pd{f}{x})^2\big)^{3/2}} \bigg)
\end{align*}
Here, $p$ is the atmospheric pressure and $\hat p$ is the pressure in the fluid just below the interface. Far from the wall the interface is flat and sits at $z=0$, so the fluid pressure at $z=0$ equals the atmospheric pressure, and hydrostatics then gives $\hat p = p - \rho g z$ at the interface height $z$, which is the substitution made above. We can non-dimensionalize the equation using the length scale $l_c = \sqrt{\Gamma/\rho g}$. This leaves us with
\begin{align*}
& z = \frac{\pd{^2f}{x^2}}{\big(1+(\pd{f}{x})^2\big)^{3/2}}
\end{align*}
We can simplify matters greatly by linearizing this equation. Consider the case where the slope of the meniscus is small everywhere, i.e.\ $dz/dx \ll 1$. Then the equation becomes
\begin{align*}
& z = \pd{^2f}{x^2}
\end{align*}
whose solution is an exponential
\begin{align*}
&z = z_0 e^{-x}
\end{align*}
or in dimensional terms
\begin{align*}
&z = z_0 e^{-x/l_c}
\end{align*}
which tells us that the height of the meniscus decays exponentially away from the wall with a characteristic length scale of $l_c$. Another quantity of interest here is the maximum height to which water rises along the wall. In order to find it, we must impose the contact angle boundary condition:
\begin{align*}
&\frac{dz}{dx} = \tan(\frac{\pi}{2} + \theta_c) \quad \text{at} \quad x=0\\
\Rightarrow &\frac{-z_0}{l_c} = \tan(\frac{\pi}{2} + \theta_c) \\
\Rightarrow & z_0 =-l_c \tan(\frac{\pi}{2} + \theta_c) \\
\Rightarrow & z_0 =l_c \cot(\theta_c)
\end{align*}
This gives us the height of the meniscus at the wall. But what if $\theta_c = 0$? The height comes out to be infinite. This is obviously wrong, but we must remember that the linearization was only valid when the slope of the meniscus was small. Since we have violated this assumption, the method is expected to give the wrong answer. In order to obtain the height of the meniscus at the wall when $\theta_c = 0$, we must solve the full non-linear equation, which leads us to
\begin{align*}
&1-\frac{z^2}{2} = \frac{1}{\big(1+\big(\frac{dz}{dx}\big)^2\big)^{1/2}}
\end{align*}
From this, we can obtain the height at the wall to be $\sqrt 2 l_c$ when $\theta_c = 0$.
The equation can be further solved to obtain the complete shape of the meniscus as a transcendental function $x\equiv x(z)$.
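As a quick numerical illustration of that last step, the sketch below integrates the first integral above to recover the profile $x(z)$ for $\theta_c = 0$, with all lengths in units of $l_c$. It is a minimal sketch using SciPy; the small offset below $z_0 = \sqrt{2}$ is only there to avoid the infinite slope exactly at the wall.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# First integral of the non-dimensional meniscus equation (z, dz/dx -> 0 far away):
#   1 - z**2/2 = 1 / sqrt(1 + (dz/dx)**2)
# Solve it for dz/dx (< 0, the meniscus decays away from the wall) and
# integrate x as a function of z, starting just below z0 = sqrt(2) (theta_c = 0).
def dxdz(z, x):
    rhs = 1.0 - 0.5 * z**2
    slope = -np.sqrt(1.0 / rhs**2 - 1.0)    # dz/dx
    return 1.0 / slope

z0 = np.sqrt(2.0) - 1e-6                    # wall height, minus a tiny offset
zs = np.linspace(z0, 0.01, 200)             # integrate from the wall outwards
sol = solve_ivp(dxdz, (z0, 0.01), [0.0], t_eval=zs, rtol=1e-8)
# sol.t holds z and sol.y[0] holds x(z): the meniscus profile in units of l_c.
\end{verbatim}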
\section{Appendix}
\subsection{Force balance interpretation}
Describe
\begin{align*}
&1-\frac{z^2}{2} = \frac{1}{\big(1+\big(\frac{dz}{dx}\big)^2\big)^{1/2}}
\end{align*}
as a force balance
\subsection{Solution of the non-linear equation for the 2D meniscus}
\end{document}
| {
"alphanum_fraction": 0.7186436834,
"avg_line_length": 58.3973509934,
"ext": "tex",
"hexsha": "42ce13d9f60a2527726a85e02f5d67ba204fb454",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f4ffd25fa16fa08c2c2a5d465bb8a19a1d02d850",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "pulkitkd/Fluid_Dynamics_notes",
"max_forks_repo_path": "tex_files/lecture13.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f4ffd25fa16fa08c2c2a5d465bb8a19a1d02d850",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "pulkitkd/Fluid_Dynamics_notes",
"max_issues_repo_path": "tex_files/lecture13.tex",
"max_line_length": 1007,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "f4ffd25fa16fa08c2c2a5d465bb8a19a1d02d850",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "pulkitkd/Fluid_Dynamics_notes",
"max_stars_repo_path": "tex_files/lecture13.tex",
"max_stars_repo_stars_event_max_datetime": "2021-02-16T04:19:07.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-02-16T04:19:07.000Z",
"num_tokens": 2671,
"size": 8818
} |
\section{Reliability Techniques}
\label{sec-related-tech}
%The techniques used in this dissertation for achieving file-system
%reliability
We describe work related to the various techniques used in this
dissertation: using embedded information
(\sref{sec-related-embedded}), incremental fsck in the background
(\sref{sec-related-fsck}), ordering updates to storage
(\sref{sec-related-ordered}), and trading durability for performance
(\sref{sec-related-durability}).
\input{related/bp}
\input{related/fsck}
\input{related/ordered}
\input{related/durability}
| {
"alphanum_fraction": 0.7971530249,
"avg_line_length": 33.0588235294,
"ext": "tex",
"hexsha": "1ed6c3b71b3e5df1732185b657b7ba11c077daed",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "770fc637f9b7d908f349bbbfa112cbc17d898be3",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "keqhe/phd_thesis",
"max_forks_repo_path": "lanyue_thesis/related/tech.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "770fc637f9b7d908f349bbbfa112cbc17d898be3",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "keqhe/phd_thesis",
"max_issues_repo_path": "lanyue_thesis/related/tech.tex",
"max_line_length": 68,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "770fc637f9b7d908f349bbbfa112cbc17d898be3",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "keqhe/phd_thesis",
"max_stars_repo_path": "lanyue_thesis/related/tech.tex",
"max_stars_repo_stars_event_max_datetime": "2017-10-20T14:28:43.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-08-27T08:03:16.000Z",
"num_tokens": 138,
"size": 562
} |
\documentclass[12pt, a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\graphicspath{ {./desktop} }
\usepackage{indentfirst}
\usepackage{microtype}
\usepackage[breaklinks]{hyperref}
\setlength{\parskip}{1em}
\title{Wearable Sensor App Manual}
\author{Bourdage, R., van Dijk, M. \& Gawhns, D.}
\date{Leiden Institute of Advanced Computer Science
June 2021}
\begin{document}
\maketitle
\thispagestyle{empty}
\newpage
\tableofcontents
\newpage
\cleardoublepage
\section{Introduction}
The ``liacs.sensorapplication" software was created for the Samsung Gear Fit Pro 2, a digital watch issued in 2016 containing accelerometer, gyroscope, barometer, and GPS sensors. This Samsung wearable runs on Tizen OS version 2.3.1:13, which is based on Linux. The wearable was selected for a research project because its operating system allowed the extraction of raw data files. In addition, the wearable had to be customizable to incorporate the following additional research project requirements: patient id, the recording of sensor files, saving of the measurement settings, and privacy settings. Samsung offered the Tizen Studio software environment free of charge for designing applications and services for the watch, and many programming examples and API documentation were also available online (see: https://developer.samsung.com/tizen/Galaxy-Watch/certificate/creating-certificate.html).
\cleardoublepage
\section{The Software Package}
The software package contains a Tizen OS widget application named ``liacs.sensorapplication", visible in the list of applications with the title ``Sensors" and a digital brain as its icon. The application carries a spinner entry widget control to enter the person id, a number between 000 and 999, and two buttons: the RESTART button to start and restart measurements, and the CLEAN button, which must be pressed three times to remove all measurements from the watch. The activity data is stored in sensor files whose file names contain the person id, watch id, date, time, and sensor type. The package is configured by a configuration file which is pushed to the watch in an application-independent, accessible file system folder.
The configuration is stored for every measurement. The configuration file specifies: a) the recording frequency; b) the recorded sensors; and c) the centre and size of the privacy circle. For reproducibility and data management, a naming convention was developed for all files, including the configuration (con) files, listed in the following order: a) Patient ID; b) Watch ID; c) Date (YYYY, MM, DD); d) Time (hour, minute, second). For example:
001 2021 05 13 10 44 00 d9a8 aag.dat
002 2021 05 12 11 09 09 d9d2 bar.dat
003 2021 05 13 10 43 01 d9d7 con.dat
004 2021 05 12 11 06 49 d9d6 gps.dat
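The short sketch below shows how such a file name can be split back into its fields for data management. It assumes the fields are space separated exactly as in the listing above, with the sensor type as the last field before the ``.dat" extension; note that in these examples the watch id (the hexadecimal field) appears after the time stamp.
\begin{verbatim}
# Split an example sensor file name into its fields (assumed space-separated).
name = "001 2021 05 13 10 44 00 d9a8 aag.dat"
fields = name[:-4].split()            # drop ".dat", then split on spaces
person_id, year, month, day = fields[0], fields[1], fields[2], fields[3]
hour, minute, second = fields[4], fields[5], fields[6]
watch_id, sensor_type = fields[7], fields[8]
print(person_id, year, month, day, hour, minute, second, watch_id, sensor_type)
\end{verbatim}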
To reduce power consumption and memory use, the high-frequency readouts of the gravity accelerometer and gyroscope sensors are stored separately from the low-frequency readouts of the barometer, battery charge percentage, and GPS values. The sensor values are stored in a commonly used file format, the comma-separated value file format.
\begin{center}
\includegraphics[width=.2\textwidth]{Pic 1.png}
\includegraphics[width=.199\textwidth]{Pic 2.png}
\end{center}
The software package also contains a Tizen OS widget service named ``liacs.sensorservice" which runs in the background, is not visible in the list of applications, and cannot be deleted by the user of the watch. The sensor service processes the RESTART and CLEAN messages sent by the sensor application. After the watch starts, the sensor service is activated and waits for these messages. Once a valid message is received, the service starts recording the sensor values and stores them in comma-separated value files. The measurement is based on the contents of the uploaded configuration file. Wi-Fi on the watch needs to be on when uploading the con file and when downloading the collected data off the watch; otherwise, Wi-Fi can be off during data collection. In addition, the watch's GPS needs to be turned on to collect GPS data.
A figure in Appendix A shows a state diagram with transitions and format ``event - actions", as a formal description of the states and relevant state events between the watch application manager, the sensor application and service.
The wearable was tested manually. The accelerometer and the gyroscope were validated by perpetually rotating the watch at different speeds. The barometer was validated by comparing height differences in a GPS track with the air pressure measured. Zero measurements were obtained by calculating accelerometer means and standard deviations and comparing the results between watches. The variance between watches was 0.028 on average. To account for this variance, a threshold was created using the following formula: threshold = mean(G) + 4 * std(G), where G is the gravity acceleration vector norm, sqrt(ax**2 + ay**2 + az**2). For reproducibility, it is advised to calibrate the sensor values and convert them to SI base units before data processing.
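The snippet below is a minimal sketch of that threshold calculation for a zero-measurement file; the file name and the column names ax, ay, az are placeholders and should be adapted to the actual header of the high-frequency accelerometer file.
\begin{verbatim}
import numpy as np
import pandas as pd

# Placeholder path and column names -- adjust to the actual sensor csv file.
df = pd.read_csv("zero_measurement_aag.dat")
g = np.sqrt(df["ax"]**2 + df["ay"]**2 + df["az"]**2)   # gravity vector norm G

threshold = g.mean() + 4 * g.std()    # threshold = mean(G) + 4 * std(G)
print(g.mean(), g.std(), threshold)
\end{verbatim}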
For a Python notebook, see Appendix B for calculating means and standard deviations and Appendix C for plots. Below, find sample of calculation outcomes.
\begin{center}
\includegraphics[width=.999\textwidth]{Pic 44.png}
\end{center}
For every sensor readout frequency, a separate sensor file is created. This reduces power
consumption and memory usage. The application contains a clock which
is used for all recordings so the data can be aligned with time. The
sensor values are stored in a commonly used file format, the comma
separated value file format.
The wearable chosen meets the requirements set out in 2018.
\cleardoublepage
\section{How to use the Watch}
\begin{enumerate}
\item Charge watch on charging dock
\begin{center}
\includegraphics[width=.35\textwidth]{Pic 3.png}
\end{center}
\item Switch on the watch after charging by pressing and holding small side button
\begin{center}
\includegraphics[width=.35\textwidth]{Pic 4.png}
\end{center}
\item Access the Settings by pressing small side button once
\begin{center}
\includegraphics[width=.35\textwidth]{Pic 5.png}
\end{center}
\cleardoublepage
\item Also found on the main screen is the Sensors app
\begin{center}
\includegraphics[width=.35\textwidth]{Pic 6.png}
\end{center}
\item Press on the Sensor app, input an identifier and press restart to begin measurement. To delete any data collected, press Clean button three times
\begin{center}
\includegraphics[width=.25\textwidth]{Pic 7.png}
\end{center}
\item To turn off watch, press and hold small side button
\begin{center}
\includegraphics[width=.35\textwidth]{Pic 8.png}
\end{center}
\end{enumerate}
\cleardoublepage
\section{Installation of Tizen Studio Environment}
\begin{enumerate}
\item Install Tizen Studio 4.1 with the IDE installer. Once installation is completed, the package manager will be prompted
\item Install the Tizen extensions for native development for version 2.3.1 (tab: Main SDK). The watch has version 2.3.1:13 of the Tizen OS, therefore choose to install the 2.3.1 SDK together with the certificate SDK.
\item Install the Tizen extension for Certificates (tab: Extension SDK)*
\item Make a Samsung developer account; this will be required to create the necessary certificates: https://developer.samsung.com/sa/signIn.do
\item Run package manager (Tizen Studio - tools), go through the wizard steps.
\end{enumerate}
Note: Installation guide of the Tizen Studio development environment can be found in https://developer.samsung.com/tizen/Galaxy-Watch/certificate/creating-certificate.html
\subsection{*More on Certificates}
\begin{enumerate}
\item Connect watch via device manager (see How to Connect Watch to Tizen)
\item Open Certificate Manager
\item Hover over the little plus to add a new certificate
\item Click Samsung Certificate
\item You will be asked to make an author certificate and a distributor certificate
\item If you want to add a watch: click on the plus sign as if adding a new certificate, click Samsung certificate, select ``use existing certificate profile", and click no when asked if you want to create a new author certificate. Then create a new distributor certificate; you will be asked to choose a password and to either add a DUID manually or connect to the new device. Alternatively, you can use a .txt file with one DUID per line to create certificates.
\end{enumerate}
Tip: consider setting the date to one day in the future while setting up the watch to avoid certification date issues.
\section{How to Connect Watch to Tizen}
\begin{enumerate}
\item To connect the watch to Tizen Studio, go to the computer’s ``Settings” window, navigate to the “Mobile Hotspot” and turn it on. It will then display a Network Name and Network Password
\begin{center}
\includegraphics[width=.6\textwidth]{Pic 9.2.png}
\end{center}
\item After the hotspot is turned on, go into the watch's settings and select ``Connections"
\begin{center}
\includegraphics[width=.3\textwidth]{Pic 10.png}
\end{center}
\cleardoublepage
\item Go to ``Wi-Fi" and turn it on
\begin{center}
\includegraphics[width=.3\textwidth]{Pic 11.png}
\end{center}
\item Press on ``Wi-Fi Networks" and select the laptop's mobile hotspot
\begin{center}
\includegraphics[width=.3\textwidth]{Pic 12.png}
\end{center}
\item Press on ``Password" to enter the laptop's password
\begin{center}
\includegraphics[width=.3\textwidth]{Pic 13.png}
\end{center}
\cleardoublepage
\item Make sure that the watch has connected to the laptop
\begin{center}
\includegraphics[width=.6\textwidth]{Pic 14.2.png}
\end{center}
\item Open Tizen, and select “Device Manager” from the Tools tab
\begin{center}
\includegraphics[width=.7\textwidth]{Pic 15.2.png}
\end{center}
\item When the Device Manager opens, select the “Remote Device Manager”
\begin{center}
\includegraphics[width=.6\textwidth]{Pic 16.2.png}
\end{center}
\item In the Remote Device Manager, select the plus sign to connect the watch
\begin{center}
\includegraphics[width=.6\textwidth]{Pic 17.2.png}
\end{center}
\item Put in Name, and IP address listed in the laptop’s mobile hotspot window
\begin{center}
\includegraphics[width=.6\textwidth]{Pic 18.2.png}
\end{center}
\cleardoublepage
\item Press “Add”, and then turn on the connection, the device information should then appear in the Device Manager
\begin{center}
\includegraphics[width=.6\textwidth]{Pic 19.2.png}
\end{center}
\end{enumerate}
\cleardoublepage
\section{Installation of LIACS Sensor App and Sensor Service}
\begin{enumerate}
\item Make a GitHub account
\item Ask for an invite to this repository. The current owner is the LiacsProjects organisation run by MarinusVanDijk ([email protected])
\item Download the sensor application and sensor service software projects from the main branch. Click on ``code" and download the zip file
\item Open the project files one by one with the Tizen Studio 4.1
\item Open the device manager and make connection to the watch
\item Select the sensor application project, right-click on mouse, choose run-as, choose native device. The software is now downloaded to the watch
\item Select the sensor service project, right-click on mouse, choose run-as, choose native device
\item The sensor application is found at the bottom of all applications
\item Put the watch in debug mode; in the settings, switch all apps off and only switch on Wi-Fi
\item Push - with the device manager - a configuration file named ``configuration.dat" to the folder ``/opt/var/tmp/". Notes about the configuration file: the write timer can be set to a higher frequency than the data collection rate in order to miss fewer signals; the privacy circle has a maximum range of 5000 m, and anything higher sets the privacy circle to 100 m
\item Do a zero measurement (for calibration offline) for 15 minutes, upload the sensor + con files
\end{enumerate}
NOTE: You can also use the sdb (Smart Development Bridge) tool which come with Tizen Studio instead of the Device Manager. (See How to use SDB).
\section{How to Use Smart Development Bridge (SDB)}
The Tizen Studio contains a command prompt tool called ``sdb.exe", the Smart Development Bridge tool. This tool can install packages on the target, as well as pull and push files from and to the target.
https://docs.tizen.org/application/tizen-studio/common-tools/smart-development-bridge/
\begin{enumerate}
\item First, after installation of the Tizen Studio environment, copy the sdb.exe from c:/TizenStudio/tools/sdb.exe to c:/Windows/System32 folder
\item Open a git bash command terminal or windows command prompt
\end{enumerate}
Try the following:
\begin{enumerate}
\item Get the version number of the smart development bridge
\begin{verbatim}
$ sdb version
\end{verbatim}
\item Get the list of all devices connected
\begin{verbatim}
$ sdb devices
\end{verbatim}
\begin{center}
\includegraphics[width=.999\textwidth]{Pic 20.png}
\end{center}
\item Connect to a device
\begin{verbatim}
$ sdb connect 192.168.137.95:26101
\end{verbatim}
\item Get the ip address
\begin{verbatim}
$ sdb get-serialno
\end{verbatim}
\begin{center}
\includegraphics[width=.999\textwidth]{Pic 21.png}
\end{center}
\item Install a target package on a device
\begin{verbatim}
$ sdb install "package name.tpk"
\end{verbatim}
\begin{center}
\includegraphics[width=.999\textwidth]{Pic 22.png}
\end{center}
\item Execute a Linux command on the target
\begin{verbatim}
$ sdb shell ls -l opt/var/tmp
\end{verbatim}
\begin{center}
\includegraphics[width=.999\textwidth]{Pic 23.png}
\end{center}
\item Pull the configuration.dat file from the target
\begin{verbatim}
$ sdb pull opt/var/tmp/configuration.dat
\end{verbatim}
\begin{center}
\includegraphics[width=.999\textwidth]{Pic 24.png}
\end{center}
\item Push the configuration.dat file from the target
\begin{verbatim}
$ sdb push opt/var/tmp/configuration.dat
\end{verbatim}
\begin{center}
\includegraphics[width=.999\textwidth]{Pic 25.png}
\end{center}
\item Pull all sensor files from the target
\begin{verbatim}
$ sdb pull opt/usr/apps/liacs.sensorservice/data
\end{verbatim}
\begin{center}
\includegraphics[width=.999\textwidth]{Pic 26.png}
\end{center}
\item Remove all sensor files from the target
\begin{verbatim}
$ sdb shell rm opt/usr/apps/liacs.sensorservice/data/*.dat
\end{verbatim}
\item Disconnect the device
\begin{verbatim}
$ sdb disconnect
\end{verbatim}
\end{enumerate}
\cleardoublepage
\section{Possible Errors}
\begin{enumerate}
\item Tip Windows Installations: In case you cannot run the Device Manager and get a message that msvcr120dll is missing, you will have to load an extension for your OS:
\begin{verbatim}
https://answers.microsoft.com/en-us/windows/forum/windows_10-performance
/msvcr120dll-is-missing-or-error/aafe820f-4dbb-4043-aba2-e4ac2dcf69c1
\end{verbatim}
\item In case you cannot run the Device Manager and get a message that msvcp120dll is missing, you will have to load an extension for your OS:
\begin{verbatim}
https://answers.microsoft.com/en-us/windows/forum/windows_7-windows_
install/missing-msvcp120dll-file/f0a14d55-73f0-4a21-879e-1cbacf05e906
\end{verbatim}
\item While uploading the package of the sensor service on another watch, the launch process ends up in an error 75
\begin{verbatim}
"signature invalid device unique id"
\end{verbatim}
\end{enumerate}
Additionally, there are forum questions asked and answered on:
https://wiki.tizen.org/SDK
or more information can be found at:
https://docs.tizen.org/application/tizen-studio/common-tools/certificate-registration/
and at:
https://developer.samsung.com/galaxy-watch-develop/getting-certificates/create.html
\cleardoublepage
Another hint: in one of the steps in the Certificate Manager you need to supply a list of DUIDs.
\begin{center}
\includegraphics[width=.6\textwidth]{Pic 27.png}
\end{center}
\cleardoublepage
\section{Example Data}
\begin{enumerate}
\item Activity timeline of gravity accelerometer values with sample frequency of 10 Hz. The first part shows sitting then standing, activity indoors, then sleeping; the watch is on the non-dominant wrist
\begin{center}
\includegraphics[width=.7\textwidth]{Pic 28.png}
\end{center}
Gyroscope, 10 Hz:
\begin{center}
\includegraphics[width=.7\textwidth]{Pic 29.png}
\end{center}
\item A close-up moment from walking to running:
\begin{center}
\includegraphics[width=.7\textwidth]{Pic 30.png}
\end{center}
\cleardoublepage
Close-up on sample scale:
\begin{center}
\includegraphics[width=.7\textwidth]{Pic 31.png}
\end{center}
\item Interval Combination Experiment:
\begin{center}
\includegraphics[width=.7\textwidth]{Pic 32.png}
\end{center}
The best combinations are 50Hz and 25Hz and lower. For 50Hz, use 10 ms for the sensor intervals and 20 ms for the writer timer interval.
The sample density is a measure calculated as the number of records in the sensor file, divided by the measurement time in seconds, divided by the sample frequency expected from the interval settings:
Sample density = number of records / seconds / expected frequency
For example, at 50Hz, 50 samples per second are expected, so a 100-second measurement should contain 5000 records. If the number of records in the sensor file is 4800, the sample density is 4800/5000 = 0.96.
For the best result the sample density must be 1.0 (see the short calculation after this list).
\item Accelerometer 4700 - 5300 seconds:
\begin{center}
\includegraphics[width=.75\textwidth]{Pic 33.png}
\end{center}
\begin{center}
\includegraphics[width=.75\textwidth]{Pic 34.png}
\end{center}
\begin{center}
\includegraphics[width=.75\textwidth]{Pic 35.png}
\end{center}
Here there is walking through a door, resting, walking again, resting again, and then stepping out of the building, at which point the GPS signal is picked up.
\end{enumerate}
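As referenced in the interval combination item above, this is a minimal sketch of the sample density check; the 100-second duration is the value assumed in the worked example.
\begin{verbatim}
# Sample density check for a 50 Hz sensor file (numbers from the example above).
records = 4800        # rows found in the sensor .dat file
duration_s = 100      # measurement length in seconds (assumed for the example)
expected_hz = 50      # frequency implied by the configured sensor interval

sample_density = records / duration_s / expected_hz
print(sample_density)  # 0.96 -- ideally this is 1.0
\end{verbatim}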
\cleardoublepage
\section{Appendix A}
State diagram, with transitions in the format ``event - actions"
\includegraphics[width=.999\textwidth]{Pic 37.png}
\section{Appendix B}
Python notebook: zero measurement G mean / standard deviation
\begin{center}
\includegraphics[width=.9\textwidth]{Pic 43.png}
\end{center}
\section{Appendix C}
Python notebook: mean and standard deviation plots
\begin{center}
\includegraphics[width=.9\textwidth]{Pic 45.png}
\end{center}
\begin{center}
\includegraphics[width=.8\textwidth]{Pic 46.png}
\end{center}
\begin{center}
\includegraphics[width=.8\textwidth]{Pic 47.png}
\end{center}
\end{document}
| {
"alphanum_fraction": 0.7594344581,
"avg_line_length": 46.7469287469,
"ext": "tex",
"hexsha": "79300272462af6a882b01fb2f3852252fc618c5e",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2021-08-04T19:03:03.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-07-05T08:14:59.000Z",
"max_forks_repo_head_hexsha": "387272667459bc914b71b74d24af603c25681f9c",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "RenelleB/Wearda",
"max_forks_repo_path": "User Manual/main.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "387272667459bc914b71b74d24af603c25681f9c",
"max_issues_repo_issues_event_max_datetime": "2021-09-10T10:24:58.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-09-10T10:22:30.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "RenelleB/Wearda",
"max_issues_repo_path": "User Manual/main.tex",
"max_line_length": 904,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "387272667459bc914b71b74d24af603c25681f9c",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "RenelleB/Wearda",
"max_stars_repo_path": "User Manual/main.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-06T10:07:17.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-09-18T17:44:18.000Z",
"num_tokens": 4768,
"size": 19026
} |
\section{Introduction}
Tachyon is designed to be a very fast renderer, based on ray tracing,
and employing parallel processing to achieve high performance.
At the present time, Tachyon and its scene description language are fairly
primitive; this will be remedied as time passes. For now I'm going to
skip the ``intro to ray tracing'' and related things that should probably
go here, they are better addressed by the numerous books on the subject
written by others. This document is designed to serve the needs of
sophisticated users that are already experienced with ray tracing
and basic graphics concepts, rather than catering to beginners.
If you have suggestions for improving this manual, I'll be glad
to address them as time permits.
Until this document is finished and all-inclusive, the best way to
learn how Tachyon works is to examine some of the sample scenes that
I've included in the Tachyon distribution. Although they are all very
simple, each of the scenes tries to show something slightly
different Tachyon can do. Since Tachyon is rapidly changing to accommodate
new rendering primitives and speed optimizations, the scene
description language is likely to change to some degree as well.
\subsection{Tachyon Feature List}
Although Tachyon is a very small program, it is relatively
complete and even has a few features often associated with
much larger and more sophisticated ray tracing packages:
\begin{itemize}
\item Parallel execution using MPI, POSIX threads, Unix-International threads,
or Microsoft Windows threading APIs
\item Run-time display and interactive ray tracing
(on sufficiently fast systems) with built-in support
for Spaceball and SpaceNavigator 6DOF motion controllers for
fly-through navigation
\item Ambient occlusion lighting
\item Support for 48-bit deep color image formats
\item Supersampled antialiasing
\item Linear, exponential, and exponential-squared fog
\item Perspective, orthographic, and depth-of-field camera projection
modes, with eye-space frustum controls
\item Positional, directional, and spot lights, with optional attenuation.
\item Provides commonly used geometric objects including
Spheres, Planes, Triangles, Cylinders, Quadrics, and Rings
\item Texture mapping (both texture images, and volumetric textures),
with automatic MIP-map generation.
\item Direct rendering of volumetric data
\item Support for angle-modulated transparency,
limited-ray-depth transparency, edge cueing with outline shading,
and other special shading modes useful for molecular visualization
\item Grid-based spatial decomposition for efficient rendering of both
static and time-varying geometry
\end{itemize}
| {
"alphanum_fraction": 0.7960598322,
"avg_line_length": 49.8363636364,
"ext": "tex",
"hexsha": "dd950626c399fcf3df0480bf05a34758e0e517bf",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2021-07-06T19:38:36.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-02-27T14:08:48.000Z",
"max_forks_repo_head_hexsha": "4e347495c3d996ed5f68d96a424d481d030843da",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "thesketh/Tachyon",
"max_forks_repo_path": "docs/intro.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "4e347495c3d996ed5f68d96a424d481d030843da",
"max_issues_repo_issues_event_max_datetime": "2021-04-26T13:17:47.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-04-13T15:58:25.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "thesketh/Tachyon",
"max_issues_repo_path": "docs/intro.tex",
"max_line_length": 78,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "4e347495c3d996ed5f68d96a424d481d030843da",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "thesketh/Tachyon",
"max_stars_repo_path": "docs/intro.tex",
"max_stars_repo_stars_event_max_datetime": "2021-09-15T21:37:17.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-09-14T19:38:32.000Z",
"num_tokens": 574,
"size": 2741
} |
% Created 2021-10-05 Tue 12:40
% Intended LaTeX compiler: pdflatex
\documentclass[presentation,aspectratio=169, usenames, dvipsnames]{beamer}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\usepackage{grffile}
\usepackage{longtable}
\usepackage{wrapfig}
\usepackage{rotating}
\usepackage[normalem]{ulem}
\usepackage{amsmath}
\usepackage{textcomp}
\usepackage{amssymb}
\usepackage{capt-of}
\usepackage{hyperref}
\usepackage{khpreamble}
\usepackage{amssymb}
\usepgfplotslibrary{groupplots}
\newcommand*{\shift}{\operatorname{q}}
\definecolor{ppc}{rgb}{0.1,0.1,0.6}
\definecolor{iic}{rgb}{0.6,0.1,0.1}
\definecolor{ddc}{rgb}{0.1,0.6,0.1}
\usetheme{default}
\author{Kjartan Halvorsen}
\date{\today}
\title{PID control}
\hypersetup{
pdfauthor={Kjartan Halvorsen},
pdftitle={PID control},
pdfkeywords={},
pdfsubject={},
pdfcreator={Emacs 26.3 (Org mode 9.4.6)},
pdflang={English}}
\begin{document}
\maketitle
\section{Integral control example}
\label{sec:org03412a9}
\begin{frame}[label={sec:org11a0c6e}]{Lead-lag compensator for position control of the DC motor}
\begin{center}
\includegraphics[width=.6\linewidth]{../../figures/block-DC-leadlag-compensator-numerical}
\end{center}
\begin{columns}
\begin{column}{0.5\columnwidth}
\begin{center}
\includegraphics[width=.8\linewidth]{../../figures/bode-loop-gain-leadlag-normalized-DC-crop}
\end{center}
\end{column}
\begin{column}{0.5\columnwidth}
\begin{center}
\includegraphics[width=.8\linewidth]{../../figures/response-leadlag-normalized-DC-crop}
\end{center}
\end{column}
\end{columns}
\end{frame}
\section{The need for integral control}
\label{sec:org5b6b53d}
\begin{frame}[label={sec:orgd93ed95}]{Feedback control}
\begin{center}
\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]
{
\node[coordinate] (input) {};
\node[sumnode, right of=input] (sum) {\tiny $\sum$};
\node[block, right of=sum, node distance=2.6cm] (reg) {$F(s)$};
\node[sumnode, right of=reg] (sumd) {\tiny $\sum$};
\node[block, right of=sumd, node distance=2.2cm] (plant) {$G(s)$};
\node[coordinate, right of=plant, node distance=2cm] (output) {};
\node[coordinate, below of=plant, node distance=12mm] (feedback) {};
\node[coordinate, above of=sumd, node distance=12mm] (dist) {};
\draw[->] (plant) -- node[coordinate, inner sep=0pt] (meas) {} node[near end, above] {$y(t)$} (output);
\draw[->] (meas) |- (feedback) -| node[very near end, left] {$-$} (sum);
\draw[->] (input) -- node[very near start, above] {$r(t)$} (sum);
\draw[->] (sum) -- node[above] {$e(t)$} (reg);
\draw[->] (reg) -- node[above] {$u(t)$}(sumd);
\draw[->] (dist) -- node[right, very near start] {$v(t)$}(sumd);
\draw[->] (sumd) -- node[above] {} (plant);
}
\end{tikzpicture}
\end{center}
\pause
\alert{Activity} What is the transfer function from the load disturbance \(v(t)\) to the control error \(e(t)\)?
\end{frame}
\begin{frame}[label={sec:org61318c1}]{Feedback control - eliminating a constant disturbance}
\small
\[\frac{E(s)}{V(s)} = \frac{-G(s)}{1 + G(s)F(s)} \]
\begin{block}{The final value theorem}
If a steady state exists
\[\lim_{t\to\infty} e(t) = \lim_{s\to 0} sE(s)\]
\pause
\end{block}
\begin{block}{Applied to a constant (step input) disturbance}
\(V(s) = \frac{1}{s}\)
\begin{align*}
\lim_{t\to\infty} e(t) &= \lim_{s\to 0} sE(s) = \lim_{s\to 0} s \frac{-G(s)}{1 + G(s)F(s)} \frac{1}{s}\\
&= \lim_{s\to 0} \frac{-G(s)}{1 + G(s)F(s)} % = \frac{G(0)}{1 + G(0)F(0)}
\end{align*}
\pause
\alert{Activity} Assume \(F(s) = \frac{\bar{F}(s)}{s}\) and \(G(0) = b < \infty\). Determine \(\lim_{t\to\infty} e(t)\).
\end{block}
\end{frame}
\begin{frame}[label={sec:org0b55a8a}]{The PID controller}
\begin{center}
\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt},scale=0.8, every node/.style={scale=0.8}]
\node[coordinate] (input) {};
\node[sumnode, right of=input, node distance=16mm] (sum) {\tiny $\Sigma$};
\node[block, right of=sum, node distance=20mm] (pid) {$F(s)$};
\node[coordinate, below of=sum, node distance=12mm] (feedback) {};
\node[coordinate, right of=pid, node distance=20mm] (output) {};
\draw[->] (input) -- node[above, pos=0.3] {$r(t)$} (sum);
\draw[->] (sum) -- node[above] {$e(t)$} (pid);
\draw[->] (pid) -- node[above, near end] {$u(t)$} (output);
\draw[->] (feedback) -- node[left, near start] {$y(t)$} node[right, pos=0.95] {-} (sum);
\end{tikzpicture}
\end{center}
\alert{Parallel form (ISA)}
\[ F(s) = k_c\left( 1 + \frac{1}{\tau_i s} + \tau_d s\right) \]
\alert{Series form}
\[F(s) = K_c \left( \frac{ \tau_I s + 1}{\tau_I s} \right) (\tau_D s + 1)
= \underbrace{\frac{K_c(\tau_I + \tau_D)}{\tau_I}}_{k_c} \left(1 + \frac{1}{\underbrace{(\tau_I + \tau_D)}_{\tau_i} s} + \underbrace{\frac{\tau_I\tau_D}{\tau_I + \tau_D}}_{\tau_d}s \right) \]
\end{frame}
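% A small numerical illustration of the series-to-parallel conversion above; the values K_c=2, tau_I=4, tau_D=1 are arbitrary example numbers.
\begin{frame}{The PID controller - series to parallel, example}
A series-form controller with \(K_c = 2\), \(\tau_I = 4\) and \(\tau_D = 1\) corresponds to the parallel (ISA) parameters
\[ k_c = \frac{K_c(\tau_I + \tau_D)}{\tau_I} = \frac{2\cdot 5}{4} = 2.5, \qquad \tau_i = \tau_I + \tau_D = 5, \qquad \tau_d = \frac{\tau_I\tau_D}{\tau_I + \tau_D} = \frac{4}{5} = 0.8. \]
\end{frame}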
\begin{frame}[label={sec:org288acd6}]{The PID - Parallel form}
\definecolor{ppc}{rgb}{0.1,0.1,0.6}
\definecolor{iic}{rgb}{0.6,0.1,0.1}
\definecolor{ddc}{rgb}{0.1,0.6,0.1}
\begin{center}
\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]
\node[coordinate] (input) {};
\node[sumnode, right of=input, node distance=16mm] (sum) {\tiny $\Sigma$};
\node[color=iic,block, right of=sum, node distance=28mm] (ii) {$\frac{1}{\tau_is}$};
\node[color=ppc, coordinate, above of=ii, node distance=10mm] (pp) {};
\node[color=ddc,block, below of=ii, node distance=10mm] (dd) {$\tau_ds$};
\node[sumnode, right of=ii, node distance=20mm] (sum2) {\tiny $\Sigma$};
\node[block, right of=sum2, node distance=20mm] (gain) {$k_c$};
\node[coordinate, below of=sum, node distance=12mm] (feedback) {};
\node[coordinate, right of=gain, node distance=20mm] (output) {};
\draw[->] (input) -- node[above, pos=0.3] {$r(t)$} (sum);
\draw[->] (sum) -- node[above, pos=0.2] {$e(t)$} node[coordinate] (mm) {} (ii);
\draw[->] (gain) -- node[above, near end] {$u(t)$} (output);
\draw[->] (feedback) -- node[left, near start] {$y(t)$} node[right, pos=0.95] {-} (sum);
\draw[->, color=ppc] (mm) |- (pp) -| node[right,] {$u_P(t)$} (sum2);
\draw[->, color=ddc] (mm) |- (dd) -| node[right,] {$u_D(t)$} (sum2);
\draw[->, color=iic] (ii) -- node[above,] {$u_I(t)$} (sum2);
\draw[->] (sum2) -- node[above, near end] {} (gain);
\end{tikzpicture}
\end{center}
\begin{align*}
u(t) &= k_c\Big( \textcolor{ppc}{e(t)} + \textcolor{iic}{\frac{1}{\tau_i} \int_0^{t} e(\xi) d\xi} + \textcolor{ddc}{\tau_d \frac{d}{dt} e(t)} \Big)
\end{align*}
\end{frame}
\begin{frame}[label={sec:org9114acc}]{The PID - Parallel form, modified D-part}
\definecolor{ppc}{rgb}{0.1,0.1,0.6}
\definecolor{iic}{rgb}{0.6,0.1,0.1}
\definecolor{ddc}{rgb}{0.1,0.6,0.1}
\begin{center}
\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]
\node[coordinate] (input) {};
\node[sumnode, right of=input, node distance=16mm] (sum) {\tiny $\Sigma$};
\node[color=iic,block, right of=sum, node distance=28mm] (ii) {$\frac{1}{\tau_is}$};
\node[color=ppc, coordinate, above of=ii, node distance=10mm] (pp) {};
\node[color=ddc,block, below of=ii, node distance=10mm] (dd) {$\tau_ds$};
\node[sumnode, right of=ii, node distance=20mm] (sum2) {\tiny $\Sigma$};
\node[block, right of=sum2, node distance=20mm] (gain) {$k_c$};
\node[coordinate, below of=sum, node distance=12mm] (feedback) {};
\node[coordinate, right of=gain, node distance=20mm] (output) {};
\draw[->] (input) -- node[above, pos=0.3] {$r(t)$} (sum);
\draw[->] (sum) -- node[above, pos=0.2] {$e(t)$} node[coordinate] (mm) {} (ii);
\draw[->] (gain) -- node[above, near end] {$u(t)$} (output);
\draw[->] (feedback) -- node[left, near start] {$y(t)$} node[right, pos=0.95] {-} (sum);
\draw[->, color=ppc] (mm) |- (pp) -| node[right,] {$u_P(t)$} (sum2);
\draw[->, color=ddc] (feedback |- dd) -- node[above, pos=0.95] {-} (dd);
\draw[->, color=ddc] (dd) -| node[right,] {$u_D(t)$} (sum2) ;
\draw[->, color=iic] (ii) -- node[above,] {$u_I(t)$} (sum2);
\draw[->] (sum2) -- node[above, near end] {} (gain);
\end{tikzpicture}
\end{center}
\[ u(t) = k_c\Big( \textcolor{ppc}{e(t)} + \textcolor{iic}{\overbrace{\frac{1}{\tau_i} \int_0^{t} e(\xi) d\xi}^{u_I(t)}} + \textcolor{ddc}{ \underbrace{\tau_d \frac{d}{dt} \big(-y(t)\big)}_{u_D(t)}} \Big) \]
\end{frame}
\begin{frame}[label={sec:orga8d20ff}]{The PID - Parallel form}
\begin{center}
\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}, scale=0.6, every node/.style={scale=0.6}]
\node[coordinate] (input) {};
\node[sumnode, right of=input, node distance=16mm] (sum) {\tiny $\Sigma$};
\node[color=iic,block, right of=sum, node distance=28mm] (ii) {$\frac{1}{\tau_is}$};
\node[color=ppc, coordinate, above of=ii, node distance=10mm] (pp) {};
\node[color=ddc,block, below of=ii, node distance=10mm] (dd) {$\tau_ds$};
\node[sumnode, right of=ii, node distance=20mm] (sum2) {\tiny $\Sigma$};
\node[block, right of=sum2, node distance=20mm] (gain) {$k_c$};
\node[coordinate, below of=sum, node distance=12mm] (feedback) {};
\node[coordinate, right of=gain, node distance=20mm] (output) {};
\draw[->] (input) -- node[above, pos=0.3] {$r(t)$} (sum);
\draw[->] (sum) -- node[above, pos=0.2] {$e(t)$} node[coordinate] (mm) {} (ii);
\draw[->] (gain) -- node[above, near end] {$u(t)$} (output);
\draw[->] (feedback) -- node[left, near start] {$y(t)$} node[right, pos=0.95] {-} (sum);
\draw[->, color=ppc] (mm) |- (pp) -| node[right,] {$u_P(t)$} (sum2);
\draw[->, color=ddc] (feedback |- dd) -- node[above, pos=0.95] {-} (dd) -| node[right,] {$u_D(t)$} (sum2);
\draw[->, color=iic] (ii) -- node[above,] {$u_I(t)$} (sum2);
\draw[->] (sum2) -- node[above, near end] {} (gain);
\end{tikzpicture}
\small
\( u(t) = k_c\Big( \textcolor{ppc}{e(t)} + \textcolor{iic}{\overbrace{\frac{1}{\tau_i} \int_0^{t} e(\xi) d\xi}^{u_I(t)}} + \textcolor{ddc}{ \underbrace{\tau_d \frac{d}{dt} \big(-y(t)\big)}_{u_D(t)}} \Big)\)
\end{center}
\begin{center}
\def\TT{1}
\begin{tikzpicture}
\begin{axis}[
clip=false,
width=14cm,
height=4.2cm,
ylabel={},
xlabel={$t$},
ymax = 2,
ymin = -0.5,
]
\addplot[black, no marks, domain=-0.1:8, samples=200] {(x>0)*(1 - (1+x/\TT)*exp(-x/\TT))} node[coordinate, pin=-20:{$y(t)$}, pos=0.4] {};
\addplot[magenta!70!black, no marks, domain=-0.1:8, samples=200] coordinates {(-0.1, 0) (0,0) (0,1) (8,1)} node[coordinate, pin=90:{$r(t)$}, pos=0.4] {};
\end{axis}
\end{tikzpicture}
\end{center}
\alert{Activity} Sketch the error signal \(e(t)\), the derivative signal \(u_D(t)\) and the integral signal \(u_I(t)\) (use \(\tau_i=\tau_d=1\))
\end{frame}
\begin{frame}[label={sec:orgaea28f1}]{The PID - Parallel form, solution}
\end{frame}
\begin{frame}[label={sec:org192fa56}]{The PID - Parallel form, solution}
\(u(t) = k_c\Big( \textcolor{ppc}{e(t)} + \textcolor{iic}{\overbrace{\frac{1}{\tau_i} \int_0^{t} e(\xi) d\xi}^{u_I(t)}} + \textcolor{ddc}{ \underbrace{\tau_d \frac{d}{dt} \big(-y(t)\big)}_{u_D(t)}} \Big)\)
\begin{center}
\def\TT{1}
\begin{tikzpicture}
\begin{axis}[
clip=false,
width=14cm,
height=5cm,
ylabel={},
xlabel={$t$},
ymax = 2,
]
\addplot[black, no marks, domain=-0.1:8, samples=200] {(x>0)*(1 - (1+x/\TT)*exp(-x/\TT))} node[coordinate, pin=-20:{$y(t)$}, pos=0.4] {};
\addplot[magenta!70!black, no marks, domain=-0.1:8, samples=200] coordinates {(-0.1, 0) (0,0) (0,1) (8,1)} node[coordinate, pin=90:{$r(t)$}, pos=0.21] {};
\addplot[color=ppc, no marks, domain=0:8, samples=200] {(x>=0)*((1+x/\TT)*exp(-x/\TT))} node[coordinate, pin=20:{$e(t)$}, pos=0.7] {};
\addplot[color=iic, no marks, domain=-0.1:8, samples=200] {(x>0)*(2*(1-exp(-x/\TT)) - x/\TT*exp(-x/\TT))} node[coordinate, pin=-20:{$u_I(t)$}, pos=0.6] {};
\addplot[color=ddc, no marks, domain=-0.1:8, samples=200] {(x>0)*(-x/\TT*exp(-x/\TT))} node[coordinate, pin=-20:{$u_D(t)$}, pos=0.4] {};
\end{axis}
\end{tikzpicture}
\end{center}
\end{frame}
\begin{frame}[label={sec:org4816b36}]{The PID - practical form}
\definecolor{ppc}{rgb}{0.1,0.1,0.6}
\definecolor{iic}{rgb}{0.6,0.1,0.1}
\definecolor{ddc}{rgb}{0.1,0.5,0.1}
\begin{center}
\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]
\node[coordinate] (input) {};
\node[sumnode, right of=input, node distance=16mm] (sum) {\tiny $\Sigma$};
\node[color=iic,block, right of=sum, node distance=28mm] (ii) {$\frac{1}{\tau_is}$};
\node[color=ppc, coordinate, above of=ii, node distance=10mm] (pp) {};
\node[color=ddc,block, below of=ii, node distance=13mm] (dd) {$\frac{\tau_ds}{\frac{\tau_d}{N}s + 1}$};
\node[sumnode, right of=ii, node distance=20mm] (sum2) {\tiny $\Sigma$};
\node[block, right of=sum2, node distance=20mm] (gain) {$k_c$};
\node[coordinate, below of=sum, node distance=12mm] (feedback) {};
\node[coordinate, right of=gain, node distance=20mm] (output) {};
\draw[->] (input) -- node[above, pos=0.3] {$r(t)$} (sum);
\draw[->] (sum) -- node[above, pos=0.2] {$e(t)$} node[coordinate] (mm) {} (ii);
\draw[->] (gain) -- node[above, near end] {$u(t)$} (output);
\draw[->] (feedback) -- node[left, near start] {$y(t)$} node[right, pos=0.95] {-} (sum);
\draw[->, color=ppc] (mm) |- (pp) -| node[right,] {$u_P(t)$} (sum2);
\draw[->, color=ddc] (feedback |- dd) -- node[above, pos=0.95] {-} (dd);
\draw[->, color=ddc] (dd) -| node[right,] {$u_D(t)$} (sum2) ;
\draw[->, color=iic] (ii) -- node[above,] {$u_I(t)$} (sum2);
\draw[->] (sum2) -- node[above, near end] {} (gain);
\end{tikzpicture}
\end{center}
The parameter \(N\) is chosen to limit the influence of noisy measurements. Typically,
\[ 3 < N < 20 \]
\end{frame}
\section{PID tuning - (Smith and Corripio) Ziegler Nichols}
\label{sec:orgef9bf0d}
\begin{frame}[label={sec:orgc65ab9f}]{PID tuning}
\end{frame}
\begin{frame}[label={sec:org6cab308}]{Method by Smith \& Corripio using table by Ziegler-Nichols}
\small
Given a process model (fitted to the response of the system) \[ G(s) = K \frac{\mathrm{e}^{-s\theta}}{\tau s + 1} \] and PID controller
\[ F(s) = k_c\left( 1 + \frac{1}{\tau_i s} + \tau_d s\right) \]
Choose the PID parameters according to the following table (Ziegler-Nichols, 1943)
\begin{center}
\setlength{\tabcolsep}{20pt}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{llll}
Controller & \(k_c\) & \(\tau_i\) & \(\tau_d\)\\
\hline\hline
P & \(\frac{\tau}{\theta K}\) & & \\
PI & \(\frac{0.9\tau}{\theta K}\) & \(\frac{\theta}{0.3}\) & \\
PID & \(\frac{1.2\tau}{\theta K}\) & \(2\theta\) & \(\frac{\theta}{2}\)\\
\hline
\end{tabular}
\end{center}
Gives good control for \[0.1 < \frac{\theta}{\tau} < 0.6.\]
\end{frame}
\section{Fitting first-order model with delay}
\label{sec:org5c3dc80}
\begin{frame}[label={sec:orgadb0320}]{Fitting first-order model with delay}
Assuming a first-order plant model with time constant \(\tau\) and delay \(\theta\)
\[ \quad \textcolor{green!50!black}{Y(s)} = \frac{K\mathrm{e}^{-s\theta}}{s\tau + 1}\textcolor{blue!80!black}{U(s)} \quad \overset{U(s) = \frac{u_f}{s}}{\Longrightarrow} \quad \textcolor{green!50!black}{y(t)} = u_f K\big( 1 - \mathrm{e}^{-\frac{t-\theta}{\tau}}\big)u_H(t-\theta)\]
\def\Tcnst{3}
\def\tdelay{0.6}
\def\ggain{2}
\def\uampl{0.8}
\pgfmathsetmacro{\yfinal}{\uampl*\ggain}
\pgfmathsetmacro{\yone}{0.283*\yfinal}
\pgfmathsetmacro{\ytwo}{0.632*\yfinal}
\pgfmathsetmacro{\tone}{\tdelay + \Tcnst/3}
\pgfmathsetmacro{\two}{\tdelay + \Tcnst}
\begin{center}
\begin{tikzpicture}
\begin{axis}[
width=14cm,
height=4.5cm,
grid = both,
xtick = {0, \tdelay, \tone, \two},
xticklabels = {0, $\theta$, $\theta+\frac{\tau}{3}$, $\theta + \tau$},
ytick = {0, \yone, \ytwo, \uampl, \yfinal},
yticklabels = {0, $ $, $ $, $u_f$, $y_f$},
xmin = -0.2,
%minor y tick num=9,
%minor x tick num=9,
%every major grid/.style={red, opacity=0.5},
xlabel = {$t$},
]
\addplot [thick, green!50!black, no marks, domain=0:10, samples=100] {\uampl*\ggain*(x>\tdelay)*(1 - exp(-(x-\tdelay)/\Tcnst))} node [coordinate, pos=0.9, pin=-90:{$y(t)$}] {};
\addplot [const plot, thick, blue!80!black, no marks, domain=-1:10, samples=100] coordinates {(-1,0) (0,0) (0,\uampl) (10,\uampl)} node [coordinate, pos=0.9, pin=-90:{$u(t)$}] {};
\end{axis}
\end{tikzpicture}
\end{center}
\alert{Individual activity} Evaluate the response \(y(t)\) at the two time instants \(t=\theta + \frac{\tau}{3}\) and \(t=\theta + \tau\)!
\end{frame}
\begin{frame}[label={sec:org0049621}]{Fitting first-order model with delay}
Assuming a first-order plant model with time constant \(\tau\) and delay \(\theta\)
\[ \quad \textcolor{green!50!black}{Y(s)} = \frac{K\mathrm{e}^{-s\theta}}{s\tau + 1}\textcolor{blue!80!black}{U(s)} \quad \overset{U(s) = \frac{u_f}{s}}{\Longrightarrow} \quad \textcolor{green!50!black}{y(t)} = u_f K\big( 1 - \mathrm{e}^{-\frac{t-\theta}{\tau}}\big)u_H(t-\theta)\]
\def\Tcnst{3}
\def\tdelay{0.6}
\def\ggain{2}
\def\uampl{0.8}
\pgfmathsetmacro{\yfinal}{\uampl*\ggain}
\pgfmathsetmacro{\yone}{0.283*\yfinal}
\pgfmathsetmacro{\ytwo}{0.632*\yfinal}
\pgfmathsetmacro{\tone}{\tdelay + \Tcnst/3}
\pgfmathsetmacro{\two}{\tdelay + \Tcnst}
\begin{center}
\begin{tikzpicture}
\begin{axis}[
width=14cm,
height=4.5cm,
grid = both,
xtick = {0, \tdelay, \tone, \two},
xticklabels = {0, $\theta$, $\theta+\frac{\tau}{3}$, $\theta + \tau$},
ytick = {0, \yone, \ytwo, \uampl, \yfinal},
yticklabels = {0, $0.283y_{f}$, $0.632y_f$, $u_f$, $y_f$},
xmin = -0.2,
%minor y tick num=9,
%minor x tick num=9,
%every major grid/.style={red, opacity=0.5},
xlabel = {$t$},
]
\addplot [thick, green!50!black, no marks, domain=0:10, samples=100] {\uampl*\ggain*(x>\tdelay)*(1 - exp(-(x-\tdelay)/\Tcnst))} node [coordinate, pos=0.9, pin=-90:{$y(t)$}] {};
\addplot [const plot, thick, blue!80!black, no marks, domain=-1:10, samples=100] coordinates {(-1,0) (0,0) (0,\uampl) (10,\uampl)} node [coordinate, pos=0.9, pin=-90:{$u(t)$}] {};
\end{axis}
\end{tikzpicture}
\end{center}
\[ y_f = \lim_{t\to\infty} y(t) = u_f K \quad \Rightarrow \quad K = \frac{y_f}{u_f}. \]
\end{frame}
\begin{frame}[label={sec:orgce08b4c}]{First-order model with delay - example}
\[ \quad Y(s) = \frac{K\mathrm{e}^{-s\theta}}{s\tau + 1}U(s) \quad \overset{U(s) = \frac{u_f}{s}}{\Longrightarrow} \quad y(t) = u_f K\big( 1 - \mathrm{e}^{-\frac{t-\theta}{\tau}}\big)u_s(t-\theta)\]
\def\Tcnst{2.1}
\def\tdelay{1}
\def\ggain{2}
\def\uampl{0.8}
\pgfmathsetmacro{\yfinal}{\uampl*\ggain}
\pgfmathsetmacro{\yone}{0.283*\yfinal}
\pgfmathsetmacro{\ytwo}{0.632*\yfinal}
\pgfmathsetmacro{\tone}{\tdelay + \Tcnst/3}
\pgfmathsetmacro{\two}{\tdelay + \Tcnst}
\begin{center}
\begin{tikzpicture}
\begin{axis}[
width=12cm,
height=4cm,
grid = both,
%xtick = {0, \tdelay, \tone, \two},
%xticklabels = {0, $\theta$, $\theta+\frac{\tau}{3}$, $\theta + \tau$},
%ytick = {0, \yone, \ytwo, \uampl, \yfinal},
%yticklabels = {0, $0.283y_{f}$, $0.632y_f$, $u_f$, $y_f$},
xmin = -0.2,
minor y tick num=9,
minor x tick num=9,
every major grid/.style={red, opacity=0.5},
%xlabel = {$t$},
clip = false,
]
\addplot [thick, green!50!black, smooth, no marks, domain=0:10, samples=16] {\uampl*\ggain*(x>\tdelay)*(1 - exp(-(x-\tdelay)/\Tcnst))} node [coordinate, pos=0.9, pin=-90:{$y(t)$}] {};
\addplot [const plot, thick, blue!80!black, no marks, domain=-1:10, samples=100] coordinates {(-1,0) (0,0) (0,\uampl) (10,\uampl)} node [coordinate, pos=0.9, pin=-90:{$u(t)$}] {};
\draw[thick, green!70!black, dashed] (axis cs: 10, \yfinal) -- (axis cs: -1, \yfinal) node[left, anchor=east] {$y_f = \yfinal$};
\draw[blue!70!black, dashed] (axis cs: 0, \uampl) -- (axis cs: -1, \uampl) node[left, anchor=east] {$u_f = \uampl$};
\end{axis}
\end{tikzpicture}
\end{center}
\end{frame}
\begin{frame}[label={sec:orga70f7de}]{First-order model with delay - example}
\[ \quad Y(s) = \frac{K\mathrm{e}^{-s\theta}}{s\tau + 1}U(s) \quad \overset{U(s) = \frac{u_f}{s}}{\Longrightarrow} \quad y(t) = u_f K\big( 1 - \mathrm{e}^{-\frac{t-\theta}{\tau}}\big)u_s(t-\theta)\]
\def\Tcnst{2.1}
\def\tdelay{1}
\def\ggain{2}
\def\uampl{0.8}
\pgfmathsetmacro{\yfinal}{\uampl*\ggain}
\pgfmathsetmacro{\yone}{0.283*\yfinal}
\pgfmathsetmacro{\ytwo}{0.632*\yfinal}
\pgfmathsetmacro{\tone}{\tdelay + \Tcnst/3}
\pgfmathsetmacro{\two}{\tdelay + \Tcnst}
\begin{center}
\begin{tikzpicture}
\begin{axis}[
width=12cm,
height=4cm,
grid = both,
%xtick = {0, \tdelay, \tone, \two},
%xticklabels = {0, $\theta$, $\theta+\frac{T}{3}$, $\theta + T$},
%ytick = {0, \yone, \ytwo, \uampl, \yfinal},
%yticklabels = {0, $0.283y_{f}$, $0.632y_f$, $u_f$, $y_f$},
xmin = -0.2,
minor y tick num=9,
minor x tick num=9,
every major grid/.style={red, opacity=0.5},
%xlabel = {$t$},
clip = false,
]
\addplot [thick, green!50!black, smooth, no marks, domain=0:10, samples=16] {\uampl*\ggain*(x>\tdelay)*(1 - exp(-(x-\tdelay)/\Tcnst))} node [coordinate, pos=0.9, pin=-90:{$y(t)$}] {};
\addplot [const plot, thick, blue!80!black, no marks, domain=-1:10, samples=100] coordinates {(-1,0) (0,0) (0,\uampl) (10,\uampl)} node [coordinate, pos=0.9, pin=-90:{$u(t)$}] {};
\draw[thick, orange, dashed] (axis cs: \two, \ytwo) -- (axis cs: \two, -0.9) node[below] {$t_2 = \two = \theta + \tau$};
\draw[thick, orange, dashed] (axis cs: \two, \ytwo) -- (axis cs: -1, \ytwo) node[left, anchor=east] {$0.632y_f = \ytwo$};
\draw[thick, green!70!black, dashed] (axis cs: 10, \yfinal) -- (axis cs: -1, \yfinal) node[left, anchor=east] {$y_f = \yfinal$};
\draw[blue!70!black, dashed] (axis cs: 0, \uampl) -- (axis cs: -1, \uampl) node[left, anchor=east] {$u_f = \uampl$};
\end{axis}
\end{tikzpicture}
\end{center}
\end{frame}
\begin{frame}[label={sec:orge8ca517}]{First-order model with delay - example}
\[ \quad Y(s) = \frac{K\mathrm{e}^{-s\theta}}{s\tau + 1}U(s) \quad \overset{U(s) = \frac{u_f}{s}}{\Longrightarrow} \quad y(t) = u_f K\big( 1 - \mathrm{e}^{-\frac{t-\theta}{\tau}}\big)u_s(t-\theta)\]
\def\Tcnst{2.1}
\def\tdelay{1}
\def\ggain{2}
\def\uampl{0.8}
\pgfmathsetmacro{\yfinal}{\uampl*\ggain}
\pgfmathsetmacro{\yone}{0.283*\yfinal}
\pgfmathsetmacro{\ytwo}{0.632*\yfinal}
\pgfmathsetmacro{\tone}{\tdelay + \Tcnst/3}
\pgfmathsetmacro{\two}{\tdelay + \Tcnst}
\begin{center}
\begin{tikzpicture}
\begin{axis}[
width=12cm,
height=4cm,
grid = both,
%xtick = {0, \tdelay, \tone, \two},
%xticklabels = {0, $\theta$, $\theta+\frac{T}{3}$, $\theta + T$},
%ytick = {0, \yone, \ytwo, \uampl, \yfinal},
%yticklabels = {0, $0.283y_{f}$, $0.632y_f$, $u_f$, $y_f$},
xmin = -0.2,
minor y tick num=9,
minor x tick num=9,
every major grid/.style={red, opacity=0.5},
%xlabel = {$t$},
clip = false,
]
\addplot [thick, green!50!black, smooth, no marks, domain=0:10, samples=16] {\uampl*\ggain*(x>\tdelay)*(1 - exp(-(x-\tdelay)/\Tcnst))} node [coordinate, pos=0.9, pin=-90:{$y(t)$}] {};
\addplot [const plot, thick, blue!80!black, no marks, domain=-1:10, samples=100] coordinates {(-1,0) (0,0) (0,\uampl) (10,\uampl)} node [coordinate, pos=0.9, pin=-90:{$u(t)$}] {};
\draw[thick, red, dashed] (axis cs: \tone, \yone) -- (axis cs: \tone, -0.45) node[below] {$t_1 = \tone = \theta + \frac{\tau}{3}$};
\draw[thick, red, dashed] (axis cs: \tone, \yone) -- (axis cs: -1,\yone) node[left, anchor=east] {$0.283y_f = \yone$};
\draw[thick, orange, dashed] (axis cs: \two, \ytwo) -- (axis cs: \two, -0.9) node[below] {$t_2 = \two = \theta + \tau$};
\draw[thick, orange, dashed] (axis cs: \two, \ytwo) -- (axis cs: -1, \ytwo) node[left, anchor=east] {$0.632y_f = \ytwo$};
\draw[thick, green!70!black, dashed] (axis cs: 10, \yfinal) -- (axis cs: -1, \yfinal) node[left, anchor=east] {$y_f = \yfinal$};
\draw[blue!70!black, dashed] (axis cs: 0, \uampl) -- (axis cs: -1, \uampl) node[left, anchor=east] {$u_f = \uampl$};
\end{axis}
\end{tikzpicture}
\end{center}
\end{frame}
\begin{frame}[label={sec:org50ebc80}]{First-order model with delay - example}
\[ \quad Y(s) = \frac{K\mathrm{e}^{-s\theta}}{s\tau + 1}U(s) \quad \overset{U(s) = \frac{u_f}{s}}{\Longrightarrow} \quad y(t) = u_f K\big( 1 - \mathrm{e}^{-\frac{t-\theta}{\tau}}\big)u_s(t-\theta)\]
\def\Tcnst{2.1}
\def\tdelay{1}
\def\ggain{2}
\def\uampl{0.8}
\pgfmathsetmacro{\yfinal}{\uampl*\ggain}
\pgfmathsetmacro{\yone}{0.283*\yfinal}
\pgfmathsetmacro{\ytwo}{0.632*\yfinal}
\pgfmathsetmacro{\tone}{\tdelay + \Tcnst/3}
\pgfmathsetmacro{\two}{\tdelay + \Tcnst}
\begin{center}
\begin{tikzpicture}
\begin{axis}[
width=12cm,
height=4cm,
grid = both,
%xtick = {0, \tdelay, \tone, \two},
%xticklabels = {0, $\theta$, $\theta+\frac{T}{3}$, $\theta + T$},
%ytick = {0, \yone, \ytwo, \uampl, \yfinal},
%yticklabels = {0, $0.283y_{f}$, $0.632y_f$, $u_f$, $y_f$},
xmin = -0.2,
minor y tick num=9,
minor x tick num=9,
every major grid/.style={red, opacity=0.5},
%xlabel = {$t$},
clip = false,
]
\addplot [thick, green!50!black, smooth, no marks, domain=0:10, samples=16] {\uampl*\ggain*(x>\tdelay)*(1 - exp(-(x-\tdelay)/\Tcnst))} node [coordinate, pos=0.9, pin=-90:{$y(t)$}] {};
\addplot [const plot, thick, blue!80!black, no marks, domain=-1:10, samples=100] coordinates {(-1,0) (0,0) (0,\uampl) (10,\uampl)} node [coordinate, pos=0.9, pin=-90:{$u(t)$}] {};
\draw[thick, red, dashed] (axis cs: \tone, \yone) -- (axis cs: \tone, -0.45) node[below] {$t_1 = \tone = \theta + \frac{\tau}{3}$};
\draw[thick, red, dashed] (axis cs: \tone, \yone) -- (axis cs: -1,\yone) node[left, anchor=east] {$0.283y_f = \yone$};
\draw[thick, orange, dashed] (axis cs: \two, \ytwo) -- (axis cs: \two, -0.9) node[below] {$t_2 = \two = \theta + \tau$};
\draw[thick, orange, dashed] (axis cs: \two, \ytwo) -- (axis cs: -1, \ytwo) node[left, anchor=east] {$0.632y_f = \ytwo$};
\draw[thick, green!70!black, dashed] (axis cs: 10, \yfinal) -- (axis cs: -1, \yfinal) node[left, anchor=east] {$y_f = \yfinal$};
\draw[blue!70!black, dashed] (axis cs: 0, \uampl) -- (axis cs: -1, \uampl) node[left, anchor=east] {$u_f = \uampl$};
\end{axis}
\end{tikzpicture}
\end{center}
\[ \begin{cases} \tone = \theta + \frac{\tau}{3}\\ \two = \theta + \tau \end{cases} \quad \Rightarrow \quad \begin{cases} \theta = \tdelay \\ \tau = \Tcnst \end{cases}, \qquad K = \frac{y_f}{u_f} = \frac{\yfinal}{\uampl} = \ggain \]
\end{frame}
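% Example slide combining the fitted model above with the Ziegler-Nichols table from the previous section.
\begin{frame}{Fitted model with Ziegler-Nichols tuning - example}
The fitted model \(K = 2\), \(\tau = 2.1\), \(\theta = 1\) has \(\frac{\theta}{\tau} \approx 0.48\), inside the recommended range \(0.1 < \frac{\theta}{\tau} < 0.6\). The Ziegler-Nichols table then gives, for a PI controller,
\[ k_c = \frac{0.9\tau}{\theta K} = \frac{0.9 \cdot 2.1}{1 \cdot 2} \approx 0.95, \qquad \tau_i = \frac{\theta}{0.3} \approx 3.3, \]
and for a PID controller
\[ k_c = \frac{1.2\tau}{\theta K} = 1.26, \qquad \tau_i = 2\theta = 2, \qquad \tau_d = \frac{\theta}{2} = 0.5. \]
\end{frame}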
\section{Analytical PID design}
\label{sec:orgb34fb58}
\begin{frame}[label={sec:org7cae885}]{Analytical PID design}
\end{frame}
\begin{frame}[label={sec:orgdf78a2d}]{Analytical PID design}
\begin{center}
\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]
{
\node[coordinate] (input) {};
\node[sumnode, right of=input] (sum) {\tiny $\sum$};
\node[block, right of=sum, node distance=2.6cm] (reg) {$F(s)$};
\node[block, right of=reg, node distance=2.6cm] (plant) {$G(s)$};
\node[coordinate, right of=plant, node distance=2cm] (output) {};
\node[coordinate, below of=plant, node distance=12mm] (feedback) {};
\draw[->] (plant) -- node[coordinate, inner sep=0pt] (meas) {} node[near end, above] {$y(t)$} (output);
\draw[->] (meas) |- (feedback) -| node[very near end, left] {$-$} (sum);
\draw[->] (input) -- node[very near start, above] {$r(t)$} (sum);
\draw[->] (sum) -- node[above] {$e(t)$} (reg);
\draw[->] (reg) -- node[above] {$u(t)$}(plant);
}
\end{tikzpicture}
\end{center}
\alert{Activity} Solve for \(F(s)\) in the closed-loop transfer function \[G_c(s) = \frac{G(s)F(s)}{1 + G(s)F(s)}\]
\end{frame}
\begin{frame}[label={sec:org3a440f1}]{Analytical PID design - Solution}
\end{frame}
\begin{frame}[label={sec:orgd79c818}]{Analytical PID design - Solution}
Solving for \(F(s)\) in the closed-loop transfer function \(G_c(s) = \frac{G(s)F(s)}{1 + G(s)F(s)}\)
\[ \big(1 + G(s)F(s)\big) G_c(s) = G(s)F(s)\]
\[ G_c(s) = \big( 1 - G_c(s)\big) G(s)F(s)\]
\[F(s) = \frac{\frac{G_c(s)}{G(s)}}{1 - G_c(s)}\]
\end{frame}
\begin{frame}[label={sec:org5fee8ef}]{Analytic PID tuning - first-order with delay}
\begin{center}
\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]
{
\node[coordinate] (input) {};
\node[sumnode, right of=input] (sum) {\tiny $\sum$};
\node[block, right of=sum, node distance=2.6cm] (reg) {$F(s)$};
\node[block, right of=reg, node distance=2.6cm] (plant) {$G(s)$};
\node[coordinate, right of=plant, node distance=2cm] (output) {};
\node[coordinate, below of=plant, node distance=12mm] (feedback) {};
\draw[->] (plant) -- node[coordinate, inner sep=0pt] (meas) {} node[near end, above] {$y(t)$} (output);
\draw[->] (meas) |- (feedback) -| node[very near end, left] {$-$} (sum);
\draw[->] (input) -- node[very near start, above] {$r(t)$} (sum);
\draw[->] (sum) -- node[above] {$e(t)$} (reg);
\draw[->] (reg) -- node[above] {$u(t)$}(plant);
}
\end{tikzpicture}
\end{center}
Given model \(G(s) = K \frac{\mathrm{e}^{-s\theta}}{\tau s + 1}\) of the process and desired closed-loop transfer function \(G_c(s) = \frac{\mathrm{e}^{-s\theta}}{\tau_c s + 1}\)
\alert{Activity} Show that the controller becomes
\[ F(s) = \frac{1}{K} \left( \frac{\tau s + 1}{\tau_c s + 1 - \mathrm{e}^{-s\theta}} \right) \approx \frac{1}{K} \left( \frac{\tau s + 1}{(\tau_c+\theta) s}\right) = \underbrace{\frac{\tau}{K(\tau_c+\theta)}}_{k_c} \left( 1 + \frac{1}{\underbrace{\tau}_{\tau_i} s} \right).\]
(The middle step uses the approximation \(\mathrm{e}^{-s\theta} \approx 1 - s\theta\).) The result is a PI controller with \(k_c = \frac{\tau}{K(\tau_c+\theta)}\) and \(\tau_i = \tau\).
\end{frame}
\begin{frame}[label={sec:org0d28ec6}]{Example}
\small
\begin{center}
\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]
{
\node[coordinate] (input) {};
\node[sumnode, right of=input] (sum) {\tiny $\sum$};
\node[block, right of=sum, node distance=2.6cm] (reg) {$k_c\frac{\tau_i s + 1}{\tau_i s}$};
\node[block, right of=reg, node distance=2.6cm] (plant) {$\frac{200 \mathrm{e}^{-0.1s}}{0.1s + 1}$};
\node[coordinate, right of=plant, node distance=2cm] (output) {};
\node[coordinate, below of=plant, node distance=12mm] (feedback) {};
\draw[->] (plant) -- node[coordinate, inner sep=0pt] (meas) {} node[near end, above] {$y(t)$} (output);
\draw[->] (meas) |- (feedback) -| node[very near end, left] {$-$} (sum);
\draw[->] (input) -- node[very near start, above] {$r(t)$} (sum);
\draw[->] (sum) -- node[above] {$e(t)$} (reg);
\draw[->] (reg) -- node[above] {$u(t)$}(plant);
}
\end{tikzpicture}
\end{center}
\(k_c = \frac{\tau}{K(\tau_c+\theta)}\) and \(\tau_i = \tau\).
\alert{Activity} Determine the controller for the choice \(\tau_c = \tau\)
\end{frame}
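% A solution sketch for the activity on the example slide above.
\begin{frame}{Example - solution sketch}
Here \(K = 200\), \(\tau = 0.1\) and \(\theta = 0.1\). With the choice \(\tau_c = \tau = 0.1\),
\[ k_c = \frac{\tau}{K(\tau_c + \theta)} = \frac{0.1}{200 \cdot 0.2} = 0.0025, \qquad \tau_i = \tau = 0.1, \]
so the resulting PI controller is
\[ F(s) = 0.0025\left(1 + \frac{1}{0.1 s}\right) = 0.0025\, \frac{0.1 s + 1}{0.1 s}. \]
\end{frame}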
\end{document}
\chapter{Approximation methods}
Perturbation theory is based on the assumption that the problem we wish to solve is, in some sense, only slightly different from a problem that can be solved exactly. In the case where the deviation between the two problems is small, perturbation theory is suitable for calculating the contribution associated with this deviation; this contribution is then added as a correction to the energy and the wave function of the exactly solvable Hamiltonian. So perturbation theory builds on the known exact solutions to obtain approximate solutions.\\
What about systems whose Hamiltonians cannot be reduced to an exactly solvable part plus a small correction? For these, we may consider the variational method or the WKB approximation. The variational method is particularly useful in estimating the energy eigenvalues of the ground state and the first few excited states of a system for which one has only a qualitative idea about the form of the wave function.\\
The WKB method is useful for finding the energy eigenvalues and wave functions of systems for which the classical limit is valid.
\section{Time independent perturbation theory}
This method is most suitable when $\hat{H}$ is very close to a Hamiltonian $\hat{H}_{0}$ that can be solved exactly. In this case, $\hat{H}$ can be split into two time-independent parts
$$
\hat{H}=\hat{H}_{0}+\hat{H}_{p},
$$
where $\hat{H}_{p}$ is very small compared to $\hat{H}_{0}$ ($\hat{H}_{0}$ is known as the Hamiltonian of the unperturbed system). As a result, $\hat{H}_{p}$ is called the perturbation, for its effects on the energy spectrum and eigenfunctions will be small; such perturbations are encountered, for instance, in systems subject to weak electric or magnetic fields. We can make this idea more explicit by writing $\hat{H}_{p}$ in terms of a dimensionless real parameter $\lambda$ which is very small compared to 1:
$$
\hat{H}_{p}=\lambda \hat{W} \quad(\lambda \ll 1) .
$$
Thus the eigenvalue problem becomes
$$
\left(\hat{H}_{0}+\lambda \hat{W}\right)\left|\psi_{n}\right\rangle=E_{n}\left|\psi_{n}\right\rangle .
$$
In what follows we are going to consider two separate cases depending on whether the exact solutions of $\hat{H}_{0}$ are nondegenerate or degenerate. Each of these two cases requires its own approximation scheme.
\subsection{Nondegenerate Perturbation theory}
In this section we limit our study to the case where $\hat{H}_{0}$ has no degenerate eigenvalues; that is, for every energy $E_{n}^{(0)}$ there corresponds only one eigenstate $\left|\phi_{n}\right\rangle:$
$$
\hat{H}_{0}\left|\phi_{n}\right\rangle=E_{n}^{(0)}\left|\phi_{n}\right\rangle,
$$
where the exact eigenvalues $E_{n}^{(0)}$ and exact eigenfunctions $\left|\phi_{n}\right\rangle$ are known.\\
The main idea of perturbation theory consists in assuming that the perturbed eigenvalues and eigenstates can both be expanded in power series in the parameter $\lambda$ :
$$
\begin{aligned}
E_{n} &=E_{n}^{(0)}+\lambda E_{n}^{(1)}+\lambda^{2} E_{n}^{(2)}+\cdots \\
\left|\psi_{n}\right\rangle &=\left|\phi_{n}\right\rangle+\lambda\left|\psi_{n}^{(1)}\right\rangle+\lambda^{2}\left|\psi_{n}^{(2)}\right\rangle+\cdots
\end{aligned}
$$
The job of perturbation theory then reduces to the calculation of $E_{n}^{(1)}, E_{n}^{(2)}, \ldots$ and $\left|\psi_{n}^{(1)}\right\rangle$, $\left|\psi_{n}^{(2)}\right\rangle, \ldots$ In this section we shall be concerned only with the determination of $E_{n}^{(1)}$, $E_{n}^{(2)}$, and $\left|\psi_{n}^{(1)}\right\rangle$. Assuming that the unperturbed states $\left|\phi_{n}\right\rangle$ are nondegenerate, and substituting the power series expansions of $E_{n}$ and $\left|\psi_{n}\right\rangle$ into the eigenvalue equation $\left(\hat{H}_{0}+\lambda \hat{W}\right)\left|\psi_{n}\right\rangle=E_{n}\left|\psi_{n}\right\rangle$, we obtain
$$
\begin{aligned}
&\left(\hat{H}_{0}+\lambda \hat{W}\right)\left(\left|\phi_{n}\right\rangle+\lambda\left|\psi_{n}^{(1)}\right\rangle+\lambda^{2}\left|\psi_{n}^{(2)}\right\rangle+\cdots\right) \\
&\quad=\left(E_{n}^{(0)}+\lambda E_{n}^{(1)}+\lambda^{2} E_{n}^{(2)}+\cdots\right)\left(\left|\phi_{n}\right\rangle+\lambda\left|\psi_{n}^{(1)}\right\rangle+\lambda^{2}\left|\psi_{n}^{(2)}\right\rangle+\cdots\right)
\end{aligned}
$$
The coefficients of successive powers of $\lambda$ on both sides of this equation must be equal. Equating the coefficients of the first three powers of $\lambda$, we obtain these results:\\
- Zero order in $\lambda$ :
$$
\hat{H}_{0}\left|\phi_{n}\right\rangle=E_{n}^{(0)}\left|\phi_{n}\right\rangle
$$
- First order in $\lambda:$
$$
\hat{H}_{0}\left|\psi_{n}^{(1)}\right\rangle+\hat{W}\left|\phi_{n}\right\rangle=E_{n}^{(0)}\left|\psi_{n}^{(1)}\right\rangle+E_{n}^{(1)}\left|\phi_{n}\right\rangle
$$
- Second order in $\lambda$ :
$$
\hat{H}_{0}\left|\psi_{n}^{(2)}\right\rangle+\hat{W}\left|\psi_{n}^{(1)}\right\rangle=E_{n}^{(0)}\left|\psi_{n}^{(2)}\right\rangle+E_{n}^{(1)}\left|\psi_{n}^{(1)}\right\rangle+E_{n}^{(2)}\left|\phi_{n}\right\rangle
$$
We now proceed to determine the eigenvalues $E_{n}^{(1)}, E_{n}^{(2)}$ and the eigenvector $\left|\psi_{n}^{(1)}\right\rangle$. For this, we need to specify how the states $\left|\phi_{n}\right\rangle$ and $\left|\psi_{n}\right\rangle$ overlap. Since $\left|\psi_{n}\right\rangle$ is considered not to be very different from $\left|\phi_{n}\right\rangle$, we have $\left\langle\phi_{n} \mid \psi_{n}\right\rangle \simeq 1$. We can, however, normalize $\left|\psi_{n}\right\rangle$ so that its overlap with $\left|\phi_{n}\right\rangle$ is exactly equal to one:\\
$$\left\langle\phi_{n} \mid \psi_{n}\right\rangle=1 .$$\\
Substituting the power series expansion of $\left|\psi_{n}\right\rangle$ into the above equation, we get\\
$$
\lambda\left\langle\phi_{n} \mid \psi_{n}^{(1)}\right\rangle+\lambda^{2}\left\langle\phi_{n} \mid \psi_{n}^{(2)}\right\rangle+\cdots=0
$$
hence the coefficients of the various powers of $\lambda$ must vanish separately:
$$
\left\langle\phi_{n} \mid \psi_{n}^{(1)}\right\rangle=\left\langle\phi_{n} \mid \psi_{n}^{(2)}\right\rangle=\cdots=0 .
$$
\subsubsection{First order correction}
$$E_{n}^{(1)}=\left\langle\phi_{n}|\hat{W}| \phi_{n}\right\rangle$$
Energy to first order in the perturbation:
$$E_{n}=E_{n}^{(0)}+\left\langle\phi_{n}\left|\hat{H}_{p}\right| \phi_{n}\right\rangle$$
First-order correction to the eigenvector:\\
$$\left|\psi_{n}^{(1)}\right\rangle=\sum_{m \neq n} \frac{\left\langle\phi_{m}|\hat{W}| \phi_{n}\right\rangle}{E_{n}^{(0)}-E_{m}^{(0)}}\left|\phi_{m}\right\rangle$$
The eigenfunction $\left|\psi_{n}\right\rangle$ of $\hat{H}$ to first order in $\lambda \hat{W}$ can then be obtained as
$$
\left|\psi_{n}\right\rangle=\left|\phi_{n}\right\rangle+\sum_{m \neq n} \frac{\left\langle\phi_{m}\left|\hat{H}_{p}\right| \phi_{n}\right\rangle}{E_{n}^{(0)}-E_{m}^{(0)}}\left|\phi_{m}\right\rangle
$$
\subsubsection{Second order correction}
$$E_{n}^{(2)}=\left\langle\phi_{n}|\hat{W}| \psi_{n}^{(1)}\right\rangle$$
which can be written as \\
$$E_{n}^{(2)}=\sum_{m \neq n} \frac{\left|\left\langle\phi_{m}|\hat{W}| \phi_{n}\right\rangle\right|^{2}}{E_{n}^{(0)}-E_{m}^{(0)}}$$\\
The eigenenergy to second order in $\hat{H}_{p}$ is obtained as
$$E_{n}=E_{n}^{(0)}+\left\langle\phi_{n}\left|\hat{H}_{p}\right| \phi_{n}\right\rangle+\sum_{m \neq n} \frac{\left|\left\langle\phi_{m}\left|\hat{H}_{p}\right| \phi_{n}\right\rangle\right|^{2}}{E_{n}^{(0)}-E_{m}^{(0)}}+\cdots .$$
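Note, for instance, that if $\left|\phi_{0}\right\rangle$ denotes the (nondegenerate) ground state, every denominator in the second-order sum is negative, so the second-order correction always lowers the ground-state energy:
$$E_{0}^{(2)}=\sum_{m \neq 0} \frac{\left|\left\langle\phi_{m}\left|\hat{H}_{p}\right| \phi_{0}\right\rangle\right|^{2}}{E_{0}^{(0)}-E_{m}^{(0)}} \leq 0 .$$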
\subsubsection{Application}
\subsubsection{Stark effect ($n=1$, nondegenerate case)}
The effect that an external electric field has on the energy levels of an atom is called the Stark effect. In the absence of an electric field, the (unperturbed) Hamiltonian of the hydrogen atom (in CGS units) is:
$$
\hat{H}_{0}=\frac{\hat{\vec{p}}^{2}}{2 \mu}-\frac{e^{2}}{r} .
$$
The eigenfunctions of this Hamiltonian, $\psi_{n l m}(\vec{r})$, are given by
$$
\langle r \theta \varphi \mid n l m\rangle=\psi_{n l m}(r, \theta, \varphi)=R_{n l}(r) Y_{l m}(\theta, \varphi) .
$$
When the electric field is turned on, the interaction between the atom and the field generates a term $\hat{H}_{p}=e \overrightarrow{\mathcal{E}} \cdot \vec{r}=e \mathcal{E} \hat{Z}$ that needs to be added to $\hat{H}_{0}$.
Since the excited states of the hydrogen atom are degenerate while the ground state is not, nondegenerate perturbation theory applies only to the ground state, $\psi_{100}(\vec{r})$. Ignoring the spin degrees of freedom, the energy of this system to second-order perturbation is given as follows
$$
E_{100}=E_{100}^{(0)}+e \mathcal{E}\langle 100|\hat{Z}| 100\rangle+e^{2} \mathcal{E}^{2} \sum_{n l m \neq 100} \frac{|\langle n l m|\hat{Z}| 100\rangle|^{2}}{E_{100}^{(0)}-E_{n l m}^{(0)}}
$$
The term
$$
\langle 100|\hat{Z}| 100\rangle=\int\left|\psi_{100}(\vec{r})\right|^{2} z d^{3} r
$$
is zero, since $\hat{Z}$ is odd under parity and $\psi_{100}(\vec{r})$ has a definite parity. This means that there can be no correction term to the energy which is proportional to the electric field and hence there is no linear Stark effect. The underlying physics behind this is that when the hydrogen atom is in its ground state, it has no permanent electric dipole moment. We are left then with only a quadratic dependence of the energy on the electric field. This is called the quadratic Stark effect. This correction, which is known as the energy shift $\Delta E$, is given by
$$
\Delta E=e^{2} \mathcal{E}^{2} \sum_{n l m \neq 100} \frac{|\langle n l m|\hat{Z}| 100\rangle|^{2}}{E_{100}^{(0)}-E_{n l m}^{(0)}}
$$
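As a point of reference, this sum can be evaluated in closed form (for example by the Dalgarno-Lewis method), giving the well-known result for the hydrogen ground state
$$\Delta E=-\frac{9}{4} a_{0}^{3} \mathcal{E}^{2},$$
which corresponds to a ground-state polarizability of $\frac{9}{2} a_{0}^{3}$.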
\subsection{Degenerate perturbation theory}
We now apply perturbation theory to determine the energy spectrum and the states of a system whose unperturbed Hamiltonian $\hat{H}_{0}$ is degenerate:
$$
\hat{H}\left|\psi_{n}\right\rangle=\left(\hat{H}_{0}+\hat{H}_{p}\right)\left|\psi_{n}\right\rangle=E_{n}\left|\psi_{n}\right\rangle .
$$
If, for instance, the level of energy $E_{n}^{(0)}$ is $f$-fold degenerate (i.e., there exists a set of $f$ different eigenstates $\left|\phi_{n_{\alpha}}\right\rangle$, where $\alpha=1,2, \ldots, f$, that correspond to the same eigenenergy $\left.E_{n}^{(0)}\right)$, we have
$$
\hat{H}_{0}\left|\phi_{n_{\alpha}}\right\rangle=E_{n}^{(0)}\left|\phi_{n_{\alpha}}\right\rangle \quad(\alpha=1,2, \ldots, f),
$$
where $\alpha$ stands for one or more quantum numbers; the energy eigenvalues $E_{n}^{(0)}$ are independent of $\alpha$.
In the zeroth-order approximation we can write the eigenfunction $\left|\psi_{n}\right\rangle$ as a linear combination of the $\left|\phi_{n_{\alpha}}\right\rangle$:
$$
\left|\psi_{n}\right\rangle=\sum_{\alpha=1}^{f} a_{\alpha}\left|\phi_{n_{\alpha}}\right\rangle .
$$
Considering the states $\left|\phi_{n_{\alpha}}\right\rangle$ to be orthonormal with respect to the label $\alpha$ (i.e., $\left\langle\phi_{n_{\alpha}} \mid \phi_{n_{\beta}}\right\rangle=\delta_{\alpha, \beta}$) and $\left|\psi_{n}\right\rangle$ to be normalized, $\left\langle\psi_{n} \mid \psi_{n}\right\rangle=1$, we can ascertain that the coefficients $a_{\alpha}$ obey the relation
$$
\left\langle\psi_{n} \mid \psi_{n}\right\rangle=\sum_{\alpha, \beta} a_{\alpha}^{*} a_{\beta} \delta_{\alpha, \beta}=\sum_{\alpha=1}^{f}\left|a_{\alpha}\right|^{2}=1 .
$$
In what follows we are going to show how to determine these coefficients and the first-order corrections to the energy. For this, let us substitute $\left|\psi_{n}\right\rangle=\sum_{\alpha=1}^{f} a_{\alpha}\left|\phi_{n_{\alpha}}\right\rangle$ and $\hat{H}_{0}\left|\phi_{n_{\alpha}}\right\rangle=E_{n}^{(0)}\left|\phi_{n_{\alpha}}\right\rangle$ into the perturbed eigenvalue equation; we get
$$
\sum_{\alpha}\left[E_{n}^{(0)}\left|\phi_{n_{\alpha}}\right\rangle+\hat{H}_{p}\left|\phi_{n_{\alpha}}\right\rangle\right] a_{\alpha}=E_{n} \sum_{\alpha} a_{\alpha}\left|\phi_{n_{\alpha}}\right\rangle
$$
The multiplication of both sides of this equation by $\left\langle\phi_{n_{\beta}}\right|$ leads to
$$
\sum_{\alpha} a_{\alpha}\left[E_{n}^{(0)} \delta_{\alpha, \beta}+\left\langle\phi_{n_{\beta}}\left|\hat{H}_{p}\right| \phi_{n_{\alpha}}\right\rangle\right]=E_{n} \sum_{\alpha} a_{\alpha} \delta_{\alpha, \beta}
$$
or
$$
a_{\beta} E_{n}=a_{\beta} E_{n}^{(0)}+\sum_{\alpha=1}^{f} a_{\alpha}\left\langle\phi_{n_{\beta}}\left|\hat{H}_{p}\right| \phi_{n_{\alpha}}\right\rangle,
$$
where we have used $\left\langle\phi_{n_{\beta}} \mid \phi_{n_{\alpha}}\right\rangle=\delta_{\beta, \alpha}$. We can rewrite the above equation as
$$
\sum_{\alpha=1}^{f}\left(\hat{H}_{p_{\beta \alpha}}-E_{n}^{(1)} \delta_{\alpha, \beta}\right) a_{\alpha}=0 \quad(\beta=1,2, \ldots, f),
$$
with $\hat{H}_{p_{\beta \alpha}}=\left\langle\phi_{n_{\beta}}\left|\hat{H}_{p}\right| \phi_{n_{\alpha}}\right\rangle$ and $E_{n}^{(1)}=E_{n}-E_{n}^{(0)}$. This is a system of $f$ homogeneous linear equations for the coefficients $a_{\alpha}$. These coefficients are nonvanishing only when the determinant $\left|\hat{H}_{p_{\alpha \beta}}-E_{n}^{(1)} \delta_{\alpha, \beta}\right|$ is zero:
$$\left|\begin{array}{ccccc}
\hat{H}_{p_{11}}-E_{n}^{(1)} & \hat{H}_{p_{12}} & \hat{H}_{p_{13}} & \cdots & \hat{H}_{p_{1 f}} \\
\hat{H}_{p_{21}} & \hat{H}_{p_{22}}-E_{n}^{(1)} & \hat{H}_{p_{23}} & \cdots & \hat{H}_{p_{2 f}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\hat{H}_{p_{f 1}} & \hat{H}_{p_{f 2}} & \hat{H}_{p_{f 3}} & \cdots & \hat{H}_{p_{f f}}-E_{n}^{(1)}
\end{array}\right|=0$$
In summary, to determine the eigenvalues to first-order and the eigenstates to zeroth order for an $f$-fold degenerate level from perturbation theory, we proceed as follows:
\begin{itemize}
\item First, for each $f$-fold degenerate level, determine the $f \times f$ matrix of the perturbation $\hat{H}_{p}$ :
$$H_{p}=\left(\begin{array}{cccc}
\hat{H}_{p_{11}} & \hat{H}_{p_{12}} & \cdots & \hat{H}_{p_{1 f}} \\
\hat{H}_{p_{21}} & \hat{H}_{p_{22}} & \cdots & \hat{H}_{p_{2 f}} \\
\vdots & \vdots & \ddots & \vdots \\
\hat{H}_{p_{f 1}} & \hat{H}_{p_{f 2}} & \cdots & \hat{H}_{p_{f f}}
\end{array}\right)$$
\item Second, diagonalize this matrix and find the $f$ eigenvalues $E_{n_{\alpha}}^{(1)}(\alpha=1,2, \ldots, f)$ and their corresponding eigenvectors\\
$$a_{\alpha}=\left(\begin{array}{c}
a_{\alpha_{1}} \\
a_{\alpha_{2}} \\
\vdots \\
a_{\alpha_{f}}
\end{array}\right) \quad(\alpha=1,2, \ldots, f)$$
\item Finally, the energy eigenvalues are given to first order by
$$
E_{n_{\alpha}}=E_{n}^{(0)}+E_{n_{\alpha}}^{(1)} \quad(\alpha=1,2, \ldots, f)
$$
and the corresponding eigenvectors are given to zero order by
$$
\left|\psi_{n_{\alpha}}\right\rangle=\sum_{\beta=1}^{f} a_{\alpha \beta}\left|\phi_{n_{\beta}}\right\rangle
$$
\end{itemize}
\subsubsection{Application}
\textbf{Stark effect ($n=2$, degenerate states)}\\
In the absence of any external electric field, the first excited state (i.e., $n=2$ ) is fourfold degenerate: the states $|n l m\rangle=|200\rangle,|210\rangle,|211\rangle$, and $|21-1\rangle$ have the same energy $E_{2}=-R_{y} / 4$, where $R_{y}=\mu e^{4} /\left(2 \hbar^{2}\right)=13.6 \mathrm{eV}$ is the Rydberg constant.
When the external electric field is turned on, some energy levels will split. The energy due to the interaction between the dipole moment of the electron $(\vec{d}=-e \vec{r})$ and the external electric field $(\overrightarrow{\mathcal{E}}=\overrightarrow{\mathcal{E}} \vec{k})$ is given by
$$
\hat{H}_{p}=-\vec{d} \cdot \overrightarrow{\mathcal{E}}=e \vec{r} \cdot \overrightarrow{\mathcal{E}}=e \mathcal{E} \hat{Z}
$$
To calculate the eigenenergies, we need to determine and then diagonalize the $4 \times 4$ matrix of $\hat{H}_{p}$: $\left\langle 2 l^{\prime} m^{\prime}\left|\hat{H}_{p}\right| 2 l m\right\rangle=e \mathcal{E}\left\langle 2 l^{\prime} m^{\prime}|\hat{Z}| 2 l m\right\rangle$. The matrix elements $\left\langle 2 l^{\prime} m^{\prime}|\hat{Z}| 2 l m\right\rangle$ can be calculated more simply by using the relevant selection rules and symmetries. First, since $\hat{Z}$ does not depend on the azimuthal angle $\varphi$ (as $z=r \cos \theta$), the elements $\left\langle 2 l^{\prime} m^{\prime}|\hat{Z}| 2 l m\right\rangle$ are nonzero only if $m^{\prime}=m$. Second, as $\hat{Z}$ is odd, the states $\left|2 l^{\prime} m^{\prime}\right\rangle$ and $|2 l m\rangle$ must have opposite parities so that $\left\langle 2 l^{\prime} m^{\prime}|\hat{Z}| 2 l m\right\rangle$ does not vanish. Therefore, the only nonvanishing matrix elements are those that couple the $2 \mathrm{~s}$ and $2 \mathrm{p}$ states (with $m=0$); that is, between $|200\rangle$ and $|210\rangle$. In this case we have\\
$$\begin{aligned}
\langle 200|\hat{Z}| 210\rangle &=\int_{0}^{\infty} R_{20}^{*}(r) R_{21}(r) r^{2} d r \int Y_{00}^{*}(\Omega) z Y_{10}(\Omega) d \Omega \\
&=\sqrt{\frac{4 \pi}{3}} \int_{0}^{\infty} R_{20}(r) R_{21}(r) r^{3} d r \int Y_{00}^{*}(\Omega) Y_{10}^{2}(\Omega) d \Omega \\
&=-3 a_{0},
\end{aligned}$$
since $z=r \cos \theta=\sqrt{4 \pi / 3}\, r Y_{10}(\Omega)$, $\langle\vec{r} \mid 200\rangle=R_{20}(r) Y_{00}(\Omega)$, $\langle\vec{r} \mid 210\rangle=R_{21}(r) Y_{10}(\Omega)$, and $d \Omega=\sin \theta d \theta d \varphi$; $a_{0}=\hbar^{2} /\left(\mu e^{2}\right)$ is the Bohr radius. Using the notations $|1\rangle=|200\rangle$, $|2\rangle=|211\rangle$, $|3\rangle=|210\rangle$, and $|4\rangle=|21-1\rangle$, we can write the matrix of $\hat{H}_{p}$ as
$$
H_{p}=\left(\begin{array}{cccc}
\left\langle 1\left|\hat{H}_{p}\right| 1\right\rangle & \left\langle 1\left|\hat{H}_{p}\right| 2\right\rangle & \left\langle 1\left|\hat{H}_{p}\right| 3\right\rangle & \left\langle 1\left|\hat{H}_{p}\right| 4\right\rangle \\
\left\langle 2\left|\hat{H}_{p}\right| 1\right\rangle & \left\langle 2\left|\hat{H}_{p}\right| 2\right\rangle & \left\langle 2\left|\hat{H}_{p}\right| 3\right\rangle & \left\langle 2\left|\hat{H}_{p}\right| 4\right\rangle \\
\left\langle 3\left|\hat{H}_{p}\right| 1\right\rangle & \left\langle 3\left|\hat{H}_{p}\right| 2\right\rangle & \left\langle 3\left|\hat{H}_{p}\right| 3\right\rangle & \left\langle 3\left|\hat{H}_{p}\right| 4\right\rangle \\
\left\langle 4\left|\hat{H}_{p}\right| 1\right\rangle & \left\langle 4\left|\hat{H}_{p}\right| 2\right\rangle & \left\langle 4\left|\hat{H}_{p}\right| 3\right\rangle & \left\langle 4\left|\hat{H}_{p}\right| 4\right\rangle
\end{array}\right)
$$
$$H_{p}=-3 e \mathcal{E} a_{0}\left(\begin{array}{cccc}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{array}\right)$$
The diagonalization of this matrix leads to the following eigenvalues:
$$
E_{2_{1}}^{(1)}=-3 e \mathcal{E} a_{0}, \quad E_{2_{2}}^{(1)}=E_{2_{3}}^{(1)}=0, \quad E_{2_{4}}^{(1)}=3 e \mathcal{E} a_{0} .
$$
Thus, the energy levels of the $n=2$ states are given to first order by
$$
E_{2_{1}}=-\frac{R_{y}}{4}-3 e \mathcal{E} a_{0}, \quad E_{2_{2}}=E_{2_{3}}=-\frac{R_{y}}{4}, \quad E_{2_{4}}=-\frac{R_{y}}{4}+3 e \mathcal{E} a_{0} .
$$
The corresponding eigenvectors to zeroth order are
$$
\begin{aligned}
&\left|\psi_{2}\right\rangle_{1}=\frac{1}{\sqrt{2}}(|200\rangle+|210\rangle), \quad\left|\psi_{2}\right\rangle_{2}=|211\rangle, \\
&\left|\psi_{2}\right\rangle_{3}=|21-1\rangle, \quad\left|\psi_{2}\right\rangle_{4}=\frac{1}{\sqrt{2}}(|200\rangle-|210\rangle) .
\end{aligned}
$$
This perturbation has only partially removed the degeneracy of the $n=2$ level; the states $|211\rangle$ and $|21-1\rangle$ still have the same energy $E_{2_{2}}=E_{2_{3}}=-R_{y} / 4$.
\subsection{Spin-orbit coupling}
One of the most useful applications of perturbation theory is to calculate the energy corrections for the hydrogen atom, notably the corrections due to the fine structure. The fine structure is in turn due to two effects: spin-orbit coupling and the relativistic correction. Let us look at these corrections separately.\\
\par The spin-orbit coupling in hydrogen arises from the interaction between the electron's spin magnetic moment, $\vec{\mu}_{S}=-e \vec{S} /\left(m_{e} c\right)$, and the magnetic field $\vec{B}$ that the electron experiences because of its orbital motion around the proton.
The origin of this magnetic field can be explained classically as follows. The electron, in its rest frame, sees the proton moving at $-\vec{v}$ in a circular orbit around it. From classical electrodynamics, the magnetic field experienced by the electron is
$$
\vec{B}=-\frac{1}{c} \vec{v} \times \vec{E}=-\frac{1}{m_{e} c} \vec{p} \times \vec{E}=\frac{1}{m_{e} c} \vec{E} \times \vec{p},
$$
But $$\vec{E}(\vec{r})=-\vec{\nabla} \phi(r)=\frac{1}{e} \vec{\nabla} V(r)=\frac{1}{e} \frac{\vec{r}}{r} \frac{d V}{d r} $$
Then B is $$\vec{B}=\frac{1}{m_{e} c} \vec{E} \times \vec{p}=\frac{1}{e m_{e} c} \frac{1}{r} \frac{d V}{d r} \vec{r} \times \vec{p}=\frac{1}{e m_{e} c} \frac{1}{r} \frac{d V}{d r} \vec{L}$$
where $\vec{L}=\vec{r} \times \vec{p}$ is the orbital angular momentum of the electron.
The interaction of the electron's spin dipole moment $\vec{\mu}_{S}$ with the orbital magnetic field $\vec{B}$ of the nucleus gives rise to the following interaction energy:
$$
\hat{H}_{S O}=-\vec{\mu}_{S} \cdot \vec{B}=\frac{e}{m_{e} c} \vec{S} \cdot \vec{B}=\frac{1}{m_{e}^{2} c^{2}} \frac{1}{r} \frac{d V}{d r} \vec{S} \cdot \vec{L} .
$$
This energy turns out to be twice the observed spin-orbit interaction; the extra factor of $\frac{1}{2}$ is accounted for by the Thomas precession. Therefore\\
$$\hat{H}_{S O}=\frac{1}{2 m_{e}^{2} c^{2}} \frac{1}{r} \frac{d V}{d r} \vec{S} \cdot \vec{L}$$
The corresponding quantum mechanical operator is\\
$$\hat{H}_{S O}=\frac{1}{2 m_{e}^{2} c^{2}} \frac{1}{r} \frac{d \hat{V}}{d r} \hat{\vec{S}} \cdot \hat{\vec{L}}$$
For the hydrogen atom's electron, $V(r)=-e^{2} / r$ and $d V / d r=e^{2} / r^{2}$, so
$$
\hat{H}_{S O}=\frac{e^{2}}{2 m_{e}^{2} c^{2}} \frac{1}{r^{3}} \hat{\vec{S}} \cdot \hat{\vec{L}}
$$
We can now use perturbation theory to calculate the contribution of the spin-orbit interaction in a hydrogen atom:
$$
\hat{H}=\frac{\hat{\vec{p}}^{2}}{2 m_{e}}-\frac{e^{2}}{r}+\frac{e^{2}}{2 m_{e}^{2} c^{2} r^{3}} \hat{\vec{S}} \cdot \hat{\vec{L}}=\hat{H}_{0}+\hat{H}_{S O}
$$
where $\hat{H}_{0}$ is the unperturbed Hamiltonian and $\hat{H}_{S O}$ is the perturbation. To apply perturbation theory, we need to specify the unperturbed states, i.e., the eigenstates of $\hat{H}_{0}$. Since the spin of the hydrogen's electron is taken into account, the total wave function of $\hat{H}_{0}$ consists of a direct product of two parts: a spatial part and a spin part.\\
The eigenstates are
$$\Psi_{n, l, j=l \pm \frac{1}{2}, m}=R_{n l}(r)\left[\sqrt{\frac{l \mp m+\frac{1}{2}}{2 l+1}} Y_{l, m+\frac{1}{2}}\left|\frac{1}{2},-\frac{1}{2}\right\rangle \pm \sqrt{\frac{l \pm m+\frac{1}{2}}{2 l+1}} Y_{l, m-\frac{1}{2}}\left|\frac{1}{2}, \frac{1}{2}\right\rangle\right]$$
the corresponding eigenvalues are given by
$$\langle n l j m|\hat{\vec{L}} \cdot \hat{\vec{S}}| n l j m\rangle=\frac{\hbar^{2}}{2}\left[j(j+1)-l(l+1)-\frac{3}{4}\right]$$
since $\hat{\vec{S}} \cdot \hat{\vec{L}}=\frac{1}{2}\left[\hat{J}^{2}-\hat{L}^{2}-\hat{S}^{2}\right]$.
The eigenvalues of total Hamiltonian H are then given to first-order correction by
$$
E_{n l j}=E_{n}^{(0)}+\left\langle n l j m_{j}\left|\hat{H}_{S O}\right| n l j m_{j}\right\rangle=-\frac{e^{2}}{2 a_{0}} \frac{1}{n^{2}}+E_{S O}^{(1)}
$$
where $E_{n}^{(0)}=-e^{2} /\left(2 a_{0} n^{2}\right)=-\left(13.6 / n^{2}\right) \mathrm{eV}$ are the energy levels of hydrogen and $E_{S O}^{(1)}$ is the energy due to spin-orbit interaction:
$$
E_{S O}^{(1)}=\left\langle n l j m_{j}\left|\hat{H}_{S O}\right| n l j m_{j}\right\rangle=\frac{e^{2} \hbar^{2}}{4 m_{e}^{2} c^{2}}\left[j(j+1)-l(l+1)-\frac{3}{4}\right]\left\langle n l\left|\frac{1}{r^{3}}\right| n l\right\rangle
$$
$$\left\langle n l\left|\frac{1}{r^{3}}\right| n l\right\rangle=\frac{2}{n^{3} l(l+1)(2 l+1) a_{0}^{3}}$$
$$E_{S O}^{(1)}=\frac{e^{2} \hbar^{2}}{2 m_{e}^{2} c^{2}}\left[\frac{j(j+1)-l(l+1)-\frac{3}{4}}{n^{3} l(l+1)(2 l+1) a_{0}^{3}}\right]$$
$$\begin{aligned}
&=\left(\frac{e^{2}}{2 a_{0}} \frac{1}{n^{2}}\right)\left(\frac{\hbar}{m_{e} c a_{0}}\right)^{2} \frac{1}{n}\left[\frac{j(j+1)-l(l+1)-\frac{3}{4}}{l(l+1)(2 l+1)}\right] \\
&E_{S O}^{(1)}=\frac{\left|E_{n}^{(0)}\right| \alpha^{2}}{n}\left[\frac{j(j+1)-l(l+1)-\frac{3}{4}}{l(l+1)(2 l+1)}\right],
\end{aligned}$$
where $\alpha$ is a dimensionless constant called the fine structure constant:
$$
\alpha=\frac{\hbar}{m_{e} c a_{0}}=\frac{e^{2}}{\hbar c} \simeq \frac{1}{137} .
$$
Since $a_{0}=\hbar^{2} /\left(m_{e} e^{2}\right)$ and hence $E_{n}^{(0)}=-e^{2} /\left(2 a_{0} n^{2}\right)=-\alpha^{2} m_{e} c^{2} /\left(2 n^{2}\right)$, we can express $E_{S O}^{(1)}$ in terms of $\alpha$ as
$$
E_{S O}^{(1)}=\frac{\alpha^{4} m_{e} c^{2}}{2 n^{3}}\left[\frac{j(j+1)-l(l+1)-\frac{3}{4}}{l(l+1)(2 l+1)}\right] .
$$
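As an illustration, consider the $n=2$, $l=1$ level. For $j=\frac{3}{2}$ the bracket equals $\frac{15}{4}-2-\frac{3}{4}=1$, and for $j=\frac{1}{2}$ it equals $-2$; with $l(l+1)(2 l+1)=6$ and $2 n^{3}=16$ we get
$$E_{S O}^{(1)}\left(2 p_{3 / 2}\right)=\frac{\alpha^{4} m_{e} c^{2}}{96}, \qquad E_{S O}^{(1)}\left(2 p_{1 / 2}\right)=-\frac{\alpha^{4} m_{e} c^{2}}{48},$$
so the spin-orbit splitting of the $2 p$ level is $\frac{\alpha^{4} m_{e} c^{2}}{32} \simeq 4.5 \times 10^{-5} \mathrm{eV}$.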
\subsubsection{Relativistic correction}
Although the relativistic effect in hydrogen due to the motion of the electron is small, it can still be detected by spectroscopic techniques. The relativistic kinetic energy of the electron is given by $\hat{T}=\sqrt{\hat{p}^{2} c^{2}+m_{e}^{2} c^{4}}-m_{e} c^{2}$, where $m_{e} c^{2}$ is the rest mass energy of the electron; an expansion of this relation to $\hat{p}^{4}$ yields
$$
\sqrt{\hat{p}^{2} c^{2}+m_{e}^{2} c^{4}}-m_{e} c^{2} \simeq \frac{\hat{p}^{2}}{2 m_{e}}-\frac{\hat{p}^{4}}{8 m_{e}^{3} c^{2}}+\cdots
$$
When this term is included, the hydrogen's Hamiltonian becomes
$$
\hat{H}=\frac{\hat{p}^{2}}{2 m_{e}}-\frac{e^{2}}{r}-\frac{\hat{p}^{4}}{8 m_{e}^{3} c^{2}}=\hat{H}_{0}+\hat{H}_{R},
$$
where $\hat{H}_{0}=\hat{p}^{2} /\left(2 m_{e}\right)-e^{2} / r$ is the unperturbed Hamiltonian and $\hat{H}_{R}=-\hat{p}^{4} /\left(8 m_{e}^{3} c^{2}\right)$ is the relativistic mass correction which can be treated by first-order perturbation theory:
$$
E_{R}^{(1)}=\left\langle n l j m_{j}\left|\hat{H}_{R}\right| n l j m_{j}\right\rangle=-\frac{1}{8 m_{e}^{3} c^{2}}\left\langle n l j m_{j}\left|\hat{p}^{4}\right| n l j m_{j}\right\rangle .
$$
$$
\left\langle n l j m_{j}\left|\hat{p}^{4}\right| n l j m_{j}\right\rangle=\frac{m_{e}^{4} e^{8}}{\hbar^{4} n^{4}}\left(\frac{8 n}{2 l+1}-3\right)=\frac{\alpha^{4} m_{e}^{4} c^{4}}{n^{4}}\left(\frac{8 n}{2 l+1}-3\right)
$$
Inserting this value into the last equation leads to
$$
E_{R}^{(1)}=-\frac{\alpha^{4} m_{e} c^{2}}{8 n^{4}}\left(\frac{8 n}{2 l+1}-3\right)=-\frac{\alpha^{2}\left|E_{n}^{(0)}\right|}{4 n^{2}}\left(\frac{8 n}{2 l+1}-3\right)
$$
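For example, for the ground state ($n=1$, $l=0$) this gives
$$E_{R}^{(1)}=-\frac{\alpha^{4} m_{e} c^{2}}{8}\left(8-3\right)=-\frac{5 \alpha^{4} m_{e} c^{2}}{8} \simeq-9.0 \times 10^{-4} \mathrm{eV},$$
a correction of relative size $\frac{5}{4} \alpha^{2} \simeq 7 \times 10^{-5}$ compared with $\left|E_{1}^{(0)}\right|=13.6 \mathrm{eV}$.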
\textbf{Remark}\\
For a hydrogen-like atom with nuclear charge $Z e$, and if we neglect the spin-orbit interaction,
$$
E_{n}=Z^{2}\left(E_{n}^{(0)}+E_{R}^{(1)}\right)=Z^{2} E_{n}^{(0)}\left[1+\frac{\alpha^{2}}{n}\left(\frac{2}{2 l+1}-\frac{3}{4 n}\right)\right]
$$
where $E_{n}^{(0)}=-e^{4} m_{e} /\left(2 \hbar^{2} n^{2}\right)=-\alpha^{2} m_{e} c^{2} /\left(2 n^{2}\right)=-13.6 \mathrm{eV} / n^{2}$ is the Bohr energy.
\subsubsection{Fine structure of hydrogen atom}
The fine structure correction is obtained by adding the expressions for the spin-orbit and relativistic corrections
$$
E_{F S}^{(1)}=E_{S O}^{(1)}+E_{R}^{(1)}=\frac{\alpha^{4} m_{e} c^{2}}{2 n^{3}}\left[\frac{j(j+1)-l(l+1)-\frac{3}{4}}{l(l+1)(2 l+1)}\right]-\frac{\alpha^{4} m_{e} c^{2}}{8 n^{4}}\left[\frac{8 n}{2 l+1}-3\right]
$$
where $j=l \pm \frac{1}{2}$. If $j=l+\frac{1}{2}$, a substitution of $l=j-\frac{1}{2}$ leads to
$$\begin{aligned}
E_{F S}^{(1)} &=\frac{\alpha^{4} m_{e} c^{2}}{8 n^{4}}\left[\frac{4 n j(j+1)-4 n\left(j-\frac{1}{2}\right)\left(j+\frac{1}{2}\right)-3 n}{\left(j-\frac{1}{2}\right)\left(j+\frac{1}{2}\right)(2 j-1+1)}-\frac{8 n}{2 j-1+1}+3\right] \\
&=\frac{\alpha^{4} m_{e} c^{2}}{8 n^{4}}\left[\frac{4 n j-2 n}{2 j\left(j-\frac{1}{2}\right)\left(j+\frac{1}{2}\right)}-\frac{4 n}{j}+3\right]=\frac{\alpha^{4} m_{e} c^{2}}{8 n^{4}}\left[\frac{2 n}{j\left(j+\frac{1}{2}\right)}-\frac{4 n}{j}+3\right] \\
&=\frac{\alpha^{4} m_{e} c^{2}}{8 n^{4}}\left[3-\frac{4 n}{j+\frac{1}{2}}\right] .
\end{aligned}$$
Similarly, if $j=l-\frac{1}{2}$, and hence $l=j+\frac{1}{2}$, we can reduce $E_{F S}^{(1)}$ to\\
$$\begin{aligned}
E_{F S}^{(1)} &=\frac{\alpha^{4} m_{e} c^{2}}{8 n^{4}}\left[\frac{4 n j(j+1)-4 n\left(j+\frac{1}{2}\right)\left(j+\frac{3}{2}\right)-3 n}{\left(j+\frac{1}{2}\right)\left(j+\frac{3}{2}\right)(2 j+1+1)}-\frac{8 n}{2 j+1+1}+3\right] \\
&=\frac{\alpha^{4} m_{e} c^{2}}{8 n^{4}}\left[\frac{-4 n j-6 n}{2\left(j+\frac{1}{2}\right)\left(j+\frac{3}{2}\right)(j+1)}-\frac{4 n}{j+1}+3\right] \\
&=\frac{\alpha^{4} m_{e} c^{2}}{8 n^{4}}\left[\frac{-2 n}{\left(j+\frac{1}{2}\right)(j+1)}-\frac{4 n}{j+1}+3\right] \\
&=\frac{\alpha^{4} m_{e} c^{2}}{8 n^{4}}\left[3-\frac{4 n}{j+\frac{1}{2}}\right]
\end{aligned}$$
As equations show, the expressions for the fine structure correction corresponding to $j=l+\frac{1}{2}$ and $j=l-\frac{1}{2}$ are the same:
$$
E_{F S}^{(1)}=E_{S O}^{(1)}+E_{R}^{(1)}=\frac{\alpha^{4} m_{e} c^{2}}{8 n^{4}}\left(3-\frac{4 n}{j+\frac{1}{2}}\right)=\frac{\alpha^{2} E_{n}^{(0)}}{4 n^{2}}\left(\frac{4 n}{j+\frac{1}{2}}-3\right)
$$
where $E_{n}^{(0)}=-\alpha^{2} m_{e} c^{2} /\left(2 n^{2}\right)$ and $j=l \pm \frac{1}{2}$.
The relative sizes of these corrections are
$$\frac{E_{S O}^{(1)}}{\left|E_{n}^{(0)}\right|} \simeq \alpha^{2}, \quad\left|\frac{E_{R}^{(1)}}{E_{n}^{(0)}}\right| \simeq \alpha^{2}, \quad \frac{E_{F S}^{(1)}}{\left|E_{n}^{(0)}\right|} \simeq \alpha^{2}$$
All these terms are of the order of $10^{-4}$ since $\alpha^{2}=(1 / 137)^{2} \simeq 10^{-4}$\\
In sum, the hydrogen's Hamiltonian, when including the fine structure, is given by
$$\hat{H}=\hat{H}_{0}+\hat{H}_{F S}=\hat{H}_{0}+\left(\hat{H}_{S O}+\hat{H}_{R}\right)=\frac{\hat{p}^{2}}{2 m_{e}}-\frac{e^{2}}{r}+\left(\frac{e^{2}}{2 m_{e}^{2} c^{2} r^{3}} \hat{\vec{S}} \cdot \hat{\vec{L}}-\frac{\hat{p}^{4}}{8 m_{e}^{3} c^{2}}\right)$$
A first-order perturbation calculation of the energy levels of hydrogen, when including the fine structure, yields
$$
E_{n j}=E_{n}^{(0)}+E_{F S}^{(1)}=E_{n}^{(0)}\left[1+\frac{\alpha^{2}}{4 n^{2}}\left(\frac{4 n}{j+\frac{1}{2}}-3\right)\right]
$$
\begin{exercise}
A particle of mass $\mathrm{m}_{0}$ and charge ' $e$ ' oscillates along the the $\mathrm{x}$-axis in a one-dimensional harmonic potential with an angular frequency $\omega$. If an electric field $\varepsilon$ is applied along the $x$-axis, evaluate the first and second order corrections to the energy of the $\mathrm{n}^{\text {th }}$ state.
\end{exercise}
\begin{answer}
The potential energy due to the field $\varepsilon$ is $-\vec{p} \cdot \vec{\varepsilon}=-e \varepsilon x$, where $\vec{p}$ is the electric dipole moment.\\
The perturbation $$H^{\prime}=-e \varepsilon x=-e \varepsilon \sqrt{\frac{\hbar}{2 m_{0} \omega}}\left(\hat{a}+\hat{a}^{\dagger}\right)$$
First order correction to energy $$=E_{n}^{(1)}=-e \varepsilon \sqrt{\frac{\hbar}{2 m_{0} \omega}}\left\langle n\left|\left(\hat{a}+\hat{a}^{\dagger}\right)\right| n\right\rangle=0$$
$$\text { second order correction to energy }=E_{n}^{(2)}=\sum_{m \neq n} \frac{\left|\left\langle n\left|H^{\prime}\right| m\right\rangle\right|^{2}}{E_{n}^{0}-E_{m}^{0}}$$
$$\text { Now, }\left\langle n\left|H^{\prime}\right| m\right\rangle=-e \varepsilon \sqrt{\frac{\hbar}{2 m_{0} \omega}}\left\langle n\left|a+a^{\dagger}\right| m\right\rangle$$
Here, $m$ can take all integral values except $n$.\\
The non-vanishing elements correspond to $m=(n+1)$ and $(n-1)$:\\
Hence,
$$\left\langle n\left|H^{\prime}\right| n+1\right\rangle=-e \varepsilon \sqrt{\frac{\hbar}{2 m_{0} \omega}} \sqrt{n+1}, \qquad\left\langle n\left|H^{\prime}\right| n-1\right\rangle=-e \varepsilon \sqrt{\frac{\hbar}{2 m_{0} \omega}} \sqrt{n}$$
Therefore, using $E_{n}^{0}-E_{n \pm 1}^{0}=\mp \hbar \omega$, $$E_{n}^{(2)}=e^{2} \varepsilon^{2} \frac{\hbar}{2 m_{0} \omega}\left[\frac{(\sqrt{n+1})^{2}}{-\hbar \omega}+\frac{(\sqrt{n})^{2}}{\hbar \omega}\right]=-\frac{e^{2} \varepsilon^{2}}{2 m_{0} \omega^{2}}$$
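It is worth noting that this second order result is in fact exact for this problem: completing the square in the full Hamiltonian merely shifts the equilibrium position of the oscillator without changing its frequency,
$$
H=\frac{\hat{p}^{2}}{2 m_{0}}+\frac{1}{2} m_{0} \omega^{2} x^{2}-e \varepsilon x=\frac{\hat{p}^{2}}{2 m_{0}}+\frac{1}{2} m_{0} \omega^{2}\left(x-\frac{e \varepsilon}{m_{0} \omega^{2}}\right)^{2}-\frac{e^{2} \varepsilon^{2}}{2 m_{0} \omega^{2}},
$$
so the exact energies are $E_{n}=\left(n+\frac{1}{2}\right) \hbar \omega-\frac{e^{2} \varepsilon^{2}}{2 m_{0} \omega^{2}}$ and all corrections beyond second order vanish.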
\end{answer}
\begin{exercise}
Evaluate the first and second order corrections to the energy of the $n=1$ state of an oscillator of mass $m$ and angular frequency $\omega$ subjected to the potential
$$
V(x)=\frac{1}{2} m \omega^{2} x^{2}+b x, \quad b x \ll \frac{1}{2} m \omega^{2} x^{2} .
$$
\end{exercise}
\begin{answer}
The perturbation is $H^{\prime}=b x$.
The first order correction to energy for the $n=1$ state is given by
$$
E_{1}^{(1)}=\langle 1|b x| 1\rangle=b \sqrt{\frac{\hbar}{2 m \omega}}\left\langle 1\left|\left(\hat{a}+\hat{a}^{\dagger}\right)\right| 1\right\rangle=0
$$
where we used $a|n\rangle=\sqrt{n}|n-1\rangle$ and $a^{\dagger}|n\rangle=\sqrt{n+1}|n+1\rangle$.
The second order correction to energy for the $n=1$ state is given by
$$
\begin{aligned}
E_{1}^{(2)} &=b^{2}\left(\frac{\hbar}{2 m \omega}\right) \sum_{k \neq 1} \frac{\left|\left\langle 1\left|\left(\hat{a}+\hat{a}^{\dagger}\right)\right| k\right\rangle\right|^{2}}{E_{1}^{0}-E_{k}^{0}}=b^{2}\left(\frac{\hbar}{2 m \omega}\right)\left[\frac{1}{E_{1}^{0}-E_{0}^{0}}+\frac{2}{E_{1}^{0}-E_{2}^{0}}\right] \\
&=b^{2}\left(\frac{\hbar}{2 m \omega}\right)\left(\frac{1}{\hbar \omega}-\frac{2}{\hbar \omega}\right)=-\frac{b^{2}}{2 m \omega^{2}}
\end{aligned}
$$
\end{answer}
\begin{exercise}
Consider the infinite square well defined by
$$
\begin{aligned}
&V(x)=0 \text { for } 0 \leq x<a \\
&V(x)=\infty \text { otherwise }
\end{aligned}
$$
Using first order perturbation theory, calculate the energies of the first two states of the well when a perturbation $V(x)=V_{0} x / a$ (where $V_{0}$ is a small constant) is added over $0 \leq x \leq a$, i.e. the bottom of the well is sliced.
\end{exercise}
\begin{answer}
The energy eigenvalues and normalized eigenfunctions of the $\mathrm{n}^{\text {th }}$ state of unperturbed Hamiltonian are
$$
E_{n}^{0}=\frac{n^{2} \pi^{2} \hbar^{2}}{2 m a^{2}}, \psi_{n}^{0}=\sqrt{\frac{2}{a}} \sin \frac{n \pi x}{a}, \quad n=1,2,3, \ldots
$$
The perturbation is $H^{\prime}=V_{0} x / a$, which is depicted in the figure below.\\
\begin{figure}[H]
\centering
\includegraphics[height=3cm,width=5cm]{pert-crop}
\caption{sliced infinite potential well}
\label{}
\end{figure}
The first order correction to the energy for the $\mathrm{n}=1$ state is
$$
E_{1}^{(1)}=\left\langle\psi_{1}^{0}\left|\frac{V_{0} x}{a}\right| \psi_{1}^{0}\right\rangle=\frac{V_{0}}{a} \frac{2}{a} \int_{0}^{a} x \sin ^{2} \frac{\pi x}{a} d x
$$
$$\begin{aligned}
&=\frac{2 V_{0}}{a^{2}} \int_{0}^{a} \frac{x}{2}\left(1-\cos \frac{2 \pi x}{a}\right) d x=\frac{2 V_{0}}{a^{2}} \int_{0}^{a} \frac{x}{2} d x-\frac{2 V_{0}}{a^{2}} \int_{0}^{a} \frac{x}{2} \cos \frac{2 \pi x}{a} d x \\
&=\frac{V_{0}}{2}+0=\frac{V_{0}}{2}
\end{aligned}$$
The first order correction to the energy for the $n=2$ state is
$$
E_{2}^{(1)}=\left\langle\psi_{2}^{0}\left|\frac{V_{0} x}{a}\right| \psi_{2}^{0}\right\rangle=\frac{V_{0}}{a} \frac{2}{a} \int_{0}^{a} x \sin ^{2} \frac{2 \pi x}{a} d x=\frac{V_{0}}{2}
$$
$$
The ground state and first excited energies corrected upto first order are
$$
\frac{\pi^{2} \hbar^{2}}{2 m a^{2}}+\frac{V_{0}}{2} \text { and } \frac{2 \pi^{2} \hbar^{2}}{m a^{2}}+\frac{V_{0}}{2}
$$
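The same value is in fact obtained for every level. Since each unperturbed probability density $\left|\psi_{n}^{0}\right|^{2}$ is symmetric about $x=a / 2$, we have $\left\langle\psi_{n}^{0}|x| \psi_{n}^{0}\right\rangle=a / 2$ for all $n$, and hence
$$
E_{n}^{(1)}=\frac{V_{0}}{a}\left\langle\psi_{n}^{0}|x| \psi_{n}^{0}\right\rangle=\frac{V_{0}}{a} \cdot \frac{a}{2}=\frac{V_{0}}{2}, \qquad n=1,2,3, \ldots
$$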
\end{answer}
\begin{exercise}
A particle of mass $m$ moves in an infinite one-dimensional box of width $a$ with a potential dip, as defined by
$$
\begin{aligned}
&V(x)=\infty \text { for } x<0 \text { and } x>a \\
&V(x)=-V_{0} \text { for } 0<x<\frac{a}{3} \\
&V(x)=0 \text { for } \frac{a}{3}<x<a
\end{aligned}
$$
Find the first order energy of the ground state.
\end{exercise}
\begin{answer}
For a particle in the infinite potential well defined by $V(x)=0$ for $0<x<a$ and $V(x)=\infty$ otherwise, the energy eigenvalues and normalized eigenfunctions are
$$
E_{n}=\frac{n^{2} \pi^{2} \hbar^{2}}{2 m a^{2}}, \psi_{n}=\sqrt{\frac{2}{a}} \sin \frac{n \pi x}{a}, \quad n=1,2,3, \ldots . .
$$
The perturbing Hamiltonian is $H^{\prime}=-V_{0}$ for $0<x<a / 3$.
The first order energy correction to the ground state is\\
\begin{minipage}{0.5\textwidth}
$$
\begin{aligned}
E_{0}^{(1)} &=-\frac{2}{a} V_{0} \int_{0}^{a / 3} \sin ^{2} \frac{\pi x}{a} d x \\
&=-\frac{2}{a} V_{0} \int_{0}^{a / 3} \frac{1}{2}\left(1-\cos \frac{2 \pi x}{a}\right) d x \\
&=-\frac{V_{0}}{a}[x]_{0}^{a / 3}+\frac{V_{0}}{a} \frac{a}{2 \pi}\left[\sin \frac{2 \pi x}{a}\right]_{0}^{a / 3} \\
&=-\frac{V_{0}}{3}+\frac{V_{0}}{2 \pi} \times 0.866=-0.196 V_{0}
\end{aligned}
$$
The energy of the ground state corrected to first order is
$$
E_{1}^{\prime}=\frac{\pi^{2} \hbar^{2}}{2 m a^{2}}-0.196 V_{0}
$$
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{figure}[H]
\centering
\includegraphics[height=3cm,width=5cm]{pert2}
\caption{}
\label{}
\end{figure}
\end{minipage}
\end{answer}
\section{The variational method}
There exist systems whose Hamiltonians are known but cannot be solved exactly, nor by a perturbative treatment; that is, there is no closely related Hamiltonian that can be solved exactly, and first order perturbation theory is not sufficiently accurate. One of the approximation methods suitable for such problems is the variational method, which is also called the Rayleigh-Ritz method. The variational method is useful for determining upper bound values for the eigenenergies of a system whose Hamiltonian is known but whose eigenvalues and eigenstates are not. It is particularly useful for determining the ground state; it becomes quite difficult to use it for the energy levels of the excited states.\\\\
In the context of the variational method, one does not attempt to solve the eigenvalue problem
$$
\hat{H}|\psi\rangle=E|\psi\rangle,
$$
but rather one uses a variational scheme to find the approximate eigenenergies and eigenfunctions from the variational equation
$$
\delta E(\psi)=0
$$
where $E(\psi)$ is the expectation value of the energy in the state $|\psi\rangle$ :
$$
E(\psi)=\frac{\langle\psi|\hat{H}| \psi\rangle}{\langle\psi \mid \psi\rangle}
$$
$\text { If }|\psi\rangle \text { depends on a parameter } \alpha, E(\psi) \text { will also depend on } \alpha$\\
The variational method is particularly useful for determining the ground state energy and its eigenstate without explicitly solving the Schrödinger equation. Note that for any (arbitrary) trial function $|\psi\rangle$ we choose, the energy $E$ is always greater than or equal to the exact ground state energy $E_{0}$:
$$
E=\frac{\langle\psi|H| \psi\rangle}{\langle\psi \mid \psi\rangle} \geq E_{0}
$$
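This inequality follows directly by expanding the trial state in the (unknown) exact eigenstates $\left|\phi_{n}\right\rangle$ of $\hat{H}$, with $\hat{H}\left|\phi_{n}\right\rangle=E_{n}\left|\phi_{n}\right\rangle$ and $E_{n} \geq E_{0}$ for all $n$:
$$
|\psi\rangle=\sum_{n} c_{n}\left|\phi_{n}\right\rangle \quad \Longrightarrow \quad E=\frac{\sum_{n}\left|c_{n}\right|^{2} E_{n}}{\sum_{n}\left|c_{n}\right|^{2}} \geq \frac{\sum_{n}\left|c_{n}\right|^{2} E_{0}}{\sum_{n}\left|c_{n}\right|^{2}}=E_{0} .
$$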
$\text { To calculate the ground state energy, we need to carry out the following four steps: }$
\begin{itemize}
\item First, based on physical intuition, make an educated guess of a trial function that takes into account all the physical properties of the ground state (symmetries, number of nodes, smoothness, behavior at infinity, etc.). For the properties you are not sure about, include in the trial function adjustable parameters $\alpha_{1}, \alpha_{2}, \ldots$ (i.e., $\left|\psi_{0}\right\rangle=\left|\psi_{0}\left(\alpha_{1}, \alpha_{2}, \ldots\right)\right\rangle$) which will account for the various possibilities of these unknown properties.
\item Second, using $E(\psi)=\frac{\langle\psi|\hat{H}| \psi\rangle}{\langle\psi \mid \psi\rangle}$, calculate the energy; this yields an expression which depends on the parameters $\alpha_{1}, \alpha_{2}, \ldots$ :
$$
E_{0}\left(\alpha_{1}, \alpha_{2}, \ldots\right)=\frac{\left\langle\psi_{0}\left(\alpha_{1}, \alpha_{2}, \ldots\right)|\hat{H}| \psi_{0}\left(\alpha_{1}, \alpha_{2}, \ldots\right)\right\rangle}{\left\langle\psi_{0}\left(\alpha_{1}, \alpha_{2}, \ldots\right) \mid \psi_{0}\left(\alpha_{1}, \alpha_{2}, \ldots\right)\right\rangle} .
$$
In most cases $\left|\psi_{0}\left(\alpha_{1}, \alpha_{2}, \ldots\right)\right\rangle$ will be assumed to be normalized; hence the denominator of this expression is equal to 1 .
\item Third, using the above equation search for the minimum of $E_{0}\left(\alpha_{1}, \alpha_{2}, \ldots\right)$ by varying the adjustable parameters $\alpha_{i}$ until $E_{0}$ is minimized. That is, minimize $E\left(\alpha_{1}, \alpha_{2}, \ldots\right)$ with respect to $\alpha_{1}, \alpha_{2}, \ldots$ :
$$
\frac{\partial E_{0}\left(\alpha_{1}, \alpha_{2}, \ldots\right)}{\partial \alpha_{i}}=\frac{\partial}{\partial \alpha_{i}} \frac{\left\langle\psi_{0}\left(\alpha_{1}, \alpha_{2}, \ldots\right)|\hat{H}| \psi_{0}\left(\alpha_{1}, \alpha_{2}, \ldots\right)\right\rangle}{\left\langle\psi_{0}\left(\alpha_{1}, \alpha_{2}, \ldots\right) \mid \psi_{0}\left(\alpha_{1}, \alpha_{2}, \ldots\right)\right\rangle}=0
$$
with $i=1,2, \ldots$. This gives the values of $\left(\alpha_{1_{0}}, \alpha_{2_{0}}, \ldots\right)$ that minimize $E_{0}$.
\item Fourth, substitute the values $\left(\alpha_{1_{0}}, \alpha_{2_{0}}, \ldots\right)$ into $E_{0}\left(\alpha_{1_{0}}, \alpha_{2_{0}}, \ldots\right)$ to obtain the approximate value of the energy. The value $E_{0}\left(\alpha_{1_{0}}, \alpha_{2_{0}}, \ldots\right)$ thus obtained provides an upper bound for the exact ground state energy $E_{0}$. The exact ground state eigenstate $\left|\phi_{0}\right\rangle$ is then approximated by the state $\left|\psi_{0}\left(\alpha_{1_{0}}, \alpha_{2_{0}}, \ldots\right)\right\rangle$.
\end{itemize}
The variational method can also be used to find approximate values for the energies of the first few excited states. For instance, to find the energy and eigenstate that approximate $E_{1}$ and $\left|\phi_{1}\right\rangle$ of the first excited state, we need to choose a trial function $\left|\psi_{1}\right\rangle$ that is orthogonal to the ground state trial function $\left|\psi_{0}\right\rangle$:
$$\left\langle\psi_{1} \mid \psi_{0}\right\rangle=0$$
Then proceed as we did in the case of the ground state. That is, solve the variational equation $\delta E(\psi)=0$ for $\left|\psi_{1}\right\rangle:$
$$
\frac{\partial}{\partial \alpha_{i}} \frac{\left\langle\psi_{1}\left(\alpha_{1}, \alpha_{2}, \ldots\right)|\hat{H}| \psi_{1}\left(\alpha_{1}, \alpha_{2}, \ldots\right)\right\rangle}{\left\langle\psi_{1}\left(\alpha_{1}, \alpha_{2}, \ldots\right) \mid \psi_{1}\left(\alpha_{1}, \alpha_{2}, \ldots\right)\right\rangle}=0 \quad(i=1,2, \ldots) .
$$
Similarly, to evaluate the second excited state, we solve $\delta E(\psi)=0$ for $\left|\psi_{2}\right\rangle$ and take into account the following two conditions:
$$
\left\langle\psi_{2} \mid \psi_{0}\right\rangle=0, \quad\left\langle\psi_{2} \mid \psi_{1}\right\rangle=0 .
$$
\begin{exercise}
For the harmonic oscillator potential $V(x)=\frac{1}{2} m \omega^{2} x^{2}$, choose the trial wave function $\psi(x, \alpha)=A e^{-\alpha x^{2}}$ and estimate the ground state energy using the variational principle.
\end{exercise}
\begin{answer}
From the normalization condition $\int_{-\infty}^{+\infty} \psi^{*} \psi\, d x=1$, the value of $A$ is $A=\left(\frac{2 \alpha}{\pi}\right)^{\frac{1}{4}}$.\\
The expectation value of kinetic energy for given wavefunction is $\langle T\rangle=\frac{\hbar^{2} \alpha}{2 m}$\\
The expectation value of potential energy
$$
\langle V\rangle=\left(\frac{2 \alpha}{\pi}\right)^{\frac{1}{2}} \int_{-\infty}^{+\infty} \frac{1}{2} m \omega^{2} x^{2} e^{-2 \alpha x^{2}} d x=\left(\frac{2 \alpha}{\pi}\right)^{\frac{1}{2}} \times \frac{1}{2} m \omega^{2} \int_{-\infty}^{\infty} x^{2} e^{-2 \alpha x^{2}} d x=\frac{m \omega^{2}}{8 \alpha}
$$
Then the expectation value of total energy is $\langle E\rangle=\frac{\hbar^{2} \alpha}{2 m}+\frac{m \omega^{2}}{8 \alpha}$\\
$\frac{d E}{d \alpha}=0, \frac{\hbar^{2}}{2 m}-\frac{m \omega^{2}}{8 \alpha^{2}}=0, \alpha_{0}=\frac{m \omega}{2 \hbar}$\\
Substituting $\alpha_{0}=\frac{m \omega}{2 \hbar}$ gives $\langle E\rangle=\frac{\hbar \omega}{2}$, which in this case is the exact ground state energy, because the Gaussian trial function has the exact functional form of the ground state.
\end{answer}
\begin{exercise}
Estimate the ground state energy of a particle confined in a one-dimensional infinite potential box of width $a$ (extending from $x=0$ to $x=a$, i.e. centered at $\frac{a}{2}$) by taking the trial wavefunction $\psi=A x(a-x)$.
\end{exercise}
\begin{answer}$\left. \right. $\\
\begin{minipage}{0.5\textwidth}
For normalization $|A|^{2} \int_{0}^{a} x^{2}(a-x)^{2} d x=1 \Rightarrow A=\left(\frac{30}{a^{5}}\right)^{\frac{1}{2}}$\\
The expectation value of kinetic energy
$$
\begin{aligned}
&\langle T\rangle=\int_{0}^{a}\left(\frac{30}{a^{5}}\right)^{\frac{1}{2}} x(a-x)\left(\frac{-\hbar^{2}}{2 m}\right) \frac{\partial^{2}}{\partial x^{2}}\left(\frac{30}{a^{5}}\right)^{\frac{1}{2}} x(a-x) d x \\
&=\frac{\hbar^{2}}{m}\left(\frac{30}{a^{5}}\right) \frac{a^{3}}{6}=\frac{5 \hbar^{2}}{m a^{2}}
\end{aligned}
$$
The expectation value of potential energy is $\langle V\rangle=0$
$$
\langle E\rangle=\langle T\rangle+\langle V\rangle=\frac{5 \hbar^{2}}{m a^{2}}
$$
(a constant: there is no adjustable parameter to vary, so this value is itself the variational upper bound)
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{figure}[H]
\centering
\includegraphics[height=3cm,width=5cm]{var-crop}
\end{figure}
\end{minipage}
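This bound may be compared with the exact ground state energy of the box; the simple polynomial trial function overestimates it by only about $1.3\%$:
$$
\langle E\rangle=\frac{5 \hbar^{2}}{m a^{2}}=\frac{10\, \hbar^{2}}{2 m a^{2}} \;\geq\; E_{1}=\frac{\pi^{2} \hbar^{2}}{2 m a^{2}} \approx \frac{9.87\, \hbar^{2}}{2 m a^{2}} .
$$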
\end{answer}
\section{The WKB approximation method}
The Wentzel-Kramers-Brillouin (WKB) method is useful for approximate treatments of systems with slowly varying potentials; that is, potentials which remain almost constant over a region of the order of the de Broglie wavelength. In the case of classical systems, this property is always satisfied since the wavelength of a classical system approaches zero. The WKB method can thus be viewed as a semiclassical approximation.\\
\textbf{General formalism}\\
Consider the motion of a particle in a time-independent potential $V(\vec{r}) ;$ the Schrödinger equation for the corresponding stationary state is
$$
-\frac{\hbar^{2}}{2 m} \nabla^{2} \psi(\vec{r})+V(\vec{r}) \psi(\vec{r})=E \psi(\vec{r})
$$
or
$$
\nabla^{2} \psi(\vec{r})+\frac{1}{\hbar^{2}} p^{2}(\vec{r}) \psi(\vec{r})=0
$$
where $p(\vec{r})$ is the classical momentum at $\vec{r}: p(\vec{r})=\sqrt{2 m(E-V(\vec{r}))}$. If the particle is moving in a region where $V(\vec{r})$ is constant, the solution of schrodinger equation is of the form $\psi(\vec{r})=A e^{\pm i \vec{p} \cdot \vec{r} / \hbar} .$ But how does one deal with those cases where $V(\vec{r})$ is not constant? The WKB method provides an approximate treatment for systems whose potentials, while not constant, are slowly varying functions of $\vec{r}$. That is, $V(\vec{r})$ is almost constant in a region which extends over several de Broglie wavelengths; we may recall that the de Broglie wavelength of a particle of mass $m$ and energy $E$ that is moving in a potential $V(\vec{r})$ is given by $\lambda=h / p=h / \sqrt{2 m(E-V(\vec{r}))}$.\\
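The requirement of a slowly varying potential can be stated quantitatively. In the commonly used form of the WKB validity condition, the de Broglie wavelength must change by only a small fraction of itself over a distance of one wavelength:
$$
\left|\frac{d \lambda}{d x}\right|=\frac{2 \pi m \hbar\,|d V / d x|}{p^{3}(x)} \ll 1 .
$$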
In essence, the WKB method consists of trying a solution to the Schrödinger equation of the form
$$
\psi(\vec{r})=A(\vec{r}) e^{i S(\vec{r}) / \hbar},
$$
where $A(\vec{r})$ is the amplitude and $S(\vec{r})$ is the phase; both are real functions yet to be determined.\\
Substituting $\psi(\vec{r})=A(\vec{r}) e^{i S(\vec{r}) / \hbar}$ into the Schrödinger equation, we get
$$A\left[\frac{\hbar^{2}}{A} \nabla^{2} A-(\vec{\nabla} S)^{2}+p^{2}(\vec{r})\right]+i \hbar\left[2(\vec{\nabla} A) \cdot(\vec{\nabla} S)+A \nabla^{2} S\right]=0$$
The real and imaginary parts of this equation must vanish separately:
$$
\begin{gathered}
(\vec{\nabla} S)^{2}=p^{2}(\vec{r})=2 m(E-V(\vec{r})) \\
2(\vec{\nabla} A) \cdot(\vec{\nabla} S)+A \nabla^{2} S=0
\end{gathered}
$$
To illustrate the various aspects of the WKB method, let us consider the simple case of the one-dimensional motion of a single particle. We can thus reduce the above two equations, respectively, to
$$
\begin{aligned}
&\frac{d S}{d x}=\pm \sqrt{2 m(E-V)}=\pm p(x) \\
&2\left(\frac{d}{d x} \ln A\right) p(x)+\frac{d}{d x} p(x)=0
\end{aligned}
$$
From these two equations $A(x)$ and $S(x)$ can be found:
$$S(x)=\pm \int d x \sqrt{2 m(E-V(x))}=\pm \int p(x) d x$$
$$
\frac{d}{d x}[2 \ln A+\ln p(x)]=0
$$
which in turn leads to
$$
A(x)=\frac{C}{\sqrt{|p(x)|}}
$$
We obtain the solution by substituting the values of $A(x)$ and $S(x)$ into $\psi(x)$:\\
$$\psi_{\pm}(x)=\frac{C_{\pm}}{\sqrt{|p(x)|}} \exp \left[\pm \frac{i}{\hbar} \int^{x} p\left(x^{\prime}\right) d x^{\prime}\right] $$
The amplitude of this wave function is proportional to $1 / \sqrt{p(x)}$; hence the probability of finding the particle between $x$ and $x+d x$ is proportional to $1 / p(x)$. This is what we expect for a "classical" particle because the time it will take to travel a distance $d x$ is proportional to the inverse of its speed (or its momentum).
\par We can now examine two separate cases corresponding to $E>V(x)$ and $E<V(x)$. First, let us consider the case $E>V(x)$, which is called the classically allowed region. Here $p(x)$ is a real function; the most general solution is a combination of $\psi_{+}(x)$ and $\psi_{-}(x)$ :
$$
\psi(x)=\frac{C_{+}}{\sqrt{p(x)}} \exp \left[\frac{i}{\hbar} \int^{x} p\left(x^{\prime}\right) d x^{\prime}\right]+\frac{C_{-}}{\sqrt{p(x)}} \exp \left[-\frac{i}{\hbar} \int^{x} p\left(x^{\prime}\right) d x^{\prime}\right]
$$
Second, in the case where $E<V(x)$, which is known as the classically forbidden region, the momentum $p(x)$ is imaginary and the exponents become real:
$$
\psi(x)=\frac{C_{-}^{\prime}}{\sqrt{|p(x)|}} \exp \left[-\frac{1}{\hbar} \int_{x}\left|p\left(x^{\prime}\right)\right| d x^{\prime}\right]+\frac{C_{+}^{\prime}}{\sqrt{|p(x)|}} \exp \left[\frac{1}{\hbar} \int^{x}\left|p\left(x^{\prime}\right)\right| d x^{\prime}\right]
$$
But what about the structure of the wave function near the regions $E \simeq V(x)$ ? At the points $x_{i}$ we have $E=V\left(x_{i}\right)$; hence the momentum vanishes, $p\left(x_{i}\right)=0$. These points are called the classical turning points, because classically the particle stops at $x_{i}$ and then turns back to resume its motion in the opposite direction. At these points the wave functions $\psi_{\pm}(x)$ become infinite since $p\left(x_{i}\right)=0$.
\subsection{WKB approximation rules for bound state energies}
\begin{enumerate}
\item If both walls are smooth $\Rightarrow \oint p_{x} d x=\left(n+\frac{1}{2}\right) h \quad n=0,1,2 \ldots$
$$
2 \int_{x_{1}}^{x_{2}} p_{x} d x=\left(n+\frac{1}{2}\right) h, \text { where } x_{1} \text { and } x_{2} \text { are the turning points. }
$$
\item If one wall is smooth \& one wall is rigid:\\
$\int_{x_{1}}^{x_{2}} p_{x}\, d x=\left(n+\frac{3}{4}\right) \pi \hbar$,\\
where $x_{1}$ and $x_{2}$ are the turning points and\\
$n=0,1,2,3 \ldots$ \\
Equivalently, relabelling the quantum number, $\int_{x_{1}}^{x_{2}} p_{x}\, d x=\left(n+1+\frac{3}{4}-1\right) \pi \hbar=\left(n-\frac{1}{4}\right) \pi \hbar, \quad n=1,2,3 \ldots$
\item When both walls are rigid:
$$
\int_{x_{1}}^{x_{2}} p_{x}\, d x=(n+1) \pi \hbar, \quad n=0,1,2, \ldots
$$
where $x_{1}$ and $x_{2}$ are the turning points. Writing $m=n+1$, this is equivalent to $\int_{x_{1}}^{x_{2}} p_{x}\, d x=m \pi \hbar$ with $m=1,2,3, \ldots$ (A worked example follows this list.)
\end{enumerate}
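As a worked example of the first rule (both turning points smooth), consider the harmonic oscillator $V(x)=\frac{1}{2} m \omega^{2} x^{2}$, whose turning points are $x_{1,2}=\mp \sqrt{2 E / m \omega^{2}}$. The phase integral can be evaluated exactly:
$$
2 \int_{x_{1}}^{x_{2}} \sqrt{2 m\left(E-\tfrac{1}{2} m \omega^{2} x^{2}\right)}\, d x=\frac{2 \pi E}{\omega}=\left(n+\tfrac{1}{2}\right) h \quad \Longrightarrow \quad E_{n}=\left(n+\tfrac{1}{2}\right) \hbar \omega,
$$
which happens to coincide with the exact spectrum of the oscillator.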
\section{Time dependent perturbation theory}
To study the structure of molecular and atomic systems, we need to know how electromagnetic radiation interacts with these systems. Molecular and atomic spectroscopy deals in essence with the absorption and emission of electromagnetic radiation by molecules and atoms. As a system absorbs or emits radiation, it undergoes transitions from one state to another.\\
Time-dependent perturbation theory is most useful for studying processes of absorption and emission of radiation by atoms or, more generally, for treating the transitions of quantum systems from one energy level to another.\\\\
\par The time evolution problem is treated by a perturbation method. If the Hamiltonian is time dependent, we write the total Hamiltonian as
$$H=H_0+H^\prime$$
where $H_0$ is the time independent unperturbed part, which constitutes the major part of $H$; its eigenvalues and orthonormalized eigenfunctions are known:
$$H_0u_n=E_n u_n,\qquad \int u_m^* u_n\, d\tau=\delta_{mn}$$
Since this is a time evolution problem, we use the time dependent Schrödinger equation $H\psi=E\psi$ with the operator correspondence $E \rightarrow i\hbar\frac{\partial}{\partial t}$ and $H=H_0+H^\prime$:
$$\therefore\ i\hbar\frac{\partial \psi}{\partial t}=H_0\psi+H^\prime\psi$$
This has a solution of the form
$$\psi(x,t)=\sum_{n}a_n(t)u_n(x)e^{\frac{-i}{\hbar }E_nt}$$
where $|a_n(t)|^2$ is the probability of finding the system described by $\psi(x,t)$ in the energy eigenstate $u_n(x)$.\\\\
On substituting this $\psi(x,t)$ into the Schrödinger equation,
$$\implies\sum_{n}(i\hbar \dot{a}_n(t)+E_na_n)u_n e^{\frac{-i}{\hbar} E_nt}=\sum_{n}(H_0u_n+H^\prime u_n)a_ne^{\frac{-i}{\hbar}E_nt}$$
Since $H_0u_n=E_nu_n$, these terms cancel on both sides:\\
$$\sum_{n}i\hbar\dot{a}_nu_ne^{\frac{-i}{\hbar}E_nt}=\sum_{n}H^\prime u_n a_n e^{\frac{-i}{\hbar}E_nt}$$
Multiplying by $u_f^*$ and integrating,\\
$$ \int i\hbar \sum_{n} \dot{a}_n u_n u_f^* e^{\frac{-i}{\hbar}E_nt}d\tau=\sum_{n}\int u_f^*H^\prime u_na_ne^{\frac{-i}{\hbar}E_nt}d\tau$$
$$ i\hbar{\dot{a}}_f e^{\frac{-i}{\hbar}E_ft}=\sum_{n}a_n H^\prime_{fn} e^{\frac{-i}{\hbar}E_nt}\hspace{2cm}
\int u_f^* u_n d\tau=\delta_{fn}$$
$$ \dot{a}_f =(i\hbar)^{-1}\sum a_n H^\prime_{fn}e^{\frac{-i}{\hbar}[E_f-E_n]t}\hspace{2cm}\int u_f^* H^\prime u_n d\tau=H^\prime_{fn}$$
$$\therefore \dot{a}_f=(i\hbar)^{-1}\sum_n a_n H^\prime_{fn}e^{i\omega_{fn}t}\hspace{2cm}E_f-E_n=\omega_{fn}\hbar
$$
Since $H^\prime$ is small, $a_f$ can be expanded as \\
$$a_f(t)=a_f^{(0)}+a_f^{(1)}+a_f^{(2)}+....$$
On substituting this in the equation for $\dot{a}_f$ and equating terms of the same order on both sides, we get
$$\dot{a}_f^{(0)}=0\quad \dot{a}_f^{(r+1)}(t)=(i\hbar)^{-1}\sum_n
a_n^{(r)}H_{fn}^\prime e^{i\omega_{fn}t}$$
Setting $r=0$ and using the initial condition $a_n^{(0)}=\delta_{ni}$ (the system starts in the state $i$), we get
$$\dot{a}_f^{(1)}=(i\hbar)^{-1}H^{\prime}_{fi}e^{i\omega_{fi}t}$$
$$\therefore a_f^{(1)}(t)=(i\hbar)^{-1}\int_{0}^{t}H^\prime_{fi}(t^\prime)e^{i\omega_{fi}t^{\prime}}dt^\prime$$
In many cases\\
$$H^\prime _{fi}(t)=H_{fi}^{\prime 0}f(t)$$
where $H^{\prime 0}_{fi}$ is time independent, so that\\
$$\therefore a_f^{(1)}(t)=(i\hbar)^{-1}H^{\prime 0}_{fi}\int_{0}^{t}f(t^\prime)e^{i\omega_{fi}t^{\prime}}dt^\prime$$
\subsection{First order transition: constant perturbation }
We assume that the total probability of transition from $i$ to all states $f\neq i$ is much less than unity:
$$ \sum_{f\neq i}|a^{(1)}_f(t)|^2\ll 1$$
\textbf{(a)\quad Transition Probability }\\
We now consider the specific case of a perturbation $H^\prime$ which lasts from time $0 $ to $t$ and is constant during this period.\\
$$a_f^{(1)}(t)=(i\hbar)^{-1}\int_0^t H^\prime _{fi}e^{i\omega_{fi}t^\prime}dt^\prime $$
Since $H^\prime_{fi}$ is constant,\\
$$a_f^{(1)}(t)=(i\hbar)^{-1}H^\prime_{fi}\int_0^t e^{i\omega_{fi}t^\prime}dt^\prime$$
$$=(i\hbar)^{-1}H^\prime_{fi}\frac{e^{i\omega_{fi}t}-1}{i\omega_{fi}}=-H^\prime_{fi}\frac{e^{i\omega_{fi}t}-1}{\hbar\omega_{fi}}$$
$$|a_f^{(1)}(t)|^2=\frac{|H^\prime_{fi}|^2}{\hbar^2}\frac{4\sin^2\frac{1}{2}\omega_{fi}t}{\omega_{fi}^2}$$
The transition probability from $i$ to $f$ does not change monotonically with time but varies simple-harmonically between zero and the maximum value $\frac{4|H^\prime_{fi}|^2}{\hbar^2\omega_{fi}^2}$, with frequency equal to the Bohr frequency $\frac{\omega_{fi}}{2\pi}$. The smaller the energy difference $E_f-E_i=\hbar\omega_{fi}$ between the pair of levels, the larger the maximum value of the probability.\\
The behaviour of the factor $\frac{4}{(\omega_{fi})^2}\sin^2\frac{1}{2}\omega_{fi}t$ as a function of $\omega_{fi}$ is shown below.\\
\begin{figure}[H]
\centering
\includegraphics[height=5cm,width=8.5cm]{Q-1}
\end{figure}
The main peak of the curve occurs at $\omega_{fi}=0$ and is of height $t^2$.\\
$\therefore$ We see that transitions from $i$ take place with appreciable probability only to those levels $f$ such that $\omega_{fi}$ falls under the main peak: $|\omega_{fi}|\leq \frac{2\pi}{t}$. In other words, the magnitude of the energy difference
$|\hbar\omega_{fi}|$ between the initial and final states is very unlikely to be significantly larger than $\hbar\left(\frac{2\pi}{t}\right)=\frac{h}{t}$. This result is generally regarded as an expression of an uncertainty relation between energy and time:\\
$$\hbar|\omega_{fi}|\leq\frac{2\pi\hbar}{t}=\frac{h}{t}\qquad\left(\hbar=\frac{h}{2\pi}\right)$$
$$\Delta E\leq \frac{h}{\Delta t},\qquad \Delta E\,\Delta t\approx h$$
which is the energy-time uncertainty relation.\\
\textbf{(b)\quad Closely Packed Levels}\\
Suppose there are many levels $f$ within the energy interval $\Delta E$ covered by the main peak. Then\\
$$\sum_{f}|a^{(1)}_f(t)|^2=\frac{|H^\prime_{fi}|^2}{\hbar^2}\sum_{f}\frac{4\sin^2 \frac{1}{2}\omega_{fi}t}{\omega^2_{fi}}$$
where the summation is over the final states $f$ and $H^\prime_{fi}$ is assumed to be nearly the same for all of them.\\
If the levels are very closely spaced, the summation can be replaced by an integration:
$$\sum_{f}\;\longrightarrow\;\int \rho(E_f)\, dE_f$$
where $\rho(E_f)$ is the density of final states, i.e. the number of states per unit energy around $E_f$; the number of states with energy within $dE_f$ is $dn_f=\rho(E_f)\,dE_f$, and $\rho(E_f)$ is taken to be essentially constant over the width of the peak.
$$\therefore \sum_{f}|a^{(1)}_f(t)|^2=\frac{|H_{fi}|^2}{\hbar^2}\rho(E_f)\int\frac{4\sin^2\frac{1}{2}\omega_{fi}t}{(\omega_{fi})^2}dE_f$$
$$\hbar\omega_{fi}=E_f-E_i$$
$$\hbar d\omega_{fi}=dE_f$$
$$ d\omega_{fi}=\frac{dE_f}{\hbar}$$
$$\therefore\sum_{f}|a^{(1)}_f(t)|^2=\frac{|H^\prime_{fi}|^2}{\hbar^2}\rho(E_f)\,\hbar\int\frac{4\sin^2\frac{1}{2}\omega_{fi}t}{(\omega_{fi})^2}d\omega_{fi}$$
$$\text{Since, with the substitution } x=\tfrac{1}{2}\omega_{fi}t, \quad \int_{-\infty}^{\infty}\frac{4\sin^2\frac{1}{2}\omega_{fi}t}{\omega_{fi}^{2}}\,d\omega_{fi}=2t\int_{-\infty}^{\infty}x^{-2}\sin^2x\,dx=2\pi t,$$
$$\sum_{f}|a^{(1)}_f(t)|^2=\frac{2\pi}{\hbar}t|H^\prime_{fi}|^2\rho(E_f)$$
$\therefore$ The transition probability per unit time is
$$\frac{1}{t}\sum_{f}|a^{(1)}_f(t)|^2=\frac{2\pi}{\hbar}|H^\prime_{fi}|^2\rho(E_f)$$
This formula is called the \textcolor{red}{Fermi golden rule}.
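Equivalently, in the standard rewriting, the peaked factor behaves as a delta function for large $t$,
$$
\frac{4 \sin ^{2}\left(\frac{1}{2} \omega_{f i} t\right)}{\omega_{f i}^{2}} \xrightarrow{\;t \rightarrow \infty\;} 2 \pi t\, \delta\left(\omega_{f i}\right)=2 \pi \hbar\, t\, \delta\left(E_{f}-E_{i}\right),
$$
so the transition rate to a single final state is $\frac{2 \pi}{\hbar}\left|H_{f i}^{\prime}\right|^{2} \delta\left(E_{f}-E_{i}\right)$, and summing this over a continuum of final states with density $\rho\left(E_{f}\right)$ reproduces the golden rule above.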
\subsection{Harmonic Perturbation}
\textbf{(a)Amplitude for transition with change of energy}\\
We shall now consider a harmonic perturbation of the type
\begin{align*}
H^\prime&=H^{\prime o}e^{-i\omega t} (\omega>0) \quad \quad H^\prime=H^{\prime o}f(t)
\end{align*}
supposed to act during the time interval $(0,t)$. Then
\begin{align*}
a^{(1)}_f (t)&=(i\hbar)^{-1}H^{\prime o}_{fi}\int_0^t f(t^\prime)e^{i\omega_{fi} t^\prime} dt^\prime \quad \quad f(t)=e^{-i\omega t}\\
\intertext{The first order transition amplitude is}
a_f^{(1)}(t)& =-H^{\prime o}_{fi}\ \frac{e^{i(\omega_{fi}-\omega)t}-1}{(\omega_{fi}-\omega)\hbar}
\end{align*}
For large $t$ only those transitions with $\omega_{fi}-\omega=0$ or $E_f-E_i=\hbar \omega$ are possible with appreciable probability.\\
This means that the perturbation can induce a transition from $E_i$ to a level $E_f$ whose energy is higher than $E_i$ by $\hbar \omega$.
Such a transition may be described as absorption of energy $\hbar \omega$ by the system from the perturbing agency.\\
For the Hermitian conjugate of $H^\prime$,\\
$$ H^{\prime\dagger}=(H^{\prime o})^\dagger e^{i\omega t},$$
the transition amplitude changes only through the replacement of $H^{\prime o}_{fi}$ by $(H^{\prime o\dagger})_{fi}=(H^{\prime o}_{if})^{*}$ and of $-\omega$ by $+\omega$; it therefore induces transitions with
$$\omega_{fi}+\omega=0\quad \text{or }\quad E_f-E_i=-\hbar\omega $$
In this case $E_f$ is lower than $E_i$; the energy $\hbar\omega$ is given away by the system to the perturbing agency, i.e. emission.\\
The actual perturbation can be written as
$$ H^\prime=H^{\prime o}e^{-i \omega t}+(H^{\prime o})^\dagger e^{+i\omega t}$$
\textbf{(b) Transition induced by incoherent spectrum of perturbing frequencies}\\
As long as the perturbation contains only a single frequency, the transition probability oscillates in time at the frequency $(\omega_{fi}\mp\omega)$ instead of $\omega_{fi}$.\\
However, a transition probability proportional to time can arise under the following conditions:\\
(i) The perturbation involves a whole spectrum of frequencies $\omega$ which are so closely spaced that very many such frequencies are contained within an interval $(1/t)$.\\
(ii) These are incoherent, in the sense that the phases of the different frequency components are unrelated to each other.\\
(iii) The magnitudes of these perturbations and the spacing of the frequencies are smooth functions of $\omega$.\\\\
Consider this situation of incoherent frequencies. To determine the total transition probability induced by the whole spectrum of frequencies, one simply adds up the probabilities arising from the different frequency components. Condition (i) then enables this sum to be replaced by an integral. Thus the total transition probability will be
\begin{align*}
\sum_{\omega} |a_f(t,\omega)|^2 &= \sum_{\omega} \frac{|H^{\prime o}_{fi}(\omega)|^2}{\hbar^2}\ \frac{4 \sin^2\frac{1}{2}(\omega_{fi}-\omega)t}{(\omega_{fi}-\omega)^2}\\
&=\int \frac{|H^{\prime o}_{fi}(\omega)|^2}{\hbar^2}\frac{4 \sin^2\frac{1}{2}(\omega_{fi}-\omega)t}{(\omega_{fi}-\omega)^2}\rho (\omega)\ d\omega
\end{align*}
$|H^{\prime o}_{fi}(\omega)|^2$ and $\rho(\omega)$ vary smoothly with $\omega$ and are almost constant in the interval $\frac{1}{t}$ around $\omega=\omega_{fi}$; both can therefore be taken outside the integral, and after integrating we get\\
Transition probability per unit time
$$ = \frac{2\pi}{\hbar^2}|H^{\prime o}_{fi}(\omega_{fi})|^2 \rho (\omega_{fi})$$
Here the total perturbation can be written as
$$H^\prime = H^{\prime o}(\omega)e^{-i\omega t}+ H^{\prime o}(-\omega)e^{i\omega t}$$
with $H^{\prime o}(-\omega)=[H^{\prime o}(\omega)]^\dagger$ to ensure hermiticity.
In order to include both terms, the summation should be carried out over $+\omega$ and $-\omega$, with
$$ |H^{\prime o}_{fi}(-\omega)|=|H^{\prime o}_{if}(\omega)|,\qquad \rho(\omega)=\rho(-\omega)$$
$\therefore$ For the upward transition from $i$ to $f$ (with $E_f>E_i$) and the downward (reverse) transition from $f$ to $i$, the
probabilities per unit time are
$$ =\frac{2\pi}{\hbar^2}|H^{\prime_0}_{fi}(\omega_{fi})|^2 \rho(\omega_{fi})\text{ and } \frac{2\pi}{\hbar^2}|H^{\prime_0}_{fi}(-\omega_{fi})|^2\rho(-\omega_{fi})$$
Since
$$\omega_{fi}=-\omega_{if}$$
these two probabilities per unit time are equal.\\
$\therefore$ The probabilities for upward and downward transitions between a given pair of levels induced by a Hermitian perturbation are identical.
\section{Pictures of quantum mechanics}
Each class of representation, also called a picture, differs from the others in the way it treats the time evolution of the system. The Schrödinger picture is useful when describing phenomena with time-independent Hamiltonians, whereas the interaction and Heisenberg pictures are useful when describing phenomena with time-dependent Hamiltonians.
\subsection{The Schrodinger picture}
In describing quantum dynamics, we have been using so far the Schrödinger picture in which state vectors depend explicitly on time, but operators do not:
$$
i \hbar \frac{d}{d t}|\psi(t)\rangle=\hat{H}|\psi(t)\rangle,
$$
where $|\psi(t)\rangle$ denotes the state of the system in the Schrödinger picture.The time evolution of a state $|\psi(t)\rangle$ can be expressed by means of the propagator, or time-evolution operator, $\hat{U}\left(t, t_{0}\right)$, as follows:
$$
|\psi(t)\rangle=\hat{U}\left(t, t_{0}\right)\left|\psi\left(t_{0}\right)\right\rangle,
$$
with
$$
\hat{U}\left(t, t_{0}\right)=e^{-i\left(t-t_{0}\right) \hat{H} / \hbar}
$$
The operator $\hat{U}\left(t, t_{0}\right)$ is unitary,
$$
\hat{U}^{\dagger}\left(t, t_{0}\right) \hat{U}\left(t, t_{0}\right)=I
$$
and satisfies these properties:
$$
\begin{gathered}
\hat{U}(t, t)=I \\
\hat{U}^{\dagger}\left(t, t_{0}\right)=\hat{U}^{-1}\left(t, t_{0}\right)=\hat{U}\left(t_{0}, t\right) \\
\hat{U}\left(t_{1}, t_{2}\right) \hat{U}\left(t_{2}, t_{3}\right)=\hat{U}\left(t_{1}, t_{3}\right)
\end{gathered}
$$
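For a time-independent Hamiltonian, the action of the propagator is most transparent in the energy eigenbasis $\hat{H}|n\rangle=E_{n}|n\rangle$. Expanding an arbitrary initial state gives
$$
\left|\psi\left(t_{0}\right)\right\rangle=\sum_{n} c_{n}|n\rangle \quad \Longrightarrow \quad|\psi(t)\rangle=\hat{U}\left(t, t_{0}\right)\left|\psi\left(t_{0}\right)\right\rangle=\sum_{n} c_{n} e^{-i E_{n}\left(t-t_{0}\right) / \hbar}|n\rangle,
$$
so each stationary component simply acquires a phase and the occupation probabilities $\left|\left\langle n \mid \psi(t)\right\rangle\right|^{2}=\left|c_{n}\right|^{2}$ do not change in time.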
\subsection{The Heisenberg Picture}
In this picture the time dependence of the state vectors is completely frozen. The Heisenberg picture is obtained from the Schrödinger picture by applying $\hat{U}^{\dagger}$ to $|\psi(t)\rangle$:
$$
|\psi(t)\rangle_{H}=\hat{U}^{\dagger}(t)|\psi(t)\rangle=|\psi(0)\rangle,
$$
where $|\psi(t)\rangle$ and $\hat{U}^{\dagger}(t)$ can be obtained from the Schrödinger picture by setting $t_{0}=0$ : $\hat{U}^{\dagger}(t)=\hat{U}^{\dagger}\left(t, t_{0}=0\right)=e^{i t \hat{H} / \hbar}$ and $|\psi(t)\rangle=\hat{U}(t)|\psi(0)\rangle$, with $\hat{U}(t)=e^{-i t \hat{H} / \hbar}$. Thus, we can write \\
$$|\psi(t)\rangle_{H}=e^{i t \hat{H} / \hbar}|\psi(t)\rangle$$
As $|\psi\rangle_{H}$ is frozen in time we have: $d|\psi\rangle_{H} / d t=0$. Let us see how the expectation value of an operator $\hat{A}$ in the state $|\psi(t)\rangle$ evolves in time:
$$
\langle\psi(t)|\hat{A}| \psi(t)\rangle=\left\langle\psi(0)\left|e^{i t \hat{H} / \hbar} \hat{A} e^{-i t \hat{H} / \hbar}\right| \psi(0)\right\rangle=\left\langle\psi(0)\left|\hat{A}_{H}(t)\right| \psi(0)\right\rangle={ }_{H}\left\langle\psi\left|\hat{A}_{H}(t)\right| \psi\right\rangle_{H},
$$
where $\hat{A}_{H}(t)$ is given by\\
$$\hat{A}_{H}(t)=\hat{U}^{\dagger}(t) \hat{A} \hat{U}(t)=e^{i t \hat{H} / \hbar} \hat{A} e^{-i t \hat{H} / \hbar}$$
The Schrödinger and the Heisenberg pictures coincide at $t=0$.\\
\textbf{Heisenberg equation of motion}\\
$$\frac{d\hat{A}_H}{dt}=\frac{1}{i\hbar}\left[\hat{A}_H,\hat{H}\right] $$
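As a simple illustration, take the harmonic oscillator written as $\hat{H}=\hbar \omega\left(\hat{a}^{\dagger} \hat{a}+\frac{1}{2}\right)$ and the lowering operator in the Heisenberg picture; using $\left[\hat{a}, \hat{a}^{\dagger}\right]=1$,
$$
\frac{d \hat{a}_{H}}{d t}=\frac{1}{i \hbar}\left[\hat{a}_{H}, \hat{H}\right]=\frac{\hbar \omega}{i \hbar}\, \hat{a}_{H}=-i \omega\, \hat{a}_{H} \quad \Longrightarrow \quad \hat{a}_{H}(t)=\hat{a}\, e^{-i \omega t} .
$$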
\subsection{The interaction picture}
The interaction picture, also called the Dirac picture, is useful to describe quantum phenomena with Hamiltonians that depend explicitly on time. In this picture both state vectors and operators evolve in time. We need, therefore, to find the equation of motion for the state vectors and for the operators.\\
\textbf{Equation of motion of state vectors}\\
State vectors in the interaction picture are defined in terms of the Schrödinger states $|\psi(t)\rangle$ by
$$
|\psi(t)\rangle_{I}=e^{i t \hat{H}_{0} / \hbar}|\psi(t)\rangle .
$$
If $t=0$ we have $|\psi(0)\rangle_{I}=|\psi(0)\rangle$. The time evolution of $|\psi(t)\rangle$ is governed by the Schrödinger equation with $\hat{H}=\hat{H}_{0}+\hat{V}$, where $\hat{H}_{0}$ is time independent, but $\hat{V}$ may depend on time.
To find the time evolution of $|\psi(t)\rangle_{I}$, we need the time derivative
$$
\begin{aligned}
i \hbar \frac{d|\psi(t)\rangle_{I}}{d t} &=-\hat{H}_{0} e^{i t \hat{H}_{0} / \hbar}|\psi(t)\rangle+e^{i t \hat{H}_{0} / \hbar}\left(i \hbar \frac{d|\psi(t)\rangle}{d t}\right) \\
&=-\hat{H}_{0}|\psi(t)\rangle_{I}+e^{i t \hat{H}_{0} / \hbar} \hat{H}|\psi(t)\rangle
\end{aligned}
$$
$\text { Since } \hat{H}=\hat{H}_{0}+\hat{V} \text { and }$
$$\begin{gathered}
e^{i H_{0} t / \hbar} \hat{V}=\left(e^{i t \hat{H}_{0} / \hbar} \hat{V} e^{-i t \hat{H}_{0} / \hbar}\right) e^{i t \hat{H}_{0} / \hbar}=\hat{V}_{I}(t) e^{i t \hat{H}_{0} / \hbar}, \\
\hat{V}_{I}(t)=e^{i t \hat{H}_{0} / \hbar} \hat{V} e^{-i t \hat{H}_{0} / \hbar},
\end{gathered}$$
we can rewrite\\
$$\begin{gathered}
i \hbar \frac{d|\psi(t)\rangle_{I}}{d t}=-\hat{H}_{0}|\psi(t)\rangle_{I}+\hat{H}_{0} e^{i t \hat{H}_{0} / \hbar}|\psi(t)\rangle+\hat{V}_{I}(t) e^{i t \hat{H}_{0} / \hbar}|\psi(t)\rangle, \\
i \hbar \frac{d|\psi(t)\rangle_{I}}{d t}=\hat{V}_{I}(t)|\psi(t)\rangle_{I} .
\end{gathered}$$
This is the Schrödinger equation in the interaction picture. It shows that the time evolution of the state vector is governed by the interaction $\hat{V}_{I}(t)$.\\
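Integrating this equation by iteration and keeping only the term of first order in $\hat{V}_{I}$ gives
$$
|\psi(t)\rangle_{I} \simeq\left[1-\frac{i}{\hbar} \int_{0}^{t} \hat{V}_{I}\left(t^{\prime}\right) d t^{\prime}\right]|\psi(0)\rangle_{I} ;
$$
projecting onto a final state $\langle f|$ with the initial condition $|\psi(0)\rangle_{I}=|i\rangle$ reproduces the first order amplitude $a_{f}^{(1)}(t)$ obtained earlier in time dependent perturbation theory, with $\hat{V}$ playing the role of $H^{\prime}$.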
\textbf{Equation of motion for the operators}\\
The interaction representation of an operator, $\hat{A}_{I}(t)$, is given in terms of its Schrödinger representation by
$$
\hat{A}_{I}(t)=e^{i \hat{H}_{0} t / \hbar} \hat{A} e^{-i \hat{H}_{0} t / \hbar}
$$
Calculating the time derivative of $\hat{A}_{I}(t)$ and since $\partial \hat{A} / \partial t=0$, we can show the time evolution of $\hat{A}_{I}(t)$ is governed by $\hat{H}_{0}$ :
$$
\frac{d \hat{A}_{I}(t)}{d t}=\frac{1}{i \hbar}\left[\hat{A}_{I}(t), \hat{H}_{0}\right]
$$
This equation is similar to the Heisenberg equation of motion, except that $\hat{H}$ is replaced by $\hat{H}_{0}$. The basic difference between the Heisenberg and interaction pictures is that in the Heisenberg picture it is $\hat{H}$ that appears in the exponents, whereas in the interaction picture it is $\hat{H}_0$ that appears.\\
\textbf{Conclusion}\\
We have seen that within the Schrödinger picture the states depend on time but the operators do not; in the Heisenberg picture only the operators depend explicitly on time, while the state vectors are frozen in time. The interaction picture, however, is intermediate between the Schrödinger and the Heisenberg pictures, since both state vectors and operators evolve with time.\\
\newpage
\begin{abox}
Practice set 1
\end{abox}
\begin{enumerate}
\begin{minipage}{\textwidth}
\item If the perturbation $H^{\prime}=a x$, where $a$ is a constant, is added to the infinite square well potential
$$
V(x)=\left\{\begin{array}{lll}
0 & \text { for } & 0 \leq x \leq \pi \\
\infty & & \text { otherwise }
\end{array}\right.
$$
The correction to the ground state energy, to first order in $a$, is
\exyear{NET JUNE 2011}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\frac{a \pi}{2}$
\task[\textbf{B.}]$a \pi$
\task[\textbf{C.}]$\frac{a \pi}{4}$
\task[\textbf{D.}]$\frac{a \pi}{\sqrt{2}}$
\end{tasks}
\begin{minipage}{\textwidth}
\item A particle in one dimension moves under the influence of a potential $V(x)=a x^{6}$, where $a$ is a real constant. For large $n$ the quantized energy level $E_{n}$ depends on $n$ as:
\exyear{NET JUNE 2011}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $E_{n} \sim n^{3}$
\task[\textbf{B.}]$E_{n} \sim n^{4 / 3}$
\task[\textbf{C.}]$E_{n} \sim n^{6 / 5}$
\task[\textbf{D.}]$E_{n} \sim n^{3 / 2}$
\end{tasks}
\begin{minipage}{\textwidth}
\item The perturbation $H^{\prime}=b x^{4}$, where $b$ is a constant, is added to the one dimensional harmonic oscillator potential $V(x)=\frac{1}{2} m \omega^{2} x^{2}$. Which of the following denotes the correction to the ground state energy to first order in $b$ ?
Hint: The normalized ground state wave function of the one dimensional harmonic oscillator potential is $\psi_{0}=\left(\frac{m \omega}{\hbar \pi}\right)^{1 / 4} e^{-m \omega x^{2} / 2 \hbar} .$ You may use the following integral $\left.\int_{-\infty}^{\infty} x^{2 n} e^{-a x^{2}} d x=a^{-n-\frac{1}{2}} \Gamma\left(n+\frac{1}{2}\right)\right]$
\exyear{NET DEC 2011}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\frac{3 b \hbar^{2}}{4 m^{2} \omega^{2}}$
\task[\textbf{B.}]$\frac{3 b \hbar^{2}}{2 m^{2} \omega^{2}}$
\task[\textbf{C.}]$\frac{3 b \hbar^{2}}{2 \pi m^{2} \omega^{2}}$
\task[\textbf{D.}]$\frac{15 b \hbar^{2}}{4 m^{2} \omega^{2}}$
\end{tasks}
\begin{minipage}{\textwidth}
\item A constant perturbation as shown in the figure below acts on a particle of mass $m$ confined in an infinite potential well between 0 and $L$.\\
\begin{figure}[H]
\centering
\includegraphics[height=3cm,width=5cm]{diagram-20210921(2)-crop(2)}
\end{figure}
$\text { The first-order correction to the ground state energy of the particle is }$
\exyear{NET DEC 2011}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\frac{V_{0}}{2}$
\task[\textbf{B.}]$\frac{3 V_{0}}{4}$
\task[\textbf{C.}]$\frac{V_{0}}{4}$
\task[\textbf{D.}] $\frac{3 V_{0}}{2}$
\end{tasks}
\begin{minipage}{\textwidth}
\item Consider a two-dimensional infinite square well
$$
V(x, y)=\left\{\begin{array}{ll}
0, & 0<x<a, \\
\infty, & \text { otherwise }
\end{array} \quad 0<y<a\right.
$$
Its normalized Eigenfunctions are $\psi_{n_{x}, n_{y}}(x, y)=\frac{2}{a} \sin \left(\frac{n_{x} \pi x}{a}\right) \sin \left(\frac{n_{y} \pi y}{a}\right)$,
where $n_{x}, n_{y}=1,2,3, . .$\\
If a perturbation $H^{\prime}=\left\{\begin{array}{cc}V_{0} & 0<x<\frac{a}{2}, \quad 0<y<\frac{a}{2} \\ 0 & \text { otherwise }\end{array}\right.$ \\is applied, then the correction to the
energy of the first excited state to order $V_{0}$ is
\exyear{NET JUNE 2013}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\frac{V_{0}}{4}$
\task[\textbf{B.}]$\frac{V_{0}}{4}\left[1 \pm \frac{64}{9 \pi^{2}}\right]$
\task[\textbf{C.}]$\frac{V_{0}}{4}\left[1 \pm \frac{16}{9 \pi^{2}}\right]$
\task[\textbf{D.}]$\frac{V_{0}}{4}\left[1 \pm \frac{32}{9 \pi^{2}}\right]$
\end{tasks}
\begin{minipage}{\textwidth}
\item Two identical bosons of mass $m$ are placed in a one-dimensional potential $V(x)=\frac{1}{2} m \omega^{2} x^{2} .$ The bosons interact via a weak potential,
$$
V_{12}=V_{0} \exp \left[-m \Omega\left(x_{1}-x_{2}\right)^{2} / 4 \hbar\right]
$$
where $x_{1}$ and $x_{2}$ denote coordinates of the particles. Given that the ground state wavefunction of the harmonic oscillator is $\psi_{0}(x)=\left(\frac{m \omega}{\pi \hbar}\right)^{\frac{1}{4}} e^{-\frac{m \omega x^{2}}{2 \hbar}} .$ The ground state energy of the two-boson system, to the first order in $V_{0}$, is
\exyear{NET JUNE 2013}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\hbar \omega+2 V_{0}$
\task[\textbf{B.}]$\hbar \omega+\frac{V_{0} \Omega}{\omega}$
\task[\textbf{C.}]$\hbar \omega+V_{0}\left(1+\frac{\Omega}{2 \omega}\right)^{-\frac{1}{2}}$
\task[\textbf{D.}]$\hbar \omega+V_{0}\left(1+\frac{\omega}{\Omega}\right)$
\end{tasks}
\begin{minipage}{\textwidth}
\item The bound on the ground state energy of the Hamiltonian with an attractive deltafunction potential, namely
$$
H=-\frac{\hbar^{2}}{2 m} \frac{d^{2}}{d x^{2}}-a \delta(x)
$$
using the variational principle with the trial wavefunction $\psi(x)=A \exp \left(-b x^{2}\right)$ is\\
$\left[\text { Note }: \int_{0}^{\infty} e^{-t} t^{a} d t=\Gamma(a+1)\right]$
\exyear{NET JUNE 2013}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $-m a^{2} / 4 \pi \hbar^{2}$
\task[\textbf{B.}]$-m a^{2} / 2 \pi \hbar^{2}$
\task[\textbf{C.}]$-m a^{2} / \pi \hbar^{2}$
\task[\textbf{D.}]$-m a^{2} / \sqrt{5} \pi \hbar^{2}$
\end{tasks}
\begin{minipage}{\textwidth}
\item The ground state eigenfunction for the potential $V(x)=-\delta(x)$ where $\delta(x)$ is the delta function, is given by $\psi(x)=A e^{-\alpha|x|}$, where $A$ and $\alpha>0$ are constants. If a perturbation $H^{\prime}=b x^{2}$ is applied, the first order correction to the energy of the ground state will be
\exyear{NET JUNE 2014}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\frac{b}{\sqrt{2} \alpha^{2}}$
\task[\textbf{B.}]$\frac{b}{\alpha^{2}}$
\task[\textbf{C.}]$\frac{2 b}{\alpha^{2}}$
\task[\textbf{D.}]$\frac{b}{2 \alpha^{2}}$
\end{tasks}
\begin{minipage}{\textwidth}
\item The ground state energy of the attractive delta function potential
$$
V(x)=-b \delta(x) \text {, }
$$
where $b>0$, is calculated with the variational trial function
$$
\psi(x)=\left\{\begin{array}{ccc}
A \cos \frac{\pi x}{2 a}, & \text { for } & -a<x<a, \\
0, & & \text { otherwise, }
\end{array}\right\} \text { is }
$$
\exyear{NET DEC 2014}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $-\frac{m b^{2}}{\pi^{2} \hbar^{2}}$
\task[\textbf{B.}]$-\frac{2 m b^{2}}{\pi^{2} \hbar^{2}}$
\task[\textbf{C.}]$-\frac{m b^{2}}{2 \pi^{2} \hbar^{2}}$
\task[\textbf{D.}]$-\frac{m b^{2}}{4 \pi^{2} \hbar^{2}}$
\end{tasks}
\begin{minipage}{\textwidth}
\item Consider a particle of mass $m$ in the potential $V(x)=a|x|, a>0$. The energy eigenvalues $E_{n}(n=0,1,2, \ldots .)$, in the WKB approximation, are
\exyear{NET DEC 2014 }
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\left[\frac{3 a \hbar \pi}{4 \sqrt{2 m}}\left(n+\frac{1}{2}\right)\right]^{1 / 3}$
\task[\textbf{B.}]$\left[\frac{3 a \hbar \pi}{4 \sqrt{2 m}}\left(n+\frac{1}{2}\right)\right]^{2 / 3}$
\task[\textbf{C.}]$\frac{3 a \hbar \pi}{4 \sqrt{2 m}}\left(n+\frac{1}{2}\right)$
\task[\textbf{D.}]$\left[\frac{3 a \hbar \pi}{4 \sqrt{2 m}}\left(n+\frac{1}{2}\right)\right]^{4 / 3}$
\end{tasks}
\begin{minipage}{\textwidth}
\item The Hamiltonian $H_{0}$ for a three-state quantum system is given by the matrix $H_{0}=\left(\begin{array}{lll}1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2\end{array}\right) .$ When perturbed by $H^{\prime}=\epsilon\left(\begin{array}{lll}0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0\end{array}\right)$ where $\epsilon \ll 1$, the resulting shift in the energy eigenvalue $E_{0}=2$ is
\exyear{NET DEC 2014}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\epsilon,-2 \epsilon$
\task[\textbf{B.}] $-\epsilon, 2 \epsilon$
\task[\textbf{C.}]$\pm \epsilon$
\task[\textbf{D.}]$\pm 2 \epsilon$
\end{tasks}
\begin{minipage}{\textwidth}
\item A particle of mass $m$ is in a potential $V=\frac{1}{2} m \omega^{2} x^{2}$, where $\omega$ is a constant. Let $\hat{a}=\sqrt{\frac{m \omega}{2 \hbar}}\left(\hat{x}+\frac{i \hat{p}}{m \omega}\right) .$ In the Heisenberg picture $\frac{d \hat{a}}{d t}$ is given by
\exyear{NET JUNE 2015}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\omega \hat{a}$
\task[\textbf{B.}]$-i \omega \hat{a}$
\task[\textbf{C.}]$\omega \hat{a}^{\dagger}$
\task[\textbf{D.}]$i \omega \hat{a}^{\dagger}$
\end{tasks}
\begin{minipage}{\textwidth}
\item A hydrogen atom is subjected to the perturbation
$$
V_{\text {pert }}(r)=\epsilon \cos \frac{2 r}{a_{0}}
$$
where $a_{0}$ is the Bohr radius. The change in the ground state energy to first order in $\epsilon$ is
\exyear{NET DEC 2015}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\frac{\epsilon}{4}$
\task[\textbf{B.}] $\frac{\epsilon}{2}$
\task[\textbf{C.}]$\frac{-\epsilon}{2}$
\task[\textbf{D.}] $\frac{-\epsilon}{4}$
\end{tasks}
\begin{minipage}{\textwidth}
\item Consider a particle of mass $m$ in a potential $V(x)=\frac{1}{2} m \omega^{2} x^{2}+g \cos k x .$ The change in the ground state energy, compared to the simple harmonic potential $\frac{1}{2} m \omega^{2} x^{2}$, to first order in $g$ is
\exyear{NET JUNE 2016}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $g \exp \left(-\frac{k^{2} \hbar}{2 m \omega}\right)$
\task[\textbf{B.}]$g \exp \left(\frac{k^{2} \hbar}{2 m \omega}\right)$
\task[\textbf{C.}]$g \exp \left(-\frac{2 k^{2} \hbar}{m \omega}\right)$
\task[\textbf{D.}] $g \exp \left(-\frac{k^{2} \hbar}{4 m \omega}\right)$
\end{tasks}
\begin{minipage}{\textwidth}
\item The energy levels for a particle of mass $m$ in the potential $V(x)=\alpha|x|$, determined in the $W K B$ approximation
$$
\sqrt{2 m} \int_{a}^{b} \sqrt{E-V(x)} d x=\left(n+\frac{1}{2}\right) \hbar \pi
$$
(where $a, b$ are the turning points and $n=0,1,2 \ldots$ ), are
\exyear{NET JUNE 2016}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $E_{n}=\left[\frac{h \pi \alpha}{4 \sqrt{m}}\left(n+\frac{1}{2}\right)\right]^{\frac{2}{3}}$
\task[\textbf{B.}]$E_{n}=\left[\frac{3 h \pi \alpha}{4 \sqrt{2 m}}\left(n+\frac{1}{2}\right)\right]^{\frac{2}{3}}$
\task[\textbf{C.}]$E_{n}=\left[\frac{3 h \pi \alpha}{4 \sqrt{m}}\left(n+\frac{1}{2}\right)\right]^{\frac{2}{3}}$
\task[\textbf{D.}] $E_{n}=\left[\frac{h \pi \alpha}{4 \sqrt{2 m}}\left(n+\frac{1}{2}\right)\right]^{\frac{2}{3}}$
\end{tasks}
\begin{minipage}{\textwidth}
\item A particle of charge $q$ in one dimension is in a simple harmonic potential with angular frequency $\omega$. It is subjected to a time- dependent electric field $E(t)=A e^{-\left(\frac{t}{\tau}\right)^{2}}$, where $A$ and $\tau$ are positive constants and $\omega \tau \gg 1$. If in the distant past $t \rightarrow-\infty$ the particle was in its ground state, the probability that it will be in the first excited state as $t \rightarrow+\infty$ is proportional to
\exyear{NET DEC 2016}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $e^{-\frac{1}{2}(\omega \tau)^{2}}$
\task[\textbf{B.}]$e^{\frac{1}{2}(\omega \tau)^{2}}$
\task[\textbf{C.}] 0
\task[\textbf{D.}]$\frac{1}{(\omega \tau)^{2}}$
\end{tasks}
\begin{minipage}{\textwidth}
\item A constant perturbation $H^{\prime}$ is applied to a system for time $\Delta t$ (where $H^{\prime} \Delta t<<\hbar$ ) leading to a transition from a state with energy $E_{i}$ to another with energy $E_{f}$. If the time of application is doubled, the probability of transition will be
\exyear{NET JUNE 2017}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] unchanged
\task[\textbf{B.}]doubled
\task[\textbf{C.}]quadrupled
\task[\textbf{D.}]halved
\end{tasks}
\begin{minipage}{\textwidth}
\item The Coulomb potential $V(r)=-e^{2} / r$ of a hydrogen atom is perturbed by adding $H^{\prime}=b x^{2}$ (where $b$ is a constant) to the Hamiltonian. The first order correction to the ground state energy is
(The ground state wavefunction is $\psi_{0}=\frac{1}{\sqrt{\pi a_{0}^{3}}} e^{-r / a_{0}}$ )
\exyear{NET JUNE 2017}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $2 b a_{0}^{2}$
\task[\textbf{B.}]$b a_{0}^{2}$
\task[\textbf{C.}]$b a_{0}^{2} / 2$
\task[\textbf{D.}]$\sqrt{2} b a_{0}^{2}$
\end{tasks}
\begin{minipage}{\textwidth}
\item Consider a one-dimensional infinite square well
$$
V(x)=\left\{\begin{array}{lll}
0 & \text { for } & 0<x<a \\
\infty & & \text { otherwise }
\end{array}\right.
$$
If a perturbation
$$
\Delta V(x)=\left\{\begin{array}{lc}
V_{0} & \text { for } 0<x<a / 3 \\
0 & \text { otherwise }
\end{array}\right.
$$
is applied, then the correction to the energy of the first excited state, to first order in $\Delta V$, is nearest to
\exyear{NET DEC 2017}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $V_{0}$
\task[\textbf{B.}]$0.16 V_{0}$
\task[\textbf{C.}]$0.2 V_{0}$
\task[\textbf{D.}]$0.33 V_{0}$
\end{tasks}
\end{enumerate}
\colorlet{ocre1}{ocre!70!}
\colorlet{ocrel}{ocre!30!}
\setlength\arrayrulewidth{1pt}
\begin{table}[H]
\centering
\arrayrulecolor{ocre}
\begin{tabular}{|p{1.5cm}|p{1.5cm}||p{1.5cm}|p{1.5cm}|}
\hline
\multicolumn{4}{|c|}{\textbf{Answer key}}\\\hline\hline
\rowcolor{ocrel}Q.No.&Answer&Q.No.&Answer\\\hline
1&\textbf{a}&2&\textbf{d}\\\hline
3&\textbf{a}&4&\textbf{b}\\\hline
5&\textbf{b}&6&\textbf{c}\\\hline
7&\textbf{c}&8&\textbf{d}\\\hline
9&\textbf{b}&10&\textbf{b}\\\hline
11&\textbf{none}&12&\textbf{b}\\\hline
13&\textbf{d}&14&\textbf{d}\\\hline
15&\textbf{b}&16&\textbf{a}\\\hline
17&\textbf{c}&18&\textbf{b}\\\hline
19&\textbf{d}&&\\\hline
\end{tabular}
\end{table}
\newpage
\begin{abox}
Practice set 2
\end{abox}
\begin{enumerate}
\begin{minipage}{\textwidth}
\item A particle of mass $m$ is confined in an infinite potential well:
$$
V(x)= \begin{cases}0, & \text { if } 0<x<L \\ \infty, & \text { otherwise. }\end{cases}
$$
It is subjected to a perturbing potential $V_{p}(x)=V_{o} \sin \left(\frac{2 \pi x}{L}\right)$ within the well. Let $E^{(1)}$ and $E^{(2)}$ be corrections to the ground state energy in the first and second order in $V_{0}$, respectively. Which of the following are true?
\exyear{GATE 2010}
\begin{figure}[H]
\centering
\includegraphics[height=4cm,width=5cm]{gate 2010}
\end{figure}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}]$E^{(1)}=0 ; E^{(2)}<0$
\task[\textbf{B.}]$E^{(1)}>0 ; E^{(2)}=0$
\task[\textbf{C.}]$E^{(1)}=0 ; E^{(2)}$ depends on the sign of $V_{0}$
\task[\textbf{D.}]$E^{(1)}<0 ; E^{(2)}<0$
\end{tasks}
\begin{minipage}{\textwidth}
\item The normalized eigenstates of a particle in a one-dimensional potential well
$$
V(x)= \begin{cases}0 & \text { if } 0 \leq x \leq a \\ \infty & \text { otherwise }\end{cases}
$$
are given by $\psi_{n}(x)=\sqrt{\frac{2}{a}} \sin \left(\frac{n \pi x}{a}\right)$, where $n=1,2,3, \ldots . .$
The particle is subjected to a perturbation
$$
V^{\prime}(x)= \begin{cases}V_{0} \cos \left(\frac{\pi x}{a}\right), & \text { for } 0 \leq x \leq \frac{a}{2} \\ 0, & \text { otherwise }\end{cases}
$$
The shift in the ground state energy due to the perturbation, in the first order perturbation theory,
\exyear{GATE 2011}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\frac{2 V_{o}}{3 \pi}$
\task[\textbf{B.}]$\frac{V_{o}}{3 \pi}$
\task[\textbf{C.}]$-\frac{V_{o}}{3 \pi}$
\task[\textbf{D.}]$-\frac{2 V_{o}}{3 \pi}$
\end{tasks}
\begin{minipage}{\textwidth}
\item Consider a system in the unperturbed state described by the Hamiltonian, $H_{0}=\left(\begin{array}{ll}1 & 0 \\ 0 & 1\end{array}\right)$. The system is subjected to a perturbation of the form $H^{\prime}=\left(\begin{array}{ll}\delta & \delta \\ \delta & \delta\end{array}\right)$, where $\delta \ll 1$. The energy eigenvalues of the perturbed system using the first order perturbation approximation are
\exyear{GATE 2012}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}]1 and $(1+2 \delta)$
\task[\textbf{B.}]$(1+\delta)$ and $(1-\delta)$
\task[\textbf{C.}]$(1+2 \delta)$ and $(1-2 \delta)$
\task[\textbf{D.}]$(1+\delta)$ and $(1-2 \delta)$
\end{tasks}
\textbf{Common data for questions 4 and 5}\\
$\begin{aligned}
&\text { To the given unperturbed Hamiltonian }\left[\begin{array}{ccc}
5 & 2 & 0 \\
2 & 5 & 0 \\
0 & 0 & 2
\end{array}\right] \\
&\text { we add a small perturbation given by } \varepsilon\left[\begin{array}{ccc}
1 & 1 & 1 \\
1 & 1 & -1 \\
1 & -1 & 1
\end{array}\right] \text { where } \varepsilon \text { is small quantity. }
\end{aligned}$\\
\begin{minipage}{\textwidth}
\item $\text { The ground state eigenvector of the unperturbed Hamiltonian is }$
\exyear{GATE 2013}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $(1 / \sqrt{2}, 1 / \sqrt{2}, 0)$
\task[\textbf{B.}]$(1 / \sqrt{2},-1 / \sqrt{2}, 0)$
\task[\textbf{C.}] $(0,0,1)$
\task[\textbf{D.}]$(1,0,0)$
\end{tasks}
\begin{minipage}{\textwidth}
\item A pair of eigenvalues of the perturbed Hamiltonian, using first order perturbation theory, is
\exyear{GATE 2013}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}]$3+2 \varepsilon, 7+2 \varepsilon$
\task[\textbf{B.}]$3+2 \varepsilon, 2+\varepsilon$
\task[\textbf{C.}]$3,7+2 \varepsilon$
\task[\textbf{D.}]$3,2+2 \varepsilon$
\end{tasks}
\begin{minipage}{\textwidth}
\item A particle is confined to a one dimensional potential box, with the potential
$$
V(x)= \begin{cases}0, & 0<x<a \\ \infty, & \text { otherwise }\end{cases}
$$
If the particle is subjected to a perturbation $W=\beta x$ within the box, where $\beta$ is a small constant, the first order correction to the ground state energy is
\exyear{GATE 2014}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] 0
\task[\textbf{B.}]$a \beta / 4$
\task[\textbf{C.}]$a \beta / 2$
\task[\textbf{D.}] $a \beta$
\end{tasks}
\begin{minipage}{\textwidth}
\item A particle is confined in a box of length $L$ as shown in the figure. If the potential $V_{0}$ is treated as a perturbation, including the first order correction, the ground state energy is
\exyear{GATE 2015}
\begin{figure}[H]
\centering
\includegraphics[height=3cm,width=4cm]{diagram-20210824(7)-crop}
\end{figure}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $E=\frac{\hbar^{2} \pi^{2}}{2 m L^{2}}+V_{0}$
\task[\textbf{B.}]$E=\frac{\hbar^{2} \pi^{2}}{2 m L^{2}}-\frac{V_{0}}{2}$
\task[\textbf{C.}] $E=\frac{\hbar^{2} \pi^{2}}{2 m L^{2}}+\frac{V_{0}}{4}$
\task[\textbf{D.}]$E=\frac{\hbar^{2} \pi^{2}}{2 m L^{2}}+\frac{V_{0}}{2}$
\end{tasks}
\begin{minipage}{\textwidth}
\item A one dimensional simple harmonic oscillator with Hamiltonian $H_{0}=\frac{p^{2}}{2 m}+\frac{1}{2} k x^{2}$ is subjected to a small perturbation, $H_{1}=\alpha x+\beta x^{3}+\gamma x^{4}$. The first order correction to the ground state energy is dependent on
\exyear{GATE 2017}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] only $\beta$
\task[\textbf{B.}]$\alpha$ and $\gamma$
\task[\textbf{C.}]$\alpha$ and $\beta$
\task[\textbf{D.}]only $\gamma$
\end{tasks}
\end{enumerate}
\colorlet{ocre1}{ocre!70!}
\colorlet{ocrel}{ocre!30!}
\setlength\arrayrulewidth{1pt}
\begin{table}[H]
\centering
\arrayrulecolor{ocre}
\begin{tabular}{|p{1.5cm}|p{1.5cm}||p{1.5cm}|p{1.5cm}|}
\hline
\multicolumn{4}{|c|}{\textbf{Answer key}}\\\hline\hline
\rowcolor{ocrel}Q.No.&Answer&Q.No.&Answer\\\hline
1&\textbf{a}&2&\textbf{a}\\\hline
3&\textbf{a}&4&\textbf{c}\\\hline
5&\textbf{c}&6&\textbf{c}\\\hline
7&\textbf{d}&8&\textbf{d}\\\hline
\end{tabular}
\end{table}
\newpage
\begin{abox}
Practice set 3
\end{abox}
\begin{enumerate}
\begin{minipage}{\textwidth}
\item A one dimensional infinite potential box is defined by $V(x)=\left\{\begin{array}{ll}0, & 0<x<a \\ \infty, & \text { otherwise }\end{array}\right.$. It is perturbed by the potential $\frac{\alpha V_{0} x^{2}}{a^{2}}$; find the first order energy correction to the $n^{th}$ state of the system.
\end{minipage}
\begin{answer}
$$E_{n}^{1}=\left\langle\phi_{n}|W| \phi_{n}\right\rangle, \text { where }\left|\phi_{n}\right\rangle=\sqrt{\frac{2}{a}} \sin \frac{n \pi x}{a}$$
\begin{align*}
&E_{n}^{1}=\alpha V_{0} \frac{2}{a} \int_{0}^{a} \frac{x^{2}}{a^{2}} \sin ^{2} \frac{n \pi x}{a} d x=\alpha V_{0} \frac{1}{a} \int_{0}^{a} \frac{x^{2}}{a^{2}}\left(1-\cos \frac{2 n \pi x}{a}\right) d x=\alpha V_{0}\left(\frac{1}{3}-\frac{1}{2 n^{2} \pi^{2}}\right) \\
&E_{n}=\frac{n^{2} \pi^{2} \hbar^{2}}{2 m a^{2}}+\alpha V_{0}\left(\frac{1}{3}-\frac{1}{2 n^{2} \pi^{2}}\right)
\end{align*}
\end{answer}
\begin{minipage}{\textwidth}
\item A one dimensional infinite potential box is defined as $V(x)= \begin{cases}0, & -\frac{L}{2}<x<\frac{L}{2} \\ \infty, & \text { otherwise }\end{cases}$. It is perturbed with the potential $H_{p}=V_{0} \exp \left(-\frac{x^{2}}{a^{2}}\right)$. Find the first order energy correction in the ground state with momentum $\frac{\pi \hbar}{L}$, assuming $\frac{a}{L} \ll 1$.
\end{minipage}
\begin{answer}
$\left|\phi_{1}\right\rangle=\sqrt{\frac{1}{L}} \exp \frac{i \pi x}{L}, \text { because momentum is } \frac{\pi \hbar}{L}$\\
\begin{align*}
&E_{1}^{1}=\left\langle\phi_{1}|W| \phi_{1}\right\rangle=\frac{V_{0}}{L} \int_{-L / 2}^{L / 2} \exp \left(-\frac{x^{2}}{a^{2}}\right) d x, \text { if } \frac{a}{L}<<1 \text { then } \\
&E_{1}^{1}=\frac{V_{0}}{L} \int_{-\infty}^{\infty} \exp \left(-\frac{x^{2}}{a^{2}}\right) d x=\frac{\sqrt{\pi} V_{0} a}{L} .
\end{align*}
\end{answer}
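The Gaussian integral in the last step is the standard result (substituting $u=x/a$)
$$
\int_{-\infty}^{\infty} \exp \left(-\frac{x^{2}}{a^{2}}\right) d x=a \int_{-\infty}^{\infty} e^{-u^{2}} d u=a \sqrt{\pi},
$$
which gives $E_{1}^{1}=\sqrt{\pi}\, V_{0} a / L$ as stated.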
\begin{minipage}{\textwidth}
\item A one dimensional infinite potential box is defined as $V(x)= \begin{cases}0, & -\frac{a}{2}<x<\frac{a}{2} \\ \infty, & \text { otherwise }\end{cases}$. It is perturbed with the potential $H_{p}=V_{0}\, a\, \delta(x)$. Find $E_{n}^{2}$, i.e., the second order correction to the ground state energy.
\end{minipage}
\begin{answer}
$E_{n}^{2}=\sum_{m \neq n} \frac{\left|\left\langle\phi_{m}|W| \phi_{n}\right\rangle\right|^{2}}{E_{n}-E_{m}}$\\
If $n$ is even, then $\left\langle\phi_{m}|W| \phi_{n}\right\rangle=0$\\
If $m$ is even, then $\left\langle\phi_{m}|W| \phi_{n}\right\rangle=0$\\
If $m$ and $n$ odd, then $\left\langle\phi_{m}|W| \phi_{n}\right\rangle=V_{0} a\left(\frac{2}{a}\right)\int_{-a / 2}^{a / 2} \cos \frac{m \pi x}{a} \cos \frac{n \pi x}{a} \delta(x) d x=2 V_{0}$
$$
E_{n}^{2}=\sum_{m \neq n} \frac{\left|\left\langle\phi_{m}|W| \phi_{n}\right\rangle\right|^{2}}{E_{n}-E_{m}}=\sum_{m=3,5, \ldots} \frac{4 V_{0}^{2}}{\left(n^{2}-m^{2}\right) E_{0}} \quad(n=1), \text { where } E_{0}=\frac{\pi^{2} \hbar^{2}}{2 m a^{2}}
$$
\end{answer}
\begin{minipage}{\textwidth}
\item A one dimensional harmonic oscillator with potential, $V(x)=\frac{1}{2} m \omega^{2} x^{2}$ is given, then\\
(a) if H.O potential is perturbed with perturbation $H_{p}=\lambda X^{4}$, then find the first order energy correction in $n^{\text {th }}$ state of the system.\\
(b) if H.O Perturbed with potential $V_{0} \cosh \alpha x$, then find the first order energy correction in ground state.
\end{minipage}
\begin{answer}
(a) $E_{n}^{1}=\left\langle\phi_{n}\left|H_{p}\right| \phi_{n}\right\rangle=\left\langle\phi_{n}\left|\lambda X^{4}\right| \phi_{n}\right\rangle$\\
$=\lambda\left(\frac{\hbar}{2 m \omega}\right)^{2}\left\langle\left(a+a^{\dagger}\right)^{4}\right\rangle=\lambda\left(\frac{\hbar}{2 m \omega}\right)^{2}\left[6 n^{2}+6 n+3\right]$\\
(b)$ E_{0}^{1}=\left\langle\phi_{0}|W| \phi_{0}\right\rangle, \text { where }\left|\phi_{0}\right\rangle=\left(\frac{m \omega}{\pi \hbar}\right)^{1 / 4} \exp \left(-\frac{m \omega x^{2}}{2 \hbar}\right) \text {. }$\\
\begin{align*}
&=\left(\frac{m \omega}{\pi \hbar}\right)^{1 / 2} V_{0} \int_{-\infty}^{\infty} \exp \left(-\frac{m \omega x^{2}}{\hbar}\right) \cosh \alpha x \, d x \\
&=V_{0}\left(\frac{m \omega}{\pi \hbar}\right)^{1 / 2} \int_{-\infty}^{\infty} \exp \left(-\frac{m \omega x^{2}}{\hbar}\right) \frac{\left(e^{\alpha x}+e^{-\alpha x}\right)}{2} d x \\
&=\frac{V_{0}}{2}\left(\frac{m \omega}{\pi \hbar}\right)^{1 / 2}\left(\int_{-\infty}^{\infty} e^{-\frac{m \omega}{\hbar}\left(x-\frac{\alpha \hbar}{2 m \omega}\right)^{2}} e^{\frac{\alpha^{2} \hbar}{4 m \omega}} d x+\int_{-\infty}^{\infty} e^{-\frac{m \omega}{\hbar}\left(x+\frac{\alpha \hbar}{2 m \omega}\right)^{2}} e^{\frac{\alpha^{2} \hbar}{4 m \omega}} d x\right) \\
&=\frac{V_{0}}{2}\left(\exp \frac{\alpha^{2} \hbar}{4 m \omega}+\exp \frac{\alpha^{2} \hbar}{4 m \omega}\right)=V_{0} \exp \frac{\alpha^{2} \hbar}{4 m \omega}
\end{align*}
\end{answer}
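The last two lines use the completing-the-square identity for Gaussian integrals,
$$
\int_{-\infty}^{\infty} e^{-p x^{2} \pm \alpha x}\, d x=\sqrt{\frac{\pi}{p}}\, e^{\alpha^{2} / 4 p}, \qquad p=\frac{m \omega}{\hbar},
$$
so each of the two exponential terms contributes $\frac{V_{0}}{2}\, e^{\alpha^{2} \hbar / 4 m \omega}$.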
\begin{minipage}{\textwidth}
\item
One dimensional potential is given by, $V(x)=\left\{\begin{array}{ll}\infty, & x<0 \\ \frac{1}{2} m \omega^{2} x^{2}, & x \geq 0\end{array} .\right.$ It is perturbed by potential\\
(a) $H_{p}=\lambda V_{0}$: find the first order correction to the ground state energy.\\
(b) $H_{p}=\lambda V_{0} \delta\left(x-\sqrt{\frac{\hbar}{m \omega}}\right)$: find the first order correction to the ground state energy.\\
(c) $H_{p}=\lambda X$: find the same.\\
(d) $H_{p}=\lambda X^{2}$: find the same.\\
\end{minipage}
\begin{answer}
For the given system the ground state is the $n=1$ state of the full harmonic oscillator, since the wave function must vanish at $x=0$:\\
$\left|\phi_{1}\right\rangle= \begin{cases}0, & \text { if } \quad x<0 \\ 2\left(\frac{m \omega}{\pi \hbar}\right)^{1 / 4}\left(\frac{m \omega}{\hbar}\right)^{1 / 2} x \exp \left(-\frac{m \omega x^{2}}{2 \hbar}\right) ; & 0<x<\infty\end{cases}$\\
(a) $E_{0}^{\prime}=V_{0}(4 \pi)\left(\frac{m \omega}{\pi \hbar}\right)^{3 / 2 } \int_{0}^{\infty} x^2e^{-\frac{m \omega x^{2}}{\hbar}} d x=V_{0}(4 \pi)\left(\frac{m \omega}{\pi \hbar}\right)^{3 / 2} \frac{1}{2}\left(\frac{\hbar}{m \omega}\right)^{3 / 2} \frac{1}{2} \sqrt{\pi}=V_{0}$\\\\
(b)$H_{p}=\lambda V_{0} \delta\left(x-\sqrt{\frac{\hbar}{m \omega}}\right)$
\begin{align*}
E_{0}^{\prime}&=V_{0} \cdot 4 \pi\left(\frac{m \omega}{\pi \hbar}\right)^{3 / 2} \int_{0}^{\infty} x^{2} e^{-\frac{m \omega x^{2}}{\hbar}} \delta\left(x-\sqrt{\frac{\hbar}{m \omega}}\right) d x\\
&=\left(4 \pi V_{0}\right)\left(\frac{m \omega}{\pi \hbar}\right)^{3 / 2}\left(\frac{\hbar}{m \omega}\right) e^{-\frac{m \omega}{\hbar} \cdot \frac{\hbar}{m \omega}}\\
&=\frac{4 \pi V_{0}}{\pi^{3 / 2}}\left(\frac{m \omega}{\hbar}\right)^{1 / 2} e^{-1}=4 V_{0} \sqrt{\frac{m \omega}{\pi \hbar}}\; e^{-1}
\end{align*}
$\text { (c) } H_{p}=\lambda x$
\begin{align*}
E_{0}^{\prime}&=4 \pi\left(\frac{m \omega}{\pi \hbar}\right)^{3 / 2} \int_{0}^{\infty} x^{3} e^{-\frac{m \omega x^{2}}{\hbar}} d x\\
&=4 \pi\left(\frac{m \omega}{\pi \hbar}\right)^{3 / 2} \cdot \frac{1}{2}\left(\frac{\hbar}{m \omega}\right)^{2}=2 \sqrt{\frac{\hbar}{\pi m \omega}}
\end{align*}
$\text { (d) } H_{p}=\lambda x^{2}$\\
\begin{align*}
E_{0}^{\prime}&=4 \pi\left(\frac{m \omega}{\pi \hbar}\right)^{3 / 2} \int_{0}^{\infty} x^{4} e^{-\frac{m \omega x^{2}}{\hbar}} d x\\
&=4 \pi\left(\frac{m \omega}{\pi \hbar}\right)^{3 / 2}\left(\frac{\hbar}{m \omega}\right)^{5 / 2} \times \frac{1}{2} \times \frac{3}{4} \sqrt{\pi}=\frac{3}{2}\left(\frac{\hbar}{m \omega}\right)
\end{align*}
\end{answer}
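The half-line Gaussian moments used in parts (a)--(d) above are, with $\mu=m \omega / \hbar$,
$$
\int_{0}^{\infty} x^{2} e^{-\mu x^{2}} d x=\frac{\sqrt{\pi}}{4 \mu^{3 / 2}}, \qquad
\int_{0}^{\infty} x^{3} e^{-\mu x^{2}} d x=\frac{1}{2 \mu^{2}}, \qquad
\int_{0}^{\infty} x^{4} e^{-\mu x^{2}} d x=\frac{3 \sqrt{\pi}}{8 \mu^{5 / 2}} .
$$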
\begin{minipage}{\textwidth}
\item If the harmonic potential is given by $V=\frac{1}{2} m \omega^{2} x^{2}$ and the trial wave function is given as\\
$$
\psi(x)= \begin{cases}A \cos \frac{\pi x}{a}, & \frac{-a}{2}<x<\frac{a}{2} \\ 0 & , \quad \text { otherwise }\end{cases}
$$
then find the ground state energy.
\end{minipage}
\begin{answer}
$\text { Since, }\langle\psi \mid \psi\rangle=1 \Rightarrow|A|=\sqrt{\frac{2}{a}}$\\
Hence, $|\psi\rangle=\sqrt{\frac{2}{a}} \cos \left(\frac{\pi x}{a}\right), \frac{-a}{2}<x<\frac{a}{2}$\\
And also, average kinetic energy, $\langle T\rangle=\frac{\pi^{2} \hbar^{2}}{2 m a^{2}}$ \\
\begin{align*}
\langle V\rangle&=\frac{1}{2} m \omega^{2} \int_{-a / 2}^{a / 2}\left(\sqrt{\frac{2}{a}}\right)^{2} x^{2} \cos ^{2}\left(\frac{\pi x}{a}\right) d x\\
&=\frac{m \omega^{2}}{2 a} \int_{-a / 2}^{a / 2}\left[x^{2}\left(1+\cos \frac{2 \pi x}{a}\right)\right] d x\\
&=\frac{m \omega^{2}}{2 a}\left[\frac{1}{3}\left(x^{3}\right)_{-a / 2}^{a / 2}+x^{2} \frac{a}{2 \pi} \sin \left(\frac{2 \pi x}{a}\right)-\frac{a}{\pi}\left\{x\left(-\frac{a}{2 \pi}\right) \cos \frac{2 \pi x}{a}+\left(\frac{a}{2 \pi}\right)^{2} \sin \frac{2 \pi x}{a}\right\}\right]_{-a / 2}^{a / 2}\\
\langle V\rangle&=\frac{m \omega^{2}}{2 a}\left[\frac{a^{3}}{12}-\frac{a^{3}}{2 \pi^{2}}\right]=\frac{m \omega^{2} a^{2}}{4}\left(\frac{1}{6}-\frac{1}{\pi^{2}}\right)\\
\text { Hence, }\langle E\rangle&=\langle T\rangle+\langle V\rangle=\left(\frac{\pi^{2} \hbar^{2}}{2 m a^{2}}\right)+\frac{m \omega^{2} a^{2}}{4}\left(\frac{1}{6}-\frac{1}{\pi^{2}}\right)\\
&\text { For minimum energy, } \frac{d\langle E\rangle}{d a}=0\\
&\Rightarrow \frac{\pi^{2} \hbar^{2}}{2 m}\left(-\frac{2}{a^{3}}\right)+\frac{m \omega^{2} a}{2}\left(\frac{1}{6}-\frac{1}{\pi^{2}}\right)=0 \Rightarrow a=\left[\frac{2 \pi^{2} \hbar^{2}}{m^{2} \omega^{2}} \times\left(\frac{6 \pi^{2}}{\pi^{2}-6}\right)\right]^{1 / 4} \\
\therefore\langle E\rangle&=\hbar \omega\left(\frac{\pi^{2}}{12}-\frac{1}{2}\right)^{1 / 2}
\end{align*}
\end{answer}
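The last step follows from minimising an expression of the form $E(a)=A / a^{2}+B a^{2}$, whose minimum over $a$ is $2 \sqrt{A B}$; here
$$
A=\frac{\pi^{2} \hbar^{2}}{2 m}, \quad B=\frac{m \omega^{2}}{4}\left(\frac{1}{6}-\frac{1}{\pi^{2}}\right)
\;\Rightarrow\;
\langle E\rangle_{\min }=2 \sqrt{A B}=\pi \hbar \omega \sqrt{\frac{1}{12}-\frac{1}{2 \pi^{2}}}=\hbar \omega\left(\frac{\pi^{2}}{12}-\frac{1}{2}\right)^{1 / 2} .
$$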
\begin{minipage}{\textwidth}
\item Using the trial wave-function $\psi=e^{-\alpha r}$, where $\alpha$ is a variational parameter, find the ground state energy of the 3-D harmonic oscillator with potential $V=\frac{1}{2} m \omega^{2} r^{2}$.
\end{minipage}
\begin{answer}
$V=\frac{1}{2} m \omega^{2} r^{2}$\\
\begin{align*}
&\text { Trial wave function, } \psi=A e^{-\alpha r} \\
&\text { Normalising: }|A|^{2} \cdot 4 \pi \int_{0}^{\infty} r^{2} e^{-2 \alpha r} d r=1 \Rightarrow|A|=\sqrt{\frac{\alpha^{3}}{\pi}}
\end{align*}
$\langle T\rangle=-\frac{\hbar^{2}}{2 m} 4 \pi \int_{0}^{\infty} \psi^{*}\left(\frac{d^{2}}{d r^{2}}+\frac{2}{r} \frac{d}{d r}\right) \psi\, r^{2} d r, \text { where } \psi=\sqrt{\frac{\alpha^{3}}{\pi}} e^{-\alpha r}$\\
\begin{align*}
&\langle T\rangle=-\frac{\hbar^{2}}{2 m}\left(\frac{\alpha^{3}}{\pi}\right) 4 \pi \int_{0}^{\infty} e^{-\alpha r}\left(\alpha^{2}-\frac{2 \alpha}{r}\right) e^{-\alpha r} r^{2} d r \\
&\langle T\rangle=-\frac{\hbar^{2}}{2 m}\left(\frac{\alpha^{3}}{\pi}\right) 4 \pi\left(\alpha^{2} \int_{0}^{\infty} e^{-2 \alpha r} r^{2} d r-2 \alpha \int_{0}^{\infty} e^{-2 \alpha r} r d r\right) \\
&=\frac{-\hbar^{2} \alpha^{3}}{2 m \pi} 4 \pi\left[\alpha^{2} \cdot \frac{2}{(2 \alpha)^{3}}-2 \alpha \cdot \frac{1}{(2 \alpha)^{2}}\right]=\frac{-\hbar^{2} \alpha^{3}}{2 m \pi} 4 \pi\left[\frac{1}{4 \alpha}-\frac{1}{2 \alpha}\right]\\
&\langle T\rangle=\frac{\hbar^{2} \alpha^{2}}{2 m} \\
&\langle V\rangle=\frac{\alpha^{3}}{\pi} \times \frac{1}{2} m \omega^{2} \times 4 \pi \int_{0}^{\infty} r^{2} r^{2} e^{-2 \alpha r} d r=\frac{3 \omega^{2} m}{2 \alpha^{2}} \\
&\therefore\langle E\rangle=\frac{\hbar^{2} \alpha^{2}}{2 m}+\frac{3 \omega^{2} m}{2 \alpha^{2}}\\
&\frac{d\langle E\rangle}{d \alpha}=0 \Rightarrow \frac{\hbar^{2} \alpha}{m}-\frac{3 \omega^{2} m}{\alpha^{3}}=0 \Rightarrow \alpha=\left(\frac{3 m^{2} \omega^{2}}{\hbar^{2}}\right)^{1 / 4} \\
&\langle E\rangle=\frac{\hbar^{2}}{2 m}\left(\frac{3 m^{2} \omega^{2}}{\hbar^{2}}\right)^{1 / 2}+\frac{3 \omega^{2} m}{2}\left(\frac{\hbar^{2}}{3 m^{2} \omega^{2}}\right)^{1 / 2} \\
&=\frac{1}{2}\left(3 \hbar^{2} \omega^{2}\right)^{1 / 2}+\frac{1}{2}\left(3 \hbar^{2} \omega^{2}\right)^{1 / 2} \Rightarrow\langle E\rangle=\sqrt{3} \hbar \omega
\end{align*}
\end{answer}
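The radial integrals used repeatedly above are instances of the standard result
$$
\int_{0}^{\infty} r^{n} e^{-2 \alpha r} d r=\frac{n !}{(2 \alpha)^{n+1}}, \qquad \text { e.g. } \int_{0}^{\infty} r^{4} e^{-2 \alpha r} d r=\frac{24}{(2 \alpha)^{5}}=\frac{3}{4 \alpha^{5}},
$$
which gives $\langle V\rangle=\frac{\alpha^{3}}{\pi} \cdot \frac{1}{2} m \omega^{2} \cdot 4 \pi \cdot \frac{3}{4 \alpha^{5}}=\frac{3 m \omega^{2}}{2 \alpha^{2}}$ as above.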
\begin{minipage}{\textwidth}
\item A particle in one dimension moves under the influence of a potential $V(x)=a x^{6}$, where $a$ is a real constant. For large $n$ the quantized energy level $E_{n}$ depends on $n$ as $E_{n} \propto n^{\alpha}$; find the value of $\alpha$.
\end{minipage}
\begin{answer}$\left. \right. $\\
\begin{minipage}{0.5\textwidth}
$V(x)=a x^{6} \text { and } E=\frac{p^{2}}{2 m}+a x^{6}$\\
According to Bohr--Sommerfeld quantization $\oint p\, d x=n h$, $n=1,2 \ldots$\\\\
$
\oint \sqrt{2 m\left(E-a x^{6}\right)} d x=n h, n=1,2 \ldots
$
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{figure}[H]
\centering
\includegraphics[height=3cm,width=5cm]{diagram-20220205-20220205160131-crop}
\end{figure}
\end{minipage}
$$4 \int_{0}^{(E / a)^{1 / 6}} \sqrt{2 m\left(E-a x^{6}\right)}\, d x=n h \Rightarrow 4 \sqrt{2 m E} \int_{0}^{(E / a)^{1 / 6}} \sqrt{1-\frac{a x^{6}}{E}}\, d x=n h, \quad n=1,2 \ldots$$
\begin{align*}
&\text { Put }\left(\frac{a}{E}\right)^{1 / 6} x=t \text { then } d x=d t\left(\frac{E}{a}\right)^{\frac{1}{6}} \Rightarrow 4 \sqrt{2 m E} \cdot\left(\frac{E}{a}\right)^{1 / 6} \int_{0}^{1} \sqrt{\left(1-t^{6}\right)} d t=n h \\
&\Rightarrow E^{\frac{1}{2}+\frac{1}{6}} \propto n \Rightarrow E^{\frac{4}{6}} \propto n \Rightarrow E \propto n^{\frac{3}{2}} \text { so value of } \alpha=\frac{3}{2}
\end{align*}
\end{answer}
\begin{minipage}{\textwidth}
\item A particle of mass $m$ interacts with the potential $V(x)=\left\{\begin{array}{ll}\infty, & x \leq 0 \\ \lambda x, & x \geq 0\end{array}\right.$ Using the WKB approximation, find the bound state energies of the system.
\end{minipage}
\begin{answer}
For a bound state $E>0$, where $E=\frac{p^{2}}{2 m}+\lambda x \Rightarrow p=\sqrt{2 m(E-\lambda x)}$\\
The two turning points are given by $x_{1}=0$ and $x_{2}=\frac{E}{\lambda}$\\
In the given potential one wall (at $x=0$) is rigid and the other turning point is smooth, so the WKB
quantization condition is $\int_{x_{1}}^{x_{2}} p\, d x=\left(n+\frac{3}{4}\right) \pi \hbar$, where $n=0,1,2,\ldots$\\
\begin{minipage}{0.5\textwidth}
$\sqrt{2 m E} \int_{0}^{\frac{E}{\lambda}} \sqrt{1-\frac{\lambda x}{E}}\, d x=\left(n+\frac{3}{4}\right) \pi \hbar$ where $n=0,1,2,\ldots$ \\
$\frac{\lambda}{E} x=t \Rightarrow d x=\frac{E}{\lambda} d t$\\
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{figure}[H]
\centering
\includegraphics[height=3cm,width=5cm]{diagram-20220205(1)-20220205162522-crop}
\end{figure}
\end{minipage}
So the integration is given by $\sqrt{2 m E} \cdot \frac{E}{\lambda} \int_{0}^{1} \sqrt{1-t}\, d t=(n+3 / 4) \pi \hbar$, $n=0,1,2 \ldots$ To find $I=\int_{0}^{1} \sqrt{1-t}\, d t$:\\
\begin{align*}
&\text { put } 1-t=y,\ -d t=d y, \text { so } I=-\int_{1}^{0} \sqrt{y}\, d y=\int_{0}^{1} y^{1 / 2} d y=\frac{\left(y^{3 / 2}\right)_{0}^{1}}{3 / 2}=\frac{2}{3} \\
&\sqrt{2 m E} \cdot \frac{E}{\lambda} \int_{0}^{1} \sqrt{1-t} d t=\sqrt{2 m E} \cdot \frac{E}{\lambda} \times \frac{2}{3}=\left(n+\frac{3}{4}\right) \pi \hbar \quad \text { where } n=0,1,2 \ldots \\
&\Rightarrow \frac{2 \sqrt{2 m}}{3 \lambda} E^{3 / 2}=\left(n+\frac{3}{4}\right) \pi \hbar \Rightarrow E=\left[\frac{3 \hbar \pi \lambda}{2 \sqrt{2 m}}(n+3 / 4)\right]^{\frac{2}{3}}, \quad n=0,1,2 \ldots
\end{align*}
\end{answer}
\end{enumerate}
\section{The wedge and smash product of pointed types}
\begin{defn}
Let $A$ and $B$ be pointed types.
\begin{enumerate}
\item We define the \define{wedge} $A\vee B$ of $A$ and $B$ to be the pushout
\begin{equation*}
\begin{tikzcd}
\unit \arrow[r] \arrow[d] & B \arrow[d] \\
A \arrow[r] & A\vee B
\end{tikzcd}
\end{equation*}
\item We define the \define{smash product} $A\wedge B$ of $A$ and $B$ to be the cofiber of the cogap map of the square
\begin{equation*}
\begin{tikzcd}
\unit \arrow[r] \arrow[d] & B \arrow[d] \\
A \arrow[r] & A\times B
\end{tikzcd}
\end{equation*}
That is, the smash product is defined as the cofiber of the canonical map $A\vee B\to A\times B$.
\end{enumerate}
\end{defn}
For any two pointed types $A$ and $B$, there is a pointed map
\begin{equation*}
\mathsf{pair}_\ast : A \to_\ast (B\to_\ast A\wedge B).
\end{equation*}
\begin{thm}\label{thm:smash_adj}
Let $A$, $B$, and $X$ be pointed types. Then the pointed map
\begin{equation*}
(A\wedge B \to_\ast X)\to_\ast (A \to_\ast (B\to_\ast X))
\end{equation*}
given by $f\mapsto f\mathbin{\circ_\ast}\mathsf{pair}_\ast$ is an equivalence.
Moreover, these equivalences are natural in $A$, $B$, and $X$ in the sense that...
\end{thm}
\begin{cor}
For any $m,n:\N$ we have an equivalence
\begin{equation*}
\eqv{{\sphere{m}}\wedge{\sphere{n}}}{\sphere{m+n}}.
\end{equation*}
\end{cor}
\begin{proof}
We have
\begin{align*}
({\sphere{m}}\wedge{\sphere{n}}\to_\ast X) & \eqvsym (\sphere{m}\to_\ast (\sphere{n}\to_\ast X)) \\
& \eqvsym \loopspace[m]{\loopspace[n]{X}} \\
& \eqvsym \loopspace[m+n]{X} \\
& \eqvsym (\sphere{m+n}\to_\ast X)
\end{align*}
By the naturality of the equivalences in \cref{thm:smash_adj} it follows that the composite equivalence is given by precomposition by the pointed map
\begin{equation*}
\sphere{m}\wedge\sphere{n}\to_\ast \sphere{m+n}
\end{equation*}
that corresponds to the identity map $\sphere{m+n}\to_\ast \sphere{m+n}$. Thus it follows by \cref{ex:yoneda_ptd_types} that this pointed map is an equivalence.
\end{proof}
\begin{thm}
Given two pointed spaces, there is an equivalence
\begin{equation*}
\eqv{\join{X}{Y}}{\susp(X\wedge Y)}.
\end{equation*}
\end{thm}
\begin{exercises}
\exercise
\begin{subexenum}
\item Show that $Y$ is equivalent to the mapping cone of $X\to X\vee Y$.
\item Show that the pushout of $X \leftarrow X\vee Y \rightarrow Y$ is contractible.
\end{subexenum}
\exercise Let $A$ and $B$ be pointed types. Show that the square
\begin{equation*}
\begin{tikzcd}
A+B \arrow[r] \arrow[d] & A\times B \arrow[d] \\
1+1 \arrow[r] & A\wedge B
\end{tikzcd}
\end{equation*}
is cocartesian.
\exercise Show that if
\begin{equation*}
\begin{tikzcd}
S_1 \arrow[r] \arrow[d] & Y_1 \arrow[d] & S_2 \arrow[r] \arrow[d] & Y_2 \arrow[d] \\
X_1 \arrow[r] & Z_1 & X_2 \arrow[r] & Z_2
\end{tikzcd}
\end{equation*}
are pushout squares, where all types, maps and homotopies are pointed, then so is
\begin{equation*}
\begin{tikzcd}
S_1\vee S_2 \arrow[r] \arrow[d] & Y_1\vee Y_2 \arrow[d] \\
X_1 \vee X_2 \arrow[r] & Z_1\vee Z_2.
\end{tikzcd}
\end{equation*}
\exercise Show that if
\begin{equation*}
\begin{tikzcd}
S \arrow[r] \arrow[d] & Y \arrow[d] \\
X \arrow[r] & Z
\end{tikzcd}
\end{equation*}
is a cocartesian square of pointed spaces, then the cofiber of $X\vee Y\to Z$ is equivalent to $\susp(S)$.
\exercise Show that there is an equivalence
\begin{equation*}
\eqv{\susp(X\times Y)}{\susp(X\vee Y)\vee \susp(X\wedge Y)}
\end{equation*}
\exercise Show that $\susp(X\vee Y)$ is a retract of $\susp(X\times Y)$.
\exercise Show that if $f:A\to X$ is a constant map of pointed spaces, then $\eqv{M_f}{X\vee \susp(A)}$.
\exercise Show that the cofiber of the diagonal $\delta:\sphere{1}\to \sphere{1}\times\sphere{1}$ is equivalent to $\sphere{2}\vee\sphere{2}$.
\exercise Show that $\eqv{\mathsf{Fin}(n+1)\wedge \mathsf{Fin}(m+1)}{\mathsf{Fin}(n\cdot m)+\unit}$.
\end{exercises}
% SPDX-License-Identifier: MIT
% Copyright (c) 2017-2020 Forschungszentrum Juelich GmbH
% This code is licensed under MIT license (see the LICENSE file for details)
%
\documentclass[t]{beamer}
\usepackage{trace}
\usetheme{Juelich}
\fzjset{
title page = text,
part page = text,
section page = text
}
\title{This is a very long title that will end up being split into multiple lines, but only the first line is indented}
\subtitle{The subtitle of this presentation is also quite long, oh my, look what a mess you have made}
\author{Your Name}
\institute{Your Institute}
\date{\today}
\titlegraphic{
\includegraphics[
width=\paperwidth]{placeholder}
}
\begin{document}
\maketitle
\part{This is quite a long name for a part, are you sure that it really has to be this long}
\makepart
\section{This is a long section name that will end up being split into multiple lines}
\makesection
\begin{frame}{This frame has a very long frame title that will end up being split into multiple lines}{My gosh, I do not think I have ever seen a subtitle this long, yet it seems it is still not quite long enough, let us add a few more words}
This is the frame content
\end{frame}
\end{document}
%
% set up at March 9th, 2011
%
%
%
\chapter{Advanced Discussion for Functionals}
%
%
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{$\lambda$ dependent functionals}
%
% this part is coded in q-chem, from weitao yang's original paper
%
The MCY functional is based on the idea of constructing the adiabatic
connection path:
\begin{equation}
E_{XC}[\rho] = \int_{0}^{1} W_{\lambda}[\rho] d \lambda
\label{MCY_functional_eq:1}
\end{equation}
If $W_{\lambda}[\rho]$ is known, the exchange-correlation energy follows
directly. The MCY functional starts from the Pad\'e form:
\begin{equation}
W_{\lambda}[\rho] = a + \frac{\lambda b}{1 + \lambda c}
\label{MCY_functional_eq:2}
\end{equation}
Integrating over $\lambda$ from $0$ to $1$ gives:
\begin{equation}
E_{XC}[\rho] = a + \frac{b}{c}\left(1 - \frac{\ln
(1+c)}{c}\right)
\label{MCY_functional_eq:3}
\end{equation}
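This follows by integrating the Pad\'e form term by term over $\lambda$:
\begin{align*}
\int_{0}^{1}\left(a+\frac{\lambda b}{1+\lambda c}\right) d \lambda
&= a+\frac{b}{c} \int_{0}^{1}\left(1-\frac{1}{1+\lambda c}\right) d \lambda \\
&= a+\frac{b}{c}\left(1-\frac{\ln (1+c)}{c}\right)
\end{align*}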
Hence the unknown $E_{xc}$ is determined by $a$, $b$ and $c$.
According to the already known exact information for the adiabatic connection
path, we have:
\begin{itemize}
\item As $\lambda \rightarrow 0$, $W_{\lambda}[\rho]\rightarrow E_{x}$,
where $E_{x}$ is the exact exchange energy.
\item As $\lambda \rightarrow 0$,
$\frac{\partial W_{\lambda}[\rho]}{\partial \lambda}\rightarrow
2 E_{c}^{GL}$. This is the known ``G\"orling--Levy'' expression.
\end{itemize}
According to (\ref{MCY_functional_eq:2}), as $\lambda \rightarrow 0$ we have
$W_{\lambda}[\rho] = a$. Hence the functional $a$ characterizes the exact
exchange energy in this functional. Moreover, for a one-electron system we
have $E_{xc} = a = E_{x}$, which cancels the self-interaction error (SIE),
so this functional is also SIE free.
The next step is to associate the functional $b$ with the ``G\"orling--Levy''
expression. From (\ref{MCY_functional_eq:2}), assuming that $a$, $b$
and $c$ are independent of $\lambda$, we have:
\begin{align}
\frac{\partial W_{\lambda}[\rho]}{\partial \lambda}
&= \left. \frac{b(1+\lambda c) - \lambda cb}{(1+\lambda c)^{2}}\right |_{\lambda
= 0} \nonumber \\
&= b
\label{MCY_functional_eq:4}
\end{align}
Hence $b$ is the initial slope of the adiabatic connection path at
$\lambda = 0$, and we have:
\begin{equation}
b = 2\lim_{\lambda \rightarrow 0}E_{c}[\rho_{1/\lambda}]
\label{MCY_functional_eq:5}
\end{equation}
A workable form of $b$ is now needed. To keep the functional SIE free, the
chosen correlation functional must vanish for a one-electron system. The
TPSS correlation functional satisfies this, which gives:
\begin{equation}
b =2 \lim_{\lambda \rightarrow 0}E_{tpss}[\rho_{1/\lambda}]
\label{MCY_functional_eq:6}
\end{equation}
Only the last component, $c$, remains to be determined. We already have the
initial point and initial slope of the adiabatic connection path,
characterized by $a$ and $b$ respectively; to fix the path we only need one
more point on it, that is:
\begin{equation}
W_{\lambda}[\rho] = W_{\lambda_{p}}[\rho]
\label{MCY_functional_eq:7}
\end{equation}
This fixes the component $c$.
Setting $\lambda = \lambda_{p}$ in (\ref{MCY_functional_eq:2}) and solving
for $c$ gives:
\begin{equation}
c =\frac{W_{\lambda_{p}}[\rho] - \lambda_{p}b - a}{
\lambda_{p}(a - W_{\lambda_{p}}[\rho])}
\label{MCY_functional_eq:8}
\end{equation}
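For completeness, the algebra behind (\ref{MCY_functional_eq:8}) is simply
\begin{equation*}
W_{\lambda_{p}}[\rho]=a+\frac{\lambda_{p} b}{1+\lambda_{p} c}
\;\Rightarrow\;
1+\lambda_{p} c=\frac{\lambda_{p} b}{W_{\lambda_{p}}[\rho]-a}
\;\Rightarrow\;
c=\frac{W_{\lambda_{p}}[\rho]-\lambda_{p} b-a}{\lambda_{p}\left(a-W_{\lambda_{p}}[\rho]\right)} .
\end{equation*}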
This completes the construction of the MCY functional.
On the other hand, in the given Fortran file xc\_getv.F the subroutine
scf\_adiabatic calculates the MCY total energy and Fock matrix components.
In that code, rather unexpectedly, $b$ is instead a composite functional,
which is expressed as:
\begin{equation}
b = \frac{-a^{2} + aE_{blyp} - \lambda_{p}E_{tpss}E_{blyp}}{
\lambda_{p}a(a-E_{blyp})}
\end{equation}
Here $E_{tpss}$ is the energy of the scaled $\lambda$-dependent TPSS
correlation functional, as characterized in (\ref{MCY_functional_eq:6});
$E_{blyp}$ is the energy of the $\lambda$-dependent BLYP functional, which
plays the role of $W_{\lambda_{p}}[\rho]$ in (\ref{MCY_functional_eq:7}). The
components $a$ and $c$ remain the same.
Moreover, the exchange-correlation energy in the code also differs from
expression (\ref{MCY_functional_eq:3}); there the total energy is
expressed as:
\begin{equation}
E_{xc} = \frac{ab}{c} + \frac{a(c-b)}{c^{2}}\ln(1+c)
\end{equation}
This differs from the expression given in the published paper.
Thanks and best wishes,
fenglai
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../../main"
%%% End:
\documentclass[a4paper,11pt]{article}
\usepackage{ifpdf}
\ifpdf
\usepackage[pdftex]{graphicx}
\usepackage[pdftex]{hyperref}
\else
\usepackage{graphicx}
\usepackage{hyperref}
\fi
\usepackage[svgnames]{xcolor}
\usepackage{minted}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{xspace}
\usepackage{booktabs}
\usepackage{longtable}
\usepackage[top=3cm, bottom=3cm, left=2.5cm,right=2.5cm]{geometry}
%\usepackage[left=3cm,right=3cm]{geometry}
\pagestyle{headings}
\author{Stuart Moodie, Maciej Swat and Niels Kristensen}
\date{Aug 12, 2013}
\title{Proposed revision to Trail Design in \pharmml}
\colorlet{bkgd}{gray!5}
%\usemintedstyle{trac}
%\newminted{xml}{bgcolor=bkgd,fontsize=\footnotesize%
%,fontfamily=courier%
%}
\newminted{xml}{fontsize=\footnotesize,fontfamily=courier}
% \newcommand{\inputxml}[1]{\inputminted[bgcolor=bkgd,fontsize=\scriptsize%
% ,fontfamily=courier%
% ]{xml}{codesnippets/#1}}
\newcommand{\cellml}{CellML\xspace}
\newcommand{\sbml}{SBML\xspace}
\newcommand{\sedml}{SED-ML\xspace}
\newcommand{\mathml}{MathML\xspace}
\newcommand{\uncertml}{UncertML\xspace}
\newcommand{\pharmml}{PharmML\xspace}
\newcommand{\xelem}[1]{\texttt{<#1>}\index{XML Element!\texttt{<#1>}}}
\newcommand{\xatt}[1]{\texttt{#1}\index{XML Attribute!\texttt{#1}}}
\begin{document}
\maketitle
\tableofcontents
\section{Introduction}
This document contains a proposal to radically revise the way the
trial design and estimation steps are designed in the existing
\pharmml specification. The proposal has been discussed and agreed
between us and is our preferred way ahead. This document aims to set
out the problems with the current approach and then go on to propose
what we think is a better solution.
\section{Fixing the problems in the Trial Design}
\subsection{The problems to be addressed}
In version 0.1 of the specification we provided 2 ways to define the
trial design. You could define the trial structure explicitly using
the \xelem{Design} element and its children or you could define the
trial design in the data file used to describe estimation or
simulation. The first approach has many virtues in that it explicitly
defines what is only implied by the data file approach, it is easy
to understand, non-redundant. The second approach is much less clear
since the dosing regiment, occasion structure, covariates and
objective data are all combined into a single tabular structure that
is inherently very redundant and much harder to understand. However,
the data-file approach is that adopted by tools such as NONMEM and
MONOLIX.
Both approaches, as implemented in the current version of \pharmml,
have limitations that we do need to address in a future release.
{\small%
\begin{longtable}{p{5cm} p{10cm}}\toprule
\multicolumn{2}{l}{\textbf{Explicit Trial Design}}\\\midrule
Occasions cannot span epochs. & Because an occasion is a child of a
\xelem{TrialEpoch} element, it was not designed to span an epoch. It
is desirable to do this however, so this hierarchical relationship
needs to be re-examined.\\
Higher levels of variability cannot be represent. & If one wants to
represent variability between study centres, or countries and one
wants to represent this variability as a random effect (and not use
occasions) then \pharmml cannot do this currently.\\
Washout and run-in periods are treated as epochs. & For consistency we
should treat these as epochs and so enable occasions to span them.\\
Varying bolus doses not supported. & It is not possible to define a
dose amount that changes at each dosing time within a single Bolus
dosing regimen. It would seem sensible to enable this.\\
Per-individual doses not supported. & It is not possible to define a
dose amount that changes per individual. If a dosing amount is
per-body weight then this can be done in an equation, but you cannot
currently specify a different absolute dose amount per subject.\\\\
\multicolumn{2}{l}{\textbf{Design in Data-File}}\\\midrule
Reading a data-file is imperative. & One of the main
principles of \pharmml is that it is declarative. However, the components
in the estimation and simulation steps that define how a data file is
read are inherently not. In particular the order in which the lines of the file
are read is significant. In some more complex trial designs there
may be two or three lines describing the same time-point. Altering the
order of these lines may, in pathological cases, alter the result of
the model.\\
Easy to generate, difficult to convert to a target. & The current
data-file structures in the estimation and simulation steps can be
relatively easily generated from formats such as NMTRAN and MLXTRAN. However,
there may be information loss because the \pharmml representation does
not make any assumptions about the content of the data-file and so
has to do more work.\\
Some dosing regimens cannot be described. & Some dosing regimens
described in NONMEM cannot be handled by this approach. For example
a repeated steady state infusion will have values in the
columns TIME, DOSE, RATE, SS and II, which in combination have
special meanings. In the current version it is not possible to use this information in the
datafile to describe such a dosing regimen.\\
Typing of file contents. & Many of the values in a data file are
assigned to variables or parameters in the model. This means that they
should have types that match the element in the model. At present this
is undefined behaviour and the implication is that the item in the
file will be converted from a string to the appropriate
value. However, although this can be validated it is more work and as
with all implicit conversions there may be cases where the conversion
made is incorrect. For example, is 'T' a string or a Boolean value for TRUE?\\
Defining possible values. & In some cases, such as defining the set of
individuals, or the possible categorical values for a categorical
covariate such information must be implied by the unique set of values
in the data file. This does not permit one to define categories in
your model, that are not used by the current population of
subjects. It is also potentially error prone. For example in a data
file you may have a set of 10 rows containing the same ID value. What
if the 5th row contains a different value? Is this a mistake or
intended? It's probably a mistake, but when using the contents to
describe the set of all possible values this kind of error cannot be
identified with certainty.\\
\bottomrule
\end{longtable}%
}
A final issue with the approach taken in the current specification is
that we have 2 ways of doing the same thing, namely defining the trial
design. Which version should we recommend you use? Worse, they don't
have completely equivalent functionality. The issues associated with
maintaining two ways to do the same thing can be summarised as:
\begin{itemize}
\item It increases the complexity of the language making it harder to
learn and to validate in software.
\item It increases the effort required to design and test the
language and to bug fix. Naively, one would expect duplicate structures within the
language to require double such effort.
\item Duplication also increases the work of converters because they
now need to map two sets of \pharmml structures to the same target structures.
\item Ironically increased flexibility can make \pharmml harder to
learn and be detrimental to adoption. Users are unclear which
structures to use and would tend to want to avoid having to learn
  two ways to do the same thing.
\end{itemize}
%
So to conclude, whether or not we adopt my proposed
changes here, I recommend that we stick with only one way of
defining the trial design in the next version of \pharmml.
\subsection{Outline of changes to the trial design}
From the critique above, it is inevitable that any changes I am
proposing must also affect the estimation and simulation step sections
of \pharmml. That is the case. In looking at the problem, it occurred
to me that the data-file conflates three classes of information:
\begin{description}
\item[Population] The attributes of the individuals in the study: the
population in the population model. Each individual has a weight,
an age, a gender and numerous other properties that may or may not
be modelled as covariates in a given model. In addition, these
properties may change over time.
\item[Dosing] When and how a drug or drugs are administered to the
individuals in the trial.
\item[Measurements] These are the observations taken from each
individual and specific times during the study. Such measurements
provide the objective data used during parameter estimation and are
typically the outputs calculated during a simulation.
\end{description}
Fortunately, I was not alone in this view and the developers of PharML\footnote{I'm not
  sure what the definitive reference or link to the PharML spec
  is.} had this insight and developed their language accordingly.
By separating out these classes of information you can see that the
information we need to define the clinical trial is as follows:
\begin{description}
\item[Trial Structure] The organisation of the trial, how the subjects
are grouped into different treatment groups and what the dosing
  regimen is within these treatment groups.
\item[Population] As above, the properties specific to the individual,
including those that vary over time.
\item[Individual Dosing] This is related to the treatment regimens
described in the trial structure, but describes the dosing of each
subject in the study.
\end{description}
In essence my proposal is that we redesign the Design section of
\pharmml to represent this information and that this is the
only place the trial design is defined in \pharmml. That way the
estimation step can be significantly simplified and its main focus
can be the mapping of the objective data to the model and the
description of an estimation task.
\subsection{Proposed changes}
The first thing to note is that the \xelem{Design} element has been
renamed to \xelem{TrialDesign}. I think this is more consistent with
what we tend to call the information represented in this element. The
next thing is the element \xelem{Structure}. This element is used to
define the structure of the trial. To do this I have reused, almost
verbatim, the CDISC Study Design Model\footnote{CDISC URL to go
here.}, which is an XML representation of a clinical trial. Using
this design gives us the reassurance that we should be able to
represent all trial structures that we are likely to encounter.
The figure below (figure \ref{fig:cdiscstruct}) shows how the CDISC
trial structure is organised. It has five main components:
\begin{description}
\item[Epoch] The epoch defines a period of time during the study which
has a purpose within the study. For example a washout or a treatment
window. In CDISC Epochs can describe screening or follow-up periods,
which are out of the scope of \pharmml. An epoch is usually defined
by a time period.
\item[Arm] The arm represents a path through the study taken by a
subject. An arm is composed of a study cell for each epoch in the study.
\item[Cell] The study cell describes what is carried out during an
epoch in a particular arm. There is only one cell per epoch.
\item[Segment] The segment describes a set of planned observations and
interventions, which may or may not involve treatment. Note that in
\pharmml our definition is more limited and we only describe
  treatments. A segment can contain one or more activities.
\item[Activity] The activity is an action that is taken in the
study. Here it is typically a treatment regimen.
\item[StudyEvent] A study event describes the collection of
information about a particular individual. In CDISC this can be
information captured during screening or other non-treatment phases
of the clinical trial. But here we restrict it to capturing
observations during the treatment. In \pharmml this is how we
capture occasions.
\end{description}
\begin{figure}[htb]
\includegraphics[width=\linewidth]{./CDISCTrialStructure}
\caption{Overview of the Trial Structure used in the CDISC Study
Design Model.}
\label{fig:cdiscstruct}
\end{figure}
To get an idea how this works in practice, the following example shows
a simple clinical trial describing a steady state model, with one
study arm.
\begin{xmlcode}
<TrialDesign xmlns="http://www.pharmml.org/2013/03/TrialDesign">
<Structure>
<!-- Define the trial structure -->
<Epoch oid="e1">
<Order>1</Order>
</Epoch>
<Arm oid="a1"/>
<Cell oid="c1">
<EpochRef oid="e1"/>
<ArmRef oid="a1"/>
<SegmentRef oid="s1"/>
</Cell>
<Segment oid="s1">
<ActivityRef oid="a1"/>
</Segment>
<Activity oid="a1">
<Bolus>
<DoseAmount>
<DoseVar block="main" symbId="D"/>
<ct:Assign>
<ct:Real>100</ct:Real>
</ct:Assign>
</DoseAmount>
<SteadyState>
<EndTime>
<ct:SymbRef symbId="tD"/>
<ct:Assign><ct:Real>0</ct:Real></ct:Assign>
</EndTime>
<Interval>
<ct:SymbRef block="p1" symbId="tau"/>
<ct:Assign><ct:Real>12</ct:Real></ct:Assign>
</Interval>
</SteadyState>
</Bolus>
</Activity>
</Structure>
<Population>
<!-- Define the variability level associated with the
population -->
<ct:VariabilityReference>
<ct:SymbRef symbId=""></ct:SymbRef>
</ct:VariabilityReference>
<!-- Define the individuals -->
<Individual oid="i">
<ArmRef oid="a1"/>
<Replicates>
<ct:Int>50</ct:Int>
</Replicates>
</Individual>
</Population>
</TrialDesign>
\end{xmlcode}
The CDISC-like elements are contained in the \xelem{Structure}
tag and you can see how the study is constructed of a single epoch,
with a single arm and a single cell that contains a single
segment. Note, though, that this structure is not hierarchical and the
\xelem{Cell} element joins the arms, epoch and segments
together. Note that a Cell can span several arms and contain several
segments. Finally the segment points to an activity that describes
the steady state dosing regimen. The dosing regimen is very similar
to that used in version 0.1 of \pharmml. Its details should be ignored
here; please refer to the proposed changes to the dosing regimen in the
section below.
Following on from the \xelem{Structure} element is
\xelem{Population}. This is where we describe the individuals in the
study, their attributes (such as weight, gender etc) and assign them
to an arm of the study. In this simple example we only have one arm
and so we assign all the individuals in the study to that arm. As a
shorthand we provide one individual definition and use the element
\xelem{Replicates} to specify how many individuals this
represents. This is obviously useful when simulating a model when (as
in this example) covariates such as weight are calculated for each
individual. An identifier for each individual is created on the fly by
appending a sequential number to the identifier. In this case they
would be i1, i2 \ldots i49, i50. While not useful at the moment, such
identifiers will be generated when writing out output to a file. The
\xelem{Replicates} element is optional and later examples will
enumerate the properties of each individual in the population without
using it.
In this simple example the same amount of dose was administered for
each individual. However, in many models the size of the dose varies
per individual. In the example below you can see a more complex trial
design with multiple arms and dosing specified per individual. This
corresponds to the trial design of example 6 in the \pharmml
specification. I've reproduced the figure describing it below (figure
\ref{fig:eg6-trial-design}).
\begin{figure}[htb]
\includegraphics[width=\linewidth]{../pics/iov1simulation_grafio}
\caption{Overview of the trial design used in example 6 of the spec.}
\label{fig:eg6-trial-design}
\end{figure}
\begin{xmlcode}
<TrialDesign xmlns="http://www.pharmml.org/2013/03/TrialDesign">
<Structure>
<Epoch oid="ep1">
<Order>1</Order>
</Epoch>
<Epoch oid="ep2">
<Order>2</Order>
</Epoch>
<Epoch oid="ep3">
<Order>3</Order>
</Epoch>
<Arm oid="a1"/>
<Arm oid="a2"/>
<Cell oid="c1">
<EpochRef oid="e1" />
<ArmRef oid="a1"/>
<SegmentRef oid="ta"/>
</Cell>
<Cell oid="c2">
<EpochRef oid="e1" />
<ArmRef oid="a1"/>
<SegmentRef oid="tb"/>
</Cell>
<Cell oid="c3">
<EpochRef oid="e2" />
<ArmRef oid="a1"/>
<ArmRef oid="a2"/>
<SegmentRef oid="wash"/>
</Cell>
<Cell oid="c4">
<EpochRef oid="e3"/>
<ArmRef oid="a1"/>
<SegmentRef oid="tb"/>
</Cell>
<Cell oid="c5">
<EpochRef oid="e3"/>
<ArmRef oid="a2"/>
<SegmentRef oid="ta"/>
</Cell>
<Segment oid="ta">
<ActivityRef oid="d1"/>
</Segment>
<Segment oid="tb">
<ActivityRef oid="d2"/>
</Segment>
<Segment oid="wash">
<ActivityRef oid="w1"/>
</Segment>
<Activity oid="d1">
<Bolus>
<!-- SNIP -->
</Bolus>
</Activity>
<Activity oid="d2">
<Bolus>
<!-- SNIP -->
</Bolus>
</Activity>
<Activity oid="w1">
<Washout/>
</Activity>
<ObservationsEvent oid="occasions">
<ArmRef oid="a1"/>
<ArmRef oid="a2"/>
<ct:VariabilityReference>
<ct:SymbRef block="model" symbId="iov"></ct:SymbRef>
</ct:VariabilityReference>
<ObservationGroup oid="occ1">
<EpochRef oid="ep1"/>
</ObservationGroup>
<ObservationGroup oid="occ2">
<EpochRef oid="ep3"/>
</ObservationGroup>
</ObservationsEvent>
</Structure>
<Population>
<ct:VariabilityReference>
<ct:SymbRef block="model" symbId="indiv"/>
</ct:VariabilityReference>
<Individual oid="i1">
<ArmRef oid="a1"/>
<Covariate>
<ct:SymbRef block="c1" symbId="Sex"/>
<ct:Assign><ct:String>M</ct:String></ct:Assign>
</Covariate>
<Covariate>
<ct:SymbRef block="c1" symbId="Treat"/>
<IVDependent>
<EpochRef oid="e1"/>
<ct:Assign>
<ct:String>A</ct:String>
</ct:Assign>
</IVDependent>
<IVDependent>
<EpochRef oid="e3"/>
<ct:Assign>
<ct:String>B</ct:String>
</ct:Assign>
</IVDependent>
</Covariate>
</Individual>
<Individual oid="i2">
<ArmRef oid="a1"/>
<Covariate>
<ct:SymbRef block="c1" symbId="Sex"/>
<ct:Assign><ct:String>M</ct:String></ct:Assign>
</Covariate>
<Covariate>
<ct:SymbRef block="c1" symbId="Treat"/>
<IVDependent>
<EpochRef oid="e1"/>
<ct:Assign>
<ct:String>A</ct:String>
</ct:Assign>
</IVDependent>
<IVDependent>
<EpochRef oid="e3"/>
<ct:Assign>
<ct:String>B</ct:String>
</ct:Assign>
</IVDependent>
</Covariate>
</Individual>
<Individual oid="i3">
<!-- Omitted the remaining individual definitions
for brevity -->
</Individual>
</Population>
</TrialDesign>
\end{xmlcode}
In the above example you can see that the more complex trial design
structure is encoded, hopefully as you would expect. Note that the
washout is now defined clearly as an epoch. The new feature that was
not shown in the previous example is the definition of the
occasion as an ObservationsEvent. As you can see this enables us to
define a set of observations, and map them to a variability level. The
duration of each observation (specified by the
\xelem{ObservationGroup} element) can be defined as the duration of a
specified epoch (as in this example) or can be a time period. The
later only makes sense if the epochs define time periods
too\footnote{Note that if each epoch specifies a time period then an
ObservationEvent can specify a time period that spans multiple
epochs.}. The \xelem{ObservationsEvent} is associated with an arm,
which means that different sets of inter-occasion variability can be
applied to different arms. I'm not sure if this makes sense or whether
the \xelem{ObservationsEvent} should apply to all arm at the same time
--- and therefore all subjects in the study.
Note also that the definition of the population is richer, with
covariates being defined with each individual. In this example too
each individual in the study is being explicitly defined. Take
particular note of the \xelem{IVDependent} element which is a child of
the \xelem{Individual} element. This is how we define covariates that
are dependent on the independent variable (typically time). In this
example we specify a value for the covariate to be used during a given
epoch. However, it is also possible to specify a time-point
which specifies from which time in the study the covariate value applies.
Perhaps the most demanding example is the Chan and Holford
model\footnote{ref}, which defines the most complex trial design
structure that we have so far encoded. It was only possible to encode
this using the data-file approach in the previous version of \pharmml
and even then we had to fix a problem where this method could not
define steady state dosing. As it stands this model cannot be encoded
in version 0.1 of \pharmml. The example is below:
\begin{xmlcode}
<TrialDesign xmlns="http://www.pharmml.org/2013/03/TrialDesign">
<Structure>
<Epoch oid="m0">
<Start>
<ct:Real>0</ct:Real>
</Start>
<End>
<ct:Real>90</ct:Real>
</End>
<Order>1</Order>
</Epoch>
<Epoch oid="m6">
<Start>
<ct:Real>408</ct:Real>
</Start>
<End>
<ct:Real>499</ct:Real>
</End>
<Order>2</Order>
</Epoch>
<Epoch oid="m12">
<Start>
<ct:Real>908</ct:Real>
</Start>
<End>
<ct:Real>9999</ct:Real>
</End>
<Order>3</Order>
</Epoch>
<Epoch oid="m24">
<Start>
<ct:Real>1908</ct:Real>
</Start>
<End>
<ct:Real>1999</ct:Real>
</End>
<Order>4</Order>
</Epoch>
<Epoch oid="m48">
<Start>
<ct:Real>3908</ct:Real>
</Start>
<End>
<ct:Real>3999</ct:Real>
</End>
<Order>5</Order>
</Epoch>
<Arm oid="a1"/>
<Cell oid="c1">
<EpochRef oid="m1"/>
<ArmRef oid="a1"/>
<SegmentRef oid="s1"/>
</Cell>
<Cell oid="c2">
<EpochRef oid="m6"/>
<ArmRef oid="a1"/>
<SegmentRef oid="s2"/>
</Cell>
<Cell oid="c2">
<EpochRef oid="m12"/>
<ArmRef oid="a1"/>
<SegmentRef oid="s2"/>
</Cell>
<Cell oid="c2">
<EpochRef oid="m24"/>
<ArmRef oid="a1"/>
<SegmentRef oid="s2"/>
</Cell>
<Cell oid="c2">
<EpochRef oid="m48"/>
<ArmRef oid="a1"/>
<SegmentRef oid="s2"/>
</Cell>
<Segment oid="s1">
<ActivityRef oid="exoinf"/>
</Segment>
<Segment oid="s2">
<ActivityRef oid="exoinf"/>
<ActivityRef oid="oralss"/>
</Segment>
<Activity oid="exoinf">
<Infusion>
<DoseAmount>
<TargetVar block="sm1" symbId="Ac"/>
</DoseAmount>
<DosingTimes>
<ct:Assign>
<ct:Vector>
<ct:Real>0</ct:Real>
<ct:Real>72</ct:Real>
</ct:Vector>
</ct:Assign>
</DosingTimes>
<Duration><ct:SymbRef block="pm1" symbId="TTK0"></ct:SymbRef></Duration>
</Infusion>
</Activity>
<Activity oid="oralss">
<Infusion>
<DoseAmount>
<TargetVar block="sm1" symbId="Ac"/>
<ct:Assign><ct:SymbRef block="pm1" symbId="Css"/></ct:Assign>
</DoseAmount>
<SteadyState>
<EndTime>
<ct:Assign>
<ct:Real>0</ct:Real>
</ct:Assign>
</EndTime>
</SteadyState>
<Rate><ct:SymbRef block="sm1" symbId="R1"></ct:SymbRef></Rate>
</Infusion>
</Activity>
<ObservationsEvent oid="occ">
<ArmRef oid="a1"/>
<ct:Name>Occasions</ct:Name>
<ct:VariabilityReference>
<ct:SymbRef symbId="occ"/>
</ct:VariabilityReference>
<ObservationGroup oid="occ1">
<Period> <Start>
<ct:Real>0</ct:Real>
</Start>
<End>
<ct:Real>24</ct:Real>
</End></Period>
</ObservationGroup>
<ObservationGroup oid="occ2">
<Period><Start>
<ct:Real>72</ct:Real>
</Start>
<End>
<ct:Real>90</ct:Real>
</End></Period>
</ObservationGroup>
<ObservationGroup oid="occ3">
<Period><Start>
<ct:Real>408</ct:Real>
</Start>
<End>
<ct:Real>457</ct:Real>
</End></Period>
</ObservationGroup>
<ObservationGroup oid="occ4">
<Period><Start>
<ct:Real>481</ct:Real>
</Start>
<End>
<ct:Real>499</ct:Real>
</End></Period>
</ObservationGroup>
<ObservationGroup oid="occ5">
<Period> <Start>
<ct:Real>908</ct:Real>
</Start>
<End>
<ct:Real>957</ct:Real>
</End></Period>
</ObservationGroup>
<ObservationGroup oid="occ6">
<Period><Start>
<ct:Real>981</ct:Real>
</Start>
<End>
<ct:Real>999</ct:Real>
</End></Period>
</ObservationGroup>
<ObservationGroup oid="occ7">
<Period><Start>
<ct:Real>1908</ct:Real>
</Start>
<End>
<ct:Real>1957</ct:Real>
</End></Period>
</ObservationGroup>
<ObservationGroup oid="occ8">
<Period> <Start>
<ct:Real>1981</ct:Real>
</Start>
<End>
<ct:Real>1999</ct:Real>
</End></Period>
</ObservationGroup>
<ObservationGroup oid="occ9">
<Period><Start>
<ct:Real>3908</ct:Real>
</Start>
<End>
<ct:Real>3957</ct:Real>
</End></Period>
</ObservationGroup>
<ObservationGroup oid="occ10">
<Period><Start>
<ct:Real>3981</ct:Real>
</Start>
<End>
<ct:Real>3999</ct:Real>
</End></Period>
</ObservationGroup>
</ObservationsEvent>
<ObservationsEvent oid="trial">
<ArmRef oid="a1"/>
<ct:Name>Trials</ct:Name>
<ct:VariabilityReference>
<ct:SymbRef symbId="occ"/>
</ct:VariabilityReference>
<ObservationGroup oid="t1">
<EpochRef oid="m1"/>
</ObservationGroup>
<ObservationGroup oid="t1">
<EpochRef oid="m6"/>
</ObservationGroup>
<ObservationGroup oid="t1">
<EpochRef oid="m12"/>
</ObservationGroup>
<ObservationGroup oid="t4">
<EpochRef oid="m24"/>
</ObservationGroup>
<ObservationGroup oid="t5">
<EpochRef oid="m48"/>
</ObservationGroup>
</ObservationsEvent>
</Structure>
<Population>
<ct:VariabilityReference>
<ct:SymbRef symbId=""/>
</ct:VariabilityReference>
<Individual oid="i552">
<ArmRef oid="a1"/>
<Covariate>
<ct:SymbRef block="c1" symbId="W"/>
<IVDependent>
<EpochRef oid="m1"/>
<ct:Assign><ct:Real>73</ct:Real></ct:Assign>
</IVDependent>
<IVDependent>
<EpochRef oid="m6"/>
<ct:Assign><ct:Real>70</ct:Real></ct:Assign>
</IVDependent>
<IVDependent>
<EpochRef oid="m12"/>
<ct:Assign><ct:Real>73</ct:Real></ct:Assign>
</IVDependent>
<IVDependent>
<EpochRef oid="m24"/>
<ct:Assign><ct:Real>71</ct:Real></ct:Assign>
</IVDependent>
<IVDependent>
<EpochRef oid="m48"/>
<ct:Assign><ct:Real>69</ct:Real></ct:Assign>
</IVDependent>
</Covariate>
</Individual>
</Population>
<IndividualDosing>
<ActivityRef oid="inf1"/>
<Individual columnRef="id"/>
<DoseAmount columnRef="dose"/>
<DosingTime columnRef="t"/>
<DataSet xmlns="http://www.pharmml.org/2013/03/CommonTypes">
<Definition>
<Column columnNum="1" columnVar="id"/>
<Column columnNum="2" columnVar="t"/>
<Column columnNum="3" columnVar="dose"/>
</Definition>
<Row>
<String>i552</String><Real>0</Real><Real>740.37</Real>
<String>i552</String><Real>72</Real><Real>740.37</Real>
<String>i552</String><Real>409</Real><Real>709.94</Real>
<String>i552</String><Real>481</Real><Real>709.94</Real>
<String>i552</String><Real>909</Real><Real>740.94</Real>
<String>i552</String><Real>981</Real><Real>740.94</Real>
<String>i552</String><Real>1909</Real><Real>720.08</Real>
<String>i552</String><Real>1981</Real><Real>720.08</Real>
<String>i552</String><Real>3909</Real><Real>699.8</Real>
<String>i552</String><Real>3981</Real><Real>699.8</Real>
</Row>
</DataSet>
</IndividualDosing>
</TrialDesign>
\end{xmlcode}
This uses all the components that you have seen in the previous
examples, with the addition of the \xelem{IndividualDosing}
element. Its purpose is to provide per individual dosing information
a given dosing activity (specified by \verb|<ActivityRef oid="inf1"/>|).
Note that in this example the dose varies with time, but of course it
need not do, in which case the time column and \xelem{DosingTime}
record are omitted. Note that the type of each value is specified
explicitly so it is clear whether the value is compatible with the
information it is being mapped to. If we need dosing information for
more than one dosing activity then multiple \xelem{IndividualDosing}
can be defined.
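As an illustrative sketch only (the second activity identifier \texttt{inf2} is hypothetical
and the dose values are placeholders), multiple \xelem{IndividualDosing} blocks would simply
be listed one after another, each referencing its own activity:
\begin{xmlcode}
<IndividualDosing>
<ActivityRef oid="exoinf"/>
<Individual columnRef="id"/>
<DoseAmount columnRef="dose"/>
<DataSet xmlns="http://www.pharmml.org/2013/03/CommonTypes">
<Definition>
<Column columnNum="1" columnVar="id"/>
<Column columnNum="2" columnVar="dose"/>
</Definition>
<Row>
<String>i552</String><Real>740.37</Real>
</Row>
</DataSet>
</IndividualDosing>
<IndividualDosing>
<ActivityRef oid="inf2"/>
<Individual columnRef="id"/>
<DoseAmount columnRef="dose"/>
<DataSet xmlns="http://www.pharmml.org/2013/03/CommonTypes">
<Definition>
<Column columnNum="1" columnVar="id"/>
<Column columnNum="2" columnVar="dose"/>
</Definition>
<Row>
<String>i552</String><Real>100</Real>
</Row>
</DataSet>
</IndividualDosing>
\end{xmlcode}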
\section{Simplifying the Estimation and Simulation Steps}
A by-product of the redesign of the Trial Design structure is that the
estimation and simulation steps can be considerably
simplified. Perhaps the greatest simplification is with the
\xelem{SimulationStep} which can drop the need to specify the design
in a data file. It now just needs to define initial values and specify
what observations to simulate. The re-design also means that
we fix one of the unresolved issues in version 0.1 of
\pharmml. It wasn't clear how to simulate only one epoch since it was
possible for each epoch to have the same time period when separated by
a washout. Now however the epochs must specify consecutive time
periods so the simulation can easily be defined to span all the time
periods in all the epochs of the study.
Finally the use of a data-file in the estimation step becomes very
simple. See for example the estimation step of the Holford and Chan
model:
%
\begin{xmlcode}
<EstimationStep id="estTask1">
<ObjectiveDataSet>
<IndividualMapping>
<ColumnRef oid="id"/>
</IndividualMapping>
<VariableMapping>
<ColumnRef oid="time"/>
<ct:SymbRef symbId="t"/>
<Interpolation>
<Method name="default"/>
</Interpolation>
</VariableMapping>
<VariableMapping>
<ColumnRef oid="DV"/>
<ct:SymbRef block="om1" symbId="CP"></ct:SymbRef>
<Interpolation>
<Method name="default"/>
</Interpolation>
</VariableMapping>
<ct:DataSet>
<ct:Definition>
<ct:Column columnNum="1" columnVar="id"/>
<ct:Column columnNum="2" columnVar="time"/>
<ct:Column columnNum="3" columnVar="DV"/>
</ct:Definition>
<ct:Row>
<ct:String>i552</ct:String><ct:Real>0</ct:Real><ct:Real>0.46</ct:Real>
<ct:String>i552</ct:String><ct:Real>1</ct:Real><ct:Real>6.11</ct:Real>
<ct:String>i552</ct:String><ct:Real>1.5</ct:Real><ct:Real>7.38</ct:Real>
<ct:String>i552</ct:String><ct:Real>2</ct:Real><ct:Real>7.73</ct:Real>
<!-- Snip -->
</ct:Row>
</ct:DataSet>
</ObjectiveDataSet>
<ParametersToEstimate>
<!-- Snip -->
</ParametersToEstimate>
<EstimationOperation opType="estPop"/>
</EstimationStep>
\end{xmlcode}
%
Here the mapping of the objective data to the model becomes trivial, and
this simplicity enables us to focus on issues we neglected when the
\xelem{EstimationStep} was more complicated. You will notice that each
\xelem{VariableMapping} element contains an \xelem{Interpolation}
element. This allows us to specify how the data should be interpolated
during the estimation procedure. The only available method at the
moment is \emph{default}, so clearly this needs more work, but the
objective is clear.

We can extend this approach to also support multiple datasets within
one estimation step. Why? Possible scenarios are:
\begin{enumerate}
\item You have separate PK and PD measurements taken at different
time-points.
\item You have replicate observations and you want to map each
replicate to a different error model.
\end{enumerate}
The hypothetical example below shows how you might describe the first
case:
\begin{xmlcode}
<EstimationStep id="estTask1">
<ObjectiveDataSet>
<IndividualMapping>
<ColumnRef oid="id"/>
</IndividualMapping>
<VariableMapping>
<ColumnRef oid="time"/>
<ct:SymbRef symbId="t"/>
<Interpolation>
<Method name="default"/>
</Interpolation>
</VariableMapping>
<VariableMapping>
<ColumnRef oid="DV"/>
<ct:SymbRef block="om1" symbId="Cc"></ct:SymbRef>
<Interpolation>
<Method name="default"/>
</Interpolation>
</VariableMapping>
<ct:DataSet>
<ct:Definition>
<ct:Column columnNum="1" columnVar="id"/>
<ct:Column columnNum="2" columnVar="time"/>
<ct:Column columnNum="3" columnVar="DV"/>
</ct:Definition>
<ct:Row>
<ct:String>i1</ct:String><ct:Real>0</ct:Real><ct:Real>0</ct:Real>
<ct:String>i1</ct:String><ct:Real>1</ct:Real><ct:Real>5</ct:Real>
<ct:String>i1</ct:String><ct:Real>4</ct:Real><ct:Real>9</ct:Real>
<ct:String>i1</ct:String><ct:Real>6</ct:Real><ct:Real>11</ct:Real>
<!-- Snip -->
</ct:Row>
</ct:DataSet>
</ObjectiveDataSet>
<ObjectiveDataSet>
<IndividualMapping>
<ColumnRef oid="id"/>
</IndividualMapping>
<VariableMapping>
<ColumnRef oid="time"/>
<ct:SymbRef symbId="t"/>
<Interpolation>
<Method name="default"/>
</Interpolation>
</VariableMapping>
<VariableMapping>
<ColumnRef oid="DV"/>
<ct:SymbRef block="om1" symbId="E"></ct:SymbRef>
<Interpolation>
<Method name="default"/>
</Interpolation>
</VariableMapping>
<ct:DataSet>
<ct:Definition>
<ct:Column columnNum="1" columnVar="id"/>
<ct:Column columnNum="2" columnVar="time"/>
<ct:Column columnNum="3" columnVar="DV"/>
</ct:Definition>
<ct:Row>
<ct:String>i1</ct:String><ct:Real>0</ct:Real><ct:Real>0</ct:Real>
<ct:String>i1</ct:String><ct:Real>6</ct:Real><ct:Real>0</ct:Real>
<ct:String>i1</ct:String><ct:Real>12</ct:Real><ct:Real>10</ct:Real>
<ct:String>i1</ct:String><ct:Real>24</ct:Real><ct:Real>20</ct:Real>
<ct:String>i1</ct:String><ct:Real>36</ct:Real><ct:Real>40</ct:Real>
<ct:String>i1</ct:String><ct:Real>48</ct:Real><ct:Real>80</ct:Real>
<!-- Snip -->
</ct:Row>
</ct:DataSet>
</ObjectiveDataSet>
<ParametersToEstimate>
<!-- Snip -->
</ParametersToEstimate>
<EstimationOperation opType="estPop"/>
</EstimationStep>
\end{xmlcode}
%
Here, as you can see, there are two datasets. The first provides
measurements for the PK variable $\mathrm{Cc}$ and the second for the
effect variable $E$. The example below shows the second case, where the
first dataset describes the first set of replicates and the second
dataset the other. Note that we tell the observation model which
replicate set a value belongs to by assigning a value to an indicator variable.
%
\begin{xmlcode}
<EstimationStep id="estTask1">
<ObjectiveDataSet>
<IndividualMapping>
<ColumnRef oid="id"/>
</IndividualMapping>
<ct:VariableAssignment>
<ct:SymbRef symbId="replicateFlag"/>
<ct:Assign>
<ct:Int>1</ct:Int>
</ct:Assign>
</ct:VariableAssignment>
<VariableMapping>
<ColumnRef oid="time"/>
<ct:SymbRef symbId="t"/>
<Interpolation>
<Method name="default"/>
</Interpolation>
</VariableMapping>
<VariableMapping>
<ColumnRef oid="DV"/>
<ct:SymbRef block="om1" symbId="Cc"></ct:SymbRef>
<Interpolation>
<Method name="default"/>
</Interpolation>
</VariableMapping>
<ct:DataSet>
<!-- Snip -->
</ct:DataSet>
</ObjectiveDataSet>
<ObjectiveDataSet>
<IndividualMapping>
<ColumnRef oid="id"/>
</IndividualMapping>
<ct:VariableAssignment>
<ct:SymbRef symbId="replicateFlag"/>
<ct:Assign>
<ct:Int>2</ct:Int>
</ct:Assign>
</ct:VariableAssignment>
<VariableMapping>
<ColumnRef oid="time"/>
<ct:SymbRef symbId="t"/>
<Interpolation>
<Method name="default"/>
</Interpolation>
</VariableMapping>
<VariableMapping>
<ColumnRef oid="DV"/>
<ct:SymbRef block="om1" symbId="Cc"></ct:SymbRef>
<Interpolation>
<Method name="default"/>
</Interpolation>
</VariableMapping>
<ct:DataSet>
<!-- Snip -->
</ct:DataSet>
</ObjectiveDataSet>
<ParametersToEstimate>
<!-- Snip -->
</ParametersToEstimate>
<EstimationOperation opType="estPop"/>
</EstimationStep>
\end{xmlcode}
%
Because we can duplicate datasets in this way, we can add a
constraint on the datasets we use: we only allow one
row per combination of individual and time. In doing so we can
then permit observations to be defined in an order-independent
way. So:
%
\begin{xmlcode}
<ct:Row>
<ct:String>i1</ct:String><ct:Real>0</ct:Real><ct:Real>0</ct:Real>
<ct:String>i1</ct:String><ct:Real>6</ct:Real><ct:Real>0</ct:Real>
<ct:String>i1</ct:String><ct:Real>12</ct:Real><ct:Real>10</ct:Real>
<ct:String>i1</ct:String><ct:Real>24</ct:Real><ct:Real>20</ct:Real>
<ct:String>i1</ct:String><ct:Real>36</ct:Real><ct:Real>40</ct:Real>
<ct:String>i1</ct:String><ct:Real>48</ct:Real><ct:Real>80</ct:Real>
<!-- Snip -->
</ct:Row>
\end{xmlcode}
%
is therefore equivalent to:
\begin{xmlcode}
<ct:Row>
<ct:String>i1</ct:String><ct:Real>48</ct:Real><ct:Real>80</ct:Real>
<ct:String>i1</ct:String><ct:Real>12</ct:Real><ct:Real>10</ct:Real>
<ct:String>i1</ct:String><ct:Real>24</ct:Real><ct:Real>20</ct:Real>
<ct:String>i1</ct:String><ct:Real>0</ct:Real><ct:Real>0</ct:Real>
<ct:String>i1</ct:String><ct:Real>6</ct:Real><ct:Real>0</ct:Real>
<ct:String>i1</ct:String><ct:Real>36</ct:Real><ct:Real>40</ct:Real>
<!-- Snip -->
</ct:Row>
\end{xmlcode}
%
Remember that this representation is meant for information exchange
between software tools. For a human reader this may seem more complex
to comprehend, but for software the fact that there are no duplicates
and no prescribed order makes the processing and validation of this
information simpler.
\end{document}
\chapter{Components}
\section{Entity}
All objects that the user wants to create get created via the Entity class.
The constructor of Entity takes these arguments:
\begin{itemize}
\item name
\item body
\item physics
\item collisionDetection
\item audioManager
\item sprite
\end{itemize}
Example of how to create an $Entity$:
\begin{lstlisting}
class Pipe {
  constructor(startPos, topPos, height, width) {
    this.entity = new Entity(
      "Bottom pipe",                                           // name
      new Body(this, 1920 + startPos, topPos, height, width),  // body
      new Physics(this, -8.85, 0),                             // physics
      new CollisionDetection(this),                            // collisionDetection
      null                                                     // audioManager (none for a pipe)
    );
  }
}
\end{lstlisting}
\section{Body}
Body class is the body of the entity.
The constructor of Body takes these arguments:
\begin{itemize}
\item entity
\item left
\item top
\item height
\item width
\end{itemize}
The Body class contains only setters and getters for these parameters.
Example of how to give an entity a $Body$:
\begin{lstlisting}
class Bird {
  constructor() {
    this.entity = new Entity(
      "Bird",
      new Body(this, 300, 540, 100, 100) // left, top, height, width
    );
  }
}
\end{lstlisting}
Here is a small example of how to move the entity bird:
\begin{lstlisting}
if (this.getBody().getTop() > 1040) {
this.getBody().setTop(400);
this.getBody().setLeft(300);
}
\end{lstlisting}
\section{CollisionDetection}
To check for collision detection, use:
\begin{lstlisting}
checkForCollision(otherEntity)
\end{lstlisting}
Example of this can be:
\begin{lstlisting}
let hasPlayerCollided = player
.getCollisionDetection()
.checkForCollision(this.state.playerArr[index].getEntity());
\end{lstlisting}
\section{Physics}
\section{AudioManager}
Example of how to use the $AudioManager$. First, create the object:
\begin{lstlisting}
// Typically passed as the audioManager argument when constructing an Entity
new AudioManager([
  ResMan.getAudioPath("soundEffect1.mp3"),
  ResMan.getAudioPath("soundEffect2.mp3"),
  ResMan.getAudioPath("soundEffect3.mp3")
]),

// Named indices for the sounds above
this.enum = {
  BIRD_JUMPS: 0,
  BIRD_SCORE: 1,
  BIRD_DIES: 2
};
\end{lstlisting}
Then, when the corresponding event occurs, play the sound by its index:
\begin{lstlisting}
object.getAudioManager().play(2); // play sound 2 (BIRD_DIES)
\end{lstlisting}
\section{Sprite}
\section{ResourceManager}
To use the ResourceManager simply import the class and use it like this:
\begin{lstlisting}
ResourceManager.getImagePath("background.png")
\end{lstlisting}
\begin{lstlisting}
ResourceManager.getAudioPath("one.mp3")
\end{lstlisting}
\begin{lstlisting}
ResourceManager.getSpritePath("birds.png")
\end{lstlisting}
The $ResourceManager$ uses these default paths:
\begin{itemize}
\item ../resources/image/
\item ../resources/audio/
\item ../resources/sprite/
\end{itemize}
You can change these paths in the $resourceManager$ file.
\section{Background}
To use the $Background$ component, simply add it to your component.
\begin{lstlisting}
<Background
height={1080}
width={1920}
speed={0.5}
image={ResourceManager.getImagePath("background.png")}
>
{" "}
</Background>{" "}
\end{lstlisting}
$Background$ contains $defaultProps$, so it is not necessary to set $height$, $width$ and $speed$. To set an image you can either use the $ResourceManager$ or simply import an image.
\section{HUD}
To use the HUD simply add the HUD component
\begin{lstlisting}
<HUD score={this.state.score} position={"tc"} />{" "}
\end{lstlisting}
Where score is a score variable from the game.
\section{Menu}
To use the menu, simply import the component into the file you want.
\begin{lstlisting}
<Menu showMenu={this.state.showMenu} />
\end{lstlisting}
Using ${this.state.showMenu}$ gives you the option of toggling it on and off, depending on the boolean $showMenu$.
To add more menu options, simply add more items into the $menuItems$, and use $handleClick(e)$ for the event.
\section{ScoreBoard}
To use the scoreboard, simply add:
\begin{lstlisting}
<ScoreBoard />
\end{lstlisting}
To get the score from the game, using $context$ is recommended, as shown in the $flappy$ demo, since normally a $menuItem$ shows the scoreboard.
\section{Logger}
To use the logger simply use:
\begin{lstlisting}
Logger.setText("flappy.js", `score: ${this.state.score}`);
\end{lstlisting}
where the first argument is the name of the file and the second argument is what you want to log.
Then you can add this to the game:
\begin{lstlisting}
<LoggerManager />
\end{lstlisting}
\documentclass[a4paper,11pt]{book}
\usepackage{import}
\usepackage{preamb}
\makeindex
\begin{document}
\input{head}
\newpage
\input{Title}
% \section{Blocks and Community structure}
\begin{subbox}{subbox}{}
\centering
\Large{\textbf{Scale-Free Networks}}
\end{subbox}
\begin{textbox}{Scale-Free: Definition}
A network is said to be \textbf{Scale-Free} when its degree distribution follows a Power-Law distribution, or can be approximated by a Power-Law distribution.
\end{textbox}
\begin{textbox}{Power-Law (PL)- Approximate Distribution}
A \textbf{Power-Law distribution} is defined as follows:
\[
P(k) \sim k^{-\alpha} = \frac{1}{k^\alpha}
\]
$\alpha$ is called the \textit{exponent} of the distribution.
Intuitively, the larger $\alpha$ is, the rarer large values are. For instance, $\alpha=0$ corresponds to a uniform distribution (any degree is equally probable). With $\alpha=1$, the probability of a node taken at random having degree $k$ is $\sim \frac{1}{k}$. Usually, a distribution is considered scale-free when $2\leq \alpha\leq 3$, as we will see.
\end{textbox}
\begin{textbox}{PL - Boundaries}
In most settings, the Power-Law degree distribution exists only for a certain range of degrees.
This makes sense in real networks: few people have 0 or 1 social contacts, for instance, and few websites have no incoming or outgoing hypertext links, or we wouldn't even be aware of them. Thus there is a \textbf{lower bound} $k_{min}$ from which the distribution exists.
Similarly, real networks represent entities of the real world, which exist in finite numbers, so the number of elements itself is a limit. But in many situations, even stricter limits exist: social networks often impose a limit on the number of connections to avoid spammers, and time and space also typically impose limits on what is possible in a network. An \textbf{upper bound} $k_{max}$ can be used to limit the distribution.
\end{textbox}
\begin{textbox}{Power-Law - Exact Distribution}
For a distribution to be properly defined, the sum of all probabilities must be equal to one; we therefore add a normalization constant $C$ to ensure this property.
\[
\int P(k)=1= \int Ck^{-\alpha}=C\int k^{-\alpha}
\]
From which we can define $C$:
\[
C = \frac{1}{\int_{k_{\min}}^{\infty}k^{-\alpha} dk}=(\alpha-1)k_{\min}^{\alpha-1}
\]
And finally, the exact definition of the Power-Law degree distribution with lower bound:
\[
P(k) = (\alpha-1)k_{\min}^{\alpha-1}k^{-\alpha}
= \frac{\alpha-1}{k_{\min}}\left(\frac{k}{k_{\min}}\right)^{-\alpha}
\]
\end{textbox}
\begin{textbox}{Power-Law - Plotting}
A famous property of the Power-Law distribution is that it looks like a line when plotted in a log-log plot, i.e., a plot in which the x-axis (degrees) and the y-axis (frequency of degrees) are represented using a logarithmic scale.
\begin{center}
Power-Law distributions in linear scale for degrees [1-10] \\
(100 000 samples)
\includegraphics[width=0.5\textwidth]{pics/SF1.png}
Power-Law distributions in linear scale for degrees [1-100000] \\
(100 000 samples). The distribution is so heterogeneous that it is not readable.
\includegraphics[width=0.5\textwidth]{pics/SF2.png}
Power-Law distributions in log-log scale for degrees [1-100000] \\
(100 000 samples).
\includegraphics[width=0.5\textwidth]{pics/SF3.png}
\end{center}
\end{textbox}
\begin{textbox}{Power-Law - Long tail}
Compared with other well-known distributions such as the Poisson or exponential distributions, a key difference is what is called the \textbf{long-tail} property: very large values are rare, but possible. We can observe this long tail by comparing with other distributions on log-log plots.
\begin{center}
Comparing power-law with Poisson distributions
\includegraphics[width=0.8\textwidth]{pics/SF-poisson.png}
Comparing power-law with Exponential distributions
\includegraphics[width=0.8\textwidth]{pics/SF-exponential.png}
\end{center}
\end{textbox}
\begin{textbox}{SF networks - universality}
Scale-Free networks are widely studied because they are considered to be very frequent in the real world. Some important papers discovered the existence of Power-Law degree distribution in a variety of large real networks, notably:
\begin{enumerate}
\item The World Wide Web (webpages) (\cite{barabasi1999emergence})
\item The internet (physical network) (\cite{faloutsos1999power})
\item Airline connections (\cite{guimera2004modeling})
\item Scientific collaborations (\cite{newman2001structure})
\item Romantic interactions (\cite{liljeros2001web})
\end{enumerate}
It must be noted, however, that many real-world networks \textbf{are not scale-free}. A typical counter-example is a road network, in which nodes correspond to intersections and edges to roads: for practical reasons, intersections with large degrees do not make sense.
\end{textbox}
\begin{textbox}{Why is it called \textit{scale free}}
Because they have no (typical) scale!
It is defined in opposition to Poisson and other bell-shaped distributions, which are \textbf{centered around their average value}. Let's take a typical example: the height of humans follows a bell-shaped distribution: the average height is 1.65m, and most humans are quite close to this value; there is a \textbf{typical scale} of human height. On the contrary, human wealth distribution follows approximately a power law\footnote{\url{https://en.wikipedia.org/wiki/Distribution_of_wealth}}: a few humans are extremely wealthy (billions of \$), while more than half the world population possesses less than 10 000\$. As a consequence, the average human wealth (70 000\$) is not at all representative of human wealth.
\end{textbox}
\begin{textbox}{Central moments}
The first two moments of a distribution are the mean $\langle k \rangle$ and the second moment $\langle k^2 \rangle$ (from which the variance follows). More generally, the $m$-th moment is defined as
\[
\langle k^m \rangle = \int_{k_{\min}}^{\infty}k^mp(k)dk=(\alpha-1)k^{\alpha-1}_{\min}\int_{k_{\min}}^{\infty}k^{-\alpha+m}dk
\]
From this, we can conclude that \textbf{the $m$-th moment is defined only if $\alpha>m+1$}; otherwise it diverges towards infinity and is not properly defined.
Thus:
\begin{enumerate}
\item $\langle k \rangle = \frac{\alpha-1}{\alpha-2}k_{\min}$, \textbf{if and only if $\alpha> 2$}
\item $\langle k^2 \rangle = \frac{\alpha-1}{\alpha-3}k^2_{\min}$, \textbf{if and only if $\alpha> 3$}
\end{enumerate}
\end{textbox}
\begin{textbox}{Divergence in practice}
In practice, one can always compute the mean and variance of a provided, observed degree distribution. So what does it mean that they \textit{diverge}?
The problem arises when we are not certain to observe the whole network. Usually, a large sample of a population has the same mean and variance as the whole population, and the larger the sample, the more precise the value.
But in a power law, moments are \textbf{dominated} by the largest values in the long tail: some rare values are \textit{so large} that they shift the moments. So the more data we observe, the higher the moments.
\end{textbox}
\begin{textbox}{Divergence: consequences}
The consequence of diverging moments is that \textbf{if the distribution follows a power law}, then if the exponent is below 2, you should not rely on the mean degree or the variance. If the exponent is between 2 and 3, you can (relatively) rely on the mean, but not on the variance. Be careful though: even if $\alpha>2$, the mean converges slowly, i.e., you need a very large sample for your mean to be close to the real value.
\end{textbox}
\begin{textbox}{Fitting power laws}
When confronted with a power law degree distribution, we might want to \textbf{fit} the distribution, i.e., to \textbf{find the exponent} of the distribution. A naive and simple way to do it is to plot the distribution on a log-log plot and to find the slope of the line, either graphically or through least-square regression on the log-transformed values of degrees and frequencies.
This however suffers from a strong bias: values in the tail are based on a few samples, and introduce noise.
The most appropriate method is to use Maximum Likelihood Estimation (MLE\footnote{https://towardsdatascience.com/a-gentle-introduction-to-maximum-likelihood-estimation-9fbff27ea12f}), taking into account the min and max boundaries, as described in \footcite{goldstein2004problems}.
\end{textbox}
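\begin{textbox}{Fitting power laws - MLE sketch}
As an illustrative sketch only (quoted here for convenience, not taken from the boxes above): for a \emph{continuous} power law with lower bound $k_{\min}$ and observed degrees $k_1,\dots,k_n \geq k_{\min}$, the maximum likelihood estimator of the exponent is
\[
\hat{\alpha} = 1 + n\left[\sum_{i=1}^{n}\ln\frac{k_i}{k_{\min}}\right]^{-1}
\]
Discrete degrees and an upper bound $k_{\max}$ require the corrections discussed in \footcite{goldstein2004problems}.
\end{textbox}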
\begin{textbox}{Exponent limits}
In real networks, we consider that we should have $\alpha\geq2$, because a lower exponent would mean that the distribution is so skewed that we expect to find nodes with a degree larger than the size of the network.
Furthermore, if the exponent is too large, large-degree nodes become so rare that the network would need to be enormous for us to observe such a node. For instance, with $\alpha=5$, we need to observe $N=10^{12}$ nodes to expect to observe a single node of degree 1000.
\end{textbox}
\begin{textbox}{Exponent and shortest-paths}
Random networks with Poisson degree distribution already have a \textit{short average distance}. However, it is possible to define classes of networks with even smaller average distance based on the exponent $\alpha$:
\begin{itemize}
\item $\alpha=2$: The biggest hub degree is of order $\mathcal{O}(N)$, thus most nodes are at distance 2. The average path length can be considered a small constant, independent of $N$
\item $2<\alpha<3$: \textbf{Ultra Small World}: $\langle \ell \rangle = \frac{\log \log N}{\log(\alpha-1)}$
\item $\alpha=3$: $\langle \ell \rangle = \frac{ \log N}{\log \log N}$
\item $\alpha>3$: $\langle \ell \rangle = \log N$, the network behaves approximately like an ER network.
\end{itemize}
\end{textbox}
\begin{textbox}{Scale-free network controversy}
There is an on-going debate in the network science community over the prevalence of scale-free networks. For some authors \footcite{barabasi2003scale}, most real networks follow to some extent a power-law degree distribution, while, for some others, scale-free networks are rare \footcite{broido2019scale}. The controversy has been studied\footcite{jacomy2020epistemic} and can be interpreted as differences between scientific approaches: one popular among (some) physicists (\textit{scale-freeness is the sign of a universal law}) and another one common among statisticians \textit{(scale-freeness is an empirical characterization)}.
\end{textbox}
\begin{textbox}{SF networks: what to do}
Is your network Scale-Free? The first question you might ask yourself is: \textit{why do you need to know?}.
\begin{itemize}
\item If the goal is to characterize a network, then plotting the degree distribution might be more useful than fitting a power-law exponent to it
\item If the goal is to show that the distribution is broad, significantly different from a bell-shaped distribution, then plotting the distribution might be enough
\item If the goal is to show that the distribution is \textit{approximately} a power-law, for instance because an algorithm complexity or a proof can be made for such cases, then fitting a line on a log-log plot and talking about \textit{power-law-ish} might be enough
\item If on the contrary it is scientifically important to argue that the network is, without doubts, a scale-free networks, then you need to be fully aware of the controversy and to position your work relatively to it.
\end{itemize}
\end{textbox}
\input{tail}
\end{document}
\documentclass{article}
\usepackage{color}
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{indentfirst}
\usepackage{enumerate}
\usepackage{booktabs}
\usepackage{dcolumn}
\usepackage{textcomp}
\usepackage{minted}
\usepackage{geometry}
\usepackage{float}
\usepackage{multirow}
\usepackage{subfigure}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\geometry{left=1cm,right=1cm,top=1cm,bottom=1.5cm}
\begin{document}
\begin{center}
\begin{large}
\sc{VE482: Introduction to Operating Systems}
\end{large}
\begin{large}
\sc{{Laboratory Report 3}}
\end{large}
\end{center}
\begin{center}
Name: He Ruizhe ID: 515370910213
\end{center}
\section{Simple git}
\begin{enumerate}[$\bullet$]
\item Search what is git.\\
\\
Git is a version control system for tracking changes in computer files and coordinating work on those files among multiple people. It is primarily used for source code management in software development, but it can be used to keep track of changes in any set of files.
\item Install a git client.
\item Search the use of the following git commands:
\begin{enumerate}[$-$]
\item help: Display help information about Git.
\item branch: List, create, or delete branches.
\item merge: Join two or more development histories together.
\item tag: Create, list, delete or verify a tag object signed with GPG.
\item commit: Record changes to the repository.
\item init: Create an empty Git repository or reinitialize an existing one.
\item push: Update remote refs along with associated objects.
\item add: Add file contents to the index.
\item log: Show commit logs.
\item clone: Clone a repository into a new directory.
\item checkout: Switch branches or restore working tree files.
\item pull: Fetch from and integrate with another repository or a local branch.
\item diff: Show changes between commits, commit and working tree, etc.
\item fetch: Download objects and refs from another repository.
\item reset: Reset current HEAD to the specified state.
\end{enumerate}
\item Set up your git repository on the VE482 server.
First, add VE482 as a new host in the SSH config file:
\begin{minted}{text}
Host ve482
HostName 202.120.43.199
Port 2482
IdentityFile ~/.ssh/id_rsa
\end{minted}
Then use the following commands to upload files
\begin{minted}{shell}
git init
git add .
git commit -m 'initial commit'
git remote add ve482 git@ve482:515370910213/p1
git push ve482
\end{minted}
\end{enumerate}
\section{Git game}
Finished online.
\newpage
\section{Working with source code}
\subsection{The rsync command}
In Unix-like systems the rsync program allows one to synchronise different folders on the same system or over the network. When applying changes to the source code it is highly recommended to keep a copy of the original version, so as to be able to revert to the previous version in case of problems.
Proceed with the following steps:
\begin{enumerate}[$\bullet$]
\item In Minix 3 install the rsync software
\begin{minted}{shell}
$ pkgin install rsync
\end{minted}
%$
\item Install rsync on you Linux system
\begin{minted}{shell}
$ apt-get install rsync
\end{minted}
%$
\item Read rsync manpage
\begin{minted}{shell}
$ man rsync
\end{minted}
%$
\item Create an exact copy of the directory /usr/src into the directory /usr/src$\_$orig
\begin{minted}{shell}
$ cp -r /usr/src /usr/src_orig
\end{minted}
%$
\item If you have altered Minix 3 source code during homework 2 remove your changes from /usr/src$\_$orig
\item Create an exact copy of the Minix 3 directory /usr/src$\_$orig into your Linux system, using rsync and ssh (note that the ssh server must be activated under Linux)
\begin{minted}{shell}
$ rsync -avz minix:/usr/src_orig ~/minix
\end{minted}
%$
\end{enumerate}
\subsection{The diff and patch commands}
When dealing with source code two main situations are likely to arise: (i) you want to share your changes with others, or (ii) you want to apply changes performed by someone else.
Most of the time updates to source code concern a few lines scattered over several files. Therefore, instead of sharing all the files, it is much more convenient to only specify which lines should be updated, and how. This is the role of the diff command. The patch command is used to apply the changes previously created with diff. Both diff and patch programs should already be installed in your OS.
Proceed with the following steps:
\begin{enumerate}[$\bullet$]
\item Read the manpages of diff and patch
\begin{minted}{shell}
$ man diff
$ man patch
\end{minted}
%$
\item Using the diff command, create a patch corresponding to your changes in homework 2
\begin{minted}{shell}
$ diff -ru /usr/src_orig /usr/src > /root/data
\end{minted}
%$
\item Retrieve your patch on your Linux system
\begin{minted}{shell}
$ rsync -avz minix:/root/data ~/patch
\end{minted}
%$
\item Apply your patch to the copy of /usr/src orig on your Linux system
\begin{minted}{shell}
$ cd ~/minix
$ patch -p3 < ../patch
\end{minted}
%$
\item Revert the patch
\begin{minted}{shell}
$ cd ~/minix
$ patch -R -p3 < ../patch
\end{minted}
%$
\end{enumerate}
\end{document}
"alphanum_fraction": 0.7581903276,
"avg_line_length": 26.4329896907,
"ext": "tex",
"hexsha": "2138959d8114c875dedfa77aeffe9b02dadd420d",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4d3376ce13b78d091624b169daad3fa9f0445eae",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "Krystor/VE482",
"max_forks_repo_path": "lab/lab3/lab3.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4d3376ce13b78d091624b169daad3fa9f0445eae",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "Krystor/VE482",
"max_issues_repo_path": "lab/lab3/lab3.tex",
"max_line_length": 397,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4d3376ce13b78d091624b169daad3fa9f0445eae",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "Krystor/VE482",
"max_stars_repo_path": "lab/lab3/lab3.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1379,
"size": 5128
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% chapters/chapter-3.tex
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Applications}
\label{chap:apps}
\documentclass[11pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage{amsfonts,amsmath,amssymb}
\usepackage[none]{hyphenat}
\usepackage{fancyhdr}
\usepackage{graphicx}
\usepackage{float}
\usepackage{enumitem}
\usepackage{hyperref}
\usepackage[noline,boxed]{algorithm2e}
\usepackage[skins]{tcolorbox}
\usepackage[nottoc,notlot,notlof]{tocbibind}
\usepackage{xcolor}
\usepackage{mathtools}
\newcommand{\defeq}{\vcentcolon=}
\newcommand{\eqdef}{=\vcentcolon}
\pagestyle{fancy}
\fancyhead{}
\fancyfoot{}
\fancyhead[L]{\slshape \MakeUppercase{}}
\fancyhead[R]{\slshape}
\fancyfoot[C]{\thepage}
%\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
%\parindent 0ex
\setlength{\parindent}{4em}
\setlength{\parskip}{0em}
\renewcommand{\baselinestretch}{1.5}
\graphicspath{{images/}}
\begin{document}
\begin{titlepage}
\begin{center}
\vspace*{0.5cm}
\Large{\textbf{Design and Analysis of Algorithms}}\\
\Large{\textbf{Theory Project}}\\
\vfill
\line(1,0){400}\\[1mm]
\huge{\textbf{Fractional Cascading}}\\[3mm]
\Large{\textbf{An Algorithmic Approach}}\\[1mm]
\line(1,0){400}\\
\vfill
By \\
IMT2019051 Mani Nandadeep Medicharla \\
IMT2019063 R Prasannavenkatesh \\
IMT2019525 Vijay Jaisankar \\
\end{center}
\end{titlepage}
\tableofcontents
\thispagestyle{empty}
\clearpage
\setcounter{page}{1}
\section{Abstract}
In this paper, we investigate the \textit{Fractional Cascading} technique, which is used in building range trees and for fast searching of an element in multiple arrays. Along the way, we introduce and examine \textit{Linear Search}, \textit{Binary Search}, \textit{Bridge Building} and \textit{Fractional Cascading}. We also look at some of the \textit{applications} of this technique, and give a video demonstration of its capabilities.
\section{Problem Statement}
\textit{Fractional cascading: }You are given an input of $k$ ordered lists of numbers, each of size $n$ as well as a query value $x$. The problem's output is
to return, for each list, \textit{True} if the query value appears in the list and
\textit{False} if it does not. For example, if the input is:
\begin{enumerate}[label=(\alph*)]
\item List $L_1$: $[3,4,6]$
\item List $L_2$: $[2,6,7]$
\item List $L_3$: $[2,4,9]$
\end{enumerate}
and the query value is 4, then the expected output is [\textit{True},\textit{False},\textit{True}]. \\
\textbf{Give an algorithm to solve the fractional cascading problem.}
\section{Brute Force}
\subsection{Linear Search}
Linear search is the most basic search technique, wherein we sequentially compare each array element to the target element. In the worst case of the target element not coinciding with \textit{any} list element, the algorithm would reach the end of the list and we would report an unsuccessful search. \\
As each element is compared at most once, the time complexity is $O(n)$, where $n$ is the size of the list. \\ \\
This algorithm forms the basis of the simplest solution to our problem: we just run linear search on each of the $k$ lists. If we have $q$ queries, this takes $O(q \cdot k \cdot n)$ time, which is prohibitive: in real-world situations, if $k$, $q$, and $n$ are even moderately large, the running time becomes enormous. \\
\subsection{Pseudocode}
\begin{tcolorbox}[blanker,width=(\linewidth-3.5cm)]
\begin{algorithm}[H]
\SetAlgoLined
\KwData{k arrays of size n and a query element x}
\KwResult{Boolean array regarding whether element is present in the indiced array or not}
\SetKwFunction{LinearSearch}{LinearSearch}
\SetKwFunction{FMain}{Main}
\SetKwProg{Fn}{Function}{:}{\KwRet \textit{False}}
\Fn{\LinearSearch{Array,x}}
{
\For{$i\leftarrow 0$ \KwTo $n$}
{
\If{\textit{Array}[i] == x}
{\KwRet \textit{True}\;}
}
}
\SetKwProg{Fn}{Function}{:}{\KwRet 0}
\Fn{\FMain}{
output = [] \;
\For{$i\leftarrow 0$ \KwTo $k$}
{
output.append(LinearSearch(input[i], x));
}
}
% \caption{Brute Force method by doing k linear searches}
\end{algorithm}
\end{tcolorbox}
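To make the idea concrete, here is a minimal Python sketch of this brute force approach (the function names are ours, purely illustrative, and not part of the pseudocode above):
\begin{verbatim}
# Brute force: one linear search per list, O(q*k*n) over q queries.
def linear_search(array, x):
    for value in array:
        if value == x:
            return True
    return False

def brute_force_query(lists, x):
    return [linear_search(lst, x) for lst in lists]

# brute_force_query([[3, 4, 6], [2, 6, 7], [2, 4, 9]], 4) -> [True, False, True]
\end{verbatim}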
Note, however, that this approach does not take into account a relevant piece of information given to us in the question, which can speed up the algorithm. It is given that the \textit{lists are sorted}, so we can exploit this property and employ a faster searching technique to solve this problem in a better way: \textit{Binary Search}.
%------------------------------------------------------------------------------%
\section{Improved Brute Force}
\subsection{Binary Search}
Binary search is another searching algorithm that works correctly only on sorted arrays. \\
It begins by comparing the target element with the element at the middle of the list.
\begin{itemize}
\item If they are equal, we have found the target in the list
\item If the target is larger, and as the list is sorted, we must now turn our attention to the \textit{right half} of the list
\item Similarly, if the target is smaller, we must focus on the \textit{left half} of the list.
\end{itemize}
In the worst case, Binary Search will take $O(\log n)$ comparisons, where n is the size of the list. \\ \\
To improve the performance of the Brute Force subroutine, we replace the Linear Search subroutine with the aforementioned Binary Search subroutine. If we have $q$ queries, this takes $O(q \cdot k \cdot \log n)$ time, which is certainly a lot better than the initial brute force algorithm, but can yet be improved further.
\subsection{Proof of correctness}
We prove the correctness of binary search by the \textit{method of strong induction}
Let $p(k) \defeq $ Binary search works on an array of size $k$
\begin{enumerate}
\item \textbf{Base Case}
\begin{itemize}
\item If $k=1$, $p(1)$ is trivially true; as either $array[mid] = x$, or not
\item Hence, $p(1)$ is true
\end{itemize}
\item \textbf{Induction Hypothesis}
\begin{itemize}
\item Assume $p(1) \land p(2) \land \dots \land p(z)$ is true
\item This means that binary search works for all arrays of size \textit{at most} $z$.
\end{itemize}
\item \textbf{Induction Step}
\begin{itemize}
\item Now, we look at an array of size $z+1$. Here, we have three cases as outlined in the algorithm:
\begin{itemize}
\item If $array[mid] = x$, we are done as we can return prematurely from the loop
\item Else, our new search space becomes roughly half the original (\textit{Note: it will be exactly $\frac{1}{2}$ of the original size only if the size of the array is a power of 2}). In any case, the size of the ``new'' array we should search in is $\leq z$.
\item Now, from our \textit{Strong Induction Hypothesis}, we are done, as the claim holds for all of these smaller sizes.
\item So, $p(1) \land p(2) \land \dots \land p(z) \rightarrow p(z+1)$
\end{itemize}
\end{itemize}
\item \textbf{So, we have proved the correctness of the binary search algorithm by strong induction.}
\end{enumerate}
\subsection{Pseudocode}
\begin{tcolorbox}[blanker,width=(\linewidth-3.5cm)]
\begin{algorithm}[H]
\SetAlgoLined
\KwData{k arrays of size n and a query element x}
\KwResult{Boolean array regarding whether element is present in the indiced array or not}
\SetKwFunction{BinarySearch}{BinarySearch}
\SetKwFunction{FMain}{Main}
\SetKwProg{Fn}{Function}{:}{\textbf{end Function}}
\Fn{\BinarySearch{Array,x,left,right}}
{
\If{right $\geq$ left}{
mid=$\frac{(left+right)}{2}$\;
\If{Array[mid] == x}{
\KwRet \textit{True}\;
}
\If{Array[mid] $>$ x}{
\KwRet BinarySearch(Array,x,left,mid-1)\;
}
\Else{
\KwRet BinarySearch(Array,x,mid+1,right)\;
}
}
\Else{
\KwRet \textit{False}\;
}
}
\SetKwProg{Fn}{Function}{:}{\KwRet 0}
\Fn{\FMain}{
output = [] \;
\For{$i\leftarrow 0$ \KwTo $k$}
{
output.append(BinarySearch(input[i], x, 0, n-1));
}
}
% \caption{Brute Force method by doing k linear searches}
\end{algorithm}
\end{tcolorbox}
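The same improvement can be sketched in a few lines of Python using the standard \texttt{bisect} module (again, the function names are ours and only illustrative):
\begin{verbatim}
import bisect

# Improved brute force: one binary search per sorted list,
# O(q*k*log n) over q queries.
def binary_search(array, x):
    i = bisect.bisect_left(array, x)
    return i < len(array) and array[i] == x

def improved_query(sorted_lists, x):
    return [binary_search(lst, x) for lst in sorted_lists]

# improved_query([[3, 4, 6], [2, 6, 7], [2, 4, 9]], 4) -> [True, False, True]
\end{verbatim}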
%------------------------------------------------------------------------------%
\section{Bridge Building}
\subsection{Introduction to bridges}
A \textit{bridge} is a pointer from an element $a_i$ of $A_i$ to an element $a_j$ of $A_{i+1}$ where $|a_i-a_j|$ is \textbf{small}, where $A$ is the list and $a$ represents an element of the array. By \textit{small}, we mean an element that is either of the same value, or with the smallest difference to the one considered as reference.$^{\cite{Fractional cascading notes by Prof Prof.Roberto Tamassia}}$ \\
Once we locate the position of a query in an array, we should be able to \textbf{follow a bridge} to an element that is close to the answer in the next array. \\
In the best case, we follow a bridge from the answer to the query in $A_i$ to the endpoint of the bridge in $A_{i+1}$ and then from there locate the answer in $A_{i+1}$, all in constant time. If we can do \textit{this}, then once we have the answer in $A_1$ we can find the answer in the remaining $k-1$ sorted arrays in $O(k)$ time. \\
From a technical standpoint, we \textit{implement} this method as follows: \\
For every element $e$ in the first array, give $e$ a pointer to the element with the \textit{same value} in the second array or, if the value doesn't exist, to its \textit{predecessor} (\textit{Note}: predecessor($x$) = $v \in$ search space where $x-v$ is minimum and $x>v$). This is called \textit{bridge building} between $A_i$ and $A_{i+1}$. Then, once we've found the item in the first array, we can just follow these pointers down in order to figure out where the item might be located in all the other arrays. To find the answer in $A_1$, we can just use a balanced binary search tree, thus making the overall time complexity of our algorithm $O(\log n + k)$ per query.
\begin{figure}[H]
\centering
\includegraphics[]{Images/Screenshot_2021-03-15 You could have invented fractional cascading Inside 245-5D.png}
\caption{Process of bridge building for a given question.$^{\cite{Blog by Edward Z. Yang}}$}
\label{fig:label}
\end{figure}
From the image, we can see the bridge building in action, where the lines between the arrays represent bridges. We can clearly see the predecessor linkages and how we can follow the pointers down and generate the output for all of the arrays. \\ \\
In this example, we are searching for 8 in all 3 arrays. We can clearly see the path of pointers we should traverse, as outlined in red. Note that, in the last array, the most plausible candidate element is 7 and not 8, so we would return \textit{False} for this array, and hence our overall output will be [\textit{True},\textit{True},\textit{False}], as the element 8 is only present in the first two arrays and not in the third one.
\subsection{Pseudocode}
\begin{tcolorbox}[blanker,width=(\linewidth-3.5cm)]
\begin{algorithm}[H]
\SetAlgoLined
\KwData{k arrays of size n and a query element x}
\KwResult{Boolean array regarding whether element is present in the indiced array or not}
\SetKwFunction{BuildBridges}{BuildBridges}
\SetKwFunction{FMain}{Main}
\SetKwProg{Fn}{Function}{:}{\textbf{end Function}}
\Fn{\BuildBridges{Array,x}}
{
\For{$i\leftarrow 0$ \KwTo $k-1$}{
\For{$j\leftarrow 0$ \KwTo $n$}{
Build a bridge from Array[i][j] to Array[i+1][y], where $|$Array[i+1][y] - Array[i][j]$|$ is small. In this approach, if both a predecessor and a successor exist, we prefer the predecessor.
}
}
}
\SetKwProg{Fn}{Function}{:}{\KwRet 0}
\Fn{\FMain}{
output = [] \;
BuildBridges(input,x) \;
output.append(BinarySearch(input[0], x, 0, n-1)) \;
Once the element's position is found in the first array, follow the bridge path down to the final array and append the result for each array to the output. \;
}
% \caption{Brute Force method by doing k linear searches}
\end{algorithm}
\end{tcolorbox}
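The preprocessing step can also be sketched in Python (a minimal illustration under the same assumptions as the pseudocode; \texttt{bridges[i][j]} is the index in $A_{i+1}$ of the equal element or of the predecessor of $A_i[j]$, and $-1$ means no such element exists):
\begin{verbatim}
import bisect

def build_bridges(lists):
    bridges = []
    for i in range(len(lists) - 1):
        below = lists[i + 1]
        row = []
        for v in lists[i]:
            # rightmost position in the array below with below[pos] <= v
            pos = bisect.bisect_right(below, v) - 1
            row.append(pos)
        bridges.append(row)
    return bridges

# build_bridges([[3, 4, 6], [2, 6, 7], [2, 4, 9]]) -> [[0, 0, 1], [0, 1, 1]]
\end{verbatim}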
% \subsection{Proof of correctness}
\subsection{Shortcomings}
This method seems like a very interesting and efficient alternative to solve this problem. However, there are some glaring weaknesses to this approach, the most important one being the fact that certain classes of inputs render this method useless. \\
In particular, if a later list is completely \textit{in between} two elements of the first list, we have to redo the entire search, as the pointer pre-processing gives us no information that we didn't already know.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Images/Screenshot_2021-03-15 fc dvi - notes08 pdf.png}
\caption{An example where a later list is completely \textit{in between} two elements of the first list illustrated via bridge building.$^{\cite{Fractional cascading notes by Prof Prof.Roberto Tamassia}}$}
\label{fig:label}
\end{figure}
\textbf{Let's consider a simple example to elucidate this statement} \\
Consider the case where $k = 2$ and every element of the lower array lies between two elements of the upper array. Everything would work if only we could guarantee that the first list contained the \textit{right elements} to give us useful information about the second array. Here, however, the bridges would be built with only the maximum and minimum of the lower array as endpoints, leaving us to either search the lower array again or \textit{merge the arrays} naively; but if we did the latter, we would end up with an array of size $k \cdot n$, which is not optimal at all if $k$ is even moderately large.
Even if, each time we find an answer in the sorted array $A_i$, we follow the bridge pointer from it as well as from the key above it, we are still left with the entire contents of array $A_{i+1}$ to search. This continues through the entire set of $k$ arrays. If we search each array by doing a linear scan from the point at which the bridge told us to begin, the total search time will be $O(n \cdot k)$. Even if we build a balanced search tree over all elements in $A_{i+1}$ that appear between two consecutive bridge pointers from $A_i$, the query time will still be $O(k \cdot \log n)$, which is the same as performing $k$ binary searches for the query element.
%------------------------------------------------------------------------------%
\section{Fractional Cascading}
\subsection{Intuition behind Fractional Cascading}
The main shortcoming of the \textit{Bridge Building} algorithm is the case where every element of the ``below'' array lies between the elements of the ``above'' array (\textbf{Note}: We say that array 1 is ``above'' array 2 and array 2 is ``below'' array 1. Therefore, an array $j$ is below array $i$ if $j > i$). In this case, we either have to do $k$ binary searches, or we have to merge all $n$ elements of the below arrays recursively and maintain bridges to get the output. This incurs a sub-optimal, extra time complexity of $O(k \cdot n)$, as we go through each array element iteratively to merge it.
To avoid this problem of merging all elements and getting $O(k \cdot n)$ time complexity, we start with the \textit{lowest} list in the sequence, select every $i$th element, where $i = \frac{1}{\alpha}$, and insert it into the array above it while still maintaining sorted order. We then mark that element as \textit{promoted}$^{\cite{Fractional Cascading Demo}}$ and keep a pointer from it to its original position in the bottom list. This operation of taking every $i$th element and \textit{promoting} it to the array above it is called \textit{cascading}, and since we are only promoting a fraction of the elements, the algorithm is called \textbf{fractional cascading}.
For selecting $\alpha$, we can choose from a variety of fractions; however, $\alpha = \frac{1}{2}$ seems most appropriate because, after preprocessing, we will have to compare only 2 elements while searching. This will also ensure that our time complexity stays low and easily computable.
\subsection{Algorithmic Approach to Fractional Cascading}
Let the input be specified by $k$ $n$-element arrays, $A_1,A_2,\dots,A_k$. Let the query element value be $x$.
Let $M_1,M_2,\dots,M_k$ be the new \textit{merged arrays} such that $M_k = A_k$ and $\forall i < k$, $M_i$ is defined as the result of merging $A_i$ with every $\frac{1}{\alpha}$-th element of $M_{i+1}$. \\ \\
As we're taking $\alpha$ = $\frac{1}{2}$, $M_i$ will be the result of merging $A_i$ with every alternate element of $M_{i+1}$. For every element of $M_i$, $\forall$ i $<$ k, we keep two pointers, which are derived as follows:
\begin{itemize}
%\item If the element came from the same array, i.e, $A_i$, we keep a pointer to the nearest neighbouring elements which are cascaded from $M_{i+1}$
\item If the element is not cascaded, i.e., if it comes from the array $A_i$, then the first pointer points to the smallest cascaded element greater than it and the second pointer points to the largest cascaded element smaller than it.
\item If the element has been cascaded, we keep a pointer to the non-cascaded predecessor and non-cascaded successor of the element in $M_i$, to be able to efficiently find the next and previous elements which are members of the current array; we also keep a bridge between $M_i$ and $M_{i+1}$ at the position where the element is present in both of the arrays.
\end{itemize}
Additionally, we add bridges between the pseudo-elements, i.e., $-\infty$ and $+\infty$, in consecutive arrays. If there is no key of the appropriate type above or below the key, the pointer points to the pseudo-keys at $\pm \infty$, whichever is appropriate.$^{\cite{Fractional cascading notes by Prof Prof.Roberto Tamassia}}$
These pointers help us find the position of the query element $x$ in $A_i$ and also in the cascaded arrays below in $O(1)$ time. \\ \\
\textbf{Note}: Since we are merging every alternate element of the below list into the current list, we have
$|M_i| = |A_i| +\frac{1}{2}|M_{i+1}|$, which in turn ensures that $|M_i| \leq 2n = O(n).$ \\
After we perform the aforementioned pre-processing, querying $x$ in all $k$ lists is done as follows:
First, we make a query for $x$ in $M_1$ using a binary search in $O(\log n)$ time. Once we have found the position of $x$ in $M_1$, we use the cascaded pointers to find the position of $x$ in $M_2$. Generalising this step, once we have found the position of $x$ in $M_i$, where $i < k$, we use the cascaded pointers to find the position of $x$ in $M_{i+1}$. \\ \\
To find the location in $M_{i+1}$, we find the \textit{two neighbouring elements} in $M_{i}$ that came from $M_{i+1}$ using the pointers we had assigned during the pre-processing phase. Now, these elements will have \textit{at most one element} between them in $M_{i+1}$. \textbf{Therefore, to find the exact location in $M_{i+1}$, we just have to do a simple comparison with this intermediate element}. This is the significance of taking $\alpha$ = $\frac{1}{2}$, as we have to perform only one comparison, which takes $O(1)$ time, and hence we can retrieve the location of $x$ in $A_i$ from its location in $M_i$ again in $O(1)$ time. \\ \\
Hence, the time to perform the pre-processing for fractional cascading is $O(nk)$, the total search time per query is $O(k+\log n)$, and the total time taken by the Fractional Cascading algorithm is $O(q(k+\log n))$ for $q$ queries, which is an improvement over the previous algorithms.
\subsection{Pseudocode}
\begin{tcolorbox}[blanker,width=(\linewidth-3.5cm)]
\begin{algorithm}[H]
\SetAlgoLined
\KwData{k arrays of size n and a query element x}
\KwResult{Boolean array regarding whether element is present in the indiced array or not}
\SetKwFunction{FFractionalCascading}{Fractional\_Cascading}
\SetKwProg{Fn}{Function}{:}{\KwRet output}
\Fn{\FFractionalCascading}{
output = [] \;
MergedArrays = [] \;
% // Insert all elements of the last array at position 0 of MergedArrays % \;
MergedArrays = MergedArrays.insert(0,all elements of the last array) \;
MergedArrays = MergedArrays.insert(0,merge the below array with the above array by only taking alternate elements of the below array) \;
Generate the boundary case predecessors and successors; $- \infty$ and $+ \infty$ \;
%For every element assign a two sized array to hold the pointers to the locations. \;
For every element in the merged arrays, assign locations based on the presence of that particular element in $A_i$ and $M_{i+1}$.If the element came from the same array, i.e, $A_i$, we keep a pointer to the nearest neighbouring element on either side from $M_{i+1}$. If the element has been cascaded, we keep a pointer to the non cascaded predecessor and successor of the element in $M_i$ and also a bridge between $M_i$ and $M_{i+1}$ at the position where it is present in both of the arrays.\;
We then check for the position of the target element in the merged array, then we follow the pointers down to get the positions of the predecessors of the said target element in all of the k arrays. Let's call this array $positions[k]$ \;
The last step is to scan through $positions[k]$, and see if the respective predecessor is actually the given target, or not. This generates the $[True,False]$ format given in the question and append it to the output array.
}
% \caption{Brute Force method by doing k linear searches}
\end{algorithm}
\end{tcolorbox}
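As a complement to the pseudocode above, the following is a minimal, runnable Python sketch of the whole scheme under the assumptions stated earlier ($\alpha = \frac{1}{2}$, membership queries only). The function names are ours and purely illustrative; for brevity the pointers are computed with binary searches during preprocessing (a linear merge would give the stated $O(nk)$ preprocessing time), and which particular alternate elements get promoted is an implementation choice (here, the even-indexed ones).
\begin{verbatim}
import bisect

def preprocess(lists):
    k = len(lists)
    M = [None] * k          # merged arrays, M[k-1] = A_k
    native = [None] * k     # native[i][j]: lower bound of M[i][j] in lists[i]
    down = [None] * k       # down[i][j]:   lower bound of M[i][j] in M[i+1]
    M[k - 1] = list(lists[k - 1])
    for i in range(k - 2, -1, -1):
        promoted = M[i + 1][::2]            # cascade every other element
        M[i] = sorted(lists[i] + promoted)
    for i in range(k):
        native[i] = ([bisect.bisect_left(lists[i], v) for v in M[i]]
                     + [len(lists[i])])
        if i + 1 < k:
            down[i] = ([bisect.bisect_left(M[i + 1], v) for v in M[i]]
                       + [len(M[i + 1])])
    return M, native, down

def query(lists, M, native, down, x):
    out = []
    pos = bisect.bisect_left(M[0], x)       # the only real binary search
    for i in range(len(lists)):
        j = native[i][pos]
        out.append(j < len(lists[i]) and lists[i][j] == x)
        if i + 1 < len(lists):
            d = down[i][pos]
            # step back to the true lower bound of x in M[i+1]; by the
            # cascading property this loop runs O(1) times
            while d > 0 and M[i + 1][d - 1] >= x:
                d -= 1
            pos = d
    return out

lists = [[3, 4, 6], [2, 6, 7], [2, 4, 9]]
M, native, down = preprocess(lists)
print(query(lists, M, native, down, 4))     # [True, False, True]
\end{verbatim}
Running the sketch on the lists from the problem statement reproduces the expected output [\textit{True},\textit{False},\textit{True}].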
\subsection{Example}
\begin{enumerate}[label=(\alph*)]
\item List $L_1$: $[3,4,6]$
\item List $L_2$: $[2,6,7]$
\item List $L_3$: $[2,4,9]$
\end{enumerate}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Images/WhatsApp Image 2021-03-17 at 8.19.27 PM.jpeg}
\caption{Fractional cascading pre processing for the given question.}
\label{fig:label}
\end{figure}
The figure shows the final merged arrays, and we can also see how the elements are cascaded. Every cascaded element has a pointer to its predecessor and successor that were initially present in the same array $A_i$, as shown by the red curved pointers, and also has a bridge between the merged arrays $M_i$ and $M_{i+1}$ depending on the location of that particular element in both arrays. The red lines denote the bridges. Also, the elements that are present in both $A_i$ and $M_i$ have a pointer to the nearest cascaded elements, as shown by the blue and green lines in the figure.
Here, the search query element is $x = 4$. So we start at $M_1$ and search for 4 using a binary search, which returns the first 4 in the array, i.e., the element at index 1. Now we follow this index's pointer to the nearest cascaded element, which is the 4 at index 2. This is an element cascaded from the arrays below and hence has a bridge to them, so the algorithm follows the bridge and goes to index 1 of $M_2$. However, this element is also cascaded, so we check whether the previous element is equal to 4 or not. The previous element in this case is the 2 at index 0 of $M_2$, which is not equal to the query element, so the second array contributes \textit{False}. We then go again to the nearest cascaded element via 2's pointer and reach the next merged array, $M_3$. Here the bridge's endpoint is the query element, so it returns true, and the algorithm terminates as we have reached the lowest array. Hence the output will be [\textit{True},\textit{False},\textit{True}]. \\
We have made a short walkthrough of the above example, which can be found on YouTube at this \href{https://www.youtube.com/watch?v=eUQtziH6cDo}{link}$^\cite{Youtube Walkthrough}$.
\subsection{Proof of Correctness}
To prove that our Fractional Cascading algorithm is correct, we perform the following steps.
\begin{itemize}
\item We look at another simpler algorithm and prove its correctness, let's call this algorithm X.
\item We show that our algorithm and X produce the same output. This forms the proof of correctness of our algorithm.
\end{itemize}
Note that we already have a candidate for algorithm X: our \textit{improved brute force algorithm}, which uses binary search and whose correctness we have already proved earlier.
%proof of correctness of binary search.
We will follow the concept of universal generalization here; let's consider the $r^{th}$ array of our input sequence consisting of $k$ arrays of size $n$, where $r \leq k$. Let's also consider a target element $x$. Clearly, there are two cases:
\begin{itemize}
\item x $\in$ array[r] : let's call this situation 1.
\item x $\notin$ array[r] : let's call this situation 2.
\end{itemize}
\textbf{Let's start with situation 1.}
There are two possibilities when our control reaches array[r]:
\begin{itemize}
\item We find element $x$ and it is not cascaded from below; then we return \textit{true}, which is exactly what algorithm X returns in the same situation.
\item We find element $x$ but it is cascaded from below; then our control moves to the nearest non-cascaded neighbours of $x$. As $x \in$ array[r], \textit{native x} will be one of the neighbours of the cascaded $x$. So the control gets transferred to the native $x$, and this case gets \textit{transformed} into the previous case. Hence, we return \textit{true}, which is exactly what algorithm X returns in the same situation.
\end{itemize}
Now \textbf{let's continue with situation 2}.
There are two possibilities when our control reaches array[r]:
\begin{itemize}
\item We find element \textit{x} but it is cascaded from below. So we look at its nearest non-cascaded neighbours, which are guaranteed not to be $x$ (as $x \notin$ array[r] by supposition). So we can return \textit{false}, which is exactly what algorithm X returns in this situation.
\item We find an element which is not equal to $x$. Now, regardless of whether it is cascaded or not, when we traverse the pointers we are guaranteed not to find $x$, so we return \textit{false}, which is exactly what algorithm X returns in this situation.
\end{itemize}
Hence, the output given by the fractional cascading algorithm and the one given by the binary search algorithm are equal. All the cases are exhausted, and the algorithm terminates when it reaches $M_k = A_k$. This proves that our \textbf{algorithm gives correct output}, and hence concludes the proof of correctness.
%------------------------------------------------------------------------------%
\section{Applications}
This technique has various applications$^\cite{Wikipedia article on fractional cascading}$ in numerous fields. \\
\begin{enumerate}
\item Computational Geometry
\begin{itemize}
\item Half-Plane Range Reporting
\item Explicit Searching
\item Point Location
\end{itemize}
\item Networks
\begin{itemize}
\item Fast Packet filtering in internet routers.
\item Data distribution and retrieval in sensor networks
\end{itemize}
\item Linear Range Queries
\begin{itemize}
\item As an accompaniment to \textit{Segment Trees}
\end{itemize}
\end{enumerate}
%------------------------------------------------------------------------------%
\pagebreak
\begin{thebibliography}{}
\bibitem{Fractional cascading notes by Prof Prof.Roberto Tamassia}
CS 252: Computational Geometry, Prof.\ Roberto Tamassia, Sem.\ II, 1992--1993
\\
\texttt{http://cs.brown.edu/courses/cs252/misc/resources/lectures/pdf/notes08.pdf} \\
\bibitem{MIT OCW Notes on Fractional Cascading}
6.851: Advanced Data Structures, Spring 2012, Prof.\ Erik Demaine \\
\href{https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-851-advanced-data-structures-spring-2012/calendar-and-notes/MIT6_851S12_L3.pdf}{Link to MIT Lecture notes by Prof Erik Demaine on the topic Advanced Data Structures} \\
\bibitem{Fractional Cascading Demo}
Fractional Cascading webpage created by Ravi Sinha alias ravix339-zz \\
\texttt{https://ravix339.github.io/FractionalCascading/index.html} \\
\texttt{https://ravix339.github.io/FractionalCascading/Demo.html} \\
\bibitem{Blog by Edward Z. Yang}
A blog about fractional cascading and bridge building by Edward Z. Yang \\
\texttt{http://blog.ezyang.com/2012/03/you-could-have-invented-fractional-cascading/} \\
\bibitem{Wikipedia article on fractional cascading}
Wikipedia article on fractional cascading \\
\texttt{https://en.wikipedia.org/wiki/Fractional\_cascading} \\
\bibitem{Youtube Walkthrough}
Our YouTube walkthrough for fractional cascading \\
\texttt{https://www.youtube.com/watch?v=eUQtziH6cDo} \\
\end{thebibliography}
\end{document}
%
% Analytics
%
% Aleph Objects Firewall
%
% Copyright (C) 2014, 2015, 2016 Aleph Objects, Inc.
%
% This document is licensed under the Creative Commons Attribution 4.0
% International Public License (CC BY-SA 4.0) by Aleph Objects, Inc.
%
\section{Overview}
What is the network doing?
\begin{itemize}
\item snort
\item MRTG
\item Aguri
\end{itemize}
\section{Discussion}
\label{section:discussion}
In the previous sections we demonstrated that modeling the ages of stars using
isochrones {\it and} gyrochronology can result in more precise ages than using
isochrone fitting alone.
Isochrone fitting and gyrochronology are complementary because gyrochronology
is more precise where isochrone fitting is less precise (on the MS) and vice
versa (at MS turn off).
% Age precision is determined by the spacing of isochrones or gyrochrones: in
% regions where iso/gyrochrones are more tightly spaced, ages will be less
% precise.
% Isochrones becone less tightly spaced (and more precise) at larger stellar
% masses and lower surface gravities.
% Gyrochrones become more tightly spaced (and less precise) at larger stellar
% masses.
% - How will this be used/what will it be useful for?
The method we present here is available as a {\it python} package called \sd\
which allows users to infer ages from their available apparent magnitudes,
parallaxes, rotation periods and spectroscopic properties in just a few lines
of code.
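For illustration only, the kind of call pattern we have in mind looks roughly
like the sketch below; the specific class, argument and method names
(\texttt{Star}, \texttt{prot}, \texttt{prot\_err}, \texttt{age\_results}) and
the input values are assumptions for the purpose of this example and may
differ from the released interface, which is documented with the package.
\begin{verbatim}
# Illustrative sketch only: names and values below are assumptions,
# not a verbatim copy of the released stardate interface.
import stardate as sd

# Observables: spectroscopic parameters, parallax, broad-band
# photometry and a rotation period with uncertainties (invented values).
iso_params = {"teff": (5770, 50),
              "logg": (4.44, 0.05),
              "feh": (0.0, 0.02),
              "parallax": (10.0, 0.1),
              "B": (11.0, 0.02),
              "V": (10.4, 0.02)}
prot, prot_err = 26.0, 1.0

star = sd.Star(iso_params, prot=prot, prot_err=prot_err)
star.fit(max_n=1000)                 # sample the joint posterior
age, err_plus, err_minus, samples = star.age_results()
\end{verbatim}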
This method is applicable to an extremely large number of stars: late F, GK
and early M stars with a rotation period and broad-band photometry.
This already includes tens-of-thousands of \kepler\ and \ktwo\ stars and could
include millions more from \tess, \lsst, \wfirst, \plato, \gaia, and others in
the future.
Although this method is designed for combining isochrone fitting with
gyrochronology, \sd\ can still be used without rotation periods, in
which case it will predict an isochrone-only stellar age.
% \sd\ is therefore applicable to all stars covered by the MIST isochrones:
% masses from 0.1 to 300 M$_\odot$, ages ranging from 100,000 years to longer
% than the age of the Universe, and metallicities from -4 to 0.5.
However, it is {\it optimally applicable} to stars with rotation periods,
otherwise the result will be identical to ages measured with {\tt
isochrones.py}.
% - Where is it not useful?
\sd\ will, however, often predict inaccurate ages for stars younger than around
500 million years, where stars are more likely to be rapidly rotating
outliers, and for close binaries, whose interactions influence their rotation
period evolution.
% These ages may still be precise even though they are inaccurate.
% Building a mixture model into \sd\ would allow these outliers to be identified
% and this is one of the main improvements to \sd\ that we plan to make in
% future.
% Since many stars with measurable rotation periods do not have precise
% spectroscopic properties, it is not always possible to tell whether a star
% falls within these permissable ranges of masses, surface gravities and rossby
% numbers.
% In addition, any given star, even if it does meet the criteria for mass,
% age, binarity, etc, may still be a rotational outlier.
Rotational outliers are often seen in clusters \citep[see \eg][]{douglas2016,
rebull2016, douglas2017, rebull2017} and many of these fall above the main
sequence, indicating that they are binaries.
When a star's age is not accurately represented by its rotation period, its
isochronal age will be in tension with its gyrochronal one; however, given the
high information content of gyrochronology, the gyrochronal age will dominate
on the MS.
% Figure \ref{fig:bimodal} shows the posterior PDF for a star with a
% misrepresentative rotation period.
% This star is rotating more rapidly than its age and mass indicate it should,
% so the gyrochronal age of this star is under-predicted.
% Situations like this are likely to arise relatively often, partly because
% rotational spin-down is not a perfect process and some unknown physical
% processes can produce outliers, and partly because misclassified giants, hot
% stars, M dwarfs or very young or very old stars will not have rotation periods
% that relate to their ages in the same way.
In addition, measured rotation periods may not always be accurate and can, in
many cases, be a harmonic of the true rotation period.
A common rotation period measurement failure mode is to measure half the true
rotation period.
The best way to prevent an erroneous or outlying rotation period from
resulting in an erroneous age measurement is to {\it allow} for outlying
rotation periods using a mixture model.
We intend to build a mixture model into \sd\ in the future.
% As shown in figure \ref{fig:praesepe}, the gyrochronology model used here
% \citep{angus2015} does not provide a good fit to all available data.
% In future we intend to calibrate a new gyrochronology model that fits all
% available cluster and asteroseismic data.
% For now however, we simply warn users of these caveats and suggest that ages
% calculated using \sd\ are treated with appropriate caution.
% % - Caveats and gotchas -- e.g. isochrones aren't 100% accurate.
% Throughout this manuscript we have referred to the `accuracy' of the
% isochronal models.
% In reality though, stellar evolution models are not 100\% accurate and
% different stellar evolution models, \eg, MIST, Dartmouth, Yonsei-Yale, etc
% will predict slightly different ages.
% The disagreement between these models varies with position on the HRD,
% but in general, ages predicted using different stellar evolution models will
% vary by around 10\%.
% We use the MIST models in our code because they cover a broader range of ages,
% masses and metallicities than the Dartmouth models.
% % - How including the rotation period improves precision of all parameters.
% Our focus so far has been on stellar age because this is the most difficult
% stellar parameter to measure.
% However, if the age precision is improved, then the mass, \feh, distance and
% extinction precision must also be improved, since these parameters are
% strongly correlated and co-dependent in the isochronal model.
% Figure \ref{fig:mass_improvement} shows the improvement in relative precision
% of mass measurements from our simulated star sample.
\section{Throughput}
\label{sec:verification_and_benchmark:throughput}
All steps from the image classification chain and the weighting are measured.
This results in the following measurements:
\begin{itemize}
\item Getting the raw Baumer \texttt{BayerRG8} frame
\item Converting the color space
\item Resizing the frame
\item Normalizing the pixel values
\item Running inference
\item Applying the softmax function
\item Weighting
\end{itemize}
To measure the time needed for each step, the Python module datetime was used.
The function \texttt{datetime.now} returns the current local date and time.
To measure the individual steps, the Python file \texttt{aionfpga.py} is slightly modified.
The initialization is not changed at all.
The infinite loop is removed, so that only one throw is captured and the application ends afterwards.
Additionally, the start and end times around each of the above steps are recorded.
The times are stored in a three-dimensional dictionary: the first dimension is the measured step, the second indicates whether a timestamp marks the start or the end of that step, and the third is the list to which the individual timestamps are appended.
Since the times vary from frame to frame, several images are acquired and the measurements are averaged in a final step.
Listing \ref{lst:measure_time} shows an example for the color transformation.
\begin{lstlisting}[style=python, caption={Measuring the required time for the color space conversion}, label=lst:measure_time]
meas_time['Transform color']['start'].append(datetime.now())
frames[idx] = cv2.cvtColor(raw_frame, cv2.COLOR_BayerBG2BGR)
meas_time['Transform color']['end'].append(datetime.now())
\end{lstlisting}
The time needed for each step is the difference between end and start.
When all individual times are computed, the average value is calculated for each step.
This is printed out on the console and written to a file.
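For illustration, the averaging step can be expressed as a small helper along the following lines (a sketch assuming the dictionary layout described above; the helper is not part of \texttt{aionfpga.py}).
\begin{lstlisting}[style=python]
# Hypothetical helper, shown for illustration only
def average_times_ms(meas_time):
    averages = {}
    for step, stamps in meas_time.items():
        durations = [(end - start).total_seconds() * 1000
                     for start, end in zip(stamps['start'], stamps['end'])]
        averages[step] = sum(durations) / len(durations)
    return averages
\end{lstlisting}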
Table \ref{tab:measured_times} lists the measured times.
By calculating the reciprocal of the total time required, the frame rate can be determined:
\begin{equation}
\text{frame rate} = \frac{1}{T_{\text{tot}}} = \frac{1}{\SI{24.329}{ms}} = \SI{41.104}{fps}
\label{eq:achieved_framerate}
\end{equation}
\begin{table}
\caption{Measured time for the different steps}
\label{tab:measured_times}
\centering
\begin{tabular}{ll}
\toprule
\textbf{Step} & \textbf{Time} \\
\midrule
Getting the raw frame & \SI{0.233}{ms} \\
Converting the color space & \SI{9.612}{ms} \\
Resizing the frame & \SI{1.534}{ms} \\
Normalizing the pixel values & \SI{4.503}{ms} \\
Running inference & \SI{7.422}{ms} \\
Applying the softmax function & \SI{0.458}{ms} \\
Weighting & \SI{0.567}{ms} \\
\midrule
Total & \SI{24.329}{ms} \\
\bottomrule
\end{tabular}
\end{table}
%!TEX ROOT = thesis.tex
\chapter{How to Use umalayathesis to Write Your Thesis, and then some}
\textsf{umalayathesis} is a \LaTeX\ class for authoring theses that fulfil formatting specifications required by Universiti Malaya (UM), Malaysia. The thesis preparation guide can be accessed at \url{http://bit.ly/2xaYpzN}.
\section{Files}\label{sec:files}
Here's a quick list of the files required when writing your thesis with the \textsf{umalayathesis} class. Easiest way to go about things is to put all the files in the same directory. (See \ref{sec:howto} for more details.)
%
\begin{itemize}
\item \texttt{\bfseries umalayathesis.cls}, the \LaTeX\ class file implementing the UM thesis formatting requirements.
\item A ``main driver'' \texttt{.tex} file of your thesis, analogous to \verb|int main()|. You can name this file anything you like; it is known as \texttt{thesis.tex} in this guide. (See \ref{sec:howto}.) \textbf{This is the \emph{only} file that you should run the processing tools on!}
\item Two \texttt{.tex} files containing your thesis abstract, in English and Bahasa Malaysia. (See \ref{sec:abstract}.)
\item \texttt{.tex} files containing your thesis chapters and appendices, one chapter per file. (See \ref{sec:chapters} and \ref{sec:appendices}.)
\item A \texttt{.bib} file containing your references and publications. (See \ref{sec:bibliography}).
\item A \texttt{.tex} file containing your glossary. (See \ref{sec:glossary}).
\end{itemize}
\subsection{\LaTeX{} IDE Configuration}\label{sec:texworks}
\emph{This section is unnecessary if you are using Overleaf, arara or latexmk, as these build tools would automatically run the necessary processors.}
Assuming \textsf{TeXworks} is your \LaTeX\ editor of choice on Windows, you will probably want to configure it so that you can process your glossary and list of own publications from within \textsf{TeXworks}.
(You can always, of course, opt to run the relevant commands from the command line prompt, or adapt these configurations for other editors and operating systems: I have tested on Windows XP/7, Ubuntu and Mac OS X.)
\subsubsection{Tool Configuration for Generating the Glossary}\label{sec:texworks:makeglossaries}
Access the \textsf{TeXworks} menu \textsf{Edit $\triangleright$ Preferences\ldots\ $\triangleright$ Typesetting}. Add a new processing tool called ``\textsf{MakeGlossaries}''. Configure it as shown below:
\includegraphics[width=.9\textwidth]{texworks-win_glossaries}
Now \textbf{repeat the above step} for another similar tool called ``\textsf{MakeAcronyms}'', but replace \texttt{thesis.glg} with \texttt{thesis.alg}; \texttt{thesis.gls} with \texttt{thesis.acr}; \texttt{thesis.glo} with \texttt{thesis.acn}.
On Linux and Mac systems, these are equivalent to the command lines
\begin{lstlisting}
makeindex -s <base>.ist -t <base>.glg -o <base>.gls <base>.glo
\end{lstlisting}
\textbf{If you have Perl installed} (likely if you're using Linux or Mac), you can just run
\begin{lstlisting}
makeglossaries <base>
\end{lstlisting}
\noindent and it'll process both the acronyms and the glossaries.
\subsubsection{Tool Configuration for Generating the List of Publications}
Now add a new processing tool called ``\textsf{Create Own Publication List}'' (or some other name). Configure it as shown below:
\includegraphics[width=.9\textwidth]{texworks-win_ownpub}
On Linux and Mac systems, these are equivalent to the command lines
\begin{lstlisting}
bibtex own
\end{lstlisting}
If you are using the \texttt{splitpubs} environment for separating your published journal articles and conference proceedings, then you would need to set up similar processors for \texttt{ownjour} and \texttt{ownconf} instead:
\begin{lstlisting}
bibtex ownjour
bibtex ownconf
\end{lstlisting}
\section{Compiling \texttt{thesis.tex}}
The following processing tools/commands are triggered automatically on Overleaf as you edit your file, but you must execute them manually if compiling on your own machine. (The \verb|$| is the terminal command prompt; don't type that!)
\begin{lstlisting}
$ pdflatex thesis
$ bibtex thesis
$ makeglossaries thesis <-- if you have acronyms and glossaries
$ makeindex thesis <-- if you have indices
$ pdflatex thesis
$ pdflatex thesis
\end{lstlisting}
You will need to run \texttt{makeglossaries} again if you add and use a \emph{new glossary or acronym entry}.
If you do not have Perl installed on your system (Mac and GNU/Linux systems are likely to already have Perl installed), then you should execute the following commands to replace \texttt{makeglossaries}:
\begin{lstlisting}
$ makeindex -s thesis.ist -t thesis.glg -o thesis.gls thesis.glo
$ makeindex -s thesis.ist -t thesis.alg -o thesis.acr thesis.acn
\end{lstlisting}
\section{Printing from Acrobat Reader}
Remember to set the \textbf{paper size} to \textbf{A4} and \textbf{page scaling} to \textbf{None} in the \textsf{Print} dialog, otherwise the margins would be incorrect.
\section{Using the umalayathesis Class}\label{sec:howto}
\subsection{Activation}\label{sec:activation}
To `activate' the class, make sure your main document file (e.g.\ \texttt{thesis.tex}) starts off with \verb|\documentclass{umalayathesis}|:
\begin{lstlisting}
\documentclass{umalayathesis}
\usepackage{graphicx}
\usepackage{... other packages you need}
\end{lstlisting}
This will set up the page margins, paragraph spacing, indents, page numbers, font face and size, citation and bibliography format, amongst other things.
\subsection{Document Class Options}
Some faculties or departments may have varying, and sometimes conflicting, requirements on various formatting details, which may not have been explicitly described in the official thesis style guidelines. The following document class options may be used to address some of the more commonly requested changes: please also read the commented code in the sample \texttt{thesis.tex} carefully for examples and tips.
\begin{description}
\item[\texttt{english}] (default) English thesis.
\item[\texttt{bahasam}] Malay thesis. At present \texttt{apacite} and \texttt{newapa} has not yet been localised to Bahasa Malaysia.
\item[\texttt{apacite}] (default) Loads the \texttt{apacite} package, which implements the APA
citation and referencing styles strictly, including expansion
of 3--5 authors on first citation.
\item[\texttt{newapa}] Loads the \texttt{natbib} and \texttt{apalike} package for a APA-like reference list
but \emph{does not fully implement} all APA citation styles. In particular,
this option will not expand references with 3--5 authors on first citation.
\textbf{Not recommended unless explicitly requested by examiner}.
\item[\texttt{custombib}] Does not pre-load any bibliography style or packages; you will need to
specify \verb|\bibliographystyle|, \verb|\bibliographystyleown| etc yourself, or load \texttt{natbib} yourself if necessary. See \ref{sec:custombib} for an example.
\item[\texttt{appendixhead}] Add `APPENDICES' before the first Appendix.
\item[\texttt{altcaption}] Caption in smaller fonts; only Figure X, Table Y bold.
\item[\texttt{singlespacedlisttitles}] Long titles in the ToC, LoT, LoF, LoA are single-spaced.
\item[\texttt{listpageheader}] If your faculty requires a ``header row'' at the top of
the List of Figures/Tables. You can re-define \verb|\lofpageheader| and
\verb|\lotpageheader| if necessary, e.g.
\verb|\renewcommand{\lofpageheader}{\hfill Page}|
\item[\texttt{boldfrontmattertoc}] If your faculty wants front matter ``chapters'' to be
bold in the ToC.
\item[\texttt{boldbackmattertoc}] If the backmatter ``chapters'' are to be bolded as well.
\item[\texttt{uppercasetoc}] If all ``chapters'' level headings must be upper-cased in the ToC.
\end{description}
If further minor modifications are required, it is recommended to do so with re-issuing commands to change settings or \verb|\renewcommand|, \verb|\patchcmd| etc in the \emph{preamble of \texttt{thesis.tex}}. Modifying \texttt{umalayathesis.cls} directly is discouraged, as far as possible, since this may complicate debugging and future maintenance.
\subsection{Author Information}\label{sec:author:info}
You need to provide some author information in the preamble. Example lines from \texttt{thesis.tex}:
\lstset{language=[LaTeX]{TeX}}
\begin{lstlisting}[moretexcs={submissionyear,submissionmonth,faculty,othertitle,degree,qualification}]
\author{Lim Lian Tze}
\title{My Ground-breaking Research}
\othertitle{Hasil Penyelidikan yang Menggegarkan}
\faculty{Faculty of Amazing Research}
\submissionyear{2012}
\degree{Doctor of Philosophy}
\end{lstlisting}
This information is needed to generate the preliminary pages.
If \verb|\othertitle| is given, then the second abstract will display it
(i.e. if your thesis is `\texttt{english}' then this is printed on top of the Malay abstract.
and if your thesis is `\texttt{bahasam}' then this is printed on top of the English abstract).
If no \verb|\othertitle| is given, then the second abstract will not have any translated thesis title.
If you need to specify your department as well, you may write
\begin{lstlisting}
\faculty{Department of Hyperboles\\Faculty of Amazing Research}
\end{lstlisting}
\subsection{Preliminary Pages}\label{sec:prelim:pages}
Once in the main document body, \verb|\frontmatter| sets up the, well, front matter. This includes setting the page numbers to lower-case Roman numerals.
\textsf{umalayathesis} can generate the cover page, title page and original literary work declaration page with the following lines (included in \texttt{thesis.tex}):
\begin{lstlisting}[moretexcs={makecoverandtitlepage,copyrightpage,declarationpage}]
% \makecoverandtitlepage{\mastercoursework}
% \makecoverandtitlepage{\mastermixedmode}
% \makecoverandtitlepage{\masterresearch}
\makecoverandtitlepage{\doctoralresearch}
% \makecoverandtitlepage{\doctoralmixedmode}
\declarationpage
\end{lstlisting}
Please \emph{uncomment} the correct \verb|\makecoverandtitlepage| line to generate the correct statement on the title page.
\subsection{Acknowledgements}\label{sec:acknowledge}
\lstset{moretexcs={acknowledgements}}
This is provided using \verb|\acknowledgements|:
\begin{lstlisting}
\acknowledgements{I would like to thank my parents, my family, my supervisor...}
\end{lstlisting}
\subsection{Abstract}\label{sec:abstract}
Write your abstracts in separate files (\texttt{sample-abstract.tex} for the English abstract and \texttt{sample-msabstract.tex} for the Malay abstract in this example), and include them in \texttt{thesis.tex} like this:
\begin{lstlisting}[moretexcs={abstractfromfile,msabstractfromfile}]
\abstractfromfile{sample-abstract}
\msabstractfromfile{sample-msabstract}
\end{lstlisting}
\subsection{Table of contents, List of figures and tables}\label{sec:toc}
These are auto-generated by the following lines (included in \texttt{thesis.tex}):
\begin{lstlisting}[moretexcs={tableofcontents,listoftables,listoffigures}]
{\clearpage
\tableofcontents\clearpage
\listoffigures\clearpage
\listoftables\clearpage}
\end{lstlisting}
\subsection{Main Chapters}\label{sec:chapters}
I highly recommend that each chapter be written in a separate file. For example, \texttt{chap-intro.tex} has the contents
\begin{lstlisting}[moretexcs={chapter}]
\chapter{Introduction}
This is the introduction chapter.
\section{Problem Background}
We study the...
\end{lstlisting}
And \texttt{chap-litreview.tex}:
\begin{lstlisting}[moretexcs={chapter}]
\chapter{Literature Review}
We review the state of the art in...
\section{Early Approach}
Researchers first attempted to...
\end{lstlisting}
In \texttt{thesis.tex}, these chapter files are included with the following lines:
\begin{lstlisting}[keepspaces=true,moretexcs=mainmatter]
\mainmatter % signal start of main chapters
\input{chap-intro} % no .tex extension!
\input{chap-litreview}
\input{...}
\end{lstlisting}
\subsection{Appendices}\label{sec:appendices}
Again, I recommend keeping each appendix chapter in its own file e.g.~\texttt{app-umldiagram.tex}:
\begin{lstlisting}
\chapter{UML Diagrams}
...
\end{lstlisting}
And in \texttt{thesis.tex}:
\begin{lstlisting}
\begin{appendices}
\input{app-umldiagram}
\input{...}
\end{appendices}
\end{lstlisting}
\subsection{Citations and Bibliography}\label{sec:bibliography}
\textsf{umalayathesis} uses the \textsf{apacite} package with the \texttt{natbibapa} option to format citations and bibliography in the APA style.
Here are some useful variants of the \verb|\cite| command; see the \textsf{apacite} manual for full list.
\bigskip
{\lstset{moretexcs={citeA,citeNP,citeauthor,citeyear}}
\begin{tabular}{>{\textbullet\hspace{6pt}}l @{\hspace{6pt}$\rightarrow$\hspace{6pt}} l}
\verb|\citep{Lim:2009}| & (Lim, 2009)\\
\verb|\citet{Lim:2009}| & Lim (2009)\\
\verb|\citealp{Lim:2009}| & Lim, 2009 (no parenthesis)\\
\verb|\citep[see][p.~7]{Lim:2009}| & (see Lim, 2009, p.~7)\\
\verb|\citeauthor{Lim:2009}| & Lim\\
\verb|\citeyearpar{Lim:2009}| & (2009)\\
\end{tabular}
}
\bigskip
In \texttt{thesis.tex}, these lines will print the bibliography list:
\begin{lstlisting}[keepspaces=true,moretexcs=backmatter]
\backmatter % signal start of back matter
\bibliography{bibfile} % bibliography file name without .bib extension
\end{lstlisting}
\subsection{List of Publications}
First, make sure that you enter details about your own publications in your \verb|.bib| file. Then in \verb|thesis.tex|, search for the following line:
\lstset{escapechar={}}
\begin{lstlisting}
\nociteown{Lim:2009}
\end{lstlisting}
Replace the BibTeX key between the curly braces with that of your own publication. If you have more than one publication, simply separate them with commas inside the curly braces, like this:
\begin{lstlisting}
\nociteown{lim:tang:2004,Lim:2009}
\end{lstlisting}
If you need your publications to be categorised by types (journal articles and conference proceedings), use the \texttt{splitpubs} environment with \verb=\...jour= and \verb=...conf= instead:
\begin{lstlisting}[keepspaces=true,moretexcs={\nociteownjour,\nociteownconf,\bibliographyownjour,\bibliographyownconf},emph={splitpubs},emphstyle={\bfseries}]
\begin{splitpubs}
\nociteownjour{lim:tang:2004} % journal articles
\nociteownconf{Lim:2009} % conference proceedings
\bibliographyownjour{myrefs}
\bibliographyownconf{myrefs}
\end{splitpubs}
\end{lstlisting}
\subsection{Glossary}\label{sec:glossary}
You can maintain a consistent glossary and acronym list using the \textsf{glossaries} package. It also supports acronym expansion on first mention!
First, define your acronyms and terms in a separate file e.g.~\texttt{myacronyms.tex}:
\begin{lstlisting}[moretexcs={newacronym,newglossaryentry},
emph={name,description,plural,firstplural},emphstyle=\bfseries]
% \newglossaryentry{label}{name={term},description={explanation}}
\newglossaryentry{lexicon}{
name={lexicon},
description={The vocabulary of a language, including its words and expressions. More formally, it is a language's inventory of lexemes}
}
% \newacronym[description={explanation}]{label}{abbrv}{full form}
\newacronym
[description={single word or words that are grouped in a language's lexicon}]
{LI}{LI}{lexical item}
\newacronym[description={The application of computational linguistics principles to problems}]
{NLP}{NLP}{Natural Language Processing}
% when the plural form is irregular, specify firstplural and plural
\newacronym
[firstplural={parts of speech}, plural={POS},
description={linguistic category of lexical items}]
{POS}{POS}{part of speech}
\end{lstlisting}
Loading the glossary and acronym list, and later printing the list of acronyms and glossary in \texttt{thesis.tex}:
\begin{lstlisting}[moretexcs={loadglsentries,printglossaries,listofacronyms}]
% Must be loaded BEFORE \begin{document}!
\loadglsentries{myacronyms}
\begin{document}
...
% List of acronyms is between list of tables and list of appendices
\listofacronyms\clearpage
...
\bibliography{bibfile}
% Glossaries is placed AFTER the bibliography
% (only entries that are actually used in the text will be listed)
\printglossary
...
\end{lstlisting}
To mention them in the text (i.e.~\texttt{chap-xxx.tex} etc):
\lstset{moretexcs={gls,glsplural,ac,acp,Gls,Glsplural,Ac,Acp}}
\begin{lstlisting}[frame=single]
Let's talk about \acp{LI} and \acp{POS} in \ac{NLP}. I mention again \acp{LI}. We will also talk about \glsplural{lexicon}.
\end{lstlisting}
Notice how the acronyms are expanded on first use, as well as the use of \verb|\glsplural| and \verb|\acp| for plurals:
{\fboxsep=12pt\SingleSpacing
\noindent\fbox{\begin{minipage}{.95\textwidth}
Let's talk about lexical items (LIs) and parts of speech (POS) in Natural
Language Processing (NLP). I mention again LIs. We will also talk about lexicons.
\end{minipage}}
}
You will need to run \texttt{pdflatex}, \texttt{makeglossaries}, then 2 more runs of \texttt{pdflatex} for the glossaries to appear properly.
Use \verb|\Gls|, \verb|\Glsplural|, \verb|\Ac|, \verb|\Acp| etc.~if you need to capitalise the first letter of your terms at the beginning of sentences.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Please note that whilst this template provides a
% preview of the typeset manuscript for submission, it
% will not necessarily be the final publication layout.
%
% letterpaper/a4paper: US/UK paper size toggle
% num-refs/alpha-refs: numeric/author-year citation and bibliography toggle
%\documentclass[letterpaper]{oup-contemporary}
\documentclass[a4paper,num-refs]{oup-contemporary}
%%% Journal toggle; only specific options recognised.
%%% (Only "gigascience" and "general" are implemented now. Support for other journals is planned.)
\journal{gigascience}
\usepackage{color}
\usepackage{graphicx}
\usepackage{siunitx}
\usepackage{cite}
\usepackage{svg}
\usepackage{enumitem}
%\usepackage{courier}
%%% Flushend: You can add this package to automatically balance the final page, but if things go awry (e.g. section contents appearing out-of-order or entire blocks or paragraphs are coloured), remove it!
% \usepackage{flushend}
\usepackage{tikz}
\def\checkmark{\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;}
% Commands for \textit{CWLProv} Table
\usepackage{mathabx}
\newcommand{\return}{$\drsh$ ~}
\newcommand{\nary}[1]{
~ \return #1
}
\newcommand{\placeholder}[1]{
\textit{#1}
}
% RRIDs like RRID:SCR_008394
% detokenize as we need to escape _
\newcommand{\rrid}[1]{
(\href{http://identifiers.org/rrid/RRID:#1}{\detokenize{#1}})
}
%%% https://texblog.org/2014/10/24/removinghiding-a-column-in-a-latex-table/
\usepackage{array}
\newcolumntype{H}{>{\setbox0=\hbox\bgroup}c<{\egroup}@{}}
\title{Sharing interoperable workflow provenance: A review of best practices and their practical application in CWLProv}
\author[1,2,\authfn{1}]{Farah Zaib Khan} %% https://orcid.org/0000-0002-6337-3037
\author[2,3,\authfn{1}]{Stian Soiland-Reyes} %% https://orcid.org/0000-0001-9842-9718
\author[1,\authfn{1}]{Richard O. Sinnott} %% http://orcid.org/0000-0001-5998-222X
\author[1,\authfn{1}]{Andrew Lonie} %% https://orcid.org/0000-0002-2006-3856
\author[3,\authfn{1}]{Carole Goble} %% https://orcid.org/0000-0003-1219-2137
\author[2,\authfn{1}]{Michael R. Crusoe} %% https://orcid.org/0000-0002-2961-9670
\affil[1]{The University of Melbourne, Australia }
\affil[2]{Common Workflow Language Project}
\affil[3]{The University of Manchester, UK}
%%% Author Notes
\authnote{\authfn{1}[email protected]; [email protected]; [email protected];
[email protected];
[email protected];
[email protected]}
%%% Paper category
\papercat{Research}
%%% "Short" author for running page header
\runningauthor{Khan et al.}
%%% Should only be set by an editor
\jvolume{00}
\jnumber{0}
\jyear{2018}
\begin{document}
\begin{frontmatter}
\maketitle
\begin{abstract}
\textbf{Background}: The automation of data analysis in the form of \textit{scientific workflows} has become a widely adopted practice in many fields of research. Computationally driven data-intensive experiments using workflows enable Automation, Scaling, Adaptation and Provenance support (ASAP). However, there are still several challenges associated with the effective sharing, publication and reproducibility of such workflows due to the incomplete capture of provenance and lack of interoperability between different technical (software) platforms.
\newline
\textbf{Results}: Based on best practice recommendations identified from the literature on workflow design, sharing and publishing, we define a hierarchical provenance framework to achieve uniformity in the provenance and support comprehensive and fully re-executable workflows equipped with domain-specific information. To realise this framework, we present \textit{CWLProv}, a standard-based format to represent any workflow-based computational analysis to produce workflow output artefacts that satisfy the various levels of provenance. We utilize open source community-driven standards: interoperable workflow definitions in Common Workflow Language (CWL), structured provenance representation using the W3C PROV model, and resource aggregation and sharing as workflow-centric Research Objects (RO) generated along with the final outputs of a given workflow enactment. We demonstrate the utility of this approach through a practical implementation of \textit{CWLProv} and evaluation using real-life genomic workflows developed by independent groups.
\newline
\textbf{Conclusions}: The underlying principles of the standards utilized by \textit{CWLProv} enable semantically-rich and executable Research Objects that capture computational workflows with retrospective provenance such that any platform supporting CWL will be able to understand the analysis, re-use the methods for partial re-runs, or reproduce the analysis to validate the published findings.
\end{abstract}
\begin{keywords}
Provenance; Common Workflow Language; CWL; Research Object; RO; BagIt; Interoperability; Scientific Workflows; Containers
\end{keywords}
\end{frontmatter}
\section{Introduction} \label{sec:introduction}
Out of the many big data domains, genomics is considered \textit{``the most demanding''} with respect to all stages of the data lifecycle - from acquisition, storage, distribution and analysis \citep{stephens_2015}. As genomic data is growing at an unprecedented rate due to improved sequencing technologies and reduced cost, it is currently challenging to analyse the data at a rate matching its production. With data growing exponentially in size and volume, the practice to perform computational analyses using \textit{workflows} has overtaken more traditional research methods using ad-hoc scripts which were the typical modus operandi over the last few decades \citep{atkinson_2017, Spjuth2015}. Scientific workflow design and management has become an essential part of many computationally driven data-intensive analyses enabling Automation, Scaling, Adaptation and Provenance support (ASAP)\citep{cuevasvicenttn_2012}. Increased use of workflows has driven rapid growth in the number of computational data analysis Workflow Management Systems (WMSs), with hundreds of heterogeneous approaches now existing for workflow specification and execution \citep{cwl-existing-workflow-systems}. There is an urgent need for a common format and standard to define workflows and enable sharing of analysis results using a given workflow environment.
\begin{keypoints*} \label{contributions}
The contribution of this paper is fourfold:
\begin{itemize}
\item We have gathered best-practice recommendations from the existing literature, and reflect on the various authors' experiences with workflow management systems, especially with regard to factors to consider when a computational analysis is designed, executed and shared.
\item Combining the above with our own experiences from empirical studies \citep{kanwal_2017, wolstencroft_2013, moller_2017, Alterovitz2019}, we define a set of hierarchical levels of provenance tracking and method sharing, where the highest level represents complete understanding of the shared resources, supported by reproducibility and re-use of the methods from the lower levels.
\item Building on this provenance hierarchy, we define \textit{CWLProv} for the methodical representation of artefacts associated with a given workflow enactment in any study involving computational data-intensive analysis.
\item Finally, we demonstrate the utilisation of \textit{CWLProv} by extending an existing workflow execution engine \textit{cwltool} \citep{cwltool} to produce workflow-centric Research Objects generated as a result of a given workflow enactment. We illustrate this through a case study of using workflows designed by external (independent) developers, and subsequently evaluate the interoperability, reproducibility and completeness of the generated \textit{CWLProv} outcome.
\end{itemize}
\end{keypoints*}
Common Workflow Language (CWL) \citep{cwl} has emerged as a workflow definition standard designed to enable portability, interoperability and reproducibility of analyses between workflow platforms. CWL has been widely adopted by more than 20 organisations, providing an interoperable bridge overcoming the heterogeneity of workflow environments.
Whilst a common standard for workflow definition is an important step towards interoperable solutions for workflow specifications, sharing and publishing the \emph{results} of these workflow enactments in a common format is equally important. Transparent and comprehensive sharing of experimental designs is critical to establish trust and ensure authenticity, quality and reproducibility of any workflow-based research result. Currently there is no common format defined and agreed upon for interoperable workflow archiving or sharing \citep{Ivie2018}.
In this paper, we utilize open-source standards such as CWL together with related efforts such as Research Objects (ROs) \citep{belhajjame_2015}, BagIt \citep{bagit17} and PROV \citep{Missier2013} to define \textbf{\textit{CWLProv}}, a format for the interoperable representation of a CWL workflow enactment. We focus on production of a workflow-centric executable RO as the final result of a given CWL workflow enactment. This RO is equipped with the artefacts used in a given execution including the workflow inputs, outputs and, most importantly, the retrospective provenance. This approach enables the complete sharing of a computational analysis such that any future CWL-based workflow can be re-run given the best practices discussed later for software environment provision are followed.
The concept of workflow-centric ROs has been previously considered \citep{belhajjame_2015, hettne_2014, belhajjame_2012} for structuring the analysis methods and aggregating digital resources utilized in a given analysis. The generated ROs in these studies typically aggregate data objects, example inputs, workflow specifications, attribution details, details about the execution environment amongst various other elements. These previous efforts were largely tied to a single platform or a single WMS. \textit{CWLProv} aims to provide a platform-independent solution for workflow sharing, enactment and publication. All the standards and vocabularies used to design \textit{CWLProv} have an overarching goal to support a domain-neutral and interoperable solution (detailed in Section \textbf{\nameref{sec:standards}}).
The contributions of this work are summarized and listed in the \textbf{Key Points} section, and the remainder of this paper is structured as follows. In Section \textbf{\nameref{sec:background}} we discuss the key concepts and related work, followed by a summary of the published best practices and recommendations for workflow representation and sharing in Section \textbf{\nameref{sec:levels}}. This section also details the hierarchical provenance framework that we define to provide a principled approach for provenance capture and method sharing. Section \textbf{\nameref{sec:CWLProv}} introduces \textit{CWLProv} and outlines its format, structure and the details of the standards and ontologies it utilizes. Section \textbf{\nameref{sec:demo}} presents the implementation details of \textit{CWLProv} using \textit{cwltool} \citep{cwltool}, and Section \textbf{\nameref{sec:evaluation}} demonstrates and evaluates the implemented module for three existing workflow case studies. We discuss the challenges of interoperable workflow sharing and the limitations of the proposed solution, listing several possible future research directions, in Section \textbf{\nameref{sec:discussion}}, before finally drawing conclusions on the work as a whole in Section \textbf{\nameref{sec:conclusion}}.
\section{Background and Related Work} \label{sec:background}
This work draws upon a range of topics such as \textit{Provenance} and \textit{Interoperability}. We define these here to provide better context for the reader.
\subsection{Provenance} \label{sec:provenance}
A number of studies have advocated the need for complete provenance tracking of scientific workflows to ensure transparency, reproducibility, analytic validity, quality assurance and attribution of (published) research results \citep{herschel_2017}. The term \textit{Provenance} is defined by World Wide Web Consortium (W3C) \citep{PROVDM} as:
\begin{quote}
\centering
\textit{``Provenance is information about entities, activities, and people involved in producing a piece of data or thing, which can be used to form assessments about its quality, reliability or trustworthiness.''}
\end{quote}
Provenance for workflows is commonly divided into the following three categories: \textit{Retrospective Provenance}; \textit{Prospective Provenance} and \textit{Workflow Evolution}. \textit{Retrospective Provenance} refers to the detailed record of the implementation of a computational task including the details of every executed process together with comprehensive information about the execution environment used to derive a specific product. \textit{Prospective Provenance} refers to the ‘recipes’ used to capture a set of computational tasks and their order, e.g. the workflow specification \citep{clifford_2008}. This is typically given as an abstract representation of the steps (tools/data analysis steps) that are necessary to create a particular research output, e.g. a data artefact. \textit{Workflow Evolution} refers to tracking of any alteration in the existing workflow resulting in another version of the workflow that may produce either the same or different resultant data artefacts \citep{Casati1998}. In this work, our focus is mainly on improving representation and capture of \textit{Retrospective Provenance}.
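As a small, purely illustrative example of the PROV data model (not a fragment of \textit{CWLProv} itself), the following Python sketch uses the community \texttt{prov} package to state that an activity used an input entity, generated an output entity and was associated with an agent; all identifiers are invented. The retrospective provenance of a workflow enactment is, in essence, a much larger graph of statements of this kind.
\begin{verbatim}
# Minimal PROV example (illustrative only; identifiers are made up).
# Requires the community `prov` package: pip install prov
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace("ex", "http://example.org/")

data_in = doc.entity("ex:reads.fastq")       # input data artefact
data_out = doc.entity("ex:aligned.bam")      # generated data artefact
align = doc.activity("ex:alignment-run-1")   # one workflow step enactment
analyst = doc.agent("ex:researcher")         # responsible agent

doc.used(align, data_in)                     # the step consumed the input
doc.wasGeneratedBy(data_out, align)          # ... and produced the output
doc.wasAssociatedWith(align, analyst)        # who ran the step
doc.wasAttributedTo(data_out, analyst)       # attribution of the result

print(doc.get_provn())                       # PROV-N textual serialization
\end{verbatim}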
\subsection{Interoperability} \label{sec:interoperability}
The concept of interoperability varies in different domains. Here we focus on \textit{computational interoperability} defined as:
\begin{quote}
\centering
\textit{The ability of two or more components or systems to exchange information and to use the information that has been exchanged} \citep{interope55:online}.
\end{quote}
The focus of this study is to propose and devise methods to achieve \textit{syntactic}, \textit{semantic} and \textit{pragmatic} interoperability as defined in Levels of Conceptual Interoperability Model (LCIM)\citep{Tolk}. \textit{Syntactic} interoperability is achieved when a common data format for information exchange is unambiguously defined. The next level of interoperability, referred to as \textit{semantic} interoperability, is reached when the content of the actual information exchanged is unambiguously defined. Once there is an agreement about the format and content of the information, \textit{pragmatic} interoperability is achieved when the context, application and use of the shared information and data exchanged is also unambiguously defined. In the section \textbf{\nameref{sec:eval-results}}, we relate these general definitions to specific workflow applications with respect to workflow-centric ROs and describe to what extent these interoperability requirements are addressed.
\subsection{Related Work} \label{sec:relwork}
We focus on relevant studies and efforts trying to resolve the issue of availability of required resources used in a given computational analysis. In addition, we cover efforts directed towards provenance capture of workflow enactments. As these concepts have been around for a considerable time, we restrict our attention to scientific workflows and studies related to the bioinformatics domain.
\subsubsection{\textcolor{black}{Workflow Software Environment Capture}}
\textit{Freezing} and packaging the run-time environment to encompass all the software components and their dependencies used in an analysis is a recommended and widely adopted practice \citep{cohen2017scientific} especially after use of cloud computing resources where images and snapshots of the cloud instances are created and shared with fellow researchers \citep{howe}. Nowadays, preservation and sharing of the software environment e.g. in open access repositories, is becoming a regular practice in the workflow domain as well. Leading platforms managing infrastructure and providing cloud computing services and configuration on demand include DigitalOcean \citep{DigitalOcean}, Amazon Elastic Compute Cloud \citep{AmazonEC}, Google Cloud Platform \citep{GoogleCl} and Microsoft Azure \citep{Microsof}. The instances launched on these platforms can be saved as snapshots and published with an analysis study to later re-create an instance representing the computing state at analysis time.
Using \textit{``System-wide packaging''} for data-driven analyses, although simplest on the part of the workflow developers and researchers, has its own caveats. One of the notable issues is the size of the snapshot, as it captures everything in an instance at a given time; hence the size can range from a few gigabytes to many terabytes. To distribute research software and share execution environments, various light-weight and container-based virtualisation and package managers are emerging, including Docker, Singularity, Debian Med and Bioconda.
\textit{Docker}\citep{docker} is a lightweight container-based virtualisation technology that facilitates the automation of application development by archiving software systems and environments to improve portability of the applications on many common platforms including Linux, Microsoft Windows, Mac OS X and cloud instances. \textit{Singularity}\citep{kurtzer_2017} is also a cross-platform open source container engine specifically supporting High Performance Computing (HPC) resources. An existing Docker format software image can be imported and used by the Singularity container engine. \textit{Debian Med} \citep{debian-med} contributes packages of medical practice and biomedical research to the Debian Linux distribution, lately also including workflows \citep{moller_2017}. \textit{Bioconda}\citep{Grning2018} packages, based on the open source package manager Conda \citep{Conda}, are available for Mac OS X and Linux environments, improving the availability and portability of software used in the life science domain.
\subsubsection{\textcolor{black}{Data/Method Preservation, Aggregation \& Sharing}}
Preserving and sharing only the software environment is not enough to verify results of any computational analysis or re-use the methods used (e.g. workflows) with a different dataset. It is also necessary to share other details including data (example or the original), scripts, workflow files, input configuration settings, the hypothesis of the experiment and any/all trace/logging information related to ``what happened'', i.e. the retrospective provenance of the actual workflow enactment. The publishing of resources to improve state of scholarly publications is now supported by various online repositories, including Zenodo \citep{Zenodo}, GitHub \citep{GitHub}, myExperiment \citep{Goble2010} and Figshare \citep{figshare}. These repositories facilitate collaborative research, in addition to public sharing of source code and the results of a given analysis. There is however no agreed format that must be followed when someone shares artefacts associated with an analysis. As a result, the quality of the shared resources can range from a highly annotated, properly documented and complete set of artefacts, to raw data with undocumented code and incomplete information about the analysis as a whole. Individual organisations or groups might provide a set of ``recommended practices'', e.g. in readme files, to attempt to maintain the quality of shared resources. The initiative \textit{Code as a Research Object} \citep{CodeasaR} is a joint project between Figshare, GitHub and Mozilla Science Lab \citep{Mozilla} and aims to archive any GitHub code repository to Figshare and produce a Digital Object Identifier (DOI) to improve the discovery of resources\footnote{For the source code that support this work we have used a similar publishing feature with Zenodo.}.
\textit{Reprozip} \citep{Chirigati2016} aims to resolve portability issues by identifying and packaging all dependencies in a self-contained package which, when unpacked and executed on another system (with Reprozip installed), should reproduce the methods and results of the analysis. Each package also contains a human-readable configuration file containing provenance information obtained by tracing system calls during execution. The corresponding provenance trace is, however, not formatted using the existing open standards established by the community.
Several platform-dependent studies have targeted extensions to existing standards by implementing the Research Object model and improving the aggregation of resources. \citet{belhajjame_2015} proposed the application of ROs to develop workflow-centric ROs containing data and metadata to support the understandability of the utilized methods (in this case workflow specifications). They explored five essential requirements for workflow preservation and identified data and metadata that could be stored to satisfy these requirements. These requirements include providing example data, preserving workflows with provenance traces, annotating workflows, tracking the evolution of workflows and packaging the auxiliary data and information with workflows. They proposed extensions to existing ontologies such as Object Reuse and Exchange (ORE), the Annotation Ontology (AO) and PROV-O, with four additional ontologies to represent workflow-specific information. However, as stated in the paper, the scope of the proposed model at that time was not focused on the interoperability of heterogeneous workflows, as it was demonstrated for a workflow specific to the Taverna WMS using myExperiment, which makes it quite platform-dependent.
A domain-specific solution is proposed by \citet{gomezperez_2017}, who extend the RO model to equip workflow-centric ROs with information catering for the specific needs of the Earth Science community, resulting in enhanced discovery and reusability by experts. They demonstrated that the principles of ROs can support extensions to generate aggregated resources leveraging domain-specific knowledge. \citet{hettne_2014} used three genomic workflow case studies to demonstrate the utilisation of ROs to capture methods and data, supporting querying and the extraction of useful information about the scientific investigation under observation. The solution was tightly coupled with the Taverna WMS and hence, if shared, would not be reproducible outside of the Taverna environment. Other notable efforts to use ROs for workflow preservation and method aggregation include \citep{wolstencroft_2013} in systems biology, \citep{Custovic799} in clinical settings and \citep{Alterovitz2019} in precision medicine.
\subsubsection{\textcolor{black} Provenance Capture \& Standardization}
A range of standards for provenance representation have been proposed. Many studies have emphasized the need for provenance focusing on aspects such as scalability, granularity, security, authenticity, modelling and annotation \citep{herschel_2017}. They identify the need to support standardized dialogues to make provenance interoperable. Many of these were used as inputs to initial attempts at creating a standard Provenance Model to tackle the often inconsistent and disjointed terminology related to provenance concepts. This ultimately resulted in the specification of the \textit{Open Provenance Model} (OPM)\citep{Moreau2008} together with an open-source model for the governance of OPM \citep{moreau2009governance}. Working towards similar goals of interoperability and standardization of provenance for web technologies, the World Wide Web Consortium (W3C) Provenance Incubator Group \citep{W3CProvWorkingGroup} and the authors of OPM together set the fourth provenance challenge at the International Provenance and Annotation Workshop, 2010 (IPAW'10) that later resulted in \textit{PROV}, a family of documents serving as the conceptual model for provenance capture, its representation, sharing and exchange over the Web \citep{Moreau2015} regardless of the domain or platform. Since then, a number of studies have proposed extensions to this domain-neutral standard. The model is general enough to be adapted to any field and flexible enough to allow extensions for specialized cases.
\citet{michaelides_2016} presented a domain-specific PROV-based solution for retrospective provenance to support portability and reproducibility of a statistical software suite. They captured the essential elements from the log of a workflow enactment and represented them using an intermediate notation. This representation was later translated to PROV-N and used as the basis for the PROV Template System. A Linux-specific system provenance approach was proposed by \citet{pasquier_2017}, who demonstrated retrospective provenance capture at the system level. Another project, \textit{UniProv}, is working to extract information from Unicore middleware and transform it into a PROV-O representation to facilitate the back-tracking of workflow enactments \citep{giesler_2017}. Other notable domain-specific efforts leveraging the established standards to record provenance and context information are \textit{PROV-man} \citep{benabdelkader_2015}, PoeM \citep{PoeM} and micropublications \citep{Clark2014}. Platforms such as VisTrails and Taverna have built-in retrospective provenance support. \textit{Taverna} \citep{wolstencroft_2013} implements an extensive provenance capture system, \textit{TavernaProv} \citep{tavernaprov}, utilising both the PROV ontologies and ROs aggregating the resources used in an analysis. \textit{VisTrails} \citep{freire_2012} is an open source project supporting platform-dependent provenance capture, visualisation and querying for extraction of required information about a workflow enactment. \citet{Chirigati2016} provide an overview of PROV terms and how they can be translated from the VisTrails schema and serialized to PROV-XML. \textit{WINGS} \citep{wings2011} can report fine-grained workflow execution provenance as Linked Data using the OPMW ontology \citep{garijo_2017}, which builds on both PROV-O and OPM.
All these efforts are fairly recent and use a standardized approach to provenance capture and hence are relevant to our work on the capture of retrospective provenance. However, our aim is a domain-neutral and platform-independent solution that can be easily adapted for any domain and shared across different platforms and operating systems.
As evident from the literature, efforts are in progress to resolve the issues associated with effective and complete sharing of computational analyses, including both the results and provenance information. These studies range from highly domain-specific solutions and platform-dependent objects to open source, flexible, interoperable standards. CWL has achieved widespread adoption as a workflow definition standard and hence is an ideal candidate for portable workflow definitions. The next section investigates existing studies focused on workflow-centric science, and summarises the best practice recommendations put forward in these studies. From these we define a hierarchical provenance and resource sharing framework.
\begin{figure*} [t!]
\centering
\includegraphics[width=.9\textwidth]{images/recommendations2.png}
\captionsetup{justification=centering}
\caption{Recommendations from Table \ref{tab:recommendation:wide} classified into these categories} \label{fig:recommendationclasses}
\end{figure*}
\begin{table*}[!htbp]
\caption{Summarized recommendations and justifications from literature covering best practices on reproducibility, accessibility, interoperability and portability of workflows}\label{tab:recommendation:wide}
\begin{tabularx}{\linewidth}{p{1.4cm} L L}
\toprule
{\textbf{R.no}} & {\textbf{Recommendations}} & {\textbf{Justifications}}\\
\midrule
R1 \newline\smaller{parameters} & Save and share all parameters used for each software executed in a given workflow (including default values of parameters used) \citep{Nekrutenko2012, garijo_2013, garijo_2017, sandve_2013}. & Impacts on reproducibility of results since different inputs and configurations of the software can produce different results. Different versions of a tool might upgrade the default values of the parameters. \\ \midrule
R2 \newline\smaller{automate} & Avoid manual processing of data and if using \textit{shims} \citep{Mohan2014} then make these part of the workflow to fully automate the computational process \citep{Nekrutenko2012, sandve_2013}. & This ensures the complete capture of the computational process without broken links so that the analysis can be executed without need for performing manual steps. \\ \midrule
R3 \newline\smaller{intermediate} & Include intermediate results where possible when publishing an analysis \citep{garijo_2013, garijo_2017, sandve_2013}. & Intermediate data products can be used to inspect and understand shared analysis when re-enactment is not possible. \\ \midrule
R4 \newline\smaller{sw-version} & Record the exact software versions used \citep{Nekrutenko2012, sandve_2013}. & This is necessary for reproducibility of results as different software versions can produce different results. \\ \midrule
R5 \newline\smaller{data-version} & If using public data (reference data, variant databases), then it is necessary to store and share the actual data versions used \citep{Spjuth2015, kanwal_2017, Nekrutenko2012, sandve_2013}. & This is needed as different versions of data, e.g. the human reference genome or variant databases, can result in slightly different results for the same workflow. \\ \midrule
R6 \newline\smaller{annotation} & Workflows should be well-described, annotated and offer associated metadata. Annotations such as user contributed tags and versions should be assigned to workflows and shared when publishing the workflows and associated results \citep{belhajjame_2015, belhajjame_2012, garijo_2017, Littauer2012, stodden_2016}. & Metadata and annotations improve the understandability of the workflow, facilitate independent re-use by someone skilled in the field, make workflows more accessible and hence promote the longevity of the workflows. \\ \midrule
R7 \newline\smaller{identifier} & Use and store stable identifiers for all artefacts including the workflow, the datasets and the software components \citep{Littauer2012, stodden_2016}. & Identifiers play an important role in the discovery, citation and accessibility of resources made available in open-access repositories. \\ \midrule
R8 \newline\smaller{environment} & Share the details of the computational environment \citep{belhajjame_2015, kanwal_2017, stodden_2016}. & Such details support requirements analysis before any re-enactment or reproducibility is attempted. \\ \midrule
R9 \newline\smaller{workflow} & Share workflow specifications/descriptions used in the analysis \citep{belhajjame_2015, garijo_2013, garijo_2017, stodden_2016, Stodden2014}. & The same workflow specifications can be used with different datasets thereby supporting re-usability. \\ \midrule
R10 \newline\smaller{software} & Aggregate the software with the analysis and share this when publishing a given analysis \citep{belhajjame_2015, kanwal_2017, stodden_2016, Stodden2014, garijo_2017}. & Making software available reduces dependence on third party resources and as a result minimizes \textit{workflow decay} \citep{Zhao2012}. \\ \midrule
R11 \newline\smaller{raw-data} & Share raw data used in the analysis \citep{belhajjame_2015, garijo_2013, garijo_2017, stodden_2016, Stodden2014}. & When someone wants to validate published results, availability of data supports verification of claims and hence establishes trust in the published analysis. \\ \midrule
R12 \newline\smaller{attribution} & Store all attributions related to data resources and software systems used \citep{garijo_2017, Stodden2014}. & Accreditation supports proper citation of resources used. \\ \midrule
R13 \newline\smaller{provenance} & Workflows should be preserved along with the provenance trace of the data and results \citep{belhajjame_2015, belhajjame_2012, garijo_2017, sandve_2013, Stodden2014}. & A provenance trace provides a historical view of the workflow enactment, enabling end users to better understand the analysis retrospectively. \\ \midrule
R14 \newline\smaller{diagram} & Data flow diagrams of the computational analysis using workflows should be provided \citep{kanwal_2017, garijo_2013}. & These diagrams are easy to understand and provide a human readable view of the workflow. \\ \midrule
R15 \newline\smaller{open-source} & Open source licensing for methods, software, code, workflows and data should be adopted instead of proprietary resources \citep{kanwal_2017, garijo_2013, sandve_2013, stodden_2016, Stodden2014, Gymrek2016}. & This improves availability and legal re-use of the resources used in the original analysis, while restricted licenses would hinder reproducibility. \\ \midrule
R16 \newline\smaller{format} & Data, code and all workflow steps should be shared in a format that others can easily understand preferably in a system neutral language \citep{belhajjame_2015, garijo_2013, Gymrek2016}. & System neutral languages help achieve interoperability and make an analysis understandable. \\ \midrule
R17 \newline\smaller{executable} & Promote easy execution of workflows without making significant changes to the underlying environment \citep{Spjuth2015}. & In addition to helping reproducibility, this enables adapting the analysis methods to other infrastructures and improves workflow portability. \\ \midrule
R18 \newline\smaller{resource-use} & Information about compute and storage resources should be stored and shared as part of the workflow \citep{kanwal_2017}. & Such information can assist users in estimating the required resources needed for an analysis and thereby reduce the amount of failed executions. \\ \midrule
R19 \newline\smaller{example} & Example input and sample output data should be preserved and published along with the workflow-based analysis \citep{belhajjame_2015, Zhao2012}. & This information enables more efficient test runs of an analysis to verify and understand the methods used. \\ \midrule
\bottomrule
\end{tabularx}
\begin{tablenotes}
\item This list is not exhaustive, other studies have identified separate issues (e.g. lab work provenance and data security) that are beyond the scope of this work.
\end{tablenotes}
\end{table*}
\section{Levels of Provenance and Resource Sharing} \label{sec:levels}
Various studies have empirically investigated the role of automated computational methods in the form of workflows and published best practice recommendations to support workflow design, preservation, understandability and re-use. We summarise a number of these recommendations and their justifications in Table \ref{tab:recommendation:wide}, where each recommendation addresses a specific requirement of workflow design and sharing. These recommendations can be clustered into broad themes as shown in Figure \ref{fig:recommendationclasses}. This classification can be done in more than one way, e.g. according to how these recommendations support each FAIR dimension \citep{wilkinson_2016}. In this study, we have focused on categories with respect to workflow design, prospective provenance, data sharing, retrospective provenance, the computational environment required/used for an analysis and, lastly, better findability and understandability of all shared resources.
Sharing \textit{``all artefacts''} from a computational experiment (following all recommendations and best practices) is a demanding task without informed guidance. It requires a consolidated understanding of the impact of the many different artefacts involved in that analysis. This places extra effort on workflow designers, (re-)users, authors and reviewers, and raises expectations on the community as a whole. Given the numerous WMSs and the differences in how each system deals with provenance documentation, representation and sharing of these artefacts, the granularity of the provenance information preserved will vary for each workflow definition approach. Hence, devising one universal but technology-specific solution for provenance capture and the related resource sharing is impossible. Instead we propose a generic framework of provenance in Figure \ref{fig:levels} that all WMSs can benefit from and conform to with minimal technical overhead.
The recommendations in Table \ref{tab:recommendation:wide} aid our understanding and help define this framework by classifying the granularity of the provenance and related artefacts, where the uppermost level exhibits comprehensive, reproducible, understandable and provenance-rich computational experiment sharing. The purpose of this framework is threefold. First, because of its generic nature it brings uniformity to the provenance granularity across various WMSs belonging to different workflow definition approaches. Second, it provides comprehensive and well-defined guidelines that can be used by researchers to conduct a principled analysis of the provenance of any published study. Third, due to its hierarchical nature, the framework can be leveraged by workflow authors to progress incrementally towards the most transparent workflow-centric analysis. Overall, this framework will help achieve a uniform level of provenance and resource sharing for a given workflow-centric analysis, supporting the provenance applications associated with each level.
Our proposed provenance levels are ordered from low granularity to higher degrees of specificity. In brief, \textbf{Level 0} is unstructured information about the overall workflow enactment, \textbf{Level 1} adds structured retrospective provenance, access to primary data and executable workflows, \textbf{Level 2} enhances the \textit{white-box} provenance for individual steps, and \textbf{Level 3} adds domain-specific annotations for improved understanding. These levels are described in the following sub-sections and mapped to the requirements in Table \ref{tab:recommendation:wide} that these levels aim to satisfy.
\begin{figure*} %[b!]
\centering
\includegraphics[width=.9\textwidth]{images/ProvenanceLevels}
\captionsetup{justification=centering}
\caption{Levels of Provenance and resource sharing and their applications}\label{fig:levels}
\end{figure*}
\subsection{Level 0} \label{sec:level0}
To achieve this level, researchers should share the workflow specifications, the input parameters used for a given workflow enactment, raw logs and output data, preferably through an open-access repository. This is the minimum information that could be shared, without any extra effort to support seamless reuse or understandability of a given analysis. Sharing artefacts at this level only requires uploading the associated resources to a repository, without necessarily providing any supporting metadata or provenance information. Information captured at \textit{Level 0} is the bare minimum that can be used for result interpretation.
Workflow definitions based on \textit{Level 0} can also potentially be re-purposed for other analyses. As argued by Ludäscher, a well-written scientific workflow and its graphical representation is itself a source of prospective provenance, giving the user an idea of the steps taken and the data produced \citep{ludascher2016brief}. Therefore a well-described workflow specification indirectly provides prospective provenance without aiming for it. In addition to the textual workflow specification, its graphical representation should also be shared, if available, for better understandability, fulfilling \textit{R14-diagram}. At this level, reproducing the workflow would only be possible if the end-user devotes extra effort to understanding the shared artefacts and carefully recreating the execution environment. As open access journals frequently require availability of methods and data, many published studies now share workflow specifications and optionally the outputs, thereby achieving \textit{Level 0} and specifically satisfying \textit{R1-parameters} and \textit{R9-workflow} (Table \ref{tab:recommendation:wide}). In addition, the shared resources should have an open licence starting from \textit{Level 0}, and this practice, proposed by \textit{R15-open-source}, should be adopted at each higher level.
\subsection{Level 1} \label{sec:level1}
At \textit{Level 1}, \textit{R4-sw-version}, \textit{R5-data-version}, \textit{R12-attribution} and \textit{R13-provenance} should be satisfied by providing retrospective provenance of the workflow enactment, i.e. a structured representation of machine-readable provenance which can answer questions such as ``what happened'', ``when it happened'', ``what was executed'', ``what was used'', ``who did this'' and ``what was produced''. Seamless re-enactment of the workflow should be supported at this level. This is only possible when, along with the provenance information, \textit{R8-environment} and \textit{R10-software} are satisfied, either by packaging the software environment for analysis sharing or by providing enough information about the software environment to guide the user to reliably re-enact the workflow. Hence \textit{R17-executable} should be satisfied, making it possible for end users to re-enact the shared analyses without making major changes to the underlying software environment.
In addition to the software availability and retrospective provenance, access to input data should also be provided fulfilling \textit{R11-raw-data}. This data can be used to re-enact the published methods or utilized in a different analysis, e.g. for performance comparison of methods. At \textit{Level 1}, it is preferable to provide content-addressable data artefacts such as input, output and intermediate files, avoiding local paths and file names to make a given workflow executable outside its local environment. The intermediate data artefacts should also be provided to facilitate inspection of all step results, hence satisfying \textit{R3-intermediate}. All resources, including workflow specifications and provenance, should be shared in a format that is understandable across platforms, preferably in a technology-neutral language as proposed by \textit{R16-format}.
While software and data can be digitally captured, the hardware and infrastructure requirements also need to be captured to fulfill \textit{R18-resource-use}. This kind of information can naturally vary widely with runtime environments, architectures and data sizes \citep{Bubak_2013}, as well as rapidly becoming outdated as hardware and cloud offerings evolve. Nevertheless a snapshot of the workflow's overall execution resource usage for an actual run can be beneficial to give a broad overview of the requirements, and can facilitate cost-efficient re-computation by taking advantage of spot-pricing for cloud resources \citep{Angiuoli_2011}.
\subsection{Level 2} \label{sec:level2}
It is common practice in scientific workflows to modularize the workflow specifications by separating related tasks into ``sub-workflows'' or ``nested workflows'' \citep{cohen2017scientific}, to be incorporated and used in other workflows or to be assigned to compute and storage resources in the case of distributed computing \citep{chen2011partitioning}. These modular solutions promote understanding and re-usability of the workflows, as researchers are inclined to use these modules instead of the workflow as a whole for their own computational experiments. An example of a sub-workflow is the mandatory ``pre-processing'' \citep{GATKBP} needed for the Genome Analysis ToolKit (GATK) best practice pipelines used for genomic variant calling. These steps can be separated into a sub-workflow to be used before any variant calling pipeline, be it somatic or germline.
At \textit{Level 1}, retrospective provenance is coarse-grained and as such, there is no distinction between workflows and their sub-workflows. Ludäscher \citep{ludascher2016brief} characterises workflow provenance as \textit{black-box} and database provenance as \textit{white-box}. The reasoning behind this distinction is that the steps in a workflow, especially on graphical user interface-based platforms, often add levels of abstraction/obscurity to the actual tasks being implemented. In our previous work we used an empirical case study to demonstrate that declarative approaches to workflow definition resulted in transparent workflows with the fewest assumptions \citep{kanwal_2017}. This resolves the black box/white box issue to some extent, but to further support research transparency, we propose to share retrospective provenance logs for each nested/sub-workflow, making the details of a workflow enactment as explicit as possible and moving a step closer to \textit{white-box} provenance. These provenance logs will support the inspection and automatic re-enactment of targeted components of a workflow, such as a single step or a sub-workflow, without necessarily having to re-enact the full analysis. Some existing make-like systems such as Snakemake support partial re-enactments but typically rely on fixed file paths for input data and require manual intervention to provide the specific directory structure. With detailed provenance logs and the corresponding content-addressable data artefacts, partial re-runs can be achieved with automatic generation of the input configuration settings.
In addition, we propose to include \textit{permalinks} at \textit{Level 2} to identify the workflows and their individual steps; these facilitate the inspection of each step and aim to improve the longevity of the shared resources, hence supporting \textit{R7-identifier}. Improving \textit{R18-resource-use} for \textit{Level 2} would include resource usage per task execution. Along with execution times, this information can be useful for identifying bottlenecks in a workflow and for more complex calculations in cost optimization models \citep{Malawski_2013}. At this provenance level, resource usage data will however also become noisier and highly dependent on scheduling decisions by the workflow engine, e.g. sensitivity to cloud instance reuse or co-use for multiple tasks, or variation in data transfers between tasks on different instances. Thus \textit{Level 2} resource usage information should be further processed with statistical models for it to be meaningful for a user keen to estimate the resource requirements for re-enactment of a given analysis.
\subsection{Level 3} \label{sec:level3}
Levels 0-2 are generic and domain-neutral, and can apply to any scientific workflow. However, domain-specific information/metadata about data and processes plays an important role in better understanding of the analysis and exploitation of provenance information, e.g. for meaningful queries to extract information relevant to the domain under consideration \citep{Alper2018, Gaignard2014}. Adding domain-specific metadata, e.g. file formats, user-defined tags and other annotations, to generic retrospective provenance can improve the \textit{white-boxness} by providing domain context to the analysis, as described in \textit{R6-annotation}. Annotations can range from adding textual descriptions and tags to marking data with more systematic and well-defined domain-specific ontologies such as EDAM \citep{Ison2013} and BioSchemas \citep{Michel_2018} in the case of bioinformatics workflows. Some studies also propose to provide example or test data sets, which helps in analyzing the shared methods and verifying their results (as described in \textit{R19-example}).
At \textit{Level 3}, the information from previous levels combined with specific metadata about data artefacts facilitates higher level classification of workflow steps into \textit{motifs} \citep{garijo_2014} such as data retrieval, pre-processing, analysis and visualisation. This level of provenance, resource aggregation and sharing can provide a researcher-centric view of data and enable users to re-enact a set of steps or full workflow by providing filtered and annotated view of the execution. This can be non-trivial to achieve with mainstream methods of workflow definition and sharing, as it requires guided user annotations with controlled vocabularies, but this can be simplified by reusing related tooling from existing efforts like BioCompute Objects \citep{Alterovitz2019} and DataCrate \citep{datacrate_2018}.
Communicating resource requirements (\textit{R18-resource-use}) at \textit{Level 3} would involve domain-specific models for hardware use and cost prediction, as suggested for dynamic cloud costing \citep{biosimspacewebinar} in \textit{BioSimSpace} \citep{biosimspace}, or predicting assembler and memory settings through machine learning of variables like source biome, sequencing platform, file size, read count and base count in the \textit{European Bioinformatics Institute (EBI) Metagenomics} pipeline \citep{mitchell_2017}. For robustness, such models typically need to be derived from resource usage across multiple workflow runs with varied inputs, e.g. by a multi-user workflow platform. Taking advantage of \textit{Level 3} resource usage models might require pre-processing workflow inputs and performing calculations in an environment like R or Python, and so we recommend that such models are provided as separate sidecar workflows for interoperable execution before the main workflow.
By explicitly enumerating the levels of provenance, it should be possible to quantify and directly assess the effort required to re-use a workflow and reproduce experiments. A similar effort, \textit{5-star Open Data} \citep{5star}, strongly advocates open-licensed structured representation, the use of stable identifiers for data sharing and following Linked Data principles to cross-relate data. One challenge in achieving the Open Data stars is the need for tool support during data processing. In our framework we propose systematic workflow-centric resource sharing using structured Linked Data representation, including the recording of the executed data operations. Hence, our effort complements the already proposed 5-star Open Data principles and contributes to further understanding by sharing the computational method following the same principles.
Requiring researchers to achieve the above defined levels individually is unrealistic without guidance and direct technical support. Ideally, the conceptual meaning of these levels would be translated into a practical solution utilising the available resources. However, given the heterogeneity of workflow definition approaches, it is expected that the proposed framework, when translated into practical solutions, will also naturally result in varying workflow-centric solutions tied to specific WMSs. To support interoperability of the workflow-centric analysis achieving the provenance levels, we propose \textbf{\textit{CWLProv}}, a format for annotating resource aggregations equipped with retrospective provenance. The next section describes \textit{CWLProv} and the associated standards that are applied in this process.
\section{CWLProv 0.6.0 and utilized standards}\label{sec:CWLProv}
Here we present \textit{CWLProv}, a format for the methodical representation of workflow enactments and associated artefacts, and for capturing and using retrospective provenance information. Keeping in view the recommendations from Table \ref{tab:recommendation:wide}, for example \textit{R15-open-source} and \textit{R16-format}, we leverage \textbf{open-source}, \textbf{domain-independent}, \textbf{system-neutral}, \textbf{interoperable} and, most importantly, \textbf{community-driven} standards as the basis for the design and formatting of reproducible and interoperable workflow-based ROs. The profile description in this section corresponds to \textit{CWLProv} 0.6.0 \citep{cwlprov} (see \url{https://w3id.org/cwl/prov} for the latest profile).
\subsection{Applied Standards and Vocabularies} \label{sec:standards}
We follow the recommendation \textit{``Reuse vocabularies, preferably standardized ones''} \citep{reusevocab} from best practices associated with data sharing, representation and publication on the web to achieve consensus and interoperability of workflow-based analyses. Specifically we integrate the \emph{Common Workflow Language} (CWL) for workflow definition, \emph{Research Objects} (ROs) for resource aggregation and the \emph{PROV-Data Model} (PROV-DM) to support the retrospective provenance associated with workflow enactment. The key properties and principles of these standards are described below.
\subsubsection{\textcolor{black}Common Workflow Language (CWL)}
Common Workflow Language \citep{cwl} provides declarative constructs for workflow structure and command line tool interface definition. It makes minimal assumptions about base software dependencies, configuration settings, software versions, parameter settings or indeed the execution environment more generally \citep{kanwal_2017}. The CWL object model supports comprehensive recording and capture of information for workflow design and execution. This can subsequently be published as structured information alongside any resultant analysis using that workflow.
CWL is a community-driven standard effort that has been widely adopted by many workflow design and execution platforms, supporting interoperability across a set of diverse platforms. Current adopters include Toil, Arvados, Rabix \citep{kaushik_2017}, Cromwell \citep{cromwell}, REANA, and Bcbio \citep{guimera_2012} with implementations for Galaxy, Apache Taverna, and AWE currently in progress.
\begin{figure*} [t!]
\centering
\includegraphics[width=.7\textwidth]{images/twostep}
\captionsetup{justification=centering}
\caption{Left: A snapshot of part of a GATK workflow described using CWL. Two steps named as \textit{bwa-mem} and \textit{samtools-view} are shown where the former links to the tool description executing the underlying tool (BWA-mem for alignment) and provides the output used as input for samtools. Right: Snapshot of BWA-mem.cwl and the associated Docker requirements for the exact tool version used in the workflow execution.}\label{fig:bwa-mem}
\end{figure*}
A workflow in CWL is composed of ``steps'', where each step refers either to a command line tool (also specified using CWL) or to another workflow specification, incorporating the concept of ``sub-workflows''. Each ``step'' is associated with ``inputs'' comprising any data artefacts required for the execution of that step (Figure \ref{fig:bwa-mem}). As a result of the execution of each step, ``outputs'' are produced which can become (part of) the ``inputs'' for the next steps, making the execution data-flow oriented. CWL is not tied to a specific operating system or platform, which makes it an ideal approach for interoperable workflow definitions.
\subsubsection{ \textcolor{black}Research Object (RO)}
A Research Object encapsulates all of the digital artefacts associated with a given computational analysis contributing towards preservation of the analysis \citep{bechhofer_2013}, together with their metadata, provenance and identifiers.
The aggregated resources can include but are not limited to: input and output data for analysis results validation; computational methods such as command line tools and workflow specifications to facilitate workflow re-enactment; attribution details regarding users; retrospective as well as prospective provenance for better understanding of workflow requirements, and machine-readable annotations related to the artefacts and the relationships between them. The goal of ROs is to make any published scientific investigation and the produced artefacts \textit{``interoperable, reusable, citable, shareable and portable''}.
The three core principles \citep{roprinciples} of the RO approach are to support ``Identity'', ``Aggregation'' and ``Annotation'' of research artefacts. They aim to make the tightly-coupled, interrelated and well-understood aggregated resources involved in a computational analysis accessible as identifiable objects, e.g. using unique (persistent) identifiers such as DOIs and/or ORCIDs. The RO approach is well aligned with the idea of interoperable and platform-independent solutions for provenance capture of workflows because of its domain-neutral and platform-independent nature.
While ROs can be serialized in several different ways, in this work we have reused the BDBag approach based on \textit{BagIt} (see box), which has been shown to support large-scale workflow data \citep{chard_2016}. This approach is also compatible with data archiving efforts from the NIH Data Commons, Library of Congress and the Research Data Alliance. The specialized workflow-centric RO in this study encompasses the components mentioned in the previous paragraph annotated with various targeted tools and a PROV-based \textit{Workflow provenance profile} to capture the detailed retrospective provenance of the CWL workflow enactment.
\subsubsection{\textcolor{black}PROV Data Model (PROV-DM)}
The World Wide Web Consortium (W3C) developed \emph{PROV}, a suite of specifications for unified/interoperable representation and publication of provenance information on the Web. The underlying conceptual PROV Data Model (PROV-DM) \citep{PROVDM} provides a domain-agnostic model designed to capture fundamental features of provenance with support for extensions to integrate domain-specific information (Figure \ref{fig:prov-dm}).
\begin{figure} [!b]
\centering
\includegraphics[width=\linewidth]{images/key-concepts.pdf}
\captionsetup{justification=centering}
\caption{Core concepts of the PROV Data Model. \\Adapted from W3C PROV Model Primer \citep{PROVModel}. }\label{fig:prov-dm}
\end{figure}
We utilize mainly two serialisations of PROV for this study: PROV-Notation (PROV-N) \citep{moreau_2013} and PROV-JSON \citep{huynh_2013}. PROV-N is designed to serialise PROV-DM instances by formally representing the information using a simplified textual syntax that improves human readability. PROV-JSON is a lightweight interoperable representation of PROV assertions using JavaScript constructs and data types. The key design and implementation principles of these two serialisations of PROV are in line with the goals of this study, i.e. understandability and interoperability, and hence they are a natural choice to support the design of an adaptable provenance profile. For completeness we also explored serialising the provenance graph as PROV-XML \citep{PROVXML} as well as PROV-O \citep{PROVO}, which provides a mapping to Linked Data and ontologies, with potential for rich queries and further integration using a triple store. One challenge here is the wide variety of OWL and RDF formats; we opted for Turtle, N-Triples and JSON-LD, but concluded that requiring all of these PROV and RDF serialisations would be an unnecessary burden for other implementations of \textit{CWLProv}.
\subsection{\textit{CWLProv} Research Object} \label{sec:cwlprovRO}
The provenance framework defined in the previous section can be satisfied by using a structured approach to share the identified resources. In this section, we define the representation of the data and metadata to be shared for a given workflow enactment, stored as multiple files in their native formats. The folder structure of the \textit{CWLProv} Research Object complies with the \emph{BagIt} \citep{bagit17} format such that its content and completeness can be verified with any BagIt tool or library (see box \textbf{What is BagIt?}). The files used and generated by the workflow are here considered the \emph{data payload}; the remaining directories contain \emph{metadata} describing how the workflow results were created. We organised the aggregated resources into various collections for better understanding and accessibility of a CWL workflow execution (Figure \ref{fig:RO-format}).
\begin{figure*} %[b!]
\centering
\includegraphics[width=.7\textwidth]{images/RO-structure-NEW-file}
\captionsetup{justification=centering}
\caption{Schematic representation of the aggregation and links between the components of a given workflow enactment. Layers of execution are separated for clarity. The workflow specification and command line tool specifications are described using CWL. Each individual command line tool specification can optionally interact with Docker to satisfy software dependencies. [A] The RO layer shows the structure of the RO including its content and interactions with different components in the RO and [B] the CWL layer. }\label{fig:RO-format}
\end{figure*}
\subsubsection{\textcolor{black}data/}
\texttt{data/} is the \emph{payload} collection of the Research Object; in \textit{CWLProv} this contains all input and output files used in a given workflow enactment. Data files should be labelled and identified based on a hashed checksum rather than on their file paths during workflow execution. This use of \emph{content-addressable} references and storage \citep{services_2012} simplifies identifier generation for data and helps to avoid local dependencies, e.g. hard-coded file names. However, the workflow execution engine might use other unique identifiers for file objects. It is advised to re-use such identifiers to avoid redundancy and to comply with the system/platform used to run the workflow.
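As an illustration of this content-addressed layout, the following minimal Python sketch copies an input file into the payload under a checksum-derived path; the helper name \texttt{add\_to\_payload} and the choice of SHA-1 are illustrative assumptions rather than requirements of \textit{CWLProv}.
\begin{verbatim}
import hashlib
import shutil
from pathlib import Path

def add_to_payload(ro_dir, input_file):
    # Copy input_file into the RO payload under a checksum-derived
    # path such as data/b1/b1946ac9... (SHA-1 chosen here purely
    # for illustration; the digest algorithm is an assumption).
    digest = hashlib.sha1(Path(input_file).read_bytes()).hexdigest()
    target = Path(ro_dir) / "data" / digest[:2] / digest
    target.parent.mkdir(parents=True, exist_ok=True)
    if not target.exists():  # identical content is stored only once
        shutil.copyfile(input_file, target)
    return target            # content-addressed reference
\end{verbatim}
Because identical content always maps to the same payload path, a file used by several steps is stored only once and can be referenced independently of its original location.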
\subsubsection{ \textcolor{black}workflow/}
\textit{CWLProv} ROs must include a system-independent executable version of the workflow under the \texttt{workflow/} folder. When using CWL, this sub-folder must contain the complete executable \emph{workflow specification} file, an \emph{input file object} with the parameter settings used to enact the workflow and an \emph{output file object} generated as a result of the workflow enactment. The latter contains details of the workflow outputs, such as the data files produced by the workflow, but may exclude intermediate outputs.
To ensure RO portability, these file objects may not exactly match the file names at enactment time, as the absolute paths of the inputs are recommended to be replaced with relativized \emph{content-addressed} paths within the RO, e.g. \texttt{/home/alice/exp15/sequence.fa} is replaced with \texttt{../data/b1/b1946ac92492d2347c6235b4d2611184}. The input file object should also capture any dependencies of the input data files, such as \texttt{.bam.bai} indexes neighbouring \texttt{.bam} (\emph{Binary Alignment Map}) files. Any folder objects should be expanded to list contained files and their file names at time of enactment.
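A minimal sketch of such path relativisation is given below; it assumes the input object has already been parsed into Python dictionaries and that each \texttt{File} entry carries a \texttt{checksum} field of the form \texttt{sha1\$hex}, so the exact field names and digest format should be treated as assumptions of the sketch.
\begin{verbatim}
def relativize(obj):
    # Rewrite CWL File entries in an input object so that their
    # locations point into the RO payload (sketch; assumes the files
    # were already copied into data/ and carry a checksum field of
    # the form "sha1$<hex>").
    if isinstance(obj, dict):
        if obj.get("class") == "File" and "checksum" in obj:
            digest = obj["checksum"].split("$", 1)[1]
            obj["location"] = "../data/%s/%s" % (digest[:2], digest)
            obj.pop("path", None)  # drop the host-specific path
        for value in obj.values():
            relativize(value)
    elif isinstance(obj, list):
        for item in obj:
            relativize(item)
\end{verbatim}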
In the case of a CWL workflow, \textit{cwltool} can aggregate the CWL description and any referenced external descriptions (such as sub-workflows or command line tool descriptions) into a single workflow file using \texttt{cwltool -{}-pack}. This feature is used in our implementation (details in section \textbf{\nameref{sec:demo}}) to rewrite the workflow files, making them re-executable without depending on workflow or command line descriptions on the file system outside the RO. Other workflow definition approaches, WMSs or CWL executors should apply similar features to ensure workflow definitions are executable outside their original file system location.
\begin{mdframed}[linewidth=1pt,linecolor=black,
innerleftmargin=8pt,innerrightmargin=8pt,
innertopmargin=8.2pt,innerbottommargin=6pt]
{\fontsize{8.2pt}{10pt}\bfseries What is BagIt?\par}
\textbf{BagIt} is an IETF Internet Standard (RFC8493)\citep{bagit17} that defines a structured file hierarchy for the purpose of digital preservation of data files. BagIt was initiated by the US Library of Congress and the California Digital Library, and is now used by libraries and archives to ensure safe transmission and storage of datasets using ``bags''.
A \textbf{bag} is indicated by the presence of \texttt{bagit.txt} and a \emph{payload} of digital content stored as files and sub-folders in the \texttt{data/} folder. Other files are considered \emph{tag files} to further describe the payload. All the payload files are listed in a \emph{manifest} with checksums of their byte content, e.g. \texttt{manifest-sha256.txt} and equivalent for tag files in \texttt{tagmanifest-sha256.txt}. Basic metadata can be provided in \texttt{bag-info.txt} as key-value pairs.
A bag can be checked to be \emph{complete} if all the files listed in the manifests exist, and is also considered \emph{valid} if the manifest matches the checksum of each file, ensuring they have been correctly transferred.
\textbf{BDBag} (Big Data bag)\citep{chard_2016} is a profile of BagIt that adds a \emph{Research Object}\citep{RObundle} \texttt{metadata/manifest.json} in JSON-LD \citep{JSONLD} format to contain richer Linked Data annotations that may not fit well in \texttt{bag-info.txt}, e.g. authors of an individual file. BDBags can include a \texttt{fetch.txt} to reference external resources using \emph{ARK MinIDs} or HTTP URLs, allowing bags that contain large files without necessarily transferring their bytes.
\end{mdframed}
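The completeness and validity checks described in the box can also be scripted; the following minimal sketch uses the Python \texttt{bagit} library and assumes the \textit{CWLProv} RO has been written to a local \texttt{ro/} directory.
\begin{verbatim}
import bagit

bag = bagit.Bag("ro/")   # open an existing BagIt-structured RO

# "complete": every file listed in the manifests is present;
# "valid": additionally, every checksum matches the file content.
if bag.is_valid():
    print("RO bag is complete and valid")
else:
    print("RO bag failed validation")
\end{verbatim}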
\subsubsection{\textcolor{black}snapshot/}
\texttt{snapshot/} comprises copies of the workflow and tool specification files ``as-is'' at enactment time, without any rewrites, packing or relativizing as described above.
It is recommended to use snapshot resources only for checking the validity of results and for understanding the workflow enactment, since these files might contain absolute paths or be host-specific, and thus may not be possible to re-enact elsewhere. Preserving these files untouched may nevertheless retain information that could otherwise get lost, e.g. commented-out workflow code, or identifiers baked into file names.
A challenge in capturing snapshot files is that they typically live within a file system hierarchy which can be difficult to replicate accurately, and may have internal references to other files. In our implementation we utilize \texttt{cwltool -{}-print-deps} to find indirectly referenced files and store their snapshots in a flat folder.
\subsubsection{\textcolor{black}metadata/}
Each \textit{CWLProv} RO must contain an RO manifest file \texttt{metadata/manifest.json} and two sub-directories \texttt{metadata/logs} and \texttt{metadata/provenance}. The RO manifest, part of the BDBag \citep{chard_2016} profile, follows the JSON-LD structure defined for Research Object Bundles \citep{RObundle} and can provide structured Linked Data for each file in the RO, like file type and creation date. Further detail about the manifest file contents is documented on GitHub as \textit{CWLProv} specification \citep{cwlprov}.
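As an illustration, the aggregated resources declared in the RO manifest can be inspected with a few lines of Python; this sketch assumes the manifest follows the Research Object Bundle structure, with an \texttt{aggregates} list whose entries may carry \texttt{uri}, \texttt{mediatype} and \texttt{conformsTo} keys.
\begin{verbatim}
import json

with open("ro/metadata/manifest.json") as f:
    manifest = json.load(f)

# List each aggregated resource with its media type and, where
# declared, the profile it conforms to (e.g. a provenance format).
for resource in manifest.get("aggregates", []):
    print(resource.get("uri"),
          resource.get("mediatype", ""),
          resource.get("conformsTo", ""))
\end{verbatim}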
Any raw log information from the workflow enactment should be made available in \texttt{metadata/logs}. This typically includes the actual commands executed for each step. Similar to the snapshot files, log files may however be difficult to process outside the original enactment system. An example of such processing is \emph{CWL-metrics} \citep{10.1093/gigascience/giz052}, which post-processes cwltool log files to capture runtime metrics of individual Docker containers.
Capturing the details of a workflow execution requires rich metadata in the provenance files (see section \textbf{\nameref{sec:provenanceprofile}}), which should exist in the sub-folder \texttt{metadata/provenance}. A \textit{primary} provenance file, which should conform to the PROV-N \citep{moreau_2013} format, is mandatory and describes the top-level workflow execution. As described in \textit{Level 2} (Section \textbf{\nameref{sec:levels}}), it is quite possible to have nested workflows; in that case, a separate provenance file for each nested workflow execution should be included in this folder. If provenance files in additional formats such as PROV-JSON \citep{huynh_2013}, PROV-XML \citep{PROVXML} or PROV-O \citep{PROVO} are provided, these should be included in the same folder and their formats must be declared in the RO manifest using \texttt{conformsTo}. The nested workflow profiles should be named such that there is a link between the respective step in the primary workflow and the nested workflow, preferably using unique identifiers.
As PROV-DM has a generalized structure, there might be some provenance aspects specific to particular workflows that are hard to capture using PROV-N alone; hence ontologies such as \textit{wfdesc} \citep{wf4ever1} can be used to describe the abstract representation of the workflow and its steps. Use of \textit{wfprov} \citep{wf4ever2} to capture some workflow provenance aspects is also encouraged. Alternative extensions such as ProvOne \citep{caoprovone} can also be utilized if the WMS or workflow executor already uses these extensions.
\textit{CWLProv} reuses Linked Data standards like JSON-LD \citep{JSONLD}, W3C PROV \citep{PROVDM} and Research Object \citep{hettne_2014}. A challenge with Linked Data in distributed and desktop computing is how to make identifiers that are absolute URIs and hence globally unique. For example, for \textit{CWLProv} a workflow may be executed by an engine that does not know where its workflow provenance will be stored, published or finally integrated. To this end \textit{CWLProv} generators should use the proposed \emph{arcp} \citep{soilandreyes_2018} URI scheme to map local file paths within the RO BagIt folder structure to absolute URIs for use within the RO manifest and associated PROV traces. Consumers of \textit{CWLProv} ROs that do not contain an arcp-based External-Identifier should generate a temporary arcp base to safely resolve any relative URI references not present in the \textit{CWLProv} folder. Implementations processing a \textit{CWLProv} RO may convert arcp URIs to local \texttt{file:///} or \texttt{http://} URIs depending on how and where the \textit{CWLProv} RO was saved, e.g. using the ``arcp.py'' library \citep{arcp_ro2018}.
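The sketch below illustrates how a temporary arcp base URI can be minted and a relative reference from the RO resolved against it using only the Python standard library; plain string concatenation suffices here because the reference contains no \texttt{..} segments, while the ``arcp.py'' library mentioned above provides more complete handling.
\begin{verbatim}
import uuid

# Temporary arcp base for an RO whose final location is not yet
# known; the uuid-based form follows the arcp URI scheme.
base = "arcp://uuid,%s/" % uuid.uuid4()

# A relative reference taken from the RO manifest or PROV trace.
relative_ref = "data/b1/b1946ac92492d2347c6235b4d2611184"

absolute = base + relative_ref
print(absolute)
# e.g. arcp://uuid,32a423d6-52ab-47e3-a9cd-54f418a48571/data/b1/...
\end{verbatim}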
\subsection{Retrospective Provenance Profile}\label{sec:provenanceprofile}
\begin{table*}[!htbp]
\caption{Fulfilling recommendations with the \textit{CWLProv} profile of W3C PROV, extended with Research Object Model's \textit{wfdesc} (prospective provenance) and \textit{wfprov} (retrospective provenance). }\label{tab:provProfile}
\begin{tabularx}{\linewidth}{l l l l L}
\toprule
PROV type & Subtype & Relation & Range & Recommendation \\
\midrule
\textbf{Plan} & wfdesc:Workflow & wfdesc:hasSubProcess & wfdesc:Process & R9-workflow\\
& wfdesc:Process & & \\
\textbf{Activity} & wfprov:WorkflowRun & wasAssociatedWith &
wfprov:WorkflowEngine & R8-environment\\
& & \nary{hadPlan} & ~ wfdesc:Workflow & R9-workflow, R17-executable \\
& & wasStartedBy & wfprov:WorkflowEngine & R8-environment \\
& & \nary{atTime} & ~ \placeholder{ISO8601 timestamp} & R13-provenance \\
& & wasStartedBy & wfprov:WorkflowRun & R9-workflow \\
& & wasEndedBy & wfprov:WorkflowEngine & R8-environment \\
& & \nary{atTime} & ~ \placeholder{ISO8601 timestamp} & R13-provenance \\
& wfprov:ProcessRun & wasStartedBy & wfprov:WorkflowRun & R10-software \\
& & \nary{atTime} & ~ \placeholder{ISO8601 timestamp} & R13-provenance\\
& & used & wfprov:Artifact & R11-raw-data \\
& & \nary{role} & ~ wfdesc:InputParameter & R1-parameters \\
& & wasAssociatedWith & wfprov:WorkflowRun & R9-workflow \\
& & \nary{hadPlan} & ~ wfdesc:Process & R17-executable, R16-format \\
& & wasEndedBy & wfprov:WorkflowRun & R13-provenance \\
& & \nary{atTime} & ~ \placeholder{ISO8601 timestamp} & R13-provenance \\
& SoftwareAgent & wasAssociatedWith & wfprov:ProcessRun & R8-environment
\\
& & \nary{cwlprov:image} & ~ \placeholder{docker image id} & R4-sw-version \\
\textbf{SoftwareAgent} & wfprov:WorkflowEngine & wasStartedBy & Person \placeholder{ORCID} & R12-attribution \\
& & label & \placeholder{cwltool \texttt{-{}-}version} & R4-sw-version \\
\textbf{Entity} & wfprov:Artifact & wasGeneratedBy & wfprov:ProcessRun & R3-intermediate, R7-identifier \\
& & \nary{role} & ~ wfdesc:OutputParameter & R1-parameters \\
\textbf{Collection} & wfprov:Artifact & hadMember & wfprov:Artifact & R3-intermediate\\
& Dictionary & hadDictionaryMember & wfprov:Artefact & \\
& & \nary{pairKey} & ~ \placeholder{filename} & R7-identifier \\
\bottomrule
\end{tabularx}
\begin{tablenotes}
\item Indentation with \return indicates n-ary relationships which are expressed differently depending on PROV syntax.
Namespaces:
\url{http://www.w3.org/ns/prov#} (default),
\url{http://purl.org/wf4ever/wfdesc#} (\textit{wfdesc}),
\url{http://purl.org/wf4ever/wfprov#} (\textit{wfprov}),
\url{https://w3id.org/cwl/prov#} (\textit{cwlprov})
\end{tablenotes}
\end{table*}
As stated earlier, the primary provenance file should conform to the PROV-N \citep{moreau_2013} serialisation of the PROV data model, and may optionally use ontologies specific to the workflow execution. The key features used in the structure of the retrospective provenance profile for a CWL workflow enactment in \textit{CWLProv} are listed in Table \ref{tab:provProfile}. These features are not tied to any platform or workflow definition approach and hence can be used to document the retrospective provenance of any workflow, irrespective of the workflow definition approach.
The core mapping follows the PROV data model as in Figure \ref{fig:prov-dm}: a PROV \emph{Activity} represents the duration of a workflow run, as well as individual step executions, which \emph{used} files and data (\emph{Entity}) that may in turn have been generated by (\emph{wasGeneratedBy}) previous step activities. The workflow engine (e.g. cwltool) is the \emph{Agent} controlling these activities according to the workflow definition (\emph{Plan}).
PROV is a general standard not specific to workflows, and lacks features to relate a \emph{plan} (i.e. a workflow description) to sub-plans and to workflow-centric retrospective provenance elements, e.g. a specific workflow enactment and the enactment of its related steps. We have utilized \textit{wfdesc} and \textit{wfprov} to represent a few elements of prospective and retrospective provenance respectively. In addition, the provenance profile documents details of all the uniquely identified \textit{activities}, e.g. the workflow enactment and related command line tool invocations, and their associated \textit{entities} (e.g. input and output data artefacts, input configuration files, workflow and command line tool specifications). The profile also documents the relationships between activities, such as which activity (workflow enactment) was responsible for starting and ending another activity (command line tool invocation).
As described in Section \textbf{\nameref{sec:levels}}, in order to achieve maximum \textit{white-box} provenance, the inner workings of a nested workflow should also be included in the provenance trace. If a step represents a nested workflow, a separate provenance profile is included in the RO. Moreover, in the parent workflow trace, this relationship is recorded using \textit{has\_provenance} as an attribute of the \textit{Activity} step which refers to the profile of the nested workflow.
\section{Practical Realisation of \textit{CWLProv}} \label{sec:demo}
\textit{CWLProv} \citep{cwlprov} provides a format that can be adopted by any workflow executor or platform, provided that the underlying workflow definition approach is at least as declarative as CWL, i.e. it captures the necessary components described in Section \textbf{\nameref{sec:standards}}. In the case of CWL, as long as the conceptual constructs are common amongst the available implementations and executors, a workflow enactment can be represented in the \textit{CWLProv} format. To demonstrate the practical realisation of the proposed model we consider the Python-based reference implementation of CWL, \textit{cwltool}.
\textit{cwltool} is a feature-complete reference implementation of CWL. It provides extensive validation of CWL files as well as offering a comprehensive set of test cases to validate new modules introduced as extensions to the existing implementation. Thus it is an ideal choice for implementing \textit{CWLProv} for provenance support and resource aggregation. The existing classes and methods of the implementation were utilized to achieve various tasks, such as packaging the workflow together with all associated tool specifications. In addition, the existing Python library \textit{prov} \citep{provpython} was used to create a provenance document instance and populate it with the required artefacts generated as the workflow enactment proceeds.
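For illustration, a minimal sketch of how such a provenance document can be assembled with the \textit{prov} library is given below; the identifiers, the namespace prefix \texttt{ex} and the attribute values are illustrative assumptions rather than the exact structures emitted by \textit{cwltool}.
\begin{verbatim}
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace("wfprov", "http://purl.org/wf4ever/wfprov#")
doc.add_namespace("wfdesc", "http://purl.org/wf4ever/wfdesc#")
# Illustrative run-specific namespace; a real profile would use an
# arcp-based base URI for the RO as described earlier.
doc.add_namespace("ex",
    "arcp://uuid,11111111-2222-3333-4444-555555555555/")

engine = doc.agent("ex:engine", {"prov:type": "wfprov:WorkflowEngine"})
run = doc.activity("ex:run",
                   other_attributes={"prov:type": "wfprov:WorkflowRun"})
doc.wasAssociatedWith(run, engine, plan="ex:packed-workflow")

step = doc.activity("ex:step-1",
                    other_attributes={"prov:type": "wfprov:ProcessRun"})
doc.wasStartedBy(step, starter=run)        # the run starts the step

data_in = doc.entity("ex:data-b1946ac9",
                     {"prov:type": "wfprov:Artifact"})
doc.used(step, data_in)                    # step consumed an artefact
data_out = doc.entity("ex:data-7448d879",
                      {"prov:type": "wfprov:Artifact"})
doc.wasGeneratedBy(data_out, step)         # step produced an artefact

provn = doc.get_provn()             # PROV-N profile text
provjson = doc.serialize(indent=2)  # PROV-JSON (default format)
\end{verbatim}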
It should be noted that we elected to implement \textit{CWLProv} in the reference implementation \textit{cwltool} instead of the more scalable and production-friendly CWL implementations like Toil \citep{vivian2017toil}, Arvados \citep{arvados}, Rabix \citep{kaushik_2017}, CWL-Airflow \citep{cwlairflow2018} or Cromwell \citep{cromwell}. An updated list of implementations is available at the CWL homepage\footnote{\url{https://www.commonwl.org/\#Implementations}}. Compared to \textit{cwltool} these generally have extensive scheduler and cloud compute support, and extensions for large data transfer and storage, and should therefore be considered by any adopters of the Common Workflow Language. In this study we have however focused on \textit{cwltool}, as its code base was found to be easy to adapt for rich provenance capture without having to modify subsystems for distributed execution or data management, and because, as a reference implementation, it better informs us on how to model \textit{CWLProv} for the general case rather than tying it to the execution details of the more sophisticated CWL workflow engines.
\textit{CWLProv} support for \textit{cwltool} is built as an optional module which when invoked as \textit{``cwltool \texttt{-{}-}provenance ro/ workflow.cwl job.json''}, will automatically generate an RO with the given folder name \textit{ro/} without requiring any additional information from the user. Each input file is assigned a hash value and placed in the folder \textit{ro/data}, making it content-addressable to avoid local dependencies (Figure \ref{fig:processflow}).
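As an illustration of this content-addressing, the following Python sketch hashes a file and copies it into the RO payload; the helper name and the two-level directory layout are illustrative assumptions rather than \textit{cwltool}'s exact code.
\begin{verbatim}
import hashlib
import shutil
from pathlib import Path

def add_to_ro_data(path, ro_dir):
    # Hash in chunks so large genomic inputs need not fit in memory
    sha1 = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha1.update(chunk)
    checksum = sha1.hexdigest()
    # Illustrative layout: data/<first two hex chars>/<full sha1>
    dest = Path(ro_dir) / "data" / checksum[:2] / checksum
    dest.parent.mkdir(parents=True, exist_ok=True)
    if not dest.exists():
        shutil.copyfile(path, dest)
    # The checksum doubles as the identifier used in the provenance profile
    return checksum
\end{verbatim}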
\begin{figure*} [t!]
\includegraphics[width= 0.8\textwidth]{images/ProvenanceProcessFlow.png}
\centering
\captionsetup{justification=centering,margin=2cm}
\caption{High level process flow representation of retrospective provenance capture }\label{fig:processflow}
\end{figure*}
In order to avoid including information about attribution without consent of the user, we introduce an additional flag \textit{`` \texttt{-{}-}enable-user-provenance''}. If a user provides the options \textit{ \texttt{-{}-}orcid} and \textit{ \texttt{-{}-}full-name}, this information will be included in the provenance profile related to user attribution. Enabling \textit{`` \texttt{-{}-}enable-user-provenance''} and not providing the full name or ORCID will store user account details from the local machine for attribution, i.e. the details of the \textit{agent} that enacted the workflow.
The workflow and command line tool specifications are aggregated in one file to create an executable workflow and placed in the folder \textit{ro/workflow}. This folder also contains the transformed input job objects, containing the input parameters with references to artefacts in \textit{ro/data} obtained by relativising the paths present in the input object. These two files are sufficient to re-enact the workflow, provided the other required artefacts are also included in the RO and comply with the \textit{CWLProv} format. The \textit{cwltool} control flow \citep{cwltool-controlflow} indicates the points at which the execution of the workflow and of the command line tools involved in the enactment starts and ends, and how the output is reported back. This information and the artefacts are captured and stored in the RO.
When the execution of a workflow begins, the \textit{CWLProv} extensions to \textit{cwltool} generate a provenance document (using the \textit{prov} library) which includes default namespaces for the workflow enactment \textit{``activity''}. The attribution details as an \textit{agent} are also added at this stage if user provenance capture is enabled, e.g. to answer ``who ran the workflow?''. Each step of the workflow can correspond to either a command line tool or another nested workflow, referred to as a \textit{sub-workflow} in the CWL documentation. For each nested workflow, a separate provenance profile is initialized recursively to achieve a \textit{white-box} finer-grained provenance view as explained in Section \textbf{\nameref{sec:levels}}. This profile is continually updated throughout the nested workflow enactment. Each step is identified by a unique identifier and recorded as an \textit{activity} in the parent workflow provenance profile, i.e. the \textit{``primary profile''}. The \textit{nested} workflow is recorded as a step in the \textit{primary profile} using the same identifier as the ``nested workflow enactment activity'' identifier in the respective provenance profile. For each step, the start time and the association with the workflow activity are created and stored as part of the overall provenance to answer the question ``when did it happen?''.
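As a minimal sketch (with placeholder identifiers and namespaces, not the exact ones emitted by \textit{cwltool}), the \textit{prov} library can be used along these lines to start such a profile:
\begin{verbatim}
from datetime import datetime, timezone
from uuid import uuid4

from prov.model import ProvDocument

doc = ProvDocument()
# Placeholder namespaces; the real profile declares its own set
doc.add_namespace("id", "urn:uuid:")
doc.add_namespace("wfprov", "http://purl.org/wf4ever/wfprov#")
doc.add_namespace("orcid", "https://orcid.org/")

# The overall workflow enactment recorded as a PROV activity
workflow_run = doc.activity(
    "id:%s" % uuid4(),
    startTime=datetime.now(timezone.utc),
    other_attributes={"prov:type": "wfprov:WorkflowRun"},
)

# Optional attribution, only when user provenance capture is enabled
researcher = doc.agent(
    "orcid:0000-0000-0000-0000",  # placeholder ORCID
    {"prov:type": "prov:Person", "prov:label": "Example Researcher"},
)
doc.wasAssociatedWith(workflow_run, researcher)

print(doc.get_provn())  # PROV-N view; JSON and XML are also supported
\end{verbatim}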
The data used as input by these steps is either provided by the user or produced as an intermediate result from the previous steps. In both cases, the \textit{Usage} is recorded in the respective provenance profile using checksums as identifiers to answer the question ``what was used?''. Non-file input parameters such as strings and integers are stored ``as-is'' using an additional optional attribute, \textit{prov:value}. Upon completion, each step typically generates some data. The provenance profile records the generation of outputs at the step level to record ``what was produced?'' and ``which process produced it?''. Once all steps complete, the workflow outputs are collected and the generation of these outputs at the workflow level is recorded in the provenance profile. Moreover, using the checksums of these files generated by \textit{cwltool}, content-addressable copies are saved in the folder \textit{ro/data}. The provenance profile refers to these files using the same checksums such that they are traceable and can be used for further analysis if required. The workflow specification, command line tool specifications and JSON job file are archived in the \textit{ro/snapshot} folder to preserve the actual workflow history.
This prototype implementation provides a model and guidance for workflow platforms and executors to identify their respective features that can be utilized in devising their own implementation of \textit{CWLProv}.
\subsection{Achieving recommendations with provenance levels}
Table \ref{tab:fulfilling} maps the best practices and recommendations from Table \ref{tab:recommendation:wide} to the Levels of Provenance (Figure \ref{fig:levels}). The methods and implementation readiness shown indicate the extent to which the recommendations are addressed by the implementation of \textit{CWLProv} (detailed in this section).
Note that other approaches may solve this mapping differently. For instance, Nextflow \citep{ditommaso_2017} may fulfill \textit{R18-resource-use} at Provenance \nameref{sec:level2} as it can produce trace reports with hardware resource usage per task execution \citep{nextflow_tracing}, but not for the overall workflow. While a Nextflow trace report is a separate CSV file with implementation-specific columns, our planned \textit{R18-resource-use} approach for CWL is to combine \textit{CWL-metrics} \citep{tazro2018}, permalinks and the standard \textit{GFD.204} \citep{cristofori2013usage} to further relate resource use with \nameref{sec:level1} and \nameref{sec:level2} provenance within the \textit{CWLProv} Research Object.
In addition to following the recommendations from Table \ref{tab:recommendation:wide} through computational methods, workflow authors are also required to exercise \textit{best practices for workflow design and authoring}. For instance, to achieve \textit{R1-parameters} the workflow must be written in such a way that parameters are exposed and documented at the workflow level, rather than hard-coded within an underlying Python script. Similarly, while the CWL format supports rich user annotations that can fulfill \textit{R6-annotation}, for these to survive into a Research Object at execution time such annotation capabilities must actually be used by workflow authors instead of unstructured text files.
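For example, a step's underlying script can expose a threshold as a command line argument (which the CWL tool description then surfaces as a workflow input) instead of hard-coding it, keeping the parameter visible to provenance capture; a trivial, hypothetical sketch:
\begin{verbatim}
import argparse

parser = argparse.ArgumentParser(description="Filter records by quality")
# Exposed as an input in the CWL tool description,
# so the chosen value appears in the provenance trace
parser.add_argument("--min-quality", type=int, default=30)
args = parser.parse_args()
print("filtering with minimum quality", args.min_quality)
\end{verbatim}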
It should be a goal of a scientific WMS to guide users towards achieving the required level of the provenance framework through automation where possible. For instance, a user may have specified a Docker container image in the workflow without pinning its version, but the provenance log could still record the specific container version used at execution time, achieving \textit{R4-sw-version} retrospectively by computation rather than relying on a prospective declaration in the workflow definition.
\begin{table}[bt!]
\caption{Recommendations and provenance levels implemented in \textit{CWLProv}} \label{tab:fulfilling}
\begin{tabular}{l c c c c l}
\toprule
Recommendation & L0 & L1 & L2 & L3 & Methods \\
\midrule
R1-parameters & $\bullet$ && $\bullet$ && CWL, BP \\
R2-automate & $\bullet$ &&&& CWL, Docker \\
R3-intermediate && $\bullet$ &&& PROV, RO \\
R4-sw-version & $\bullet$ && $\bullet$ && CWL, Docker, PROV \\
R5-data-version & $\bullet$ &&& $\bullet$ & CWL, BP\\
R6-annotation && $\bullet$ && $\coasterisk$ & CWL, RO, BP \\
R7-described && $\bullet$ &&& CWL, RO\\
R7-identifier && $\bullet$ & $\bullet$ & $\bullet$ & RO, CWLProv\\
R8-environment && $\coasterisk$& $\coasterisk$ && GFD.204 \\
R9-workflow & $\bullet$ & $\bullet$ & $\bullet$ && CWL, wfdesc \\
R10-software & $\bullet$ && $\bullet$ && CWL, Docker \\
R11-raw-data & $\bullet$ & $\bullet$ &&& CWLProv, BP \\
R12-attribution & & $\bullet$ &&& RO, CWL, BP \\
R13-provenance & & $\bullet$ & $\bullet$ && PROV, RO \\
R14-diagram & $\circ$ &&& $\coasterisk$ & CWL, RO \\
R15-open-source & $\bullet$ &&&& CWL, BP \\
R16-format & & $\bullet$ && $\bullet$ & CWL, BP \\
R17-executable & $\circ$ & $\bullet$ &&& CWL, Docker \\
R18-resource-use & & $\coasterisk$ & $\coasterisk$ && CWL, GFD.204 \\
R19-example & $\coasterisk$ & $\circ$ &&& RO, BP \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item \textbf{CWL}: Common Workflow Language and embedded annotations
\item \textbf{RO}: Research Object model and BagIt
\item \textbf{PROV}: W3C Provenance model
\item \textbf{CWLProv}: Additional attributes in PROV
\item \textbf{wfdesc}: Prospective provenance in PROV
\item \textbf{BP}: Best Practices that need to be followed manually
\item $\bullet$ Implemented
\item $\circ$ Partially implemented
\item $\coasterisk$ Implementation planned/ongoing
\end{tablenotes}
\end{table}
\section{CWLProv Evaluation with Bioinformatics Workflows} \label{sec:evaluation}
\textit{CWLProv} as a standard supports \textit{syntactic}, \textit{semantic} and \textit{pragmatic} interoperability (defined in Section \textbf{\nameref{sec:interoperability}}) of a given workflow and its associated results. We have defined a \textit{``common data format''} for workflow sharing and publication such that any executor or WMS with CWL support can interpret this information and make use of it. This ensures \textit{syntactic} interoperability between workflow executors on different computing platforms. Similarly, the \textit{``content''} of the shared aggregation artefact as a workflow-centric RO is unambiguously defined, ensuring a uniform representation of the workflow and its associated results across different platforms and executors and hence supporting \textit{semantic} interoperability. Once \textit{Level 3} provenance is satisfied, providing domain-specific information along with Level 0--2 provenance tracking, we posit that \textit{CWLProv} would be able to accomplish \textit{pragmatic} interoperability by providing unambiguous information about the \textit{``context''}, \textit{``application''} and \textit{``use''} of the shared/published workflow-centric ROs. Hence, a future extension of the current implementation (described in Section \ref{sec:demo}) to include domain-rich information in the provenance traces and the \textit{CWLProv} RO will result in pragmatic interoperability.
To demonstrate the interoperability and portability of the proposed solution, we evaluate \textit{CWLProv} and its reference implementation using open source bioinformatics workflows available on GitHub from different research initiatives and different developers. Conceptually, these workflows were selected for evaluation because of their extensive use in real-life data analyses and the variety of their input data. The alignment workflow is included in the evaluation as it is one of the most time-consuming yet mandatory steps in any variant calling workflow. Practically, the choice of workflows from these particular groups, out of the numerous existing implementations, is justified in each section below.
\subsection{RNA-seq Analysis Workflow} \label{sec:rnaseq-wf}
\begin{figure*} [t!]
\centering
\includegraphics[width=\textwidth]{images/rnaseq-cwlviewer-half.png}
\captionsetup{justification=centering}
\caption{Portion of a RNA-seq workflow generated by CWL viewer \citep{robinson_2017}.}\label{fig:rna-seq}
\end{figure*}
RNA sequencing (RNA-seq) data generated by Next Generation Sequencing (NGS) platforms comprises short sequence reads that can be aligned to a reference genome, where the alignment results form the basis of various analyses such as quantifying transcript expression, identifying novel splice junctions and isoforms, and detecting differential gene expression \citep{dobin2015mapping}. RNA-seq experiments can link phenotype to gene expression and are widely applied in multi-centric cancer studies \citep{cohen2017scientific}. Computational analysis of RNA-seq data is performed by different techniques depending on the research goals and the organism under study \citep{Conesa2016}. The workflow \citep{rnaseq} included in this case study has been defined in CWL by one of the teams \citep{heliumda} participating in the NIH Data Commons initiative \citep{DataComm86}, a large research infrastructure program aiming to make digital objects (such as data generated during biomedical research and the software/tools required to utilize such data) shareable and accessible, and hence aligned with the FAIR principles \citep{wilkinson_2016}.
This workflow (Figure \ref{fig:rna-seq}), designed for the pilot phase of the NIH Data Commons initiative \citep{NIH-PILOT}, adapts the approach and parameter settings of Trans-Omics for Precision Medicine (TOPMed) \citep{TransOmi24}. The RNA-seq pipeline originated from the Broad Institute \citep{gtexpipe}. The workflow comprises five steps:
\begin{enumerate}[label=\arabic*)]
\item Read alignment using STAR \citep{Dobin2012}, which produces aligned BAM files including the Genome BAM and Transcriptome BAM.
\item The Genome BAM file is processed using Picard MarkDuplicates \citep{MarkDuplicates}, producing an updated BAM file containing information on duplicate reads (such reads can indicate biased interpretation).
\item SAMtools index \citep{li2009sequence} is then employed to generate an index for the BAM file, in preparation for the next step.
\item The indexed BAM file is processed further with RNA-SeQC \citep{DeLuca2012}, which takes the BAM file, human genome reference sequence and Gene Transfer Format (GTF) file as inputs to generate transcriptome-level expression quantifications and standard quality control metrics.
\item In parallel with transcript quantification, isoform expression levels are quantified by RSEM \citep{li2011rsem}. This step depends only on the output of the STAR tool, and additional RSEM reference sequences.
\end{enumerate}
For testing and analysis, the workflow author provided example data created by down-sampling the read files of a TOPMed public-access dataset \citep{seo2012transcriptional}. Chromosome 12 was extracted from the \textit{Homo sapiens} Assembly 38 reference sequence and provided by the workflow authors. The required GTF and RSEM reference data files are also provided. The workflow is well-documented, and a detailed set of instructions describing the steps performed to down-sample the data is also provided for transparency. The availability of example input data, the use of containerization for the underlying software and the detailed documentation are important factors in choosing this specific CWL workflow for the \textit{CWLProv} evaluation.
\subsection{Alignment Workflow} \label{sec:align}
Alignment is an essential step in variant discovery workflows and is considered an obligatory \textit{pre-processing} stage according to the Best Practices of the Broad Institute \citep{GATKBP}. The purpose of this stage is to filter out low-quality reads before variant calling or other interpretative steps \citep{xu2018review}. The alignment workflow is designed to operate on raw sequence data to produce analysis-ready BAM files as the final output. The typical steps include file format conversions, aligning the read files to the reference genome sequence, and sorting the resulting files.
\begin{figure*} [b!]
\centering
\includegraphics[width=0.95\textwidth]{images/new_alignment.png}
\captionsetup{justification=centering}
\caption{Alignment workflow representation generated by CWL viewer.}\label{fig:align}
\end{figure*}
The CWL alignment workflow \citep{alignment-wf} included in this evaluation (Figure \ref{fig:align}) is designed by Data Biosphere \citep{DataBios21}. It adapts the alignment pipeline \citep{docker-alignment} originally developed at the Abecasis Lab, The University of Michigan \citep{Abecasis14}. This workflow is also part of the NIH Data Commons initiative (as is the \nameref{sec:rnaseq-wf}) and comprises four stages.
The first step, ``Pre-align'', accepts a Compressed Alignment Map (CRAM) file (a compressed format for BAM files developed by the European Bioinformatics Institute (EBI) \citep{Cochrane2012}) and the human genome reference sequence as input and, using SAMtools utilities such as view, sort and fixmate, returns a list of FASTQ files to be used as input for the next step. The next step, ``Align'', also accepts the human reference genome as input, along with the output files from ``Pre-align'', and uses BWA-mem \citep{li2013aligning} to generate aligned reads as BAM files. SAMBLASTER \citep{samblaster} is used to mark duplicate reads and SAMtools view to convert read files from SAM to BAM format. The BAM files generated by ``Align'' are sorted with ``SAMtools sort''. Finally, these sorted alignment files are merged into a single sorted BAM file using SAMtools merge in the ``Post-align'' step. The authors provide an example CRAM file and the \textit{Homo sapiens} Assembly 38 reference genome along with its index files to be used as inputs for testing and analysis of the workflow.
\subsection{Somatic Variant Calling Workflow}
Variant discovery analysis for high-throughput sequencing data is a widely used bioinformatics technique, focused on finding genetic associations with diseases, identifying somatic mutations in cancer and characterizing heterogeneous cell populations \citep{variantcallinglecture}. The \textit{pre-processing} explained for the Alignment workflow is part of any variant calling workflow, as reads are classified and ordered as part of the variant discovery process. Numerous variant calling algorithms have been developed depending on the input data characteristics and the specific application area \citep{xu2018review}. Somatic variant calling workflows are designed to identify somatic (non-inherited) variants in a sample, generally a cancer sample, by comparing the set of variants present in a sequenced tumour genome to a non-tumour genome from the same host \citep{Saunders2012}. The set of tumour variants is a superset of the set of host variants, and somatic mutations can be identified through various algorithmic approaches to subtracting host familial variants. A somatic variant calling workflow typically consists of three stages: pre-processing, variant evaluation and post-filtering.
The somatic variant calling workflow (Figure \ref{fig:somatic}) included in this case study is designed by Blue Collar Bioinformatics (bcbio) \citep{bcbio}, a community-driven initiative to develop best-practice pipelines for variant calling, RNA-seq and small RNA analysis workflows. According to the documentation, the goal of this project is to facilitate the automated analysis of high-throughput data by making the resources \textit{quantifiable}, \textit{analyzable}, \textit{scalable}, \textit{accessible} and \textit{reproducible}. All the underlying tools are containerized, facilitating software use in the workflow. The somatic variant calling workflow defined in CWL is available on GitHub \citep{bcbiowf} and is equipped with a well-defined test dataset.
\begin{figure*} [t!] %% preferably at bottom or top of column
\centering
\includegraphics[width=.65\textwidth, height=120mm]{images/2variantcalling.png}
\captionsetup{justification=centering}
\caption{Visual representation of the bcbio somatic variant calling workflow (adapted from \citep{bcbiocwl}); the subworkflow images are generated by CWL viewer.}\label{fig:somatic}
\end{figure*}
\subsection{Evaluation Activity} \label{eval-activity}
This section describes the evaluation of cross-executor and cross-platform interoperability of \textit{CWLProv}. To test cross-executor interoperability, two CWL executors, \textit{cwltool} and \textit{toil-cwl-runner}, were selected. \textit{toil-cwl-runner} is an open source Python workflow engine supporting robust cross-platform workflow execution on Cloud and High Performance Computing (HPC) environments \citep{vivian2017toil}. The two operating system platforms utilized in this analysis were MacOS and Ubuntu Linux. For the Linux OS, a 16-core Linux instance with 64GB RAM was launched on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) research cloud \citep{Nectar}. To cater for the storage requirements, a 1000GB persistent volume was attached to this instance. For MacOS, a local system with 16GB RAM, 250GB storage and a 2.8 GHz Intel Core i7 processor was used. These platforms were selected to cater for the required storage and compute resources of the workflows described above. The reference genome provided with the \nameref{sec:align} was not down-sampled and hence this workflow required the most resources of the three evaluated.
It is worth mentioning that this evaluation does not include details of the installation process for \textit{cwltool}, \textit{toil-cwl-runner} and \textit{Docker} on the systems described above. To create \textit{CWLProv} ROs during workflow execution, it is necessary to use the CWL reference runner (\textit{cwltool}) until this practice spreads to other CWL implementations. Moreover, the container engine (Docker) is assumed to be installed on the system in order to use the workflow definitions aggregated in a given \textit{CWLProv} RO.
In addition, the resource requirements (identified in \textit{R18-resource-use} and discussed in Section \textbf{\nameref{sec:discussion}}) should also be satisfied by choosing a system with enough compute and storage resources for successful enactment. The systems used in this case study can serve as a reference when selecting a system, as inadequate compute and storage resources (such as insufficient RAM or too few cores) will hinder the successful re-enactment of workflows using these ROs. The hardware requirements may also vary if a different dataset is used as input to re-enact the workflow using the methods aggregated in the RO. In that case, the end user must ensure the availability of adequate compute and storage resources by choosing a system that meets the required specifications \citep{kanwal2017digital}.
Since the \textit{CWLProv} implementation is demonstrated for one of the executors (\textit{cwltool}), currently a \textit{CWLProv} RO for any workflow can only be produced using \textit{cwltool}. Hence, in this activity the workflows are initially enacted using just \textit{cwltool} (Table \ref{tab:eval}). The outline of the steps performed to analyse \textit{CWLProv} for each case study is as follows.
\begin{enumerate}[label=\Roman*)]
\item The workflow was enacted using \textit{cwltool} to produce a RO on a MacOS computer.
\begin{enumerate} [label=\arabic*)]
\item The resulting RO and aggregated resources were used to re-enact the workflow using \textit{toil-cwl-runner} on the same MacOS computer;
\item The RO produced in step I was transferred to the cloud-based Linux instance used in this activity;
\item On the cloud-based Linux environment and only utilizing the resources aggregated in the RO, the workflow was re-enacted using \textit{cwltool} and \textit{toil-cwl-runner}.
\end{enumerate}
\item The workflow was enacted using \textit{cwltool} to produce a RO on Linux.
\begin{enumerate} [label=\arabic*)]
\item The resulting RO and aggregated resources were utilized to re-enact the workflow using \textit{toil-cwl-runner} on the same cloud-based Linux instance;
\item The RO produced in step II was transferred to the MacOS computer used in this activity;
\item On the MacOS computer and only utilizing the resources aggregated in the RO, the workflow was re-enacted using \textit{cwltool} and \textit{toil-cwl-runner}.
\end{enumerate}
\end{enumerate}
The \textit{CWLProv} ROs produced as a result of this activity are published on Mendeley Data \citep{rnaseq_mendeley, alignment_mendeley, somatic_mendeley} with mirrors on Zenodo.
\subsection{Evaluation Results} \label{sec:eval-results}
The steps described above were taken to produce ROs, which were then used to re-enact the workflows (outlined in Table \ref{tab:eval}) without any further changes required. This demonstration illustrates the syntactic and semantic interoperability of the workflows across different systems. It shows that \textbf{both CWL executors were able to \textit{exchange}, \textit{comprehend} and \textit{use} the information represented as \textit{CWLProv} ROs}. The current implementation described in Section \textbf{\nameref{sec:demo}} does not resolve \textit{Level 3}. Hence, the inclusion of domain-specific annotations referring to scientific context to address pragmatic interoperability is identified as a crucial future direction and is further detailed in Section \textbf{\nameref{sec:discussion}}.
\begin{table}[!htbp]
\caption {\textit{CWLProv} evaluation summary and status for the 3 bioinformatics case studies. }\label{tab:eval}
\begin{tabularx}{\linewidth}{c c c}
\toprule
Enact-produce RO with & Re-enact using RO with & Status \\
\midrule
\textit{cwltool} on MacOS & \textit{toil-cwl-runner} on MacOS & \checkmark \\
& \textit{cwltool} on Linux & \checkmark \\
& \textit{toil-cwl-runner} on Linux & \checkmark \\
\midrule
\textit{cwltool} on Linux & \textit{toil-cwl-runner} on Linux & \checkmark \\
& \textit{cwltool} on MacOS & \checkmark \\
& \textit{toil-cwl-runner} on MacOS & \checkmark \\
\bottomrule
\end{tabularx}
\end{table}
\subsubsection{\textcolor{black}{\textit{CWLProv} and Interoperability}}
CWL already builds on technologies such as JavaScript Object Notation for Linked Data (JSON-LD) \citep{JSONLD} for data modeling and Docker \citep{docker} to support portability of the run-time environments. Portability and interoperability as basic principles of the underlying workflow definition approach imply that any workflow-centric analysis should also be portable and interoperable. However, the workflow definition/specification alone is insufficient when the command line tool specifications, data, and input configuration files used in the analysis are not readily available.
\textit{CWLProv} ensures the availability of these resources for a given analysis, conforming to the framework defined in Section \textbf{\nameref{sec:CWLProv}}. The input configurations are saved as \textit{primary-job.json} in the folder \textit{workflow/} and refer to the input data contained in the payload \textit{data/} folder of the given RO. In this way, the data aggregated with the analysis is made available. Existing features of \textit{cwltool} are used to generate a CWL workflow specification file containing all of the command line tool specifications referred to in the workflow; this file is placed in the same \textit{workflow/} folder.
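The sketch below illustrates the kind of rewrite involved: \textit{File} entries in the input object are repointed at the content-addressed copies under \textit{data/}. The helper and the checksum lookup are hypothetical; \textit{cwltool} performs an equivalent transformation internally.
\begin{verbatim}
import json

def relativise_paths(obj, checksum_for):
    # checksum_for: assumed mapping from original location to sha1,
    # built while the inputs were copied into ro/data
    if isinstance(obj, dict):
        if obj.get("class") == "File" and "location" in obj:
            sha1 = checksum_for[obj["location"]]
            return dict(obj, location="../data/%s/%s" % (sha1[:2], sha1))
        return {k: relativise_paths(v, checksum_for) for k, v in obj.items()}
    if isinstance(obj, list):
        return [relativise_paths(v, checksum_for) for v in obj]
    return obj

# The transformed object is what ends up as workflow/primary-job.json:
# with open("ro/workflow/primary-job.json", "w") as f:
#     json.dump(relativise_paths(job, checksum_for), f, indent=2)
\end{verbatim}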
One might argue that copying a folder tree might serve the same purpose, but in that case we would again be relying on users to put a substantial amount of effort on top of the actual analysis, i.e. they would have to carefully structure their directories to be aligned with those of the workflow creators. Instead, CWL encourages researchers to utilize container technologies such as Docker or Singularity, or software packaging systems such as Debian (Med) or Bioconda, to ensure the availability of underlying tools, as recommended by numerous studies \citep{belhajjame_2015, kanwal_2017, garijo_2017, stodden_2016, Stodden2014, Gruening2018}. This practice facilitates the preservation of the methods utilized in data-intensive scientific workflows and enables verification of the published claims without requiring the end user to do any manual installation and configuration. Examples of tools available via Docker containers used here are the alignment tool (BWA mem) used in the Alignment workflow and the STAR aligner used in the RNA-seq workflow.
\subsubsection{\textcolor{black}{Evaluating Provenance Profile}}
The retrospective provenance profile generated as part of \textit{CWLProv} for each workflow enactment can be examined and queried to extract the required subset of information. \textit{Provenance analytics} is a separate domain and the next step after provenance collection in the provenance life cycle \citep{Missier2016}. Provenance data is often queried using specialized query languages such as SQL, SPARQL or TriQL, depending on the storage mechanism used. Query operations can combine information from prospective and retrospective provenance to better understand computational experiments.
The focus of this paper is not in-depth provenance analytics, but we demonstrate one application of the provenance profile generated as part of \textit{CWLProv}. We have developed a command line tool and Python API, \textit{``cwlprov-py''} \citep{cwlprov-py}, for \textit{CWLProv} RO analytics to interpret the captured retrospective provenance of a CWL workflow enactment. This API currently supports the following use cases.
Given a \textit{CWLProv} RO:
\begin{itemize}
\item \textbf{Workflow Runs}\newline
Each RO can contain more than one \textit{workflow run} if sub-workflows are utilized to group related tasks into one workflow. In that case, the provenance traces are stored in separate files for each workflow run. \textit{cwlprov-py} identifies the workflow enactments, including the sub-workflows \textit{(if any)}, and returns the workflow identifiers annotated with the step names. The user can select the required trace and explore particular traces in detail.
\item \textbf{Attribution} \newline
Each RO is assumed to be associated with a single enactment of the primary workflow and hence assumed to be enacted by one person. As discussed previously, \textit{CWLProv} provides additional flags to enable user provenance capture. A user can provide their name and ORCID details that can be stored as part of a RO. \textit{cwlprov-py} displays attribution details of the researcher responsible for the enactment \textit{(if enabled)} and the versions of the workflow executor utilized in the analysis.
\item \textbf{Input/Output of a Process} \newline
Provenance traces contain associations between the steps/workflows with the data they used or generated. A user interested in a particular step can identify the inputs used and outputs produced linked explicitly to that process using \textit{cwlprov-py}. This option works using individual step identifiers (level 1) as well as nested workflows (level 2), facilitating re-use of intermediate data even if the original workflow author did not explicitly expose these as workflow outputs.
\item \textbf{Partial Re-runs} \newline
Re-running or re-using only desired parts of a given workflow has been emphasized \citep{cohen2017scientific} as important to evaluate the workflow process or validate the associated published results without necessarily re-enacting the workflow as a whole. \textit{cwlprov-py} uses the identifier of the step/workflow to be re-run, parses the provenance trace to identify the inputs required and ultimately creates a JSON input object with the associated input parameters. This input object can then be used for partial re-runs of the desired step/workflow, making segmented analysis possible even for \textit{CWLProv} consumers who do not have sufficient hardware resources for re-executing the more computationally heavy steps.
\end{itemize}
While the above explores some use cases for consuming and re-using workflow execution data, we have not explored this in full detail. Further work could develop more specific user scenarios and perform usability testing with independent domain-experts who have not seen the executed workflow before.
An important point of \textit{CWLProv} is to capture sufficient information at workflow execution time, so that post-processing (potentially by a third-party) can support unforeseen queries without requiring instrumentation at workflow design time. For instance, \texttt{cwlprov runtimes} calculates average runtime per step (requiring capture of start/stop time of each step iteration), while \texttt{cwlprov derived} calculates derivation paths back to input data (requiring consistent identifiers during execution). Further work could build a more researcher-oriented interface based on this approach, e.g. hardcoded data exploration for a particular workflow.
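As an illustration of such post-processing, the sketch below reads a \textit{CWLProv} PROV-JSON trace with the \textit{prov} library and computes the average wall-clock time per step; it approximates what \texttt{cwlprov runtimes} reports rather than reproducing its actual implementation.
\begin{verbatim}
from collections import defaultdict

from prov.model import ProvActivity, ProvDocument

def step_runtimes(prov_json_path):
    with open(prov_json_path) as f:
        doc = ProvDocument.deserialize(f, format="json")
    durations = defaultdict(list)
    for activity in doc.get_records(ProvActivity):
        start, end = activity.get_startTime(), activity.get_endTime()
        if start and end:
            seconds = (end - start).total_seconds()
            durations[str(activity.identifier)].append(seconds)
    # Average over iterations, e.g. when a step was scattered
    return {step: sum(t) / len(t) for step, t in durations.items()}

# e.g. step_runtimes("metadata/provenance/primary.cwlprov.json")
# (the usual location of the primary trace inside a CWLProv RO)
\end{verbatim}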
\subsubsection{\textcolor{black}{Temporal and Spatial Overhead with Provenance}}
\begin{table*}[t!]
\caption{Run-time comparison for the workflow enactments done cross-executor and cross-platform}\label{tab:time}
\captionsetup{width=\linewidth}
\centering
\begin{tabularx}{\linewidth}{|L | c | c | c | c | c | c |}
\toprule
\textbf{Workflow} & \multicolumn{3}{C}{\textbf{Linux}} & \multicolumn{3}{|C|}{\textbf{MacOS}} \\
\midrule
& \multicolumn{2}{|c|}{\textbf{cwltool}} & \textbf{toil-cwl-runner} & \multicolumn{2}{|c|}{\textbf{cwltool}} & \textbf{toil-cwl-runner} \\ \hline
& With Prov & W/O Prov & W/O Prov & With Prov & W/O Prov & W/O Prov \\ \hline
RNA-Seq Analysis Workflow & 4m30.289s & 4m0.139s & 3m46.817s & 3m33.306s & 3m41.166s & 3m30.406s \\ \hline
Alignment Workflow& 28m23.792s & 24m12.404s & 15m3.539s & -- & 162m35.111s & 146m27.592s \\ \hline
Somatic Variant Calling Workflow & 21m25.868s & 19m27.519s & 7m10.470s & 17m26.722s & 17m0.227s & ** \\
\hline\hline
\multicolumn{7}{l}{%
\begin{minipage}{\linewidth}%
\tiny ** This could not be tested because of a Docker mount issue on MacOS: \url{https://github.com/DataBiosphere/toil/issues/2680}%
\newline
\tiny -- This could not be tested because of the insufficient hardware resources on the MacOS test machine, hence step I of the evaluation activity could not be performed for this workflow%
\end{minipage}%
}\\
\bottomrule
\end{tabularx}
\end{table*}
Table \ref{tab:time} shows the run-times for the three workflow enactments using \textit{cwltool} and \textit{toil-cwl-runner} on Linux and MacOS, with and without provenance capture enabled, as described in the evaluation activity section. These workflows were enacted at least once before this time measurement, hence the timing does not include the time for Docker images to be downloaded. On a new system, when re-running these workflows for the first time, the Docker images will be downloaded and the run may take significantly longer than the times reported here, especially in the case of the Somatic Variant Calling workflow because of the image size.
Run-time and storage overheads are important for provenance-enabled computational experiments. The choice of operating system and of provenance capture mechanism (operating-system level, application level or workflow level), as well as the I/O workload, the interception mechanism and the granularity of the captured information, are key factors for provenance overhead \citep{Carata2014, kim2016assessing}.
In our case study, a significant time difference can be seen for the alignment workflow, which used the most voluminous dataset and hence produced a sizable RO as well. This was due to the RO generation, where the data was aggregated within the RO. The difference between the provenance-enabled enactment and the enactment without provenance is barely noticeable for the other two workflow enactments with smaller datasets. The discussion about handling big `-omics' data such as the human genome reference sequence, its index files and other database files (e.g. dbsnp) in Section \textbf{\nameref{sec:discussion}} describes a possible solution to avoid such overheads.
In addition, the noticeable time difference between the \textit{cwltool} and \textit{toil-cwl-runner} enactments is due to the default parallel versus serial job execution of \textit{toil-cwl-runner} and \textit{cwltool} respectively. The ``scatter'' operation in CWL, when applied to one or more input parameters of a workflow step or a sub-workflow, supports parallel execution of the associated processes. Parallelism is also available without ``scatter'' when separate processes have all their inputs ready. If sufficient compute resources are available, these jobs are enacted concurrently; otherwise they are queued for subsequent execution. Compute-intensive steps of a workflow can benefit from the scatter feature by reducing the overall run-time through parallel execution. Both the Alignment and the Somatic Variant Calling workflows utilize the scatter feature, enabling a higher degree of parallel job execution with \textit{toil-cwl-runner}, which explains the time difference between the executors for these two workflows. The difference is negligible for the RNA-Seq workflow, which is comprised of serial jobs with comparatively small test data.
\subsubsection{\textcolor{black}{Output Comparison Across Enactments}}
We compared the workflow outputs after each enactment to observe the concordance and/or discordance (if any) of the workflow enactment results produced across the platforms and across the executors. As a \textit{CWLProv} RO refers to the data with hashed checksums, these checksums are utilized for the result comparison. It is worth mentioning that the comparison was made between the output files generated by the different enactments and a single \textit{``truth-set''} output file, available with its checksum in the respective Git repositories.
The checksums of the outputs generated cross-platform and cross-executor, from the initial enactments as well as from the re-runs using the \textit{CWLProv} ROs, showed concordance in all but one case. The ``correctness'' as well as the agreement of these outputs across different execution environments (e.g. platform and executor) hold true except for the Alignment workflow, which produced varying outputs after every execution even with the same executor and platform. The output of the alignment algorithm used in this workflow, ``BWA mem'', was non-deterministic as it depended on the \textit{number of threads (-t)} and the \textit{seed length (-K)}, which affected the output produced. While the seed length in this case was set to a constant value, the number of threads varied depending on the availability of hardware resources at run-time, thereby resulting in varying output for the same input files.
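A comparison of this kind can be scripted directly from the recorded checksums; the sketch below checks per-output sha1 values from several enactments against a published truth set, using plain dictionaries as an assumed data structure for illustration.
\begin{verbatim}
def compare_runs(truth, runs):
    # truth: {output name: sha1}; runs: {run label: {output name: sha1}}
    for name, expected in truth.items():
        observed = {label: sums.get(name) for label, sums in runs.items()}
        ok = all(value == expected for value in observed.values())
        print(name, "concordant" if ok else "DISCORDANT", observed)

# Hypothetical values only:
# compare_runs(
#     truth={"merged.bam": "6d74e5..."},
#     runs={"cwltool/Linux": {"merged.bam": "6d74e5..."},
#           "toil/MacOS": {"merged.bam": "6d74e5..."}},
# )
\end{verbatim}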
\section{Discussion and Future Directions} \label{sec:discussion}
This section discusses the current and future work with reference to enriched provenance capture and smart resource aggregation, and enhancements to both the \textit{CWLProv} standard and implementation.
\subsubsection{\textcolor{black}{Compute and Storage Resources}}
The \textit{CWLProv} format encapsulates the data and workflow definitions involved in a given workflow enactment along with its retrospective provenance trace. CWL as a standard provides constructs to declare basic hardware resource requirements, such as the minimum and maximum cores, RAM and reserved file system storage required for a particular workflow enactment. Workflow authors can provide this information in the \textit{``requirements''} or \textit{``hints''} section as a \textit{``ResourceRequirement''}. These requirements/hints can be declared at the workflow or individual step level, to help platforms/executors allocate the required resources. This information indirectly stores some aspects of the prospective view of provenance with respect to the hardware requirements of the underlying system used to enact a workflow. Currently this information is only available if declared as part of the workflow specification. In the future, we plan to include these requirements as part of the provenance for a given workflow, such that all such information is gathered in one place and users are not required to inspect multiple sources to extract it. This information can then be used as a pre-condition for a potentially successful enactment of a given workflow.
As \textit{CWLProv} is focused on the retrospective provenance capture of workflow enactments, we plan to include provenance information about the compute and storage resources utilized in a given enactment to fulfill \textit{R18-resource-use}. We believe that documenting these resources will allow users to analyse their environment and resource allocations before execution, as opposed to trial-and-error methods that may result in multiple failed enactments of a given workflow. Despite this being an important factor, it is surprising that most existing provenance standards lack dedicated constructs to represent the underlying hardware resource usage as part of prospective or retrospective provenance. In the case of complex workflows using distributed resources, where each step could be executed on a different node/server, including all this information in a single \textit{PROV} profile would clutter the profile and render it potentially incomprehensible. Therefore, we plan to add a separate \textit{Usage Record} document to the RO conforming to GFD.204 \citep{cristofori2013usage} to describe \textit{Level 1} (and potentially \textit{Level 2}) resource usage in a common format independent of the actual execution environment.
Capturing such resource usage records requires tighter integration with the execution platform, and so we consider this future work better suited to a cloud-based CWL engine like \textit{Toil} or \textit{Arvados}, as the reference implementation \textit{cwltool} does not exercise fine-grained control of its task execution. Detailed raw log files can also be provided as \textit{Level 0} provenance, as we have demonstrated with \textit{cwltool}, but these will by their nature be specific to each execution platform and should thus be considered unstructured. Related work that is already exploring this approach is \textit{cwl-metrics} \citep{tazro2018}, which analyses raw \textit{cwltool} log files in combination with detailed Docker invocation statistics using the container monitoring tool \textit{Telegraf}. An ongoing collaboration is exploring adding these metrics as additional provenance to the \textit{CWLProv} RO, with summaries in the PROV and GFD.204 formats.
\subsubsection{\textcolor{black}{Provenance Profile Augmented with Domain Knowledge}} \label{sec:domainknowledge}
\textit{CWLProv} benefits from existing best practices proposed by numerous studies (Table \ref{tab:recommendation:wide}) and includes well-defined standards for workflow representation, resource aggregation and provenance tracking (Section \textbf{\nameref{sec:standards}}). We posit that the principle of following well-defined data and metadata standards enables explicit data sharing and reuse. In order to include rich metadata so that bioinformaticians can produce specialized bioinformatics ROs achieving \textit{CWLProv} \textit{Level 3} as defined in Section \textbf{\nameref{sec:levels}}, we are investigating the re-use of concepts from the BioCompute Object (BCO) project \citep{Alterovitz2019}. This domain-specific information is not necessary for computation and execution, but rather for the understandability of the shared resources. We encourage workflow authors to include such metadata and external identifiers for data and the underlying tools, e.g. EDAM identifiers for the resources employed in designing a given workflow. The plan is to extract these annotations and represent them in the retrospective provenance profile in \textit{CWLProv}, ultimately achieving pragmatic interoperability by providing the domain-specific scientific context of the experiments. Domain-specific information is essential in determining the nature of the inputs, outputs and context of the processes linked to a given workflow enactment \citep{Alper2018}. This information can be captured in the RO if and only if the workflow author adds it to the workflow definition; thus achieving \textit{CWLProv} \textit{Level 3} depends on the individual workflows.
\subsubsection{\textcolor{black}{Big -omics Data}}
While aggregating all resources into one downloadable object improves reproducibility, the size of the resulting RO is an important factor in practice. On the one hand, completeness of the resources contributes towards minimizing the \textit{workflow decay} phenomenon by minimizing dependence on the availability of third-party resources. On the other hand, the size of -omics data can result in hard-to-manage workflow-centric ROs, also leading to the spatial and temporal overheads discussed in the evaluation.
One solution is archiving the big datasets in online repositories or data stores and including the existing persistent identifiers and checksums in the RO instead of the actual data files, as previously demonstrated with BDBags \citep{chard_2016, madduri_2018}. While CWL executors like \textit{toil-cwl-runner} can be configured to deposit data in a shared repository, the \textit{cwltool} reference implementation explored in this study can only write to the local file system. External references raise the risk of the data being unavailable at a later time. Therefore, we recommend including the data in the RO if sufficient network and storage resources are available. Future work may explore post-processing \textit{CWLProv} ROs to replace large data files with references to stable data repositories, producing a slimmer RO for transfer where individual data items can be retrieved on demand, as well as reducing data duplication across multiple related ROs.
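A minimal sketch of such post-processing is shown below, assuming a stable repository URL can be resolved for each checksum: oversized payload files are dropped and recorded as remote references using BagIt's \textit{fetch.txt} convention; the helper and threshold are illustrative only.
\begin{verbatim}
import os

def slim_ro(ro_dir, resolve_url, max_bytes=100 * 2**20):
    # resolve_url: assumed lookup from sha1 checksum to a stable URL
    fetch_lines = []
    for root, _, files in os.walk(os.path.join(ro_dir, "data")):
        for sha1 in files:  # payload files are named by their checksum
            path = os.path.join(root, sha1)
            size = os.path.getsize(path)
            if size > max_bytes:
                rel = os.path.relpath(path, ro_dir)
                fetch_lines.append("%s %d %s" % (resolve_url(sha1), size, rel))
                os.remove(path)
    if fetch_lines:  # BagIt fetch.txt: "<url> <length> <path>" per line
        with open(os.path.join(ro_dir, "fetch.txt"), "w") as f:
            f.write("\n".join(fetch_lines) + "\n")
\end{verbatim}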
\subsubsection{\textcolor{black}{Improving \textit{CWLProv} efficiency with selective provenance capture}}
A \textit{shim} is an adaptor step that resolves a format incompatibility between two workflow tasks \citep{Mohan2014}, typically converting the previous output into an acceptable format for the next step. For example, in the \textit{RNA-seq} workflow of our case study, \textit{RNA-SeQC} requires an indexed BAM file, whereas the output of \textit{STAR} or \textit{Picard MarkDuplicates} comprises the BAM file alone. Hence, a shim step executing \textit{SAMtools index} makes the aligned reads analysis-ready for RNA-SeQC. Compared to the more analytical steps, the provenance of such shim steps is not particularly interesting for domain scientists, and in many cases their intermediate data would effectively double the storage cost with little information gain, as such data can be reliably recreated by re-applying the predictable transformation step (considering it a \textit{pure function} without side-effects). Another type of ignorable step is the purely diagnostic step, whose outputs are used primarily during workflow design to verify tool settings. A workflow engine does not necessarily know which steps are ``boring''\footnote{The CWL 1.1 specification will add a hint \texttt{WorkReuse} for this purpose.} and our proof-of-concept implementation will dutifully store provenance from all steps.
To improve efficiency, future \textit{CWLProv} work could add options to ignore capturing outputs of specified \textit{shim} steps, or to not store files over a particular file size. Similarly a scientist or a WMS may elect to only capture provenance at a particular provenance level (see Section \textbf{\nameref{sec:levels}}).
Provenance captured under such settings would be ``incomplete'' (e.g. PROV would say that \textit{RNA-SeQC} consumed an identified BAM index file, but the corresponding bytes would not be stored in the RO). It is therefore envisioned that this can be indicated in the RO manifest as a variant of the \textit{CWLProv} profile identifier, giving the end user a clear indication of what to expect in terms of completeness, so that tools like \textit{cwlprov-py} could be extended to re-create missing outputs (verifying their expected checksums) or to collapse the provenance listing of ``boring'' steps to improve human tractability.
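One possible shape for the selective-capture policy itself is a simple predicate consulted before an output file is copied into the RO, as sketched below; the parameter names and defaults are hypothetical and not part of the current implementation.
\begin{verbatim}
def should_store_output(step_name, size_bytes,
                        shim_steps=frozenset(), max_bytes=None):
    # shim_steps: steps declared as shims whose outputs can be recreated
    # max_bytes: optional upper bound on stored file size (None = no limit)
    if step_name in shim_steps:
        return False
    if max_bytes is not None and size_bytes > max_bytes:
        return False
    return True

# e.g. should_store_output("samtools_index", 2500000000,
#                          shim_steps={"samtools_index"}, max_bytes=10**9)
\end{verbatim}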
\subsubsection{\textcolor{black}{Enforcement of Best Practices -- An Open Problem}}
Recommendations and best practices from the scientific community are proposed frequently to guide researchers in designing their computational experiments in such a way as to make their research reproducible and verifiable. Best practices are put forward not only for workflow design but also for resource declaration, software packaging and configuration management \citep{Gruening2018}, to avoid dependence on local installations and manual processes of dependency management. The motto \textit{``Better Software, Better Research''} \citep{Goble2014} applies equally well to the workflow design process.
Declarative approaches to workflow definition such as CWL facilitate and encourage users to explicitly declare everything in a workflow, improving the white-box view of the retrospective as well as the prospective provenance. Such workflows should provide insight into the complete process followed to produce a data artefact, resolving the black-box nature often associated with workflow provenance. However, it is entirely up to researchers to leverage these approaches to produce well-defined workflows with explicit details facilitating enriched capture of the provenance trace at the appropriate level, and this can require considerable effort and consistency on the workflow designer's behalf. For instance, the alignment workflow used in this case study embeds bash scripts into the CWL tool definition, therefore requiring another layer to be penetrated for provenance information extraction. Despite using CWL for the workflow definition and \textit{CWLProv} for provenance capture, the provenance trace will be missing critical information, making it coarse-grained, and the raw logs capturing the enactment will also not be as informative.
The three criteria defined by \citet{cohen2017scientific} to be followed by workflow designers are: modularized specifications, unified representation and workflow annotations. CWL facilitates a modular structure for workflow definitions by grouping related steps into \textit{subworkflows}; and, as an interoperable standard, CWL provides a common platform moving towards a resolution of the heterogeneity of workflow specification languages. In addition, users can add standardised domain-specific annotations to data and workflows incorporating the constructs defined by external ontologies (e.g. EDAM) to enhance the understanding of the shared specification and the resources it refers to. All these features can be utilized to design better workflows and maximize the information declared, resulting in semantically rich and provenance-complete \textit{CWLProv} ROs, and should thus be expressed clearly in user guides\footnote{See for instance \url{https://view.commonwl.org/about\#format}} for workflow authors.
The usability of any \textit{CWLProv} RO directly relies on the practices followed by the researchers to design and communicate their computational analyses. Workflow-centric initiatives similar to \textit{Software Carpentry} \citep{softwarecarpentry} and \textit{Code is Science} \citep{CodeIsSc} are one possible way to organize training and create awareness around best practices. Community-driven efforts should be made to further consolidate the understanding of the requirements for making a given workflow explicit and understandable. Not only is awareness about workflow design needed, but the availability of the associated resources should also be emphasized, e.g. software as containers or software packages, big datasets in public repositories, and pre-processing/post-processing as part of the workflow. Without putting the proposed best practices into actual practice, complete communication, and hence the reproducibility of a workflow-centric computational analysis, is likely to remain challenging.
\section{Conclusion} \label{sec:conclusion}
The comprehensive sharing and communication of the computational experiments employed to achieve a scientific objective establishes trust in published results. Shared resources are sometimes rendered ineffective due to incomplete provenance, heterogeneity of platforms, unavailability of software and limited access to data. In this context, the contributions of this study are four-fold. First, we have provided a comprehensive summary of the recommendations put forward by the community regarding workflow design and resource sharing. Second, we define a hierarchical provenance framework to achieve homogeneity in the granularity of the information shared, with each level addressing specific provenance recommendations.
Third, we leverage the existing standards best suited to define a standardized format, \textit{CWLProv}, for the methodical representation of workflow enactments, their provenance and the associated artefacts employed. Finally, to demonstrate the applicability of \textit{CWLProv}, we extend an existing workflow executor (\textit{cwltool}) to provide a reference implementation that generates interoperable workflow-centric ROs, aggregating and preserving data and methods to support the coherent sharing of computational analyses and experiments.
With any published scientific research, statements such as \textit{``Methods and data are available upon request''} should no longer be acceptable in a modern open-science-driven research community. Considering on the one hand the collaborative nature and emerging openness of bioinformatics research, and on the other hand the heterogeneity of workflow design approaches, it is essential to provide open access to a structured representation of the data and methods utilized in any scientific study, so as to achieve interoperable solutions facilitating the reproducibility of science.
Provenance capture and its subsequent use to support the transparency of published research should not be treated as an afterthought but rather as a standard practice of the utmost priority. With the adoption of well-defined standards for provenance and declarative workflow definition approaches, the assumption of black-box provenance often associated with workflows can be addressed. Workflow authors should be encouraged to follow well-established and agreed-upon best practices for workflow design and software environment deployment. In conclusion, we do not require new standards, new WMSs or indeed new best practices; instead the focus should be on implementing, utilizing and re-using existing mature community-driven initiatives to achieve consensus in representing the different aspects of computational experiments.
\section{Availability of source code and requirements}
\textit{CWLProv} is implemented as part of the CWL reference implementation \textit{cwltool}:
\begin{itemize}
\item Project name: cwltool \rrid{RRID:SCR_015528}
\item Project home page: \newline \url{https://github.com/common-workflow-language/cwltool}
\item Version: \href{https://pypi.org/project/cwltool/1.0.20180809224403/}{1.0.20181012180214} \citep{cwltool}
\item Operating system(s): Platform independent
\item Programming language: Python 3.5 or later \rrid{RRID:SCR_008394}
\item Other requirements: \href{https://www.docker.com/}{Docker} \rrid{RRID:SCR_016445} recommended
\item License: \href{http://www.apache.org/licenses/LICENSE-2.0}{Apache License, Version 2.0}
\end{itemize}
The \textit{CWLProv} profile documents the use of W3C PROV in a Research Object to capture a CWL workflow run:
\begin{itemize}
\item Project name: \textit{CWLProv} profile
\item Project home page: \url{https://w3id.org/cwl/prov}
\item Version: \href{https://w3id.org/cwl/prov/0.6.0}{0.6.0} \citep{cwlprov}
\item Operating system(s): Platform independent
\item License: \href{http://www.apache.org/licenses/LICENSE-2.0}{Apache License, Version 2.0}
\end{itemize}
The \textit{CWLProv} Python Tool can be used to explore \textit{CWLProv} ROs on the command line:
\begin{itemize}
\item Project name: \textit{CWLProv} Python Tool (\textit{cwlprov-py})
\item Project home page: \newline \url{https://github.com/common-workflow-language/cwlprov-py}
\item Version: \href{https://pypi.org/project/cwlprov/0.1.1/}{0.1.1} \citep{cwlprov-py}
\item Operating system(s): Platform independent
\item Programming language: Python 3.5 or later \rrid{RRID:SCR_008394}
\item License: \href{http://www.apache.org/licenses/LICENSE-2.0}{Apache License, Version 2.0}
\end{itemize}
\section{Availability of supporting data and materials}
\textit{CWLProv} Research Objects of CWL workflow executions are published in Mendeley Data and mirrored to Zenodo.
\begin{itemize}
\item \begin{sloppypar}
CWL run of Somatic Variant Calling Workflow (CWLProv 0.5.0 Research Object)
\citep{somatic_mendeley} \\
\url{https://doi.org/10.17632/97hj93mkfd.3} \\
\url{https://zenodo.org/record/2841641}
\end{sloppypar}
\item \begin{sloppypar}
CWL run of Alignment Workflow (CWLProv 0.6.0 Research Object)
\citep{alignment_mendeley} \\
\url{https://doi.org/10.17632/6wtpgr3kbj.1} \\
\url{https://zenodo.org/record/2632836}
\end{sloppypar}
\item \begin{sloppypar}
CWL run of RNA-seq Analysis Workflow (CWLProv 0.5.0 Research Object)
\citep{rnaseq_mendeley} \\
\url{https://doi.org/10.17632/xnwncxpw42.1} \\
\url{https://zenodo.org/record/2838898}
\end{sloppypar}
\end{itemize}
The \textit{CWLProv Python Tool} can be used to explore the above research objects.
The data and methods supporting this work are also available in the GigaScience repository, GigaDB \citep{GigaScienceData}.
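For readers who prefer to script such an exploration, the following sketch (not the \textit{cwlprov} tool itself) checks the payload of a downloaded Research Object against its BagIt manifest; the directory name is a placeholder, and we assume the standard BagIt layout with a \texttt{manifest-sha1.txt} file.
\begin{verbatim}
# Minimal sketch (not the cwlprov tool): verify the payload checksums of an
# unpacked CWLProv Research Object, which is packaged as a BagIt bag.
# The directory name is a placeholder; manifest-sha1.txt is part of the
# standard BagIt layout.
import hashlib
from pathlib import Path

bag = Path("alignment_workflow_ro")
for line in (bag / "manifest-sha1.txt").read_text().splitlines():
    expected, relpath = line.split(maxsplit=1)
    actual = hashlib.sha1((bag / relpath).read_bytes()).hexdigest()
    assert actual == expected, f"checksum mismatch for {relpath}"
print("all payload files verified")
\end{verbatim}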
\section{Declarations}
\subsection{List of abbreviations}
BAM: Binary Alignment Map; BCO: BioCompute Object; CRAM: Compressed Alignment Map; CWL: Common Workflow Language; EBI: European Bioinformatics Institute; GATK: Genome Analysis ToolKit; HPC: High Performance Computing; JSON-LD: JavaScript Object Notation for Linked Data; OS: Operating System; PROV-DM: PROVenance Data Model; RO: Research Object; W3C: World Wide Web Consortium; WMS: Workflow Management System.
\subsection{Ethical Approval (optional)}
Not applicable.
\subsection{Consent for publication}
Not applicable.
\subsection{Competing Interests}
SSR and MRC are members of the leadership team for Common Workflow Language at the Software Freedom Conservancy.
\subsection{Funding}
FZK funded by Melbourne International Research Scholarship (MIRS) and Melbourne International Fee Remission Scholarship (MIFRS).
SSR and CG funded by \href{https://www.bioexcel.eu}{BioExcel CoE}, a project funded by the European Commission
\href{http://dx.doi.org/10.13039/100010666}{Horizon 2020 Framework Programme} under
contracts \href{https://cordis.europa.eu/project/id/823830}{H2020-INFRAEDI-02-2018-823830} and
\href{http://cordis.europa.eu/projects/675728}{H2020-EINFRA-2015-1-675728}, as well as \href{https://www.ibisba.eu/}{IBISBA} (\href{http://cordis.europa.eu/projects/730976}{H2020-INFRAIA-1-2014-2015-730976}).
\subsection{Authors' Contributions}
% CASRAI terms - see https://casrai.org/credit/
Conceptualization: FZK, SSR, MRC.
Data curation: FZK.
Formal analysis: FZK.
Funding acquisition: ROS, AL, CAG.
Investigation: FZK.
Methodology: FZK, SSR.
Project administration: FZK, SSR, ROS, AL.
Computing Resources: ROS, AL.
Software: FZK, SSR, MRC.
Supervision: MRC, ROS, AL, CAG.
Validation: FZK, SSR.
Writing - original draft: FZK.
Writing - review \& editing: FZK, SSR, ROS, AL, MRC.
\section{Acknowledgements}
An earlier version of this article \citep{cwlprov-preprint} was submitted for consideration at International Provenance and Annotation Workshop (IPAW) 2018. We would like to thank the IPAW reviewers for their constructive comments.
We would also like to thank the GigaScience editors and reviewers Tomoya Tanjo and Alban Gaignard for constructive and valuable feedback that we think has improved the manuscript and future directions.
We would like to thank the Common Workflow Language community, and in particular Peter Amstutz, Pau Ruiz Safont and Pjotr Prins, for their continuing support, review and feedback. We would also like to thank Brad Chapman, Christopher Ball and Lon Blauvelt for the workflows used in the evaluation and their prompt replies to our enquiries.
We are grateful for partial travel support from the Open Bioinformatics Foundation (OBF) Travel Fellowship Program \citep{OBFTravel}, which enabled Farah Zaib Khan to attend the Bioinformatics Open Source Conference (BOSC) 2017 and 2018 Codefests and subsidized this collaborative effort.
\bibliography{paper-refs}
\end{document}
| {
"alphanum_fraction": 0.8091412402,
"avg_line_length": 176.0317662008,
"ext": "tex",
"hexsha": "f37313f5c00a70810186311b83d81fb7fc0ec584",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "00706e62388dcea171c9296ebfbced71d163bb3b",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "stain/cwlprov-paper-gigascience",
"max_forks_repo_path": "main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "00706e62388dcea171c9296ebfbced71d163bb3b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "stain/cwlprov-paper-gigascience",
"max_issues_repo_path": "main.tex",
"max_line_length": 1861,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "00706e62388dcea171c9296ebfbced71d163bb3b",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "stain/cwlprov-paper-gigascience",
"max_stars_repo_path": "main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 31133,
"size": 138537
} |
\title{Self Organizing Systems Exercise 1}
\author{
Alexander Dobler 01631858\\
Thomas Kaufmann 01129115
}
\date{\today}
\documentclass[12pt]{article}
\usepackage{hyperref}
\usepackage{booktabs}
\usepackage{graphics}
\usepackage{multirow}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage{mwe}
\begin{document}
\maketitle
\section{Introduction \& Problem Description}
For exercise 1 of Self-Organizing Systems we chose the task \textit{Sequence alignment for Genetic Data (DNA Lattice) Anonymization}, in which the DNA lattice anonymization problem described in \cite{mainpaper} is to be solved with two metaheuristic techniques from the lecture.
The task is to find a pairing of DNA sequences, such that the sum of distances between two DNA sequences of a pair over all pairs is as small as possible.
More specifically we are given a set of $n$ sequences described by strings which can have different length.
In a first step we have to align these sequences, such that all sequences have the same length.
This is done by introducing \textit{gap}-characters to increase the length of sequences.
In general this process is called multiple sequence alignment (MSA) and we do not describe how this is done here, but rather use a library as a black-box tool for this step.
Now, that every sequence has the same length, we can compute the distance between two sequences as described in \cite{mainpaper}.
The last and main step is to combine this set of sequences into pairs, such that the sum of distances of two sequences of a pair summed up over all pairs is minimal.
Obviously this is just an application of minimum weighted matching in a complete graph, where the graph $G=(V,E)$ is defined as follows.
The set $V$ of nodes is given by the sequences and the set $E$ of edges is $V\times V$.
For an edge $e=\{u,v\}$ its weight $w(e)$ is just the distance between the sequences $u$ and $v$.
We denote by $M \subset E$ a complete matching on a graph $G$, s.t. $2 \times |M| = |V|$, and by $c(M)$ the corresponding weight of the matching.
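As a small illustration, consider a toy instance (for illustration only, not one of the test cases) with four sequences whose pairwise distances are $w(\{1,2\})=5$, $w(\{1,3\})=2$, $w(\{1,4\})=4$, $w(\{2,3\})=3$, $w(\{2,4\})=6$ and $w(\{3,4\})=7$. The three complete matchings have costs
\[ c(\{\{1,2\},\{3,4\}\}) = 12, \qquad c(\{\{1,3\},\{2,4\}\}) = 8, \qquad c(\{\{1,4\},\{2,3\}\}) = 7, \]
so the optimal pairing is $\{\{1,4\},\{2,3\}\}$; note that it does not even contain the single cheapest edge $\{1,3\}$.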
In the next sections we describe our solution approaches and main results.
\section{Test Data \& Preprocessing}
\label{sec:test-data}
As base set of data we chose DNA sequences from \url{https://www.kaggle.com/neelvasani/humandnadata} which consists of over 4000 human DNA sequences.
In a next step we created multiple instances of different size by selecting 10-300 random samples of these sequences.
Test-case sizes are always even, such that we do not have to bother about the leftover single sequence.
For each of these instances we performed some preprocessing with the python module \textit{Bio.SeqIO} from the package \textit{Bio} (\url{https://biopython.org/wiki/SeqIO}) and as suggested in ~\cite{mainpaper} compute multiple sequence alignments with the MSA-toolkit \emph{ClustalW}~\cite{clustalw}.
In the last step we compute the cost-matrix between pairs of sequences as described in \cite{mainpaper}.
All of the algorithms described in the next section only use the cost-matrix to compute a pairing.
Although we chose instances of sizes comparable to the work of Bradley~\cite{mainpaper}, as shown below, instances turned out being relatively easy to solve where even the relatively simple construction heuristic obtains high quality solutions in little to no execution time. We then further investigated this behavior in more depth and found that a specially structured cost matrix due to the tree-based alignment procedure in \emph{ClustalW} often leads to edge-costs of $0$ between specific pairs in the graph, inherently favouring the nature of the DNALA construction heuristic.
These test-cases can be found in the project under the folder \textit{data}:
Sequence alignments for each test-case are stored in the files \textit{human\_data\_XX.fasta} where \textit{XX} denotes the size of the test-case.
Similarly \textit{human\_data\_XX.cm} stores the cost-matrices in \textit{pickle}-format (\url{https://docs.python.org/3/library/pickle.html}).
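As an illustration, such a cost matrix can be loaded back with a few lines of Python; the file name below is one of our instances, while the assumption that the pickle holds a plain matrix indexable as \texttt{cost[i][j]} is made for the sake of the sketch.
\begin{verbatim}
# Illustrative sketch: load a precomputed cost matrix for the n=10 instance.
# We assume the pickle holds a matrix-like object indexable as cost[i][j].
import pickle

with open("data/human_data_10.cm", "rb") as handle:
    cost = pickle.load(handle)

print(len(cost), "sequences; distance between sequence 0 and 1:", cost[0][1])
\end{verbatim}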
\section{DNALA \& Exact Method}
We provide two preliminary methods to solve this problem which are used to benchmark the metaheuristic techniques:
\begin{itemize}
\item An implementation of the DNALA algorithm as described in \cite{mainpaper} can be found in the \textit{algorithms} directory.
We did implement the described randomness, but instead of using multiple runs we only run the algorithm once per test set to determine its capability.
\item As DNALA is only a heuristic and is not guaranteed to find an optimal solution, we furthermore use a maximum weighted matching implementation provided by the package \textit{networkx} (\url{https://networkx.org/documentation/stable//index.html}) to compute an exact solution used for optimality gaps in benchmarking; a code sketch of this exact baseline is given after this list.
\end{itemize}
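The exact baseline can be obtained with a few lines of \textit{networkx} (a minimal sketch, not necessarily the exact code used in our implementation): negating the distances and requiring maximum cardinality turns the built-in maximum weighted matching into a minimum-cost complete pairing.
\begin{verbatim}
# Illustrative sketch: exact minimum-cost pairing via networkx.
# Negated weights + maximum cardinality = minimum-weight perfect matching.
import networkx as nx

def optimal_pairing(cost):
    n = len(cost)
    G = nx.Graph()
    G.add_weighted_edges_from(
        (i, j, -cost[i][j]) for i in range(n) for j in range(i + 1, n)
    )
    matching = nx.max_weight_matching(G, maxcardinality=True)
    return sorted(tuple(sorted(edge)) for edge in matching)

# objective value c(M):
# total = sum(cost[u][v] for u, v in optimal_pairing(cost))
\end{verbatim}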
\section{Genetic Algorithm}
The first metaheuristic technique with which we solve the problem is a genetic algorithm.
For this we use the \textit{deap}-package for python (\url{https://deap.readthedocs.io/en/master/}) which provides a framework for creating genetic algorithms.
The well-known genetic algorithm components are implemented as follows:
\begin{itemize}
\item \textbf{Solution representation}: A solution consists of $\frac{n}{2}$ pairs such that each node (sequence) appears in exactly one pair.
\item \textbf{Fitness}: The fitness is just the sum of distances between pairs of points.
This in fact also represents the value of the corresponding weighted matching.
\item \textbf{Crossover}: Given a pair of individuals $M_i$ and $M_j$, we derive a successor individual $M_{i'}$ with the following procedure:
\begin{enumerate}
\item Select a subset $M'_i \subset M_i$ of matchings from individual $M_i$, where each edge $e \in M_i$ is selected with a probability of $0.5$.
\item We then try to identify matchings $e \in M_j$ composed of edges not contained in $M'_i$, which are entirely preserved for the offspring $M_{i'}$.
\item Finally, we identify the remaining unmatched vertices and pair them randomly.
\end{enumerate}
It is worth noting that different variations of this approach were tested, considering orderings of matchings in the solution representation as well as alternative strategies to pair the remaining vertices, but no significant improvement was obtained.
\item \textbf{Mutation}: Given an individual $M$, we select a random pair $e=\{u,v\}$, $e'=\{u',v'\} \in M$ of matchings based on a given mutation probability and construct an offspring $M'$ by performing 1) an arbitrary or 2) the best two-opt move in the subgraph induced by the vertices $V=\{u,v,u',v'\}$. Experiments showed slight advantages for the intensifying mutation approach (i.e. the best two-opt neighbour) at only a negligible increase in computational cost.
\end{itemize}
Furthermore we used a tournament selection strategy with tournament size of $k=3$ and have the option to select different population sizes and mutation rates for running the algorithm.
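To make the best two-opt mutation concrete, a minimal sketch is given below (an illustration, not the exact \textit{deap} operator registered in our GA); \texttt{matching} is a list of index pairs and \texttt{cost} the precomputed distance matrix.
\begin{verbatim}
# Illustrative sketch of the "best two-opt" mutation on a matching.
import random

def mutate_best_two_opt(matching, cost):
    i, j = random.sample(range(len(matching)), 2)
    (u, v), (x, y) = matching[i], matching[j]
    candidates = [((u, v), (x, y)), ((u, x), (v, y)), ((u, y), (v, x))]
    best = min(candidates,
               key=lambda p: cost[p[0][0]][p[0][1]] + cost[p[1][0]][p[1][1]])
    matching[i], matching[j] = best
    return matching
\end{verbatim}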
\section{Ant Colony Optimization}
At first we did not know how to solve the problem with ant colony optimization.
But then we realized that a weighted matching in a complete graph is just a tour visiting every vertex, where every second edge will have weight 0.
This was really convenient, as ant colony optimization is well-known to perform well on the TSP-problem.
Furthermore there are a lot of implementations available, which provide ACO-methods to solve the TSP-problem.
For our purposes it was enough to alter the implementation provided by the \textit{pants}-package (\url{https://pypi.org/project/ACO-Pants/}) in a way, such that we could solve the minimum weighted matching problem.
Our alteration can be described as follows.
When solving the TSP-problem every ant starts at a specific node and tries to find a best tour.
This means that at a specific point of the algorithm each ant has currently visited an acyclic path.
Now this path has either even or odd length.
This means that when an ant chooses its next edge there are two different strategies:
\begin{itemize}
\item If the path until now has even length, then choose the usual strategy to select the next node to visit (pheromones in ACO with $\alpha$- and $\beta$-values).
\item If the path until now has odd length, then go to any not yet visited node.
This resembles the fact that every other edge of the tour will have weight 0.
\end{itemize}
Of course we also have to alter the cost of a tour, in particular that every other edge has weight 0, to guide the algorithm to an optimal solution.
One might think that this construction of an algorithm to solve the minimum weighted matching problem is rather simplistic, but we will see that its performance is not too bad.
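The alternating step rule can be sketched as follows (a schematic illustration with names of our own choosing, not the actual modification of the \textit{pants} package); \texttt{pheromone} and \texttt{cost} are matrices and \texttt{alpha}, \texttt{beta} the usual ACO exponents.
\begin{verbatim}
# Schematic ant step: after an even number of edges the next edge starts a new
# pair and is chosen by pheromone/heuristic weights; after an odd number of
# edges any unvisited node may follow, since that edge counts as weight 0.
import random

def next_node(path, unvisited, pheromone, cost, alpha=1.0, beta=3.0):
    current = path[-1]
    edges_so_far = len(path) - 1
    if edges_so_far % 2 == 0:
        nodes = list(unvisited)
        weights = [(pheromone[current][j] ** alpha) *
                   ((1.0 / max(cost[current][j], 1e-9)) ** beta) for j in nodes]
        return random.choices(nodes, weights=weights, k=1)[0]
    return random.choice(list(unvisited))
\end{verbatim}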
\section{Local Search}
While population-based metaheuristics are usually great at identifying high-quality solution components in widely distributed solution spaces, they tend to end up in suboptimal regions, where simple intensification procedures like local search heuristics can significantly improve the solution quality. In fact, combining population-based metaheuristics with local search procedures is so common that for genetic algorithms the term \emph{memetic algorithm} has been established in the literature. In this work, we used a relatively simple two-opt neighbourhood structure with a constant-time neighbour evaluation scheme and a first-improvement step function to improve the populations of both the ACO and the GA.
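A compact version of this intensification step might look as follows (a sketch of the idea, not the exact implementation); since only the four vertices of the two pairs are involved, each neighbour can be evaluated in constant time.
\begin{verbatim}
# First-improvement two-opt local search over a matching (list of index pairs).
def improve_once(matching, cost):
    for i in range(len(matching)):
        for j in range(i + 1, len(matching)):
            (u, v), (x, y) = matching[i], matching[j]
            current = cost[u][v] + cost[x][y]
            for pairing in (((u, x), (v, y)), ((u, y), (v, x))):
                (a, b), (c, d) = pairing
                if cost[a][b] + cost[c][d] < current:
                    matching[i], matching[j] = pairing
                    return True   # apply the first improving move, then rescan
    return False

def local_search(matching, cost):
    while improve_once(matching, cost):
        pass
    return matching
\end{verbatim}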
\section{Results \& Conclusion}
To evaluate the described approaches, experiments with a set of $12$ differently sized instances were run for different kinds of algorithm configurations.
We used the optimality gap as a scale-invariant performance measure describing the relative distance to the global optimum.
Even for particularly large instances, some configurations did converge towards the global optimum.
All experiments were run with a wallclock time of $300 s$ and $5$ repetitions each for statistical stability. Execution times do not include preparation time of cost matrices, since they have been precomputed and stored for reuse. All experiments were run on a Windows 10 machine with AMD Ryzen 3700X CPU in single-threaded mode.
\subsection{Comparison of GA, ACO and DNALA}
Table~\ref{tab:main} compares both the GA as well as the ACO to the DNALA baseline, where we show the median values for the optimality gap, the execution time as well as the iteration where the best solution was obtained.
Main observations:
\begin{itemize}
\item Purely population-based metaheuristic approaches work quite well for moderately sized instances, but solution quality declines significantly with increasing instance sizes. While some smaller instances could even be solved to optimality, for large instances the optimality gap grows to above $100\%$.
\item Both approaches show the same behaviour with respect to the overall number of iterations conducted in the given period: for smaller instances optimal solutions are obtained relatively fast, and as the instance size increases, the number of iterations until early termination at the optimum also increases. At some point, however, instances become more difficult and more expensive in terms of computational cost, such that the iteration count then steadily decreases, which in turn also affects solution quality.
\item It can be observed that our GA does not suffer from premature convergence as the iteration where the best solution was obtained tends to be close to the overall iteration count. This suggests that more iterations could still improve the solution quality.
\item Local search generally improves quality significantly, allowing even larger instances to be solved almost to optimality. However, the iteration count drops to a level suggesting that our GA is then basically just a \emph{fancy} random restart procedure.
\item The effectiveness of two-opt local search backs up our observation of the biased instance generation procedure with \textit{ClustalW} mentioned in Section~\ref{sec:test-data}, since this simple neighborhood is able to identify the respective pairs of minimal cost easily. Furthermore, DNALA + LS seems to be quite a good combination.
\end{itemize}
\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{llr|rrrrr|rrrrr}
\toprule
& & & \multicolumn{5}{|c|}{Without Local Search} & \multicolumn{5}{|c}{With Local Search} \\
& & Optimum & Fitness & Gap & Time & $I_{opt}$ & $I_{total}$ & Fitness & Gap & Time & $I_{opt}$ & $I_{total}$ \\
ALG & n & & & & & & & & & \\
\midrule
GA & 10 & 1994 & 1994 & 0.00 & 0.00 & 1 & 1 & 1994 & 0.00 & 0.00 & 0 & 0 \\
& 20 & 11758 & 11758 & 0.00 & 0.00 & 13 & 13 & 11758 & 0.00 & 0.00 & 0 & 0 \\
& 30 & 19032 & 19032 & 0.00 & 2.00 & 33 & 33 & 19032 & 0.00 & 1.00 & 0 & 0 \\
& 40 & 20892 & 20892 & 0.00 & 5.00 & 54 & 54 & 20892 & 0.00 & 3.00 & 0 & 0 \\
& 50 & 34198 & 34198 & 0.00 & 18.00 & 124 & 124 & 34198 & 0.00 & 3.00 & 0 & 0 \\
& 60 & 42382 & 44060 & 3.96 & 300.00 & 141 & 1795 & 42382 & 0.00 & 10.00 & 1 & 1 \\
& 70 & 47234 & 47498 & 0.56 & 300.00 & 221 & 1744 & 47234 & 0.00 & 15.00 & 1 & 1 \\
& 80 & 53538 & 53538 & 0.00 & 62.00 & 291 & 347 & 53538 & 0.00 & 23.00 & 2 & 2 \\
& 90 & 69731 & 70489 & 1.09 & 300.00 & 477 & 1322 & 69731 & 0.00 & 36.00 & 2 & 2 \\
& 100 & 83495 & 90825 & 8.78 & 300.00 & 314 & 314 & 83495 & 0.00 & 56.00 & 3 & 3 \\
& 150 & 104772 & 164686 & 57.19 & 300.00 & 141 & 141 & 104772 & 0.00 & 183.00 & 6 & 6 \\
& 200 & 130178 & 285042 & 118.96 & 300.00 & 142 & 142 & 133848 & 2.82 & 300.00 & 7 & 7 \\
\midrule
ACO & 10 & 1994 & 1994 & 0.00 & 0.00 & 0 & 0 & 1994 & 0.00 & 0.00 & 0 & 0 \\
& 20 & 11758 & 11758 & 0.00 & 0.00 & 0 & 0 & 11758 & 0.00 & 0.00 & 0 & 0 \\
& 30 & 19032 & 19032 & 0.00 & 0.00 & 1 & 1 & 19032 & 0.00 & 0.00 & 0 & 0 \\
& 40 & 20892 & 20892 & 0.00 & 8.00 & 89 & 89 & 20892 & 0.00 & 0.00 & 0 & 0 \\
& 50 & 34198 & 34280 & 0.24 & 300.00 & 1380 & 1856 & 34198 & 0.00 & 1.00 & 0 & 0 \\
& 60 & 42382 & 45370 & 7.05 & 300.00 & 326 & 1358 & 42382 & 0.00 & 1.00 & 0 & 0 \\
& 70 & 47234 & 50798 & 7.55 & 300.00 & 361 & 1271 & 47234 & 0.00 & 2.00 & 0 & 0 \\
& 80 & 53538 & 60098 & 12.25 & 300.00 & 684 & 775 & 53538 & 0.00 & 3.00 & 0 & 0 \\
& 90 & 69731 & 83165 & 19.27 & 300.00 & 500 & 598 & 69731 & 0.00 & 5.00 & 0 & 0 \\
& 100 & 83495 & 107569 & 28.83 & 300.00 & 428 & 512 & 83495 & 0.00 & 22.00 & 4 & 4 \\
& 150 & 104772 & 157702 & 50.52 & 300.00 & 57 & 194 & 108990 & 4.03 & 300.00 & 18 & 32 \\
& 200 & 130178 & 218002 & 67.46 & 300.00 & 49 & 125 & 171176 & 31.49 & 300.00 & 20 & 24 \\
\midrule
DNALA & 10 & 1994 & 1994 & 0.00 & 0.00 & - & - & 1994 & 0.00 & 0.00 & - & - \\
& 20 & 11758 & 11758 & 0.00 & 0.00 & - & - & 11758 & 0.00 & 0.00 & - & - \\
& 30 & 19032 & 19032 & 0.00 & 0.00 & - & - & 19032 & 0.00 & 0.00 & - & - \\
& 40 & 20892 & 22066 & 5.62 & 0.00 & - & - & 21164 & 1.30 & 0.00 & - & - \\
& 50 & 34198 & 35336 & 3.33 & 0.00 & - & - & 34198 & 0.00 & 0.00 & - & - \\
& 60 & 42382 & 44666 & 5.39 & 0.00 & - & - & 42890 & 1.20 & 0.00 & - & - \\
& 70 & 47234 & 47252 & 0.04 & 0.00 & - & - & 47244 & 0.02 & 0.00 & - & - \\
& 80 & 53538 & 55448 & 3.57 & 0.00 & - & - & 53538 & 0.00 & 0.00 & - & - \\
& 90 & 69731 & 71779 & 2.94 & 0.00 & - & - & 69747 & 0.02 & 0.00 & - & - \\
& 100 & 83495 & 86513 & 3.61 & 0.00 & - & - & 85293 & 2.15 & 0.00 & - & - \\
& 150 & 104772 & 105824 & 1.00 & 0.00 & - & - & 104940 & 0.16 & 0.00 & - & - \\
& 200 & 130178 & 134640 & 3.43 & 0.00 & - & - & 130194 & 0.01 & 0.00 & - & - \\
\bottomrule
\end{tabular}%
}
\caption{Comparison of GA and ACO against the DNALA baseline in $12$ different instances.}\label{tab:main}
\end{table}
\subsection{Comparison of GA Mutation Operators}
The previous section showed that our GA is not subject to premature convergence. However, given the effectiveness of the two-opt neighbourhood structure and the fact that improvements occur frequently in the last iterations, we suspected that our GA was actually a multi-solution, random-neighbour local search. To this end, we compared the random two-opt move mutation operator against the best two-opt move one. Table~\ref{tab:mutation-operator} compares median values for fitness, optimality gap, execution time and iteration characteristics for both types for moderate and large instance types.
Main observations:
\begin{itemize}
\item The randomized approach generally performs quite similarly; however, it tends to require more iterations for comparable solution quality (e.g. $n=70$).
\item For smaller instances, the locally-improving moves seem to be slightly advantageous in terms of convergence pace.
\item As the instance size increases, the effect of small locally-improving moves seem to vanish and both quality as well as iteration characteristics seem to align.
\end{itemize}
\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{llr|rrrrr}
\toprule
& & Optimum & Fitness & Gap & Time & $I_{opt}$ & $I_{total}$ \\
type & n & & & & & & \\
\midrule
Random & 70 & 47234.00 & 47371.00 & 0.29 & 300.00 & 278.50 & 2203.50 \\
& 80 & 53538.00 & 55116.00 & 2.95 & 300.00 & 564.00 & 1650.00 \\
& 90 & 69731.00 & 70490.00 & 1.09 & 300.00 & 599.00 & 1012.00 \\
& 100 & 83495.00 & 94405.00 & 13.07 & 300.00 & 501.00 & 629.50 \\
& 150 & 104772.00 & 180576.00 & 72.35 & 300.00 & 149.00 & 149.00 \\
& 200 & 130178.00 & 280278.00 & 115.30 & 300.00 & 142.50 & 142.50 \\
\midrule
Best & 70 & 47234.00 & 47366.00 & 0.28 & 221.00 & 320.00 & 1523.50 \\
& 80 & 53538.00 & 53538.00 & 0.00 & 88.00 & 295.50 & 364.50 \\
& 90 & 69731.00 & 71061.00 & 1.91 & 300.00 & 486.00 & 1492.50 \\
& 100 & 83495.00 & 89561.00 & 7.27 & 300.00 & 401.50 & 1002.50 \\
& 150 & 104772.00 & 192572.00 & 83.80 & 300.00 & 135.50 & 135.50 \\
& 200 & 130178.00 & 279480.00 & 114.69 & 300.00 & 143.50 & 143.50 \\
\bottomrule
\end{tabular}%
}
\caption{Comparison of the random and best two-opt move in the GA mutation operator.}\label{tab:mutation-operator}
\end{table}
\subsection{Population Size}
As the name might suggest, population size is obviously a crucial parameter in population-based metaheuristics. On the one hand, an increased population size allows greater search diversity and covers more areas of the solution space at once, obviously increasing the likelihood of identifying new promising regions.
On the other hand, metaheuristics usually require lots of iterations to continuously improve the solution quality, which obviously is a major limiting factor for large population sizes.
In Figure~\ref{fig:population} we compare the impact of different population sizes on the optimality gap over time in single executions on two different instances (usually a time bucket average would be more appropriate for statistical stability, but we assumed this was beyond the scope of this exercise).
Main observations:
\begin{itemize}
\item Too large population sizes inherently lead to increased computational costs, thus slowing down individual iterations, as can be observed in Figure~\ref{fig:population-b}
\item The increased population sizes, however, increase search diversity and thus may improve solution quality (compare again Figure~\ref{fig:population-b}). Of course this depends on the structure of the solution space, e.g. big-valley vs. widely distributed.
\item Independent of the population size, it can be observed that the GA tends to perform small steps towards improving solutions until convergence towards an (local or global) optimum sets in, while the ACO seems to require large periods to obtain improving solutions. This could be either an indicator for a suboptimal model or some hyper-parameter issue. In a more elaborated study, we would use for instance the R-package irace for proper parameter tuning.
\end{itemize}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ga_50_population_comparison.pdf}
\caption%
{{\small }}
\label{fig:population-a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ga_100_population_comparison.pdf}
\caption%
{{\small }}
\label{fig:population-b}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/aco_50_population_comparison.pdf}
\caption%
{{\small }}
\label{fig:population-c}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/aco_100_population_comparison.pdf}
\caption%
{{\small }}
\label{fig:population-d}
\end{subfigure}
\caption
{\small Optimality gap over time for different population sizes on instances with $n=50$ and $n=100$ vertices. }
\label{fig:population}
\end{figure*}
\subsection{Hyper-parameters}
In addition to some manual tweaking of hyper-parameters, we performed a very limited comparison of selected hyper-parameters of both the GA and the ACO.
The default mutation rate corresponds to $2/|V|$, multiplied by a tunable parameter $r \in \{1,4,8\}$. The integer values for alpha and beta define the relative importance of pheromones and distances respectively.
Main observations:
\begin{itemize}
\item We can clearly see that with too high mutation rate ($r=8$) the genetic algorithm does not converge to the optimum, whereas for $r=1$ and $r=4$ the performance is similar.
\item For the ACO, alpha=1 with beta=4 or beta=8 clearly outperforms all other parameter choices, suggesting the importance of distance information over pheromones, again likely due to the specially structured instances.
\end{itemize}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ga_50_mutation_rate_comparison.pdf}
\caption%
{{\small }}
\label{fig:mean and std of net14}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/aco_50_alpha_beta_ratio_comparison.pdf}
\caption%
{{\small }}
\label{fig:mean and std of net24}
\end{subfigure}
\caption
{\small Optimality gap over time for different parameters with $n=50$ vertices. }
\label{fig:mean and std of nets}
\end{figure*}
\section{Conclusion}
We have seen two metaheuristic techniques to solve the minimum weighted matching problem, namely ACO and genetic algorithms.
Both approaches performed similarly and, in combination with local search, converged to optimal solutions in reasonable time even for larger instances.
\bibliographystyle{abbrv}
\bibliography{main}
\end{document}
| {
"alphanum_fraction": 0.6359355259,
"avg_line_length": 84.5275080906,
"ext": "tex",
"hexsha": "e827a5d2dd27f69ecd0b099de14f6d41dd74d5c9",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b75188097d095e4acaca32290ba4f49fa8cb6c0e",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "tkauf15k/sos2020",
"max_forks_repo_path": "ex1/report/report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b75188097d095e4acaca32290ba4f49fa8cb6c0e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "tkauf15k/sos2020",
"max_issues_repo_path": "ex1/report/report.tex",
"max_line_length": 729,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "b75188097d095e4acaca32290ba4f49fa8cb6c0e",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "tkauf15k/sos2020",
"max_stars_repo_path": "ex1/report/report.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7112,
"size": 26119
} |
\documentclass[letterpaper, 10pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage{amsmath,amsthm,amssymb,scrextend}
\usepackage{fancyhdr}
\pagestyle{fancy}
\usepackage{silence}
\WarningFilter{latex}{You have requested package}
\input{ltx/pkg/preamble}
\begin{document}
\lhead{MAT224 Linear Algebra II}
\chead{Vector spaces and subspaces}
\rhead{Week 01}
\title{Linear Algebra II \\ \Large{MAT224}}
\author{Lennart Döppenschmitt}
\maketitle
% \tableofcontents
\section*{Welcome,}%
my name is Lennart, I will be your instructor for Linear Algebra II this term.
I am a graduate student in the maths department and with my advisor I work on differential
geometry and the moduli space of Kähler metrics.
Apart from math (surprisingly similar though), I enjoy rock climbing.
\lb
You can reach me with questions about the course via \textbf{email: }
\emph{lennart at math dot toronto dot edu}
\pr
For math questions this course uses \emph{Piazza}.
\lb
My \textbf{office hours} take place on Wednesdays at 2 pm (EST) on Zoom.
\lb
\lb
\textbf{A few things about the class:}
\begin{itemize}
\item
Details can be found in the syllabus on Quercus
\href{https://q.utoronto.ca}
{\ble{\texttt{q.utoronto.ca}}}.
\item The content will build on what you have learned in MAT223 with a
particular focus on abstraction. Thinking abstractly is an important skill and
makes arguments more powerful as it applies to a wider range of applications.
\item
A weekly class schedule, which we will follow closely, can be found at
the end of the syllabus.
\item
All lectures will be recorded and made available on YouTube for
\emph{one} week; a link will follow.
Office hours will \emph{not} be recorded.
\end{itemize}
\lb
\textbf{About the lectures:}
\begin{itemize}
\item
Our lectures will be a mix of me explaining new math to you, time for you to try out
what you just learned in short exercises, and then a discussion round
where we discuss the answers together.
\item
I will try to give Definitions, Theorems, etc. the same numbers as in the textbook.
\item
These notes will be available on Quercus.
% Alternatively, you can access them on
% \pr
% \href{https://www.github.com/researchnix/mat224}
% {\ble{\texttt{github.com/researchnix/mat224}}}.
% \pr
% (The notes might be under construction up until the start of the lecture, but
I encourage you to have a copy of them ready for the lecture to fill in
the blanks and scribble down ideas for the discussions.
\item
Please let me know when you find mistakes in the notes, there will be plenty.
\end{itemize}
\lb
% \textbf{Miscellaneous:}
\textbf{About Zoom:}
\begin{itemize}
\item We will use the same Zoom meeting ID for lectures and office hours.
Please do \textbf{NOT} share the ID or password publicly.
\item To join you need a Zoom account with an official UofT email.
\item Please mute yourself in Zoom.
\item For questions, you can \emph{virtually} raise your hand and unmute yourself when
I ask you to.
\item Please only use the chat function for lecture relevant messages and please don't spoil
answers to give everyone time to think about the questions.
\end{itemize}
\lb
\textbf{Thank you } to Professor Sean Uppal and Jeffrey Im for providing me with their old slides from a previous iteration of this course as a reference!
\newpage
\section*{Introduction}%
\label{sec:introduction}
Let us start by going over mathematical basics that we will need for this course.
\lb
\begin{itemize}
\item
A \emph{set} is a collection of objects. Examples include the set of integers
$\Z$, the set of real numbers $\R$, the set of nonnegative integers $\Z_{\geq 0}$.
\pr
$A \subseteq B$ indicates that every element in $A$ is
also an element in $B$. We say in this case that $A$ is a \emph{subset} of $B$.
\pr
A subset may be declared by pruning a set with a specified condition. For example, the
set of even integers and the set of odd integers are
\[ \bb E = \cb{ n ∈ \Z ~\vert~ n \tx{ is even }} \subseteq \Z\]
\[ \bb O = \cb{ n ∈ \Z ~\vert~ n \tx{ is odd}} \subseteq \Z\]
\pr
If $A \subseteq B$ and at least one element of $B$ is not in $A$, we say that $A$ is
properly contained in $B$, in symbols $A \subsetneq B$.
\newpage
\item
A function $\map{A}[f]{B}$ between sets is an assignment that chooses for every $a ∈ A$
an element $f(a) = b ∈ B$. For example,
\begin{align*}
&\map{\Z}[g]{\Z} \\
&x \mapsto 2x + 1
\end{align*}
sends every integer $x$ to twice its value plus one. That is, $g(1) = 3$,
$g(2) = 5$, and so on $\ldots$.
\lb
The notation $\map{A}[f]{B}$ moreover describes that we are allowed to apply f to
elements $a ∈ A$, we therefore call $A$ the \emph{domain} of $f$.
\lb
The set $B$ is called the \emph{codomain} or \emph{target} of $f$, this is where $f$
takes values in.
\lb
For $a ∈ A$ we call $f(a)$ the \emph{image of $a$ under $f$}.
\lb
The set of all possible functions between two sets $A$ and $B$ is
denoted by $\mathcal{F}(A,B)$.
\pr
For any set $A$, there is the function $\map{A}[\id{A}]{A}$ that sends every element
back to itself.
\[ \id{A}(a) = a \qquad \tx{for all } a ∈ A \]
\newpage
\item
A function $\map{A}[f]{B}$ can have the following properties:
\begin{itemize}
\item[] \emph{injective} \\
No two distinct elements $a \neq b$ in $A$ have the same value
$f(a) \neq f(b)$ under $f$. \\
In other words, if $f$ is injective, then $f(a) = f(b)$ implies that
$a = b$.
\item[] \emph{surjective} \\
Every element $b$ in $B$ is the image of some element $a$ in $A$. \\
Equivalently, for all $b ∈ B$ there is an $a ∈ A$ such that $b = f(a)$.
\item[] \emph{bijective}\\
The function is both injective and surjective.
\lb
\q{Which of these properties apply to $g(x) = 2x + 1$ defined above as a function
from $\Z$ to $\Z$?} \q{If not, how can you change the definition to make it bijective?}
\end{itemize}
\newpage
\item
Two functions of the form
$\map{A}[f]{B}$ and
$\map{B}[g]{C}$ may be concatenated. This means that whatever the first function $f$ spits
out is fed into the next function $g$.
\[ a \mapsto f(a) \mapsto g(f(a)) \]
This is formally called \emph{composition} of $f$ and $g$ and denoted by
\[ (g \circ f) (a) = g(f(a)) \]
Notice, this only makes sense if the domain of $g$ is also the codomain of $f$.
\newpage
\item
An \emph{inverse} of a function $\map{A}[f]{B}$ is a function in the \emph{opposite}
direction $\map{B}[h]{A}$ with the property that
\[ h(f(a)) = a \qquad \tx{for all } a ∈ A \]
and
\[ f(h(b)) = b \qquad \tx{for all } b ∈ B \]
Intuitively, this means $h$ is undoing whatever $f$ was doing. For example,
the function $h(y) = \frac{y-1}{2}$ from $\bb O$ to $\Z$ is an inverse to
the function $g(x) = 2x + 1$; a short check follows below.
\end{itemize}
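Indeed, for every $x ∈ \Z$ and every $y ∈ \bb O$ we have
\[ h(g(x)) = \frac{(2x+1)-1}{2} = x \qquad \text{and} \qquad g(h(y)) = 2 \cdot \frac{y-1}{2} + 1 = y. \]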
\pr
We will wrap up this introduction with the following theorem
\lb
\textbf{Theorem}
\lb
A function $\map{A}[f]{B}$ is invertible if and only if it is bijective
% \begin{proof}
% \end{proof}
\newpage
\section*{Vector spaces}%
\label{sec:vector spaces}
\pr
\textbf{Textbook:} Section 1.1
\lb
\textbf{Definition 1.1.1}
\pr
A (real) vector space $(V, + , \cdot)$ consists of a set $V$ and two operations that we
call \emph{addition} $(+)$ and \emph{scalar multiplication} $\cdot$
\[ \map{V \times V}[+]{V} \]
\[ \map{\R \times V}[\cdot]{V} \]
such that the following axioms hold
\begin{enumerate}
\item
(additive closure)
\qquad $\vec x + \vec y ∈ V$,
for all $\vec x, \vec y ∈ V$
\item
(multiplicative closure)
\qquad $α \cdot \vec x ∈ V$,
for all $\vec x ∈ V$ and scalars $α ∈ \R$
\item
(commutativity)
\qquad $\vec x + \vec y = \vec y + \vec x$,
for all $\vec x, \vec y ∈ V$
\item
(additive associativity)
\qquad $(\vec x + \vec y) + \vec z =\vec x + (\vec y + \vec z)$,
for all $\vec x , \vec y , \vec z ∈ V$
\item
(additive identity)
\qquad There exists a vector $\vec 0 ∈ V$ such that $\vec x + \vec 0 = \vec x$
for all $\vec x ∈ V$
\item
(additive inverse)
\qquad For each $\vec x ∈ V$, there exists a vector $- \vec x ∈ V$
with the property that $\vec x + (- \vec x) = \vec 0$
\item
(multiplicative associativity)
\qquad $(α \cdot β) \cdot \vec x = α \cdot ( β \cdot \vec x)$,
for all $α, β ∈ \R$ and $\vec x ∈ V$
\item
(distributivity over vector addition)
\qquad $α \cdot ( \vec x + \vec y) = α \vec x + α \vec y$,
for all $α ∈ \R$ and $\vec x, \vec y ∈ V$
\item
(distributivity over scalar addition)
\qquad $(α + β) \cdot \vec x = α \vec x + β \vec x$,
for all $α, β ∈ \R$ and $\vec x ∈ V$
\item
(identity property)
\qquad $1 \cdot \vec x = \vec x$,
for all $\vec x ∈ V$, $1 ∈ \R$
\end{enumerate}
\lb
\textbf{Remark}
\begin{itemize}
\item
Think of a vector space as a set, only that you are able to do more with its elements
(you can add two of them together and multiply them by a number).
Plus, there are 10 `rules' you know your vector space obeys.
\item
For elements in a vector space $V$, we write $\vec x, \vec y, \ldots∈ V$.
The textbook writes ${\bf x, y, } \ldots ∈ V$.
\item
We often abbreviate $α \cdot \vec x$ with $α \vec x$.
\item
Elements in a vector space are called vectors. Be aware that anything can be a vector,
even functions for example.
\item
We say that a vector space is \emph{real} if the scalars are real numbers. For now every vector space is real; only later will we allow the scalars to be \emph{complex numbers}
and the like.
\end{itemize}
\newpage
\pr
\textbf{Examples}
\begin{enumerate}
\item The real numbers $\R$ form a vector space with 'usual' addition $+$
and multiplication $\cdot$.
\item An n-tuple of real numbers can be written as
$\vec v = (v_1, v_2, \ldots, v_n)$ where
each $v_i ∈ \R$. The set of n-tuples is a vector space denoted by $\R^n$.
\pr
The operations of addition and scalar multiplication are performed componentwise.
The additive identity is the zero tuple $\vec 0 = (0, \ldots, 0)$.
\item
The set $\mat{n, m}{\R}$ of $n \times m$ matrices with componentwise addition
and scalar multiplication.
\item
The set of polynomials of degree at most $n$
\[ \pol{n}{\R} = \cb{a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n \vert a_0, \ldots, a_n ∈ \R } \] is
a vector space. \\
Addition of two polynomials is performed coefficientwise, and the identity element $\vec 0$ is the
polynomial that is constantly zero, $p(x) = 0$.
\pr
Because every term in a polynomial looks like $a_i x^i$ for some value of $i$ between $0$ and $n$, we can abbreviate
\[ \Sum[i=0][n] a_i x^i = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n \]
\item
The set
\[ \cal F (\R, \R) \]
of functions from the real numbers to the real numbers is a vector space.
\pr
What might be the operations $+$ and $\cdot$ ?
\end{enumerate}
\vspace{20pt}
\textbf{Intuition}
\lb
In many cases vectors may be represented with arrows becasue they, too, have a
direction and a magnitude. But be careful, every analogy has its limitations.
\newpage
\lb
\textbf{Discussion}
\begin{enumerate}
\item[(I)]
Is the set of 2-tuples of integers $\Z^2$ a real vector space?
\pr \textbf{Hint: } To verify that something is a vector space, we need to check
\emph{all} axioms in the definition. However, to prove the contrary, it is enough to
disprove \emph{one single} axiom!
\vspace{400pt}
\item[(II)]
Is the set $\pol{n}{\R}'$ of polynomials of \emph{exactly} degree $n$ a vector space?
\end{enumerate}
\newpage
\subsection*{Some Properties of vector spaces}%
\label{sub:Some Properties of vector spaces}
Whenever we introduce a new mathematical \emph{object}, such as a vector space,
we may not take anything for granted. In some ways, vectors behave like real numbers
(addition, scalar multiplication, zero element, additive inverse $\ldots$) but in many ways
they do not!\\
For example, for $3 ∈ \R$ we can write $ \frac{1}{3} $, but for a vector
$\vec v ∈ V$ we may not write $\frac{1}{\vec v}$. To be a good mathematician, it is very helpful
to be extremely pedantic!
\lb
\textbf{Theorem (Cancellation)}
\lb
Let $V$ be a vector space and $\vec u, \vec v, \vec w ∈ V$. If
\[ \vec u + \vec w = \vec v + \vec w \]
then
\[ \vec u = \vec v \]
\begin{proof}
\begin{align*}
\vec u
&= \vec u + \vec 0 \\
&= \vec u + (\vec w - \vec w) \\
&= (\vec u + \vec w) - \vec w \\
&= (\vec v + \vec w) - \vec w \\
&= \vec v + (\vec w - \vec w) \\
&= \vec v + \vec 0 \\
&= \vec v
\end{align*}
\end{proof}
\lb
\textbf{Proposition}
Let $V$ be a vector space and $\vec v \in V$, then
\[ 0 \vec v = \vec 0 \]
\q{Explain the difference between $0$ and $\vec 0$.}
\begin{proof}
\begin{align*}
\vec 0 + 0 \vec v
&= 0 \vec v \\
&= (0 + 0) \vec v \\
&= 0 \vec v + 0 \vec v
\end{align*}
So by the cancellation theorem we can simplify
\[ \vec 0 + 0 \vec v = 0 \vec v \]
to
\[ \vec 0 = 0 \vec v \]
\end{proof}
\lb
\textbf{Proposition}
Let $V$ be a vector space and $\vec v \in V$, then
\[ -1 \cdot \vec v = - \vec v \]
\begin{proof}
\begin{align*}
- \vec v
&= - \vec v + \vec 0 \\
&= - \vec v + 0 \vec v \\
&= - \vec v + (1 + (-1)) \vec v \\
&= - \vec v + 1 \vec v + (-1) \cdot \vec v \\
&= - \vec v + \vec v + (-1) \cdot \vec v \\
&= \vec 0 + (-1) \cdot \vec v \\
&= -1 \cdot \vec v
\end{align*}
\end{proof}
\newpage
\lb
Notice that the symbol $+$ does \emph{not} necessarily refer to the standard addition; it
could be defined in a different way, as we can observe in the following.
\lb
\textbf{Discussion}
\lb
Let $V$ be the set of 2-tuples of real numbers $\ttpl{u_1}{u_2}$.
\lb
Define addition of 2-tuples as
\[ \ttpl{u_1}{u_2} \diamond \ttpl{v_1}{v_2} = \ttpl{u_1 + v_1 + 1}{u_2 + v_2 + 1} \]
\lb
Let scalar multiplication be given by
\[ α \star \ttpl{u_1}{u_2} = \ttpl{α u_1 + α - 1 }{α u_2 + α - 1} \]
\lb
Is there an additive identity? Are there inverses?
Is $(V, \diamond , \star )$ a vector space?
\lb
\textbf{Hint} Look at the propositions from the previous page.
\newpage
\section*{Subspaces}%
\label{sec:Subspaces}
\pr
\textbf{Textbook:} Section 1.2
\lb
\textbf{Definition}
\pr
A \emph{subspace} $U$ of a vector space $(V, +, \cdot)$ is a subset $U \subseteq V$ that
is a vector space in its own right (with the same addition and scalar multiplication of $V$).
\lb
This is the same idea as for subsets, only that the sets have been `upgraded' to vector spaces.
\lb
\textbf{Examples}
\begin{enumerate}
\item
As a first example, consider the vector spaces of polynomials $\pol{n}{\R}$ and of
functions $ \mathcal{F}(\R, \R)$ we have already encountered. Now, every polynomial
with real coefficients is automatically a function $\map{\R}[]{\R}$.
\lb
\q{Is this enough to be a subspace?}
\lb
We also need to check that the operations of addition and scalar multiplication of
polynomials are the same when we consider polynomials to be functions.
\vspace{200pt}
\item
Since we can view a vector in $\R^2$, that is, a tuple with 2 entries $\ttpl{v_1}{v_2}$,
also as a vector in $\R^3$, namely a tuple with 3 entries,
by adding the value $0$ at the last entry,
\[ \begin{pmatrix} v_1 \\ v_2 \\ 0 \end{pmatrix} \]
it is clear that $\R^2 \subseteq \R^3$ as sets.
\lb
As before, we need to check addition and scalar multiplication.
\end{enumerate}
\newpage
\lb
\textbf{Discussion}
\lb
Let $V$ be a real vector space.
\begin{enumerate}
\item
Can $U \subseteq V$ be a subspace if $U$ is empty (i.e. has no elements)?
\item
Suppose $U$ is a subset of $V$, which vector space axioms does $U$
automatically inherit from $V$?
\item
Following the previous question, which axioms are left to check for $U$
to be a vector space?
\end{enumerate}
\newpage
\lb
As we observed, we don't need to start from scratch to check all 10 axioms for subspaces.
\lb
\textbf{Theorem 1.2.8}
\lb
Let $V$ be a vector space with a subset $U$. Then $U \subseteq V$ is a subspace if and only if
\begin{enumerate}
\item $U$ is nonempty
\item $α \vec u + \vec v ∈ U$ for all $\vec u, \vec v ∈ U$ and $α ∈ \R$.
\end{enumerate}
\begin{proof}
\end{proof}
\newpage
\lb
We are now equipped with a very powerful tool to quickly test if a subset is a subspace.
\lb
\lb
\textbf{Examples / Discussion}
\lb
Decide which of the following subsets in vector spaces are subspaces.
\begin{enumerate}
\item
Continuously differentiable functions $C^1(\R, \R)$ contained in all functions $\mathcal{F}(\R,\R)$
\lb
\item
Invertible $n \times n$ matrices $\cb{A ∈ \mat{n}{\R} \vert A \tx{ is invertible}}$
contained in all matrices $\mat{n}{\R}$
\lb
\item
The set $ \Big \{ \ttpl{x}{y} ∈ \R^2 ~ \vert ~ x + y = 1 \Big \} $ in $\R^2$
\lb
\item
Nonnegative real numbers $\R_{\geq 0} = \cb{r ∈ \R ~ \vert ~ r \geq 0}$ in $\R$
\lb
\item
The plane of vectors
$ \Bigg \{ \tttpl{x}{y}{z} ∈ \R^3 ~ \Bigg \vert ~ y =0 \Bigg \}$ in $\R^3$
\lb
\item
Only the zero element $ \cb{\vec 0}$ in $\R^2$
\lb
\item
The space of \emph{even} and \emph{odd} functions respectively
\[ \mathcal{F}(\R, \R)^{\tx{even}} = \cb{f ∈ \mathcal{F}(\R, \R) \s f(x) = f(-x)} \]
\[ \mathcal{F}(\R, \R)^{\tx{odd}} = \cb{f ∈ \mathcal{F}(\R, \R) \s -f(x) = f(-x)} \]
in all functions $\mathcal{F}(\R,\R)$
\end{enumerate}
\lb
\lb
\textbf{Hint :} Remember that the easiest way to disprove a \emph{general} statement is to give
a counterexample.
\newpage
\pr
\textbf{Theorem}
\lb
Let $U, W \subseteq V$ be subspaces in a vector space $V$, then
\begin{enumerate}
\item $U \cap W$ is a subspace in $V$ \quad (Theorem 1.2.13)
\item $U + W$ is a subspace in $V$
\end{enumerate}
\begin{proof}
\end{proof}
\end{document}
| {
"alphanum_fraction": 0.5982467832,
"avg_line_length": 31.2612179487,
"ext": "tex",
"hexsha": "b9bcb93ef4676f6f75d77b6d6920499e6a402f14",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "05a6bd843d87f5877c457cefef28d2a976df4973",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Researchnix/mat224",
"max_forks_repo_path": "week01.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "05a6bd843d87f5877c457cefef28d2a976df4973",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Researchnix/mat224",
"max_issues_repo_path": "week01.tex",
"max_line_length": 184,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "05a6bd843d87f5877c457cefef28d2a976df4973",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Researchnix/mat224",
"max_stars_repo_path": "week01.tex",
"max_stars_repo_stars_event_max_datetime": "2021-04-30T03:26:18.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-03-03T21:56:57.000Z",
"num_tokens": 6123,
"size": 19507
} |
\documentclass{memoir}
\addtolength{\textwidth}{1.3in}
\addtolength{\textheight}{1in}
\addtolength{\oddsidemargin}{-0.65in}
\addtolength{\evensidemargin}{-0.65in}
\addtolength{\topmargin}{-0.5in}
\usepackage{fourier} % or what ever
\usepackage[scaled=.92]{helvet}%. Sans serif - Helvetica
\usepackage{color,calc}
\usepackage{verbatim}
\usepackage{graphicx}
\usepackage{anyfontsize}
\newsavebox{\ChpNumBox}
\definecolor{ChapBlue}{rgb}{0.00,0.65,0.65}
\makeatletter
\newcommand*{\thickhrulefill}{%
\leavevmode\leaders\hrule height 1\p@ \hfill \kern \z@}
\newcommand*\BuildChpNum[2]{%
\begin{tabular}[t]{@{}c@{}}
\makebox[0pt][c]{#1\strut} \\[.5ex]
\colorbox{cyan}{%
\rule[-10em]{0pt}{0pt}%
\rule{1ex}{0pt}\color{black}#2\strut
\rule{1ex}{0pt}}%
\end{tabular}}
\makechapterstyle{BlueBox}{%
\renewcommand{\chapnamefont}{\large\scshape}
\renewcommand{\chapnumfont}{\Huge\bfseries}
\renewcommand{\chaptitlefont}{\raggedright\Huge\bfseries}
\setlength{\beforechapskip}{20pt}
\setlength{\midchapskip}{26pt}
\setlength{\afterchapskip}{40pt}
\renewcommand{\printchaptername}{}
\renewcommand{\chapternamenum}{}
\renewcommand{\printchapternum}{%
\sbox{\ChpNumBox}{%
\BuildChpNum{\chapnamefont\@chapapp}%
{\chapnumfont\thechapter}}}
\renewcommand{\printchapternonum}{%
\sbox{\ChpNumBox}{%
\BuildChpNum{\chapnamefont\vphantom{\@chapapp}}%
{\chapnumfont\hphantom{\thechapter}}}}
\renewcommand{\afterchapternum}{}
\renewcommand{\printchaptertitle}[1]{%
\usebox{\ChpNumBox}\hfill
\parbox[t]{\hsize-\wd\ChpNumBox-1em}{%
\vspace{\midchapskip}%
\color{cyan}\thickhrulefill\par
\color{black}\chaptitlefont ##1\par}}%
}
\chapterstyle{BlueBox}
\flushbottom \raggedbottom
\makeatletter
% this is a copy of the original \fs@ruled but with {\color{cyan} ... } added:
\newcommand\fs@colorruled{\def\@fs@cfont{\bfseries}\let\@fs@capt\floatc@ruled
\def\@fs@pre{{\color{cyan}\hrule height1.25pt depth0pt }\kern2pt}%
\def\@fs@post{\kern2pt{\color{cyan}\hrule height 1.25pt}\relax}%
\def\@fs@mid{\kern2pt{\color{cyan}\hrule height 1.25pt}\kern2pt}%
\let\@fs@iftopcapt\iftrue}
\makeatother
\usepackage{float}
\floatstyle{colorruled}
\newfloat{program}{thp}{lop}[chapter]
\floatname{program}{Program}
\begin{document}
\begin{titlingpage}
\includegraphics[width=1\textwidth]{../../../../ESCIENCE_logo_B_nl_long_cyanblack.png} \\
\vspace{1cm}
\begin{tabular}{p{5cm} l}
{\small Date} & {\small Version} \\
{\Large \today} & {\Large 0.1}
\end{tabular}
\vspace{0.5cm}
\begin{tabular}{p{5cm}}
{\small Author}\\
{\Large Ronald van Haren}
\end{tabular}
\vspace{2cm}
{\fontsize{30}{40} \selectfont Summer in the city}
\vspace{0.5cm}
\Huge UHI validation
\vfill
\noindent
\begin{minipage}[t]{0.4\textwidth}
\begin{flushleft} {\Large
Netherlands eScience Center \\
\vspace{0.25cm}
Science Park 140 \\
1098 XG Amsterdam \\
\vspace{0.25cm}
+31 (0)20 888 41 97 \\
\vspace{0.25cm}
[email protected] \\
www.eScienceCenter.com} \\
\vspace{0.5cm}
{\huge{\color{cyan}by SURF \& NWO}}
\end{flushleft}
\end{minipage}
\end{titlingpage}
\chapter{Download data from \textsl{Weather Underground} and KNMI}
\section{Extract Dutch stations}
The script \textsl{wunderground\_dump\_stationid.py} parses the Weather Underground website to find all stations in the Netherlands. For each station found, the following variables are extracted: \textsl{station id}, \textsl{neighborhood}, \textsl{city}, \textsl{station type}, and \textsl{location} (latitude, longitude, height). The zipcode of the station location is extracted using the Google Maps API. The extracted data is saved to a csv file.
The script takes one (optional) argument, which is the name of the output csv file. The default name of the output csv file is \textsl{wunderground\_stations.csv} in the current working directory.
\begin{program}
\begin{verbatim}
usage: wunderground_dump_stationid.py [-h] [-o OUTPUT]
Extract all Wunderground stations in the Netherlands and write the station
names and locations to a csv file
optional arguments:
-h, --help show this help message and exit
-o OUTPUT, --output OUTPUT
CSV output file [default: wunderground_stations.csv]
\end{verbatim}
\caption{wunderground\_dump\_stationid.py}
\end{program}
\section{Download Wunderground data}
Downloading data from Weather Underground is performed using the script \textsl{wunderground\_getdata.py}. A station id can be supplied to the script, using the (-s, -{}-stationid) switch, to download data for a single station. Alternatively, the csv file that results from the \textsl{wunderground\_dump\_stationid.py} script can be supplied to the script using the (-c, -{}-csvfile) switch to download data for all stations in the csv file.
Other relevant arguments to the script are:
\begin{itemize}
\item (-b, -{}-startyear) and (-e, -{}-endyear): Defines the begin and end of the period that data is downloaded for.
\item (-o, -{}-outputdir): Data output directory. Data files will be saved to \$\{outputdir\}/\$\{STATIONID\}
\item (-k, -{}-keep): Use the argument switch to keep already downloaded data. If the argument switch is not used existing data is overwritten.
\end{itemize}
\begin{program}
\begin{verbatim}
usage: wunderground_getdata.py [-h] [-o OUTPUTDIR] [-b STARTYEAR] [-e ENDYEAR]
[-s STATIONID] [-c CSVFILE] [-k]
[-l {debug,info,warning,critical,error}]
Combine csv files weather underground in one output file
optional arguments:
-h, --help show this help message and exit
-o OUTPUTDIR, --outputdir OUTPUTDIR
Data output directory
-b STARTYEAR, --startyear STARTYEAR
Start year
-e ENDYEAR, --endyear ENDYEAR
End year
-s STATIONID, --stationid STATIONID
Station id
-c CSVFILE, --csvfile CSVFILE
CSV data file
-k, --keep Keep downloaded files
-l {debug,info,warning,critical,error}, --log {debug,info,warning,critical,error}
Log level
\end{verbatim}
\caption{wunderground\_getdata.py}
\end{program}
After the data is downloaded, the directory \$\{outputdir\}/\$\{STATIONID\} contains a separate csv file for each day. In order to combine this data in a single netCDF file, the script \textsl{combine\_wunderground\_data.py} is used. The script takes two arguments:
\begin{itemize}
\item (-i, -{}-inputdir): Input directory containing daily csv files that need to be combined
\item (-o, -{}-outputdir): Output directory of the resulting netCDF file
\end{itemize}
Multiple input directories can be processed at once using a simple shell command. For example, to process all subdirectories within the current working directory and save the resulting netCDF files to a subdirectory called ncfiles:
\begin{verbatim}
for directory in *; do ./combine_wunderground_data.py -i ${directory} -o ncfiles; done
\end{verbatim}
\begin{program}
\begin{verbatim}
usage: combine_wunderground_data.py [-h] -i INPUTDIR [-o OUTPUTDIR]
Combine csv files weather underground in one output file
optional arguments:
-h, --help show this help message and exit
-i INPUTDIR, --inputdir INPUTDIR
Data input directory containing txt files
-o OUTPUTDIR, --outputdir OUTPUTDIR
Data output directory
\end{verbatim}
\caption{combine\_wunderground\_data.py}
\end{program}
\section{KNMI reference data}
KNMI station data is used as reference data for rural temperatures. The script \textsl{knmi\_getdata.py} downloads the reference data from the KNMI website.
\begin{itemize}
\item (-o, -{}-outputdir): Output directory where the data should be saved.
\item (-s, -{}-stationid): Station id of the KNMI reference station to download data from. If the argument is not used, the KNMI website is parsed to get all available stationids and data for all stations is downloaded.
\item (-c, -{}-csvfile): Name of output csv file that contains information about the KNMI stations and their location.
\item (-k, -{}-keep): Use the argument switch to keep already downloaded data. If the argument switch is not used existing data is overwritten.
\end{itemize}
\begin{program}
\begin{verbatim}
usage: knmi_getdata.py [-h] [-o OUTPUTDIR] [-s STATIONID] -c CSVFILE [-k]
[-l {debug,info,warning,critical,error}]
Get data KNMI reference stations
optional arguments:
-h, --help show this help message and exit
-o OUTPUTDIR, --outputdir OUTPUTDIR
Data output directory
-s STATIONID, --stationid STATIONID
Station id
-c CSVFILE, --csvfile CSVFILE
CSV data file
-k, --keep Keep downloaded files
-l {debug,info,warning,critical,error}, --log {debug,info,warning,critical,error}
Log level
\end{verbatim}
\caption{knmi\_getdata.py}
\end{program}
\chapter{Urban heat island}
An urban heat island (UHI) is a metropolitan area that is warmer than its surrounding rural areas due to human activities. The temperature difference usually is larger at night than during the day, and is most apparent when the winds are weak.
\section{Determining the UHI effect}
\subsection{Time filtering Wunderground data}
The data from Weather Underground is measured at irregular time steps. In order to compare it with hourly KNMI reference data, the data should be processed to create a dataset with the same temporal resolution. This is done by the functionality in the script \textsl{time\_filter\_wunderground\_data.py}. This script is just a helper script and does not need to be called directly; the filtering functionality is imported by the script \textsl{UHI\_reference.py}.
The script supports two ways to time filter the Weather underground data to a fixed time step, namely:
\begin{itemize}
\item \textbf{interpolate}: depending on the measurements available, the value at the time step is determined according to the following rules (a code sketch follows after this list)
\begin{itemize}
\item if a measurement coincides directly with the time step, the value at the time step is set equal to the measurement
\item if within the timewindow set measurements are available both before and after the time step, the value at the time step is set to the value obtained from interpolating the last measurement before the time step and the first measurement after the time step
\item if only measurements within the time window set are available before the time step, the value is set to the last measurement before the time step
\item if only measurements within the time window set are available after the time step, the value is set to the first measurement after the time step
\item if no measurements are available within the time window the value is set to None
\end{itemize}
\item \textbf{average}: all measurements within the selected time window are averaged
\end{itemize}
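The interpolation rule can be sketched as follows (an illustration with names of our own choosing, not the actual \textsl{time\_filter\_wunderground\_data.py} code); \texttt{samples} is a list of (time, value) tuples sorted by time, \texttt{t} the target time step and \texttt{window} the allowed time window.
\begin{verbatim}
# Illustrative sketch of the "interpolate" option described above.
def interpolate_at(samples, t, window):
    before = [(ts, v) for ts, v in samples if t - window <= ts <= t]
    after = [(ts, v) for ts, v in samples if t <= ts <= t + window]
    if before and before[-1][0] == t:
        return before[-1][1]          # a measurement coincides with the time step
    if before and after:
        (t0, v0), (t1, v1) = before[-1], after[0]
        return v0 + (v1 - v0) * (t - t0) / (t1 - t0)   # linear interpolation
    if before:
        return before[-1][1]          # only earlier measurements in the window
    if after:
        return after[0][1]            # only later measurements in the window
    return None                       # nothing within the time window
\end{verbatim}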
\subsection{UHI calculation script}
Calculation of the UHI effect for all Wunderground stations is performed using the script \textsl{UHI\_reference.py}. The script has one required argument, the numbers of the months over which the UHI needs to be calculated (-m, -{}-months). Optional arguments are:
\begin{itemize}
\item (-w, -{}-wundfile): CSV file with stations from Weather Underground. Output from the script \textsl{wunderground\_dump\_stationid.py}. If the argument is not specified, the default filename \textsl{wunderground\_stations.csv} is used.
\item (-k, -{}-knmifile): CSV file with KNMI reference stations. Output from the script \textsl{knmi\_getdata.py}. If the argument is not specified, the default filename \textsl{knmi\_reference\_data.csv} is used.
\item (-i, -{}-interpolate): Specify that time filtering of the Weather Underground data should use interpolation instead of time averaging.
\item (-s, -{}-stationtype): Only use stations whose instrument description contains the specified string (the comparison is done in lowercase).
\end{itemize}
\begin{program}
\begin{verbatim}
usage: UHI_reference.py [-h] [-w WUNDFILE] [-k KNMIFILE] [-i] [-s STATIONTYPE]
-m MONTHS [MONTHS ...]
Time filter Wunderground netCDF data
optional arguments:
-h, --help show this help message and exit
-w WUNDFILE, --wundfile WUNDFILE
Wunderground csv file [default:
wunderground_stations.csv]
-k KNMIFILE, --knmifile KNMIFILE
KNMI csv file [default: knmi_reference_data.csv]
-i, --interpolate Distance weighted interpolation of KNMI reference data
instead of nearest reference station
-s STATIONTYPE, --stationtype STATIONTYPE
Require a certain instrument for the Wunderground
station
-m MONTHS [MONTHS ...], --months MONTHS [MONTHS ...]
month numbers (1-12) separated by space used to
calculate UHI
\end{verbatim}
\caption{UHI\_reference.py}
\end{program}
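As an illustration of the quantity that is computed for each station, the sketch below gives the per-time-step UHI value as the difference between the urban (Weather Underground) temperature and the rural (KNMI) reference temperature. The function name and arguments are hypothetical; the actual script additionally performs station selection and the time filtering described above.
\begin{verbatim}
def uhi_series(urban_temps, rural_temps):
    '''UHI per time step: urban minus rural reference temperature.

    Both inputs are equally long sequences on the same hourly time axis;
    entries may be None where no valid measurement is available.
    '''
    return [u - r if u is not None and r is not None else None
            for u, r in zip(urban_temps, rural_temps)]
\end{verbatim}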
\section{Methods}
describe methods
\chapter{Results}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{../figures/test.png}
\caption{UHI}
\label{fig:uhi}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{../figures/reconst.png}
\caption{Reconstruction}
\label{fig:reconstruction}
\end{figure}
\end{document}
\documentclass[12pt]{article}
% OR
% \documentclass[letterpaper,11pt]{report}
\usepackage{fullpage}
\usepackage{float}
\usepackage{alltt}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage[table,usenames,dvipsnames]{xcolor}
% handle characters reasonably, especially <>
\usepackage[T1]{fontenc}
\usepackage{lmodern}
% nice for tables
\usepackage{tabularx}
% FloatBarrier
\usepackage{placeins}
% html links
\usepackage{hyperref}
\hypersetup{colorlinks=true, linkcolor=blue, filecolor=blue, urlcolor=blue}
\renewcommand*{\subsubsectionautorefname}{section}
\renewcommand*{\subsectionautorefname}{section}
% algorithm environment for pseudocode
\usepackage{algorithm}
\usepackage{algorithmic}
%\floatname{algorithm}{}
% draft - include comments as footnotes and marginpars
\newif\ifdraft
\drafttrue
\ifdraft
\typeout{DRAFT - WITH COMMENTS}
\newcommand{\doccomment}[3]%
%beamer {\textcolor{#2}{\bf \ti #1}%
{\marginpar{\textcolor{#2}{\bf #1}}%
\footnote{\textcolor{#2}{#3}}%
}
\else
\typeout{NOT DRAFT - NO COMMENTS}
\newcommand{\doccomment}[3]{}
\fi
% comments for individuals
\newcommand{\jpscomment}[1]%
{\doccomment{SCHEWE}{Bittersweet}{#1}}
\title{TITLE}
\author{Jon Schewe}
\begin{document}
\maketitle
\texttt{type-faced text}
\textbf{bold-faced text}
\begin{verbatim}
Pre-formatted text.
\end{verbatim}
\begin{figure}[htb]
\centering
% can shrink by putting decimal in front of \textwidth eg: "0.8\textwidth"
% will need type=png,read=.png,ext=.png if multiple dots in the filename
\includegraphics[width=\textwidth]{file.jpg}
\caption{Cool image}
\label{fig:cool_image}
\end{figure}
\autoref{label}
\FloatBarrier
\section{foo}
% use tabular if you don't need the tables to be full width
\begin{center}
\begin{tabularx}{\textwidth}{|c|c|c|c|c|c|c|c|c|X|} \hline
\rowcolor[gray]{.8}
\multicolumn{3}{|>{\columncolor[gray]{.8}}c|}{Parameters} &
\multicolumn{3}{|>{\columncolor[gray]{.8}}c|}{Missing} &
\multicolumn{3}{|>{\columncolor[gray]{.8}}c|}{Extra} &
Total \\ \hline
\rowcolor[gray]{.8} Crosstrack & Height & Rotation &
Before & After & Diff &
Before & After & Diff &
Diff \\ \hline
true & true & true &
48 & 33 & 15 &
81 & 85 & 4 &
19
\\ \hline
\end{tabularx}
\end{center}
\end{document}
%\documentclass[10pt]{beamer} % aspect ratio 4:3, 128 mm by 96 mm
\documentclass[10pt,aspectratio=169]{beamer} % aspect ratio 16:9
%\graphicspath{{../../figures/}}
\graphicspath{{figs/}}
%\includeonlyframes{frame1,frame2,frame3}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Packages
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{appendixnumberbeamer}
\usepackage{booktabs}
\usepackage{csvsimple} % for csv read
\usepackage[scale=2]{ccicons}
\usepackage{pgfplots}
\usepackage{xspace}
\usepackage{amsmath}
\usepackage{totcount}
\usepackage{tikz}
\usepackage{bm}
\usepackage{float}
\usepackage{eso-pic}
\usepackage{wrapfig}
%\usepackage{FiraSans}
%\usepackage{comment}
%\usetikzlibrary{external} % speedup compilation
%\tikzexternalize % activate!
%\usetikzlibrary{shapes,arrows}
%\usepackage{bibentry}
%\nobibliography*
\usepackage{animate}
\usepackage{ifthen}
\newcounter{angle}
\setcounter{angle}{0}
%\usepackage{bibentry}
%\nobibliography*
\usepackage{caption}%
\captionsetup[figure]{labelformat=empty}%
\graphicspath{{gif/}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Metropolis theme custom modification file
\input{metropolis_mods.tex}
\usefonttheme[onlymath]{Serif} % It should be uncommented if Fira fonts in math does not work
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Custom commands
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% matrix command
\newcommand{\matr}[1]{\mathbf{#1}} % bold upright (Elsevier, Springer)
%\newcommand{\matr}[1]{#1} % pure math version
%\newcommand{\matr}[1]{\bm{#1}} % ISO complying version
% vector command
\newcommand{\vect}[1]{\mathbf{#1}} % bold upright (Elsevier, Springer)
% bold symbol
\newcommand{\bs}[1]{\boldsymbol{#1}}
% derivative upright command
\DeclareRobustCommand*{\drv}{\mathop{}\!\mathrm{d}}
\newcommand{\ud}{\mathrm{d}}
%
\newcommand{\themename}{\textbf{\textsc{metropolis}}\xspace}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Title page options
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% \date{\today}
\date{}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% option 1
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title{PhD project presentation}
\subtitle{Feasibility studies of artificial intelligence driven diagnostics}
\author{\textbf{Abdalraheem Ijjeh\\Supervisor: Prof. Pawel Kudela}}
% logo align to Institute
\institute{Institute of Fluid Flow Machinery\\Polish Academy of Sciences \\ \vspace{-1.5cm}\flushright \includegraphics[width=4cm]{//odroid-sensors/sensors/MISD_shared/logo/logo_eng_40mm.eps}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% option 2 - authors in one line
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% \title{My fancy title}
% \subtitle{Lamb-opt}
% \author{\textbf{Paweł Kudela}\textsuperscript{2}, Maciej Radzieński\textsuperscript{2}, Wiesław Ostachowicz\textsuperscript{2}, Zhibo Yang\textsuperscript{1} }
% logo align to Institute
% \institute{\textsuperscript{1}Xi'an Jiaotong University \\ \textsuperscript{2}Institute of Fluid Flow Machinery\\ \hspace*{1pt} Polish Academy of Sciences \\ \vspace{-1.5cm}\flushright \includegraphics[width=4cm]{//odroid-sensors/sensors/MISD_shared/logo/logo_eng_40mm.eps}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% option 3 - multilogo vertical
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%\title{My fancy title}
%%\subtitle{Lamb-opt}
%% \author{\textbf{Paweł Kudela}\inst{1}, Maciej Radzieński\inst{1}, Wiesław Ostachowicz\inst{1}, Zhibo Yang\inst{2} }
%% logo under Institute
%% \institute%
%% {
%% \inst{1}%
%% Institute of Fluid Flow Machinery\\ \hspace*{1pt} Polish Academy of Sciences \\ \includegraphics[height=0.85cm]{//odroid-sensors/sensors/MISD_shared/logo/logo_eng_40mm.eps} \\
%% \and
%% \inst{2}%
%% Xi'an Jiaotong University \\ \includegraphics[height=0.85cm]{//odroid-sensors/sensors/MISD_shared/logo/logo_box.eps}
%% }
% end od option 3
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% option 4 - 3 Institutes and logos horizontal centered
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\title{My fancy title}
%\subtitle{Lamb-opt }
%\author{\textbf{Paweł Kudela}\textsuperscript{1}, Maciej Radzieński\textsuperscript{1}, Marco Miniaci\textsuperscript{2}, Zhibo Yang\textsuperscript{3} }
%
%\institute{
%\begin{columns}[T,onlytextwidth]
% \column{0.39\textwidth}
% \begin{center}
% \textsuperscript{1}Institute of Fluid Flow Machinery\\ \hspace*{3pt}Polish Academy of Sciences
% \end{center}
% \column{0.3\textwidth}
% \begin{center}
% \textsuperscript{2}Zurich University
% \end{center}
% \column{0.3\textwidth}
% \begin{center}
% \textsuperscript{3}Xi'an Jiaotong University
% \end{center}
%\end{columns}
%\vspace{6pt}
%% logos
%\begin{columns}[b,onlytextwidth]
% \column{0.39\textwidth}
% \centering
% \includegraphics[scale=0.9,height=0.85cm,keepaspectratio]{//odroid-sensors/sensors/MISD_shared/logo/logo_eng_40mm.eps}
% \column{0.3\textwidth}
% \centering
% \includegraphics[scale=0.9,height=0.85cm,keepaspectratio]{//odroid-sensors/sensors/MISD_shared/logo/logo_box.eps}
% \column{0.3\textwidth}
% \centering
% \includegraphics[scale=0.9,height=0.85cm,keepaspectratio]{//odroid-sensors/sensors/MISD_shared/logo/logo_box2.eps}
%\end{columns}
%}
%\makeatletter
%\setbeamertemplate{title page}{
% \begin{minipage}[b][\paperheight]{\textwidth}
% \centering % <-- Center here
% \ifx\inserttitlegraphic\@empty\else\usebeamertemplate*{title graphic}\fi
% \vfill%
% \ifx\inserttitle\@empty\else\usebeamertemplate*{title}\fi
% \ifx\insertsubtitle\@empty\else\usebeamertemplate*{subtitle}\fi
% \usebeamertemplate*{title separator}
% \ifx\beamer@shortauthor\@empty\else\usebeamertemplate*{author}\fi
% \ifx\insertdate\@empty\else\usebeamertemplate*{date}\fi
% \ifx\insertinstitute\@empty\else\usebeamertemplate*{institute}\fi
% \vfill
% \vspace*{1mm}
% \end{minipage}
%}
%
%\setbeamertemplate{title}{
% % \raggedright% % <-- Comment here
% \linespread{1.0}%
% \inserttitle%
% \par%
% \vspace*{0.5em}
%}
%\setbeamertemplate{subtitle}{
% % \raggedright% % <-- Comment here
% \insertsubtitle%
% \par%
% \vspace*{0.5em}
%}
%\makeatother
% end of option 4
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% option 5 - 2 Institutes and logos horizontal centered
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\title{My fancy title}
%\subtitle{Lamb-opt }
%\author{\textbf{Paweł Kudela}\textsuperscript{1}, Maciej Radzieński\textsuperscript{1}, Marco Miniaci\textsuperscript{2}}
%
%\institute{
% \begin{columns}[T,onlytextwidth]
% \column{0.5\textwidth}
% \centering
% \textsuperscript{1}Institute of Fluid Flow Machinery\\ \hspace*{3pt}Polish Academy of Sciences
% \column{0.5\textwidth}
% \centering
% \textsuperscript{2}Zurich University
% \end{columns}
% \vspace{6pt}
% % logos
% \begin{columns}[b,onlytextwidth]
% \column{0.5\textwidth}
% \centering
% \includegraphics[scale=0.9,height=0.85cm,keepaspectratio]{//odroid-sensors/sensors/MISD_shared/logo/logo_eng_40mm.eps}
% \column{0.5\textwidth}
% \centering
% \includegraphics[scale=0.9,height=0.85cm,keepaspectratio]{//odroid-sensors/sensors/MISD_shared/logo/logo_box.eps}
% \end{columns}
%}
%\makeatletter
%\setbeamertemplate{title page}{
% \begin{minipage}[b][\paperheight]{\textwidth}
% \centering % <-- Center here
% \ifx\inserttitlegraphic\@empty\else\usebeamertemplate*{title graphic}\fi
% \vfill%
% \ifx\inserttitle\@empty\else\usebeamertemplate*{title}\fi
% \ifx\insertsubtitle\@empty\else\usebeamertemplate*{subtitle}\fi
% \usebeamertemplate*{title separator}
% \ifx\beamer@shortauthor\@empty\else\usebeamertemplate*{author}\fi
% \ifx\insertdate\@empty\else\usebeamertemplate*{date}\fi
% \ifx\insertinstitute\@empty\else\usebeamertemplate*{institute}\fi
% \vfill
% \vspace*{1mm}
% \end{minipage}
%}
%
%\setbeamertemplate{title}{
% % \raggedright% % <-- Comment here
% \linespread{1.0}%
% \inserttitle%
% \par%
% \vspace*{0.5em}
%}
%\setbeamertemplate{subtitle}{
% % \raggedright% % <-- Comment here
% \insertsubtitle%
% \par%
% \vspace*{0.5em}
%}
%\makeatother
% end of option 5
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% End of title page options
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% logo option - alternative manual insertion by modification of coordinates in \put()
%\titlegraphic{%
% %\vspace{\logoadheight}
% \begin{picture}(0,0)
% \put(305,-185){\makebox(0,0)[rb]{\includegraphics[width=4cm]{//odroid-sensors/sensors/MISD_shared/logo/logo_eng_40mm.eps}}}
% \end{picture}}
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\tikzexternalize % activate!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\maketitle
\note{
Welcome and thank you for joining me at this remote presentation event. At the beginning I want to introduce myself: my name is Abdalraheem Ijjeh, a PhD student at the Institute of Fluid Flow Machinery, Polish Academy of Sciences, supervised by Professor Pawel Kudela. Today I am going to briefly present my PhD project, entitled ``Feasibility studies of artificial intelligence driven diagnostics''.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% SLIDES
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[label=frame1]{Table of contents}
\setbeamertemplate{section in toc}[sections numbered]
\tableofcontents
\end{frame}
\note{
The presentation will be as follows:
In the first section, I am going to introduce and define composite materials, then I will talk briefly about defects in composite materials and the conventional approach to damage detection in composite materials. \\
In the second section, I will briefly introduce Artificial intelligence, machine learning and deep learning approaches.\\
In the third section, the objectives of the project will be presented. \\
Then I will talk about the Dataset generation in the fourth section. \\
Finally, in the last section, I am going to talk about deep learning techniques for delamination identification.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Composite materials}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Composite materials}
\begin{wrapfigure}{r}{7cm}
\begin{figure}[h!]
\centering
\vspace{-1cm}
\hspace{3cm}
\includegraphics[scale=0.20]{Composite_3d.png}
\end{figure}
\end{wrapfigure}
Composite materials are formed by combining materials to form an overall structure with properties that differ from those of the individual components, as shown in the image below.
\tiny {Image source: https://en.wikipedia.org/wiki/Composite\_material}
\end{frame}
\note{
A composite material can be defined as a combination of a matrix and a reinforcement, which when combined gives properties superior to the properties of the individual components. \\
The reinforcement fibres (e.g. glass, carbon) can be cut, aligned, and placed in different ways to affect the properties of the resulting composite.
The matrix, normally a form of resin (e.g. polyester, epoxy), keeps the reinforcement in the desired orientation. \\
It protects the reinforcement from chemical and environmental attack, and it bonds the reinforcement so that applied loads can be effectively transferred.
The grey layer is a protective layer for the composite.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{{Composite materials} Cont.}
\begin{wrapfigure}{r}{7cm}
\begin{figure}[h!]
\centering
\vspace{-2cm}
\includegraphics[scale=1]{composite_materials.png}
\end{figure}
\end{wrapfigure}
\textbf{Examples of Composite Uses:}
\begin{itemize}
\item Aerospace structures
\item Automotive
\item Energy %(e.g. wind turbines)
\item Infrastructure (e.g. bridges, roads, railways,...)
\item Pipes and tanks
\end{itemize}
\end{frame}
\note{
Composite materials have numerous real-life applications:
in aerospace applications such as aircraft components (whether commercial or military), rockets, missiles, etc.\\
In the automotive industry, for example in body components (replacing steel and aluminium). \\
In energy applications, composites are used in wind turbines.\\
In infrastructure, composites have many applications, for example bridges, roads, railways, communication antennae, etc.\\
Now, why use composite materials?
As shown in the figure, composites have many advantages over metals, such as their light weight, flexibility in design, the ability to embed instruments like sensors (e.g. FBGs to monitor the structure), better performance regarding fatigue strength, lower cost, resistance to corrosion caused by environmental factors, and other advantages.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Defects in composite materials}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Problems of composite materials}
Composite materials can have several kinds of defects such as:
\begin{wrapfigure}{r}{0.5\textwidth}
\begin{figure}[h!]
\centering
\vspace{-10pt}
\includegraphics[width=0.48\textwidth]{delamination1.png}
\end{figure}
\end{wrapfigure}
\begin{itemize}
\item Cracks
\item Fibre breakage
\item Debonding
\item Delamination
\end{itemize}
Among these defects, delamination is one of the most hazardous, as it can lead to catastrophic failures if not detected at an early stage.
\tiny {Image source: https://docplayer.biz.tr/151413952-Tabakali-kompozitlerde-yapraklanma-delamination-bolgesinin-burkulma-mukavemetine-etkileri.html}
\end{frame}
\note{
Generally, impact damage in composite materials is caused by various impact events that result from the lack of reinforcement in the out-of-plane direction.
Causing:
Matrix cracks,
Fibre breakage,
Debonding (occurs when an adhesive stops adhering to an adherend)
And delamination, as shown in the figure, which can alter the compressive strength of the composite laminate and gradually lead the composite to fail by buckling.
Among these defects in composites, delamination is considered one of the most hazardous types of defects.
That is because delamination can seriously decrease the performance of the composite.
Therefore, delamination detection in the early stages can help to avoid structural collapses.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Damage detection in composite materials}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Damage detection in composite materials}
\begin{wrapfigure}{r}{0.5\textwidth}
\begin{figure}[h!]
\centering
\vspace{-2.5cm}
\hspace{-2.0cm}
\includegraphics[width=0.55\textwidth]{GW_LambWaves.png}
\end{figure}
\end{wrapfigure}
Conventional structural damage detection methods involve two processes:
\begin{itemize}
\item Feature extraction
\item Feature classification
\end{itemize}
\end{frame}
\note{
In conventional damage detection methods, to detect the damage we must first analyse the response of the structure obtained by sensors such as PZTs, which can sense, for example, the generated guided waves (e.g. Lamb waves).
Then, we need to extract the features of the response (which requires a large computational effort).
Then we can attempt to classify these features, which are sensitive to minor damage and that can be distinguished from the response to natural and environmental changes (baseline).
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Damage detection in composite materials Cont.}
Drawbacks of Conventional methods:
\begin{itemize}
\item Requires a great amount of human labor and computational effort.
\item Demands a high level of experience from the practitioner.
\item Inefficient with big data, which requires complex computation of damage features.
\end{itemize}
\end{frame}
\note{
Conventional methods of damage detection focus on pattern
extraction from registered measurements and accordingly make decisions based on these patterns.
Moreover, conventional methods for pattern recognition require feature selection and classification (handcrafted features).
These conventional methods can perform efficient damage detection. However, these methods depend on selected features from their scope of measurement.
Accordingly, introducing new patterns will cause them to fail in detecting the damage.
Furthermore, these methods could fail in detecting damage when dealing with big data requiring a complex computation of damage features.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{AI, Machine learning and Deep learning}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{AI, Machine learning and Deep learning}
\begin{figure}
\centering
\includegraphics[width=0.88\textwidth]{AI_vs_ML_vs_Deep_Learning.png}
\end{figure}
\tiny{Image source: https://top6sites.com/2020/05/15/ai-vs-machine-learning-vs-deep-learning/}
\end{frame}
\note{
\begin{itemize}
\item Artificial intelligence (AI) refers to a broader field of computer science.
AI aims to enable machines to think without human intervention
\item Machine Learning (ML) is a subset of AI, that uses statistical learning algorithms to build smart systems.
\item Deep learning is a subset of ML.
It was inspired by the information processing patterns observed in the human brain (Neural Network).
Just like humans use their brain to identify patterns and classify various types of information.
Deep learning algorithms as Artificial Neural Network (ANN) can be taught to perform the same tasks for machines.
\item Deep Learning is the most exciting and powerful branch of Machine Learning.
It's a technique that teaches computers to do what comes naturally to humans: learn by example.
Deep learning is a key technology behind driverless cars, enabling them to recognize a stop sign or to distinguish a pedestrian from a lamppost. It is the key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers. Deep learning is getting lots of attention lately and for good reason.
It’s achieving results that were not possible before.
\end{itemize}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{AI, Machine learning and Deep learning Cont.}
\begin{minipage}[c]{0.45\textwidth}
AI technologies are growing at an accelerating rate due to:
\begin{itemize}
\item Exponential development in computer hardware industries
% (CPUs, GPUs, FPGAs, TPUs and ASICs)
\item The era of big data
\end{itemize}
\end{minipage}
\begin{minipage}[c]{0.45\textwidth}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{CPU.jpg}
\includegraphics[width=.75\textwidth]{big_data_icon.jpg}
\end{figure}
\tiny{Image source: https://depositphotos.com/213472866/stock-illustration-big-data-color-icon-cloud.html}
\end{minipage}
\end{frame}
\note{
In recent years, the field of AI has developed at an accelerating rate due to:
the rapid evolution of technology that produced high computational power \\
like Central Processing Units, Graphical Processing Units,
Field Programmable Gate Arrays, Tensor Processing Units, and Application Specific Integrated Circuits.\\
The second driver of this accelerating growth is the era of big data, which is essential for the learning process of AI systems.\\
We can say that deep learning techniques developed rapidly after AlexNet was introduced.
AlexNet is the name of a convolutional neural network which has had a large impact on the field of machine learning, specifically in the application of deep learning to computer vision.
It famously won the ImageNet Large Scale Visual Recognition Challenge in 2012, which was a breakthrough. }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{AI, Machine learning and Deep learning Cont.}
\begin{minipage}[c]{0.45\textwidth}
\begin{itemize}
\item A neuron acts as a gate between the previous and next layers, applying a nonlinear operation.
\end{itemize}
\end{minipage}
\begin{minipage}[c]{0.45\textwidth}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{fig_neuron.png}
\includegraphics[width=1.0\textwidth]{neuron_perceptron.png}
\end{figure}
\end{minipage}
\tiny{source of biological neuron: https://tlr.gitbook.io/data-science/neural-network/perceptron}
\end{frame}
\note{
The idea behind perceptrons (or artificial neurons) is that it is possible to mimic certain parts of biological neurons, such as dendrites, cell bodies and axons, using simplified mathematical models of the limited knowledge we have of their inner workings:
signals can be received from dendrites, and sent down the axon once enough signals were received.
This outgoing signal can then be used as another input for other neurons, repeating the process.
Some signals are more important than others and can trigger some neurons to fire easier.
Connections can become stronger or weaker, new connections can appear while others can cease to exist.
We can mimic most of this process by coming up with a function that receives a list of weighted input signals and outputs some kind of signal if the sum of these weighted inputs reaches a certain threshold value.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{AI, Machine learning and Deep learning Cont.}
\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\vspace{-40pt}
\hspace{15pt}
\includegraphics[width=0.45\textwidth]{ANN.png}
\end{wrapfigure}
An artificial neural network (ANN) can be created by cascading $n$ layers of ``neurons''.
\begin{itemize}
\item Input layer.
\item Hidden layer(s).
\item Output layer.
\end{itemize}
\end{frame}
\note{
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems such as the brain process information.
It is composed of a large number of highly interconnected processing elements (neurons) working together to solve a specific problem such as regression, classification, image detection and identification, or time series analysis.
A basic ANN is composed of three layers:\\
the input layer, the hidden layer or layers and finally an output layer.
Information flows from the input layer, through the hidden layer to the output layer.
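As a purely illustrative formula (assumed here, not taken from the slide), the output of one fully connected layer can be written as
\[ \mathbf{h} = \sigma\left(\mathbf{W}\mathbf{x} + \mathbf{b}\right), \]
where $\mathbf{x}$ is the layer input, $\mathbf{W}$ and $\mathbf{b}$ are the learned weights and biases, and $\sigma$ is a nonlinear activation function; stacking several such layers gives the network shown in the figure.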
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{AI, Machine learning and Deep learning Cont.}
\begin{figure}[h!]
\centering
\includegraphics[scale=.47]{MLvsDL.png}
\end{figure}
\tiny{Image source: https://lawtomated.com/a-i-technical-machine-vs-deep-learning/ }
\end{frame}
\note{
Feature extraction (aka feature engineering) is the process of putting domain knowledge into the creation of feature extractors to reduce the complexity of the data and make patterns more visible to learning algorithms. \\
This process is difficult and expensive in terms of time and expertise.
The diagram shown summarises the core distinction between machine learning and deep learning techniques. \\
For the classical machine learning techniques, we have to perform feature extraction by ourselves, then we use a suitable method to do the classification. \\
On the contrary, in deep learning techniques, the process of feature extraction is done automatically without the need for human intervention, since the model will learn by itself to perform this process.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Objective of the Project}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Objective of the Project}
\begin{itemize}
\item To develop an AI-driven diagnostic model for delamination identification in composite laminates such as carbon fibre reinforced polymers (CFRP).
\item An end-to-end approach (feature extraction, selection and classification are performed automatically).
\end{itemize}
\end{frame}
\note{
Our main objectives in this project are to explore the feasibility of using an AI approach in detecting defects such as delaminations in composite laminates such as carbon fibre reinforced polymers (CFRP). \\
And accordingly to develop end-to-end models that can identify delaminations in composite laminates, performing feature extraction and classification without the need for human intervention.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}{Objective of the Project Cont.}
%
%\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Dataset generation}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Dataset generation}
\begin{minipage}[c]{0.45\textwidth}
\begin{itemize}
\item 475 cases.
\item Delamination has a different shape, size and location for each case.
\item The CFRP is made of 8 layers (the delamination was modelled between the 3rd and 4th layer)
\item Root mean square (RMS) was applied to the wavefield (to improve visibility)
\end{itemize}
\end{minipage}
\begin{minipage}[c]{0.45\textwidth}
%\begin{wrapfigure}{r}%{0.5\textwidth}
%\centering
%\vspace{-10pt}
\hspace{1cm}
\includegraphics[scale=1.0]{dataset2_labels_ellipses.png}
%\end{wrapfigure}
\end{minipage}
\end{frame}
\note{
Dataset generation.
We numerically generated 475 cases of the full wavefield of propagating Lamb waves in a plate made of CFRP, as shown in the figure. \\
Essentially, the output resembles measurements acquired by an SLDV in the transverse direction (perpendicular to the plate surface). \\
Each delamination, with a different shape, size and location, was modelled randomly on the plate.\\
To improve the delamination visibility, the root mean square (RMS) was applied.
The dataset is used to train the deep learning models to identify the delaminations.
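As an illustrative formula (assumed here, not stated on the slide), the RMS image is computed per spatial point $(x,y)$ from the $N$ sampled frames of the wavefield $s(x,y,t_k)$ as
\[ \hat{s}(x,y) = \sqrt{\frac{1}{N}\sum_{k=1}^{N} s^{2}(x,y,t_k)}, \]
which collapses the time dimension into a single image and enhances the visibility of the delamination.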
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Deep learning techniques for delamination identification}
%\subsection{Convolutional Neural Networks CNN}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{wrapfigure}{r}{0.5\textwidth}
% \centering
% \vspace{0pt}
% \hspace{15pt}
% \includegraphics[width=0.25\textwidth]{hardware.png}
%\end{wrapfigure}
%
%\begin{wrapfigure}{r}{0.5\textwidth}
% \centering
% \vspace{-40pt}
% \hspace{15pt}
% \includegraphics[width=0.25\textwidth]{big_data.jpg}
%\end{wrapfigure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}{Deep learning approach cont.}
% \begin{itemize}
% \item Deep learning technologies offered the opportunity of being implemented in NDT and further with SHM approaches.
% \item Deep learning techniques handled issues regarding data preprocessing and feature extraction
% \item End-to-end approaches are developed, in which the whole unprocessed data are fed into the model, hence, it will learn by itself to recognise the patterns and detect the damage
% \end{itemize}
%\end{frame}
%\note{
% We can say that deep learning techniques rapidly developed after AlexNet in 2012 was created, actually it was a breakthrough. \\
% Due to the rapid development in deep learning feild, deep learning techniques could be implemented in NDT and further with SHM approches. \\
% Moreover, the application of deep learning techniques in NDT and SHM regarding damage detection and identification is still new and not fully integrated or investigated. \\
% The beuty about deep learning techniques is it offers an end to end approchs, accordingly, processes of feature extraction or classification are skipped, because it will be performed automaticlly by the model which will learn by itself to perfom these processes.
%
%}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Computer Vision}
The increasing advancement in the deep learning field has vastly improved computer vision (CV) applications such as:
\begin{itemize}
\item Image classification
\item Object detection and localisation
\item Image segmentation
\end{itemize}
\end{frame}
\note{
Computer Vision, often abbreviated as CV, is defined as a field of study that seeks to develop techniques to help computers “see” and understand the content of digital images such as photographs and videos.\\
Many popular computer vision applications involve trying to recognize things in photographs; for example, \\
Image classification concerns which broad categories are present in the image. \\
Object detection and localisation concerns where the objects are in the image.\\
and Image segmentation which is the process of partitioning an image into multiple segments. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images.
More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Convolutional Neural Network CNN}
\begin{itemize}
\item A CNN is a type of feedforward ANN which additionally performs convolution operations.
\item CNNs are designed to handle data as tensors of different dimensions, such as 1D signals, 2D and 3D images, and sequences of images (videos) in 4D.
\end{itemize}
\end{frame}
\note{
The convolutional neural network, or CNN for short, is a specialized type of neural network model designed for working with two-dimensional image data, although they can be used with one-dimensional and three-dimensional data. \\
Central to the convolutional neural network is the convolutional layer that gives the network its name. This layer performs an operation called a “convolution“.\\
CNNs are primarily used in applications of CV (previously mentioned).\\
The innovation of convolutional neural networks is the ability to automatically learn a large number of filters in parallel specific to a training dataset under the constraints of a specific predictive modelling problem, such as image classification. The result is highly specific features that can be detected anywhere on input images.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Convolutional Neural Network CNN cont.}
\begin{wrapfigure}{r}{0.62\textwidth}
\centering
\vspace{-35pt}
\hspace{0pt}
\includegraphics[width=0.62\textwidth]{Fig_Convnet.png}
\end{wrapfigure}
Typical architecture of a CNN consists of three main parts:
\begin{itemize}
\item Convolutional layer
\item Downsampling layer
\item Dense layer
\end{itemize}
\end{frame}
\note{
In the context of a convolutional neural network, convolution is a linear operation that involves the multiplication of a set of weights with the input, much like a traditional neural network.
Given that the technique was designed for two-dimensional input, the multiplication is performed between an array of input data and a two-dimensional array of weights, called a filter or a kernel. \\
The filter is smaller than the input data and the type of multiplication applied between a filter-sized patch of the input and the filter is a dot product.
The output of multiplying the filter with the input array one time is a single value.
As the filter is applied multiple times to the input array, the result is a two-dimensional array of output values that represent filtering of the input.
The two-dimensional output array from this operation is called a “feature map“.
Once a feature map is created, we can pass each value in the feature map through a nonlinearity, such as a ReLU, much like we do for the outputs of a fully connected layer.
After the feature maps are produced, downsampling is applied to reduce their spatial resolution, which makes the network less sensitive to the exact positioning of features.
Finally, the feature maps are flattened and input into the dense layer.
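As an illustrative piece of arithmetic (assuming stride $s$ and no padding, which is not necessarily the configuration used in our models), convolving an $n \times n$ input with a $k \times k$ filter produces a feature map of size
\[ \left(\left\lfloor \tfrac{n-k}{s} \right\rfloor + 1\right) \times \left(\left\lfloor \tfrac{n-k}{s} \right\rfloor + 1\right); \]
for example, a $64 \times 64$ input with a $3 \times 3$ filter and stride $1$ gives a $62 \times 62$ feature map.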
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Convolutional Neural Network CNN cont.}
\begin{minipage}[c]{0.45\textwidth}
\begin{itemize}
\item A fully convolutional network (FCN) is a neural network in which the dense layers are replaced with convolutional layers.
\item The idea behind FCN is to stack a group of convolutional layers in an
encoder-decoder style.
\end{itemize}
\end{minipage}
\begin{minipage}[c]{0.45\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{encoder_decoder_1.png}
\end{minipage}
\end{frame}
\note{
The idea behind FCN is to stack a group of convolutional layers in an
encoder-decoder style. \\
The encoder downsamples the input image through strided convolutions, resulting in a compressed feature representation of the input image, and the decoder upsamples this compressed representation using techniques like strided transposed convolution and upsampling with interpolation (e.g. bilinear or nearest-neighbour).
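As an illustrative counterpart for the decoder (again an assumption, not a statement from the slide), a transposed convolution with kernel size $k$, stride $s$ and no padding maps an $m \times m$ feature map to one of size $s(m-1)+k$ in each dimension; for example, a $31 \times 31$ map with $k=2$ and $s=2$ is upsampled to $62 \times 62$.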
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Convolutional Neural Network CNN cont.}
\centering
\includegraphics[width=.95\textwidth]{encoder_decoder.png}
\end{frame}
\note{
Finally, in this slide, we present an abstract view of how FCN models perform image segmentation to identify the delamination, as shown in the figure.
The input to the model is an RMS image with a delamination located near the upper middle of the image.
The FCN performs the image segmentation, identifying the delamination as shown in the predicted damage map on the right.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{Convolution}
% \animategraphics[loop,controls,scale=0.25]{2}{ConV-}{0}{8}
% \transpush
%\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{Activation functions}
% \animategraphics[loop,controls,scale=0.25]{1}{activation_functions/ac-}{0}{98}
% \transpush
%\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{TanH}
% \animategraphics[loop,controls,scale=0.25]{5}{sigmoid/sig-}{0}{467}
% \transpush
%\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{Maxpool}
% \animategraphics[loop,controls,scale=0.25]{2}{maxpool-}{0}{13}
% \transpush
%\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{Transposed Convolution}
% \animategraphics[loop,controls,scale=0.25]{1}{abdalraheem-}{0}{15}
% \transpush
%\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%\begin{frame}[fragile]{Transposed convolution}
%% \animategraphics[loop,controls,scale=0.25]{1}{TransposedConv-}{0}{67}
%% \transpush
%%\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{LSTM}
% \animategraphics[loop,controls,scale=0.25]{1}{LSTM/lstm-}{0}{12}
% \transpush
%\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{Intermediate outputs}
% \animategraphics[loop,controls,width=\linewidth]{10}{450/input }{1}{220}
% \transblindsvertical<2,3>
%\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[t,label=A]{Acknowledgements}
% \begin{alertblock}{Project title: Elastic constants identification of composite laminates by using Lamb wave dispersion curves and optimization methods}
%The research was funded by the Polish National Science Center under grant agreement
%no 2018/29/B/ST8/00045.
%\end{alertblock}
%\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\setbeamercolor{palette primary}{fg=blue, bg=white}
\begin{frame}[standout]
Thank you for listening!\\ \vspace{12pt}
Questions?\\ \vspace{12pt}
\url{[email protected]}
\end{frame}
}
\note{Thank you for listening, and I am ready to answer your questions.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% END OF SLIDES
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}
% !TEX root = ../00_thesis.tex
\section{Overview of \DRP}
\label{sec:design_overview}
\afterpage{
\begin{figure}
\centering
\includegraphics[scale=1]{dpp}
\caption{Conceptual view of the DPP, based on the \bolt processor interconnect. %
\capt{%
Using functions \emph{\opwrite}, \emph{\opread}, and \emph{\opflush}, the application (\ap) and communication (\cp) processors can asynchronously exchange messages with predictable latency.
The \AP executes application tasks (\eg sensing, actuation, control, \etc) while the \CP is dedicated to radio communication.}}
\label{fig:bolt_logical}
\end{figure}
}
This chapter presents the \DRPLong (\DRP), a solution to provide end-to-end real-time guarantees between distributed applications.
Before delving into details, this section provides an overview of \DRP's principles.
The system model of \DRP divides the end-to-end communication into a local and a wireless part~(\cref{fig:DRP_sysmodel}):
\pagebreak
\begin{description}
\item[\AP $\boldsymbol{\leftrightarrow}$ \CP]
Applications run on dedicated application processors (\APs) which are isolated from the rest of the network by their attached communication processor (\CP).
Local communication between \APs and \CPs takes place over the \bolt interconnect~\cite{sutton2015Bolt}, which provides asynchronous message passing with bounded delays.
This device architecture, called the Dual-Processor Platform (\DPP), is illustrated in \cref{fig:bolt_logical} (more details in \cref{ch:introduction}).
\item[\CP $\boldsymbol{\leftrightarrow}$ \CP]
The \CPs exchange messages over a multi-hop wireless network using the \blink real-time protocol~\cite{zimmerling2017Blink}.
\blink is adaptive to dynamic changes in traffic demands, energy efficient, and delivers messages in real-time.
\end{description}
The \DPP and \blink are key building blocks to fulfill the \feature{Reliability}, \feature{Adaptability}, \feature{Composability}, and \feature{Efficiency} requirements.
However, two major issues remain in order to achieve \feature{Timeliness}.
\squarepar{%}
First, the communication between \APs and \CPs cannot be completely asynchronous: to guarantee end-to-end deadlines, both processors must look for incoming messages at some minimal rate.
%
Second, \blink assumes a periodic release of messages at the network interfaces (\ie the \CPs); since our flow model is not periodic but sporadic with jitter~(\cref{sec:problem}), messages may be delayed in the \CPs' buffers until they can be transmitted over the network.%
}
\DRP strikes a balance between \feature{Composability} and \feature{Efficiency}; that is, between
\linebreak
\inlineitem decoupling the execution of \APs, \CPs, and \blink, \\
\inlineitem supporting short end-to-end deadlines between the \APs.\\
The idea behind \DRP is to split the responsibility of meeting end-to-end deadlines between (i)~the source node $n^s_i$ and \blink, and (ii)~the destination node $n^d_i$:
if the source does not write too many messages, \blink guarantees that every message will meet a given network deadline $D$; in turn, the destination commits to read its \bolt queue sufficiently often to meet the flow's end-to-end deadline \deadlineany.
\squarepar{%}
\DRP formalizes these ``commitments'' into \emph{contracts} between the different entities. The challenge is to define, given the current network state and an end-to-end deadline \deadlineany to satisfy, what must be
% \begin{itemize}
% \item
(i)~the network deadline $D$ requested to \blink and
% \item
(ii)~the minimal reading rate at the destination node.
% \end{itemize}
The goal is to make these contracts minimally restrictive, such that \APs, \CPs, and \blink can operate as independently from each other as possible (\feature{Composability}).%
}
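As a purely illustrative way to read this split (the notation $D_{write}$ and $D_{read}$ is assumed here and is not part of \DRP's formal analysis), such a contract can only be feasible if $D_{write} + D + D_{read}$ does not exceed the flow's end-to-end deadline \deadlineany, where $D_{write}$ bounds the time until a message written by the source \AP becomes available to its \CP, $D$ is the network deadline requested from \blink, and $D_{read}$ bounds the time until the destination \AP reads the message from its \bolt queue.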
\chapter{Sample Extended System Input File}
This is an input file for doing a band structure and average
properties calculation on a square two-dimensional mesh of hydrogen
atoms.
\vspace{0.25in}
\shrinkspacing
\begin{verbatim}
; the title
2-D mesh of hydrogen
; the geometry
Geometry
; number of atoms
3
; the positions
1 H 0.0 0.0 0.0
2 & 1.0 0.0 0.0
3 & 0.0 1.0 0.0
; lattice parameters
Lattice
; dimension of lattice
2
; number of overlaps along each lattice vector
4 4
; the lattice vectors (begin atom -> end atom)
1 2
1 3
; number of electrons per unit cell
Electrons
1
; band structure details
Band
; K points per symmetry line
40
; number of special points
4
; special points
Gamma 0.0 0.0 0.0
X 0.5 0.0 0.0
M 0.5 0.5 0.0
Gamma 0.0 0.0 0.0
; do average properties calculation
Average Properties
; do a COOP
COOP
; number of COOP's
2
; COOP specifications
; type which contrib1 contrib2 cell
orbital 1 1 1 1 0 0
orbital 1 1 1 0 1 0
; this averages H-H COOP's between cells (0,0,0)->(1,0,0) and (0,0,0)->(0,1,0)
; the K points
K points
; number of K points
10
; the K points and respective weights
0.0625 0.0625 0.0000 1
0.1875 0.0625 0.0000 2
0.1875 0.1875 0.0000 1
0.3125 0.0625 0.0000 2
0.3125 0.1875 0.0000 2
0.3125 0.3125 0.0000 1
0.4375 0.0625 0.0000 2
0.4375 0.1875 0.0000 2
0.4375 0.3125 0.0000 2
0.4375 0.4375 0.0000 1
; end of file
\end{verbatim}
\resumespacing
\lab{Algorithms}{Functions and Logic}{Functions and Logic}
\objective{This section will teach how to build functions. It will also introduce logical statements used in programming.}
Up to this point we have only run scripts. We now introduce a much more powerful and versatile method of problem solving: writing functions.
A function, in the programming sense, is a block of code that accepts input (arguments) and returns output. When you need a function, you simply define one. Python functions begin with the keyword \li{def}, followed by the function name. Input arguments are listed inside parentheses separated by commas. Finally the function definition ends with a colon. In Python, every function returns a value. This value can be explicitly returned using a \li{return} statement. If no \li{return} is defined, the function will always return \li{None}. Below is a simple function that returns what it receives
\begin{lstlisting}[style=python]
: def returnInput(x):
....: return x
\end{lstlisting}
Note that all code inside the function is indented. This is very important in Python. Any code not indented under the function definition is not part of the function. To execute a function, you call the function by name and pass input parameters. Try calling the function and passing different arguments.
\begin{lstlisting}[style=python]
: returnInput("This is a string")
'This is a string'
: returnInput(37)
37
: returnInput(3**2)
9
\end{lstlisting}
This function is defined for as long as we have IPython running. If we happen to close IPython, our function definition will be lost. To avoid losing our function definitions, we can place them in a Python script file (a *.py file).
\begin{problem}
In a file, functions.py, define functions that will do the following:
\begin{itemize}
\item Accept anything, but always return the number 42
\item Accept two numbers and return their product
\item Accepts no arguments and prints ``You called!"
\end{itemize}
\end{problem}
Restart IPython. Now, how do we use the functions defined in functions.py without having to redefine each one inside IPython? Fortunately, Python allows us to import them. Each *.py file is really a Python module which can be imported in the same fashion as the SciPy library.
\begin{lstlisting}[style=python]
: import functions
\end{lstlisting}
Now our functions are accessible as \li{functions.<function_name>}. Using the \li{as} keyword, we can assign an alias to our module. Let us define a function in \li{functions.py} that will compute the roots of a quadratic equation. This function will return a tuple containing multiple values. Note that before using the \li{sqrt} function, we have to make it available, which we do by importing \li{sqrt} from Python's \li{math} library.
\begin{lstlisting}[style=python]
from math import sqrt
def quadrForm(a,b,c):
    descr = sqrt(b**2 - 4*a*c)
    x1 = (-b + descr)/(2.0*a)
    x2 = (-b - descr)/(2.0*a)
    return (x1, x2)
\end{lstlisting}
This function takes as inputs the coefficients of a quadratic function and returns the output of the quadratic formula.
\begin{lstlisting}[style=python]
: functions.quadrForm(1,-2,1)
(1.0, 1.0)
\end{lstlisting}
We note that this function is susceptible to floating point error. For example, try to find the roots of the polynomial $x^2 - (10^7 + 10^{-7})x + 1$ using our function:
\begin{lstlisting}[style=python]
: functions.quadrForm(1, -(1e7 + 1e-7), 1)
(10000000.0, 9.96515154838562e-08)
\end{lstlisting}
The first root is correct, but the second root is clearly in error (admittedly the error is small, but in this case it is easily fixed). The second root shows the error because it is calculated by subtracting two numbers that are very close together.
We can solve this problem by using a slightly different approach. We first calculate the root that is farther from zero using the formula below. Note that we will have to write our own \li{sign()} function in Python (one possible implementation is sketched after the formulas). The \li{sign()} function should return $-1.0$ if the input is negative, $0.0$ if the input is zero, or $1.0$ if the input is positive.
\[
x_1 = \frac{-b - \text{sign}(b)*\text{descr}}{2a}
\]
We can then use a formula known as Viete's formula to calculate the other root (solving for $x_2$):
\[
x_1 x_2 = c/a
\]
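Python's \li{math} library does not provide a sign function, so we sketch one possible implementation below (this is only one of many ways to write it):
\begin{lstlisting}[style=python]
def sign(x):
    # return -1.0, 0.0, or 1.0 according to the sign of x
    if x > 0:
        return 1.0
    elif x < 0:
        return -1.0
    else:
        return 0.0
\end{lstlisting}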
\begin{problem}
Write a function that implements this second approach to finding the roots. Call it \li{quadrForm2}. Note the improved accuracy.
\end{problem}
\section*{Logic, Conditionals and Loops}
Three basic logical operators are \li{and} (element-wise: \&), \li{or} (element-wise: \textbar), and \li{not} (element-wise: \textasciitilde). We can use the relational operators from Table~\ref{tbl:relops} to build logical statements.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|}
\hline
\li{==} & equal to\\
\li{<} & less than\\
\li{<=} & less than or equal to\\
\li{!=} & not equal to\\
\li{>} & greater than\\
\li{>=} & greater than or equal to\\
\hline
\end{tabular}
\caption{Some relational operators}
\label{tbl:relops}
\end{center}
\end{table}
Using these building blocks we can build complicated statements. For example, consider a vector $x$ and the statement \li{(x>1) & (x<10)}. The statement will evaluate \li{True} if $x_i$ is between one and ten, and \li{False} otherwise. This technique is called \emph{masking} and is useful for operating only on parts of a vector or an array that meet certain conditions. Applying the mask to our vector $x$ will yield a boolean vector (a vector with only \li{True} or \li{False} values). We can assign an operation for when we encounter a \li{True} value. For example, suppose we want to add $5.5$ to any number that is less than $0.6$ in our vector $x$. We can do this simply with the following statement:
\begin{lstlisting}[style=python]
: a = sp.rand(10); a
array([ 0.90492936, 0.42251211, 0.80463372, 0.45087988, 0.94395439,
0.11372628, 0.17177908, 0.42053736, 0.94759312, 0.59030686])
: b = (a<.6); b
array([False, True, False, True, False, True, True, True, False, True], dtype=bool)
: a[b] #the values that correspond to the True values of our mask
array([ 0.42251211, 0.45087988, 0.11372628, 0.17177908, 0.42053736,
0.59030686])
: a[b] += 5.5; a   # in-place addition applied only to the entries of a where the mask b is True
array([ 0.90492936, 5.92251211, 0.80463372, 5.95087988, 0.94395439,
5.61372628, 5.67177908, 5.92053736, 0.94759312, 6.09030686])
\end{lstlisting}
\begin{problem}
Create logical statements for the following:
\begin{itemize}
%\item True if x is prime and less than one-thousand, false otherwise (use the function \li{isprime}).
\item True if $x^2$ is greater than 10, or if $x$ is positive and smaller than 2.
\item True if the Bessel function of the second kind of order $\nu = 1$, evaluated at $x$, has magnitude greater than 1 (you will need \li{special.yn()} from SciPy; a minimal call is sketched after this problem).
\end{itemize}
\end{problem}
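For reference, a minimal call to the Bessel function of the second kind looks like the following (the argument values here are arbitrary):
\begin{lstlisting}[style=python]
: from scipy import special
: special.yn(1, 3.0)   # order nu = 1, evaluated at x = 3.0
\end{lstlisting}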
\begin{problem}
Create a function that takes an input vector $x$ and shuffles it like a deck of cards. You may assume that $x$ has even length. The key is to create a mask using some random vector and then assign specific cards to even or odd slots of an output vector using that mask. Then see how many shuffles it takes to make the output $y$ ``random'' (you can check this by looking at the off-diagonal entry of \li{corrcoef(x,y)}, which should be close to zero if the output is uncorrelated with the input; a minimal check is sketched after this problem).
\end{problem}
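As a hint for the randomness check, here is a minimal sketch using NumPy's \li{corrcoef} (the vector \li{y} below is just a stand-in produced by \li{np.random.permutation}, not the shuffle you are asked to write):
\begin{lstlisting}[style=python]
: import numpy as np
: x = np.arange(52.0)
: y = np.random.permutation(x)   # stand-in for a shuffled deck
: np.corrcoef(x, y)[0, 1]        # off-diagonal entry; close to zero when y looks random
\end{lstlisting}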
\documentclass[12pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage{stackrel}
\usepackage{array}
\usepackage{mathtools}
\usepackage{amsmath, amsthm, amssymb, amsfonts, amsxtra, amscd, thmtools, xpatch}
\usepackage{calligra, mathrsfs}
\usepackage{colonequals}
\usepackage{graphicx}
\usepackage{enumitem}
\usepackage{mleftright}
\usepackage{stackengine}
\usepackage{glossaries}
\usepackage{xcolor}
\usepackage{tikz}
\usetikzlibrary{arrows,positioning, shapes.geometric, calc}
\usepackage{hyperref}
\setlength{\parindent}{0in}
\input{/home/zack/dotfiles/.pandoc/custom/latexmacs.tex}
%\newtheorem{theorem}{Theorem}[section]
\newcommand{\qty}[1]{\left( {#1} \right)}
% Pandoc-specific fixes
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{titlepage}
\begin{center}
\textbf{\LARGE{MAKEMEAQUAL UNIVERSITY}} \\
\vspace{2mm} %5mm vertical space
\textbf{\Large{Department of Mathematics}}\\
\vspace{15mm} %5mm vertical space
\LARGE\textsc{PhD Qualifying Examination} \\
\vspace{2mm} %5mm vertical space
\Large \textit{in} \\
\Large\textsc {Mathematics}\\
\vspace{30mm} %5mm vertical space
\textbf{\LARGE{\today}}
\end{center}
\end{titlepage}
\hypertarget{algebra-140-questions}{%
\section{Algebra (140 Questions)}\label{algebra-140-questions}}
\hypertarget{question-1}{%
\subsection{Question 1}\label{question-1}}
Let \(G\) be a finite group with \(n\) distinct conjugacy classes. Let
\(g_1, \dots, g_n\) be representatives of the conjugacy classes of \(G\).
Prove that if \(g_i g_j = g_j g_i\) for all \(i, j\) then \(G\) is
abelian.
\hypertarget{question-2}{%
\subsection{Question 2}\label{question-2}}
Let \(G\) be a group of order 105 and let \(P, Q, R\) be Sylow 3, 5, 7
subgroups respectively.
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Prove that at least one of \(Q\) and \(R\) is normal in \(G\).
\item
Prove that \(G\) has a cyclic subgroup of order 35.
\item
Prove that both \(Q\) and \(R\) are normal in \(G\).
\item
Prove that if \(P\) is normal in \(G\) then \(G\) is cyclic.
\end{enumerate}
\hypertarget{question-3}{%
\subsection{Question 3}\label{question-3}}
Let \(R\) be a ring with the property that for every
\(a \in R, a^2 = a\).
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Prove that \(R\) has characteristic 2.
\item
Prove that \(R\) is commutative.
\end{enumerate}
\hypertarget{question-4}{%
\subsection{Question 4}\label{question-4}}
Let \(F\) be a finite field with \(q\) elements.
Let \(n\) be a positive integer relatively prime to \(q\) and let
\(\omega\) be a primitive \(n\)th root of unity in an extension field of
\(F\).
Let \(E = F [\omega]\) and let \(k = [E : F]\).
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Prove that \(n\) divides \(q^{k}-1\).
\item
Let \(m\) be the order of \(q\) in \(\ZZ/n\ZZ\). Prove that \(m\)
divides \(k\).
\item
Prove that \(m = k\).
\end{enumerate}
\hypertarget{question-5}{%
\subsection{Question 5}\label{question-5}}
Let \(R\) be a ring and \(M\) an \(R\dash\)module.
\begin{quote}
Recall that the set of torsion elements in \(M\) is defined by \[
\Tor(M) = \{m \in M \suchthat \exists r \in R, ~r \neq 0, ~rm = 0\}
.\]
\end{quote}
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Prove that if \(R\) is an integral domain, then \(\Tor(M )\) is a
submodule of \(M\) .
\item
Give an example where \(\Tor(M )\) is not a submodule of \(M\).
\item
If \(R\) has zero-divisors, prove that every non-zero \(R\dash\)module
has non-zero torsion elements.
\end{enumerate}
\hypertarget{question-6}{%
\subsection{Question 6}\label{question-6}}
Let \(R\) be a commutative ring with multiplicative identity. Assume
Zorn's Lemma.
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Show that \[
N = \{r \in R \mid r^n = 0 \text{ for some } n > 0\}
\] is an ideal which is contained in any prime ideal.
\item
Let \(r\) be an element of \(R\) not in \(N\). Let \(S\) be the
collection of all proper ideals of \(R\) not containing any positive
power of \(r\). Use Zorn's Lemma to prove that there is a prime ideal
in \(S\).
\item
Suppose that \(R\) has exactly one prime ideal \(P\) . Prove that
every element \(r\) of \(R\) is either nilpotent or a unit.
\end{enumerate}
\hypertarget{question-7}{%
\subsection{Question 7}\label{question-7}}
Let \(\zeta_n\) denote a primitive \(n\)th root of 1 \(\in \QQ\). You
may assume the roots of the minimal polynomial \(p_n(x)\) of \(\zeta_n\)
are exactly the primitive \(n\)th roots of 1.
Show that the field extension \(\QQ(\zeta_n )\) over \(\QQ\) is Galois
and prove its Galois group is \((\ZZ/n\ZZ)\units\).
How many subfields are there of \(\QQ(\zeta_{20} )\)?
\hypertarget{question-8}{%
\subsection{Question 8}\label{question-8}}
Let \(\{e_1, \cdots, e_n \}\) be a basis of a real vector space \(V\)
and let \[
\Lambda \definedas \theset{ \sum r_i e_i \mid r_i \in \ZZ}
\]
Let \(\cdot\) be a non-degenerate (\(v \cdot w = 0\) for all
\(w \in V \iff v = 0\)) symmetric bilinear form on V such that the Gram
matrix \(M = (e_i \cdot e_j )\) has integer entries.
Define the dual of \(\Lambda\) to be
\[
\Lambda \dual \definedas \{v \in V \suchthat v \cdot x \in \ZZ \text{ for all } x \in \Lambda
\}
.\]
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Show that \(\Lambda \subset \Lambda \dual\).
\item
Prove that \(\det M \neq 0\) and that the rows of \(M\inv\) span
\(\Lambda\dual\).
\item
Prove that \(\det M = |\Lambda\dual /\Lambda|\).
\end{enumerate}
\hypertarget{question-9}{%
\subsection{Question 9}\label{question-9}}
Let \(A\) be a square matrix over the complex numbers. Suppose that
\(A\) is nonsingular and that \(A^{2019}\) is diagonalizable over
\(\CC\).
Show that \(A\) is also diagonalizable over \(\CC\).
\hypertarget{question-10}{%
\subsection{Question 10}\label{question-10}}
Let \(F = \FF_p\) , where \(p\) is a prime number.
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Show that if \(\pi(x) \in F[x]\) is irreducible of degree \(d\), then
\(\pi(x)\) divides \(x^{p^d} - x\).
\item
Show that if \(\pi(x) \in F[x]\) is an irreducible polynomial that
divides \(x^{p^n} - x\), then \(\deg \pi(x)\) divides \(n\).
\end{enumerate}
\hypertarget{question-11}{%
\subsection{Question 11}\label{question-11}}
How many isomorphism classes are there of groups of order 45?
Describe a representative from each class.
\hypertarget{question-12}{%
\subsection{Question 12}\label{question-12}}
For a finite group \(G\), let \(c(G)\) denote the number of conjugacy
classes of \(G\).
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Prove that if two elements of \(G\) are chosen uniformly at
random, then the probability they commute is precisely \[
\frac{c(G)}{\abs G}
.\]
\item
State the class equation for a finite group.
\item
Using the class equation (or otherwise) show that the probability in
part (a) is at most \[
\frac 1 2 + \frac 1 {2[G : Z(G)]}
.\]
\end{enumerate}
\begin{quote}
Here, as usual, \(Z(G)\) denotes the center of \(G\).
\end{quote}
\hypertarget{question-13}{%
\subsection{Question 13}\label{question-13}}
Let \(R\) be an integral domain. Recall that if \(M\) is an
\(R\dash\)module, the \emph{rank} of \(M\) is defined to be the maximum
number of \(R\dash\)linearly independent elements of \(M\) .
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Prove that for any \(R\dash\)module \(M\), the rank of \(\tor(M )\) is
0.
\item
Prove that the rank of \(M\) is equal to the rank of
\(M/\tor(M )\).
\item
Suppose that \(M\) is a non-principal ideal of \(R\). Prove that \(M\) is
torsion-free of rank 1 but not free.
\end{enumerate}
\hypertarget{question-14}{%
\subsection{Question 14}\label{question-14}}
Let \(R\) be a commutative ring with 1.
\begin{quote}
Recall that \(x \in R\) is nilpotent iff \(x^n = 0\) for some positive
integer \(n\).
\end{quote}
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Show that every proper ideal of \(R\) is contained within a maximal
ideal.
\item
Let \(J(R)\) denote the intersection of all maximal ideals of \(R\).
Show that \(x \in J(R) \iff 1 + rx\) is a unit for all \(r \in R\).
\item
Suppose now that \(R\) is finite. Show that in this case \(J(R)\)
consists precisely of the nilpotent elements in R.
\end{enumerate}
\hypertarget{question-15}{%
\subsection{Question 15}\label{question-15}}
Let \(p\) be a prime number. Let \(A\) be a \(p \times p\) matrix over a
field \(F\) with every entry equal to 1 except for 0's on the main diagonal.
Determine the Jordan canonical form (JCF) of \(A\)
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
When \(F = \QQ\),
\item
When \(F = \FF_p\).
\end{enumerate}
\begin{quote}
Hint: In both cases, all eigenvalues lie in the ground field. In each
case find a matrix \(P\) such that \(P\inv AP\) is in JCF.
\end{quote}
\hypertarget{question-16}{%
\subsection{Question 16}\label{question-16}}
Let \(\zeta = e^{2\pi i/8}\).
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
What is the degree of \(\QQ(\zeta)/\QQ\)?
\item
How many quadratic subfields of \(\QQ(\zeta)\) are there?
\item
What is the degree of \(\QQ(\zeta, \sqrt[4] 2)\) over \(\QQ\)?
\end{enumerate}
\hypertarget{question-17}{%
\subsection{Question 17}\label{question-17}}
Let \(G\) be a finite group whose order is divisible by a prime number
\(p\). Let \(P\) be a normal \(p\dash\)subgroup of \(G\) (so
\(\abs P = p^c\) for some \(c\)).
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Show that \(P\) is contained in every Sylow \(p\dash\)subgroup of
\(G\).
\item
Let \(M\) be a maximal proper subgroup of \(G\). Show that either
\(P \subseteq M\) or \(|G/M | = p^b\) for some \(b \leq c\).
\end{enumerate}
\hypertarget{question-18}{%
\subsection{Question 18}\label{question-18}}
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Suppose the group \(G\) acts on the set \(X\) . Show that the
stabilizers of elements in the same orbit are conjugate.
\item
Let \(G\) be a finite group and let \(H\) be a proper subgroup. Show
that the union of the conjugates of \(H\) is strictly smaller than
\(G\), i.e. \[
\union_{g\in G} gHg\inv \subsetneq G
\]
\item
Suppose \(G\) is a finite group acting transitively on a set \(S\)
with at least 2 elements. Show that there is an element of \(G\) with
no fixed points in \(S\).
\end{enumerate}
\hypertarget{question-19}{%
\subsection{Question 19}\label{question-19}}
Let \(F \subset K \subset L\) be finite degree field extensions. For
each of the following assertions, give a proof or a counterexample.
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
If \(L/F\) is Galois, then so is \(K/F\).
\item
If \(L/F\) is Galois, then so is \(L/K\).
\item
If \(K/F\) and \(L/K\) are both Galois, then so is \(L/F\).
\end{enumerate}
\hypertarget{question-20}{%
\subsection{Question 20}\label{question-20}}
Let \(V\) be a finite dimensional vector space over a field (the field
is not necessarily algebraically closed).
Let \(\phi : V \to V\) be a linear transformation. Prove that there
exists a decomposition of \(V\) as \(V = U \oplus W\) , where \(U\) and
\(W\) are \(\phi\dash\)invariant subspaces of \(V\) ,
\(\restrictionof{\phi}{U}\) is nilpotent, and
\(\restrictionof{\phi}{W}\) is nonsingular.
\hypertarget{question-21}{%
\subsection{Question 21}\label{question-21}}
Let \(A\) be an \(n \times n\) matrix.
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Suppose that \(v\) is a column vector such that the set
\(\{v, Av, \dots , A^{n-1} v\}\) is linearly independent. Show that
any matrix \(B\) that commutes with \(A\) is a polynomial in \(A\).
\item
Show that there exists a column vector \(v\) such that the set
\(\{v, Av, \dots , A^{n-1} v\}\) is linearly independent \(\iff\) the
characteristic polynomial of A equals the minimal polynomial of A.
\end{enumerate}
\hypertarget{question-22}{%
\subsection{Question 22}\label{question-22}}
Let \(R\) be a commutative ring, and let \(M\) be an \(R\dash\)module.
An \(R\dash\)submodule \(N\) of \(M\) is maximal if there is no
\(R\dash\)module \(P\) with \(N \subsetneq P \subsetneq M\).
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Show that an \(R\dash\)submodule \(N\) of \(M\) is maximal
\(\iff M /N\) is a simple \(R\dash\)module: i.e., \(M /N\) is nonzero
and has no proper, nonzero \(R\dash\)submodules.
\item
Let \(M\) be a \(\ZZ\dash\)module. Show that a \(\ZZ\dash\)submodule
\(N\) of \(M\) is maximal \(\iff \#M /N\) is a prime number.
\item
Let \(M\) be the \(\ZZ\dash\)module of all roots of unity in \(\CC\)
under multiplication. Show that there is no maximal
\(\ZZ\dash\)submodule of \(M\).
\end{enumerate}
\hypertarget{question-23}{%
\subsection{Question 23}\label{question-23}}
Let \(R\) be a commutative ring.
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Let \(r \in R\). Show that the map \begin{align*}
r\bullet : R &\to R \\
x &\mapsto r x
.\end{align*} is an \(R\dash\)module endomorphism of \(R\).
\item
We say that \(r\) is a \textbf{zero-divisor} if \(r\bullet\) is not
injective. Show that if \(r\) is a zero-divisor and \(r \neq 0\), then
the kernel and image of \(r\bullet\) each consist of zero-divisors.
\item
Let \(n \geq 2\) be an integer. Show: if \(R\) has exactly \(n\)
zero-divisors, then \(\#R \leq n^2\) .
\item
Show that up to isomorphism there are exactly two commutative rings
\(R\) with precisely 2 zero-divisors.
\end{enumerate}
\begin{quote}
You may use without proof the following fact: every ring of order 4 is
isomorphic to exactly one of the following: \[
\frac{ \ZZ }{ 4\ZZ}, \quad
\frac{ \frac{ \ZZ }{ 2\ZZ} [t]}{(t^2 + t + 1)}, \quad
\frac{ \frac{ \ZZ }{ 2\ZZ} [t]}{ (t^2 - t)}, \quad
\frac{ \frac{ \ZZ}{2\ZZ}[t]}{(t^2 )}
.\]
\end{quote}
\hypertarget{question-24}{%
\subsection{Question 24}\label{question-24}}
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Use the Class Equation (equivalently, the conjugation action of a
group on itself) to prove that any \(p\dash\)group (a group whose
order is a positive power of a prime integer \(p\)) has a nontrivial
center.
\item
Prove that any group of order \(p^2\) (where \(p\) is prime) is
abelian.
\item
Prove that any group of order \(5^2 \cdot 7^2\) is abelian.
\item
Write down exactly one representative in each isomorphism class of
groups of order \(5^2 \cdot 7^2\).
\end{enumerate}
\hypertarget{question-25}{%
\subsection{Question 25}\label{question-25}}
Let \(f(x) = x^4 - 4x^2 + 2 \in \QQ[x]\).
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Find the splitting field \(K\) of \(f\), and compute \([K: \QQ]\).
\item
Find the Galois group \(G\) of \(f\), both as an explicit group of
automorphisms, and as a familiar abstract group to which it is
isomorphic.
\item
Exhibit explicitly the correspondence between subgroups of \(G\) and
intermediate fields between \(\QQ\) and \(K\).
\end{enumerate}
\hypertarget{question-26}{%
\subsection{Question 26}\label{question-26}}
Let \(K\) be a Galois extension of \(\QQ\) with Galois group \(G\), and
let \(E_1 , E_2\) be intermediate fields of \(K\) which are the
splitting fields of irreducible \(f_i (x) \in \QQ[x]\).
Let \(E = E_1 E_2 \subset K\).
Let \(H_i = \Gal(K/E_i)\) and \(H = \Gal(K/E)\).
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Show that \(H = H_1 \cap H_2\).
\item
Show that \(H_1 H_2\) is a subgroup of \(G\).
\item
Show that \[
\Gal(K/(E_1 \cap E_2 )) = H_1 H_2
.\]
\end{enumerate}
\hypertarget{question-27}{%
\subsection{Question 27}\label{question-27}}
Let
\[
A=\left[\begin{array}{lll}{0} & {1} & {-2} \\ {1} & {1} & {-3} \\ {1} & {2} & {-4}\end{array}\right] \in M_{3}(\mathbb{C})
\]
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Find the Jordan canonical form \(J\) of \(A\).
\item
Find an invertible matrix \(P\) such that \(P\inv AP = J\).
\end{enumerate}
\begin{quote}
You should not need to compute \(P\inv\).
\end{quote}
\hypertarget{question-28}{%
\subsection{Question 28}\label{question-28}}
Let \[
M=\left(\begin{array}{ll}{a} & {b} \\ {c} & {d}\end{array}\right)
\quad \text{and} \quad
N=\left(\begin{array}{cc}{x} & {u} \\ {-y} & {-v}\end{array}\right)
\]
over a commutative ring \(R\), where \(b\) and \(x\) are units of \(R\).
Prove that \[
M N=\left(\begin{array}{ll}{0} & {0} \\ {0} & {*}\end{array}\right)
\implies MN = 0
.\]
\hypertarget{question-29}{%
\subsection{Question 29}\label{question-29}}
Let \[
M = \{(w, x, y, z) \in \ZZ^4 \suchthat w + x + y + z \in 2\ZZ\}
,\]
and
\[
N = \{(w, x, y, z) \in \ZZ^4 \suchthat 4\divides (w - x),~ 4\divides (x - y),~ 4\divides ( y - z)\}
.\]
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Show that \(N\) is a \(\ZZ\dash\)submodule of \(M\) .
\item
Find vectors \(u_1 , u_2 , u_3 , u_4 \in \ZZ^4\) and integers
\(d_1 , d_2 , d_3 , d_4\) such that \[
\{u_1 , u_2 , u_3 , u_4 \}
\] is a free basis for \(M\), and \[
\{d_1 u_1,~ d_2 u_2,~ d_3 u_3,~ d_4 u_4 \}
\] is a free basis for \(N\) .
\item
Use the previous part to describe \(M/N\) as a direct sum of cyclic
\(\ZZ\dash\)modules.
\end{enumerate}
\hypertarget{question-30}{%
\subsection{Question 30}\label{question-30}}
Let \(R\) be a PID and \(M\) be an \(R\dash\)module. Let \(p\) be a
prime element of \(R\). The module \(M\) is called
\emph{\(\generators{p}\dash\)primary} if for every \(m \in M\) there
exists \(k > 0\) such that \(p^k m = 0\).
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Suppose M is \(\generators{p}\dash\)primary. Show that if \(m \in M\)
and \(t \in R, ~t \not\in \generators{p}\), then there exists
\(a \in R\) such that \(atm = m\).
\item
A submodule \(S\) of \(M\) is said to be \emph{pure} if
\(S \cap r M = rS\) for all \(r \in R\). Show that if \(M\) is
\(\generators{p}\dash\)primary, then \(S\) is pure if and only if
\(S \cap p^k M = p^k S\) for all \(k \geq 0\).
\end{enumerate}
\hypertarget{question-31}{%
\subsection{Question 31}\label{question-31}}
Let \(R = C[0, 1]\) be the ring of continuous real-valued functions on
the interval \([0, 1]\). Let I be an ideal of \(R\).
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Show that if \(f \in I, a \in [0, 1]\) are such that \(f (a) \neq 0\),
then there exists \(g \in I\) such that \(g(x) \geq 0\) for all
\(x \in [0, 1]\), and \(g(x) > 0\) for all \(x\) in some open
neighborhood of \(a\).
\item
If \(I \neq R\), show that the set
\(Z(I) = \{x \in [0, 1] \suchthat f(x) = 0 \text{ for all } f \in I\}\)
is nonempty.
\item
Show that if \(I\) is maximal, then there exists \(x_0 \in [0, 1]\)
such that \(I = \{ f \in R \suchthat f (x_0 ) = 0\}\).
\end{enumerate}
\hypertarget{question-32}{%
\subsection{Question 32}\label{question-32}}
Suppose the group \(G\) acts on the set \(A\). Assume this action is
faithful (recall that this means that the kernel of the homomorphism
from \(G\) to \(\sym(A)\) which gives the action is trivial) and
transitive (for all \(a, b\) in \(A\), there exists \(g\) in \(G\) such
that \(g \cdot a = b\).)
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
For \(a \in A\), let \(G_a\) denote the stabilizer of \(a\) in \(G\).
Prove that for any \(a \in A\), \[
\intersect_{\sigma\in G} \sigma G_a \sigma\inv = \theset{1}
.\]
\item
Suppose that \(G\) is abelian. Prove that \(|G| = |A|\). Deduce that
every abelian transitive subgroup of \(S_n\) has order \(n\).
\end{enumerate}
\hypertarget{question-33}{%
\subsection{Question 33}\label{question-33}}
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\tightlist
\item
Classify the abelian groups of order 36.
\end{enumerate}
For the rest of the problem, assume that \(G\) is a non-abelian group of
order 36.
\begin{quote}
You may assume that the only subgroup of order 12 in \(S_4\) is \(A_4\)
and that \(A_4\) has no subgroup of order 6.
\end{quote}
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\setcounter{enumi}{1}
\item
Prove that if the 2-Sylow subgroup of \(G\) is normal, \(G\) has a
normal subgroup \(N\) such that \(G/N\) is isomorphic to \(A_4\).
\item
Show that if \(G\) has a normal subgroup \(N\) such that \(G/N\) is
isomorphic to \(A_4\) and a subgroup \(H\) isomorphic to \(A_4\) it
must be the direct product of \(N\) and \(H\).
\item
Show that the dihedral group of order 36 is a non-abelian group of
order 36 whose Sylow-2 subgroup is not normal.
\end{enumerate}
\hypertarget{question-34}{%
\subsection{Question 34}\label{question-34}}
Let \(F\) be a field. Let \(f(x)\) be an irreducible polynomial in
\(F[x]\) of degree \(n\) and let \(g(x)\) be any polynomial in \(F[x]\).
Let \(p(x)\) be an irreducible factor (of degree \(m\)) of the
polynomial \(f(g(x))\).
Prove that \(n\) divides \(m\). Use this to prove that if \(r\) is an
integer which is not a perfect square, and \(n\) is a positive integer
then every irreducible factor of \(x^{2n} - r\) over \(\QQ[x]\) has even
degree.
\hypertarget{question-35}{%
\subsection{Question 35}\label{question-35}}
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Let \(f (x)\) be an irreducible polynomial of degree 4 in \(\QQ[x]\)
whose splitting field \(K\) over \(\QQ\) has Galois group \(G = S_4\).
Let \(\theta\) be a root of \(f(x)\). Prove that \(\QQ[\theta]\) is an
extension of \(\QQ\) of degree 4 and that there are no intermediate
fields between \(\QQ\) and \(\QQ[\theta]\).
\item
Prove that if \(K\) is a Galois extension of \(\QQ\) of degree 4, then
there is an intermediate subfield between \(K\) and \(\QQ\).
\end{enumerate}
\hypertarget{question-36}{%
\subsection{Question 36}\label{question-36}}
A ring R is called \emph{simple} if its only two-sided ideals are \(0\)
and \(R\).
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Suppose \(R\) is a commutative ring with 1. Prove \(R\) is simple if
and only if \(R\) is a field.
\item
Let \(k\) be a field. Show the ring \(M_n (k)\), \(n \times n\)
matrices with entries in \(k\), is a simple ring.
\end{enumerate}
\hypertarget{question-37}{%
\subsection{Question 37}\label{question-37}}
For a ring \(R\), let \(U(R)\) denote the multiplicative group of units
in \(R\). Recall that in an integral domain \(R\), \(r \in R\) is called
\emph{irreducible} if \(r\) is not a unit in R, and the only divisors of
\(r\) have the form \(ru\) with \(u\) a unit in \(R\).
We call a non-zero, non-unit \(r \in R\) \emph{prime} in \(R\) if
\(r \divides ab \implies r \divides a\) or \(r \divides b\). Consider
the ring \(R = \{a + b \sqrt{-5} \suchthat a, b \in \ZZ\}\).
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Prove \(R\) is an integral domain.
\item
Show \(U(R) = \{\pm1\}\).
\item
Show \(3, 2 + \sqrt{-5}\), and \(2 - \sqrt{-5}\) are irreducible in
\(R\).
\item
Show 3 is not prime in \(R\).
\item
Conclude \(R\) is not a PID.
\end{enumerate}
\hypertarget{question-38}{%
\subsection{Question 38}\label{question-38}}
Let \(F\) be a field and let \(V\) and \(W\) be vector spaces over \(F\)
.
Make \(V\) and \(W\) into \(F[x]\dash\)modules via linear operators
\(T\) on \(V\) and \(S\) on \(W\) by defining \(X \cdot v = T (v)\) for
all \(v \in V\) and \(X \cdot w = S(w)\) for all \(w \in W\).
Denote the resulting \(F[x]\dash\)modules by \(V_T\) and \(W_S\)
respectively.
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Show that an \(F[x]\dash\)module homomorphism from \(V_T\) to \(W_S\)
consists of an \(F\dash\)linear transformation \(R : V \to W\) such
that \(RT = SR\).
\item
Show that \(V_T \cong W_S\) as \(F[x]\dash\)modules \(\iff\) there is an
\(F\dash\)linear isomorphism \(P : V \to W\) such that
\(T = P\inv SP\).
\item
Recall that a module \(M\) is \emph{simple} if \(M \neq 0\) and any
proper submodule of \(M\) must be zero. Suppose that \(V\) has
dimension 2. Give an example of \(F\), \(T\) with \(V_T\) simple.
\item
Assume \(F\) is algebraically closed. Prove that if \(V\) has
dimension 2, then any \(V_T\) is not simple.
\end{enumerate}
\hypertarget{question-39}{%
\subsection{Question 39}\label{question-39}}
Classify the groups of order \(182 = 2 \cdot 7 \cdot 13\).
\hypertarget{question-40}{%
\subsection{Question 40}\label{question-40}}
Let \(G\) be a finite group of order \(p^nm\) where \(p\) is a prime and
\(m\) is not divisible by \(p\). Prove that if \(H\) is a subgroup of
\(G\) of order \(p^k\) for some \(k<n\), then the normalizer of \(H\) in
\(G\) properly contains \(H\).
\hypertarget{question-41}{%
\subsection{Question 41}\label{question-41}}
Let \(H\) be a subgroup of \(S_n\) of index \(n\). Prove:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
There is an isomorphism \(f: S_n \to S_n\) such that \(f(H)\) is the
subgroup of \(S_n\) stabilizing \(n\). In particular, \(H\) is
isomorphic to \(S_{n-1}\).
\item
The only subgroups of \(S_n\) containing \(H\) are \(S_n\) and \(H\).
\end{enumerate}
\hypertarget{question-42}{%
\subsection{Question 42}\label{question-42}}
\begin{itemize}
\item
Prove that a group of order \(351=3^3\cdot 13\) cannot be simple.
\item
Prove that a group of order \(33\) must be cyclic.
\end{itemize}
\hypertarget{question-43}{%
\subsection{Question 43}\label{question-43}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Let \(G\) be a group, and \(Z(G)\) the center of \(G\). Prove that if
\(G/Z(G)\) is cyclic, then \(G\) is abelian.
\item
Prove that a group of order \(p^n\), where \(p\) is a prime and
\(n \geq 1\), has non-trivial center.
\item
Prove that a group of order \(p^2\) must be abelian.
\end{enumerate}
\hypertarget{question-44}{%
\subsection{Question 44}\label{question-44}}
Let \(G\) be a finite group.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Prove that if \(H < G\) is a proper subgroup, then \(G\) is not the
union of conjugates of \(H\).
\item
Suppose that \(G\) acts transitively on a set \(X\) with \(|X| > 1\).
Prove that there exists an element of \(G\) with no fixed points in
\(X\).
\end{enumerate}
\hypertarget{question-45}{%
\subsection{Question 45}\label{question-45}}
Classify all groups of order \(15\) and of order \(30\).
\hypertarget{question-46}{%
\subsection{Question 46}\label{question-46}}
Count the number of \(p\)-Sylow subgroups of \(S_p\).
\hypertarget{question-47}{%
\subsection{Question 47}\label{question-47}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Let \(G\) be a group of order \(n\). Suppose that for every divisor
\(d\) of \(n\), \(G\) contains at most one subgroup of order \(d\).
Show that \(G\) is cyclic.
\item
Let \(F\) be a field. Show that every finite subgroup of the group of
units \(F^\times\) is cyclic.
\end{enumerate}
\hypertarget{question-48}{%
\subsection{Question 48}\label{question-48}}
Let \(K\) and \(L\) be finite fields. Show that \(K\) is contained in
\(L\) if and only if \(\# K = p^r\) and \(\# L = p^s\) for the same
prime \(p\), and \(r \leq s\).
\hypertarget{question-49}{%
\subsection{Question 49}\label{question-49}}
Let \(K\) and \(L\) be finite fields with \(K \subseteq L\). Prove that
\(L\) is Galois over \(K\) and that \(\mathrm{Gal}(L/K)\) is cyclic.
\hypertarget{question-50}{%
\subsection{Question 50}\label{question-50}}
Fix a field \(F\), a separable polynomial \(f\in F[x]\) of degree
\(n \geq 3\), and a splitting field \(L\) for \(f\). Prove that if
\([L:F] = n!\) then:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
\(f\) is irreducible.
\item
For each root \(r\) of \(f\), \(r\) is the unique root of \(f\) in
\(F(r)\).
\item
For every root \(r\) of \(f\), there are no fields strictly between
\(F\) and \(F(r)\).
\end{enumerate}
\hypertarget{question-51}{%
\subsection{Question 51}\label{question-51}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Show that \(\sqrt{2+\sqrt{2}}\) is a root of
\(p(x) = x^4 - 4x^2 + 2 \in \mathbb{Q}[x]\).
\item
Prove that \(\mathbb{Q}(\sqrt{2 + \sqrt{2}})\) is a Galois extension
of \(\mathbb{Q}\) and find its Galois group. (Hint: note that
\(\sqrt{2 - \sqrt{2}}\) is another root of \(p(x)\)).
\item
Let \(f(x) = x^3 - 5\). Determine the splitting field \(K\) of
\(f(x)\) over \(\mathbb{Q}\) and the Galois group of \(f(x)\). Give an
example of a proper sub-extension \(\mathbb{Q} \subset L \subset K\),
such that \(L/\mathbb{Q}\) is Galois.
\end{enumerate}
\hypertarget{question-52}{%
\subsection{Question 52}\label{question-52}}
An integral domain \(R\) is said to be a \emph{Euclidean domain} if
there is a function \(N: R \to \{n\in\mathbb{Z} \mid n\geq 0\}\) such
that \(N(0)=0\) and for each \(a,b\in R\) with \(b\neq 0\), there exist
elements \(q,r\in R\) with \begin{align*}
a = qb + r, \quad \text{and} \quad r = 0 \, \text{ or } \, N(r) < N(b).
\end{align*}
Prove:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
The ring \(F[[x]]\) of power series over a field \(F\) is a Euclidean
domain.
\item
Every Euclidean domain is a PID.
\end{enumerate}
\hypertarget{question-53}{%
\subsection{Question 53}\label{question-53}}
Let \(F\) be a field, and let \(R\) be the subring of \(F[X]\) consisting of
polynomials whose coefficient of \(X\) is \(0\). Prove that \(R\) is
not a UFD.
\hypertarget{question-54}{%
\subsection{Question 54}\label{question-54}}
\(R\) is a commutative ring with 1. Prove that if \(I\) is a maximal
ideal in \(R\), then \(R/I\) is a field. Prove that if \(R\) is a PID,
then every nonzero prime ideal in \(R\) is maximal. Conclude that if
\(R\) is a PID and \(p\in R\) is prime, then \(R/(p)\) is a field.
\hypertarget{question-55}{%
\subsection{Question 55}\label{question-55}}
Prove that any square matrix is conjugate to its transpose matrix. (You
may prove it over \(\mathbb{C}\)).
\hypertarget{question-56}{%
\subsection{Question 56}\label{question-56}}
Determine the number of conjugacy classes of \(16 \times 16\) matrices
with entries in \(\mathbb{Q}\) and minimal polynomial
\((x^2+1)^2(x^3+2)^2\).
\hypertarget{question-57}{%
\subsection{Question 57}\label{question-57}}
Let \(V\) be a vector space over a field \(F\). The evaluation map
\(e\colon V \to (V^\vee)^\vee\) is defined by
\(e(v)(f) \colonequals f(v)\) for \(v\in V\) and \(f\in V^\vee\).
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Prove that \(e\) is an injection.
\item
Prove that \(e\) is an isomorphism if and only if \(V\) is finite
dimensional.
\end{enumerate}
\hypertarget{question-58}{%
\subsection{Question 58}\label{question-58}}
Let \(R\) be a principal ideal domain that is not a field, and write
\(F\) for its field of fractions. Prove that \(F\) is not a finitely
generated \(R\)-module.
\hypertarget{question-59}{%
\subsection{Question 59}\label{question-59}}
Carefully state Zorn's lemma and use it to prove that every vector space
has a basis.
\hypertarget{question-60}{%
\subsection{Question 60}\label{question-60}}
Show that no finite group is the union of conjugates of a proper
subgroup.
\hypertarget{question-61}{%
\subsection{Question 61}\label{question-61}}
Classify all groups of order 18 up to isomorphism.
\hypertarget{question-62}{%
\subsection{Question 62}\label{question-62}}
Let \(\alpha,\beta\) denote the unique positive real \(5^{\text{th}}\)
root of 7 and \(4^{\text{th}}\) root of 5, respectively. Determine the
degree of \(\mathbb Q(\alpha,\beta)\) over \(\mathbb Q\).
\hypertarget{question-63}{%
\subsection{Question 63}\label{question-63}}
Show that the field extension
\(\mathbb Q\subseteq\mathbb Q\left( \sqrt{2+\sqrt2}\right)\) is Galois
and determine its Galois group.
\hypertarget{question-64}{%
\subsection{Question 64}\label{question-64}}
Let \(M\) be a square matrix over a field \(K\). Use a suitable
canonical form to show that \(M\) is similar to its transpose \(M^T\).
\hypertarget{question-65}{%
\subsection{Question 65}\label{question-65}}
Let \(G\) be a finite group and \(\pi_0\), \(\pi_1\) be two irreducible
representations of \(G\). Prove or disprove the following assertion:
\(\pi_0\) and \(\pi_1\) are equivalent if and only if
\(\det\pi_0(g)=\det\pi_1(g)\) for all \(g\in G\).
\hypertarget{question-66}{%
\subsection{Question 66}\label{question-66}}
Let \(R\) be a Noetherian ring. Prove that \(R[x]\) and \(R[[x]]\) are
both Noetherian. (The first part of the question is asking you to prove
the Hilbert Basis Theorem, not to use it!)
\hypertarget{question-67}{%
\subsection{Question 67}\label{question-67}}
Classify (with proof) all fields with finitely many elements.
\hypertarget{question-68}{%
\subsection{Question 68}\label{question-68}}
Suppose \(A\) is a commutative ring and \(M\) is a finitely presented
module. Given any surjection \(\phi:A^n\rightarrow M\) from a finite
free \(A\)-module, show that \(\ker\phi\) is finitely generated.
\hypertarget{question-69}{%
\subsection{Question 69}\label{question-69}}
Classify all groups of order 57.
\hypertarget{question-70}{%
\subsection{Question 70}\label{question-70}}
Show that a finite simple group cannot have a 2-dimensional irreducible
representation over \(\mathbb C\).
\begin{quote}
Hint: the determinant might prove useful.
\end{quote}
\hypertarget{question-71}{%
\subsection{Question 71}\label{question-71}}
Let \(G\) be a finite simple group. Assume that every proper subgroup of
\(G\) is abelian. Prove that then \(G\) is cyclic of prime order.
\hypertarget{question-72}{%
\subsection{Question 72}\label{question-72}}
Let \(a\in\mathbb N\), \(a>0\). Compute the Galois group of the
splitting field of the polynomial \(x^5-5a^4x+a\) over \(\mathbb Q\).
\hypertarget{question-73}{%
\subsection{Question 73}\label{question-73}}
Recall that an inner automorphism of a group is an automorphism given by
conjugation by an element of the group. An outer automorphism is an
automorphism that is not inner.
\begin{itemize}
\tightlist
\item
Prove that \(S_5\) has a subgroup of order 20.
\item
Use the subgroup from (a) to construct a degree 6 permutation
representation of \(S_5\) (i.e., an embedding
\(S_5 \hookrightarrow S_6\) as a transitive permutation group on 6
letters).
\item
Conclude that \(S_6\) has an outer automorphism.
\end{itemize}
\hypertarget{question-74}{%
\subsection{Question 74}\label{question-74}}
Let \(A\) be a commutative ring and \(M\) a finitely generated
\(A\)-module. Define \begin{align*}
\Ann(M) = \{a \in A: am = 0 \text{ for all } m \in M\}
.\end{align*} Show that for a prime ideal \(\mathfrak p \subset A\), the
following are equivalent:
\begin{itemize}
\item
\(\Ann(M) \not\subset \mathfrak p\)
\item
The localization of \(M\) at the prime ideal \(\mathfrak p\) is \(0\).
\item
\(M \otimes_A k(\mathfrak p) = 0\), where
\(k(\mathfrak p) = A_{\mathfrak p}/\mathfrak p A_{\mathfrak p}\) is
the residue field of \(A\) at \(\mathfrak p\).
\end{itemize}
\hypertarget{question-75}{%
\subsection{Question 75}\label{question-75}}
Let \(A = \CC[x,y]/(y^2-(x-1)^3 - (x-1)^2)\).
\begin{itemize}
\tightlist
\item
Show that \(A\) is an integral domain and sketch the \(\RR\)-points of
\(\text{Spec} A\).
\item
Find the integral closure of \(A\). Recall that for an integral domain
\(A\) with fraction field \(K\), the integral closure of \(A\) in
\(K\) is the set of all elements of \(K\) integral over \(A\).
\end{itemize}
\hypertarget{question-76}{%
\subsection{Question 76}\label{question-76}}
Let \(R = k[x,y]\) where \(k\) is a field, and let \(I=(x,y)R\).
\begin{itemize}
\item
Show that \begin{align*}
0 \to R \mapsvia{\phi} R \oplus R \mapsvia{\psi} R \to k \to 0
\end{align*} where \(\phi(a) = (-ya,xa)\), \(\psi((a,b)) = xa+yb\) for
\(a,b \in R\), is a projective resolution of the \(R\)-module
\(k \simeq R/I\).
\item
Show that \(I\) is not a flat \(R\)-module by computing
\(\Tor_i^R(I,k)\).
\end{itemize}
\hypertarget{question-77}{%
\subsection{Question 77}\label{question-77}}
\begin{itemize}
\tightlist
\item
Find an irreducible polynomial of degree 5 over the field
\(\mathbb Z/2\) of two elements and use it to construct a field of
order 32 as a quotient of the polynomial ring \(\mathbb Z/2[x]\).
\item
Using the polynomial found in part (a), find a \(5\times5\) matrix
\(M\) over \(\mathbb Z/2\) of order 31, so that \(M^{31}=I\) but
\(M\neq I\).
\end{itemize}
\hypertarget{question-78}{%
\subsection{Question 78}\label{question-78}}
Find the minimal polynomial of \(\sqrt2+\sqrt3\) over \(\mathbb Q\).
Justify your answer.
\hypertarget{question-79}{%
\subsection{Question 79}\label{question-79}}
\hfill
\begin{itemize}
\tightlist
\item
Let \(R\) be a commutative ring with no nonzero nilpotent elements.
Show that the only units in the polynomial ring \(R[x]\) are the units
of \(R\), regarded as constant polynomials.
\item
Find all units in the polynomial ring \(\mathbb Z_4[x]\).
\end{itemize}
\hypertarget{question-80}{%
\subsection{Question 80}\label{question-80}}
Let \(p\), \(q\) be two distinct primes. Prove that there is at most one
non-abelian group of order \(pq\) and describe the pairs \((p,q)\) such
that there is no non-abelian group of order \(pq\).
\hypertarget{question-81}{%
\subsection{Question 81}\label{question-81}}
\begin{itemize}
\tightlist
\item
Let \(L\) be a Galois extension of a field \(K\) of degree 4. What is
the minimum number of subfields there could be strictly between \(K\)
and \(L\)? What is the maximum number of such subfields? Give examples
where these bounds are attained.
\item
How do these numbers change if we assume only that \(L\) is separable
(but not necessarily Galois) over \(K\)?
\end{itemize}
\hypertarget{question-82}{%
\subsection{Question 82}\label{question-82}}
Let \(R\) be a commutative algebra over \(\mathbb C\). A derivation of
\(R\) is a \(\mathbb C\)-linear map \(D:R\rightarrow R\) such that (i)
\(D(1)=0\) and (ii) \(D(ab)=D(a)b+aD(b)\) for all \(a,b\in R\).
\begin{itemize}
\item
Describe all derivations of the polynomial ring \(\mathbb C[x]\).
\item
Let \(A\) be the subring (or \(\mathbb C\)-subalgebra) of
\(\mathrm{End}_{\mathbb C}(\mathbb C[x])\) generated by all
derivations of \(\mathbb C[x]\) and the left multiplications by \(x\).
Prove that \(\mathbb C[x]\) is a simple left \(A\)-module.
\begin{quote}
Note that the inclusion
\(A\rightarrow\mathrm{End}_{\mathbb C}(\mathbb C[x])\) defines a
natural left \(A\)-module structure on \(\mathbb C[x]\).
\end{quote}
\end{itemize}
\hypertarget{question-83}{%
\subsection{Question 83}\label{question-83}}
Let \(G\) be a non-abelian group of order \(p^3\) with \(p\) a prime.
\begin{itemize}
\item
Determine the order of the center \(Z\) of \(G\).
\item
Determine the number of inequivalent complex 1-dimensional
representations of \(G\).
\item
Compute the dimensions of all the inequivalent irreducible
representations of \(G\) and verify that the number of such
representations equals the number of conjugacy classes of \(G\).
\end{itemize}
\hypertarget{question-84}{%
\subsection{Question 84}\label{question-84}}
\begin{itemize}
\item
Let \(G\) be a group (not necessarily finite) that contains a subgroup
of index \(n\). Show that \(G\) contains a \textit{normal} subgroup
\(N\) such that \(n\leq[G:N]\leq n!\)
\item
Use part (a) to show that there is no simple group of order 36.
\end{itemize}
\hypertarget{question-85}{%
\subsection{Question 85}\label{question-85}}
Let \(p\) be a prime, let \(\mathbb F_p\) be the \(p\)-element field,
and let \(K=\mathbb F_p(t)\) be the field of rational functions in \(t\)
with coefficients in \(\mathbb F_p\). Consider the polynomial
\(f(x)= x^p-t\in K[x]\).
\begin{itemize}
\item
Show that \(f\) does not have a root in \(K\).
\item
Let \(E\) be the splitting field of \(f\) over \(K\). Find the
factorization of \(f\) over \(E\).
\item
Conclude that \(f\) is irreducible over \(K\).
\end{itemize}
\hypertarget{question-86}{%
\subsection{Question 86}\label{question-86}}
Recall that a ring \(A\) is called \textit{graded} if it admits a direct
sum decomposition \(A=\oplus_{n=0}^{\infty}A_n\) as abelian groups, with
the property that \(A_iA_j\subseteq A_{i+j}\) for all \(i,j\geq 0\).
Prove that a graded commutative ring \(A=\oplus_{n=0}^{\infty} A_n\) is
Noetherian if and only if \(A_0\) is Noetherian and \(A\) is finitely
generated as an algebra over \(A_0\).
\hypertarget{question-87}{%
\subsection{Question 87}\label{question-87}}
Let \(R\) be a ring with the property that \(a^2=a\) for all \(a\in R\).
\begin{itemize}
\item
Compute the Jacobson radical of \(R\).
\item
What is the characteristic of \(R\)?
\item
Prove that \(R\) is commutative.
\item
Prove that if \(R\) is finite, then \(R\) is isomorphic (as a ring) to
\((\mathbb Z/2\mathbb Z)^d\) for some \(d\).
\end{itemize}
\hypertarget{question-88}{%
\subsection{Question 88}\label{question-88}}
Let \(\overline{\mathbb F_p}\) denote the algebraic closure of
\(\mathbb F_p\). Show that the Galois group
\(\Gal(\overline{\mathbb F_p}/\mathbb F_p)\) has no non-trivial finite
subgroups.
\hypertarget{question-89}{%
\subsection{Question 89}\label{question-89}}
Let \(C_p\) denote the cyclic group of order \(p\).
\begin{itemize}
\item
Show that \(C_p\) has two irreducible representations over
\(\mathbb Q\) (up to isomorphism), one of dimension 1 and one of
dimension \(p-1\).
\item
Let \(G\) be a finite group, and let
\(\rho:G\rightarrow \GL_n(\mathbb Q)\) be a representation of \(G\)
over \(\mathbb Q\). Let
\(\rho_{\mathbb C}:G\rightarrow\GL_n(\mathbb C)\) denote \(\rho\)
followed by the inclusion
\(\GL_n(\mathbb Q)\rightarrow \GL_n(\mathbb C)\). Thus
\(\rho_{\mathbb C}\) is a representation of \(G\) over \(\mathbb C\),
called the \textit{complexification} of \(\rho\). We say that an
irreducible representation \(\rho\) of \(G\) is
\textit{absolutely irreducible} if its complexification remains
irreducible over \(\mathbb C\). Now suppose \(G\) is
abelian and that every representation of \(G\) over \(\mathbb Q\) is
absolutely irreducible. Show that \(G\cong(C_2)^k\) for some \(k\)
(i.e., is a product of cyclic groups of order 2).
\end{itemize}
\hypertarget{question-90}{%
\subsection{Question 90}\label{question-90}}
Let \(G\) be a finite group and \(\mathbb Z[G]\) the integral group
algebra. Let \(\mathcal Z\) be the center of \(\mathbb Z[G]\). For each
conjugacy class \(C\subseteq G\), let \(P_C=\sum_{g\in C}g\).
\begin{itemize}
\item
Show that the elements \(P_C\) form a \(\mathbb Z\)-basis for
\(\mathcal Z\). Hence \(\mathcal Z\cong\mathbb Z^d\) as an abelian
group, where \(d\) is the number of conjugacy classes in \(G\).
\item
Show that if a ring \(R\) is isomorphic to \(\mathbb Z^d\) as an
abelian group, then every element in \(R\) satisfies a monic integral
polynomial.
\begin{quote}
\textbf{Hint:} Let \(\{v_1,\dots,v_d\}\) be a basis of \(R\) and for a
fixed non-zero \(r\in R\), write \(rv_i=\sum_j a_{ij}v_j\). Use the
Cayley-Hamilton theorem.
\end{quote}
\item
Let \(\pi:G\rightarrow\GL(V)\) be an irreducible representation of
\(G\) (over \(\mathbb C\)). Show that \(\pi(P_C)\) acts on \(V\) as
multiplication by the scalar
\begin{align*}
\frac{|C|\chi_{\pi}(C)}{\dim V},
\end{align*}
where \(\chi_{\pi}(C)\) is the value of the character \(\chi_{\pi}\)
on any element of \(C\).
\item
Conclude that \(|C|\chi_{\pi}(C)/\dim V\) is an algebraic integer.
\end{itemize}
\hypertarget{question-91}{%
\subsection{Question 91}\label{question-91}}
\begin{itemize}
\item
Suppose that \(G\) is a finitely generated group. Let \(n\) be a
positive integer. Prove that \(G\) has only finitely many subgroups of
index \(n\)
\item
Let \(p\) be a prime number. If \(G\) is any finitely-generated
abelian group, let \(t_p(G)\) denote the number of subgroups of \(G\)
of index \(p\). Determine the possible values of \(t_p(G)\) as \(G\)
varies over all finitely-generated abelian groups.
\end{itemize}
\hypertarget{question-92}{%
\subsection{Question 92}\label{question-92}}
Suppose that \(G\) is a finite group of order 2013. Prove that \(G\) has
a normal subgroup \(N\) of index 3 and that \(N\) is a cyclic group.
Furthermore, prove that the center of \(G\) has order divisible by 11.
(You will need the factorization \(2013=3\cdot11\cdot61\).)
\hypertarget{question-93}{%
\subsection{Question 93}\label{question-93}}
This question concerns an extension \(K\) of \(\mathbb Q\) such that
\([K:\mathbb Q]=8\). Assume that \(K/\mathbb Q\) is Galois and let
\(G=\Gal(K/\mathbb Q)\). Furthermore, assume that \(G\) is non-abelian.
\begin{itemize}
\item
Prove that \(K\) has a unique subfield \(F\) such that \(F/\mathbb Q\)
is Galois and \([F:\mathbb Q]=4\).
\item
Prove that \(F\) has the form \(F=\mathbb Q(\sqrt{d_1},\sqrt{d_2})\)
where \(d_1,d_2\) are non-zero integers.
\item
Suppose that \(G\) is the quaternionic group. Prove that \(d_1\) and
\(d_2\) are positive integers.
\end{itemize}
\hypertarget{question-94}{%
\subsection{Question 94}\label{question-94}}
This question concerns the polynomial ring \(R=\mathbb Z[x,y]\) and the
ideal \(I=(5,x^2+2)\) in \(R\).
\begin{itemize}
\item
Prove that \(I\) is a prime ideal of \(R\) and that \(R/I\) is a PID.
\item
Give an explicit example of a maximal ideal of \(R\) which contains
\(I\). (Give a set of generators for such an ideal.)
\item
Show that there are infinitely many distinct maximal ideals in \(R\)
which contain \(I\).
\end{itemize}
\hypertarget{question-95}{%
\subsection{Question 95}\label{question-95}}
Classify all groups of order 2012 up to isomorphism.
\begin{quote}
Hint: 503 is prime.
\end{quote}
\hypertarget{question-96}{%
\subsection{Question 96}\label{question-96}}
For any positive integer \(n\), let \(G_n\) be the group generated by
\(a\) and \(b\) subject to the following three relations: \begin{align*}
a^2=1, \quad b^2=1, \quad \text{and} \quad (ab)^n=1
.\end{align*}
\begin{itemize}
\tightlist
\item
Find the order of the group \(G_n\)
\end{itemize}
\hypertarget{question-97}{%
\subsection{Question 97}\label{question-97}}
Determine the Galois groups of the following polynomials over
\(\mathbb Q\).
\begin{itemize}
\item
\(f(x)=x^4+4x^2+1\)
\item
\(f(x)=x^4+4x^2-5\).
\end{itemize}
\hypertarget{question-98}{%
\subsection{Question 98}\label{question-98}}
Let \(R\) be a (commutative) principal ideal domain, let \(M\) and \(N\)
be finitely generated free \(R\)-modules, and let
\(\varphi:M\rightarrow N\) be an \(R\)-module homomorphism.
\begin{itemize}
\item
Let \(K\) be the kernel of \(\varphi\). Prove that \(K\) is a direct
summand of \(M\).
\item
Let \(C\) be the image of \(\varphi\). Show by example (specifying
\(R\), \(M\), \(N\), and \(\varphi\)) that \(C\) need not be a direct
summand of \(N\).
\end{itemize}
\hypertarget{question-99}{%
\subsection{Question 99}\label{question-99}}
In this problem, as you apply Sylow's Theorem, state precisely which
portions you are using.
\begin{itemize}
\item
Prove that there is no simple group of order 30.
\item
Suppose that \(G\) is a simple group of order 60. Determine the number
of \(p\)-Sylow subgroups of \(G\) for each prime \(p\) dividing 60,
then prove that \(G\) is isomorphic to the alternating group \(A_5\).
\end{itemize}
\begin{quote}
Note: in the second part, you needn't show that \(A_5\) is simple. You
need only show that if there is a simple group of order 60, then it must
be isomorphic to \(A_5\).
\end{quote}
\hypertarget{question-100}{%
\subsection{Question 100}\label{question-100}}
Describe the Galois group and the intermediate fields of the cyclotomic
extension \(\mathbb Q(\zeta_{12})/\mathbb Q\).
\hypertarget{question-101}{%
\subsection{Question 101}\label{question-101}}
Let \begin{align*}
R=\mathbb Z[x]/(x^2+x+1).
\end{align*}
\begin{itemize}
\tightlist
\item
Answer the following questions with suitable justification.
\begin{itemize}
\tightlist
\item
Is \(R\) a Noetherian ring?
\item
Is \(R\) an Artinian ring?
\end{itemize}
\item
Prove that \(R\) is an integrally closed domain.
\end{itemize}
\hypertarget{question-102}{%
\subsection{Question 102}\label{question-102}}
Let \(R\) be a commutative ring. Recall that an element \(r\) of \(R\)
is \textit{nilpotent} if \(r^n=0\) for some positive integer \(n\) and
that the \textit{nilradical} of \(R\) is the set \(N(R)\) of nilpotent
elements.
\begin{itemize}
\item
Prove that \begin{align*}
N(R)=\cap_{P\text{ prime}}P
.\end{align*}
\begin{quote}
Hint: given a non-nilpotent element \(r\) of \(R\), you may wish to
construct a prime ideal that does not contain \(r\) or its powers.
\end{quote}
\item
Given a positive integer \(m\), determine the nilradical of
\(\mathbb Z/(m)\).
\item
Determine the nilradical of \(\mathbb C[x,y]/(y^2-x^3)\).
\item
Let \(p(x,y)\) be a polynomial in \(\mathbb C[x,y]\) such that for any
complex number \(a\), \(p(a,a^{3/2})=0\). Prove that \(p(x,y)\) is
divisible by \(y^2-x^3\).
\end{itemize}
\hypertarget{question-103}{%
\subsection{Question 103}\label{question-103}}
Given a finite group \(G\), recall that its \emph{regular
representation} is the representation on the complex group algebra
\(\mathbb C[G]\) induced by left multiplication of \(G\) on itself and
its \textit{adjoint representation} is the representation on the complex
group algebra \(\mathbb C[G]\) induced by conjugation of \(G\) on
itself.
\begin{itemize}
\item
Let \(G=\GL_2(\mathbb F_2)\). Describe the number and dimensions of
the irreducible representations of \(G\). Then describe the
decomposition of its regular representation as a direct sum of
irreducible representations.
\item
Let \(G\) be a group of order 12. Show that its adjoint representation
is reducible; that is, there is a \(G\)-invariant subspace of
\(\mathbb C[G]\) besides 0 and \(\mathbb C[G]\).
\end{itemize}
\hypertarget{question-104}{%
\subsection{Question 104}\label{question-104}}
Let \(R\) be a commutative integral domain. Show that the following are
equivalent:
\begin{itemize}
\item
\(R\) is a field;
\item
\(R\) is a semi-simple ring;
\item
Any \(R\)-module is projective.
\end{itemize}
\hypertarget{question-105}{%
\subsection{Question 105}\label{question-105}}
Let \(p\) be a positive prime number, \(\mathbb F_p\) the field with
\(p\) elements, and let \(G=\text{GL}_2(\mathbb F_p)\).
\begin{itemize}
\item
Compute the order of \(G\), \(|G|\).
\item
Write down an explicit isomorphism from \(\mathbb Z/p\mathbb Z\) to
\begin{align*}
U=\left\{
\begin{pmatrix}
1 & a\\
0 & 1
\end{pmatrix}
\bigg|a\in\mathbb F_p\right\}.
\end{align*}
\item
How many subgroups of order \(p\) does \(G\) have?
\begin{quote}
Hint: compute \(gug\inv\) for \(g\in G\) and \(u\in U\); use this to
find the size of the normalizer of \(U\) in \(G\).
\end{quote}
\end{itemize}
\hypertarget{question-106}{%
\subsection{Question 106}\label{question-106}}
\begin{itemize}
\item
Give definitions of the following terms:
\begin{enumerate}
\def\labelenumi{(\roman{enumi})}
\tightlist
\item
  a finite length (left) module,
\item
  a composition series for a module, and
\item
  the length of a module.
\end{enumerate}
\item
Let \(l(M)\) denote the length of a module \(M\). Prove that if
\begin{align*}
0\rightarrow M_1\rightarrow M_2\rightarrow\dots\rightarrow
M_n\rightarrow 0
.\end{align*}
is an exact sequence of modules of finite length, then \begin{align*}
\sum_{i=1}^n(-1)^i \, l(M_i)=0
.\end{align*}
\end{itemize}
\hypertarget{question-107}{%
\subsection{Question 107}\label{question-107}}
Let \(\mathbb F\) be a field of characteristic \(p\), and \(G\) a group
of order \(p^n\). Let \(R=\mathbb F[G]\) be the group ring (group
algebra) of \(G\) over \(\mathbb F\), and let \(u:=\sum_{x\in G}x\) (so
\(u\) is an element of \(R\)).
\begin{itemize}
\item
Prove that \(u\) lies in the center of \(R\).
\item
Verify that \(Ru\) is a 2-sided ideal of \(R\).
\item
Show there exists a positive integer \(k\) such that \(u^k=0\).
Conclude that for such a \(k\), \((Ru)^k=0\).
\item
Show that \(R\) is \textbf{not} a semi-simple ring.
\begin{quote}
\textbf{Warning:} Please use the definition of a semi-simple ring: do
\textbf{not} use the result that a finite length ring fails to be
semisimple if and only if it has a non-zero nilpotent ideal.
\end{quote}
\end{itemize}
\hypertarget{question-108}{%
\subsection{Question 108}\label{question-108}}
Let \(f(x)=a_nx^n+a_{n-1}x^{n-1}+\dots+a_0\in\mathbb Z[x]\) (where
\(a_n\neq 0\)) and let \(R=\mathbb Z[x]/(f)\). Prove that \(R\) is a
finitely generated module over \(\mathbb Z\) if and only if
\(a_n=\pm 1\).
\hypertarget{question-109}{%
\subsection{Question 109}\label{question-109}}
Consider the ring \begin{align*}
S=C[0,1]=\{f:[0,1]\rightarrow\mathbb R:f\text{ is continuous}\}
\end{align*}
with the usual operations of addition and multiplication of functions.
\begin{itemize}
\item
What are the invertible elements of \(S\)?
\item
For \(a\in[0,1]\), define \(I_a=\{f\in S:f(a)=0\}\). Show that \(I_a\)
is a maximal ideal of \(S\).
\item
Show that the elements of any proper ideal of \(S\) have a common
zero, i.e., if \(I\) is a proper ideal of \(S\), then there exists
\(a\in[0,1]\) such that \(f(a)=0\) for all \(f\in I\). Conclude that
every maximal ideal of \(S\) is of the form \(I_a\) for some
\(a\in[0,1]\).
\begin{quote}
\textbf{Hint}: As \([0,1]\) is compact, every open cover of \([0,1]\)
contains a finite subcover.
\end{quote}
\end{itemize}
\hypertarget{question-110}{%
\subsection{Question 110}\label{question-110}}
Let \(F\) be a field of characteristic zero, and let \(K\) be an
\emph{algebraic} extension of \(F\) that possesses the following
property: every polynomial \(f\in F[x]\) has a root in \(K\). Show that
\(K\) is algebraically closed.
\begin{quote}
\textbf{Hint:} if \(K(\theta)/K\) is algebraic, consider \(F(\theta)/F\)
and its normal closure; primitive elements might be of help.
\end{quote}
\hypertarget{question-111}{%
\subsection{Question 111}\label{question-111}}
Let \(G\) be the unique non-abelian group of order 21.
\begin{itemize}
\item
Describe all 1-dimensional complex representations of \(G\).
\item
How many (non-isomorphic) irreducible complex representations does
\(G\) have and what are their dimensions?
\item
Determine the character table of \(G\).
\end{itemize}
\hypertarget{question-112}{%
\subsection{Question 112}\label{question-112}}
\begin{itemize}
\item
Classify all groups of order \(2009=7^2\times 41\).
\item
Suppose that \(G\) is a group of order 2009. How many intermediate
groups are there---that is, how many groups H are there with
\(1\subsetneq H\subsetneq G\), where both inclusions are proper?
(There may be several cases to consider.)
\end{itemize}
\hypertarget{question-113}{%
\subsection{Question 113}\label{question-113}}
Let \(K\) be a field. A discrete valuation on \(K\) is a function
\(\nu: K\setminus\{0\}\rightarrow\mathbb Z\) such that
\begin{itemize}
\item
\(\nu(ab)=\nu(a)+\nu(b)\)
\item
\(\nu\) is surjective
\item
\(\nu(a+b)\geq\min\{\nu(a),\nu(b)\}\) for
\(a,b\in K\setminus\{0\}\) with \(a+b\neq 0\).
\end{itemize}
Let \(R:=\{x\in K\setminus\{0\}:\nu(x)\geq0\}\cup\{0\}\). Then \(R\) is
called the valuation ring of \(\nu\).
Prove the following:
\begin{itemize}
\item
\(R\) is a subring of \(K\) containing the 1 in \(K\).
\item
for all \(x\in K\setminus\{0\}\), either \(x\) or \(x\inv\) is in
\(R\).
\item
\(x\) is a unit of \(R\) if and only if \(\nu(x)=0\).
\item
Let \(p\) be a prime number, \(K=\mathbb Q\), and
\(\nu_p:\mathbb Q\setminus\{0\}\rightarrow\mathbb Z\) be the function
defined by \(\nu_p(\frac ab)=n\) where \(\frac ab=p^n\frac cd\) and
\(p\) divides neither \(c\) nor \(d\). Prove that the corresponding
valuation ring \(R\) is the ring of all rational numbers whose
denominators are relatively prime to \(p\).
\end{itemize}
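To make the definition of \(\nu_p\) in the last item concrete, here is one
small worked evaluation (an illustration only; the numbers are chosen
arbitrarily). Take \(p=3\):
\begin{align*}
\frac{18}{5}=3^{2}\cdot\frac{2}{5}\implies \nu_3\!\left(\frac{18}{5}\right)=2,
\qquad
\frac{5}{9}=3^{-2}\cdot 5\implies \nu_3\!\left(\frac{5}{9}\right)=-2,
\end{align*}
so \(\frac{18}{5}\) lies in the valuation ring \(R\) while \(\frac{5}{9}\)
does not.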
\hypertarget{question-114}{%
\subsection{Question 114}\label{question-114}}
Let \(F\) be a field of characteristic not equal to 2.
\begin{itemize}
\item
Prove that any extension \(K\) of \(F\) of degree 2 is of the form
\(F(\sqrt D)\) where \(D\in F\) is not a square in \(F\) and,
conversely, that each such extension has degree 2 over \(F\).
\item
Let \(D_1,D_2\in F\), neither of which is a square in \(F\). Prove that
\([F(\sqrt{D_1},\sqrt{D_2}):F]=4\) if \(D_1D_2\) is not a square in
\(F\) and is of degree 2 otherwise.
\end{itemize}
\hypertarget{question-115}{%
\subsection{Question 115}\label{question-115}}
Let \(F\) be a field and \(p(x)\in F[x]\) an irreducible polynomial.
\begin{itemize}
\item
Prove that there exists a field extension \(K\) of \(F\) in which
\(p(x)\) has a root.
\item
Determine the dimension of \(K\) as a vector space over \(F\) and
exhibit a vector space basis for \(K\).
\item
If \(\theta\in K\) denotes a root of \(p(x)\), express \(\theta\inv\)
in terms of the basis found in part (b).
\item
Suppose \(p(x)=x^3+9x+6\). Show \(p(x)\) is irreducible over
\(\mathbb Q\). If \(\theta\) is a root of \(p(x)\), compute the
inverse of \((1+\theta)\) in \(\mathbb Q(\theta)\).
\end{itemize}
\hypertarget{question-116}{%
\subsection{Question 116}\label{question-116}}
Fix a ring \(R\), an \(R\)-module \(M\), and an \(R\)-module
homomorphism \(f:M\rightarrow M\).
\begin{itemize}
\item
If \(M\) satisfies the descending chain condition on submodules, show
that if \(f\) is injective, then \(f\) is surjective.
\begin{quote}
Hint: note that if \(f\) is injective, so are \(f\circ f\),
\(f\circ f\circ f\), etc.
\end{quote}
\item
Give an example of a ring \(R\), an \(R\)-module \(M\), and an
injective \(R\)-module homomorphism \(f:M\rightarrow M\) which is not
surjective.
\item
If \(M\) satisfies the ascending chain condition on submodules, show
that if \(f\) is surjective, then \(f\) is injective.
\item
Give an example of a ring \(R\), an \(R\)-module \(M\), and a
surjective \(R\)-module homomorphism \(f:M\rightarrow M\) which is not
injective.
\end{itemize}
\hypertarget{question-117}{%
\subsection{Question 117}\label{question-117}}
Let \(G\) be a finite group, \(k\) an algebraically closed field, and
\(V\) an irreducible \(k\)-linear representation of \(G\).
\begin{itemize}
\item
Show that \(\hom_{kG}(V,V)\) is a division algebra with \(k\) in its
center.
\item
Show that \(V\) is finite-dimensional over \(k\), and conclude that
\(\hom_{kG}(V,V)\) is also finite dimensional.
\item
Show the inclusion \(k\hookrightarrow\hom_{kG}(V,V)\) found in (a) is
an isomorphism. (For \(f\in\hom_{kG}(V,V)\), view \(f\) as a linear
transformation and consider \(f-\alpha I\), where \(\alpha\) is an
eigenvalue of \(f\)).
\end{itemize}
\hypertarget{question-118}{%
\subsection{Question 118}\label{question-118}}
Let \(f(x)\) be an irreducible polynomial of degree 5 over the field
\(\mathbb Q\) of rational numbers with exactly 3 real roots.
\begin{itemize}
\item
Show that \(f(x)\) is not solvable by radicals.
\item
Let \(E\) be the splitting field of \(f\) over \(\mathbb Q\).
Construct a Galois extension \(K\) of degree 2 over \(\mathbb Q\)
lying in \(E\) such that \textit{no} field \(F\) strictly between
\(K\) and \(E\) is Galois over \(\mathbb Q\).
\end{itemize}
\hypertarget{question-119}{%
\subsection{Question 119}\label{question-119}}
Let \(F\) be a finite field. Show for any positive integer \(n\) that
there are irreducible polynomials of degree \(n\) in \(F[x]\).
\hypertarget{question-120}{%
\subsection{Question 120}\label{question-120}}
Show that the order of the group \(\text{GL}_n(\mathbb F_q)\) of
invertible \(n\times n\) matrices over the field \(\mathbb F_q\) of
\(q\) elements is given by \((q^n-1)(q^n-q)\dots(q^n-q^{n-1})\).
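The formula can be sanity-checked by brute force in small cases. The
following short Python sketch (illustrative only; it uses nothing beyond
the standard library and assumes \(q\) is prime so that the determinant
test is valid) verifies the \(n=2\) case for a few values of \(q\):
\begin{verbatim}
import itertools

def count_invertible_2x2(q):
    """Brute-force count of invertible 2x2 matrices over F_q (q prime)."""
    count = 0
    for a, b, c, d in itertools.product(range(q), repeat=4):
        if (a * d - b * c) % q != 0:  # nonzero determinant in the field F_q
            count += 1
    return count

# the claimed formula (q^n - 1)(q^n - q)...(q^n - q^{n-1}) with n = 2
for q in (2, 3, 5):
    assert count_invertible_2x2(q) == (q**2 - 1) * (q**2 - q)
\end{verbatim}
For instance, with \(q=2\) both sides equal \(6\), consistent with the
well-known isomorphism \(\text{GL}_2(\mathbb F_2)\cong S_3\).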
\hypertarget{question-121}{%
\subsection{Question 121}\label{question-121}}
\begin{itemize}
\item
Let \(R\) be a commutative principal ideal domain. Show that any
\(R\)-module \(M\) generated by two elements takes the form
\(R/(a)\oplus R/(b)\) for some \(a,b\in R\). What more can you say
about \(a\) and \(b\)?
\item
Give a necessary and sufficient condition for two direct sums as in
part (a) to be isomorphic as \(R\)-modules.
\end{itemize}
\hypertarget{question-122}{%
\subsection{Question 122}\label{question-122}}
Let \(G\) be the subgroup of \(\text{GL}_3(\mathbb C)\) generated by the
three matrices \begin{align*}
A=
\begin{pmatrix}
0 & 0 & 1\\
0 & 1 & 0\\
1 & 0 & 0
\end{pmatrix},
\quad B=
\begin{pmatrix}
0 & 0 & 1\\
1 & 0 & 0\\
0 & 1 & 0
\end{pmatrix},
\quad C=
\begin{pmatrix}
i & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix}
\end{align*}
where \(i^2=-1\). Here \(\mathbb C\) denotes the complex field.
\begin{itemize}
\item
Compute the order of \(G\).
\item
Find a matrix in \(G\) of largest possible order (as an element of
\(G\)) and compute this order.
\item
Compute the number of elements in \(G\) with this largest order.
\end{itemize}
\hypertarget{question-123}{%
\subsection{Question 123}\label{question-123}}
\begin{itemize}
\item
Let \(G\) be a group of (finite) order \(n\). Show that any
irreducible left module over the group algebra \(\mathbb CG\) has
complex dimension at most \(\sqrt n\).
\item
Give an example of a group \(G\) of order \(n\geq5\) and an
irreducible left module over \(\mathbb CG\) of complex dimension
\(\lfloor\sqrt n\rfloor\), the greatest integer not exceeding \(\sqrt n\).
\end{itemize}
\hypertarget{question-124}{%
\subsection{Question 124}\label{question-124}}
Use the rational canonical form to show that any square matrix \(M\)
over a field \(k\) is similar to its transpose \(M^t\), recalling that
\(p(M)=0\) for some \(p\in k[t]\) if and only if \(p(M^t)=0\).
\hypertarget{question-125}{%
\subsection{Question 125}\label{question-125}}
Let \(K\) be a field of characteristic zero and \(L\) a Galois extension
of \(K\). Let \(f\) be an irreducible polynomial in \(K[x]\) of degree 7
and suppose \(f\) has no zeroes in \(L\). Show that \(f\) is irreducible
in \(L[x]\).
\hypertarget{question-126}{%
\subsection{Question 126}\label{question-126}}
Let \(K\) be a field of characteristic zero and \(f\in K[x]\) an
irreducible polynomial of degree \(n\). Let \(L\) be a splitting field
for \(f\). Let \(G\) be the group of automorphisms of \(L\) which act
trivially on \(K\).
\begin{itemize}
\item
Show that \(G\) embeds in the symmetric group \(S_n\).
\item
For each \(n\), give an example of a field \(K\) and polynomial \(f\)
such that \(G=S_n\).
\item
What are the possible groups \(G\) when \(n=3\)? Justify your answer.
\end{itemize}
\hypertarget{question-127}{%
\subsection{Question 127}\label{question-127}}
Show there are exactly two groups of order 21 up to isomorphism.
\hypertarget{question-128}{%
\subsection{Question 128}\label{question-128}}
Let \(K\) be the field \(\mathbb Q(z)\) of rational functions in a
variable \(z\) with coefficients in the rational field \(\mathbb Q\).
Let \(n\) be a positive integer. Consider the polynomial
\(x^n-z\in K[x]\).
\begin{itemize}
\item
Show that the polynomial \(x^n-z\) is irreducible over \(K\).
\item
Describe the splitting field of \(x^n-z\) over \(K\).
\item
Determine the Galois group of the splitting field of \(x^5-z\) over
the field \(K\).
\end{itemize}
\hypertarget{question-129}{%
\subsection{Question 129}\label{question-129}}
\begin{itemize}
\item
Let \(p<q<r\) be prime integers. Show that a group of order \(pqr\)
cannot be simple.
\item
Consider groups of orders \(2^2\cdot 3\cdot p\) where \(p\) has the
values 5, 7, and 11. For each of those values of \(p\), either display
a simple group of order \(2^2\cdot 3\cdot p\), or show that there
cannot be a simple group of that order.
\end{itemize}
\hypertarget{question-130}{%
\subsection{Question 130}\label{question-130}}
Let \(K/F\) be a finite Galois extension and let \(n=[K:F]\). There is a
theorem (often referred to as the ``normal basis theorem'') which states
that there exists an irreducible polynomial \(f(x)\in F[x]\) whose roots
form a basis for \(K\) as a vector space over \(F\). You may assume that
theorem in this problem.
\begin{itemize}
\item
Let \(G=\Gal(K/F)\). The action of \(G\) on \(K\) makes \(K\) into a
finite-dimensional representation space for \(G\) over \(F\). Prove
that \(K\) is isomorphic to the regular representation for \(G\) over
\(F\).
\begin{quote}
The regular representation is defined by letting \(G\) act on the
group algebra \(F[G]\) by multiplication on the left.
\end{quote}
\item
Suppose that the Galois group \(G\) is cyclic and that \(F\) contains
a primitive \(n^{\text{th}}\) root of unity. Show that there exists an
injective homomorphism \(\chi:G\rightarrow F^{\times}\).
\item
Show that \(K\) contains a non-zero element \(a\) with the following
property: \begin{align*}
g(a)=\chi(g)\cdot a
.\end{align*}
for all \(g\in G\).
\item
If \(a\) has the property stated in (c), show that \(K=F(a)\) and that
\(a^n\in F^{\times}\).
\end{itemize}
\hypertarget{question-131}{%
\subsection{Question 131}\label{question-131}}
Let \(G\) be the group of matrices of the form \begin{align*}
\begin{pmatrix}
1 & a & b\\
0 & 1 & c\\
0 & 0 & 1
\end{pmatrix}
.\end{align*}
with entries in the finite field \(\mathbb F_p\) of \(p\) elements, where
\(p\) is a prime.
\begin{itemize}
\item
Prove that \(G\) is non-abelian.
\item
Suppose \(p\) is odd. Prove that \(g^p=I_3\) for all \(g\in G\).
\item
Suppose that \(p=2\). It is known that there are exactly two
non-abelian groups of order 8, up to isomorphism: the dihedral group
\(D_8\) and the quaternionic group. Assuming this fact without proof,
determine which of these groups \(G\) is isomorphic to.
\end{itemize}
\hypertarget{question-132}{%
\subsection{Question 132}\label{question-132}}
There are five nonisomorphic groups of order 8. For each of those groups
\(G\), find the smallest positive integer \(n\) such that there is an
injective homomorphism \(\varphi: G\rightarrow S_n\).
\hypertarget{question-133}{%
\subsection{Question 133}\label{question-133}}
For any group \(G\) we define \(\Omega(G)\) to be the image of the group
homomorphism \(\rho:G\rightarrow\Aut(G)\) where \(\rho\) maps \(g\in G\)
to the conjugation automorphism \(x\mapsto gxg\inv\). Starting with a
group \(G_0\), we define \(G_1=\Omega(G_0)\) and \(G_{i+1}=\Omega(G_i)\)
for all \(i\geq 0\). If \(G_0\) is of order \(p^e\) for a prime \(p\)
and integer \(e\geq 2\), prove that \(G_{e-1}\) is the trivial group.
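A small example consistent with the statement (illustration only, using
the standard fact that \(\Omega(G)\cong G/Z(G)\), which follows from the
definition of \(\rho\)): take \(G_0=D_8\), the dihedral group of order
\(2^3\), so \(e=3\). Then
\begin{align*}
G_1=\Omega(D_8)\cong D_8/Z(D_8)\cong \mathbb Z/2\times\mathbb Z/2,
\qquad
G_2=\Omega(\mathbb Z/2\times\mathbb Z/2)=\{1\},
\end{align*}
and indeed \(G_{e-1}=G_2\) is trivial.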
\hypertarget{question-134}{%
\subsection{Question 134}\label{question-134}}
Let \(\mathbb F_2\) be the field with two elements.
\begin{itemize}
\item
What is the order of \(\text{GL}_3(\mathbb F_2)\)?
\item
Use the fact that \(\text{GL}_3(\mathbb F_2)\) is a simple group
(which you should not prove) to find the number of elements of order 7
in \(\text{GL}_3(\mathbb F_2)\).
\end{itemize}
\hypertarget{question-135}{%
\subsection{Question 135}\label{question-135}}
Let \(G\) be a finite abelian group. Let \(f:\mathbb Z^m\rightarrow G\)
be a surjection of abelian groups. We may think of \(f\) as a
homomorphism of \(\mathbb Z\)-modules. Let \(K\) be the kernel of \(f\).
\begin{itemize}
\item
Prove that \(K\) is isomorphic to \(\mathbb Z^m\).
\item
We can therefore write the inclusion map \(K\rightarrow\mathbb Z^m\)
as \(\mathbb Z^m\rightarrow\mathbb Z^m\) and represent it by an
\(m\times m\) integer matrix \(A\). Prove that \(|\det A|=|G|\).
\end{itemize}
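A quick sanity check of part (b) in a concrete case (an illustration
only; the choice of \(G\) and \(f\) is arbitrary): take
\(G=\mathbb Z/2\times\mathbb Z/4\) with \(f:\mathbb Z^2\rightarrow G\)
reduction in each coordinate, so \(K=2\mathbb Z\times4\mathbb Z\) has
basis \((2,0)\) and \((0,4)\). Then
\begin{align*}
A=\begin{pmatrix}2 & 0\\ 0 & 4\end{pmatrix},
\qquad
|\det A|=8=|G|.
\end{align*}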
\hypertarget{question-136}{%
\subsection{Question 136}\label{question-136}}
Let \(R=C([0,1])\) be the ring of all continuous real-valued functions
on the closed interval \([0,1]\), and for each \(c\in[0,1]\), denote by
\(M_c\) the set of all functions \(f\in R\) such that \(f(c)=0\).
\begin{itemize}
\item
Prove that \(g\in R\) is a unit if and only if \(g(c)\neq0\) for all
\(c\in[0,1]\).
\item
Prove that for each \(c\in[0,1]\), \(M_c\) is a maximal ideal of
\(R\).
\item
Prove that if \(M\) is a maximal ideal of \(R\), then \(M=M_c\) for
some \(c\in[0,1]\).
\begin{quote}
Hint: compactness of \([0,1]\) may be relevant.
\end{quote}
\end{itemize}
\hypertarget{question-137}{%
\subsection{Question 137}\label{question-137}}
Let \(R\) and \(S\) be commutative rings, and \(f:R\rightarrow S\) a
ring homomorphism.
\begin{itemize}
\item
Show that if \(I\) is a prime ideal of \(S\), then \begin{align*}
f\inv(I)=\{r\in R:f(r)\in I\}
\end{align*}
is a prime ideal of \(R\).
\item
Let \(N\) be the set of nilpotent elements of \(R\): \begin{align*}
N=\{r\in R:r^m=0\text{ for some }m\geq 1\}
.\end{align*}
\(N\) is called the \textit{nilradical} of \(R\). Prove that it is an
ideal which is contained in every prime ideal.
\item
Part (a) lets us define a function \begin{align*}
f^*:\{\text{prime ideals of }S\} &\rightarrow \{\text{prime ideals of }R\} \\
I &\mapsto f\inv(I)
.\end{align*}
Let \(N\) be the nilradical of \(R\). Show that if \(S=R/N\) and
\(f:R\rightarrow R/N\) is the quotient map, then \(f^*\) is a
bijection.
\end{itemize}
\hypertarget{question-138}{%
\subsection{Question 138}\label{question-138}}
Consider the polynomial \(f(x)=x^{10}+x^5+1\in\mathbb Q[x]\) with
splitting field \(K\) over \(\mathbb Q\).
\begin{itemize}
\item
Determine whether \(f(x)\) is irreducible over \(\mathbb Q\) and find
\([K:\mathbb Q]\).
\item
Determine the structure of the Galois group \(\Gal(K/\mathbb Q)\).
\end{itemize}
\hypertarget{question-139}{%
\subsection{Question 139}\label{question-139}}
For each prime number \(p\) and each positive integer \(n\), how many
elements \(\alpha\) are there in \(\mathbb F_{p^n}\) such that
\(\mathbb F_p(\alpha)=\mathbb F_{p^6}\)?
\hypertarget{question-140}{%
\subsection{Question 140}\label{question-140}}
Assume that \(K\) is a cyclic group, \(H\) is an arbitrary group, and
\(\varphi_1\) and \(\varphi_2\) are homomorphisms from \(K\) into
\(\Aut(H)\) such that \(\varphi_1(K)\) and \(\varphi_2(K)\) are
conjugate subgroups of \(\Aut(H)\).
Prove by constructing an explicit isomorphism that
\(H\rtimes_{\varphi_1}K\cong H\rtimes_{\varphi_2} K\).
\begin{quote}
Suppose \(\sigma\varphi_1(K)\sigma\inv=\varphi_2(K)\) so that for
some \(a\in\mathbb Z\) we have
\(\sigma\varphi_1(k)\sigma\inv =\varphi_2(k)^a\) for all \(k\in K\).
Show that the map
\(\psi:H \rtimes_{\varphi_1}K\rightarrow H\rtimes_{\varphi_2}K\) defined
by \(\psi((h,k))=(\sigma(h),k^a)\) is a homomorphism. Show \(\psi\) is
bijective by constructing a 2-sided inverse.
\end{quote}
\hypertarget{real-analysis-85-questions}{%
\section{Real Analysis (85
Questions)}\label{real-analysis-85-questions}}
\hypertarget{question-1-1}{%
\subsection{Question 1}\label{question-1-1}}
Let \(C([0, 1])\) denote the space of all continuous real-valued
functions on \([0, 1]\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\tightlist
\item
Prove that \(C([0, 1])\) is complete under the uniform norm
\(\norm{f}_u := \displaystyle\sup_{x\in [0,1]} |f (x)|\).
\item
Prove that \(C([0, 1])\) is not complete under the \(L^1\dash\)norm
\(\norm{f}_1 = \displaystyle\int_0^1 |f (x)| ~dx\).
\end{enumerate}
\hypertarget{question-2-1}{%
\subsection{Question 2}\label{question-2-1}}
Let \(\mathcal B\) denote the set of all Borel subsets of \(\RR\) and
\(\mu : \mathcal B \to [0, \infty)\) denote a finite Borel measure on
\(\RR\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that if \(\{F_k\}\) is a sequence of Borel sets for which
\(F_k \supseteq F_{k+1}\) for all \(k\), then \[
\lim _{k \rightarrow \infty} \mu\left(F_{k}\right)=\mu\left(\bigcap_{k=1}^{\infty} F_{k}\right)
\]
\item
Suppose \(\mu\) has the property that \(\mu(E) = 0\) for every
\(E \in \mathcal B\) with Lebesgue measure \(m(E) = 0\). Prove that
for every \(\eps > 0\) there exists \(\delta > 0\) so that if
\(E \in \mathcal B\) with \(m(E) < \delta\), then \(\mu(E) < \eps\).
\end{enumerate}
\hypertarget{question-3-1}{%
\subsection{Question 3}\label{question-3-1}}
Let \(\{f_k\}\) be any sequence of functions in \(L^2([0, 1])\)
satisfying \(\norm{f_k}_2 \leq M\) for all \(k \in \NN\).
Prove that if \(f_k \to f\) almost everywhere, then
\(f \in L^2([0, 1])\) with \(\norm{f}_2 \leq M\) and \[
\lim _{k \rightarrow \infty} \int_{0}^{1} f_{k}(x) dx = \int_{0}^{1} f(x) d x
\]
\begin{quote}
Hint: Try using Fatou's Lemma to show that \(\norm{f}_2 \leq M\) and
then try applying Egorov's Theorem.
\end{quote}
\hypertarget{question-4-1}{%
\subsection{Question 4}\label{question-4-1}}
Let \(f\) be a non-negative function on \(\RR^n\) and
\(\mathcal A = \{(x, t) \in \RR^n \times \RR : 0 \leq t \leq f (x)\}\).
Prove the validity of the following two statements:
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
\(f\) is a Lebesgue measurable function on \(\RR^n \iff \mathcal A\)
is a Lebesgue measurable subset of \(\RR^{n+1}\)
\item
If \(f\) is a Lebesgue measurable function on \(\RR^n\), then \[
m(\mathcal{A})=\int_{\mathbb{R}^{n}} f(x) d x=\int_{0}^{\infty} m\left(\left\{x \in \mathbb{R}^{n}: f(x) \geq t\right\}\right) d t
\]
\end{enumerate}
\hypertarget{question-5-1}{%
\subsection{Question 5}\label{question-5-1}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(L^2([0, 1]) \subseteq L^1([0, 1])\) and argue that
\(L^2([0, 1])\) in fact forms a dense subset of \(L^1([0, 1])\).
\item
Let \(\Lambda\) be a continuous linear functional on \(L^1([0, 1])\).
Prove the Riesz Representation Theorem for \(L^1([0, 1])\) by
following the steps below:
\begin{enumerate}
\def\labelenumii{\roman{enumii}.}
\tightlist
\item
Establish the existence of a function \(g \in L^2([0, 1])\) which
represents \(\Lambda\) in the sense that \[
\Lambda(f) = \int_{0}^{1} f(x) g(x) ~d x \text{ for all } f \in L^2([0, 1]).
\]
\end{enumerate}
\begin{quote}
Hint: You may use, without proof, the Riesz Representation Theorem for
\(L^2([0, 1])\).
\end{quote}
\begin{enumerate}
\def\labelenumii{\roman{enumii}.}
\setcounter{enumii}{1}
\tightlist
\item
Argue that the \(g\) obtained above must in fact belong to
\(L^\infty([0, 1])\) and represent \(\Lambda\) in the sense that \[
\Lambda(f)=\int_{0}^{1} f(x) \overline{g(x)} d x \quad \text { for all } f \in L^{1}([0,1])
\] with \[
\|g\|_{L^{\infty}([0,1])}=\|\Lambda\|_{L^{1}([0,1])\dual}
\]
\end{enumerate}
\end{enumerate}
\hypertarget{question-6-1}{%
\subsection{Question 6}\label{question-6-1}}
Let \(\{a_n\}_{n=1}^\infty\) be a sequence of real numbers.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that if \(\displaystyle\lim_{n\to\infty} a_n = 0\), then
\[
\lim _{n \rightarrow \infty} \frac{a_{1}+\cdots+a_{n}}{n}=0
\]
\item
Prove that if \(\displaystyle\sum_{n=1}^{\infty} \frac{a_{n}}{n}\)
converges, then \[
\lim _{n \rightarrow \infty} \frac{a_{1}+\cdots+a_{n}}{n}=0
\]
\end{enumerate}
\hypertarget{question-7-1}{%
\subsection{Question 7}\label{question-7-1}}
Prove that \[
\left|\frac{d^{n}}{d x^{n}} \frac{\sin x}{x}\right| \leq \frac{1}{n}
\]
for all \(x \neq 0\) and positive integers \(n\).
\begin{quote}
Hint: Consider \(\displaystyle\int_0^1 \cos(tx) dt\)
\end{quote}
\hypertarget{question-8-1}{%
\subsection{Question 8}\label{question-8-1}}
Let \((X, \mathcal B, \mu)\) be a measure space with \(\mu(X) = 1\) and
\(\{B_n\}_{n=1}^\infty\) be a sequence of \(\mathcal B\)-measurable
subsets of \(X\), and \[
B \definedas \theset{x\in X \suchthat x\in B_n \text{ for infinitely many } n}.
\]
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Argue that \(B\) is also a \(\mathcal{B} \dash\)measurable subset of
\(X\).
\item
Prove that if \(\sum_{n=1}^\infty \mu(B_n) < \infty\) then
\(\mu(B)= 0\).
\item
Prove that if \(\sum_{n=1}^\infty \mu(B_n) = \infty\) \textbf{and} the
sequence of set complements \(\theset{B_n^c}_{n=1}^\infty\) satisfies
\[
\mu\left(\bigcap_{n=k}^{K} B_{n}^{c}\right)=\prod_{n=k}^{K}\left(1-\mu\left(B_{n}\right)\right)
\] for all positive integers \(k\) and \(K\) with \(k < K\), then
\(\mu(B) = 1\).
\end{enumerate}
\begin{quote}
Hint: Use the fact that \(1 - x \leq e^{-x}\) for all \(x\).
\end{quote}
\hypertarget{question-9-1}{%
\subsection{Question 9}\label{question-9-1}}
Let \(\{u_n\}_{n=1}^\infty\) be an orthonormal sequence in a Hilbert
space \(\mathcal{H}\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that for every \(x \in \mathcal H\) one has \[
\displaystyle\sum_{n=1}^{\infty}\left|\left\langle x, u_{n}\right\rangle\right|^{2} \leq\|x\|^{2}
\]
\item
Prove that for any sequence \(\{a_n\}_{n=1}^\infty \in \ell^2(\NN)\)
there exists an element \(x\in\mathcal H\) such that \[
a_n = \inner{x}{u_n} \text{ for all } n\in \NN
\] and \[
\norm{x}^2 = \sum_{n=1}^{\infty}\left|\left\langle x, u_{n}\right\rangle\right|^{2}
\]
\end{enumerate}
\hypertarget{question-10-1}{%
\subsection{Question 10}\label{question-10-1}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that if \(f\) is continuous with compact support on \(\RR\), then
\[
\lim _{y \rightarrow 0} \int_{\mathbb{R}}|f(x-y)-f(x)| d x=0
\]
\item
Let \(f\in L^1(\RR)\) and for each \(h > 0\) let \[
\mathcal{A}_{h} f(x):=\frac{1}{2 h} \int_{|y| \leq h} f(x-y) d y
\]
\end{enumerate}
\begin{enumerate}
\def\labelenumi{\roman{enumi}.}
\tightlist
\item
Prove that \(\left\|\mathcal{A}_{h} f\right\|_{1} \leq\|f\|_{1}\) for
all \(h > 0\).
\item
Prove that \(\mathcal{A}_h f \to f\) in \(L^1(\RR)\) as \(h \to 0^+\).
\end{enumerate}
\hypertarget{question-11-1}{%
\subsection{Question 11}\label{question-11-1}}
Define \[
E:=\left\{x \in \mathbb{R}:\left|x-\frac{p}{q}\right|<q^{-3} \text { for infinitely many } p, q \in \mathbb{N}\right\}.
\]
Prove that \(m(E) = 0\).
\hypertarget{question-12-1}{%
\subsection{Question 12}\label{question-12-1}}
Let \[
f_{n}(x):=\frac{x}{1+x^{n}}, \quad x \geq 0.
\]
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that this sequence converges pointwise and find its limit. Is the
convergence uniform on \([0, \infty)\)?
\item
Compute \[
\lim _{n \rightarrow \infty} \int_{0}^{\infty} f_{n}(x) d x
\]
\end{enumerate}
\hypertarget{question-13-1}{%
\subsection{Question 13}\label{question-13-1}}
Let \(f\) be a non-negative measurable function on \([0, 1]\).
Show that \[
\lim _{p \rightarrow \infty}\left(\int_{[0,1]} f(x)^{p} d x\right)^{\frac{1}{p}}=\|f\|_{\infty}.
\]
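For orientation (an example, not a proof): with \(f(x)=x\) on \([0,1]\),
\begin{align*}
\left(\int_{[0,1]} x^{p} ~d x\right)^{\frac1p}
=\left(\frac{1}{p+1}\right)^{\frac1p}
\longrightarrow 1=\|f\|_{\infty}
\quad\text{as } p\rightarrow\infty.
\end{align*}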
\hypertarget{question-14-1}{%
\subsection{Question 14}\label{question-14-1}}
Let \(f\in L^2([0, 1])\) and suppose \[
\int_{[0,1]} f(x) x^{n} d x=0 \text { for all integers } n \geq 0.
\] Show that \(f = 0\) almost everywhere.
\hypertarget{question-15-1}{%
\subsection{Question 15}\label{question-15-1}}
Suppose that
\begin{itemize}
\tightlist
\item
\(f_n, f \in L^1\),
\item
\(f_n \to f\) almost everywhere, and
\item
\(\int\left|f_{n}\right| \rightarrow \int|f|\).
\end{itemize}
Show that \(\int f_{n} \rightarrow \int f\)
\hypertarget{question-16-1}{%
\subsection{Question 16}\label{question-16-1}}
Let \(f(x) = \frac 1 x\). Show that \(f\) is uniformly continuous on
\((1, \infty)\) but not on \((0,\infty)\).
\hypertarget{question-17-1}{%
\subsection{Question 17}\label{question-17-1}}
Let \(E\subset \RR\) be a Lebesgue measurable set. Show that there is a
Borel set \(B \subset E\) such that \(m(E\setminus B) = 0\).
\hypertarget{question-18-1}{%
\subsection{Question 18}\label{question-18-1}}
Suppose \(f(x)\) and \(xf(x)\) are integrable on \(\RR\). Define \(F\)
by \[
F(t):=\int_{-\infty}^{\infty} f(x) \cos (x t) d x
\] Show that \[
F'(t)=-\int_{-\infty}^{\infty} x f(x) \sin (x t) d x.
\]
\hypertarget{question-19-1}{%
\subsection{Question 19}\label{question-19-1}}
Let \(f\in L^1([0, 1])\). Prove that \[
\lim_{n \to \infty} \int_{0}^{1} f(x) \abs{\sin n x} ~d x= \frac{2}{\pi} \int_{0}^{1} f(x) ~d x
\]
\begin{quote}
Hint: Begin with the case that \(f\) is the characteristic function of
an interval.
\end{quote}
\hypertarget{question-20-1}{%
\subsection{Question 20}\label{question-20-1}}
Let \(f \geq 0\) be a measurable function on \(\RR\). Show that \[
\int_{\mathbb{R}} f=\int_{0}^{\infty} m(\{x: f(x)>t\}) d t
\]
\hypertarget{question-21-1}{%
\subsection{Question 21}\label{question-21-1}}
Compute the following limit and justify your calculations: \[
\lim_{n \rightarrow \infty} \int_{1}^{n} \frac{d x}{\left(1+\frac{x}{n}\right)^{n} \sqrt[n]{x}}
\]
\hypertarget{question-22-1}{%
\subsection{Question 22}\label{question-22-1}}
Let \(K\) be the set of numbers in \([0, 1]\) whose decimal expansions
do not use the digit \(4\).
\begin{quote}
We use the convention that when a decimal number ends with 4 but all
other digits are different from 4, we replace the digit \(4\) with
\(399\cdots\). For example, \(0.8754 = 0.8753999\cdots\).
\end{quote}
Show that \(K\) is a compact, nowhere dense set without isolated points,
and find the Lebesgue measure \(m(K)\).
\hypertarget{question-23-1}{%
\subsection{Question 23}\label{question-23-1}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\tightlist
\item
Let \(\mu\) be a measure on a measurable space \((X, \mathcal M)\) and
\(f\) a positive measurable function.
\end{enumerate}
Define a measure \(\lambda\) by \[
\lambda(E):=\int_{E} f ~d \mu, \quad E \in \mathcal{M}
\]
Show that for \(g\) any positive measurable function, \[
\int_{X} g ~d \lambda=\int_{X} f g ~d \mu
\]
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Let \(E \subset \RR\) be a measurable set such that \[
\int_{E} x^{2} ~d m=0.
\] Show that \(m(E) = 0\).
\end{enumerate}
\hypertarget{question-24-1}{%
\subsection{Question 24}\label{question-24-1}}
Let \[
f_{n}(x)=a e^{-n a x}-b e^{-n b x} \quad \text{ where } 0 < a < b.
\]
Show that
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\tightlist
\item
\(\sum_{n=1}^{\infty}\left|f_{n}\right| \text { is not in } L^{1}([0, \infty), m)\)
\end{enumerate}
\begin{quote}
Hint: \(f_n(x)\) has a root \(x_n\).
\end{quote}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
\[
\sum_{n=1}^{\infty} f_{n} \text { is in } L^{1}([0, \infty), m)
\quad \text { and } \quad
\int_{0}^{\infty} \sum_{n=1}^{\infty} f_{n}(x) ~d m=\ln \frac{b}{a}
\]
\end{enumerate}
\hypertarget{question-25-1}{%
\subsection{Question 25}\label{question-25-1}}
Let \(f(x, y)\) on \([-1, 1]^2\) be defined by \[
f(x, y) = \begin{cases}
\frac{x y}{\left(x^{2}+y^{2}\right)^{2}} & (x, y) \neq (0, 0) \\
0 & (x, y) = (0, 0)
\end{cases}
\] Determine if \(f\) is integrable.
\hypertarget{question-26-1}{%
\subsection{Question 26}\label{question-26-1}}
Let \(f, g \in L^2(\RR)\). Prove that the formula \[
h(x):=\int_{-\infty}^{\infty} f(t) g(x-t) d t
\] defines a uniformly continuous function \(h\) on \(\RR\).
\hypertarget{question-27-1}{%
\subsection{Question 27}\label{question-27-1}}
Show that the space \(C^1([a, b])\) is a Banach space when equipped with
the norm \[
\|f\|:=\sup _{x \in[a, b]}|f(x)|+\sup _{x \in[a, b]}\left|f^{\prime}(x)\right|.
\]
\hypertarget{question-28-1}{%
\subsection{Question 28}\label{question-28-1}}
Let \[
f(x) = \sum_{n=0}^{\infty} \frac{x^{n}}{n !}.
\]
Describe the intervals on which \(f\) does and does not converge
uniformly.
\hypertarget{question-29-1}{%
\subsection{Question 29}\label{question-29-1}}
Let \(f(x) = x^2\) and \(E \subset [0, \infty) \definedas \RR^+\).
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Show that \[
m^*(E) = 0 \iff m^*(f(E)) = 0.
\]
\item
Deduce that the map
\end{enumerate}
\begin{align*}
\phi: \mathcal{L}(\RR^+) &\to \mathcal{L}(\RR^+) \\
E &\mapsto f(E)
\end{align*} is a bijection from the class of Lebesgue measurable sets
of \([0, \infty)\) to itself.
\hypertarget{question-30-1}{%
\subsection{Question 30}\label{question-30-1}}
Let \[
S = \spanof_\CC\theset{\chi_{(a, b)} \suchthat a, b \in \RR},
\] the complex linear span of characteristic functions of intervals of
the form \((a, b)\).
Show that for every \(f\in L^1(\RR)\), there exists a sequence of
functions \(\theset{f_n} \subset S\) such that \[
\lim _{n \rightarrow \infty}\left\|f_{n}-f\right\|_{1}=0
\]
\hypertarget{question-31-1}{%
\subsection{Question 31}\label{question-31-1}}
Let \[
f_{n}(x)=n x(1-x)^{n}, \quad n \in \mathbb{N}.
\]
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Show that \(f_n \to 0\) pointwise but not uniformly on \([0, 1]\).
\begin{quote}
Hint: Consider the maximum of \(f_n\).
\end{quote}
\item
\[
\lim _{n \rightarrow \infty} \int_{0}^{1} n(1-x)^{n} \sin x d x=0
\]
\end{enumerate}
\hypertarget{question-32-1}{%
\subsection{Question 32}\label{question-32-1}}
Let \(\phi\) be a compactly supported smooth function that vanishes
outside of an interval \([-N, N]\) such that
\(\int_{\RR} \phi(x) d x=1\).
For \(f\in L^1(\RR)\), define \[
K_{j}(x):=j \phi(j x), \quad \quad f \ast K_{j}(x):=\int_{\mathbb{R}} f(x-y) K_{j}(y) ~d y
\] and prove the following:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Each \(f\ast K_j\) is smooth and compactly supported.
\item
\[
\lim _{j \rightarrow \infty}\left\|f * K_{j}-f\right\|_{1}=0
\]
\end{enumerate}
\begin{quote}
Hint: \[
\lim _{y \rightarrow 0} \int_{\mathbb{R}}|f(x-y)-f(x)| d x=0
\]
\end{quote}
\hypertarget{question-33-1}{%
\subsection{Question 33}\label{question-33-1}}
Let \(X\) be a complete metric space and define a norm \[
\|f\|:=\max \{|f(x)|: x \in X\}.
\]
Show that \((C^0(X), \norm{\wait} )\) (the space of continuous
functions \(f: X\to \RR\)) is complete.
\hypertarget{question-34-1}{%
\subsection{Question 34}\label{question-34-1}}
For \(n\in \NN\), define \[
e_{n}=\left(1+\frac{1}{n}\right)^{n}
\quad \text { and } \quad
E_{n}=\left(1+\frac{1}{n}\right)^{n+1}
\]
Show that \(e_n < E_n\), and prove Bernoulli's inequality: \[
(1+x)^{n} \geq 1+n x \text { for }-1<x<\infty \text { and } n \in \mathbb{N}
\]
Use this to show the following:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
The sequence \(e_n\) is increasing.
\item
The sequence \(E_n\) is decreasing.
\item
\(2 < e_n < E_n < 4\).
\item
\(\lim _{n \rightarrow \infty} e_{n}=\lim _{n \rightarrow \infty} E_{n}\).
\end{enumerate}
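For orientation, a few numerical values computed directly from the
definitions (they illustrate, but of course do not prove, items 1--4):
\begin{align*}
e_1=2,\quad e_2=2.25,\quad e_3=\tfrac{64}{27}\approx 2.370,
\qquad
E_1=4,\quad E_2=3.375,\quad E_3=\tfrac{256}{81}\approx 3.160.
\end{align*}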
\hypertarget{question-35-1}{%
\subsection{Question 35}\label{question-35-1}}
Let \(0 < \lambda < 1\) and construct a Cantor set \(C_\lambda\) by
successively removing middle intervals of length \(\lambda\).
Prove that \(m(C_\lambda) = 0\).
\hypertarget{question-36-1}{%
\subsection{Question 36}\label{question-36-1}}
Let \(f\) be Lebesgue measurable on \(\RR\) and \(E \subset \RR\) be
measurable such that \[
0<A=\int_{E} f(x) d x<\infty.
\]
Show that for every \(0 < t < 1\), there exists a measurable set
\(E_t \subset E\) such that \[
\int_{E_{t}} f(x) d x=t A.
\]
\hypertarget{question-37-1}{%
\subsection{Question 37}\label{question-37-1}}
Let \(E \subset \RR\) be measurable with \(m(E) < \infty\). Define \[
f(x)=m(E \cap(E+x)).
\]
Show that
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
\(f\in L^1(\RR)\).
\item
\(f\) is uniformly continuous.
\item
\(\lim _{|x| \rightarrow \infty} f(x)=0\)
\end{enumerate}
\begin{quote}
Hint: \[
\chi_{E \cap(E+x)}(y)=\chi_{E}(y) \chi_{E}(y-x)
\]
\end{quote}
\hypertarget{question-38-1}{%
\subsection{Question 38}\label{question-38-1}}
Let \((X, \mathcal M, \mu)\) be a measure space. For \(f\in L^1(\mu)\)
and \(\lambda > 0\), define \[
\phi(\lambda)=\mu(\{x \in X | f(x)>\lambda\})
\quad \text { and } \quad
\psi(\lambda)=\mu(\{x \in X | f(x)<-\lambda\})
\]
Show that \(\phi, \psi\) are Borel measurable and \[
\int_{X}|f| ~d \mu=\int_{0}^{\infty}[\phi(\lambda)+\psi(\lambda)] ~d \lambda
\]
\hypertarget{question-39-1}{%
\subsection{Question 39}\label{question-39-1}}
Without using the Riesz Representation Theorem, compute \[
\sup \left\{\left|\int_{0}^{1} f(x) e^{x} d x\right| \suchthat f \in L^{2}([0,1], m),~~ \|f\|_{2} \leq 1\right\}
\]
\hypertarget{question-40-1}{%
\subsection{Question 40}\label{question-40-1}}
Define \[
f(x) = \sum_{n=1}^{\infty} \frac{1}{n^{x}}.
\]
Show that \(f\) converges to a differentiable function on
\((1, \infty)\) and that \[
f'(x) =\sum_{n=1}^{\infty}\left(\frac{1}{n^{x}}\right)^{\prime}.
\]
\begin{quote}
Hint: \[
\left(\frac{1}{n^{x}}\right)^{\prime}=-\frac{1}{n^{x}} \ln n
\]
\end{quote}
\hypertarget{question-41-1}{%
\subsection{Question 41}\label{question-41-1}}
Let \(f, g: [a, b] \to \RR\) be measurable with \[
\int_{a}^{b} f(x) ~d x=\int_{a}^{b} g(x) ~d x.
\]
Show that either
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
\(f(x) = g(x)\) almost everywhere, or
\item
There exists a measurable set \(E \subset [a, b]\) such that \[
\int_{E} f(x) ~d x>\int_{E} g(x) ~d x
\]
\end{enumerate}
\hypertarget{question-42-1}{%
\subsection{Question 42}\label{question-42-1}}
Let \(f\in L^1(\RR)\). Show that \[
\lim _{x \rightarrow 0} \int_{\mathbb{R}}|f(y-x)-f(y)| d y=0
\]
\hypertarget{question-43-1}{%
\subsection{Question 43}\label{question-43-1}}
Let \((X, \mathcal M, \mu)\) be a measure space and suppose
\(\theset{E_n} \subset \mathcal M\) satisfies \[
\lim _{n \rightarrow \infty} \mu\left(X \backslash E_{n}\right)=0.
\]
Define \[
G \definedas \theset{x\in X \suchthat x\in E_n \text{ for only finitely many } n}.
\]
Show that \(G \in \mathcal M\) and \(\mu(G) = 0\).
\hypertarget{question-44-1}{%
\subsection{Question 44}\label{question-44-1}}
Let \(\phi\in L^\infty(\RR)\). Show that the following limit exists and
satisfies the equality \[
\lim _{n \rightarrow \infty}\left(\int_{\mathbb{R}} \frac{|\phi(x)|^{n}}{1+x^{2}} d x\right)^{\frac{1}{n}} = \norm{\phi}_\infty.
\]
\hypertarget{question-45-1}{%
\subsection{Question 45}\label{question-45-1}}
Let \(f, g \in L^2(\RR)\). Show that \[
\lim _{n \rightarrow \infty} \int_{\mathbb{R}} f(x) g(x+n) d x=0
\]
\hypertarget{question-46-1}{%
\subsection{Question 46}\label{question-46-1}}
Let \((X, d)\) and \((Y, \rho)\) be metric spaces, \(f: X\to Y\), and
\(x_0 \in X\).
Prove that the following statements are equivalent:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
For every \(\varepsilon > 0 \quad \exists \delta > 0\) such that
\(\rho( f(x), f(x_0) ) < \varepsilon\) whenever
\(d(x, x_0) < \delta\).
\item
The sequence \(\theset{f(x_n)}_{n=1}^\infty \to f(x_0)\) for every
sequence \(\theset{x_n} \to x_0\) in \(X\).
\end{enumerate}
\hypertarget{question-47-1}{%
\subsection{Question 47}\label{question-47-1}}
Let \(f: \RR \to \CC\) be continuous with period 1. Prove that \[
\lim _{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^{N} f(n \alpha)=\int_{0}^{1} f(t) d t \quad \forall \alpha \in \RR\setminus\QQ.
\]
\begin{quote}
Hint: show this first for the functions \(f(t) = e^{2\pi i k t}\) for
\(k\in \ZZ\).
\end{quote}
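As a purely numerical sanity check (not part of the proof), one can
approximate the averages for one specific irrational \(\alpha\). The
short Python sketch below (illustrative only; it assumes the arbitrary
choices \(\alpha=\sqrt2\) and \(f(t)=\cos 2\pi t\), whose integral over
\([0,1]\) is \(0\)) shows the averages drifting toward that integral:
\begin{verbatim}
import math

alpha = math.sqrt(2)                       # one specific irrational number
f = lambda t: math.cos(2 * math.pi * t)    # continuous with period 1

for N in (10, 1000, 100000):
    avg = sum(f(n * alpha) for n in range(1, N + 1)) / N
    print(N, avg)                          # should approach 0 as N grows
\end{verbatim}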
\hypertarget{question-48-1}{%
\subsection{Question 48}\label{question-48-1}}
Let \(\mu\) be a finite Borel measure on \(\RR\) and \(E \subset \RR\)
Borel. Prove that the following statements are equivalent:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
\(\forall \varepsilon > 0\) there exists \(G\) open and \(F\) closed
such that \[
F \subseteq E \subseteq G \quad \text{and} \quad \mu(G\setminus F) < \varepsilon.
\]
\item
There exists a \(V \in G_\delta\) and \(H \in F_\sigma\) such that \[
H \subseteq E \subseteq V \quad \text{and}\quad \mu(V\setminus H) = 0
\]
\end{enumerate}
\hypertarget{question-49-1}{%
\subsection{Question 49}\label{question-49-1}}
Define \[
f(x, y):=\left\{\begin{array}{ll}{\frac{x^{1 / 3}}{(1+x y)^{3 / 2}}} & {\text { if } 0 \leq x \leq y} \\ {0} & {\text { otherwise }}\end{array}\right.
\]
Carefully show that \(f \in L^1(\RR^2)\).
\hypertarget{question-50-1}{%
\subsection{Question 50}\label{question-50-1}}
Let \(\mathcal H\) be a Hilbert space.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Let \(x\in \mathcal H\) and \(\theset{u_n}_{n=1}^N\) be an orthonormal
set. Prove that the best approximation to \(x\) in \(\mathcal H\) by
an element in \(\spanof_\CC\theset{u_n}\) is given by \[
\hat x \definedas \sum_{n=1}^N \inner{x}{u_n}u_n.
\]
\item
Conclude that finite dimensional subspaces of \(\mathcal H\) are
always closed.
\end{enumerate}
\hypertarget{question-51-1}{%
\subsection{Question 51}\label{question-51-1}}
Let \(f \in L^1(\RR)\) and \(g\) be a bounded measurable function on
\(\RR\).
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Show that the convolution \(f\ast g\) is well-defined, bounded, and
uniformly continuous on \(\RR\).
\item
Prove that if one further assumes that \(g \in C^1(\RR)\) with bounded
derivative, then \(f\ast g \in C^1(\RR)\) and \[
\frac{d}{d x}(f * g)=f *\left(\frac{d}{d x} g\right)
\]
\end{enumerate}
\hypertarget{question-52-1}{%
\subsection{Question 52}\label{question-52-1}}
Define \[
f(x)=c_{0}+c_{1} x^{1}+c_{2} x^{2}+\ldots+c_{n} x^{n} \text { with } n \text { even and } c_{n}>0.
\]
Show that there is a number \(x_m\) such that \(f(x_m) \leq f(x)\) for
all \(x\in \RR\).
\hypertarget{question-53-1}{%
\subsection{Question 53}\label{question-53-1}}
Let \(f: \RR \to \RR\) be Lebesgue measurable.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Show that there is a sequence of simple functions \(s_n(x)\) such that
\(s_n(x) \to f(x)\) for all \(x\in \RR\).
\item
Show that there is a Borel measurable function \(g\) such that
\(g = f\) almost everywhere.
\end{enumerate}
\hypertarget{question-54-1}{%
\subsection{Question 54}\label{question-54-1}}
Compute the following limit: \[
\lim _{n \rightarrow \infty} \int_{1}^{n} \frac{n e^{-x}}{1+n x^{2}} ~\sin \left(\frac x n\right) ~d x
\]
\hypertarget{question-55-1}{%
\subsection{Question 55}\label{question-55-1}}
Let \(f: [1, \infty) \to \RR\) such that \(f(1) = 1\) and \[
f^{\prime}(x)= \frac{1} {x^{2}+f(x)^{2}}
\]
Show that the following limit exists and satisfies the bound \[
\lim _{x \rightarrow \infty} f(x) \leq 1 + \frac \pi 4
\]
\hypertarget{question-56-1}{%
\subsection{Question 56}\label{question-56-1}}
Let \(f, g \in L^1(\RR)\) be Borel measurable.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Show that
\end{enumerate}
\begin{itemize}
\tightlist
\item
The function \[F(x, y) \definedas f(x-y) g(y)\] is Borel measurable on
\(\RR^2\), and
\item
For almost every \(x\in \RR\), \[F_x(y) \definedas f(x-y)g(y)\] is
integrable with respect to \(y\).
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Show that \(f\ast g \in L^1(\RR)\) and \[
\|f * g\|_{1} \leq\|f\|_{1}\|g\|_{1}
\]
\end{enumerate}
\hypertarget{question-57-1}{%
\subsection{Question 57}\label{question-57-1}}
Let \(f: [0, 1] \to \RR\) be continuous. Show that \[
\sup \left\{\|f g\|_{1} \suchthat g \in L^{1}[0,1],~~ \|g\|_{1} \leq 1\right\}=\|f\|_{\infty}
\]
\hypertarget{question-58-1}{%
\subsection{Question 58}\label{question-58-1}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Give an example of a continuous \(f\in L^1(\RR)\) such that
\(f(x) \not\to 0\) as \(\abs x \to \infty\).
\item
Show that if \(f\) is \emph{uniformly} continuous, then \[
\lim_{\abs{x} \to \infty} f(x) = 0.
\]
\end{enumerate}
\hypertarget{question-59-1}{%
\subsection{Question 59}\label{question-59-1}}
Let \(\theset{a_n}\) be a sequence of real numbers such that \[
\theset{b_n} \in \ell^2(\NN) \implies \sum a_n b_n < \infty.
\] Show that \(\sum a_n^2 < \infty\).
\begin{quote}
Note: Assume \(a_n, b_n\) are all non-negative.
\end{quote}
\hypertarget{question-60-1}{%
\subsection{Question 60}\label{question-60-1}}
Let \(f: \RR \to \RR\) and suppose \[
\forall x\in \RR,\quad f(x) \geq \limsup _{y \rightarrow x} f(y)
\] Prove that \(f\) is Borel measurable.
\hypertarget{question-61-1}{%
\subsection{Question 61}\label{question-61-1}}
Let \((X, \mathcal M, \mu)\) be a measure space and suppose \(f\) is a
measurable function on \(X\). Show that \[
\lim _{n \rightarrow \infty} \int_{X} f^{n} ~d \mu =
\begin{cases}
\infty & \text{ or } \\
\mu(f\inv(1)),
\end{cases}
\] and characterize the collection of functions of each type.
\hypertarget{question-62-1}{%
\subsection{Question 62}\label{question-62-1}}
Let \(f, g \in L^1([0, 1])\) and for all \(x\in [0, 1]\) define \[
F(x):=\int_{0}^{x} f(y) d y \quad \text { and } \quad G(x):=\int_{0}^{x} g(y) d y.
\]
Prove that \[
\int_{0}^{1} F(x) g(x) d x=F(1) G(1)-\int_{0}^{1} f(x) G(x) d x
\]
\hypertarget{question-63-1}{%
\subsection{Question 63}\label{question-63-1}}
Let \(\theset{f_n}\) be a sequence of continuous functions such that
\(\sum f_n\) converges uniformly.
Prove that \(\sum f_n\) is also continuous.
\hypertarget{question-64-1}{%
\subsection{Question 64}\label{question-64-1}}
Let \(I\) be an index set and \(\alpha: I \to (0, \infty)\).
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Show that \[
\sum_{i \in I} a(i):=\sup _{\substack{ J \subset I \\ J \text { finite }}} \sum_{i \in J} a(i)<\infty \implies I \text{ is countable.}
\]
\item
Suppose \(I = \QQ\) and \(\sum_{q \in \mathbb{Q}} a(q)<\infty\).
Define \[
f(x):=\sum_{\substack{q \in \mathbb{Q}\\ q \leq x}} a(q).
\] Show that \(f\) is continuous at \(x \iff x\not\in \QQ\).
\end{enumerate}
\hypertarget{question-65-1}{%
\subsection{Question 65}\label{question-65-1}}
Let \(f\in L^1(\RR)\). Show that \[
\forall\varepsilon > 0 ~~\exists \delta > 0 \text{ such that } m(E) < \delta \implies \int_{E}|f(x)| d x<\varepsilon
\]
\hypertarget{question-66-1}{%
\subsection{Question 66}\label{question-66-1}}
Let \(g\in L^\infty([0, 1])\). Prove that \[
\int_{[0,1]} f(x) g(x) d x=0 \quad\text{for all continuous } f:[0, 1] \to \RR \implies g(x) = 0 \text{ almost everywhere. }
\]
\hypertarget{question-67-1}{%
\subsection{Question 67}\label{question-67-1}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Let \(f \in C_c^0(\RR^n)\), and show \[
\lim _{t \rightarrow 0} \int_{\mathbb{R}^{n}}|f(x+t)-f(x)| d x=0.
\]
\item
Extend the above result to \(f\in L^1(\RR^n)\) and show that \[
f\in L^1(\RR^n),~ g\in L^\infty(\RR^n) \implies f \ast g \text{ is bounded and uniformly continuous. }
\]
\end{enumerate}
\hypertarget{question-68-1}{%
\subsection{Question 68}\label{question-68-1}}
Let \(1 \leq p,q \leq \infty\) be conjugate exponents, and show that \[
f \in L^p(\RR^n) \implies \|f\|_{p}=\sup _{\|g\|_{q}=1}\left|\int f(x) g(x) d x\right|
\]
\hypertarget{question-69-1}{%
\subsection{Question 69}\label{question-69-1}}
Prove or disprove each of the following statements.
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
If \(f\) is of bounded variation on \([0,1]\), then it is continuous
on \([0,1]\).
\item
If \(f : [0, 1] \to [0, 1]\) is a continuous function, then there
exists \(x_0 \in [0, 1]\) such that \(f(x_0) = x_0\).
\item
Let \(\{f_n\}\) be a sequence of uniformly continuous functions on an
interval \(I\). If \(\{f_n\}\) converges uniformly to a function \(f\)
on \(I\), then \(f\) is also uniformly continuous on \(I\).
\item
If \(f\) is differentiable on a connected set
\(E \subset \mathbb{R}^n\), then for any \(x, y \in E\), there exists
\(z \in E\) such that \(f(x) - f(y) = \nabla f(z)(x - y)\).
\end{enumerate}
\hypertarget{question-70-1}{%
\subsection{Question 70}\label{question-70-1}}
Prove or disprove each of the following statements.
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\setcounter{enumi}{3}
\item
If \(\lim_{n\to\infty} |a_{n+1}/a_n|\) exists, then
\(\lim_{n\to \infty} |a_n|^{1/n}\) exists and the two limits are
equal.
\item
If \(\sum_{n=1}^\infty a_n x^n\) converges for all \(x \in [0, 1]\),
then
\(\lim_{x\to 1^-} \sum_{n=1}^\infty a_n x^n=\sum_{n=1}^\infty a_n\)
\end{enumerate}
\hypertarget{question-71-1}{%
\subsection{Question 71}\label{question-71-1}}
Prove or disprove each of the following statements.
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\setcounter{enumi}{5}
\item
If \(E \subset \mathbb{R}\) and
\(\mu(E) = \inf\{\sum_{I_i \in S} |I_i| : S = \{I_i\}_{i=1}^n \text{ such that } E \subset \union_{i=1}^n I_i \text{ for some } n \in \mathbb{N}\}\)
then \(\mu\) coincides with the outer measure of \(E\).
\item
If \(E\) is a Borel set and \(f\) is a measurable function, then
\(f^{-1}(E)\) is also measurable.
\end{enumerate}
\hypertarget{question-72-1}{%
\subsection{Question 72}\label{question-72-1}}
If \(f\) is a finite real valued measurable function on a measurable set
\(E \subset \mathbb{R}\), show that the set \(\{(x, f(x)) : x \in E\}\)
is measurable.
\hypertarget{question-73-1}{%
\subsection{Question 73}\label{question-73-1}}
Let \(g: [0, 1] \times [0, 1] \to [0, 1]\) be a continuous function and
let \(\{f_n\}\) be a sequence of functions such that
\[f_n(x)=\begin{cases}0, & 0\leq x\leq 1/n,\\ \int_0^{x-\frac1n} g(t,f_n(t))\,dt, & 1/n\leq x \leq 1.\end{cases}\]
With the help of the Arzela-Ascoli theorem or otherwise, show that there
exists a continuous function \(f : [0, 1] \to \mathbb{R}\) such that
\(f(x) = \int_0^x g(t, f(t))dt\)
for all \(x \in [0, 1]\).
\begin{quote}
Hint: first show that \(|f_n(x_1) - f_n(x_2)| \leq |x_1 - x_2|\).
\end{quote}
\hypertarget{question-74-1}{%
\subsection{Question 74}\label{question-74-1}}
If \(\limsup_{n\rightarrow \infty} a_n\leq l\), show that
\(\limsup_{n\rightarrow \infty}\sum_{i=1}^n{a_i/n}\leq l\).
\hypertarget{question-75-1}{%
\subsection{Question 75}\label{question-75-1}}
If \(f\) is a nonnegative measurable function on \(\mathbb{R}\) and
\(p > 0\), show that
\[\int f^p ~dx = \int_0^{\infty} p t^{p-1} \abs{\{x : f(x) > t\}} ~dt\]
where \(\abs{\{x : f(x) > t\}}\) is the Lebesgue measure of the set
\(\{x : f(x) > t\}\).
\hypertarget{question-76-1}{%
\subsection{Question 76}\label{question-76-1}}
If \(f\) is a nonnegative measurable function on \([0, \pi]\) and
\(\int_0^\pi f(x)^3~dx < \infty\), show that \begin{align*}
\lim_{\alpha\to\infty} \int_{ \theset{x :f(x) > \alpha} } f(x)^2 ~dx=0
.\end{align*}
\hypertarget{question-77-1}{%
\subsection{Question 77}\label{question-77-1}}
Prove or disprove each of the following statements.
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
If \(f : [0, 1] \to \mathbb{R}\) is a measurable function, then given
any \(\varepsilon > 0\), there exists a compact set
\(K \subset [0, 1]\) such that \(f\) is continuous on \(K\) relative
to \(K\).
\item
If \(f\) is Borel measurable on \(\mathbb{R} \times \mathbb{R}\), then for
any \(x \in \mathbb{R}\), the function \(g(y) = f(x, y)\) is also
Borel measurable on \(\mathbb{R}\).
\item
If \(E \subset \mathbb{R}\), then \(E\) is measurable if and only if
given any \(\varepsilon > 0\), there exist a closed set \(F\) and an
open set \(G\) such that \(F \subset E \subset G\) and the measure of
\(G-F\) is less than \(\varepsilon\).
\end{enumerate}
\hypertarget{question-78-1}{%
\subsection{Question 78}\label{question-78-1}}
Prove or disprove each of the following statements.
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\setcounter{enumi}{1}
\item
If \(\{f_k\}\) is a sequence of measurable functions that converges
uniformly to \(f\) on \(\mathbb{R}\), then
\(\int{f}=\lim_{k\to \infty} \int f_k\).
\item
If \(\{f_k\}\) is a sequence of functions in \(L_p[0,\infty)\) that
converges to a function \(f \in L_p [0,\infty)\), then \(\{f_k\}\) has
a subsequence that converges to \(f\) almost everywhere.
\end{enumerate}
\hypertarget{question-79-1}{%
\subsection{Question 79}\label{question-79-1}}
Prove or disprove each of the following statements.
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\setcounter{enumi}{5}
\item
If \(f\) is Riemann integrable on \([\eps, 1]\) for all
\(0 < \eps < 1\), then \(f\) is Lebesgue integrable on \([0,1]\) if
\(f\) is nonnegative and the following limit exists
\(\lim_{\varepsilon\to 0^+} \int_\varepsilon^1 f dx\).
\item
If \(f\) is integrable on \([0,1]\), then
\(\lim_{n\to\infty} \int_0^1 f(x)\sin(n\pi x)dx = 0\).
\item
If \(f\) is continuous on \([0, 1]\), then it is of bounded variation
on \([0, 1]\).
\end{enumerate}
\hypertarget{question-80-1}{%
\subsection{Question 80}\label{question-80-1}}
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Let \(f : \mathbb{R} \to \mathbb{R}\) be a differentiable function. If
\(f'(-1) < 2\) and \(f'(1) > 2\), show that there exists
\(x_0 \in (-1, 1)\) such that \(f'(x_0) = 2\).
\begin{quote}
Hint: consider the function \(f(x) - 2x\) and recall the proof of
Rolle's theorem.
\end{quote}
\item
Let \(f : (-1, 1) \to \mathbb{R}\) be a differentiable function on
\((-1, 0) \union (0, 1)\) such that \(\lim_{x\to 0} f'(x) = L\). If
\(f\) is continuous on \((-1, 1)\), show that \(f\) is indeed
differentiable at \(0\) and \(f'(0) = L\).
\end{enumerate}
\hypertarget{question-81-1}{%
\subsection{Question 81}\label{question-81-1}}
Describe the process that extends a measure on an algebra
\(\mathcal{A}\) of subsets of \(X\), to a complete measure defined on a
\(\sigma\)-algebra \(\mathcal{B}\) containing \(\mathcal{A}\). State the
corresponding definitions and results (without proofs).
\hypertarget{question-82-1}{%
\subsection{Question 82}\label{question-82-1}}
State and prove Fatou's Lemma on a general measurable space.
\hypertarget{question-83-1}{%
\subsection{Question 83}\label{question-83-1}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
State the Dominated Convergence Theorem for Lebesgue integrals.
\item
Let \(\{f_n\}\) be a sequence of measurable functions on a Lebesgue
measurable set \(E\) which converges \emph{in measure} to a function
\(f\) on \(E\). Suppose that for every \(n\), \(|f_n| \leq g\) with
\(g\) integrable on \(E\). Using the above theorem show that
\begin{align*}
\int_E |f_n-f| \longrightarrow 0 \, .
\end{align*}
\end{enumerate}
\hypertarget{question-84-1}{%
\subsection{Question 84}\label{question-84-1}}
Let \(f\in L^1([0,1])\). Show that
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
The limit \(\lim_{p\to 0^+} \| f \|_p\) exists.
\item
If \(m \{x : f(x) = 0\} > 0\), then the above limit is zero.
\end{enumerate}
\hypertarget{question-85-1}{%
\subsection{Question 85}\label{question-85-1}}
Let \(f\) be a continuous function on \([0,1]\). Show that the following
statements are equivalent.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
\(f\) is absolutely continuous.
\item
For any \(\epsilon > 0\) there exists \(\delta > 0\) such that
\(m(f(E)) < \epsilon\) for any set \(E\subseteq [0,1]\) with
\(m(E) < \delta\).
\item
\(m(f(E)) = 0\) for any set \(E \subseteq [0,1]\) with \(m(E)=0\).
\end{enumerate}
\hypertarget{complex-analysis-125-questions}{%
\section{Complex Analysis (125
Questions)}\label{complex-analysis-125-questions}}
\hypertarget{question-1-2}{%
\subsection{Question 1}\label{question-1-2}}
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
Assume \(\displaystyle f(z) = \sum_{n=0}^\infty c_n z^n\) converges in
\(|z| < R\). Show that for \(r <R\),
\[\frac{1}{2 \pi} \int_0^{2 \pi} |f(r e^{i \theta})|^2 d \theta =
\sum_{n=0}^\infty |c_n|^2 r^{2n} \; .\]
\item
Deduce Liouville's theorem from (1).
\end{enumerate}
\hypertarget{question-2-2}{%
\subsection{Question 2}\label{question-2-2}}
Let \(f\) be a continuous function in the region
\[D=\{z \suchthat \abs{z}>R, 0\leq \arg z\leq \theta\}\quad\text{where}\quad
0\leq \theta \leq 2\pi.\] Suppose there exists \(k\) such that
\(\displaystyle{\lim_{z\to\infty} zf(z)=k}\) for \(z\) in the region
\(D\). Show that \[\lim_{R'\to\infty} \int_{L} f(z) dz=i\theta k,\]
where \(L\) is the part of the circle \(|z|=R'\) which lies in the
region \(D\).
\hypertarget{question-3-2}{%
\subsection{Question 3}\label{question-3-2}}
Suppose that \(f\) is an analytic function in the region \(D\) which
contains the point \(a\). Let
\[F(z)= z-a-qf(z),\quad \text{where}~ q \ \text{is a complex
parameter}.\]
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
Let \(K\subset D\) be a circle with the center at point \(a\) and also
we assume that \(f(z)\not =0\) for \(z\in K\). Prove that the function
\(F\) has one and only one zero \(z=w\) on the closed disc \(\bar{K}\)
whose boundary is the circle \(K\) if
\(\displaystyle{ |q|<\min_{z\in K} \frac{|z-a|}{|f(z)|}.}\)\\
\item
Let \(G(z)\) be an analytic function on the disk \(\bar{K}\). Apply
the residue theorem to prove that
\(\displaystyle{ \frac{G(w)}{F'(w)}=\frac{1}{2\pi i}\int_K \frac{G(z)}{F(z)} dz,}\)
where \(w\) is the zero from (1).\\
\item
If \(z\in K\), prove that the function
\(\displaystyle{\frac{1}{F(z)}}\) can be represented as a convergent
series with respect to \(q\):
\(\displaystyle{ \frac{1}{F(z)}=\sum_{n=0}^{\infty} \frac{(qf(z))^n}{(z-a)^{n+1}}.}\)
\end{enumerate}
\hypertarget{question-4-2}{%
\subsection{Question 4}\label{question-4-2}}
Evaluate \[\displaystyle{ \int_{0}^{\infty}\frac{x\sin x}{x^2+a^2} \,
dx }.\]
\hypertarget{question-5-2}{%
\subsection{Question 5}\label{question-5-2}}
Let \(f=u+iv\) be differentiable (i.e.~\(f'(z)\) exists) with continuous
partial derivatives at a point \(z=re^{i\theta}\), \(r\not= 0\). Show
that
\[\frac{\partial u}{\partial r}=\frac{1}{r}\frac{\partial v}{\partial \theta},\quad
\frac{\partial v}{\partial r}=-\frac{1}{r}\frac{\partial u}{\partial \theta}.\]
\hypertarget{question-6-2}{%
\subsection{Question 6}\label{question-6-2}}
Show that
\(\displaystyle \int_0^\infty \frac{x^{a-1}}{1+x^n} dx=\frac{\pi}{n\sin \frac{a\pi}{n}}\)
using complex analysis, \(0< a < n\). Here \(n\) is a positive integer.
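As a quick consistency check of the stated identity (not part of the
required argument), take \(n=2\) and \(a=1\):
\begin{align*}
\int_0^\infty \frac{dx}{1+x^2}=\frac\pi2
\qquad\text{and}\qquad
\frac{\pi}{n\sin\frac{a\pi}{n}}=\frac{\pi}{2\sin\frac\pi2}=\frac\pi2.
\end{align*}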
\hypertarget{question-7-2}{%
\subsection{Question 7}\label{question-7-2}}
For \(s>0\), the \textbf{gamma function} is defined by
\(\displaystyle{\Gamma(s)=\int_0^{\infty} e^{-t}t^{s-1} dt}\).
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Show that the gamma function is analytic in the half-plane
\(\Re (s)>0\), and is still given there by the integral formula above.
\item
Apply the formula in the previous question to show that
\[\Gamma(s)\Gamma(1-s)=\frac{\pi}{\sin \pi s}.\]
\end{enumerate}
\begin{quote}
Hint: You may need
\(\displaystyle{\Gamma(1-s)=t \int_0^{\infty}e^{-vt}(vt)^{-s} dv}\) for
\(t>0\).
\end{quote}
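As a quick consistency check of the reflection formula in part 2 (not a
proof), take \(s=\tfrac12\) and use \(\Gamma(\tfrac12)=\sqrt\pi\):
\begin{align*}
\Gamma\!\left(\tfrac12\right)\Gamma\!\left(1-\tfrac12\right)
=\left(\sqrt\pi\right)^2=\pi
=\frac{\pi}{\sin(\pi/2)}.
\end{align*}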
\hypertarget{question-8-2}{%
\subsection{Question 8}\label{question-8-2}}
Apply Rouché's Theorem to prove the Fundamental Theorem of Algebra: If
\[P_n(z) = a_0 + a_1z + \cdots + a_{n-1}z^{n-1} + a_nz^n\quad (a_n \neq 0)\]
is a polynomial of degree \(n\), then it has \(n\) zeros in \(\mathbb C\).
\hypertarget{question-9-2}{%
\subsection{Question 9}\label{question-9-2}}
Suppose \(f\) is entire and there exist \(A, R >0\) and natural number
\(N\) such that \[|f(z)| \geq A |z|^N\ \text{for}\ |z| \geq R.\] Show
that
\begin{enumerate}
\def\labelenumi{(\roman{enumi})}
\item
\(f\) is a polynomial and
\item
the degree of \(f\) is at least \(N\).
\end{enumerate}
\hypertarget{question-10-2}{%
\subsection{Question 10}\label{question-10-2}}
Let \(f: {\mathbb C} \rightarrow {\mathbb C}\) be an injective analytic
(also called \emph{univalent}) function. Show that there exist complex
numbers \(a \neq 0\) and \(b\) such that \(f(z) = az + b\).
\hypertarget{question-11-2}{%
\subsection{Question 11}\label{question-11-2}}
Let \(g\) be analytic for \(|z|\leq 1\) and \(|g(z)| < 1\) for
\(|z| = 1\).
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Show that \(g\) has a unique fixed point in \(|z| < 1\).
\item
What happens if we replace \(|g(z)| < 1\) with \(|g(z)|\leq 1\) for
\(|z|=1\)? Give an example if (a) is not true or give a proof if (a)
is still true.
\item
What happens if we simply assume that \(f\) is analytic for
\(|z| < 1\) and \(|f(z)| < 1\) for \(|z| < 1\)? Suppose that
\(f(z) \not\equiv z\). Can \(f\) have more than one fixed point in
\(|z| < 1\)?
\end{enumerate}
\begin{quote}
Hint: The map
\(\displaystyle{\psi_{\alpha}(z)=\frac{\alpha-z}{1-\bar{\alpha}z}}\) may
be useful.
\end{quote}
\hypertarget{question-12-2}{%
\subsection{Question 12}\label{question-12-2}}
Find a conformal map from \(D = \{z :\ |z| < 1,\ |z - 1/2| > 1/2\}\) to
the unit disk \(\Delta=\{z: \ |z|<1\}\).
\hypertarget{question-13-2}{%
\subsection{Question 13}\label{question-13-2}}
Let \(f(z)\) be entire and assume values of \(f(z)\) lie outside a
\emph{bounded} open set \(\Omega\). Show without using Picard's theorems
that \(f(z)\) is a constant.
\hypertarget{question-14-2}{%
\subsection{Question 14}\label{question-14-2}}
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
Assume \(\displaystyle f(z) = \sum_{n=0}^\infty c_n z^n\) converges in
\(|z| < R\). Show that for \(r <R\),
\[\frac{1}{2 \pi} \int_0^{2 \pi} |f(r e^{i \theta})|^2 d \theta
= \sum_{n=0}^\infty |c_n|^2 r^{2n} \; .\]
\item
Deduce Liouville's theorem from (1).
\end{enumerate}
\hypertarget{question-15-2}{%
\subsection{Question 15}\label{question-15-2}}
Let \(f(z)\) be entire and assume that \(|f(z)| \leq M |z|^2\) outside
some disk for some constant \(M\). Show that \(f(z)\) is a polynomial in
\(z\) of degree \(\leq 2\).
\hypertarget{question-16-2}{%
\subsection{Question 16}\label{question-16-2}}
Let \(a_n(z)\) be a sequence of analytic functions in a domain \(D\) such that
\(\displaystyle \sum_{n=0}^\infty |a_n(z)|\) converges uniformly on
bounded and closed sub-regions of \(D\). Show that
\(\displaystyle \sum_{n=0}^\infty |a'_n(z)|\) converges uniformly on
bounded and closed sub-regions of \(D\).
\hypertarget{question-17-2}{%
\subsection{Question 17}\label{question-17-2}}
Let \(f(z)\) be analytic in an open set \(\Omega\) except possibly at a
point \(z_0\) inside \(\Omega\). Show that if \(f(z)\) is bounded
near \(z_0\), then \(\displaystyle \int_\Delta f(z) dz = 0\) for all
triangles \(\Delta\) in \(\Omega\).
\hypertarget{question-18-2}{%
\subsection{Question 18}\label{question-18-2}}
Assume \(f\) is continuous in the region:
\(0< |z-a| \leq R, \; 0 \leq \arg(z-a) \leq \beta_0\)
(\(0 < \beta_0 \leq 2 \pi\)) and the limit
\(\displaystyle \lim_{z \rightarrow a} (z-a) f(z) = A\) exists. Show
that
\[\lim_{r \rightarrow 0} \int_{\gamma_r} f(z) dz = i A \beta_0 \; , \; \;\]
where
\[\gamma_r : = \{ z \; | \; z = a + r e^{it}, \; 0 \leq t \leq \beta_0 \}.\]
\hypertarget{question-19-2}{%
\subsection{Question 19}\label{question-19-2}}
Show that \(f(z) = z^2\) is uniformly continuous in any open disk
\(|z| < R\), where \(R>0\) is fixed, but it is not uniformly continuous
on \(\mathbb C\).
\hypertarget{question-20-2}{%
\subsection{Question 20}\label{question-20-2}}
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
Show that the function \(u=u(x,y)\) given by
\[u(x,y)=\frac{e^{ny}-e^{-ny}}{2n^2}\sin nx\quad \text{for}\ n\in {\mathbf N}\]
is the solution on \(D=\{(x,y)\ | x^2+y^2<1\}\) of the Cauchy problem
for the Laplace equation
\[\frac{\partial ^2u}{\partial x^2}+\frac{\partial ^2u}{\partial y^2}=0,\quad
u(x,0)=0,\quad \frac{\partial u}{\partial y}(x,0)=\frac{\sin nx}{n}.\]
\item
Show that there exist points \((x,y)\in D\) such that
\(\displaystyle{\limsup_{n\to\infty} |u(x,y)|=\infty}\).
\end{enumerate}
\hypertarget{question-21-2}{%
\subsection{Question 21}\label{question-21-2}}
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
Assume \(\displaystyle f(z) = \sum_{n=0}^\infty c_n z^n\) converges in
\(|z| < R\). Show that for \(r <R\),
\[\frac{1}{2 \pi} \int_0^{2 \pi} |f(r e^{i \theta})|^2 d \theta =
\sum_{n=0}^\infty |c_n|^2 r^{2n} \; .\]
\item
Deduce Liouville's theorem from (1).
\end{enumerate}
\hypertarget{question-22-2}{%
\subsection{Question 22}\label{question-22-2}}
Let \(f\) be a continuous function in the region
\[D=\{z\ |\ |z|>R,\ 0\leq \arg z\leq \theta\}\quad\text{where}\quad
0\leq \theta \leq 2\pi.\] Suppose there exists \(k\) such that
\(\displaystyle{\lim_{z\to\infty} zf(z)=k}\) for \(z\) in the region
\(D\). Show that \[\lim_{R'\to\infty} \int_{L} f(z) dz=i\theta k,\]
where \(L\) is the part of the circle \(|z|=R'\) which lies in the
region \(D\).
\hypertarget{question-23-2}{%
\subsection{Question 23}\label{question-23-2}}
Evaluate
\(\displaystyle{ \int_{0}^{\infty}\frac{x\sin x}{x^2+a^2} \,dx }\).
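\begin{quote}
Possible approach (a sketch, assuming \(a>0\)): integrate
\(\dfrac{z e^{iz}}{z^2+a^2}\) over the real axis closed by a large upper
semicircle. The only enclosed pole is \(z=ia\), with residue
\(\dfrac{e^{-a}}{2}\), and the semicircle contribution vanishes by
Jordan's lemma, so
\[\int_{-\infty}^{\infty} \frac{x e^{ix}}{x^2+a^2}\, dx = \pi i e^{-a},
\qquad\text{hence}\qquad
\int_{0}^{\infty}\frac{x\sin x}{x^2+a^2}\, dx = \frac{\pi}{2}\, e^{-a},\]
using that \(\dfrac{x\sin x}{x^2+a^2}\) is an even function.
\end{quote}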
\hypertarget{question-24-2}{%
\subsection{Question 24}\label{question-24-2}}
Let \(f=u+iv\) be differentiable (i.e.~\(f'(z)\) exists) with continuous
partial derivatives at a point \(z=re^{i\theta}\), \(r\not= 0\). Show
that
\[\frac{\partial u}{\partial r}=\frac{1}{r}\frac{\partial v}{\partial \theta},\quad
\frac{\partial v}{\partial r}=-\frac{1}{r}\frac{\partial u}{\partial \theta}.\]
\hypertarget{question-25-2}{%
\subsection{Question 25}\label{question-25-2}}
Show that
\(\displaystyle \int_0^\infty \frac{x^{a-1}}{1+x^n} dx=\frac{\pi}{n\sin \frac{a\pi}{n}}\)
using complex analysis, \(0< a < n\). Here \(n\) is a positive integer.
\hypertarget{question-26-2}{%
\subsection{Question 26}\label{question-26-2}}
For \(s>0\), the \textbf{gamma function} is defined by
\(\displaystyle{\Gamma(s)=\int_0^{\infty} e^{-t}t^{s-1} dt}\).
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Show that the gamma function is analytic in the half-plane
\(\Re (s)>0\), and is still given there by the integral formula above.
\item
Apply the formula in the previous question to show that
\[\Gamma(s)\Gamma(1-s)=\frac{\pi}{\sin \pi s}.\]
\end{enumerate}
\begin{quote}
Hint: You may need
\(\displaystyle{\Gamma(1-s)=t \int_0^{\infty}e^{-vt}(vt)^{-s} dv}\) for
\(t>0\).
\end{quote}
\hypertarget{question-27-2}{%
\subsection{Question 27}\label{question-27-2}}
Suppose \(f\) is entire and there exist \(A, R >0\) and natural number
\(N\) such that \[|f(z)| \geq A |z|^N\ \text{for}\ |z| \geq R.\] Show
that
\begin{enumerate}
\def\labelenumi{(\roman{enumi})}
\item
\(f\) is a polynomial and
\item
the degree of \(f\) is at least \(N\).
\end{enumerate}
\hypertarget{question-28-2}{%
\subsection{Question 28}\label{question-28-2}}
Let \(f: {\mathbb C} \rightarrow {\mathbb C}\) be an injective analytic
(also called univalent) function. Show that there exist complex numbers
\(a \neq 0\) and \(b\) such that \(f(z) = az + b\).
\hypertarget{question-29-2}{%
\subsection{Question 29}\label{question-29-2}}
Let \(g\) be analytic for \(|z|\leq 1\) and \(|g(z)| < 1\) for
\(|z| = 1\).
\begin{itemize}
\item
Show that \(g\) has a unique fixed point in \(|z| < 1\).
\item
What happens if we replace \(|g(z)| < 1\) with \(|g(z)|\leq 1\) for
\(|z|=1\)? Give an example if (a) is not true or give a proof if (a)
is still true.
\item
What happens if we simply assume that \(f\) is analytic for
\(|z| < 1\) and \(|f(z)| < 1\) for \(|z| < 1\)? Suppose that
\(f(z) \not\equiv z\). Can \(f\) have more than one fixed point in
\(|z| < 1\)?
\end{itemize}
\begin{quote}
Hint: The map
\(\displaystyle{\psi_{\alpha}(z)=\frac{\alpha-z}{1-\bar{\alpha}z}}\) may
be useful.
\end{quote}
\hypertarget{question-30-2}{%
\subsection{Question 30}\label{question-30-2}}
Find a conformal map from \(D = \{z :\ |z| < 1,\ |z - 1/2| > 1/2\}\) to
the unit disk \(\Delta=\{z: \ |z|<1\}\).
\hypertarget{question-31-2}{%
\subsection{Question 31}\label{question-31-2}}
Let \(f(z)\) be entire and assume values of \(f(z)\) lie outside a
\emph{bounded} open set \(\Omega\). Show without using Picard's theorems
that \(f(z)\) is a constant.
\hypertarget{question-32-2}{%
\subsection{Question 32}\label{question-32-2}}
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
Assume \(\displaystyle f(z) = \sum_{n=0}^\infty c_n z^n\) converges in
\(|z| < R\). Show that for \(r <R\),
\[\frac{1}{2 \pi} \int_0^{2 \pi} |f(r e^{i \theta})|^2 d \theta
= \sum_{n=0}^\infty |c_n|^2 r^{2n} \; .\]
\item
Deduce Liouville's theorem from (1).
\end{enumerate}
\hypertarget{question-33-2}{%
\subsection{Question 33}\label{question-33-2}}
Let \(f(z)\) be entire and assume that \(|f(z)| \leq M |z|^2\) outside
some disk for some constant \(M\). Show that \(f(z)\) is a polynomial in
\(z\) of degree \(\leq 2\).
\hypertarget{question-34-2}{%
\subsection{Question 34}\label{question-34-2}}
Let \(a_n(z)\) be a sequence of analytic functions in a domain \(D\) such that
\(\displaystyle \sum_{n=0}^\infty |a_n(z)|\) converges uniformly on
bounded and closed sub-regions of \(D\). Show that
\(\displaystyle \sum_{n=0}^\infty |a'_n(z)|\) converges uniformly on
bounded and closed sub-regions of \(D\).
\hypertarget{question-35-2}{%
\subsection{Question 35}\label{question-35-2}}
Let \(f(z)\) be analytic in an open set \(\Omega\) except possibly at a
point \(z_0\) inside \(\Omega\). Show that if \(f(z)\) is bounded
near \(z_0\), then \(\displaystyle \int_\Delta f(z) dz = 0\) for all
triangles \(\Delta\) in \(\Omega\).
\hypertarget{question-36-2}{%
\subsection{Question 36}\label{question-36-2}}
Assume \(f\) is continuous in the region:
\(0< |z-a| \leq R, \; 0 \leq \arg(z-a) \leq \beta_0\)
(\(0 < \beta_0 \leq 2 \pi\)) and the limit
\(\displaystyle \lim_{z \rightarrow a} (z-a) f(z) = A\) exists. Show
that
\[\lim_{r \rightarrow 0} \int_{\gamma_r} f(z) dz = i A \beta_0 \; , \; \;\]
where
\[\gamma_r : = \{ z \; | \; z = a + r e^{it}, \; 0 \leq t \leq \beta_0 \}.\]
\hypertarget{question-37-2}{%
\subsection{Question 37}\label{question-37-2}}
Show that \(f(z) = z^2\) is uniformly continuous in any open disk
\(|z| < R\), where \(R>0\) is fixed, but it is not uniformly continuous
on \(\mathbb C\).
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\tightlist
\item
Show that the function \(u=u(x,y)\) given by
\[u(x,y)=\frac{e^{ny}-e^{-ny}}{2n^2}\sin nx\quad \text{for}\ n\in {\mathbf N}\]
is the solution on \(D=\{(x,y)\ | x^2+y^2<1\}\) of the Cauchy problem
for the Laplace equation
\[\frac{\partial ^2u}{\partial x^2}+\frac{\partial ^2u}{\partial y^2}=0,\quad
u(x,0)=0,\quad \frac{\partial u}{\partial y}(x,0)=\frac{\sin nx}{n}.\]
\end{enumerate}
\hypertarget{question-38-2}{%
\subsection{Question 38}\label{question-38-2}}
This question provides some insight into Cauchy's theorem. Solve the
problem without using Cauchy's theorem.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Evaluate the integral \(\displaystyle{\int_{\gamma} z^n dz}\) for all
integers \(n\). Here \(\gamma\) is any circle centered at the origin
with the positive (counterclockwise) orientation.
\item
Same question as (a), but with \(\gamma\) any circle not containing
the origin.
\item
Show that if \(|a|<r<|b|\), then
\(\displaystyle{\int_{\gamma}\frac{dz}{(z-a)(z-b)}=\frac{2\pi i}{a-b}}\).
Here \(\gamma\) denotes the circle centered at the origin, of radius
\(r\), with the positive orientation.
\end{enumerate}
\hypertarget{question-39-2}{%
\subsection{Question 39}\label{question-39-2}}
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
Assume the infinite series \(\displaystyle \sum_{n=0}^\infty c_n z^n\)
converges in \(|z| < R\) and let \(f(z)\) be the limit. Show that for
\(r <R\),
\[\frac{1}{2 \pi} \int_0^{2 \pi} |f(r e^{i \theta})|^2 d \theta =
\sum_{n=0}^\infty |c_n|^2 r^{2n} \; .\]
\item
Deduce Liouville's theorem from (1).
\end{enumerate}
\begin{quote}
Liouville's theorem: If \(f(z)\) is entire and bounded, then \(f\) is
constant.
\end{quote}
\hypertarget{question-40-2}{%
\subsection{Question 40}\label{question-40-2}}
Let \(f\) be a continuous function in the region
\[D=\{z\ |\ |z|>R,\ 0\leq \arg z\leq \theta\}\quad\text{where}\quad
0\leq \theta \leq 2\pi.\] Suppose there exists \(k\) such that
\(\displaystyle{\lim_{z\to\infty} zf(z)=k}\) for \(z\) in the region
\(D\). Show that \[\lim_{R'\to\infty} \int_{L} f(z) dz=i\theta k,\]
where \(L\) is the part of the circle \(|z|=R'\) which lies in the
region \(D\).
\hypertarget{question-41-2}{%
\subsection{Question 41}\label{question-41-2}}
Evaluate
\(\displaystyle{ \int_{0}^{\infty}\frac{x\sin x}{x^2+a^2} \, dx }\).
\hypertarget{question-42-2}{%
\subsection{Question 42}\label{question-42-2}}
Let \(f=u+iv\) be differentiable (i.e.~\(f'(z)\) exists) with continuous
partial derivatives at a point \(z=re^{i\theta}\), \(r\not= 0\). Show
that
\[\frac{\partial u}{\partial r}=\frac{1}{r}\frac{\partial v}{\partial \theta},\quad
\frac{\partial v}{\partial r}=-\frac{1}{r}\frac{\partial u}{\partial \theta}.\]
\hypertarget{question-43-2}{%
\subsection{Question 43}\label{question-43-2}}
Show that
\(\displaystyle \int_0^\infty \frac{x^{a-1}}{1+x^n} dx=\frac{\pi}{n\sin \frac{a\pi}{n}}\)
using complex analysis, \(0< a < n\). Here \(n\) is a positive integer.
\hypertarget{question-44-2}{%
\subsection{Question 44}\label{question-44-2}}
For \(s>0\), the \textbf{gamma function} is defined by
\(\displaystyle{\Gamma(s)=\int_0^{\infty} e^{-t}t^{s-1} dt}\).
\begin{itemize}
\item
Show that the gamma function is analytic in the half-plane
\(\Re (s)>0\), and is still given there by the integral formula above.
\item
Apply the formula in the previous question to show that
\[\Gamma(s)\Gamma(1-s)=\frac{\pi}{\sin \pi s}.\]
\end{itemize}
\begin{quote}
Hint: You may need
\(\displaystyle{\Gamma(1-s)=t \int_0^{\infty}e^{-vt}(vt)^{-s} dv}\) for
\(t>0\).
\end{quote}
\hypertarget{question-45-2}{%
\subsection{Question 45}\label{question-45-2}}
Suppose \(f\) is entire and there exist \(A, R >0\) and natural number
\(N\) such that \[|f(z)| \geq A |z|^N\ \text{for}\ |z| \geq R.\] Show
that
\begin{enumerate}
\def\labelenumi{(\roman{enumi})}
\item
\(f\) is a polynomial and
\item
the degree of \(f\) is at least \(N\).
\end{enumerate}
\hypertarget{question-46-2}{%
\subsection{Question 46}\label{question-46-2}}
Let \(f: {\mathbb C} \rightarrow {\mathbb C}\) be an injective analytic
(also called univalent) function. Show that there exist complex numbers
\(a \neq 0\) and \(b\) such that \(f(z) = az + b\).
\hypertarget{question-47-2}{%
\subsection{Question 47}\label{question-47-2}}
Let \(g\) be analytic for \(|z|\leq 1\) and \(|g(z)| < 1\) for
\(|z| = 1\).
\begin{itemize}
\item
Show that \(g\) has a unique fixed point in \(|z| < 1\).
\item
What happens if we replace \(|g(z)| < 1\) with \(|g(z)|\leq 1\) for
\(|z|=1\)? Give an example if (a) is not true or give a proof if (a)
is still true.
\item
What happens if we simply assume that \(f\) is analytic for
\(|z| < 1\) and \(|f(z)| < 1\) for \(|z| < 1\)? Suppose that
\(f(z) \not\equiv z\). Can \(f\) have more than one fixed point in
\(|z| < 1\)?
\end{itemize}
\begin{quote}
Hint: The map
\(\displaystyle{\psi_{\alpha}(z)=\frac{\alpha-z}{1-\bar{\alpha}z}}\) may
be useful.
\end{quote}
\hypertarget{question-48-2}{%
\subsection{Question 48}\label{question-48-2}}
Find a conformal map from \(D = \{z :\ |z| < 1,\ |z - 1/2| > 1/2\}\) to
the unit disk \(\Delta=\{z: \ |z|<1\}\).
\hypertarget{question-49-2}{%
\subsection{Question 49}\label{question-49-2}}
Let \(a_n \neq 0\) and assume that
\(\displaystyle \lim_{n \rightarrow \infty} \frac{|a_{n+1}|}{|a_n|} = L\).
Show that
\(\displaystyle \lim_{n \rightarrow \infty} \sqrt[n]{|a_n|} = L.\)
In particular, this shows that when applicable, the ratio test can be
used to calculate the radius of convergence of a power series.
\hypertarget{question-50-2}{%
\subsection{Question 50}\label{question-50-2}}
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Let \(z, w\) be complex numbers, such that \(\bar{z} w \neq 1\). Prove
that
\[\abs{\frac{w - z}{1 - \bar{w} z}} < 1 \; \; \; \mbox{if} \; |z| < 1 \; \mbox{and}\; |w| < 1,\]
and also that
\[\abs{\frac{w - z}{1 - \bar{w} z}} = 1 \; \; \; \mbox{if} \; |z| = 1 \; \mbox{or}\; |w| = 1.\]
\item
  Prove that for fixed \(w\) in the unit disk \(\mathbb D\), the mapping
  \[F: z \mapsto \frac{w - z}{1 - \bar{w} z}\] satisfies the following
  conditions:
\end{enumerate}
\begin{enumerate}
\def\labelenumi{(\roman{enumi})}
\item
  \(F\) maps \(\mathbb D\) to itself and is holomorphic.
\item
  \(F\) interchanges \(0\) and \(w\), namely, \(F(0) = w\) and
  \(F(w) = 0\).
\item
  \(|F(z)| = 1\) if \(|z| = 1\).
\item
  \(F: {\mathbb D} \mapsto {\mathbb D}\) is bijective.
\end{enumerate}
\begin{quote}
Hint: Calculate \(F \circ F\).
\end{quote}
\hypertarget{question-51-2}{%
\subsection{Question 51}\label{question-51-2}}
Use \(n\)-th roots of unity (i.e.~solutions of \(z^n - 1 =0\)) to show
that
\[2^{n-1} \sin\frac{\pi}{n} \sin\frac{2\pi}{n} \cdots \sin\frac{(n-1)\pi}{n}
= n
\; .\]
\begin{quote}
Hint:
\(1 - \cos 2 \theta = 2 \sin^2 \theta,\; \sin 2 \theta = 2 \sin \theta \cos \theta\).
\end{quote}
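\begin{quote}
One way to organize the computation (a sketch): with
\(\omega = e^{2\pi i/n}\), the factorization
\(z^n - 1 = \prod_{k=0}^{n-1}(z - \omega^k)\) gives
\[\prod_{k=1}^{n-1}\left(1-\omega^k\right)
= \lim_{z\to 1}\frac{z^n-1}{z-1} = n,\]
and \(|1-\omega^k| = 2\sin \frac{k\pi}{n}\) for \(1 \leq k \leq n-1\), so
taking absolute values yields the stated identity.
\end{quote}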
\hypertarget{question-52-2}{%
\subsection{Question 52}\label{question-52-2}}
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\tightlist
\item
Show that in polar coordinates, the Cauchy-Riemann equations take the
form
\end{enumerate}
\[\frac{\partial u}{\partial r} = \frac{1}{r} \frac{\partial v}{\partial \theta}
\; \; \; \text{and} \; \; \;
\frac{\partial v}{\partial r} = - \frac{1}{r} \frac{\partial u}{\partial \theta}\]
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\setcounter{enumi}{1}
\tightlist
\item
Use these equations to show that the logarithm function defined by
\[\log z = \log r + i \theta \; \;
\mbox{where} \; z = r e^{i \theta } \; \mbox{with} \; - \pi < \theta < \pi\]
is a holomorphic function in the region
\(r>0, \; - \pi < \theta < \pi\). Also show that \(\log z\) defined
above is not continuous in \(r>0\).
\end{enumerate}
\hypertarget{question-53-2}{%
\subsection{Question 53}\label{question-53-2}}
Assume \(f\) is continuous in the region:
\(x \geq x_0, \; 0 \leq y \leq b\) and the limit
\[\displaystyle \lim_{x \rightarrow + \infty} f(x + iy) = A\] exists
uniformly with respect to \(y\) (independent of \(y\)).
Show that
\[\lim_{x \rightarrow + \infty} \int_{\gamma_x} f(z) dz = iA b \; , \; \;\]
where \(\gamma_x : = \{ z \; | \; z = x + it, \; 0 \leq t \leq b\}.\)
\hypertarget{question-54-2}{%
\subsection{Question 54}\label{question-54-2}}
(Cauchy's formula for ``exterior'' region) Let \(\gamma\) be piecewise
smooth simple closed curve with interior \(\Omega_1\) and exterior
\(\Omega_2\). Assume \(f'(z)\) exists in an open set containing
\(\gamma\) and \(\Omega_2\) and
\(\lim_{z \rightarrow \infty } f(z) = A\). Show that
\[\frac{1}{2 \pi i} \int_\gamma \frac{f(\xi)}{\xi - z} \, d \xi =
\begin{cases}
A, & \text{if\ $z \in \Omega_1$}, \\
-f (z) + A, & \text{if\ $z \in \Omega_2$}
\end{cases}\]
\hypertarget{question-55-2}{%
\subsection{Question 55}\label{question-55-2}}
Let \(f(z)\) be bounded and analytic in \(\mathbb C\). Let \(a \neq b\)
be any fixed complex numbers. Show that the following limit exists
\[\lim_{R \rightarrow \infty} \int_{|z|=R} \frac{f(z)}{(z-a)(z-b)} dz.\]
Use this to show that \(f(z)\) must be a constant (Liouville's theorem).
\hypertarget{question-56-2}{%
\subsection{Question 56}\label{question-56-2}}
Prove by \emph{justifying all steps} that for all
\(\xi \in {\mathbb C}\) we have
\(\displaystyle e^{- \pi \xi^2} = \int_{- \infty}^\infty e^{- \pi x^2} e^{2 \pi i x \xi} dx \; .\)
\begin{quote}
Hint: You may use the fact in Example 1 on p.~42 of the textbook
without proof, i.e., you may assume the above is true for real values of
\(\xi\).
\end{quote}
\hypertarget{question-57-2}{%
\subsection{Question 57}\label{question-57-2}}
Suppose that \(f\) is holomorphic in an open set containing the closed
unit disc, except for a pole at \(z_0\) on the unit circle. Let
\(f(z) = \sum_{n = 1}^\infty c_n z^n\) denote the power series in the
open disc. Show that
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
\(c_n \neq 0\) for all large enough \(n\)'s, and
\item
\(\displaystyle \lim_{n \rightarrow \infty} \frac{c_n}{c_{n+1}}= z_0\).
\end{enumerate}
\hypertarget{question-58-2}{%
\subsection{Question 58}\label{question-58-2}}
Let \(f(z)\) be a non-constant analytic function in \(|z|>0\) such that
\(f(z_n) = 0\) for infinitely many points \(z_n\) with
\(\lim_{n \rightarrow \infty} z_n =0\). Show that \(z=0\) is an
essential singularity for \(f(z)\). (An example of such a function is
\(f(z) = \sin (1/z)\).)
\hypertarget{question-59-2}{%
\subsection{Question 59}\label{question-59-2}}
Let \(f\) be entire and suppose that
\(\lim_{z \rightarrow \infty} f(z) = \infty\). Show that \(f\) is a
polynomial.
\hypertarget{question-60-2}{%
\subsection{Question 60}\label{question-60-2}}
Expand the following functions into Laurent series in the indicated
regions:
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
\(\displaystyle f(z) = \frac{z^2 - 1}{ (z+2)(z+3)}, \; \; 2 < |z| < 3\),
\(3 < |z| < + \infty\).
\item
\(\displaystyle f(z) = \sin \frac{z}{1-z}, \; \; 0 < |z-1| < + \infty\)
\end{enumerate}
\hypertarget{question-61-2}{%
\subsection{Question 61}\label{question-61-2}}
Assume \(f(z)\) is analytic in a region \(D\) and \(\Gamma\) is a
rectifiable curve in \(D\) with interior in \(D\). Prove that if
\(f(z)\) is real for all \(z \in \Gamma\), then \(f(z)\) is a constant.
\hypertarget{question-62-2}{%
\subsection{Question 62}\label{question-62-2}}
Find the number of roots of \(z^4 - 6z + 3 =0\) in \(|z|<1\) and
\(1 < |z| < 2\) respectively.
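\begin{quote}
Possible Rouché comparisons (a sketch; other choices also work): on
\(|z|=1\), \(|z^4+3| \leq 4 < 6 = |-6z|\), so \(z^4-6z+3\) has exactly
one zero in \(|z|<1\); on \(|z|=2\), \(|-6z+3| \leq 15 < 16 = |z^4|\), so
all four zeros lie in \(|z|<2\). Since both inequalities are strict on
the circles, none of the zeros lie on \(|z|=1\) or \(|z|=2\), leaving
\(4-1=3\) zeros in \(1<|z|<2\).
\end{quote}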
\hypertarget{question-63-2}{%
\subsection{Question 63}\label{question-63-2}}
Prove that \(z^4 + 2 z^3 - 2z + 10 =0\) has exactly one root in each
open quadrant.
\hypertarget{question-64-2}{%
\subsection{Question 64}\label{question-64-2}}
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
Let \(f(z) \in H({\mathbb D})\), \(\text{Re}(f(z)) >0\),
\(f(0)= a>0\). Show that
\[\abs{ \frac{f(z)-a}{f(z)+a}} \leq |z|, \; \; \;
|f'(0)| \leq 2a.\]
\item
Show that the above is still true if \(\text{Re}(f(z)) >0\) is
replaced with \(\text{Re}(f(z)) \geq 0\).
\end{enumerate}
\hypertarget{question-65-2}{%
\subsection{Question 65}\label{question-65-2}}
Assume \(f(z)\) is analytic in \({\mathbb D}\) and \(f(0)=0\) and is not
a rotation (i.e.~\(f(z) \neq e^{i \theta} z\)). Show that
\(\displaystyle \sum_{n=1}^\infty f^{n}(z)\) converges uniformly to an
analytic function on compact subsets of \({\mathbb D}\), where
\(f^{n+1}(z) = f(f^{n}(z))\).
\hypertarget{question-66-2}{%
\subsection{Question 66}\label{question-66-2}}
Let \(f(z) = \sum_{n=0}^\infty c_n z^n\) be analytic and one-to-one in
\(|z| < 1\). For \(0<r<1\), let \(D_r\) be the disk \(|z|<r\). Show that
the area of \(f(D_r)\) is finite and is given by
\[S = \pi \sum_{n=1}^\infty n |c_n|^2 r^{2n}.\] (Note that in general
the area of \(f(D_1)\) is infinite.)
\hypertarget{question-67-2}{%
\subsection{Question 67}\label{question-67-2}}
Let \(f(z) = \sum_{n= -\infty}^\infty c_n z^n\) be analytic and
one-to-one in \(r_0< |z| < R_0\). For \(r_0<r<R<R_0\), let \(D(r,R)\) be
the annulus \(r<|z|<R\). Show that the area of \(f(D(r,R))\) is finite
and is given by
\[S = \pi \sum_{n=- \infty}^\infty n |c_n|^2 (R^{2n} - r^{2n}).\]
\hypertarget{question-68-2}{%
\subsection{Question 68}\label{question-68-2}}
Let \(a_n(z)\) be a sequence of analytic functions in a domain \(D\) such that
\(\displaystyle \sum_{n=0}^\infty |a_n(z)|\) converges uniformly on
bounded and closed sub-regions of \(D\). Show that
\(\displaystyle \sum_{n=0}^\infty |a'_n(z)|\) converges uniformly on
bounded and closed sub-regions of \(D\).
\hypertarget{question-69-2}{%
\subsection{Question 69}\label{question-69-2}}
Let \(f_n, f\) be analytic functions on the unit disk \({\mathbb D}\).
Show that the following are equivalent.
\begin{enumerate}
\def\labelenumi{(\roman{enumi})}
\item
\(f_n(z)\) converges to \(f(z)\) uniformly on compact subsets in
\(\mathbb D\).
\item
\(\int_{|z|= r} |f_n(z) - f(z)| \, |dz|\) converges to \(0\) if
\(0< r<1\).
\end{enumerate}
\hypertarget{question-70-2}{%
\subsection{Question 70}\label{question-70-2}}
Let \(f\) and \(g\) be non-zero analytic functions on a region
\(\Omega\). Assume \(|f(z)| = |g(z)|\) for all \(z\) in \(\Omega\). Show
that \(f(z) = e^{i \theta} g(z)\) in \(\Omega\) for some
\(0 \leq \theta < 2 \pi\).
\hypertarget{question-71-2}{%
\subsection{Question 71}\label{question-71-2}}
Suppose \(f\) is analytic in an open set containing the unit disc
\(\mathbb D\) and \(|f(z)| = 1\) when \(|z| = 1\). Show that either
\(f(z) = e^{i \theta}\) for some \(\theta \in \mathbb R\) or there are
finitely many \(z_k \in \mathbb D\), \(k \leq n\), and
\(\theta \in \mathbb R\) such that
\(\displaystyle f(z) = e^{i\theta} \prod_{k=1}^n \frac{z-z_k}{1 - \bar{z}_k z } \, .\)
\begin{quote}
Also cf.~Stein et al, 1.4.7, 3.8.17
\end{quote}
\hypertarget{question-72-2}{%
\subsection{Question 72}\label{question-72-2}}
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
Let \(p(z)\) be a polynomial, \(R>0\) any positive number, and
\(m \geq 1\) an integer. Let
\[M_R = \sup \{ |z^{m} p(z) - 1|: |z| = R \}.\] Show that \(M_R>1\).
\item
Let \(m \geq 1\) be an integer and
\(K = \{z \in {\mathbb C}: r \leq |z| \leq R \}\) where \(r<R\). Show
(i) using (1), as well as (ii) without using (1), that there exists a
positive number \(\varepsilon_0>0\) such that for each polynomial
\(p(z)\),
\[\sup \{|p(z) - z^{-m}|: z \in K \} \geq \varepsilon_0 \, .\]
\end{enumerate}
\hypertarget{question-73-2}{%
\subsection{Question 73}\label{question-73-2}}
Let \(\displaystyle f(z) = \frac{1}{z} + \frac{1}{z^2 -1}\). Find all
the Laurent series of \(f\) and describe the largest annuli in which
these series are valid.
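\begin{quote}
Possible starting point (a sketch): the partial fraction decomposition
\[f(z) = \frac{1}{z} + \frac{1}{2}\left(\frac{1}{z-1} - \frac{1}{z+1}\right)\]
shows that the singularities are at \(0\) and \(\pm 1\), so the annuli
centered at \(0\) are \(0<|z|<1\) and \(1<|z|<\infty\); in each, expand
\(\frac{1}{z \mp 1}\) as a geometric series in \(z\) or in \(1/z\), as
appropriate.
\end{quote}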
\hypertarget{question-74-2}{%
\subsection{Question 74}\label{question-74-2}}
Suppose \(f\) is entire and there exist \(A, R >0\) and natural number
\(N\) such that \(|f(z)| \leq A |z|^N\) for \(|z| \geq R\). Show that
\begin{enumerate}
\def\labelenumi{(\roman{enumi})}
\item
\(f\) is a polynomial and
\item
the degree of \(f\) is at most \(N\).
\end{enumerate}
\hypertarget{question-75-2}{%
\subsection{Question 75}\label{question-75-2}}
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
Explicitly write down an example of a non-zero analytic function in
\(|z|<1\) which has infinitely many zeros in \(|z|<1\).
\item
Why does the phenomenon in (1) not contradict the uniqueness theorem?
\end{enumerate}
\hypertarget{question-76-2}{%
\subsection{Question 76}\label{question-76-2}}
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
Assume \(u\) is harmonic on an open set \(O\) and \(z_n\) is a sequence
in \(O\) such that \(u(z_n) = 0\) and \(\lim z_n \in O\). Prove or
disprove that \(u\) is identically zero. What if \(O\) is a region?
\item
Assume \(u\) is harmonic on an open set \(O\) and \(u(z) = 0\) on a disc
in \(O\). Prove or disprove that \(u\) is identically zero. What if
\(O\) is a region?
\item
Formulate and prove a Schwarz reflection principle for harmonic
functions.
\end{enumerate}
\begin{quote}
cf.~Theorem 5.6 on p.60 of Stein et al.
\end{quote}
\begin{quote}
Hint: Verify the mean value property for your new function obtained by
Schwarz reflection principle.
\end{quote}
\hypertarget{question-77-2}{%
\subsection{Question 77}\label{question-77-2}}
Let \(f\) be holomorphic in a neighborhood of \(D_r(z_0)\). Show that
for any \(s<r\), there exists a constant \(c>0\) such that
\[\|f\|_{(\infty, s)} \leq c \|f\|_{(1, r)},\] where
\(\displaystyle \|f\|_{(\infty, s)} = \sup_{z \in D_s(z_0)}|f(z)|\)
and \(\displaystyle \|f\|_{(1, r)} = \int_{D_r(z_0)} |f(z)|\,dx\, dy\).
\begin{quote}
Note: Exercise 3.8.20 on p.107 in Stein et al is a straightforward
consequence of this stronger result using the integral form of the
Cauchy-Schwarz inequality in real analysis.
\end{quote}
\hypertarget{question-78-2}{%
\subsection{Question 78}\label{question-78-2}}
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
Let \(f\) be analytic in \(\Omega: 0<|z-a|<r\) except at a sequence of
poles \(a_n \in \Omega\) with \(\lim_{n \rightarrow \infty} a_n = a\).
Show that for any \(w \in \mathbb C\), there exists a sequence
\(z_n \in \Omega\) such that
\(\lim_{n \rightarrow \infty} f(z_n) = w\).
\item
Explain the similarity and difference between the above assertion and
the Weierstrass-Casorati theorem.
\end{enumerate}
\hypertarget{question-79-2}{%
\subsection{Question 79}\label{question-79-2}}
Compute the following integrals.
\begin{enumerate}
\def\labelenumi{(\roman{enumi})}
\item
  \(\displaystyle \int_0^\infty \frac{1}{(1 + x^n)^2} \, dx\),
  \(n \geq 1\)
\item
  \(\displaystyle \int_0^\infty \frac{\cos x}{(x^2 + a^2)^2} \, dx\),
  \(a \in \mathbb R\)
\item
  \(\displaystyle \int_0^\pi \frac{1}{a + \sin \theta} \, d \theta\),
  \(a>1\)
\item
  \(\displaystyle \int_0^{\frac{\pi}{2}} \frac{d \theta}{a+ \sin ^2 \theta}\),
  \(a >0\)
\item
  \(\displaystyle \int_{|z|=2} \frac{1}{(z^{5} -1) (z-3)} \, dz\)
\item
  \(\displaystyle \int_{- \infty}^{\infty} \frac{\sin \pi a}{\cosh \pi x + \cos \pi a} e^{- i x \xi} \, d x\),
  \(0< a <1\), \(\xi \in \mathbb R\)
\item
  \(\displaystyle \int_{|z| = 1} \cot^2 z \, dz\)
\end{enumerate}
\hypertarget{question-80-2}{%
\subsection{Question 80}\label{question-80-2}}
Compute the following integrals.
\begin{enumerate}
\def\labelenumi{(\roman{enumi})}
\item
  \(\displaystyle \int_0^\infty \frac{\sin x}{x} \, dx\)
\item
  \(\displaystyle \int_0^\infty \left(\frac{\sin x}{x}\right)^2 \, dx\)
\item
  \(\displaystyle \int_0^\infty \frac{x^{a-1}}{(1 + x)^2} \, dx\),
  \(0< a < 2\)
\item
  \(\displaystyle \int_0^\infty \frac{\cos a x - \cos bx}{x^2} \, dx\),
  \(a, b >0\)
\item
  \(\displaystyle \int_0^\infty \frac{x^{a-1}}{1 + x^n} \, dx\),
  \(0< a < n\)
\item
  \(\displaystyle \int_0^\infty \frac{\log x}{1 + x^n} \, dx\),
  \(n \geq 2\)
\item
  \(\displaystyle \int_0^\infty \frac{\log x}{(1 + x^2)^2} \, dx\)
\item
  \(\displaystyle \int_0^{\pi} \log|1 - a \sin \theta| \, d \theta\),
  \(a \in \mathbb C\)
\end{enumerate}
\hypertarget{question-81-2}{%
\subsection{Question 81}\label{question-81-2}}
Let \(0<r<1\). Show that polynomials
\(P_n(z) = 1 + 2z + 3 z^2 + \cdots + n z^{n-1}\) have no zeros in
\(|z|<r\) for all sufficiently large \(n\)'s.
\hypertarget{question-82-2}{%
\subsection{Question 82}\label{question-82-2}}
Let \(f\) be an analytic function on a region \(\Omega\). Show that
\(f\) is a constant if there is a simple closed curve \(\gamma\) in
\(\Omega\) such that its image \(f(\gamma)\) is contained in the real
axis.
\hypertarget{question-83-2}{%
\subsection{Question 83}\label{question-83-2}}
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
Show that \(\displaystyle \frac{\pi^2}{\sin^2 \pi z}\) and
\(\displaystyle g(z) = \sum_{n = - \infty}^{ \infty} \frac{1}{(z-n)^2}\)
have the same principal part at each integer point.
\item
Show that \(\displaystyle h(z) = \frac{\pi^2}{\sin^2 \pi z} - g(z)\)
is bounded on \(\mathbb C\) and conclude that
\(\displaystyle \frac{\pi^2}{\sin^2 \pi z} = \sum_{n = - \infty}^{ \infty} \frac{1}{(z-n)^2} \, .\)
\end{enumerate}
\hypertarget{question-84-2}{%
\subsection{Question 84}\label{question-84-2}}
Let \(f(z)\) be an analytic function on
\({\mathbb C} \backslash \{ z_0 \}\), where \(z_0\) is a fixed point.
Assume that \(f(z)\) is bijective from
\({\mathbb C} \backslash \{ z_0 \}\) onto its image, and that \(f(z)\)
is bounded outside \(D_r(z_0)\), where \(r\) is some fixed positive
number. Show that there exist \(a, b, c, d \in \mathbb C\) with
\(ad-bc \neq 0\), \(c \neq 0\) such that
\(\displaystyle f(z) = \frac{az + b}{cz + d}\).
\hypertarget{question-85-2}{%
\subsection{Question 85}\label{question-85-2}}
Assume \(f(z)\) is analytic in \(\mathbb D: |z|<1\) and \(f(0)=0\) and
is not a rotation (i.e.~\(f(z) \neq e^{i \theta} z\)). Show that
\(\displaystyle \sum_{n=1}^\infty f^{n}(z)\) converges uniformly to an
analytic function on compact subsets of \({\mathbb D}\), where
\(f^{n+1}(z) = f(f^{n}(z))\).
\hypertarget{question-86-1}{%
\subsection{Question 86}\label{question-86-1}}
Let \(f\) be a non-constant analytic function on \(\mathbb D\) with
\(f(\mathbb D) \subseteq \mathbb D\). Use \(\psi_{a} (f(z))\) (where
\(a=f(0)\), \(\displaystyle \psi_a(z) = \frac{a - z}{1 - \bar{a}z}\)) to
prove that \[\displaystyle
\frac{|f(0)| - |z|}{1 + |f(0)||z|} \leq |f(z)| \leq
\frac{|f(0)| + |z|}{1 - |f(0)||z|}.\]
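\begin{quote}
Possible first step (a sketch): \(g = \psi_a \circ f\) with \(a = f(0)\)
maps \(\mathbb D\) into \(\mathbb D\) and fixes \(0\), so the Schwarz
lemma gives
\[\left|\frac{f(0)-f(z)}{1-\overline{f(0)}\, f(z)}\right| \leq |z|,\]
and both stated bounds then follow by estimating the numerator and
denominator with the triangle inequality.
\end{quote}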
\hypertarget{question-87-1}{%
\subsection{Question 87}\label{question-87-1}}
Find a conformal map
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
From \(\{ z: |z - 1/2| > 1/2, \text{Re}(z)>0 \}\) to \(\mathbb H\)
\item
From \(\{ z: |z - 1/2| > 1/2, |z| <1 \}\) to \(\mathbb D\)
\item
From the intersection of the disk \(|z + i| < \sqrt{2}\) with
\({\mathbb H}\) to \({\mathbb D}\).
\item
From \({\mathbb D} \backslash [a, 1)\) to
\({\mathbb D} \backslash [0, 1)\) (\(0<a<1)\).
\begin{quote}
Hint: A short solution is possible using a Blaschke factor.
\end{quote}
\item
From \(\{ z: |z| < 1, \text{Re}(z) > 0 \} \backslash (0, 1/2]\) to
\(\mathbb H\).
\end{enumerate}
\hypertarget{question-88-1}{%
\subsection{Question 88}\label{question-88-1}}
Let \(C\) and \(C'\) be two circles and let \(z_1 \in C\),
\(z_2 \notin C\), \(z'_1 \in C'\), \(z'_2 \notin C'\). Show that there
is a unique fractional linear transformation \(f\) with \(f(C) = C'\)
and \(f(z_1) = z'_1\), \(f(z_2) = z'_2\).
\hypertarget{question-89-1}{%
\subsection{Question 89}\label{question-89-1}}
Assume \(f_n \in H(\Omega)\) is a sequence of holomorphic functions on
the region \(\Omega\) that are uniformly bounded on compact subsets and
\(f \in H(\Omega)\) is such that the set
\(\displaystyle \{z \in \Omega: \lim_{n \rightarrow \infty} f_n(z) = f(z) \}\)
has a limit point in \(\Omega\). Show that \(f_n\) converges to \(f\)
uniformly on compact subsets of \(\Omega\).
\hypertarget{question-90-1}{%
\subsection{Question 90}\label{question-90-1}}
Let \(\displaystyle{\psi_{\alpha}(z)=\frac{\alpha-z}{1-\bar{\alpha}z}}\)
with \(|\alpha|<1\) and \({\mathbb D}=\{z:\ |z|<1\}\). Prove that
\begin{itemize}
\item
\(\displaystyle{\frac{1}{\pi}\iint_{{\mathbb D}} |\psi'_{\alpha}|^2 dx dy =1}\).
\item
\(\displaystyle{\frac{1}{\pi}\iint_{{\mathbb D}} |\psi'_{\alpha}| dx dy =\frac{1-|\alpha|^2}{|\alpha|^2} \log \frac{1}{1-|\alpha|^2}}\).
\end{itemize}
\hypertarget{question-91-1}{%
\subsection{Question 91}\label{question-91-1}}
Prove that
\(\displaystyle{f(z)=-\frac{1}{2}\left(z+\frac{1}{z}\right)}\) is a
conformal map from the half disc \(\{z=x+iy:\ |z|<1,\ y>0\}\) to the
upper half plane \({\mathbb H}=\{z=x+iy:\ y>0\}\).
\hypertarget{question-92-1}{%
\subsection{Question 92}\label{question-92-1}}
Let \(\Omega\) be a simply connected open set and let \(\gamma\) be a
simple closed contour in \(\Omega\) and enclosing a bounded region \(U\)
anticlockwise. Let \(f: \ \Omega \to {\mathbb C}\) be a holomorphic
function and \(|f(z)|\leq M\) for all \(z\in \gamma\). Prove that
\(|f(z)|\leq M\) for all \(z\in U\).
\hypertarget{question-93-1}{%
\subsection{Question 93}\label{question-93-1}}
Compute the following integrals.
\begin{enumerate}
\def\labelenumi{(\roman{enumi})}
\item
\(\displaystyle \int_0^\infty \frac{x^{a-1}}{1 + x^n} \, dx\),
\(0< a < n\)
\item
\(\displaystyle \int_0^\infty \frac{\log x}{(1 + x^2)^2}\, dx\)
\end{enumerate}
\hypertarget{question-94-1}{%
\subsection{Question 94}\label{question-94-1}}
Let \(0<r<1\). Show that the polynomials
\(P_n(z) = 1 + 2z + 3 z^2 + \cdots + n z^{n-1}\) have no zeros in
\(|z|<r\) for all sufficiently large \(n\)'s.
\hypertarget{question-95-1}{%
\subsection{Question 95}\label{question-95-1}}
Let \(f\) be holomorphic in a neighborhood of \(D_r(z_0)\). Show that
for any \(s<r\), there exists a constant \(c>0\) such that
\[\|f\|_{(\infty, s)} \leq c \|f\|_{(1, r)},\] where
\(\displaystyle \|f\|_{(\infty, s)} = \text{sup}_{z \in D_s(z_0)}|f(z)|\)
and \(\displaystyle \|f\|_{(1, r)} = \int_{D_r(z_0)} |f(z)|dx dy\).
\hypertarget{question-96-1}{%
\subsection{Question 96}\label{question-96-1}}
Let \(\displaystyle{\psi_{\alpha}(z)=\frac{\alpha-z}{1-\bar{\alpha}z}}\)
with \(|\alpha|<1\) and \({\mathbb D}=\{z:\ |z|<1\}\). Prove that
\begin{itemize}
\item
\(\displaystyle{\frac{1}{\pi}\iint_{{\mathbb D}} |\psi'_{\alpha}|^2 dx dy =1}\).
\item
\(\displaystyle{\frac{1}{\pi}\iint_{{\mathbb D}} |\psi'_{\alpha}| dx dy =\frac{1-|\alpha|^2}{|\alpha|^2} \log \frac{1}{1-|\alpha|^2}}\).
\end{itemize}
\hypertarget{question-97-1}{%
\subsection{Question 97}\label{question-97-1}}
Let \(\Omega\) be a simply connected open set and let \(\gamma\) be a
simple closed contour in \(\Omega\) and enclosing a bounded region \(U\)
anticlockwise. Let \(f: \ \Omega \to {\mathbb C}\) be a holomorphic
function and \(|f(z)|\leq M\) for all \(z\in \gamma\). Prove that
\(|f(z)|\leq M\) for all \(z\in U\).
\hypertarget{question-98-1}{%
\subsection{Question 98}\label{question-98-1}}
Compute the following integrals.
\begin{enumerate}
\def\labelenumi{(\roman{enumi})}
\item
\(\displaystyle \int_0^\infty \frac{x^{a-1}}{1 + x^n} \, dx\),
\(0< a < n\)
\item
\(\displaystyle \int_0^\infty \frac{\log x}{(1 + x^2)^2}\, dx\)
\end{enumerate}
\hypertarget{question-99-1}{%
\subsection{Question 99}\label{question-99-1}}
Let \(f\) be holomorphic in a neighborhood of \(D_r(z_0)\). Show that
for any \(s<r\), there exists a constant \(c>0\) such that
\[\|f\|_{(\infty, s)} \leq c \|f\|_{(1, r)},\] where
\(\displaystyle \|f\|_{(\infty, s)} = \text{sup}_{z \in D_s(z_0)}|f(z)|\)
and \(\displaystyle \|f\|_{(1, r)} = \int_{D_r(z_0)} |f(z)|dx dy\).
\hypertarget{question-100-1}{%
\subsection{Question 100}\label{question-100-1}}
Let \(u(x,y)\) be harmonic and have continuous partial derivatives of
order three in an open disc of radius \(R>0\).
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Let two points \((a,b), (x,y)\) in this disk be given. Show that the
following integral is independent of the path in this disk joining
these points:
\[v(x,y) = \int_{(a,b)}^{(x,y)} \left( -\frac{\partial u}{\partial y}\,dx + \frac{\partial u}{\partial x}\,dy\right).\]
\item
\hfill
\begin{enumerate}
\def\labelenumii{(\roman{enumii})}
\item
Prove that \(u(x,y)+i v(x,y)\) is an analytic function in this disc.
\item
Prove that \(v(x,y)\) is harmonic in this disc.
\end{enumerate}
\end{enumerate}
\hypertarget{question-101-1}{%
\subsection{Question 101}\label{question-101-1}}
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
  Let \(f(z)= u(x,y) +i v(x,y)\) be analytic in a domain
  \(D\subset {\mathbb C}\). Let \(z_0=(x_0,y_0)\) be a point in \(D\)
  which is in the intersection of the curves \(u(x,y)= c_1\) and
  \(v(x,y)=c_2\), where \(c_1\) and \(c_2\) are constants. Suppose that
  \(f'(z_0)\neq 0\). Prove that the lines tangent to these curves at
  \(z_0\) are perpendicular.
\item
  Let \(f(z)=z^2\) be defined in \({\mathbb C}\).
  \begin{enumerate}
  \def\labelenumii{(\roman{enumii})}
  \item
    Describe the level curves of \(\mbox{\textrm Re}{(f)}\) and of
    \(\mbox{\textrm Im}{(f)}\).
  \item
    What are the angles of intersection between the level curves
    \(\mbox{\textrm Re}{(f)}=0\) and \(\mbox{\textrm Im}{(f)}=0\)? Is your
    answer in agreement with part (a) of this question?
  \end{enumerate}
\end{enumerate}
\hypertarget{question-102-1}{%
\subsection{Question 102}\label{question-102-1}}
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Let \(f: D\rightarrow \mathbb C\) be a continuous function, where
\(D\subset \mathbb C\) is a domain. Let \(\alpha:[a,b]\rightarrow D\)
be a smooth curve. Give a precise definition of the \emph{complex line
integral} \[\int_{\alpha} f.\]
\item
Assume that there exists a constant \(M\) such that
\(|f(\tau)|\leq M\) for all \(\tau\in \mbox{\textrm Image}(\alpha)\).
Prove that
\[\big | \int_{\alpha} f \big |\leq M \times \mbox{\textrm length}(\alpha).\]
\item
Let \(C_R\) be the circle \(|z|=R\), described in the counterclockwise
direction, where \(R>1\). Provide an upper bound for
\(\abs{ \int_{C_R} \dfrac{\log{(z)} }{z^2} \, dz }\) which depends
\emph{only} on \(R\) and other constants.
\end{enumerate}
\hypertarget{question-103-1}{%
\subsection{Question 103}\label{question-103-1}}
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\item
Let \(f:{\mathbb C}\rightarrow {\mathbb C}\) be an entire function.
Assume the existence of a non-negative integer \(m\), and of positive
constants \(L\) and \(R\), such that for all \(z\) with \(|z|>R\) the
inequality \[|f(z)| \leq L |z|^m\] holds. Prove that \(f\) is a
polynomial of degree \(\leq m\).
\item
Let \(f:{\mathbb C}\rightarrow {\mathbb C}\) be an entire function.
Suppose that there exists a real number \(M\) such that for all
\(z\in {\mathbb C}\) \[\mbox{\textrm Re} (f) \leq M.\] Prove that
\(f\) must be a constant.
\end{enumerate}
\hypertarget{question-104-1}{%
\subsection{Question 104}\label{question-104-1}}
Prove that all the roots of the complex polynomial
\[z^7 - 5 z^3 +12 =0\] lie between the circles \(|z|=1\) and \(|z|=2\).
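\begin{quote}
Possible Rouché comparisons (a sketch): on \(|z|=1\),
\(|z^7 - 5z^3| \leq 6 < 12\), so \(z^7-5z^3+12\) has no zeros in
\(|z|\leq 1\); on \(|z|=2\), \(|-5z^3+12| \leq 52 < 128 = |z^7|\), so all
seven zeros lie in \(|z|<2\). Hence every root satisfies \(1<|z|<2\).
\end{quote}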
\hypertarget{question-105-1}{%
\subsection{Question 105}\label{question-105-1}}
Let \(F\) be an analytic function inside and on a simple closed curve
\(C\), except for a pole of order \(m\geq 1\) at \(z=a\) inside \(C\).
Prove that
\[
\frac{1}{2 \pi i}\oint_{C} F(\tau)\, d\tau =
\frac{1}{(m-1)!}\lim_{\tau\rightarrow a} \frac{d^{m-1}}{d\tau^{m-1}}\big((\tau-a)^m F(\tau)\big)
.\]
\hypertarget{question-106-1}{%
\subsection{Question 106}\label{question-106-1}}
Find a conformal map that takes the upper half-plane conformally onto
the half-strip \(\{ w=x+iy:\ -\pi/2<x<\pi/2,\ y>0\}\).
\hypertarget{question-107-1}{%
\subsection{Question 107}\label{question-107-1}}
Compute the integral
\(\displaystyle{\int_{-\infty}^{\infty} \frac{e^{-2\pi ix\xi}}{\cosh\pi x}dx}\)
where \(\displaystyle{\cosh z=\frac{e^{z}+e^{-z}}{2}}\).
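\begin{quote}
For reference (a sketch, with the normalization
\(\widehat f(\xi) = \int f(x) e^{-2\pi i x \xi}\, dx\)): the expected
value is \(\dfrac{1}{\cosh \pi \xi}\), i.e.\ \(\dfrac{1}{\cosh \pi x}\)
is its own Fourier transform. A standard route integrates
\(\dfrac{e^{-2\pi i z \xi}}{\cosh \pi z}\) over the rectangle with
vertices \(\pm R\) and \(\pm R + i\), which encloses the single pole
\(z = \tfrac{i}{2}\), and lets \(R \to \infty\).
\end{quote}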
\hypertarget{question-108-1}{%
\subsection{Question 108}\label{question-108-1}}
Find the number of zeroes, counting multiplicities, of the polynomial
\(f(z) = 2z^5 - 6z^2 - z + 1\)
in the annulus \(1 \leq |z| \leq 2\).
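\begin{quote}
Possible Rouché comparisons (a sketch): on \(|z|=1\),
\(|2z^5 - z + 1| \leq 4 < 6 = |-6z^2|\), so \(f\) has two zeros in
\(|z|<1\) and none on \(|z|=1\); on \(|z|=2\),
\(|-6z^2 - z + 1| \leq 27 < 64 = |2z^5|\), so all five zeros lie in
\(|z|<2\) and none lie on \(|z|=2\). This leaves \(5-2=3\) zeros in the
annulus.
\end{quote}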
\hypertarget{question-109-1}{%
\subsection{Question 109}\label{question-109-1}}
Find an analytic isomorphism from the open region between \(|z| = 1\)
and \(|z -\frac 1 2| =\frac 1 2\) to the upper half plane \(\Im z > 0\).
(You may leave your result as a composition of functions).
\hypertarget{question-110-1}{%
\subsection{Question 110}\label{question-110-1}}
Use Green's theorem or otherwise to prove Cauchy's theorem.
\hypertarget{question-111-1}{%
\subsection{Question 111}\label{question-111-1}}
State and prove the divergence theorem on any rectangle in
\(\mathbb{R}^2\).
\hypertarget{question-112-1}{%
\subsection{Question 112}\label{question-112-1}}
Find an analytic isomorphism from the open region between the lines
\(x = 1\) and \(x = 3\) to the upper half unit disk
\(\{|z| < 1,\ \Im z > 0\}\). (You may leave your result as a composition
of functions.)
\hypertarget{question-113-1}{%
\subsection{Question 113}\label{question-113-1}}
Use Cauchy's theorem to prove the argument principle.
\hypertarget{question-114-1}{%
\subsection{Question 114}\label{question-114-1}}
Evaluate the following by the method of residues:
\(\displaystyle \int_0^{\pi /2} \frac{1}{3+\sin^2 x}\, dx\)
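\begin{quote}
As a cross-check of the value (by the real substitution \(t=\tan x\)
rather than by residues):
\[\int_0^{\pi/2}\frac{dx}{3+\sin^2 x}
= \int_0^\infty \frac{dt}{3+4t^2}
= \frac{\pi}{4\sqrt 3}.\]
\end{quote}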
\hypertarget{question-115-1}{%
\subsection{Question 115}\label{question-115-1}}
Evaluate the improper integral
\(\displaystyle \int_0^\infty \frac{x^2\, dx}{(x^2+1)(x^2+4)}\)
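\begin{quote}
As a cross-check of the value: partial fractions give
\[\frac{x^2}{(x^2+1)(x^2+4)}
= -\frac{1}{3}\cdot\frac{1}{x^2+1} + \frac{4}{3}\cdot\frac{1}{x^2+4},\]
so the integral equals
\(-\frac{1}{3}\cdot\frac{\pi}{2} + \frac{4}{3}\cdot\frac{\pi}{4} = \frac{\pi}{6}\).
\end{quote}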
\hypertarget{question-116-1}{%
\subsection{Question 116}\label{question-116-1}}
Use residues to compute the integral \begin{align*}
\int_{0}^{\infty} \dfrac{\cos x}{(x^2+1)^2} \mathrm{d}x
\end{align*}
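\begin{quote}
Possible approach (a sketch): integrate \(\dfrac{e^{iz}}{(z^2+1)^2}\)
over the real axis closed by a large upper semicircle. The only enclosed
singularity is the double pole at \(z=i\), with
\[\operatorname*{Res}_{z=i} \frac{e^{iz}}{(z^2+1)^2}
= \left.\frac{d}{dz}\,\frac{e^{iz}}{(z+i)^2}\right|_{z=i}
= -\frac{i}{2e},\]
so \(\displaystyle \int_{-\infty}^{\infty}\frac{\cos x}{(x^2+1)^2}\,
\mathrm{d}x = \frac{\pi}{e}\) and the requested integral equals
\(\dfrac{\pi}{2e}\).
\end{quote}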
\hypertarget{question-117-1}{%
\subsection{Question 117}\label{question-117-1}}
State and prove the Cauchy integral formula for holomorphic functions.
\hypertarget{question-118-1}{%
\subsection{Question 118}\label{question-118-1}}
Let \(f\) be an entire function and suppose that \(|f(z)| \leq A|z|^2\)
for all \(z\) and some constant \(A\). Show that \(f\) is a polynomial
of degree \(\leq 2\).
\hypertarget{question-119-1}{%
\subsection{Question 119}\label{question-119-1}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
State the Schwarz lemma for analytic functions in the unit disc.
\item
Let \(f: \mathbb{D} \to \mathbb{D}\) be an analytic map from the unit
disc \(\mathbb{D}\) into itself. Use the Schwarz lemma to show that
for each \(a\in \mathbb{D}\) we have \begin{align*}
\dfrac{|f'(a)|}{1-|f(a)|^2} \leq \dfrac{1}{1-|a|^2}
\end{align*}
\end{enumerate}
\hypertarget{question-120-1}{%
\subsection{Question 120}\label{question-120-1}}
State the Riemann mapping theorem and prove the uniqueness part.
\hypertarget{question-121-1}{%
\subsection{Question 121}\label{question-121-1}}
Compute the integrals \begin{align*}
\int_{|z-2|=1} \dfrac{e^z}{z(z-1)^2} \,
\mathrm{d}z, \quad \int_0^\infty \dfrac{\cos 2x}{x^2 + 2} \, \mathrm{d}x
\end{align*}
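\begin{quote}
As a cross-check of the values (a sketch): the circle \(|z-2|=1\)
encloses only the double pole at \(z=1\), where
\(\displaystyle \operatorname*{Res}_{z=1} \frac{e^z}{z(z-1)^2}
= \left.\frac{d}{dz}\,\frac{e^z}{z}\right|_{z=1} = 0\), so the first
integral is \(0\). For the second,
\(\displaystyle \int_{-\infty}^{\infty} \frac{e^{2ix}}{x^2+2}\,
\mathrm{d}x = 2\pi i \operatorname*{Res}_{z=i\sqrt 2}
\frac{e^{2iz}}{z^2+2} = \frac{\pi}{\sqrt 2}\, e^{-2\sqrt 2}\), so
\(\displaystyle \int_0^\infty \frac{\cos 2x}{x^2+2}\,\mathrm{d}x
= \frac{\pi}{2\sqrt 2}\, e^{-2\sqrt 2}\).
\end{quote}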
\hypertarget{question-122-1}{%
\subsection{Question 122}\label{question-122-1}}
Let \((f_n)\) be a sequence of holomorphic functions in a domain \(D\).
Suppose that \(f_n \to f\) uniformly on each compact subset of \(D\).
Show that
\begin{itemize}
\item
\(f\) is holomorphic on \(D\).
\item
\(f_n' \to f'\) uniformly on each compact subset of \(D\).
\end{itemize}
\hypertarget{question-123-1}{%
\subsection{Question 123}\label{question-123-1}}
Show that if \(f\) is a non-constant entire function, then \(f(\mathbb{C})\) is
dense in the plane.
\hypertarget{question-124-1}{%
\subsection{Question 124}\label{question-124-1}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
State Rouché's theorem.
\item
Let \(f\) be analytic in a neighborhood of \(0\), and satisfying
\(f'(0) \neq 0\). Use Rouché's theorem to show that there exists a
neighborhood \(U\) of \(0\) such that \(f\) is a bijection in \(U\).
\end{enumerate}
\hypertarget{question-125-1}{%
\subsection{Question 125}\label{question-125-1}}
Let \(f\) be a meromorphic function in the plane such that
\begin{align*}
\lim_{|z|\to\infty} |f(z)| = \infty
\end{align*}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Show that \(f\) has only finitely many poles.
\item
Show that \(f\) is a rational function.
\end{enumerate}
\hypertarget{topology-158-questions}{%
\section{Topology (158 Questions)}\label{topology-158-questions}}
\hypertarget{question-1-3}{%
\subsection{Question 1}\label{question-1-3}}
Suppose \((X, d)\) is a metric space. State criteria for continuity of a
function \(f : X \to X\) in terms of:
\begin{enumerate}
\def\labelenumi{\roman{enumi}.}
\item
open sets;
\item
\(\eps\)'s and \(\delta\)'s; and
\item
convergent sequences.
\end{enumerate}
Then prove that (iii) implies (i).
\hypertarget{question-2-3}{%
\subsection{Question 2}\label{question-2-3}}
Let \(X\) be a topological space.
\begin{enumerate}
\def\labelenumi{\roman{enumi}.}
\item
State what it means for \(X\) to be compact.
\item
Let \(X = \theset{0} \cup \theset{{1\over n} \mid n \in \ZZ^+ }\). Is
\(X\) compact?
\item
Let \(X = (0, 1]\). Is \(X\) compact?
\end{enumerate}
\hypertarget{question-3-3}{%
\subsection{Question 3}\label{question-3-3}}
Let \((X, d)\) be a compact metric space, and let \(f : X \to X\) be an
isometry: \[\forall~ x, y \in X, \qquad d(f (x), f (y)) = d(x, y).\]
Prove that \(f\) is a bijection.
\hypertarget{question-4-3}{%
\subsection{Question 4}\label{question-4-3}}
Suppose \((X, d)\) is a compact metric space and \(U\) is an open
covering of \(X\).
Prove that there is a number \(\delta > 0\) such that for every
\(x \in X\), the ball of radius \(\delta\) centered at \(x\) is
contained in some element of \(U\).
\hypertarget{question-5-3}{%
\subsection{Question 5}\label{question-5-3}}
Let \(X\) be a topological space, and \(B \subset A \subset X\). Equip
\(A\) with the subspace topology, and write \(\cl_X (B)\) or
\(\cl_A (B)\) for the closure of \(B\) as a subset of, respectively,
\(X\) or \(A\).
Determine, with proof, the general relationship between
\(\cl_X (B) \cap A\) and \(\cl_A (B)\)
\begin{quote}
I.e., are they always equal? Is one always contained in the other but
not conversely? Neither?
\end{quote}
\hypertarget{question-6-3}{%
\subsection{Question 6}\label{question-6-3}}
Prove that the unit interval \(I\) is compact. Be sure to explicitly
state any properties of \(\RR\) that you use.
\hypertarget{question-7-3}{%
\subsection{Question 7}\label{question-7-3}}
A topological space \(X\) is \textbf{sequentially compact} if every
sequence in \(X\) has a convergent subsequence.
Prove that every compact metric space is sequentially compact.
\hypertarget{question-8-3}{%
\subsection{Question 8}\label{question-8-3}}
Show that for any two topological spaces \(X\) and \(Y\) ,
\(X \cross Y\) is compact if and only if both \(X\) and \(Y\) are
compact.
\hypertarget{question-9-3}{%
\subsection{Question 9}\label{question-9-3}}
Recall that a topological space is said to be \textbf{connected} if
there does not exist a pair \(U, V\) of disjoint nonempty open subsets whose
union is \(X\).
\begin{enumerate}
\def\labelenumi{\roman{enumi}.}
\item
Prove that \(X\) is connected if and only if the only subsets of \(X\)
that are both open and closed are \(X\) and the empty set.
\item
Suppose that \(X\) is connected and let \(f : X \to \RR\) be a
continuous map. If \(a\) and \(b\) are two points of \(X\) and \(r\)
is a point of \(\RR\) lying between \(f (a)\) and \(f (b)\) show that
there exists a point \(c\) of \(X\) such that \(f (c) = r\).
\end{enumerate}
\hypertarget{question-10-3}{%
\subsection{Question 10}\label{question-10-3}}
Let \[
X = \theset{(0, y) \mid - 1 \leq y \leq 1} \cup \theset{\qty{x, \sin\qty{1 \over x}} \mid 0 < x \leq 1}
.\]
Prove that \(X\) is connected but not path connected.
\hypertarget{question-11-3}{%
\subsection{Question 11}\label{question-11-3}}
Let \begin{align*}
X=\left\{(x, y) \in \mathbb{R}^{2} | x>0, y \geq 0, \text { and } \frac{y}{x} \text { is rational }\right\}
\end{align*} and equip \(X\) with the subspace topology induced by the
usual topology on \(\RR^2\).
Prove or disprove that \(X\) is connected.
\hypertarget{question-12-3}{%
\subsection{Question 12}\label{question-12-3}}
Write \(Y\) for the interval \([0, \infty)\), equipped with the usual
topology.
Find, with proof, all subspaces \(Z\) of \(Y\) which are retracts of
\(Y\).
\hypertarget{question-13-3}{%
\subsection{Question 13}\label{question-13-3}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that if the space \(X\) is connected and locally path connected
then \(X\) is path connected.
\item
Is the converse true? Prove or give a counterexample.
\end{enumerate}
\hypertarget{question-14-3}{%
\subsection{Question 14}\label{question-14-3}}
Let \(\theset{X_\alpha \mid \alpha \in A}\) be a family of connected
subspaces of a space \(X\) such that there is a point \(p \in X\) which
is in each of the \(X_\alpha\).
Show that the union of the \(X_\alpha\) is connected.
\hypertarget{question-15-3}{%
\subsection{Question 15}\label{question-15-3}}
Let \(X\) be a topological space.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that \(X\) is connected if and only if there is no continuous
nonconstant map to the discrete two-point space \(\theset{0, 1}\).
\item
Suppose in addition that \(X\) is compact and \(Y\) is a connected
Hausdorff space. Suppose further that there is a continuous map
\(f : X \to Y\) such that every preimage \(f\inv (y)\) for
\(y \in Y\), is a connected subset of \(X\).
Show that \(X\) is connected.
\item
Give an example showing that the conclusion of (b) may be false if
\(X\) is not compact.
\end{enumerate}
\hypertarget{question-16-3}{%
\subsection{Question 16}\label{question-16-3}}
If \(X\) is a topological space and \(S \subset X\), define in terms of
open subsets of \(X\) what it means for \(S\) \textbf{not} to be
connected.
Show that if \(S\) is not connected there are nonempty subsets
\(A, B \subset X\) such that \[
A \cup B = S \qtext{and} A \cap \bar B = \bar A \cap B = \emptyset
\]
\begin{quote}
Here \(\bar A\) and \(\bar B\) denote closure with respect to the
topology on the ambient space \(X\).
\end{quote}
\hypertarget{question-17-3}{%
\subsection{Question 17}\label{question-17-3}}
A topological space is \textbf{totally disconnected} if its only
connected subsets are one-point sets.
Is it true that if \(X\) has the discrete topology, it is totally
disconnected?
Is the converse true? Justify your answers.
\hypertarget{question-18-3}{%
\subsection{Question 18}\label{question-18-3}}
Prove that if \((X, d)\) is a compact metric space, \(f : X \to X\) is a
continuous map, and \(C\) is a constant with \(0 < C < 1\) such that \[
d(f (x), f (y)) \leq C \cdot d(x, y) \quad \forall x, y
,\] then \(f\) has a fixed point.
\hypertarget{question-19-3}{%
\subsection{Question 19}\label{question-19-3}}
Prove that the product of two connected topological spaces is connected.
\hypertarget{question-20-3}{%
\subsection{Question 20}\label{question-20-3}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Define what it means for a topological space to be:
\begin{enumerate}
\def\labelenumii{\roman{enumii}.}
\item
\textbf{Connected}
\item
\textbf{Locally connected}
\end{enumerate}
\item
Give, with proof, an example of a space that is connected but not
locally connected.
\end{enumerate}
\hypertarget{question-21-3}{%
\subsection{Question 21}\label{question-21-3}}
Let \(X\) and \(Y\) be topological spaces and let \(f : X \to Y\) be a
function.
Suppose that \(X = A \cup B\) where \(A\) and \(B\) are closed subsets,
and that the restrictions \(f \mid_A\) and \(f \mid_B\) are continuous
(where \(A\) and \(B\) have the subspace topology).
Prove that \(f\) is continuous.
\hypertarget{question-22-3}{%
\subsection{Question 22}\label{question-22-3}}
Let \(X\) be a compact space and let \(f : X \times \RR \to \RR\) be a
continuous function such that \(f (x, 0) > 0\) for all \(x \in X\).
Prove that there is \(\eps > 0\) such that \(f (x, t) > 0\) whenever
\(\abs t < \eps\).
Moreover give an example showing that this conclusion may not hold if
\(X\) is not assumed compact.
\hypertarget{question-23-3}{%
\subsection{Question 23}\label{question-23-3}}
Define a family \(\mct\) of subsets of \(\RR\) by saying that
\(A \in \mct\) if and only if \(A = \emptyset\) or \(\RR \setminus A\) is
a finite set.
Prove that \(\mct\) is a topology on \(\RR\), and that \(\RR\) is
compact with respect to this topology.
\hypertarget{question-24-3}{%
\subsection{Question 24}\label{question-24-3}}
In each part of this problem \(X\) is a compact topological space.
Give a proof or a counterexample for each statement.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
If \(\theset{F_n }_{n=1}^\infty\) is a sequence of nonempty
\emph{closed} subsets of \(X\) such that \(F_{n+1} \subset F_{n}\) for
all \(n\) then \[\intersect^\infty_{n=1} F_n\neq \emptyset.\]
\item
If \(\theset{O_n}_{n=1}^\infty\) is a sequence of nonempty \emph{open}
subsets of \(X\) such that \(O_{n+1} \subset O_n\) for all \(n\) then
\[\intersect_{n=1}^\infty O_{n}\neq \emptyset.\]
\end{enumerate}
\hypertarget{question-25-3}{%
\subsection{Question 25}\label{question-25-3}}
Let \(\mcs, \mct\) be topologies on a set \(X\). Show that
\(\mcs \cap \mct\) is a topology on \(X\).
Give an example to show that \(\mcs \cup \mct\) need not be a topology.
\hypertarget{question-26-3}{%
\subsection{Question 26}\label{question-26-3}}
Let \(f : X \to Y\) be a continuous function between topological spaces.
Let \(A\) be a subset of \(X\) and let \(f (A)\) be its image in \(Y\) .
One of the following statements is true and one is false. Decide which
is which, prove the true statement, and provide a counterexample to the
false statement:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
If \(A\) is closed then \(f (A)\) is closed.
\item
If \(A\) is compact then \(f (A)\) is compact.
\end{enumerate}
\hypertarget{question-27-3}{%
\subsection{Question 27}\label{question-27-3}}
A metric space is said to be \textbf{totally bounded} if for every
\(\eps > 0\) there exists a finite cover of \(X\) by open balls of
radius \(\eps\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show: a metric space \(X\) is totally bounded iff every sequence in
\(X\) has a Cauchy subsequence.
\item
Exhibit a complete metric space \(X\) and a closed subset \(A\) of
\(X\) that is bounded but not totally bounded.
\end{enumerate}
\begin{quote}
You are not required to prove that your example has the stated
properties.
\end{quote}
\hypertarget{question-28-3}{%
\subsection{Question 28}\label{question-28-3}}
Suppose that \(X\) is a Hausdorff topological space and that
\(A \subset X\).
Prove that if \(A\) is compact in the subspace topology then \(A\) is
closed as a subset of \(X\).
\hypertarget{question-29-3}{%
\subsection{Question 29}\label{question-29-3}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that a continuous bijection from a compact space to a Hausdorff
space is a homeomorphism.
\item
Give an example that shows that the ``Hausdorff'' hypothesis in part
(a) is necessary.
\end{enumerate}
\hypertarget{question-30-3}{%
\subsection{Question 30}\label{question-30-3}}
Let \(X\) be a topological space and let \[
\Delta = \theset{(x, y) \in X \times X \mid x = y}
.\]
Show that \(X\) is a Hausdorff space if and only if \(\Delta\) is closed
in \(X \times X\).
\hypertarget{question-31-3}{%
\subsection{Question 31}\label{question-31-3}}
If \(f\) is a function from \(X\) to \(Y\) , consider the graph \[
G = \theset{(x, y) \in X \times Y \mid f (x) = y}
.\]
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that if \(f\) is continuous and \(Y\) is Hausdorff, then \(G\)
is a closed subset of \(X \times Y\).
\item
Prove that if \(G\) is closed and \(Y\) is compact, then \(f\) is
continuous.
\end{enumerate}
\hypertarget{question-32-3}{%
\subsection{Question 32}\label{question-32-3}}
Let \(X\) be a noncompact locally compact Hausdorff space, with topology
\(\mct\). Let \(\tilde X = X \cup \theset{\infty}\) (\(X\) with one
point adjoined), and consider the family \(\mcb\) of subsets of
\(\tilde X\) defined by \[
\mcb = \mct \cup \theset{S \cup \theset{\infty}\mid S \subset X,~~ X \backslash S \text{ is compact}}
.\]
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that \(\mcb\) is a topology on \(\tilde X\), that the resulting
space is compact, and that \(X\) is dense in \(\tilde X\).
\item
Prove that if \(Y \supset X\) is a compact space such that \(X\) is
dense in \(Y\) and \(Y \backslash X\) is a singleton, then \(Y\) is
homeomorphic to \(\tilde X\).
\end{enumerate}
\begin{quote}
The space \(\tilde X\) is called the \textbf{one-point compactification}
of \(X\).
\end{quote}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\setcounter{enumi}{2}
\item
Find familiar spaces that are homeomorphic to the one-point
compactifications of
\begin{enumerate}
\def\labelenumii{\roman{enumii}.}
\tightlist
\item
\(X = (0, 1)\) and
\end{enumerate}
\end{enumerate}
\hypertarget{question-33-3}{%
\subsection{Question 33}\label{question-33-3}}
Prove that a metric space \(X\) is \textbf{normal}, i.e.~if
\(A, B \subset X\) are closed and disjoint then there exist open sets
\(A \subset U \subset X, ~B \subset V \subset X\) such that
\(U \cap V = \emptyset\).
\hypertarget{question-34-3}{%
\subsection{Question 34}\label{question-34-3}}
Prove that every compact, Hausdorff topological space is normal.
\hypertarget{question-35-3}{%
\subsection{Question 35}\label{question-35-3}}
Show that a connected, normal topological space with more than a single
point is uncountable.
\hypertarget{question-36-3}{%
\subsection{Question 36}\label{question-36-3}}
Give an example of a quotient map in which the domain is Hausdorff, but
the quotient is not.
\hypertarget{question-37-3}{%
\subsection{Question 37}\label{question-37-3}}
Let \(X\) be a compact Hausdorff space and suppose
\(R \subset X \times X\) is a closed equivalence relation.
Show that the quotient space \(X/R\) is Hausdorff.
\hypertarget{question-38-3}{%
\subsection{Question 38}\label{question-38-3}}
Let \(U \subset \RR^n\) be an open set which is bounded in the standard
Euclidean metric.
Prove that the quotient space \(\RR^n / U\) is not Hausdorff.
\hypertarget{question-39-3}{%
\subsection{Question 39}\label{question-39-3}}
Let \(A\) be a closed subset of a normal topological space \(X\).
Show that both \(A\) and the quotient \(X/A\) are normal.
\hypertarget{question-40-3}{%
\subsection{Question 40}\label{question-40-3}}
Define an equivalence relation \(\sim\) on \(\RR\) by \(x \sim y\) if
and only if \(x - y \in \QQ\). Let \(X\) be the set of equivalence
classes, endowed with the quotient topology induced by the canonical
projection \(\pi : \RR \to X\).
Describe, with proof, all open subsets of \(X\) with respect to this
topology.
\hypertarget{question-41-3}{%
\subsection{Question 41}\label{question-41-3}}
Let \(A\) denote a subset of points of \(S^2\) that looks exactly like
the capital letter A. Let \(Q\) be the quotient of \(S^2\) given by
identifying all points of \(A\) to a single point.
Show that \(Q\) is homeomorphic to a familiar topological space and
identify that space.
\hypertarget{question-42-3}{%
\subsection{Question 42}\label{question-42-3}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that a topological space that has a countable base for its
topology also contains a countable dense subset.
\item
Prove that the converse to (a) holds if the space is a metric space.
\end{enumerate}
\hypertarget{question-43-3}{%
\subsection{Question 43}\label{question-43-3}}
Recall that a topological space is \textbf{regular} if for every point
\(p \in X\) and for every closed subset \(F \subset X\) not containing
\(p\), there exist disjoint open sets \(U, V \subset X\) with
\(p \in U\) and \(F \subset V\).
Let \(X\) be a regular space that has a countable basis for its
topology, and let \(U\) be an open subset of \(X\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(U\) is a countable union of closed subsets of \(X\).
\item
Show that there is a continuous function \(f : X \to [0,1]\) such that
\(f (x) > 0\) for \(x \in U\) and \(f (x) = 0\) for \(x \notin U\).
\end{enumerate}
\hypertarget{question-44-3}{%
\subsection{Question 44}\label{question-44-3}}
Let \(S^1\) denote the unit circle in \(\CC\), \(X\) be any topological
space, \(x_0 \in X\), and \[\gamma_0, \gamma_1 : S^1 \to X\] be two
continuous maps such that \(\gamma_0 (1) = \gamma_1 (1) = x_0\).
Prove that \(\gamma_0\) is homotopic to \(\gamma_1\) if and only if the
elements represented by \(\gamma_0\) and \(\gamma_1\) in
\(\pi_1 (X, x_0 )\) are conjugate.
\hypertarget{question-45-3}{%
\subsection{Question 45}\label{question-45-3}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
State van Kampen's theorem.
\item
Calculate the fundamental group of the space obtained by taking two
copies of the torus \(T = S^1 \times S^1\) and gluing them along a
circle \(S^1 \times \theset{p}\) where \(p\) is a point in \(S^1\).
\item
Calculate the fundamental group of the Klein bottle.
\item
Calculate the fundamental group of the one-point union of
\(S^1 \times S^1\) and \(S^1\).
\item
Calculate the fundamental group of the one-point union of
\(S^1 \times S^1\) and \(\RP^2\).
\end{enumerate}
\begin{quote}
\textbf{Note: multiple appearances!!}
\end{quote}
\hypertarget{question-46-3}{%
\subsection{Question 46}\label{question-46-3}}
Prove the following portion of van Kampen's theorem. If \(X = A\cup B\)
and \(A\), \(B\), and \(A \cap B\) are nonempty and path connected with
\(\pt \in A \cap B\), then there is a surjection \[
\pi_1 (A, \pt) \ast \pi_1 (B, \pt) \to \pi_1 (X, \pt)
.\]
\hypertarget{question-47-3}{%
\subsection{Question 47}\label{question-47-3}}
Let \(X\) denote the quotient space formed from the sphere \(S^2\) by
identifying two distinct points.
Compute the fundamental group and the homology groups of \(X\).
\hypertarget{question-48-3}{%
\subsection{Question 48}\label{question-48-3}}
Start with the unit disk \(\DD^2\) and identify points on the boundary
if their angles, thought of in polar coordinates, differ by a multiple of
\(\pi/2\).
Let \(X\) be the resulting space. Use van Kampen's theorem to compute
\(\pi_1 (X, \ast)\).
\hypertarget{question-49-3}{%
\subsection{Question 49}\label{question-49-3}}
Let \(L\) be the union of the \(z\)-axis and the unit circle in the
\(xy\dash\)plane. Compute \(\pi_1 (\RR^3 \backslash L, \ast)\).
\hypertarget{question-50-3}{%
\subsection{Question 50}\label{question-50-3}}
Let \(A\) be the union of the unit sphere in \(\RR^3\) and the interval
\(\theset {(t, 0, 0) : -1 \leq t \leq 1} \subset \RR^3\).
Compute \(\pi_1 (A)\) and give an explicit description of the universal
cover of \(A\).
\hypertarget{question-51-3}{%
\subsection{Question 51}\label{question-51-3}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Let \(S_1\) and \(S_2\) be disjoint surfaces. Give the definition of
their connected sum \(S_1 \# S_2\).
\item
Compute the fundamental group of the connected sum of the projective
plane and the two-torus.
\end{enumerate}
\hypertarget{question-52-3}{%
\subsection{Question 52}\label{question-52-3}}
Compute the fundamental group, using any technique you like, of
\(\RP^2 \#\RP^2 \#\RP^2\).
\hypertarget{question-53-3}{%
\subsection{Question 53}\label{question-53-3}}
Let \[
V = \DD^2 \times S^1 = \theset{ (z, e^{it}) \suchthat \norm z \leq 1,~~ 0 \leq t < 2\pi}
\] be the ``solid torus'' with boundary given by the torus
\(T = S^1 \times S^1\) .
For \(n \in Z\) define \begin{align*}
\phi_n : T &\to T \\
(e^{is} , e^{it} ) &\mapsto (e^{is} , e^{i(ns+t)})
.\end{align*}
Find the fundamental group of the identification space \[
V_n = {V\disjoint V \over \sim_n}
\] where the equivalence relation \(\sim_n\) identifies a point \(x\) on
the boundary \(T\) of the first copy of \(V\) with the point
\(\phi_n (x)\) on the boundary of the second copy of \(V\).
\hypertarget{question-54-3}{%
\subsection{Question 54}\label{question-54-3}}
Let \(S_k\) be the space obtained by removing \(k\) disjoint open disks
from the sphere \(S^2\). Form \(X_k\) by gluing \(k\) Möbius bands onto
\(S_k\) , one for each circle boundary component of \(S_k\) (by
identifying the boundary circle of a Möbius band homeomorphically with a
given boundary component circle).
Use van Kampen's theorem to calculate \(\pi_1 (X_k)\) for each \(k > 0\)
and identify \(X_k\) in terms of the classification of surfaces.
\hypertarget{question-55-3}{%
\subsection{Question 55}\label{question-55-3}}
\begin{enumerate}
\def\labelenumi{\roman{enumi}.}
\item
Let \(A\) be a subspace of a topological space \(X\). Define what it
means for \(A\) to be a \textbf{deformation retract} of \(X\).
\item
Consider \(X_1\) the ``planar figure eight'' and
\[X_2 = S^1 \cup (\theset{0} \times [-1, 1])\] (the ``theta space''). Show
that \(X_1\) and \(X_2\) have isomorphic fundamental groups.
\item
Prove that the fundamental group of \(X_2\) is a free group on two
generators.
\end{enumerate}
\hypertarget{question-56-3}{%
\subsection{Question 56}\label{question-56-3}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Give the definition of a \textbf{covering space} \(\hat{X}\) (and
\textbf{covering map} \(p : \hat{X} \to X\)) for a topological space
\(X\).
\item
State the homotopy lifting property of covering spaces. Use it to show
that a covering map \(p : \hat{X} \to X\) induces an injection \[
p^\ast : \pi_1 (\hat{X}, \hat{x}) \to \pi_1 (X, p(\hat{x}))
\] on fundamental groups.
\item
Let \(p : \hat{X} \to X\) be a covering map with \(\hat{X}\) and \(X\)
path-connected. Suppose that the induced map \(p^\ast\) on \(\pi_1\)
is an isomorphism. Prove that \(p\) is a homeomorphism.
\end{enumerate}
\hypertarget{question-57-3}{%
\subsection{Question 57}\label{question-57-3}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Give the definitions of \textbf{covering space} and \textbf{deck
transformation} (or covering transformation).
\item
Describe the universal cover of the Klein bottle and its group of deck
transformations.
\item
Explicitly give a collection of deck transformations on
\[\theset{(x, y) \mid -1 \leq x \leq 1, -\infty < y < \infty}\] such
that the quotient is a Möbius band.
\item
Find the universal cover of \(\RP^2 \times S^1\) and explicitly
describe its group of deck transformations.
\end{enumerate}
\hypertarget{question-58-3}{%
\subsection{Question 58}\label{question-58-3}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
What is the definition of a \textbf{regular} (or Galois) covering
space?
\item
State, without proof, a criterion in terms of the fundamental group
for a covering map \(p : \tilde X \to X\) to be regular.
\item
Let \(\Theta\) be the topological space formed as the union of a
circle and its diameter (so this space looks exactly like the letter
\(\Theta\)). Give an example of a covering space of \(\Theta\) that is
not regular.
\end{enumerate}
\hypertarget{question-59-3}{%
\subsection{Question 59}\label{question-59-3}}
Let \(S\) be the closed orientable surface of genus 2 and let \(C\) be
the commutator subgroup of \(\pi_1 (S, \ast)\). Let \(\tilde S\) be the
cover corresponding to \(C\). Is the covering map \(\tilde S \to S\)
regular?
\begin{quote}
The term ``normal'' is sometimes used as a synonym for regular in this
context.
\end{quote}
What is the group of deck transformations?
Give an example of a nontrivial element of \(\pi_1 (S, \ast)\) which
lifts to a trivial deck transformation.
\hypertarget{question-60-3}{%
\subsection{Question 60}\label{question-60-3}}
Describe the 3-fold connected covering spaces of \(S^1 \lor S^1\).
\hypertarget{question-61-3}{%
\subsection{Question 61}\label{question-61-3}}
Find all three-fold covers of the wedge of two copies of \(\RP^2\) .
Justify your answer.
\hypertarget{question-62-3}{%
\subsection{Question 62}\label{question-62-3}}
Describe, as explicitly as you can, two different (non-homeomorphic)
connected two-sheeted covering spaces of \(\RP^2 \lor \RP^3\), and prove
that they are not homeomorphic.
\hypertarget{question-63-3}{%
\subsection{Question 63}\label{question-63-3}}
Is there a covering map from \[
X_3 = \theset{x^2 + y^2 = 1} \cup \theset{(x - 2)^2 + y^2 = 1} \cup \theset{(x + 2)^2 + y^2 = 1} \subset \RR^2
\] to the wedge of two \(S^1\)'s? If there is, give an example; if not,
give a proof.
\hypertarget{question-64-3}{%
\subsection{Question 64}\label{question-64-3}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Suppose \(Y\) is an \(n\)-fold connected covering space of the torus
\(S^1 \times S^1\). Up to homeomorphism, what is \(Y\)? Justify your
answer.
\item
Let \(X\) be the topological space obtained by deleting a disk from a
torus. Suppose \(Y\) is a 3-fold covering space of \(X\).
What surfaces could \(Y\) be? Justify your answer, but you need not
exhibit the covering maps explicitly.
\end{enumerate}
\hypertarget{question-65-3}{%
\subsection{Question 65}\label{question-65-3}}
Let \(S\) be a connected surface, and let \(U\) be a connected open
subset of \(S\). Let \(p : \tilde S \to S\) be the universal cover of
\(S\). Show that \(p\inv (U )\) is connected if and only if the
homomorphism \(i_\ast : \pi_1 (U ) \to \pi_1 (S)\) induced by the
inclusion \(i : U \to S\) is onto.
\hypertarget{question-66-3}{%
\subsection{Question 66}\label{question-66-3}}
Suppose that \(X\) has universal cover \(p : \tilde X \to X\) and let
\(A \subset X\) be a subspace with \(p(\tilde a) = a \in A\). Show that
there is a group isomorphism \[
\ker(\pi_1 (A, a) \to \pi_1 (X, a)) \cong \pi_1 (p\inv A, \tilde a)
.\]
\hypertarget{question-67-3}{%
\subsection{Question 67}\label{question-67-3}}
Prove that every continuous map \(f : \RP^2 \to S^1\) is homotopic to a
constant.
\begin{quote}
Hint: think about covering spaces.
\end{quote}
\hypertarget{question-68-3}{%
\subsection{Question 68}\label{question-68-3}}
Prove that the free group on two generators contains a subgroup
isomorphic to the free group on five generators by constructing an
appropriate covering space of \(S^1 \lor S^1\).
\hypertarget{question-69-3}{%
\subsection{Question 69}\label{question-69-3}}
Use covering space theory to show that \(\ZZ_2 \ast \ZZ\) (that is, the
free product of \(\ZZ_2\) and \(\ZZ\)) has two subgroups of index 2
which are not isomorphic to each other.
\hypertarget{question-70-3}{%
\subsection{Question 70}\label{question-70-3}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that any finite index subgroup of a finitely generated free group
is free. State clearly any facts you use about the fundamental groups
of graphs.
\item
Prove that if \(N\) is a nontrivial normal subgroup of infinite index
in a finitely generated free group \(F\) , then \(N\) is not finitely
generated.
\end{enumerate}
\hypertarget{question-71-3}{%
\subsection{Question 71}\label{question-71-3}}
Let \(p : X \to Y\) be a covering space, where \(X\) is compact,
path-connected, and locally path-connected.
Prove that for each \(x \in X\) the set \(p\inv (\theset{p(x)})\) is
finite, and has cardinality equal to the index of \(p_* (\pi_1 (X, x))\)
in \(\pi_1 (Y, p(x))\).
\hypertarget{question-72-3}{%
\subsection{Question 72}\label{question-72-3}}
Compute the homology of the one-point union of \(S^1 \times S^1\) and
\(S^1\).
\hypertarget{question-73-3}{%
\subsection{Question 73}\label{question-73-3}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
State the \textbf{Mayer-Vietoris theorem}.
\item
Use it to compute the homology of the space \(X\) obtained by gluing
two solid tori along their boundary as follows. Let \(\DD^2\) be the
unit disk and let \(S^1\) be the unit circle in the complex plane
\(\CC\). Let \(A = S^1 \times \DD^2\) and \(B = \DD^2 \times S^1\).
Then \(X\) is the quotient space of the disjoint union
\(A \disjoint B\) obtained by identifying \((z, w) \in A\) with
\((zw^3 , w) \in B\) for all \((z, w) \in S^1 \times S^1\).
\end{enumerate}
\hypertarget{question-74-3}{%
\subsection{Question 74}\label{question-74-3}}
Let \(A\) and \(B\) be circles bounding disjoint disks in the plane
\(z = 0\) in \(\RR^3\). Let \(X\) be the subset of the upper half-space
of \(\RR^3\) that is the union of the plane \(z = 0\) and a
(topological) cylinder \(C\) that intersects the plane in
\(\partial C = A \cup B\).
Compute \(H_* (X)\) using the Mayer--Vietoris sequence.
\hypertarget{question-75-3}{%
\subsection{Question 75}\label{question-75-3}}
Compute the integral homology groups of the space \(X = Y \cup Z\) which
is the union of the sphere \[
Y = \theset{x^2 + y^2 + z^2 = 1}
\] and the ellipsoid \[
Z = \theset{x^2 + y^2 + {z^2 \over 4} = 1}
.\]
\hypertarget{question-76-3}{%
\subsection{Question 76}\label{question-76-3}}
Let \(X\) consist of two copies of the solid torus \(\DD^2 \times S^1\),
glued together by the identity map along the boundary torus
\(S^1 \times S^1\). Compute the homology groups of \(X\).
\hypertarget{question-77-3}{%
\subsection{Question 77}\label{question-77-3}}
Use the circle along which the connected sum is performed and the
Mayer-Vietoris long exact sequence to compute the homology of
\(\RP^2 \# \RP^2\).
\hypertarget{question-78-3}{%
\subsection{Question 78}\label{question-78-3}}
Express a Klein bottle as the union of two annuli.
Use the Mayer-Vietoris sequence and this decomposition to compute its
homology.
\hypertarget{question-79-3}{%
\subsection{Question 79}\label{question-79-3}}
Let \(X\) be the topological space obtained by identifying three
distinct points on \(S^2\). Calculate \(H_* (X; Z)\).
\hypertarget{question-80-3}{%
\subsection{Question 80}\label{question-80-3}}
Compute \(H_0\) and \(H_1\) of the complete graph \(K_5\) formed by
taking five points and joining each pair with an edge.
\hypertarget{question-81-3}{%
\subsection{Question 81}\label{question-81-3}}
Compute the homology of the subset \(X \subset \RR^3\) formed as the
union of the unit sphere, the \(z\dash\)axis, and the \(xy\dash\)plane.
\hypertarget{question-82-3}{%
\subsection{Question 82}\label{question-82-3}}
Let \(X\) be the topological space formed by filling in two circles
\(S^1 \times \theset{p_1 }\) and \(S^1 \times \theset{p_2 }\) in the
torus \(S^1 \times S^1\) with disks.
Calculate the fundamental group and the homology groups of \(X\).
\hypertarget{question-83-3}{%
\subsection{Question 83}\label{question-83-3}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Consider the quotient space \[
T^2 = \RR^2 / \sim \qtext{where} (x, y) \sim (x + m, y + n) \text{ for } m, n \in \ZZ
,\] and let \(A\) be any \(2 \times 2\) matrix whose entries are
integers such that \(\det A = 1\).
Prove that the action of \(A\) on \(\RR^2\) descends via the quotient
\(\RR^2 \to T^2\) to induce a homeomorphism \(T^2 \to T^2\).
\item
Using this homeomorphism of \(T^2\), we define a new quotient space \[
T_A^3 \definedas {T^2\cross \RR \over \sim} \qtext{where} ((x, y), t) \sim (A(x, y), t + 1)
\] Compute \(H_1 (T_A^3 )\) if
\(A=\left(\begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array}\right).\)
\end{enumerate}
\hypertarget{question-84-3}{%
\subsection{Question 84}\label{question-84-3}}
Give a self-contained proof that the zeroth homology \(H_0 (X)\) is
isomorphic to \(\ZZ\) for every path-connected space \(X\).
\hypertarget{question-85-3}{%
\subsection{Question 85}\label{question-85-3}}
Give a self-contained proof that the zeroth homology \(H_0 (X)\) is
isomorphic to \(\ZZ\) for every path-connected space \(X\).
\hypertarget{question-86-2}{%
\subsection{Question 86}\label{question-86-2}}
It is a fact that if \(X\) is a single point then
\(H_1 (X) = \theset{0}\).
One of the following is the correct justification of this fact in terms
of the singular chain complex.
Which one is correct and why is it correct?
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
\(C_1 (X) = \theset{0}\).
\item
\(C_1 (X) \neq \theset{0}\) but \(\ker \partial_1 = 0\) with
\(\partial_1 : C_1 (X) \to C_0 (X)\).
\item
\(\ker \partial_1 \neq 0\) but \(\ker \partial_1 = \im\partial_2\)
with \(\partial_2 : C_2 (X) \to C_1 (X)\).
\end{enumerate}
\hypertarget{question-87-2}{%
\subsection{Question 87}\label{question-87-2}}
Compute the homology groups of \(S^2 \times S^2\).
\hypertarget{question-88-2}{%
\subsection{Question 88}\label{question-88-2}}
Let \(\Sigma\) be a closed orientable surface of genus \(g\). Compute
\(H_i(S^1 \times \Sigma; Z)\) for \(i = 0, 1, 2, 3\).
\hypertarget{question-89-2}{%
\subsection{Question 89}\label{question-89-2}}
Prove that if \(A\) is a retract of the topological space \(X\), then
for all nonnegative integers \(n\) there is a group \(G_n\) such that
\(H_{n} (X) \cong H_{n} (A) \oplus G_n\).
\begin{quote}
Here \(H_{n}\) denotes the \(n\)th singular homology group with integer
coefficients.
\end{quote}
\hypertarget{question-90-2}{%
\subsection{Question 90}\label{question-90-2}}
Does there exist a map of degree 2013 from \(S^2\) to \(S^2\)?
\hypertarget{question-91-2}{%
\subsection{Question 91}\label{question-91-2}}
For each \(n \in \ZZ\) give an example of a map \(f_n : S^2 \to S^2\) of degree \(n\).
For which \(n\) must any such map have a fixed point?
\hypertarget{question-92-2}{%
\subsection{Question 92}\label{question-92-2}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
What is the degree of the antipodal map on the \(n\)-sphere? (No
justification required)
\item
Define a CW complex homeomorphic to the real projective
\(n\dash\)space \(\RP^n\).
\item
Let \(\pi : \RP^n \to X\) be a covering map. Show that if \(n\) is
even, \(\pi\) is a homeomorphism.
\end{enumerate}
\hypertarget{question-93-2}{%
\subsection{Question 93}\label{question-93-2}}
Let \(A \subset X\). Prove that the relative homology group
\(H_0 (X, A)\) is trivial if and only if \(A\) intersects every path
component of \(X\).
\hypertarget{question-94-2}{%
\subsection{Question 94}\label{question-94-2}}
Let \(\DD\) be a closed disk embedded in the torus
\(T = S^1 \times S^1\) and let \(X\) be the result of removing the
interior of \(\DD\) from \(T\) . Let \(B\) be the boundary of \(X\),
i.e.~the circle boundary of the original closed disk \(\DD\).
\hypertarget{question-95-2}{%
\subsection{Question 95}\label{question-95-2}}
Let \(\DD\) be a closed disk embedded in the torus
\(T = S^1 \times S^1\) and let \(X\) be the result of removing the
interior of \(\DD\) from \(T\) . Let \(B\) be the boundary of \(X\),
i.e.~the circle boundary of the original closed disk \(\DD\).
Compute \(H_i (T, B)\) for all \(i\).
\hypertarget{question-96-2}{%
\subsection{Question 96}\label{question-96-2}}
For any \(n \geq 1\) let
\(S^n = \theset{(x_0 , \cdots , x_n )\mid \sum x_i^2 =1}\) denote the
\(n\)-dimensional unit sphere and let
\[E = \theset{(x_0 , \cdots , x_n )\mid x_n = 0}\] denote the
``equator''.
Find, for all \(k\), the relative homology \(H_k (S^n , E)\).
\hypertarget{question-97-2}{%
\subsection{Question 97}\label{question-97-2}}
Suppose that \(U\) and \(V\) are open subsets of a space \(X\), with
\(X = U \cup V\). Find, with proof, a general formula relating the Euler
characteristics of \(X, U, V\), and \(U \cap V\).
\begin{quote}
You may assume that the homologies of \(U, V, U \cap V, X\) are
finite-dimensional so that their Euler characteristics are well defined.
\end{quote}
\hypertarget{question-98-2}{%
\subsection{Question 98}\label{question-98-2}}
Describe a cell complex structure on the torus \(T = S^1 \times S^1\)
and use this to compute the homology groups of \(T\).
\begin{quote}
To justify your answer you will need to consider the attaching maps in
detail.
\end{quote}
\hypertarget{question-99-2}{%
\subsection{Question 99}\label{question-99-2}}
Let \(X\) be the space formed by identifying the boundary of a Möbius
band with a meridian of the torus \(T^2\).
Compute \(\pi_1 (X)\) and \(H_* (X)\).
\hypertarget{question-100-2}{%
\subsection{Question 100}\label{question-100-2}}
Compute the homology of the space \(X\) obtained by attaching a Möbius
band to \(\RP^2\) via a homeomorphism of its boundary circle to the
standard \(\RP^1\) in \(\RP^2\).
\hypertarget{question-101-2}{%
\subsection{Question 101}\label{question-101-2}}
Let \(X\) be a space obtained by attaching two 2-cells to the torus
\(S^1 \times S^1\), one along a simple closed curve
\(\theset{x} \times S^1\) and the other along \(\theset{y} \times S^1\)
for two points \(x \neq y\) in \(S^1\) .
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Draw an embedding of \(X\) in \(\RR^3\) and calculate its fundamental
group.
\item
Calculate the homology groups of \(X\).
\end{enumerate}
\hypertarget{question-102-2}{%
\subsection{Question 102}\label{question-102-2}}
Let \(X\) be the space obtained as the quotient of a disjoint union of a
2-sphere \(S^2\) and a torus \(T = S^1 \times S^1\) by identifying the
equator in \(S^2\) with a circle \(S^1 \times \theset{p}\) in \(T\).
Compute the homology groups of \(X\).
\hypertarget{question-103-2}{%
\subsection{Question 103}\label{question-103-2}}
Let \(X = S^2 / \theset{p_1 = \cdots = p_k }\) be the topological space
obtained from the 2-sphere by identifying \(k\) distinct points on it
(\(k \geq 2\)).
Find:
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
The fundamental group of \(X\).
\item
The Euler characteristic of \(X\).
\item
The homology groups of \(X\).
\end{enumerate}
\hypertarget{question-104-2}{%
\subsection{Question 104}\label{question-104-2}}
Let \(X\) be the topological space obtained as the quotient of the
sphere
\(S^2 = \theset{\vector x \in \RR^3 \suchthat \norm{\vector x} = 1}\)
under the equivalence relation \(\vector x \sim -\vector x\) for
\(\vector x\) in the equatorial circle, i.e.~for
\(\vector x = (x_1, x_2, 0)\).
Calculate \(H_* (X; \ZZ)\) from a CW complex description of \(X\).
\hypertarget{question-105-2}{%
\subsection{Question 105}\label{question-105-2}}
Compute, by any means available, the fundamental group and all the
homology groups of the space obtained by gluing one copy \(A\) of
\(S^2\) to another copy \(B\) of \(S^2\) via a two-sheeted covering
space map from the equator of \(A\) onto the equator of \(B\).
\hypertarget{question-106-2}{%
\subsection{Question 106}\label{question-106-2}}
Use cellular homology to calculate the homology groups of
\(S^n \times S^m\).
\hypertarget{question-107-2}{%
\subsection{Question 107}\label{question-107-2}}
Denote the points of \(S^1 \times I\) by \((z, t)\) where \(z\) is a
unit complex number and \(0 \leq t \leq 1\). Let \(X\) denote the
quotient of \(S^1 \times I\) given by identifying \((z, 1)\) and
\((z^2 , 0)\) for all \(z \in S^1\).
Give a cell structure, with attaching maps, for \(X\), and use it to
compute \(\pi_1 (X, \ast)\) and \(H_1 (X)\).
\hypertarget{question-108-2}{%
\subsection{Question 108}\label{question-108-2}}
Let \(X = S_1 \cup S_2 \subset \RR^3\) be the union of two spheres of
radius 2, one about \((1, 0, 0)\) and the other about \((-1, 0, 0)\),
i.e.~ \begin{align*}
S_1 &= \theset{(x, y,z) \mid (x-1)^2 + y^2 +z^2 = 4} \\
S_2 &= \theset{(x, y, z) \mid (x + 1)^2 + y^2 + z^2 = 4}
.\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Give a description of \(X\) as a CW complex.
\item
Write out the cellular chain complex of \(X\).
\item
Calculate \(H_* (X; Z)\).
\end{enumerate}
\hypertarget{question-109-2}{%
\subsection{Question 109}\label{question-109-2}}
Let \(M\) and \(N\) be finite CW complexes.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Describe a cellular structure of \(M \times N\) in terms of the
cellular structures of \(M\) and \(N\).
\item
Show that the Euler characteristic of \(M \times N\) is the product of
the Euler characteristics of \(M\) and \(N\).
\end{enumerate}
\hypertarget{question-110-2}{%
\subsection{Question 110}\label{question-110-2}}
Suppose the space \(X\) is obtained by attaching a 2-cell to the torus
\(S^1 \times S^1\).
In other words, \(X\) is the quotient space of the disjoint union of the
closed disc \(\DD^2\) and the torus \(S^1 \times S^1\) by the
identification \(x \sim f(x)\) where \(S^1\) is the boundary of the unit
disc and \(f : S^1 \to S^1 \times S^1\) is a continuous map.
What are the possible homology groups of \(X\)? Justify your answer.
\hypertarget{question-111-2}{%
\subsection{Question 111}\label{question-111-2}}
Let \(X\) be the topological space constructed by attaching a closed
2-disk \(\DD^2\) to the circle \(S^1\) by a continuous map
\(\partial\DD^2 \to S^1\) of degree \(d > 0\) on the boundary circle.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that every continuous map \(X \to X\) has a fixed point.
\item
Explain how to obtain all the connected covering spaces of \(X\).
\end{enumerate}
\hypertarget{question-112-2}{%
\subsection{Question 112}\label{question-112-2}}
Let \(X\) be a topological space obtained by attaching a 2-cell to
\(\RP^2\) via some map \(f: S^1 \to \RP^2\) .
What are the possibilities for the homology \(H_* (X; Z)\)?
\hypertarget{question-113-2}{%
\subsection{Question 113}\label{question-113-2}}
For any integer \(n \geq 2\) let \(X_n\) denote the space formed by
attaching a 2-cell to the circle \(S^1\) via the attaching map
\begin{align*}
a_n: S^1 &\to S^1 \\
e^{i\theta} &\mapsto e^{in\theta}
.\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Compute the fundamental group and the homology of \(X_n\).
\item
Exactly one of the \(X_n\) (for \(n \geq 2\)) is homeomorphic to a
surface. Identify, with proof, both this value of \(n\) and the
surface that \(X_n\) is homeomorphic to (including a description of
the homeomorphism).
\end{enumerate}
\hypertarget{question-114-2}{%
\subsection{Question 114}\label{question-114-2}}
Let \(X\) be a CW complex and let \(\pi : Y \to X\) be a covering space.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(Y\) is compact iff \(X\) is compact and \(\pi\) has finite
degree.
\item
Assume that \(\pi\) has finite degree \(d\). Show that
\(\chi (Y ) = d \chi (X)\).
\item
Let \(\pi :\RP^N \to X\) be a covering map. Show that if \(N\) is
even, \(\pi\) is a homeomorphism.
\end{enumerate}
\hypertarget{question-115-2}{%
\subsection{Question 115}\label{question-115-2}}
For topological spaces \(X, Y\) the \textbf{mapping cone} \(C(f )\) of a
map \(f : X \to Y\) is defined to be the quotient space \begin{align*}
(X \times [0, 1])\disjoint Y / \sim &\qtext{where} \\
(x, 0) &\sim (x', 0) \qtext{for all} x, x' \in X \text{ and } \\
(x, 1) &\sim f (x) \qtext{for all } x \in X
.\end{align*}
Let \(\phi_k : S^n \to S^n\) be a degree \(k\) map for some integer
\(k\).
Find \(H_i(C(\phi_k ))\) for all \(i\).
\hypertarget{question-116-2}{%
\subsection{Question 116}\label{question-116-2}}
Prove that a finite CW complex must be Hausdorff.
\hypertarget{question-117-2}{%
\subsection{Question 117}\label{question-117-2}}
State the classification theorem for surfaces (compact, without
boundary, but not necessarily orientable). For each surface in the
classification, indicate the structure of the first homology group and
the value of the Euler characteristic.
Also, explain briefly how the 2-holed torus and the connected sum
\(\RP^2 \# \RP^2\) fit into the classification.
\hypertarget{question-118-2}{%
\subsection{Question 118}\label{question-118-2}}
Give a list without repetitions of all compact surfaces (orientable or
non-orientable and with or without boundary) that have Euler
characteristic negative one.
Explain why there are no repetitions on your list.
\hypertarget{question-119-2}{%
\subsection{Question 119}\label{question-119-2}}
Describe the topological classification of all compact connected
surfaces \(M\) without boundary having Euler characteristic
\(\chi(M )\geq -2\).
No proof is required.
\hypertarget{question-120-2}{%
\subsection{Question 120}\label{question-120-2}}
How many surfaces are there, up to homeomorphism, which are:
\begin{itemize}
\tightlist
\item
Connected,
\item
Compact,
\item
Possibly with boundary,
\item
Possibly nonorientable, and
\item
With Euler characteristic -3?
\end{itemize}
Describe one representative from each class.
\hypertarget{question-121-2}{%
\subsection{Question 121}\label{question-121-2}}
Prove that the Euler characteristic of a compact surface with boundary
which has \(k\) boundary components is less than or equal to \(2 - k\).
\hypertarget{question-122-2}{%
\subsection{Question 122}\label{question-122-2}}
Let \(X\) be the topological space obtained as the quotient space of a
regular \(2n\dash\)gon (\(n \geq 2\)) in \(\RR^2\) by identifying
opposite edges via translations in the plane.
First show that X is a compact, orientable surface without boundary, and
then identify its genus as a function of \(n\).
\hypertarget{question-123-2}{%
\subsection{Question 123}\label{question-123-2}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that any compact connected surface with nonempty boundary is
homotopy equivalent to a wedge of circles
\begin{quote}
Hint: you may assume that any compact connected surface without
boundary is given by identifying edges of a polygon in pairs.
\end{quote}
\item
For each surface appearing in the classification of compact surfaces
with nonempty boundary, say how many circles are needed in the wedge
from part (a).
\begin{quote}
Hint: you should be able to do this even if you have not done part
(a).
\end{quote}
\end{enumerate}
\hypertarget{question-124-2}{%
\subsection{Question 124}\label{question-124-2}}
Let \(M_g^2\) be the compact oriented surface of genus \(g\).
Show that there exists a continuous map \(f : M_g^2 \to S^2\) which is
not homotopic to a constant map.
\hypertarget{question-125-2}{%
\subsection{Question 125}\label{question-125-2}}
Show that \(\RP^2 \lor S^1\) is \emph{not} homotopy equivalent to a
compact surface (possibly with boundary).
\hypertarget{question-126-1}{%
\subsection{Question 126}\label{question-126-1}}
Identify (with proof, but of course you can appeal to the classification
of surfaces) all of the compact surfaces without boundary that have a
cell decomposition having exactly one 0-cell and exactly two 1-cells
(with no restriction on the number of cells of dimension larger than 1).
\hypertarget{question-127-1}{%
\subsection{Question 127}\label{question-127-1}}
For any natural number \(g\) let \(\Sigma_g\) denote the (compact,
orientable) surface of genus \(g\).
Determine, with proof, all values of \(g\) with the property that there
exists a covering space \(\pi : \Sigma_5 \to \Sigma_g\) .
\begin{quote}
Hint: How does the Euler characteristic behave for covering spaces?
\end{quote}
\hypertarget{question-128-1}{%
\subsection{Question 128}\label{question-128-1}}
Find \emph{all} surfaces, orientable and non-orientable, which can be
covered by a closed surface (i.e.~compact with empty boundary) of genus
2. Prove that your answer is correct.
\hypertarget{question-129-1}{%
\subsection{Question 129}\label{question-129-1}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Write down (without proof) a presentation for \(\pi_1 (\Sigma_2 , p)\)
where \(\Sigma_2\) is a closed, connected, orientable genus 2 surface
and \(p\) is any point on \(\Sigma_2\) .
\item
Show that \(\pi_1 (\Sigma_2 , p)\) is not abelian by showing that it
surjects onto a free group of rank 2.
\item
Show that there is no covering space map from \(\Sigma_2\) to
\(S^1 \times S^1\) . You may use the fact that
\(\pi_1 (S^1 \times S^1 ) \cong \ZZ^2\) together with the result in
part (b) above.
\end{enumerate}
\hypertarget{question-130-1}{%
\subsection{Question 130}\label{question-130-1}}
Give an example, with explanation, of a closed curve in a surface which
is not nullhomotopic but is nullhomologous.
\hypertarget{question-131-1}{%
\subsection{Question 131}\label{question-131-1}}
Let \(M\) be a compact orientable surface of genus \(2\) without
boundary.
Give an example of a pair of loops \[\gamma_0 , \gamma_1 : S^1 \to M\]
with \(\gamma_0 (1) = \gamma_1 (1)\) such that there is a continuous map
\(\Gamma: [0, 1] \times S^1 \to M\) such that \[
\Gamma(0, t) = \gamma_0 (t), \quad \Gamma(1, t) = \gamma_1 (t) \qtext{for all} t \in S^1
,\] but such that there is no such map \(\Gamma\) with the additional
property that \(\Gamma_s (1) = \gamma_0 (1)\) for all \(s \in [0, 1]\).
(You are not required to prove that your example satisfies the stated
property.)
\hypertarget{question-132-1}{%
\subsection{Question 132}\label{question-132-1}}
Let \(C\) be a cylinder. Let \(I\) and \(J\) be disjoint closed intervals
contained in \(\partial C\).
What is the Euler characteristic of the surface \(S\) obtained by
identifying \(I\) and \(J\)?
Can all surfaces with nonempty boundary and with this Euler
characteristic be obtained from this construction?
\hypertarget{question-133-1}{%
\subsection{Question 133}\label{question-133-1}}
Let \(\Sigma\) be a compact connected surface and let
\(p_1, \cdots , p_k \in \Sigma\).
Prove that \(H_2 \qty{\Sigma \setminus \union_{i=1}^k \theset{p_i} } = 0\).
\hypertarget{question-134-1}{%
\subsection{Question 134}\label{question-134-1}}
Prove or disprove:
Every continuous map from \(S^2\) to \(S^2\) has a fixed point.
\hypertarget{question-135-1}{%
\subsection{Question 135}\label{question-135-1}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
State the \textbf{Lefschetz Fixed Point Theorem} for a finite
simplicial complex \(X\).
\item
Use degree theory to prove this theorem in case \(X = S^n\).
\end{enumerate}
\hypertarget{question-136-1}{%
\subsection{Question 136}\label{question-136-1}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\tightlist
\item
Prove that for every continuous map \(f : S^2 \to S^2\) there is some
\(x\) such that either \(f (x) = x\) or \(f (x) = -x\).
\end{enumerate}
\begin{quote}
Hint: Where \(A : S^2 \to S^2\) is the antipodal map, you are being
asked to prove that either \(f\) or \(A \circ f\) has a fixed point.
\end{quote}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
Exhibit a continuous map \(f : S^3 \to S^3\) such that for every
\(x \in S^3\), \(f (x)\) is equal to neither \(x\) nor \(-x\).
\end{enumerate}
\begin{quote}
Hint: It might help to first think about how you could do this for a map
from \(S^1\) to \(S^1\).
\end{quote}
\hypertarget{question-137-1}{%
\subsection{Question 137}\label{question-137-1}}
Show that a map \(S^n \to S^n\) has a fixed point unless its degree is
equal to the degree of the antipodal map \(a : x \mapsto -x\).
\hypertarget{question-138-1}{%
\subsection{Question 138}\label{question-138-1}}
Give an example of a homotopy class of maps of \(S^1 \lor S^1\) each
member of which must have a fixed point, and also an example of a map of
\(S^1 \lor S^1\) which doesn't have a fixed point.
\hypertarget{question-139-1}{%
\subsection{Question 139}\label{question-139-1}}
Prove or disprove:
Every map from \(\RP^2 \lor \RP^2\) to itself has a fixed point.
\hypertarget{question-140-1}{%
\subsection{Question 140}\label{question-140-1}}
Find all homotopy classes of maps from \(S^1 \times \DD^2\) to itself
such that every element of the homotopy class has a fixed point.
\hypertarget{question-141}{%
\subsection{Question 141}\label{question-141}}
Let \(X\) and \(Y\) be finite connected simplicial complexes and let
\(f : X \to Y\) and \(g : Y \to X\) be basepoint-preserving maps.
Show that no matter how you homotope
\(f \lor g : X \lor Y \to X \lor Y\), there will always be a fixed
point.
\hypertarget{question-142}{%
\subsection{Question 142}\label{question-142}}
Let \(f = \id_{\RP^2} \lor \ast\) and \(g = \ast \lor \id_{S^1}\) be two
maps of \(\RP^2 \lor S^1\) to itself where \(\ast\) denotes the constant
map of a space to its basepoint.
Show that one map is homotopic to a map with no fixed points, while the
other is not.
\hypertarget{question-143}{%
\subsection{Question 143}\label{question-143}}
View the torus \(T\) as the quotient space \(\RR^2 /\ZZ^2\).
Let \(A\) be a \(2 \times 2\) matrix with \(\ZZ\) coefficients.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that the linear map \(A : \RR^2 \to \RR^2\) descends to a
continuous map \(\mca : T \to T\).
\item
Show that, with respect to a suitable basis for \(H_1 (T ; \ZZ)\), the
matrix \(A\) represents the map induced on \(H_1\) by \(\mca\).
\item
Find a necessary and sufficient condition on \(A\) for \(\mca\) to be
homotopic to the identity.
\item
Find a necessary and sufficient condition on \(A\) for \(\mca\) to be
homotopic to a map with no fixed points.
\end{enumerate}
\hypertarget{question-144}{%
\subsection{Question 144}\label{question-144}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Use the Lefschetz fixed point theorem to show that any degree-one map
\(f : S^2 \to S^2\) has at least one fixed point.
\item
Give an example of a map \(f : \RR^2 \to \RR^2\) having no fixed
points.
\item
Give an example of a degree-one map \(f : S^2 \to S^2\) having exactly
one fixed point.
\end{enumerate}
\hypertarget{question-145}{%
\subsection{Question 145}\label{question-145}}
For which compact connected surfaces \(\Sigma\) (with or without
boundary) does there exist a continuous map \(f : \Sigma \to \Sigma\)
that is homotopic to the identity and has no fixed point?
Explain your answer fully.
\hypertarget{question-146}{%
\subsection{Question 146}\label{question-146}}
Use the Brouwer fixed point theorem to show that an \(n \times n\)
matrix with nonnegative entries has a real eigenvalue.
\hypertarget{question-147}{%
\subsection{Question 147}\label{question-147}}
Prove that \(\RR^2\) is not homeomorphic to \(\RR^n\) for \(n > 2\).
\hypertarget{question-148}{%
\subsection{Question 148}\label{question-148}}
Prove that any finite tree is contractible, where a \textbf{tree} is a
connected graph that contains no closed edge paths.
\hypertarget{question-149}{%
\subsection{Question 149}\label{question-149}}
Show that any continuous map \(f : \RP^2 \to S^1 \times S^1\) is
necessarily null-homotopic.
\hypertarget{question-150}{%
\subsection{Question 150}\label{question-150}}
Prove that, for \(n \geq 2\), every continuous map \(f: \RP^n \to S^1\)
is null-homotopic.
\hypertarget{question-151}{%
\subsection{Question 151}\label{question-151}}
Let \(S^2 \to \RP^2\) be the universal covering map.
Is this map null-homotopic? Give a proof of your answer.
\hypertarget{question-152}{%
\subsection{Question 152}\label{question-152}}
Suppose that a map \(f : S^3 \times S^3 \to \RP^3\) is not surjective.
Prove that \(f\) is homotopic to a constant function.
\hypertarget{question-153}{%
\subsection{Question 153}\label{question-153}}
Prove that there does not exist a continuous map \(f : S^2 \to S^2\)
from the unit sphere in \(\RR^3\) to itself such that
\(f (\vector x) \perp \vector x\) (as vectors in \(\RR^3\)) for all
\(\vector x \in S^2\).
\hypertarget{question-154}{%
\subsection{Question 154}\label{question-154}}
Let \(f\) be the map of \(S^1 \times [0, 1]\) to itself defined by \[
f (e^{i\theta} , s) = (e^{i(\theta+2\pi s)} , s)
,\] so that \(f\) restricts to the identity on the two boundary circles
of \(S^1 \times [0, 1]\).
Show that \(f\) is homotopic to the identity by a homotopy \(f_t\) that
is stationary on one of the boundary circles, but not by any homotopy
that is stationary on both boundary circles.
\begin{quote}
Hint: Consider what \(f\) does to the path
\(s \mapsto (e^{i\theta_0} , s)\) for fixed \(e^{i\theta_0} \in S^1\).
\end{quote}
\hypertarget{question-155}{%
\subsection{Question 155}\label{question-155}}
Show that \(S^1 \times S^1\) is not the union of two disks (where there
is no assumption that the disks intersect along their boundaries).
\hypertarget{question-156}{%
\subsection{Question 156}\label{question-156}}
Suppose that \(X \subset Y\) and \(X\) is a deformation retract of
\(Y\).
Show that if \(X\) is a path connected space, then \(Y\) is path
connected.
\hypertarget{question-157}{%
\subsection{Question 157}\label{question-157}}
Do one of the following:
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Give (with justification) a contractible subset \(X \subset \RR^2\)
which is not a retract of \(\RR^2\) .
\item
Give (with justification) two topological spaces that have the same
homology groups but that are not homotopy equivalent.
\end{enumerate}
\hypertarget{question-158}{%
\subsection{Question 158}\label{question-158}}
Recall that the \textbf{suspension} of a topological space, denoted
\(SX\), is the quotient space formed from \(X \times [-1, 1]\) by
identifying \((x, 1)\) with \((y, 1)\) for all \(x, y \in X\), and also
identifying \((x, -1)\) with \((y, -1)\) for all \(x, y \in X\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(SX\) is the union of two contractible subspaces.
\item
Prove that if \(X\) is path-connected then
\(\pi_1 (SX) = \theset{0}\).
\item
For all \(n \geq 1\), prove that \(H_{n} (X) \cong H_{n+1} (SX)\).
\end{enumerate}
\end{document}
| {
"alphanum_fraction": 0.6576964677,
"avg_line_length": 31.4868287003,
"ext": "tex",
"hexsha": "9c7f2d209bfe92dc4c4bb21f3677b40b12ab5a1e",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-05-19T07:12:00.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-05-19T07:12:00.000Z",
"max_forks_repo_head_hexsha": "beba581e5b32f54ff469ed603a0885d51591e5fc",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "dzackgarza/MakeMeAQual_UGA",
"max_forks_repo_path": "Combined_Questions.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "beba581e5b32f54ff469ed603a0885d51591e5fc",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "dzackgarza/MakeMeAQual_UGA",
"max_issues_repo_path": "Combined_Questions.tex",
"max_line_length": 150,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "beba581e5b32f54ff469ed603a0885d51591e5fc",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "dzackgarza/MakeMeAQual_UGA",
"max_stars_repo_path": "Combined_Questions.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-11T17:23:01.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-05-02T02:57:24.000Z",
"num_tokens": 76229,
"size": 216346
} |
% -*- LaTeX -*-
% -*- coding: utf-8 -*-
%
% michael a.g. aïvázis
% orthologue
% (c) 1998-2018 all rights reserved
%
% shortcuts to references
\def\slideref#1{{Slide~\ref{slide:#1}}}
\def\algref#1{{Alg.~\ref{alg:#1}}}
\def\alglineref#1{{line~\ref{line:#1}}}
\def\eqref#1{{Eq.~\ref{eq:#1}}}
\def\figref#1{{Fig.~\ref{fig:#1}}}
\def\secref#1{{Sec.~\ref{sec:#1}}}
\def\tabref#1{{Table~\ref{tab:#1}}}
\def\lstref#1{{Listing~\ref{lst:#1}}}
\def\lstlineref#1{{line~\ref{line:#1}}}
\def\supercite#1{\mbox{\scriptsize\raise 1ex\hbox{\cite{#1}}}}
% macros
\def\bydef{\mathrel{\mathop:}=}
\def\href#1{{\footnotesize\bfseries\url{#1}}}
\def\defeq{\mathrel{\mathop:}=}
\def\extremum#1{\stackrel[#1]{}{\mbox{\rm ext}}\,}
\def\maximum#1{\stackrel[#1]{}{\mbox{\rm max}}\,}
\def\minimum#1{\stackrel[#1]{}{\mbox{\rm min}}\,}
\def\optimum#1{\stackrel[#1]{}{\mbox{\rm opt}}\,}
\def\GNU{\mbox{\tt\small GNU}}
\def\GSL{\mbox{\tt\small GSL}}
\def\RANLUX{\mbox{\tt\small RANLUX}}
\def\NULL{\mbox{\tt\small NULL}}
\def\cc{\mbox{\tt\small C}}
\def\cpp{\mbox{\tt\small C++}}
%\def\cpp{\mbox{\tt C\raise.4ex\hbox{++}}}
\def\fortran{\mbox{\tt\small FORTRAN}}
\def\f90{\mbox{\tt\small FORTRAN90}}
\def\mpi{{\tt MPI}}
\def\sql{{\tt SQL}}
\def\html{{\tt html}}
\def\th#1{\mbox{$#1^{\rm th}$}}
\def\pyre{{\color{pyre@pipe}\tt pyre}}
\def\class#1{\mbox{\tt #1}}
\def\component#1{\mbox{\tt #1}}
\def\function#1{\mbox{\tt #1}}
\def\identifier#1{\mbox{\tt #1}}
\def\keyword#1{\mbox{\tt #1}}
\def\literal#1{\mbox{\tt #1}}
\def\method#1{\mbox{\tt #1}}
\def\operator#1{\mbox{\tt #1}}
\def\order#1{\mbox{$\mathcal{O}(#1)$}}
\def\package#1{\mbox{\tt\small #1}}
\def\srcfile#1{\mbox{\tt #1}}
\def\TODO#1{{%
\subsubsection*{Still to do}%
\scriptsize\tt%
\begin{list}{\leftpointright}{} #1 \end{list}}}
% end of file
| {
"alphanum_fraction": 0.6147403685,
"avg_line_length": 27.5538461538,
"ext": "tex",
"hexsha": "4eeeaf567f78e28a8e75d0f53f30b67ca785ac28",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "179359634a7091979cced427b6133dd0ec4726ea",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "BryanRiel/pyre",
"max_forks_repo_path": "doc/gauss/config/macros.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "179359634a7091979cced427b6133dd0ec4726ea",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "BryanRiel/pyre",
"max_issues_repo_path": "doc/gauss/config/macros.tex",
"max_line_length": 62,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "179359634a7091979cced427b6133dd0ec4726ea",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "BryanRiel/pyre",
"max_stars_repo_path": "doc/gauss/config/macros.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 809,
"size": 1791
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{makeidx}
\usepackage{bussproofs}
\usepackage{amsmath,amssymb}
\usepackage{tabularx}
\usepackage{pgf, tikz}
% \usepackage[margin=4cm]{geometry}
\usepackage{float}
\floatstyle{boxed}
\restylefloat{figure}
\usepackage{listings}
\lstset{language=[Objective]caml}
\input{macros}
\usetikzlibrary{arrows, automata}
\begin{document}
\title{\msat{}: a \sat{}/\smt{}/\mcsat{} library}
\author{Guillaume~Bury}
\maketitle
\section{Introduction}
The goal of the \msat{} library is to provide a way to easily
create automated theorem provers based on a \sat{} solver. More precisely,
the library, written in \ocaml{}, provides functors which, once instantiated,
provide a \sat{}, \smt{} or \mcsat{} solver.
Given the current state of the art of \smt{} solvers, where most \sat{} solvers
are written in C and heavily optimised\footnote{Some solvers now include instructions
to manage the processor's cache}, the \msat{} library does not aim to provide solvers
competitive with the existing implementations, but rather an easy way to create
reasonably efficient solvers.
\msat{} currently uses the following techniques:
\begin{itemize}
\item Two-watched-literals scheme
\item Activity-based decisions
\item Restarts
\end{itemize}
Additionally, \msat{} has the following features:
\begin{itemize}
\item Local assumptions
\item Proof/Model output
\item Adding clauses during proof search
\end{itemize}
\clearpage
\tableofcontents{}
\clearpage
\section{\sat{} Solvers: principles and formalization}\label{sec:sat}
\subsection{Idea}
\subsection{Inference rules}\label{sec:trail}
The \sat{} algorithm can be formalized as follows. During the search, the solver keeps
a set of clauses, containing the problem hypotheses and the learnt clauses, and
a trail, which is the ordered list of propagations and decisions currently made by
the solver.
Each element in the trail (decision or propagation) has a level, which is the number of decisions
appearing in the trail up to (and including) it. So for instance, propagations made before any
decisions have level $0$, and the first decision has level $1$. Propagations are written
$a \leadsto_C \top$, with $C$ the clause that caused the propagation, and decisions
$a \mapsto_n \top$, with $n$ the level of the decision. Trails are read
chronologically from left to right.
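For instance, if the hypotheses contain the clauses $C_1 = a$ and $C_2 = \neg b \lor c$, a possible trail is
\[
a \leadsto_{C_1} \top \; :: \; b \mapsto_1 \top \; :: \; c \leadsto_{C_2} \top \; :: \; d \mapsto_2 \top
,\]
where the propagation of $a$ has level $0$ (it occurs before any decision), the decision $b$ and the propagation of $c$ both have level $1$, and the decision $d$ has level $2$.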
In the following, given a trail $t$ and an atomic formula $a$, we will use the following notation:
$a \in t$ if $a \mapsto_n \top$ or $a \leadsto_C \top$ is in $t$, i.e.~$a \in t$ means that $a$ is true
in the trail $t$. In this context, the negation $\neg$ is assumed to be involutive
(i.e.~$\neg \neg a = a$), so that, if $a \in t$ then $\neg \neg a = a \in t$.
There exist two main \sat{} algorithms: \dpll{} and \cdcl{}. In both, the solver
alternates between two states: the starting state $\text{Solve}$, where propagations
and decisions are made until a conflict is detected, at which point the algorithm
enters the $\text{Analyze}$ state, where it analyzes the conflict, backtracks,
and re-enters the $\text{Solve}$ state. The difference between \dpll{} and \cdcl{}
is the treatment of conflicts during the $\text{Analyze}$ phase: \dpll{} uses
the conflict only to know where to backtrack/backjump, while in \cdcl{} the result
of the conflict analysis will be added to the set of hypotheses, so that the same
conflict does not occur again.
The $\text{Solve}$ state takes as arguments the set of hypotheses and the trail, while
the $\text{Analyze}$ state also takes as argument the current conflict clause.
We can now formalize the \cdcl{} algorithm using the inference rules
in Figure~\ref{fig:smt_rules}. In order to completely recover the \sat{} algorithm,
one must apply the rules with the following precedence and termination conditions,
depending on the current state (a sketch of the resulting control flow in \ocaml{} follows the list):
\begin{itemize}
\item If the empty clause is in $\mathbb{S}$, then the problem is unsat.
If there is no more rule to apply, the problem is sat.
\item If we are in $\text{Solve}$ mode:
\begin{enumerate}
\item First, the \irule{Conflict} rule is tried;
\item then the \irule{Propagate} rule;
\item finally, if there is nothing left to be propagated, the \irule{Decide}
rule is used.
\end{enumerate}
\item If we are in $\text{Analyze}$ mode, we have a choice concerning
the order of application. First we can observe that the rules
\irule{Analyze-Propagate}, \irule{Analyze-Decision} and \irule{Analyze-Resolution}
cannot apply simultaneously, and we will thus group them into a super-rule
\irule{Analyze}. We now have the choice of when to apply the \irule{Backjump} rule
compared to the \irule{Analyze} rule:
using \irule{Backjump} eagerly will result in the first UIP strategy, while delaying it
until later will yield other strategies, both of which are valid.
\end{itemize}
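To make this control flow concrete, the following purely illustrative \ocaml{} sketch expresses the solving loop as a functor over an abstract solver state. All module and function names here are hypothetical and do \emph{not} correspond to \msat{}'s actual interface; the code only mirrors the rules and precedences listed above.
\begin{lstlisting}
(* Illustrative only: hypothetical names, not msat's actual API. *)
module type STATE = sig
  type t
  type clause
  val has_empty_clause : t -> bool             (* empty clause among the hypotheses *)
  val find_conflict    : t -> clause option    (* Conflict rule *)
  val propagate        : t -> t option         (* Propagate rule *)
  val decide           : t -> t option         (* Decide rule *)
  val analyze_to_uip   : t -> clause -> clause (* Analyze-* rules *)
  val backjump         : t -> clause -> t      (* Backjump: learn the clause and backtrack *)
end

module MakeLoop (S : STATE) = struct
  type result = Sat | Unsat

  let rec solve (st : S.t) : result =
    if S.has_empty_clause st then Unsat
    else match S.find_conflict st with
      | Some c ->                           (* conflicts have the highest priority *)
        solve (S.backjump st (S.analyze_to_uip st c))
      | None ->
        match S.propagate st with           (* then boolean propagation *)
        | Some st' -> solve st'
        | None ->
          match S.decide st with            (* finally, a decision *)
          | Some st' -> solve st'
          | None -> Sat                     (* no rule applies: the problem is sat *)
end
\end{lstlisting}
Each branch corresponds to one of the precedences above: conflicts are handled first, then propagations, then decisions, and the search answers sat once no rule applies.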
\begin{figure}
\center{\underline{\sat{}}}
\begin{center}
\begin{tabular}{c@{\hspace{1cm}}l}
% Propagation (boolean)
\AXC{$\text{Solve}(\mathbb{S}, t)$}
\LLc{Propagate}
\UIC{$\text{Solve}(\mathbb{S}, t :: a \leadsto_C \top)$}
\DP{} &
\begin{tabular}{l}
$a \in C, C \in \mathbb{S}, \neg a \notin t$ \\
$\forall b \in C. b \neq a \rightarrow \neg b \in t$ \\
\end{tabular}
\\ \\
% Decide (boolean)
\AXC{$\text{Solve}(\mathbb{S}, t)$}
\LLc{Decide}
\UIC{$\text{Solve}(\mathbb{S}, t :: a \mapsto_n \top)$}
\DP{} &
\begin{tabular}{l}
$a \notin t, \neg a \notin t, a \in \mathbb{S}$ \\
$n = \text{max\_level}(t) + 1$ \\
\end{tabular}
\\ \\
% Conflict (boolean)
\AXC{$\text{Solve}(\mathbb{S}, t)$}
\LLc{Conflict}
\UIC{$\text{Analyze}(\mathbb{S}, t, C)$}
\DP{} &
$C \in \mathbb{S}, \forall a \in C. \neg a \in t$
\\ \\
\end{tabular}
\end{center}
\begin{center}
\begin{tabular}{c@{\hspace{.5cm}}l}
% Analyze (propagation)
\AXC{$\text{Analyze}(\mathbb{S}, t :: a \leadsto_C \top, D)$}
\LLc{Analyze-propagation}
\UIC{$\text{Analyze}(\mathbb{S}, t, D)$}
\DP{} &
$\neg a \notin D$
\\ \\
% Analyze (decision)
\AXC{$\text{Analyze}(\mathbb{S}, t :: a \mapsto_n \top, D)$}
\LLc{Analyze-decision}
\UIC{$\text{Analyze}(\mathbb{S}, t, D)$}
\DP{} &
$\neg a \notin D$
\\ \\
% Resolution
\AXC{$\text{Analyze}(\mathbb{S}, t :: a \leadsto_C \top, D)$}
\LLc{Analyze-Resolution}
\UIC{$\text{Analyze}(\mathbb{S}, t, (C - \{a\}) \cup (D - \{ \neg a\}))$}
\DP{} &
$\neg a \in D$
\\ \\
\end{tabular}
\end{center}
\begin{center}
\begin{tabular}{c@{\hspace{1cm}}l}
% BackJump
\AXC{$\text{Analyze}(\mathbb{S}, t :: a \mapsto_d \top :: t', C)$}
\LLc{Backjump}
\UIC{$\text{Solve}(\mathbb{S} \cup \{ C \}, t)$}
\DP{} &
\begin{tabular}{l}
$\text{is\_uip}(C, t :: a \mapsto_d \top :: t')$ \\
$d \leq \text{uip\_level}(C)$ \\
\end{tabular}
\\ \\
\end{tabular}
\end{center}
\center{\underline{\smt{}}}
\begin{center}
\begin{tabular}{c@{\hspace{1cm}}l}
% Conflict (theory)
\AXC{$\text{Solve}(\mathbb{S}, t)$}
\LLc{Smt}
\UIC{$\text{Solve}(\mathbb{S} \cup \{C\}, t)$}
\DP{} &
\begin{tabular}{l}
$\mathcal{T} \vdash C$ \\
\end{tabular}
\\ \\
\end{tabular}
\end{center}
\caption{Inference rules for \sat{} and \smt{}}\label{fig:smt_rules}
\end{figure}
\subsection{Invariants, correctness and completeness}
The following invariants are maintained by the inference rules in Figure~\ref{fig:smt_rules}:
\begin{description}
\item[Trail Soundness] In $\text{Solve}(\mathbb{S}, t)$, if $a \in t$ then $\neg a \notin t$
\item[Conflict Analysis] In $\text{Analyze}(\mathbb{S}, t, C)$, $C$ is a clause implied by the
clauses in $\mathbb{S}$, and $\forall a \in C. \neg a \in t$ (i.e.~$C$ is entailed by the
hypotheses, yet false in the partial model formed by the trail $t$).
\item[Equivalence] In any rule \AXC{$s_1$}\UIC{$s_2$}\DP{}, the set of hypotheses
(usually written $\mathbb{S}$) in $s_1$ is equivalent to that of $s_2$.
\end{description}
These invariants are relatively easy to prove, and provide an easy proof of correctness for
the \cdcl{} algorithm. Termination can be proved by observing that the same trail appears
at most twice during proof search (once during propagation, and possibly a second time
right after backjumping\footnote{This could be avoided by making the \irule{Backjump} rule
directly propagate the relevant literal of the conflict clause, but it needlessly
complicates the rule.}). Correctness and termination implies completeness of the \sat{}
algorithm.
\section{\smt{} solver architecture}\label{sec:smt}
\subsection{Idea}\label{sec:smt_flow}
In an \smt{} solver, after each propagation and decision, the solver sends the newly
assigned literals to the theory. The theory can then declare the
current set of literals incoherent, and give the solver a tautology in which all
literals are currently assigned to $\bot$\footnote{or rather for each literal, its negation
is assigned to $\top$}, thus prompting the solver to backtrack.
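For instance (a standard illustration, independent of any particular theory implementation),
if the trail assigns both $x < 0$ and $x > 1$ to $\top$, the theory of linear arithmetic can
answer with the tautology $\neg (x < 0) \lor \neg (x > 1)$: every literal of this clause is
false in the current trail, so it acts as a conflict clause and forces the solver to backtrack.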
We can represent a simplified version of the information flow (not taking into
account backtracking) of usual \smt{} solvers using the graph in Figure~\ref{fig:smt_flow}.
Contrary to what Figure~\ref{fig:smt_flow} might suggest, it is possible
for the theory to propagate information back to the \sat{} solver. Indeed,
some \smt{} solvers already allow the theory to propagate entailed literals back to the
\sat{} solver. However, these propagations are in practice limited by the complexity
of deciding entailment. Moreover, procedures in a \smt{} solver should be incremental
in order to get decent performance, and deciding entailment in an incremental manner
is not easy (TODO: reference needed). Doing efficient, incremental entailment is exactly
what \mcsat{} allows (see Section~\ref{sec:mcsat}).
\subsection{Formalization and Theory requirements}
An \smt{} solver is the combination of a \sat{} solver, and a theory $\mathcal{T}$.
The role of the theory $\mathcal{T}$ is to stop the proof search as soon as the trail
of the \sat{} solver is inconsistent. A trail is inconsistent iff there exists a clause
$C$, which is a tautology of $\mathcal{T}$ (thus $\mathcal{T} \vdash C$), but is not
satisfied in the current trail (each of its literals has either been decided or
propagated to false).
Thus, we can add the \irule{Smt} rule (see
Figure~\ref{fig:smt_rules}) to the \cdcl{} inference rules in order to get a \smt{} solver.
We give the \irule{Smt} rule a slightly lower precedence than the
\irule{Conflict} rule for performance reasons (detecting boolean conflicts is
faster than detecting theory-specific conflicts).
So, what is the minimum that a theory must implement in practice to be used in a
\smt{} solver? The theory has to ensure that the current trail is consistent
(when seen as a conjunction of literals), that is to say, given a trail $t$,
it must ensure that there exists a model $\mathcal{M}$ of $\mathcal{T}$ such that
$\forall a \in t. \mathcal{M} \vDash a$, or, if that is impossible (i.e.~the trail
is inconsistent), produce a conflict.
\begin{figure}
\begin{center}
\begin{tikzpicture}[
->, % Arrow style
> = stealth, % arrow head style
shorten > = 1pt, % don't touch arrow head to node
node distance = 2cm, % distance between nodes
semithick, % line style
auto
]
\tikzstyle{state}=[rectangle,draw=black!75]
\node (sat) {SAT Core};
\node (th) [right of=sat, node distance=6cm] {Theory};
\node[state] (d) [below of=sat, node distance=1cm] {Decision (boolean)};
\node[state] (bp) [below of=d, node distance=2cm] {Boolean propagation};
\node[state] (tp) [right of=bp, node distance=6cm] {Theory propagation};
\draw (d) edge [bend left=30] (bp);
\draw (bp) edge [bend left=30] (d);
\draw (bp) edge (tp);
\draw[black!50] (-2,1) rectangle (2,-4);
\draw[black!50] (4,1) rectangle (8,-4);
\end{tikzpicture}
\end{center}
\caption{Simplified \smt{} Solver architecture}\label{fig:smt_flow}
\end{figure}
\section{\mcsat{}: An extension of \smt{} Solvers}\label{sec:mcsat}
\subsection{Motivation and idea}
\mcsat{} is an extension of usual \smt{} solvers, introduced in~\cite{VMCAI13} and~\cite{FMCAD13}.
In usual \smt{} solvers, interaction between the core SAT Solver and the theory is quite limited:
the SAT Solver makes boolean decisions and propagations, and sends them to the theory,
whose role in return is to stop the SAT Solver as soon as the current set of assumptions
is incoherent. This means that the information that theories can give the SAT Solver is
quite limited, and it heavily restricts the ability of theories to guide
the proof search (see Section~\ref{sec:smt_flow}).
While it appears to leave a reasonably simple job to the theory, since it completely
hides the propositional structure of the problem, this simple interaction between the
SAT Solver and the theory makes it harder to combine multiple theories into one. Usual
techniques for combining theories in \smt{} solvers typically require keeping track of
equivalence classes (with respect to the congruence closure); for instance,
the Nelson-Oppen method for combining theories (TODO: reference needed) requires
theories to propagate any equality implied by the current assertions.
\mcsat{} extends the SAT paradigm by allowing more exchange of information between the theory
and the SAT Solver. This is achieved by allowing the solver to not only decide on the truth value
of atomic propositions, but also to decide assignments for terms that appear in the problem.
For instance, if the SAT Solver assigns a variable $x$ to $0$,
an arithmetic theory could propagate to the SAT Solver that the formula $x < 1$ must also hold,
instead of waiting for the SAT Solver to guess the truth value of $x < 1$ and then
inform the SAT Solver that the conjunction $x = 0 \land \neg (x < 1)$ is incoherent.
This exchange of information between the SAT Solver and the theories results in
the construction of a model throughout the proof search (which explains the name
Model Constructing SAT).
The main addition of \mcsat{} is that when the solver makes a decision, instead of
being restricted to making boolean assignment of formulas, the solver now can
decide to assign a value to a term belonging to one of the literals. In order to do so,
the solver first chooses a term that has not yet been assigned, and then asks
the theory for a possible assignment. Like in usual SMT Solvers, a \mcsat{} solver
only exchanges information with one theory, but, as we will see, combining
theories into one becomes easier in this framework, because assignments are
actually a very good way to exchange information.
Using the assignments on terms, the theory can then very easily do efficient
propagation of formulas implied by the current assignments: it is enough to
evaluate formulas using the current partial assignment.
The information flow then looks like Figure~\ref{fig:mcsat_flow}.
\begin{figure}
\begin{center}
\begin{tikzpicture}[
->, % Arrow style
> = stealth, % arrow head style
shorten > = 1pt, % don't touch arrow head to node
node distance = 2cm, % distance between nodes
semithick, % line style
auto
]
\tikzstyle{state}=[rectangle,draw=black!75]
\node (sat) {SAT Core};
\node (th) [right of=sat, node distance=6cm] {Theory};
\node[state] (d) [below of=sat, node distance=1cm] {Decision};
\node[state] (ass) [right of=d, node distance=6cm] {Assignment};
\node[state] (bp) [below of=d, node distance=2cm] {Boolean propagation};
\node[state] (tp) [right of=bp, node distance=6cm] {Theory propagation};
\draw (bp)[right] edge [bend left=5] (tp);
\draw (tp) edge [bend left=5] (bp);
\draw (bp) edge [bend left=30] (d);
\draw (ass) edge [bend left=5] (d);
\draw (d) edge [bend left=5] (ass);
\draw (d) edge [bend left=30] (bp);
\draw[<->] (ass) edge (tp);
\draw[black!50] (-2,1) rectangle (2,-4);
\draw[black!50] (4,1) rectangle (8,-4);
\end{tikzpicture}
\end{center}
\caption{Simplified \mcsat{} Solver architecture}\label{fig:mcsat_flow}
\end{figure}
\subsection{Decisions and propagations}
In \mcsat{}, semantic propagations are a bit different from the propagations
used in traditional \smt{} Solvers. In the case of \mcsat{} (or at least the version presented here),
semantic propagations strictly correspond to the evaluation of formulas in the
current assignment. Moreover, in order to be able to correctly handle these semantic
propagations during backtracking, they are assigned a level: each decision is given a level
(using the same process as in a \sat{} solver: a decision level is the number of decisions
made before it, plus one), and a formula is propagated at the maximum level of the decisions
used to evaluate it.
We can thus extend the notations introduced in Section~\ref{sec:trail}: $t \mapsto_n v$ is
a semantic decision which assigns $t$ to a concrete value $v$, and $a \leadsto_n \top$ is a
semantic propagation at level $n$.
For instance, if the current trail is $\{x \mapsto_1 0, x + y + z = 0 \mapsto_2 \top, y\mapsto_3 0\}$,
then $x + y = 0$ can be propagated at level $3$, but $z = 0$ cannot be propagated, because
$z$ is not assigned yet, even if there is only one remaining valid value for $z$.
The problem with assignments as propagations is that it is not clear what to do with
them during the $\text{Analyze}$ phase of the solver, as we will see later.
\begin{figure}
\begin{center}
\begin{align*}
\text{max\_level}([]) &= 0 \\
\text{max\_level}(t :: a \mapsto_n \top) &= n \\
\text{max\_level}(t :: a \leadsto_C \top) &= \text{max\_level}(t) \\
\text{level}(a, t :: a \mapsto_n \top :: t') &= n \\
\text{level}(a, t :: a \leadsto_C \top :: t') &= \text{max\_level}(t) \\
\end{align*}
\begin{align*}
\text{max\_lit}(a, C, t) &= \forall b \in C, (b \neq a) \rightarrow
\text{level}(b, t) < \text{level}(a, t) \\
\text{is\_uip}(C, t) &= \exists (a \mapsto_n \top) \in t,
a \in C \land \text{max\_lit}(a, C, t) \\
\text{uip\_level}(C, t) = l &\equiv \exists (a \mapsto_n \top) \in t,
a \in C \land \text{max\_lit}(a, C, t)
\land l = \text{max\_level}(C, t) \\
\end{align*}
\end{center}
\caption{Auxiliary functions used in conflict analysis and by the \irule{Backjump} rule}\label{fig:analyze_functions}
\end{figure}
\subsection{First order terms \& Models}
A model is traditionally a triplet comprising a domain,
a signature and an interpretation function. Since most problems do not
define new interpreted functions or constants\footnote{Indeed, these typically
only come from built-in theories such as arithmetic, bit-vectors, etc\ldots},
and built-in theories such as arithmetic usually have a canonical domain and signature,
we will consider in the following that the domain $\mathbb{D}$, the signature, and the
interpretation of theory-defined symbols are given and constant. For instance,
there exists more than one first-order model of Peano arithmetic (reference needed);
in our case we want to choose one of them and try to extend it to satisfy the
input problem, but we do not want to switch models during the proof search.
In the following, we use the following notations:
\begin{itemize}
\item $\mathbb{V}$ is an infinite set of variables;
\item $\mathbb{C}$ is the possibly infinite set of constants defined
by the input problem's theories\footnote{For instance, the theory of arithmetic
defines the usual operators $+, -, *, /$ as well as $0, -5, \frac{1}{2}, -2.3, \ldots$};
\item $\mathbb{S}$ is the finite set of constants defined by the input problem's type
definitions;
\item $\mathbb{T}$ is the (infinite) set of first-order terms over $\mathbb{V}$, $\mathbb{C}$
and $\mathbb{S}$ (for instance $a, f(0), x + y, \ldots$);
\item $\mathbb{F}$ is the (infinite) set of first order quantified formulas
over the terms in $\mathbb{T}$.
\end{itemize}
An interpretation $\mathcal{I}$ is a mapping from $\mathbb{V} \cup \mathbb{C} \cup \mathbb{S}$
to $\mathbb{D}$. What we are interested in, is finding an interpretation of a problem, and more
specifically an interpretation of the symbols in $\mathbb{S}$ not defined by
the theory, i.e.~non-interpreted functions.
An interpretation $\mathcal{I}$ can easily be extended to a function from ground terms
and formulas to model values by recursively applying it:
\[
\mathcal{I}( f ( e_1, \ldots, e_n ) ) =
\mathcal{I}_f ( \mathcal{I}(e_1), \ldots, \mathcal{I}(e_n) )
\]
TODO:~What do we do with quantified variables~?
What kind of model do we want for these~?
\paragraph{Partial Interpretation}
The goal of \mcsat{} is to construct a first-order model of the input formulas. To do so,
it has to build what we will call partial interpretations: intuitively, a partial
interpretation is a partial function from the constant symbols to the model values.
It is, however, not so simple: during proof search, the \mcsat{} algorithm maintains
a partial mapping from expressions to model values (and not from constant symbols
to model values). The intention of this mapping is to represent a set of constraints
on the partial interpretation that the algorithm is building.
For instance, given a function symbol $f$ of type $(\text{int} \rightarrow \text{int})$ and an
integer constant $a$, one such constraint that we would like to be able to express in
our mapping is the following: $f(a) \mapsto 0$, regardless of the values that $f$ takes on
other arguments, and also regardless of the value mapped to $a$. To that end we introduce
the notion of abstract partial interpretation.
An abstract partial interpretation $\sigma$ is a mapping from ground expressions to model values.
To each abstract partial interpretation corresponds a set of complete models that realize it.
More precisely, any mapping $\sigma$ can be completed in various ways, leading to a set of
potential interpretations:
\[
\text{Complete}(\sigma) =
\left\{
\mathcal{I}
\; | \;
\forall t \mapsto v \in \sigma , \mathcal{I} ( t ) = v
\right\}
\]
\paragraph{Coherence}
An abstract partial interpretation $\sigma$ is said to be coherent iff
there exists at least one model that completes it,
i.e.~$\text{Complete}(\sigma) \neq \emptyset$. One example
of incoherent abstract partial interpretation is the following
mapping:
\[
\sigma = \left\{
\begin{matrix}
a \mapsto 0 \\
b \mapsto 0 \\
f(a) \mapsto 0 \\
f(b) \mapsto 1 \\
\end{matrix}
\right.
\]
\paragraph{Compatibility}
In order to do semantic propagations, we want to have some kind of notion
of evaluation for abstract partial interpretations, and we thus define the
partial interpretation function in the following way:
\[
\forall t \in \mathbb{T} \cup \mathbb{F}. \forall v \in \mathbb{D}.
\left(
\forall \mathcal{I} \in \text{Complete}(\sigma).
\mathcal{I}(t) = v
\right) \rightarrow
\sigma(t) = v
\]
The partial interpretation function is the intersection of the interpretation
functions of all the completions of $\sigma$, i.e.~it is the interpretation
where all completions agree. We can now say that a mapping $\sigma$ is compatible
with a trail $t$ iff $\sigma$ is coherent, and
$\forall a \in t. \neg (\sigma(a) = \bot)$, or in other words, for every literal $a$
true in the trail, there exists at least one model that completes $\sigma$ and where
$a$ is satisfied.
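For instance (an illustration using arithmetic), let $\sigma = \{ x \mapsto 0 \}$. Then
$\sigma$ is compatible with a trail containing the literal $x + y > 0$: the completions of
$\sigma$ disagree on that literal (depending on the value they give to $y$), so
$\sigma(x + y > 0)$ is not defined, and in particular is not $\bot$. On the other hand,
$\sigma$ is not compatible with a trail containing $x > 1$, since every completion of
$\sigma$ evaluates $x > 1$ to $\bot$, hence $\sigma(x > 1) = \bot$.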
\paragraph{Completeness}
We need one last property on abstract partial interpretations, which is to
specify the relation between the terms in a mapping, and the terms it can evaluate,
according to its partial interpretation function defined above. Indeed, at the end
of proof search, we want all terms (and sub-terms) of the initial problem to be
evaluated using the final mapping returned by the algorithm when it finds that the
problem is satisfiable: that is what we will call completeness of a mapping.
To that end we will here enumerate some sufficient conditions on the mapping domain
$\text{Dom}(\sigma)$ compared to the finite set of terms (and sub-terms) $T$ that
appear in the problem.
A first way is to have $\text{Dom}(\sigma) = T$, i.e.~all terms (and sub-terms) of the
initial problem are present in the mapping. While this is the simplest way to ensure that
the mapping is complete, it might be a bit heavy: for instance, we might have to assign
both $x$ and $2x$, which is redundant. The problem in that case is that we try to assign
a term whose value we could actually compute from its arguments: indeed,
since multiplication is interpreted, we do not need to interpret it in our mapping.
This leads to another way to have a complete mapping: if $\text{Dom}(\sigma)$ is the
set of terms of $T$ whose head symbol is uninterpreted (i.e.~not defined by the theory),
it is enough to ensure that the mapping is complete, because the theory will automatically
constrain the value of terms whose head symbol is interpreted.
\subsection{Inference rules}
\begin{figure}
\center{\underline{\mcsat{}}}
\begin{center}
\begin{tabular}{c@{\hspace{.2cm}}l}
% Decide (assignment)
\AXC{$\text{Solve}(\mathbb{S}, t)$}
\LLc{Theory-Decide}
\UIC{$\text{Solve}(\mathbb{S}, t :: a \mapsto_n v)$}
\DP{} &
\begin{tabular}{l}
$a \mapsto_k \_ \notin t$ \\
$n = \text{max\_level}(t) + 1$ \\
$\sigma_{t :: a \mapsto_n v} \text{ compatible with } t$
\end{tabular}
\\ \\
% Propagation (semantic)
\AXC{$\text{Solve}(\mathbb{S}, t)$}
\LLc{Propagate-Theory}
\UIC{$\text{Solve}(\mathbb{S}, t :: a \leadsto_n \top)$}
\DP{} &
$\sigma_t(a) = \top$
\\ \\
% Conflict (semantic)
\AXC{$\text{Solve}(\mathbb{S}, t)$}
\LLc{Conflict-Mcsat}
\UIC{$\text{Analyze}(\mathbb{S}, t, C)$}
\DP{} &
\begin{tabular}{l}
$\mathcal{T} \vdash C$ \\
$\forall a \in C, \neg a \in t \lor \sigma_t(a) = \bot$
\end{tabular}
\\ \\
\end{tabular}
\end{center}
\begin{center}
\begin{tabular}{c@{\hspace{.2cm}}l}
% Analyze (assign)
\AXC{$\text{Analyze}(\mathbb{S}, t :: a \mapsto_n v, C)$}
\LLc{Analyze-Assign}
\UIC{$\text{Analyze}(\mathbb{S}, t, C)$}
\DP{} &
\\ \\
% Analyze (semantic)
\AXC{$\text{Analyze}(\mathbb{S}, t :: a \leadsto_n \top, C)$}
\LLc{Analyze-Semantic}
\UIC{$\text{Analyze}(\mathbb{S}, t, C)$}
\DP{} &
\\ \\
\end{tabular}
\end{center}
\begin{center}
\begin{tabular}{c@{\hspace{.2cm}}l}
% Backjump (semantic)
\AXC{$\text{Analyze}(\mathbb{S}, t :: a \mapsto_n \_ :: \_, C)$}
\LLc{Backjump-Semantic}
\UIC{$\text{Solve}(\mathbb{S}, t)$}
\DP{} &
\begin{tabular}{l}
$\text{is\_semantic}(C)$ \\
$n = \text{slevel}(C)$ \\
\end{tabular}
\\ \\
\end{tabular}
\end{center}
\caption{\mcsat{} specific inference rules}\label{fig:mcsat_rules}
\end{figure}
In \mcsat{}, a trail $t$ contains boolean decisions and propagations as well as semantic
decisions (assignments) and propagations. We can thus define $\sigma_t$ as the mapping
formed by all assignments in $t$. This lets us define the rules in Figure~\ref{fig:mcsat_rules},
which, together with the previous rules, define the \mcsat{} algorithm.
\section{\mcsat{} theories}
We can reformulate the constraints that a theory must respect to be a correct \mcsat{}
theory in a more intuitive way. First, the assignments returned by the theory must be consistent
(i.e.~there exists a model which satisfies the assignments), and they must be compatible with
the current assertions made by the \sat{} solver, which means that the assertions that can be
evaluated using the assignments must not be false. Another way to put it is to say that
for every assertion there exists a model extending the assignments that satisfies the assertion.
This requirement is actually a lot weaker than the usual one required of \smt{} theories,
where the theory must ensure that there exists a model which satisfies all
assertions\footnote{Basically, the two quantifiers have exchanged places.}.
Since the requirements of an \mcsat{} theory are weaker, it is most of the time easy to
take a \smt{} theory and transform it into a \mcsat{} one. The main difference is the
generation of unsatisfiability explanations, which must take into account the assignments.
That means that unsatisfiability explanations must be tautological clauses (as usual)
in which every atom evaluates to false using the current assignments.
TODO:~formalize the notion of theory with an internal state + transitions
\subsection{Equality}
In order to handle generic equality, a very simple theory is enough. Given a set of assertions
containing equalities $\mathcal{A}_= = \{ e_i = f_i \}_i$ and inequalities
$\mathcal{A}_\neq = \{ a_j \neq b_j \}_j$, we do the following:
\begin{itemize}
  \item when asked for a value to assign to a term $x$: if there is an equation
    $ x = y \in \mathcal{A}_=$ where $y$ is assigned to $v$\footnote{If there is
    more than one such equation and not all $v$ are equal, the next step will
    automatically be triggered}, then return $v$ and
    store $x = y$ as the reason of the assignment; otherwise return an arbitrary
    term $\hat{x}$
  \item when informed of a new assignment $x \mapsto v$:
  \begin{itemize}
    \item if there exists an equation $x = y$ where $y$ is assigned to a $v'$
      such that $v \neq v'$, then return UNSAT with an explanation consisting
      of the least fixpoint of the set $\{ x = y \}$ by the function
      $g(\{ e \bowtie f \} \cup S) = \{ \text{reason}(e), \text{reason}(f), e \bowtie f \} \cup S$
    \item if there exists an inequation $x \neq y$ where $y$ is assigned to $v$,
      then return UNSAT with an explanation consisting of the least fixpoint of the set $\{ x \neq y \}$ by the same function.
  \end{itemize}
\end{itemize}
\end{itemize}
While that is sufficient, an easy upgrade is to use a union-find algorithm to store
the reflexive transitive closure of the equalities in $\mathcal{A}_=$, in order to avoid
depending on the order in which terms are assigned. Each class is tagged
with the value first assigned to any of its members. A conflict occurs when two terms
$x$ and $y$ are in the same class but assigned to different values, in which case
we ask the union-find for the explanation of why $x$ and $y$ are in the same class,
and return that as the unsatisfiability explanation. An illustrative sketch of the simple
(pre union-find) version of the theory is given below.
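To make this description concrete, here is a small Python sketch of the simple
(pre union-find) version of the theory. It is purely illustrative: the names, the data layout
and the language are not those of our implementation, and the integration with the trail and
the rest of the solver is omitted.
\begin{verbatim}
# Illustrative sketch of the simple MCSAT equality theory described above.
class SimpleEqTheory:
    def __init__(self, equalities, inequalities):
        self.eqs = list(equalities)      # pairs (x, y) asserted as x = y
        self.neqs = list(inequalities)   # pairs (x, y) asserted as x != y
        self.value = {}                  # term -> assigned value
        self.reason = {}                 # term -> equation that suggested its value

    def _other(self, eq, x):
        a, b = eq
        return b if a == x else (a if b == x else None)

    def pick_value(self, x, fresh):
        """Value suggested for x: reuse the value of a term asserted equal
        to x when possible, otherwise fall back to the fresh value."""
        for eq in self.eqs:
            y = self._other(eq, x)
            if y is not None and y in self.value:
                self.reason[x] = eq
                return self.value[y]
        return fresh

    def _explain(self, seed):
        """Least fixpoint of the function g from the text: repeatedly add
        the reasons of both sides of every collected (in)equation."""
        out, todo = set(), [seed]
        while todo:
            eq = todo.pop()
            if eq in out:
                continue
            out.add(eq)
            for t in eq:
                if t in self.reason:
                    todo.append(self.reason[t])
        return out

    def assign(self, x, v):
        """Record the assignment x -> v; return None if it is consistent with
        the assertions, or a conflict explanation (a set of asserted
        (in)equations incompatible with the current assignments)."""
        self.value[x] = v
        for eq in self.eqs:
            y = self._other(eq, x)
            if y is not None and y in self.value and self.value[y] != v:
                return self._explain(eq)
        for eq in self.neqs:
            y = self._other(eq, x)
            if y is not None and y in self.value and self.value[y] == v:
                return self._explain(eq)
        return None
\end{verbatim}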
\subsection{Uninterpreted functions and predicates}
\subsection{Arithmetic}
\bibliographystyle{plain}
\bibliography{biblio}
\clearpage
\appendix
\end{document}
\documentclass[simplex.tex]{subfiles}
% NO NEED TO INPUT PREAMBLES HERE
% packages are inherited; you can compile this on its own
\onlyinsubfile{
\title{NeuroData SIMPLEX Report: Reduced Dimension Clustering}
}
\begin{document}
\onlyinsubfile{
\maketitle
\thispagestyle{empty}
The following report documents the progress made by the labs of Randal~Burns and Joshua~T.~Vogelstein at Johns Hopkins University towards goals set by the DARPA SIMPLEX grant.
%%%% Table of Contents
\tableofcontents
%%%% Publications
\bibliographystyle{IEEEtran}
\begin{spacing}{0.5}
\section*{Publications, Presentations, and Talks}
%\vspace{-20pt}
\nocite{*}
{\footnotesize \bibliography{simplex}}
\end{spacing}
%%%% End Publications
}
\subsection{Batch effect removal in dimension reduction of multiway array data}
Batch effects are unwanted random variations caused by different data sources and experimental conditions. Generalized linear random effects models are effective at mitigating these confounders in traditional low-dimensional data; however, such tools are lacking for high-dimensional and multiway array data. While tensor factorization is routinely used for dimension reduction, due to the sharing of factors among all batches, the batch effects quickly populate the low-dimensional core and confound the signal. In this research, we propose a different strategy by letting the factor matrices vary over batches, while leaving the remaining variation in the core. This allows capturing sophisticated batch effects, while retaining the low-rank structure for describing the signal. To allow estimation with flexible factors, we utilize a hierarchical random effects model to borrow information among the batches. An efficient closed-form expectation conditional maximization strategy is developed for rapid estimation. We focus the application on the joint diagonalization of brain connectivity data obtained from different sources.
The model we propose is:
\begin{equation}
\begin{aligned}
A_{ji, kl} & = A_{ji, lk}\\
A_{ji, kl} & \stackrel{indep}{\sim} \text{Bern}(\text{logit}(\psi_{ji,kl}))\\
\psi_{ji,kl} & = \sum_{r=1}^{d} c_{ji, r} f_{j,kr} f_{j,lr} \\
f_{j,kr} & \stackrel{indep}{\sim} \text{N}(f_{0,kr}, \sigma^2)\\
f_{0,kr} & \stackrel{iid}{\sim} \text{N}(0, 1)
\end{aligned}
\end{equation}
with $k=1\ldots l$ and $l=2\ldots n$.
The batch-effect-adjusted connectome is then $A_{ji, kl} = \psi_{ji,kl} = \sum_{r=1}^{d} c_{ji, r} f_{0,kr} f_{0,lr}$.
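As an illustration of how this adjustment can be applied once the model has been fit, the
following short numpy sketch reconstructs the adjusted connectome from estimated core loadings
$c_{ji,\cdot}$ and shared factors $f_0$ (the array names and sizes are illustrative and not
taken from our implementation).
\begin{verbatim}
import numpy as np

def adjusted_connectome(c_ji, f0):
    """c_ji : (d,) core loadings for subject i in batch j
       f0   : (n, d) shared factor matrix
       Returns the (n, n) adjusted connectome
       psi[k, l] = sum_r c_ji[r] * f0[k, r] * f0[l, r]."""
    return (f0 * c_ji) @ f0.T

# Tiny example with n = 4 nodes and d = 2 factors.
rng = np.random.default_rng(0)
f0 = rng.normal(size=(4, 2))
c = np.array([0.8, 0.3])
A_adj = adjusted_connectome(c, f0)   # symmetric by construction
\end{verbatim}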
\begin{figure}[h!]
\begin{cframed}
\centering
\includegraphics[width=0.455\textwidth, clip = true, trim = 0mm 15mm 0mm 10mm]{../../figs/avgA1}
\includegraphics[width=0.4635\textwidth, clip = true, trim = 0mm 15mm 0mm 10mm ]{../../figs/avgA3}
\includegraphics[width=0.455\textwidth, clip = true, trim = 0mm 15mm 0mm 10mm]{../../figs/avgA1adjusted}
\includegraphics[width=0.4635\textwidth, clip = true, trim = 0mm 15mm 0mm 10mm]{../../figs/avgA3adjusted}
\caption{The first row shows the average connectome of the two groups, in which some difference can be observed. The second row shows the batch-effect-adjusted average connectomes of the two groups, which become similar.}
\end{cframed}
\end{figure}
\end{document}
\documentclass[../main.tex]{subfiles}
\begin{document}
\chapter{Technical Contribution}
Throughout the duration of the thesis project, a tool named Incoming was implemented with the purpose of performing predictions for risky code. This tool was later used for answering some of the research questions. The tool has two main modes of operation that can be used as independent processes. The first is the training mode which collects new commits for a repository and trains on them. The second mode involves providing predictions using trained models and new commits. All of the scripts for the data collection as well as for training the model and making predictions were implemented in Python 3 using the pandas and scikit-learn libraries and could be executed on Windows 10 as well as Linux operating systems.
\section{Data Collection}
This tool was designed to collect data from any git repository. When provided with the HTTPS URL for a GitHub repository, it would clone the project to the local disk. It then collects a list of commits and extracts the features for each commit. Extracting commit data can be done by having Python create a subprocess that calls the command \texttt{git log --numstat}, whose output can be parsed to obtain data such as the author, commit message, number of files changed and more. Then, for all of these new commits, the ASZZ algorithm is run in order to identify commits that potentially caused a bug. When assigning labels, any commit that ASZZ found to be bug-inducing is given a label of 1 and all remaining commits have a label of 0.
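As an illustration of this step, a sketch of such a subprocess call and parser is shown below;
the exact pretty-format string, field names and extracted features are illustrative rather
than the tool's actual code.
\begin{verbatim}
import subprocess

def collect_commits(repo_path):
    """Return per-commit feature dicts parsed from 'git log --numstat'."""
    fmt = "--pretty=format:--%H|%an|%ad|%s"   # '--' marks the header lines
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", fmt],
        capture_output=True, text=True, check=True).stdout
    commits = []
    for line in out.splitlines():
        if line.startswith("--"):             # commit header produced by fmt
            sha, author, date, msg = line[2:].split("|", 3)
            commits.append({"hash": sha, "author": author, "date": date,
                            "message": msg, "files": 0, "added": 0,
                            "deleted": 0})
        elif line.strip():                    # numstat line: added<TAB>deleted<TAB>path
            added, deleted, _path = line.split("\t", 2)
            c = commits[-1]
            c["files"] += 1
            c["added"] += int(added) if added.isdigit() else 0
            c["deleted"] += int(deleted) if deleted.isdigit() else 0
    return commits
\end{verbatim}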
The data collection tool can be used repeatedly on the same repository in order to fetch the latest commits. However, for larger projects it is wasteful to repeat this process on commits that have already been scraped. So whenever new commits are mined, their data is saved remotely in CSV files. Then in subsequent runs of the data collection, the tool avoids mining data for old commits by comparing the commit's hash.
An alternative method of collecting data that was considered was utilizing the GitHub API. Although the API could be used to obtain features and label commits, it was not practical to use for extracting large datasets due to its rate limits. The GitHub API was better suited for fetching the names or urls of repositories in this case.
\section{Training}
When it comes to training the models, each model would train on all available data for a given project rather than a mix of projects, because cross-project defect prediction models tend to suffer in terms of performance~\cite{kamei2016studying}. For preprocessing the data, missing values were replaced using the mean. The extracted datetime value was parsed and converted into new features such as the hour and the day of the week the commit was made. Categorical features with more than one category were represented using a one-hot encoding. All features that were not categorical were standardized by transforming the mean to 0 and the standard deviation to 1. In order to deal with the class imbalance in the dataset, the SMOTE sampling technique was applied, which has been shown to be useful for defect prediction models~\cite{tan2015online}. The tool used a random forest model with 100 estimators using the Gini impurity metric for creating decision rules. Random forest was used because it is easy for developers to interpret the most important features and it is simple to train due to a small number of hyperparameters. Also, past research has shown that ensemble learning methods tend to outperform single models at the task of JIT defect prediction~\cite{yang2017tlel}.
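The following sketch shows what such a training pipeline could look like using pandas,
scikit-learn and the imbalanced-learn implementation of SMOTE; the column names are
hypothetical and the tool's actual code may differ.
\begin{verbatim}
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline as ImbPipeline

def train(df: pd.DataFrame):
    # Derived datetime features (column names are hypothetical).
    ts = pd.to_datetime(df["commit_date"])
    df = df.assign(hour=ts.dt.hour, day_of_week=ts.dt.dayofweek)

    numeric = ["files_changed", "lines_added", "lines_deleted",
               "hour", "day_of_week"]
    categorical = ["author"]

    preprocess = ColumnTransformer([
        ("num", make_pipeline(SimpleImputer(strategy="mean"),
                              StandardScaler()), numeric),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])
    model = ImbPipeline([
        ("prep", preprocess),
        ("smote", SMOTE()),   # rebalance the minority (risky) class
        ("rf", RandomForestClassifier(n_estimators=100, criterion="gini")),
    ])
    model.fit(df[numeric + categorical], df["is_risky"])
    return model
\end{verbatim}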
\vspace{10pt}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{images/Technical_Contribution/incoming_1.png}
\label{fig:incoming1}
\caption{Training mode}
\end{figure}
\section{Sending Predictions}
For the experiments, the tool was set up to send out predictions periodically to individual developers through Slack, a messaging tool. When making predictions, the tool would obtain a list of recent commits from several branches made by developers who registered to use the tool. Branches other than the master branch were included because development branches contain commits that are not reviewed during a pull request, hence the possibility of them being more risky. Once a prediction was made, it was sent to the author of the commit using messaging services such as Slack.
Each prediction message sent on Slack would contain the link to the commit that was predicted on, the model's prediction for this commit (\textit{risky} or \textit{not risky}) as well as how confident the model was in that prediction. When machine learning models perform classification, they output, for each class, a probability that the instance belongs to that class. The confidence is simply the probability that our commit is risky according to the model. The purpose of adding the confidence to the message is that a prediction with, say, 51\% confidence is not reliable, as the chance of it being a false positive is very high.
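A minimal sketch of how such a message could be assembled from the model's predicted
probabilities is given below (the function and field names are hypothetical, and it assumes
class 1 corresponds to risky commits).
\begin{verbatim}
def risk_message(model, commit_features, commit_url):
    """Build the notification text for one commit (illustrative only)."""
    proba_risky = model.predict_proba(commit_features)[0][1]  # P(class 1)
    label = "risky" if proba_risky >= 0.5 else "not risky"
    return (f"{commit_url}\nPrediction: {label} "
            f"(confidence that it is risky: {proba_risky:.0%})")
\end{verbatim}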
\begin{figure}[H]
\centering
\includegraphics[scale=0.15]{images/Technical_Contribution/incoming_2.png}
\label{fig:incoming2}
\caption{Prediction mode}
\end{figure}
\section{Deployment}
In order to have data collection, training of models and predictions performed frequently, the tool was deployed on King's continuous integration servers. The data collection and training were bundled into one Jenkins job whereas predictions ran in a separate job. Separate jobs were set up for each project to which the tool was deployed. Unlike the tool Commit Guru, which analyzes commits once a user requests it, this tool runs autonomously. A proposed benefit of this is that developers might be preoccupied with other tasks and forget to run the tool manually. Having it run in the background means that the tool can naturally be integrated into the developers' workflow.
\end{document}
\documentclass[12pt, answers]{exam}
%\documentclass[12pt]{exam}
\usepackage[top=1in, bottom=1in, left=1in, right=1in]{geometry}
\usepackage{setspace}
\PassOptionsToPackage{hyphens}{url}
\usepackage{tabu}
\onehalfspacing
\setlength{\parindent}{0mm} \setlength{\parskip}{1em}
% packages
\RequirePackage{amssymb, amsfonts, amsmath, latexsym, verbatim, xspace, setspace}
\RequirePackage{tikz}
% The float package HAS to load before hyperref
\usepackage{float} % for psuedocode formatting
\usepackage{amsthm}
\usepackage{epsfig}
\usepackage{times}
\renewcommand{\ttdefault}{cmtt}
\usepackage{amsmath}
\usepackage{graphicx} % for graphics files
% for creating indented blocks
\usepackage{scrextend}
\usepackage{paralist, tabularx}
% from Denovo Methods Manual
\usepackage{mathrsfs}
\usepackage[mathcal]{euscript}
\usepackage{color}
\usepackage{array}
\usepackage[pdftex]{hyperref}
\usepackage[parfill]{parskip}
\usepackage{cancel}
\newcommand{\nth}{n\ensuremath{^{\text{th}}} }
\newcommand{\ve}[1]{\ensuremath{\mathbf{#1}}}
\newcommand{\Macro}{\ensuremath{\Sigma}}
\newcommand{\vOmega}{\ensuremath{\hat{\Omega}}}
\newcommand{\cc}[1]{\ensuremath{\overline{#1}}}
\newcommand{\ccm}[1]{\ensuremath{\overline{\mathbf{#1}}}}
%--------------------------------------------------------------------
%--------------------------------------------------------------------
\begin{document}
\begin{center}
{\bf NE 155, Topic 14, S21 \\
Finite Difference and Volume Methods for the Eigenvalue form of the DE \\ April 1, 2021}
\end{center}
\setlength{\unitlength}{1in}
\begin{picture}(6,.1)
\put(0,0) {\line(1,0){6.25}}
\end{picture}
%------------------------------------------
\subsection*{Helmholtz Form}
When we derived the Diffusion Equation, we skipped over the Helmholtz form. I am bringing it up now because it can be a convenient way to solve the DE. This approach can also be useful if you need to generate an analytical solution.
In steady state, we can write the diffusion equation this way:
%
\begin{align*}
-\nabla \cdot D(\vec{r})\nabla \phi(\vec{r}) +
\Sigma_a \phi(\vec{r}) &= Q(\vec{r})\:, \\
%
\text{where }\qquad Q(\vec{r}) &=
\nu \Sigma_f \phi(\vec{r}) +
S(\vec{r})\:,
\end{align*}
%
which can be written as the Helmholtz equation of applied mathematics:
%http://en.wikipedia.org/wiki/Helmholtz_equation
%
\ifprintanswers
\begin{align*}
\nabla^2 \phi(\vec{r}) - \frac{1}{L^2}\phi(\vec{r}) &= \frac{-Q(\vec{r})}{D}\:, \\
\text{where }\qquad L &\equiv \sqrt{\frac{D}{\Sigma_a}}\:.
\end{align*}
\else
\\ \vspace*{4em}\\
\fi
%
$L$ is called the neutron diffusion length. This is ``how far a neutron diffuses from a source prior to absorption''.
In the Helmholtz formulation, $\phi$ is the amplitude and $\frac{1}{L}$ is the wave number.
This formulation is useful because we know how to solve it. We write
\[\phi(\vec{r}) = \phi_H(\vec{r}) + \phi_P(\vec{r}) \:,\]
For example, we often have:
\[\phi_H(\vec{r}) = A\exp\bigl(-\frac{|\vec{r}|}{L}\bigr) + B\exp\bigl(\frac{|\vec{r}|}{L}\bigr) \:.\]
Going through how to solve this analytically in a variety of circumstances, geometries, etc.\ is another class (NE 150/250).
%------------------------------------------
\subsection*{Criticality Calculations}
\textbf{Recall:}
We also want to think about the details of how to configure a reactor to get it to work the way we want it to.
We can write our DE in steady state for a nuclear reactor core:
%
\begin{align*}
-\nabla \cdot D\nabla \phi(\vec{r}) +
\Sigma_a \phi(\vec{r}) &= \nu \Sigma_f \phi(\vec{r})\:, \\
\text{with} \qquad \phi(\tilde{x}_s) &= 0\:.
\end{align*}
%
Unless we have the proper combination of core composition ($\Sigma_a$, $\Sigma_f$, $D$) and geometry ($\vec{r}$ ,$\vec{r}_s$) details, there is \underline{no} general solution.
To deal with this, we introduce a parameter $k$ into the equation:
%
\begin{equation}
-\nabla \cdot D\nabla \phi(\vec{r}) +
\Sigma_a \phi(\vec{r}) = \frac{1}{k}\nu \Sigma_f \phi(\vec{r})\:. \nonumber
\end{equation}
%
Then, for any value of $k$ we assert that there is always a solution. We use an iterative process to find the condition when $k=1$, called ``critical''.
%A reactor is called \textbf{``critical''} if the chain reaction is self-sustaining and time-independent. Another way to think of the addition of $k$ is to assume $\nu$ can be adjusted to obtain a time-independent solution by replacing it with $\frac{\nu}{k}$, where $k$ is the parameter expressing the deviation from critical.
%
%This substitution changes the transport equation into an \textbf{eigenvalue problem.} A spectrum of eigenvalues can be found, but at \textbf{long times only the non-negative solution corresponding to the largest real eigenvalue will dominate}, and that's $k$.
%
%$k$ can also be thought of as the asymptotic ratio of the number of neutrons in one generation to the number in the next.
%--------------------------------------------------------------------
\section*{Finite Difference Method, Eigenvalue Problem}
We can extend all of the finite difference and finite volume methods we just learned to the eigenvalue problem case, which is another layer of complication.
Now instead of a fixed source, we have an eigenvalue problem
\[-\frac{d}{dx}D(x)\frac{d \phi(x)}{dx} + \Sigma_a(x) \phi(x) = \frac{1}{k}\nu \Sigma_f(x) \phi(x) \]
%
\begin{figure}[h!]
\begin{center}
\includegraphics[height=1in]{FVM-fig}
\end{center}
\end{figure}
%
Let's again have a reflecting condition at the centerline ($x_0 = 0$) and vacuum on the right ($x_n = a$):
\begin{align}
\frac{d}{dx}\phi(x) \big|_{x=0} &= 0 \qquad \text{zero net current,} \nonumber\\
\phi(\tilde{a}) &= 0 \qquad \tilde{a} = a + 2D\:. \nonumber
\end{align}
%
We again have a spatial mesh:
%
\begin{center}
\begin{tikzpicture}
\draw (-.25,0)--(1.25,0);
\draw[dotted] (1.25,0)--(2.75,0);
\draw (2.75,0)--(5.25,0);
\draw[dotted] (5.25,0)--(6.75,0);
\draw (6.75,0)--(8.25,0);
%\draw (4,0)--(5.25,0);
\draw (0,-.25)--(0,.25);
\draw (1,-.25)--(1,.25);
%\draw (2,-.25)--(2,.25);
\draw (3,-.25)--(3,.25);
\draw (4,-.25)--(4,.25);
\draw (5,-.25)--(5,.25);
\draw (7,-.25)--(7,.25);
\draw (8,-.25)--(8,.25);
\node[below] at (0,-.25) {$x_0$};
\node[below] at (1,-.25) {$x_1$};
\node[below] at (3,-.25) {$x_{i-1}$};
\node[below] at (4,-.25) {$x_i$};
\node[below] at (5,-.25) {$x_{i+1}$};
\node[below] at (7,-.25) {$x_{n-1}$};
\node[below] at (8,-.25) {$x_n$};
\node[above] at (0.5, 0.5) {$h_1$};
\node[above] at (3.5, 0.5) {$h_i$};
\end{tikzpicture}
\end{center}
%
and in this configuration $x_0 = 0$, $x_n = a$, and $h_i$ is the mesh spacing. There are $n+1$ points and $n$ mesh cells.
Material discontinuities will coincide with the cell \textit{edges}, $x_i$. Thus, we can assume that the cross sections and the diffusion coefficient are constant in each cell.
%
%\begin{align}
%D(x) &= D_i \qquad \text{for } x_{i-1} \leq x \leq x_i \:,\nonumber \\
%\Sigma_{a}(x) &= \Sigma_{a,i} \qquad \text{for } x_{i-1} \leq x \leq x_i \:, \nonumber \\
%\nu\Sigma_f(x) &= \nu\Sigma_{f,i}\;, \quad \text{for } x_{i-1} \leq x \leq x_i\:, \nonumber \\
%h_i &\equiv x_i - x_{i-1} \:.\nonumber
%\end{align}
%
The unknown values are again defined at the mesh or cell \textit{edges}, e.g.\ $\phi(x_i) = \phi_i$.
We derive the equations just like we did in the fixed source case, but now instead of $S_i$ we have $\nu \Sigma_{f,i}$ on the rhs and a $1/k$ multiplying the rhs vector.
\ifprintanswers
\begin{align}
\frac{\phi_{i+1} - 2\phi_i + \phi_{i-1}}{h_i^2} - \frac{1}{L_i^2}\phi_i = -\frac{1}{k}\frac{\nu\Sigma_{f,i}}{D_i}\phi_i \qquad i &= 1, 2, \dots, n-1 \:,\nonumber \\
%
-\phi_{i-1} + \bigl(2 + \frac{h_i^2}{L_i^2}\bigr)\phi_i - \phi_{i+1} = \frac{1}{k} h_i^2 \frac{\nu\Sigma_{f,i}}{D_i}\phi_i \qquad i &= 1, 2, \dots, n-1\: \text{ or,} \nonumber\\
%
\nonumber \\
\frac{-D_i}{h_i^2}\phi_{i-1} + \biggl(\frac{2D_i}{h_i^2} + \Sigma_{a,i} \biggr)\phi_i - \frac{D_i}{h_i^2}\phi_{i+1} = \frac{1}{k} \nu\Sigma_{f,i}\phi_i \qquad i &= 1, \dots, n-1 \:.\nonumber
\end{align}
\else
\vspace*{10em}
\fi
With this formulation we still have the problem that our unknown is defined at the cell edges, and if the properties in neighboring cells differ we will have discontinuities. To be able to handle \textit{material discontinuities} we're going to do our volume integration again.
%-----------------------------------------------------
%-----------------------------------------------------
\section*{Finite Volume Method}
Like last time, we will integrate the flux and source values across neighboring half-cells.
%
\begin{figure}[h!]
\includegraphics[height=2.5in]{FVM-eig-fig}
\end{figure}
We again assume the cross section and diffusion coefficient are constant in each cell; for $i=1, \dots, n$:
\begin{align}
D(x) &= D_i\;, \qquad x_{i-1} \leq x \leq x_i \nonumber \\
\Sigma_a(x) &= \Sigma_{a,i}\;, \qquad x_{i-1} \leq x \leq x_i \nonumber \\
\nu\Sigma_f(x) &= \nu\Sigma_{f,i}\;, \quad x_{i-1} \leq x \leq x_i \nonumber \\
h_i &\equiv x_{i} - x_{i-1} \:.\nonumber
\end{align}
%
We also still assume that the fluxes are constant over the interval centered around $x_i$; for $i=0, \dots, n$:
%
\begin{align}
\phi(x) &= \phi_i \qquad \text{for } \bigl(x_i - \frac{h_i}{2}\bigr) \leq x \leq \bigl(x_i + \frac{h_{i+1}}{2}\bigr)\:. \nonumber %\\
%S(x) &= S_i \qquad \text{for } \bigl(x_i - \frac{h_i}{2}\bigr) \leq x \leq \bigl(x_i + \frac{h_{i+1}}{2}\bigr) \nonumber
\end{align}
Now, we integrate the differential equation over each cell, $\bigl(x_i - \frac{h_i}{2}\bigr) \leq x \leq \bigl(x_i + \frac{h_{i+1}}{2}\bigr)$:
%
\[\int_{(x_i - \frac{h_i}{2})}^{(x_i + \frac{h_{i+1}}{2})} \biggl( -\frac{d}{dx}D(x)\frac{d \phi(x)}{dx}\biggr) dx
+ \int_{(x_i - \frac{h_i}{2})}^{(x_i + \frac{h_{i+1}}{2})} \Sigma_a(x) \phi(x) dx
= \int_{(x_i - \frac{h_i}{2})}^{(x_i + \frac{h_{i+1}}{2})} \frac{1}{k}\nu \Sigma_f(x) \phi(x) dx\]
%
We'll only add the term we didn't do before:
%
\ifprintanswers
\begin{align}
\int_{(x_i - \frac{h_i}{2})}^{(x_i + \frac{h_{i+1}}{2})} \frac{1}{k}\nu \Sigma_f(x)\phi(x) dx &=
%
\int_{(x_i - \frac{h_i}{2})}^{(x_i)} \frac{1}{k}\nu \Sigma_f(x) \phi(x) dx + \int_{(x_i)}^{(x_i + \frac{h_{i+1}}{2})} \frac{1}{k}\nu \Sigma_f(x) \phi(x) dx \nonumber\\
%
&\nonumber \\
&= \frac{1}{k} \biggl(\frac{\nu\Sigma_{f,i}h_i + \nu\Sigma_{f,i+1}h_{i+1}}{2} \biggr)\phi_i \:.\nonumber
\end{align}
\else
\\ \vspace*{4em}
\fi
Collecting all of the terms:
%
\begin{equation}
-D_{i+1}\biggl(\frac{\phi_{i+1} - \phi_i}{h_{i+1}}\biggr) + D_{i}\biggl(\frac{\phi_{i} - \phi_{i-1}}{h_{i}}\biggr) + \biggl(\frac{\Sigma_{a,i}h_i + \Sigma_{a,i+1}h_{i+1}}{2} \biggr)\phi_i = \frac{1}{k} \biggl(\frac{\nu\Sigma_{f,i}h_i + \nu\Sigma_{f,i+1}h_{i+1}}{2} \biggr)\phi_i \:.\nonumber
\end{equation}
To express this in matrix form we'll use the same abbreviations as last time, and add one for the fission term (where we've divided through by $h_{ii}$ to get the x-sec terms):
\begin{align}
%h_{ii} &= \frac{h_i + h_{i+1}}{2} \nonumber \\
%%
%\Sigma_{a,ii} &= \frac{\Sigma_{a,i}h_i + \Sigma_{a,i+1}h_{i+1}}{h_i + h_{i+1}} \nonumber \\
\nu\Sigma_{f,ii} &= \frac{\nu\Sigma_{f,i}h_i + \nu\Sigma_{f,i+1}h_{i+1}}{h_i + h_{i+1}} \:.\nonumber
\end{align}
%
Then
%
\[a_{i,i-1} \phi_{i-1} + a_{i,i}\phi_i + a_{i, i+1} \phi_{i+1} = \frac{1}{k}\nu\Sigma_{f,ii} \phi_i \qquad \text{for } i = 1, 2, \dots, n-1\:,\]
%
where the $a$s have the same value as last time. (Only the rhs is new.)
%
%\begin{align}
%a_{i,i-1} &= \frac{-D_i}{h_i h_{ii}}\:, \nonumber \\
%a_{i,i} &= \frac{D_i}{h_i h_{ii}} + \frac{D_{i+1}}{h_{i+1} h_{ii}} +\Sigma_{a,ii}\:, \nonumber \\
%a_{i,i+1} &= \frac{-D_{i+1}}{h_{i+1} h_{ii}}:.\nonumber
%\end{align}
We again have a set of $n-1$ linear algebraic equations with $n+1$ unknowns. We will use the boundary conditions to get the rest of the information that we need.
%-------------------------------------------------------
\subsection*{Boundary Conditions}
Again assume $x_n = \tilde{a}$, then the \textbf{vacuum condition} becomes
\[\phi_n = 0\]
and the last equation for $i=n-1$ becomes
\ifprintanswers
\[a_{n-1,n-2} \phi_{n-2} + a_{n-1,n-1}\phi_{n-1} + 0 = \frac{1}{k}\nu\Sigma_{f,(n-1,n-1)} \phi_{n-1}\]
\else
\\ \vspace*{2em}
\fi
To include the \textbf{reflecting} or zero current condition we integrate over $[0, h_{1}/2]$.
%
\begin{figure}[h!]
\includegraphics[height=2in]{ReflectingBC-eig}
\end{figure}
%
\begin{align}
\int_{0}^{\frac{h_{1}}{2}} \biggl(-\frac{d}{dx}D(x)\frac{d \phi(x)}{dx}\biggr) dx &+ \int_{0}^{\frac{h_{1}}{2}} \Sigma_a(x) \phi(x) dx = \int_{0}^{\frac{h_{1}}{2}} \frac{1}{k}\nu \Sigma_f(x) \phi(x) dx \:. \nonumber %\\
%
%-D(x)\frac{d \phi(x)}{dx}\big|_{\frac{h_{1}}{2}} &+ D(x)\frac{d \phi(x)}{dx}\big|_{0} + \Sigma_{a,1}\phi_0 \frac{h_1}{2} = \frac{1}{k}\nu\Sigma_{f,1} \phi_0 \frac{h_1}{2} \nonumber
\end{align}
%
We perform the integration and can apply the boundary condition $\frac{d \phi(x)}{dx}\big|_{0} = 0$ just like last time,
%\[-D(x)\frac{d \phi(x)}{dx}\big|_{\frac{h_{1}}{2}} + \Sigma_{a,1}\phi_0 \frac{h_1}{2} = \frac{1}{k}\nu\Sigma_{f,1} \phi_0 \frac{h_1}{2}\]
%%
%Recall that
%\[-D(x)\frac{d \phi(x)}{dx}\big|_{\frac{h_{1}}{2}} \cong -D_{1}\biggl(\frac{\phi_{1} - \phi_0}{h_{1}}\biggr) \]
%
and the first equation ($i=0$) becomes
\[a_{00}^*\phi_0 + a_{01}^* \phi_1 = \frac{1}{k}\nu\Sigma_{f,1} \phi_0 \:,\]
%
where we redefine the $a^*$s to be the same as last time as well.%to be (I've added the * to indicate that these have different definitions than the rest of the terms.)
%
%\begin{align}
%a_{00}^* &= \frac{2D_1}{h_1^2} + \Sigma_{a,1} \nonumber \\
%a_{01}^* &= -\frac{2D_1}{h_1^2} \nonumber
%\end{align}
We now have $n$ equations and $n$ unknowns, but we formulate it a bit differently:
\[\ve{A}\vec{\phi} = \frac{1}{k}\ve{F}\vec{\phi}\:,\]
where:
\begin{align}
%\ve{A} &= \begin{pmatrix}
%a_{00}^* & a_{01}^* & 0 & 0 & \cdots & 0 \\
%a_{10} & a_{11} & a_{12} & 0 & \cdots & 0 \\
%0 & a_{21} & a_{22} & a_{21} & & \vdots \\
%\vdots & & \ddots & \ddots & \ddots & \vdots \\
%0 & \cdots & 0 & a_{n-3,n-3} & a_{n-2,n-2} & a_{n-2,n-1} \\
%0 & \cdots & 0 & 0 & a_{n-1,n-2} & a_{n-1,n-1}
%\end{pmatrix} \nonumber \\
%%
%\vec{\phi} &= \begin{pmatrix}\phi_0 \\ \phi_1 \\ \phi_2 \\ \vdots \\ \phi_{n-2} \\ \phi_{n-1} \end{pmatrix} \:, \qquad
%
\ve{F} = \begin{pmatrix}
\nu\Sigma_{f,1} & 0 & 0 & 0 & \cdots & 0 \\
0 & \nu\Sigma_{f,11} & 0 & 0 & \cdots & 0 \\
0 & 0 & \nu\Sigma_{f,22} & 0 & \cdots & 0 \\
\vdots & & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & 0 & \nu\Sigma_{f,n-2,n-2} & 0 \\
0 & \cdots & 0 & 0 & 0 & \nu\Sigma_{f,n-1,n-1}
\end{pmatrix} \:.\nonumber
\end{align}
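As an illustration, a short Python sketch of assembling $\ve{A}$ and $\ve{F}$ for this
reflecting/vacuum slab is given below. This is one possible implementation, not part of the
course materials; the arrays are 0-indexed, so index $i$ holds the properties of cell $i+1$
in the notation above.
\begin{verbatim}
import numpy as np

def assemble(D, Sa, nuSf, h):
    """D, Sa, nuSf, h: numpy arrays of length n with each cell's diffusion
    coefficient, absorption and nu*fission cross sections, and width."""
    n = len(h)                        # n cells -> unknowns phi_0 .. phi_{n-1}
    A = np.zeros((n, n))
    F = np.zeros((n, n))
    # Reflecting boundary row (i = 0).
    A[0, 0] = 2.0 * D[0] / h[0]**2 + Sa[0]
    A[0, 1] = -2.0 * D[0] / h[0]**2
    F[0, 0] = nuSf[0]
    # Interior rows i = 1, ..., n-1; the vacuum condition phi_n = 0 simply
    # drops the last off-diagonal term.
    for i in range(1, n):
        hi, hip = h[i - 1], h[i]
        Di, Dip = D[i - 1], D[i]
        hii = 0.5 * (hi + hip)
        A[i, i - 1] = -Di / (hi * hii)
        A[i, i] = (Di / (hi * hii) + Dip / (hip * hii)
                   + (Sa[i - 1] * hi + Sa[i] * hip) / (hi + hip))
        if i + 1 < n:
            A[i, i + 1] = -Dip / (hip * hii)
        F[i, i] = (nuSf[i - 1] * hi + nuSf[i] * hip) / (hi + hip)
    return A, F
\end{verbatim}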
%----------------------------------------------------------------
%----------------------------------------------------------------
\section*{Solution Methods}
We are only going to talk about iterative solution methods for eigenvalue problems since no one uses direct methods in practice (directly solving an eigenvalue matrix problem rapidly becomes intractable). We will formulate the problem this way, where $m$ is the eigenvalue or ``outer'' iteration index:
%
\ifprintanswers
\[ \ve{A} \vec{\phi}^{(m+1)} = \frac{1}{k^{(m)}}\ve{F}\vec{\phi^{(m)}}\]
\else
\\ \vspace*{3em}
\fi
There are a variety of ways you can choose to determine convergence. We will consider these convergence criteria:
\begin{align}
\bigg|\frac{k^{(m)} - k^{(m-1)}}{k^{(m)}}\bigg| &< \epsilon_1 \:, \nonumber \\
\bigg|\frac{\phi_i^{(m)} - \phi_i^{(m-1)}}{\phi_i^{(m)}}\bigg| &< \epsilon_2 \qquad \forall i\:. \nonumber
\end{align}
%
Here, $\epsilon_1$ is the eigenvalue convergence criterion (often $1 \times 10^{-4}$ or smaller), and $\epsilon_2$ is the flux error criterion (often $1 \times 10^{-3}$ or smaller).
\subsection*{Finding $k$}
But wait, that iterative method was only telling us how to update $\vec{\phi}$. How do we get new iterates for $k$? To sort that out, we're going to think about the physical interpretation of $k$.
The multiplication factor, $k$, can be defined as
\[k = \frac{\text{total production rate}}{\text{total loss rate}}\]
%
We can define two \textbf{operators} (\emph{not matrices}; to get the thing that we solve, we apply our specific discretization methods to turn the operators into matrices) to help us compute this:
%
\begin{align}
A &= -\frac{d}{dx}D(x)\frac{d}{dx} + \Sigma_a(x) \quad \text{is the loss operator,} \nonumber \\
F &= \nu\Sigma_f(x) \qquad \qquad \text{is the production operator.}\nonumber
\end{align}
%Fortunately, the discretized versions of these operators are the matrices that we have $\ve{A}$ and $\ve{F}$, respectively.
This allows us to write an equation for $k$ as
\ifprintanswers
\[k^{(m+1)} = \frac{\int_0^{\tilde{a}} F \vec{\phi}^{(m+1)}(x)dx}{\int_0^{\tilde{a}} A \vec{\phi}^{(m+1)}(x)dx}\:. \]
\else
\\ \vspace*{3em}
\fi
We can express our iterative method with our operators,
\[ A \vec{\phi}^{(m+1)} = \frac{1}{k^{(m)}}F\vec{\phi^{(m)}}\]
%
and substitute this into our $k$ equation to get
\[k^{(m+1)} = \frac{\int_0^{\tilde{a}} F \vec{\phi}^{(m+1)}(x)dx}{\frac{1}{k^{(m)}}\int_0^{\tilde{a}} F \vec{\phi}^{(m)}(x)dx} \:. \]
%
This idea applies to any discretization strategy. To use the finite difference formulation we've developed, we discretize the operators and this can be expressed as:
\ifprintanswers
\[k^{(m+1)} = k^{(m)}\Biggl(\frac{\nu\Sigma_{f,1} \phi_0^{(m+1)} \frac{h_1}{2} + \sum_{i=1}^{n-1} \nu\Sigma_{f,ii} \phi_i^{(m+1)} \frac{h_{ii}}{2}}
{\nu\Sigma_{f,1} \phi_0^{(m)} \frac{h_1}{2} + \sum_{i=1}^{n-1} \nu\Sigma_{f,ii} \phi_i^{(m)} \frac{h_{ii}}{2}}\Biggr)\:.\]
\else
\\ \vspace*{3em}
\fi
\subsection*{Power Method}
Power Iteration (PI) is an old and straightforward algorithm for finding an eigenvalue/vector pair.
The basic idea is that any non-zero vector can be written as a linear combination of the eigenvectors of $\ve{B}$ because the eigenvectors are linearly independent, namely $\vec{v}_0 = \gamma_1 \vec{x}_1 + \gamma_2 \vec{x}_2 + \cdots + \gamma_n \vec{x}_n$, where $\vec{x}_{j}$ is the $j$th eigenvector and $\gamma_{j}$ is some constant. This specific expression assumes a non-defective $\ve{B}$, though this assumption is not necessary for the method to work.
Another fact that is used to understand power iteration is that $\ve{B}^m \vec{x}_i = \lambda_i^m \vec{x}_i$. Thus
%
\begin{equation}
\ve{B}^m \vec{v}_{0} = \gamma_1 \lambda_1^m \vec{x}_1 + \gamma_2 \lambda_2^m \vec{x}_2 + \cdots + \gamma_n \lambda_n^m \vec{x}_n \:.\nonumber
\label{eq:Ak}
\end{equation}
%
Since $|\lambda_1| > |\lambda_i|, i \ne 1$, the first term in the expansion will dominate as $m \to \infty$ and $\ve{B}^m \vec{v}_{0}$ therefore becomes an increasingly accurate approximation to $\vec{x}_1$.
In practice, it is desirable to avoid exponentiating a matrix, so we will use an algorithm that does something else. It is also helpful to normalize $\vec{v}_0$ to avoid possible over or underflow.
We are also quite interested in the \textit{convergence behavior} of PI. After $m$ steps, the iteration vector $\vec{v}$ will be:
%
\ifprintanswers
\begin{equation}
\vec{v}_{m} = \bigl( \frac{\lambda_{1}^{m}}{\vec{e}_{1}^{T}\ve{B}^{m}\vec{v}_{0}} \bigr) \bigl(\frac{1}{\lambda_{1}^{m}}\ve{B}^{m}\vec{v}_{0} \bigr) \:, \nonumber
\end{equation}
\else
\\ \vspace*{2em}
\fi
%
where $\vec{e}_{1}^{T}$ is a vector with 1 in the first entry and zeros elsewhere; it selects the first component of $\ve{B}^{m}\vec{v}_{0}$ in the multiplication. If $\ve{B}$ has eigenpairs $\{(\vec{x}_{j}, \lambda_{j}), 1 \le j \le n \}$ and $\vec{v}_{0}$ has the expansion $\vec{v}_{0} = \sum_{j=1}^{n} \vec{x}_{j}\gamma_{j}$ then
%
\begin{equation}
\frac{1}{\lambda_{1}^{m}}\ve{B}^{m}\vec{v}_{0} = \frac{1}{\lambda_{1}^{m}} \sum_{j=1}^{n} \ve{B}^{m}\vec{x}_{j}\gamma_{j} = \sum_{j=1}^{n} \bigl(\frac{\lambda_{j}}{\lambda_{1}} \bigr)^m \vec{x}_{j} \gamma_{j} \:.
\label{eq:PIexpand}
\end{equation}
%
\ifprintanswers
From equation \eqref{eq:PIexpand} it can be determined that the error is reduced in each iteration by a factor of $|\frac{\lambda_{2}}{\lambda_{1}}|$, which is called the dominance ratio. If $\lambda_2$ is close to $\lambda_1$, then this ratio will be close to unity and the method will converge very slowly.
If $\lambda_2$ is far from $\lambda_1$, then convergence will happen much more quickly. Put simply, PI is better suited for problems where $\ve{B}$ has eigenvalues that are well separated.
\else
\vspace*{8em}
\fi
Power iteration is very attractive because it only requires matrix-vector products and two vectors of storage space. Because of its simplicity and low storage cost, PI has been widely used in the transport community for criticality problems for quite some time.
Despite these beneficial characteristics, many current codes use an acceleration method with PI or have moved away from it altogether because of the slow convergence for many problems of interest. Nevertheless, it is still used in some codes, has historical relevance, and is used in many studies as a base comparison case.
\subsubsection*{Algorithm}
The power method applies in our formulation when $\ve{B} \equiv \ve{A}^{-1}\ve{F}$, $\lambda \equiv k$ and $\vec{x} \equiv \vec{\phi}$.
%
When we write it this way it is technically \textbf{inverse power iteration} because we're using the inverse of $\ve{A}$ rather than $\ve{A}$ (the theory is the same). Be careful when setting up these solvers about whether you are solving for $k$ or $1/k$ to ensure you get the correct eigenvectors.
Here is an algorithm to implement the power method (note: this is not the most efficient way to do this, but it is likely the clearest); an illustrative code sketch is given after the listing.
%
\begin{enumerate}
\item get initial values for $k^{(0)}$ and $\phi^{(0)}$ for $i = 0, \dots, n-1$; normalize $\phi_0 = \phi_0 / ||\phi_0||$
\item compute the elements of $\ve{A}$
\item compute the initial fission source
\[\vec{Q}_{f}^{(0)} = \begin{pmatrix}
Q_{f,0}^{(0)} \\ Q_{f,1}^{(0)} \\ \vdots \\ Q_{f,n-1}^{(0)} \\
\end{pmatrix}\]
%
where $Q_{f,i}^{(0)} = \nu\Sigma_{f,ii}\phi_i^{(0)} \: \text{for } i = 1, \dots, n-1$ and $Q_{f,0}^{(0)} = \nu\Sigma_{f,1}\phi_0^{(0)}$
\item for $m = 1, ...,$ convergence:
\begin{enumerate}
\item solve (using a method like Jacobi or GS)
\[\ve{A} \vec{\phi}^{(m)} = \frac{1}{k^{(m-1)}}\vec{Q}_{f}^{(m-1)}\]
\item compute the next fission source $\:Q_{f,i}^{(m)} = \nu\Sigma_{f,ii}\phi_i^{(m)} \: \text{ for } i = 1, \dots, n-1$ and\\ $Q_{f,0}^{(m)} = \nu\Sigma_{f,1}\phi_0^{(m)}$
\item compute the next eigenvalue:
\[k^{(m)} = k^{(m-1)}\Biggl(\frac{Q_{f,0}^{(m)} \frac{h_1}{2} + \sum_{i=1}^{n-1} Q_{f,i}^{(m)} \frac{h_{ii}}{2}}
{Q_{f,0}^{(m-1)} \frac{h_1}{2} + \sum_{i=1}^{n-1} Q_{f,i}^{(m-1)} \frac{h_{ii}}{2}}\Biggr)\]
\item check for convergence
\begin{align}
\bigg|\frac{k^{(m)} - k^{(m-1)}}{k^{(m)}}\bigg| &< \epsilon_1 \nonumber \\
\bigg|\frac{\phi_i^{(m)} - \phi_i^{(m-1)}}{\phi_i^{(m)}}\bigg| &< \epsilon_2 \nonumber
\end{align}
\end{enumerate}
\end{enumerate}
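Below is an illustrative Python sketch of this procedure, reusing the $\ve{A}$ and $\ve{F}$
matrices from the assembly sketch earlier. It is not part of the course materials, and for
brevity it uses a direct solve where step (a) above suggests an iterative method like Jacobi
or Gauss-Seidel.
\begin{verbatim}
import numpy as np

def power_iteration(A, F, h, tol_k=1e-4, tol_phi=1e-3, max_iter=500):
    """Power iteration for A phi = (1/k) F phi; h is the array of cell widths."""
    n = A.shape[0]
    # Integration weights for the k update: h_1/2 at the reflecting edge,
    # h_ii/2 = (h_i + h_{i+1})/4 at the interior points.
    w = np.empty(n)
    w[0] = h[0] / 2.0
    w[1:] = (h[:-1] + h[1:]) / 4.0
    k = 1.0
    phi = np.ones(n)
    Qf = np.diag(F) * phi                       # fission source F phi
    for m in range(max_iter):
        phi_new = np.linalg.solve(A, Qf / k)    # step (a), direct solve here
        Qf_new = np.diag(F) * phi_new           # step (b)
        k_new = k * np.dot(w, Qf_new) / np.dot(w, Qf)   # step (c)
        done = (abs((k_new - k) / k_new) < tol_k and
                np.all(np.abs((phi_new - phi) / phi_new) < tol_phi))
        k, phi, Qf = k_new, phi_new, Qf_new
        if done:
            break
    return k, phi
\end{verbatim}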
%--------------------------------------------------------------------
%--------------------------------------------------------------------
%\bibliographystyle{plain}
%\bibliography{LinearSolns}
\end{document} | {
"alphanum_fraction": 0.6274747125,
"avg_line_length": 46.268,
"ext": "tex",
"hexsha": "06a0bfeda65a47807bde8072ea8f7ea80a00d91f",
"lang": "TeX",
"max_forks_count": 12,
"max_forks_repo_forks_event_max_datetime": "2020-09-20T08:01:10.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-01-21T20:12:08.000Z",
"max_forks_repo_head_hexsha": "5a08229eb11eebdd60e5ec1b4c0d41a541e7d82f",
"max_forks_repo_licenses": [
"CC-BY-3.0"
],
"max_forks_repo_name": "rachelslaybaugh/NE155",
"max_forks_repo_path": "13-1d-fd-fvm/14-1d-eig-fd-fvm.tex",
"max_issues_count": 9,
"max_issues_repo_head_hexsha": "5a08229eb11eebdd60e5ec1b4c0d41a541e7d82f",
"max_issues_repo_issues_event_max_datetime": "2016-10-31T20:14:58.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-04-01T00:18:04.000Z",
"max_issues_repo_licenses": [
"CC-BY-3.0"
],
"max_issues_repo_name": "rachelslaybaugh/NE155",
"max_issues_repo_path": "13-1d-fd-fvm/14-1d-eig-fd-fvm.tex",
"max_line_length": 460,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "5a08229eb11eebdd60e5ec1b4c0d41a541e7d82f",
"max_stars_repo_licenses": [
"CC-BY-3.0"
],
"max_stars_repo_name": "rachelslaybaugh/NE155",
"max_stars_repo_path": "13-1d-fd-fvm/14-1d-eig-fd-fvm.tex",
"max_stars_repo_stars_event_max_datetime": "2021-01-15T02:00:39.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-08-22T05:28:25.000Z",
"num_tokens": 8551,
"size": 23134
} |
% $Id$ %
\subsection{Calendar}
\screenshot{plugins/images/ss-calendar}{Calendar}{img:calendar}
This is a small and simple calendar application with a memo saving function.
Dots indicate dates with memos. The available memo types are: one-off,
yearly, monthly, and weekly memos.
You can select which day is the first day of the week with the \setting{First Day of Week} setting in the menu.
\begin{btnmap}
\opt{RECORDER_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD%
,SANSA_C200_PAD,SANSA_CLIP_PAD,GIGABEAT_PAD,MROBE100_PAD,GIGABEAT_S_PAD,IPOD_4G_PAD%
,IPOD_3G_PAD,SANSA_E200_PAD,IRIVER_H10_PAD,SANSA_FUZE_PAD,PBELL_VIBE500_PAD
,SANSA_FUZEPLUS_PAD,SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}
{\ButtonLeft{} / \ButtonRight{} /}
\opt{RECORDER_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD,SANSA_C200_PAD,SANSA_CLIP_PAD%
,GIGABEAT_PAD,MROBE100_PAD,GIGABEAT_S_PAD,PBELL_VIBE500_PAD,SANSA_FUZEPLUS_PAD%
,SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}
{\ButtonUp{} / \ButtonDown}
\opt{IPOD_4G_PAD,IPOD_3G_PAD,SANSA_E200_PAD,SANSA_FUZE_PAD}
{\ButtonScrollFwd{} / \ButtonScrollBack}
\opt{IRIVER_H10_PAD,MPIO_HD300_PAD}{\ButtonScrollUp{} / \ButtonScrollDown}
\opt{COWON_D2_PAD}{\TouchMidLeft{} / \TouchMidRight{} / \TouchTopMiddle{} / \TouchBottomMiddle}
\opt{HAVEREMOTEKEYMAP}{& }
& Move the selector\\
%
\opt{RECORDER_PAD}{\ButtonPlay}
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IPOD_4G_PAD,IPOD_3G_PAD,IAUDIO_X5_PAD%
,SANSA_E200_PAD,SANSA_C200_PAD,SANSA_CLIP_PAD,GIGABEAT_PAD,MROBE100_PAD,GIGABEAT_S_PAD%
,SANSA_FUZE_PAD,SANSA_FUZEPLUS_PAD}
{\ButtonSelect}
\opt{IRIVER_H10_PAD,SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}{\ButtonPlay}
\opt{PBELL_VIBE500_PAD}{\ButtonOK}
\opt{MPIO_HD300_PAD}{\ButtonEnter}
\opt{HAVEREMOTEKEYMAP}{& }
& Show memos for the selected day\\
%
\opt{MPIO_HD300_PAD}{\ButtonRew / \ButtonFF
& Previous / Next week\\}
%
\opt{RECORDER_PAD}{\ButtonOn{} + \ButtonUp{} / \ButtonDown}
\opt{MROBE100_PAD}{\ButtonMenu{} + \ButtonUp{} / \ButtonDown}
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonMode{} / \ButtonRec}
\opt{IPOD_4G_PAD,IPOD_3G_PAD}{\ButtonPlay{} / \ButtonMenu}
\opt{IAUDIO_X5_PAD}{\ButtonRec{} / \ButtonPlay}
\opt{GIGABEAT_PAD,SANSA_C200_PAD,SANSA_CLIP_PAD}{\ButtonVolUp{} / \ButtonVolDown}
\opt{GIGABEAT_S_PAD}{\ButtonNext{} / \ButtonPrev}
\opt{SANSA_E200_PAD,SANSA_FUZE_PAD}{\ButtonUp{} / \ButtonDown}
\opt{IRIVER_H10_PAD}{\ButtonRew{} / \ButtonFF}
\opt{COWON_D2_PAD}{\TouchBottomLeft{} / \TouchBottomRight}
\opt{PBELL_VIBE500_PAD}{\ButtonMenu{} / \ButtonPlay}
\opt{SAMSUNG_YH92X_PAD}{\ButtonFF{} + \ButtonUp{} / \ButtonDown}
\opt{SAMSUNG_YH820_PAD}{\ButtonRec{} + \ButtonUp{} / \ButtonDown}
\opt{MPIO_HD300_PAD}{\ButtonRec{} / \ButtonPlay}
\opt{SANSA_FUZEPLUS_PAD}{\ButtonBack{} / \ButtonPlay}
\opt{HAVEREMOTEKEYMAP}{& }
& Previous / Next month\\
%
\opt{RECORDER_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOff}
\opt{IPOD_4G_PAD,IPOD_3G_PAD}{\ButtonMenu{} + \ButtonSelect}
\opt{GIGABEAT_S_PAD}{\ButtonBack}
\opt{IAUDIO_X5_PAD,IRIVER_H10_PAD,SANSA_E200_PAD,SANSA_C200_PAD,SANSA_CLIP_PAD,GIGABEAT_PAD,MROBE100_PAD}{\ButtonPower}
\opt{SANSA_FUZE_PAD}{Long \ButtonHome}
\opt{COWON_D2_PAD}{\ButtonPower}
\opt{PBELL_VIBE500_PAD}{\ButtonRec}
\opt{SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}{\ButtonRew}
\opt{MPIO_HD300_PAD}{Long \ButtonMenu}
\opt{HAVEREMOTEKEYMAP}{& }
& Quit\\
\end{btnmap}
| {
"alphanum_fraction": 0.7299803316,
"avg_line_length": 50.1267605634,
"ext": "tex",
"hexsha": "7df655accb5e30109a363fc051f6ffb4fbd84c47",
"lang": "TeX",
"max_forks_count": 15,
"max_forks_repo_forks_event_max_datetime": "2020-11-04T04:30:22.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-01-21T13:58:13.000Z",
"max_forks_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC",
"max_forks_repo_path": "manual/plugins/calendar.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2",
"max_issues_repo_issues_event_max_datetime": "2018-05-18T05:33:33.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-07-04T18:15:33.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC",
"max_issues_repo_path": "manual/plugins/calendar.tex",
"max_line_length": 123,
"max_stars_count": 24,
"max_stars_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC",
"max_stars_repo_path": "manual/plugins/calendar.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-05T14:09:46.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-03-10T08:43:56.000Z",
"num_tokens": 1278,
"size": 3559
} |
\chapter{Requirement \& Specification}
Software requirements engineering describes what the system should do; it involves discovering, analyzing, documenting, and checking these services and constraints. This project has two kinds of stakeholders: teachers and students.
\section{Requirements Elicitation}
The team held a detailed group discussion and agreed on a fundamental set of requirements, then conducted interviews with stakeholders based on these existing requirements. Based on the interviews, the team created personas to record the characteristics of users and their needs, as shown in Figure~\ref{fig: personas}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.2]{Requirement/Personas.jpeg}
\caption{Personas}
\label{fig: personas}
\end{figure}
\section{Requirements Specification}
So far, the team has documented functional requirements (see Appendix~\ref{appendix:fun_req}) and non-functional requirements (see Appendix~\ref{appendix:non_fun_req}). The team plans to develop a WeChat application satisfying all the requirements. The supervisor expects the app to target not only WeChat but also the web and other platforms. Therefore, after testing, the team will release a WeChat mini-app as well as a web version of the app.
\section{Requirements Validation}
The team had separate face-to-face meetings with the stakeholders, including the supervisor (as a stakeholder representing teachers) and three students. The team proofread and discussed every requirement with them to make sure they understood what this app would be able to achieve.
\section{UML Diagram}
\begin{enumerate}%[(1)]
\item
The Use Case Diagram in Figure~\ref{fig: UCD} shows the two main subsystems of the Quiz App, one for students and one for teachers. It demonstrates the relationship between users and their corresponding tasks, and specifies each user's independent tasks.
\item
The Sequence Diagram in Figure~\ref{fig: SD} shows how students and teachers each interact with the system. It specifies the dynamic collaboration between the mini-program and the database.
\item
The Activity Diagram in Figure~\ref{fig: InitialAndReleaseQuiz} shows how teachers initialize and release quizzes.
\item
The Activity Diagram in Figure~\ref{fig: AnswerQuiz} shows how students answer quizzes and check the feedback.
\item
The Activity Diagram in Figure~\ref{fig: ReviewQuiz} shows how students look back into quizzes in their quiz lists for detailed information.
\end{enumerate}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.4]{Requirement/UCD.png}
\caption{Use Case Diagram}
\label{fig: UCD}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.4]{Requirement/SequenceDiagram.jpg}
\caption{Sequence Diagram}
\label{fig: SD}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.4]{Requirement/InitialAndReleaseQuiz.jpg}
\caption{Initialize And Release Quiz}
\label{fig: InitialAndReleaseQuiz}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.4]{Requirement/AnswerQuiz.jpg}
\caption{Answer Quiz}
\label{fig: AnswerQuiz}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.4]{Requirement/ReviewQuiz(S).jpg}
\caption{Review Quiz}
\label{fig: ReviewQuiz}
\end{figure}
| {
"alphanum_fraction": 0.7749855575,
"avg_line_length": 50.1739130435,
"ext": "tex",
"hexsha": "26af8ab5ab5922227d98b25cfd04a8dad97036f5",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-04-22T02:45:24.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-04-22T02:45:24.000Z",
"max_forks_repo_head_hexsha": "870de7af6028660ad371b95bd74e4765a07d6a0c",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "BILLXZY1215/QuizApp",
"max_forks_repo_path": "doc/Interim Report/Requirement/Requirement.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "870de7af6028660ad371b95bd74e4765a07d6a0c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "BILLXZY1215/QuizApp",
"max_issues_repo_path": "doc/Interim Report/Requirement/Requirement.tex",
"max_line_length": 489,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "870de7af6028660ad371b95bd74e4765a07d6a0c",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "BILLXZY1215/QuizApp",
"max_stars_repo_path": "doc/Interim Report/Requirement/Requirement.tex",
"max_stars_repo_stars_event_max_datetime": "2021-06-07T10:22:21.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-06-07T10:22:21.000Z",
"num_tokens": 811,
"size": 3462
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Master Thesis in Mathematics
% "Immersions and Stiefel-Whitney classes of Manifolds"
% -- Chapter 1: Formulation of the Immersion Conjecture --
%
% Author: Gesina Schwalbe
% Supervisor: Georgios Raptis
% University of Regensburg 2018
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Formulation of the Immersion Conjecture}
\label{chap:reformulation}
% Explain carefully how the immersion problem can be reformulated purely
% in terms of homotopy theory. [immersionconj] (The necessary results from differential
% topology and Hirsch-Smale theory should be stated clearly but may be
% presented without proofs.)
This chapter is dedicated to reviewing the concepts and results,
which are needed to formulate the immersion conjecture and its
connection to the theory of characteristic classes.
The (re)formulation in \autoref{sec:reformulation} uses as main
ingredient a theorem by Hirsch and Smale on the relation between
immersions and vector bundle monomorphisms presented in
\autoref{sec:hirschsmale}.
This, together with other required definitions and properties of
immersions, is contained in \autoref{sec:immersions}.
Thereafter, in \autoref{sec:charclsofvb}, characteristic classes of vector
bundles are reviewed. Most importantly, the Stiefel-Whitney
characteristic classes are recalled in \autoref{sec:swclasses},
together with an outline of the way these will be particularly
useful throughout this thesis.
The last section then contains an explanation on how they contribute
to the immersion problem via obstructions.
This chapter is meant as revision and outline, therefore a
couple of preliminary results are merely referenced without proof.
\section{Immersions}\label{sec:immersions}
This section recapitulates the definition and some properties
of immersions.
\subsection{Definition}
As already mentioned, immersions are technically local embeddings of
manifolds.
From a different point of view, immersions are merely a special case
of monomorphisms of vector bundles. So, recall that a morphism
$(\xi_1\colon E_1\to X_1)\to(\xi_2\colon E_2\to X_2)$
of vector bundles over different spaces is a map $F\colon E_1\to E_2$
which is linear on fibers and covers its restriction to the zero
section, \idest it makes the following diagram commute:
\begin{center}
\begin{tikzcd}[column sep=large]
E_1 \ar[r,"F"]\ar[d, "\xi_1"]
& E_2 \ar[d, "\xi_2"]
\\
X_1 \ar[r, "F|_{\zerosec{\xi_1}}"]
& X_2
\end{tikzcd}
\end{center}
Further, remember the fact that such a morphism is a monomorphism in
the category of vector bundles if and only if its restriction to each
fiber is injective.
\begin{Def}
A smooth map $f\colon M\to N$ of smooth manifolds is called
an \emph{immersion}, written $M\immto N$, if its differential
$\Diff f\colon\T M\to\T N$ is a monomorphism of vector
bundles.
A homotopy $H\colon M\times I\to N$ which is an immersion in each
stage is called \emph{regular}.
\end{Def}
\begin{Rem}
Let $M$ and $N$ be manifolds.
\begin{enumerate}
\item
Immersions are local embeddings, \idest for an
immersion $f\colon M\immto N$, around every point in $M$
there is an open neighborhood on which $f$ is a diffeomorphism
onto its image.
More descriptively, immersions are mappings that do not allow creases,
respectively sharp bends, or puncturing
(see \forexample \cite{outsidein} for nicely illustrated examples).
However, globally, immersions need not be injective since
\forexample self-intersections of the image are allowed.
\begin{proof}
This is a conclusion from the implicit function theorem.
For details see \forexample \cite[Chap.~1, Theorem~3.1]{hirsch}.
\end{proof}
\item Embeddings of manifolds are exactly those injective immersions
that are topological embeddings.
If $M$ is compact, any injective immersion $f\colon M\to N$ is an
embedding.
\begin{proof}
% Argumentation:
% Every manifold has Riemannian metric
% Every Riemannian manifold is a metric space, topologies agree
% For metric spaces, sequentially compact is equivalent to compact
Use the fact that for manifolds compact is equivalent to
sequentially compact, in order to get
\begin{gather*}
\left\{
y=\lim_n f(x_n) \middlemid
(x_n)_{n\in\Nat}\subset M \text{ without limit}
\right\} \cap M = \emptyset
\end{gather*}
directly from injectivity of $f$, and compactness of $M$ and
$f(M)$.
Then apply \cite[Chap.~II, Lemma~2.6]{adachi}.
\end{proof}
\end{enumerate}
\end{Rem}
\subsection{The Hirsch-Smale Theorem}\label{sec:hirschsmale}
It is easy to see that in general not all vector bundle monomorphisms
between tangent bundles of smooth manifolds need to be the
differential of an immersion. However, taking the differential gives a
canonical inclusion of the set of immersions into the set of vector
bundle monomorphisms.
A theorem of Hirsch and Smale states that this inclusion
actually is a homotopy equivalence,
translating questions on the existence of immersions into the context
of characteristic classes of vector bundles.
For the formulation, one has to equip the respective sets with a
topology as follows.
\begin{Def}
Let $M$, $N$ be closed smooth manifolds of dimensions $\dim M<\dim N$.
\begin{enumerate}
\item
Equip the set of all vector bundle monomorphisms from $\xi_1$ to
$\xi_2$ with the compact-open topology (see \forexample
\cite{hatcher}), and denote that space by $\Mono{\xi_1}{\xi_2}$.
Note that a path between monomorphisms $F_1$ and $F_2$ in the
space $\Mono{\xi_1}{\xi_2}$ is a homotopy from $F_1$ to $F_2$ which is
a vector bundle monomorphism in each stage.
\item
% compare Lecture_Notes_on_Immersions_of_Surfaces_in_3-Space--Nowik.ps
% Whitney $C^r$\nbd{}topology; see [Hirsch, Differential Topology, Chap 2, p.35]
Taking the differential yields an injection
\begin{gather*}
\Imm M N \longto \Mono{\T M}{\T N}
\;,\quad
f\longmapsto\Diff f
\end{gather*}
of the set $\Imm M N$ of all immersions from $M$ to $N$.
Equip $\Imm M N$ in the following with the subspace topology.
This results in the weak topology described in
\cite[Section~2.1]{hirsch}, which equals the Whitney
$C^1$\nbd{}topology since $M$ was chosen to be compact.
By the way, $\Imm M N$ is open in $C^1(M,N)$ equipped
with the Whitney $C^1$\nbd{}topology
(see \cite[Section~2.1, Theorem~1.1]{hirsch}).
\end{enumerate}
\end{Def}
Now one can state the major result in immersion theory by
Hirsch using preliminary work of Smale
\cite[Sections~5 and 6]{hirschimmersions}.
The following formulation is according to
\cite[Theorem~1.2]{immersionconj}.
\begin{Thm}[Hirsch-Smale]\label{thm:hirschsmale}
Let $M$, $N$ be closed manifolds with $\dim M<\dim N$.
Then the differential map
$\Diff\colon \Imm M N\to \Mono{\T M}{\T N}$
induces isomorphisms on the homotopy groups.
Especially,
\begin{gather*}
\Diff_*\colon
\pi_0(\Imm M N) \overset\sim\longto \pi_0(\Mono{\T M}{\T N})
\end{gather*}
describes an isomorphism of path-connected components.
% Original formulation by Hirsch in [hirschimmersions], sec. 5:
Therefore, every vector bundle monomorphism
$F\colon\T M\to\T N$ is homotopic (through vector bundle
monomorphisms) to a monomorphism which is the differential
$\Diff f$ of a smooth map $f\colon M\to N$, \idest of an
immersion.
\begin{proof}
See \cite[Theorem~8.2.1]{introductionhprinciple} or the original
paper \cite{hirschimmersions}.
\end{proof}
\end{Thm}
Thus, any monomorphism of vector bundles over smooth, closed manifolds
$M$ and $N$ implies the existence of an immersion from $M$ to $N$.
This conclusion will be needed to reformulate the immersion problem.
\subsection{Normal Bundles}
Another nice property of immersions is that every immersion gives
rise to a normal bundle. This will finally make it possible to
translate the existence of an immersion of certain codimension into
the existence of a vector bundle of certain rank fulfilling a homotopy
invariant lifting property.
\begin{Def}
Let $\imm\colon M^n\immto N^{n+r}$ be an immersion of smooth
manifolds.
The \emph{normal bundle $\N{\imm}$ of $\imm$}
is the well-defined quotient bundle $\pb{\imm}\T N/\T M$ of rank $r$,
respectively the one fulfilling
$\N{\imm}\oplus\T M\cong\pb{\imm}\T N$.
\end{Def}
Recall that manifolds have the very handy property that they admit a
unique tangent bundle.
Similarly, a normal bundle of an immersion into Euclidean space is
unique up to a notion of stable equivalence, under which
characteristic classes will turn out to be invariant.
\begin{Def}
Call two vector bundles $\xi_1$, $\xi_2$ over the same space
\emph{stably equivalent} in case there are $s_1, s_2\in\Nat$ such
that $\xi_1\oplus\trivbdl^{s_1}\cong\xi_2\oplus\trivbdl^{s_2}$.
\end{Def}
Now the promised notion of the stable normal bundle can be clarified.
\begin{LemDef}
Let $M^n$ be a closed, smooth manifold.
Then all normal bundles of immersions of $M$ into
Euclidean spaces are stably equivalent.
The resulting equivalence class is called the
\emph{stable normal bundle of $M$}, written $\N M$.
When working with vector bundles in a context that is stable in the
above sense, like \forexample characteristic classes, the stable
normal bundle of $M$ may be identified with an arbitrary
representative of its class.
\begin{proof}[Proof (sketch)]
First show that every normal bundle of an immersion is stably
equivalent to the
normal bundle of \emph{some} embedding (\idest some injective
immersion). Then ensure that all normal bundles of embeddings are
stably equivalent.
\begin{description}
\item[Immersions]
Any immersion $\imm\colon M\immto\R^{n+r}$ can be raised to
higher codimension by concatenation with the linear embedding
$l\colon\R^{n+r}\immto\R^{n+r+s}$ into the first components.
As the normal bundles $\N{\imm}$ and
$\N{l\circ\imm}\cong\N{\imm}\oplus\trivbdl^s$ are stably
equivalent, raising the codimension does not change the stable
equivalence class.
Furthermore, since a regular homotopy yields an
isomorphism on the normal bundles, it suffices to show:
\begin{claim}
For $r>n$ every immersion $M\immto\R^{n+r}$ is regularly
homotopic to an embedding.
\end{claim}
For the claim use bumping techniques
%as needed for the Thom transversality theorem
to show that for $r>n$ every
immersion $\imm\colon M\immto\R^{n+r}$ is regularly homotopic to
an injective immersion.
This is \forexample \cite[Chap.~II, Lemma~2.5]{adachi}.
However, as $M$ is compact, injective immersions
are embeddings.
\item[Embeddings]
By Whitney's embedding theorem
(see \forexample \cite[Chap.~II.2]{adachi}),
it is known that every manifold admits an embedding into some
real space.
Further, by \forexample the General Position theorem
(compare \cite[Chap.~2]{embeddingsummary})
or Haefliger's theorem (see \forexample \cite[Chap.~II.1]{adachi}),
it is known that for sufficiently large $k\in\Nat$ all embeddings
$M\immto\R^{n+k}$ are isotopic, \idest homotopic through embeddings,
and hence their normal bundles are isomorphic.
Therefore, all normal bundles of embeddings of a manifold are
stably equivalent.
\qedhere
\end{description}
\end{proof}
\end{LemDef}
\section{Characteristic Classes of Vector Bundles}\label{sec:charclsofvb}
The theory of characteristic classes provides the key tools
for the rest of this thesis. Therefore, this section
revises basic results, and recalls in detail several properties
of Stiefel-Whitney classes. The latter are generators of all
characteristic classes of vector bundles, and will be essential in
proving the theorems of the subsequent chapters.
\subsection{General Definition and Properties}
Before starting off with the definition of characteristic classes,
we recall the definition of universal bundles and Steenrod's
classification theorem. For more details see
\cite[Chapter~14.4]{tomdieck}.
\begin{LemDef}\label{def:charcls}
\begin{enumerate}
\item Any topological group $G$ admits a contractible space $\EG$ with a
free $G$\nbd{}action, and a corresponding principal $G$\nbd{}bundle
\begin{gather*}
\gamma^G\colon \EG\longto\BG\coloneqq \EG/G
\;,
\end{gather*}
called the \emph{universal $G$\nbd{}bundle},
where $\gamma^G$, $\EG$, and $\BG$ are all unique up to
homotopy.
$\BG$ is called the \emph{classifying space} for principal
$G$\nbd{}bundles.
For construction and uniqueness see \cite[Example~1B.7~ff.]{hatcher},
respectively note that universal coverings are unique up to homotopy.
\item\label{item:classificationthm}
$\gamma^G$ fulfills the following universal property:
For any space $X$ admitting the homotopy type of a CW-complex
there is a bijection between $[X,\BG]$, which denotes the homotopy
classes of maps from $X$ to $\BG$, and the isomorphism classes of
principal $G$\nbd{}bundles over $X$, given by
\begin{gather*}
\left(f\colon X\to\BG \right) \longmapsto \pb f \gamma^G
\;.
\end{gather*}
This correspondence is natural in $X$, and is a version of
Steenrod's classification theorem,
see
\cite[Theorem~14.4.1]{tomdieck}, or
\cite[Theorem~1.4, p.~75]{immersionconj}.
\end{enumerate}
\end{LemDef}
As becomes clear directly from the statement, the classification
theorem serves to translate bundle-theoretic problems into homotopy-theoretic
ones, which will be a crucial step in reformulating the
immersion problem in \autoref{sec:reformulation}.
Such homotopy theoretic questions can then be tackled using known
cohomological tools, which yields the general concept of
characteristic classes.
Some important examples for vector bundles, namely the
Stiefel-Whitney classes, will be discussed in detail in
\autoref{sec:swclasses}.
\begin{Def}
A \emph{characteristic class}
\begin{compactitemize}
\item of degree $i$
\item with coefficients in a ring $R$
\item for principal $G$\nbd{}bundles for a group $G$
\end{compactitemize}
is a natural transformation
\begin{gather*}
\SwapAboveDisplaySkip
\Cl\colon [-, \BG] \Longrightarrow \H^i(-; R)\;.
\end{gather*}
of contravariant functors from the category of spaces with the
homotopy type of a CW-complex to the category of sets.
\end{Def}
\begin{Rem}
By Brown's representation theorem
(\forexample \cite[Chap.~4.E]{hatcher}),
$\H^i(-;R)$ is a representable functor represented by the
Eilenberg-MacLane space $K(i,R)$.
Thus, by the Yoneda lemma, a characteristic class is
represented by a morphism
\begin{gather*}
\cl\colon \BG \longto K(i, R)
\end{gather*}
in $\Top$, \idest by a cohomology class $\cl$ of $\BG$.
Thus, for a space $X$, which admits the homotopy type of a
CW-complex,
applying $\Cl$ to a principal $G$\nbd{}bundle over $X$ that is
represented by a morphism $\eta\colon X\to\BG$
as in Definition~\itemref{def:charcls}{item:classificationthm},
yields
\begin{gather*}
\Cl(X) = \pb{\eta} \cl \in \H^i(X;R)
\;.
\end{gather*}
This describes a one-to-one correspondence between
characteristic classes as above and cohomology classes in
$\H^i(BG;R)$, and in the following any characteristic class will be
identified with its corresponding cohomology class.
\end{Rem}
As this thesis is mainly concerned with vector bundles, we have a
closer look at the significance of classifying spaces in that context,
especially at their stability property and its implications for normal
bundles.
\begin{Lem}\label{lem:classificationvb}
Let $X$ be any space, let $M^n$ be a manifold, and $r,s\in\Nat$.
\begin{enumerate}
\item\label{item:vbcharacterisation}
There is a natural equivalence between the category of
$n$\nbd{}dimensional vector bundles and that of principal
$\Orth(n)$\nbd{}, respectively $\GL n$\nbd{}bundles.
\item\label{item:boincl}
The inclusion $\B\Orth(r)\to\B\Orth(r+s)$ is $r$\nbd{}connected.
\item\label{item:bomaps}
On vector bundles, the inclusion $\B\Orth(r)\to\B\Orth(r+s)$
represents the direct sum with the trivial bundle $\trivbdl^s$.
\Idest for vector bundles $\xi_1$ and $\xi_2$ over $X$ with
classifying maps $f_1$ respectively $f_2$, there is a lift up to
homotopy of the form
\begin{center}
\begin{tikzcd}
X \ar[r, "f_1"]
\ar[dr, bend right, "f_2"{left}]
& \B\Orth(r)
\ar[d, "\incl"]
\\
& \B\Orth(r+s)
\end{tikzcd}
\end{center}
if and only if $\xi_1\oplus\trivbdl^{s}\cong\xi_2$.
\item\label{item:charclsstablenormalbundle}
Taking the limit of all classifying maps of the normal bundles of
embeddings of $M$ yields a homotopy class in
$[M,\B\Orth=\lim_{k\to\infty}\B\Orth(k)]$ % this is \injlim
that classifies the stable normal bundle of $M$ uniquely.
Any lift of it in $[M,\B\Orth(r)]$ represents a vector bundle
$\N{}$ with the property $\N{}\oplus\T M\cong\trivbdl^{n+r}$.
\end{enumerate}
\begin{proof}
The natural equivalence is given by the known construction of
associated vector bundles, and the stability property of the
family $(\B\Orth(n))_n$ becomes clear from
$\B\Orth(n)\cong\lim_{k\to\infty}G_n(\R^k)$
where $G_n(\R^k)$ is the Grassmann manifold of $n$\nbd{}dimensional
vector subspaces of $\R^k$.
For the $r$\nbd{}connectivity in \ref{item:boincl}, observe that
the diagram
\begin{center}
\begin{tikzcd}[row sep=small, column sep=small]
\Orth(r) \ar[r]\ar[d,"\incl"{near start}]
&\E\Orth(r) \ar[r] \ar[d,"\incl"{near start}]
&\BO(r) \ar[d,"\incl"{near start}] \\
\Orth(r+1) \ar[r]\ar[d]
&\E\Orth(r+1) \ar[r]
&\BO(r+1) \\
\Sphere{r}
\end{tikzcd}
\end{center}
commutes, where the rows are the defining fiber bundles for the
classifying spaces, and $\Orth(r)\to\Orth(r+1)\to\Sphere r$
is the well-known fiber bundle of the inclusion of orthogonal groups.
Since $\E\Orth(s)$ is contractible for any $s\in\Nat$,
the long exact sequences of homotopy for the horizontal fiber bundles yield
$\pi_i(\Orth(r))\cong\pi_{i+1}(\BO(r))$ for $i\in\Nat$, analogously for $r+1$.
The sequence for the vertical fiber bundle yields that
$\Orth(r)\to\Orth(r+1)$ is $r$\nbd{}connected, and commutativity gives
the same for $\incl\colon\BO(r)\to\BO(r+1)$.
For the lifting property of the stable normal bundle, consider a
lift $\N{}\in[M,\BO(r)]$, $r>1$, by \ref{item:vbcharacterisation}
classifying a vector bundle over $M$, which we will also call
$\N{}$. By \ref{item:bomaps} and the definition
of the stable normal bundle, there is some $s\in\Nat$
such that
\begin{gather*}
\T M\oplus\N{}\oplus\trivbdl^s\cong\trivbdl^{n+r+s}
\;.
\end{gather*}
Assume $s>0$.
In order to show that this still implies
$\T M\oplus\N{}\cong\trivbdl^{n+r}$ (\idest that their
classifying maps are homotopic), consider the corresponding
homotopy commutative diagram of classifying maps
\begin{center}
\begin{tikzcd}[column sep=large]
M
\ar[r, "\T M\oplus\N{}", shift left]
\ar[r, "\trivbdl^{n+r}"{below}, shift right]
\ar[dr, bend right, "\trivbdl^{n+r+s}\,"{left}]
& \B\Orth(n+r)
\ar[d, "\incl"]
\\
& \B\Orth(n+r+s)
\end{tikzcd}
\end{center}
This says that $\incl\circ\T M\oplus\N{}$ and
$\incl\circ\trivbdl^{n+r}$ must be homotopic via
some homotopy
\begin{gather*}
H\colon M\times I\to\BO(n+r+s) \;.
\end{gather*}
The trick now is to use that by Morse theory $M$ has the homotopy
type of an $n$\nbd{}dimensional CW-complex, together with $\incl$ being
$(n+r)$\nbd{}connected, $r>0$, and both $\BO(n+r+s)$ and $\BO(n+r)$
being path-connected. Because with these assumptions
obstruction theory yields that
$H$ lifts to a homotopy $M\times I\to\BO(n+r)$ between
the classifying maps of $\T M\oplus\N{}$ and $\trivbdl^{n+r}$, as
was needed
(see \forexample \cite[Lemma~4.6]{hatcher}).
\end{proof}
\end{Lem}
\begin{Not}
Throughout this thesis assume $R=\Zmod2$ and $G=\Orth(n)$
respectively $G=\Orth$ if not stated otherwise.
\end{Not}
\subsection{Stiefel-Whitney Classes}
\label{sec:swclasses}
Now that the general concept is known, this section reviews the
defining and immediate properties of the Stiefel-Whitney
classes, a generating set for the ring $\H^*(\BO)$ of characteristic
classes of vector bundles.
This makes them especially interesting for investigation,
as any property stable under cohomology ring operations only needs to
be checked on the generating set.
Furthermore, they will be invaluable for constructing new
characteristic classes used as obstructions or indicators.
More precisely, their duals, their inverses under certain Steenrod
operations called Wu classes, and a couple of special polynomials
evaluated on them will be used.
First start with the defining properties of the Stiefel-Whitney
classes. Compare \forexample \cite[compare §4, p.~37]{milnor}.
\begin{Def}\label{def:swclasses}
The \emph{Stiefel-Whitney classes} are
characteristic classes for principal $\Orth$\nbd{}bundles
respectively vector bundles,
\idest cohomology classes
$\ws{i}\in \H^i(\BO;\Zmod2)$, $i\in\Nat$,
fulfilling the following properties for any vector bundles $\xi$ and
$\eta$ over a space $B$, and any map $f\colon A\to B$ of spaces:
\begin{axioms}
\axiom[Naturality] $\pb f\w{i}{\xi} = \w{i}{\pb f \xi}$,
\axiom $\w{0}{\xi}=1$,
\axiom $\W{\gamma_1} = 1 + x$,
\axiom[Multiplicativity]\label{tag:swclassesmultiplicativity}
$\W{\xi \oplus \eta} = \W{\xi}\cup \W{\eta}$,
\\\idest in degree $n$ we have
$\w{n}{\xi\oplus\eta} = \sum_{i+j=n}\w{i}{\xi} \cup \w{j}{\eta}$,
\end{axioms}
where the \emph{total Stiefel-Whitney class}
$\Ws\coloneqq\sum_{i\geq 0}\ws{i}$ is the formal sum of all
Stiefel-Whitney classes,
$\gamma_1$ is the tautological line bundle over $\RP 1$,
$\RPinf\cong\BO(1)\to\BO$,
and $x$ is the%
\footnote{
This is well-defined: A ring $R$ of the form $\Zmod2[x]$
with $\deg(x)=1$ only admits two elements in degree~1, $0$ and a
generator. Therefore, there exists exactly one ring
isomorphism, and this sends the unique generator in
degree 1 to $x$.
}
generator of $\H^*(\RPinf;\Zmod2)\cong\Zmod2[x]$.
\end{Def}
Note that naturality is already implied by the requirements for a
characteristic class. However, given only the above axioms:
\begin{Thm}
Stiefel-Whitney classes exist and are uniquely defined by the above
properties. Furthermore, they generate the ring
$\H^*(\BO;\Zmod2)$, which is isomorphic to $\Zmod2[\ws{i}\mid i\geq1]$.
\begin{proof}[Proof]
One possible concrete construction utilizes the Euler
class, another one via Steenrod squares can be
found in Theorem~\ref{thm:altdefswclasses}.
For uniqueness see \cite[Theorem~7.3]{milnor}.
For the generating property see \forexample
\cite[Theorem~7.1~ff.]{milnor}, or \cite[Chap.~7.6]{may}.
\end{proof}
\end{Thm}
As already mentioned, the above generating property means that every
characteristic class of vector bundles of a fixed dimension can be
represented as a certain combination of Stiefel-Whitney classes.
Moreover, they---and hence all characteristic classes---behave
extremely well with respect to vector bundle operations as emphasized
below.
\begin{Rem}
\label{rem:propswclasses}
Let $\xi$, $\eta$ be vector bundles over a space $X$.
\begin{enumerate}
\item\label{item:propswclasses:dimesioncut} $\w{i}{\eta} = 0$
for any vector bundle $\eta$ with $\rk\eta < i$.
Therefore, the total Stiefel-Whitney class $\W{\xi}$ is
well-defined (\idest the sum is finite)
for any vector bundle $\xi$ of finite rank.
\begin{proof}
See \cite[Sec.~19.4]{tomdieck}.
\end{proof}
\item\label{item:swoftrivbdl}
$\w{i}{\trivbdl}=0$ for $i>0$, and one immediately concludes
from multiplicativity:
\begin{enumerate}
\item\label{item:swclassesstable}
The Stiefel-Whitney classes are stable, \idest
$\w{i}{\xi\oplus\trivbdl} = \w{i}{\xi}$, which once more proves
the stability property of characteristic classes of vector bundles.
Thus, for a manifold $M^n$, all normal bundles $\N{\emb}$ of
embeddings $\emb\colon M\to\R^{n+k+r}$ share the same
Stiefel-Whitney classes, written $\W{\N M}$ accordingly.
Note that $\w{i}{\N M} = \pb{\N{M}}\ws{i}$, where $\N M$ denotes the
classifying map of the stable normal bundle.
\item\label{item:wuclassmfdinverse}
If $\xi\oplus\eta = \trivbdl^{\rk\xi+\rk\eta}$, then
$\W{\xi}\cup\W{\eta}=1$.
Especially, for any choice of embedding $\emb\colon M^n\to\R^{n+k}$
with normal bundle $\N{\emb}$ of a smooth manifold $M$ we have
$\T M\oplus\N{\emb} = \trivbdl$, and therefore
$1 = \W{\T M} \cup \W{\N{\emb}} = \W{\T M}\cup\W{\N M}$.
\end{enumerate}
\begin{proof}
The trivial rank $n$ bundle over $X$ is defined as the pullback
$\pb \pi \trivbdl^n$ of the rank $n$ bundle
$\trivbdl^n\colon \R^n\to\pt$ over the point by the trivial map
$\pi\colon X\to\pt$. The naturality of the Stiefel-Whitney
classes gives
\begin{gather*}
\W{\trivbdl^n}
= \pb\pi \W{\trivbdl^n}
\in\pb\pi \left(\H^*(\pt;\Zmod2)\right)
\;,
\end{gather*}
and the result follows from $\H^i(\pt;\Zmod2) = 0$ for $i>0$.
\end{proof}
\end{enumerate}
\end{Rem}
In order to algebraically work with the Stiefel-Whitney classes, the
formal inverse is often handy. Especially, since it is well-known for
manifolds as explained below.
\begin{Def}
Define the \emph{dual Stiefel-Whitney (characteristic) classes}
$\dualws{i}$ in degree $i$ inductively by
\begin{align*}
1 &= \dualws{0}\cup\ws{0} = \dualws{0} &&\text{in degree 0}\\
0 &= \sum_{i+j=n} \dualws{i}\cup\ws{j} &&\text{in degree } n>0
\end{align*}
Denoting the formal sum by $\dualWs\coloneqq \sum_{i\geq0} \dualws{i}$ as above
this can be reformulated as
\begin{gather*}
1 = \Ws\cup\dualWs
\end{gather*}
in the completion of the polynomial ring
$\H^*(\BO)\cong\Zmod2[\ws{i}\mid i\in\Nat]$.
\end{Def}
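For instance, solving these defining relations degree by degree (with all
coefficients in $\Zmod2$) expresses the first dual classes through the
Stiefel-Whitney classes themselves:
\begin{gather*}
  \dualws{1} = \ws{1}\;,\qquad
  \dualws{2} = \ws{1}^2 + \ws{2}\;,\qquad
  \dualws{3} = \ws{1}^3 + \ws{3}\;.
\end{gather*}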
By
Remark~\itemref{rem:propswclasses}{item:swoftrivbdl}\ref{item:wuclassmfdinverse},
a first example of dual Stiefel-Whitney classes is given by the
canonical tangent and normal bundle of a manifold, which makes them
especially handy in the context relevant for the immersion conjecture.
\begin{Def}
For a manifold $M$ use the following abbreviation
\begin{align*}
\W{M} &\coloneqq \W{\T M}
\;,
&\text{and thus}&
&\dualW{M} &\coloneqq \dualW{\T M} = \W{\N M}
\;.
\end{align*}
\end{Def}
\section{Reformulation of the Immersion Conjecture}
\label{sec:reformulation}
The immersion problem can finally be clearly stated with the
definitions from \autoref{sec:immersions}.
The goal of this section is to reformulate the immersion conjecture to
a statement that can be analyzed with means of homotopy theory of
vector bundles, and show how characteristic classes relate to this by
finding a powerful obstruction.
The latter will be followed up in the subsequent chapter.
Before reformulating, recall the actual immersion conjecture.
\begin{Def}
For $n\in\Nat$ consider the unique minimal binary expansion
\begin{gather*}
n=2^{i_1}+\dotsb+2^{i_{l_n}},
\quad\text{with}\quad
i_1<\dotsb<i_{l_n}
\;.
\end{gather*}
Define $\alpha(n)\coloneqq l_n$, \idest $\alpha(n)$ is the number of
ones in the binary notation of $n$.
\end{Def}
\begin{Thm}\label{thm:immersionconj}
For $n\in\Nat$, every closed, smooth, $n$\nbd{}dimensional manifold
immerses into $\R^{2n-\alpha(n)}$.
\end{Thm}
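For instance, $11 = 2^{0} + 2^{1} + 2^{3}$, so $\alpha(11)=3$ and the theorem
predicts that every closed, smooth 11\nbd{}manifold immerses into
$\R^{2\cdot 11-\alpha(11)}=\R^{19}$; for a power of two, $\alpha(2^{r})=1$, so
the predicted target dimension $2\cdot 2^{r}-1$ is as large as possible
relative to $n$.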
In the style of this conjecture, an $n$\nbd{}manifold that immerses into
some $\R^{2n-\alpha(n)}$ will be said to have the
\emph{immersion property}.
And the question, whether a particular manifold does have the
immersion property, will be referred to as the \emph{immersion problem} for
this manifold.
Recall that, by Theorem~\ref{thm:hirschsmale} of Hirsch and Smale,
any vector bundle monomorphism between tangent bundles of smooth
manifolds implies the existence of an immersion.
This is the main ingredient for the subsequent reformulation of the
immersion problem.
\begin{Thm}\label{thm:immersionconj:equivalences}
Let $n,k\in\Nat$ and $M^n$ be a closed, smooth, $n$\nbd{}dimensional manifold.
The following statements are equivalent.
\begin{enumerate}
\item\label{item:immersionconj:1}
$M$ immerses into $\R^{n+k}$.
\item\label{item:immersionconj:2}
There is a vector bundle monomorphism $F\colon\T M\to\T{\R^{n+k}}$.
\item\label{item:immersionconj:3}
There is a $k$\nbd{}dimensional vector bundle
$\N{}\colon\E{\N{}}\to M$ over $M$ with
\begin{gather*}
\N{}\oplus\T M\cong\trivbdl^{n+k}
\;.
\end{gather*}
\item\label{item:immersionconj:4}
For the map $\N M\colon M\to\B\Orth$ classifying the stable
normal bundle over $M$ there is a lift $\N{}\colon M\to\B\Orth(k)$
making the following diagram commute up to homotopy
\begin{center}
\begin{tikzcd}
M
\ar[r, "\N{}"]
\ar[dr, "\N{M}"{left}, bend right]
& \BO(k) \ar[d, "\incl", hookrightarrow] \\
& \BO
\end{tikzcd}
\end{center}
\end{enumerate}
\end{Thm}
% From a homotopy theoretical viewpoint, one is most
% interested in statement \ref{item:immersionconj:4}
% because it allows to apply the huge arsenal of characteristic classes
% of vector bundles, an example of which will be discussed in
% \autoref{chap:massey}.
\begin{proof}[Proof of Theorem~\ref{thm:immersionconj:equivalences}]
The strategy is to show
\ref{item:immersionconj:1}$\Rightarrow$%
\ref{item:immersionconj:4}$\Rightarrow$%
\ref{item:immersionconj:3}$\Rightarrow$%
\ref{item:immersionconj:2}$\Leftrightarrow$%
\ref{item:immersionconj:1}.
\begin{description}
\item[\ref{item:immersionconj:1}$\Rightarrow$\ref{item:immersionconj:4}:]
The classifying map of an immersion's normal bundle
lifts $\N{M}$ as required,
using Steenrod's classification theorem
\itemref{def:charcls}{item:classificationthm} and the properties
of the stable normal bundle from
Lemma~\itemref{lem:classificationvb}{item:charclsstablenormalbundle}.
\item[\ref{item:immersionconj:4}$\Rightarrow$\ref{item:immersionconj:3}:]
Also, by
Lemma~\itemref{lem:classificationvb}{item:charclsstablenormalbundle},
any rank $k$ vector bundle that is represented by a lift of
$\N{M}$ to $[M,\B\Orth(k)]$ as in \ref{item:immersionconj:4},
has the property needed for \ref{item:immersionconj:3}.
\item[\ref{item:immersionconj:3}$\Rightarrow$\ref{item:immersionconj:2}:]
% In order to get from a vector bundle monomorphism
% $F\colon\T M\to\T{\R^{n+k}}$ to a vector bundle dual to the
% tangent bundle in the sense of \ref{item:immersionconj:3}, note that
% $\pb f\T{\R^{n+k}}$ for $f=F|_{\zerosec{\T M}}$ is
% trivial, and that $F\colon \T M\to \pb f\T{\R^{n+k}}$ is not only
%% Wrong according to George??
% fiber-wise, but globally injective of constant rank
% $\dim M$.
% Thus, as in the definition of normal bundles,
% $\N{}\coloneqq\pb f\T{\R^{n+k}}/\T M$ is a vector bundle over $M$
% that obviously fulfills the required property of
% \ref{item:immersionconj:3}.
%
When given some rank $k$ vector bundle $\N{}$
such that $\N{}\oplus\T M\cong\trivbdl^{n+k}$, there is a vector
bundle monomorphism $\T M\to\trivbdl^{n+k}$ over $M$. Then the
following chain of vector bundle morphisms
\begin{center}
\begin{tikzcd}
\T M \ar[d]
\ar[r, hookrightarrow]
& M\times\R^{n+k} \ar[d,"\trivbdl^{n+k}"]
\ar[r]
& \pt\times \R^{n+k} \ar[d,"\trivbdl^{n+k}"]
\ar[r, hookrightarrow]
& \R^{n+k}\times \R^{n+k} \ar[d,"\trivbdl^{n+k}"]
\\
M
\ar[r,equals]
& M
\ar[r]
& \pt
\ar[r, hookrightarrow]
& \R^{n+k}
\end{tikzcd}
\end{center}
is fiber-wise injective in each stage, and hence a monomorphism as
was needed.
\item[\ref{item:immersionconj:1}$\Leftrightarrow$\ref{item:immersionconj:2}:]
The tricky part is to relate \ref{item:immersionconj:1} and
\ref{item:immersionconj:2}, even though it is easily seen that
\ref{item:immersionconj:1} implies \ref{item:immersionconj:2} by
simply taking $F$ to be the differential $\Diff f$ of the
immersion from \ref{item:immersionconj:1}.
The converse direction is an application of the Hirsch-Smale
theorem~\ref{thm:hirschsmale}.
However, first substitute the non-compact manifold $\R^{n+k}$ with
the compact sphere $N=\Sphere{n+k}$, to make $M$ and $N$ comply
with the assumptions of the theorem:
As $\dim M<n+k$ by assumption,
the image of $M$ under any immersion $M\to\Sphere{n+k}$ is a
zero-set by Sard's theorem (see \forexample \cite[Chap.~3,
Theorem~1.3]{hirsch}), and hence every such immersion misses a
point in $\Sphere{n+k}$, thus factoring over an immersion
$M\to\R^{n+k}$.
This then shows that also \ref{item:immersionconj:2} implies
\ref{item:immersionconj:1} which makes them equivalent.
\qedhere
\end{description}
\end{proof}
This now gives rise to involve the powerful obstruction theory of
characteristic classes of vector bundles as follows.
\begin{Cor}\label{cor:obstruction}
Let $n,k\in\Nat$, and $M^n$ be a smooth, closed manifold.
If $M$ immerses into $\R^{n+k}$, then $\dualw{i}{M}=0$ for all
$i>k$.
\begin{proof}
By Theorem~\ref{thm:immersionconj:equivalences} $M$ immerses
into $\R^{n+k}$ if and only if there is a rank\nbd{}$k$
normal bundle $\N{}$ of $M$.
Since the Stiefel-Whitney classes are stable,
$\W{\N{}}=\W{\N M}=\dualW{M}$.
However, as explained in
Remark~\itemref{rem:propswclasses}{item:propswclasses:dimesioncut},
all Stiefel-Whitney classes $\w{i}{\N{}}$ of degree $i$ exceeding
the rank $k$ of $\N{}$ are zero.
\end{proof}
\end{Cor}
As a result, the immersion conjecture requires that all $n$\nbd{}manifolds
have vanishing dual Stiefel-Whitney classes in degrees $i>n-\alpha(n)$.
That this is true is a theorem of Massey, which will be proven in
\autoref{chap:massey}; it was an inspiration for stating the conjecture
with the value $k=n-\alpha(n)$ in the first place.
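A classical family of examples (recalled here only for illustration, compare
\forexample \cite{milnor}) shows that this bound cannot be lowered:
for $M=\RP{2^r}$ one computes
\begin{gather*}
  \W{\RP{2^r}} = (1+x)^{2^r+1} = 1 + x + x^{2^r}
  \;,\qquad
  \dualW{\RP{2^r}} = 1 + x + x^{2} + \dotsb + x^{2^r-1}
\end{gather*}
in $\H^*(\RP{2^r};\Zmod2)$. Hence $\dualw{2^r-1}{\RP{2^r}}\neq0$, and
Corollary~\ref{cor:obstruction} forbids an immersion of $\RP{2^r}$ into
$\R^{2\cdot2^r-2}$. Since $\alpha(2^r)=1$, no dimension below $2n-\alpha(n)$
can work for all $n$\nbd{}manifolds.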
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "thesis"
%%% ispell-local-dictionary: "en_US"
%%% End:
| {
"alphanum_fraction": 0.6930901838,
"avg_line_length": 42.3852657005,
"ext": "tex",
"hexsha": "11a2e79897bbc0c8e7f1dfdbd9fa352a221f5ed5",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4de043b92e7baa1e13c2ca1b6da7c5cf869580f6",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "gesina/master_thesis",
"max_forks_repo_path": "thesis-01.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4de043b92e7baa1e13c2ca1b6da7c5cf869580f6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "gesina/master_thesis",
"max_issues_repo_path": "thesis-01.tex",
"max_line_length": 87,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4de043b92e7baa1e13c2ca1b6da7c5cf869580f6",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "gesina/master_thesis",
"max_stars_repo_path": "thesis-01.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 10788,
"size": 35095
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{amssymb}
\usepackage{graphicx}
\graphicspath{ {./images/} }
\title{Regularization}
\author{Rishit Dagli}
\date{April 2020}
\begin{document}
\maketitle
\section{Cover image}
\vspace{5mm}
\includegraphics[width=\textwidth]{cover.png}
\section{Introduction}
Overfitting is a huge problem, especially in deep neural networks. There are quite a few ways to figure out that you are overfitting the data: maybe you have a high variance problem, or you draw a train and test accuracy plot and see the gap between them. If you suspect your neural network is overfitting your data, one of the first things you should probably try is regularization.
\section{Why Regularization?}
The other way to address high variance is to get more training data, which is quite reliable. If you get more training data, you can think of it as trying to generalize your weights to all situations. That solves the problem most of the time, so why anything else? The huge downside is that you cannot always get more training data: it can be expensive, and sometimes it simply is not accessible.
\vspace{5mm}
It now makes sense to discuss a method which would help us reduce overfitting. Adding regularization will often help prevent overfitting. There is also a hidden benefit: regularization often helps you minimize random errors in your network. Having discussed why the idea of regularization makes sense, let us now understand it.
\section{Understanding $L_2$ Regularization?}
We will start by developing these ideas for logistic regression. Just recall that you are trying to minimize a function $J$ called the cost function, which looks like this-
$$J = \frac{1}{m} \sum_{i=1}^m L(\hat{y^{(i)}}, y^{(i)})$$
Where $w \in \mathbb{R}^{n_x}$ and $b \in \mathbb{R}$
Here $w$ is a vector of dimension $n_x$, $b$ is a real number, and $L$ is the loss function. Just a quick refresher, nothing new.
So, to add regularization for this we will add a term to this cost function equation, we will see more of this-
$$\frac{\lambda}{2m}||w||^2_2$$
So $\lambda$ is another hyperparameter that you might have to tune; it is called the regularization parameter.
By convention, $||w||^2_2$ just means the squared Euclidean $L_2$ norm of $w$. Let us summarise this in an equation so it becomes easy for us; we will just expand the $L_2$ norm term a bit-
$$||w||^2_2 = \sum_{j=1}^{n_x} w_j^2 = w^Tw$$
I have just expressed it in terms of the squared Euclidean norm of the vector $w$. The term we just talked about is called $L_2$ regularization. Don't worry, we will discuss how the $\lambda$ term comes in shortly, but at least you now have a rough idea of how it works. There is actually a reason this method is called ``$L_2$ regularization'': it is called so simply because we are computing the $L_2$ norm of $w$.
Till now, we discussed regularization of the parameter $w$, and you might have asked yourself a question: why only $w$? Why not add a term with $b$? And that is a logical question. It turns out that in practice you could add a $b$ term as well, but we usually just omit it. If you look at the parameters, you will notice that $w$ is usually a pretty high dimensional vector and, in particular, carries most of the high variance problem. Think of it as $w$ simply having a lot of individual parameters, so you aren't fitting all of them well, whereas $b$ is just a single number. Almost all of your parameters are in $w$ rather than in $b$, so even if you add that last $b$ term to your equation, in practice it would not make a great difference.
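To make this concrete, here is a tiny NumPy sketch of the $L_2$-regularized cost for logistic regression. The function and variable names are just mine for illustration, and I assume the usual cross-entropy loss for $L$.
\begin{verbatim}
import numpy as np

def l2_regularized_cost(w, b, X, y, lam):
    """Cross-entropy cost J plus the L2 penalty (lam / 2m) * ||w||_2^2.
    X: (m, n_x) inputs, y: (m,) labels in {0, 1}, w: (n_x,), b: scalar."""
    m = X.shape[0]
    y_hat = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    eps = 1e-12                                  # avoid log(0)
    cross_entropy = -np.mean(y * np.log(y_hat + eps)
                             + (1 - y) * np.log(1 - y_hat + eps))
    l2_penalty = (lam / (2 * m)) * np.dot(w, w)  # (lam / 2m) * w^T w
    return cross_entropy + l2_penalty
\end{verbatim}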
\section{$L_1$ Regularization?}
We just discussed $L_2$ regularization, and you might also have heard of $L_1$ regularization. $L_1$ regularization is when, instead of the term we were talking about earlier, you add the $L_1$ norm of the parameter vector $w$. Let's see this term in a mathematical way-
$$\frac{\lambda}{2m} \sum_{i=1}^{n_x} |w_i| = \frac{\lambda}{2m} ||w||_1$$
\section{Extending the idea to Neural Nets}
We just saw how we would do regularization for logistic regression, and you now have a clear idea of what regularization means and how it works. So, now it would be good to see how these ideas extend to neural nets. Recall our cost function for a neural net; it looks something like this-
$$J(w^{[1]}, b^{[1]}, w^{[2]}, b^{[2]}, \dots, w^{[n]}, b^{[n]}) = \frac{1}{m}\sum^m_{i=1} L(\hat{y^{(i)}}, y^{(i)})$$
Now recall what we added to this earlier: the regularization parameter $\lambda$, a scaling factor, and most importantly the $L_2$ norm. We will do something similar here, except that we sum the norm term over the layers. So, the term we add would look like this-
$$\frac{\lambda}{2m} \sum_{l=1}^L ||w^{[l]}||^2$$
Let us now expand this norm term; it is defined as the sum over $i$ and over $j$ of each element of that matrix, squared-
$$||w^{[l]}||^2 = \sum^{n^{[l]}}_{i=1} \sum^{n^{[l-1]}}_{j=1} (w_{ij}^{[l]})^2$$
where $w^{[l]}$ has shape $(n^{[l]}, n^{[l-1]})$
What the second line here tells you is that your weight matrix $w^{[l]}$ has dimensions $(n^{[l]}, n^{[l-1]})$, which are the numbers of units in layers $l$ and $l-1$ respectively. This norm is also called the ``Frobenius norm'' of a matrix. The Frobenius norm is highly useful and shows up in quite a lot of applications, one of the most exciting of which is recommendation systems. Conventionally it is denoted by a subscript ``F''. You might say it would be easier to just call it the $L_2$ norm, but for conventional reasons we call it the Frobenius norm, and it has a different notation-
\begin{itemize}
\item $||\cdot||^2_2$: $L_2$ norm
\item $||\cdot||^2_F$: Frobenius norm
\end{itemize}
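In code the Frobenius penalty is just the sum of squared entries of every weight matrix. Here is a minimal sketch (names are mine, and the weight matrices are assumed to be NumPy arrays):
\begin{verbatim}
def frobenius_penalty(weight_matrices, lam, m):
    """(lam / 2m) times the sum of squared Frobenius norms of all W[l];
    weight_matrices is a list of NumPy arrays, one per layer."""
    return (lam / (2 * m)) * sum((W ** 2).sum() for W in weight_matrices)
\end{verbatim}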
\section{Implementing Gradient Descent}
Earlier, what we would do is compute $dw$ using back propagation, getting the partial derivative of the cost function $J$ with respect to $w$ for any given layer $l$, and then update $w^{[l]}$ using the learning rate $\alpha$. Now that we have the regularization term in our objective, we simply add a term for the regularization-
$$dw^{[l]} = \mbox{(from backprop.)} + \frac{\lambda}{m} w^{[l]}$$
$$\frac{\partial J}{\partial w^{[l]}} = dw^{[l]}$$
$$w^{[l]} = w^{[l]} - \alpha dw^{[l]}$$
The first step earlier used to be just what we received from back propagation; we have now added a regularization term to it. The other two steps are pretty much the same as in any other neural net. This new $dw^{[l]}$ is still a correct definition of the derivative of your cost function with respect to your parameters, now that you have added the extra regularization term at the end. This is the reason $L_2$ regularization is sometimes also called weight decay. Now, if you take the equation from step 1 and substitute it into the step 3 equation-
$$w^{[l]} = w^{[l]} - \alpha\left(\mbox{(from backprop.)} + \frac{\lambda}{m} w^{[l]}\right)$$
$$w^{[l]} = w^{[l]} - \frac{\alpha \lambda}{m} w^{[l]} - \alpha\,\mbox{(from backprop.)}$$
So, what this shows is whatever your matrix $w^{[l]}$ is you are going to make it a bit smaller. This is actually as if you are taking the matrix w and you are multiplying it by $1 - \frac{\alpha \lambda}{m}$.
So this is why $L_2$ norm regularization is also called weight decay. Because it is just like the ordinary gradient descent, where you update w by subtracting $\alpha$ times the original gradient you got from back propagation. But now you're also multiplying $w$ by this thing, which is a little bit less than 1. So, the alternative name for $L_2$ regularization is weight decay.
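Putting the update rule into code makes the decay factor explicit; again this is just a sketch with made-up names, not a full training loop.
\begin{verbatim}
def update_weights(W, dW_backprop, alpha, lam, m):
    """One gradient-descent step with L2 regularization (weight decay).
    W and dW_backprop are NumPy arrays of the same shape."""
    dW = dW_backprop + (lam / m) * W   # regularized gradient
    # equivalently: (1 - alpha * lam / m) * W - alpha * dW_backprop
    return W - alpha * dW
\end{verbatim}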
\section{Why Regularization reduces overfitting}
When implementing regularization we added a term, the Frobenius norm penalty, which penalizes the weight matrices for being too large. So, the question to think about is: why does shrinking the Frobenius norm reduce overfitting?
\vspace{5mm}
\textbf{Idea 1}
\vspace{5mm}
One idea is that if you crank the regularization parameter $\lambda$ up to be really big, the network will be strongly incentivized to set the weight matrices $w$ close to zero. So one piece of intuition is that maybe it sets the weights so close to zero for a lot of hidden units that it basically zeroes out much of their impact. If that is the case, then the neural network becomes a much smaller, simplified network; in fact, it behaves almost like a logistic regression unit, just stacked many layers deep. That would take you from the overfitting case much closer to the high bias case, and hopefully there is an intermediate value of $\lambda$ that gives an optimal solution. To sum up, you are zeroing or reducing the impact of some hidden units and essentially getting a simpler network.
The intuition of completely zeroing out a bunch of hidden units isn't quite right and does not hold in practice. What actually happens is that all the hidden units are still used, but each of them has a much smaller effect. You still end up with a simpler network, as if you had a smaller network, which is therefore less prone to overfitting.
\vspace{5mm}
\textbf{Idea 2}
\vspace{5mm}
Here is another intuition about regularization and why it reduces overfitting. To understand this idea we take the example of the $\tanh$ activation function, so our $g(z) = \tanh(z)$.
\vspace{5mm}
\includegraphics[width=\textwidth]{tanh.png}
\vspace{5mm}
Here notice that if $z$ takes on only a small range of values, that is $|z|$ is close to zero, then you're just using the linear regime of the $\tanh$ function. Only if $z$ is allowed to wander to larger or smaller values, so that $|z|$ is farther from 0, does the activation function start to become less linear. So the intuition you might take away from this is that if $\lambda$, the regularization parameter, is large, then your parameters will be relatively small, because they are penalized for being large in the cost function.
And since $z = wx+b$, if $w$ tends to be very small, then $z$ will also be relatively small. In particular, if $z$ ends up taking relatively small values, that causes $g(z)$ to be roughly linear. So it is as if every layer is roughly linear, like linear regression, which makes the whole network behave like a linear network. And even a very deep network with a linear activation function can in the end only compute a linear function, so it cannot fit very complicated decision boundaries.
If you have a neural net that is fitting some very complicated decision boundary, it could easily overfit, and this effect definitely helps reduce that overfitting.
\section{A tip to remember}
When you implement gradient descent, one way to debug it is to plot the cost function $J$ as a function of the number of iterations of gradient descent, and you want to see that $J$ decreases monotonically after every iteration. If you are implementing regularization, then remember that $J$ now has a new definition. If you plot the old definition of $J$, you might not see a monotonic decrease. So, to debug gradient descent make sure that you are plotting the new definition of $J$ that includes the second term as well; otherwise you might not see $J$ decrease monotonically on every single iteration.
I have found regularization pretty helpful in my deep learning models; it has helped me solve overfitting quite a few times, and I hope it can help you too.
\section{About Me}
Hi everyone I am Rishit Dagli
LinkedIn - linkedin.com/in/rishit-dagli-440113165/
Website - rishit.tech
If you want to ask me some questions, report any mistake, suggest improvements, give feedback you are free to do so by mailing me at -
[email protected]
[email protected]
\end{document}
\chapter{Rascal Algebraic Data Type}
Source name: \textbf{rascal-a}
\section{Source grammar}
\begin{itemize}\item Source artifact: \href{http://github.com/grammarware/slps/blob/master/topics/fl/rascal/Abstract.rsc}{topics/fl/rascal/Abstract.rsc}\item Grammar extractor: \href{http://github.com/grammarware/slps/blob/master/shared/rascal/src/extract/RascalADT2BGF.rsc}{shared/rascal/src/extract/RascalADT2BGF.rsc}\end{itemize}
\footnotesize\begin{center}\begin{tabular}{|l|}\hline
\multicolumn{1}{|>{\columncolor[gray]{.9}}c|}{\footnotesize \textbf{Production rules}}
\\\hline
$\mathrm{p}(\text{`prg'},\mathit{FLPrg},\mathrm{sel}\left(\text{`fs'},\star \left(\mathit{FLFun}\right)\right))$ \\
$\mathrm{p}(\text{`fun'},\mathit{FLFun},\mathrm{seq}\left(\left[\mathrm{sel}\left(\text{`f'},str\right), \mathrm{sel}\left(\text{`args'},\star \left(str\right)\right), \mathrm{sel}\left(\text{`body'},\mathit{FLExpr}\right)\right]\right))$ \\
$\mathrm{p}(\text{`'},\mathit{FLExpr},\mathrm{choice}([\mathrm{sel}\left(\text{`binary'},\mathrm{seq}\left(\left[\mathrm{sel}\left(\text{`e1'},\mathit{FLExpr}\right), \mathrm{sel}\left(\text{`op'},\mathit{FLOp}\right), \mathrm{sel}\left(\text{`e2'},\mathit{FLExpr}\right)\right]\right)\right),$\\$\qquad\qquad\mathrm{sel}\left(\text{`apply'},\mathrm{seq}\left(\left[\mathrm{sel}\left(\text{`f'},str\right), \mathrm{sel}\left(\text{`vargs'},\star \left(\mathit{FLExpr}\right)\right)\right]\right)\right),$\\$\qquad\qquad\mathrm{sel}\left(\text{`ifThenElse'},\mathrm{seq}\left(\left[\mathrm{sel}\left(\text{`c'},\mathit{FLExpr}\right), \mathrm{sel}\left(\text{`t'},\mathit{FLExpr}\right), \mathrm{sel}\left(\text{`e'},\mathit{FLExpr}\right)\right]\right)\right),$\\$\qquad\qquad\mathrm{sel}\left(\text{`argument'},\mathrm{sel}\left(\text{`a'},str\right)\right),$\\$\qquad\qquad\mathrm{sel}\left(\text{`literal'},\mathrm{sel}\left(\text{`i'},int\right)\right)]))$ \\
$\mathrm{p}(\text{`'},\mathit{FLOp},\mathrm{choice}([\mathrm{sel}\left(\text{`minus'},\varepsilon\right),$\\$\qquad\qquad\mathrm{sel}\left(\text{`plus'},\varepsilon\right),$\\$\qquad\qquad\mathrm{sel}\left(\text{`equal'},\varepsilon\right)]))$ \\
\hline\end{tabular}\end{center}
\section{Normalizations}
{\footnotesize\begin{itemize}
\item \textbf{reroot-reroot} $\left[\right]$ to $\left[\mathit{FLPrg}\right]$
\item \textbf{unlabel-designate}\\$\mathrm{p}\left(\fbox{\text{`prg'}},\mathit{FLPrg},\mathrm{sel}\left(\text{`fs'},\star \left(\mathit{FLFun}\right)\right)\right)$
\item \textbf{unlabel-designate}\\$\mathrm{p}\left(\fbox{\text{`fun'}},\mathit{FLFun},\mathrm{seq}\left(\left[\mathrm{sel}\left(\text{`f'},str\right), \mathrm{sel}\left(\text{`args'},\star \left(str\right)\right), \mathrm{sel}\left(\text{`body'},\mathit{FLExpr}\right)\right]\right)\right)$
\item \textbf{anonymize-deanonymize}\\$\mathrm{p}\left(\text{`'},\mathit{FLOp},\mathrm{choice}\left(\left[\fbox{$\mathrm{sel}\left(\text{`minus'},\varepsilon\right)$}, \fbox{$\mathrm{sel}\left(\text{`plus'},\varepsilon\right)$}, \fbox{$\mathrm{sel}\left(\text{`equal'},\varepsilon\right)$}\right]\right)\right)$
\item \textbf{anonymize-deanonymize}\\$\mathrm{p}\left(\text{`'},\mathit{FLExpr},\mathrm{choice}\left(\left[\fbox{$\mathrm{sel}\left(\text{`binary'},\mathrm{seq}\left(\left[\fbox{$\mathrm{sel}\left(\text{`e1'},\mathit{FLExpr}\right)$}, \fbox{$\mathrm{sel}\left(\text{`op'},\mathit{FLOp}\right)$}, \fbox{$\mathrm{sel}\left(\text{`e2'},\mathit{FLExpr}\right)$}\right]\right)\right)$}, \fbox{$\mathrm{sel}\left(\text{`apply'},\mathrm{seq}\left(\left[\fbox{$\mathrm{sel}\left(\text{`f'},str\right)$}, \fbox{$\mathrm{sel}\left(\text{`vargs'},\star \left(\mathit{FLExpr}\right)\right)$}\right]\right)\right)$}, \fbox{$\mathrm{sel}\left(\text{`ifThenElse'},\mathrm{seq}\left(\left[\fbox{$\mathrm{sel}\left(\text{`c'},\mathit{FLExpr}\right)$}, \fbox{$\mathrm{sel}\left(\text{`t'},\mathit{FLExpr}\right)$}, \fbox{$\mathrm{sel}\left(\text{`e'},\mathit{FLExpr}\right)$}\right]\right)\right)$}, \fbox{$\mathrm{sel}\left(\text{`argument'},\fbox{$\mathrm{sel}\left(\text{`a'},str\right)$}\right)$}, \fbox{$\mathrm{sel}\left(\text{`literal'},\fbox{$\mathrm{sel}\left(\text{`i'},int\right)$}\right)$}\right]\right)\right)$
\item \textbf{anonymize-deanonymize}\\$\mathrm{p}\left(\text{`'},\mathit{FLFun},\mathrm{seq}\left(\left[\fbox{$\mathrm{sel}\left(\text{`f'},str\right)$}, \fbox{$\mathrm{sel}\left(\text{`args'},\star \left(str\right)\right)$}, \fbox{$\mathrm{sel}\left(\text{`body'},\mathit{FLExpr}\right)$}\right]\right)\right)$
\item \textbf{vertical-horizontal} in $\mathit{FLExpr}$
\item \textbf{undefine-define}\\$\mathrm{p}\left(\text{`'},\mathit{FLOp},\varepsilon\right)$
\item \textbf{unlabel-designate}\\$\mathrm{p}\left(\fbox{\text{`fs'}},\mathit{FLPrg},\star \left(\mathit{FLFun}\right)\right)$
\item \textbf{extract-inline} in $\mathit{FLExpr}$\\$\mathrm{p}\left(\text{`'},\mathit{FLExpr_1},\mathrm{seq}\left(\left[\mathit{FLExpr}, \mathit{FLOp}, \mathit{FLExpr}\right]\right)\right)$
\item \textbf{extract-inline} in $\mathit{FLExpr}$\\$\mathrm{p}\left(\text{`'},\mathit{FLExpr_2},\mathrm{seq}\left(\left[str, \star \left(\mathit{FLExpr}\right)\right]\right)\right)$
\item \textbf{extract-inline} in $\mathit{FLExpr}$\\$\mathrm{p}\left(\text{`'},\mathit{FLExpr_3},\mathrm{seq}\left(\left[\mathit{FLExpr}, \mathit{FLExpr}, \mathit{FLExpr}\right]\right)\right)$
\end{itemize}}
\section{Grammar in ANF}
\footnotesize\begin{center}\begin{tabular}{|l|c|}\hline
\multicolumn{1}{|>{\columncolor[gray]{.9}}c|}{\footnotesize \textbf{Production rule}} &
\multicolumn{1}{>{\columncolor[gray]{.9}}c|}{\footnotesize \textbf{Production signature}}
\\\hline
$\mathrm{p}\left(\text{`'},\mathit{FLPrg},\star \left(\mathit{FLFun}\right)\right)$ & $\{ \langle \mathit{FLFun}, {*}\rangle\}$\\
$\mathrm{p}\left(\text{`'},\mathit{FLFun},\mathrm{seq}\left(\left[str, \star \left(str\right), \mathit{FLExpr}\right]\right)\right)$ & $\{ \langle str, 1{*}\rangle, \langle \mathit{FLExpr}, 1\rangle\}$\\
$\mathrm{p}\left(\text{`'},\mathit{FLExpr},\mathit{FLExpr_1}\right)$ & $\{ \langle \mathit{FLExpr_1}, 1\rangle\}$\\
$\mathrm{p}\left(\text{`'},\mathit{FLExpr},\mathit{FLExpr_2}\right)$ & $\{ \langle \mathit{FLExpr_2}, 1\rangle\}$\\
$\mathrm{p}\left(\text{`'},\mathit{FLExpr},\mathit{FLExpr_3}\right)$ & $\{ \langle \mathit{FLExpr_3}, 1\rangle\}$\\
$\mathrm{p}\left(\text{`'},\mathit{FLExpr},str\right)$ & $\{ \langle str, 1\rangle\}$\\
$\mathrm{p}\left(\text{`'},\mathit{FLExpr},int\right)$ & $\{ \langle int, 1\rangle\}$\\
$\mathrm{p}\left(\text{`'},\mathit{FLExpr_1},\mathrm{seq}\left(\left[\mathit{FLExpr}, \mathit{FLOp}, \mathit{FLExpr}\right]\right)\right)$ & $\{ \langle \mathit{FLOp}, 1\rangle, \langle \mathit{FLExpr}, 11\rangle\}$\\
$\mathrm{p}\left(\text{`'},\mathit{FLExpr_2},\mathrm{seq}\left(\left[str, \star \left(\mathit{FLExpr}\right)\right]\right)\right)$ & $\{ \langle str, 1\rangle, \langle \mathit{FLExpr}, {*}\rangle\}$\\
$\mathrm{p}\left(\text{`'},\mathit{FLExpr_3},\mathrm{seq}\left(\left[\mathit{FLExpr}, \mathit{FLExpr}, \mathit{FLExpr}\right]\right)\right)$ & $\{ \langle \mathit{FLExpr}, 111\rangle\}$\\
\hline\end{tabular}\end{center}
\section{Nominal resolution}
Production rules are matched as follows (ANF on the left, master grammar on the right):
\begin{eqnarray*}
\mathrm{p}\left(\text{`'},\mathit{FLPrg},\star \left(\mathit{FLFun}\right)\right) & \Bumpeq & \mathrm{p}\left(\text{`'},\mathit{program},\plus \left(\mathit{function}\right)\right) \\
\mathrm{p}\left(\text{`'},\mathit{FLFun},\mathrm{seq}\left(\left[str, \star \left(str\right), \mathit{FLExpr}\right]\right)\right) & \Bumpeq & \mathrm{p}\left(\text{`'},\mathit{function},\mathrm{seq}\left(\left[str, \plus \left(str\right), \mathit{expression}\right]\right)\right) \\
\mathrm{p}\left(\text{`'},\mathit{FLExpr},\mathit{FLExpr_1}\right) & \bumpeq & \mathrm{p}\left(\text{`'},\mathit{expression},\mathit{binary}\right) \\
\mathrm{p}\left(\text{`'},\mathit{FLExpr},\mathit{FLExpr_2}\right) & \bumpeq & \mathrm{p}\left(\text{`'},\mathit{expression},\mathit{apply}\right) \\
\mathrm{p}\left(\text{`'},\mathit{FLExpr},\mathit{FLExpr_3}\right) & \bumpeq & \mathrm{p}\left(\text{`'},\mathit{expression},\mathit{conditional}\right) \\
\mathrm{p}\left(\text{`'},\mathit{FLExpr},str\right) & \bumpeq & \mathrm{p}\left(\text{`'},\mathit{expression},str\right) \\
\mathrm{p}\left(\text{`'},\mathit{FLExpr},int\right) & \bumpeq & \mathrm{p}\left(\text{`'},\mathit{expression},int\right) \\
\mathrm{p}\left(\text{`'},\mathit{FLExpr_1},\mathrm{seq}\left(\left[\mathit{FLExpr}, \mathit{FLOp}, \mathit{FLExpr}\right]\right)\right) & \bumpeq & \mathrm{p}\left(\text{`'},\mathit{binary},\mathrm{seq}\left(\left[\mathit{expression}, \mathit{operator}, \mathit{expression}\right]\right)\right) \\
\mathrm{p}\left(\text{`'},\mathit{FLExpr_2},\mathrm{seq}\left(\left[str, \star \left(\mathit{FLExpr}\right)\right]\right)\right) & \Bumpeq & \mathrm{p}\left(\text{`'},\mathit{apply},\mathrm{seq}\left(\left[str, \plus \left(\mathit{expression}\right)\right]\right)\right) \\
\mathrm{p}\left(\text{`'},\mathit{FLExpr_3},\mathrm{seq}\left(\left[\mathit{FLExpr}, \mathit{FLExpr}, \mathit{FLExpr}\right]\right)\right) & \bumpeq & \mathrm{p}\left(\text{`'},\mathit{conditional},\mathrm{seq}\left(\left[\mathit{expression}, \mathit{expression}, \mathit{expression}\right]\right)\right) \\
\end{eqnarray*}
This yields the following nominal mapping:
\begin{align*}\mathit{rascal-a} \:\diamond\: \mathit{master} =\:& \{\langle \mathit{FLFun},\mathit{function}\rangle,\\
& \langle \mathit{FLExpr_2},\mathit{apply}\rangle,\\
& \langle \mathit{FLPrg},\mathit{program}\rangle,\\
& \langle \mathit{FLExpr},\mathit{expression}\rangle,\\
& \langle int,int\rangle,\\
& \langle str,str\rangle,\\
& \langle \mathit{FLExpr_3},\mathit{conditional}\rangle,\\
& \langle \mathit{FLOp},\mathit{operator}\rangle,\\
& \langle \mathit{FLExpr_1},\mathit{binary}\rangle\}\end{align*}
This mapping is exercised with the following grammar transformation steps:
{\footnotesize\begin{itemize}
\item \textbf{renameN-renameN} $\mathit{FLFun}$ to $\mathit{function}$
\item \textbf{renameN-renameN} $\mathit{FLExpr_2}$ to $\mathit{apply}$
\item \textbf{renameN-renameN} $\mathit{FLPrg}$ to $\mathit{program}$
\item \textbf{renameN-renameN} $\mathit{FLExpr}$ to $\mathit{expression}$
\item \textbf{renameN-renameN} $\mathit{FLExpr_3}$ to $\mathit{conditional}$
\item \textbf{renameN-renameN} $\mathit{FLOp}$ to $\mathit{operator}$
\item \textbf{renameN-renameN} $\mathit{FLExpr_1}$ to $\mathit{binary}$
\end{itemize}}
\section{Structural resolution}
{\footnotesize\begin{itemize}
\item \textbf{narrow-widen} in $\mathit{program}$\\$\star \left(\mathit{function}\right)$\\$\plus \left(\mathit{function}\right)$
\item \textbf{narrow-widen} in $\mathit{function}$\\$\star \left(str\right)$\\$\plus \left(str\right)$
\item \textbf{narrow-widen} in $\mathit{apply}$\\$\star \left(\mathit{expression}\right)$\\$\plus \left(\mathit{expression}\right)$
\end{itemize}}
\documentclass[%
reprint,
%superscriptaddress,
%groupedaddress,
%unsortedaddress,
%runinaddress,
%frontmatterverbose,
%preprint,
%preprintnumbers,
nofootinbib,
%nobibnotes,
%bibnotes,
amsmath,amssymb,
aps,
%pra,
%prb,
%rmp,
%prstab,
%prstper,
floatfix,
]{revtex4-2}
\usepackage{gensymb}
\usepackage{textcomp}
\usepackage{graphicx}% Include figure files
\usepackage{dcolumn}% Align table columns on decimal point
\usepackage{bm}% bold math
\usepackage{siunitx}
\DeclareSIUnit\gauss{G}
\DeclareSIUnit\erg{erg}
\DeclareMathOperator{\Rot}{rot}
\sisetup{separate-uncertainty=true}
\usepackage{tabularx}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{relsize}
\usepackage{commath}
\usepackage{enumitem}
\usepackage{float}
\usepackage{booktabs}
\usepackage{makecell}
\usepackage[version=4]{mhchem}
\usepackage[colorlinks,bookmarks=false,citecolor=blue,linkcolor=blue,urlcolor=blue]{hyperref}
%\usepackage{hyperref}% add hypertext capabilities
%\usepackage[mathlines]{lineno}% Enable numbering of text and display math
%\linenumbers\relax % Commence numbering lines
%\usepackage[showframe,%Uncomment any one of the following lines to test
%%scale=0.7, marginratio={1:1, 2:3}, ignoreall,% default settings
%%text={7in,10in},centering,
%%margin=1.5in,
%%total={6.5in,8.75in}, top=1.2in, left=0.9in, includefoot,
%%height=10in,a5paper,hmargin={3cm,0.8in},
%]{geometry}
\begin{document}
\preprint{APS/123-QED}
\title{Electron Spin Resonance}% Force line breaks with \\
\author{Maitrey Sharma}
\email{[email protected]}
\affiliation{School of Physical Sciences, National Institute of Science Education and Research, HBNI, Jatni-752050, India}
\date{\today}% It is always \today, today,
% but any date may be explicitly specified
\begin{abstract}
In this experiment, we study the phenomenon of electron spin resonance, which is often used as a spectroscopic technique to identify paramagnetic materials. We discuss Larmor precession and the $g$-factor, especially the Land\'e $g$-factor, and justify its appearance in our discussion. We review the classical and quantum mechanical pictures of the magnetic moment of the electron in paramagnetic materials. We explore the ESR phenomenon in macroscopic systems such as solids. We formulate the theory for the experiment and the workings of the ESR spectrometer. We then determine the Land\'e $g$-factor for the electron and discuss the results so obtained, such as the appearance of four peaks on the oscilloscope and how the obtained Lissajous figure provides insight into the effects of an external magnetic field on a paramagnetic substance kept under its influence. In the process, we establish various useful results related to magnetism and the behaviour of electrons in magnetic fields.
\end{abstract}
\keywords{Zeeman effect, energy quantum, quantum number, resonance, $g$-factor, Land\'e factor}
\maketitle
%\tableofcontents
\section{\label{sec:level1}Introduction}
Spectroscopy, primarily in the electromagnetic spectrum, has played the role of a fundamental exploratory tool in the fields of physics, chemistry, and astronomy, allowing the composition, physical structure and electronic structure of matter to be investigated at the atomic, molecular and macro scale, and over astronomical distances since its inception. Especially in chemistry, it has been of immense importance as the combination of atoms into molecules leads to the creation of unique types of energetic states and therefore unique spectra of the transitions between these states. Molecular spectra can be obtained due to electron spin states (electron spin resonance or electron paramagnetic resonance), molecular rotations, molecular vibration, and electronic states.
\par
\textbf{Electron paramagnetic resonance} (EPR) or \textbf{electron spin resonance} (ESR) spectroscopy is a method for studying materials with unpaired electrons. In general, resonance implies the enhancement of the amplitude when the frequency of the externally applied force matches with the natural frequency of the system. The basic concepts of ESR are analogous to those of nuclear magnetic resonance (NMR), but the spins excited are those of the electrons instead of the atomic nuclei.
\par
Soviet physicist Yevgeny Zavoisky started his work on ESR in 1943, after his work on NMR failed to materialise and was interrupted by the war, as ESR was less demanding. In 1944, ESR signals were detected in several salts, including hydrous copper chloride (\ce{CuCl2.2H2O}), copper sulfate and manganese sulfate. The results were revolutionary and were at first not accepted even by Soviet scientists (including Pyotr Kapitsa, known for his work on superfluidity). The doubts were dispersed when Zavoisky visited Moscow, assembled an ESR spectrometer from scratch and reproduced his results there. In 1945, Zavoisky defended his habilitation on the phenomenon of electron spin resonance. ESR was also developed independently around the same time by Brebis Bleaney at the University of Oxford.
\begin{figure*}
\centering
\includegraphics[scale = 0.18]{Figures/qloFRlMIUU-compress.jpg}
\caption{Panoramic view of the reconstruction of Zavoisky's experimental setup. The signal is read out on the oscilloscope. The setup is preserved at the Evgeny Zavoisky museum at Kazan Federal University \footnote{Courtesy of the \href{http://chiralqubit.eu/a-visit-to-the-Zavoisky-museum}{CHIRALQUBIT Project}.}.}
\label{fig:zavoisky}
\end{figure*}
\par
Applications of electron spin resonance in solid state physics are of great importance. It is a very sensitive technique and has been applied in many fields. Chief among these are paramagnetic ions in crystals, unpaired electrons in semiconductors and organic free radicals, colour centres and radiation damage centres, and ferro- and antiferromagnetic materials.
\section{Theory}
It is well known that an electron carries a spin angular momentum, which gives it a magnetic property known as a magnetic moment. Many materials contain unpaired electrons. When an external magnetic field is applied, each unpaired electron can orient either parallel or anti-parallel to the direction of the field. This creates two distinct energy levels for the unpaired electrons, and measurements are taken as they are driven between the two levels. The theoretical treatment of this experiment is divided into subsections, as we build up the physics needed to understand and perform it.
\subsection{Larmor Precession}
Before going into the details of magnetic resonance, we need to understand what is \textit{precession}. Precession is a change in the orientation of the rotational axis of a rotating body. More specifically, the precession of the magnetic moment of an object about an external magnetic field is referred to as \textit{Larmor precession}.
\par
When a magnetic moment $\Vec{\mu}$ is placed in a magnetic field $\Vec{B}$, it experiences a torque which can be expressed as
\begin{equation}
\Vec{\tau} = \Vec{\mu} \times \Vec{B}
\end{equation}
\begin{figure}
\centering
\includegraphics[scale = 0.8]{Figures/larmor.png}
\caption{Larmor precession}
\label{fig:larmor}
\end{figure}
From figure (\ref{fig:larmor}), we can write
\begin{equation}
\tau = \dfrac{\Delta L}{\Delta t} = \dfrac{L \sin \theta \Delta \phi}{\Delta t} = \abs{\mu B \sin \theta} = \dfrac{e}{2 m_e} LB \sin \theta
\end{equation}
The precession angular velocity called as Larmor frequency is
\begin{equation}
\omega_{Larmor} = \dfrac{d \phi}{d t} = \dfrac{e}{2 m_e} B
\end{equation}
In general, we can write
\begin{equation}
\omega = \gamma B = \dfrac{e g_e}{2 m_e} B
\end{equation}
where $\gamma$ is the gyromagnetic ratio (the ratio of the magnetic moment to the angular momentum) and $g_e$ is the electron's Land\'e $g$-factor (a dimensionless quantity defined for an electron with both spin and orbital angular momenta).
\subsection{Elementary Magnetic Resonance}
Suppose a particle having a magnetic moment $\Vec{\mu}$ is placed in a uniform magnetic field of intensity $\Vec{H_0}$ (figure (\ref{fig:precession})). Then the moment $\Vec{\mu}$ will precess around $\Vec{H_0}$ with an angular Larmor frequency
\begin{equation}
\omega_0 = g_e \Bigg( \dfrac{e}{2 m c} \Bigg) H_0
\end{equation}
\begin{figure*}
\centering
\includegraphics[scale = 0.9]{Figures/precession.png}
\caption{Precession of a magnetic moment $\Vec{\mu}$ when placed in a magnetic field $\Vec{H_0}$. \\ (a) The spin precesses with angular frequency $\omega_0 = \gamma H_0$; the angle $\theta$ is a constant of the motion. \\ (b) In addition to $\Vec{H_0}$ a weak magnetic field $\Vec{H_1}$ is now also applied. $\Vec{H_1}$ is rotating about the $z$-axis with angular frequency $\omega_0$ and therefore $\Vec{\mu}$ precesses about $\Vec{H_1}$ with angular frequency $\omega_1 = \gamma H_1$; $\theta$ is not conserved anymore.}
\label{fig:precession}
\end{figure*}
Note that the Larmor frequency will change with the change in field strength. As seen in figure (\ref{fig:precession} (b)), an additional weak magnetic field is introduced, oriented in the $xy$-plane and rotating about the $z$-axis (in the same direction as the Larmor precession) with an angular frequency $\omega_1$. If the frequency $\omega_1$ is different from $\omega_0$, the angle between the field $\Vec{H_1}$ and the moment $\Vec{\mu}$ will continuously change, so that their interaction averages out to zero. If, however, $\omega_1 = \omega_0$, the angle between $\Vec{H_1}$ and $\Vec{\mu}$ is maintained and the net interaction is effective. This corresponds to a change in the potential energy of the particle in the magnetic field, and is the classical analogue of a transition between sub-levels with different $m$.
\par
Suppose that the intrinsic angular momentum of the electron $\Vec{S}$ couples with the orbital angular momentum $\Vec{L}$ to give a resultant $\Vec{J}$. We know that there are $2J+1$ magnetic sub-levels, labelled by the magnetic quantum number $m$ and split by the magnetic field $\Vec{H_0}$ with an equal energy difference $\Delta E = g_e \mu_B H_0$ between adjacent sub-levels, where $\mu_B$ is the Bohr magneton. The quantum mechanical value of $g_e$ is given as
\begin{equation}
\label{eqlande}
g_e = 1 + \dfrac{J (J+1) + S (S+1) - L (L+1)}{2 J (J+1)}
\end{equation}
Now, if the particle is subjected to a perturbation by an alternating magnetic field with a frequency $\nu_1$ such that the quantum $h \nu_1$ is exactly the same as the difference between the levels, $\Delta E$ and if the direction of the alternating field is perpendicular to the direction of the static magnetic field, then there will be induced transitions between neighbouring sub-levels according to the selection rules $\Delta m = \pm 1$ for magnetic dipolar radiation. Therefore, the condition for resonance is
\begin{equation}
\label{eqDelE}
\Delta E = g_e \mu_B H_0 = h \nu_0 = h \nu_1
\end{equation}
where $\nu_1$ is the resonance frequency in cycles/sec. This requirement is identical with the classical condition $\omega_1 = \omega_0$.
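As a rough numerical estimate (taking $g_e \approx 2$, as for a nearly free electron), the resonance condition can be expressed as a frequency-to-field ratio:
\begin{equation*}
\dfrac{\nu_0}{H_0} = \dfrac{g_e \mu_B}{h} \approx \dfrac{2 \times \SI{9.27e-21}{\erg \per \gauss}}{\SI{6.63e-27}{\erg \second}} \approx \SI{2.8}{\mega \hertz \per \gauss},
\end{equation*}
so a radio frequency of the order of $\SI{13}{\mega \hertz}$, as used later in this experiment, corresponds to a resonance field of only a few gauss.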
\par
In atomic spectroscopy, we do not observe the transitions between sub-levels with different $m$ directly (labelled $a_1$, $a_2$, $a_3$, with selection rules $\Delta L = \pm 1$). Instead, the splitting of a level is observed through the small change in frequency of the radiation emitted in the transition between widely separated levels (figure (\ref{fig:energylevels})). It is clear that, if we could directly measure the frequency corresponding to a transition between the sub-levels of the same state, a much more precise knowledge of the energy splitting would be obtained.
\begin{figure*}
\centering
\includegraphics[scale = 1]{Figures/energylevels.png}
\caption{Energy levels of a single valence electron atom showing a $P$ state and an $S$ state. Due to the fine structure, the $P$ state is split into a doublet with $j=3/2$ and $j = 1/2$. Further, under the influence of an external magnetic field each of the three levels is split into sub-levels as shown in the figure, where account has been taken of the magnetic moment of the electron. The magnetic quantum number $m_j$ for each sub-level is also shown, as is the $g$-factor for each level. The arrows indicate the allowed transitions between the initial and final states, and the structure of the line is shown in the lower part of the figure.}
\label{fig:energylevels}
\end{figure*}
\par
In this experiment we will apply the external magnetic field $\Vec{H_0}$ using a Helmholtz coil and the small magnetic field $\Vec{H_1}$ using a microwave source.
\subsection{Origin of an ESR signal}
Every electron has a magnetic moment and spin quantum number, $s = \dfrac{1}{2}$, with magnetic components $m_s = + \dfrac{1}{2}$ or $m_s = - \dfrac{1}{2}$. In the presence of an external magnetic field with strength $B_0$ , the electron's magnetic moment aligns itself either anti-parallel ($m_s = - \dfrac{1}{2}$) or parallel ($m_s = + \dfrac{1}{2}$) to the field, each alignment having a specific energy due to the Zeeman effect\footnote{The effect of splitting of a spectral line into several components in the presence of a static magnetic field. It is named after the Dutch physicist Pieter Zeeman, who discovered it in 1896. It is analogous to the Stark effect, the splitting of a spectral line into several components in the presence of an electric field.}:
\begin{equation}
E = m_s g_e \mu_B B_0
\end{equation}
where $g_e = 2.0023$ for the free electron. Therefore, the separation between the lower and the upper state is $\Delta E = g_e \mu_B B_0$ for unpaired free electrons. This equation implies (since both $g_e$ and $\mu_B$ are constant) that the splitting of the energy levels is directly proportional to the magnetic field's strength (see figure (\ref{fig:ESRsplit})).
\begin{figure}
\centering
\includegraphics[scale = 0.4]{Figures/EPR_splitting.svg.png}
\caption{The splitting of the energy levels is directly proportional to the magnetic field's strength}
\label{fig:ESRsplit}
\end{figure}
An unpaired electron can change its electron spin by either absorbing or emitting a photon of energy $h \nu$ such that the resonance condition, $h \nu = \Delta E$ is obeyed. This leads to the fundamental equation of ESR spectroscopy:
\begin{equation}
h \nu = g_e \mu_B B_0
\end{equation}
By increasing an external magnetic field, the gap between the $m_s = + \dfrac{1}{2}$ and $m_s = - \dfrac{1}{2}$ energy states is widened until it matches the energy of the microwaves, as represented by the double arrow in the diagram above. At this point the unpaired electrons can move between their two spin states. Since there typically are more electrons in the lower state, due to the Maxwell–Boltzmann distribution, there is a net absorption of energy, and it is this absorption that is monitored and converted into a spectrum. The upper spectrum below is the simulated absorption for a system of free electrons in a varying magnetic field. The lower spectrum is the first derivative of the absorption spectrum. The latter is the most common way to record and publish continuous wave ESR spectra.
\begin{figure}
\centering
\includegraphics[scale = 0.1]{Figures/EPR_lines.png}
\caption{The shape of ESR signal against the applied magnetic field strength for microwave frequency $\SI{9388.2}{\mega \hertz}$}
\label{fig:ESRline}
\end{figure}
For example, for the microwave frequency of $\SI{9388.2}{\mega \hertz}$, the predicted resonance occurs at a magnetic field of about $B_0 = h \nu / g_e \mu_B = \SI{0.3350}{\tesla} = \SI{3350}{\gauss}$ (see figure (\ref{fig:ESRline})).
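This number can be verified with a short calculation (a minimal sketch; the constants are standard values and $g_e = 2.0023$ as quoted above):
\begin{verbatim}
# Resonance field B0 = h*nu / (g_e * mu_B) for nu = 9388.2 MHz
h = 6.626e-34        # Planck constant, J s
mu_B = 9.274e-24     # Bohr magneton, J/T
g_e = 2.0023         # free-electron g-factor
nu = 9388.2e6        # microwave frequency, Hz

B0 = h * nu / (g_e * mu_B)
print(B0)            # ~0.335 T, i.e. ~3350 gauss
\end{verbatim}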
\par
The ESR spectrum is usually directly measured as the first derivative of the absorption. This is accomplished by using field modulation. A small additional oscillating magnetic field is applied to the external magnetic field at a typical frequency of $\SI{100}{\kilo \hertz}$ (see figure (\ref{fig:ESRfm})).
\begin{figure}
\centering
\includegraphics[scale = 0.4]{Figures/EPR_Field_Modulation.svg.png}
\caption{The field oscillates between $B_1$ and $B_2$ due to the superimposed modulation field at $\SI{100}{\kilo \hertz}$. This causes the absorption intensity to oscillate between $I_1$ and $I_2$. The larger the difference the larger the intensity detected by the detector tuned to $\SI{100}{\kilo \hertz}$ (note this can be negative or even 0). As the difference between the two intensities is detected the first derivative of the absorption is detected.}
\label{fig:ESRfm}
\end{figure}
\subsection{ESR in Macroscopic systems}
Now that we have understood magnetic resonance at the quantum mechanical level, we can now advance to macroscopic objects like solids. The behaviour of a paramagnetic substance in a magnetic field will depend on the interaction of the particles with one another and with the diamagnetic particles. There are mainly two types of interactions: \textbf{\textit{spin-spin}} (where the spin interacts with a neighbouring spin but the total energy of the spin system remains constant) and \textbf{\textit{spin-lattice}} (where the electron spin interact with entire solid or liquid, transforming energy from the spin system to the lattice which act as a thermal reservoir). As a matter of fact it is the spin-lattice interaction that makes possible the observation of energy absorption from the radio-frequency field when the resonance frequency is reached.
\par
To understand this last statement, consider a paramagnetic substance in a magnetic field $\Vec{H_0}$ and suppose the equilibrium state has been reached. The populations of the individual energy levels will be determined by the Boltzmann distribution $e^{-g_e \mu_B H_0 m/ kT}$, where $m$ is the magnetic quantum number. It can be seen that the populations of the lower energy levels are greater than those of the upper levels, and therefore, when a periodic magnetic field at the resonance frequency is switched on, the number of induced absorption events will exceed the number of induced emission events; as a result the substance will absorb energy from the radio-frequency field. Thus, two opposing processes take place in ESR: the radio-frequency field tends to equalise the populations of the various levels, while the spin-lattice interaction tends to restore the Boltzmann distribution by converting the energy absorbed from the radio-frequency field into heat.
\par
The mechanism through which the electrons return from an excited state to the ground state, or relax back to the ground state, is known as \textit{relaxation} in the field of magnetic resonance, and the time taken by the process is called the \textit{relaxation time}. This complete process may be considered as a two-stage process (provided the spin-spin interactions are much stronger than the spin-lattice interaction). First, the energy is absorbed from the radio-frequency magnetic field and equilibrium is established inside the `spin system'. The time taken by this process is known as the \textit{spin-spin relaxation time} and is a measure of the rate at which magnetic energy can be distributed within the spin system while the total energy is conserved. Secondly, an exchange of energy occurs between the spin system and the lattice. The time taken is known as the \textit{spin-lattice relaxation time} and is a measure of the rate of transfer of energy from the spin system to the lattice.
\par
In optical spectroscopy the relaxation time is usually very short ($\sim\SI{e-8}{\second}$), so that it does not impede the absorption rate. At radio frequencies, on the other hand, typical relaxation times are milliseconds or longer, and the spins do not have time to relax if the energy is supplied at a faster rate. This situation is called \textbf{saturation}. In other words, no additional energy is absorbed if the radio-frequency field power is increased beyond a certain level.
\par
The effect of the spin-spin interaction is to slightly shift the exact position of the energy level of any individual spin in the external field. This energy shift clearly depends on the relative orientation and distance of the spins and is thus different for each spin, resulting in an apparent broadening of the energy level. Another way of thinking of the spin-spin interaction is that one electron spin produces a local magnetic field at the position of another spin. Thus, the width of the absorption line due to spin-spin interaction may be estimated as $\dfrac{1}{T'}$, where $T'$ is the spin-spin relaxation time.
\par
If the spin-lattice interactions are not weak, the spin-lattice relaxation time $T$ must also be introduced. Let us consider the probability of a transition of an individual paramagnetic particle from one magnetic level to another under the influence of thermal motion. If this probability per second is equal to $A$, then $T \sim \dfrac{1}{A}$ and the corresponding contribution to the absorption line width would be of the order of $\dfrac{1}{T}$. In the general case, the absorption line width may therefore be estimated as $\dfrac{1}{T} + \dfrac{1}{T'}$.
\par
Thus, we see that from the width of the absorption line it is possible to measure the relaxation time. In fact, most of the research in this field involves the study of relaxation phenomena, which in turn provide information about internal interactions in solids and liquids. The position and number of paramagnetic resonance absorption lines also depend on these internal interactions.
\section{Formulation}
Now that we have gone through the theory, we can formulate the physics which will aid us in calculating the different parameters in the experiment.
\par
If an electron of mass $m$ and charge $e$ is located in an electromagnetic field with vector potential $\Vec{A}$ and scalar potential $\phi$, then its steady-state energy level and characteristic states are given by the solutions of the Dirac equation.
\par
If the equation is solved for the component $\psi_1$ of the state spinor\footnote{an element of a complex vector space that can be associated with Euclidean space}, it reads:
\begin{equation}
\label{eqpi2}
\Big( \Vec{\pi}^{2} + 2 \mu_B \Vec{S} \cdot \Vec{B} \Big) \psi_1 = \epsilon \psi_1
\end{equation}
where the Bohr magneton $\mu_B = \dfrac{e \hbar}{2 m_e} = \SI{9.27e-24}{\ampere \metre \squared}$ and $\Vec{\pi} = \Vec{p} + \dfrac{e}{c} \Vec{A}$, $\Vec{p}$ being the momentum, $c$ the velocity of light, $h = \SI{6.626e-34}{\joule \second}$ Planck's constant, $\Vec{S}$ the electron spin operator and $\Vec{B} = \Rot \Vec{A}$. Finally, $\epsilon = \dfrac{1}{2} \Big( 1 + E/(mc^2) \Big) \Big( E - mc^2\Big)$ is approximately the excess of energy over the rest mass energy.
\par
For a free electron in a uniform magnetic field, the second term (the Zeeman term) commutes with the first in the Hamiltonian operator of (\ref{eqpi2}), and the energy level
\begin{equation}
\epsilon = \epsilon_0 + \mu_B 2 S_Z B_Z
\end{equation}
is obtained if the $z$-axis lies in the direction of the magnetic field. $\epsilon_0$ is the energy of the electron without a magnetic field. If the diamagnetic contribution from the first term of (\ref{eqpi2}) is disregarded in the general case, then for the electron in a uniform magnetic field, the Hamiltonian operator for the interaction with the magnetic field (Zeeman effect) is
\begin{equation}
H_Z = \mu_B \Vec{B} \cdot (\Vec{L} + 2 \Vec{S})
\end{equation}
where $\Vec{L}$ and $\Vec{S}$ are the operators of the orbital and spin angular momentum respectively. In addition, the spin-orbit interaction
\begin{equation}
H_{so} = \lambda \, \Vec{S} \cdot \Vec{L}
\end{equation}
should be taken into account, so that only the total angular momentum
\begin{equation}
\Vec{J} = \Vec{L} + \Vec{S}
\end{equation}
is a conserved value.
\par
In this case, $H_Z$ can be written as
\begin{equation}
\label{eqHz}
H_Z = \mu_B g_e \Vec{B} \cdot \Vec{J}
\end{equation}
where $g_e$ is the $g$-factor given by (\ref{eqlande}). Disregarding the influence of the nuclear spin, the energy levels of $H_Z$, for magnetic fields in the $z$-direction, are
\begin{equation}
E_Z = \mu_B g_e B m_j
\end{equation}
with $m_j = j, j-1, \ldots -j$. The selection rule for magnetic transitions of the electron spin resonance experiment is
\begin{equation}
\Delta m_j = \pm 1
\end{equation}
so that the absorption condition reads
\begin{equation}
\mu_B g_e B = \Delta E = h \nu
\end{equation}
In many cases, especially with molecular radicals, the orbital angular momentum $L$ of the unpaired electron is quenched by the electric fields of the neighbouring atoms and molecules. In the case of DPPH (the paramagnetic sample used in this experiment), $L = 0$ and therefore
\begin{equation}
g_e = 2
\end{equation}
In (\ref{eqHz})
\begin{equation}
\Vec{\mu} = \mu_B \cdot g_e \cdot \Vec{J}
\end{equation}
is the magnetic moment of an electron with the angular momentum $\Vec{J}$ in units of the Bohr magneton $\mu_B$.
\par
If account is also taken of the exchange of virtual photons between the electron and a radiation field in the static limit, through inclusion of the vertex corrections to increasing order, a modified (anomalous) magnetic moment of the electron is obtained as a series in the fine-structure constant $\alpha$:
\begin{equation}
\label{eqalpha}
\begin{split}
\alpha
&= \dfrac{e^2}{hc} \\
\Vec{\mu}
&= \mu_B \Big( 1 + \dfrac{\alpha}{2 \pi} - 0.328 \dfrac{\alpha^2}{\pi^2} + \ldots \Big) g_e \Vec{J}
\end{split}
\end{equation}
The correction factor in the parentheses of equation (\ref{eqalpha}) is generally taken into account in $g_e$, so that then $g_e \neq 2$.
\par
By supplying the corresponding energy, transitions can be induced between the levels:
\begin{equation}
\label{eqge}
h \nu = g_e \mu_B H
\end{equation}
The probability of transition depends on the occupation number and on the transition matrix elements. The latter are the same for absorption and emission processes.
\par
Because of the interactions of the spins with the lattice or with one another, the levels are not sharply defined, and this leads to a line width of the absorption spectrum and prevents an equipartition of the levels (saturation) because of the corresponding relaxation processes. The occupation numbers are given in accordance with the Boltzmann relation by:
\begin{equation}
\dfrac{N_2}{N_1} = e^{- \frac{\Delta E}{kT}} = e^{- \frac{g_e \mu_B B}{kT}}
\end{equation}
where $k$ is the Boltzmann constant.
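To get a feeling for the size of this population difference, a rough estimate (assuming a resonance field of a few gauss, as in this experiment, and room temperature) gives
\begin{equation*}
\dfrac{N_2}{N_1} \approx \exp\left(-\dfrac{2 \times \SI{9.27e-21}{\erg \per \gauss} \times \SI{5}{\gauss}}{\SI{1.38e-16}{\erg \per \kelvin} \times \SI{300}{\kelvin}}\right) \approx 1 - 2\times10^{-6},
\end{equation*}
so only about two spins in a million contribute to the net absorption, which is why sensitive detection electronics are essential.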
\begin{figure*}
\centering
\includegraphics[scale = 0.9]{Figures/esrmodelexpt.png}
\caption{ESR model experiment}
\label{fig:my_label}
\end{figure*}
\section{\label{sec:setup}Description of the Setup}
\begin{figure}
\centering
\includegraphics[scale = 0.75]{Figures/blockdgESR.png}
\caption{ Block Diagram of the ESR Set}
\label{fig:block}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale = 1]{Figures/paneldgESR.png}
\caption{Panel Diagram of Electron Spin Resonance Spectrometer, ESR-105}
\label{fig:panel}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale = 0.4]{Figures/setupcharactcurves.png}
\caption{Experimental set-up for determining characteristic curves.}
\label{fig:characurves}
\end{figure}
A block diagram of the ESR Spectrometer is given in figure (\ref{fig:block}) and panel diagram is given in the figure (\ref{fig:panel}). The experimental set-up for determining the characteristic curves is given in figure (\ref{fig:characurves}). The various components of the ESR spectrometer are detailed as follows:
\begin{enumerate}
\item \textbf{Basic circuit}: The first stage of the ESR circuit consists of a critically adjusted (marginal) radio frequency oscillator having a frequency range of approximately $\SI{12}{\mega \hertz}$ to $\SI{16}{\mega \hertz}$. A marginal oscillator is required here so that the slightest increase in its load decreases the amplitude of oscillation to an appreciable extent. The sample is kept inside the tank coil of this oscillator, which in turn is placed in the $\SI{50}{\hertz}$ magnetic field generated by the Helmholtz coils. At resonance, i.e. when the frequency of oscillation equals the Larmor frequency of the sample, the oscillator amplitude registers a dip due to the absorption of power by the sample. This occurs periodically, four times in each complete cycle of the Helmholtz coil supply voltage. The result is an amplitude modulated carrier (figures (\ref{fig:4AatA}) and (\ref{fig:4AatB})), which is then detected using a diode detector and amplified by a chain of three low noise, high gain audio-frequency amplifiers of excellent stability. A sensitivity control is provided in the amplifier to suit the input requirement of any oscilloscope.
\begin{figure}
\centering
\includegraphics[scale = 0.5]{Figures/fig4AatA.png}
\caption{The amplitude modulated carrier at $A$ from the circuit diagram in figure (\ref{fig:4B1})}
\label{fig:4AatA}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale = 0.5]{Figures/fig4AatB.png}
\caption{The amplitude modulated carrier at $B$ from the circuit diagram in figure (\ref{fig:4B1})}
\label{fig:4AatB}
\end{figure}
\item \textbf{Phase shifter}: In order to make it possible to use an ordinary display-type oscilloscope, instead of a measuring oscilloscope which preserves the phase between the X and Y plate signals, a phase shifter is provided. This can compensate for the phase difference introduced in the amplification stage of an ordinary oscilloscope.
\par
The circuit diagram of the phase shifter is shown in figure (\ref{fig:4B1}). The primary of the transformer is fed from the $\SI{220}{\volt}$, $\SI{50}{\hertz}$ (or $\SI{110}{\volt}$, $\SI{60}{\hertz}$) mains and the secondary is centre tapped, developing $V_1 - 0 - V_1$ (say). The operation of the circuit may be explained with the help of the vector diagram shown in figure (\ref{fig:4B2}). The vectors $OA$ and $BO$ represent the voltage developed in the secondary, in phase and magnitude. The current flowing in the circuit $ADB$ leads the voltage vector $BA$ due to the presence of the capacitor $C$; this current is shown in the diagram as $I$. The voltage developed across the resistance $R$, i.e. $V_R$, is in phase with the current $I$, and the voltage across the capacitor, $V_C$, is $90 \degree$ (lag) out of phase with the current. The vector sum of $V_C$ and $V_R$ is equal to $2 V_1$. These are also plotted in the diagram. It is clear from the diagram that as $R$ is varied, $V_R$ will change and the point $D$ will trace a semicircle, shown dotted. The vector $OD$, or the voltage across points $O$ and $D$, will therefore have a constant magnitude equal to $V_1$ and a phase variable from $0 \degree$ to $180 \degree$. This is the voltage which is fed to the X-amplifier of the oscilloscope to correct for any phase change which might have taken place in the rest of the circuit.
\begin{figure}
\centering
\includegraphics{Figures/fig4B1.png}
\caption{Circuit diagram of the phase shifter circuit}
\label{fig:4B1}
\end{figure}
\begin{figure}
\centering
\includegraphics{Figures/fig4B2.png}
\caption{Vector diagram of the phase shifter circuit}
\label{fig:4B2}
\end{figure}
\item \textbf{50 Hertz sweep unit}: For modulation with a low frequency magnetic field, a $\SI{50}{\hertz}$ current flows through the Helmholtz coils. As the resonance in this frequency range occurs at low magnetic fields, no static $DC$ magnetic field is required.
\item \textbf{Power supplies}:
\begin{enumerate}
\item \textit{$DC$ power supply}: The ESR circuit requires a highly stabilised, almost ripple-free voltage. This is obtained using an integrated circuit regulator.
\item \textit{Helmholtz coils power supply}: The Helmholtz coils power supply consists of a step down transformer ($\SI{220}{\volt}$ to $\SI{35}{\volt}$ $AC$). Variable coil current is provided in 10 steps using a band switch, while the current is displayed on a $3 \frac{1}{2}$ digit panel meter. The output is taken from the two terminals provided on the panel.
\end{enumerate}
\item \textbf{Helmholtz coils}: There are two coils exactly alike and parallel to each other, so connected that current passes through them in the same direction. The two coils increase the uniformity of the field near the centre. There are 500 turns in each coil, the diameter of windings is $\SI{15.4}{\centi \metre}$ and the separation between the coils is $\SI{7.7}{\centi \metre}$. In the centre of the coils, an attachment is provided to keep the sample in place and to minimise shocks and vibrations.
\item \textbf{Test sample}: A test sample, Diphenyl Picryl Hydrazyl (DPPH) (figure (\ref{fig:dpph})) is placed in a plastic tube, which itself is in the induction coils. This increases the filling factor to the maximum. DPPH is a free radical and widely used as a standard for ESR measurements.
\begin{figure}
\centering
\includegraphics{Figures/dpph.png}
\caption{Chemical structure of DPPH (2,2-Diphenyl-1-picrylhydrazyl, (free radical, 95\%))}
\label{fig:dpph}
\end{figure}
\item \textbf{Control and terminals}: (refer to the panel diagram in figure (\ref{fig:panel})).
\begin{enumerate}
\item Mains : To switch 'ON' or 'OFF' the ESR Spectrometer.
\item Phase : To adjust the phase between X and Y plates signals.
\item Current : To control current in Helmholtz coils.
\item 'H' Coils : Terminals and switch for Helmholtz coils.
\item Frequency : To adjust the frequency of the Oscillator.
\item X,Y,E : For X, Y and Earth terminals of the Oscilloscope.
\end{enumerate}
\item \textbf{Oscilloscope}: Any Oscilloscope, normally available in the laboratory of the following specifications or better, will be quite suitable for the observation of ESR resonance: a screen of diameter $\SI{12.5}{\centi \metre}$ and vertical amplifier sensitivity of $\SI{50}{\milli \volt \per \centi \metre}$.
\end{enumerate}
\section{Experimental Procedure}
A symmetrically fed bridge circuit (figure (\ref{fig:bridge})) contains a variable resistor $R$ in one branch and a high-quality tuned circuit (resonator) in the other. The specimen is located in the coil of the tuned circuit. Normally, the bridge is balanced so that the complex impedance of both branches is the same and consequently there is no voltage between points $a$ and $b$. If the external magnetic field is now so adjusted that the resonance absorption occurs in the specimen, the bridge becomes unbalanced and the voltage set up between $a$ and $b$ rectified and amplified.
\begin{figure}
\centering
\includegraphics{Figures/measuringbridge.png}
\caption{Measuring bridge of the ESR apparatus.}
\label{fig:bridge}
\end{figure}
If the magnetic field is modulated with $\SI{50}{\hertz}$ $AC$ (voltage $\SI{2}{\volt}$), the resonance point is passed through 100 times a second (figure (\ref{fig:magfields})), and the absorption signal can be displayed on an oscilloscope, provided the x-deflection is driven with the same $AC$ voltage in the correct phase.
\begin{figure}
\centering
\includegraphics{Figures/magfields.png}
\caption{The magnetic field $B$ is compounded from a $DC$ field $B_{=}$ and an alternating field $B_{\sim}$, so that $B = B_{=} + B_{\sim}$. Through $I_{=}$, $B_{=}$ is to be adjusted so that $B_{=} = B_r$}
\label{fig:magfields}
\end{figure}
\par
First, the bridge has to be balanced. In doing this (without external magnetic field),`R' on the resonator is brought to its central position and `C' to the left-hand stop. On the ESR power supply, key 8 `bridge balancing' (see operating instructions) is pressed, the oscilloscope input is switched to $DC$ and $\SI{1}{\volt \per \centi \metre}$ and the line is brought to the zero with 12 `Zero' (beforehand bring to zero with `Position' in GND setting). The wiring diagram is given in figure (\ref{fig:wiring}).
\begin{figure}
\centering
\includegraphics{Figures/wiringdg.png}
\caption{Wiring diagram}
\label{fig:wiring}
\end{figure}
\par
The operating instructions are as follows:
\begin{enumerate}
\item The `H-coil power' is switched ON and the current is adjusted to $\SI{150}{\milli \ampere}$.
\item The frequency and phase on the front panel of ESR spectrometer is set to \textit{centred}.
\item Observe four peaks on the Screen of CRO. Now adjust the FREQUENCY of the Spectrometer and SENSITIVITY of the CRO to obtain the best results (i.e. sharp peaks and good signal to noise ratio).
\item Adjust the PHASE knob to coincide the two peaks with the other two as far as possible.
\item Adjust the orientation of Helmholtz coils with respect to the main unit for best overlap of base lines.
\item To calibrate the X-plate of the CRO in terms of magnetic field, proceed as follows: the X amplifier (CRO) is adjusted to obtain the maximum X deflection (say 'P' divisions).
\item Read current flowing in Helmholtz coils and calculate the magnetic field.
\begin{equation}
\label{eqH}
H = \dfrac{32 \pi n}{10 \sqrt{125} \cdot a} \cdot I
\end{equation}
where $n$ is the number of turns in the coil, $a$ is the radius of the coils and $I$ is the current (in amperes) flowing through the coils. This is the root mean square (rms) field. The peak to peak field will be $2 \sqrt{2} H$ and corresponds to the `P' divisions of the CRO X plate. The zero field is at the middle point.
\item Measure the positions of the two peaks. These should be at equal distances from the middle point (say `Q' divisions). The magnetic field at the resonance is thus
\begin{equation}
\label{eqH0}
H_0 = \dfrac{2 \sqrt{2} H}{P} \cdot Q \text{ gauss}
\end{equation}
\item Increase the horizontal sensitivity of the Oscilloscope to the maximum within the linear range.
\item Obtain the best possible resonance peaks by varying the frequency, detection level and vertical sensitivity of the oscilloscope, keeping the current at 150 mA (say).
\item Keep the frequency fixed but vary the current flowing through the coils and measure the corresponding horizontal separation between the two peaks (2Q) after adjusting the phase. Take five to six sets of observations.
\item Draw a graph of $Q$ vs $1/I$, which should be a straight line. Calculate the $g$-factor using the $QI$ value (the slope) from the graph.
\item Repeat the experiment with different frequency.
\end{enumerate}
\section{Observations}
The make of the instrument is SES Instruments, Roorkee. The preliminary observations are as follows
\begin{enumerate}
\item Number of turns in each coil, $n = 500$,
\item Diameter of the windings, $2a = \SI{15.4}{\centi \metre}$,
\item Separation between the coils, $a = \SI{7.7}{\centi \metre}$,
\item Least count of frequency measurement, \\ $d \nu = \SI{0.01}{\mega \hertz}$,
\item Least count of current measurement, $dI = \SI{1}{\milli \ampere}$,
\item Least count of distance measurement in the oscilloscope, $ds = \SI{2}{\milli \metre}$.
\end{enumerate}
The data for frequencies $\nu_1 = \SI{13.05}{\mega \hertz}$ and $\nu_2 = \SI{14.72}{\mega \hertz}$ are tabulated in tables (\ref{tab:freq1}) and (\ref{tab:freq2}) respectively. The plots corresponding to the two frequencies are plotted in figures (\ref{fig:plot1}) and (\ref{fig:plot2}) respectively.
\begin{table}[]
\centering
\caption{Readings for $\nu_1 = \SI{13.05}{\mega \hertz}$}
\label{tab:freq1}
\begin{tabular}{@{}ccccc@{}}
\toprule
\begin{tabular}[c]{@{}c@{}}Current\\ $I$ (mA)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1/$I$\\ ($A^{-1}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$P$\\ (cm)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$2Q$\\ (cm)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$Q$\\ (mm)\end{tabular} \\ \midrule
96 & 125/12 & 5.6 & 3.4 & 17 \\
124 & 250/31 & 5.8 & 2.7 & 13.5 \\
151 & 1000/151 & 5.8 & 2.2 & 11 \\
178 & 500/89 & 5.8 & 1.9 & 9.5 \\
204 & 250/51 & 5.8 & 1.6 & 8 \\
230 & 100/23 & 5.8 & 1.4 & 7 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{Readings for $\nu_2 = \SI{14.72}{\mega \hertz}$}
\label{tab:freq2}
\begin{tabular}{@{}ccccc@{}}
\toprule
\begin{tabular}[c]{@{}c@{}}Current\\ $I$ (mA)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1/$I$\\ ($A^{-1}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$P$\\ (cm)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$2Q$\\ (cm)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$Q$\\ (mm)\end{tabular} \\ \midrule
124 & 250/31 & 5.6 & 4 & 20 \\
151 & 1000/151 & 5.6 & 3.2 & 16 \\
178 & 500/89 & 5.6 & 2.6 & 13 \\
204 & 250/51 & 5.6 & 2.2 & 11 \\
230 & 100/23 & 5.6 & 1.9 & 9.5 \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[scale = 0.56]{Figures/plot-1.png}
\caption{The $Q \sim 1/I$ plot for $\nu_1$}
\label{fig:plot1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale = 0.56]{Figures/plot-2.png}
\caption{The $Q \sim 1/I$ plot for $\nu_2$}
\label{fig:plot2}
\end{figure}
\section{Calculations and Results}
We have from (\ref{eqDelE}) and (\ref{eqge}),
\begin{equation}
\label{eq26}
g = \dfrac{h \nu}{\mu_B H_0}
\end{equation}
where $h = \SI{6.625e-27}{\erg \second}$, $\mu_B = \SI{0.927e-20}{\erg \per \gauss}$ and $H_0$ is the magnetic field on the sample at resonance. And from (\ref{eqH}), the magnetic field at the centre of the Helmholtz coils is
\begin{equation}
H = \dfrac{32 \pi n}{10 \sqrt{125} \cdot a} \cdot I \text{ gauss}
\end{equation}
Putting in the values, we get
\begin{equation}
H = 58.388 I \text{ gauss}
\end{equation}
As the peak to peak magnetic field is $H_{pp} = 2 \sqrt{2} H$, we have
\begin{equation}
H_{pp} = 165.146 I \text{ gauss}
\end{equation}
From here
\begin{equation}
H_0 = H_{pp} \dfrac{Q}{P} = 165.146 \dfrac{QI}{P}
\end{equation}
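As a quick numerical check of these factors (a minimal sketch using the coil parameters $n = 500$ and $a = \SI{7.7}{\centi \metre}$ listed in the observations):
\begin{verbatim}
import math

n = 500    # turns per coil
a = 7.7    # coil radius in cm

# Helmholtz field per ampere, in gauss (Gaussian-unit form used above)
H_per_amp = 32 * math.pi * n / (10 * math.sqrt(125) * a)
print(H_per_amp)                     # ~58.39 gauss per ampere (rms)
print(2 * math.sqrt(2) * H_per_amp)  # ~165.1 gauss per ampere (peak to peak)
\end{verbatim}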
Now from (\ref{eqH0}), to find $QI$, we need the slope of curves plotted in figures (\ref{fig:plot1}) and (\ref{fig:plot2}).
\par
The five summations for the data set plotted in figure (\ref{fig:plot1}) are as follows:
\par
\vspace{0.5cm}
$S_{x} = \mathlarger{\mathlarger{\sum}} x_{i} = \SI{39.971}{\per \ampere}$, \hspace{0.5cm} $S_{y} = \mathlarger{\mathlarger{\sum}} y_{i} = \SI{66}{\milli \metre}$,
\par
\vspace{0.5cm}
$S_{xx} = \mathlarger{\mathlarger{\sum}} x_{i}^2 = \SI{291.90}{\per \ampere \squared}$,
\par
\vspace{0.5cm}
$S_{yy} = \mathlarger{\mathlarger{\sum}} y_{i}^2 = \SI{795.5}{\milli \metre \squared}$,
\par
\vspace{0.5cm}
$S_{xy} = \mathlarger{\mathlarger{\sum}} x_{i}y_{i} = \SI{481.82}{\milli \metre \per \ampere}$.
\par
\vspace{0.5cm}
Now the slope is given by,
\begin{equation}
m_{\nu_1} = \dfrac{S S_{xy} - S_{x}S_{y}}{S S_{xx} - S_{x}^2} = \SI{1.64539}{\milli \metre \ampere} = QI
\end{equation}
Now, putting in the values in (\ref{eq26}), we get
\begin{equation}
\boxed{g_{\nu_1} = 1.97926}
\end{equation}
Similarly, the five summations for the data set plotted in figure (\ref{fig:plot2}) are as follows:
\par
\vspace{0.5cm}
$S_{x} = \mathlarger{\mathlarger{\sum}} x_{i} = \SI{29.555}{\per \ampere}$, \hspace{0.5cm} $S_{y} = \mathlarger{\mathlarger{\sum}} y_{i} = \SI{69.5}{\milli \metre}$,
\par
\vspace{0.5cm}
$S_{xx} = \mathlarger{\mathlarger{\sum}} x_{i}^2 = \SI{183.39}{\per \ampere \squared}$,
\par
\vspace{0.5cm}
$S_{yy} = \mathlarger{\mathlarger{\sum}} y_{i}^2 = \SI{1036.25}{\milli \metre \squared}$,
\par
\vspace{0.5cm}
$S_{xy} = \mathlarger{\mathlarger{\sum}} x_{i}y_{i} = \SI{435.51}{\milli \metre \per \ampere}$.
\par
\vspace{0.5cm}
Now the slope is given by,
\begin{equation}
m_{\nu_2} = \dfrac{S S_{xy} - S_{x}S_{y}}{S S_{xx} - S_{x}^2} = \SI{2.84171}{\milli \metre \ampere} = QI
\end{equation}
Now, putting in the values in (\ref{eq26}), we get
\begin{equation}
\boxed{g_{\nu_2} = 1.25532}
\end{equation}
\section{Error Analysis}
Error in $g$ is given by
\begin{equation}
\label{eqerrorg}
dg = \sqrt{\Big(\dfrac{\partial g}{\partial \nu} \sigma_{\nu}\Big)^2 + \Big(\dfrac{\partial g}{\partial P} \sigma_{P}\Big)^2 + \Big(\dfrac{\partial g}{\partial m} \sigma_{m}\Big)^2}
\end{equation}
where $m$ and $\sigma_m$ represent the slope and error in slope respectively. Now error in slope is given by
\begin{equation}
\label{eqslope}
\sigma_m = \sigma_y \times \sqrt{\dfrac{S}{\Delta}}
\end{equation}
$\sigma_y$ is the uncertainty in the measurement of $Q$, taken equal to its least count of $\SI{2}{\milli \metre}$, and $\Delta = S S_{xx} - S_x^2$. Putting the values into equations (\ref{eqerrorg}) and (\ref{eqslope}) and solving, we get
\begin{equation}
\boxed{d g_{\nu_1} = 0.5}
\end{equation}
and
\begin{equation}
\boxed{d g_{\nu_2} = 0.3}
\end{equation}
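As a rough consistency check for $\nu_2$ (a sketch assuming that the contributions of $\sigma_{\nu}$ and $\sigma_{P}$ are negligible compared to that of the slope): since $g \propto 1/m$ through $H_0$, equation (\ref{eqerrorg}) reduces to $dg \approx g\,\sigma_m/m$. With $\Delta = 5(183.39) - (29.555)^2 = 43.45$, equation (\ref{eqslope}) gives $\sigma_m = 2\sqrt{5/43.45} \approx \SI{0.68}{\milli \metre \ampere}$, so
\[
d g_{\nu_2} \approx 1.255 \times \dfrac{0.68}{2.84} \approx 0.3,
\]
which reproduces the quoted value.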
\section{Discussions}
\begin{enumerate}
\item Note that the frequencies used in magnetic resonance experiments range from $10^9$ to $10^{11}$ cps. These frequencies, lying below the infrared part of the spectrum, allow highly accurate investigation of energy level splittings so small that they are inaccessible or almost inaccessible by optical methods.
\item The probability of spontaneous transition in the radio-frequency region is very small, since this probability is proportional to $\nu^3$. Therefore, in paramagnetic resonance studies one is forced to deal only with induced absorption and emission.
\item While in the great majority of cases optical spectra arise from electric dipole transitions between energy levels, the lines of paramagnetic resonance absorption arise exclusively from magnetic dipole transitions. Consequently, the Einstein coefficients for induced absorption and emission will, in the case of paramagnetic resonance, be smaller by roughly four orders of magnitude.
\item As a result, the paramagnetic resonance effect is exceedingly small; that it can be observed at all is due to the high sensitivity of electronic methods of detection and the enormous number of photons coming into play ($\SI{1}{\milli \watt}$ corresponds to $n \simeq 10^{20}$ photons per second at a frequency of $10^{10}$ cps).
\item In the optical frequency region the line width is always very small in comparison with the fundamental frequency. In paramagnetic resonance the relation between these quantities becomes quite different, since the interactions causing a broadening of the lines can be of the same order of magnitude as the energy splitting which determines the resonance frequency. Because of this the width of paramagnetic resonance lines is often comparable to the fundamental frequency and can be measured with great accuracy. This opens up wide possibilities for investigation of different types of interactions in paramagnetic substances by means of analysis of the shape and width of a paramagnetic resonance line and of the character of its dependence upon various factors.
\item The most important factors determining the line width are magnetic dipole interactions, exchange forces, local electrical fields created by neighbouring magnetic particles, and finally, thermal motion; the natural line widths of radio-frequency spectra are completely negligible.
\item In contrast with optical experiments, in radio-frequency spectroscopy it is customary to use radiation which is so monochromatic that the generated band of frequencies is incomparably narrower than the absorption line width.
\item Paramagnetic resonance spectra are not studied by varying the frequency of the incident radiation, but by varying the characteristic frequencies of the absorbing systems. This is achieved by varying the static magnetic field.
\item That is, ESR can be observed at radio frequencies in a magnetic field of a few gauss, or in the microwave region in a magnetic field of a few kilogauss. The latter alternative has many distinct advantages: for each transition the absorbed energy is much larger, so the signal-to-noise ratio is much improved, and the high magnetic field provides separation between levels that are intrinsically wide and would remain partially overlapped at low fields. Because of these advantages, ESR in the microwave region is preferred for research purposes.
\item The observed peaks are in fact absorption dips, because the sample absorbs power from the induction coil. The spins precess at the Larmor frequency $\omega_0$, which varies in magnitude and direction due to the variation of the magnetic field $\Vec{H_0}$ caused by the alternating current in the Helmholtz coils. Now if the radio frequency $\omega_1$ falls within the range swept by $\omega_0$, resonance occurs.
\item The coincidence of peaks on the $x$-scale needs to be calibrated for magnetic field measurements. The coincidence ensures that the magnetic field is zero at the centre and has its peak values at the two ends. Complete merger of the peaks on the $y$-scale may not occur for many reasons, such as $\SI{50}{\hertz}$ pick-ups, ripples in the power supply, etc. Although every effort has been made to minimise these factors, the large amplification ($\simeq 4000$) in the circuitry makes them substantial. However, any non-coincidence on the $y$-scale is immaterial, as no measurement on the $y$-scale enters the calculation of the $g$-factor.
\begin{figure}
\centering
\includegraphics[scale = 0.65]{Figures/radfreq.png}
\caption{The radio-frequency field is linearly polarised, and can be regarded as two circularly polarised fields of opposite direction (say clockwise and anticlockwise). Further, the magnetic field $H_0$ also changes direction. Thus resonance occurs when the two frequencies ($\omega_1$ and $\omega_0$) become equal in magnitude as well as direction, i.e.\ four times in one full cycle of $H_0$.}
\label{fig:radfreq}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale = 0.8]{Figures/linfreq.png}
\caption{A linearly polarised field is equivalent to two fields rotating in opposite directions with the same frequency $\omega$.}
\label{fig:linfreq}
\end{figure}
\end{enumerate}
\section{Conclusions}
\begin{enumerate}
\item The value of $g$-factor for $\nu_1 = \SI{13.05}{\mega \hertz}$ was found to be $g = 2.0 \pm 0.5$ after taking care of the significant figures.
\item The value of $g$-factor for $\nu_2 = \SI{14.72}{\mega \hertz}$ was found to be $g = 1.2 \pm 0.3$ after taking care of the significant figures.
\item Although the first value was very close to the literature value, the value for the second frequency was unsatisfactory.
\item The results obtained were of the expected order for the lower frequency. The experiment was thus a moderate success.
\end{enumerate}
% Produces the bibliography via BibTeX.
\end{document}
%
% ****** End of file apssamp.tex ******
| {
"alphanum_fraction": 0.6833378148,
"avg_line_length": 88.6883942766,
"ext": "tex",
"hexsha": "a97e5190ac9d408d6de43a071dc1b84e8522d8f2",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8e42c1397386e609116cbb2a7fbaf8f3b6af1f4a",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "peakcipher/p343-modern-physics-lab",
"max_forks_repo_path": "ESR/Main/apssamp.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8e42c1397386e609116cbb2a7fbaf8f3b6af1f4a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "peakcipher/p343-modern-physics-lab",
"max_issues_repo_path": "ESR/Main/apssamp.tex",
"max_line_length": 1396,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "8e42c1397386e609116cbb2a7fbaf8f3b6af1f4a",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "peakcipher/p343-modern-physics-lab",
"max_stars_repo_path": "ESR/Main/apssamp.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-15T11:28:53.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-10-09T06:57:04.000Z",
"num_tokens": 14298,
"size": 55785
} |
\section{Programming and Configuring the FPGA Device}
The FPGA device must be programmed and configured to implement the designed
circuit. The required configuration file is generated by the Quartus Prime
Compiler's Assembler module. Intel's DE-series board allows the configuration to
be done in two different ways, known as JTAG and AS modes.
The configuration data is transferred from the host computer (which runs the
Quartus Prime software) to the board by means of a cable that connects
a USB port on the host computer to the USB-Blaster connector on the board.
To use this connection, it is necessary to have the USB-Blaster driver
installed. If this driver is not already installed, consult the
tutorial {\it Getting Started with Intel's DE-Series Boards}
for information about installing the driver.
Before using the board, make sure that the USB cable is properly connected
and turn on the power supply switch on the board.
In the JTAG mode, the configuration data is loaded directly
into the FPGA device. The acronym JTAG stands for Joint Test Action Group.
This group defined a simple way for testing digital circuits and loading data
into them, which became an IEEE standard. If the FPGA is configured in
this manner, it will retain its
configuration as long as the power remains turned on.
The configuration information is lost when the power is turned off.
The second possibility is to use the Active Serial (AS) mode.
In this case, a configuration device that includes some flash memory is used
to store the configuration data. Quartus Prime software places the configuration
data into the configuration device on the DE-series board. Then, this data is loaded
into the FPGA upon power-up or reconfiguration.
Thus, the FPGA need not be configured by the Quartus Prime software if the power
is turned off and on.
The choice between the two modes is made by switches on the DE-series
board. Consult the manual for your DE-series board for the location of these switches. The boards are set to JTAG mode by default.
This tutorial discusses only the JTAG programming mode.
\subsection{JTAG Programming for the DE5a-Net Board}
For the DE5a-Net board, the
programming and configuration task is performed as follows.
To program the FPGA chip, the RUN/PROG switch on the board must be in the RUN position.
Select {\sf Tools $>$ Programmer} to reach the window in Figure~\ref{fig:38}.
Here it is necessary to specify the programming hardware and
the mode that should be used. If not already chosen by default,
select JTAG in the Mode box.
Also, if the USB-Blaster is not chosen by default, press the
{\sf Hardware Setup...} button and select the USB-Blaster in the window
that pops up, as shown in Figure~\ref{fig:39}.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.65]{figures/figure38.png}
\caption{The Programmer window.}
\label{fig:38}
\end{center}
\end{figure}
Observe that the configuration file {\it light.sof} is listed in the window in
Figure~\ref{fig:38}. If the file is not already listed, then click {\sf Add File}
and select it.
This is a binary file produced by the Compiler's Assembler module,
which contains the data needed to configure the FPGA device.
The extension {\it .sof} stands for SRAM Object File.
Ensure the {\sf Program/Configure} check box is ticked, as shown in Figure~\ref{fig:38}.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.65]{figures/figure39.png}
\caption{The Hardware Setup window.}
\label{fig:39}
\end{center}
\end{figure}
Now, press {\sf Start} in the window in Figure~\ref{fig:38}.
An LED on the board will light up to indicate that the programming operation is in progress.
If you see an error reported by Quartus Prime software indicating that
programming failed, then check to ensure that the board is properly powered on.
| {
"alphanum_fraction": 0.7778354529,
"avg_line_length": 50.038961039,
"ext": "tex",
"hexsha": "13f958d401c7dfbc4796024ffbae8eba8a4ba63b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d2c352472bc3dfab88a3497efd259f5fabbf3952",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "fpgacademy/Tutorials",
"max_forks_repo_path": "Hardware_Design/Quartus_Introduction/shared_sections/section_programming_and_configuring_pro.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d2c352472bc3dfab88a3497efd259f5fabbf3952",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "fpgacademy/Tutorials",
"max_issues_repo_path": "Hardware_Design/Quartus_Introduction/shared_sections/section_programming_and_configuring_pro.tex",
"max_line_length": 133,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d2c352472bc3dfab88a3497efd259f5fabbf3952",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "fpgacademy/Tutorials",
"max_stars_repo_path": "Hardware_Design/Quartus_Introduction/shared_sections/section_programming_and_configuring_pro.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 899,
"size": 3853
} |
\section{Introduction to graphSLAM}
An autonomous robot needs to address two critical problems to survive and
navigate within its surroundings: mapping the environment and finding its
relative location within the map. Simultaneous localization and mapping
(SLAM) is a process that aims to localize an autonomous mobile robot in a
previously unexplored environment while constructing a consistent and
incremental map of its environment~\cite{Saeedi2016}.
While filtering methods (extended Kalman filtering, information-form
filtering, particle filtering) used to dominate the SLAM literature, graph-based
approaches have recently (circa 2006) made a comeback. Introduced by Lu and
Milios in 1997~\cite{Lu1994}, graph-based approaches formulate SLAM as a least-squares
minimization problem.
\section{Application Usage}
The aim of the app is to perform 2D graphSLAM: the robot localizes itself in the
environment while, at the same time, building a map of that environment. The app
currently executes SLAM using MRPT rawlog files (both MRPT rawlog formats are
supported) as input, which should contain (some of) the following observation
types:
\begin{itemize*}
\item CObservationOdometry
\item CObservation2DRangeScan
\item CObservation3DRangeScan\hfill\\
Working with 3DRangeScans is currently in an experimental phase.
\end{itemize*}
The majority of the graphslam-engine parameters in each case should be
specified in an external~.ini file which is to be given as a command-line
argument. The following parameters can also be specified as command-line
arguments:
\begin{description*}
\item[.ini-file [REQUIRED] ]\hfill\\
Specify the .ini configuration file using the \texttt{-i},
\texttt{-\--ini-file} flags.
Configuration file parameters are read by the main CGraphSlamEngine
class as well as the node/edge registration schemes, and the
optimization scheme.
\item[rawlog-file [REQUIRED] ]\hfill\\
Specify the rawlog dataset file using the \texttt{-r},
\texttt{-\--rawlog} flags.
\item[ground-truth]\hfill\\
Specify a ground truth file with the \texttt{-g}, \texttt{-\--ground-truth} flags. Ground truth
has to be specified if the user has set visualize\_slam\_metric or
visualize\_ground\_truth to true in the .ini file; otherwise an
exception is raised.
\item[node/edge registration deciders]\hfill\\
Specify the node registration and/or edge registration decider
classes to be used with the \texttt{-n}, \texttt{-\--node-reg},
\texttt{-e}, \texttt{-\--edge-reg} flags. If not
specified, the default CFixedIntervalsNRD and CICPCriteriaERD are used
as the node and edge registration decider schemes respectively.
\item[optimizer class to be used]\hfill\\
Specify the class to be used for the optimization of the pose-graph
using the \texttt{-o}, \texttt{-\--optimizer} flags. Currently the only supported
optimization scheme is Levenberg-Marquardt non-linear graph
optimization defined in
\href{http://reference.mrpt.org/devel/group\_\_mrpt\_\_graphslam\_\_grp.html\#ga022f4a70be5ec7c432f46374e4bb9d66}{optimize\_graph\_spa\_levmarq}. If not specified, the default CLevMarqGSO is used.
\end{description*}
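As an illustration, a typical invocation combining the flags described above might look as follows (the file names are placeholders, and the decider/optimizer flags may be omitted to fall back to the defaults mentioned above):
\begin{verbatim}
graphslam-engine -i graphslam-engine_config.ini \
                 -r dataset.rawlog \
                 -g dataset.rawlog.GT.txt \
                 -n CFixedIntervalsNRD -e CICPCriteriaERD -o CLevMarqGSO
\end{verbatim}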
Furthermore, the application offers the following capabilities to the user:
\begin{itemize*}
\item Support of both kinds of MRPT rawlog formats,
action-observations and observation-only format. However, users should
also make sure that the deciders/optimizer classes that are to be used
also support the rawlog format that they are interested in.
\item Visual inspection of graphSLAM execution. Among other things, the
visualization window can be configured by the user to include the
following:
\begin{itemize*}
\item graphSLAM estimated robot trajectory
\item Odometry-only trajectory
\item ground-truth trajectory (if the corresponding ground-truth is
available)
\item Current robot 2D laser scan footprint
\item Draft version of the map
\item Currently constructed graph as edges between the registered
nodes
\end{itemize*}
\item Hotkey support for toggling various visualization features on/off.
\footnote{Any registration decider / optimizer can get notified of
keyboard/mouse events and can modify the visuals when these occur.
For more on this see the corresponding
\href{http://reference.mrpt.org/devel/classmrpt\_1\_1graphslam\_1\_1\_c\_registration\_decider\_or\_optimizer.html\#a3d8445d2382e282f3a03edfa686c6e22}{method}}
\end{itemize*}
\subsection{command-line arguments}
The available command line argument are also listed below, along with a
description of their usage:
\verbatiminput{./input/application_cmd_synopsis.txt}
\verbatiminput{./input/application_cmd_options.txt}
\subsection{Application output}
By default, graphslam-engine execution generates an output directory named
\textit{graphslam\_results} within the current working directory. The output directory
contains the following files:
\begin{description*}
\item[CGraphSlamEngine.log]\hfill\\
Logfile containing the activity of the CGraphSlamEngine class instance.
Activity involves output logger messages, time statistics for critical
parts of the application, and summary statistics about the constructed
graph (e.g. number of registered nodes, edges).
\item[node\_registrar.log, edge\_registrar.log, optimizer.log]\hfill\\
Logfiles containing the activity of the corresponding class instances. File
contents depend on the implementation of the corresponding classes but in
most cases they contain output logger messages, time statistics for
critical parts of the class execution.
\item[output\_graph.graph]\hfill\\
File contains the constructed graph in the VERTEX/EDGE format. The latter
can be visualized using the MRPT graph-slam application for verification
reasons.
For more information on this, consult the following pages:
\begin{itemize*}
\item \url{http://www.mrpt.org/Graph-SLAM\_maps}
\item \url{http://www.mrpt.org/list-of-mrpt-apps/application-graph-slam/}
\end{itemize*}
\item[output\_scene.3DScene]\hfill\\
File contains the 3DScene that was generated at the end of the
graphslam-engine execution. The latter can be visualized using the MRPT
SceneViewer3D tool.
For more information on this, see:
\url{http://www.mrpt.org/list-of-mrpt-apps/application-sceneviewer3d/}
\vspace*{10mm}
\ul{\textit{Note:}} File is generated only when the visualization of the
graph construction is enabled in the .ini configuration file. See .ini
parameters as well as the \texttt{-\--disable} flag for more on this.
\item[SLAM\_evaluation\_metric.log]\hfill\\
File contains the differences between the estimated trajectory increments
and the corresponding ground-truth increments and can be used to verify and
evaluate the performance of the SLAM algorithm. For more information on the
metric, see \cite{Burgard2009}.
\ul{\textit{Note:}} File is generated only when the ground-truth of the corresponding
dataset is given.
\end{description*}
\ul{\textit{Note:}} In case a directory named graphslam\_results, generated during a
previous execution, already exists, it is overwritten by default. If this is
not the desired behavior, the user can set the user\_decides\_about\_output\_dir
.ini parameter to true so that they are asked about this naming conflict during
program execution.
\section{Library Design}
This section provides insight into the graphslam-engine application and the corresponding library.
CGraphSlamEngine is the main class executing graphSLAM. CGraphSlamEngine
delegates most of the graph manipulation tasks to node/edge registration
deciders and optimizer classes. This keeps the different tasks independent
and makes for a reconfigurable setup, as the user can select
different decider/optimizer classes depending on the situation. Users
can also write their own decider/optimizer classes by inheriting from one of
the
\href{http://reference.mrpt.org/devel/classmrpt_1_1graphslam_1_1deciders_1_1_c_node_registration_decider.html}{CNodeRegistrationDecider},
\href{http://reference.mrpt.org/devel/classmrpt_1_1graphslam_1_1deciders_1_1_c_edge_registration_decider.html}{CEdgeRegistrationDecider}, \href{http://reference.mrpt.org/devel/classmrpt_1_1graphslam_1_1optimizers_1_1_c_graph_slam_optimizer.html}{CGraphSlamOptimizer}
interfaces depending on what part they want to implement.
\subsection{Registration Deciders}
The registration decider classes are divided into node and edge registration
deciders. The former are responsible for adding new nodes to the graph, while the
latter add additional edges between already registered graph nodes. These nodes can
be consecutive or non-consecutive.
\subsubsection{Node Registration Deciders (NRD)}
Node registration decider schemes add nodes to the graph according to a
specific criterion. Node deciders should implement the methods defined in the
CNodeRegistrationDecider abstract class. The latter provides the basic methods
that have to exist in every node registration decider class and which are
called from the main CGraphSlamEngine instance. For an example of inheriting
from this class see
\href{http://reference.mrpt.org/devel/classmrpt_1_1graphslam_1_1deciders_1_1_c_fixed_intervals_n_r_d.html}{CFixedIntervalsNRD}.
Currently two specific node registration schemes have been implemented:
\begin{itemize*}
\item \href{http://reference.mrpt.org/devel/classmrpt_1_1graphslam_1_1deciders_1_1_c_fixed_intervals_n_r_d.html}{CFixedIntervalsNRD} \hfill\\
Decider registers a new node in the graph if the distance or the angle
difference with respect to the previously registered node surpasses a
corresponding fixed threshold. Decider makes use only of the
CObservationOdometry instances in the rawlog file.
\item \href{http://reference.mrpt.org/devel/structmrpt_1_1graphslam_1_1deciders_1_1_c_i_c_p_criteria_n_r_d_1_1_t_params.html}{CICPCriteriaNRD} \hfill\\
Decider registers a new node in the graph if the distance or the angle
difference with respect to the previously registered node surpasses a
corresponding fixed threshold. Decider measures the distance from the
current position to the previously registered node using ICP (i.e.\ matches
the current range scan against the range scan of the previous node). In
case of noisy 2D laser scans, decider can also use odometry information to
locally correct and smoothen the robot trajectory. Decider makes use of
2DRangeScans or 3DRangeScans.
\end{itemize*}
\ul{\textit{Note:}} As a naming convention, all the implemented node registration deciders are
suffixed with the NRD acronym.
For more information on this see the
\href{http://reference.mrpt.org/devel/classmrpt_1_1graphslam_1_1deciders_1_1_c_node_registration_decider.html}{node
registration deciders interface}
\subsubsection{Edge Registration Deciders (ERD)}
Edge registration decider schemes add edges between already added nodes in the
graph according to a specific criterion. Edge deciders should implement the
methods defined in CEdgeRegistrationDecider abstract class.
CEdgeRegistrationDecider provides the basic methods that have to exist in every
edge registration decider class. For an example of inheriting from this class
see CICPCriteriaERD.
Currently two specific edge registration schemes have been implemented:
\begin{itemize*}
\item \href{http://reference.mrpt.org/devel/structmrpt_1_1graphslam_1_1deciders_1_1_c_i_c_p_criteria_e_r_d_1_1_t_params.html}{CICPCriteriaERD} \hfill\\
Registers new edges in the graph connecting the last inserted node with earlier
nodes. The criterion for adding a new edge is the goodness of the
candidate ICP edge. The nodes for ICP are picked based on their distance from the
last inserted node. Decider makes use of 2DRangeScans or 3DRangeScans.
\item \href{http://reference.mrpt.org/devel/classmrpt_1_1graphslam_1_1deciders_1_1_c_loop_closer_e_r_d.html}{CLoopCloserERD}\hfill\\
Evaluates sets of potential loop closure edges in the graph based on their
pairwise consistency matrix. The decider first splits the graph into partitions
based on the 2D laser scans of the nodes and then searches for potential loop
closure edges within the partitions. The goal is to register only the subset of the
potential loop closure edges that maximally agree with each other. The decider is
implemented based on \cite{Blanco2006}, \cite{Olson2009}.
\end{itemize*}
\ul{\textit{Note:}} As a naming convention, all the implemented edge
registration deciders are suffixed with the ERD acronym.
For more information on this see the
\href{http://reference.mrpt.org/devel/classmrpt_1_1graphslam_1_1deciders_1_1_c_edge_registration_decider.html}{edge
registration deciders interface}
\subsection{graphSLAM Optimizers (GSO)}
Optimizer classes optimize an already constructed graph so that the registered
edges maximally agree with each other. Optimizer schemes should implement the
methods defined in
\href{http://reference.mrpt.org/devel/classmrpt_1_1graphslam_1_1optimizers_1_1_c_graph_slam_optimizer.html}{CGraphSlamOptimizer}
abstract class. For an example of inheriting from this class see
\href{http://reference.mrpt.org/devel/classmrpt_1_1graphslam_1_1optimizers_1_1_c_lev_marq_g_s_o.html}{CLevMarqGSO}.
\ul{\textit{Note:}} As a naming convention, all the implemented optimizer
classes are suffixed with the GSO acronym.
For more information on this see the
\href{http://reference.mrpt.org/devel/classmrpt_1_1graphslam_1_1optimizers_1_1_c_graph_slam_optimizer.html}{graphSLAM optimizers interface}
\subsection{Notable links}
Important links for the library/application are provided below:
\begin{itemize*}
\item \href{http://www.mrpt.org/list-of-mrpt-apps/application-graphslamengine/}{graphslam-engine
- Application page}
\item \href{https://www.youtube.com/watch?v=Pv0yvlzrcXk}{Demonstration video}
\item \href{http://reference.mrpt.org/devel/structmrpt_1_1graphslam_1_1_c_graph_slam_engine_1_1_t_r_g_b_d_info_file_params.html}{CGraphSlamEngine documentation}
\item \href{http://reference.mrpt.org/devel/classmrpt_1_1graphslam_1_1_c_registration_decider_or_optimizer.html}{Registration Deciders / Optimizers documentation}
\item \href{https://github.com/MRPT/GSoC2016-discussions/issues/2#}{GSoC 2016
graphslam-engine discussion}
\item Directory of ~.ini sample files: \url{../../share/mrpt/config_files/graphslam-engine/}
\item Directory of demo datasets: \url{../../share/mrpt/datasets/graphslam-engine-demos/}
\end{itemize*}
\section{Further Improvements}
As of now, the mrpt-graphslam library is still under active development. The
features that have been scheduled for implementation are listed below:
\begin{itemize*}
\item Clean up and fix code bugs
\item Integrate with ROS --- Add support for online execution
\item Add support for multi-robot graphSLAM
\item Add nodes in the graph in an adaptive fashion
\item Implement node reduction scheme
\item Add support for visual sensors (e.g.\ working with point features such as SIFT)
\item Integrate with 3rd party optimization libraries
\end{itemize*}
For more details on the progress of the aforementioned improvements, see the
corresponding \href{https://github.com/MRPT/mrpt/milestone/9}{github milestone}
| {
"alphanum_fraction": 0.7783213673,
"avg_line_length": 54.0859106529,
"ext": "tex",
"hexsha": "b6f1024b49b2903a42835179ba6e64b28c7f60cb",
"lang": "TeX",
"max_forks_count": 588,
"max_forks_repo_forks_event_max_datetime": "2022-03-31T08:05:40.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-07-23T01:13:18.000Z",
"max_forks_repo_head_hexsha": "4a59edd8b3250acea27fcb94bf8e29bee1ba8e1c",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "Russ76/mrpt",
"max_forks_repo_path": "doc/graphslam-engine-guide/tex_includes/main_body.tex",
"max_issues_count": 772,
"max_issues_repo_head_hexsha": "4a59edd8b3250acea27fcb94bf8e29bee1ba8e1c",
"max_issues_repo_issues_event_max_datetime": "2022-03-27T02:45:51.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-07-18T19:18:54.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "Russ76/mrpt",
"max_issues_repo_path": "doc/graphslam-engine-guide/tex_includes/main_body.tex",
"max_line_length": 266,
"max_stars_count": 1372,
"max_stars_repo_head_hexsha": "4a59edd8b3250acea27fcb94bf8e29bee1ba8e1c",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "Russ76/mrpt",
"max_stars_repo_path": "doc/graphslam-engine-guide/tex_includes/main_body.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-30T12:55:33.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-07-25T00:33:22.000Z",
"num_tokens": 3947,
"size": 15739
} |
\documentclass[aspectratio=169]{beamer}
% other options: finnish, sectionpages
\usetheme{tut}
\usepackage[english]{babel}
\usepackage{listings}
\usepackage{csquotes}
\usepackage[style=authoryear,backend=biber]{biblatex}
\usepackage{filecontents}
\usepackage{blindtext}
\usepackage{pgfpages}
%\setbeameroption{show notes on second screen}% you'll need pgfpages for this one and a suitable pdf viewer i.e. dspdfviewer
%\setbeameroption{show notes}
\lstset{%
basicstyle=\scriptsize\ttfamily,
backgroundcolor=\color{TUTGrey}
}
\begin{filecontents}{bibliography.bib}
@book{hoenig1997,
author = {Hoenig, Alan},
title = {TEX Unbound: Latex and TEX Strategies for Fonts, Graphics and More},
year = {1997},
publisher = {Oxford University Press, Inc.}
}
@misc{tantau2013,
title = {The BEAMER class -- user guide for version~3.36},
author = {T.~Tantau, J.~Wright, V.~Mileti{\'c}},
year = {2015},
}
@online{tutgraphic,
title = {Graphic guidelines},
author = {{Tampere University of Technology}},
note = {tutka > image > printed publications > graphic guidelines}
}
\end{filecontents}
\addbibresource{bibliography.bib}
% The color palettes can be changed
% primary: lower half of titlepage, lower half of left color beam, default block environment
% secondary: upper half of left color beam, example block body
% tertiary: top header showing outline
% quaternary: example block title
% structure: structure elements such as list items
%\setbeamercolor{palette primary}{fg=white,bg=TUTsecPetrol}
%\setbeamercolor{palette secondary}{fg=white,bg=TUTsecOrange}
%\setbeamercolor{palette quaternary}{use=palette secondary,bg=palette secondary.bg!50!black}
\title{TUT Beamer Theme}
\subtitle{A theme for typesetting presentation slides using \LaTeX{}}
\author{Tuomas Välimäki}
\email{[email protected]}
\institute{Department of Automation Science and Engineering\\Tampere University of Technology}
\date{\today}
\begin{document}
\maketitle
\section*{Outline}
\begin{frame}{Outline}
\tableofcontents
\end{frame}
\section{Introduction}
\subsection{Main features}
\begin{frame}{Main features}
\begin{itemize}
\item supports all aspect ratios
\item vector backgrounds ``drawn'' using Ti\emph{k}Z
\item customizable colors
\item bilingual (finnish, english)
\end{itemize}
\bigskip
Newest versions available under \url{https://github.com/tvalimaki/tut-beamer}
\end{frame}
\note{This is how a note page looks like if you use the beameroptions ``show notes'' or ``show notes on second screen''}
\subsection{Colors}
\begin{frame}{Colors, 1/2}
\begin{columns}
\begin{column}{0.475\textwidth}
All colors introduced in TUT graphic guidelines are predefined.
\end{column}
\begin{column}{0.475\textwidth}
Primary colors:\par
\begin{tikzpicture}[%
node distance=9em,
every node/.append style={font=\scriptsize,
minimum size=3ex}
]
\node[rectangle,fill=TUTGreen,label=east:TUTGreen] (1) {};
\node[rectangle,fill=TUTBlue, label=east:TUTBlue,right of=1] (2) {};
\node[rectangle,fill=TUTGrey, label=east:TUTGrey,below of=1,node distance=4ex] (3) {};
\end{tikzpicture}
Secondary colors:\par
\begin{tikzpicture}[%
node distance=9em,
every node/.append style={font=\scriptsize,
minimum size=3ex}
]
\node[rectangle,fill=TUTsecOrange, label=east:TUTsecOrange] (1) {};
\node[rectangle,fill=TUTsecGreen, label=east:TUTsecGreen, right of=1] (2) {};
\node[rectangle,fill=TUTsecPink, label=east:TUTsecPink, below of=1,node distance=4ex] (3) {};
\node[rectangle,fill=TUTsecPetrol, label=east:TUTsecPetrol, right of=3] (4) {};
\node[rectangle,fill=TUTsecPlum, label=east:TUTsecPlum, below of=3,node distance=4ex] (5) {};
\node[rectangle,fill=TUTsecBlue, label=east:TUTsecBlue, right of=5] (6) {};
\node[rectangle,fill=TUTsecRed, label=east:TUTsecRed, below of=5,node distance=4ex] (7) {};
\node[rectangle,fill=TUTsecDarkblue,label=east:TUTsecDarkblue,right of=7] (8) {};
\node[rectangle,fill=TUTsecDarkred, label=east:TUTsecDarkred, below of=7,node distance=4ex] (9) {};
\end{tikzpicture}
\end{column}
\end{columns}
\end{frame}
\begin{frame}[containsverbatim]{Colors, 2/2}
In addition to defining colors for single items, the color palettes of the theme can be changed.
Try for example adding
\begin{lstlisting}[%
language={[LaTeX]TeX},
texcsstyle=*\color{TUTsecOrange},
moretexcs={setbeamercolor}
]
\setbeamercolor{palette primary}{fg=white,bg=TUTsecPetrol}
\setbeamercolor{palette secondary}{fg=white,bg=TUTsecOrange}
\setbeamercolor{palette quaternary}{use=palette secondary,
bg=palette secondary.bg!50!black}
\end{lstlisting}
to your preamble.
\end{frame}
\section{Example environments}
\subsection{Figures and equations}
\begin{frame}{Figures and equations}
\begin{columns}[onlytextwidth]
\begin{column}{0.5\textwidth}
\centering
\begin{figure}
\includegraphicscopyright[width=\textwidth]{photo.jpg}{Image courtesy of \href{http://openphoto.net/gallery/image/view/5468}{openphoto.net}}
\end{figure}
\end{column}
\begin{column}{0.4\textwidth}
Here is some regular text in a column. And there is an equation
\begin{displaymath}
f(x)=ax^2+bx+c
\end{displaymath}
Here is some \alert{important} text.
\end{column}
\end{columns}
\end{frame}
\subsection{Lists}
\begin{frame}{List environments}
\begin{columns}[onlytextwidth]
\begin{column}{0.5\textwidth}
This slide has a list\dots
\blinditemize[3]
\vspace*{5mm}
descriptions\dots
\blinddescription[2]
\end{column}
\begin{column}{0.5\textwidth}
as well as some enumerations
\blindenumerate[4]
\end{column}
\end{columns}
\end{frame}
\subsection{Blocks}
\begin{frame}{Block environments}
\begin{exampleblock}{Example}
This is an example
\end{exampleblock}
\begin{alertblock}{Note}
This is important
\end{alertblock}
\begin{theorem}[Pythagoras]
$ a^2 + b^2 = c^2$
\end{theorem}
\end{frame}
\section{Further reading}
\begin{frame}{Further reading}
\nocite{*}
\printbibliography[heading=none]
\end{frame}
\end{document}
| {
"alphanum_fraction": 0.7062727129,
"avg_line_length": 32.1269035533,
"ext": "tex",
"hexsha": "bdf4b3f752cec18718acb5160a41797c44aa92db",
"lang": "TeX",
"max_forks_count": 10,
"max_forks_repo_forks_event_max_datetime": "2021-10-12T04:13:23.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-11-02T03:10:26.000Z",
"max_forks_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "lemoxiao/Awesome-Beamer-Collection",
"max_forks_repo_path": "200+ beamer 模板合集/tut-beamer-master(坦佩雷理工大学)/tut_beamer_example.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "lemoxiao/Awesome-Beamer-Collection",
"max_issues_repo_path": "200+ beamer 模板合集/tut-beamer-master(坦佩雷理工大学)/tut_beamer_example.tex",
"max_line_length": 148,
"max_stars_count": 13,
"max_stars_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "lemoxiao/Awesome-Beamer-Collection",
"max_stars_repo_path": "200+ beamer 模板合集/tut-beamer-master(坦佩雷理工大学)/tut_beamer_example.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-24T09:27:26.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-07-30T04:09:54.000Z",
"num_tokens": 1910,
"size": 6329
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% StMcEvent - User Guide and Reference Manual -- LaTeX Source
%
% $Id: StMcEvent.tex,v 2.10 2006/08/15 21:41:59 jeromel Exp $
%
% Authors:Michael A. Lisa
% Thomas S. Ullrich
% Manuel Calderon de la Barca Sanchez
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Notes to the authors:
%
% - A template for a class reference is at the end of this file.
% - Wrap all names functions with \name{}
% - All code, examples, prototypes in \verb+ ... +\\
% or \begin{verbatim} ... \end{verbatim}
% - Use \StMcEvent if you refer to the package itself (not the class)
%
% This file is best edit with xemacs and the 'Function' package loaded.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% $Log: StMcEvent.tex,v $
% Revision 2.10 2006/08/15 21:41:59 jeromel
% Fix rhic -> rhic.bnl.gov
%
% Revision 2.9 2003/12/02 16:42:10 calderon
% fix a verb statement extending over two lines
%
% Revision 2.8 2003/10/08 20:27:17 calderon
% Update documentation on FTPC, CTB, BEMC hits.
%
% Revision 2.7 2003/05/15 18:29:04 calderon
% Added data members from modified g2t_event table:
% Event Generator Final State Tracks, N Binary Collisions,
% N Wounded Nucleons East and West, N Jets.
%
% Revision 2.6 2003/04/16 18:12:35 calderon
% Documentation on subprocess id.
%
% Revision 2.5 2000/06/06 02:59:11 calderon
% Introduction of Calorimeter classes. Update documentation.
%
% Revision 2.4 2000/04/18 15:49:38 calderon
% Updated documentation on particle table, added Table of Contents for
% PDF.
%
% Revision 2.3 2000/04/18 00:55:47 calderon
% Updated documentation in class reference. Added section about
% inclusion of particle table.
%
% Revision 2.2 2000/02/03 23:04:30 calderon
% Fixed typo that caused index to be generated in \texttt
%
% Revision 2.1 2000/02/03 03:36:59 calderon
% Documentation for version 2.0
%
% Revision 2.0 1999/11/17 02:01:07 calderon
% Completely revised for new StEvent
%
% Revision 1.4 1999/07/23 00:03:48 calderon
% Corrected documentation about macro location, and current libraries where StMcEvent runs
%
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[twoside]{article}
\parindent 0pt
\parskip 6pt
\advance\textwidth by 80pt%
\advance\evensidemargin by -80pt%
\usepackage{graphicx}
\usepackage{psboxit}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsfonts}
\usepackage{fancyhdr}
\usepackage{times}
\usepackage{verbatim}
\usepackage{makeidx}
\usepackage[dvips=true,hyperindex=true,colorlinks=true,linkcolor=blue,bookmarks=true]{hyperref}
\PScommands % init boxit
\makeindex
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Define header and footer style
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\pagestyle{fancyplain}
\rhead[\fancyplain{}{\bfseries\leftmark}]
{\fancyplain{}{\bfseries\rightmark}}
\lhead[\fancyplain{}{\bfseries\rightmark}]
{\fancyplain{}{\bfseries\leftmark}}
\rfoot[{}]{\fancyplain{}{\bfseries\thepage}}
\lfoot[\fancyplain{}{\bfseries\thepage}]{}
\cfoot{}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Typographic Conventions
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand{\name}[1]{\textsl{#1}}% class-, function-, package names
\newcommand{\StEvent}{\textsf{StEvent}}
\newcommand{\StMcEvent}{\textsf{StMcEvent}}
\newcommand{\StAssociationMaker}{\textsf{StAssociationMaker}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Define multiline labels for class reference
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand{\entrylabel}[1]{\mbox{\textbf{{#1}}}\hfil}%
\newenvironment{entry}
{\begin{list}{}%
{\renewcommand{\makelabel}{\entrylabel}%
\setlength{\labelwidth}{90pt}%
\setlength{\leftmargin}{\labelwidth}
\advance\leftmargin by \labelsep%
}%
}%
{\end{list}}
\newcommand{\Entrylabel}[1]%
{\raisebox{0pt}[1ex][0pt]{\makebox[\labelwidth][l]%
{\parbox[t]{\labelwidth}{\hspace{0pt}\textbf{{#1}}}}}}
\newenvironment{Entry}%
{\renewcommand{\entrylabel}{\Entrylabel}\begin{entry}}%
{\end{entry}}
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Title page
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{titlepage}
\pagestyle{empty}
\vspace*{-35mm}
\begin{center}
\mbox{\includegraphics[width=2cm]{StarIcon.eps}}
{\Large\bf STAR Offline Library Long Writeup}
\hfill\mbox{}\\[3cm]
\mbox{\includegraphics[width=\textwidth]{StMcEventTitle.eps}}
\hfill\mbox{}\\[3cm]
{\LARGE User Guide and Reference Manual}\\[2cm]
{\LARGE $ $} \\[5mm] % replaced by cvs with current revision
{\LARGE $ $} % replaced by cvs with current revision
\vfill
\end{center}
\cleardoublepage
\end{titlepage}
\pagenumbering{roman}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Table of contents
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\tableofcontents
\cleardoublepage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% User Guide
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\pagenumbering{arabic}
\part{User Guide}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
\StMcEvent\footnote{We will adopt the convention used in \StEvent\
and write \StMcEvent\ when we refer to the package and
\name{StMcEvent} when we refer to the class.} is a package that
complements the use of \StEvent\ . The aim is for the user to be able
to access and analyze Monte Carlo information with the same object-oriented
approach as \StEvent\ . The top class is \name{StMcEvent}, and a
pointer to this class enables access to all the relevant Monte Carlo
information. Again, the pointer is obtained through a Maker, in this
case \name{StMcEventMaker}. An example invocation is found in sec.~\ref{sec:howto}.
The information contained in \StMcEvent\ is designed to be simply
a reflection of the information already available in the existing
g2t tables. The primary keys and foreign keys are replaced by
associations between classes through pointers, like in \StEvent\ .
At the moment, not all the information found on the g2t tables is
encapsulated in \StMcEvent; for a full description of what is
available, please refer to sec.~\ref{sec:refman}. Also note that
\StMcEvent\ is a work in progress, and if additional information
is needed, it can be added.
The goal of \StMcEvent\ is to be used in conjunction with \StEvent\ in
order to have an OO model for both pure Monte Carlo data and DST data
that has passed through the whole reconstruction chain. To be able
to relate the 2 packages, there is an additional package that is
run after both \StEvent\ and \StMcEvent\ are filled: \StAssociationMaker\ .
\index{StAssociationMaker}
The aim of \StAssociationMaker\ is to establish the relationships
between the reconstructed and Monte Carlo information, so that
users can easily check whether a particular reconstructed hit or
track is found, and if that is the case,
to directly obtain the associated Monte
Carlo hit or track so that an assessment of the quality of the
data can be made.
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{How to read this document}
This document is divided in two parts, a user guide and a
reference manual. In the first part we rather concentrate on basic
questions and provide guidance to get started. Some of the information
given here overlaps that one found in the \StEvent\ documentation,
and we provide it again here for completeness (We think that it is better
to be redundant than to have to remember which information is in which
manual). The reference section
provides information on all available classes, their member functions
and related operators. The class references contain one or more
examples which demonstrate some features of each class and how to use
them.
New users should \textbf{not} start with the Reference section. It is
meant as a lookup when specific information is needed. Beginners
should study the User Guide and should make themselves familiar only
with those classes they will encounter more frequently:
\name{StMcEvent} (sec.~\ref{sec:StMcEvent}),
\name{StMcTrack} (sec.~\ref{sec:StMcTrack}),
\name{StMcVertex} (sec.~\ref{sec:StMcVertex}),
\name{StMcTpcHit} (sec.~\ref{sec:StMcTpcHit}),
\name{StMcSvtHit} (sec.~\ref{sec:StMcSvtHit}),
and \name{StMcFtpcHit} (sec.~\ref{sec:StMcFtpcHit}).
Understanding the various
examples certainly is the best way to get started. However, the examples
are given for illustration purposes only, and are not complete programs.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Further documentation}
\label{sec:furtherdoc}
\StMcEvent\ makes use of various classes from the StarClassLibrary (SCL).
To obtain the SCL documentation, the easiest thing to do is to go to
the web page set up by Gene van Buren.
Go to
\begin{verbatim}
http://www.star.bnl.gov/STARAFS/comp/root/special_docs.html
\end{verbatim}
Here you can easily obtain the documentation for several packages
(this one, for example).
\index{Gene's Documentation Page}
\index{SCL} \index{StarClassLibrary}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Getting \StMcEvent\ Sources} \index{Getting StMcEvent sources}
To access the complete source code proceed as follows:
\StMcEvent\ is under {\bf CVS} control at BNL. It can
be accessed via \name{afs}: \index{afs} \index{CVS} \index{CVSROOT}
\begin{enumerate}
\item Obtain an \name{afs} token: \name{klog -cell rhic}.
\item Make sure \name{\$CVSROOT} is set properly:\\ %%$
(i.e.~\name{CVSROOT = /afs/rhic.bnl.gov/star/packages/repository})
\item Check-out package into your current working directory:\\
\name{cvs checkout StRoot/StMcEvent}
\end{enumerate}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Coding Standards}
\index{Coding Standards}
\StMcEvent\ tries to follow the STAR coding guidelines as described on the
STAR $\rightarrow$ Computing $\rightarrow$ Tutorials $\rightarrow$ C++
coding standards web page: \\
http://www.star.bnl.gov/STAR/html/comp\_l/train/standards.html.\\
Here we
summarize the most relevant ones concerning the programmable interface:
\begin{itemize}
\item all classes, enumerations and functions start with the prefix
\textbf{St}
\item all member functions start with a lowercase letter
\item header files have the extension \textbf{.hh}, source files have
the extension\textbf{ .cc}
\item the use of underscores in names is discouraged
\item classes, methods, and variables have self-explanatory English
names and first letter capitalization to delineate words
\end{itemize}
\StMcEvent\ also follows the following rules set forth in \StEvent\
\begin{itemize}
\item methods (member functions) which return a certain object have
the same name as the referring object, i.e., without a preceding
\name{get} prefix \\ (as in \name{StMcEvent::primaryVertex()})
\item methods which assign a value to a data member (or members) carry
the prefix \name{set} followed by the name of the member (as in \name{StMcEvent::set\-Primary\-Vertex()}).
\item integer variables which serve as counter or indices and never
can take negative values are consistently declared as
\name{unsigned}.
\item Objects which are returned by pointer are not guaranteed to
exist in which case a NULL pointer is returned. It is the user's
responsibility to check for the return value and make sure the
pointer is valid (non-zero) before she/he dereferences it. Objects
which are guaranteed to exist are returned by reference or by value.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conventions}
\subsection{Numbering Scheme}
\label{sec:numberscheme}
\index{Numbering}
The information here is a reiteration of that found in the \StEvent\
Manual. Both packages use the same conventions, but in order to
stress the difference between the official STAR numbering scheme
for subdetector components, and
the C/C++ conventions for indexing the containers with which we represent those
subdetectors, we repeat the cautions here. The C/C++ convention
is that the first element in an array has the index 0. This applies
to vectors, collections or other containers that allow indexing.
The TPC sectors and padrows,
the SVT layers, ladders and wafers, and the FTPC planes and sectors
all start counting at 1. So it is important to remember this when
using the member functions that return the sector or padrow of a
TPC hit, for example, to address the elements in a container. If
you want to obtain the hit container in which a particular hit
\verb+h+ is stored, you would do the following:
\begin{verbatim}
evt->tpcHitCollection()->sector(h.sector()-1)->padrow(h.padrow()-1).hits();
\end{verbatim}
Note the subtraction of 1, in order to index the array properly.
The functions that have this convention are:
\begin{itemize}
\item \texttt{StMcTpcHit::sector()}
\item \texttt{StMcTpcHit::padrow()}
\item \texttt{StMcFtpcHit::plane()}
\item \texttt{StMcSvtHit::layer()}
\item \texttt{StMcSvtHit::ladder()}
\item \texttt{StMcSvtHit::wafer()}
\item \texttt{StMcSvtHit::barrel()}
\item \texttt{StMcRichHit::pad()}
\item \texttt{StMcRichHit::row()}
\end{itemize}
As of October 2003, the volume Id of the FTPC hits also contains a sector. However, the hits
are still only contained in a collection based on the plane.
\subsection {References and Pointers}
Part of the goal of \StMcEvent\ is the use of pointers and references
as opposed to id's and foreign keys. However, the handling of
pointers must be done with care.
To reiterate wise words of caution: If a function returns an object by
\textit{reference},
the object is guaranteed to exist. If a function returns an object by {\it pointer},
then it is a good idea to always check if the pointer is \verb+NULL+. Not doing
this can be the cause of many a headache and sleepless nights. This is one of
the most common ``monsters under the bed'' that can painfully bite,
and crash code all over the place.
\subsection{Units}
\index{units} \index{system of units}
\label{sec:units}
All quantities in \StMcEvent\ are stored using the official STAR units:
cm, GeV and Tesla. In order to maintain a coherent system of units it
is recommended to use the definitions in \name{SystemOfUnits.h} from
the StarClassLibrary. They allow to 'assign' a unit to a given
variable by multiplying it with a constant named accordingly
(centimeter, millimeter, kilometer, tesla, MeV, ...). The values of
the constants are chosen such that the result of the multiplication
always follows the STAR system of units.
The following example illustrates their use:
{\footnotesize
\begin{verbatim}
double a = 10*centimeter;
double b = 4*millimeter;
double c = 1*inch;
double E1 = 130*MeV;
double E2 = .1234*GeV;
//
// Print in STAR units
//
cout << "STAR units:" << endl;
cout << "a = " << a << " cm" << endl;
cout << "b = " << b << " cm" << endl;
cout << "c = " << c << " cm" << endl;
cout << "E1 = " << E1 << " GeV" << endl;
cout << "E2 = " << E2 << " GeV" << endl;
//
// Print in personal units
//
cout << "\nMy units:" << endl;
cout << "a = " << a/millimeter << " mm" << endl;
cout << "b = " << b/micrometer << " um" << endl;
cout << "c = " << c/meter << " m" << endl;
cout << "E1 = " << E1/TeV << " TeV" << endl;
cout << "E2 = " << E2/keV << " keV" << endl;
\end{verbatim}
}%footnotesize
The resulting printout is:
{\footnotesize
\begin{verbatim}
STAR units:
a = 10 cm
b = 0.4 cm
c = 2.54 cm
E1 = 0.13 GeV
E2 = 0.1234 GeV
My units:
a = 100 mm
b = 4000 um
c = 0.0254 m
E1 = 0.00013 TeV
E2 = 123400 keV
\end{verbatim}
}%footnotesize
Further documentation can be found in the StarClassLibrary manual
(see sec.~\ref{sec:furtherdoc}).
\subsection{Containers and Iterators}
\label{sec:containers}
\index{containers}
\index{iterators}
The containers used throughout \StMcEvent\
\begin{itemize}
\item are STL vectors (see sec.~\ref{subsec:vectors} below).
\item store objects by pointer.
\end{itemize}
Because they are {\tt vector}s, they allow random access, as in:
\verb+pointer_to_object = container[index];+
The containers are guaranteed to provide the following member functions:
\name{size()}, \name{begin()}, \name{end()}. All collections in
\StMcEvent\ have a referring iterator defined.
The containers and iterators are all declared in the file
{\tt StMcContainers.hh}, for convenience. In addition,
all the classes in \StMcEvent\ are {\tt \#include}d in
the header {\tt StMcEventTypes.hh}.
\index{{\tt \#include} files}
\index{include files}
\index{StMcEventTypes}
The names given to the containers are meant to reflect
the objects contained, as well as whether a container is
a {\it structural} container or not.
\index{structural containers}
By a structural container we mean a container that owns the objects
it contains. This means that when a structural container is deleted,
the objects it holds get deleted as well. The naming convention
is then:
\begin{itemize}
\item A {\bf vec}tor of {\bf p}oin{\bf t}e{\bf r}s
will carry the prefix {\bf \tt StPtrVec}
\item A {\bf s}tructural {\bf vec}tor of {\bf p}oin{\bf t}e{\bf r}s
will carry the prefix {\bf \tt StSPtrVec}
\end{itemize}
The name is completed with the type of the objects they contain. So
the structural container of pointers to objects of type \name{StMcTpcHit} is
\name{StSPtrVecMcTpcHit} where we drop the \name{St} of the object being
contained.
In reality, whether a container is structural or not
is of little importance in terms of its usage; the interface and methods of
both are the same. The distinction is made to keep the objects organized,
and to make sure everything gets deleted, and deleted in only one place. But
this mainly goes on behind the scenes. The main point one should
keep in mind from all of this is that, unless you really know
what you are doing, you should {\bf NOT} delete a
structural container. (Play music for ``The Twilight Zone''.)
In order to keep your code independent of the underlying container
types, people are encouraged to use \textit{iterators} instead of
indices. These permit a higher degree of flexibility,
allowing to change containers
without changing the application code using them. The following
example demonstrates this:
Given an arbitrary \StMcEvent\ collection \name{anyColl} of type
\name{StAnyColl}
which holds objects of type \name{obj} the following code is
not guaranteed to work:
\begin{verbatim}
for (int i=0; i<anyColl.size(); i++)
obj = anyColl[i];
\end{verbatim}
A {\tt list} for example, does not allow random access, so indexing
is not supported for {\tt lists}. For vectors and deques, indexing does work.
Iterators however, work for all STL containers. The code above then,
should be replaced by:
\begin{verbatim}
StAnyCollIterator iter;
for (iter = anyColl.begin(); iter != anyColl.end(); iter++)
obj = *iter;
\end{verbatim}
The names of the iterators defined in \StMcEvent\ are just the name
of the object contained + the word {\tt Iterator}. For example,
the iterator for the container \name{StPtrVecMcVertex} is
\name{StMcVertexIterator}.
Note that all collections are by pointer, so they are
\textit{polymorphic} containers; that is, they are not restricted to
collect objects of the base type only, but can store
objects of any type derived from
the base. This is especially important when dereferencing the iterator to
access the objects in the collection. Polymorphism is
not used in \StMcEvent (at least not as much as it could be used), but
it is necessary in \StEvent, where,
for example, one has containers of tracks, but they can be Global
or Primary. If a method returns a base class and one really wants
a derived class, we have to use Run Time Type Information (RTTI),
{\tt dynamic\_cast}ing and such. Also, make sure you know how to
get the objects you want, depending on the type of objects in the
container.
For a \textit{by-pointer} collection:
\begin{verbatim}
StMcVertexIterator iter;
StSPtrVecMcVertex& vertices = event->vertices();
StMcVertex* vertex;
for (iter = vertices.begin(); iter != vertices.end(); iter++)
vertex = *iter; // dereference once
\end{verbatim}
Hint: Use pointers or references to collection elements wherever
possible. Making local copies is often time- and memory-intensive.
For example, the same code as above, but written as:
\begin{verbatim}
StMcVertexIterator iter;
StMcVertexCollection* vertices = event->vertexCollection();
StMcVertex vertex;
for (iter = vertices->begin(); iter != vertices->end(); iter++)
vertex = **iter; // dereference twice
\end{verbatim}
invokes the \name{StMcVertex} assignment operator and creates a local copy.
This method is only useful if you intend to modify an object locally
but want to leave the original untouched.
\subsubsection{Quick glance at {\tt vector}s}
\label{subsec:vectors}
As mentioned before, all containers used in \StMcEvent\ are STL {\tt vectors}. That
means that one can use the methods supported by the {\tt vector}
template class. I'll give a brief overview of some basic elements
of {\tt vector}s to provide some common ground for
people unfamiliar with the STL. A LOT more information can be found
in your favorite C++ book.
\index{vector}
\paragraph{Common Member Types}
The type of the elements of the container is passed as the
first template argument, and is known as its {\tt value\_type}. For
example, the {\tt value\_type} of the \name{StPtrVecMcTrack} container,
which is really a \verb+vector<StMcTrack*>+ is, of course, {\tt StMcTrack*}.
The type used for indexing into the container is known as {\tt size\_type},
and {\tt difference\_type} is the type of the result of subtracting
two iterators. As mentioned before, every container defines an
{\tt iterator} and a {\tt const\_iterator} for pointing to the elements
of the container. One of the aims of these types is to allow the
possibility to write code without knowing the actual types involved.
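As a small illustration (a sketch only; {\tt tracks} here stands for any
existing \name{StPtrVecMcTrack}), these member types let you write code
that never spells out the element type explicitly:
\begin{verbatim}
// given some container:  StPtrVecMcTrack& tracks = ...;
StPtrVecMcTrack::value_type     aTrack = tracks.front(); // i.e. an StMcTrack*
StPtrVecMcTrack::size_type      n      = tracks.size();  // an unsigned integral type
StPtrVecMcTrack::const_iterator iter   = tracks.begin(); // read-only iterator
\end{verbatim}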
\paragraph{Public Member Functions}
The following list illustrates some of the methods provided by
the {\tt vector} container. The list is by no means complete, but
even so it includes a few more methods than are normally used.
The counterpart {\tt const} methods are omitted.
\begin{Entry}
\item[Iterator Methods]
\verb+iterator begin()+\\
Points to the first element of the container.
\verb+iterator end()+\\
Points to the last-plus-one element.
\verb+reverse_iterator rbegin()+\\
Points to the first element of the reverse sequence.
Note that the
type of the iterator is different than for {\tt begin()}.
Useful when going through a sequence backwards. For more
information on {\tt reverse\_iterator} consult your favorite
C++ reference book.
\verb+reverse_iterator rend()+\\
Points to the last-plus-one element of the reverse sequence.
\end{Entry}
The following methods allow access to the elements of the container.
\begin{Entry}
\item[Useful Access \\Methods]
\verb+reference operator[](size_type n)+\\
Provides unchecked access to the elements of the container. Safe
to use when there is a previous condition to guarantee that
the argument is within bounds.
\verb+reference at(size_type n)+\\
Provides range checked access to the elements of the container.\\
Throws {\tt out\_of\_range} if the index is out of range.
\verb+reference front()+\\
Reference to the first element of the container.
\verb+reference back()+\\
Reference to the last element of the container.
\verb+size_type size()+\\
Number of elements in the container.
\verb+void push_back(const T& x)+\\
Adds the element to the end of the container, {\tt size()} increases
by 1.
\verb+void pop_back()+\\
Removes the last element of the container, {\tt size()} decreases
by 1.
\verb+void clear()+\\
Erases all the elements.
\end{Entry}
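To make the above concrete, here is a short sketch exercising a few of these
methods on a \name{StPtrVecMcTrack} (i.e.\ a \verb+vector<StMcTrack*>+; the
variable names are purely illustrative):
\begin{verbatim}
// given some StMcTrack* aTrack (illustrative only):
StPtrVecMcTrack myTracks;              // starts out empty, size() == 0
myTracks.push_back(aTrack);            // size() is now 1
StMcTrack* first = myTracks.front();   // same element as myTracks[0]
StMcTrack* last  = myTracks.back();    // last element
StMcTrack* safe  = myTracks.at(0);     // checked access, may throw out_of_range
myTracks.clear();                      // size() is back to 0
\end{verbatim}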
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Particles from Event Generators}
\label{sec:particle}
\index{particle table}
\index{event generator tracks}
\StMcEvent also provides information on particles coming from the so
called ``particle table''. This is where all the particles produced
by the event generator are stored, with all the necessary bookkeeping.
The particle table is useful for the cases where a particle one is
interested in has been decayed by the event generator and not in
GSTAR. Such a particle will therefore not appear in the g2t\_track table. In
order to find out whether a track from the g2t\_track table comes from
our particle of interest in this case, we need to access the particle
table.
All entries in the particle table are kept in the same container
as the tracks from the g2t\_track table. (We probably shouldn't call
them tracks because these particles will most definitely NOT leave a
track in the detector simulation, but we keep them along with the others
for convenience.) There will of course be tracks that appear in both
tables, namely the final state particles of the event generator. This
case is already accounted for in \StMcEvent, so that only one entry
with the relevant information from both tables is kept in the container.
To properly use the additional information, it is important to know how to
distinguish an \name{StMcTrack} that comes from the particle table
and one that doesn't. This can be done in several ways, according to
the following properties (or conventions, if you will):
\begin{itemize}
\item {\bf Tracks with an event generator label $> 0$ have a corresponding
entry in the particle table.}
\item {\bf Tracks with a key $> 0$ have a corresponding entry in the g2t\_track table.}
\item {\bf Only tracks from the g2t\_track table are assigned a start vertex.} This also means that
the method \name{StMcVertex::daughters()} will {\bf NOT} return any tracks from the particle
table. It is important to realize this if one wants to ask for the daughters of the
primary vertex. This will only return the tracks that actually have the primary
vertex as their parent, according to the start\_vertex\_p entry of the g2t\_track table.
(These tracks
are most probably also in the particle table; this scheme will of course not exclude them. But
the point is that tracks that only appear in the particle table will {\bf NOT} have a start vertex
and they will not appear as daughters of any vertex, {\it not even the primary}).
\item {\bf To find out the genealogy of a particle, the \name{StMcTrack::parent()} method is provided.}
This method will return a pointer to the parent track, if there is one, otherwise it will return
a null pointer. Using this method, one can get to the parent particle of a decay daughter all
the way into the particle table. The parentage is followed not just until we reach the
particle table, but it is also followed {\it within} the particle table, i.e. down to the quarks and gluons
of the event generator.
It should be stressed, of course, that
this further parentage is {\it only} event generator bookkeeping, and should not be used for more than this.
For g2t\_tracks, one can also find daughters by going through their stop vertex (if there is one),
which then knows about its daughter tracks.
\end{itemize}
Let's illustrate this with a simple example. Suppose that we have an event generator and
we want to look at $\phi \longrightarrow e^{+}e^{-}$ and the $\phi$ was decayed by the
event generator. To get to them, we can do the following.
\begin{enumerate}
\item We get a pointer to the primary vertex (\name{StMcEvent::primaryVertex()}).
\item We get the daughters of the primary vertex (\name{StMcVertex::daughters()}).
\item We loop over the primary tracks thus obtained (Look in the containers section ~\ref{sec:containers}).
\item For each of the primary tracks, we check whether it is an electron (\name{StMcTrack::particleDefinition()}).
\item If it is, we check whether its parent is a $\phi$ (\name{StMcTrack::parent()}, followed by
\name{StMcTrack::particleDefinition()}. Be careful with NULL pointers!).
\item If it is a $\phi$, then we found it and we can do whatever we want with it, e.g. fill a 2-D
histogram of $p_{\bot}$ vs. $y$.
\end{enumerate}
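A minimal sketch of this loop is given below. The reference type assumed for
the return value of \name{daughters()} and the iterator name follow the
container conventions of section \ref{sec:containers}; the actual particle
comparison is left as a comment, since how you match
\name{particleDefinition()} objects is up to you:
\begin{verbatim}
void findPhiElectrons(StMcEvent* event)
{
    if (!event || !event->primaryVertex()) return;      // guard against NULL!

    StPtrVecMcTrack& daughters = event->primaryVertex()->daughters();
    for (StMcTrackIterator iter = daughters.begin(); iter != daughters.end(); iter++) {
        StMcTrack* track  = *iter;
        StMcTrack* parent = track->parent();             // may be NULL!
        if (!track->particleDefinition() || !parent || !parent->particleDefinition())
            continue;
        // Compare track->particleDefinition() with the electron definition and
        // parent->particleDefinition() with the phi definition; if both match,
        // fill the 2-D histogram of p_T vs. y here.
    }
}
\end{verbatim}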
For more information on the specific methods to do this, refer to the relevant sections for
\name{StMcVertex} ~\ref{sec:StMcVertex} and \name{StMcTrack} ~\ref{sec:StMcTrack}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{How to use \StMcEvent: The Macro and the Maker}
\label{sec:howto}
\subsection{StMcEventReadMacro.C}
\index{StMcEventReadMacro.C}
\index{root4star}
\index{ROOT files}
\StMcEvent\ has now been a package in the STAR library for a while. The
new version which this guide refers to is, as of this writing,
available in starnew and stardev. Remember that libraries
get moved, so make sure you are using the right library for
your purposes. This is very important: to use \StMcEvent\
along with \StEvent\ one CANNOT combine the latest code with
files produced before December 1999, since the dst tables
and \StEvent\ changed substantially.
To run the example macro StMcEventReadMacro.C, one can just
run the default version directly from the library:
\begin{verbatim}
stardev
cd somewhere
root4star -b
.x StMcEventReadMacro.C
\end{verbatim}
This will run the macro in the repository with the default settings:
process one event from the default geant.root file, load \StMcEvent\
and print some information on the event to the screen.
If this step doesn't work, then something weird is going on. (Although,
if you are using dev, make sure that Lidia is not rebuilding the whole
library while you are trying to run one of the examples!) The
default macros are meant to work ``out of the box'', so to speak,
so this is the first step to check that things are working
properly.
The next step is to actually check out the macro and edit it yourself for
your purposes. To check out the macro, do the following:
\begin{verbatim}
klog
mkdir workdir
cd workdir
cvs co StRoot/macros/examples/StMcEventReadMacro.C
\end{verbatim}
StMcEventReadMacro.C is found in
\name{\$CVSROOT/StRoot/macros/examples/StMcEventReadMacro.C}, %$
and it runs a chain of just one maker to load \StMcEvent .
%%%%%%%%%%%%%%%%%%%
At the moment, it runs the chain on a ROOT file tree. That is, it uses
the GEANT branches of the files it finds in the directory. This is important to
realize if you want to run the macro on files other than the default, an
activity one should quickly graduate to. The macro takes as arguments the
number of events to process, and the name of the file to be used. An example invocation
is:
{\footnotesize
\begin{verbatim}
.x StMcEventReadMacro.C(10,"/star/rcf/test/dev/tfs_Linux/Mon/year_2a/hc_standard/*.geant.root")
\end{verbatim}
}
This invocation will process 10 events from the specified file.
Now, to open the files, branches, etc., we rely on StIOMaker. This maker
takes the path in which to look for files from the specified file name,
but which files it actually opens depends on
which branches are activated in the macro.
This is done inside the macro with the command \name{SetBranch}.
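For example, to read only the GEANT branch the macro contains calls of roughly
the following form (a sketch from memory, so check the actual
StMcEventReadMacro.C in the repository for the exact maker setup and
arguments):
\begin{verbatim}
// IOMk is the StIOMaker instantiated earlier in the macro
IOMk->SetBranch("*",0,"0");            // deactivate all branches
IOMk->SetBranch("geantBranch",0,"r");  // activate the GEANT branch for reading
\end{verbatim}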
\subsection{StMcEventMaker}
\index{StMcEventMaker}
\name{StMcEventMaker} is the code that actually does the opening of
the geant.root file and uses the information there to set up the
whole \StMcEvent . If you want to use \StMcEvent\ you have to make
sure that \name{StMcEventMaker} is instantiated in the chain you
are running. This will take care of all the setup. The data
members and methods of the maker that you will find useful are:
\index{StMcEventMaker|textbf}
\label{sec:StMcEventMaker}
\begin{Entry}
\item[Synopsis]
\verb+#include "StMcEventMaker.hh"+\\
\verb+class StMcEventMaker;+\\
\item[Public Data\\ Members]
\verb+Bool_t doPrintEventInfo;+\\
Print or do not print info on the current \StMcEvent\ event.
(default=kFALSE). This produces a lot of output, and is
meant for debugging. Every major
class is dumped, the sizes of all collections, and the first
element in every container. Don't use it for production.
\verb+Bool_t doPrintMemoryInfo;+\\
Switch on/off checks on memory usage of \StMcEvent\
(default=kFALSE). In order to get a memory snapshot we use
\texttt{StMemoryInfo} from the \name{StarClassLibrary}. A
snapshot is taken before and after the setup of \StMcEvent. The
numbers in brackets refer to the difference. Not available on SUN
Solaris yet.
\verb+Bool_t doPrintCpuInfo;+\\
Switch on/off CPU usage (default=kFALSE). Tells you how long it
took to set up \StMcEvent. Timing is performed using \texttt{StTimer}
from the StarClassLibrary.
\verb+Bool_t doUseTpc;+\\
Load Tpc hits in the run (default = true). If you do not want to
look at Tpc hits, switch this off.
\verb+Bool_t doUseSvt;+\\
Load Svt hits in the run (default = true). If you do not want to
look at Svt hits, switch this off.
\verb+Bool_t doUseFtpc;+\\
Load Ftpc hits in the run (default = true). If you do not want to
look at Ftpc hits, switch this off.
\verb+Bool_t doUseRich;+\\
Load Rich hits in the run (default = true). If you do not want to
look at Rich hits, switch this off.
\verb+Bool_t doUseBemc;+\\
Load Bemc hits in the run (default = true). If you do not want to
look at Bemc hits, switch this off. This loads the barrel towers.
\verb+Bool_t doUseBsmd;+\\
Load Shower Max hits in the run (default = true). This loads hits from
Bsmde, and Bsmdp detectors.
\verb+Bool_t doUseCtb;+\\
Load CTB hits in the run (default = true). If you do not want to
look at CTB hits, switch this off.
\verb+Bool_t doUseTofp;+\\
Load TOFp hits in the run (default = true). If you do not want to
look at TOFp hits, switch this off.
\verb+Bool_t doUseTof;+\\
Load TOFr hits in the run (default = true). If you do not want to
look at TOFr hits, switch this off.
\verb+Bool_t doUsePixel;+\\
Load Pixel hits in the run (default = true). If you do not want to
look at Pixel hits, switch this off. The pixel is under development
(August 2003), so Pixel hits will not be found in general purpose
simulation files for a while.
\item[Public Member\\ Functions]
\verb+StMcEvent* currentMcEvent();+\\
Returns a pointer to the current \name{StMcEvent} object.
\end{Entry}
Since the \name{StMcEvent} object you get when you invoke
{\tt currentMcEvent()} is held by pointer, don't forget to
check whether it is {\tt NULL}. A {\tt NULL} pointer immediately signals
that something has gone awry. Also, do not delete the objects
you get from the StMcEvent methods; they will be deleted
every time you read a new event.
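In your own analysis code, the typical pattern is therefore something like the
following sketch (how you obtain the \name{StMcEventMaker} pointer depends on
how your chain is set up):
\begin{verbatim}
void myAnalysis(StMcEventMaker* mcEventMaker)
{
    if (!mcEventMaker) return;                       // maker not available
    StMcEvent* mcEvent = mcEventMaker->currentMcEvent();
    if (!mcEvent) return;                            // something went awry, bail out
    // use mcEvent here, but do NOT delete it or the objects it owns
    cout << "# of vertices = " << mcEvent->vertices().size() << endl;
}
\end{verbatim}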
In order to use the classes in
\StMcEvent\ in other Makers, one header file with {\it all} the include files
is provided: {\tt StMcEventTypes.hh}. \index{StMcEventTypes}
As with \StEvent, this makes
life easier in terms of not having to remember which file you need where:
you just care about one include file and that's it. However, convenience
comes at a price, because if you \#include all the files, you will certainly
be creating dependencies on classes you won't need or care about. So if this
is an issue, then proceed as usual and only \#include the classes that you actually
do depend on.
\index{{\tt \#include} files}
\index{include files}
The next step is the use of the \StAssociationMaker\ package. This is described
in its own manual.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Known Problems and Fixes}
\index{known problems and fixes}
\StMcEvent\ is changing as the software environment changes. From the last
version, several things have been updated and the design revamped to
follow \StEvent\ .
The creation of the vertex collection from the tables took its final
form, once the g2t\_vertex table was corrected. The g2t\_event table was
finally written out, so it is now available.
The FTPC and SVT associations between reconstructed and Monte Carlo
hits are now included.
Also, the problems \StAssociationMaker\ had with the declaration of multimaps
by the Solaris CC4.2 compiler (i.e. no Template Default Arguments, and
ObjectSpace implementation of the STL) have been solved and
now \StMcEvent\ and \StAssociationMaker\ compile and run
on Linux gcc, Solaris CC4.2 and CC5 and HP aCC.
The particle table has also been added to \StMcEvent, and this
allows one to find particles of interest that have been decayed by
the event generator.
The switches to turn on/off the loading of the different detector hits
have been added for convenience. The first version of the classes
for the EMC hits implemented by Aleksei Pavlinov are now also included.
The TOF classes have been added in Aug 2003, as well as the development
classes for the Pixel detector.
For corrections to this manual, typos, suggestions, etc. send an email
to
{\tt [email protected]}.
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference Manual
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\part{Reference Manual}
\label{sec:refman}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Global Constants}
We use the constants defined in the two header files
\index{StarClassLibrary} \name{SystemOfUnits.h} and
\name{PhysicalConstants.h} which are part of the StarClassLibrary.
The types defined therein are used
throughout \StMcEvent\ .
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Class Reference}
The classes which are currently implemented and available from the
STAR CVS repository are described in alphabetical order.
Inherited member functions and operators are not described in the
reference section of a derived class. Always check the section(s)
of the base class(es) to get a complete overview on the available
methods.
Note that some constructors are omitted, especially the ones which
take tables as arguments. They are for internal use only (even if
public).
Destructors, assignment operators and copy constructors are not
listed. Macros and \name{Inline} declarations are omitted throughout the
documentation, as well as the {\tt virtual} keyword. For the
most up-to-date reference of what is available, there is no
substitute to looking directly at the class definition in
the header file.
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcCalorimeterHit
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcCalorimeterHit}
\index{Calorimeter hit} \index{StMcCalorimeterHit|textbf}
\label{sec:StMcCalorimeterHit}
\begin{Entry}
\item[Summary]
\name{StMcCalorimeterHit} represents a Calorimeter hit.
\item[Synopsis]
\verb+#include "StMcCalorimeterHit.hh"+\\
\verb+class StMcCalorimeterHit;+\\
\item[Description]
\name{StMcCalorimeterHit} does not inherit from \name{StMcHit}. This decision was
taken by the EMC group based on the different functionality the Calorimeter Hit
would need to have. For example, one of the main difference is that the calorimeter hits
do not have a position in global coordinates. They do however, still keep a pointer to
their parent track.
\item[Persistence]
None
\item[Related Classes]
The hits are kept according to module.
They are stored by pointer in a container for each module
(see \ref{sec:containers}). The way to get the hits for a
certain module (0-119, 120 modules in total) starting
from the top level \name{StMcEvent} pointer is:
\verb+mcEvt->emcHitCollection()->module(0)->hits()+
Look in \ref{sec:StMcEmcHitCollection} and
\ref{sec:StMcEmcModuleHitCollection} for more information.
Each instance of \name{StMcTrack} holds a list of Calorimeter hits
which belong to that track. Depending on which detector they are, they
will be in the appropriate container: Bemc, Bprs, Bsmde, or Bsmdp. Look
in \ref{sec:StMcTrack} for more information.
\index{StMcTrack}
\index{StMcEmcHitCollection}
\item[Public\\ Constructors]
\verb+StMcCalorimeterHit(int m,int e,int s,float de)+\\
Create an instance of \name{StMcCalorimeterHit} with module \name{m},
eta \name{e}, sub \name{s}, and energy deposition at hit \name{de}.
\verb+StMcCalorimeterHit(int m,int e,int s,float de, StMcTrack* parent)+\\
Create an instance of \name{StMcCalorimeterHit} with module \name{m},
eta \name{e}, sub \name{s}, energy deposition at hit \name{de},
and parent track \name{parent}.
\item[Public Member\\ Functions]
\verb+int module() const;+\\
Returns the number of the module (1-120) in which a hit is found.
\index{module, EMC}
\verb+int eta() const;+\\
Returns the eta in which a hit is found.
\index{eta, EMC}
\verb+int sub() const;+\\
Returns the sub in which a hit is found.
\index{sub, EMC}
\verb+float dE() const;+\\
Returns the energy deposition of the hit.
\index{dE, EMC}
\verb+StMcTrack* parentTrack() const;+\\
Returns the pointer to the track which generated this hit.
\index{parentTrack, EMC}
\item[Public Member\\ Operators]
\verb+ostream& operator<<(ostream& os, const StMcCalorimeterHit&);+\\
Allows one to write some relevant information directly to an output
stream.
\verb|void operator+=(const StMcCalorimeterHit&);|\\
Allows one to incrementally add energy to a calorimeter hit from another
calorimeter hit.
\verb+void operator==(const StMcCalorimeterHit&);+\\
Compares two calorimeter hits; they are equal when the module, eta, sub
and parent track pointers are found to be equal.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// In this example, we use the methods of the class
// to access the data. For concreteness, we use them to fill
// a Root histogram.
//
void
PositionOfHits(StMcTrack *track)
{
StPtrVecMcCalorimeterHit& hits = track->bemcHits();
StMcCalorimeterHitIterator i;
StMcCalorimeterHit* currentHit;
TH2F* myHist = new TH2F("coords. MC","eta vs mod pos. of Hits",120, 1, 121, 100, -2, 2);
for (i = hits.begin(); i != hits.end(); i++) {
currentHit = (*i);
myHist->Fill(currentHit->module(), currentHit->eta());
}
}
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcCtbHit
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcCtbHit}
\index{StMcCtbHit|textbf}
\label{sec:StMcCtbHit}
\begin{Entry}
\item[Summary]
\name{StMcCtbHit} represents a CTB hit.
\item[Synopsis]
\verb+#include "StMcCtbHit.hh"+\\
\verb+class StMcCtbHit;+\\
\item[Description]
Like most of the hit classes, it represents the Geant hit in its related detector, in
this case the CTB.
\item[Persistence]
None
\item[Related Classes]
The class to hold these hits is \name{StMcCtbHitCollection}, which holds them in a flat
array, no further hierarchy.
\item[Public\\ Constructors]
\verb+StMcCtbHit(const StThreeVectorF& x,const StThreeVectorF& p, const float de, const float ds, const long key, const long id, StMcTrack* parent)+\\
Create an instance of \name{StMcCtbHit} with position \name{x},
local momentum \name{p}, energy deposited \name{de}, path length \name{ds},
primary key \name{key} and volume id \name{id} for the proper assignment of pointers,
and the pointer to the parent track \name{parent}.
\verb+StMcCtbHit(g2t_ctf_hit_st*)+\\
Create an instance of \name{StMcCtbHit} from the geant tables; this is the
standard way it is constructed in \name{StMcEventMaker}.
\item[Public Member\\ Functions]
\verb+void get_slat_tray(unsigned int & slat, unsigned int & tray) const;+\\
Puts the corresponding slat and tray of the hit in the variables entered as arguments to the function.
\verb+float tof() const;+\\
Returns the Time of Flight of the particle to the detector hit volume.
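\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Print the slat, tray and time of flight of a CTB hit,
// using only the methods listed above (sketch only).
//
void printCtbHit(StMcCtbHit* hit)
{
    if (!hit) return;
    unsigned int slat, tray;
    hit->get_slat_tray(slat, tray);
    cout << "slat: " << slat << " tray: " << tray
         << " tof: " << hit->tof() << endl;
}
\end{verbatim}
}%footnotesize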
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcEmcHitCollection
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcEmcHitCollection}
\index{EMC hit collection} \index{StMcEmcHitCollection|textbf}
\label{sec:StMcEmcHitCollection}
\begin{Entry}
\item[Summary]
\name{StMcEmcHitCollection} is a container for the
modules of the EMC. The actual hits are stored according to
the module they belong to.
\item[Synopsis]
\verb+#include "StMcEmcHitCollection.hh"+\\
\verb+class StMcEmcHitCollection;+\\
\item[Description]
\name{StMcEmcHitCollection} is the first step down
the hierarchy of hits for the EMC. It provides
methods to return a particular module, and to
count the number of hits in the EMC detector that it
represents. The EMC is different than the rest of the
detectors in that the same class is used for the Barrel Emc,
the Pre-shower, and the Shower Max detectors (Eta and Phi).
\item[Persistence]
None
\item[Related Classes]
\name{StMcEmcHitCollection}
has a container of type {\tt StMcEmcModuleHitCollection}
of size 120, each element represents a module. Look in
\name{StMcEmcModuleHitCollection} \ref{sec:StMcEmcModuleHitCollection}.
This class inherits from \name{TDataSet}, so each hit collection
also has a name to distinguish it (along with the other inherited properties
of \name{TDataSet}).
\index{StMcEmcModuleHitCollection}
\index{StMcCalorimeterHit}
\item[Public\\ Constructors]
\verb+StMcEmcHitCollection();+\\
Create an instance of \name{StMcEmcHitCollection}
and sets up the internal containers.
\verb+StMcEmcHitCollection(char* name);+\\
Create an instance of \name{StMcEmcHitCollection}, where \name{name} is passed to the
\name{TDataSet} constructor,
and sets up the internal containers.
\item[Enumerations]
\verb+enum EAddHit {kNull, kErr, kNew, kAdd};+\\
Enumerations for the return values of the addHit method to flag
the different cases. See below.
\item[Public Member\\ Functions]
\verb+StMcEmcHitCollection::EAddHit addHit(StMcCalorimeterHit*);+\\
Adds a hit to the collection (using \name{StMemoryPool}
from the StarClassLibrary). If the hit is a new hit, kNew is returned.
If the hit was already present and we just needed to add the information,
kAdd is returned. If the hit has a wrong module or some other parameter that
prevents its correct assignment, kErr is returned.
\verb+unsigned long numberOfHits() const;+\\
Counts the number of hits for the EMC detector that this collection represents.
\verb+unsigned int numberOfModule() const;+\\
Returns the size of the module vector. Default is 120.
\verb+StMcEmcModuleHitCollection* module(unsigned int);+\\
Returns a pointer to the module specified by the argument to the
function. Recall that this is for indexing into an array, so the
argument should go from 0 - (size()-1). Look in the
numbering scheme conventions \ref{sec:numberscheme} for more information.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Look in StMcEmcModuleHitCollection for an example.
// These classes normally go hand in hand.
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcEmcModuleHitCollection
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcEmcModuleHitCollection}
\index{EMC module hit collection} \index{StMcEmcModuleHitCollection|textbf}
\label{sec:StMcEmcModuleHitCollection}
\begin{Entry}
\item[Summary]
\name{StMcEmcModuleHitCollection} is the class where the EMC hits
are actually stored. It holds a vector of StMcCalorimeterHit pointers.
\item[Synopsis]
\verb+#include "StMcEmcModuleHitCollection.hh"+\\
\verb+class StMcEmcModuleHitCollection;+\\
\item[Description]
\name{StMcEmcModuleHitCollection} holds the EMC hits.
\item[Persistence]
None
\item[Related Classes]
\name{StMcEmcModuleHitCollection}
has a container of type {\tt StSPtrVecMcCalorimeterHit}, which means
that this is the class that actually owns the hits. Look
at the section on containers \ref{sec:containers} for more
information.
\index{StMcCalorimeterHit}
\index{StSPtrVecMcCalorimeterHit}
\item[Public\\ Constructors]
\verb+StMcEmcModuleHitCollection();+\\
Create an instance of \name{StMcEmcModuleHitCollection}
and sets up the internal container.
\item[Public Member\\ Functions]
\verb+unsigned long numberOfHits() const;+\\
Counts the number of hits of this module.
\verb+float sum() const;+\\
Returns the sum of the energy depositions (\name{dE})
of all the hits in the module.
\verb+StSPtrVecMcEmcHit& hits();+\\
Returns a reference to the container of the EMC hits.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Print number of hits in EMC and plot 1D Histogram of eta of
// Hits
//
StMcEmcHitCollection* emcHitColl = evt->bemcHitCollection();
cout << "There are " << emcHitColl->numberOfHits() << " hits in the BEMC" << endl;
TH1F* etaHist = new TH1F("etaHist","Eta pos. of Hits",100, -2, 2);
for (unsigned int m=0; m<emcHitColl->numberOfModules(); m++) {
    StSPtrVecMcEmcHit& hits = emcHitColl->module(m)->hits();
    for (StMcEmcHitIterator iter = hits.begin(); iter != hits.end(); iter++) {
        etaHist->Fill((*iter)->eta());
    }
}
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcEvent
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcEvent}
\index{StMcEvent|textbf}
\index{event header}
\label{sec:StMcEvent}
\begin{Entry}
\item[Summary]
\name{StMcEvent} is the top class in the \StMcEvent\ data model.
It provides methods to access all quantities and objects
stored in the g2t tables.
\item[Synopsis]
\verb+#include "StMcEvent.hh"+\\
\verb+class StMcEvent;+\\
\item[Description]
Objects of type \name{StMcEvent} are the entry point to the g2t data.
From here one can navigate to (and access) quantities stored
in the g2t tables, but instead of going through foreign keys and
Id's all relationships are established through pointers.
The only {\tt \#include} file needed to access all the classes of \StMcEvent\
is the {\tt StMcEventTypes.hh} file. \index{StMcEventTypes}
\name{StMcEvent} itself doesn't offer much functionality but rather serves
as a container for the event data.
Deleting \name{StMcEvent} deletes the whole data tree, i.e.~all depending objects
are deleted properly.
\item[Persistence]
None
\item[Related Classes]
None
\item[Public\\ Constructors]
\verb+StMcEvent();+\\
Default constructor. Creates an "empty" event. Normally not used. As with
most other classes, the constructor used normally is based on g2t tables.
\item[Public Member\\ Functions]
\verb+unsigned long eventGeneratorEventLabel() const;+\\
Event Label from event generator, read from g2t\_event.
\index{event generator label}
\verb+unsigned long eventNumber() const;+\\
Event number from monte carlo production, read from g2t\_event.
\index{event number}
\verb+unsigned long runNumber() const;+\\
Run number, read from g2t\_event.
\index{run number}
\verb+unsigned long zWest() const;+\\
Number of protons of particle coming from the ``West'', read from g2t\_event.
\index{zWest}
\verb+unsigned long nWest() const;+\\
Number of neutrons of particle coming from the ``West'', read from g2t\_event.
\index{nWest}
\verb+unsigned long zEast() const;+\\
Number of protons of particle coming from the ``East'', read from g2t\_event.
\index{zEast}
\verb+unsigned long nEast() const;+\\
Number of neutrons of particle coming from the ``East'', read from g2t\_event.
\index{nEast}
\verb+unsigned long eventGeneratorFinalStateTracks() const;+\\
As the name implies, this returns the number of final state tracks coming
from the event generator, read from g2t\_event.
\index{event generator final state tracks}
\verb+unsigned long numberOfPrimaryTracks() const;+\\
Number of primary tracks read from g2t\_event. Note that this is
not coming from {\tt primaryVertex()->daughters().size()} so one
can check whether these numbers are actually the same (they should be). Note
that the tracks that only appear in the particle table are NOT assigned
either a start or a stop vertex, and they are NOT kept as daughters
of the primary vertex either.
\index{primary tracks}
\verb+unsigned long subProcessId() const;+\\
This is used for simulated Pythia pp events, where
one can study a given reaction, e.g. gg->QQbar = 82, and
gg->J/Psi+g = 86. Refer to the Pythia manual for the
subprocess ids. Read from g2t\_event.
\index{subprocess ID}
\index{Pythia}
\verb+float impactParameter() const;+\\
Impact parameter of collision, read from g2t\_event.
\index{impact parameter}
\verb+float phiReactionPlane() const;+\\
Phi angle of the reaction plane of the collision, read from g2t\_event.
\index{phi reaction plane}
\verb+float triggerTimeOffset() const;+\\
Trigger time offset, read from g2t\_event.
\index{trigger time offset}
\verb+unsigned long nBinary() const;+\\
Returns the number of binary collisions. Read from g2t\_event.
\index{binary collisions}
\verb+unsigned long nWoundedEast() const;+\\
\verb+unsigned long nWoundedWest() const;+\\
Return the number of wounded nucleons from each collision
direction. Read from g2t\_event.
\index{wounded nucleons}
\verb+unsigned long nJets() const;+\\
Number of jets in the event. Read from g2t\_event.
\index{jets}
\verb+StMcVertex* primaryVertex();+\\
Returns a pointer to primary vertex. The same object is stored
in the \name{StMcVertexCollection}. Since the primary vertex is of
high importance for most analysis steps, this method was added
for convenience. Note that if no primary vertex is available
this function returns a {\tt NULL} pointer, so be careful!
\index{primary vertex}
\verb+StSPtrVecMcVertex& vertices();+\\
Returns a reference to the vertex container, i.e., the {\tt vector} of all
event vertices.
\index{vertex container}
\verb+StSPtrVecMcTrack& tracks();+\\
Returns reference to the track container, i.e., the {\tt vector} of all
monte carlo tracks.
\index{track container}
\verb+StMcTpcHitCollection* tpcHitCollection();+\\
Returns a pointer to the TPC hit collection. Note that the
collections are \emph{NOT} just flat containers. The TPC hit
collection is a container of sectors. The sectors are containers
of padrows, where the TPC hits are stored.
\index{hit collections, TPC}
\verb+StMcSvtHitCollection* svtHitCollection();+\\
Returns a pointer to the SVT hit collection. The SVT hit
collection is a container of layers. The layers are containers
of ladders. The ladders are containers of wafers,
where the SVT hits are stored.
\index{hit collections, SVT}
\verb+StMcFtpcHitCollection* ftpcHitCollection();+\\
Returns a pointer to the FTPC hit collection. The FTPC hit
collection is a container of planes. Note that it is in the planes
where the FTPC hits are stored. The Monte Carlo hits of the FTPC
know nothing about sectors, because these are only determined
after reconstruction.
\index{hit collections, FTPC}
\verb+StMcRichHitCollection* richHitCollection();+\\
Returns a pointer to the RICH hit collection. This is a flat collection
of hits.
\verb+StMcEmcHitCollection* bemcHitCollection();+\\
Returns a pointer to the BEMC hit collection. The calorimeter hit
collection is containers of modules. This same class is used for
the other calorimeter detector components, below.
\verb+StMcEmcHitCollection* bprsHitCollection();+\\
Returns a pointer to the Barrel Pre-shower hit collection.
\verb+StMcEmcHitCollection* bsmdeHitCollection();+\\
Returns a pointer to the Barrel Shower Max detector-Eta hit collection.
\verb+StMcEmcHitCollection* bsmdpHitCollection();+\\
Returns a pointer to the Barrel Shower Max detector-Phi hit collection.
The following member functions are used to set data members of \name{StMcEvent}.
They are shown only for completeness and shouldn't be used without
a profound understanding of the relations between the different objects.
\verb+void setEventGeneratorEventLabel(unsigned long);+\\
\verb+void setEventNumber(unsigned long);+\\
\verb+void setRunNumber(unsigned long);+\\
\verb+void setZWest(unsigned long);+\\
\verb+void setNWest(unsigned long);+\\
\verb+void setZEast(unsigned long);+\\
\verb+void setNEast(unsigned long);+\\
\verb+void setImpactParameter(float);+\\
\verb+void setPhiReactionPlane(float);+\\
\verb+void setTriggerTimeOffset(float);+\\
\verb+void setPrimaryVertex(StMcVertex*);+\\
\verb+void setTrackCollection(StMcTrackCollection*);+\\
\verb+void setTpcHitCollection(StMcTpcHitCollection*);+\\
\verb+void setSvtHitCollection(StMcSvtHitCollection*);+\\
\verb+void setFtpcHitCollection(StMcFtpcHitCollection*);+\\
\verb+void setRichHitCollection(StMcRichHitCollection*);+\\
\verb+void setBemcHitCollection(StMcEmcHitCollection*);+\\
\verb+void setBprsHitCollection(StMcEmcHitCollection*);+\\
\verb+void setBsmdeHitCollection(StMcEmcHitCollection*);+\\
\verb+void setBsmdpHitCollection(StMcEmcHitCollection*);+\\
\verb+void setVertexCollection(StMcVertexCollection*);+\\
\item[Public Member\\ Operators]
\verb+int operator==(const StMcEvent&) const;+\\
Compares the event number, run number and type to determine
whether 2 events are equal.
\verb+int operator!=(const StMcEvent&) const;+\\
Uses {\tt operator==} and negates it.
\verb+ostream& operator<<(ostream& os, const StMcEvent&);+\\
Allows one to write some relevant information directly to an output
stream.
\item[Examples]
{\bf Example:}
{\footnotesize
\begin{verbatim}
//
// Prints some basic event quantities to stdout (cout).
//
//
void printEvent(StMcEvent *event)
{
cout << "Event Generator Label = "
<< event->eventGeneratorEventLabel() << endl;
cout << "Event Number = "
<< event->eventNumber() << endl;
cout << "Run number = " << event->runNumber();
cout << "# of tracks = "
<< event->trackCollection()->size() << endl;
cout << "# of vertices = "
<< event->vertexCollection()->size() << endl;
cout << "# of tpc hits = "
<< event->tpcHitCollection()->size() << endl;
cout << "xyz of primary Vertex = "
<< event->primaryVertex()->position()/millimeter
<< " mm" << endl;
}
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcFtpcHit
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcFtpcHit}
\index{FTPC hit} \index{StMcFtpcHit|textbf}
\label{sec:StMcFtpcHit}
\begin{Entry}
\item[Summary]
\name{StMcFtpcHit} represents a FTPC hit.
\item[Synopsis]
\verb+#include "StMcFtpcHit.hh"+\\
\verb+class StMcFtpcHit;+\\
\item[Description]
\name{StMcFtpcHit} inherits most functionality from \name{StMcHit}. For a complete
description of the inherited member functions see \name{StMcHit}
(sec.~\ref{sec:StMcHit}).
In the \StMcEvent\ data model each track keeps references to the
associated hits and vice versa. All hits have a pointer to their
parent track. This is especially important for the making of the
associations in \StAssociationMaker.
\item[Persistence]
None
\item[Related Classes]
\name{StMcFtpcHit} is derived directly from \name{StMcHit}.
The hits are kept according to plane.
They are stored by pointer in a container for each plane
(see \ref{sec:containers}). The way to get the hits for a
certain plane (1-20) starting
from the top level \name{StMcEvent} pointer is:
\verb+mcEvt->ftpcHitCollection()->plane(0)->hits()+
Look in \ref{sec:StMcFtpcHitCollection} and
\ref{sec:StMcFtpcPlaneHitCollection} for more information.
Each instance of \name{StMcTrack} holds a list of FTPC hits
which belong to that track.
\index{StMcHit}
\index{StMcTrack}
\index{StMcFtpcHitCollection}
\item[Public\\ Constructors]
\verb+StMcFtpcHit(const StThreeVectorF& x,const StThreeVectorF& p, +\\
\verb+ const float de, const float ds, const long key,+\\
\verb+ const long id, StMcTrack* parent);+\\
Create an instance of \name{StMcFtpcHit} with position \name{x}, local momentum \name{p},
energy deposition at hit \name{de}, path length (within padrow) \name{ds}, primary key \name{key},
volume Id \name{id},
and parent track \name{parent}. Not used. Standard use is
to obtain all parameters from g2t\_fpt\_hit table.
\item[Public Member\\ Functions]
\verb+unsigned long plane() const;+\\
Returns the number of the plane (1-10 for West Ftpc, 11-20 for East Ftpc) in which a hit is found.
\index{plane, FTPC}
\verb+unsigned long sector() const;+\\
Returns the sector (1-6) in which a hit is found. The volume Id that allows this was introduced in Oct.\ 2003.
\item[Public Member\\ Operators]
\verb+ostream& operator<<(ostream& os, const StMcFtpcHit&);+\\
Allows one to write some relevant information directly to an output
stream.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// In this example, we use the methods of the class
// to access the data. For concreteness, we use them to fill
// a Root histogram.
//
void
PositionOfHits(StMcTrack *track)
{
StPtrVecMcFtpcHit& hits = track->ftpcHits();
StMcFtpcHitIterator i;
StMcFtpcHit* currentHit;
TH2F* myHist = new TH2F("coords. MC","X vs Y pos. of Hits",100, -150, 150, 100, -150, 150);
for (i = hits.begin(); i != hits.end(); i++) {
currentHit = (*i);
myHist->Fill(currentHit->position().x(), currentHit->position().y());
}
}
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcFtpcHitCollection
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcFtpcHitCollection}
\index{FTPC hit collection} \index{StMcFtpcHitCollection|textbf}
\label{sec:StMcFtpcHitCollection}
\begin{Entry}
\item[Summary]
\name{StMcFtpcHitCollection} is a container for the
planes of the FTPC. The actual hits are stored according to
planes.
\item[Synopsis]
\verb+#include "StMcFtpcHitCollection.hh"+\\
\verb+class StMcFtpcHitCollection;+\\
\item[Description]
\name{StMcFtpcHitCollection} is the first step down
the hierarchy of hits for the FTPC. It provides
methods to return a particular plane, and to
count the number of hits in the whole FTPC.
\item[Persistence]
None
\item[Related Classes]
\name{StMcFtpcHitCollection}
has a container of type {\tt StMcFtpcPlaneHitCollection}
of size 20, each element represents a plane.
\index{StMcFtpcPlaneHitCollection}
\index{StMcFtpcHit}
\item[Public\\ Constructors]
\verb+StMcFtpcHitCollection();+\\
Create an instance of \name{StMcFtpcHitCollection}
and sets up the internal containers.
\item[Public Member\\ Functions]
\verb+bool addHit(StMcFtpcHit*);+\\
Adds a hit to the collection (using \name{StMemoryPool}
from the StarClassLibrary).
\verb+unsigned long numberOfHits() const;+\\
Counts the number of hits of the whole FTPC.
\verb+unsigned int numberOfPlanes() const;+\\
Returns the size of the array of planes. Default is 20.
\verb+StMcFtpcPlaneHitCollection* plane(unsigned int);+\\
Returns a pointer to the plane specified by the argument to the
function. Recall that this is for indexing into an array, so the
argument should go from 0 - (size()-1). Look in the
numbering scheme conventions \ref{sec:numberscheme} for more information.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Look in StMcFtpcPlaneHitCollection for an example.
// These classes normally go hand in hand.
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcFtpcPlaneHitCollection
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcFtpcPlaneHitCollection}
\index{FTPC plane hit collection} \index{StMcFtpcPlaneHitCollection|textbf}
\label{sec:StMcFtpcPlaneHitCollection}
\begin{Entry}
\item[Summary]
\name{StMcFtpcPlaneHitCollection} is the class where the FTPC hits
are actually stored.
\item[Synopsis]
\verb+#include "StMcFtpcPlaneHitCollection.hh"+\\
\verb+class StMcFtpcPlaneHitCollection;+\\
\item[Description]
\name{StMcFtpcPlaneHitCollection} holds the FTPC hits. Note
the difference with \StEvent. The Monte Carlo doesn't know
about the sectors of the FTPC, these are determined during
reconstruction.
\item[Persistence]
None
\item[Related Classes]
\name{StMcFtpcPlaneHitCollection}
has a container of type {\tt StSPtrVecMcFtpcHit}, which means
that this is the class that actually owns the hits. Look
at the section on containers \ref{sec:containers} for more
information.
\index{StMcFtpcHit}
\index{StSPtrVecMcFtpcHit}
\item[Public\\ Constructors]
\verb+StMcFtpcPlaneHitCollection();+\\
Create an instance of \name{StMcFtpcPlaneHitCollection}
and sets up the internal container.
\item[Public Member\\ Functions]
\verb+unsigned long numberOfHits() const;+\\
Counts the number of hits of this plane.
\verb+StSPtrVecMcFtpcHit& hits();+\\
Returns a reference to the container of the FTPC hits.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Print number of hits in FTPC and plot 1D Histogram of r (cylindrical coord.) of
// Hits
//
StMcFtpcHitCollection* ftpcHitColl = evt->ftpcHitCollection();
cout << "There are " << ftpcHitColl->numberOfHits() << " hits in the FTPC" << endl;
TH1F* myHist = new TH1F("rFtpc","R pos. of Hits",100, 0, 100);
for (unsigned int p=0; p<ftpcHitColl->numberOfPlanes(); p++) {
    StSPtrVecMcFtpcHit& hits = ftpcHitColl->plane(p)->hits();
    for (StMcFtpcHitIterator iter = hits.begin(); iter != hits.end(); iter++) {
        myHist->Fill((*iter)->position().perp());
    }
}
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcHit
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcHit}
\index{StMcHit|textbf} \index{hit}
\label{sec:StMcHit}
\begin{Entry}
\item[Summary]
\name{StMcHit} is the base class of all TPC, FTPC, SVT and RICH
monte carlo hit classes.
\item[Synopsis]
\verb+#include "StMcHit.hh"+\\
\verb+class StMcHit;+\\
\item[Description]
\name{StMcHit} provides the basic functionality for all derived classes
which represent TPC, FTPC, SVT and RICH hits. It provides information on
position, energy deposition, path length within sensitive volume,
and the track which
generated this hit. It also has the primary key from the g2t tables and the
volume Id, which are useful for debugging purposes. All information is taken from
the appropriate \name{g2t\_xxx\_hit.idl} table, where xxx can be ``tpc'', ``svt'',
``ftp'', or ``rch''.
\name{StMcHit} is a virtual class.
\index{g2t\_xxx\_hit.idl}
\item[Persistence]
None
\item[Related Classes]
The following classes are derived from \name{StMcHit}:
\begin{itemize}
\item \name{StMcTpcHit}
\item \name{StMcFtpcHit}
\item \name{StMcSvtHit}
\item \name{StMcRichHit}
\end{itemize}
\index{StMcTpcHit}
\index{StMcFtpcHit}
\index{StMcSvtHit}
\index{StMcRichHit}
\item[Public\\ Constructors]
\verb+StMcHit();+\\
Constructs an instance of \name{StMcHit} with all values initialized
to 0 (zero). Not used. Normally, all hits belong to a certain detector,
so the appropriate table based constructor is used.
\verb+StMcHit(const StThreeVectorF& x, const StThreeVectorF& p,+\\
\verb+ float de, float ds, long k, long volId, StMcTrack* parent);+\\
Create an instance of \name{StMcHit} with position \name{x}, local momentum \name{p},
energy deposition \name{de}, path length \name{ds}, primary key \name{k},
volume id \name{volId} and parent track \name{parent}. Used by the derived
classes to initialize the base class data members.
\item[Public Member\\ Functions]
\verb+const StThreeVectorF& position() const;+\\
Hit position in global coordinates.
\verb+const StThreeVectorF& localMomentum() const;+\\
Local momentum direction of the parent track at the position of the hit.
\verb+float dE() const;+\\
Energy deposition.
\verb+float dS() const;+\\
Path length within sensitive volume.
\verb+long key() const;+\\
Primary key from the appropriate g2t table.
\verb+long volumeId() const;+\\
Geant volume Id, without any decoding.
\verb+StMcTrack* parentTrack() const;+\\
Pointer to the track that generated the hit.
The following functions should not normally be used.
The relevant data members are set during construction or
in \name{StMcEventMaker} so these are provided for completeness.
\verb+void setPosition(const StThreeVectorF&);+\\
Set the hit position.
\verb+void setLocalMomentum(const StThreeVectorF&);+\\
Set the local momentum.
\verb+void setdE(float);+\\
Set the total energy deposition.
\verb+void setdS(float);+\\
Set the path length within sensitive volume.
\verb+void setKey(long);+\\
Set the primary key.
\verb+void setVolumeId(long);+\\
Set the volume Id.
\verb+void setParentTrack(StMcTrack*);+\\
Set the parent track.
\item[Global Operators]
\verb+int operator==(const StMcHit&) const;+\\
Checks to see if 2 hits are equal. Uses position, dE and dS
for the comparison.
\verb+int operator!=(const StMcHit&) const;+\\
Uses operator== and negates the result.
\verb+ostream& operator<<(ostream& os, const StMcHit&);+\\
Prints hit data (position, dE, dS ) to output stream \name{os}.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Create a random hit and print it.
//
void printArbitraryHit()
{
HepJamesRandom engine;
RandFlat rndm(engine);
RandGauss rgauss(engine);
const double sigmadE = 1e-06;
const double sigmadS = 7*millimeter;
StMcTrack* pTrack = new StMcTrack;
StThreeVectorF pos(rndm.shoot(-meter,meter),
rndm.shoot(-meter,meter),
rndm.shoot(-meter,meter));
StThreeVectorF mom(rndm.shoot(-GeV,GeV),
rndm.shoot(-GeV,GeV),
rndm.shoot(-GeV,GeV));
float de = rgauss.shoot(2.0e-06,sigmadE);
float ds = rgauss.shoot(1.6,sigmadS);
StMcHit hit(pos, mom, de, ds, 0, 0, pTrack);
cout << hit << endl;
}
\end{verbatim}
}%footnotesize
{\bf Programs Output:}
{\footnotesize
\begin{verbatim}
Id: 0
Position: -20.3111 -80.7543 76.8227
Local Momentum: .9042 -.4982 -.1873
dE: 1.0952e-06
dS: 1.3049
Vol Id: 0
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcRichHit
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcRichHit}
\index{RICH hit} \index{StMcRichHit|textbf}
\label{sec:StMcRichHit}
\begin{Entry}
\item[Summary]
\name{StMcRichHit} represents a monte carlo RICH hit.
\item[Synopsis]
\verb+#include "StMcRichHit.hh"+\\
\verb+class StMcRichHit;+\\
\item[Description]
\name{StMcRichHit} inherits most functionality from \name{StMcHit}.
For a complete description of the inherited member functions see
\name{StMcHit} (sec.~\ref{sec:StMcHit}).
In the \StMcEvent\ data model each track keeps references to the
associated hits and vice versa. All hits have a pointer to their
parent track. This is especially important for the making of the
associations in \StAssociationMaker.
\item[Persistence]
None
\item[Related Classes]
\name{StMcRichHit} is derived directly from \name{StMcHit}.
The hits are kept according to row and pad,
in an instance of \name{StSPtrVecMcRichHit}
where they are stored by pointer (see \ref{sec:containers}).
Each instance of \name{StMcTrack} holds a list of RICH hits
which belong to that track.
\index{StMcHit}
\index{StMcTrack}
\index{StMcRichHitCollection}
\item[Public\\ Constructors]
\verb+StMcRichHit(const StThreeVectorF& x,const StThreeVectorF& p,+\\
\verb+ const float de, const float ds, const long key, const long id, StMcTrack* parent);+\\
Create an instance of \name{StMcRichHit} with position \name{x}, local momentum \name{p},
energy deposition \name{de}, path length \name{ds}, primary key \name{key},
volume id \name{id} and parent track \name{parent}.
\item[Public Member\\ Functions]
\verb+unsigned short pad() const;+\\
Returns the number of the pad in which a hit is found.
\index{pad, RICH Hit}
\verb+unsigned short row() const;+\\
Returns the number of the row in which a hit is found.
\index{row, RICH Hit}
\verb+float tof() const;+\\
Returns the time of flight from the g2t\_rch\_hit table.
\index{tof, RICH Hit}
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Function which returns the RICH hits
// used by a track and prints them.
//
void getHits(StMcTrack *track)
{
StPtrVecMcRichHit& hits =
track->richHits();
StMcRichHitIterator iter;
StMcRichHit* hit;
for(iter = hits.begin();
iter != hits.end(); iter++) {
hit = *iter;
cout << *hit << endl;
}
}
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcRichHitCollection
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcRichHitCollection}
\index{RICH hit collection} \index{StMcRichHitCollection|textbf}
\label{sec:StMcRichHitCollection}
\begin{Entry}
\item[Summary]
\name{StMcRichHitCollection} is a container for the
hits of the RICH. There is no hierarchy, this class holds
the vector of hits.
\item[Synopsis]
\verb+#include "StMcRichHitCollection.hh"+\\
\verb+class StMcRichHitCollection;+\\
\item[Description]
\name{StMcRichHitCollection} holds the hits and the methods to add and count them.
\item[Persistence]
None
\item[Related Classes]
\name{StMcRichHitCollection}
has a container of type {\tt StSPtrVecMcRichHit} which holds the hits.
\index{StMcRichHit}
\item[Public\\ Constructors]
\verb+StMcRichHitCollection();+\\
Create an instance of \name{StMcRichHitCollection}
and sets up the internal container.
\item[Public Member\\ Functions]
\verb+bool addHit(StMcRichHit*);+\\
Adds a hit to the collection (using \name{StMemoryPool}
from the StarClassLibrary).
\verb+unsigned long numberOfHits() const;+\\
Counts the number of hits of the whole RICH.
\verb+StSPtrVecMcRichHit& hits();+\\
Returns a reference to the vector of hits. This function also has a \verb+const+ version.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Look in any of the other hit collections, the usage is identical.
//
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcSvtHit
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcSvtHit}
\index{SVT hit} \index{StMcSvtHit|textbf}
\label{sec:StMcSvtHit}
\begin{Entry}
\item[Summary]
\name{StMcSvtHit} represents a monte carlo SVT hit.
\item[Synopsis]
\verb+#include "StMcSvtHit.hh"+\\
\verb+class StMcSvtHit;+\\
\item[Description]
\name{StMcSvtHit} inherits most functionality from \name{StMcHit}.
For a complete description of the inherited member functions see
\name{StMcHit} (sec.~\ref{sec:StMcHit}).
In the \StMcEvent\ data model each track keeps references to the
associated hits and vice versa. All hits have a pointer to their
parent track. This is especially important for the making of the
associations in \StAssociationMaker.
\item[Persistence]
None
\item[Related Classes]
\name{StMcSvtHit} is derived directly from \name{StMcHit}.
The way to get the hits for a
certain layer-ladder-wafer starting
from the top level \name{StMcEvent} pointer is:
\verb+mcEvt->svtHitCollection()->layer(ly)->ladder(ld)->wafer(w)->hits()+
The hits are kept in an instance of \name{StSPtrVecMcSvtHit}
where they are stored by pointer (see \ref{sec:containers}).
Each instance of \name{StMcTrack} holds a list of SVT hits
which belong to that track.
\index{StMcHit}
\index{StMcTrack}
\index{StMcSvtHitCollection}
\item[Public\\ Constructors]
\verb+StMcSvtHit(const StThreeVectorF& x,const StThreeVectorF& p,+\\
\verb+ const float de, const float ds, const long key, const long id, StMcTrack* parent);+\\
Create an instance of \name{StMcSvtHit} with position \name{x}, local momentum \name{p},
energy deposition \name{de}, path length \name{ds}, primary key \name{key},
volume id \name{id} and parent track \name{parent}.
\item[Public Member\\ Functions]
\verb+unsigned long layer() const;+\\
Returns the number of the layer [1-6] in which a hit is found.
\index{layer, SVT}
\verb+unsigned long ladder() const;+\\
Returns the number of the ladder [1-8] in which a hit is found.
\index{ladder, SVT}
\verb+unsigned long wafer() const;+\\
Returns the number of the wafer [1-7] in which a hit is found.
\index{wafer, SVT}
\verb+unsigned long barrel() const;+\\
Returns the number of the barrel [1-3] in which a hit is found.
\index{barrel, SVT}
\verb+unsigned long hybrid() const;+\\
Returns the number of the hybrid.
\index{hybrid, SVT}
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Function which returns the SVT hits
// used by a track and prints them.
//
void getHits(StMcTrack *track)
{
StPtrVecMcSvtHit& hits =
track->svtHits();
StMcSvtHitIterator iter;
StMcSvtHit* hit;
for(iter = hits.begin();
iter != hits.end(); iter++) {
hit = *iter;
cout << *hit << endl;
}
}
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcSvtHitCollection
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcSvtHitCollection}
\index{SVT hit collection} \index{StMcSvtHitCollection|textbf}
\label{sec:StMcSvtHitCollection}
\begin{Entry}
\item[Summary]
\name{StMcSvtHitCollection} is a container for the
layers of the SVT. The actual hits are stored according to
wafers, 3 levels down the hierarchy.
\item[Synopsis]
\verb+#include "StMcSvtHitCollection.hh"+\\
\verb+class StMcSvtHitCollection;+\\
\item[Description]
\name{StMcSvtHitCollection} is the first step down
the hierarchy of hits for the SVT. It provides
methods to return a particular layer, and to
count the number of hits in all the SVT.
\item[Persistence]
None
\item[Related Classes]
\name{StMcSvtHitCollection}
has a container of type {\tt StMcSvtLayerHitCollection}
of size 6, each element represents a layer. Look in
\name{StMcSvtLayerHitCollection} \ref{sec:StMcSvtLayerHitCollection},
\name{StMcSvtLadderHitCollection} \ref{sec:StMcSvtLadderHitCollection},
and \name{StMcSvtWaferHitCollection} \ref{sec:StMcSvtWaferHitCollection},
\index{StMcSvtLayerHitCollection}
\index{StMcSvtLadderHitCollection}
\index{StMcSvtWaferHitCollection}
\index{StMcSvtHit}
\item[Public\\ Constructors]
\verb+StMcSvtHitCollection();+\\
Creates an instance of \name{StMcSvtHitCollection}
and sets up the internal containers.
\item[Public Member\\ Functions]
\verb+bool addHit(StMcSvtHit*);+\\
Adds a hit to the collection (using \name{StMemoryPool}
from the StarClassLibrary).
\verb+unsigned long numberOfHits() const;+\\
Counts the number of hits of the whole SVT.
\verb+unsigned int numberOfLayers() const;+\\
Returns the size of the Array of layers. Default is 6.
\verb+StMcSvtLayerHitCollection* layer(unsigned int);+\\
Returns a pointer to the layer specified by the argument to the
function. Recall that this is for indexing into an array, so the
argument should go from 0 to (size()-1). Look in the
numbering scheme conventions \ref{sec:numberscheme} for more information.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Look in StMcSvtWaferHitCollection for an example.
// These classes normally go hand in hand.
\end{verbatim}
}%footnotesize
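A minimal additional sketch (using only the member functions documented
above, together with the \verb+svtHitCollection()+ accessor quoted in the
Related Classes entry of \name{StMcSvtHit}) that prints the number of hits
found in each layer:
{\footnotesize
\begin{verbatim}
//
// Print the number of SVT hits in each layer.
//
StMcSvtHitCollection* svtColl = mcEvt->svtHitCollection();
for (unsigned int ly = 0; ly < svtColl->numberOfLayers(); ly++) {
    cout << "layer " << ly + 1 << ": "
         << svtColl->layer(ly)->numberOfHits()
         << " hits" << endl;
}
\end{verbatim}
}%footnotesize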
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcSvtLadderHitCollection
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcSvtLadderHitCollection}
\index{SVT ladder hit collection} \index{StMcSvtLadderHitCollection|textbf}
\label{sec:StMcSvtLadderHitCollection}
\begin{Entry}
\item[Summary]
\name{StMcSvtLadderHitCollection} is a container for the
wafers of the SVT. The actual hits are stored according to
wafers, the next level down the hierarchy.
\item[Synopsis]
\verb+#include "StMcSvtLadderHitCollection.hh"+\\
\verb+class StMcSvtLadderHitCollection;+\\
\item[Description]
\name{StMcSvtLadderHitCollection} is the third step down
the hierarchy of hits for the SVT. It provides
methods to return a particular wafer, and to
count the number of hits in the ladder.
\item[Persistence]
None
\item[Related Classes]
\name{StMcSvtLadderHitCollection}
has a container of type {\tt StMcSvtWaferHitCollection}
of size 7, each element represents a wafer. Look in
\name{StMcSvtLayerHitCollection} \ref{sec:StMcSvtLayerHitCollection},
\name{StMcSvtHitCollection} \ref{sec:StMcSvtHitCollection},
and \name{StMcSvtWaferHitCollection} \ref{sec:StMcSvtWaferHitCollection},
\index{StMcSvtLayerHitCollection}
\index{StMcSvtHitCollection}
\index{StMcSvtWaferHitCollection}
\index{StMcSvtHit}
\item[Public\\ Constructors]
\verb+StMcSvtLadderHitCollection();+\\
Creates an instance of \name{StMcSvtLadderHitCollection}
and sets up the internal containers.
\item[Public Member\\ Functions]
\verb+unsigned long numberOfHits() const;+\\
Counts the number of hits of this ladder.
\verb+unsigned int numberOfWafers() const;+\\
Returns the size of the Array of wafers. Default is 7.
\verb+StMcSvtWaferHitCollection* wafer(unsigned int);+\\
Returns a pointer to the wafer hit collection
specified by the argument to the
function. Recall that this is for indexing into an array, so the
argument should go from 0 to (size()-1). Look in the
numbering scheme conventions \ref{sec:numberscheme} for more information.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Look in StMcSvtWaferHitCollection for an example.
// These classes normally go hand in hand.
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcSvtLayerHitCollection
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcSvtLayerHitCollection}
\index{SVT hit collection} \index{StMcSvtLayerHitCollection|textbf}
\label{sec:StMcSvtLayerHitCollection}
\begin{Entry}
\item[Summary]
\name{StMcSvtLayerHitCollection} is a container for the
layers of the SVT. The actual hits are stored according to
wafers, 2 levels down the hierarchy.
\item[Synopsis]
\verb+#include "StMcSvtLayerHitCollection.hh"+\\
\verb+class StMcSvtLayerHitCollection;+\\
\item[Description]
\name{StMcSvtLayerHitCollection} is the second step down
the hierarchy of hits for the SVT. It provides
methods to return a particular ladder, and to
count the number of hits in the layer.
\item[Persistence]
None
\item[Related Classes]
\name{StMcSvtLayerHitCollection}
has a container of type {\tt StMcSvtLadderHitCollection}
of size 8, each element represents a ladder. Look in
\name{StMcSvtHitCollection} \ref{sec:StMcSvtHitCollection},
\name{StMcSvtLadderHitCollection} \ref{sec:StMcSvtLadderHitCollection},
and \name{StMcSvtWaferHitCollection} \ref{sec:StMcSvtWaferHitCollection},
\index{StMcSvtHitCollection}
\index{StMcSvtLadderHitCollection}
\index{StMcSvtWaferHitCollection}
\index{StMcSvtHit}
\item[Public\\ Constructors]
\verb+StMcSvtLayerHitCollection();+\\
Creates an instance of \name{StMcSvtLayerHitCollection}
and sets up the internal containers.
\item[Public Member\\ Functions]
\verb+unsigned long numberOfHits() const;+\\
Counts the number of hits in this layer.
\verb+unsigned int numberOfLadders() const;+\\
Returns the size of the Array of ladders. Default is 8.
\verb+StMcSvtLadderHitCollection* ladder(unsigned int);+\\
Returns a pointer to the ladder specified by the argument to the
function. Recall that this is for indexing into an array, so the
argument should go from 0 to (size()-1). Look in the
numbering scheme conventions \ref{sec:numberscheme} for more information.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Look in StMcSvtWaferHitCollection for an example.
// These classes normally go hand in hand.
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcSvtWaferHitCollection
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcSvtWaferHitCollection}
\index{SVT wafer hit collection} \index{StMcSvtWaferHitCollection|textbf}
\label{sec:StMcSvtWaferHitCollection}
\begin{Entry}
\item[Summary]
\name{StMcSvtWaferHitCollection} is the class where the SVT hits
are actually stored.
\item[Synopsis]
\verb+#include "StMcSvtWaferHitCollection.hh"+\\
\verb+class StMcSvtWaferHitCollection;+\\
\item[Description]
\name{StMcSvtWaferHitCollection} holds the SVT hits.
\item[Persistence]
None
\item[Related Classes]
\name{StMcSvtWaferHitCollection}
has a container of type {\tt StSPtrVecMcSvtHit}, this means
that this is the class that actually owns the hits. Look
at the section on containers \ref{sec:containers} for more
information.
\index{StMcSvtHit}
\index{StSPtrVecMcSvtHit}
\item[Public\\ Constructors]
\verb+StMcSvtWaferHitCollection();+\\
Creates an instance of \name{StMcSvtWaferHitCollection}
and sets up the internal container.
\item[Public Member\\ Functions]
\verb+StSPtrVecMcSvtHit& hits();+\\
Returns a reference to the container of the SVT hits.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Print number of hits in SVT and plot 2D Histogram of x,y of
// Hits
//
StMcSvtHitCollection* svtHitColl = evt->svtHitCollection();
cout << "There are " << svtHitColl->numberOfHits()
     << " hits in the SVT" << endl;
TH2F* myHist = new TH2F("SVT","pos. of Hits",100, -10, 10, 100, -10, 10);
StMcSvtHit* currentHit;
for (unsigned int ly=0; ly<svtHitColl->numberOfLayers(); ly++) {
  for (unsigned int ld=0; ld<svtHitColl->layer(ly)->numberOfLadders(); ld++) {
    for (unsigned int w=0;
         w<svtHitColl->layer(ly)->ladder(ld)->numberOfWafers();
         w++) {
      StSPtrVecMcSvtHit& hits =
          svtHitColl->layer(ly)->ladder(ld)->wafer(w)->hits();
      for (StMcSvtHitIterator i = hits.begin(); i != hits.end(); i++) {
        currentHit = *i;
        myHist->Fill(currentHit->position().x(),
                     currentHit->position().y());
      }
    } // wafer loop
  } // ladder loop
} // layer loop
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcTpcHit
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcTpcHit}
\index{TPC hit} \index{StMcTpcHit|textbf}
\label{sec:StMcTpcHit}
\begin{Entry}
\item[Summary]
\name{StMcTpcHit} represents a monte carlo TPC hit.
\item[Synopsis]
\verb+#include "StMcTpcHit.hh"+\\
\verb+class StMcTpcHit;+\\
\item[Description]
\name{StMcTpcHit} inherits most functionality from \name{StMcHit}.
For a complete description of the inherited member functions see
\name{StMcHit} (sec.~\ref{sec:StMcHit}).
In the \StMcEvent\ data model each track keeps references to the
associated hits and vice versa. All hits have a pointer to their
parent track. This is especially important for the making of the
associations in \StAssociationMaker.
\item[Persistence]
None
\item[Related Classes]
\name{StMcTpcHit} is derived directly from \name{StMcHit}.
The hits are kept in an instance of \name{StMcTpcHitCollection}
where they are stored by pointer (see \ref{sec:containers}).
Each instance of \name{StMcTrack} holds a list of TPC hits
which belong to that track.
\index{StMcHit}
\index{StMcTrack}
\index{StMcTpcHitCollection}
\item[Public\\ Constructors]
\verb+StMcTpcHit(const StThreeVectorF& x,const StThreeVectorF& p,+\\
\verb+ const float de, const float ds, const long key, const long id, StMcTrack* parent);+\\
Creates an instance of \name{StMcTpcHit} with position \name{x}, local momentum \name{p},
energy deposition \name{de}, path length \name{ds}, primary key \name{key},
volume id \name{id} and parent track \name{parent}.
\item[Public Member\\ Functions]
\verb+unsigned long sector() const;+\\
Returns the number of the sector [1-24] in which a hit is found.
\index{sector, TPC Hit}
\verb+unsigned long padrow() const;+\\
Returns the number of the padrow [1-45] in which a hit is found.
\index{padrow, TPC Hit}
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Function which returns the TPC hit
// that is closest to the
// start vertex of the track.
StMcTpcHit* getFirstHit(StMcTrack *track)
{
    StThreeVectorF vtxPosition = track->startVertex()->position();
    StPtrVecMcTpcHit& hits = track->tpcHits();
    StMcTpcHitIterator iter;
    StMcTpcHit* hit = 0;
    float minDistance = 20*meter;
    for (iter = hits.begin(); iter != hits.end(); iter++) {
        if (abs((*iter)->position() - vtxPosition) < minDistance) {
            minDistance = abs((*iter)->position() - vtxPosition);
            hit = *iter;
        }
    }
    return hit;
}
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcTpcHitCollection
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcTpcHitCollection}
\index{TPC hit collection} \index{StMcTpcHitCollection|textbf}
\label{sec:StMcTpcHitCollection}
\begin{Entry}
\item[Summary]
\name{StMcTpcHitCollection} is a container for the
sectors of the TPC. The actual hits are stored according to
padrows, 2 levels down the hierarchy.
\item[Synopsis]
\verb+#include "StMcTpcHitCollection.hh"+\\
\verb+class StMcTpcHitCollection;+\\
\item[Description]
\name{StMcTpcHitCollection} is the first step down
the hierarchy of hits for the TPC. It provides
methods to return a particular sector, and to
count the number of hits in all the TPC.
\item[Persistence]
None
\item[Related Classes]
\name{StMcTpcHitCollection}
has a container of type {\tt StMcTpcSectorHitCollection}
of size 24, each element represents a sector. Look in
\name{StMcTpcSectorHitCollection} \ref{sec:StMcTpcSectorHitCollection},
and \name{StMcTpcPadrowHitCollection} \ref{sec:StMcTpcPadrowHitCollection},
\index{StMcTpcSectorHitCollection}
\index{StMcTpcPadrowHitCollection}
\index{StMcTpcHit}
\item[Public\\ Constructors]
\verb+StMcTpcHitCollection();+\\
Creates an instance of \name{StMcTpcHitCollection}
and sets up the internal containers.
\item[Public Member\\ Functions]
\verb+bool addHit(StMcTpcHit*);+\\
Adds a hit to the collection (using \name{StMemoryPool}
from the StarClassLibrary).
\verb+unsigned long numberOfHits() const;+\\
Counts the number of hits of the whole TPC.
\verb+unsigned int numberOfSectors() const;+\\
Returns the size of the Array of sectors. Default is 24.
\verb+StMcTpcSectorHitCollection* sector(unsigned int);+\\
Returns a pointer to the sector specified by the argument to the
function. Recall that this is for indexing into an array, so the
argument should go from 0 to (size()-1). Look in the
numbering scheme conventions \ref{sec:numberscheme} for more information.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Look in StMcTpcPadrowHitCollection for an example.
// These classes normally go hand in hand.
\end{verbatim}
}%footnotesize
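A minimal additional sketch in the same spirit (the event accessor
\verb+tpcHitCollection()+ is assumed by analogy with
\verb+svtHitCollection()+) that prints the number of hits found in each
sector:
{\footnotesize
\begin{verbatim}
//
// Print the number of TPC hits in each sector.
//
StMcTpcHitCollection* tpcColl = mcEvt->tpcHitCollection();
for (unsigned int s = 0; s < tpcColl->numberOfSectors(); s++) {
    cout << "sector " << s + 1 << ": "
         << tpcColl->sector(s)->numberOfHits()
         << " hits" << endl;
}
\end{verbatim}
}%footnotesize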
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcTpcPadrowHitCollection
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcTpcPadrowHitCollection}
\index{TPC padrow hit collection} \index{StMcTpcPadrowHitCollection|textbf}
\label{sec:StMcTpcPadrowHitCollection}
\begin{Entry}
\item[Summary]
\name{StMcTpcPadrowHitCollection} is the class where the TPC hits
are actually stored.
\item[Synopsis]
\verb+#include "StMcTpcPadrowHitCollection.hh"+\\
\verb+class StMcTpcPadrowHitCollection;+\\
\item[Description]
\name{StMcTpcPadrowHitCollection} holds the TPC hits.
\item[Persistence]
None
\item[Related Classes]
\name{StMcTpcPadrowHitCollection}
has a container of type {\tt StSPtrVecMcTpcHit}, this means
that this is the class that actually owns the hits. Look
at the section on containers \ref{sec:containers} for more
information.
\index{StMcTpcHit}
\index{StSPtrVecMcTpcHit}
\item[Public\\ Constructors]
\verb+StMcTpcPadrowHitCollection();+\\
Creates an instance of \name{StMcTpcPadrowHitCollection}
and sets up the internal container.
\item[Public Member\\ Functions]
\verb+StSPtrVecMcTpcHit& hits();+\\
Returns a reference to the container of the TPC hits.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Print number of hits in TPC and plot 2D Histogram of x,y of
// Hits
//
StMcTpcHitCollection* tpcHitColl = evt->tpcHitCollection();
cout << "There are " << tpcHitColl->numberOfHits() << " hits in the TPC" << endl;
TH2F* myHist = new TH2F("TPC","x y pos. of Hits",100, -10, 10, 100, -10, 10);
StMcTpcHit* currentHit;
for (unsigned int s=0; s<tpcHitColl->numberOfSectors(); s++) {
    for (unsigned int pr=0; pr<tpcHitColl->sector(s)->numberOfPadrows(); pr++) {
        StSPtrVecMcTpcHit& hits = tpcHitColl->sector(s)->padrow(pr)->hits();
        for (StMcTpcHitIterator i = hits.begin(); i != hits.end(); i++) {
            currentHit = *i;
            myHist->Fill(currentHit->position().x(), currentHit->position().y());
        }
    } // padrow loop
} // sector loop
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcTpcSectorHitCollection
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcTpcSectorHitCollection}
\index{TPC sector hit collection} \index{StMcTpcSectorHitCollection|textbf}
\label{sec:StMcTpcSectorHitCollection}
\begin{Entry}
\item[Summary]
\name{StMcTpcSectorHitCollection} is a container for the
sectors of the TPC. The actual hits are stored according to
padrows, the next level down the hierarchy.
\item[Synopsis]
\verb+#include "StMcTpcSectorHitCollection.hh"+\\
\verb+class StMcTpcSectorHitCollection;+\\
\item[Description]
\name{StMcTpcSectorHitCollection} is the second step down
the hierarchy of hits for the TPC. It provides
methods to return a particular padrow, and to
count the number of hits in the sector.
\item[Persistence]
None
\item[Related Classes]
\name{StMcTpcSectorHitCollection}
has a container of type {\tt StMcTpcPadrowHitCollection}
of size 45, each element represents a padrow. Look in
\name{StMcTpcHitCollection} \ref{sec:StMcTpcHitCollection},
and \name{StMcTpcPadrowHitCollection} \ref{sec:StMcTpcPadrowHitCollection},
\index{StMcTpcHitCollection}
\index{StMcTpcPadrowHitCollection}
\index{StMcTpcHit}
\item[Public\\ Constructors]
\verb+StMcTpcSectorHitCollection();+\\
Creates an instance of \name{StMcTpcSectorHitCollection}
and sets up the internal containers.
\item[Public Member\\ Functions]
\verb+unsigned long numberOfHits() const;+\\
Counts the number of hits in this sector.
\verb+unsigned int numberOfPadrows() const;+\\
Returns the size of the Array of padrows. Default is 45.
\verb+StMcTpcPadrowHitCollection* padrow(unsigned int);+\\
Returns a pointer to the padrow specified by the argument to the
function. Recall that this is for indexing into an array, so the
argument should go from 0 to (size()-1). Look in the
numbering scheme conventions \ref{sec:numberscheme} for more information.
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// Look in StMcTpcPadrowHitCollection for an example.
// These classes normally go hand in hand.
\end{verbatim}
}%footnotesize
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcTrack
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcTrack}
\index{StMcTrack|textbf}
\index{monte carlo track}
\label{sec:StMcTrack}
\begin{Entry}
\item[Summary]
\name{StMcTrack} describes a Monte Carlo track in the STAR detector.
\item[Synopsis]
\verb+#include "StMcTrack.hh"+\\
\verb+class StMcTrack;+\\
\item[Description]
\name{StMcTrack} describes a monte carlo track in the STAR
detector.
\name{StMcTrack} provides information on the
hits which belong to the track, its momentum and the
vertices of the track. All
tracks stored on the g2t\_track table are collected in a
\name{StSPtrVecMcTrack} which can be obtained through the
\name{StMcEvent::tracks()} method. The tracks from the particle
table are also stored in this container. For a track that appears
in both tables, the information is merged so that only one entry
per track appears in StMcEvent.
In addition, the information about the particle that produced
the track is accessible.
The information on the relation between hits and tracks in
\StMcEvent\ is kept both in \name{StMcTrack} and in \name{StMcHit}.
Each track knows about the hits it is associated with, and vice versa.
This was needed in order for the \StAssociationMaker to begin its job by
building hit associations and from those create the track associations.
\index{relation btw hits and tracks}
\item[Persistence]
None
\item[Related Classes]
All monte carlo tracks are kept in an instance of \name{StSPtrVecMcTrack}
where they are stored by pointer (see sec.~\ref{sec:containers}).
\index{StMcTrack}
\index{StSPtrVecMcTrack}
\item[Public\\ Constructors]
\verb+StMcTrack();+\\
Constructs an instance of \name{StMcTrack} with all values initialized to 0 (zero).
Not used. The tracks are constructed using the g2t\_track table.
\item[Public Member\\ Functions]
\verb+const StLorentzVectorF& fourMomentum() const;+\\
Returns the four-momentum of the track.
\verb+const StThreeVectorF& momentum() const;+\\
Returns the momentum of the track.
\verb+float pt() const;+\\
Returns pt of the track.
\verb+float rapidity() const;+\\
Returns rapidity of the track.
\verb+float pseudoRapidity() const;+\\
Returns pseudoRapidity of the track.
\verb+StMcVertex* startVertex();+\\
Returns a pointer to the start vertex of the track.
\verb+StMcVertex* stopVertex();+\\
Returns a pointer to the stop vertex of the track.
\verb+StMcTrack* parent();+\\
Returns a pointer to the parent of the track, if it exists. Otherwise, returns a
null pointer.
\verb+StPtrVecMcVertex& intermediateVertices();+\\
Returns a reference to the collection of intermediate vertices of the track.
Note, that the collection can be empty.
\verb+StPtrVecMcTpcHit& tpcHits();+\\
TPC hits belonging to the track.
The returned container holds the
hits by pointer. Note, that the collection can be empty.
This will be the case if the option to load these hits is switched off.
\verb+StPtrVecMcSvtHit& svtHits();+\\
SVT hits belonging to the track.
The returned container holds the
hits by pointer. Note, that the collection can be empty.
This will be the case if the option to load these hits is switched off.
\verb+StPtrVecMcFtpcHit& ftpcHits();+\\
FTPC hits belonging to the track.
The returned container holds the
hits by pointer. Note, that the collection can be empty.
This will be the case if the option to load these hits is switched off.
\verb+StPtrVecMcRichHit& richHits();+\\
RICH hits belonging to the track.
The returned container holds the
hits by pointer. Note, that the collection can be empty.
This will be the case if the option to load these hits is switched off.
\verb+StPtrVecMcCalorimeterHit& bemcHits();+\\
BEMC hits belonging to the track.
The returned container holds the
hits by pointer. Note, that the collection can be empty.
This will be the case if the option to load these hits is switched off.
\verb+StPtrVecMcCalorimeterHit& bprsHits();+\\
Barrel pre-shower hits belonging to the track.
The returned container holds the
hits by pointer. Note, that the collection can be empty.
This will be the case if the option to load these hits is switched off.
\verb+StPtrVecMcCalorimeterHit& bsmdeHits();+\\
Barrel shower max-eta hits belonging to the track.
The returned container holds the
hits by pointer. Note, that the collection can be empty.
This will be the case if the option to load these hits is switched off.
\verb+StPtrVecMcCalorimeterHit& bsmdpHits();+\\
Barrel shower max-phi hits belonging to the track.
The returned container holds the
hits by pointer. Note, that the collection can be empty.
This will be the case if the option to load these hits is switched off.
\verb+StParticleDefinition* particleDefinition();+\\
Returns a pointer to the only instance of the appropriate
\name{StParticleDefinition} class, which holds
information on the particle (name, charge, mass, spin, etc).
\verb+int isShower();+\\
Returns 1 if the track is a shower, 0 if not.
\verb+long geantId();+\\
GEANT particle ID.
\verb+long pdgId();+\\
PDG code particle ID (when the track comes from the particle table and has
no GEANT ID). If the track comes from the g2t table, this will also be set
but the numerical value will be equal to the geantId value. (That is what is
in the g2t table...)
\verb+long key();+\\
Primary key from g2t\_track table.
\verb+long eventGenLabel();+\\
Event Generator Label. This label should be different from zero if the track
comes from the event generator. Useful also in determining the entry in the
particle table (as index = label - 1).
\verb+void setFourMomentum(const StLorentzVectorF&);+\\
Assigns the four-momentum of the track.
\verb+void setStartVertex(StMcVertex*);+\\
Assigns the start vertex of the track.
\verb+void setStopVertex(StMcVertex*);+\\
Assigns the stop vertex of the track.
\verb+void setIntermediateVertices(StPtrVecMcVertex&);+\\
Assigns the intermediate vertex collection of the track.
\verb+void setTpcHits(StPtrVecMcTpcHit&);+\\
Assigns the TPC hit collection of the track.
\verb+void setSvtHits(StPtrVecMcSvtHit&);+\\
Assigns the SVT hit collection of the track.
\verb+void setFtpcHits(StPtrVecMcFtpcHit&);+\\
Assigns the FTPC hit collection of the track.
\verb+void setRichHits(StPtrVecMcRichHit&);+\\
Assigns the RICH hit collection of the track.
\verb+void setBemcHits(StPtrVecMcCalorimeterHit&);+\\
Assigns the BEMC hit collection of the track.
\verb+void setBprsHits(StPtrVecMcCalorimeterHit&);+\\
Assigns the BPRS (pre-shower) hit collection of the track.
\verb+void setBsmdeHits(StPtrVecMcCalorimeterHit&);+\\
Assigns the BSMDE (shower max-eta) hit collection of the track.
\verb+void setBsmdpHits(StPtrVecMcCalorimeterHit&);+\\
Assigns the BSMDP (shower max-phi) hit collection of the track.
\verb+void setShower(char);+\\
Assigns the shower flag of the track.
\verb+void setGeantId(long);+\\
Assigns the GEANT ID of the track.
\verb+void setPdgId(long);+\\
Assigns the PDG ID of the track.
\verb+void setKey(long);+\\
Assigns the primary key of the track.
\verb+void setEventGenLabel(long);+\\
Assigns the event generator label of the track.
\verb+void setParent(StMcTrack*);+\\
Assigns the parent track if it exists.
\verb+void addTpcHit(StMcTpcHit*);+\\
Adds a TPC hit to the internal hit collection.
\verb+void addFtpcHit(StMcFtpcHit*);+\\
Adds a FTPC hit to the internal hit collection.
\verb+void addSvtHit(StMcSvtHit*);+\\
Adds a SVT hit to the internal hit collection.
\verb+void addRichHit(StMcRichHit*);+\\
Adds a RICH hit to the internal hit collection.
\verb+void addBemcHit(StMcCalorimeterHit*);+\\
Adds a BEMC hit to the internal hit collection.
\verb+void addBprsHit(StMcCalorimeterHit*);+\\
Adds a BPRS hit to the internal hit collection.
\verb+void addBsmdeHit(StMcCalorimeterHit*);+\\
Adds a BSMDE hit to the internal hit collection.
\verb+void addBsmdpHit(StMcCalorimeterHit*);+\\
Adds a BSMDP hit to the internal hit collection.
\verb+void removeTpcHit(StMcTpcHit*);+\\
Removes a TPC hit from the internal hit collection.
\verb+void removeFtpcHit(StMcFtpcHit*);+\\
Removes a FTPC hit from the internal hit collection.
\verb+void removeSvtHit(StMcSvtHit*);+\\
Removes a SVT hit from the internal hit collection.
\verb+void removeRichHit(StMcRichHit*);+\\
Removes a RICH hit from the internal hit collection.
\verb+void removeCalorimeterHit(StPtrVecMcCalorimeterHit&, StMcCalorimeterHit*);+\\
Removes a calorimeter hit from the specified vector. Used by the methods below.
\verb+void removeBemcHit(StMcCalorimeterHit*);+\\
Removes a Calorimeter hit from the internal BEMC Emc hit collection. Calls
removeCalorimeterHit.
\verb+void removeBprsHit(StMcCalorimeterHit*);+\\
Removes a Calorimeter hit from the internal BPRS Emc hit collection. Calls
removeCalorimeterHit.
\verb+void removeBsmdeHit(StMcCalorimeterHit*);+\\
Removes a Calorimeter hit from the internal BSMDE Emc hit collection. Calls
removeCalorimeterHit.
\verb+void removeBsmdpHit(StMcCalorimeterHit*);+\\
Removes a Calorimeter hit from the internal BSMDP Emc hit collection. Calls
removeCalorimeterHit.
\item[Examples]
{\bf Example 1:}
{\footnotesize
\begin{verbatim}
//
// Calculate the mean pt of all primary
// tracks.
double meanPtOfTracks(StMcEvent *event)
{
StPtrVecMcTrack& tracks = event->primaryVertex()->daughters();
StMcTrackIterator iter;
StMcTrack* theTrack;
double sumPt = 0;
int n = 0;
for (iter = tracks.begin(); iter != tracks.end(); iter++) {
theTrack = *iter; // careful here, pointer collection
// add up pt
sumPt += theTrack->pt();
n++;
}
return n ? sumPt/static_cast<double>(n) : 0;
}
\end{verbatim}
}%footnotesize
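{\bf Example 2:} A minimal additional sketch that uses only member
functions documented above; the \name{StMcEvent::tracks()} accessor
mentioned in the Description is assumed here to return a reference to the
container, and the iterator typedef follows Example 1.
{\footnotesize
\begin{verbatim}
//
// Print pt and pseudorapidity of all tracks in the
// event that have at least 10 TPC hits.
//
void printLongTracks(StMcEvent* event)
{
    StSPtrVecMcTrack& tracks = event->tracks();
    StMcTrackIterator iter;
    StMcTrack* track;
    for (iter = tracks.begin(); iter != tracks.end(); iter++) {
        track = *iter;
        if (track->tpcHits().size() >= 10)
            cout << "pt = "  << track->pt()
                 << " eta = " << track->pseudoRapidity() << endl;
    }
}
\end{verbatim}
}%footnotesize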
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: StMcVertex
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{StMcVertex}
\index{StMcVertex|textbf}
\index{vertex class}
\label{sec:StMcVertex}
\begin{Entry}
\item[Summary]
\name{StMcVertex} describes a vertex, i.e.~a start and/or end point of one
or several tracks.
\item[Synopsis]
\verb+#include "StMcVertex.hh"+\\
\verb+class StMcVertex;+\\
\item[Description]
\name{StMcVertex} describes a general vertex in the STAR detector.
It uses the information stored in the \name{g2t\_vertex.idl}
table.\index{g2t\_vertex.idl} \name{StMcVertex} has a position, a
GEANT volume, a TOF and a GEANT process. Each vertex also has
a parent track and 0...n daughter tracks. All are
referenced by pointers.
The container of all vertices in the event can be obtained
through \name{StMcEvent::vertices()}.
\item[Persistence]
None
\item[Related Classes]
The \name{StSPtrVecMcVertex} container of \name{StMcEvent} owns the
vertices. This container stores the
vertices by pointer.
\index{StSPtrVecMcVertex}
\item[Public\\ Constructors]
\verb+StMcVertex();+\\
Default constructor. Constructs an instance of \name{StMcVertex} with
values initialized to 0. Not used. Construction is done
using the g2t\_vertex table.
\verb+StMcVertex(float x, float y, float z);+\\
Constructs a vertex in position (x, y, z).
\item[Public Member\\ Functions]
\verb+const StThreeVectorF& position() const;+\\
Vertex position in global coordinates.
\verb+StPtrVecMcTrack& daughters();+\\
Collection of all daughter tracks. Note that
the collection can be empty.
\verb+unsigned int numberOfDaughters();+\\
Number of daughter tracks. Same as {\tt daughters().size()}.
\verb+StMcTrack* daughter(unsigned int i);+\\
Returns the {\tt i}'th daughter track, or a NULL pointer
if {\tt i} is larger than or equal to the number of stored daughters.
(Index runs from 0 to {\tt numberOfDaughters()-1}).
\verb+const StMcTrack* parent();+\\
Pointer to parent track.
\verb+string geantVolume() const;+\\
Returns the GEANT volume read from g2t\_vertex table.
\verb+float tof();+\\
TOF read from g2t\_vertex table.
\verb+long geantProcess() const;+\\
GEANT process read from g2t\_vertex table.
\verb+long key() const;+\\
Primary key, read from g2t\_vertex table.
\verb+long geantMedium() const;+\\
GEANT medium read from g2t\_vertex table.
\verb+long generatorProcess() const;+\\
Process from event generator, read from g2t\_vertex table.
\verb+void setPosition(const StThreeVectorF&);+\\
Sets vertex position.
\verb+void setParent(StMcTrack*);+\\
Sets pointer to parent track.
\verb+void addDaughter(StMcTrack*); +\\
Adds a daughter track to the daughter collection.
\verb+void setGeantVolume(string); +\\
Sets the GEANT volume.
\verb+void setTof(float); +\\
Sets the Time Of Flight.
\verb+void setGeantProcess(int); +\\
Sets the GEANT process.
\verb+void removeDaughter(StMcTrack*);+\\
Removes a track from the daughter container.
\item[Public Member\\ Operators]
\verb+int operator==(const StMcVertex&) const;+\\
Returns true (1) if two instances of \name{StMcVertex} are equal, or false (0)
otherwise.
The only data members used for the comparison are the position and the
GEANT process.
\verb+int operator!=(const StMcVertex&) const;+\\
Returns true (1) if two instances of \name{StMcVertex} are not equal, or false (0)
otherwise.
This operator is implemented by simply inverting \name{operator==}.
\verb+ostream& operator<<(ostream& os, const StMcVertex& v);+\\
Prints position, geant volume, time of flight and geant process to standard
output.
\item[Examples] \index{primary vertex}
{\bf Example 1:}
{\footnotesize
\begin{verbatim}
//
// Function to count # of tracks originating
// from the primary vertex that have more than
// 10 hits in the TPC.
unsigned long countGoodPrimaryTracks(StMcEvent *event)
{
    unsigned long counter = 0;
    StPtrVecMcTrack& primaryTracks =
        event->primaryVertex()->daughters();
    StMcTrackIterator i;
    StMcTrack* track;
    for (i = primaryTracks.begin(); i != primaryTracks.end(); i++) {
        track = *i;
        if (track->tpcHits().size() > 10)
            counter++;
    }
    return counter;
}
\end{verbatim}
}%footnotesize
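{\bf Example 2:} A small additional sketch; the \name{StMcEvent::vertices()}
accessor is the one mentioned in the Description above (assumed here to return
a reference to the container), and the iterator typedef
\name{StMcVertexIterator} is assumed by analogy with the track and hit
iterators.
{\footnotesize
\begin{verbatim}
//
// Find the vertex with the largest number
// of daughter tracks in the event.
//
StMcVertex* busiestVertex(StMcEvent* event)
{
    StSPtrVecMcVertex& vertices = event->vertices();
    StMcVertexIterator i;
    StMcVertex* best = 0;
    unsigned int nMax = 0;
    for (i = vertices.begin(); i != vertices.end(); i++) {
        if ((*i)->numberOfDaughters() > nMax) {
            nMax = (*i)->numberOfDaughters();
            best = *i;
        }
    }
    return best;
}
\end{verbatim}
}%footnotesize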
\end{Entry}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% The End
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\printindex
\end{document}
\bye
% The text following after this line is not included into the text
%
% Template for reference section
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Reference: className
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{className}
\index{className|textbf}
\label{sec:className}
\begin{Entry}
\item[Summary]
\item[Synopsis]
\verb+#include "className.hh"+\\
\verb+class className;+\\
\item[Description]
\item[Persistence]
None
\item[Related Classes]
\item[Public\\ Constructors]
\item[Public Member\\ Functions]
\item[Public Member\\ Operators]
\item[Public Functions]
\item[Public Operators]
\item[Examples]
{\footnotesize
\begin{verbatim}
//
// What the example does
//
\end{verbatim}
}%footnotesize
\end{Entry}
% Default to the notebook output style
% Inherit from the specified cell style.
\documentclass{article}
\usepackage{graphicx} % Used to insert images
\usepackage{adjustbox} % Used to constrain images to a maximum size
\usepackage{color} % Allow colors to be defined
\usepackage{enumerate} % Needed for markdown enumerations to work
\usepackage{geometry} % Used to adjust the document margins
\usepackage{amsmath} % Equations
\usepackage{amssymb} % Equations
\usepackage{eurosym} % defines \euro
\usepackage[mathletters]{ucs} % Extended unicode (utf-8) support
\usepackage[utf8x]{inputenc} % Allow utf-8 characters in the tex document
\usepackage{fancyvrb} % verbatim replacement that allows latex
\usepackage{grffile} % extends the file name processing of package graphics
% to support a larger range
% The hyperref package gives us a pdf with properly built
% internal navigation ('pdf bookmarks' for the table of contents,
% internal cross-reference links, web links for URLs, etc.)
\usepackage{hyperref}
\usepackage{longtable} % longtable support required by pandoc >1.10
\usepackage{booktabs} % table support for pandoc > 1.12.2
\usepackage[normalem]{ulem} % ulem is needed to support strikethroughs (\sout)
\definecolor{orange}{cmyk}{0,0.4,0.8,0.2}
\definecolor{darkorange}{rgb}{.71,0.21,0.01}
\definecolor{darkgreen}{rgb}{.12,.54,.11}
\definecolor{myteal}{rgb}{.26, .44, .56}
\definecolor{gray}{gray}{0.45}
\definecolor{lightgray}{gray}{.95}
\definecolor{mediumgray}{gray}{.8}
\definecolor{inputbackground}{rgb}{.95, .95, .85}
\definecolor{outputbackground}{rgb}{.95, .95, .95}
\definecolor{traceback}{rgb}{1, .95, .95}
% ansi colors
\definecolor{red}{rgb}{.6,0,0}
\definecolor{green}{rgb}{0,.65,0}
\definecolor{brown}{rgb}{0.6,0.6,0}
\definecolor{blue}{rgb}{0,.145,.698}
\definecolor{purple}{rgb}{.698,.145,.698}
\definecolor{cyan}{rgb}{0,.698,.698}
\definecolor{lightgray}{gray}{0.5}
% bright ansi colors
\definecolor{darkgray}{gray}{0.25}
\definecolor{lightred}{rgb}{1.0,0.39,0.28}
\definecolor{lightgreen}{rgb}{0.48,0.99,0.0}
\definecolor{lightblue}{rgb}{0.53,0.81,0.92}
\definecolor{lightpurple}{rgb}{0.87,0.63,0.87}
\definecolor{lightcyan}{rgb}{0.5,1.0,0.83}
% commands and environments needed by pandoc snippets
% extracted from the output of `pandoc -s`
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\newenvironment{Shaded}{}{}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}}
\newcommand{\RegionMarkerTok}[1]{{#1}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}}
\newcommand{\NormalTok}[1]{{#1}}
% Additional commands for more recent versions of Pandoc
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{{#1}}}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{{#1}}}
\newcommand{\ImportTok}[1]{{#1}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{{#1}}}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{{#1}}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{{#1}}}
\newcommand{\BuiltInTok}[1]{{#1}}
\newcommand{\ExtensionTok}[1]{{#1}}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{{#1}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{{#1}}}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
% Define a nice break command that doesn't care if a line doesn't already
% exist.
\def\br{\hspace*{\fill} \\* }
% Math Jax compatability definitions
\def\gt{>}
\def\lt{<}
% Document parameters
\title{Introduction to Python}
\author{Kjartan Halvorsen}
% Pygments definitions
\makeatletter
\def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax%
\let\PY@ul=\relax \let\PY@tc=\relax%
\let\PY@bc=\relax \let\PY@ff=\relax}
\def\PY@tok#1{\csname PY@tok@#1\endcsname}
\def\PY@toks#1+{\ifx\relax#1\empty\else%
\PY@tok{#1}\expandafter\PY@toks\fi}
\def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{%
\PY@it{\PY@bf{\PY@ff{#1}}}}}}}
\def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}}
\expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}}
\expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf}
\expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit}
\expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}}
\expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}}
\expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}}
\expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}}
\expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}}
\expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}}
\expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}}
\expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}}
\expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}}
\expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@ch\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}}
\expandafter\def\csname PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}}
\expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}}
\expandafter\def\csname PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@cpf\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}}
\expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}}
\expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}}
\expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\def\PYZbs{\char`\\}
\def\PYZus{\char`\_}
\def\PYZob{\char`\{}
\def\PYZcb{\char`\}}
\def\PYZca{\char`\^}
\def\PYZam{\char`\&}
\def\PYZlt{\char`\<}
\def\PYZgt{\char`\>}
\def\PYZsh{\char`\#}
\def\PYZpc{\char`\%}
\def\PYZdl{\char`\$}
\def\PYZhy{\char`\-}
\def\PYZsq{\char`\'}
\def\PYZdq{\char`\"}
\def\PYZti{\char`\~}
% for compatibility with earlier versions
\def\PYZat{@}
\def\PYZlb{[}
\def\PYZrb{]}
\makeatother
% Exact colors from NB
\definecolor{incolor}{rgb}{0.0, 0.0, 0.5}
\definecolor{outcolor}{rgb}{0.545, 0.0, 0.0}
% Prevent overflowing lines due to hard-to-break entities
\sloppy
% Setup hyperref package
\hypersetup{
breaklinks=true, % so long urls are correctly broken across lines
colorlinks=true,
urlcolor=blue,
linkcolor=darkorange,
citecolor=darkgreen,
}
% Slightly bigger margins than the latex defaults
\geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in}
\begin{document}
\maketitle
\section{Short introduction to
Python}\label{short-introduction-to-python}
\subsection{Preliminary}\label{preliminary}
This notebook is part of the repository
\href{https://github.com/alfkjartan/systemanalys}{systemanalys}. You can
either clone this repository using \href{https://help.github.com/articles/set-up-git/}{\texttt{git}}, or you can download it
as a
\href{https://github.com/alfkjartan/systemanalys/archive/master.zip}{zip-file}.
\subsection{Installation}\label{installation}
\subsubsection{Anaconda}\label{anaconda}
In this course we recommend using the
\href{https://www.continuum.io/downloads}{Anaconda} Python distribution.
Make sure to download the Python 3 version. The Anaconda distribution
contains by default many of the most common Python modules for
scientific computing. However, it does not come with
\href{https://simpy.readthedocs.io/en/latest/}{SimPy}. To install this,
open the Anaconda prompt (Start -\textgreater{} All Programs
-\textgreater{} Anaconda -\textgreater{} Anaconda prompt) and write
\begin{quote}
\texttt{\textgreater{}\ pip\ install\ simpy}
\end{quote}
You can manage your Anaconda installation and launch programs from the
Anaconda Navigator (Start -\textgreater{} All Programs -\textgreater{}
Anaconda -\textgreater{} Anaconda Navigator). The Anaconda distribution
does come with Jupyter, which is a program that lets you work with
notebooks (code and documentation interwoven) in your favourite browser.
Wonderful to work with!
\subsubsection{Jupyter}\label{jupyter}
Writing and running the simulation will be done using
\href{https://jupyter.readthedocs.io/en/latest/index.html}{Jupyter}
notebooks. This getting-started-guide is written as a Jupyter notebook.
You may be reading this and running the commands in Jupyter. Or, if you
are reading the PDF version, you are encouraged to jump over to the
Jupyter version right away. You can launch Jupyter notebook from the
Anaconda Navigator. This will open a new tab in your default web
browser. It shows the current working directory from which you can
create folders, files or notebooks. Navigate to the folder containing
this notebook (\texttt{systemanalys/doc}) and open the notebook
Introduction to Python.
\subsection{Python variables and
datatypes}\label{python-variables-and-datatypes}
To do something useful, we need to define \emph{variables} to hold
\emph{data}. In Python the \emph{type} of the variable is not specified,
so a variable name can hold any kind of data. This is called ``dynamic
typing''. The most common basic data types are
numbers
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}9}]:} \PY{n}{a} \PY{o}{=} \PY{l+m+mi}{42}
\end{Verbatim}
strings
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}10}]:} \PY{n}{s1} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{system}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{s2} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{system}\PY{l+s+s2}{\PYZdq{}}
\end{Verbatim}
lists
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}11}]:} \PY{n}{b} \PY{o}{=} \PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{hej}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{s1}\PY{p}{,} \PY{n}{a}\PY{p}{]}
\end{Verbatim}
In the assignments above, this is what happens in the computer: Memory
space is allocated to hold the data (what is on the right hand side of
the equal sign), and a variable name is created to reference that data.
Also, the variable name is added to the list of current variables. When
one variable is assigned to another, the data is not copied (unless it
is a simple data type such as a float or integer). Instead we end up
with two variables referencing the same piece of data. For instance
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}12}]:} \PY{n}{c} \PY{o}{=} \PY{n}{b}
\PY{n}{c}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}12}]:} ['hej', 'system', 42]
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}13}]:} \PY{n}{c}\PY{p}{[}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{24}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{c}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{b}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
['hej', 'system', 24]
['hej', 'system', 24]
\end{Verbatim}
Note the command \texttt{c{[}-1{]}\ =\ 24}. Negative indices in Python
are a convenient way of accessing elements starting from the end of the
list. Note also that the command is (re)assigning the last element of
\texttt{c} to the value 24, leaving the other elements unchanged. The
data referenced by \texttt{c} is modified, and the variable \texttt{b}
which refers to the same data will now show the same change. This is
fundamentally different from writing
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}17}]:} \PY{n}{c} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\PY{n}{b}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}17}]:} ['hej', 'system', 24]
\end{Verbatim}
Above, an empty list is created and the variable \texttt{c} is then
assigned to this empty list instead of whatever it was assigned to
earlier. The two variables \texttt{c} and \texttt{b} no longer reference
the same data, and the operation leaves \texttt{b} entirely unchanged.
\subsection{If, for and while}\label{if-for-and-while}
The \texttt{if} construct is used whenever certain lines of code should
be executed only when some condition holds. For instance
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}18}]:} \PY{k}{if} \PY{p}{(}\PY{n}{c} \PY{o}{==} \PY{p}{[}\PY{p}{]}\PY{p}{)}\PY{p}{:}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{c is empty}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
c is empty
\end{Verbatim}
The block of code (lines of code) to be executed if the \texttt{if}
statement is true is identified by being further \emph{indented} than
the line starting with \texttt{if}. There are no curly brackets or
keywords marking the start and end of the block. The first line that is
indented equally or less than the \texttt{if} line ends the block of
code to be executed. In the example above, this line is empty.
To repeat some piece of code a known number of times, use a \texttt{for}
loop:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}20}]:} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{b}\PY{p}{)}\PY{p}{)}\PY{p}{:}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{b}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
hej
system
24
\end{Verbatim}
The function \texttt{len()} returns the length of the list, and the
function \texttt{range(n)} returns a sequence of integers starting at 0 and
ending at \(n-1\). It may seem a bit unnecessary to create a sequence of
integers to use as indices into \texttt{b}. Sometimes the index is
actually needed, but more often we loop over the elements more
conveniently
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}21}]:} \PY{k}{for} \PY{n}{e} \PY{o+ow}{in} \PY{n}{b}\PY{p}{:}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{e}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
hej
system
24
\end{Verbatim}
A \texttt{while} loop is useful to repeat a number of lines for as long
as some condition is true
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}22}]:} \PY{n}{x} \PY{o}{=} \PY{l+m+mi}{7}
\PY{k}{while} \PY{n}{x} \PY{o}{\PYZlt{}} \PY{l+m+mi}{1000}\PY{p}{:}
\PY{n}{x} \PY{o}{+}\PY{o}{=} \PY{l+m+mi}{7}
\PY{n}{x}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}22}]:} 1001
\end{Verbatim}
\texttt{while} can also be used to create an eternal loop
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor} }]:} \PY{k}{while} \PY{k+kc}{True}\PY{p}{:}
\PY{n}{x} \PY{o}{*}\PY{o}{=} \PY{n}{x}
\PY{n}{x}
\end{Verbatim}
To interrupt the never-ending execution of the above code, look to the
Jupyter menu near the top of the browser page and choose Kernel
-\textgreater{} Interrupt.
\subsection{Modules}\label{modules}
So far we have entered the code directly into the command window. More
commonly, one writes code in text files which are then read and executed
by the interpreter. A python file (with ending \texttt{.py}) containing
code is referred to as a \emph{module}. At start-up, some of these
modules are loaded automatically and are available for use in our own
code. Other modules must be explicitly imported before the code can be
accessed. There is, for instance, a \texttt{math} module containing a
number of mathematical functions, and `numpy' for numerical computations
and linear algebra. To access functions from these modules we must do
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}26}]:} \PY{k+kn}{import} \PY{n+nn}{math}
\PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k}{as} \PY{n+nn}{np}
\end{Verbatim}
Then we can use functions such as
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}27}]:} \PY{n+nb}{print} \PY{p}{(}\PY{n}{math}\PY{o}{.}\PY{n}{sin}\PY{p}{(}\PY{n}{math}\PY{o}{.}\PY{n}{pi}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{sin}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{pi}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
1.2246467991473532e-16
1.0
\end{Verbatim}
Other modules of interest include
\href{http://docs.scipy.org/doc/scipy/reference/}{SciPy} for scientific
computing, \href{http://matplotlib.org/}{matplotlib} for plotting and
\href{http://pandas.pydata.org/}{pandas} for data analysis. For
simulation of discrete event systems we will use the
\href{https://simpy.readthedocs.io/en/latest/}{SimPy} module.
\subsection{Functions}\label{functions}
Functions are fundamental constructs in programming languages. They make
code much easier to understand and maintain by separating and
encapsulating operations. Consider this simple function that swops the
first and last element of a list:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}32}]:} \PY{k}{def} \PY{n+nf}{swop}\PY{p}{(}\PY{n}{alist}\PY{p}{)}\PY{p}{:}
\PY{n}{slask} \PY{o}{=} \PY{n}{alist}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n}{alist}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{=} \PY{n}{alist}\PY{p}{[}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}
\PY{n}{alist}\PY{p}{[}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{n}{slask}
\PY{c+c1}{\PYZsh{} Think about cases where the function will fail and try to figure out how to make it more robust}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{b}\PY{p}{)}
\PY{n}{swop}\PY{p}{(}\PY{n}{b}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{b}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[24, 'system', 'hej']
['hej', 'system', 24]
\end{Verbatim}
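The comment in the code invites us to think about how \texttt{swop()}
could fail. One case is an empty list, where indexing with \texttt{[0]}
raises an error. A slightly more robust variant could look like this
(one of several reasonable choices):
\begin{Verbatim}
def swop(alist):
    ''' Swop the first and last element of alist, in place. '''
    if len(alist) < 2:
        return          # nothing to swop for empty or one-element lists
    alist[0], alist[-1] = alist[-1], alist[0]
\end{Verbatim}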
The \texttt{swop} function performed its operations directly on the input
argument, but did not return any value. Returning a value is of course
also possible:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}36}]:} \PY{k}{def} \PY{n+nf}{middle}\PY{p}{(}\PY{n}{alist}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{\PYZsq{}\PYZsq{}\PYZsq{} Will return the element in the middle of the list. }
\PY{l+s+sd}{ If the length is even, it will return the last element before the middle. \PYZsq{}\PYZsq{}\PYZsq{}}
\PY{n}{ind} \PY{o}{=} \PY{n+nb}{int}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{alist}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{2} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)} \PY{c+c1}{\PYZsh{} since indexing starts at 0}
\PY{k}{return} \PY{n}{alist}\PY{p}{[}\PY{n}{ind}\PY{p}{]}
\PY{n}{middle}\PY{p}{(}\PY{n}{b}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}36}]:} 'hej'
\end{Verbatim}
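Note that for a list of odd length the function above returns the element
just before the true middle (for the three-element list \texttt{b} it
returns index 0, not index 1). If the middle element is wanted for odd
lengths as well, one possible variation is:
\begin{Verbatim}
def middle(alist):
    ''' Return the middle element of the list.
        For even lengths, return the last element before the middle. '''
    ind = (len(alist) - 1) // 2     # integer division; 0-based indexing
    return alist[ind]
\end{Verbatim}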
\subsection{Creating your own
datatypes}\label{creating-your-own-datatypes}
Basically all modern programming languages allow you to create your own
datatypes based on the fundamental types that the language provides.
Assume that we would like to create a datatype \texttt{Customer}. We
want to use it like this:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor} }]:} \PY{n}{c1} \PY{o}{=} \PY{n}{Customer}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Pontus}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{prio}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{c2} \PY{o}{=} \PY{n}{Customer}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Elisa}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{prio}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{)}
\PY{n}{queue} \PY{o}{=} \PY{p}{[}\PY{n}{c1}\PY{p}{,} \PY{n}{c2}\PY{p}{]}
\end{Verbatim}
This gives us a short queue containing two customers. The answer is to
define a \emph{class} \texttt{Customer} from which we can instantiate
(generate) any number of \emph{objects}. The values \texttt{name} and
\texttt{prio} are \emph{attributes} of the object:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}45}]:} \PY{k}{class} \PY{n+nc}{Customer}\PY{p}{:}
\PY{k}{def} \PY{n+nf}{\PYZus{}\PYZus{}init\PYZus{}\PYZus{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{name}\PY{p}{,} \PY{n}{prio}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{n}{name}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{prio} \PY{o}{=} \PY{n}{prio}
\PY{n}{c1} \PY{o}{=} \PY{n}{Customer}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Pontus}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{prio}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{c2} \PY{o}{=} \PY{n}{Customer}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Elisa}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{prio}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{)}
\PY{n}{queue} \PY{o}{=} \PY{p}{[}\PY{n}{c1}\PY{p}{,} \PY{n}{c2}\PY{p}{]}
\PY{n}{queue}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}45}]:} [<\_\_main\_\_.Customer at 0x7fb9345a1940>, <\_\_main\_\_.Customer at 0x7fb9345a15f8>]
\end{Verbatim}
The function \texttt{\_\_init\_\_()} is called the \emph{constructor}.
It is called when a new object is created. The first argument,
\texttt{self}, is a reference to the actual object under construction.
We see from the code that the constructor takes two arguments: the name
and the priority of the customer. The priority has a default value of 1,
which is used if the customer object is created with one argument only.
We may check the name and priority of the customers as
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}46}]:} \PY{k}{for} \PY{n}{c} \PY{o+ow}{in} \PY{n}{queue}\PY{p}{:}
\PY{n+nb}{print}\PY{p}{(} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Customer }\PY{l+s+si}{\PYZpc{}s}\PY{l+s+s2}{ has priority }\PY{l+s+si}{\PYZpc{}d}\PY{l+s+s2}{\PYZdq{}} \PY{o}{\PYZpc{}} \PY{p}{(}\PY{n}{c}\PY{o}{.}\PY{n}{name}\PY{p}{,} \PY{n}{c}\PY{o}{.}\PY{n}{prio}\PY{p}{)} \PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Customer Pontus has priority 1
Customer Elisa has priority 3
\end{Verbatim}
The somewhat complicated syntax on the \texttt{print} line is a quite
common way (many programming languages have it) of formatting output
that contains numbers. Within the string, the percentage expressions are
placeholders for numbers or strings. The occurrences of the placeholders
are matched with the elements in the \emph{tuple} following the string
(the tuple is the list within parentheses, separated from the string by
the percentage symbol). A tuple is a fundamental datatype in Python. It
is a list whose elements cannot be changed after it is constructed.
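A short example of a tuple:
\begin{Verbatim}
point = (1.0, 2.5)       # a tuple with two elements
print(point[0])          # indexing works just as for lists: prints 1.0
# point[0] = 3.0         # this would raise a TypeError, since tuples
                         # do not support item assignment
\end{Verbatim}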
One important advantage of objects and object-oriented programming is
that functions written to work on the objects can be associated with the
object itself. These functions are called \emph{methods} or \emph{member
functions}. If we have an object, say a \texttt{list} object, we can
print a list of the methods and attributes for that object with the help
of the \texttt{dir} function:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}47}]:} \PY{n}{names} \PY{o}{=} \PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Evelyn}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Eirik}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}
\PY{n+nb}{dir}\PY{p}{(}\PY{n}{names}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}47}]:} ['\_\_add\_\_',
'\_\_class\_\_',
'\_\_contains\_\_',
'\_\_delattr\_\_',
'\_\_delitem\_\_',
'\_\_dir\_\_',
'\_\_doc\_\_',
'\_\_eq\_\_',
'\_\_format\_\_',
'\_\_ge\_\_',
'\_\_getattribute\_\_',
'\_\_getitem\_\_',
'\_\_gt\_\_',
'\_\_hash\_\_',
'\_\_iadd\_\_',
'\_\_imul\_\_',
'\_\_init\_\_',
'\_\_init\_subclass\_\_',
'\_\_iter\_\_',
'\_\_le\_\_',
'\_\_len\_\_',
'\_\_lt\_\_',
'\_\_mul\_\_',
'\_\_ne\_\_',
'\_\_new\_\_',
'\_\_reduce\_\_',
'\_\_reduce\_ex\_\_',
'\_\_repr\_\_',
'\_\_reversed\_\_',
'\_\_rmul\_\_',
'\_\_setattr\_\_',
'\_\_setitem\_\_',
'\_\_sizeof\_\_',
'\_\_str\_\_',
'\_\_subclasshook\_\_',
'append',
'clear',
'copy',
'count',
'extend',
'index',
'insert',
'pop',
'remove',
'reverse',
'sort']
\end{Verbatim}
You can use the \texttt{help()} function to get help on specific
methods. For instance
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}48}]:} \PY{n}{help}\PY{p}{(}\PY{n}{names}\PY{o}{.}\PY{n}{sort}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Help on built-in function sort:
sort({\ldots}) method of builtins.list instance
L.sort(key=None, reverse=False) -> None -- stable sort *IN PLACE*
\end{Verbatim}
This help string tells us that the \texttt{sort()} method will sort the
list in place, i.e.~the list itself is modified by the method. How the
elements are ordered depends on their type. Sorting a list of strings,
for instance, gives alphabetical order:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}51}]:} \PY{n}{names} \PY{o}{=} \PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Evelyn}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Elisa}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Eirik}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Emilio}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{names}\PY{p}{)}
\PY{n}{names}\PY{o}{.}\PY{n}{sort}\PY{p}{(}\PY{p}{)}
\PY{n}{names}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
['Evelyn', 'Elisa', 'Eirik', 'Emilio']
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}51}]:} ['Eirik', 'Elisa', 'Emilio', 'Evelyn']
\end{Verbatim}
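The \texttt{key} argument mentioned in the help text can be used to sort
by some computed value. For instance, we could order the customer queue
from above by priority (a small sketch, assuming the \texttt{Customer}
objects \texttt{c1} and \texttt{c2} are still in \texttt{queue}):
\begin{Verbatim}
queue.sort(key=lambda c: c.prio, reverse=True)   # highest prio value first
print([c.name for c in queue])                   # prints ['Elisa', 'Pontus']
\end{Verbatim}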
\subsection{Generators}\label{generators}
\emph{Generators} are a powerful concept in Python. They are used to
implement the processes in the pseudo-parallel / process-based
simulations in SimPy. You should think of generators as functions which
generate and return the elements of a sequence, one at a time. Let's look
at a simple example which generates the sequence of odd numbers:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}54}]:} \PY{k}{def} \PY{n+nf}{oddnumbers}\PY{p}{(}\PY{p}{)}\PY{p}{:}
\PY{n}{x} \PY{o}{=} \PY{l+m+mi}{1}
\PY{k}{while} \PY{k+kc}{True}\PY{p}{:}
\PY{k}{yield} \PY{n}{x}
\PY{n}{x} \PY{o}{=} \PY{n}{x}\PY{o}{+}\PY{l+m+mi}{2}
\PY{n}{og} \PY{o}{=} \PY{n}{oddnumbers}\PY{p}{(}\PY{p}{)} \PY{c+c1}{\PYZsh{} This creates the generator}
\PY{n+nb}{print}\PY{p}{(} \PY{n+nb}{next}\PY{p}{(}\PY{n}{og}\PY{p}{)} \PY{p}{)} \PY{c+c1}{\PYZsh{} Prints 1}
\PY{n+nb}{print}\PY{p}{(} \PY{n+nb}{next}\PY{p}{(}\PY{n}{og}\PY{p}{)} \PY{p}{)} \PY{c+c1}{\PYZsh{} Prints 3}
\PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{:} \PY{c+c1}{\PYZsh{} The odd numbers 5, 7, 9, 11}
\PY{n+nb}{print}\PY{p}{(} \PY{n+nb}{next}\PY{p}{(}\PY{n}{og}\PY{p}{)} \PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
1
3
5
7
9
11
\end{Verbatim}
Python recognizes the definition of a generator by the presence of the
keyword \texttt{yield}. In the example, a generator object is created
with \texttt{og\ =\ oddnumbers()}, and then the function \texttt{next(og)}
is called on the generator object. The first call to \texttt{next()} causes
the code in the definition of the generator to be executed from the top
to the first occurrence of \texttt{yield}. The \texttt{yield} statement is
similar to a return statement in a regular function. However, on the
next call to \texttt{next()}, execution resumes at the first line
after the \texttt{yield} statement that caused the previous return to
the caller, and goes on to the next occurrence of
\texttt{yield}. The local variables in the definition of the generator
retain their values between calls to \texttt{next()}.
If a call to \texttt{next()} does not reach a \texttt{yield}
statement, but instead reaches the end of the definition of the generator,
then an exception (\texttt{StopIteration}) is raised which tells the caller
to stop iterating over the generator: there are no more elements in the
sequence. The odd
numbers are of course an infinite sequence, so it should be possible (in
theory) to generate increasingly large odd numbers by calling
\texttt{next(og)} forever.
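A small sketch of a finite generator illustrates this; a \texttt{for} loop
notices the exception and simply stops:
\begin{Verbatim}
def countdown(n):
    while n > 0:
        yield n
        n -= 1

for value in countdown(3):
    print(value)        # prints 3, 2 and 1

g = countdown(1)
next(g)                 # returns 1
# next(g)               # would raise StopIteration; the sequence is exhausted
\end{Verbatim}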
\subsubsection{A classical example: the Fibonacci
numbers}\label{a-classical-example-the-fibonacci-numbers}
The Fibonacci number series is a sequence of integers defined by the
difference equation \[ F(n) = F(n-1) + F(n-2) \] together with the
starting conditions \(F(0) = 0\), \(F(1) = 1\). Leonardo Fibonacci used
the difference equation as a simple model of the dynamics in a
population of rabbits. In the model there are no deaths, so at each time
step, the number of rabbits is equal to the number of rabbits in the
previous time step plus the number of rabbits born since the last time
step. Each rabbit has exactly one offspring in each time step, except
for an initial newborn-period of one time step. Hence the number of
rabbits in the last time step that are mature enough to reproduce equals
the size of the population two time steps back in time.
A standard schoolbook implementation of the Fibonacci numbers would use
recursion to compute the numbers:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}58}]:} \PY{k}{def} \PY{n+nf}{fib\PYZus{}rec}\PY{p}{(}\PY{n}{n}\PY{p}{)}\PY{p}{:}
\PY{k}{if} \PY{n}{n}\PY{o}{==}\PY{l+m+mi}{1} \PY{o+ow}{or} \PY{n}{n}\PY{o}{==}\PY{l+m+mi}{0}\PY{p}{:}
\PY{k}{return} \PY{n}{n}
\PY{k}{return} \PY{n}{fib\PYZus{}rec}\PY{p}{(}\PY{n}{n}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{o}{+}\PY{n}{fib\PYZus{}rec}\PY{p}{(}\PY{n}{n}\PY{o}{\PYZhy{}}\PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{fib\PYZus{}rec}\PY{p}{(}\PY{l+m+mi}{7}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}58}]:} 13
\end{Verbatim}
The implementation looks quite elegant, but this type of recursive
implementation is not a good idea in a high-level language such as
Python, since function calls are relatively expensive compared to, for
instance, C. Each call to \texttt{fib\_rec()} causes two new calls to
the function, so the number of calls grows rapidly with \(n\), and the
same values are computed over and over again.
The sequence is much more efficiently implemented with a generator:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}60}]:} \PY{k}{def} \PY{n+nf}{fib}\PY{p}{(}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{\PYZdq{}\PYZdq{}\PYZdq{} Generates the sequence of Fibonacci numbers \PYZdq{}\PYZdq{}\PYZdq{}}
\PY{n}{fmin1} \PY{o}{=} \PY{l+m+mi}{0}
\PY{n}{fmin2} \PY{o}{=} \PY{l+m+mi}{1}
\PY{k}{while} \PY{k+kc}{True}\PY{p}{:}
\PY{n}{f} \PY{o}{=} \PY{n}{fmin1}\PY{o}{+}\PY{n}{fmin2}
\PY{k}{yield} \PY{n}{f}
\PY{c+c1}{\PYZsh{} Update}
\PY{p}{(}\PY{n}{fmin2}\PY{p}{,} \PY{n}{fmin1}\PY{p}{)} \PY{o}{=} \PY{p}{(}\PY{n}{fmin1}\PY{p}{,} \PY{n}{f}\PY{p}{)}
\PY{n}{fg} \PY{o}{=} \PY{n}{fib}\PY{p}{(}\PY{p}{)}
\PY{p}{[}\PY{n+nb}{next}\PY{p}{(}\PY{n}{fg}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{7}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}60}]:} [1, 1, 2, 3, 5, 8, 13]
\end{Verbatim}
The line \texttt{{[}next(fg)\ for\ i\ in\ range(7){]}} is an example of
\emph{list comprehension}. A list is generated from the return values of
the function call \texttt{next(fg)}. This is done 7 times.
Timing the two implementations of the Fibonacci numbers clearly shows the
difference in efficiency.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}66}]:} \PY{k+kn}{import} \PY{n+nn}{time}
\PY{n}{t} \PY{o}{=} \PY{n}{time}\PY{o}{.}\PY{n}{time}\PY{p}{(}\PY{p}{)}
\PY{n}{fg} \PY{o}{=} \PY{n}{fib}\PY{p}{(}\PY{p}{)}
\PY{n}{fibonacciSequence} \PY{o}{=} \PY{p}{[}\PY{n+nb}{next}\PY{p}{(}\PY{n}{fg}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{30}\PY{p}{)}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(} \PY{n}{time}\PY{o}{.}\PY{n}{time}\PY{p}{(}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{n}{t}\PY{p}{)}
\PY{n}{t} \PY{o}{=} \PY{n}{time}\PY{o}{.}\PY{n}{time}\PY{p}{(}\PY{p}{)}
\PY{n}{fib\PYZus{}rec}\PY{p}{(}\PY{l+m+mi}{30}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(} \PY{n}{time}\PY{o}{.}\PY{n}{time}\PY{p}{(}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{n}{t}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
0.00012254714965820312
0.4177854061126709
\end{Verbatim}
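The cost of the recursive version comes from recomputing the same values
over and over. One way to avoid this, while keeping the recursive
formulation, is to cache previously computed results, for instance with
\texttt{lru\_cache} from the standard library module \texttt{functools}
(a small sketch):
\begin{Verbatim}
from functools import lru_cache

@lru_cache(maxsize=None)        # remember all previously computed results
def fib_rec_cached(n):
    if n == 1 or n == 0:
        return n
    return fib_rec_cached(n - 1) + fib_rec_cached(n - 2)

fib_rec_cached(30)              # 832040, now computed without the
                                # exponential blow-up in the number of calls
\end{Verbatim}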
\subsection{Random number generation}\label{random-number-generation}
Discrete event systems are almost always stochastic systems. To build a
model and simulate it, we need sequences of random numbers. A truly
random sequence of numbers is actually very difficult to generate.
Instead, the computer generates a completely deterministic sequence
of numbers whose properties are close to those of a sequence of
random numbers. The numbers are therefore called pseudo-random.
Python, as well as most other programming languages used in scientific
computing, has a built-in pseudo-random number generator, so we don't
have to implement this ourselves. The pseudo-random number generator is
part of the \href{https://docs.python.org/2/library/random.html}{random}
module. It is used as follows
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}68}]:} \PY{k+kn}{import} \PY{n+nn}{random}
\PY{n}{random}\PY{o}{.}\PY{n}{seed}\PY{p}{(}\PY{l+m+mi}{1315}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(} \PY{p}{[}\PY{n}{random}\PY{o}{.}\PY{n}{gauss}\PY{p}{(}\PY{n}{mu}\PY{o}{=}\PY{l+m+mf}{1.0}\PY{p}{,} \PY{n}{sigma}\PY{o}{=}\PY{l+m+mf}{0.5}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{)}\PY{p}{]} \PY{p}{)}
\PY{n}{random}\PY{o}{.}\PY{n}{seed}\PY{p}{(}\PY{l+m+mi}{1314}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(} \PY{p}{[}\PY{n}{random}\PY{o}{.}\PY{n}{gauss}\PY{p}{(}\PY{n}{mu}\PY{o}{=}\PY{l+m+mf}{1.0}\PY{p}{,} \PY{n}{sigma}\PY{o}{=}\PY{l+m+mf}{0.5}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{)}\PY{p}{]} \PY{p}{)}
\PY{n}{random}\PY{o}{.}\PY{n}{seed}\PY{p}{(}\PY{l+m+mi}{1315}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(} \PY{p}{[}\PY{n}{random}\PY{o}{.}\PY{n}{gauss}\PY{p}{(}\PY{n}{mu}\PY{o}{=}\PY{l+m+mf}{1.0}\PY{p}{,} \PY{n}{sigma}\PY{o}{=}\PY{l+m+mf}{0.5}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{)}\PY{p}{]} \PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[0.6201987142056988, 1.3120253897349055, 0.7844364422139856]
[0.5896986001391438, 1.325007288757538, 1.2354502475094375]
[0.6201987142056988, 1.3120253897349055, 0.7844364422139856]
\end{Verbatim}
Note that the first three numbers are repeated exactly when the seed is
set back to the same value. If you choose not to set a seed, the
default behaviour of Python is to use the system time to set a seed that
will be different every time you run your code. The purpose of setting
the seed explicitly is to be able to reproduce simulation experiments.
Use \texttt{dir(random)} to see the different methods available to
generate random numbers from different distributions. Use
\texttt{help()} to get help on any specific method. For instance
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}70}]:} \PY{n}{help}\PY{p}{(}\PY{n}{random}\PY{o}{.}\PY{n}{expovariate}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Help on method expovariate in module random:
expovariate(lambd) method of random.Random instance
Exponential distribution.
lambd is 1.0 divided by the desired mean. It should be
nonzero. (The parameter would be called "lambda", but that is
a reserved word in Python.) Returned values range from 0 to
positive infinity if lambd is positive, and from negative
infinity to 0 if lambd is negative.
\end{Verbatim}
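As an example, we can draw exponentially distributed times with mean 2.0,
the kind of quantity we will need for interarrival and service times in
the simulations below:
\begin{Verbatim}
samples = [random.expovariate(1.0/2.0) for i in range(10000)]
print(sum(samples)/len(samples))    # close to the mean 2.0
\end{Verbatim}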
\subsection{The SimPy module}\label{the-simpy-module}
There are good introductory
\href{http://simpy.readthedocs.io/en/latest/simpy_intro/index.html}{tutorials},
\href{http://simpy.readthedocs.io/en/latest/examples/index.html}{examples}
and
\href{http://simpy.readthedocs.org/en/latest/api_reference/index.html}{documentation}
for SimPy. In this section we just want to point out a key concept to
understand when implementing simulation models in SimPy.
The \emph{processes} of a discrete event system are implemented as
\emph{generators} in Python. The defining property of a generator is
that it can return a value after a call (by use of the \texttt{yield}
keyword), and continue execution where it left off at the next call.
This is exactly what we need to implement processes that perform some
operations and then wait for some event to happen before continuing.
In SimPy, the \texttt{yield} statement in the process generators
\textbf{must} return an event object. The process will continue when the
event happens. Let's look at some simple examples.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}107}]:} \PY{k+kn}{import} \PY{n+nn}{random}
\PY{k+kn}{import} \PY{n+nn}{simpy}
\end{Verbatim}
\subsubsection{Example 1 - Reneging
customers}\label{example-1---reneging-customers}
Consider the process of a customer that enters a queue, waits for 1.3
time units and then exits the queue. The process step of waiting is
implemented by returning a \texttt{timeout} event object. The event will
occur when the specified time has passed. We will assume that if the
customer is still in the queue when the time has passed, then she will
get impatient and exit the system. Otherwise, we assume that the customer
has already been served or is currently being served.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}108}]:} \PY{k}{def} \PY{n+nf}{reneging\PYZus{}customer\PYZus{}proc}\PY{p}{(}\PY{n}{env}\PY{p}{,} \PY{n}{name}\PY{p}{,} \PY{n}{patience}\PY{p}{,} \PY{n}{queue}\PY{p}{)}\PY{p}{:}
\PY{n+nb}{print}\PY{p}{(} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+si}{\PYZpc{}s}\PY{l+s+s2}{ with patience }\PY{l+s+si}{\PYZpc{}f}\PY{l+s+s2}{ enters the queue at time }\PY{l+s+si}{\PYZpc{}f}\PY{l+s+s2}{\PYZdq{}} \PY{o}{\PYZpc{}} \PY{p}{(}\PY{n}{name}\PY{p}{,} \PY{n}{patience}\PY{p}{,} \PY{n}{env}\PY{o}{.}\PY{n}{now}\PY{p}{)} \PY{p}{)}
\PY{n}{queue}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{name}\PY{p}{)} \PY{c+c1}{\PYZsh{} Customers are identified by name, so all names should be unique}
\PY{k}{yield} \PY{n}{env}\PY{o}{.}\PY{n}{timeout}\PY{p}{(}\PY{l+m+mf}{1.3}\PY{p}{)}
\PY{k}{if} \PY{n}{name} \PY{o+ow}{in} \PY{n}{queue}\PY{p}{:}
\PY{n}{queue}\PY{o}{.}\PY{n}{remove}\PY{p}{(}\PY{n}{name}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+si}{\PYZpc{}s}\PY{l+s+s2}{ got impatient and exited the queue at time }\PY{l+s+si}{\PYZpc{}f}\PY{l+s+s2}{\PYZdq{}} \PY{o}{\PYZpc{}} \PY{p}{(}\PY{n}{name}\PY{p}{,} \PY{n}{env}\PY{o}{.}\PY{n}{now}\PY{p}{)} \PY{p}{)}
\PY{n}{env} \PY{o}{=} \PY{n}{simpy}\PY{o}{.}\PY{n}{Environment}\PY{p}{(}\PY{p}{)}
\PY{n}{queue} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\PY{n}{env}\PY{o}{.}\PY{n}{process}\PY{p}{(}\PY{n}{reneging\PYZus{}customer\PYZus{}proc}\PY{p}{(}\PY{n}{env}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Oscar}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+m+mf}{1.3}\PY{p}{,} \PY{n}{queue}\PY{p}{)}\PY{p}{)}
\PY{n}{env}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Oscar with patience 1.300000 enters the queue at time 0.000000
Oscar got impatient and exited the queue at time 1.300000
\end{Verbatim}
\paragraph{Customer generator}\label{customer-generator}
We will need a process that generates customers that enter the system.
Typically, the time between arrivals is random. Here we assume the time
between arrivals to be exponentially distributed, with the mean given by
the \texttt{timeBetween} argument.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}109}]:} \PY{k}{def} \PY{n+nf}{customer\PYZus{}generator\PYZus{}proc}\PY{p}{(}\PY{n}{env}\PY{p}{,} \PY{n}{numberOfCustomers}\PY{p}{,} \PY{n}{timeBetween}\PY{p}{,} \PY{n}{queue}\PY{p}{,} \PY{n}{newArrivalEvents}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{\PYZdq{}\PYZdq{}\PYZdq{} Will generate a fixed number of customers, with random time between arrivals.\PYZdq{}\PYZdq{}\PYZdq{}}
\PY{n}{k} \PY{o}{=} \PY{l+m+mi}{0}
\PY{k}{while} \PY{n}{k}\PY{o}{\PYZlt{}}\PY{n}{numberOfCustomers}\PY{p}{:}
\PY{k}{yield} \PY{n}{env}\PY{o}{.}\PY{n}{timeout}\PY{p}{(} \PY{n}{random}\PY{o}{.}\PY{n}{expovariate}\PY{p}{(}\PY{l+m+mf}{1.0}\PY{o}{/}\PY{n}{timeBetween}\PY{p}{)} \PY{p}{)}
\PY{n}{k} \PY{o}{+}\PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{env}\PY{o}{.}\PY{n}{process}\PY{p}{(} \PY{n}{reneging\PYZus{}customer\PYZus{}proc}\PY{p}{(}\PY{n}{env}\PY{p}{,} \PY{n}{name} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Customer\PYZhy{}}\PY{l+s+si}{\PYZpc{}d}\PY{l+s+s2}{\PYZdq{}} \PY{o}{\PYZpc{}}\PY{k}{k}, patience = 1.3, queue = queue) )
\PY{k}{while} \PY{n}{newArrivalEvents} \PY{o}{!=} \PY{p}{[}\PY{p}{]}\PY{p}{:}
\PY{n}{ev} \PY{o}{=} \PY{n}{newArrivalEvents}\PY{o}{.}\PY{n}{pop}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)}
\PY{c+c1}{\PYZsh{} The newArrivalEvents list contains events that servers are waiting for in order to proceed.}
\PY{c+c1}{\PYZsh{} What they are waiting for is for a new customer to arrive, so trigger the event}
\PY{n}{ev}\PY{o}{.}\PY{n}{succeed}\PY{p}{(}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Triggering arrival event}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{env} \PY{o}{=} \PY{n}{simpy}\PY{o}{.}\PY{n}{Environment}\PY{p}{(}\PY{p}{)}
\PY{n}{queue} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\PY{n}{waitingForArrivalEvents} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\PY{n}{env}\PY{o}{.}\PY{n}{process}\PY{p}{(} \PY{n}{customer\PYZus{}generator\PYZus{}proc}\PY{p}{(}\PY{n}{env}\PY{p}{,} \PY{l+m+mi}{8}\PY{p}{,} \PY{l+m+mf}{0.8}\PY{p}{,} \PY{n}{queue}\PY{p}{,} \PY{n}{waitingForArrivalEvents}\PY{p}{)} \PY{p}{)}
\PY{n}{env}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Customer-1 with patience 1.300000 enters the queue at time 1.468228
Customer-2 with patience 1.300000 enters the queue at time 2.243378
Customer-1 got impatient and exited the queue at time 2.768228
Customer-3 with patience 1.300000 enters the queue at time 3.056802
Customer-2 got impatient and exited the queue at time 3.543378
Customer-3 got impatient and exited the queue at time 4.356802
Customer-4 with patience 1.300000 enters the queue at time 5.027282
Customer-4 got impatient and exited the queue at time 6.327282
Customer-5 with patience 1.300000 enters the queue at time 7.311952
Customer-6 with patience 1.300000 enters the queue at time 7.859252
Customer-7 with patience 1.300000 enters the queue at time 8.178111
Customer-8 with patience 1.300000 enters the queue at time 8.350823
Customer-5 got impatient and exited the queue at time 8.611952
Customer-6 got impatient and exited the queue at time 9.159252
Customer-7 got impatient and exited the queue at time 9.478111
Customer-8 got impatient and exited the queue at time 9.650823
\end{Verbatim}
\paragraph{Service process}\label{service-process}
There is a service process that takes care of the customers in the
queue. The time to serve a single customer is exponentially distributed
with mean 1.3 time units. The service process takes customers out of
the queue on a first come, first served basis (a FIFO queue). If there
are no customers, the server process will wait for the event that a new
customer arrives.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}110}]:} \PY{k}{def} \PY{n+nf}{service\PYZus{}proc}\PY{p}{(}\PY{n}{env}\PY{p}{,} \PY{n}{serviceTime}\PY{p}{,} \PY{n}{queue}\PY{p}{,} \PY{n}{arrivalEvents}\PY{p}{)}\PY{p}{:}
\PY{k}{while} \PY{k+kc}{True}\PY{p}{:}
\PY{k}{while} \PY{n}{queue} \PY{o}{!=} \PY{p}{[}\PY{p}{]}\PY{p}{:}
\PY{n}{customer} \PY{o}{=} \PY{n}{queue}\PY{o}{.}\PY{n}{pop}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)} \PY{c+c1}{\PYZsh{} Take out the first customer}
\PY{k}{yield} \PY{n}{env}\PY{o}{.}\PY{n}{timeout}\PY{p}{(} \PY{n}{random}\PY{o}{.}\PY{n}{expovariate}\PY{p}{(}\PY{l+m+mf}{1.0}\PY{o}{/}\PY{n}{serviceTime}\PY{p}{)} \PY{p}{)}
\PY{n+nb}{print}\PY{p}{(} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+si}{\PYZpc{}s}\PY{l+s+s2}{ served at time }\PY{l+s+si}{\PYZpc{}f}\PY{l+s+s2}{\PYZdq{}} \PY{o}{\PYZpc{}}\PY{p}{(}\PY{n}{customer}\PY{p}{,} \PY{n}{env}\PY{o}{.}\PY{n}{now}\PY{p}{)} \PY{p}{)}
\PY{n}{newArrivalEv} \PY{o}{=} \PY{n}{env}\PY{o}{.}\PY{n}{event}\PY{p}{(}\PY{p}{)}
\PY{n}{arrivalEvents}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{newArrivalEv}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Service process waiting for a new arrival}\PY{l+s+s2}{\PYZdq{}} \PY{p}{)}
\PY{k}{yield} \PY{n}{newArrivalEv} \PY{c+c1}{\PYZsh{} Wait for the arrival event to be triggered. This is done in customer\PYZus{}generator\PYZus{}proc}
\PY{n}{env} \PY{o}{=} \PY{n}{simpy}\PY{o}{.}\PY{n}{Environment}\PY{p}{(}\PY{p}{)}
\PY{n}{queue} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\PY{n}{waitingForArrivalEvents} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\PY{n}{env}\PY{o}{.}\PY{n}{process}\PY{p}{(} \PY{n}{service\PYZus{}proc}\PY{p}{(}\PY{n}{env}\PY{p}{,} \PY{l+m+mf}{1.3}\PY{p}{,} \PY{n}{queue}\PY{p}{,} \PY{n}{waitingForArrivalEvents}\PY{p}{)} \PY{p}{)}
\PY{n}{env}\PY{o}{.}\PY{n}{process}\PY{p}{(} \PY{n}{customer\PYZus{}generator\PYZus{}proc}\PY{p}{(}\PY{n}{env}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mf}{0.8}\PY{p}{,} \PY{n}{queue}\PY{p}{,} \PY{n}{waitingForArrivalEvents}\PY{p}{)} \PY{p}{)}
\PY{n}{env}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Service process waiting for a new arrival
Triggering arrival event
Customer-1 with patience 1.300000 enters the queue at time 2.193581
Customer-1 served at time 2.324042
Service process waiting for a new arrival
Triggering arrival event
Customer-2 with patience 1.300000 enters the queue at time 3.629634
Customer-3 with patience 1.300000 enters the queue at time 3.957301
Customer-2 served at time 4.996579
Customer-4 with patience 1.300000 enters the queue at time 5.474184
Customer-4 got impatient and exited the queue at time 6.774184
Customer-3 served at time 7.580712
Service process waiting for a new arrival
\end{Verbatim}
\subsubsection{Exercise: What is the probability that a customer will
leave the system without getting
served?}\label{exercise-what-is-the-probability-that-a-customer-will-leave-the-system-without-getting-served}
The code for the simulation model above with reneging customers is
repeated here below, but without the print statements. Think about how
you can record the number of customers that leave without being served
in the simulation. Use this to answer the question.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}104}]:} \PY{k}{def} \PY{n+nf}{reneging\PYZus{}customer\PYZus{}proc}\PY{p}{(}\PY{n}{env}\PY{p}{,} \PY{n}{name}\PY{p}{,} \PY{n}{patience}\PY{p}{,} \PY{n}{queue}\PY{p}{)}\PY{p}{:}
\PY{n}{queue}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{name}\PY{p}{)} \PY{c+c1}{\PYZsh{} Customers are identified by name, so all names should be unique}
\PY{k}{yield} \PY{n}{env}\PY{o}{.}\PY{n}{timeout}\PY{p}{(}\PY{l+m+mf}{1.3}\PY{p}{)}
\PY{k}{if} \PY{n}{name} \PY{o+ow}{in} \PY{n}{queue}\PY{p}{:}
\PY{n}{queue}\PY{o}{.}\PY{n}{remove}\PY{p}{(}\PY{n}{name}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{customer\PYZus{}generator\PYZus{}proc}\PY{p}{(}\PY{n}{env}\PY{p}{,} \PY{n}{numberOfCustomers}\PY{p}{,} \PY{n}{timeBetween}\PY{p}{,} \PY{n}{queue}\PY{p}{,} \PY{n}{newArrivalEvents}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{\PYZdq{}\PYZdq{}\PYZdq{} Will generate a fixed number of customers, with random time between arrivals.\PYZdq{}\PYZdq{}\PYZdq{}}
\PY{n}{k} \PY{o}{=} \PY{l+m+mi}{0}
\PY{k}{while} \PY{n}{k}\PY{o}{\PYZlt{}}\PY{n}{numberOfCustomers}\PY{p}{:}
\PY{k}{yield} \PY{n}{env}\PY{o}{.}\PY{n}{timeout}\PY{p}{(} \PY{n}{random}\PY{o}{.}\PY{n}{expovariate}\PY{p}{(}\PY{l+m+mf}{1.0}\PY{o}{/}\PY{n}{timeBetween}\PY{p}{)} \PY{p}{)}
\PY{n}{k} \PY{o}{+}\PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{env}\PY{o}{.}\PY{n}{process}\PY{p}{(} \PY{n}{reneging\PYZus{}customer\PYZus{}proc}\PY{p}{(}\PY{n}{env}\PY{p}{,} \PY{n}{name} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Customer\PYZhy{}}\PY{l+s+si}{\PYZpc{}d}\PY{l+s+s2}{\PYZdq{}} \PY{o}{\PYZpc{}}\PY{k}{k}, patience = 1.3, queue = queue) )
\PY{k}{while} \PY{n}{newArrivalEvents} \PY{o}{!=} \PY{p}{[}\PY{p}{]}\PY{p}{:}
\PY{n}{ev} \PY{o}{=} \PY{n}{newArrivalEvents}\PY{o}{.}\PY{n}{pop}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)}
\PY{c+c1}{\PYZsh{} The newArrivalEvents list contains events that servers are waiting for in order to proceed.}
\PY{c+c1}{\PYZsh{} What they are waiting for is for a new customer to arrive, so trigger the event}
\PY{n}{ev}\PY{o}{.}\PY{n}{succeed}\PY{p}{(}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{service\PYZus{}proc}\PY{p}{(}\PY{n}{env}\PY{p}{,} \PY{n}{serviceTime}\PY{p}{,} \PY{n}{queue}\PY{p}{,} \PY{n}{arrivalEvents}\PY{p}{)}\PY{p}{:}
\PY{k}{while} \PY{k+kc}{True}\PY{p}{:}
\PY{k}{while} \PY{n}{queue} \PY{o}{!=} \PY{p}{[}\PY{p}{]}\PY{p}{:}
\PY{n}{customer} \PY{o}{=} \PY{n}{queue}\PY{o}{.}\PY{n}{pop}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)} \PY{c+c1}{\PYZsh{} Take out the first customer}
\PY{k}{yield} \PY{n}{env}\PY{o}{.}\PY{n}{timeout}\PY{p}{(} \PY{n}{random}\PY{o}{.}\PY{n}{expovariate}\PY{p}{(}\PY{l+m+mf}{1.0}\PY{o}{/}\PY{n}{serviceTime}\PY{p}{)} \PY{p}{)}
\PY{n}{newArrivalEv} \PY{o}{=} \PY{n}{env}\PY{o}{.}\PY{n}{event}\PY{p}{(}\PY{p}{)}
\PY{n}{arrivalEvents}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{newArrivalEv}\PY{p}{)}
\PY{k}{yield} \PY{n}{newArrivalEv} \PY{c+c1}{\PYZsh{} Wait for the arrival event to be triggered. This is done in customer\PYZus{}generator\PYZus{}proc}
\PY{n}{env} \PY{o}{=} \PY{n}{simpy}\PY{o}{.}\PY{n}{Environment}\PY{p}{(}\PY{p}{)}
\PY{n}{queue} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\PY{n}{waitingForArrivalEvents} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\PY{n}{env}\PY{o}{.}\PY{n}{process}\PY{p}{(} \PY{n}{service\PYZus{}proc}\PY{p}{(}\PY{n}{env}\PY{p}{,} \PY{l+m+mf}{1.3}\PY{p}{,} \PY{n}{queue}\PY{p}{,} \PY{n}{waitingForArrivalEvents}\PY{p}{)} \PY{p}{)}
\PY{n}{env}\PY{o}{.}\PY{n}{process}\PY{p}{(} \PY{n}{customer\PYZus{}generator\PYZus{}proc}\PY{p}{(}\PY{n}{env}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mf}{0.8}\PY{p}{,} \PY{n}{queue}\PY{p}{,} \PY{n}{waitingForArrivalEvents}\PY{p}{)} \PY{p}{)}
\PY{n}{env}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
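One possible approach (a sketch, not the only solution) is to let the
customer process update a shared counter whenever a customer leaves the
queue without having been served; the fraction of reneging customers then
estimates the probability:
\begin{Verbatim}
count = {'reneged': 0, 'total': 0}

def reneging_customer_proc(env, name, patience, queue):
    count['total'] += 1
    queue.append(name)
    yield env.timeout(patience)
    if name in queue:               # still waiting, so leaves unserved
        queue.remove(name)
        count['reneged'] += 1

# Run the simulation with a large number of customers, then estimate
# the probability as count['reneged'] / count['total'].
\end{Verbatim}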
\end{document}
| {
"alphanum_fraction": 0.6226586255,
"avg_line_length": 54.6119266055,
"ext": "tex",
"hexsha": "abfcfe70a27e9b8d22437a2f3e4c61ccb29ecd5b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "020551562de43c10a31e05094f476bbcfdee9090",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "alfkjartan/systemanalys",
"max_forks_repo_path": "doc/Introduction to Python.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "020551562de43c10a31e05094f476bbcfdee9090",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "alfkjartan/systemanalys",
"max_issues_repo_path": "doc/Introduction to Python.tex",
"max_line_length": 357,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "020551562de43c10a31e05094f476bbcfdee9090",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "alfkjartan/systemanalys",
"max_stars_repo_path": "doc/Introduction to Python.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 22057,
"size": 59527
} |
\documentclass[11pt, twoside]{article}
\usepackage[parfill]{parskip}
\usepackage{amsmath}
\usepackage{gensymb}
\usepackage{geometry}
\usepackage{graphicx, caption}
\usepackage{listings}
\usepackage{natbib}
\usepackage{placeins}
\usepackage{rotating}
\usepackage{textgreek}
\lstset{
basicstyle=\small\ttfamily,
columns=flexible,
breaklines=true
}
\geometry{
a4paper,
left=20mm,
top=20mm,
}
\begin{document}
\title{Simulating the ZetaSDR radio}
\author{Jason Leake}
\date{April 2019}
\maketitle
\section{Introduction}
The ZetaSDR radio \citep{ly1gp:2007}, designed by a Lithuanian radio
amateur, call sign LY1GP, is a simple direct conversion radio receiver
with a very small part count and forms the front end for a software
defined radio. The output from it is an analogue in-phase and
quadrature (IQ) signal, which allows the baseband to be extracted from
several different forms of modulation, for example frequency
modulation and quadrature amplitude modulation.
This program simulates the operation of the Tayloe mixer that forms
the core of the ZetaSDR -- the 74HC4052 \citep{Motorola:1996} -- and the
Johnson counter that drives it, producing the response to a simulated
amplitude modulated RF signal. I wrote the program to see what ideal
time domain waveforms should look like on an oscilloscope, as an
informal aid to constructing and optimising the radio.
Whilst this document contains a back-of-an-envelope analysis of the
operation of the mixer, the reader is referred to \cite{Soer:2007} for
a far more thorough examination.
The source code for the simulation is {\texttt program.cpp}, and
{\texttt plot.py} generates the plots from its CSV format output
files. The program works on instantaneous samples of a simulated
incoming signal using the transformations that key components apply to
it, rather than employing a circuit solver.
The program was written for Ubuntu Linux and depends upon several
packages, which can be installed via:
\begin{lstlisting}
sudo apt-get install make g++ librtlfilter-dev python3 python3-matplotlib texlive-all
\end{lstlisting}
Generating this PDF document is achieved by:
\begin{lstlisting}
make zetasdr.pdf
\end{lstlisting}
\begin{sidewaysfigure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{circuit.eps}
\caption{ZetaSDR schematic (after \cite{ly1gp:2007})}
\label{figure:schematic}
\end{sidewaysfigure}
\section{Operation of the ZetaSDR}
The incoming RF electromagnetic signal induces a small electrical
current in the antenna. The current passes across capacitor C6
(Figure~\ref{figure:schematic}) which acts as a high pass filter,
blocking DC from passing from the receiver circuit to the antenna,
which could otherwise cause problems with preamplifiers.
A 2.5 volt bias is applied to the RF signal by the voltage divider
R1/R2 so that the signal oscillates around 2.5 volts instead of 0
volts. This means that the rest of the circuit is dealing with a
signal in the middle of its normal operating range instead of around 0
volts, where the non-linearity in the response would be large, if the
circuit could respond at all.
\subsection{Tayloe quadrature product detector}
The ZetaSDR uses a Tayloe quadrature product detector
\citep{Tayloe:2013, Tayloe:2001}, often informally called a Tayloe
mixer, to demodulate the signal. This is a type of quadrature
sampling detector, a switching mixer that produces quadrature (IQ)
signals. The ZetaSDR implementation uses a 74HC4052 analogue
multiplexer/demultiplexer and capacitors to implement an integrator.
It samples, averages and holds the RF signal on each quarter cycle,
sending the first and third quarter samples to the I output, and the
second and fourth quarter samples to the Q output.
The 74HC4052 is a twin channel analogue multiplexer/demultiplexer.
One channel has an input, X, and switches it to one of four outputs,
X0 $\rightarrow$ X3 depending on the state of the digital inputs A and
B (as shown in Table~\ref{table:74HC4052}). If output X{\it n} is
enabled then the impedance between it and X is around 70 {\ohm}
(depending on supply voltage), and if not enabled then the path is
open circuit. The device is bidirectional, but this is not relevant
here; for the purposes of the ZetaSDR it is enough that the input from
the antenna (X) is switched to one of the four outputs X0
$\rightarrow$ X3. The other 74HC4052 channel has an input called Y,
switching to Y0 $\rightarrow$ Y3. The circuit uses both channels in
parallel to halve the impedance seen across the device from about 70
{\ohm} to 35 {\ohm}.
\begin{table}[ht]
\center
\begin{tabular}{|l| l| l| l|}
\hline
A & B & X & Y \\
\hline
\hline
0 &0& X0& Y0\\
0 &1& X1& Y1\\
1 &0& X2& Y2\\
1 &1& X3& Y3\\
\hline
\end{tabular}
\caption{Operation of 74HC4052}
\label{table:74HC4052}
\end{table}
The inputs A and B are driven by a Johnson counter (also called a
twisted ring counter) made from two D-type flip flops. This provides
two binary signals, connected to A and B, that change on every clock
cycle. The clock is supplied by the QG2 local oscillator. The counter
operates as a 2 bit wide shift register, with the last bit value
being inverted and fed into the first bit of the shift register on
each clock cycle, so generating a repeating sequence 00, 01, 11, 10.
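The counter sequence is easy to reproduce in a few lines of Python (a
sketch, independent of the simulation program):
\begin{lstlisting}
# Two-bit Johnson counter: a shift register whose last bit is inverted
# and fed back into the first bit on each clock edge.
q1, q2 = 0, 0
for clock in range(8):
    print(q2, q1)           # prints the repeating sequence 00, 01, 11, 10
    q1, q2 = 1 - q2, q1     # shift, with inverted feedback
\end{lstlisting}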
The local oscillator runs at four times the frequency of the RF
carrier, producing a new bit pattern in the Johnson counter, and thus
switching the 74HC4052, on every quarter cycle of the RF carrier.
Hence, each of the four pairs of outputs X0/Y0, X1/Y1, X2/Y2 and X3/Y3
receive a quarter cycle of the RF signal. The outputs connect to the
sampling capacitors that charge/discharge as the RF carrier signal is
applied to them but hold their voltage when the corresponding 74HC4052
output is disabled and so in its high impedance state.
The amplitude of the RF carrier depends on the incoming signal
strength and the quality of the antenna and preamplifier, so I picked
an arbitrary value of 1 mV for the simulation program.
The waveform shapes produced by the simulation will be the same for
smaller voltages, albeit with a smaller amplitude since the simulation
does not model the noise in the system.
Since the local oscillator output is the Johnson counter clock, the
flip flops making up the counter change when the local oscillator
signal transitions from a voltage corresponding to logic 0 to logic 1.
The simulation sets the logic 1 transition to a conventional 2.4
volts, and since the local oscillator has a swing of 5 volts, the
clock occurs just below the midpoint on the local oscillator upswing.
Because the Johnson counter produces the sequence 00, 01, 11, 10, the
first and third quarters of the RF carrier cycle are selected using
outputs X0 and X3 respectively, and the second and fourth using X1 and
X2.
The voltages on the sampling capacitors C2 and C3 are shown in
Figure~\ref{figure:C2C3simple}. The local oscillator is set to have a
range of 0--5 volts, and have its 0{\degree} point at the 0{\degree},
90{\degree}, 180{\degree} and 270{\degree} positions of the RF
carrier. In this idealised simulation the Johnson counter therefore
changes state very slightly before this point, when the local
oscillator voltage is rising through 2.4 volts.
In practice, it is unlikely that the local oscillator will be in phase
with the RF carrier, and there will be small delays in the Johnson
counter logic and the 74HC4052 operation.
The two sampling capacitors are also connected to the differential
inputs of an active low pass filter that both amplifies and filters
the difference between the two voltages. In the simulation, the low
pass filters have an infinite input impedance. The final output
voltage is shown on the lower plot in Figure~\ref{figure:C2C3simple}
-- the effect of taking the difference between the two signals, which
will normally be similar but of opposite polarity, is to produce a
signal of double the size of each. The corresponding voltages on the
other pair of capacitors, C4 and C5, are shown in
Figure~\ref{figure:C4C5simple}.
In the \cite{ly1gp:2007} schematic, a 28.322 MHz local oscillator is
specified, but the simulation uses a 28 MHz oscillator and hence a 7
MHz radio signal.
\begin{figure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{c2c3_unmodulated_voltage.png}
\caption{Voltage on C2 and C3, ZetaSDR, when the local oscillator is
in phase with the RF signal. The 74HC4052 switching leads the RF
signal phase very slightly because its local oscillator derived
clock changes logic state just below the midpoint on the local
oscillator upswing.}
\label{figure:C2C3simple}
\end{figure}
\begin{figure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{c4c5_unmodulated_voltage.png}
\caption{Voltage on C4 and C5, ZetaSDR, when the local oscillator is
in phase with the RF signal}
\label{figure:C4C5simple}
\end{figure}
The sharp transitions and the short-lived spikes will be attenuated by
the active low pass filter stage. Hence, the demodulator is producing
an output where the I signal is sampled at two points in the carrier
cycle 180{\degree} apart, and the Q signal is the signal sampled
90{\degree} on from those points in much the same way that the
multiplication by the respective local oscillator does it in a
conventional multiplying IQ mixer.
Hence, if the amplitude or phase of the signal does not change rapidly
during the sampling and we ignore the quarter cycles where the voltage
on the capacitors is changing, then for the I signal, the voltage at
capacitor C2 in Figure~\ref{figure:C2C3simple} is
\begin{equation*}
s_1 = A cos(\theta) sin({\omega_b}t)
\end{equation*}
where:\\
\\
${\omega_b}$ is the baseband frequency in radians per second\\
$\theta$ is the phase difference between the carrier and the Johnson
counter transitions (and hence the 74HC4052 switching)
The voltage across C3 is:
\begin{align*}
s_2 & = A cos(\theta) sin(\pi + {\omega_b}t) \\
\\
&= -A cos(\theta) sin({\omega_b}t)
\end{align*}
Because the active filter is the difference between the two voltages,
the filter output is:
\begin{equation}\label{eqn:tayloei}
I = s_1 - s_2 = 2 A cos(\theta) sin({\omega_b}t)
\end{equation}
Hence, the difference into the active low pass filter, ignoring the
high frequency portion which results when one of the capacitors is
tracking the output from the 74HC4052 rather than holding its voltage,
is $2A cos(\theta) sin({\omega_b}t)$.
By the same reasoning, the Q signal formed from the voltage difference
between C4 and C5 is:
\begin{equation}\label{eqn:tayloeq}
Q = 2 A sin(\theta) sin({\omega_b}t)
\end{equation}
\begin{figure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{c2c3_modulated_voltage.png}
\caption{Voltage on C2 and C3 in the ZetaSDR, when the local
oscillator is in phase with the RF carrier, and the 7 MHz carrier
is amplitude modulated at 100 kHz}
\label{figure:C2C3mod}
\end{figure}
\begin{figure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{c4c5_modulated_voltage.png}
\caption{Voltage on C4 and C5, when the local oscillator is in phase
with the RF carrier, and the 7 MHz carrier is amplitude modulated
at 100 kHz}
\label{figure:C4C5mod}
\end{figure}
The corresponding plots for a modulated signal are shown in
Figures~\ref{figure:C2C3mod} and \ref{figure:C4C5mod}. Many more RF
cycles are shown in these plots, and the 7 MHz carrier is amplitude
modulated at 100 kHz. Although this is above the audio range, it
allows several AM cycles to be accommodated in the plots.
\begin{figure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{c2c3_unmodulated_voltage_35.png}
\caption{Voltage on C2 and C3 in the ZetaSDR, when the local
oscillator leads the RF carrier by 35{\degree}}
\label{figure:C2C3phase}
\end{figure}
\begin{figure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{c4c5_unmodulated_voltage_35.png}
\caption{Voltage on C4 and C5 in the ZetaSDR, when the local
oscillator leads the RF carrier by 35{\degree}}
\label{figure:C4C5phase}
\end{figure}
In practice, the 74HC4052 switching will usually be more out of phase
with the RF carrier signal, and so something such as
Figure~\ref{figure:C2C3phase} (and Figure~\ref{figure:C4C5phase} for
the other pair of capacitors) is more likely. The modulated versions
of these plot, over a larger number of carrier cycles, are shown in
Figures~\ref{figure:C2C3modphase} and \ref{figure:C4C5modphase} are
obtained.
Figure~\ref{figure:zetasdr_low_pass_phase} shows the I and Q signals
through a low pass filter and then combining them to extract the
baseband. Because of the unusually high modulation frequency and
signal strength used in the simulation, a 500 kHz $2^{nd}$ order zero
gain low pass Butterworth filter is used instead of the higher gain
and lower frequency cutoff active low pass filters in the ZetaSDR
radio.
\begin{figure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{zetasdr_modulated_35_va.png}
\caption{Voltage on C2 and C3 in the ZetaSDR, when the local
oscillator leads the carrier by 35{\degree}, and the 7 MHz carrier
is amplitude modulated at 100 kHz}
\label{figure:C2C3modphase}
\end{figure}
\begin{figure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{zetasdr_modulated_35_vb.png}
\caption{Voltage on C4 and C5 in the ZetaSDR, when the local
oscillator leads the carrier by 35{\degree}, and the 7 MHz carrier
is amplitude modulated at 100 kHz}
\label{figure:C4C5modphase}
\end{figure}
\begin{figure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{zetasdr_modulated_35_d.png}
\caption{ZetaSDR response to a 100 kHz amplitude modulated signal,
when the local oscillator is 35{\degree} ahead of the RF
carrier. Because of the high input signal amplitude and the high
modulation frequency, the active low pass filters have been
replaced with a $2^{nd}$ order low pass filter, but with unity
gain and a cutoff frequency of 500 kHz.}
\label{figure:zetasdr_low_pass_phase}
\end{figure}
\section{Comparison with an ideal multiplying IQ mixer}
A sinusoidally amplitude modulated RF signal, double-sideband with
suppressed carrier, is described by:
\begin{equation}
S = A sin({\omega_b}t) sin({\omega_c}t)
\end{equation}
where:\\
\\
$A$ is the signal amplitude\\
$\omega_b$ is the baseband frequency\\
$\omega_c$ is the carrier frequency \\
$t$ is time\\
In a conventional multiplying IQ mixer, the I signal is produced by
mixing with a local oscillator with a phase difference $\theta$:
\begin{align*}
I =& A sin({\omega_b}t) sin({\omega}_ct)sin({\omega}_ct + \theta) \\
\\
  &= A sin({\omega_b}t) \frac{cos(\theta) - cos(2 {\omega}_ct + \theta)}{2}
  \\
  &= \frac{A sin({\omega_b}t) cos(\theta)}{2} - \frac{A sin({\omega_b}t) cos(2 {\omega_c}t + \theta)}{2}
\end{align*}
Because
\begin{equation*}
sin(\alpha) sin(\beta) = \frac{cos(\alpha - \beta) - cos(\alpha + \beta)}{2}
\end{equation*}
The Q signal is produced by mixing the incoming RF signal with a
similar local oscillator signal that is 90{\degree} out of phase with
the I local oscillator:
\begin{align*}
Q& = A sin({\omega_b}t) cos({\omega}_ct) sin({\omega}_ct + \theta) \\
\\
& = A sin({\omega_b}t) \frac{sin(2{\omega}_ct + \theta) + sin(\theta)}{2}\\
\\
& = \frac{A sin({\omega_b}t)sin(2{\omega}_ct + \theta)}{2} + \frac{A sin({\omega_b}t)sin(\theta)}{2}\\
\end{align*}
Because
\begin{equation*}
cos(\alpha)sin(\beta) = \frac{sin(\alpha + \beta) - sin(\alpha - \beta)}{2}
\end{equation*}
In both cases, a low pass filter will remove the sum portion of the
signal at frequency $2{\omega_c}$ leaving only the difference
frequency:
\begin{equation*}
I = \frac{A sin({\omega_b}t)cos(\theta)}{2}
\end{equation*}
\begin{equation*}
Q = \frac{A sin({\omega_b}t)sin(\theta)}{2}
\end{equation*}
These two equations are of the same form as
Equations~\ref{eqn:tayloei} and \ref{eqn:tayloeq}, the ones for the
Tayloe quadrature product detector, demonstrating that it operates as
an IQ mixer.
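As a quick numerical sanity check of the two product-to-sum identities
used above (a small Python sketch, separate from the simulation program):
\begin{lstlisting}
import numpy as np

a, b = 0.7, 1.9      # two arbitrary angles in radians
print(np.isclose(np.sin(a)*np.sin(b),
                 (np.cos(a - b) - np.cos(a + b))/2))   # True
print(np.isclose(np.cos(a)*np.sin(b),
                 (np.sin(a + b) - np.sin(a - b))/2))   # True
\end{lstlisting}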
The results of the simulation of an ideal multiplying IQ mixer are
shown in
Figures~\ref{figure:iq_mod}--\ref{figure:iq_mod_low_pass_phase} for
comparison with the Tayloe detector simulation plots above.
Like the IQ mixer output, the Tayloe detector in the ZetaSDR produces
sum and difference frequencies. The high frequency portion of the
detector output signal is at $2\omega_c$, since the rapidly varying
portion goes through two cycles for each cycle of the RF carrier, although in
this case there are additional harmonics. The detector capacitors are
intended to form an RC low pass filter with the resistance through the
74HC4052 to remove the sum frequency component (the Tayloe mixer
minimises the use of resistors to reduce noise). However, the ZetaSDR
has an additional active low pass filter, so this filter is redundant,
and for this reason the detector capacitors can be deleted.
The cutoff frequency of the RC circuit is about 85 kHz if the
aggregate resistance through the pair of 74HC4052 channels in parallel
is 35 {\ohm} and the impedance of the antenna is 50 {\ohm}.
\cite{Tayloe:2013} says that the effective resistance for filter
calculations is quadrupled if the signal is only connected for a
quarter of the time, but my simulation works at a small time step
level, so it still sees the ``instantaneous'' cutoff at 85 kHz when the
capacitor is connected to the signal, and the capacitor voltage is
frozen when it is disconnected.
The modulation frequencies used by the simulation are 83 kHz and 100
kHz, a not entirely ideal compromise between not being too strongly
affected by this RC filter and being a sufficiently high frequency to
allow the plots to show at least one modulation cycle.
Perhaps the greatest weakness of the design is that it relies upon the
mixer to emphasise the tuned frequency over adjacent ones. This may
not be such a problem in practice if the antenna can be tuned.
But to investigate this further, Figures~\ref{figure:adjacent} and
\ref{figure:tuned_adjacent} show simulations for a pair of signals
close to each other, with the ZetaSDR radio tuned to one or the other
signal. The signal parameters are shown in
Table~\ref{table:adjacent}. The conclusion from these simulations is
that the adjacent signal introduces very considerable distortion on
the demodulated signal that the radio is tuned to, although
Figures~\ref{figure:iq_adjacent} and \ref{figure:iq_tuned_adjacent}
show that an ideal IQ mixer without any additional tuning capability
also produces similarly distorted results.
\begin{table}[ht]
\center
\begin{tabular}{| l| c| r |}
\hline
Signal & Carrier & Amplitude \\
& frequency & modulation \\
&& frequency \\
\hline
\hline
Signal 1 & 7 MHz & 100 kHz \\
Signal 2 & 7.5 MHz & 83 kHz\\
\hline
\end{tabular}
\caption{Signals used in examining adjacent signal response}
\label{table:adjacent}
\end{table}
\begin{figure}
\center \captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{iq_modulated_0.png}
\caption{I/Q outputs using an ideal multiplying IQ mixer, when the
local oscillator is in phase with the RF carrier, and the 7 MHz
carrier is amplitude modulated at 100 kHz}
\label{figure:iq_mod}
\end{figure}
\begin{figure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{iq_modulated_35.png}
\caption{I/Q outputs using an ideal multiplying IQ mixer when the
local oscillator leads the carrier by 35{\degree}, and the 7 MHz
carrier is amplitude modulated at 100 kHz}
\label{figure:iq_modphase}
\end{figure}
\begin{sidewaysfigure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{iq_modulated_35_d.png}
\caption{I/Q outputs using an ideal multiplying IQ mixer, when the
local oscillator is 35{\degree} ahead of the RF carrier, and with
the I/Q signals passed through a $2^{nd}$ order low pass filter
with a cutoff of 400 kHz to remove the $2{\omega_c}$ signal.
Compare this ideal case with
Figure~\ref{figure:zetasdr_low_pass_phase}.}
\label{figure:iq_mod_low_pass_phase}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{zetasdr_adjacent_35.png}
\caption{I and Q outputs from the ZetaSDR with the local oscillator
starting 35{\degree} ahead of the carrier. The second order low
pass filter has a cutoff of 400 kHz. The signals present are described
in Table~\ref{table:adjacent}, with the local oscillator tuned to
Signal 1.}
\label{figure:adjacent}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\center \captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{zetasdr_tuned_adjacent_35.png}
\caption{I and Q outputs from the ZetaSDR tuned to Signal 2 in
  Table~\ref{table:adjacent}, with the local oscillator starting
  35{\degree} ahead of the carrier.}
\label{figure:tuned_adjacent}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{iq_adjacent_35_d.png}
\caption{I and Q outputs from an ideal multiplying IQ mixer with the
local oscillator tuned to Signal 1 in Table~\ref{table:adjacent},
  and starting 35{\degree} ahead of the carrier.}
\label{figure:iq_adjacent}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\center
\captionsetup{width=.8\linewidth}
\includegraphics[width=.8\linewidth]{iq_tuned_adjacent_35_d.png}
\caption{I and Q outputs from an ideal multiplying IQ mixer. The local
  oscillator is tuned to Signal 2 in Table~\ref{table:adjacent}, and
  starts 35{\degree} ahead of the carrier.}
\label{figure:iq_tuned_adjacent}
\end{sidewaysfigure}
\begin{thebibliography}{9}
\bibitem[LY1GP(2007)]{ly1gp:2007}
LY1GP (2007), {\it ZetaSDR for 40m band}
\\\texttt{http://www.qrz.lt/ly1gp/SDR/}
\bibitem[Motorola(1996)]{Motorola:1996}
Motorola (1996),
{\it Analog Multiplexers/Demultiplexers High–Performance Silicon–Gate CMOS},
\texttt{http://www.om3bc.com/datasheets/74HC4051.PDF}
\bibitem[Soer(2007)]{Soer:2007}
Soer M (2007),
{\it Analysis and comparison of switch-based frequency converters}, MSc. thesis, University of Twente,
\texttt{https://essay.utwente.nl/58276/1/scriptie\_Soer.pdf}
\bibitem[Tayloe(2001)]{Tayloe:2001}
Tayloe, D (2001), {\it Product detector and method therefor -- United States Patent No 6230000}, United States Patent and Trademark Office,
\\\texttt{https://patentimages.storage.googleapis.com/ed/ec/5f/c214501bb441f1/US6230000.pdf}
\bibitem[Tayloe(2013)]{Tayloe:2013}
Tayloe, D (2013), {\it Ultra Low Noise, High Performance, Zero IF Quadrature Product Detector and Preamplifier},
\\\texttt{https://wparc.us/presentations/SDR-2-19-2013/Tayloe\_mixer\_x3a.pdf}
\end{thebibliography}
\end{document}
\chapter{Hardware and Software}
\chapter{Supplementary Material}
\label{chap:supp}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Chapter Summary
\mbox{}
\vfill
This chapter gives supplementary information on \ldots
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
\section{Supplement to \chap\ \ref{chap:intro}}
\label{sec:chap1supp}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\afterpage{\blankpage}
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[
]{article}
\usepackage{amsmath,amssymb}
\usepackage{lmodern}
\usepackage{iftex}
\ifPDFTeX
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provide euro and other symbols
\else % if luatex or xetex
\usepackage{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
\fi
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
\KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
pdftitle={Student Performance Analysis Report},
pdfauthor={Isabela Lucas Bruxellas \& Tony Liang \& Xue Wang \& Anam Hira},
hidelinks,
pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\usepackage[margin=1in]{geometry}
\usepackage{longtable,booktabs,array}
\usepackage{calc} % for calculating minipage widths
% Correct order of tables after \paragraph or \subparagraph
\usepackage{etoolbox}
\makeatletter
\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
\makeatother
% Allow footnotes in longtable head/foot
\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
\makesavenoteenv{longtable}
\usepackage{graphicx}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
\newlength{\cslhangindent}
\setlength{\cslhangindent}{1.5em}
\newlength{\csllabelwidth}
\setlength{\csllabelwidth}{3em}
\newlength{\cslentryspacingunit} % times entry-spacing
\setlength{\cslentryspacingunit}{\parskip}
\newenvironment{CSLReferences}[2] % #1 hanging-ident, #2 entry spacing
{% don't indent paragraphs
\setlength{\parindent}{0pt}
% turn on hanging indent if param 1 is 1
\ifodd #1
\let\oldpar\par
\def\par{\hangindent=\cslhangindent\oldpar}
\fi
% set entry spacing
\setlength{\parskip}{#2\cslentryspacingunit}
}%
{}
\usepackage{calc}
\newcommand{\CSLBlock}[1]{#1\hfill\break}
\newcommand{\CSLLeftMargin}[1]{\parbox[t]{\csllabelwidth}{#1}}
\newcommand{\CSLRightInline}[1]{\parbox[t]{\linewidth - \csllabelwidth}{#1}\break}
\newcommand{\CSLIndent}[1]{\hspace{\cslhangindent}#1}
\ifLuaTeX
\usepackage{selnolig} % disable illegal ligatures
\fi
\title{Student Performance Analysis Report}
\author{Isabela Lucas Bruxellas \& Tony Liang \& Xue Wang \& Anam Hira}
\date{07/04/2022}
\begin{document}
\maketitle
{
\setcounter{tocdepth}{2}
\tableofcontents
}
\hypertarget{summary}{%
\subsection{Summary}\label{summary}}
In this project, we explore and predict the exam performance of students in the subject of Electrical DC Machines based on their study time by using linear regression (LN) and the K-nearest neighbors (K-NN) algorithm. Study time is chosen for this analysis as we identify it to be the variable with the highest correlation to exam performance. The results obtained in this analysis could help students gain insight into the necessary study times for specific scores as well as help instructors better understand the performance of students.
As a result of our analysis, we find that the root mean squared prediction error (RMSPE) for our LN model is 0.385, while the RMSPE of the K-NN model is 0.284. Although the K-NN model is slightly better than the LN model, both types of regression have a prediction error of about 40\% of the response range (therefore our accuracy is about 60\%). This can be attributed to the fact that exam performance could be affected by other external factors such as health condition, student IQ, stress levels, learning ability, learning environment and many others. It may also indicate that the data set is not big enough to draw a relationship between just these two variables.
The dataset used is the User Knowledge Modeling Data Set of Kahraman, Sagiroglu, and Colak (2009), provided by the UCI Machine Learning Repository. It is also discussed in the following sources: Colak, Sagiroglu, and Kahraman (2008); Kahraman, Sagiroglu, and Colak (2013); Kahraman, Sagiroglu, and Colak (2010); Virvou and Tsiriga (2004); Plant, Ericsson, Hill, and Asberg (2005); and Nonis and Hudson (2010). Full citations are given in our References section.
\hypertarget{introduction}{%
\subsection{Introduction}\label{introduction}}
The research question of this analysis is: What will a student's exam performance be based on their study time?
For this analysis, we use the User Knowledge Modeling Data Set, which records a total of five variables for each student, in addition to the classified knowledge level of the user (UNS):
STG: the degree of study time for goal object materials
SCG: the degree of repetition number of user for goal object materials
STR: the degree of study time of user for related objects with goal object
LPR: the exam performance of user for related objects with goal object
PEG: the exam performance of user for goal objects
These variables were drawn from students' learning-related activities on the web. The data was also already tidy and pre-divided into training and testing subsets.
The choice to analyse the relationship between PEG and STG follows from a correlation analysis performed while preparing the data. In this step we ran a function to compute the correlation of every numeric variable against PEG and determined that STG was the most highly correlated with it. The correlation was also positive, which is consistent with our intuition that increasing the degree of study time would result in higher/better exam performance.
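The screening step can be sketched as follows. This is an illustration only, not the project's source code: the function name, the use of \texttt{dplyr}, and the assumption that the training sheet has already been read into a data frame called \texttt{training} are all ours.

\begin{verbatim}
# Sketch only: find the numeric column most correlated with the target.
library(dplyr)

highest_correlated <- function(data, target = "PEG") {
  numeric_data <- select(data, where(is.numeric))
  correlations <- cor(numeric_data)[, target]   # correlation of each column with PEG
  correlations <- correlations[names(correlations) != target]
  names(which.max(abs(correlations)))           # expected to return "STG" here
}

highest_correlated(training)
\end{verbatim}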
Thus, we pick STG as the predictor (explanatory variable) and PEG as the response variable. We first conduct a regression since our response variable is numerical. Since we are not sure which regression method (linear or K-NN) will be better at fitting our data set, we conduct both analyses and compare/contrast their prediction errors to determine the best model.
\hypertarget{wrangle-data}{%
\subsection{Wrangle data}\label{wrangle-data}}
\hypertarget{summary-data}{%
\subsection{Summary Data}\label{summary-data}}
This section summarizes key characteristics and values of the explanatory variable (STG) and the response variable (PEG).
Table \ref{tab:user-means-table} displays the means of the variables.
\begin{table}
\caption{\label{tab:user-means-table}Table of user means}
\centering
\begin{tabular}[t]{r|r}
\hline
STG & PEG\\
\hline
0.37 & 0.46\\
\hline
\end{tabular}
\end{table}
Table \ref{tab:user-maxima-table} displays the maxima of the variables.
\begin{table}
\caption{\label{tab:user-maxima-table}Value of User maxima}
\centering
\begin{tabular}[t]{r|r}
\hline
STG & PEG\\
\hline
0.99 & 0.93\\
\hline
\end{tabular}
\end{table}
Table \ref{tab:user-minima-table} displays the minima of the variables.
\begin{table}
\caption{\label{tab:user-minima-table}Value of User minima}
\centering
\begin{tabular}[t]{r|r}
\hline
STG & PEG\\
\hline
0 & 0\\
\hline
\end{tabular}
\end{table}
Tables \ref{tab:user-means-table}--\ref{tab:user-minima-table} show that the means are 0.37 for STG and 0.46 for PEG, the maxima are 0.99 for STG and 0.93 for PEG, and the minima are 0 for STG and 0 for PEG.
All of these three summaries show that the values of STG and PEG are close in value and have roughly the same scale. We will not need to scale or standardize the data for linear regression, but it is still good practice to scale the data for K-NN regression, since it is extremely sensitive to differing scales of the variables.
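For illustration, such a preprocessing step could be written as follows. This is a sketch only, assuming a \texttt{recipes}-style workflow and a training data frame named \texttt{train}; neither is taken from the project's code.

\begin{verbatim}
# Sketch only: centre and scale the predictor before K-NN regression.
library(tidymodels)

knn_recipe <- recipe(PEG ~ STG, data = train) %>%
  step_normalize(all_predictors())
\end{verbatim}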
\begin{table}
\caption{\label{tab:user-observations-table}Numbers of Observations}
\centering
\begin{tabular}[t]{r}
\hline
n\\
\hline
258\\
\hline
\end{tabular}
\end{table}
Table \ref{tab:user-observations-table} shows the number of observations.
\hypertarget{data-visualization}{%
\subsection{Data Visualization}\label{data-visualization}}
\begin{figure}
\includegraphics[width=0.5\linewidth]{../results/stats/peg_stg} \caption{Plot of PEG and STG}\label{fig:peg-stg-plot}
\end{figure}
Figure \ref{fig:peg-stg-plot} shows a scatterplot of PEG against STG. It shows that there isn't a clear relationship or pattern between PEG and STG. There is neither a linear/nonlinear nor a positive/negative relationship, as the points are spread out widely on the graph.
Therefore, no evident relationship can be observed between these two variables.
\hypertarget{methods}{%
\subsection{Methods}\label{methods}}
In order to answer our predictive question, we followed a series of steps to perform our data analysis.
We first downloaded our dataset reproducibly (students' exam performance in Electrical DC Machines) from the web. The data set was already split into training and testing sets, so we loaded the respective sheets in. We could observe from the printed tables that the data was already tidy, since each row was a single observation, each column was a single variable, and each value was a single cell. Therefore, we did not need to do any further tidying.
Next, we prepared our dataset by reading the second and third sheets of the downloaded file. We then created a function to select only the numeric columns and create the training dataset using the selected data from sheet 2 and a user-inputted target variable. The function automatically identified the variable with the highest correlation to the target variable. Thus, our training data includes only the column of the target variable and of the variable with the highest correlation to it (PEG and STG). We also created the testing data by selecting only the numeric columns from the third sheet and keeping only the relevant columns determined for the training data.
We then performed relevant summaries of the data for exploratory data analysis. This included finding the means, maxima and minima, number of observations, and rows of missing data of our variables of interest (PEG and STG). This allowed us to get a better picture of the data we were working with. Following this, we created a visualization to see the relationship between our chosen variables, with STG on the x-axis and PEG on the y-axis. After plotting PEG against STG, we could not observe a strong relationship between the two, as we saw that the data points were spread out and did not follow a clear or direct relationship.
To perform the actual data analysis, we used regression to predict a student's exam performance (PEG) based on their study time (STG). Since we were predicting a numerical value instead of a categorical value, we had to use regression to evaluate and create a prediction. From what we had seen from our exploratory data analysis, K-NN regression seemed to be the better choice as it would allow for more flexibility, but we tested the accuracies of both K-NN and linear regression, and compared them to find the best approach. We trained our regression with our training data and assessed its accuracy with our testing data. To assess the accuracy, we calculated the Root Mean Squared Prediction Error (RMSPE) of our model on the test data to see how well our model generalizes to future data. RMSPE is the square root of the mean squared difference between the observed and predicted values over the test observations. This indicates how well our model is able to predict on unseen data.
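Written out, for $n$ test observations with observed values $y_i$ and predicted values $\hat{y}_i$,
\begin{equation*}
\mathrm{RMSPE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(y_i - \hat{y}_i\right)^2}.
\end{equation*}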
To visualize our final results, we plotted our predictions as a line using geom\_smooth overlaid on our testing data to see the relationship between the two. The x-axis is STG, and the y-axis is PEG.
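As a rough sketch of this modelling workflow (again not the project's code: the \texttt{tidymodels} calls, the \texttt{kknn} engine, the data frame names \texttt{train} and \texttt{test}, and the number of neighbours are all assumptions), the two models and their RMSPE could be computed as follows.

\begin{verbatim}
# Sketch only: fit both models on the training data, score on the test data.
library(tidymodels)

lm_fit  <- linear_reg() %>% set_engine("lm") %>%
  fit(PEG ~ STG, data = train)

knn_fit <- nearest_neighbor(neighbors = 5) %>%    # k = 5 is a placeholder choice
  set_engine("kknn") %>%
  set_mode("regression") %>%
  fit(PEG ~ STG, data = train)

rmspe <- function(fitted_model, new_data) {
  preds <- predict(fitted_model, new_data)$.pred  # predictions on unseen data
  sqrt(mean((new_data$PEG - preds)^2))
}

rmspe(lm_fit,  test)   # the report finds 0.385 for linear regression
rmspe(knn_fit, test)   # the report finds 0.284 for K-NN
\end{verbatim}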
\hypertarget{explore-data-analysis-by-two-methods}{%
\subsection{Explore Data Analysis by two methods}\label{explore-data-analysis-by-two-methods}}
\hypertarget{linear-regression}{%
\subsection{Linear Regression}\label{linear-regression}}
\begin{figure}
\includegraphics[width=0.5\linewidth]{../results/model/lm_predictions} \caption{Plot of Linear Regression}\label{fig:lm-regression-plot}
\end{figure}
Figure \ref{fig:lm-regression-plot} shows our linear regression model overlaid on our testing data, with STG on the x-axis and PEG on the y-axis.
We can see that since the data itself is quite spread out, the plotted line of our linear regression cuts through the points, as linear regression usually does. We can see that the points are distributed relatively evenly over and under our fitted line.
Table \ref{tab:lm-rmse-table} shows the RMSE to assess goodness of fit on the training data.
\begin{table}
\caption{\label{tab:lm-rmse-table}LM RMSE table}
\centering
\begin{tabular}[t]{r}
\hline
.estimate\\
\hline
0.261\\
\hline
\end{tabular}
\end{table}
Table \ref{tab:lm-rmspe-table} shows the RMSPE to assess how well the model predicts on the testing data.
\begin{table}
\caption{\label{tab:lm-rmspe-table}LM RMSPE table}
\centering
\begin{tabular}[t]{r}
\hline
.estimate\\
\hline
0.385\\
\hline
\end{tabular}
\end{table}
Tables \ref{tab:lm-rmse-table} and \ref{tab:lm-rmspe-table} show that the RMSE (of our training data) of linear regression is 0.261 and the RMSPE (of our testing data) of the linear regression is 0.385.
\hypertarget{k-nn-regression}{%
\subsection{K-NN Regression}\label{k-nn-regression}}
\begin{figure}
\includegraphics[width=0.5\linewidth]{../results/model/knn_regression_plot} \caption{Plot of K-NN Regression}\label{fig:knn-regression-plot}
\end{figure}
Figure \ref{fig:knn-regression-plot} shows our K-NN regression model overlaid on our testing data, with STG on the x-axis and PEG on the y-axis.
We can see that although the data itself is quite spread out, the plotted line of our K-NN regression tries to follow the data points by producing ``wiggles''. This indicates that it is more flexible than our linear model, since it tries to follow most of the points instead of cutting through them.
However, it is unclear if our model underfits or overfits the data since the testing data points themselves are spread out across the entire graph and there is a lot of randomness in our data.
\begin{table}
\caption{\label{tab:knn-rmse-table}KNN RMSE table}
\centering
\begin{tabular}[t]{l|l|r}
\hline
.metric & .estimator & .estimate\\
\hline
rmse & standard & 0.187\\
\hline
\end{tabular}
\end{table}
Table \ref{tab:knn-rmse-table} shows the RMSE (of training data) of K-NN regression.
\begin{table}
\caption{\label{tab:knn-rmspe-table}KNN RMSPE table}
\centering
\begin{tabular}[t]{l|l|r}
\hline
.metric & .estimator & .estimate\\
\hline
rmse & standard & 0.284\\
\hline
\end{tabular}
\end{table}
Table \ref{tab:knn-rmspe-table} shows the RMSPE (of testing data) of K-NN regression.
Tables \ref{tab:knn-rmse-table} and \ref{tab:knn-rmspe-table} show that the RMSE (of training data) of K-NN regression is 0.187 and the RMSPE (of testing data) of K-NN regression is 0.284.
\hypertarget{comparing-results}{%
\subsection{Comparing Results}\label{comparing-results}}
Looking at the visualizations of both methods, we cannot immediately see which model is better. The linear regression model has a relatively even split of points above and below the fitted line, but our K-NN regression is also flexible and follows the data points. Hence, we must rely on the calculated RMSPE to determine the better model.
As seen in the results section, the RMSE (of our training data) of linear regression is 0.261, and the RMSPE (of our testing data) of the linear regression is 0.385. The RMSE (of training data) of K-NN regression is 0.187, and the RMSPE (of testing data) of K-NN regression is 0.284. By comparing the RMSE of both methods, we can see that our K-NN regression model has a slightly lower RMSE on our training data. By comparing the RMSPE of both methods, we can see that again our K-NN regression model has a slightly lower RMSPE on our testing data. Therefore, our K-NN regression model is slightly more accurate.
Due to the higher accuracy of our K-NN Regression, our final visualization of the best model is shown in Figure \ref{fig:knn-regression-plot}.
\hypertarget{conclusion-discussion}{%
\subsection{Conclusion \& Discussion}\label{conclusion-discussion}}
Before performing the analysis, we expected to find a positive, linear relationship between PEG and STG. The intuition was that as a student spends more time studying, they should perform better. Our analysis, however, found that the RMSPE for our linear regression model is 0.385, while the RMSPE of our K-NN regression is 0.284 (the response variable takes values between 0 and 1). Both of these types of regression have a prediction error of about 40\% of that range (therefore our accuracy is about 60\%). Our K-NN regression model is slightly better than our linear regression due to its lower prediction error, which is consistent with the expectation stated in the Methods section. The overall analysis shows that the model has relatively low accuracy.
This outcome of the analysis was unexpected. However, it is consistent with Figure \ref{fig:peg-stg-plot}, which shows no evidence of a relationship between these two variables.
We had initially expected the accuracy of our K-NN regression to be about 80\%. However, the tested accuracy was only about 60\%. Since there are no missing observations in our data set, there may be several reasons for this difference between the estimated and the actual accuracy. On one hand, this can be attributed to the fact that exam performance could be affected by other external factors such as health condition, student IQ, stress levels, learning ability, etc. that were not in the data set. On the other hand, our data set may not be big enough to directly draw a relationship between just study time and exam performance. Future studies could seek out other forms of prediction that use multiple factors, or find a larger data set with more observations.
In conclusion, keeping in mind the limitations of the analysis and the dataset, one could draw the following from our findings: as study time increases, students generally perform better, but other external factors can also come into play and affect final results.
Similar studies could build upon this one and attempt to identify study methods and efficiency parameters. Some future questions to consider are:
How much time should a student be spending studying to improve exam performance?
How do other factors (such as repetition, knowledge level, etc.) contribute to exam performance?
Can a similar approach be used in the industry to predict workers' performance based on their working time? How?
\hypertarget{references}{%
\subsection*{References}\label{references}}
\addcontentsline{toc}{subsection}{References}
\hypertarget{refs}{}
\begin{CSLReferences}{1}{0}
\leavevmode\vadjust pre{\hypertarget{ref-user-modeling-approach}{}}%
Colak, Ilhami, Seref Sagiroglu, and H. Tolga Kahraman. 2008. \emph{A User Modeling Approach to Web Based Adaptive Educational Hypermedia Systems}. IEEE. \url{https://ieeexplore.ieee.org/document/4725051}.
\leavevmode\vadjust pre{\hypertarget{ref-results-supportive}{}}%
E. Ashby Plant, K. Anders Ericsson, Len Hill, and Kia Asberg. 2005. \emph{Why Study Time Does Not Predict Grade Point Average Across College Students: Implications of Deliberate Practice for Academic Performance}. \url{https://doi.org/10.1016/j.cedpsych.2004.06.001}.
\leavevmode\vadjust pre{\hypertarget{ref-discussion-supportive}{}}%
Sarath A. Nonis, and Gail I. Hudson. 2010. \emph{Academic Performance of College Students: Influence of Time Spent Studying and Working}. \url{https://doi.org/10.3200/JOEB.81.3.151-159}.
\leavevmode\vadjust pre{\hypertarget{ref-student-performance}{}}%
Kahraman, H. Tolga, Seref Sagiroglu, and Ilhami Colak. 2009. {``User Knowledge Modeling Data Set.''} UCI Machine Learning Repository. \url{https://archive.ics.uci.edu/ml/datasets/User+Knowledge+Modeling\#}.
\leavevmode\vadjust pre{\hypertarget{ref-development-adaptive-edu-sys}{}}%
---------. 2010. \emph{Development of Adaptive and Intelligent Web-Based Educational Systems}. IEEE. \url{https://ieeexplore.ieee.org/abstract/document/5612054?casa_token=7H8SL8Adw_MAAAAA:XQdCJf7zvfJu8qtptRvqpyvEC7fuiGU1phE5tjqI_yh7dX9iGerXpv58scy5mBPrWmX6dVaG}.
\leavevmode\vadjust pre{\hypertarget{ref-knowledge-based-systems}{}}%
---------. 2013. \emph{The Development of Intuitive Knowledge Classifier and the Modeling of Domain Dependent Data}. Elsevier Science Publishers B. V. \url{https://doi.org/10.1016/j.knosys.2012.08.009}.
\leavevmode\vadjust pre{\hypertarget{ref-init-student-models}{}}%
Maria Virvou, and Victoria Tsiriga. 2004. \emph{A Framework for the Initialization of Student Models in Web-Based Intelligent Tutoring Systems}. User Modeling and User-Adapted Interaction. \url{https://link.springer.com/article/10.1023/B:USER.0000043396.14788.cc\#citeas}.
\end{CSLReferences}
\end{document}
\documentclass[11pt,a4paper]{book}
\setlength{\headheight}{13.6pt}
\usepackage{SVC}
\graphicspath{{./images}{../images}}
\usepackage{subfiles}
\title{Single-Variable Calculus}
\author{Syed, Danial Haseeb}
\date{}
\begin{document}
\frontmatter
\pagenumbering{Roman}
% Title Page
\subfile{title}
% Copyright Page
\subfile{copyright}
% Contents
\tableofcontents
% Preface
\subfile{chapters/preface}
\mainmatter
% Part 1 - Differentiation
\cleardoublepage
\begingroup
\makeatletter
\let\ps@plain\ps@empty
\part{Differentiation}
\endgroup
% Chapter 1 - Derivatives
\subfile{chapters/chapter1}
%Chapter 2 - Limits and Continuity
\subfile{chapters/chapter2}
% Chapter 3 - Derivative Formulæ
\subfile{chapters/chapter3}
% Chapter 4 - Higher Derivatives
\subfile{chapters/chapter4}
% Chapter 5 - Implicit Differentiation and Inverses
\subfile{chapters/chapter5}
% Chapter 6 - Exponential and Logarithmic Differentiation
% Part 2 - Applications of Differentiation
\cleardoublepage
\begingroup
\makeatletter
\let\ps@plain\ps@empty
\part{Applications of Differentiation}
\endgroup
% Chapter 7 - Linear and Quadratic Approximations
% Chapter 8 - Curve Sketching
% Chapter 9 - Min-Max Problems
% Part 3 - Integration
\cleardoublepage
\begingroup
\makeatletter
\let\ps@plain\ps@empty
\part{Integration}
\endgroup
% Part 4 - Techniques of Integration
\cleardoublepage
\begingroup
\makeatletter
\let\ps@plain\ps@empty
\part{Techniques of Integration}
\endgroup
\backmatter
% Acknowledgments
\chapter{Acknowledgments}
\Blindtext
% Appendix
% End Notes
% Glossary
% Index
% Bibliography
\end{document}
% !TEX root = index.tex
\section{Geometric Meaning of Curvature}
\subsection{Mean Curvature}
The Mean Curvature shows up in physics while studying soap films. At a point on a soap film the difference between the pressure on two sides is proportional to the mean curvature of the surface at that point. This is called the \textbf{Young-Laplace equation}.
\begin{align*}
\Delta (\mbox{pressure}) \propto H
\end{align*}
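For a spherical soap bubble of radius $R$, for instance, both principal curvatures equal $1/R$, so the mean curvature, and with it the pressure difference between the inside and the outside, scales like $1/R$: smaller bubbles sustain a larger pressure jump.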
If the soap film does not bound a volume, for example, if it is bounded by a curve then the pressure on both the sides is the same and hence the mean curvature at every point must be 0. Such surfaces are called \textbf{minimal surfaces}, minimal because these surfaces also have the minimal surface area of all the surfaces bounding the curve. The study of minimal surfaces is a very active area of research in geometry.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.495\textwidth}
\centering
\includegraphics[width=6cm]{Minimal_Surface_1}
\end{subfigure}
\begin{subfigure}[t]{0.495\textwidth}
\centering
\includegraphics[width=6cm]{Minimal_Surface_2}
\end{subfigure}
\caption{Examples of minimal surfaces. The Mean Curvature at \emph{every} non-boundary point is 0 and hence at every point the surface looks like the perfect potato chip. Images from \href{https://en.wikipedia.org/wiki/Minimal_surface}{Wikipedia}.}
\end{figure}
\subsection{Gaussian Curvature}
The Gaussian Curvature is much more subtle and has several interpretations.
\begin{definition}
For a surface $S$ the \textbf{geodesic distance} $d_S(p,q)$ between two points $p,q \in S$ is defined to be the shortest length of the curve on the surface $S$ that connects $p$ to $q$.
\end{definition} For example, on a plane the geodesic distance is simply the Euclidean distance. On a sphere, the geodesic distance between two points is the length of the arc of the great circle connecting them.
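Concretely, for the unit sphere $S^2$ in $3$-space the arc length is just the angle between the two points viewed from the centre, so
\begin{align*}
	d_{S^2}(p,q) = \arccos ( p \cdot q ),
\end{align*}
which never exceeds $\pi$, even though the straight-line distance in the ambient space is at most $2$.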
\begin{figure}[H]
\centering
\includegraphics[width=3cm]{great_circle}
\caption{The geodesic distance between two points on a sphere is the length of the arc of the great circle connecting them. Image from \href{https://en.wikipedia.org/wiki/Great-circle_distance}{Wikipedia}.}
\end{figure}
\begin{definition}
The \textbf{geodesic ball} of radius $r$ centered at a point $p \in S$ is the set of points which are at a geodesic distance of at most $r$ from $p$
\begin{align*}
B_r(p) := \{ x \in S : d_S(x,p) \le r\}
\end{align*}
\end{definition}
We'll assume the following theorem without proof.
\begin{thm}
The Gaussian curvature of $S$ at $p$ equals
\begin{align*}
K &= 3 \lim \limits_{r \rightarrow 0} \dfrac{2 \pi r - \mbox{length of } \partial B_S(p,r) }{\pi r^3}
\end{align*}
\end{thm}
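As a sanity check, consider a sphere of radius $R$. The boundary of a geodesic ball of radius $r$ is a circle of circumference $2 \pi R \sin(r/R)$, and expanding the sine gives
\begin{align*}
	2 \pi r - 2 \pi R \sin(r/R) = \frac{\pi r^3}{3 R^2} + O(r^5),
\end{align*}
so the limit in the theorem equals $3 \cdot \frac{1}{3R^2} = \frac{1}{R^2}$, the expected Gaussian curvature of a sphere of radius $R$.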
\begin{cor}
\label{thm:geodesics}
The Gaussian curvature can be computed by measuring distances on the surface (without knowing anything about the ambient space).
\end{cor}
This leads directly to the next theorem.
\subsubsection{Theorema Egregium}
We can take a sheet of paper and roll it into a cylinder without stretching or compressing the sheet.\footnote{Neglect the \emph{thickness} of the paper.} Such a map is called an \textbf{isometry}.
\begin{definition}
A smooth map $\phi: S \rightarrow S'$ is called an \textbf{isometry} if
\begin{align*}
d_S(p,q) = d_{S'}(\phi(p),\phi(q))
\end{align*}for any two points $p,q \in S$.
\end{definition}
\noindent The map that sends a plane to a cylinder is an example of such an isometry.
\begin{thm}[Theorema Egregium]
\label{thm:theorema}
If there exists an isometry $\phi: S \rightarrow S'$ between two surfaces then the Gaussian curvature of $S$ at $p \in S$ equals the Gaussian curvature of $S'$ at $\phi(p)$.
\end{thm}
\begin{proof}
This is a direct consequence of Corollary \ref{thm:geodesics}. It is possible to measure the Gaussian curvature using only the geodesic distances and geodesic distances are preserved under isometry.
\end{proof}
This theorem is interpreted as saying that the Gaussian curvature is \emph{intrinsic} to a surface.
\subsubsection{Gauss Map}
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{Gauss_map}
\caption{The unit normal vector $\vec n$ defines a map from the surface to the unit sphere $\vec n : S \rightarrow S^2$. Image from \href{https://en.wikipedia.org/wiki/Gauss_map}{Wikipedia}.}
\end{figure}
Let $\vec n$ denote a continuously varying unit normal vector field on the surface $S$. There are two possible choices for $\vec n$; we just pick one. We can think of $\vec n$ as a map, called the \textbf{Gauss map}, from $S$ to the unit sphere $S^2$. Then,
\begin{align*}
K &= \lim \limits_{r \rightarrow 0} \dfrac{\mbox{area of } \vec n (B_r(p)) }{\mbox{area of } B_r(p) }
\end{align*}
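For a sphere of radius $R$ centred at the origin, for example, the outward normal is $\vec n(x) = x/R$, so the Gauss map scales lengths by $1/R$ and areas by $1/R^2$, and the ratio above again gives $K = 1/R^2$.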
\subsubsection{Gauss-Bonnet theorem}
Gaussian curvature has a topological significance as well. If $S$ is a closed surface then
\begin{thm}
The total Gaussian Curvature
\begin{align*}
\int \limits_S K \: dA = 2 \pi \chi(S)
\end{align*}
where $\chi(S)$ denotes the Euler characteristic of the surface.
\end{thm}
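For the sphere of radius $R$ this is easy to check: $K = 1/R^2$ is constant, so the total curvature is $(1/R^2) \cdot 4 \pi R^2 = 4\pi = 2\pi \cdot 2$, matching $\chi(S^2) = 2$. For a torus, $\chi = 0$, so the regions of positive and negative Gaussian curvature must cancel exactly.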
\subsection{Final Remarks}
The methods that we've described only define the various curvatures for \emph{embedded} surfaces. In general, because the Gaussian curvature can be computed by measuring Geodesic distances it is possible to define the Gaussian curvature (but not the principal and mean curvatures) for manifolds with a metric (also called Riemannian manifold).
\documentclass{article}
\usepackage{fancyhdr}
\usepackage{extramarks}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amsfonts}
\usepackage{tikz}
\usepackage{physics}
\usepackage{amssymb}
\usepackage[plain]{algorithm}
\usepackage{algpseudocode}
\usetikzlibrary{automata,positioning}
% Basic Document Settings
%
\topmargin=-0.45in
\evensidemargin=0in
\oddsidemargin=0in
\textwidth=6.5in
\textheight=9.0in
\headsep=0.25in
\linespread{1.1}
\pagestyle{fancy}
\lhead{\hmwkAuthorName}
\chead{\hmwkClass\ : \hmwkTitle}
\rhead{\firstxmark}
\lfoot{\lastxmark}
\cfoot{\thepage}
\renewcommand\headrulewidth{0.4pt}
\renewcommand\footrulewidth{0.4pt}
\setlength\parindent{0pt}
%
% Create Problem Sections
%
\newcommand{\be}{\begin{equation}}
\newcommand{\ee}{\end{equation}}
\newcommand{\bes}{\begin{equation*}}
\newcommand{\ees}{\end{equation*}}
\newcommand{\bea}{\begin{flalign*}}
\newcommand{\eea}{\end{flalign*}}
\newcommand{\enterProblemHeader}[1]{
\nobreak\extramarks{}{Problem \arabic{#1} continued on next page\ldots}\nobreak{}
\nobreak\extramarks{Problem \arabic{#1} (continued)}{Problem \arabic{#1} continued on next page\ldots}\nobreak{}
}
\newcommand{\exitProblemHeader}[1]{
\nobreak\extramarks{Problem \arabic{#1} (continued)}{Problem \arabic{#1} continued on next page\ldots}\nobreak{}
\stepcounter{#1}
\nobreak\extramarks{Problem \arabic{#1}}{}\nobreak{}
}
\setcounter{secnumdepth}{0}
\newcounter{partCounter}
\newcounter{homeworkProblemCounter}
\setcounter{homeworkProblemCounter}{1}
\nobreak\extramarks{Problem \arabic{homeworkProblemCounter}}{}\nobreak{}
%
% Homework Problem Environment
%
% This environment takes an optional argument. When given, it will adjust the
% problem counter. This is useful for when the problems given for your
% assignment aren't sequential. See the last 3 problems of this template for an
% example.
%
\newenvironment{homeworkProblem}[1][-1]{
\ifnum#1>0
\setcounter{homeworkProblemCounter}{#1}
\fi
\section{Problem \arabic{homeworkProblemCounter}}
\setcounter{partCounter}{1}
\enterProblemHeader{homeworkProblemCounter}
}{
\exitProblemHeader{homeworkProblemCounter}
}
%
% Homework Details
% - Title
% - Due date
% - Class
% - Section/Time
% - Instructor
% - Author
%
\newcommand{\hmwkTitle}{Assignment\ \#6}
\newcommand{\hmwkDueDate}{Due 16th November 2018}
\newcommand{\hmwkClass}{Classical Mechanics}
\newcommand{\hmwkClassTime}{}
\newcommand{\hmwkClassInstructor}{Prof.Manas Kulkarni}
\newcommand{\hmwkAuthorName}{\textbf{Aditya Vijaykumar}}
%
% Title Page
%
\title{
%\vspace{2in}
\textmd{\textbf{\hmwkClass:\ \hmwkTitle}}\\
\normalsize\vspace{0.1in}\small{\hmwkDueDate\ }\\
% \vspace{3in}
}
\author{\hmwkAuthorName}
\date{}
\renewcommand{\part}[1]{\textbf{\large Part \Alph{partCounter}}\stepcounter{partCounter}\\}
%
% Various Helper Commands
%
% Useful for algorithms
\newcommand{\alg}[1]{\textsc{\bfseries \footnotesize #1}}
% For derivatives
\newcommand{\deriv}[1]{\frac{\mathrm{d}}{\mathrm{d}x} (#1)}
% For partial derivatives
\newcommand{\pderiv}[2]{\frac{\partial}{\partial #1} (#2)}
% Integral dx
\newcommand{\dx}{\mathrm{d}x}
% Alias for the Solution section header
\newcommand{\solution}{\textbf{\large Solution}}
% Probability commands: Expectation, Variance, Covariance, Bias
\newcommand{\E}{\mathrm{E}}
\newcommand{\Var}{\mathrm{Var}}
\newcommand{\Cov}{\mathrm{Cov}}
\newcommand{\Bias}{\mathrm{Bias}}
\begin{document}
\maketitle
\begin{homeworkProblem}[1]
Liouville Theorem states that in a Hamiltonian system, the total phase space volume is constant in time.
Let our system have $ N $ degrees of freedom, so that a state $ (q_k, p_k) $, $ k = 1, \dots, N $, is a single point in a $ 2N $ dimensional phase space. The phase space volume element is,
\begin{equation*}
	V = \prod_{i} d q_i \, d p_i \qq{and} \tilde{V} = \prod_{i} d \tilde{q}_i \, d \tilde{p}_i
\end{equation*}
where the tilded coordinates represent the volume element at a later time. We know from Hamilton's equations,
\begin{equation*}
\tilde{q}_i = q_i + \pdv{H}{p_i} dt \qq{and} \tilde{p}_i = p_i - \pdv{H}{q_i} dt
\end{equation*}
We know for a fact that the volume transformation is related as follows,
\begin{align*}
\tilde{V} &= \det(J) V \\
J &= \mqty[\pdv{\tilde{q}_i}{q_j} & \pdv{\tilde{q}_i}{p_j} \\ \pdv{\tilde{p}_i}{q_j} & \pdv{\tilde{p}_i}{p_j}]\\
J &= \mqty[1 + \pdv{H}{q_j}{p_i} dt & \pdv{H}{p_j}{p_i} dt \\ -\pdv{H}{q_j}{q_i} dt & 1 - \pdv{H}{p_j}{q_i} dt] \\
\det(J) &= 1 + \order{dt^2}
\end{align*}
The first-order terms in $ \det(J) $ cancel because the mixed partial derivatives $ \pdv{H}{q_i}{p_i} $ and $ \pdv{H}{p_i}{q_i} $ are equal. Hence, up to first order, $ \tilde{V} = V$, and the phase space volume is conserved. (Files for the next part are attached.)
\end{homeworkProblem}
\begin{homeworkProblem}[2]
Transformations of coordinates $ (q,p,t) \rightarrow (Q,P,t)$ which preserves the form of Hamilton's equations are called canonical transformations. So, by definition,
\begin{equation*}
\dot{q} = \pdv{H}{p} \qq{,} \dot{p} = - \pdv{H}{q} \qq{and} \dot{Q} = \pdv{K}{P} \qq{,} \dot{P} = -\pdv{K}{Q}
\end{equation*}
The definition also implies that,
\begin{align*}
\delta \int (p \dot{q} - H) \, dt = 0 &\qq{and} \delta \int (P \dot{Q} - K) \, dt = 0\\
\lambda(p \dot{q} - H) &= P \dot{Q} - K + \dv{F}{t}
\end{align*}
We deal with the $ \lambda = 1 $ case. The $ \dv{F}{t} $ term comes from the fact that Lagrangians are not unique and we can always add a total time derivative term without changing the equations of motion. If the above condition is satisfied, the transformation $ (q,p,t) \rightarrow (Q,P,t)$ is guaranteed to be canonical, and the function $ F $ is called a generating function. We deal with four classes of generating functions case-by-case,
\begin{itemize}
\item $ F = F_1 (q,Q,t) $,
\begin{equation*}
p \dot{q} - H = P \dot{Q} - K + \dv{F_1}{t} = P \dot{Q} - K + \pdv{F_1}{q}\dot{q} + \pdv{F_1}{Q}\dot{Q} + \pdv{F_1}{t}
\end{equation*}
As $ q $ and $ Q $ are independent, the coefficients should vanish independently, such that $ K = H + \pdv{F_1}{t} $. This implies,
\begin{equation*}
\pdv{F_1}{q} = p \qq{and} \pdv{F_1}{Q} = -P
\end{equation*}
\item $ F = F_2 (q,P,t) - QP$,
\begin{equation*}
p \dot{q} - H = P \dot{Q} - K + \dv{F_2}{t} - \dv{(QP)}{t} = P \dot{Q} - K + \pdv{F_2}{q}\dot{q} + \pdv{F_2}{P}\dot{P} + \pdv{F_2}{t} - P \dot{Q} - Q \dot{P}
\end{equation*}
\begin{equation*}
\implies \pdv{F_2}{q} = p \qq{and} \pdv{F_2}{P} = Q
\end{equation*}
\item $ F = F_3 (p,Q,t) + qp$,
\begin{equation*}
p \dot{q} - H = P \dot{Q} - K + \dv{F_3}{t} + \dv{(qp)}{t} = P \dot{Q} - K + \pdv{F_3}{Q}\dot{Q} + \pdv{F_3}{p}\dot{p} + \pdv{F_3}{t} + p \dot{q} + q \dot{p}
\end{equation*}
\begin{equation*}
\implies \pdv{F_3}{Q} = -P \qq{and} \pdv{F_3}{p} = -q
\end{equation*}
\item $ F = F_4 (p,P,t) + qp - QP$,
\begin{equation*}
p \dot{q} - H = P \dot{Q} - K + \dv{F_4}{t} + \dv{(qp - QP)}{t} = P \dot{Q} - K + \pdv{F_4}{P}\dot{P} + \pdv{F_4}{p}\dot{p} + \pdv{F_4}{t} + p \dot{q} + q \dot{p} - P \dot{Q} - Q \dot{P}
\end{equation*}
\begin{equation*}
\implies \pdv{F_4}{P} = Q \qq{and} \pdv{F_4}{p} = -q
\end{equation*}
\end{itemize}
\textbf{Part (b)}\\
We first use the Poisson Bracket invariance approach. We are given,
\begin{equation*}
Q_1 = q_1 \qq{,} Q_2 = p_2 \qq{,} P_1 = p_1 - 2 p_2 \qq{,} P_2 = -2 q_1 - q_2
\end{equation*}
Consider $ \pb{Q_1}{Q_2} $,
\begin{align*}
\pb{Q_1}{Q_2} &= \sum_{i =1}^{2} \pdv{Q_1}{q_i} \pdv{Q_2}{p_i} - \pdv{Q_1}{p_i} \pdv{Q_2}{q_i} = 0 \\
\pb{P_1}{P_2} &= \sum_{i =1}^{2} \pdv{P_1}{q_i} \pdv{P_2}{p_i} - \pdv{P_1}{p_i} \pdv{P_2}{q_i} = - \pdv{P_1}{p_1} \pdv{P_2}{q_1} - \pdv{P_1}{p_2} \pdv{P_2}{q_2} = 2-2 = 0 \\
\pb{Q_1}{P_2} &= \sum_{i =1}^{2} \pdv{Q_1}{q_i} \pdv{P_2}{p_i} - \pdv{Q_1}{p_i} \pdv{P_2}{q_i} = \pdv{Q_1}{q_1} \pdv{P_2}{p_1} = 0 \\
\pb{Q_2}{P_1} &= \sum_{i =1}^{2} \pdv{Q_2}{q_i} \pdv{P_1}{p_i} - \pdv{Q_2}{p_i} \pdv{P_1}{q_i} = - \pdv{Q_2}{p_2} \pdv{P_1}{q_2} = 0\\
\pb{Q_1}{P_1} &= \sum_{i =1}^{2} \pdv{Q_1}{q_i} \pdv{P_1}{p_i} - \pdv{Q_1}{p_i} \pdv{P_1}{q_i} = \pdv{Q_1}{q_1} \pdv{P_1}{p_1} = 1\\
\pb{Q_2}{P_2} &= \sum_{i =1}^{2} \pdv{Q_2}{q_i} \pdv{P_2}{p_i} - \pdv{Q_2}{p_i} \pdv{P_2}{q_i} = - \pdv{Q_2}{p_2} \pdv{P_2}{q_2} = 1
\end{align*}
Hence, as $ \pb{Q_i}{P_j} = \delta_{ij}, \pb{Q_i}{Q_j} = 0, \pb{P_i}{P_j} = 0$, the transformation is canonical. We now use the symplectic approach. If we denote $ X = \mqty[Q_1 & Q_2 & P_1 & P_2]^T, x = \mqty[q_1 & q_2 & p_1 & p_2]^T $, then $ X = Mx $ where $ M $ is the transformation matrix. From the definitions of the $ X $, we can see that,
\begin{equation*}
M = \left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 &0 & 1 \\
0 & 0 & 1 & -2 \\
-2 & -1 & 0 & 0 \\
\end{array}
\right)
\end{equation*}
For the transformation to be a canonical transformation, $ M^T J M = J $, where,
\begin{align*}
J &= \left(
\begin{array}{cccc}
0 & 0 & 1 & 0 \\
0 & 0 &0 & 1 \\
-1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
\end{array}
\right)
\end{align*}
\begin{align*}
	M^T J M &= \left(
	\begin{array}{cccc}
	 1 & 0 & 0 & -2 \\
	 0 & 0 & 0 & -1 \\
	 0 & 0 & 1 & 0 \\
	 0 & 1 & -2 & 0 \\
	\end{array}
	\right) \left(
	\begin{array}{cccc}
	 0 & 0 & 1 & 0 \\
	 0 & 0 &0 & 1 \\
	 -1 & 0 & 0 & 0 \\
	 0 & -1 & 0 & 0 \\
	\end{array}
	\right) \left(
	\begin{array}{cccc}
	 1 & 0 & 0 & 0 \\
	 0 & 0 &0 & 1 \\
	 0 & 0 & 1 & -2 \\
	 -2 & -1 & 0 & 0 \\
	\end{array}
	\right) \\
	&= \left(
	\begin{array}{cccc}
	 0 & 2 & 1 & 0 \\
	 0 & 1 & 0 & 0 \\
	 -1 & 0 & 0 & 0 \\
	 2 & 0 & 0 & 1 \\
	\end{array}
	\right)
	\left(
	\begin{array}{cccc}
	 1 & 0 & 0 & 0 \\
	 0 & 0 &0 & 1 \\
	 0 & 0 & 1 & -2 \\
	 -2 & -1 & 0 & 0 \\
	\end{array}
	\right) \\
	M^T J M &= \left(
	\begin{array}{cccc}
	 0 & 0 & 1 & 0 \\
	 0 & 0 &0 & 1 \\
	 -1 & 0 & 0 & 0 \\
	 0 & -1 & 0 & 0 \\
	\end{array}
	\right) = J
\end{align*}
Hence, it is a canonical transformation.\\
\textbf{Part (c)}
\begin{align*}
l_i &= \epsilon_{i j k} x_j p_k \qq{using the Einstein summation convention}
\end{align*}
We now note the following,
\begin{align*}
\pb{l_i}{l_j} &= \epsilon_{i a b} \epsilon_{j m n} \pb{x_a p_b}{x_m p_n} \\
&= \epsilon_{i a b} \epsilon_{j m n} \pb{x_a p_b}{x_m p_n}\\
&= \epsilon_{i a b} \epsilon_{j m n} (\pb{x_a }{ p_n} x_m p_b + \pb{p_b}{x_m} x_a p_n)\\
&= \epsilon_{i a b} \epsilon_{j m n} ( \delta_{an} x_m p_b - \delta_{bm} x_a p_n) \\
&= \epsilon_{i n b} \epsilon_{j m n} x_m p_b - \epsilon_{i a m} \epsilon_{j m n} x_a p_n \\
&= - \epsilon_{i b n} \epsilon_{j m n} x_m p_b + \epsilon_{i m a} \epsilon_{j m n} x_a p_n \\
&= - (\delta_{ij} \delta_{bm} - \delta_{im} \delta_{j b}) x_m p_b + (\delta_{ij} \delta_{an} - \delta_{aj} \delta_{in}) x_a p_n \\
&= - \delta_{ij} x_b p_b + x_i p_j + \delta_{ij} x_a p_a - x_j p_i \\
&= + x_i p_j - x_j p_i \\
\pb{l_i}{l_j} &= \epsilon_{i j k} l_k
\end{align*}
\begin{align*}
\pb{x_i}{l_j} &= \epsilon_{j m n} \pb{x_i}{ x_m p_n}\\
&= \epsilon_{j m n} x_m \pb{x_i}{ p_n}\\
&= \epsilon_{j m n} x_m \delta_{in}\\
\pb{x_i}{l_j} &= \epsilon_{i j m} x_m
\end{align*}
\begin{align*}
\pb{p_i}{l_j} &= \epsilon_{j m n} \pb{p_i}{ x_m p_n}\\
&= \epsilon_{j m n} p_n \pb{p_i}{ x_m}\\
&= -\epsilon_{j m n} p_n \delta_{im}\\
\pb{p_i}{l_j}&= \epsilon_{i j n} p_n
\end{align*}
\end{homeworkProblem}
\begin{homeworkProblem}[3]
We are given the Hamiltonian and generating function,
\begin{equation*}
H = \dfrac{p^2}{2} + \dfrac{\omega^2 x^2 }{2} + \alpha x^3 + \beta x p^2 \qq{and} \phi = xP + ax^2 P + bP^3
\end{equation*}
$ \phi = \phi (x,P) $. For $ \phi $ to be a canonical transformation,
\begin{align*}
\pdv{\phi}{x} = p &\qq{and} \pdv{\phi}{P} = Q\\
\implies P + 2 a x P = p &\qq{and} x + ax^2 + 3b P^2 = Q \\
\implies P \sqrt{-12 a b P^2+4 a Q+1} = p &\qq{and}\frac{\sqrt{-12 a b P^2+4 a Q+1}-1}{2 a} = x
\end{align*}
where we have only considered the $ x $ root with positive sign before the discriminant. Then,
\begin{align*}
K(Q, P) &= \frac{\alpha \left(\sqrt{-12 a b P^2+4 a Q+1}-1\right)^3}{8 a^3}+\frac{\omega ^2 \left(\sqrt{-12 a b P^2+4 a Q+1}-1\right)^2}{8 a^2} \\
&+\frac{\beta P^2 \left(-12 a b P^2+4 a Q+1\right) \left(\sqrt{-12 a b P^2+4 a Q+1}-1\right)}{2 a}+ \frac{1}{2} P^2 \left(-12 a b P^2+4 a Q+1\right)
\end{align*}
Expanding the above upto third order, we have,
\begin{align*}
K(Q, P) &= Q^3 \left(P^2 \left(-30 a^2 b \omega ^2-2 a^2 \beta +36 \alpha a b\right)-a \omega ^2+\alpha \right)+Q^2 \left(P^2 \left(9 a b \omega ^2+3 a \beta -9 \alpha b\right)+\frac{\omega ^2}{2}\right)\\&+P^2 Q \left(2 a-3 b \omega ^2+\beta \right)+\frac{P^2}{2}
\end{align*}
As anharmonic terms of third order should not be present, we can see from above that,
\begin{equation*}
- a \omega^2 + \alpha = 0 \qq{and} 2 a-3 b \omega ^2+\beta = 0 \implies a = \dfrac{\alpha}{\omega^2} \qq{and} b = \dfrac{1}{3 \omega^2} \qty(\dfrac{2 \alpha}{\omega^2} + \beta)
\end{equation*}
Now we need to find $ \dot{x} $. From Hamilton's equation of motion we have,
\begin{align*}
\dot{x} = \pdv{H}{p} = p (1 + 2 \beta x) \implies p = \dfrac{\dot{x}}{1 + 2 \beta x} \\
\dot{p} = -\dfrac{2 \beta \dot{x}^2 - \ddot{x} (1 + 2 \beta x) }{(1 + 2 \beta x)^2} = -\pdv{H}{x} = -\omega^2 x - 3 \alpha x^2 - \beta p^2\\
\implies -{2 \beta \dot{x}^2 - \ddot{x} (1 + 2 \beta x) } = -(\omega^2 x + 3 \alpha x^2) (1 + 2 \beta x)^2 - \beta \dot{x}^2 \\
\implies {\ddot{x} (1 + 2 \beta x) -\beta \dot{x}^2 } + (\omega^2 x + 3 \alpha x^2) (1 + 2 \beta x)^2 = 0
\end{align*}
$ x(t) $ will be given by the solution of this equation.
\textbf{Part (b)}\\
\begin{itemize}
\item $ \phi(\va{r}, \va{P}) = (\va{r} \cdot \va{P}) + (\delta \va{a} \cdot \va{P}) $\\
This looks like $ F_2 (q,P) $. From the results of Problem 2, we can then write,
\begin{align*}
\pdv{\Phi}{r} = p_r = P_r \qq{,} \pdv{\Phi}{\theta} = p_\theta =0 \qq{,} \pdv{\Phi}{\phi} = p_\phi = 0 \\
\pdv{\Phi}{P_r} = Q_r = r + \delta a_x\qq{,} \pdv{\Phi}{P_\theta} = Q_\theta = \delta a_\theta \qq{,} \pdv{\Phi}{P_\phi} = Q_\phi = \delta a_\phi
\end{align*}
as $ r + \delta a = Q $, it is evident that the transformation is a translation by constant, as the momentum remains the same but the coordinates get shifted by a constant amount.
\item $ \Phi(\va{r}, \va{P}) = (\va{r} \cdot \va{P}) + (\va{\delta \psi} \cdot \va{r} \cross \va{P}) $ \\
This looks like $ F_2 $ too. We have,
\begin{equation*}
p \cdot \delta \psi = \qty(\dv{\Phi}{r} = P + \pdv{(\va{\delta \psi} \cdot \va{r} \cross \va{P})}{r})\cdot \delta \psi = P\cdot \delta \psi + \pdv{(\va{r} \cdot \va{P} \cross \va{\delta \psi})}{r} \cdot \delta \psi= P \delta \psi
\end{equation*}
\begin{equation*}
Q = \pdv{\Phi}{p} = r + r \delta \psi
\end{equation*}
The above transformations look like rotations in the phase plane.
\item $ \Phi = qP + \delta \tau H(q,p,t) $\\
This looks like $ F_2 $ again. We write,
\begin{align*}
\pdv{\Phi}{q} &= P + \delta \tau (-\dot{p}) = p \qq{and}\\
\pdv{\Phi}{P} &= q + \delta \tau \pdv{H}{P}\\
&= q + \delta \tau \pdv{H}{P} \\
&= q + \delta \tau \qty( \pdv{H}{p} \pdv{p}{P} + \pdv{H}{q} \pdv{q}{P} ) \\
&= q + \delta \tau \qty( \dot{q} (1 - \delta \tau \dot{p}) )\\
Q & \approx q + \delta \tau \dot{q}
\end{align*}
\begin{equation*}
\therefore Q \approx q + \delta \tau \dot{q} \qq{and} P \approx p + \delta \tau \dot{p}
\end{equation*}
So the canonical transformation just corresponds to time translation by parameter $ d\tau $.
\item $ \Phi = \va{r} \cdot \va{P} + (r^2 + P^2)\delta a $
\begin{align*}
\pdv{\Phi}{r} = P_r + 2 r \delta a \implies P_r = p_r - 2 r \delta a \\
\pdv{\Phi}{P} = r + 2 P \delta a \implies Q = r + 2 P \delta a
\end{align*}
This is equivalent to rotation in the phase space by amount $ 2 \delta a $
\end{itemize}
\end{homeworkProblem}
\begin{homeworkProblem}[4]
We first note that,
\begin{equation*}
y = x^2 \implies \dot{y} = 2 x \dot{x}
\end{equation*}
and write down the Lagrangian and Hamiltonian of the system,
\begin{align*}
L &= \dfrac{m \dot{x}^2}{2} + \dfrac{m \dot{y}^2}{2} - mgy \\
L &= \dfrac{m \dot{x}^2}{2} + {2m x^2 \dot{x}^2 } - mgx^2\\
\implies p = m \dot{x} + 4 mx^2 \dot{x} &\implies \dot{x} = \dfrac{p}{m(1 + 4x^2)}
\end{align*}
Thus, we can write the Hamiltonian as,
\begin{align*}
H(x,p) &= \dfrac{p^2}{m(1 + 4x^2)} - \dfrac{m}{2}(1 + 4x^2)\dfrac{p^2}{m^2 (1 + 4x^2)^2} + mg x^2\\
H(x,p) &= \dfrac{p^2}{2m(1 + 4x^2)} + mgx^2
\end{align*}
The Hamilton-Jacobi equation is given by,
\begin{equation*}
\dfrac{1}{2m(1 + 4x^2)}\qty(\pdv{S}{x})^2 + mgx^2 + \pdv{S}{t} = 0
\end{equation*}
Substituting $ S = W(x) - Et $, we get,
\begin{align*}
\dfrac{1}{2m(1 + 4x^2)}\qty(\dv{W}{x})^2 + &mgx^2 - E = 0
\implies \dv{W}{x} = \sqrt{2m(E - mgx^2)(1 + 4x^2)}\\
\implies S &= \int dx \sqrt{2m(E - mgx^2)(1 + 4x^2)} - Et
\end{align*}
We know that $ \pdv{S}{E} $ is a constant, say $ \beta $. Hence the equation of motion is,
\begin{equation*}
\int dx \sqrt{\dfrac{m(1+4x^2)}{2(E - mgx^2)}} - t = \beta
\end{equation*}
\textbf{Part (b)}\\
We first note that,
\begin{equation*}
z = \dfrac{\xi^2 - \eta^2}{2} \qq{,} \rho = \eta \xi \qq{,} \psi = \phi \implies \dot{z} = \xi \dot{\xi} - \eta \dot{\eta} \qq{,} \dot{\rho} = \eta \dot{\xi} + \xi \dot{\eta} \qq{,} \dot{\phi} = \dot{\psi}
\end{equation*}
We first write down the Lagrangian and canonical momenta,
\begin{align*}
L &= \dfrac{m (\dot{\rho}^2 + \rho^2 \dot{\phi}^2 + \dot{z}^2)}{2} - \dfrac{k}{\sqrt{\rho^2 + z^2} } + Fz\\
&= \dfrac{m ( \eta^2 \dot{\xi}^2 + \xi^2 \dot{\eta}^2+ 2 \eta \xi \dot{\eta} \dot{\xi} + \eta^2 \xi^2 \dot{\psi}^2 + \xi^2 \dot{\xi}^2 - 2 \xi \dot{\xi} \eta \dot{\eta} + \eta^2 \dot{\eta}^2 )}{2} - \dfrac{k}{\sqrt{\qty(\dfrac{\xi^2 - \eta^2}{2})^2 + \eta^2 \xi^2}} + F\dfrac{\xi^2 - \eta^2}{2}\\
L&= m\dfrac{(\eta^2 + \xi^2)( \dot{\xi}^2 + \dot{\eta}^2)+ \eta^2 \xi^2 \dot{\psi}^2}{2} - \dfrac{2k}{\eta^2 + \xi^2} + F\dfrac{\xi^2 - \eta^2}{2}\\
\implies &p_\xi = m (\eta^2 + \xi^2) \dot{\xi} \qq{,} p_\eta = m (\eta^2 + \xi^2) \dot{\eta} \qq{,} p_\psi = m \eta^2 \xi^2 \dot{\psi}\\
\implies H &= \dfrac{p_\xi^2 + p_\eta^2}{2 m(\eta^2 + \xi^2)} + \dfrac{p_\psi^2}{2m \eta^2 \xi^2} +\dfrac{2k}{\eta^2 + \xi^2} - F\dfrac{\xi^2 - \eta^2}{2}
\end{align*}
Applying the transformations given in the problem, we can now write down the Hamilton-Jacobi equation as,
\begin{equation*}
\pdv{S}{t} + \dfrac{1}{2 m(\eta^2 + \xi^2)} \qty[\qty(\pdv{S}{\xi})^2 + \qty(\pdv{S}{\eta})^2 ]+ \dfrac{1}{2m \eta^2 \xi^2} \qty(\pdv{S}{\psi})^2 + \dfrac{2k}{\eta^2 + \xi^2} - F\dfrac{\xi^2 - \eta^2}{2} = 0
\end{equation*}
We now make the substitution $ S = S_1(\xi) + S_2(\eta) + S_3(\psi) - Et $. We then have,
\begin{equation*}
\dfrac{1}{2 m(\eta^2 + \xi^2)} \qty[\qty(\dv{S_1}{\xi})^2 + \qty(\dv{S_2}{\eta})^2 ]+ \dfrac{1}{2m \eta^2 \xi^2} \qty(\dv{S_3}{\psi})^2 + \dfrac{2k}{\eta^2 + \xi^2} - F\dfrac{\xi^2 - \eta^2}{2} = E
\end{equation*}
Out of the four terms above, we see that only the third terms depends on $ \psi $. As the RHS is a constant, the dependence on $ \psi $ also should vanish. This means,
\begin{equation*}
\qty(\dv{S_3}{\psi})^2 = \beta_1^2
\end{equation*}
Making the above substitution, multiplying the equation by $ 2m(\eta^2 + \xi^2) $, and collecting terms, we get,
\begin{equation*}
\qty[-2m \eta^2 E + \qty(\dv{S_2}{\eta})^2 + \dfrac{\beta_1^2}{ \eta^2} + Fm \eta^4] + \qty[-2m\xi^2 E + \qty(\dv{S_1}{\xi})^2 + \dfrac{\beta_1^2}{ \xi^2} - Fm \xi^4] + 4mk = 0
\end{equation*}
We can see from the above form that the equation has become completely separable in the new coordinates.
\end{homeworkProblem}
\end{document}
\clearpage
\phantomsection
\addcontentsline{toc}{subsection}{SDIV}
\label{insn:sdiv}
\subsection*{SDIV: signed division}
\subsubsection*{Format}
\textrm{SDIV \%rd, \%r1, \%r2}
\begin{center}
\begin{bytefield}[endianness=big,bitformatting=\scriptsize]{32}
\bitheader{0,7,8,15,16,23,24,31} \\
\bitbox{8}{0x09}
\bitbox{8}{r1}
\bitbox{8}{r2}
\bitbox{8}{rd}
\end{bytefield}
\end{center}
\subsubsection*{Description}
The \instruction{sdiv} instruction divides the value contained in
\registerop{r1} by the value contained in \registerop{r2}, placing the
result into \registerop{rd}. The values in both \registerop{r1} and
\registerop{r2} are first promoted to signed, 64-bit values before
the division operation is carried out.
\subsubsection*{Pseudocode}
\begin{verbatim}
%rd = (int64_t)%r1 / (int64_t)%r2
\end{verbatim}
\subsubsection*{Constraints}
\subsubsection*{Failure modes}
This instruction has no run-time failure modes beyond its constraints.
\chapter{The Unicon Translator}
Unicon was originally a modest addition to Icon. As much as possible,
features are implemented by extending existing functions with new
semantics, and by the addition of new functions and keywords with no
new syntax. Originally, the object-oriented facilities were
implemented as a simple line-oriented preprocessor named Idol. When
Idol was merged with the Posix and ODBC facilities to become Unicon,
the Idol preprocessor was substantially modified to a full parser that
generates code by traversing a syntax tree. It is still reasonable to
call the Unicon translator a preprocessor, but it has many of the
traits of a compiler.
\section{Overview}
The Unicon translator lives in uni/unicon/. In addition to many Unicon
source files, it uses the external tools iyacc and merr to generate
its parser and syntax error message tables, which depend on files
unigram.y and meta.err, respectively. Unicon is written in Unicon,
posing a bootstrapping problem. When building from sources, some of
the .icn files can be translated by icont (the Icon translator, a C
program). Those files that require Unicon itself in order to compile
are included in precompiled object format (in .u files) in order to
solve the bootstrapping problem.
\section{Lexical Analysis}
Unicon's lexical analyzer is adapted from a hand-written lex-compatible
scanner written by Bob Alexander and used in the Jcon Java-based
implementation of Icon. Some of its design is borrowed from the
original Icon lexical analyzer (which is handwritten C code). It would
be interesting to replace Unicon's lexical analyzer with a machine
generated lexical analyzer to reduce the amount of compiler source
code to maintain. The lexical analyzer consists of a function
yylex() located in unicon/uni/unicon/unilex.icn, about 500 lines of
code.
\subsection{Lex-compatible API}
The global declarations that exist in order to provide a
Lex-compatible API include:
\begin{iconcode}
\$include "ytab\_h.icn" \> \> \> \> \> \> \> \> \> \> \> \# yacc's token categories \\
global yytext \> \> \> \> \> \> \> \> \> \> \> \# lexeme \\
global yyin \> \> \> \> \> \> \> \> \> \> \> \# source file we are reading \\
global yytoken \> \> \> \> \> \> \> \> \> \> \> \# token (a record) \\
global yylineno, yycolno, yyfilename \> \> \> \> \> \> \> \> \> \> \> \# source location
\end{iconcode}
\subsection{Character Categories}
The lexical analyzer uses several csets for different character
categories beyond the built-in ones:
\begin{iconcode}
global O, D, L, H, R, FS, IS, W, idchars\\
\ \\
procedure init\_csets() \\
\> O \ := '01234567' \\
\> D \ := \&digits \\
\> L \ := \&letters ++ '\_' \\
\> H \ := \&digits ++ 'abcdefABCDEF' \\
\> R \ := \&digits ++ \&letters \\
\> FS := 'fFlL' \\
\> IS := 'uUlL' \\
\> W \ := ' {\textbackslash}t{\textbackslash}v' \\
\> idchars := L ++ D \\
end
\end{iconcode}
\subsection{The Token Type}
The record type storing each token's information just bundles together
the syntactic category (an integer), lexeme (a string), and location
at which the token occurred. This is pretty minimalist.
\iconline{
record token(tok, s, line, column, filename)
}
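For instance (the position values here are made up for illustration), the lexeme \texttt{while} seen at line 3, column 1 of a hypothetical file \texttt{foo.icn} would be bundled as:
\iconline{
yytoken := token(WHILE, "while", 3, 1, "foo.icn")
}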
\subsection{Error Handling and Debugging}
Several remaining global variables are mainly used for error handling,
and for debugging the lexical analyzer itself.
\subsection{Reserved Words}
Global \texttt{reswords()} creates and becomes a table holding the
Unicon reserved words. For each word, a pair of integers [tokenflags,
category] is kept. The tokenflags indicate whether the token allows
semi-colon insertion if a newline immediately precedes it (Beginner)
or comes after it (Ender); see section 26.2.7 below. Language design
note: tables in this language need a literal format.
\begin{iconcode}
procedure reswords() \\
static t \\
initial \{ \\
\> t := table([Beginner+Ender, IDENT]) \\
\ \\
\> t[{\textquotedbl}abstract{\textquotedbl}] := [0, ABSTRACT] \\
\> t[{\textquotedbl}break{\textquotedbl}] := [Beginner+Ender, BREAK] \\
\> t[{\textquotedbl}by{\textquotedbl}] := [0, BY] \\
\> ... \\
\> t[{\textquotedbl}to{\textquotedbl}] := [0, TO] \\
\> t[{\textquotedbl}until{\textquotedbl}] := [Beginner, UNTIL] \\
\> t[{\textquotedbl}while{\textquotedbl}] := [Beginner, WHILE] \\
\> \} \\
return t \\
end
\end{iconcode}
\subsection{Big-inhale Input}
A function, \texttt{yylex\_reinit()} is called the first time
\texttt{yylex()} is called, along with each time the compiler moves to
process a new file named on the command line. Along with initializing
the public API variables, this function reads in the entire file, in a
single global string variable, cleverly named \texttt{buffer}. This
allows extremely fast subsequent processing, which does no file I/O
for each token, while avoiding complex buffering sometimes done to
reduce file I/O costs in compilers.
This ``big-inhale'' model did not work well on the 128K PDP-11 UNIX
computers that Icon's C implementation was developed on, but works well in
this century. At present, the code assumes Unicon source files are
less than a megabyte -- a lazy programmer's error. Although Unicon
programs are much shorter than C programs, an upper limit of 1MB is
bound to be reached someday.
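A sketch of one possible fix (not the code currently in \texttt{unilex.icn}) is to keep calling \texttt{reads()} until it fails, concatenating the pieces, so that a file of any size is inhaled:
\begin{iconcode}
\> if type(yyin) == "file" then \{ \\
\> \> buffer := "" \\
\> \> while buffer {\textbar}{\textbar}:= reads(yyin, 1000000) \\
\> \> \}
\end{iconcode}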
\begin{iconcode}
procedure yylex\_reinit() \\
\> yytext := "" \\
\> yylineno := 0 \\
\> yycolno := 1 \\
\> lastchar := "" \\
\> if type(yyin) == "file" then \\
\> \> buffer := reads(yyin, 1000000) \\
\> else \\
\> \> buffer := yyin \\
\> tokflags := 0 \\
end
\end{iconcode}
\subsection{Semicolon Insertion}
Icon and Unicon insert semicolons for the programmer
automatically. This is an easy lexical analyzer trick. The lexical
analyzer requires one token of lookahead. Between each two tokens, it
asks: was there a newline? If yes, was the token before the newline
one that could conceivably be the end of an expression, and was the
token at the start of the new line one that could conceivably start a
new expression? If it would be legal to do so, it saves the new token
and returns a semicolon instead.
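For example, in this fragment of ordinary Unicon source (not translator code), a semicolon is inserted after the first line but not after the \texttt{+}, because \texttt{+} cannot end an expression:
\begin{iconcode}
x := 1 \> \> \> \> \# newline after an Ender, next token is a Beginner: ";" inserted \\
y := 2 + \> \> \> \# "+" is not an Ender: no semicolon inserted \\
\> \> 3
\end{iconcode}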
This little procedure is entirely hidden from the regular lexical
analyzer code by writing that regular code in a helper function
\texttt{yylex2()}, and writing the semicolon insertion logic in a
\texttt{yylex()} function that calls \texttt{yylex2()} when it needs a
new token.
Initialization for the \texttt{yylex()} function shows the static
variables used to implement the one token of lookahead. If the global
variable \texttt{buffer} doesn't hold a string anymore,
\texttt{/buffer} will succeed and it must be that we are at
end-of-file and should return 0.
\begin{iconcode}
procedure yylex() \\
static saved\_tok, saved\_yytext \\
local rv, ender \\
initial \{ \\
\> if /buffer then \\
\> \> yylex\_reinit() \\
\> \> \} \\
\> if /buffer then \{ \\
\> \> if {\textbackslash}debuglex then \\
\> \> \> write("yylex() : 0") \\
\> \> return 0 \\
\> \> \}
\end{iconcode}
If a semicolon was inserted the last time \texttt{yylex()} was called,
the \texttt{saved\_tok} will be the first token of the next line; it
should be returned.
\begin{iconcode}
\> if {\textbackslash}saved\_tok then \{ \\
\> \> rv := saved\_tok \\
\> \> saved\_tok := \&null \\
\> \> yytext := saved\_yytext \\
\> \> yylval := yytoken := token(rv, yytext, yylineno, yycolno, yyfilename) \\
\> \> if {\textbackslash}debuglex then \\
\> \> \> write("yylex() : ",tokenstr(rv),
"{\textbackslash}t", image(yytext)) \\
\> \> return rv \\
\> \> \}
\end{iconcode}
Otherwise, we should obtain the next token by calling yylex2(). We
have to check for end of file, remember if the last token could end an
expression, call yylex2(), and update buffer to be the smaller string
remaining after the token.
\begin{iconcode}
\> ender := iand(tokflags, Ender) \\
\> tokflags := 0 \\
\> if *buffer=0 then \{ \\
\> \> buffer := \&null \\
\> \> if {\textbackslash}debuglex then \\
\> \> \> write("yylex() : EOFX") \\
\> \> return EOFX \\
\> \> \} \\
\> buffer ? \{ \\
\> \> if rv := yylex2() then \{ \\
\> \> \> buffer := tab(0) \\
\> \> \> \} \\
\> \> else \{ \\
\> \> \> buffer := \&null \\
\> \> \> yytext := "" \\
\> \> \> if {\textbackslash}debuglex then \\
\> \> \> \> write("yylex() : EOFX") \\
\> \> \> return EOFX \\
\> \> \> \} \\
\> \> \}
\end{iconcode}
After fetching a new token, we have to decide whether to insert a
semicolon or not. This is based on global variable ender (whether the
previous token could end an expression) and global variable tokflags
(which holds both whether the current token could begin an expression,
and whether a newline occurred between the last token and the current
token). \texttt{iand()} is a bitwise AND, equivalent to the C language \&
operator, used to pick bits out of a set of boolean flags encoded as
bits within an integer.
\begin{iconcode}
\> if ender\~{}=0 \& iand(tokflags, Beginner)\~{}=0 \& iand(tokflags, Newline)\~{}=0 then \{ \\
\> \> saved\_tok := rv \\
\> \> saved\_yytext := yytext \\
\> \> yytext := ";" \\
\> \> rv := SEMICOL \\
\> \> \}
\end{iconcode}
Returning a token requires allocation of a \texttt{token()} record
instance, which is stored in a global variable.
\begin{iconcode}
\> yylval := yytoken := token(rv, yytext, yylineno, yycolno, yyfilename) \\
\> if {\textbackslash}debuglex then \\
\> \> write("yylex() : ", tokenstr(rv), "{\textbackslash}t", image(yytext))\\
\> return rv \\
end
\end{iconcode}
\subsection{The Real Lexical Analyzer Function, yylex2()}
This function maintains a table of functions, calling a helper
function depending on what the first character in the token is.
\begin{iconcode}
procedure yylex2() \\
static punc\_table \\
initial \{ \\
\> init\_csets() \\
\> reswords := reswords() \\
\> punc\_table := table(uni\_error) \\
\> punc\_table["'"] := do\_literal \\
\> punc\_table["{\textbackslash}""] := do\_literal \\
\> punc\_table["!"] := do\_bang \\
\> punc\_table["\%"] := do\_mod \\
\> ... \\
\> punc\_table["\$"] := do\_dollar \\
\> every punc\_table[!\&digits] := do\_digits \\
\> every punc\_table["\_" {\textbar} !\&letters] := do\_letters \\
\> \}
\end{iconcode}
The main lexical analyzer code strips comments and whitespace, and calls the function table for the first non-whitespace
character it finds. Note support for \#line directives, and the use of string scanning.
{\ttfamily\mdseries
\ \ \ yycolno +:= *yytext}
\bigskip
{\ttfamily\mdseries
\ \ \ repeat \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ if pos(0) then fail}
{\ttfamily\mdseries
\ \ \ \ \ \ \ if }
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ ={\textquotedbl}\#{\textquotedbl} then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ if ={\textquotedbl}line {\textquotedbl} then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ if yylineno := integer(tab(many(\&digits))) then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ={\textquotedbl} {\textbackslash}{\textquotedbl}{\textquotedbl}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ yyfilename :=
tab(find({\textquotedbl}{\textbackslash}{\textquotedbl}{\textquotedbl}){\textbar}0)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ tab(find({\textquotedbl}{\textbackslash}n{\textquotedbl}) {\textbar} 0)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ next}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ if ={\textquotedbl}{\textbackslash}n{\textquotedbl} then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ yylineno +:= 1}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ yycolno := 1}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ if tokflags {\textless} Newline then}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ tokflags +:= Newline}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ next}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ if tab(any(' ')) then \{ yycolno +:= 1; next \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ if tab(any('{\textbackslash}v{\textbackslash}\^{}l')) then \{ next \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ if tab(any('{\textbackslash}t')) then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ yycolno +:= 1}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ while (yycolno-1) \% 8 \~{}= 0 do yycolno +:= 1}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ next}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \}}
\bigskip
{\ttfamily\mdseries
\ \ \ \ \ \ \ yytext := move(1)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ return punc\_table[yytext]()}
{\ttfamily\mdseries
\ \ \ \}}
{\ttfamily\mdseries
end}
The functions in the punctuation table select integer codes and match
the rest of the lexeme. do\_comma() illustrates an unambiguous token
selection, while do\_plus() illustrates a more common case where the
{\textquotedbl}+{\textquotedbl} character could start any of 5
different tokens depending on the character(s) that follow it. Tokens
starting with {\textquotedbl}letters{\textquotedbl} are looked up in a
reserved words table, which tells whether they are special, or just a
variable name.
{\ttfamily\mdseries
procedure do\_comma()}
{\ttfamily\mdseries
\ \ \ return COMMA}
{\ttfamily\mdseries
end}
\bigskip
{\ttfamily\mdseries
procedure do\_plus()}
{\ttfamily\mdseries
\ \ \ if yytext {\textbar}{\textbar}:= ={\textquotedbl}:{\textquotedbl} then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ if yytext {\textbar}{\textbar}:= ={\textquotedbl}={\textquotedbl} then \{ return AUGPLUS \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ return PCOLON}
{\ttfamily\mdseries
\ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ if yytext {\textbar}{\textbar}:= ={\textquotedbl}+{\textquotedbl} then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ if yytext {\textbar}{\textbar}:= ={\textquotedbl}:={\textquotedbl} then \{return AUGUNION\}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ return UNION}
{\ttfamily\mdseries
\ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ tokflags +:= Beginner}
{\ttfamily\mdseries
\ \ \ return PLUS}
{\ttfamily\mdseries
end}
\bigskip
{\ttfamily\mdseries
procedure do\_letters()}
{\ttfamily\mdseries
\ \ \ yytext {\textbar}{\textbar}:= tab(many(idchars))}
{\ttfamily\mdseries
\ \ \ x := reswords[yytext]}
{\ttfamily\mdseries
\ \ \ tokflags +:= x[1]}
{\ttfamily\mdseries
\ \ \ return x[2]}
{\ttfamily\mdseries
end}
\section{The Unicon Parser}
Unicon's parser is written using a YACC grammar; a graduate student
(Ray Pereda) modified Berkeley's public domain version of YACC (byacc)
to generate Unicon code, following in the footsteps of someone who had
earlier modified it to generate Java. The Unicon parser lives in
uni/unicon/unigram.y in the source distribution (22kB, 700 lines, 119
terminals, 71 nonterminals). Unicon's YACC grammar was obtained by
copying the Icon grammar, and adding Unicon syntax constructs. Prior
to this time the object-oriented dialect of Icon was called Idol and
really was a line-oriented preprocessor instead of a compiler.
The start symbol for the grammar is named
\textstyleSourceText{program}, and the semantic action code fragment
for this nonterminal calls the rest of the compiler (semantic analysis
and code generation) directly on the root of the syntax tree, rather
than storing it in a global variable for the main() procedure to
examine.
{\ttfamily\mdseries
program : decls EOFX \{ Progend(\$1);\} ;}
Many context free grammar rules are recursive, with an empty
production to terminate the recursion. The rule for declarations is
typical:
{\ttfamily\mdseries
decls \ \ : \{ \$\$ := EmptyNode \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ {\textbar} decls decl \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ if yynerrs = 0 then iwrites(\&errout,{\textquotedbl}.{\textquotedbl})}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \$\$ := node({\textquotedbl}decls{\textquotedbl}, \$1, \$2)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \} ;}
The {\textquotedbl}semantic action{\textquotedbl} (code fragment) for
every production rule builds a syntax tree node and assigns it to \$\$
for the nonterminal left-hand side of the rule.
Another common grammar pattern is a production rule that has many
different alternatives, such as the one for individual declarations:
{\ttfamily\mdseries
decl \ \ \ : record}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ {\textbar} proc}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ {\textbar} global}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ {\textbar} link}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ {\textbar} package}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ {\textbar} import}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ {\textbar} invocable}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ {\textbar} cl}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ ;}
For such {\textquotedbl}unary{\textquotedbl} productions, the child's
syntax tree node suffices for the parent; no new tree node is needed.
Some nonterminals mostly correspond to a specific sequence of
terminals, as is the case for package references:
{\ttfamily\mdseries
packageref : IDENT COLONCOLON IDENT \{ \$\$ := node({\textquotedbl}packageref{\textquotedbl}, \$1,\$2,\$3) \} }
{\ttfamily\mdseries
\ \ \ {\textbar} COLONCOLON IDENT \{ \$\$ := node({\textquotedbl}packageref{\textquotedbl}, \$1,\$2) \} \ }
{\ttfamily\mdseries
\ \ \ ;}
The lexical analyzer has already constructed a valid
{\textquotedbl}leaf{\textquotedbl} for each terminal symbol, so if a
production rule has only one terminal symbol in it, for a syntax tree
we can simply use the leaf for that nonterminal (for a parse tree, we
would need to allocate an extra unary internal node):
{\ttfamily\mdseries
lnkfile : IDENT ;}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ {\textbar} STRINGLIT ;}
The expressions (which comprise about half of the grammar) use a
separate nonterminal for each level of precedence instead of YACC's
declarations for handling precedence (\%left, \%right, etc). The Icon
and Unicon grammars approach 20 levels of nonterminals. A typical rule
looks like:
{\ttfamily\mdseries
expr6 \ \ : expr7 ;}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ {\textbar} expr6 PLUS expr7 \{ \$\$ := node({\textquotedbl}Bplus{\textquotedbl}, \$1,\$2,\$3);\} ;}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ {\textbar} expr6 DIFF expr7 \{ \$\$ := node({\textquotedbl}Bdiff{\textquotedbl}, \$1,\$2,\$3);\} ;}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ {\textbar} expr6 UNION expr7 \{ \$\$ := node({\textquotedbl}Bunion{\textquotedbl}, \$1,\$2,\$3);\} ;}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ {\textbar} expr6 MINUS expr7 \{ \$\$ := node({\textquotedbl}Bminus{\textquotedbl}, \$1,\$2,\$3);\} ;}
The {\textquotedbl}B{\textquotedbl} stands for
{\textquotedbl}binary{\textquotedbl}, to distinguish these operators
from their unary brethren. The 20 levels of nonterminals approach is
inherited from Icon and probably makes the parser larger than it has
to be, but taking these nonterminals out doesn't seem to help much.
\subsection{Syntax Error Handling}
Icon employed a relatively clever approach to doing syntax error
messages with YACC --- the parse state at the time of error is enough
to do fairly good diagnoses. But, every time the grammar changed, the
parse state numbers could change wildly. For Unicon I developed the
Merr tool, which associates parse error example fragments with the
corresponding diagnostic error message, and detects/infers the parse
state for you, reducing the maintenance problem when changing the
grammar. Merr also considers the current input token in deciding what
error message to emit, making it fundamentally more precise than
Icon's approach.
\section{The Unicon Preprocessor}
The Icon language originally did not include any preprocessor, but
eventually, a simple one was introduced, with ability to include
headers, define symbolic constants (macros without parameters), and
handle conditional compilation (ifdef). The preprocessor
implementation in Unicon was written by Bob Alexander, and came to
Unicon by way of Jcon, an Icon-to-JVM translator. This preprocessor is
written in a single 600+ line file, uni/unicon/preproce.icn.
The external public interface of the preprocessor is line-oriented,
consisting of a generator preproc(filename, predefinedsyms) which
suspends each line of the output, one after another. Its invocation
from the main() procedure looks like:
{\ttfamily\mdseries
\ \ \ yyin := {\textquotedbl}{\textquotedbl}}
{\ttfamily\mdseries
\ \ \ every yyin {\textbar}{\textbar}:= preprocessor(fName, uni\_predefs) do}
{\ttfamily\mdseries
\ \ \ \ \ \ yyin {\textbar}{\textbar}:= {\textquotedbl}{\textbackslash}n{\textquotedbl}}
Since the preprocessor outputs line-by-line, there is a mismatch
between it and the lexical analyzer's big-inhale model. The
preprocessor could be modified to fit better with the lexical analyzer
or vice versa.
The preprocessor function takes the filename to read from, along with
a table of predefined symbols which allows the preprocessor to respond
to lines like
{\ttfamily\mdseries
\$ifdef \_SQL}
\noindent based on what libraries are available and how Unicon was
built on a given platform.
The preprocessor() function itself starts each call off with initializations:
{\ttfamily\mdseries
\ \ \ \ static nonpunctuation}
{\ttfamily\mdseries
\ \ \ \ initial \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ nonpunctuation := \&letters ++ \&digits ++ '
{\textbackslash}t{\textbackslash}f{\textbackslash}r'}
{\ttfamily\mdseries
\ \ \ \ \}}
\bigskip
{\ttfamily\mdseries
\ \ \ \ preproc\_new(fname,predefined\_syms)}
The initialization code opens fname, creates empty stacks to keep
track of nested \$ifdef's and \$include's, initializes counters to 0
and so forth.
The preprocessor is line-oriented. For each line, it looks for a
preprocessor directive, and if it does not find one, it just scans for
symbols to replace and returns the line. The main loop looks like
{\ttfamily\mdseries
\ \ \ while line := preproc\_read() do line ? \{}
{\ttfamily\mdseries
\ \ \ \ \ \ preproc\_space() \ \ \ \ \ \ \# eat whitespace}
{\ttfamily\mdseries
\ \ \ \ \ \ if (={\textquotedbl}\#{\textquotedbl} \& match({\textquotedbl}line{\textquotedbl})) {\textbar}
(={\textquotedbl}\${\textquotedbl} \& any(nonpunctuation)) then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ suspend preproc\_scan\_directive()}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ else \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \&pos := 1}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ suspend preproc\_scan\_text()}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \}}
The procedures preproc\_scan\_directive() and preproc\_scan\_text()
work on special and ordinary lines, respectively. The line is not a
parameter because it is held in the current string scanning
environment. The preproc\_scan\_directive() starts by discarding
whitespace and identifying the first word on the line (which must be a
valid preprocessor directive). A case expression handles the various
directives (define, undef, ifdef, etc.). Defined symbols are stored in
a table. \$ifdef and \$ifndef are handled using a global variable
preproc\_if\_state to track the boolean conditions. A count of
\$ifdef's is maintained, in order to handle matching endif's.
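As a small, made-up example (the included file name is hypothetical), a source fragment exercising several of these directives might look like:
\begin{iconcode}
\$define LIMIT 100 \\
\$ifdef \_SQL \\
\> \$include "dbhelpers.icn" \\
\$else \\
\> \$define NOSQL 1 \\
\$endif
\end{iconcode}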
Include files are handled using a stack, but an additional set of
filenames is kept to prevent infinite recursion when files include
each other. When a new include directive is encountered it is checked
against the preproc\_include\_set and if OK, it is opened. The
including file (and its associated name, line, etc) are pushed onto a
list named preproc\_file\_stack. It is possible to run out of open
files under this model, although this is not easy under modern
operating systems.
Include files are searched on an include file path, consisting of a
list of directories given on an optional environment variable (LPATH)
followed by a list of standard directories. The standard directories
are expected to be found relative to the location of the virtual
machine binaries.
The procedure preproc\_scan\_text has the relatively simple job of
replacing any symbols by their definitions within an ordinary source
line. Since macros do not have parameters, it is vastly simpler than
in a C preprocessor. The main challenges are to avoid macro
substitutions when a symbol is in a comment or within quotes (string
or cset literals). An additional issue is to handle multiline string
literals, which occur in Icon when a string literal is not closed on a
line, and instead the line ends with an underscore indicating that it
is continued on the next line. Skipping over quoted text sounds
simple, but is trickier than it looks. Escape characters mean you
can't just look for the closing quote without considering what comes
before it, and you can't just look at the preceding character since it
might have been escaped, as in
{\textquotedbl}{\textbackslash}{\textbackslash}{\textquotedbl}. The
code looks similar to:
{\ttfamily\mdseries
repeat \{}
{\ttfamily\mdseries
\ \ \ while tab(upto('{\textquotedbl}{\textbackslash}{\textbackslash}')) do \{}
{\ttfamily\mdseries
\ \ \ \ \ \ case move(1) of \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ {\textquotedbl}{\textbackslash}{\textbackslash}{\textquotedbl}: move(1)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ default: \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ break break}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \# ...}
{\ttfamily\mdseries
\ \ \ if not match({\textquotedbl}\_{\textquotedbl},,-1) then}
{\ttfamily\mdseries
\ \ \ \ \ \ break}
{\ttfamily\mdseries
\ \ \ \&subject := preproc\_read() {\textbar} fail}
{\ttfamily\mdseries
\ \ \ \# ...}
{\ttfamily\mdseries
\ \ \ \}}
The code in preproc\_read() for reading a line does a regular Icon
read(); end of file causes the preprocessor file\_stack to be popped
for the previous file's information. Performance has not been
perceived as a significant problem, but it would be interesting to
convert preproc\_read() to use a big-inhale model to see if any
statistical difference could be observed. When an include is
encountered under a big-inhale, the saved state would contain the
string of remaining file contents, instead of the open file value.
\section{Semantic Analysis}
The Unicon translator's semantic analysis is minimal, and revolves
mainly around object-oriented features such as inheritance and package
imports. Before we can look at those things, we need to look at the
syntax tree structure.
In conventional YACC, a \%union declaration is necessary to handle the
varying types of objects on the value stack including the type used
for syntax tree nodes, but iyacc has no need of this awkward
mechanism: the value stack like all structure types can hold any type
of value in each slot. Similarly, tree nodes can hold children of any
type, potentially eliminating any awkwardness of mixing tokens and
internal nodes. Of course, you do still have to check what kind of
value you are working with.
\subsection{Parse Tree Nodes}
uni/unicon/tree.icn contains procedures to handle the syntax tree node
data type, including both the following declaration and the yyprint()
traversal function we'll be discussing in today's lecture.
{\ttfamily\mdseries
record treenode(label, children)}
\noindent holds one node worth of information. For convenience, a
procedure node(label, kids[]) takes an arbitrary number of parameters
and constructs the list of children for you. Leaves have a null
children field.
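A minimal sketch of that convenience procedure (the version in tree.icn may do slightly more) is:
\begin{iconcode}
procedure node(label, kids[]) \\
\> return treenode(label, kids) \\
end
\end{iconcode}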
\subsection{{\textquotedbl}Code Generation{\textquotedbl} in the Unicon Translator}
In a regular preprocessor, there is no code generation, there is a
text-filter model in which the preprocessor writes out (modified)
versions of the lines it reads in. In the Unicon translator, the code
that is written out is produced by a traversal of the syntax tree. The
same technique might be used by a {\textquotedbl}pretty
printer{\textquotedbl}. We will explore this aspect of the Unicon
translator as the best available demonstration of working with Unicon
syntax trees. Later on we will consider more
{\textquotedbl}real{\textquotedbl} code generation in the virtual
machine and the optimizing compiler.
Earlier we saw that the start symbol of the Unicon grammar had a
semantic action that called a procedure Progend(). We will cover most
of that procedure next week since it is all about object-orientation,
but at the end Progend(), a call to yyprint() performs the tree
traversal for code generation. A classic tree traversal pattern would
look like:
{\ttfamily\mdseries
procedure traverse(node)}
{\ttfamily\mdseries
\ \ \ if node is an internal node \{}
{\ttfamily\mdseries
\ \ \ \ \ \ every child := ! node.children do traverse(child)}
{\ttfamily\mdseries
\ \ \ \ \ \ generate code for this internal node (postfix)}
{\ttfamily\mdseries
\ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ else}
{\ttfamily\mdseries
\ \ \ \ \ \ generate code for this leaf}
{\ttfamily\mdseries
end}
The code generator traversal yyprint() is a lot more complicated than
that, but fits the general pattern. The main work done at various
nodes is to write some text to the output file, yyout. Most ordinary
internal nodes are of type treenode as described above. But because
there are several kinds of internal nodes and several kinds of leaves,
the {\textquotedbl}if node is an internal node{\textquotedbl} is
implemented as a case expression. Besides a regular treenode, the
other kinds of internal nodes are objects of type declaration, class,
and argument list. For regular treenodes, another case expression on
the node's label field is used to determine what kind of code to
generate, if any, besides visiting children and generating their code.
The default behavior for an internal node is to just visit the
children, generating their code. For ordinary syntax constructs (if,
while, etc.) this works great and a copy of the code is written out,
token by token. But several exceptions occur, mainly for the pieces of
Unicon syntax that extend Icon's repertoire. For example, packages and
imports are not in Icon and require special treatment.
{\ttfamily\mdseries
procedure yyprint(node)}
{\ttfamily\mdseries
\ \ \ static lasttok}
{\ttfamily\mdseries
\ \ \ case type(node) of \{}
{\ttfamily\mdseries
\ \ \ \ \ \ {\textquotedbl}treenode{\textquotedbl} : \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ case node.label of \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ {\textquotedbl}package{\textquotedbl}: \{ \} \# handled by semantic analysis}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ {\textquotedbl}import{\textquotedbl}: \{ print\_imports(node.children[2]) \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \# implement packages via name mangling}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ {\textquotedbl}packageref{\textquotedbl}: \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ if *node.children = 2 then}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ yyprint(node.children[2]) \# ::ident}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ else \{ \# ident :: ident}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ yyprint(node.children[1])}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ writes(yyout, {\textquotedbl}\_\_{\textquotedbl})}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ outcol +:= ((* writes(yyout, node.children[3].s)) + 2)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \}}
New syntax constructs such as procedure parameter defaults and type
restrictions, and variable initializers, are other examples where the
default traversal would output things illegal in Icon. They are
implemented by skipping some of the children (assignment and value) in
the regular pass, and adding extra code elsewhere, discussed below.
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ {\textquotedbl}varlist2{\textquotedbl}{\textbar}{\textquotedbl}stalist2{\textquotedbl}: \{
yyprint(node.children[1]) \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ {\textquotedbl}varlist4{\textquotedbl}{\textbar}{\textquotedbl}stalist4{\textquotedbl}: \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ yyprint(node.children[1])}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ yyprint(node.children[2])}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ yyprint(node.children[3])}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \}}
Much of this special logic is orchestrated by the code for traversing
a procedure; it can visit its arguments and variable declarations and
apply special rules to them.
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ {\textquotedbl}proc{\textquotedbl}: \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ yyprint(node.children[1])}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ every yyprint(node.children[2 to 3])}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ if exists\_statlists(node.children[3]) then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ini := node.children[4]}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ yyprint({\textquotedbl}{\textbackslash}ninitial \{{\textquotedbl})}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ if ini \~{}=== EmptyNode then \{ \# append into existing initial}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ yyprint(ini.children[2])}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ yyprint({\textquotedbl};{\textbackslash}n{\textquotedbl})}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ yystalists(node.children[3])}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ yyprint({\textquotedbl}{\textbackslash}n\}{\textbackslash}n{\textquotedbl})}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ else}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ every yyprint(node.children[4])}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ (node.children[1].fields).coercions()}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ yyvarlists(node.children[3])}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ yyprint(node.children[5])}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ yyprint(node.children[6])}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \}}
The default behavior of visiting one's children is very simple, as is
the handling of other kinds of internal nodes, which are objects. For
the objects, a method Write() is invoked.
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ {\textquotedbl}error{\textquotedbl}: fail}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ default:}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ every yyprint(!node.children)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ {\textquotedbl}declaration\_\_state{\textquotedbl} {\textbar} {\textquotedbl}Class\_\_state{\textquotedbl}
{\textbar} {\textquotedbl}argList\_\_state{\textquotedbl}:}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ node.Write(yyout)}
The outer case expression of yyprint() continues with various kinds of
leaf (token) nodes. These mainly know how to write their lexemes
out. But, a lot of effort is made to try to keep line and column
number information consistent. Variables outline and outcol are
maintained as each token is written out. Integers and string literals
found in the syntax tree are written out as themselves. Since they
have no attached lexical attributes, they are a bit suspect in terms
of maintaining debugging consistency. It turns out the reason they
occur at all, and the reason they have no source lexical attributes,
is that artificial syntax subtrees are generated to handle certain
object-oriented constructs, and within those subtrees strings and
integers may be placed, which do not correspond to anywhere in the
source code.
{\ttfamily\mdseries
\ \ \ \ \ \ {\textquotedbl}integer{\textquotedbl}: \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ writes(yyout, node); outcol +:= *string(node)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ {\textquotedbl}string{\textquotedbl}: \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ node ? \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ while writes(yyout, tab(find({\textquotedbl}{\textbackslash}n{\textquotedbl})+1)) do \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ outline+:=1; outcol:=1;}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ node := tab(0)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ writes(yyout, node); outcol +:= *node}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \}}
{\textquotedbl}Normally{\textquotedbl}, tokens are written out at
exactly the line and column they appear at in the source code. But a
myriad of constructs may bump them around. If the output falls behind
(in lines, or columns) extra whitespace can be inserted to stay in
sync. If output gets ahead by lines, a \#line directive can back it
up, but if output gets ahead by columns, there is nothing much one can
do, except make sure subsequent tokens don't accidentally get
attached/concatenated onto earlier tokens. This occurs, for example,
when the output code for an object-oriented construct in an expression
is longer than the source expression, perhaps due to name
mangling. Specific token combinations are checked, but the list here
may be incomplete (possible BUG!). For source tokens, not only might
the line and column change, the filename could be different as well.
{\ttfamily\mdseries
\ \ \ \ \ \ {\textquotedbl}token{\textquotedbl}: \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ if outfilename \~{}== node.filename {\textbar} outline {\textgreater} node.line then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ write(yyout,{\textquotedbl}{\textbackslash}n\#line {\textquotedbl}, node.line-1,{\textquotedbl}
{\textbackslash}{\textquotedbl}{\textquotedbl},
node.filename,{\textquotedbl}{\textbackslash}{\textquotedbl}{\textquotedbl})}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ outline := node.line}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ outcol := 1}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ outfilename := node.filename}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ while outline {\textless} node.line do \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ write(yyout); outline +:= 1; outcol := 1}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ if outcol {\textgreater}= node.column then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \# force space between idents and reserved words, and other}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \# deadly combinations (need to add some more)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ if (({\textbackslash}lasttok).tok = (IDENT{\textbar}INTLIT{\textbar}REALLIT) \&
reswords[node.s][2]\~{}=IDENT){\textbar}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ((({\textbackslash}lasttok).tok = NMLT) \& (node.tok = MINUS)) {\textbar}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (({\textbackslash}lasttok).tok = node.tok = PLUS) {\textbar}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (({\textbackslash}lasttok).tok = node.tok = MINUS) {\textbar}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ((reswords[({\textbackslash}lasttok).s][2]\~{}=IDENT) \&
(node.tok=(IDENT{\textbar}INTLIT{\textbar}REALLIT))){\textbar}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ((reswords[({\textbackslash}lasttok).s][2]\~{}=IDENT) \&
(reswords[node.s][2]\~{}=IDENT))}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ then}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ writes(yyout, {\textquotedbl} {\textquotedbl})}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ else}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ while outcol {\textless} node.column do \{ writes(yyout, {\textquotedbl} {\textquotedbl});
outcol +:= 1 \}}
Most tokens' lexemes are finally written out by writing node.s:
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ writes(yyout, node.s)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ outcol +:= *node.s}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ lasttok := node}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ {\textquotedbl}null{\textquotedbl}: \{ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ default: write({\textquotedbl}its a {\textquotedbl}, type(node))}
{\ttfamily\mdseries
\ \ \ \ \ \ \}}
{\ttfamily\mdseries
end}
\subsection{Keywords}
Besides the large set of interesting reserved words, Icon and Unicon
have another set of predefined special words called
\textstyleEmphasis{keywords}. These words are prefixed by an
ampersand, for example, \&subject holds the current
{\textquotedbl}subject{\textquotedbl} string being examined by string
scanning. A procedure Keyword(x1,x2) semantically checks that an
identifier following a unary ampersand is one of the valid keyword
names. The valid names are kept in a set data structure.
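A sketch of such a check is shown below; the parameter roles, the error reporting, and the abbreviated name list are assumptions for illustration, not the translator's exact code.
\begin{iconcode}
procedure Keyword(x1, x2) \\
static keywords \\
initial keywords := set(["clock", "date", "digits", "fail", "features", \\
\> \> \> \> "letters", "null", "pos", "subject", "time"]) \\
\ \\
\> if not member(keywords, x2.s) then \\
\> \> write(\&errout, "invalid keyword: \&", x2.s) \\
\> return x2 \\
end
\end{iconcode}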
\section{Object Oriented Facilities}
Unicon features classes, packages, and a novel multiple inheritance
mechanism. These items are implemented entirely within the Unicon
translator. The Icon virtual machine thus far has only the slightest of
extensions for object-orientation, specifically, the dot operator has
been extended to handle objects and method invocation.
The Unicon OOP facilities were originally prototyped as a semester
class project in a {\textquotedbl}special topics{\textquotedbl}
graduate course. Writing the prototype in a very high-level language
like Icon, and developing it as a preprocessor with name mangling,
allowed the initial class mechanism to be developed in a single
evening, and a fairly full, usable system with working inheritance to
be developed in the first weekend. By the end of the semester, the
system was robust enough to write it in itself, and it was released to
the public shortly afterwards as a package for Icon called
{\textquotedbl}Idol{\textquotedbl}. Many many improvements were made
after this point, often at the suggestion of users.
An initial design goal was to make the absolute smallest additions to
the language that were necessary to support
object-orientation. Classes were viewed as a version of Icon's record
data type, retaining its syntax for fields (member variables), but
appending a set of associated procedures. Because records have no
concept of public and private, neither did classes. Another graduate
student criticized this lack of privacy, and for several versions,
everything was made private unless an explicit public keyword was
used. But eventually support for privacy was dropped on the grounds
that it added no positive capabilities and was un-Iconish. The
existence of classes with hundreds of
{\textquotedbl}getter{\textquotedbl} and
{\textquotedbl}setter{\textquotedbl} methods was considered a direct
proof that {\textquotedbl}private{\textquotedbl} was idiotic in a
rapid prototyping language.
\subsection{The Code Generation Model for Classes}
{\textquotedbl}unicon -E foo{\textquotedbl} will show you what code is
generated for Unicon file foo.icn. If foo.icn contains classes, you
can enjoy the code generation model and experiment to see what it does
under various circumstances. As a first example, consider
{\ttfamily\mdseries
class A(x,y)}
{\ttfamily\mdseries
\ \ \ method m()}
{\ttfamily\mdseries
\ \ \ \ \ \ write({\textquotedbl}hello{\textquotedbl})}
{\ttfamily\mdseries
\ \ \ end}
{\ttfamily\mdseries
end}
These five lines generate 25 lines for Icont to translate into virtual
machine code. The first two lines are line directives showing from
whence this source code originated:
{\ttfamily\mdseries
\#line 0 {\textquotedbl}/tmp/uni13804206{\textquotedbl}}
{\ttfamily\mdseries
\#line 0 {\textquotedbl}a.icn{\textquotedbl}}
Global declarations (including procedures) would be passed through the
preprocessor pretty nearly intact, but for the class, we get a bunch
of very different code. Methods are written out, with names mangled to
a classname\_methodname format.
{\ttfamily\mdseries
procedure A\_m(self)}
\bigskip
\bigskip
{\ttfamily\mdseries
\#line 2 {\textquotedbl}a.icn{\textquotedbl}}
{\ttfamily\mdseries
\ \ \ \ \ write({\textquotedbl}hello{\textquotedbl});}
{\ttfamily\mdseries
end}
Two record types are defined, one for the class instances and one for
the {\textquotedbl}methods vector{\textquotedbl}, or
{\textquotedbl}operation record{\textquotedbl}. The methods vector is
instantiated exactly once in a global variable in classname\_\_oprec
format.
{\ttfamily\mdseries
record A\_\_state(\_\_s,\_\_m,x,y)}
{\ttfamily\mdseries
record A\_\_methods(m)}
{\ttfamily\mdseries
global A\_\_oprec}
The default constructor for a class takes fields as parameters and
uses them directly for initialization purposes. The first time it is
called, a methods vector is created. Instances are given a pointer to
themselves in an \_\_s field (mainly for historical reasons) and to
the methods vector in an \_\_m field. Current NMSU grad student Sumant
Tambe did an independent study project to get rid of \_\_s and \_\_m
with partial success, but his work is not finished or robust enough to
be enabled by default.
{\ttfamily\mdseries
procedure A(x,y)}
{\ttfamily\mdseries
local self,clone}
{\ttfamily\mdseries
initial \{}
{\ttfamily\mdseries
\ \ if /A\_\_oprec then Ainitialize()}
{\ttfamily\mdseries
\ \ \}}
{\ttfamily\mdseries
\ \ self := A\_\_state(\&null,A\_\_oprec,x,y)}
{\ttfamily\mdseries
\ \ self.\_\_s := self}
{\ttfamily\mdseries
\ \ return self}
{\ttfamily\mdseries
end}
\bigskip
{\ttfamily\mdseries
procedure Ainitialize()}
{\ttfamily\mdseries
\ \ initial A\_\_oprec := A\_\_methods(A\_m)}
{\ttfamily\mdseries
end}
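Client code then uses the class in the ordinary way; assuming the five-line class A above, the extended dot operator finds \texttt{m} through the instance's \texttt{\_\_m} methods vector:
\begin{iconcode}
procedure main() \\
\> a := A(1, 2) \\
\> a.m() \> \> \> \> \# writes "hello" via A\_m(a) \\
\> write(a.x + a.y) \> \# fields behave like record fields \\
end
\end{iconcode}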
\subsection{Symbols and Scope Resolution}
One of the basic aspects of semantic analysis is: for each variable,
where was it declared, so we can identify its address, etc. Unicon
inherits from Icon the curious convenience that variables do not have
to be declared: they are local by default. This feature is implemented
by deferring the local vs. global decision until link time, so the
Unicon translator has no local vs. global issues. Class variables,
however, have to be identified, and looked up relative to the implicit
{\textquotedbl}self{\textquotedbl} variable. A family of procedures in
uni/unicon/tree.icn with names starting
{\textquotedbl}scopecheck{\textquotedbl} go through the syntax tree
looking for such class variables. Like most tree traversals, this is a
recursive process, and since local and parameter declarations override
class variables, there are helper functions to walk through subtrees
building mini-symbol tables such as local\_vars in
scopecheck\_proc(node):
{\ttfamily\mdseries
\ \ \ \# Build local\_vars from the params and local var expressions.}
{\ttfamily\mdseries
\ \ \ local\_vars := set()}
{\ttfamily\mdseries
\ \ \ extract\_identifiers(node.children[1].fields, local\_vars)}
{\ttfamily\mdseries
\ \ \ extract\_identifiers(node.children[3], local\_vars)}
Eventually, every identifier in every expression is checked against
local\_vars, and if not found there, against the class variables
stored in a variable self\_vars:
{\ttfamily\mdseries
\ \ \ self\_vars := set()}
{\ttfamily\mdseries
\ \ \ every insert(self\_vars, c.foreachmethod().name)}
{\ttfamily\mdseries
\ \ \ every insert(self\_vars, c.foreachfield())}
{\ttfamily\mdseries
\ \ \ every insert(self\_vars, (!c.ifields).ident)}
{\ttfamily\mdseries
\ \ \ every insert(self\_vars, (!c.imethods).ident)}
For an IDENT node, the tests boil down to:
{\ttfamily\mdseries
\ \ \ if node.tok = IDENT then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ if not member({\textbackslash}local\_vars, node.s) then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ if member({\textbackslash}self\_vars, node.s) then}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ node.s := {\textquotedbl}self.{\textquotedbl} {\textbar}{\textbar} node.s}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ else }
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ node.s := mangle\_sym(node.s)}
{\ttfamily\mdseries
\ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \}}
Undeclared locals and globals are mangled to include the current
package name if there is one.
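For example, in a hypothetical module declaring \texttt{package math}, a global procedure picks up the package prefix joined by a double underscore, consistent with the packageref handling shown earlier:
\begin{iconcode}
package math \\
\ \\
procedure double(x) \> \> \# emitted as: procedure math\_\_double(x) \\
\> return x * 2 \\
end
\end{iconcode}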
\subsection{Inheritance}
Inheritance means: creating a class that is similar to an existing
class. In object-oriented literature there is {\textquotedbl}abstract
inheritance{\textquotedbl} in which a class supports all the same
operations with the same signatures, and there is concrete inheritance
in which actual code is shared. Early object-oriented languages
supported only concrete inheritance, while more recent languages tend
to discourage it. Unicon is not typed at compile time, so abstract
inheritance is not a big deal. There are abstract methods, and classes
whose every method is abstract, but the use of abstract is mainly for
documentation: subclass authors must provide certain methods. Anyhow,
the syntax of inheritance in Unicon is
{\ttfamily\mdseries
class subclass : super1 : super2 : ... ( ...fields... )}
The semantics of inheritance, and particularly of multiple
inheritance, are interesting in Unicon; the implementation is
relatively simple. An example of inheritance is given by class Class,
from uni/unicon/idol.icn
{\ttfamily\mdseries
class declaration(name,fields,tag,lptoken,rptoken)}
{\ttfamily\mdseries
\ \ \ ...}
{\ttfamily\mdseries
end}
{\ttfamily\mdseries
...}
{\ttfamily\mdseries
class Class : declaration (supers,}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ methods,}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ text,}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ imethods,}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ifields,}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ glob,}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ linkfile,}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ dir,}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ unmangled\_name,}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ supers\_node)}
Unique perspective on inheritance in Unicon comes from the actual
acquisition of inherited data fields and methods by the subclass. Some
object-oriented languages do this inheritance {\textquotedbl}by
aggregation{\textquotedbl}, creating a copy of the superclass in the
subclass. This is fine, but it makes
{\textquotedbl}overriding{\textquotedbl} an anomaly, when overriding
the parent with new/different behavior is entirely routine. Unicon
instead inherits by the child looking for things in the parent (and
the parent's parent, etc.) that they don't already have. In the above
example, class Class effectively appends the five fields of class
declaration onto the end of its field list. The generated code for
instances looks like
{\ttfamily\mdseries
record Class\_\_state(\_\_s,\_\_m,}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ supers,methods,text,imethods,ifields,}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ glob,linkfile,dir,unmangled\_name,supers\_node,}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ name,fields,tag,lptoken,rptoken)}
The inheritance semantics is called {\textquotedbl}closure
based{\textquotedbl} because the process of looking for things to add
from parent superclasses iterates until no new information can be
added, after which the subclass is said to be closed on its
parents. Other forms of closure appear frequently in CS.
\subsection{Implementing Multiple Inheritance in Unicon}
The actual code in the Unicon translator is, by analogy to transitive
closure, looking for things to inherit via a depth-first traversal of
the inheritance graph. Multiple inheritance can be separated out into
two portions:
\liststyleLxliii
\begin{enumerate}
\item a method transitive\_closure() that finds all superclasses and
provides a linearization of them, flattening the graph into a single
ordered list of all superclasses
\item a method resolve() that walks the list and looks for classes and
fields to add.
\end{enumerate}
Method transitive\_closure() is one of the cleaner demonstrations of
why Unicon is a fun language in which to write complex algorithms. It
is walking through a class graph, but by the way it is not recursive.
{\ttfamily\mdseries
\ \ method transitive\_closure()}
{\ttfamily\mdseries
\ \ \ \ count := supers.size()}
{\ttfamily\mdseries
\ \ \ \ while count {\textgreater} 0 do \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ added := taque()}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ every sc := supers.foreach() do \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ if /(super := classes.lookup(sc)) then}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ halt({\textquotedbl}class/transitive\_closure: \_}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ couldn't find superclass {\textquotedbl},sc)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ every supersuper := super.foreachsuper() do \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ if / self.supers.lookup(supersuper) \&}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ /added.lookup(supersuper) then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ added.insert(supersuper)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ count := added.size()}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ every self.supers.insert(added.foreach())}
{\ttfamily\mdseries
\ \ \ \ \}}
{\ttfamily\mdseries
\ \ end}
Now, given that Unicon provides a depth-first inheritance hierarchy
semantics, what is wrong with this picture? The code is stable and
hasn't needed changes in several years, so this question is not
fishing for syntax bugs, or claiming that there is a bug. But there is
something odd.
The method resolve() within class Class finds the inherited fields and
methods from the linearized list of superclasses.
{\ttfamily\mdseries
\ \ \#}
{\ttfamily\mdseries
\ \ \# resolve -{}- primary inheritance resolution utility}
{\ttfamily\mdseries
\ \ \#}
{\ttfamily\mdseries
\ \ method resolve()}
{\ttfamily\mdseries
\ \ \ \ \#}
{\ttfamily\mdseries
\ \ \ \ \# these are lists of [class , ident] records}
{\ttfamily\mdseries
\ \ \ \ \#}
{\ttfamily\mdseries
\ \ \ \ self.imethods := []}
{\ttfamily\mdseries
\ \ \ \ self.ifields := []}
{\ttfamily\mdseries
\ \ \ \ ipublics := []}
{\ttfamily\mdseries
\ \ \ \ addedfields := table()}
{\ttfamily\mdseries
\ \ \ \ addedmethods := table()}
{\ttfamily\mdseries
\ \ \ \ every sc := supers.foreach() do \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ if /(superclass := classes.lookup(sc)) then}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ halt({\textquotedbl}class/resolve: couldn't find superclass {\textquotedbl},sc)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ every superclassfield := superclass.foreachfield() do \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ if /self.fields.lookup(superclassfield) \&}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ /addedfields[superclassfield] then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ addedfields[superclassfield] := superclassfield}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ put ( self.ifields , classident(sc,superclassfield) )}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ if superclass.ispublic(superclassfield) then}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ put( ipublics, classident(sc,superclassfield) )}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \} else if {\textbackslash}strict then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ warn({\textquotedbl}class/resolve: '{\textquotedbl},sc,{\textquotedbl}' field
'{\textquotedbl},superclassfield,}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\textquotedbl}' is redeclared in subclass {\textquotedbl},self.name)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ every superclassmethod := (superclass.foreachmethod()).name() do \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ if /self.methods.lookup(superclassmethod) \&}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ /addedmethods[superclassmethod] then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ addedmethods[superclassmethod] := superclassmethod}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ put ( self.imethods, classident(sc,superclassmethod) )}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ every public := (!ipublics) do \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ if public.Class == sc then}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ put (self.imethods, classident(sc,public.ident))}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \}}
{\ttfamily\mdseries
\ \ end}
\subsection{Class and Package Specifications}
In the {\textquotedbl}old days{\textquotedbl} of Unicon's ancestor
Idol, you could only inherit from a class that appeared in the same
source file. Anything else poses a librarian's problem of identifying
from what file to inherit. Java, for instance, takes a brute-force
approach of one class per file.
Unicon generates in each source directory an NDBM database (named
uniclass.dir and uniclass.pag) that maps each class name to the file
the class lives in, along with the superclasses, fields, and methods
that appear in that class. From these specifications,
{\textquotedbl}link{\textquotedbl} declarations are generated for
superclasses within subclass modules, plus the subclass can perform
inheritance resolution. The code to find a class specification is
given in idol.icn's fetchspec(). A key fragment looks like
{\ttfamily\mdseries
\ \ \ if f := open(dir {\textbar}{\textbar} {\textquotedbl}/{\textquotedbl} {\textbar}{\textbar} env,
{\textquotedbl}dr{\textquotedbl}) then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ if s := fetch(f, name) then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ close(f)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ return db\_entry(dir, s)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ close(f)}
{\ttfamily\mdseries
\ \ \ \ \ \ \}}
Unicon searches for {\textquotedbl}link{\textquotedbl} declarations in
a particular order, given by the current directory followed by
directories in an IPATH (Icode path, or perhaps Icon path) environment
variable, followed by system library directories such as ipl/lib and
uni/lib. This same list of directories is searched for inherited
classes.
The string stored in uniclass.dir and returned from fetch() for class Class is:
{\ttfamily\mdseries
idol.icn}
{\ttfamily\mdseries
class Class : declaration(supers, methods, text, imethods, ifields, glob, linkfile, dir, unmangled\_name, supers\_node)}
{\ttfamily\mdseries
ismethod}
{\ttfamily\mdseries
isfield}
{\ttfamily\mdseries
Read}
{\ttfamily\mdseries
ReadBody}
{\ttfamily\mdseries
has\_initially}
{\ttfamily\mdseries
ispublic}
{\ttfamily\mdseries
foreachmethod}
{\ttfamily\mdseries
foreachsuper}
{\ttfamily\mdseries
foreachfield}
{\ttfamily\mdseries
isvarg}
{\ttfamily\mdseries
transitive\_closure}
{\ttfamily\mdseries
writedecl}
{\ttfamily\mdseries
WriteSpec}
{\ttfamily\mdseries
writemethods}
{\ttfamily\mdseries
Write}
{\ttfamily\mdseries
resolve}
{\ttfamily\mdseries
end}
\subsection{Unicon's Progend() revisited}
Having presented scope resolution, inheritance, and importing packages
and inheriting classes from other files via the uniclass.dir NDBM
files, we can finally show the complete semantic analysis in the
Unicon compiler, prior to writing out the syntax tree as Icon code:
{\ttfamily\mdseries
procedure Progend(x1)}
\bigskip
{\ttfamily\mdseries
\ \ \ package\_level\_syms := set()}
{\ttfamily\mdseries
\ \ \ package\_level\_class\_syms := set()}
{\ttfamily\mdseries
\ \ \ set\_package\_level\_syms(x1)}
{\ttfamily\mdseries
\ \ \ scopecheck\_superclass\_decs(x1)}
\bigskip
{\ttfamily\mdseries
\ \ \ outline := 1}
{\ttfamily\mdseries
\ \ \ outcol := 1}
{\ttfamily\mdseries
\ \ \ \#}
{\ttfamily\mdseries
\ \ \ \# export specifications for each class}
{\ttfamily\mdseries
\ \ \ \#}
{\ttfamily\mdseries
\ \ \ native := set()}
{\ttfamily\mdseries
\ \ \ every cl := classes.foreach\_t() do \{}
{\ttfamily\mdseries
\ \ \ \ \ \ cl.WriteSpec()}
{\ttfamily\mdseries
\ \ \ \ \ \ insert(native, cl)}
{\ttfamily\mdseries
\ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \#}
{\ttfamily\mdseries
\ \ \ \# import class specifications, transitively}
{\ttfamily\mdseries
\ \ \ \#}
{\ttfamily\mdseries
\ \ \ repeat \{}
{\ttfamily\mdseries
\ \ \ \ \ \ added := 0}
{\ttfamily\mdseries
\ \ \ \ \ \ every super := ((classes.foreach\_t()).foreachsuper() {\textbar} !imports) do \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ if /classes.lookup(super) then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ added := 1}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ readspec(super)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ cl := classes.lookup(super)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ if /cl then halt({\textquotedbl}can't inherit class
'{\textquotedbl},super,{\textquotedbl}'{\textquotedbl})}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ iwrite({\textquotedbl} \ inherits {\textquotedbl}, super, {\textquotedbl} from {\textquotedbl},
cl.linkfile)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ writelink(cl.dir, cl.linkfile)}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ outline +:= 1}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ \ \ \ \}}
{\ttfamily\mdseries
\ \ \ \ if added = 0 then break}
{\ttfamily\mdseries
\ \ \}}
{\ttfamily\mdseries
\ \ \#}
{\ttfamily\mdseries
\ \ \# Compute the transitive closure of the superclass graph. Then}
{\ttfamily\mdseries
\ \ \# resolve inheritance for each class, and use it to apply scoping rules.}
{\ttfamily\mdseries
\ \ \#}
{\ttfamily\mdseries
\ \ every (classes.foreach\_t()).transitive\_closure()}
{\ttfamily\mdseries
\ \ every (classes.foreach\_t()).resolve()}
\bigskip
{\ttfamily\mdseries
\ \ scopecheck\_bodies(x1)}
\bigskip
{\ttfamily\mdseries
\ \ \ if {\textbackslash}thePackage then \{}
{\ttfamily\mdseries
\ \ \ \ \ \ every thePackage.insertsym(!package\_level\_syms)}
{\ttfamily\mdseries
\ \ \ \ \ \ \}}
\bigskip
{\ttfamily\mdseries
\ \ \ \#}
{\ttfamily\mdseries
\ \ \ \# generate output}
{\ttfamily\mdseries
\ \ \ \#}
{\ttfamily\mdseries
\ \ \ yyprint(x1)}
{\ttfamily\mdseries
\ \ \ write(yyout)}
\subsection{Other OOP Issues}
The primary mechanisms for object-oriented programming that we have
discussed so far include classes, method invocation, and
inheritance. There were certainly a few parts we glossed over (like
how a\$super.m() is implemented). The main way to look for additional
issues we skipped is to read uni/unicon/idol.icn, which handles all
the object-oriented features and comes from the original Idol
preprocessor. Here are some thoughts from a scan of idol.icn:
\liststyleLxliv
\begin{itemize}
\item
the preprocessor semi-parsed class and method headers in order to do inheritance. After the real (YACC-based) parser was
added, I hoped to remove the parsing code, but it is retained in order to handle class specifications in the
uniclass.dir NDBM files
\item
The classes in idol.icn correspond fairly directly to major syntax constructs; the compiler itself is object-oriented.
\item
Packages are a {\textquotedbl}virtual syntax construct{\textquotedbl}: no explicit representation in the source, but
stored in the uniclass.dir database
\item
There is a curious data structure, a tabular queue, or taque, that combines (hash) table lookup and preserves (lexical)
ordering.
\item
Aggregation and delegation patterns are used a lot. A class is an aggregate of methods, fields, etc. and delegates a lot
of its work to objects created for subparts of its overall syntax.
\end{itemize}
\subsection{On Public Interfaces and Runtime Type Checking}
Object-oriented facilities are usually discussed in the context of
large complex applications where software engineering is an issue. We
don't usually need OOP for 100 line programs, but for 10,000+ line
programs it is often a big help.
Besides classes and packages, Unicon adds to Icon one additional
syntax construct in support of this kind of program: type checking and
coercion of parameters. Parameters and return values are the points at
which type errors usually occur, during an integration phase in a
large project where one person's code calls another. The type checking
and coercion syntax was inspired by the type checks done by the Icon
runtime system at the boundary where Icon program code calls the C
code for a given function or operator.
One additional comment about types is that the lack of types in
declarations for ordinary variables such as {\textquotedbl}local
x{\textquotedbl} does not prevent the Icon compiler iconc from
determining the exact types of well over 90\% of uses at compile time
using type inference. Type checking can generally be done at compile
time even if variable declarations do not refer to types... as long as
the type information is available across file and module boundaries.
\section{Relative attaching maps, $\RP^n$, Euler characteristic, and homology approximation}
Recall:
\begin{equation*}
\xymatrix{ H_n\left(\coprod_\alpha D^n_\alpha\right)\ar[r]^\partial_\cong\ar[d]^\cong_{\text{char. map}} & H_{n-1}\left(\coprod_\alpha S^{n-1}_\alpha\right)\ar[d]_{\text{attaching map}} & H_{n-1}\left(\coprod_\beta D^{n-1}_\beta,\coprod_\beta S^{n-2}_\beta\right) \ar[r]^\cong & \widetilde{ H}_{n-1}\left(\bigvee_\beta D^{n-1}_\beta/S^{n-2}_\beta\right)\ar[d]^\cong\\
H_n(X_n,X_{n-1})\ar[r]^\partial\ar@{=}[d] & H_{n-1}(X_{n-1})\ar[r]^j & H_{n-1}(X_{n-1},X_{n-2})\ar[r]^\cong\ar[ur]^\cong\ar@{=}[d] & \widetilde{ H}_{n-1}(X_{n-1}/X_{n-2})\\
C_n(X)\ar[rr]^d & & C_{n-1}(X)}
\end{equation*}
This boundary map $d$ is the effect of applying $ H_{n-1}(-)$ to:
\begin{equation*}
\xymatrix{\coprod_\alpha S^{n-1}_\alpha\ar[r]^{f_{n-1}} & X_{n-1}\ar[d]\ar[r] & X_{n-1}/X_{n-2}\cong \bigvee_\beta D^{n-1}_\beta/S^{n-2}_\beta\\
& X_n}
\end{equation*}
The composite in the top row of this diagram is called the ``relative attaching map'' because you're working relative to the $(n-2)$-skeleton.
I coyly said before that there is a monoid homomorphism $\deg:[S^{n-1},S^{n-1}]\to\Z_\times$ that sends $f\mapsto ( H_{n-1}(f): H_{n-1}(S^{n-1})\to H_{n-1}(S^{n-1}))$. I said that this was surjective. This is actually an isomorphism. We won't prove injectivity here, but we'll do this in 18.906.
\subsection{$\RP^m$}
Recall the CW-structure $\RP^0\subseteq\RP^1\subseteq\cdots\subseteq\RP^{n-1}\subseteq\RP^n\subseteq \cdots\subseteq\RP^m$, where $\RP^{n-1}$ is the collection of lines in $\RR^n$. The attaching map is the double cover $\pi:S^{n-1}\to\RP^{n-1}$, giving the pushout
\begin{equation*}
\xymatrix{S^{n-1}\ar[r]^{\text{double cover}}\ar[d] & \RP^{n-1}\ar[d]\\
D^n\ar[r] & \RP^n}
\end{equation*}
The cellular chain complex $C_\ast$ will look like:
\begin{equation*}
\xymatrix{0 & C_0=\Z\ar[l] & C_1=\Z\ar[l] & \cdots\ar[l] & C_{n-1}=\Z\ar[l] & C_n=\Z\ar[l] & \cdots\ar[l] & C_m=\Z\ar[l] & 0\ar[l]}
\end{equation*}
The first map $C_1\to C_0$ is easy to determine: $\RP^m$ is connected and has a single $0$-cell, so $H_0(\RP^m)=\Z$ must equal $C_0=\Z$ modulo the image of $C_1\to C_0$, forcing that image to be zero. Thus $C_1\to C_0$ is the zero map.
The relative attaching maps are: $S^{n-1}\xrightarrow{\pi}\RP^{n-1}\to \RP^{n-1}/\RP^{n-2}\cong S^{n-1}$. All we have to do is figure out the degree of this map. What happens when I collapse out $\RP^{n-2}$? This has the effect of collapsing the equator of $S^{n-1}$, because the points on the equator are exactly the ones that map to $\RP^{n-2}$. So the composition $S^{n-1}\xrightarrow{\pi}\RP^{n-1}\to \RP^{n-1}/\RP^{n-2}\cong S^{n-1}$ splits as:
\begin{equation*}
\xymatrix{S^{n-1}\ar[r]^{\pi}\ar[dr]^{\text{pinching}} & \RP^{n-1}\ar[r] & \RP^{n-1}/\RP^{n-2}\cong S^{n-1}\\
& S^{n-1}/S^{n-2}\ar[ur]\ar@{=}[r] & S^{n-1}_u\vee S^{n-1}_\ell}
\end{equation*}
The map $S^{n-1}_u\vee S^{n-1}_\ell\to S^{n-1}$ sends the top hemisphere to $S^{n-1}$ itself via the identity, so the first factor is a homeomorphism. The lower hemisphere will be sent to $S^{n-1}$ via the antipodal map (called $\alpha$), which is also a homeomorphism, but I won't draw this in because I don't want to do that here. What does this do in homology? In $(n-1)$-dimensional homology, we choose a generator $\sigma$ of $ H_{n-1}(S^{n-1})$.
The pinch map sends $\sigma$ to $(\sigma,\sigma)$. The map from $S^{n-1}_u\vee S^{n-1}_\ell$ to $S^{n-1}$ sends $(\sigma,\sigma)\mapsto \sigma+\alpha_\ast\sigma$. The degree of $\alpha_\ast$ is $(-1)^n$, as you saw in homework. The composite in homology for $S^{n-1}$ is multiplication by $1+(-1)^n$. Thus the differential $d:C_n(X)\to C_{n-1}(X)$ is multiplication by $1+(-1)^n$. So the cellular chain complex now looks like, if $n$ is even:
\begin{equation*}
\xymatrix{0 & C_0=\Z\ar[l] & C_1=\Z\ar[l]^0 & C_2=\Z \ar[l]^2 & \cdots\ar[l]^0 & C_{n-1}=\Z\ar[l]^0 & C_n=\Z\ar[l]^2 & 0\ar[l]}
\end{equation*}
and if $n$ is odd:
\begin{equation*}
\xymatrix{0 & C_0=\Z\ar[l] & C_1=\Z\ar[l]^0 & C_2=\Z \ar[l]^2 & \cdots\ar[l]^0 & C_{n-1}=\Z\ar[l]^2 & C_n=\Z\ar[l]^0 & 0\ar[l]}
\end{equation*}
Thus:
\begin{equation*}
H_k(\RP^n)=\begin{cases}
\Z & k=0\text{ and }k=n\text{ odd}\\
\Z/2\Z & k\text{ odd, }0<k<n\\
0 & \text{else}
\end{cases}
\end{equation*}
This means that odd-dimensional real projective space is orientable, and even-dimensional real projective space is non-orientable.
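For example, for $\RP^2$ and $\RP^3$ the complexes and their homology work out as follows (a quick sanity check of the general formula):
\begin{align*}
\RP^2:&\quad 0\leftarrow \Z\xleftarrow{0}\Z\xleftarrow{2}\Z\leftarrow 0, && H_0=\Z,\ H_1=\Z/2\Z,\ H_2=0,\\
\RP^3:&\quad 0\leftarrow \Z\xleftarrow{0}\Z\xleftarrow{2}\Z\xleftarrow{0}\Z\leftarrow 0, && H_0=\Z,\ H_1=\Z/2\Z,\ H_2=0,\ H_3=\Z.
\end{align*}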
\subsection{Euler char.}
On Friday, I made the comment that things are simpler if you have only even-dimensional cells. Here's a lemma.
\begin{lemma}
If $X$ is a CW-complex with only even cells (e.g.\ $\CP^n,\mathbb{H}\mathbf{P}^n$), then $ H_\text{odd}(X)=0$, and $ H_\text{even}(X)=C_\text{even}(X)$. Actually, I can just write $ H_\ast(X)\cong C_\ast(X)$. Even homology groups are free abelian groups with rank given by the number of $(2q)$-cells.
\end{lemma}
\begin{proof}
Trivial.
\end{proof}
Here's a result that'll improve this.
\begin{theorem}[``Euler'' because this is the generalization of the Euler characteristic]
Let $X$ be a finite CW-complex\footnote{Some alarm starts ringing. ``What are we supposed to do? It's just here to annoy us. It's ringing, but there's nothing to answer. Can we ignore it? (Turns off the light.) Let's just talk over it''.}. (We write $A_n$ to index the $n$-cells.)\footnote{Alarm ends, yay!} Then $\sum^\infty_{n=0}(-1)^n\# A_n=:\chi(X)=\text{Euler characteristic}$ is independent of the CW-structure on $X$.
\end{theorem}
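For example, $S^2$ admits a CW-structure with one $0$-cell and one $2$-cell, giving $\chi(S^2)=1-0+1=2$, and also the CW-structure coming from the boundary of a tetrahedron, with $4$ vertices, $6$ edges and $4$ faces, giving $\chi(S^2)=4-6+4=2$; the two answers agree, as the theorem demands.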
When all the cells are even-dimensional, the lemma is much stronger than this. I'm going to prove this theorem. Now I want to give a little reminder about the structure of finitely generated abelian groups.
\subsection{Finitely generated abelian groups}
If you have an abelian group $A$, you have a torsion subgroup $T(A)$, i.e., elements of finite order in $A$, i.e., $\{a\in A|\exists n\in \Z_{>0},na=0\}$. Then $A/T(A)$ is \emph{torsion free}. For a general abelian group, that's all you can say. Assume $A$ is finitely generated. Then $A/T(A)$ is also a finitely generated torsion free abelian group (take the image of the generators of $A$). This is actually a \emph{free abelian group}, and so it's isomorphic to $\Z^r$. We say that $r$ is the \emph{rank} of $A$. It's an invariant of $A$.
Another fact is the following. Recall that any subgroup of $A$ is finitely generated (nontrivial fact). This means that $T(A)$ is finitely generated. It is true that $T(A)=\Z/n_1\oplus\Z/n_2\oplus\cdots\oplus\Z/n_t$ where $n_1|n_2|\cdots|n_t$, where $t$ is well-defined and is ``the number of torsion generators''. What this means for us is that $A\cong T(A)\oplus A/T(A)\cong \Z^r\oplus\Z/n_1\oplus\Z/n_2\oplus\cdots\oplus\Z/n_t$ where $n_1|n_2|\cdots|n_t$. If $0\to A\to B\to C\to 0$ is a short exact sequence of finitely generated abelian groups, then $\text{rank}(A)+\text{rank}(C)=\text{rank}(B)$.
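For instance, $0\to\Z\xrightarrow{2}\Z\to\Z/2\Z\to 0$ is exact, and indeed $\text{rank}(\Z)+\text{rank}(\Z/2\Z)=1+0=1=\text{rank}(\Z)$.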
\chapter{Introduction} \label{chapter:intro}
\subsection*{Institutional}
I am very grateful for the institutional support that made my DPhil possible, and extend my thanks to the EPSRC Centre for Doctoral Training in Theory and Modelling in Chemical Sciences (grant EP/L015722/1), to the UK Materials and Molecular Modelling Hub (grant EP/P020194/1) and to the AWE.
\vspace{\baselineskip}
\noindent Ministry of Defence \copyright{} British Crown Copyright 2020/AWE
\subsection*{Personal}
The working title of this thesis was ``Rings and Things'' and I have thoroughly enjoyed this pseudo\--random walk of scientific discovery, not least because of the people who made it both productive and fun, and whose help I would like to recognise.
Firstly, I would like to thank Mark Wilson, for being an excellent supervisor and mentor.
I am very appreciative of him for a great many reasons: for providing continual encouragement and direction regardless of whatever random idea or erratic diagram I threw at him; for instigating ``Good Idea Tuesday'' and most of all (I can't stress this enough) for never following through with his plan for a group meeting on the life and music of Frank Zappa.
People say that students tend to turn into their supervisors over the course of a DPhil and, if this is the case, I can think of no better compliment.
In terms of academic support, my thanks also go to Alice Thorneywork and Roel Dullens for generously providing experimental colloid data, to Andrew Goodwin for his discussions on procrystalline lattices and to Phil Salmon for his assistance in understanding persistent homology.
I would also like to thank Claire Vallance for her help with the Faraday Division, along with Fernanda Duarte for the teaching opportunities at Hertford College, which I greatly enjoyed.
Additionally, I am grateful to the members of the Wilson group: to David, for recruiting me, to Dom, whose late night lab sessions helped shape the early parts of my DPhil, and to Matt, for never ceasing to find new ways to overcomplicate my code.
Thanks also to Leema and Linda, who ran a tight ship and always kept me on track.
I am also appreciative of everyone in the TMCS and the Theory Wing who made the office such a great place to work.
The 1000 day countdown timer Tim gave me is just ticking down, so I'll briefly say thank you to Laszlo, Viivi, Rocco, Tom, Max, Darren and Martin; for providing coffee when in need of motivation, beer in times of distraction and good laughs throughout.
Gabriel too deserves a thank you, of course, for being a friend like no other since our undergraduate days at LMH.
A special mention also for Jonny, who became a good friend in the short time I knew him, and whose intelligence and humour is greatly missed.
I would like to extend thanks to my family.
To Charlie and especially Mum, for always believing in me and providing a fantastic education.
I would not have been able to achieve this without you.
As anyone who picks this volume off our bookshelf in the future might expect\footnote{From between the Booker prize winners no less \-- I appreciate the gesture}, my final thanks are reserved for you, Aurelia.
From when I first started this project, and we were in separate cities, to being locked\--down together, as I finished writing up, your advice, encouragement and support have been invaluable.
You are the source of all my inspiration and you have made this period truly special.
I am excited to find out where life will take us next.
"alphanum_fraction": 0.7964244521,
"avg_line_length": 102,
"ext": "tex",
"hexsha": "8ef3189b066fe7fa61076b5a2fdaa13f6c36504b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "77ddd9fcb3b563a5dc93457682724046053e1137",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "dormrod/Thesis",
"max_forks_repo_path": "text/acknowledgements.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "77ddd9fcb3b563a5dc93457682724046053e1137",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "dormrod/Thesis",
"max_issues_repo_path": "text/acknowledgements.tex",
"max_line_length": 360,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "77ddd9fcb3b563a5dc93457682724046053e1137",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "dormrod/Thesis",
"max_stars_repo_path": "text/acknowledgements.tex",
"max_stars_repo_stars_event_max_datetime": "2021-05-14T11:17:18.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-05-14T11:17:18.000Z",
"num_tokens": 758,
"size": 3468
} |
\documentclass[12pt]{cdblatex}
\usepackage{exercises}
\usepackage{fancyhdr}
\usepackage{footer}
\begin{document}
% --------------------------------------------------------------------------------------------
\section*{Exercise 4.5 Reformatting complex expressions}
\begin{cadabra}
{a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w#}::Indices(position=independent).
\nabla{#}::Derivative.
def get_term (obj,n):
x^{a}::Weight(label=xnum). # assign weights to x^{a}
foo := @(obj). # make a copy of obj
bah = Ex("xnum = " + str(n)) # choose a target
keep_weight (foo,bah) # extract the target
return foo
def reformat (obj,scale):
{x^{a},A_{a},B_{a},A_{a b},B_{a b},C_{a b},C_{a b c},g^{a b}}::SortOrder. # choose a sort order
foo = Ex(str(scale)) # create a scale factor
bah := @(foo) @(obj). # apply the scale factor, clears all fractions
distribute (bah) # only required if (bah) contains brackets
sort_product (bah)
rename_dummies (bah)
canonicalise (bah)
factor_out (bah,$x^{a?}$)
ans := @(bah) / @(foo). # undo previous scaling
return ans
# ---------------------------------------------------------------
# a messy unformatted expression
expr := (1/7) A_{e} x^{e}
- (1/3) B_{f} x^{f}
+ (1/3) A_{a b} x^{a} x^{b}
+ (1/9) B_{e c} x^{c} x^{e}
- (1/5) C_{p c} B_{d q} g^{c d} x^{p} x^{q}
+ (3/7) A_{a b c} x^{a} x^{b} x^{c}
- (1/5) B_{a b} C_{c d e} g^{c d} x^{a} x^{b} x^{e}
+ (7/11) B_{a b} B_{c d} C_{e f g} g^{b c} g^{d f} x^{a} x^{e} x^{g}. # cdb (ex-0405.100,expr)
# split the expression into separate terms
term1 = get_term (expr,1) # cdb(term1.101,term1)
term2 = get_term (expr,2) # cdb(term2.101,term2)
term3 = get_term (expr,3) # cdb(term3.101,term3)
# reformat terms and tidy fractions
term1 = reformat (term1, 21) # cdb(term1.102,term1)
term2 = reformat (term2, 45) # cdb(term2.102,term2)
term3 = reformat (term3,385) # cdb(term3.102,term3)
# rebuild the expression
expr := @(term1) + @(term2) + @(term3). # cdb (ex-0405.101,expr)
\end{cadabra}
\clearpage
\begin{dgroup*}
\Dmath*{ g = \Cdb*[\V{15pt}\hskip 3.0cm\hfill]{ex-0405.100}
= \Cdb*{ex-0405.101} }
\end{dgroup*}
\end{document}
\subsection{Parametric ReLU}
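A brief definitional sketch (details such as whether the slope is shared per channel or per layer vary between implementations): the parametric rectified linear unit (PReLU) generalises the leaky ReLU by making the slope on the negative side a learnable parameter $a$,
\[
f(x)=\max(0,x)+a\,\min(0,x),
\]
so that $f(x)=x$ for $x>0$ and $f(x)=ax$ for $x\leq 0$; the parameter $a$ is updated by backpropagation along with the other weights.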
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[
]{article}
\usepackage{amsmath,amssymb}
\usepackage{lmodern}
\usepackage{iftex}
\ifPDFTeX
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provide euro and other symbols
\else % if luatex or xetex
\usepackage{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
\fi
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
\KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
pdftitle={Test},
hidelinks,
pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{-\maxdimen} % remove section numbering
\usepackage{multicol}
\ifLuaTeX
\usepackage{selnolig} % disable illegal ligatures
\fi
\title{Test}
\author{}
\date{}
\begin{document}
\maketitle
\hypertarget{column-div-test}{%
\section{column-div test}\label{column-div-test}}
content\ldots{}
\leavevmode\vadjust pre{\hypertarget{thisdivdoesnothing}{}}%
content\ldots{}
\hypertarget{three-columns}{%
\subsection{Three columns}\label{three-columns}}
\begin{multicols}{3}
content\ldots{}
content\ldots{}
content\ldots{}
\end{multicols}
\hypertarget{two-uneven-columns}{%
\subsection{Two uneven columns}\label{two-uneven-columns}}
\mbox{
\begin{minipage}[b]{0.3\columnwidth}
contents\ldots{}
\end{minipage}
\begin{minipage}[b]{0.7\columnwidth}
contents\ldots{}
\end{minipage}
}
\hypertarget{columns-in-columns}{%
\subsection{Columns in columns}\label{columns-in-columns}}
\begin{multicols}{3}
\mbox{
\begin{minipage}[b]{0.2\columnwidth}
contents\ldots{}
\end{minipage}
\begin{minipage}[b]{0.8\columnwidth}
contents\ldots{}
\end{minipage}
}
\mbox{
\begin{minipage}[b]{0.2\columnwidth}
contents\ldots{}
\end{minipage}
\begin{minipage}[b]{0.8\columnwidth}
contents\ldots{}
\end{minipage}
}
\mbox{
\begin{minipage}[b]{0.2\columnwidth}
contents\ldots{}
\end{minipage}
\begin{minipage}[b]{0.8\columnwidth}
contents\ldots{}
\end{minipage}
}
\end{multicols}
\hypertarget{columns-and-colors}{%
\subsection{Columns and Colors}\label{columns-and-colors}}
\mbox{
\begin{minipage}{0.4\columnwidth}
\color{blue}
blue content\ldots{}
\end{minipage}
\colorbox{red}{
\begin{minipage}{\dimexpr0.6\columnwidth-4\fboxsep\relax}
content on red background\ldots{}
\end{minipage}
}
}
\mbox{
\colorbox{red}{
\begin{minipage}{\dimexpr0.6\columnwidth-4\fboxsep\relax}
\color{blue}
blue content on red background\ldots{}
\end{minipage}
}
\begin{minipage}{0.4\columnwidth}
contents\ldots{}
\end{minipage}
}
\end{document}
\subsection{Explainable AI}
In addition, Article 22 of GDPR, in essence, grants an individual a ``right of human intervention''. Under this right, an individual may ask for a human to review the AI's decision to determine whether or not the system made a mistake. This right of human intervention and the right of explainability together place a legal obligation on the business to understand what happened, and then make a reasoned judgment as to whether a mistake was made. [https://www.airoboticslaw.com/blog/ai-explainability-gdpr]
"alphanum_fraction": 0.8064516129,
"avg_line_length": 263.5,
"ext": "tex",
"hexsha": "23e1f4d69142035bebd6a2da49645a861b7bb58e",
"lang": "TeX",
"max_forks_count": 7,
"max_forks_repo_forks_event_max_datetime": "2022-02-28T03:01:34.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-02-03T07:10:11.000Z",
"max_forks_repo_head_hexsha": "70cbd8dba6d3021fc0b6999c8f758a705fa260e7",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "PengGuohui88/PhysioNet-CinC-Challenge2020-TeamUIO",
"max_forks_repo_path": "Paper/Latex/background.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "70cbd8dba6d3021fc0b6999c8f758a705fa260e7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "PengGuohui88/PhysioNet-CinC-Challenge2020-TeamUIO",
"max_issues_repo_path": "Paper/Latex/background.tex",
"max_line_length": 499,
"max_stars_count": 19,
"max_stars_repo_head_hexsha": "41da06fadbedb06b5a2d887c77ddb2d38c44c08b",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "Bsingstad/IdrettsEKG",
"max_stars_repo_path": "Paper/Latex/background.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-28T03:01:32.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-01-26T14:04:13.000Z",
"num_tokens": 114,
"size": 527
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Conclusion and Future work}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Statistical Analysis conclusion}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
What are the key takeaways from the statistical analysis?
What effects will this have on the Arctic or world?
\section{Spatial Analysis conclusion}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
What are the key takeaways from spatial analysis?
What effects will this have on the Arctic or world?
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{What does it mean overall?}
Where does the project go from here?
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Potential for Future Work}
Where does the project go from here?
"alphanum_fraction": 0.3799392097,
"avg_line_length": 34.0344827586,
"ext": "tex",
"hexsha": "42c587b2e3567f4d43750a7b47cffaf47cfccd2e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6658f6ccb1396e46c794926bc474085148fb06a3",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "kfmit/kat_muddy_thesis",
"max_forks_repo_path": "Sources/Conclusion.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6658f6ccb1396e46c794926bc474085148fb06a3",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "kfmit/kat_muddy_thesis",
"max_issues_repo_path": "Sources/Conclusion.tex",
"max_line_length": 72,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "6658f6ccb1396e46c794926bc474085148fb06a3",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "kfmit/kat_muddy_thesis",
"max_stars_repo_path": "Sources/Conclusion.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 121,
"size": 987
} |
\section{List of strategies}\label{app:list_of_players}
The strategies used in this study are taken from the Axelrod-Python library, version 3.0.0.
\begin{multicols}{3}
\begin{enumerate}
\input{list_of_strategies.tex}
\end{enumerate}
\end{multicols}
"alphanum_fraction": 0.78,
"avg_line_length": 27.7777777778,
"ext": "tex",
"hexsha": "9a872da17ffcef74a62234d4fe234ca36207c3bc",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-03-30T08:13:32.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-03-30T08:13:32.000Z",
"max_forks_repo_head_hexsha": "0e7c9949d996cf3822072321b603fcff707e97d8",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Nikoleta-v3/meta-analysis-of-prisoners-dilemma-tournaments",
"max_forks_repo_path": "paper/strategies_list_section.tex",
"max_issues_count": 14,
"max_issues_repo_head_hexsha": "0e7c9949d996cf3822072321b603fcff707e97d8",
"max_issues_repo_issues_event_max_datetime": "2020-05-08T11:23:19.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-03-29T14:42:49.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Nikoleta-v3/meta-analysis-of-prisoners-dilemma-tournaments",
"max_issues_repo_path": "paper/strategies_list_section.tex",
"max_line_length": 86,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "0e7c9949d996cf3822072321b603fcff707e97d8",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Nikoleta-v3/meta-analysis-of-prisoners-dilemma-tournaments",
"max_stars_repo_path": "paper/strategies_list_section.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 75,
"size": 250
} |
\section{Experimental evaluation}\label{sec:experiments}
%Exp:
%prob0 then VI
%prob0 then SMC
%BRTDP
%Black SMC
%White SMC
%Models:
%QComp? Large MC with lots of branching
%PRISM bench suite (Check there is nothing?)
%handcraft sth
%Pepa?
%Hermann self-stabilization
We implemented grey SMC in a branch of the PRISM Model Checker \cite{prism} extending the implementation of black SMC \cite{DHKPjournal}.
We ran experiments on (both discrete- and continuous-time) Markov chains from the PRISM Benchmark Suite \cite{prism-benchmark-suite}.
In addition to a comparison to black SMC, we also provide comparisons to VI and BVI of PRISM and BRTDP of \cite{atva14}.
An interested reader may also want to refer to \cite[Table II]{DHKPjournal} for a comparison of black SMC against two unbounded SMC techniques of \cite{YCZ10}.
For every run configuration, we run 5 experiments and report the median.
In black SMC, the check for candidates is performed every 1000 steps during path simulations, while in grey SMC the check is performed every 100 steps.
Additionally, grey SMC checks if a candidate is indeed a BSCC once every state of the candidate is seen at least twice.
In all our tables, `TO' denotes a timeout of 15 minutes and `OOM' indicates that the tool ran out of memory restricted to 1GB RAM.
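To make the candidate mechanism concrete, the following Python-style sketch is purely illustrative: the function names and the candidate heuristic are our simplifications, not the PRISM implementation, and it assumes the successor set of each state can be queried as one way of providing the partial knowledge that grey SMC exploits.
\begin{verbatim}
def tail_candidate(path):
    # States visited since the last first-time visit on the path: a cheap
    # guess at the bottom SCC in which the run may be trapped.
    first_visit = {}
    for i, s in enumerate(path):
        first_visit.setdefault(s, i)
    return set(path[max(first_visit.values()):])

def grey_smc_run(init, step, num_successors, is_target,
                 max_steps=1000000, check_every=100):
    # step(s) samples one transition; num_successors(s) is the partial
    # knowledge assumed here: how many distinct successors s has.
    state, path = init, [init]
    observed = {init: set()}            # successors seen so far, per state
    for i in range(1, max_steps + 1):
        if is_target(state):
            return True                 # target reached on this run
        nxt = step(state)
        observed.setdefault(state, set()).add(nxt)
        observed.setdefault(nxt, set())
        state = nxt
        path.append(state)
        if i % check_every == 0:
            cand = tail_candidate(path)
            # if every candidate state has shown all of its successors and
            # none of them leaves the candidate, the run is trapped in a BSCC
            if all(len(observed[s]) == num_successors(s)
                   and observed[s] <= cand for s in cand):
                return False
    raise RuntimeError("undecided within max_steps")
\end{verbatim}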
\subsection{Comparison of black and grey SMC}
\input{table-black-grey}
\input{table-pmin}
Table \ref{tab:grey-v-black} compares black SMC and grey SMC on multiple benchmarks.
One can see that, except in the case of \texttt{leader6\_11} and \texttt{brp\_nodl}, grey SMC finishes at least as soon as black SMC.
In \texttt{bluetooth}, \texttt{gridworld}, \texttt{leader} and \texttt{tandem}, both the SMC methods are able to terminate without encountering any candidate (i.e. either the target is seen or the left side of the until formula is falsified).
In \texttt{brp\_nodl}, \texttt{crowds\_nodl} and \texttt{nand}, the SMC methods encounter a candidate, however, since the candidate has only a single state (all BSCCs are trivial), black SMC is quickly able to confidently conclude that the candidate is indeed a BSCC.
The only interesting behaviour is observed on the \texttt{herman-17} benchmark.
In this case, every path eventually encounters the only BSCC existing in the model.
Grey SMC is able to quickly conclude that the candidate is indeed a BSCC, while black SMC has to sample for a long time in order to be sufficiently confident.
The performance of black SMC is also a consequence of $\pmin$ being quite small.
Table \ref{tab:fsmc-vary-pmin} shows that black SMC is very sensitive to $\pmin$.
Note that grey SMC is not affected by the changes in $\pmin$ as it always checks whether a candidate is a BSCC as soon as all the states in the candidate are seen twice.
%Better if large BSCC or small pmin.
%But also Black SMC often good.
%various checkBound (how often to check): 1, 10, 100, 1000
%various simminvisits (strength of candidate): 0, 2, 5, 20, 100, 1000
%always report [min/mean/max]
\subsection{Grey SMC vs. Black SMC/BRTDP/BVI/VI}
\input{table-herman-shortened}
We now look more closely at the self-stabilization protocol \texttt{herman} \cite{HermanPrism,Herman90}.
The protocol works as follows:
\texttt{herman-N} contains N processes, each possessing a local boolean variable $x_i$.
A token is assumed to be in place $i$ if $x_i = x_{i-1}$.
The protocol proceeds in rounds.
In each round, if the current values of $x_i$ and $x_{i-1}$ are equal, the next value of $x_i$ is set uniformly at random, and otherwise it is set equal to the current value of $x_{i-1}$.
The number of states in \texttt{herman-N} is therefore $2^N$.
The goal of the protocol is to reach a stable state where there is exactly one token in place.
For example, in case of \texttt{herman-5}, a stable state might be $(x_1=0, x_2=0, x_3=1, x_4=0, x_5=1)$, which indicates that there is a token in place 2.
In every \texttt{herman} model, all stable states belong to the single BSCC.
The number of states in the BSCC range from 10 states in \texttt{herman-5} to 2,000,000 states in \texttt{herman-21}.
For all \texttt{herman} models in Table \ref{tab:herman-short}, we are interested in checking if the probability of reaching an unstable state where there is a token in places 2-5, i.e. $(x_1=1, x_2=1, x_3=1, x_4=1, x_5=1)$ is less than 0.05.
This property, which we name \texttt{4tokens}, identifies $2^{N-5}$ states as target in \texttt{herman-N}.
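For intuition, the round dynamics described above can be sketched in a few lines of Python (an illustration only, not the PRISM model):
\begin{verbatim}
import random

def herman_round(x):
    # One synchronous round on a ring of N processes: process i holds a
    # token iff x[i] == x[i-1]; token holders resample their bit uniformly
    # at random, all other processes copy their left neighbour.
    n = len(x)
    return [random.randint(0, 1) if x[i] == x[i - 1] else x[i - 1]
            for i in range(n)]

def token_places(x):
    # Places i currently holding a token, i.e. positions with x[i] == x[i-1].
    return [i for i in range(len(x)) if x[i] == x[i - 1]]
\end{verbatim}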
The results in Table \ref{tab:herman-short} show how well grey SMC scales when compared to black SMC, BRTDP, BVI\footnote{We refrain from comparison to other guaranteed VI techniques such as sound VI \cite{DBLP:conf/cav/QuatmannK18} or optimistic VI \cite{DBLP:journals/corr/abs-1910-01100} as the implementations are not PRISM-based and hence would not be too informative in the comparison.} and VI.
Black SMC times out for all models where $N \geq 11$. This is due to the fact that the larger models have a smaller $\pmin$, thereby requiring black SMC to sample extremely long paths in order to confidently identify candidates as BSCCs.
BVI and VI perform well on small models, but as the model sizes grow and transition probabilities become smaller, propagating values becomes extremely slow.
Interestingly, we found that in both grey SMC and black SMC, approximately 95\% of the time is spent in computing the next transitions, which grow exponentially in number; an improved simulator implementation could mitigate this blow-up in run time, allowing for a fairer comparison with the extremely performant symbolic value iteration algorithms.
\input{table-herman-brtdp}
Finally, we comment on the exceptionally poor performance of BRTDP on \texttt{herman} models.
In Table \ref{tab:herman-brtdp}, we run BRTDP on three different properties: (i) tokens in places 2-3 (\texttt{2tokens}); (ii) tokens in places 2-4 (\texttt{3tokens}); and (iii) tokens in places 2-5 (\texttt{4tokens}).
The number of states satisfying the property decreases when going from 2 tokens to 4 tokens.
The table shows that BRTDP is generally better in situations where the target set is larger. %\todojan{quick jump to the conclusion; must be refined (or dropped)}
In summary, the experiments reveal the following:
\begin{itemize}
\item For most benchmarks, black SMC and grey SMC perform similarly, as seen in Table \ref{tab:grey-v-black}.
As expected, the advantages of grey SMC do not show up in these examples, which (almost all) contain only trivial BSCCs.
\item The advantage of grey SMC is clearly visible on the \texttt{herman-N} benchmarks, in which there are non-trivial BSCCs. Here, black SMC quickly fails while grey SMC is extremely competitive.
\item Classical techniques such as VI and BVI fail when either the model is too large or the transition probabilities are too small.
However, they are still to be used for strongly connected systems, where the whole state space needs to be analysed for every run in both SMC approaches, but only once for (B)VI.%\todojan{I want to add this, although maybe we don't have data here; Maxi: I see the point, but not sure that "only once" is correct. It still takes several iterations, and every iteration works on the whole state space.}
\end{itemize}
"alphanum_fraction": 0.7739010989,
"avg_line_length": 80.8888888889,
"ext": "tex",
"hexsha": "5bd090fa24ac906641299fdefc042441d0871b67",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "250f64e6c442f6018cab65ec6979d9568a842f57",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "kushgrover/apt-vs-dift",
"max_forks_repo_path": "src/prism-fruit/paper/isola20/5_experimental.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "250f64e6c442f6018cab65ec6979d9568a842f57",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "kushgrover/apt-vs-dift",
"max_issues_repo_path": "src/prism-fruit/paper/isola20/5_experimental.tex",
"max_line_length": 401,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "250f64e6c442f6018cab65ec6979d9568a842f57",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "kushgrover/apt-vs-dift",
"max_stars_repo_path": "src/prism-fruit/paper/isola20/5_experimental.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1967,
"size": 7280
} |
\documentclass[letterpaper]{scrartcl}
\usepackage[nomarkers,figuresonly]{endfloat}
\renewcommand{\efloatseparator}{\mbox{}}
\usepackage[margin=2cm]{geometry}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{hyperref}
\title{Mini Places Challenge}
\subtitle{6.869 Final Project Proposal}
\author{David Bau, Jon Gjengset}
\begin{document}
\maketitle
Our goal is to reach at least a 75\% top-5 accuracy for the Mini Places
Challenge by constructing a Convolutional Neural Net.\ In order to reach
this level of accuracy, we want to explore the following ideas:
\begin{enumerate}
\item Experiment with variations on the number, size and type of layers in our
network architecture, for example, experiment with more aggressive
pooling or alternatives to fully connected layers, to try to speed
training for deeper networks.
\item Experiment with CNN visualization to aid us in identifying
areas of the network that are not being trained well. See
\url{http://arxiv.org/abs/1311.2901}.
\item Apply unconventional preprocessing to the training data. For
example, can a neural network do recognition on FFT input?
\item Apply autoencoders during pre-training. Can we build more
effective, deeper network using autoencoders? See
\url{http://www.jmlr.org/papers/volume11/vincent10a/vincent10a.pdf}
and \url{http://arxiv.org/pdf/1310.8499.pdf}.
\item Explore more traditional pre-processing ideas. In addition to
blurring, rotation, cropping, and flipping, we could experiment
with warping the image or perspective distortion.
\item Experiment with ``attention''-seeking neural nets that combine a
recurrent network with a CNN to see if we can focus the CNN's
attention to interesting parts of the image. See
\url{http://papers.nips.cc/paper/5542-recurrent-models-of-visual-attention.pdf}.
\end{enumerate}
\section*{Implementation Strategy}
We are set up to work with Theano with Lasagne, although Google just released
TensorFlow, and we are considering using that platform.
Our approach will be to try the above ideas iteratively, keeping the ones
that work and leaving behind the ones that don't.
\end{document}
\subsection*{Project Readme}
\addcontentsline{toc}{subsection}{Project Readme}
Taxicoin is a decentralised ride-sharing protocol, which uses Ethereum to manage logistics. The aim of the project is to provide a more open alternative to existing applications, and one without any intermediary to take a cut of driver profits.
It is the final year project of Scott Street, a Computer Science undergraduate at Aston University, Birmingham. As such, please do not attempt to contribute!
\subsubsection*{Running Locally}
You'll first need to install the project dependencies: \lstinline{nodejs}, \lstinline{npm}, \lstinline{ganache}, \lstinline{geth}.
\medskip
\begin{enumerate}
\item Clone this repo to somewhere on your computer
\item Launch \lstinline{ganache}, make sure it's using port \lstinline{7545} and leave it running in the background
\item Run \lstinline{geth} using the following, and keep it running in the background:
\begin{lstlisting}
# run on rinkeby testnet, allow anybody to connect, and enable the Whisper protocol
geth console --rinkeby --rpc --rpccorsdomain "http://localhost:*" --shh
\end{lstlisting}
\item Run some commands to set everything up:
\begin{lstlisting}
cd /where/you/put/taxicoin
# install dependencies
npm install
# migrate Ethereum smart contracts
npm run contract
# start the dev server
npm run dev
\end{lstlisting}
\item This should open a browser window at \lstinline{http://localhost:8080} where you can start playing with Taxicoin!
\end{enumerate}
\subsubsection*{Multiple Accounts for Testing}
As a single user of Taxicoin is not able to be both a rider and a driver simultaneously, for testing purposes it is useful to be able to use two Ethereum accounts. The browser plugin \lstinline{MetaMask} allows this.
After installation, set the network to custom RPC, with the url \texttt{http://127.0.0.1:7545} (your local ganache test node). In ganache, you may click the key icon to the right of each address to display the private key for that address. You may then use this to import the account into MetaMask.
Now when visiting the Taxicoin example client, it will automatically use MetaMask to interact with the smart contract. Simply toggle between two imported Ganache accounts to represent either a rider or driver.
| {
"alphanum_fraction": 0.7856489945,
"avg_line_length": 46.5531914894,
"ext": "tex",
"hexsha": "409b3adf22d665cfe09f16260836153b577a8268",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "3f0fd26841992aa47d5e4f6fce56de4b56452f30",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "sprusr/taxicoin",
"max_forks_repo_path": "docs/tex/readme.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "3f0fd26841992aa47d5e4f6fce56de4b56452f30",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "sprusr/taxicoin",
"max_issues_repo_path": "docs/tex/readme.tex",
"max_line_length": 298,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "3f0fd26841992aa47d5e4f6fce56de4b56452f30",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "sprusr/taxicoin",
"max_stars_repo_path": "docs/tex/readme.tex",
"max_stars_repo_stars_event_max_datetime": "2021-08-23T01:14:38.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-04-23T13:37:47.000Z",
"num_tokens": 522,
"size": 2188
} |
\chapter{Results}
% \graphicspath{{Chapter5/Figs/}{Chapter5/Figs/}}
\section{Steps Before The Analysis}
\label{steps-before-analysis}
\begin{itemize}
\item Before we analysed the data, we removed all reaction times that were larger than 2000 ms (2\% of all observations), based on the assumption that such reaction times are unlikely to reflect spontaneous responses.
\item The data of two participants were excluded from the analyses because they did not complete the whole study.
\item Functional images were re-aligned, unwarped, corrected for slice timing, and spatially smoothed using an 8 mm smoothing kernel.
\end{itemize}
\section{Main Results}
\label{main-results}
\begin{itemize}
\item First, we investigated whether X (research question).
\item We used an independent samples t-test with group as the independent variable and the depression score as the dependent variable.
\item The results showed that the difference between the groups/conditions was significant.
\item The results showed a significant correlation between\ldots
\item The results showed a significant interaction between\ldots
\item Specifically, the average depression score was lower in the treatment group (M = 3.45, SD = 2.18) compared to the placebo group (M = 4.83, SD = 2.02).
\end{itemize}
\section{Figures And Tables}
\label{figures-and-tables}
Add figures to make important results easier to interpret or to provide more information. Use tables to add extensive amounts of information that would be hard to read in text form.
\section{Goals}
\label{chapter5-goals}
\begin{itemize}
\item Did you describe everything that is needed to replicate your results?
\item Did you describe all pre-processing steps before the main analyses?
\item Did you mention to which research question each analysis belongs?
\item Did you avoid interpreting your results?
\item Did you add figures for making your key results easy to understand (or are they very simple)?
\item Did you add tables for extensive amounts of (numerical) information?
\item Does your discussion go from specific (interpretation) to broad (implications)?
\item Did you draw conclusions with reservations? (``A possible interpretation is\ldots'')
\item If you expressed a preference for one explanation over another, did you provide clear support for this preference?
\item Did you describe how your research connects to previous research?
\item Did you make clear what your research adds to existing research?
\item Did you describe how your research advances our understanding or how it may inspire future applications?
\item Did you clearly admit limitations before qualifying them?
\item Did you remind the reader of the value/implications of your research at the end?
\item Did you include some pointers for future research? (optional)
\end{itemize}
\chapter*{\Large Acknowledgement}
% !TeX spellcheck = en_GB
\documentclass[fleqn,12pt]{wlscirep}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{enumitem}
% UNCOMMENT FOR HANDIN (AUTOMATICALLY COMPILES NUMBERED PDF FILES FROM GRAPHICS FOLDER)
\write18{cd figures && latexmk -pdf}
% CUSTOM SETUP
\usepackage{float}
\usepackage[format=hang]{caption}
\usepackage[format=hang]{subcaption}
\graphicspath{{../graphics/}}
\usepackage{url}
\usepackage{xspace}
\newcommand{\suppi}{SI\xspace}
\hyphenation{seq-item seq-items}
\newtheorem{definition}{Definition}
\usepackage{array}
\newcommand{\PreserveBackslash}[1]{\let\temp=\\#1\let\\=\temp}
\newcolumntype{C}[1]{>{\PreserveBackslash\centering}p{#1}}
\newcolumntype{R}[1]{>{\PreserveBackslash\raggedleft}p{#1}}
\newcolumntype{L}[1]{>{\PreserveBackslash\raggedright}p{#1}}
\title{Complex Societies and the Growth of the Law}
\author[1,2,5,*]{Daniel Martin Katz}
\author[3]{Corinna Coupette}
\author[4]{Janis Beckedorf}
\author[2,5]{Dirk Hartung}
\affil[1]{Illinois Tech -- Chicago Kent College of Law}
\affil[2]{CodeX -- The Stanford Center for Legal Informatics}
\affil[3]{Max Planck Institute for Informatics}
\affil[4]{Faculty of Law, Ruprecht-Karls-Universit\"at Heidelberg}
\affil[5]{Bucerius Law School}
\affil[*]{[email protected]}
%\affil[+]{these authors contributed equally to this work}
\keywords{Legal Complexity, Citation Network, Evolution of Law, Network Science, Social Physics}
\begin{abstract}
While many informal factors influence how people interact, modern societies rely upon law as a primary mechanism to formally control human behaviour.
How legal rules impact societal development depends on the interplay between two types of actors:
the people who create the rules and the people to whom the rules potentially apply.
We hypothesise that an increasingly diverse and interconnected society might create increasingly diverse and interconnected rules,
and assert that legal networks provide a useful lens through which to observe the interaction between law and society.
To evaluate these propositions, we present a novel and generalizable model of statutory materials as multidimensional, time-evolving document networks.
Applying this model to the federal legislation of the United States and Germany, we find impressive expansion in the size and complexity of laws over the past two and a half decades.
We investigate the sources of this development using methods from network science and natural language processing.
To allow for cross-country comparisons over time,
based on the explicit cross-references between legal rules,
we algorithmically reorganise the legislative materials of the United States and Germany into cluster families that reflect legal topics.
This reorganisation reveals that the main driver behind the growth of the law in both jurisdictions is the expansion of the welfare state, backed by an expansion of the tax state.
Hence, our findings highlight the power of document network analysis for understanding the evolution of law and its relationship with society.
\end{abstract}
\begin{document}
\flushbottom
\maketitle
\thispagestyle{empty}
\input{text/introduction}
\input{text/results}
\input{text/discussion}
\input{text/methods}
\subsection*{Code availability}
The code used in this study is available on GitHub in the following repositories:
\begin{itemize}[label=--]
\item Paper: \url{https://github.com/QuantLaw/Complex-Societies-and-Growth}\\
DOI of publication release: \textcolor{red}{TBD}
\item Data preprocessing: \url{https://github.com/QuantLaw/legal-data-preprocessing}\\
DOI of publication release: \textcolor{red}{TBD}
\item Clustering: \url{https://github.com/QuantLaw/legal-data-clustering}\\
DOI of publication release: \textcolor{red}{TBD}
\end{itemize}
\subsection*{Data availability}
For the United States, the raw input data used in this study is publicly available from the Annual Historical Archives published by the Office of the Law Revision Counsel of the U.S. House of Representatives,
and is also available from the authors upon reasonable request.
For Germany, the raw input data used in this study was obtained from \emph{juris GmbH} but restrictions apply to the availability of this data,
which was used under license for the current study,
and so is not publicly available.
For details, see Section~1.2 of the SI.
The preprocessed data used in this study (for both the United States and Germany) is archived under the following DOI:
\textcolor{red}{TBD}
\bibliography{bibliography}
\section*{Acknowledgements}
The research was supported by a grant from the Interdisciplinary Legal Research Program (ILRP) at Bucerius Law School and benefited from discussions with its director Hans-Bernd Schäfer.
\section*{Author contributions statement}
All authors conceived of the research project.
C.C. and J.B. performed the computational analysis in consultation with D.M.K. and D.H.
All authors drafted the manuscript and it was revised and reviewed by all authors.
All authors gave final approval for publication and agree to be held accountable for the work performed therein.
\section*{Additional information}
\subsection*{Competing Interests}
D.M.K. is affiliated with the start-up LexPredict which is now part of Elevate Services.
The other authors declare no competing interests.
\clearpage
\section*{Tables and figures}
\input{text/figures.tex}
\end{document}
%- LaTeX source file
%- section3.tex ~~
%
% This is the third section of the paper.
%
% ~~ last updated 22 Sep 2019
\jhanote{Need to set the tense correctly: refer to Titan in the past}
\jhanote{consistency between Moab and MOAB}
\jhanote{backfill versus backfill mode vs ``as a backfill job''}
%%%%%%%%%%%%%%%%%%%%%%
In 2013, the BigPanDA team began work on behalf of the ATLAS Collaboration to
incorporate the Titan supercomputer at the Oak Ridge Leadership Computing
Facility (OLCF) as a grid site for the Worldwide LHC Compute Grid, and in 2019,
Titan was decommissioned. This section describes the deployment of the PanDA
WMS on Titan as well as the relevant policies at OLCF and PanDA's integration
with the Moab workload manager \cite{moab}. PanDA operated on Titan in two very
different modes of operation, colloquially termed ``batch queue mode'' and
``backfill mode''. In ``batch queue mode'', PanDA interacted with Titan's Moab
scheduler in a static, non-adaptive manner to execute the workload. In
``backfill mode'', PanDA dynamically shaped the size of the workload deployed
on Titan to consume resources opportunistically that would otherwise have gone
unused.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{OLCF Policies}
\label{subsec:olcf-policies}
Oak Ridge Leadership Computing Facility (OLCF) is a United States Department of
Energy (DOE) ``leadership computing facility'' with a mission to enable
applications of size and complexity that cannot be readily performed at smaller
facilities. The OLCF has a mandate that a large portion of its flagship
machines' usage must come from large, leadership-class jobs, which are also
known as ``capability jobs''. Thus, the OLCF prioritizes the scheduling of
capability jobs.
To ensure that the OLCF user programs achieved this mission with Titan, OLCF
policies strongly encouraged users to run jobs that were as large as their
codes would allow. There were three queues on Titan, which were ``batch'',
``debug'', and ``killable''. OLCF used batch queue policy on the Titan systems
to support the delivery of large capability-class jobs~\cite{titan_sched}. OLCF
deployed Adaptive Computing's Moab resource manager, which supported features
that allowed it to integrate directly with Cray's Application Level Placement
Scheduler (ALPS), a lower-level resource manager unique to Cray HPC
clusters~\cite{osti_1086656}. Moab scheduled jobs in the batch queue in
priority order, and the highest priority jobs were executed depending on the
availability of the required resources. The OLCF therefore implemented queue
policies which awarded the highest priority to the largest capability jobs,
rather than just the oldest jobs in the batch queue.
The highest priority jobs were the ones next in line to run, unless the job did
not fit, which could happen, for example, when the requested resources were not
available. In such a case, a resource reservation was made for the job in the
future when availability could be assured; those nodes were exclusively
reserved for that job. When the job finished, the reservation was destroyed,
which released those nodes so that they were available for the next job.
Reservations were simply the mechanism by which a job received exclusive access
to the resources necessary to run the job. However, if policy desired that a
priority reservation be made for more than one job, then a system administrator
could specify the creation of reservations for the top N priority jobs in the
queue by increasing the keyword RESERVATIONDEPTH to be greater than one. The
priority reservation(s) would be re-evaluated (and destroyed or re-created)
during every scheduling iteration in order to take advantage of updated
information.
Of course, reservations seldom filled the nodes on Titan exactly, and Moab
would schedule smaller jobs to run in the vacancies. For example, if a large
capability job was due to start in two hours, Moab would work backwards to fill
in, or ``backfill'', the vacant nodes with the highest priority jobs in the
queue capable of finishing in less than two hours. The situation in which there
are vacant nodes for some amount of time was called colloquially ``backfill
opportunity'' by the BigPanDA team.
Thus, after creating reservations for the top priority jobs, Moab would switch
to ``backfill mode'' and continue down the job queue until it found a job that
would be able to start and would not disturb the existing priority
reservations, as specified by the value of RESERVATIONDEPTH. As time continued
and the scheduling algorithm continued to iterate, Moab would evaluate the
queue to find the highest priority jobs. If the highest priority job found
would not fit within the available resources, its reservation was updated but
left where it was. At this point, Moab would begin to try to backfill vacancies
by searching for a job in the queue that would be able to start and complete
without disturbing the priority reservations. If such jobs were started, they
were said to ``run within backfill''. If no such backfill jobs were present in
the queue, then available compute resources remained unutilized.
It is important to note, however, that there was no dedicated ``backfill
queue'' for Titan; instead, smaller jobs from each queue were scheduled into
spaces that could not be used by larger jobs. Of the three queues introduced
above, the batch queue was the default queue for submitted jobs, and this
paper is concerned only with the batch queue.
Jobs submitted to the batch queue were grouped into five ``bins'' according to
the number of requested nodes, and each bin had a maximum wall time. The
definitions and rules for each bin are shown in Table~\ref{tab:olcf-bins}. Jobs
that requested fewer nodes had correspondingly shorter maximum wall times. Nodes
were assigned exclusively to one job at a time. Because Titan was a leadership
class machine and priority was a function of wait time, the batch scheduler
awarded aging boosts to jobs in bins 1 and 2 in order to prioritize larger jobs
over smaller ones. Once jobs in the batch queue began to run, however, they
were not killed when new jobs arrived, regardless of their priority. Sometimes,
jobs small enough to use currently idle resources on Titan were scheduled to
run immediately. Finally, ``Titan core hours'' were the billable units used at
OLCF; they converted at a rate of 30 Titan core hours per 1 node hour.
%%%
% OLCF BINS TABLE
%%%
% For tables use
\begin{table}
% table caption is above the table
\caption{OLCF policies sort jobs into numbered bins based on the requested
number of nodes, and each bin has its own set of constraints.}
\label{tab:olcf-bins} % Give a unique label
% For LaTeX tables use
\begin{tabular}{crrr}
\hline\noalign{\smallskip}
Bin & Requested Nodes & Maximum Wall Time & Aging Boost \\
\noalign{\smallskip}\hline\noalign{\smallskip}
1 & 11,250 - 18,688 & 24 hours & 15 days \\
2 & 3,750 - 11,249 & 24 hours & 5 days \\
3 & 313 - 3,749 & 12 hours & 0 \\
4 & 126 - 312 & 6 hours & 0 \\
5 & 1 - 125 & 2 hours & 0 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
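The bin boundaries translate directly into limits on what a submitted job may
request. As a purely illustrative sketch (not part of the PanDA software), the
mapping in Table~\ref{tab:olcf-bins} and the 30:1 core-hour conversion can be
encoded as follows:
\begin{verbatim}
# Bin limits from the table: (min nodes, max nodes, bin number, max hours).
OLCF_BINS = [
    (11250, 18688, 1, 24),
    ( 3750, 11249, 2, 24),
    (  313,  3749, 3, 12),
    (  126,   312, 4,  6),
    (    1,   125, 5,  2),
]

def titan_bin(requested_nodes):
    """Return (bin number, maximum wall time in hours) for a node request."""
    for low, high, bin_id, max_hours in OLCF_BINS:
        if low <= requested_nodes <= high:
            return bin_id, max_hours
    raise ValueError("request exceeds Titan's 18,688 nodes")

def titan_core_hours(nodes, hours):
    """Billable units: 30 Titan core hours per node hour."""
    return 30 * nodes * hours

assert titan_bin(300) == (4, 6)           # a 300-node job falls in bin 4
assert titan_core_hours(300, 6) == 54000  # and bills at most 54,000 core hours
\end{verbatim}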
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{PanDA Integration at OLCF}
\label{subsec:panda-at-olcf}
In 2013, the BigPanDA team began working on behalf of the ATLAS Collaboration
to incorporate the Titan supercomputer at OLCF as a grid site for the Worldwide
LHC Compute Grid. The team operated under several different project
identifiers, including CSC108, HEP110, and HEP113. The HEP110 and HEP113
projects represented traditional ASCR Leadership Computing Challenge (ALCC)
allocations, but the CSC108 project was a Director's Discretionary (DD) project
which operated exclusively in what the team colloquially referred to as
``backfill mode'', as outlined in Section~\ref{subsec:olcf-policies}.
PanDA is a pilot-based WMS. In a distributed computing setting, pilot jobs are
submitted to batch queues on compute sites, and then they wait for resources to
become available. When a pilot job starts on a worker node, it contacts the
PanDA Server to retrieve an actual payload, and then, after necessary
preparations, it executes the payload as a sub-process. The PanDA pilot is also
responsible for a job's data management on a worker node and is capable of
performing data stage-in and stage-out operations.
Taking advantage of PanDA's modular and extensible design, the BigPanDA team
enhanced the pilot code and logic with tools and methods relevant for work on
HPCs. The pilots ran on Titan's data transfer nodes (DTNs), which allowed them
to communicate with the ATLAS PanDA Server, since the DTNs had good (10 GB/s)
connectivity to the Internet. The DTNs and the worker nodes on Titan used a
shared filesystem which enabled the pilot to stage-in the input files required
by the payload and to stage-out the output files produced at the end of the
job. In other words, the pilot acted as a site edge service for Titan. Pilots
were launched by a daemon-like script which ran in user space.
The ATLAS Tier 1 computing center at Brookhaven National Laboratory was used
for data transfer to and from Titan, but in principle any ATLAS site could have
been chosen. Figure~\ref{fig:implementation} shows a schematic view of PanDA's
interface with Titan. The pilots submitted ATLAS payloads to the worker nodes
using the local batch system via the Simple API for Grid Applications (SAGA)
interface \cite{saga_cmst}. SAGA was also used for monitoring and
management of PanDA jobs running on Titan's worker nodes.
One interesting feature of the deployment was its ability to collect and use
information about Titan's status, such as its free worker nodes, in real time.
The pilot could query the Moab scheduler about currently unused nodes on Titan
by using the ``showbf'' command-line tool, and the pilot could check to see if
the free resources' availability and size represented a suitable ``backfill
opportunity'' for PanDA jobs. The pilot transmitted this information to the
PanDA Server, which responded by sending the pilot a list of jobs intended for
submission to Titan. Then, based on the job information, the pilot transferred
the necessary input data from the ATLAS Grid, and once all the necessary data
was transferred, the pilot submitted jobs to Titan using an MPI wrapper.
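As an illustration of this decision logic (a simplified sketch, not the actual
pilot implementation), a script might query \texttt{showbf} and test the
reported idle resources against minimum size and duration thresholds. The
output columns assumed here and the threshold values are illustrative only:
\begin{verbatim}
import subprocess

MIN_NODES = 15          # illustrative threshold: smallest slot worth filling
MIN_SECONDS = 2 * 3600  # illustrative threshold: shortest useful backfill window

def parse_duration(text):
    """Convert an HH:MM:SS style duration (or 'INFINITY') to seconds."""
    if text.upper().startswith("INF"):
        return float("inf")
    seconds = 0
    for part in text.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

def backfill_opportunity():
    """Ask Moab (via showbf) for idle resources and return the largest
    (nodes, seconds) slot found, or None.  The column positions assumed
    here vary with the Moab version and site configuration."""
    try:
        output = subprocess.check_output(["showbf"], text=True)
    except (OSError, subprocess.CalledProcessError):
        return None
    best = None
    for line in output.splitlines():
        fields = line.split()
        if len(fields) < 4 or not fields[2].isdigit():
            continue  # skip headers and malformed rows
        nodes, duration = int(fields[2]), parse_duration(fields[3])
        if best is None or nodes > best[0]:
            best = (nodes, duration)
    return best

if __name__ == "__main__":
    slot = backfill_opportunity()
    if slot and slot[0] >= MIN_NODES and slot[1] >= MIN_SECONDS:
        print(f"backfill opportunity: {slot[0]} nodes for up to {slot[1]} s")
\end{verbatim}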
The MPI wrappers were Python scripts that were typically workload-specific,
since they were responsible for setup of the workload environment,
organization of per-rank worker directories, rank-specific data management,
optional input parameter modification, and cleanup on exit. When activated on
worker nodes, each copy of the wrapper script would, after completing the
necessary preparations, start the actual payload as a sub-process and wait
until its completion. This approach allowed for flexible execution of a wide
spectrum of grid-centric workloads on parallel computational platforms such as
Titan \cite{htchpc2017converging}.
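A minimal sketch of this wrapper pattern (illustrative only, not the actual
ATLAS wrapper, and assuming a hypothetical workload launch script
\texttt{run\_payload.sh}) is:
\begin{verbatim}
# Each MPI rank prepares a private working directory, runs one payload
# instance as a sub-process, and reports its exit code back to rank 0.
import os
import subprocess
import sys

from mpi4py import MPI

PAYLOAD = ["./run_payload.sh"]  # assumed workload-specific launch script

def main():
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Per-rank working directory so ranks do not overwrite each other's files.
    workdir = os.path.join(os.getcwd(), f"rank_{rank:05d}")
    os.makedirs(workdir, exist_ok=True)

    # Start the actual payload as a sub-process and wait for it to finish.
    result = subprocess.run(PAYLOAD, cwd=workdir)

    # Gather exit codes on rank 0 so the wrapper can report overall success.
    codes = comm.gather(result.returncode, root=0)
    if rank == 0:
        failed = sum(1 for c in codes if c != 0)
        print(f"{len(codes) - failed} ranks succeeded, {failed} failed")
        sys.exit(1 if failed else 0)

if __name__ == "__main__":
    main()
\end{verbatim}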
Because ATLAS detector simulations were executed on Titan as discrete jobs
submitted via MPI wrappers, parallel performance could scale nearly linearly,
potentially limited only by shared filesystem performance. Up to 20 pilots were
deployed at a time, distributed evenly over 4 DTNs. Each pilot controlled from
15 to 350 ATLAS simulation ranks per submission. This configuration was able to
utilize up to 112,000 cores simultaneously on Titan.
Figure~\ref{fig:monthly-consumption} shows Titan core hours consumed per month
by the ATLAS Geant4 simulations from January 2016 through December 2018. During
this time, CSC108 always ran in pure backfill mode with a custom priority that
was guaranteed to make its jobs the lowest priority on Titan, and the project
also had no actual allocation. Despite these obstacles, CSC108 still consumed
more than 400 million Titan core hours during that three year time period,
peaking at more than 24 million Titan core hours during the month of October
2018. The drop in consumption that occurred in
Figure~\ref{fig:monthly-consumption} during the months of July, August, and
September 2018 was due to the end of a major ATLAS simulation campaign in June
2018; there were simply no simulation jobs to run.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{PanDA Server at OLCF}
\label{subsec:panda_instance}
The PanDA Server used to manage ATLAS production workloads was the dedicated
instance at CERN in Geneva, Switzerland. Thus, it was necessary to deploy
another instance of the PanDA Server elsewhere in order to manage non-ATLAS
workloads on Titan. To this end, in March 2017, the BigPanDA team implemented a
new PanDA Server instance within the OLCF by using Red Hat OpenShift Origin
\cite{RH_OpenShift}, a powerful container cluster management and orchestration
system.
By running PanDA Server on OLCF premises with Red Hat OpenShift built on
Kubernetes \cite{Kubernetes}, the OLCF provided a container orchestration
service that allowed the BigPanDA team as users to schedule and run HPC
middleware service containers while maintaining a high level of support for
many diverse service workloads. The containers had direct access to all shared
resources at the OLCF, including parallel filesystems and batch schedulers.
This PanDA Server instance was used to implement the demonstrations for
non-ATLAS workloads that are detailed in
Section~\ref{sec:beyond-atlas-and-olcf}.
% For two-column wide figures use
\begin{figure*}
% Use the relevant command to insert your figure file.
% For example, with the graphicx package use
\includegraphics[width=0.75\textwidth]{images/Figure_5.png}
% figure caption is below the figure
\caption{Schematic view of the PanDA WMS integration with the Titan supercomputer at OLCF.}
\label{fig:implementation}
\end{figure*}
% For two-column wide figures use
\begin{figure*}
% Use the relevant command to insert your figure file.
% For example, with the graphicx package use
\includegraphics[width=0.75\textwidth]{images/monthly-consumption.png}
% figure caption is below the figure
\caption{Monthly consumption of Titan resources, in Titan core hours, by the
two methods used by PanDA.}
\label{fig:monthly-consumption}
\end{figure*}
%- vim:set syntax=tex:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% "ModernCV" CV and Cover Letter
% LaTeX Template
% Version 1.1 (9/12/12)
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% Original author:
% Xavier Danaux ([email protected])
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
% Important note:
% This template requires the moderncv.cls and .sty files to be in the same
% directory as this .tex file. These files provide the resume style and themes
% used for structuring the document.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%----------------------------------------------------------------------------------------
% PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------
\documentclass[11pt,a4paper,sans]{moderncv} % Font sizes: 10, 11, or 12; paper sizes: a4paper, letterpaper, a5paper, legalpaper, executivepaper or landscape; font families: sans or roman
% \documentclass{ctexart}
\usepackage{xeCJK}
% \usepackage{fontspec}
% \usepackage{fontspec}
% \usepackage{xunicode}
% \usepackage{xeCJK}
% \setmainfont{Chancery.ttf}
% \setsansfont{Chancery.ttf}
% \setmonofont{Chancery.ttf}
\setCJKmainfont{SimKai.ttf}
\setCJKsansfont{SimKai.ttf}
% \setCJKmonofont{STSong}
% %\setCJKmathfont{}
% \usepackage[scale=0.85, top=15mm, bottom=13mm]{geometry}
\usepackage[scale=0.85]{geometry}
\moderncvstyle{classic} % CV theme - options include: 'casual' (default), 'classic', 'oldstyle' and 'banking'
\moderncvcolor{blue} % CV color - options include: 'blue' (default), 'orange', 'green', 'red', 'purple', 'grey' and 'black'
\usepackage{lipsum} % Used for inserting dummy 'Lorem ipsum' text into the template
% \usepackage[scale=0.85]{geometry} % Reduce document margins
%\setlength{\hintscolumnwidth}{3cm} % Uncomment to change the width of the dates column
%\setlength{\makecvtitlenamewidth}{10cm} % For the 'classic' style, uncomment to adjust the width of the space allocated to your name
%----------------------------------------------------------------------------------------
% NAME AND CONTACT INFORMATION SECTION
%----------------------------------------------------------------------------------------
\firstname{Chongming} % Your first name
\familyname{Gao 高崇铭} % Your last name
% All information in this block is optional, comment out any lines you don't need
%\title{Curriculum Vitae}
% \address{2 rue Fran\c cois Verny}{Chengdu, China 29806}
% \address{UESTC, Chengdu, China}
% \mobile{(+33) 7 86 33 40 99}
%\phone{(000) 111 1112}
%\fax{(000) 111 1113}
% \usepackage{fontawesome}
\email{[email protected]}
\homepage{https://chongminggao.me}%{staff.org.edu/$f$jsmith}% The first argument is %the url for the clickable link, the second argument is the url displayed in the %template - this allows special characters to be displayed such as the tilde in this %example
\mobile{(+86) 18064850580}
\extrainfo{WeChat: 619082231}
%\photo[70pt][0.4pt]{picture} % The first bracket is the picture height, the second is %the thickness of the frame around the picture (0pt for no frame)
% \quote{Seeking a PhD Placement}
%----------------------------------------------------------------------------------------
% \moderncvhead{1}
\newfontfamily\chancery[Ligatures=TeX]{Chancery.ttf}
\renewcommand*\namefont{\chancery\fontsize{30}{48}\selectfont}
% \renewcommand*\namefont{\fontfamily{pzc}\fontsize{40}{48}\selectfont}
% \renewcommand*\titlefont{\fontfamily{pzc}\fontsize{20}{24}\selectfont}
% \renewcommand*\addressfont{\fontfamily{pzc}\selectfont}
% \renewcommand*\sectionfont{\fontfamily{pzc}\fontsize{20}{24}\selectfont}
% \renewcommand*{\addressfont}{\fontfamily{ppl}\fontsize{14}{18}\selectfont}
% \renewcommand*{\addressstyle}[1]{{\addressfont\textcolor{color2}{#1}}}
% \definecolor{blue}{rgb}{0.11764706, 0.68235294, 0.85882353} % skyblue
\definecolor{blue}{rgb}{0.22,0.45,0.70} % skyblue
% \definecolor{blue}{rgb}{0.19607843, 0.45098039, 0.8627451}
\newcommand{\blue}[1]{{\textcolor{blue}{{} #1}}}
\definecolor{gray}{rgb}{0.2,0.2,0.2} % skyblue
\newcommand{\gray}[1]{{\textcolor{gray}{{} #1}}}
% \definecolor{red}{rgb}{1,0,0}
% \newcommand{\red}[1]{{\textcolor{red}{{}#1}}}
\definecolor{red}{HTML}{f74545} % red
\newcommand{\red}[1]{{\textcolor{red}{{} #1}}}
\begin{document}
\makecvtitle % Print the CV title
%----------------------------------------------------------------------------------------
% EDUCATION SECTION
%----------------------------------------------------------------------------------------
% \vspace{-7mm}
\section{Research Interests}
\cventry{}{Data Mining, Information Retrieval and Artificial Intelligence}{}{}{with a focus on applying methods such as graph analysis and causal inference to a wide range of problems, including spatio-temporal data analysis and conversational recommender systems.}{}
\section{Experiences}
% \cventry{After 2020\\ Ph.D.}{The University of Queensland, Australia}{}{}{}{School of Information Technology and Electrical Engineering (ITEE)\vspace{1mm}\\
% Supervisor:\href{https://sites.google.com/site/dbhongzhi/}{\blue{Hongzhi Yin}}.\vspace{1mm}\\
% Research topics: Recommender System and NLP.}
% \vspace{1mm}
\cventry{2020.07--Now\\ Research Intern}{Kuaishou}{}{}{}{Research intern under the supervision of \href{https://scholar.google.com/citations?user=qexdxuEAAAAJ&hl=en}{\blue{Wenqiang Lei}}, Peng Jiang, Biao Li, and \href{http://staff.ustc.edu.cn/~hexn/}{\blue{Xiangnan He}}.\vspace{1mm}\\
Research topics: interactive and conversational recommender systems.}
\vspace{1mm}
\cventry{2019.03--2019.09\\ Research Intern}{Alibaba AI Labs}{}{}{}{Research intern under the supervision of Hao Wang and \href{http://air.tsinghua.edu.cn/EN/team-detail.html?id=35&classid=8}{\blue{Zaiqing Nie}}.\vspace{1mm}\\
Internship achievement: solved shared account-aware recommendation problem in Tmall Genie.}
\vspace{1mm}
\section{Education}
% \cventry{2016.09--2019.03}{University of Electronic Science and Technology of China}{}{}{}
% {M.Eng. in Department of Computer Science and Technology. Supervisor:\href{http://dm.uestc.edu.cn}{\blue{Junming Shao}}.\\}
\cventry{2020.09--Now\\ D.Eng.}{University of Science and Technology of China (USTC)}{}{}{}
{School of Information Science and Technology. \vspace{1mm}\\
Advisor:\href{http://staff.ustc.edu.cn/~hexn/}{\blue{Xiangnan He}}.\vspace{1mm}\\
Research topics: conversational recommender systems; causal recommendation.}
\vspace{1mm}
\cventry{2016.09--2019.06\\ M.Eng.}{University of Electronic Science and Technology of China (UESTC)}{}{}{}
{Department of Computer Science and Technology. \vspace{1mm}\\
Supervisor:\href{http://dm.uestc.edu.cn}{\blue{Junming Shao}}.\vspace{1mm}\\
Thesis: Trajectory Semantic Representation and Location Recommender Systems.}
\vspace{1mm}
% \cventry{2016.09--2019.03}{University of Electronic Science and Technology of China}{}{}{}
% {M.Eng. in Department of Computer Science and Technology.\vspace{1mm}\\
% Supervisor:\href{http://dm.uestc.edu.cn}{\blue{Junming Shao}}.\\}
\cventry{2012.09--2016.06 \\ B.Eng.}{University of Electronic Science and Technology of China (UESTC)}{}{}{}
{\textit{Yingcai Honors College} (Elite program of UESTC). GPA: 3.81, Ranking: 9/72\\
Supervisor:\href{http://dm.uestc.edu.cn}{\blue{Junming Shao}}, since the junior year.\vspace{1mm}\\
Thesis: Synchronization-inspired Co-clustering and Its Application to Gene Expression Data.} % Arguments not required can be left empty
\vspace{1mm}
% \cvitemwithcomment{Arabic}{Native Speaker}{asdf}
%----------------------------------------------------------------------------------------
% WORK EXPERIENCE SECTION
%----------------------------------------------------------------------------------------
\section{Tutorial }
\cventry{\red{\fbox{\textbf{Tutorial}}}\\(3 hours)}{\small RecSys 2021 Tutorial on Conversational Recommendation: Formulation, Methods, and Evaluation}{}{}{}{\small Wenqiang Lei, \underline{Chongming Gao}, Maarten de Rijke \vspace{1mm}\\
\emph{RecSys 2021 Tutorial}.}
\vspace{1mm}
\section{Research Publications }
% \cvitem{Accepted}{\small Junming Shao, Chongming Gao, Wei Zeng, Jingkuan Song, Qinli Yang, Synchronization-inspired Co-clustering and Its Application to Gene Expression Data.}
% \cventry{ICDM'20\\Under Review }{*\small A paper about identifying users behind shared account}{}{}{}{\small \underline{Chongming Gao}, Hao Wang, Junliang Yu, Yong Cao, Zaiqing Nie, Hongzhi Yin\vspace{1mm}\\
% Submitted to \emph{IEEE International Conference on Data Mining} (\textbf{ICDM'20})\vspace{1mm}\\
% (CORE2020 Rank A*, CCF B), still under review.}
\cventry{AI Open\\\red{\fbox{\textbf{Survey}}}}{\small Advances and Challenges in Conversational Recommender Systems: A Survey}{}{}{}{\small \underline{Chongming Gao}, Wenqiang Lei, Xiangnan He, Maarten de Rijke, Tat-Seng Chua\vspace{1mm}\\
\emph{AI Open. Vol. 2. (2021) 100-126}.}
\vspace{1mm}
\cventry{IS'20}{\small Semantic Trajectory Representation and Retrieval via Hierarchical Embedding}{}{}{}{\small \underline{Chongming Gao}, Chen Huang, Zhong Zhang, Qinli Yang, Junming Shao\vspace{1mm}\\
\emph{Information Sciences} (\textbf{IS'20})\vspace{1mm}\\
(SJR Q1, H index: 154, CiteScore: 6.90, Impact Factor: 5.524).}
\vspace{1mm}
\cventry{EMNLP'20}{\small Revisiting Representation Degeneration Problem in Language Modeling}{}{}{}{\small Zhong Zhang, \underline{Chongming Gao}, Cong Xu, Rui Miao, Qinli Yang, Junming Shao\vspace{1mm}\\
\emph{Findings of EMNLP, 2020.}\vspace{1mm}\\
(CORE2020 Rank: A, CCF B).}
\vspace{1mm}
\cventry{DASFAA'19}{\small Towards Robust Arbitrarily Oriented Subspace Clustering \red{
\fbox{Best Paper Award}}}{}{}{}{\small Zhong Zhang, \underline{Chongming Gao}, Chongzhi Liu, Qinli Yang, Junming Shao\vspace{1mm}\\
\emph{International Conference on Database Systems for Advanced Applications} (\textbf{DASFAA'19}), \vspace{1mm}\\
(CORE2020 Rank B, CCF B)}
\vspace{1mm}
\cventry{DASFAA'19}{\small \mbox{BLOMA: Explain Collaborative Filtering via Boosted Local Rank-One Matrix Approximation}}{}{}{}{\small \underline{Chongming Gao}, Shuai Yuan, Zhong Zhang, Hongzhi Yin, Junming Shao\vspace{1mm}\\
\emph{International Conference on Database Systems for Advanced Applications} (\textbf{DASFAA'19}), \vspace{1mm}\\
(CORE2020 Rank B, CCF B)}
\vspace{1mm}
\cventry{KBS'19}{\small Semantic Trajectory Compression via Multi-resolution Synchronization-based Clustering}{}{}{}{\small \underline{Chongming Gao}, Yi Zhao, Ruizhi Wu, Qinli Yang, Junming Shao\vspace{1mm}\\
\emph{Knowledge-Based Systems} (\textbf{KBS'19}), \vspace{1mm}\\
(SJR Q1, H Index: 94, CiteScore: 7.01, Impact Factor: 5.101)}
\vspace{1mm}
\cventry{ICDM'19}{\small Online Budgeted Least Squares with Unlabeled Data}{}{}{}{\small Chen Huang, Peiyan Li, \underline{Chongming Gao}, Qinli Yang, Junming Shao\vspace{1mm}\\
\emph{IEEE International Conference on Data Mining} (\textbf{ICDM'19}), \vspace{1mm}\\
(CORE2020 Rank: A*, CCF B).}
\vspace{1mm}
\cventry{ICDM'19}{\small Generating Reliable Friends via Adversarial Training to Improve Social Recommendation}{}{}{}{\small Junliang Yu, Min Gao, Hongzhi Yin, Jundong Li, \underline{Chongming Gao}, Qinyong Wang\vspace{1mm}\\
\emph{IEEE International Conference on Data Mining} (\textbf{ICDM'19}), \vspace{1mm}\\
(CORE2020 Rank: A*, CCF B).}
\vspace{1mm}
\cventry{DASFAA'19}{\small SemiSync: Semi-supervised Clustering by Synchronization}{}{}{}{\small Zhong Zhang, DiDi Kang, \underline{Chongming Gao}, Junming Shao\vspace{1mm}\\
\emph{International Conference on Database Systems for Advanced Applications} (\textbf{DASFAA'19}), \vspace{1mm}\\
(CORE2020 Rank B, CCF B)}
\vspace{1mm}
\cventry{ICDM'17}{\small Synchronization-inspired Co-clustering and Its Application to Gene Expression Data}{}{}{}{\small Junming Shao, \underline{Chongming Gao}, Wei Zeng, Jingkuan Song, Qinli Yang\vspace{1mm}\\
\emph{IEEE International Conference on Data Mining} (\textbf{ICDM'17}), \vspace{1mm}\\
(CORE2020 Rank: A*, CCF B).}
\vspace{1mm}
% \vspace{1mm}
\section{Awards}
\cvitem{2019}{\small One of our papers won the \textbf{Best Paper Award} of DASFAA'19 (CORE2020 Rank B, CCF B).}
\cvitem{2019}{\small \emph{Outstanding Master's Thesis Award} of UESTC. (86/3744)}
\cvitem{2019}{\small \emph{Outstanding Graduate Student Award} of UESTC.}
\cvitem{2016}{\small \textbf{Rank No.1} in Undergraduate Thesis Defense of UESTC. \emph{Outstanding Undergraduate Thesis}.}
\cvitem{2016}{\small \emph{Outstanding Undergraduate Student Award} of UESTC. Top 10 in Yingcai Honors College.}
\cvitem{2014}{\small \emph{Meritorious Winner} for CSIAM's China Undergraduate Mathematical Contest in Modeling.}
\cvitem{2013}{\small \emph{First Prize} in Sichuan Contest District in China Undergraduate Mathematical Contest in Modeling.}
\cvitem{2012}{\small \emph{Tang Lixin Scholarship} of UESTC. Top 60 out of over 25,000 undergraduate and graduate students.}
\cvitem{2012}{\small \textbf{Rank No.1} in entering UESTC from Yunnan Province (Score: 614).}
%----------------------------------------------------------------------------------------
% COMPUTER SKILLS SECTION
%----------------------------------------------------------------------------------------
\section{Computer Skills}
% \cvitem{Basic}{VHDL, UML}
\cvitem{Programming}{Python, \textsc{Matlab}, \textsc{Java}, C/C++, \LaTeX, HTML5+CSS3+Javascript}
\cvitem{Prototyping}{Adobe Illustrator, Adobe Photoshop}
%----------------------------------------------------------------------------------------
% LANGUAGES SECTION
%----------------------------------------------------------------------------------------
% \section{Languages}
% \begin{small}
% \cvitemwithcomment{Arabic}{Native Speaker}{}
% \cvitemwithcomment{French}{Near Native}{excellent command}
% \cvitemwithcomment{English}{Near Native}{excellent command}
% \cvitemwithcomment{Chinese}{Intermediate}{good working knowledge}
% \cvitemwithcomment{Japanese}{Intermediate}{good working knowledge}
% \end{small}
%----------------------------------------------------------------------------------------
% INTERESTS SECTION
%----------------------------------------------------------------------------------------
%\bigskip
\section{Interests}
\renewcommand{\listitemsymbol}{-~} % Changes the symbol used for lists
\cvitem{}{Reading, Badminton, Jogging, Diving, Climbing, Photography.}
%\cvlistdoubleitem{Robotics}{}
%----------------------------------------------------------------------------------------
% COVER LETTER
%----------------------------------------------------------------------------------------
% To remove the cover letter, comment out this entire block
%\clearpage
%\recipient{HR Departmnet}{Corporation\\123 Pleasant Lane\\12345 City, State} % Letter recipient
%\date{\today} % Letter date
%\opening{Dear Sir or Madam,} % Opening greeting
%\closing{Sincerely yours,} % Closing phrase
%\enclosure[Attached]{curriculum vit\ae{}} % List of enclosed documents
%\makelettertitle % Print letter title
%\lipsum[1-3] % Dummy text
%\makeletterclosing % Print letter signature
%----------------------------------------------------------------------------------------
\end{document}
\chapter{Expected Results and Cross Checks}
\section{Ensemble Tests}
\label{sec:ensembles}
Several ensembles of pseudo-data events were produced from the
background model to test the performance of the analysis. These
ensembles include:
\vspace{-0.05in}
\begin{myenumerate}
\item 400 fake data samples with SM single top content
($\sigma_{tb+tqb}=2.86$~pb).
\item Four sets of 100 samples each (named A, B, C and D), with
unknown (to the analyzers) single top content, but with SM ratios
between the $tb$ and $tqb$ cross sections.
\item Two sets of 200 samples with the SM ratio between the $tb$ and
$tqb$ cross sections, and with a total single top cross section of
$\sigma_{tb+tqb}=4.5$~pb and $\sigma_{tb+tqb}=4.7$~pb.
% Uncomment once the final set of results and plots are done:
%\item 6 sets of 100 experiments with unknown total cross section and
% unknown $tb$:$tqb$ ratio
\end{myenumerate}
\vspace{-0.05in}
The full analysis chain was run using each pseudo-dataset as if it
were the real dataset. The results are presented in the following
subsections.
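For illustration only, a single pseudo-dataset of this kind can be built by
Poisson-fluctuating the expected event yield of each signal and background
source; the yields in the following sketch are placeholders rather than the
values used in this analysis:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=42)

# Placeholder expected yields per source (illustrative only).
expected_yields = {"tb+tqb": 62.0, "Wjets": 430.0, "ttbar": 110.0, "QCD": 95.0}

def draw_pseudo_dataset(signal_scale=1.0):
    """Poisson-fluctuate each source; signal_scale rescales the single
    top yield (1.0 corresponds to the SM cross section of 2.86 pb)."""
    observed = {}
    for source, mu in expected_yields.items():
        if source == "tb+tqb":
            mu *= signal_scale
        observed[source] = rng.poisson(mu)
    return observed

# An ensemble of 400 SM pseudo-experiments.
ensemble = [draw_pseudo_dataset(1.0) for _ in range(400)]
\end{verbatim}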
\subsection{Ensemble Tests with SM Signal}
\label{ens_SM_sig}
Figure~\ref{SMens} shows the distribution of estimated cross sections
from one ensemble generated with the SM input signal cross sections
for the electron, muon, and combined $e$+$\mu$ channels. The ensemble contains 400
samples of both 2-jet and 3-jet events. The input cross section in
each case is 2.86~pb. The most probable value of the distribution is
2.75~pb while the mean is 3.18~pb.
\vspace{0.1in}
\begin{figure}[!h!tbp]
\includegraphics[width=0.40\textwidth]
{eps/MatrixElement/ensembles/Ensembles_Electron}
\includegraphics[width=0.40\textwidth]
{eps/MatrixElement/ensembles/Ensembles_Muon}
\includegraphics[width=0.40\textwidth]
{eps/MatrixElement/ensembles/Ensembles}
\vspace{-0.1in}
\caption[SMens]{Results of standard model ensemble test for the
electron channel (upper left plot), the muon channel (upper right
plot), and electrons and muons combined (lower plot).}
\label{SMens}
\end{figure}
\clearpage
\subsection{Ensemble Tests with Non-SM Signal}
\label{ens_nonSM_sig}
Five more ensembles were generated with a non-SM $tb$+$tqb$ cross
section, but SM $tb$:$tqb$ cross section ratio ($\sigma_s/\sigma_t =
0.44$). The results are shown in Fig.~\ref{ensembles}.
\vspace{0.1in}
\begin{figure}[!h!tbp]
\includegraphics[width=0.40\textwidth]{eps/MatrixElement/ensembles/EnsemblesA}
\includegraphics[width=0.40\textwidth]{eps/MatrixElement/ensembles/Ensembles0sig1}
\includegraphics[width=0.40\textwidth]{eps/MatrixElement/ensembles/EnsemblesC}
\includegraphics[width=0.40\textwidth]{eps/MatrixElement/ensembles/EnsemblesD}
\includegraphics[width=0.40\textwidth]{eps/MatrixElement/ensembles/Ensembles4.5.eps}
\includegraphics[width=0.40\textwidth]{eps/MatrixElement/ensembles/Ensembles4.7.eps}
\vspace{-0.1in}
\caption{Results using the ensembles with non-SM cross section but
SM $tb$:$tqb$ ratio. The upper row contains ensembles A and B, the
middle row shows ensembles C and D, the bottom row shows the results
from two ensembles with input cross sections close to our measured
cross section of 4.6~pb.}
\label{ensembles}
\end{figure}
Our measured values of the cross sections taken from the means of the
ensembles are:
\vspace{-0.05in}
\begin{myitemize}
\item A: 3.00 $\times$ SM \hspace{0.2in}(true value = 2.76 $\times$ SM)
\item B: 0.20 $\times$ SM \hspace{0.2in}(true value = 0)
\item C: 0.86 $\times$ SM \hspace{0.2in}(true value = 0.70 $\times$ SM)
\item D: 2.38 $\times$ SM \hspace{0.2in}(true value = 2.06 $\times$ SM)
\end{myitemize}
We show these measured cross sections versus the input values in
Fig.~\ref{calibration} as a calibration check. We fit a straight line
through the points with input cross sections greater than 1~pb and find good
agreement with the measurements. Below that, the measurement is biased
towards values greater than the input because the results are constrained to
be positive.
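The calibration itself amounts to a straight-line fit of the measured versus
the input cross sections. A minimal sketch of such a fit, using only the
ensemble means quoted above (converted to pb via the SM cross section of
2.86~pb), is:
\begin{verbatim}
import numpy as np

# Ensemble means quoted above, in units of the SM cross section (2.86 pb).
input_xs    = np.array([2.76, 0.00, 0.70, 2.06]) * 2.86  # pb
measured_xs = np.array([3.00, 0.20, 0.86, 2.38]) * 2.86  # pb

# Fit a straight line only through points above 1 pb, where the
# positivity constraint does not bias the measurement.
mask = input_xs > 1.0
slope, intercept = np.polyfit(input_xs[mask], measured_xs[mask], deg=1)
print(f"measured = {slope:.2f} * input + {intercept:.2f} pb")
\end{verbatim}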
\clearpage
\vspace{0.1in}
\begin{figure}[!h!tbp]
\begin{center}
\includegraphics[width=0.65\textwidth]{eps/MatrixElement/ensembles/ME_analysis}
\end{center}
\vspace{-0.1in}
\caption[calibration]{Measured signal cross section versus input
cross section in the ensemble tests. The ensemble response is obtained
from the mean of the distributions in Figs.~\ref{SMens} and
\ref{ensembles}.}
\label{calibration}
\end{figure}
\section{Cross Check Samples}
To test the modeling of the $W$+jets and $\ttbar$ background, two cross check
samples are created that select regions of phase space where little or no signal
is expected.
\subsection{$W$+jets Cross Check Samples}
The first sample selects events with $H_{T}$ less than $175$
GeV. This sample selects soft $W$+jets and QCD events and almost no top quark
events. Results of this cross check sample are shown below in Fig.~\ref{wjets_cross}.
\vspace{0.1in}
\begin{figure}[!h!tbp]
\begin{center}
\subfigure[]{
\includegraphics[width=0.48\textwidth]{eps/MatrixElement/cross_check/Wjets_tb_Discriminant_electron.eps}
}
\subfigure[]{
\includegraphics[width=0.48\textwidth]{eps/MatrixElement/cross_check/Wjets_tq_Discriminant_electron.eps}
}
\subfigure[]{
\includegraphics[width=0.48\textwidth]{eps/MatrixElement/cross_check/Wjets_tb_Discriminant_muon.eps}
}
\subfigure[]{
\includegraphics[width=0.48\textwidth]{eps/MatrixElement/cross_check/Wjets_tq_Discriminant_muon.eps}
}
\end{center}
\vspace{-0.1in}
\caption{$W$+jets cross check plots for (a) $s$-channel discriminant for electrons,
(b) $t$-channel discriminant for electrons, (c) $s$-channel discriminant for muons,
and (d) $t$-channel discriminant for muons.}
\label{wjets_cross}
\end{figure}
\clearpage
\subsection{$\ttbar$ and Hard $W$+jets Cross Check Samples}
The second sample selects events with $H_{T}$ greater than $300$
GeV. This sample selects $\ttbar$ and hard $W$+jets events. Results of this cross check sample are shown below in
Fig.~\ref{ttbar_cross}.
\vspace{0.1in}
\begin{figure}[!h!tbp]
\begin{center}
\subfigure[]{
\includegraphics[width=0.48\textwidth]{eps/MatrixElement/cross_check/TTbar_tb_Discriminant_electron.eps}
}
\subfigure[]{
\includegraphics[width=0.48\textwidth]{eps/MatrixElement/cross_check/TTbar_tq_Discriminant_electron.eps}
}
\subfigure[]{
\includegraphics[width=0.48\textwidth]{eps/MatrixElement/cross_check/TTbar_tb_Discriminant_muon.eps}
}
\subfigure[]{
\includegraphics[width=0.48\textwidth]{eps/MatrixElement/cross_check/TTbar_tq_Discriminant_muon.eps}
}
\end{center}
\vspace{-0.1in}
\caption{$\ttbar$ and hard $W$+jets cross check plots for (a) $s$-channel discriminant for electrons,
(b) $t$-channel discriminant for electrons, (c) $s$-channel discriminant for muons,
and (d) $t$-channel discriminant for muons.}
\label{ttbar_cross}
\end{figure}
\SetPicSubDir{ch-Background}
\chapter{Background}
\label{ch:background}
\vspace{2em}
This section provides the background required to understand this work. Readers who already have an understanding of Deep Reinforcement Learning and Deep Deterministic Policy Gradient (DDPG) may skip this section.
We would like to note that a large portion of this section has been reproduced from OpenAI's Spinning Up in Deep RL~\cite{SpinningUp2018}. What is different in this reproduction, however, is that the background information on DRL has been adapted to fit the ideas and concepts related to map-less navigation found in this work.
\section{Deep Reinforcement Learning}
Deep reinforcement learning (DRL) is the combination of reinforcement learning and deep learning. This idea of using neural networks for reinforcement learning, however, is not new and can be dated all the way back to Tesauro's TD-Gammon~\cite{tesauro_temporal_nodate}. In the early 2010s, the field of deep learning began to find groundbreaking success, particularly in speech recognition~\cite{dahl_context-dependent_2012} and computer vision~\cite{krizhevsky_imagenet_2017}. This success, combined with the advances in computing power, allowed a revival of interest in using deep neural networks as universal function approximators for reinforcement learning, leading to deep reinforcement learning.
Deep reinforcement learning is useful when~\cite{SpinningUp2018}:
\begin{itemize}
\item we have a sequential decision-making problem (which we can represent as a Markov Decision Process)
\item we do not know the optimal behaviour (e.g. multi-modal problem)
\item but we can still evaluate whether behaviours are good or bad
\end{itemize}
\subsection{Markov Decision Process}
To train our agent with deep reinforcement learning, we must be able to model the relationship between the agent and its environment. A Markov Decision Process (MDP) is a mathematical object that describes our agent interacting with a stochastic environment. It is defined by the following components:
\begin{itemize}
\item $S$: \textbf{state space}, a set of states of the environment. This is the set of inputs for our robot which contain a view of the world (i.e. a stack of laser scans, the current speed of the robot, and the target position with respect to the robot's local frame)
\item $A$: \textbf{action space}, a set of actions, which the agent selects from each timestep. The actions taken by our robot to reach its target are its linear velocity and angular velocity, both of which are continuous-valued variables
\item $P(r, s' | s, a)$: \textbf{transition probability distribution}. For each state s and action $a$, $P$ specifies the probability that the environment will emit reward $r$ and transition to state $s'$. This transition function describes the relationship between states, actions, next states, and rewards in an environment.
\end{itemize}
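For illustration, the components above can be sketched as a gym-style
environment interface. The observation layout, its sizes, and the placeholder
dynamics below are assumptions made for exposition, not the actual training
environment used in this work:
\begin{verbatim}
from dataclasses import dataclass
import numpy as np

@dataclass
class Observation:
    """State s: what the robot observes at one timestep (sizes are assumed)."""
    laser_stack: np.ndarray   # e.g. the last 3 scans of 180 ranges each
    velocity: np.ndarray      # current (linear, angular) velocity
    target_local: np.ndarray  # target (distance, angle) in the robot frame

class NavigationEnv:
    """Toy stand-in for the transition distribution P(r, s' | s, a)."""

    def reset(self) -> Observation:
        return Observation(np.zeros((3, 180)), np.zeros(2), np.array([5.0, 0.0]))

    def step(self, action: np.ndarray):
        # action = (linear velocity, angular velocity), both continuous
        next_obs = self.reset()       # placeholder dynamics
        reward, done = 0.0, False     # placeholder reward and termination flag
        return next_obs, reward, done
\end{verbatim}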
\subsection{Policies}
The end goal is to find a policy $\pi$, which tells the agent what actions to take given a state. In DRL, we use parameterized policies: policies whose outputs are computable functions that depend on a set of parameters (e.g. the weights and biases of a neural network) which we can adjust to change the behavior via some optimization algorithm~\cite{SpinningUp2018}.
We denote the parameters of such a policy by $\theta$, and then write this as a subscript on the policy symbol to highlight the connection:
\begin{equation}
a_{t} \sim \pi_{\theta}(\cdot | s_{t})
\end{equation}
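As a purely illustrative example of a parameterised policy (a generic
stochastic policy matching the notation above, rather than the deterministic
actor used by DDPG), a small Gaussian policy network in PyTorch could be
written as follows; the state dimension, hidden sizes, and two-dimensional
action are assumptions:
\begin{verbatim}
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """pi_theta(. | s): maps a state to a distribution over 2-D actions."""
    def __init__(self, state_dim=548, action_dim=2, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, action_dim)
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state):
        h = self.body(state)
        return torch.distributions.Normal(self.mu(h), self.log_std.exp())

# Sampling an action a_t ~ pi_theta(. | s_t) for a dummy state:
policy = GaussianPolicy()
a_t = policy(torch.zeros(1, 548)).sample()
\end{verbatim}
The parameters $\theta$ here are the weights and biases of the linear layers together with the learned log standard deviation; adjusting them changes the behaviour of the policy.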
\subsection{Reward and Return}
\label{background:subsec:reward}
The reward function $R$ is critically important in reinforcement learning (and specifically for this work). It depends on the current state of the world, the action just taken, and the next state of the world:
\begin{equation}
r_{t} = R(s_{t}, a_{t}, s_{t+1})
\end{equation}
The goal of an agent is to maximise the cumulative reward over a trajectory. In this work, the type of return used is called an \textbf{infinite-horizon discounted return}, which is the sum of all rewards ever obtained by the agent, but discounted by how far off in the future they are obtained. This formulation of the return includes a discount factor $\gamma \in (0, 1)$, giving $R(\tau) = \sum_{t=0}^{\infty} \gamma^{t} r_{t}$.
Specifically (prior to adding the reward feature proposed in this work), this work uses the reward function for map-less navigation proposed in Xie et al.'s AsDDPG network~\cite{xie_learning_2018}:
\begin{equation*}
r_{t} =
\begin{cases}
R_{\text{crash}}& \text{if robot crashes},\\
R_{\text{reach}}& \text{if robot reaches the goal},\\
\gamma{((d_{t-1} - d_{t})\Delta{t} - C)}& \text{otherwise}.\\
\end{cases}
\end{equation*}
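For illustration only, the case-wise reward above can be sketched as a small Python function. The function name, the constant values, and the interpretation of $d_{t}$ as the robot's distance to the goal at timestep $t$ are assumptions made for this sketch; they are not the exact implementation or tuned values used in this work.
\begin{verbatim}
def navigation_reward(crashed, reached_goal, d_prev, d_curr,
                      r_crash=-100.0, r_reach=100.0,
                      gamma=0.99, dt=0.2, c=0.01):
    """Illustrative sketch of the map-less navigation reward.

    d_prev / d_curr: distance from the robot to the goal at the
    previous / current timestep. All constants are placeholders.
    """
    if crashed:
        return r_crash        # large penalty for a collision
    if reached_goal:
        return r_reach        # large bonus for reaching the goal
    # Dense shaping term: reward progress made towards the goal during
    # this step, minus a small per-step cost C that penalises idling.
    return gamma * ((d_prev - d_curr) * dt - c)
\end{verbatim}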
\subsection{The RL Problem}
The goal of reinforcement learning is to select a policy which maximises \textbf{expected return} when the agent acts according to it. To talk about expected return, we first have to talk about probability distributions over trajectories.
Let’s suppose that both the environment transitions and the policy are stochastic. In this case, the probability of a $T$-step trajectory is:
\begin{equation}
P(\tau|\pi) = p_{0}(s_{0}) \prod_{t=0}^{T-1} P(s_{t+1}|s_{t}, a_{t}) \pi(a_{t}|s_{t})
\end{equation}
The expected return, denoted by $J(\pi)$, is then:
\begin{equation}
J(\pi) = \int_{\tau} P(\tau|\pi) R(\tau) = \underset{\tau \sim \pi}{E}[R(\tau)]
\end{equation}
The central optimization problem in RL can then be expressed by:
\begin{equation}
\pi^{*} = \underset{\pi}{\argmax}\, J(\pi)
\end{equation}
with $\pi^{*}$ being the optimal policy.
\subsection{Value Functions}
It's often useful to know the \textbf{value} of a state, or state-action pair. By value, we mean the expected return if you start in
that state or state-action pair, and then act according to a particular policy forever after. Value functions are used, one way or another, in almost every RL algorithm.
There are four main functions of note:
\begin{enumerate}
\item The \textbf{On-Policy Value Function}, $V^{\pi}(s)$, which gives the expected return if you start in a state $s$ and always act according to policy $\pi$:
\begin{equation}
V^{\pi}(s) = \underset{\tau \sim \pi}{E}[R(\tau) \,|\, s_{0} = s]
\end{equation}
\item The \textbf{On-Policy Action-Value Function}, $Q^{\pi}(s, a)$, which gives the expected return if you start in state $s$, take an arbitrary action $a$ (which may not have come from the policy), and then forever after act according to policy
$\pi$:
\begin{equation}
Q^{\pi}(s, a) = \underset{\tau \sim \pi}{E}[R(\tau) \,|\, s_{0} = s, a_{0} = a]
\end{equation}
\item The \textbf{Optimal Value Function}, $V^{*}(s)$, which gives the expected return if you start in state $s$ and always act according to the \textit{optimal} policy in the environment
\begin{equation}
V^{*}(s) = \max_{\pi} \underset{\tau \sim \pi}{E}[R(\tau) \,|\, s_{0} = s]
\end{equation}
\item The \textbf{Optimal Action-Value Function}, $Q^{*}(s, a)$, which gives the expected return if you start in state $s$, take an arbitrary action $a$, and then forever after act according to the \textit{optimal} policy in the environment:
\begin{equation}
Q^{*}(s, a) = \max_{\pi} \underset{\tau \sim \pi}{E}[R(\tau) \,|\, s_{0} = s, a_{0} = a]
\end{equation}
\end{enumerate}
\subsection{The Optimal Q-Function and the Optimal Action}
There is an important connection between the optimal action-value function $Q^{*}(s, a)$ and the action selected by the optimal policy. By definition, $Q^{*}(s, a)$ gives the expected return for starting in state $s$, taking (arbitrary) action $a$, and then acting according to the optimal policy forever after.
The optimal policy in $s$ will select whichever action maximises the expected return from starting in $s$. As a result, if we have $Q^{*}$, we can directly obtain the optimal action, $a^{*}(s)$, via:
\begin{equation}
a^{*}(s) = \underset{a}{\argmax} Q^{*}(s, a)
\end{equation}
This connection is important; some RL algorithms learn by improving their policy (thereby directly obtaining ``good'' actions), whereas others learn by improving their Q-function (which allows them to determine the best action from a set of actions, indirectly obtaining ``good'' actions).
\subsection{Bellman Equations}
All four of the value functions obey special self-consistency equations called \textbf{Bellman equations}. The basic idea behind the Bellman equations is this:
\begin{center}
The value of your starting point is the reward you expect to get from being there, plus the value of wherever you land next.
\end{center}
The Bellman equations for the on-policy value functions are:
\begin{equation}
V^{\pi}(s) = \underset{\substack{a \sim \pi \\ s' \sim P}}{E} [r(s, a) + \gamma V^{\pi}(s')]
\end{equation}
\begin{equation}
Q^{\pi}(s, a) = \underset{s' \sim P}{E} [r(s, a) + \gamma \underset{a' \sim \pi}{E}[Q^{\pi}(s', a')]]
\end{equation}
where $s' \sim P$ is shorthand for $s' \sim P(\cdot|s, a)$, indicating that the next state $s'$ is sampled from the environment's transition rules; $a \sim \pi$ is shorthand for $a \sim \pi(\cdot|s)$; and $a' \sim \pi$ is shorthand for $a' \sim \pi(\cdot|s')$.
The Bellman equations for the optimal value functions are:
\begin{equation}
V^{*}(s) = \max_{a} \underset{s' \sim P}{E} [r(s, a) + \gamma V^{*}(s')]
\end{equation}
\begin{equation}
Q^{*}(s, a) = \underset{s' \sim P}{E} [r(s, a) + \gamma \max_{a'} Q^{*}(s', a')]
\end{equation}
The crucial difference between the Bellman equations for the on-policy value functions and those for the optimal value functions is the absence or presence of the $\max$ over actions. Its inclusion reflects the fact that whenever the agent gets to choose its action, it has to pick whichever action leads to the highest value in order to act optimally.
\subsection{Advantage Functions}
Sometimes in RL, we don't need to describe how good an action is in an absolute sense, but only how much better it is than others on average. That is to say, we want to know the relative \textbf{advantage} of that action. We make this concept precise with the \textbf{advantage function}.
The advantage function $A^{\pi}(s, a)$ corresponding to a policy $\pi$ describes how much better it is to take a specific action $a$ in state $s$, over randomly selecting an action according to $\pi(\cdot|s)$, assuming you act according to $\pi$ forever after. Mathematically, the advantage function is defined by
\begin{equation}
A^{\pi}(s, a) = Q^{\pi}(s, a) - V^{\pi}(s)
\end{equation}
\section{Deep Deterministic Policy Gradient (DDPG)}
The network presented in this work is a variant of DDPG~\cite{xie_learning_2018}. As such, it is crucial that we also provide background on what DDPG is and how it works.
Deep Deterministic Policy Gradient (DDPG) is an algorithm which concurrently learns a Q-function and a policy. It uses off-policy data and the Bellman equation to learn the Q-function, and uses the Q-function to learn the policy.
This approach is closely connected to Q-learning, and is motivated the same way: if you know the optimal action-value function $Q^{*}(s,a)$, then in any given state, the optimal action $a^{*}(s)$ can be found by solving:
\begin{equation}
a^{*}(s) = \argmax_{a} Q^*(s,a)
\end{equation}
DDPG interleaves learning an approximator to $Q^*(s,a)$ with learning an approximator to $a^*(s)$, and it does so in a way which is specifically adapted for environments with continuous action spaces. But what does it mean that DDPG is adapted specifically for environments with continuous action spaces? It relates to how we compute the max over actions in $\max_a Q^*(s,a)$.
When there are a finite number of discrete actions, the max poses no problem, because we can just compute the Q-values for each action separately and directly compare them. (This also immediately gives us the action which maximizes the Q-value.) But when the action space is continuous, we can't exhaustively evaluate the space, and solving the optimization problem is highly non-trivial. Using a normal optimization algorithm would make calculating $\max_a Q^*(s,a)$ a painfully expensive subroutine. And since it would need to be run every time the agent wants to take an action in the environment, this is unacceptable.
Because the action space is continuous, the function $Q^*(s,a)$ is presumed to be differentiable with respect to the action argument. This allows us to set up an efficient, gradient-based learning rule for a policy $\mu(s)$ which exploits that fact. Then, instead of running an expensive optimization subroutine each time we wish to compute $\max_a Q(s,a)$, we can approximate it with $\max_a Q(s,a) \approx Q(s,\mu(s))$.
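This contrast can be made concrete with a short sketch (illustrative only; \texttt{q}, \texttt{mu}, and \texttt{actions} are assumed to be a learned Q-function, a learned deterministic policy, and a finite action set respectively):
\begin{verbatim}
import numpy as np

def best_action_discrete(q, s, actions):
    # Discrete case: evaluate Q(s, a) for every action and pick the best.
    q_values = [q(s, a) for a in actions]
    return actions[int(np.argmax(q_values))]

def best_action_continuous(q, mu, s):
    # Continuous case: exhaustive evaluation is impossible, so we
    # approximate max_a Q(s, a) by the policy's proposed action mu(s).
    a = mu(s)
    return a, q(s, a)
\end{verbatim}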
\subsection{Quick Facts}
\begin{itemize}
\item DDPG is an off-policy algorithm
\item DDPG can only be used for environments with continuous action spaces
\item DDPG can be thought of as being deep Q-learning for continuous action spaces
\end{itemize}
Next, we'll explain the math behind the two parts of DDPG: learning a Q function, and learning a policy.
\subsection{The Q-Learning Side of DDPG}
First, let's recap the Bellman equation describing the optimal action-value function, $Q^*(s,a)$. It's given by
\begin{equation}
Q^*(s,a) = \underset{s' \sim P}{E}[r(s, a) + \gamma \max_{a'} Q^*(s', a')]
\end{equation}
where $s' \sim P$ is shorthand for saying that the next state, $s'$, is sampled by the environment from a distribution $P(\cdot| s, a)$.
This Bellman equation is the starting point for learning an approximator to $Q^*(s,a)$. Suppose the approximator is a neural network $Q_{\phi}(s,a)$, with parameters $\phi$, and that we have collected a set ${\mathcal D}$ of transitions $(s,a,r,s',d)$ (where $d$ indicates whether state $s'$ is terminal). We can set up a mean-squared Bellman error (MSBE) function, which tells us roughly how closely $Q_{\phi}$ comes to satisfying the Bellman equation:
\begin{equation}
L(\phi, {\mathcal D}) = \underset{(s,a,r,s',d) \sim {\mathcal D}}{{\mathrm E}}\left[
\Bigg( Q_{\phi}(s,a) - \left(r + \gamma (1 - d) \max_{a'} Q_{\phi}(s',a') \right) \Bigg)^2
\right]
\end{equation}
Here, in evaluating $(1-d)$, we've used a Python convention of evaluating True to 1 and False to 0. Thus, when $d$ is True---which is to say, when $s'$ is a terminal state---the Q-function should show that the agent gets no additional rewards after the current state.
Q-learning algorithms for function approximators, such as DQN (and all its variants) and DDPG, are largely based on minimizing this MSBE loss function. There are two main tricks employed by all of them which are worth describing, and then a specific detail for DDPG.
\textbf{Trick One: Replay Buffers}. All standard algorithms for training a deep neural network to approximate $Q^*(s,a)$ make use of an experience replay buffer. This is the set ${\mathcal D}$ of previous experiences. In order for the algorithm to have stable behavior, the replay buffer should be large enough to contain a wide range of experiences, but it may not always be good to keep everything. If you only use the very-most recent data, you will overfit to that and things will break; if you use too much experience, you may slow down your learning. This may take some tuning to get right.
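A minimal replay buffer can be sketched as a fixed-capacity store of transitions; the capacity and batch size below are arbitrary placeholders rather than the values used in this work.
\begin{verbatim}
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (s, a, r, s', d) transitions."""

    def __init__(self, capacity=1000000):
        # Once full, the oldest transitions are evicted automatically.
        self.buffer = deque(maxlen=capacity)

    def store(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size=128):
        # Uniformly sample a mini-batch of past experience.
        return random.sample(self.buffer, batch_size)
\end{verbatim}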
\textbf{Trick Two: Target Networks}. Q-learning algorithms make use of target networks. The term
\begin{equation}
r + \gamma (1 - d) \max_{a'} Q_{\phi}(s',a')
\end{equation}
is called the target, because when we minimize the MSBE loss, we are trying to make the Q-function be more like this target. Problematically, the target depends on the same parameters we are trying to train: $\phi$. This makes MSBE minimization unstable. The solution is to use a set of parameters which comes close to $\phi$, but with a time delay---that is to say, a second network, called the target network, which lags the first. The parameters of the target network are denoted $\phi_{\text{targ}}$.
In DQN-based algorithms, the target network is simply copied over from the main network every fixed number of steps. In DDPG-style algorithms, the target network is updated once per main network update by polyak averaging:
\begin{equation}
\phi_{\text{targ}} \leftarrow \rho \phi_{\text{targ}} + (1 - \rho) \phi,
\end{equation}
where $\rho$ is a hyperparameter between 0 and 1 (usually close to 1).
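In a framework such as PyTorch, this update takes only a few lines. The sketch below assumes \texttt{net} and \texttt{target\_net} are two networks with identical architectures and that $\rho$ is a hyperparameter close to 1:
\begin{verbatim}
import torch

def polyak_update(net, target_net, rho=0.995):
    # phi_targ <- rho * phi_targ + (1 - rho) * phi, parameter by parameter.
    with torch.no_grad():
        for p, p_targ in zip(net.parameters(), target_net.parameters()):
            p_targ.data.mul_(rho)
            p_targ.data.add_((1.0 - rho) * p.data)
\end{verbatim}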
\textbf{DDPG Detail: Calculating the Max Over Actions in the Target}. As mentioned earlier: computing the maximum over actions in the target is a challenge in continuous action spaces. DDPG deals with this by using a target policy network to compute an action which approximately maximizes $Q_{\phi_{\text{targ}}}$. The target policy network is found the same way as the target Q-function: by polyak averaging the policy parameters over the course of training.
Putting it all together, Q-learning in DDPG is performed by minimizing the following MSBE loss with stochastic gradient descent:
\begin{equation}
L(\phi, {\mathcal D}) = \underset{(s,a,r,s',d) \sim {\mathcal D}}{{\mathrm E}}\left[
\Bigg( Q_{\phi}(s,a) - \left(r + \gamma (1 - d) Q_{\phi_{\text{targ}}}(s', \mu_{\theta_{\text{targ}}}(s')) \right) \Bigg)^2
\right],
\end{equation}
where $\mu_{\theta_{\text{targ}}}$ is the target policy.
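Assuming PyTorch-style networks \texttt{q} (critic), \texttt{q\_targ} (target critic), and \texttt{mu\_targ} (target policy), and a mini-batch of tensors sampled from the replay buffer, the loss above can be sketched as follows (an illustrative sketch, not the exact code used in this work):
\begin{verbatim}
import torch

def ddpg_q_loss(q, q_targ, mu_targ, batch, gamma=0.99):
    s, a, r, s2, d = batch          # tensors sampled from the replay buffer
    q_sa = q(s, a)
    with torch.no_grad():           # the target must carry no gradient
        a2 = mu_targ(s2)                           # target policy action
        target = r + gamma * (1 - d) * q_targ(s2, a2)
    return ((q_sa - target) ** 2).mean()   # mean-squared Bellman error
\end{verbatim}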
\subsection{The Policy Learning Side of DDPG}
Policy learning in DDPG is fairly simple. We want to learn a deterministic policy $\mu_{\theta}(s)$ which gives the action that maximizes $Q_{\phi}(s,a)$. Because the action space is continuous, and we assume the Q-function is differentiable with respect to action, we can just perform gradient ascent (with respect to policy parameters only) to solve
\begin{equation}
\max_{\theta} \underset{s \sim {\mathcal D}}{{\mathrm E}}\left[ Q_{\phi}(s, \mu_{\theta}(s)) \right].
\end{equation}
Note that the Q-function parameters are treated as constants here.
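Continuing the PyTorch-style sketch, the policy objective can be written as a loss to be minimised (again illustrative only, assuming \texttt{q} and \texttt{mu} return differentiable tensors):
\begin{verbatim}
def ddpg_policy_loss(q, mu, s):
    # Gradient ascent on E[ Q(s, mu(s)) ] with respect to the policy
    # parameters, implemented as gradient descent on the negative.
    # In a full implementation the critic's parameters are held fixed
    # (treated as constants) while this loss is minimised.
    return -q(s, mu(s)).mean()
\end{verbatim}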
\subsection{Exploration vs. Exploitation}
DDPG trains a deterministic policy in an off-policy way. Because the policy is deterministic, if the agent were to explore on-policy, in the beginning it would probably not try a wide enough variety of actions to find useful learning signals. To make DDPG policies explore better, we add noise to their actions at training time.
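As a simple sketch of this, a training-time action can be drawn by adding zero-mean Gaussian noise to the deterministic policy output and clipping to the valid action range; the noise scale and action limits below are placeholders, not the values used in this work.
\begin{verbatim}
import numpy as np

def select_action(mu, s, noise_scale=0.1, act_low=-1.0, act_high=1.0,
                  add_noise=True):
    # Deterministic policy output, optionally perturbed for exploration.
    a = np.asarray(mu(s), dtype=np.float64)
    if add_noise:                      # training time: explore
        a = a + noise_scale * np.random.randn(*a.shape)
    return np.clip(a, act_low, act_high)   # test time: add_noise=False
\end{verbatim}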
At test time, to see how well the policy exploits what it has learned, we do not add noise to the actions.
\documentclass[output=paper]{langsci/langscibook}
\ChapterDOI{10.5281/zenodo.3598562}
\title{Internal constituent variability and semantic transparency in N Prep N constructions in Romance languages}
\author{Inga Hennecke\affiliation{University of Tübingen}}
\abstract{Constructions of the type N Prep N represent one of the most controversial issues in Romance word formation. In particular, their lexical status and their degree of productivity are still crucial points of discussion. Hence, it remains unclear whether these constructions fall within the category of morphological word formation or of syntax. Furthermore, the possibilities for internal prepositional variation remain uncertain. This article takes a constructionist approach within the framework of construction morphology in order to describe the internal constituent variability and transparency of the prepositional element in N Prep N constructions in Spanish, Portuguese, and French, as in Sp. \textit{juego de niños}, \textit{juego para niños} (`kid's game') or in Sp. \textit{cabaña de árbol} and \textit{cabaña en árbol} (`tree house'). A qualitative analysis of large-scale corpus data from the TenTen corpus family indicates that Romance N Prep N constructions may undergo internal prepositional variation. The analysis focuses on the semantic relations of the internal nominal constituents and the semantic transparency of the constructions in the three Romance languages under investigation. The results indicate that semantic relations and semantic transparency play a role in the internal constituent variability of the prepositional element.}
\shorttitlerunninghead{Internal constituent variability and semantic transparency}
\begin{document}
\maketitle
\section{Introduction}
Compounds of the type N Prep N, such as Sp. \textit{bicicleta de montaña} `mountain bike', Fr. \textit{salle de bain} (`bath room'), or Pt. \textit{história em quadrinhos} (`comic strip'), are generally considered to be the most problematic aspect of research on compounding and word formation in Romance languages. This is because these constructions represent nominal lexical units that clearly approach free syntactic structures \citep{BustosGisbert:1986}. Compounds of the type N Prep N have been treated very differently in research on compounding and have also been labeled with many different terms, such as syntagmatic compounds \citep{BuenafuentesdelaMata:2010}, syntactic compounds \citep{RioTorto:2009}, improper compounds \citep{Kornfeld:2009}, phrasal lexemes \citep{Masini:2007}, frozen multiword units \citep{Guevara:2012}, lexicalized syntactic constructions \citep{Villoing:2012}, lexicalized phrases \citep{Fradin:2009}, and syntactic words \citep{DiSciullo:1987}. Generally, compounding is a mechanism whereby two lexical units are combined. Compounds of the type N Prep N are characterized as lexical units that consist of (at least) two lexical elements that are not orthographically combined. As a result, compounds of the type N Prep N, such as Sp. \textit{traje de baño} (`bathing suit'), do not differ on a formal level from syntactic phrases of the type N Prep N, such as Sp. \textit{libro para niños} (`book for children') \citep[69]{BustosGisbert:1986}.
The most problematic issue in current research on compounding of the type N Prep N is the question of the delimitation of syntactic and lexical structures in Romance languages. As the treatment of these constructions is based largely on the theoretical background of the individual author, there is no general agreement on whether or not N Prep N constructions should be included in the class of compounds. Related to this issue is the question of whether these constructions emerge by means of productive word formation processes or are merely ``fossilized'' or lexicalized syntactic structures. These two crucial issues will be discussed and analyzed in this study, with a focus on one particular case of internal constituent variability, the alternation of the internal preposition in N Prep N constructions. A large-scale corpus analysis of this alternation in French, Spanish, and Portuguese supports the adoption of a constructionist approach within a framework of construction morphology. Such an approach allows the internal constituent variation of N Prep N constructions to be represented without recourse to traditional notions of lexicon and syntax.
\section{Definition and classification of syntagmatic compounds}
As mentioned above, constructions of the type N Prep N are often excluded from descriptions of Romance compounding. Typically, they are classified together with other compound-like constructions lacking an orthographical union, as in the examples in \tabref{Fig:1:Types of phrasal lexemes}.
\begin{table}
\caption{Phrasal lexemes in Romance languages \citep[257]{Masini:2009}\label{Fig:1:Types of phrasal lexemes}}
\begin{tabular}{lllll}
\lsptoprule
Language & Types & Phrasal lexemes & Lit. & Glosses\\
\midrule
French & [ADJ N]\textsubscript{N} & \textit{premier violon} & first violin & `first violin' \\
Italian & [N \textit{da} N]\textsubscript{N} & \textit{camera da letto} & room from bed & `bedroom' \\
Portuguese & [N \textit{de} N]\textsubscript{N} & \textit{cadeira de rodas} & chair of wheels & `wheelchair' \\
Spanish & [N ADJ]\textsubscript{N} & \textit{luna nueva} & moon new & `new moon' \\
\lspbottomrule
\end{tabular}
\end{table}
According to Masini, these examples are separated orthographically, show no strong degree of idiomaticity, and appear quite frequently in each of the four languages. The question nevertheless remains whether these constructions form part of the class of compounds.
According to \citet{Guevara:2012}, Spanish syntagmatic compounds, such as \textit{fin de semana} (`weekend') or \textit{sabelotodo} (`know-it-all'), should be excluded from the class of Spanish compounds, as these units are clearly syntactic units that contain “certain effects of lexicalization and atomicity in their distribution” \citep[180]{Guevara:2012}. In the same way, \citet{Villoing:2012} excludes French constructions such as \textit{fil de fer} (`iron wire') and \textit{brosse à dents} (`tooth brush') from her description of French compounds, as they are “lexicalized syntactic constructions that behave like lexical units” \citep[35]{Villoing:2012}. The approach taken by Guevara and Villoing indicates, on the one hand, that constructions of the type N Prep N are often considered as syntactic units that lie outside of the core of word formation processes. For this reason, they are regularly neglected in research papers on Romance word formation. On the other hand, this approach shows that N Prep N constructions are frequently interpreted as lexicalized syntactic constructions and, more precisely, as syntactic constructions that have somehow attained a high degree of fixedness. If this is the case, they should also be excluded as belonging to the class of Romance-language compounds, as lexicalization cannot be considered a morphological word formation process.
There is an opposing perspective according to which the constructions mentioned in \tabref{Fig:1:Types of phrasal lexemes} constitute a productive type of word formation and clearly follow productive morphosyntactic rules. According to Rainer, constructions of the type N Prep N are “very productive lexical patterns, which normally continue to obey the rules of […] syntax (for example, agreement rules), but may occasionally also deviate from them” \citep[2724]{Rainer:2016}. This perspective is not new and was already adopted by \citet{Benveniste:1974} in his work on French compounds of the type \textit {robe de chambre} (`robe') and \textit {plat à barbe} (`shaving bowl'), for which he claims indefinite productivity \citep[172]{Benveniste:1974}. In the course of the present paper, I will provide new empirical evidence in favor of this perspective using large-scale corpus data. The analysis will show that N Prep N constructions in Romance languages are highly frequent and productive and that their internal variability follows clear morphological rules that can be mapped using construction morphology.
In order to distinguish N Prep N constructions from other phrase-like constructions, their characteristics must be clearly delineated. According to \citet{BuenafuentesdelaMata:2010}, a syntagmatic compound may be defined as a lexical element that has been created by the fixation of a syntagm, which keeps its sentential structure, and therefore shows neither orthographic nor accentual union \citep[21ff.]{BuenafuentesdelaMata:2010}. \Citet{BustosGisbert:1986} states that Spanish N Prep N compounds differ from syntactic units on the syntactic level in two respects. First, they have a fixed word order, for example, \textit{ojo de buey} (`porthole') cannot be reordered as *\textit {buey ojo de}. Second, there is generally no unproblematic substitution of their constituents; for example *\textit{ojo de vaca} (`eye of cow') \citep[4825]{ValAlvaro:1999}. On a morphological level, he adds that N Prep N constructions show the same characteristics as other compounds in terms of gender and number agreement, the presence of composition markers, and the ability to undergo further derivation and to form collocations \citep[77]{BustosGisbert:1986}. According to Masini, N Prep N constructions are of major interest, as they follow the syntactic rules of head modification of a nominal phrase by a prepositional phrase. This means that in Romance languages, N Prep N constructions are generally left-headed, and that inflectional processes are performed on the head of the construction \citep[257]{Masini:2009}. \citet[4827]{ValAlvaro:1999} adds a fundamental characteristic on the semantic level: the absence of compositional meaning that may lead to syntactic reinterpretation of the complex nouns. This means that syntagmatic compounds, in contrast to syntactic units, represent one single naming unit at the semantic level; that is, they refer to one specific conceptual representation, as in Fr. \textit{sac à main} (`purse').
In this paper, I will focus on the syntactic criteria given by de Bustos Gisbert, specifically, on the impossibility of constituent substitution. This criterion does not appear to be suitable for purposes of differentiating syntactic and lexical elements, as the delimitation between syntactic and lexical N Prep N constructions remains a matter of controversy. Here, I will show that variation of the internal preposition can be best explained within a constructional framework. I will then argue with regard to the internal preposition that not only is the substitution of internal constituents possible in N Prep N constructions, but it is also a rule-governed process and depends largely on semantic factors, particularly the semantic relation of the nominal constituents.
When investigating the semantic relations of constituents of N Prep N constructions, it is crucial to consider the notions of semantic transparency and semantic opacity. In current research, the term semantic transparency refers to the degree to which the meaning of a complex construction can be derived from the meaning of its constituents \citep{Zwitserlood:1994}. For example, the French N Prep N construction \textit{salle de bains} (`bathroom') is considered semantically transparent, whereas the Spanish construction \textit{ojo de buey} (`porthole', lit. `bull's eye') is considered semantically opaque. Bell and Schäfer view semantic transparency and semantic opacity as scalar notions, lying at either end of a continuum (see \citet{Bell:2016} for a detailed discussion on semantic transparency). Later in the present study, I will discuss whether the semantic transparency of an N Prep N construction determines the possibility of internal constituent variation.
\section{Internal constituent variation in N Prep N constructions: The role of the preposition}
Characteristic of N Prep N constructions, and a crucial factor in their delimitation, is their resistance to paradigmatic variation. In the context of the delimitation of nominal compounds and noun phrases of the type N Prep N in Portuguese, \citet[9]{RioTorto:2012} state that the ``(im)possibility of lexical insertion'' is one of the most important tests of compoundhood. They go further, claiming that if internal change is allowed, ``we are no longer dealing with compounds ([N[PrepN]]\textsubscript{N}) but with noun phrases ([N[PrepN]]\textsubscript{NP})'' (ibid.). When speaking of internal change, Rio-Torto and Ribeiro refer principally to changes in determination, as in Pt. \textit{fim de semana} (`weekend') and Pt. \textit{fim da semana} (`end of this week'), and to changes effected through insertion of lexical material, as in \textit{fim da última semana} (`end of last week'). As these examples suggest, internal constituent variation is generally seen as a crucial test of delimitation between compounds and syntactic structures. Similarly, Masini argues that, for lexical elements, ``paradigmatic variation is blocked, since the words in the construction cannot be substituted by a near-synonym, which should not be a problem for normal phrases'' \citep[259]{Masini:2009}. Masini defines paradigmatic blocking as the inability to replace a constituent of the construction by another paradigmatically fitting constituent. She also refers to cases of paradigmatic blocking of a nominal unit of an N Prep N construction, as in the Italian examples \textit{casa di cura} (`nursing home') and *\textit{abitazione di cura} (`*nursing domicile'). In this case, \textit{casa di cura} is a fixed naming unit that loses its semantic meaning when there is paradigmatic variation of a nominal element. The following analysis will show that paradigmatic blocking holds particularly true for N Prep N constructions with a stronger degree of semantic opacity and idiomaticity. More transparent N Prep N constructions allow productive and rule-governed internal constituent alternation, as the analysis will show by means of the prepositional constituent.
In the literature, all references to a delimitation test of constituent variability neglect the prepositional constituent in N Prep N constructions. The prepositional element is fundamental in N Prep N constructions, but its status is far from clear. In all the Romance languages under investigation here, the preposition \textit{de} is the most frequently used prepositional constituent in N Prep N constructions. In the case of Spanish, \citet{BuenafuentesdelaMata:2010} cites various examples of N Prep N constructions with prepositions other than \textit{de}, such as \textit{leche en polvo} (`milk powder'), \textit{cita a ciegas} (`blind date'), \textit{caridad con uñas} (`self-serving favor'), \textit{pozo sin fondo} (`bottomless pit'), and \textit{caballo con arcos} (`pommel horse'). She adduces the appearance of prepositions other than \textit{de} as evidence for the structural complexity of N Prep N constructions in Spanish. The same case can be made for the other languages under investigation in this paper (i.e. French and Portuguese), which show the same ability to form N Prep N constructions with other prepositions.
This paper concentrates on a specific set of (partially) synonymous prepositions in French, Spanish, and Portuguese, which are Fr. \textit{de} (`of'), \textit{à} (`to'), \textit{en} (`in'), and \textit{pour} `for'; Sp. \textit{de} (`of'), \textit{a} (`to'), \textit {en} (`in'), and \textit{para} (`for') as well as Pt. \textit{de} (`of'), \textit{a} (`to'), \textit{em} (`in') and \textit{para} (`for'). These prepositions may all appear in N Prep N constructions and they may all undergo internal alternation and variation in the three languages under investigation. Consider examples \xxref{ex:hennecke:1}{ex:hennecke:3} from the TenTen corpora:
\ea \label{ex:hennecke:1}
\ea\label{ex:hennecke:1a} Sp. \textit{fuente de horno – fuente para horno} `casserole' \\
\ex\label{ex:hennecke:1b} Pt. \textit{água de lavagem – água para lavagem} `wash water' \\
\ex\label{ex:hennecke:1c} Fr. \textit{livre d'enfant – livre pour enfants} `children’s book' \\
\z
\ex \label{ex:hennecke:2}
\ea\label{ex:hennecke:2a} Sp. \textit{motores de gasolina – motores a gasolina} `gas engine' \\
\ex\label{ex:hennecke:2b} Fr. \textit{jauge d'essence – jauge à essence} `fuel gauge' \\
\ex\label{ex:hennecke:2c} Pt. \textit{fogão de lenha – fogão a lenha} `wood stove' \\
\z
\ex \label{ex:hennecke:3}
\ea\label{ex:hennecke:3a} Fr. \textit{chemise de coton – chemise en coton} `cotton shirt' \\
\ex\label{ex:hennecke:3b} Pt. \textit{bracelete de aço – bracelete em aço} `steel bracelet' \\
\ex\label{ex:hennecke:3c} Sp. \textit{ciclismo de pista – ciclismo en pista} `track cycling' \\
\z
\z
Example \REF{ex:hennecke:1} shows internal variation of the prepositional elements \textit{de} and \textit{pour/para}. While the constructions containing \textit{de} are considered to have a lexical status, the constructions with \textit{pour/para} are generally considered to be syntactic constructions, as they pass certain of the classification tests mentioned above. In contrast to the construction with \textit{de}, they allow substitution and insertion, as the two tests of compoundhood demonstrate: Sp. \textit{fuentes de vidrio para horno} (`glass casseroles for the oven'), \textit{fuentes profundas para horno} (`deep casseroles for the oven'), but *\textit{fuentes de vidrio de horno} and *\textit{fuentes profundas de horno}. Example \REF{ex:hennecke:3b} demonstrates the internal alternation of the prepositions \textit{de} and \textit{en/em}. Here, alternation is possible without changing the semantic context of the whole construction or its degree of semantic transparency. In example \REF{ex:hennecke:2}, the prepositions \textit{de} and \textit{a/à} alternate in N Prep N constructions without changing the lexical status of the respective constructions. Nonetheless, these constructions differ in their frequency of usage, productivity, and fixedness, as well as in their degree of lexicalization and of idiomaticity. Especially in French, alternation of \textit{de} and \textit{à} may indicate a change in meaning, as in \textit{verre de vin} (`glass of wine') and \textit{verre à vin} (`wine glass'). In this case, the interpretation of both constructions as two distinct products of word formation is reasonable (this specific case will be discussed in detail in the course of the corpus analysis). In other cases, such as Pt. \textit{fogão de lenha – fogão a lenha} (`wood stove'), no clear semantic difference is visible, as attested by native speakers of Brazilian and European Portuguese: internal variation is possible inside one construction (a more detailed discussion of the examples will follow in the upcoming section). As mentioned above, authors including \citet{RioTorto:2009} interpret constructions of the type shown in examples \xxref{ex:hennecke:1}{ex:hennecke:3} as syntactic units, on the grounds that they do not pass all the delimitation tests for compoundhood. The following theoretical discussion and empirical analysis will show that it is neither necessary nor possible to draw a clear distinction between syntactic constructions and lexical constructions; the possibility of alternating prepositional elements in N Prep N constructions depends largely on the semantic function of the N2, and the fixedness, semantic transparency and the idiomaticity of the whole construction.
Another problem in analyzing alternation of prepositional elements concerns the role of the prepositions. In Romance languages, the prepositions \textit{de} and \textit{à/a} in particular have often been considered as semantically ``empty'' units that do not contain meaning. This perspective has often been applied to the prepositional element in N Prep N constructions: for example, \citet[164]{Bartning:1993} states that prepositions in French N Prep N constructions do not code any specific meaning and that they function only as linking elements. Similarly, \citet{Bartning:1993} refers to \citet{Cadiot:1997} by describing these prepositional elements as ``colorless prepositions'' \citep[164]{Bartning:1993}. For the French prepositions \textit{de} and \textit{à}, \citet[44]{Bosredon:1991} use the term of ``opérateur de couplage" (`linking operator'). \citet{Cadiot:1997} sees prepositions in N Prep N constructions as elements that express the operation of a construction or the denomination of a subclass of N1; he associates the prepositional element with a ``referential calibration'' of N1. In contrast, \citet[32]{Laumann:1998} notes the importance of distinguishing between different types of meaning when investigating the function and meaning of prepositions in nominal compounds. He differentiates between system meaning (\textit{Systembedeutung} -- the sum of the meaning patterns of the constituents in the N Prep N construction), word meaning (\textit{Wortbedeutung} -- the meaning of the construction on the level of word formation), and lexicon meaning (\textit{Wortschatzbedeutung} - the meaning of the construction as a naming unit in the lexicon).
Other authors interpret the possibility of elision of the prepositional element, as Sp. \textit{ducha de teléfono} > \textit{ducha teléfono} (`detachable shower head') or Sp. \textit{crédito de vivienda} > \textit{crédito vivienda} (`home loan'), as evidence of the semantic emptiness of the preposition. However, the elision of the prepositional element is only possible in certain strongly lexicalized constructions. Therefore, a counterargument may be based on the same evidence, given that the elision of the prepositional element is not possible in most cases. In the present paper, I argue that the elision of prepositional elements is not proof of a lack of semantic content. The elision can be explained in terms of common processes of language change that may or may not take place in certain lexicalization processes within complex units. The alternation between prepositional elements, as exemplified above is a productive word-formation process that differs clearly from mere lexicalization processes. Therefore, the following qualitative corpus analysis considers the internal constituent alternation of the prepositional element from a comparative perspective and does not focus on the elision of this element. The analysis adopts a constructionist approach based on \citet{Goldberg:1995,Goldberg:2006}, with a special focus on construction morphology as introduced by \citet{Booij:2010,Booij:2015}.
\section{N Prep N constructions in construction grammar and morphology}\label{sec:henneke:4}
Since Goldberg’s seminal work \textit{Constructions} (\citeyear{Goldberg:1995}), the constructional approach has had a strong impact on linguistic research. Constructions are considered as conventionalized form-meaning pairs that can be found at all levels of abstraction in language, are dynamically formed, and may be changed continuously. They are acquired via general processes of abstraction, generalization, and categorization. \citet[5]{Goldberg:2006} considers any linguistic unit to be a construction if ``some aspect of its form or function is not strictly predictable from its component parts''. Furthermore, units are considered to be stored as constructions even if they can be fully predicted, provided they are sufficiently frequent (ibid.).
In his theory of construction morphology, \citet{Booij:2015} applied the general notions and concepts of construction grammar to units that have traditionally been regarded as morphological. The underlying assumption of the theory of construction morphology is that a construction may have characteristics that cannot be derived from its constituents \citep[3]{Booij:2015}. Booij cites the example of the reduplication of nouns in Spanish in order to express the notion `real', as in \textit{un café café} (`a real coffee'). He establishes the notion of conceptual schemas and subschemas, defined as schematic representations of morphological constructions. These schemas represent a correlation between form and meaning:
\ea\citep[2]{Booij:2015}\\ <[[x]V\textsubscript{i} er]N\textsubscript{j} $\leftrightarrow$ [Agent of SEM\textsubscript{i}]\textsubscript{j}>\z
This example indicates that a word with base x, in this case an English infinitive verb form, can transform into a noun with the meaning `agent of the base word (SEM)' by adding the suffix \textit{-er} \citep[2]{Booij:2015}. The variable x denotes the phonological content of the base word, i denotes the meaning of the base word, and j shows that the meaning of the complete construction depends on the form of the complete construction (ibid.). \citet[261]{Masini:2009} applies the theory of construction morphology to constructions of the type N Prep N in Italian, taking them to represent an abstract template that is stored in the mental lexicon. Masini further notes that this abstract template features a certain degree of productivity and is associated with a concrete naming function (ibid.). By means of a specific inheritance mechanism, based on instance inheritance links \citep{Goldberg:1995}, constructions that are more and more specific can be derived from the abstract template. This may be done by categorical specification (filling an unspecified slot with a specific category), as in N Prep N or N Prep V, by lexical specification (filling a slot with specific lexical material), as in N \textit{de} N or by a completely lexical construction, such as It. \textit{casa di cura} \citep[261]{Masini:2009}. \figref{fig:hen:1} demonstrates the application of this theory to French constructions of the type N Prep N.
\begin{figure}
\caption{Inheritance hierarchy for N Prep N templates in French \citep[263]{Masini:2009}\label{fig:hen:1}}
\includegraphics[scale=0.5]{figures/Masinifigure2.png}
\end{figure}
This figure shows the inheritance hierarchy from the abstract template [N1 \textit{de} N2]\textsubscript{N}, which here is an intermediate construction of the abstract template [N1 Prep Y] and [N1 Prep N2]. From the level [N1 \textit{de} N2]\textsubscript{N}, it is possible to proceed to a second intermediate lexical level, which indicates the semantic function of N2, and to conclude at a completely lexical level, which shows the lexical result with a concrete naming function. According to \citet[263]{Masini:2009}, this model can also clarify and describe new occurrences of the N1 \textit{de} N2 construction.
The following qualitative corpus analysis aims to apply the concept of construction morphology presented by \citet{Booij:2010,Booij:2015} and exemplified by \citet{Masini:2009} to a cross-linguistic comparative analysis of large-scale corpus data for Spanish, French, and Portuguese N Prep N constructions. The focus of the analysis is on constituent variation of the internal prepositional element in N Prep N constructions in these three languages, and I will apply Masini's inheritance hierarchy template (\figref{fig:hen:1}) to the internal variability of prepositional constituents. It is useful to include a further intermediate level prior to the first and second levels of the hierarchy for N Prep N templates mentioned above. This additional level contains the abstract template with the semantic function of N2, which in the following corpus analysis is shown to be a crucial factor in determining the possibility of internal prepositional variation. For the purposes of the present analysis, the inheritance hierarchy for N Prep N templates may be visualized as in \figref{fig:henneke:Inheritancehierarchy}.
\begin{figure}
\caption{Inheritance hierarchy for internal variation in N Prep N templates in French (adapted from \citealt{Masini:2009})\label{fig:henneke:Inheritancehierarchy}}
\includegraphics[scale=0.5]{figures/Inheritancehierarchy.png}
\end{figure}
This figure shows the inheritance hierarchy adapted from \figref{fig:hen:1} by means of example \REF{ex:hennecke:3a}. As mentioned above, the added abstract intermediate levels are intended to reflect the possibility of prepositional variability for certain N Prep N constructions and the dependence of this variability on the semantic function of the nominal constituents of the construction. The objectives of the following qualitative corpus analysis are to apply the inheritance hierarchy in \figref{fig:henneke:Inheritancehierarchy} to a large-scale corpus of natural speech data for Spanish, French and Portuguese and to compare the internal prepositional variability of N Prep N constructions in these three languages.
\section{Qualitative corpus analysis}
As mentioned above, the present corpus analysis is intended to investigate internal constituent alternation of the prepositional element in N Prep N constructions in Spanish, French and Portuguese. The focus is on the alternation between \textit{de} and \textit{à/a}, \textit{de} and \textit{en/em}, and \textit{de} and \textit{pour/para}. This study builds on a quantitative corpus survey on the internal alternation of the prepositional element in N Prep N constructions in Spanish, French, and Portuguese by means of large-scale corpus data \citep{Hennecke:2017}. \citet[144]{Hennecke:2017} showed that internal prepositional variation in the three languages under investigation is possible, but that these languages show different characteristics in terms of frequency and productivity of such alternations. The quantitative analysis of the three languages focused on the frequencies of types and tokens, productivity (i.e. probability of previously unobserved types), and population size (i.e. potential number of formations) \citep[139]{Hennecke:2017}. The results show that Portuguese and, to a lesser extent, French allow productive internal constituent variation of the prepositional element. In contrast, Spanish does not show productivity in internal variation, which is demonstrated by the absence of hapax legomena (ibid.). At the same time, Spanish has the greatest tendency to employ the preposition \textit{de} in N Prep N constructions. In French, the prepositions \textit{à} and \textit{pour} are slightly more productive than in the other two languages. Moreover, French tends to avoid constructions using \textit{avec}, whereas constructions with \textit{com} are productive in Portuguese. The latter tendency may be explained by the fact that French prefers NA-constructions over constructions of the type N \textit{avec} N.
The aim of the present corpus analysis is to investigate the results from the above-mentioned study from a qualitative perspective. In this qualitative survey, the internal prepositional variability will be investigated from a mostly semantic perspective, combined with a constructionist approach. Here, the focus will be on which nominal semantic functions allow prepositional variability and whether the variability depends on the semantic transparency of the construction. To that end, this corpus analysis is based on the same dataset as in \citet{Hennecke:2017}, namely three web corpora from the TenTen corpus family from Sketchengine: the French corpus frTenTen12, the Spanish corpus esTenTen11 and the Portuguese corpus ptTenTen11. The TenTen corpora are large-scale web corpora with the counts displayed in \tabref{tab:1:frequencies}.
\begin{table}
\caption{Corpus Information of the TenTen corpora for Spanish, French and Portuguese (\url{https://the.sketchengine.co.uk})\label{tab:1:frequencies}}
\begin{tabular}{lrrr}
\lsptoprule
& frTenTen12 & esTenTen11 & ptTenTen11 \\
\midrule
Tokens & 11,444,973,582 & 10,994,616,207 & 4,626,584,246\\
Words & 9,889,689,889 & 9,497,402,122 & 3,900,501,097\\
Sentences & 456,065,104 & 407,205,587 & 190,221,913\\
Paragraphs & 188,079,362 & 213,364,685 & 91,248,976\\
Documents & 20,400,411 & 22,287,566 & 10,216,060\\
\lspbottomrule
\end{tabular}
\end{table}
In order to perform a qualitative analysis of the data, all N Prep N constructions were extracted automatically from the corpora, keeping only those that appear with more than one internal prepositional element. The present analysis focuses exclusively on N Prep N constructions and therefore excludes constructions of the type N Prep Det N. The data were manually inspected by excluding grammaticalized constructions (for example Fr. \textit{face à N}, Sp. \textit{gracias a N} `thanks to N'), binominal pairs (e.g. Fr. \textit{temps en temps} `time to time', Sp. \textit{dia a dia} `day to day'), and antonyms (Fr. \textit{chien avec/sans laisse} `dog with/without leash'). \tabref{tab:2:frequencies} demonstrates the underlying dataset for the qualitative analysis.
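The selection criterion just described can be made explicit with a short sketch (hypothetical code, shown for illustration only): given a list of automatically extracted (N1, preposition, N2) triples, only those noun pairs attested with at least two different internal prepositions are retained.
\begin{verbatim}
from collections import defaultdict

def pairs_with_variable_preposition(triples):
    """triples: iterable of (n1, prep, n2) tuples extracted from the corpus.

    Returns the noun pairs (n1, n2) attested with two or more different
    internal prepositions, together with the prepositions observed.
    """
    preps_by_pair = defaultdict(set)
    for n1, prep, n2 in triples:
        preps_by_pair[(n1, n2)].add(prep)
    return {pair: preps for pair, preps in preps_by_pair.items()
            if len(preps) >= 2}
\end{verbatim}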
\begin{table}
\caption{Type and token counts for the underlying dataset with all pairs of nouns that are attested with at least two different internal prepositions\label{tab:2:frequencies}}
\begin{tabular}{lrr}
\lsptoprule
& Types & Tokens \\
\midrule
French & 1062 & 6991 \\
Spanish & 547 & 10219 \\
Portuguese & 6795 & 58932 \\
\lspbottomrule
\end{tabular}
\end{table}
This dataset shows important differences in type-token frequency between the three languages \citep{Hennecke:2017}. Portuguese presents by far the greatest number of different types and tokens of N Prep N constructions with more than one internal preposition. In contrast, Spanish has very few different types but a considerable number of tokens. That is, Spanish has only a small number of distinct N Prep N constructions, but these few types appear quite often in the corpus data. The French data show a significantly higher number of different types than the Spanish data, but a lower number of different tokens. Here, a larger number of distinct types each occur less often in the corpus data (for a detailed quantitative analysis of the data see \citealt{Hennecke:2017}). In what follows, a qualitative analysis of selected pairs of internal prepositions is presented in order to investigate whether these differences also appear at a qualitative level, with a special focus on the semantic functions of the nominal constituents and the semantic transparency of the constructions. The specific semantic relations were established with regard to the current literature on the semantic relations of nominal constituents in nominal compounds \citep{Gagne:1997, Gagne:2009, Girju:2005}. They were subsequently modified and adapted to the specific case of N Prep N constructions in the corpus data under investigation. It is not possible to list and discuss all occurrences of all types in the present paper; only selected examples will therefore be discussed and analyzed. Where necessary, references will be made to frequency of occurrence.
\subsection{The preposition \textit{de} in N Prep N constructions}
In all three languages under investigation, the preposition \textit{de} is most often used to combine two nominal expressions, as in Fr. \textit{salle de bain} (`bathroom'), Sp. \textit{botas de agua} (`rubber boots'), or Pt. \textit{moinho de vento} (`wind mill'). Therefore, the preposition \textit{de} appears in all pairs of internal prepositional variation analyzed in the following section. The three data sets also show internal variation for prepositions other than \textit{de}, but these pairs are not the subject of the present analysis. As mentioned above, the preposition \textit{de} has been much discussed; it has often been considered an ``empty'' or ``colorless'' preposition that lacks any kind of semantic content and that merely fulfills a linking function. This purely functional approach is not adopted in the present paper for reasons given above.
In the present account, I follow a constructionist approach (see \citealt{Masini:2009}), in which the prepositional constituent in N Prep N constructions is an element of semantic consequence to the whole construction \citep[262]{Masini:2009}. Masini cites the example of Italian N1 \textit{di} N2 intermediate lexical constructions, which clearly differ semantically from N1 \textit{a} N2 intermediate lexical constructions. With reference to Johnston and Busta (1996), she emphasizes that ``the prepositions \textit{da}, \textit{di} and \textit{a} in Italian N+PREP+N expressions, under certain conditions and in combination with certain classes of nouns, are specialized for different kinds of modification'' \citep[262]{Masini:2009}. In the analysis below, Masini's statement will be refined, since in the present data, the intermediate lexical construction N1 \textit{de} N2 and other intermediate lexical constructions (e.g. N1 \textit{para/pour} N2) may overlap semantically under certain conditions. These cases will be exemplified below in a cross-linguistic comparative analysis. In this analysis, the preposition is seen not as a semantically opaque constituent but as a constituent with a specific semantic value determined by the semantic functions of the nominal constituents.
The preposition \textit{de} in French, Spanish, and Portuguese has been described as expressing various relations \citep[187]{Bartning:1993}. In binominal constructions, it expresses, for instance, a relation of possession (Sp. \textit{el ordenador de Luis} `Luis' computer', Fr. \textit{la voiture de Jean} `John’s car'), characterization (Fr. \textit{statut de valeur} `status'), instrument (Fr. \textit{coup de baton} `blow'), material (Fr. \textit{papier de soie} `silk paper'), a part-whole relation (Sp. \textit{puerta de casa} `front door', Pt. \textit{ponta do dedo} `fingertip'), an affiliation (Fr. \textit{fils de roi} `king’s son'), a content (Fr. \textit{tasse de café} `cup of coffee'), a defining characteristic (Sp. \textit{hotel de lujo} `luxury hotel') or a purpose (Pt. \textit{vestido de noiva} `wedding dress'). For more examples in French see \citet[291ff.]{Lang:1991}.
\subsection{Internal variation between \textit{de} and \textit{a/à}}
Internal variation between the prepositional constituents \textit{de} and \textit{à} has been the subject of several articles and books on French prepositions and nominal syntagms (e.g. \citealt{Anscombre:1990, Lang:1991, Bosredon:1991, Cadiot:1997}). However, it is interesting that this discussion has no equivalent in the literature on Spanish and Portuguese prepositions. This is because such internal variation does not occur in Spanish and occurs only to a small extent in Portuguese. The sole example of internal variation of \textit{de} and \textit{a} in the Spanish corpus data is the following:
\ea\relax [N1 \textit{de/a} N2\textsubscript{\scshape type/specification}]\textsubscript{N}> \textit{freno de/a disco}, `disk brake'\z
Here, the construction containing \textit{de} is far more frequent and the lexicalized form can be found in dictionaries. Still, the construction \textit{freno a disco} also occurs regularly in the corpus data of the esTenTen corpus, with a frequency of 0.10 occurrences per million. However, the corpus data shows that the internal variation of \textit{de} and \textit{a} is neither frequent nor productive in Spanish, as only one example of one type can be found in this large-scale internet corpus. In Portuguese, the ptTenTen data shows at least two intermediate lexical constructions with variation of \textit{de} and \textit{a} that display a certain productivity:
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth}
[N1 \textit{de/a} N2\textsubscript{\scshape purpose}]\textsubscript{N}\\
\textit{forno de/a microondas, forno de/a lenhas}\\
`microwave oven', `wood stove'
\end{minipage}\hfill
\begin{minipage}[t]{0.45\textwidth}
[N1 \textit{de/a} N2\textsubscript{\scshape type/specification}]\textsubscript{N}\\
\textit{lampião de/a gás, pilhas de/a combustível }\\
`gas lantern', `fuel cell'
\end{minipage}%
\end{exe}
\hspace*{-1.11124pt}The template [N1 \textit{de/a} N2\textsubscript{\scshape type/specification}]\textsubscript{N}, in particular, is frequently present in the corpus data and is expressed via different types, as in \textit{motor de/a combustão} (`combustion motor') or \textit{bomba de/a vácuo} (`vacuum pump'). It is striking that many of these types are technical terms. It is possible to perceive a semantic difference between the two intermediate constructions: whereas the type N1 \textit{a} N2 more clearly indicates the material part of the N2 constituent, the type N1 \textit{de} N2 focuses semantically on complementing N1, creating a construction that is a subtype of N1. However, initial sample surveys and questionnaires revealed that native speakers of European and Brazilian Portuguese do not perceive a difference in the semantic meaning patterns or, more precisely, in the semantics of the whole construction.
For the French data, a very different pattern appears in the analysis of internal variability of \textit{de} and \textit{à}:
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth}
[N1 \textit{de/à} N2\textsubscript{\scshape purpose}]\textsubscript{N}\\
\textit{fil de/à pêche}\\
`fishing rod'
\end{minipage}\hfill
\begin{minipage}[t]{0.45\textwidth}
[N1 \textit{de/à} N2\textsubscript{\scshape type/specification}]\textsubscript{N}\\
\textit{course à/d’obstacles}\\
`obstacle course'
\end{minipage}%
\end{exe}
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth}
[N1 \textit{de/à} N2\textsubscript{\scshape ingredient}]\textsubscript{N}\\
\textit{crème au/de citron}\\
`lemon creme'
\end{minipage}\hfill\begin{minipage}[t]{0.45\textwidth}
[N1 \textit{de/à} N2\textsubscript{\scshape container}]\textsubscript{N}\\
\textit{conteneur de/à déchets }\\
`waste bin\slash bin with waste'
\end{minipage}%
\end{exe}
\ea\relax
[N1 \textit{de/à} N2\textsubscript{\scshape transport}]\textsubscript{N}\\
\textit{course de/à vélo}\\
`biking trip'
\z
The existing literature on \textit{de-à} alternation in French emphasizes that there is a semantic difference between binominal constructions containing \textit{de} and \textit{à} and that this semantic difference affects not only the prepositional element itself but also the whole naming unit. This becomes very clear in more detailed analysis of the examples from the template [N1 \textit{de/à} N2\textsubscript{\scshape container}]\textsubscript{N}. In all these examples, the intermediate lexical construction N1 \textit{à} N2 designates the container itself, as in \textit{flûte à champagne} (`champagne glass') or \textit{corbeille à fruit} (`fruit bowl'). In contrast, the intermediate lexical construction N1 \textit{de} N2 denotes the content of the container, as in \textit{flûte de champagne} (`a glass of champagne') or \textit{corbeille de fruit} (`a bowl of fruits'). In these cases, according to \citet{Cadiot:1997}, \textit{de} turns the interpretation of the construction toward the N2 and constructs a quantified image of the referent, whereas \textit{à} turns the interpretation toward the N1 and permits a qualified image of the reference \citep[44]{Cadiot:1997}. That is, \textit{de} carries an effect of quantification whereas \textit{à} carries a semantic notion of qualification. For cases of the intermediate lexical construction [N1 \textit{de/à} N2\textsubscript{\scshape ingredient}]\textsubscript{N}, such as \textit{salade d’écrevisses} and \textit{salade aux écrevisses} (`crawfish salad'), \citet{Lang:1991} states that the preposition \textit{à} connects N1 and N2, whereas the preposition \textit{de} derives N1 from N2. That is to say that \textit{à} describes an ingredient, whereas \textit{de} describes a substance \citep[283]{Lang:1991}. In the same way, in the examples of [N1 \textit{de/à} N2\textsubscript{\scshape type/specification}]\textsubscript{N} and [N1 \textit{de/à} N2\textsubscript{\scshape means of transport}]\textsubscript{N}, it can be seen that \textit{à} points to the material object \textit{vélo} or \textit{obstacles}, whereas \textit{de} more likely complements the N1, and hence the whole construction describes a subtype of N1. According to \citet[43]{Cadiot:1997}, the semantic differences that occur through the variation of the prepositions \textit{de} and \textit{à} can be accounted for in terms of the more abstract categorization that is the opposition of intension and extension. On this view, \textit{de} constructs an extensional reference directly, whereas \textit{à} creates an extensional reference indirectly by passing over an intentional reference \citep[62]{Cadiot:1997}.
From a constructionist perspective, it can be stated that only in the template [N1 \textit{de/à} N2\textsubscript{\scshape container}]\textsubscript{N} does the semantic value of the whole construction\linebreak change, as in \textit{conteneur de déchets} (`bin containing waste') and \textit{conteneur à déchets} (`waste bin'). In this case only, we have two different naming units when \textit{de} and \textit{à} alternate. Therefore, only here is it appropriate to refer to two different constructions, [N1 \textit{de} N2\textsubscript{\scshape container}]\textsubscript{N} and [N1 \textit{à} N2\textsubscript{\scshape container}]\textsubscript{N}, which lead to two different naming units at the lexical level. In all the other cases mentioned above, the variability of \textit{de} and \textit{à} does not lead to different semantic interpretations of the lexical outcome, but only to a difference in the semantic weight of certain meaning patterns in the interpretation. Therefore, in all other cases, the inheritance hierarchy from the previous section of this paper can be applied in order to capture the internal constituent variation.
To conclude this analysis, it can be stated that all constructions that allow internal constituent variation of the prepositional element are semantically transparent. The analysis shows that alternation of the internal prepositional constituent does not occur in the semantically more opaque constructions in the languages under investigation, since in these cases the semantic functions of the nominal constituents cannot always be clearly determined. In Spanish and Portuguese, the internal variation is only possible for very specific semantic functions of the nominal constituents. In French, on the other hand, the internal variation of \textit{de} and \textit{à} is observed more frequently, and appears to be governed by the semantic functions of the nominal constituents.
\subsection{Internal variation between \textit{de} and \textit{em/en}}
The variation between \textit{de} and \textit{en/em} in N Prep N constructions has received little attention in the literature. On a general level, \citet[411]{Lang:1991} states that in French, \textit{en} between two nouns indicates the location of N1, as in \textit{arc-en-ciel} (`rainbow') and \textit{une ville en Italie} (`a city in Italy'), the characterization of N1, as in \textit{ange en stuc} (`stucco angel'), a manner of preparation of N1, as in \textit{une salade en vinaigrette} (`a salad with dressing'), the material of N1, as in \textit{robe en soie} (`silk dress'), the form in which N1 appears, as in \textit{fleurs en bouquet} (`bouquet of flowers'), the condition in which N1 stands, as in \textit{arbre en fleur} (`blooming tree'), or a field in which N1 operates, as in \textit{expert en assurances} (`insurance expert'). According to \citet[55]{Laumann:1998}, French \textit{de} and \textit{en} are not always interchangeable when N2 refers to the material of N1. On the basis of an analysis of French grammar and dictionary entries, Laumann states that \textit{en} appears more regularly with a predicative supplement than \textit{de}, gives more concrete information about the material, and is less strongly linked to the N1. However, the most important difference seems to be that \textit{en} cannot appear in more opaque constructions with a (partially) idiomatic reading. \citet[55]{Laumann:1998} cites the examples of \textit{homme de fer} (`iron man') and \textit{yeux d’acier} (`steely eyes'), where it is not possible to substitute \textit{en} for \textit{de}. In the French data, most of these relations can also be seen in variations of \textit{de} and \textit{en}, as in the following examples:
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth}
[N1 \textit{de/en} N2\textsubscript{\scshape material}]\textsubscript{N}\\
\textit{chemise de/en coton}\\
`cotton shirt'
\end{minipage}\hfill
\begin{minipage}[t]{0.45\textwidth}
[N1 \textit{de/en} N2\textsubscript{\scshape field}]\textsubscript{N}\\
\textit{étudiant de/en Sciences Po}\\
`student of politics'
\end{minipage}
\end{exe}
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth}
[N1 \textit{de/en} N2\textsubscript{\scshape location}]\textsubscript{N}\\
\textit{course de/en montagne}\\
`mountain race'
\end{minipage}\hfill
\begin{minipage}[t]{0.45\textwidth}
[N1 \textit{de/en} N2\textsubscript{\scshape condition}]\textsubscript{N}\\
\textit{maison de/en vente}\\
`house for sale'
\end{minipage}
\end{exe}
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth}
[N1 \textit{de/en} N2\textsubscript{\scshape group}]\textsubscript{N}\\
\textit{dîner de/en famille}\\
`family dinner'
\end{minipage}\hfill
\begin{minipage}[t]{0.45\textwidth}
[N1\textsubscript{\scshape action} \textit{de/en} N2\textsubscript{\scshape material}]\textsubscript{N}\\
\textit{dépenses d'/en énergie }\\
`energy expenditures'
\end{minipage}
\end{exe}
In each of these cases, the variation between \textit{de} and \textit{en} does not trigger any strong meaning difference between the two construction types; that is, it is possible to talk about internal variability rather than about two different types of constructions referring to different naming units. Nonetheless, certain differences are visible, as \citet{Laumann:1998} pointed out. For instance, \textit{en} is generally less closely linked to N1 and more often introduces a complement. Constructions with \textit{en} also appear to have a lesser degree of fixedness and put the focus on the N2. The present analysis confirms Laumann’s observation that the alternation of \textit{de} and \textit{en} is only possible in semantically transparent constructions that do not include any idiomatic meaning.
A very similar picture emerges from the analysis of the Portuguese data, as shown in the following examples:
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth} %
[N1 \textit{de/em} N2\textsubscript{\scshape material}]\textsubscript{N}\\
\textit{bracelete de/em aço}\\
`steel bracelet'
\end{minipage}\hfill%
\begin{minipage}[t]{0.45\textwidth}
[N1 \textit{de/em} N2\textsubscript{\scshape field}]\textsubscript{N}\\
\textit{profissional de/em artes}\\
`art professional'
\end{minipage}
\end{exe}
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth} %
[N1 \textit{de/em} N2\textsubscript{\scshape location}]\textsubscript{N}\\
\textit{surf de/em ondas }\\
`surf (on waves)'
\end{minipage}\hfill %
\begin{minipage}[t]{0.45\textwidth}
[N1 \textit{de/em} N2\textsubscript{\scshape condition}]\textsubscript{N}\\
\textit{crianças de/em risco}\\
`children at risk'
\end{minipage}\end{exe}
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth} %
[N1 \textit{de/em} N2\textsubscript{\scshape group}]\textsubscript{N}\\
\textit{almoço de/em família}\\
`family lunch'
\end{minipage}\hfill%
\begin{minipage}[t]{0.45\textwidth}
[N1 \textit{de/em} N2\textsubscript{\scshape medium}]\textsubscript{N}\\
\textit{comentário de/em áudio}\\
`audio commentary'
\end{minipage}\end{exe}
The Portuguese data show almost the same intermediate lexical constructions that function with a variation between \textit{de} and \textit{em}. The only difference is in the intermediate construction [N1 \textit{de/em} N2\textsubscript{\scshape medium}]\textsubscript{N}, where N2 designates the medium via which N1 is transferred. In contrast, the French data offer the intermediate construction [N1\textsubscript{\scshape action} \textit{de/en} N2\textsubscript{\scshape material}]\textsubscript{N}, which indicates a concrete action referring to a specific (raw) material. Nevertheless, from a quantitative perspective, the internal variation between \textit{de} and \textit{em/en} is by far more frequent in the Portuguese data.
From a quantitative perspective, the variation between \textit{de} and \textit{en} is quite rare in the Spanish data, but the qualitative analysis shows a more diverse picture:
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth} %
[N1 \textit{de/en} N2\textsubscript{\scshape material}]\textsubscript{N}\\
\textit{construcción de/en madera }\\
`wood construction'
\end{minipage}\hfill%
\begin{minipage}[t]{0.45\textwidth}
[N1 \textit{de/en} N2\textsubscript{\scshape field}]\textsubscript{N}\\
\textit{grado de/en ingeniería}\\
`engineering degree'
\end{minipage}
\end{exe}
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth} %
[N1 \textit{de/en} N2\textsubscript{\scshape location}]\textsubscript{N}\\
\textit{ciclismo de/en pista}\\
`track cycling'
\end{minipage}\hfill%
\begin{minipage}[t]{0.45\textwidth}
[N1 \textit{de/en} N2\textsubscript{\scshape condition}]\textsubscript{N}\\
\textit{obras de/en construcción}\\
`construction site'
\end{minipage}\end{exe}
\ea\relax
[N1 \textit{de/en} N2\textsubscript{\scshape medium}]\textsubscript{N}\\
\textit{entrevista de/en radio}\\
`radio interview'
\z
The examples show that Spanish allows the same internal variation as Portuguese, except that the template [N1 \textit{de/en} N2\textsubscript{\scshape group}]\textsubscript{N} was not present in the data. As in Portuguese and French, there are no strong meaning differences between the two templates, and therefore they can be counted as variants rather than as two distinct forms. In Spanish, as in French and Portuguese, the same subtle differences in the degree of fixedness and focus of the constituents can be observed.
Overall, it is possible to state that the variation between \textit{de} and \textit{en/em} is possible in all three languages under investigation. The differences appear to exist at the quantitative level rather than in the specific semantic meaning patterns. In all three languages, several templates illustrate and explain the possible alternation between \textit{de} and \textit{en/em}. In most cases, these templates overlap in the three languages. Therefore, it is possible to apply the inheritance hierarchy mentioned in \sectref{sec:henneke:4} to all of the examples.
\subsection{Internal variation between \textit{de} and \textit{pour/para}}
For French binominal compounds of the type N Prep N, \citet{Laumann:1998} states that the preposition \textit{pour} occurs quite rarely. This may be explained by the fact that \textit{pour} is less abstract than other prepositions, such as \textit{de} or \textit{à}: that is, \textit{pour} indicates a very concrete meaning of purpose or determination, whereas \textit{de} shows a less definite meaning pattern. Therefore, \textit{de}, as a semantically more opaque constituent, offers a wider scope for application than \textit{pour}, but in some cases both prepositions are interchangeable, as in the following examples:
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth} %
[N1 \textit{de/pour} N2\textsubscript{\scshape user}]\textsubscript{N}\\
\textit{collier de/pour chien}\\
`dog collar'
\end{minipage}\hfill%
\begin{minipage}[t]{0.45\textwidth}
[N1 \textit{de/pour} N2\textsubscript{\scshape purpose}]\textsubscript{N}\\
\textit{décoration de/pour mariage/table}\\
`marriage/table decoration'
\end{minipage}
\end{exe}
\ea\relax
[N1 \textit{de/pour} N2\textsubscript{\scshape user(object)}]\textsubscript{N}\\
\textit{musique de/pour piano}\\
`piano music\slash music for piano'
\z
The French data show that the variation between \textit{de} and \textit{pour} is only possible in cases where N2 designates a user (or a beneficiary), or where N2 specifies the purpose of N1. In all three templates given above, N2 serves to form a subtype of N1. However, the templates containing the preposition \textit{de} point more clearly to the N1 and focus on the interpretation of the whole template as a subtype of N1. In templates containing the preposition \textit{pour}, the preposition is clearly attached to the N2, and the semantic emphasis is on N2. Furthermore, the preposition \textit{pour} clearly carries the interpretation `for', whereas the constructions containing \textit{de} leave room for ambiguous interpretation. While \textit{musique pour piano} clearly designates music (a piece of music or composition) for piano, \textit{musique de piano} may also refer to music played on a piano (and not necessarily composed for playing on a piano). In this sense, \textit{pour} helps to resolve ambiguity and allows only the interpretation `designed for'. For the Spanish data, the pattern is quite similar to the French data, as in the following examples:
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth}
[N1 \textit{de/para} N2\textsubscript{\scshape user}]\textsubscript{N}\\
\textit{club/ropa de/para niños }\\
`children’s club/clothes'
\end{minipage}\hfill
\begin{minipage}[t]{0.45\textwidth}
[N1 \textit{de/para} N2\textsubscript{\scshape purpose}]\textsubscript{N}\\
\textit{alimentos de/para consumo}\\
`consumer goods'
\end{minipage}\end{exe}
\ea\relax
[N1 \textit{de/para} N2\textsubscript{\scshape user(object)}]\textsubscript{N}\\
\textit{juego de/para pc}\\
`PC game'
\z
These cases show that the variation between \textit{de} and \textit{para} is possible only in contexts in which N2 semantically represents a user (a person or an object) of N1 or a specific purpose for N1. These are the same templates that were found for the French data above. This result contradicts the findings from \citet{Lopez:1970}, who indicates that variation of \textit{de} and \textit{para} is also possible in contexts in which N1 designates a container, as in \textit{cesto de/para basura} (`waste bin/bin for waste'). In her corpus data of Argentinian Spanish from Buenos Aires, \citet[164]{Pacagnini:2003} also finds constructions of the type \textit{loción de/para limpieza} (`cleaning lotion') or \textit{crema de/para hidración} (`hydration crème'), in which the preposition expresses the utility of an object. Furthermore, she describes examples of the type \textit{lápiz de/para labios} (`lipstick') and \textit{esmalte de/para uñas} (`nail polish'), in which N1 represents an instrument. From this, Pacagnini deduces a schema in which, on a continuum between morphology and syntax, \textit{de} lies closer to the morphological pole, whereas \textit{para} is closer to the syntactic pole. In this paper, I can confirm Pacagnini’s hypothesis that N Prep N constructions in Spanish show a certain internal variation with respect to the prepositions \textit{de} and \textit{para}, which might therefore be considered to lie at different points of a continuum between the morphological and the syntactic pole. In this case, it is evident that constructions with \textit{para} are located closer to the syntactic pole than constructions with \textit{de}. Pacagnini observes that 75 percent of the participants in her data used a determiner or a qualifying adjective with the preposition \textit{para} in cases where N1 denotes an instrument, as in \textit{loción de/para la limpieza} (`lotion for cleaning') or \textit{esmalte para uñas sensibles} (`polish for sensitive nails') \citep[166]{Pacagnini:2003}. In the esTenTen corpus data, this type of variation between \textit{de} and \textit{para} does not occur at all. However, a closer look at the Portuguese data offers interesting findings:
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth} %
[N1 \textit{de/para} N2\textsubscript{\scshape user}]\textsubscript{N}\\
\textit{brinquedos de/para crianças}\\
`children’s toys'
\end{minipage}\hfill%
\begin{minipage}[t]{0.45\textwidth}
[N1 \textit{de/para} N2\textsubscript{\scshape purpose}]\textsubscript{N}\\
\textit{acessórios de/para decoração}\\
`accessories for decoration'
\end{minipage}
\end{exe}
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth} %
[N1 \textit{de/para} N2\textsubscript{\scshape user(object)}]\textsubscript{N}\\
\textit{concerto de/para piano}\\
`piano concert'
\end{minipage}\hfill%
\begin{minipage}[t]{0.45\textwidth}
[N1 \textit{de/para} N2\textsubscript{\scshape reason}]\textsubscript{N}\\
\textit{cirurgia de/para correção}\\
`reconstructive surgery'
\end{minipage}
\end{exe}
\begin{exe}\ex\begin{minipage}[t]{0.4\textwidth}
[N1 \textit{de/para} N2\textsubscript{\scshape period}]\textsubscript{N}\\
\textit{aluguel de/para férias}\\
`vacation rental'
\end{minipage}\hfill\begin{minipage}[t]{0.45\textwidth}
[N1\textsubscript{\scshape instrument} \textit{de/para} N2]\textsubscript{N}\\
\textit{produto de/para limpeza}\\
`cleaning product'
\end{minipage}\end{exe}
\ea\relax
[N1 \textit{de/para} N2\textsubscript{\scshape determination}]\textsubscript{N}\\
\textit{animais de/para abate}\\
`animals for slaughter'
\z
The Portuguese data illustrate that variation of \textit{de} and \textit{para} is possible in a larger number of nominal semantic relations in Portuguese than in Spanish or French. On the one hand, Portuguese offers the same templates as French and Spanish: N2 as a user (object or person) and N2 as a specific purpose of N1. On the other hand, it also provides additional templates, including N2 as a specific time or period of time, and N2 designating a specific determination for N1 (which in most cases is a living being). One additional template, N1 being an instrument for N2, is of particular interest. Here, we find the Portuguese example \textit{produto de/para limpeza} (`cleaning product'), which corresponds to the type Pacagnini cited for Argentinian Spanish. This template appears to be productive in Portuguese, as shown in the additional examples \textit{creme de/para mãos} (`hand cream') and \textit{máscara de\slash para cílios} (`mascara for eyelashes'). Although our Spanish data contradict Pacagnini's findings for Spanish, the same template can be found in Portuguese. Further investigation of this phenomenon is necessary, particularly in light of the possibility that the Spanish used in Buenos Aires, where Pacagnini collected her data, may be influenced by Portuguese from Brazil. Initial informal judgments elicited from native speakers of Spanish in Spain suggest that the template [N1\textsubscript{\scshape instrument} \textit{de/para} N2]\textsubscript{N} is not productive in Spain and that the template [N1\textsubscript{\scshape instrument} \textit{para} N2]\textsubscript{N} is considered incorrect.
The analysis of the variation between \textit{de} and \textit{para}, and \textit{de} and \textit{pour}, in Spanish, French, and Portuguese reveals that Portuguese has the largest number of templates at an intermediate lexical level for the variation of \textit{de} and \textit{pour/para}. The Spanish and French data overlap in their templates for the variation of \textit{de} and \textit{pour/para}, while the Spanish data from Buenos Aires \citep{Pacagnini:2003} offer a slightly different picture. The analysis here supports the findings from the previous subsections on the semantic transparency of the constructions under investigation. The present analysis does not feature any (partially) opaque or (partially) idiomatic constructions.
I mentioned at the beginning of this subsection that the prepositions \textit{de} and \textit{pour/para} vary in their semantic transparency; nevertheless, they undergo internal constituent variation in all three languages under investigation. While traditional accounts generally mention the different syntactic status of constructions containing \textit{de} and \textit{pour/para}, the constructionist approach introduced in \sectref{sec:henneke:4} makes possible an unproblematic mapping of this internal constituent variation.
\section{Conclusion}
The present study of internal constituent variation in N Prep N constructions allows numerous conclusions to be drawn as to their nature in Romance languages as well as on the role and variability of the prepositional element. The discussion and analysis here have shown that it is not always possible or expedient to differentiate clearly between lexical and syntactic N Prep N constructions. In many cases, not even the numerous delimitation tests may lead to a clear distinction. Therefore, the present account has abandoned this strict, dichotomous distinction in favor of a more holistic approach. When considering internal constituent variability, the determining factor is not the lexical or syntactic status of the elements; instead, it is the nominal semantic relation expressed via the preposition. Here, it is not crucial to differentiate between the lexical status, e.g. \textit{libro de niños} (`children’s book') and the syntactic status, e.g. \textit{libro para niños} (`children’s book'). In order to conduct a fruitful qualitative comparative analysis of N Prep N constructions in Romance languages, it is necessary to adopt a theoretical account that does not focus on the lexicon-syntax distinction. In the present paper, construction morphology, a constructionist approach that expands the notion of construction to the word level, offers the appropriate tools for analysis. Following \citet[261]{Masini:2009}, N Prep N constructions are analyzed as abstract templates, which are, to some degree, productive and associated with a naming function. For the present analysis, a constructionist inheritance hierarchy has been adapted to internal constituent variation in one construction (see \sectref{sec:henneke:4}). The latter analysis focused on the intermediate lexical level, that is, the alternation between [N1 Prep1 N2] and [N1 Prep2 N2], at which Prep1 and Prep2 designate alternative prepositions. This constructionist approach revealed the possible templates for prepositional variation in three different languages: Spanish, French, and Portuguese.
The analysis of three alternating pairs, specifically \textit{de} and \textit{à/a}, \textit{de} and \textit{en/em}, and \textit{de} and \textit{pour/para}, demonstrates important differences and common features between the languages. The quantitative aspect, which was not the primary focus of this paper, demonstrates the strong frequency and productivity of the different templates in Portuguese. This holds to a lesser extent in French and is even less in Spanish. This result is in line with the results from \citet{Hennecke:2017}. The qualitative analysis demonstrates that, in the underlying datasets, Portuguese offers the greatest number of different templates for internal prepositional variation, followed by French, and then Spanish. In this connection, it should be mentioned that Portuguese also offers the largest number of constructions (or types) for each template. This result confirms the impression from the quantitative study that Portuguese N Prep N templates are frequent in speech and are very productive. From a qualitative perspective, it is striking that most templates of internal prepositional variation exist across languages. In the case of the pair \textit{de} and \textit{en/em}, the templates that allow internal prepositional variation vary only slightly between the languages. For variation between \textit{de} and \textit{à/a}, the French data show the greatest tendency to internal variation. This is mainly because the preposition \textit{à} is relatively productive and frequent in French, which is not the case for Spanish and Portuguese. In cases where French relies on the preposition \textit{à}, Spanish and Portuguese mostly employ the preposition \textit{de}, as in Fr. \textit{verre à vin}, Sp. \textit{copa de vino} and Pt. \textit{copo de vinho} (`wine glass', in each case). Spanish does not offer any internal variation of \textit{de} and \textit{a}, whereas Portuguese shows certain tendencies in this direction. For the variation between \textit{de} and \textit{pour/para}, the French and Spanish data do not show any qualitative differences; that is, they overlap exactly in terms of which templates allow internal prepositional variation. Studies based on data from Argentinian Spanish indicated the existence of further templates; these were not found in the present data in Spanish, but many of them were present in the Portuguese data.
A very important finding from the qualitative analysis is that internal prepositional variation in the three languages is possible only for semantically transparent constructions. This can be explained by the fact that in opaque N Prep N constructions, the semantic relation between the nominal constituents often cannot be determined explicitly.
In conclusion, a constructionist approach to N Prep N constructions may solve certain problems in defining and delimitating these constructions in Romance languages. Furthermore, a constructionist approach allows an accurate investigation of the differences and common features of templates for internal prepositional variation in the three languages under investigation here. Future studies should investigate these templates in more detail, extending the approach to other types of internal variation.
{\sloppy\printbibliography[heading=subbibliography,notkeyword=this]}
\end{document}
\documentclass[11pt,a4paper]{report}
\usepackage{asymptote}
\usepackage{wrapfig}
\addtolength{\oddsidemargin}{-.875in}
\addtolength{\evensidemargin}{-.875in}
\addtolength{\textwidth}{1.75in}
\addtolength{\topmargin}{-.875in}
\addtolength{\textheight}{1.75in}
\begin{document}
\pagestyle{empty}
\setcounter{secnumdepth}{0}
\begin{center}
\Large{CHAPTER 5 SUMMARY. \textbf{Energy}}
\large{Justin Yang}
\sc{November 22, 2012}
\end{center}
\begin{wrapfigure}{r}{0.2\textwidth}
\vspace{-20pt}
\begin{center}
\includegraphics[width=0.18\textwidth]{images/05_!1work.JPG}
\end{center}
\end{wrapfigure}
\section{(1)\underline{\hspace{3cm}}}
The (2)\underline{\hspace{3cm}} done by a constant force $\vec{F}$ that moves an object a displacement $\Delta{x}$ is defined as $$\left(3\right)\underline{\hspace{5cm}}.$$ So $W = \vec{F} \cdot \Delta{\vec{x}}$.
\\Work is a (4)\underline{\hspace{2cm}}. The SI unit of work is the (5)\underline{\hspace{2cm}} (J), $1 \mathrm{\ J} = 1 \mathrm{\ N} \cdot \mathrm{m} = 1 \mathrm{\ kg} \cdot \mathrm{m}^2 / \mathrm{s}^2$.
\smallskip
\begin{wrapfigure}{r}{0.2\textwidth}
\vspace{-20pt}
\begin{center}
\includegraphics[width=0.18\textwidth]{images/05_!3varying.JPG}
\end{center}
\end{wrapfigure}
\noindent
A (6)\underline{\hspace{2cm}} is any object where all of its parts undergo equal $\Delta{x}$ over any $\Delta{t}$. The total work done on a particle is the same as the work done by the net force on the particle, so the work done is the area under the $F_x$-versus-$x$ curve: $$\left(7\right)\underline{\hspace{5cm}}.$$
\section{(8)\underline{\hspace{4cm}}}
Under a constant \textit{net} force $F_\mathrm{net}$ acting along a straight line on a particle of mass $m$, which is displaced by $\Delta{x}$ along the straight line, the work done on the particle is $$\left(9\right)\underline{\hspace{5cm}}.$$
Applying Newton's second law (10)\underline{\hspace{2cm}} and the kinematic relation (11)\underline{\hspace{2cm}}, we have $$\left(12\right)\underline{\hspace{5cm}}.$$
The quantity $\frac{1}{2} mv^2$ is defined as the (13)\underline{\hspace{3cm}} of the particle $$\left(14\right)\underline{\hspace{5cm}}.$$
\\Kinetic energy is a (15)\underline{\hspace{2cm}}. The SI unit of kinetic energy is the same as work: $\mathrm{kg} \cdot \mathrm{m}^2 / \mathrm{s}^2$ or J.
\\Kinetic energy depends on the mass and speed of the particle but not the direction of motion.
\smallskip
\noindent
$W = \Delta{K}$. This is true even when the force is varying. This is known as the (16)\underline{\hspace{3cm}}.
\section{(17)\underline{\hspace{4cm}}}
The (18)\underline{\hspace{3cm}} of a system is the energy associated with the configuration of the system. Often the work done by external forces on a system may result in an increase in the potential energy of the system.
\begin{wrapfigure}{r}{0.1\textwidth}
\vspace{-23pt}
\begin{center}
\includegraphics[width=0.09\textwidth]{images/05_!4gpe.JPG}
\end{center}
\vspace{-20pt}
\end{wrapfigure}
\smallskip
\noindent
(19)\underline{\hspace{5cm}} The gravitational force between an object of mass $m$ and the Earth is $\vec{F} = -mg\,\hat{j}$, where $h$, $h_0 \ll r_E$, so the work done by gravity is $$\left(20\right)\underline{\hspace{7cm}}.$$
When the object is near the surface of the Earth, the gravitational potential energy $$\left(21\right)\underline{\hspace{3cm}}.$$
Thus, the work done by gravity is at the expense of the gravitational potential energy: $$\left(22\right)\underline{\hspace{3cm}}.$$
(23)\underline{\hspace{5cm}} The work done by the spring force, $F = -kx$, is given as $$\left(24\right)\underline{\hspace{7cm}}.$$
When the spring potential energy is zero at $x = 0$, the spring potential energy can be defined as $$\left(25\right)\underline{\hspace{3cm}}.$$
The work done by the spring force is then at the expense of the spring potential energy $$\left(26\right)\underline{\hspace{3cm}}.$$
\subsection{(27)\underline{\hspace{9cm}}}
\begin{wrapfigure}{r}{0.2\textwidth}
\vspace{-20pt}
\begin{center}
\includegraphics[width=0.18\textwidth]{images/05_!5conservative.JPG}
\end{center}
\vspace{-20pt}
\end{wrapfigure}
A force is conservative if on a particle $W_\mathrm{net} = 0$ around \textit{any} closed path.
\\We can use this property to define a (28)\underline{\hspace{4cm}} $U$ such that the force is the negative of the slope of the potential-energy $U$-versus-$x$ curve: $$\left(29\right)\underline{\hspace{5cm}}.$$
(30)\underline{\hspace{4cm}} are forces that are not conservative.
\section{(31)\underline{\hspace{7cm}}}
A (32)\underline{\hspace{3cm}} is a collection of particles. All forces are either (33)\underline{\hspace{3cm}} or (34)\underline{\hspace{3cm}}. The change in $E_\mathrm{net}$ of a system is done through work and heat. Since $K = \sum{K_i}$, we obtain by the work-energy theorem $$\left(34\right)\underline{\hspace{5cm}}.$$
The work done by all internal conservative forces can be recast as the change in the total potential energy of the system: $$\left(35\right)\underline{\hspace{3cm}}.$$
The sum $E_\mathrm{mech} = K + U$ is known as the total mechanical energy of the system, $$\left(36\right)\underline{\hspace{7cm}}.$$
When $W_\mathrm{ext} = 0$ and $W_\mathrm{nc} = 0$, we get the (37)\underline{\hspace{5cm}}: $$\left(38\right)\underline{\hspace{5cm}}.$$
\section{(39)\underline{\hspace{7cm}}}
For an isolated system, we have $W_\mathrm{ext} = 0$ and we may account for $W_\mathrm{nc}$ by changes in forms of energy other than mechanical energy. (40)\underline{\hspace{5cm}}: $$\left(41\right)\underline{\hspace{7cm}}.$$
Work and heat are the ways to transfer energy in or out of a system. When $\Delta{Q} = 0$, we have: $$\left(42\right)\underline{\hspace{9cm}}.$$
\section{(43)\underline{\hspace{5cm}}}
Power is the rate at which energy is transferred. The average (44)\underline{\hspace{3cm}} supplied by a force $\vec{F}$ is the rate at which the force does work: $$\left(45\right)\underline{\hspace{5cm}},$$ $$\left(46\right)\underline{\hspace{5cm}}.$$
The SI unit of power is J/s, also called the (47)\underline{\hspace{3cm}}. $1 \mathrm{\ W} = 1 \mathrm{\ J} / \mathrm{s} = 1 \mathrm{\ kg} \cdot \mathrm{m}^2 / \mathrm{s}^3$.
\end{document}
\documentclass[12pt, a4paper]{article}
\usepackage[paper=a4paper,top=30mm,bottom=35mm,left=35mm,right=20mm]{geometry}
\usepackage[fontsize=13pt]{scrextend}
\usepackage[utf8]{inputenc}
\usepackage[english,vietnamese]{babel}
\renewcommand\baselinestretch{1.5}
\parindent 0pt
\parskip 5pt
\pagestyle{plain}
\title{Research Proposal}
\author{Danh Pham}
\date{2018-03-26}
\usepackage[pdftex]{graphicx}
\newcommand{\timeline}{\hspace{-2.3pt}$\bullet$ \hspace{5pt}}
\newcommand{\namelistlabel}[1]{\mbox{#1}\hfil}
\newenvironment{namelist}[1]{%1
\begin{list}{}
{
\let\makelabel\namelistlabel
\settowidth{\labelwidth}{#1}
\setlength{\leftmargin}{1.1\labelwidth}
}
}{%1
\end{list}}
\begin{document}
\begin{center}
\textbf{VIETNAM NATIONAL UNIVERSITY HO CHI MINH CITY} \\[2mm]
\textbf{UNIVERSITY OF INFORMATION TECHNOLOGY} \\ [2mm]
\textbf{FACULTY OF SOFTWARE ENGINEERING} \\ [1cm]
\includegraphics[width=30mm]{UIT_Logo} \\
\vspace{0.5cm}
\Large \textbf{THESIS PROPOSAL}
\vspace{0.5cm}
\end{center}
\begin{namelist}{Information}
\item[{\bf Title:}]
\textbf{ \large Deep Learning in Windows Malware Detection}
\item[{\bf Advisor:}]
Assoc Prof. Dr. Vũ Thanh Nguyên
\item[{\bf Student:}]
Phạm Hữu Danh - 14520134
\item[{\bf Degree:}]
Bachelor of Software Engineering
\end{namelist}
\section*{Introduction}
Malware is short for malicious software and is typically used as a catch-all term to refer to any software designed to cause damage to a single computer, server, or computer network \cite{moir2003defining}. In general, malware is classified into the following categories \cite{egele2012survey}: worms, viruses, backdoors, Trojan horses, bots, rootkits, and spyware. A single incident of malware can cause millions of dollars in damage; e.g., the zero-day ransomware WannaCry caused a world-wide catastrophe, from knocking U.K. National Health Service hospitals offline to shutting down a Honda Motor Company plant in Japan \cite{DBLP:journals/corr/abs-1709-08753}. Furthermore, malware is getting more sophisticated and more varied each day \cite{shahi2009technology}. Therefore, the detection of malicious software is an important problem in cyber security, especially as more of society becomes dependent on computing systems.
The current generation of malware detection products typically uses rule-based or signature-based approaches, which require analysts to handcraft rules that reason over relevant data to make detections. This approach has high accuracy; however, such rules are generally very specific and usually unable to recognize new malware even if it uses the same functionality. That is why the need for machine learning-based detection arises.
Machine learning algorithms learn the underlying patterns from a given training set, which includes both malicious and benign samples. These underlying patterns discriminate malware from benign code. Schultz et al. \cite{Schultz:2001:DMM:882495.884439} first applied machine learning methods to malware detection. They showed that compared with signature-based methods, machine learning methods yield more accurate classification results.
However, classical machine learning-based approaches struggle to accomplish accurate detection, since they find it difficult to analyze complex and long sequences of malicious behavior, especially when malicious and benign behaviors are interleaved. In contrast, deep learning models are capable of analyzing longer sequences of system calls and making better decisions through higher-level information extraction and semantic knowledge learning \cite{7946997}.
\section*{Motivation}
Many deep learning-based malware detection methods have been presented, e.g., Tian et al. \cite{5665796} recorded the API sequences of binaries using the tool HookMe and proposed a scalable approach for distinguishing malicious files using the features extracted from logs of various API calls; Saxe and Berlin \cite{DBLP:journals/corr/SaxeB15} used a deep feed-forward neural network consisting of four layers with binary features of Windows portable executable (PE) files to detect malware; Yuxin et al. \cite{Yuxin2017MalwareDB} represented malware as opcode sequences and detected it using a deep belief network which can use unlabeled data to pretrain a multi-layer generative model.
Understanding how machine learning works in general and keeping track of state-of-the-art approaches emerging in the cybersecurity field can help organizations cope well with the increased sophistication and complexity of cyber attacks, especially those performed by advanced persistent threats (APT), which are multi-module, stealthy, and target-focused \cite{7946997}.
\section*{Objective}
The main objective of this thesis is to build a malware detection system for the Windows platform based on the deep learning methods recommended in the literature, together with guidelines for its implementation. Specifically, I will build the system with a deep learning model which uses both features extracted from the Cuckoo Sandbox \cite{guarnieri2013cuckoo} and raw byte sequences \cite{2017arXiv171009435R}.
The dataset used for this thesis was published in Microsoft Malware Classification Challenge 2015 \cite{2018arXiv180210135R}, which is almost half a terabyte when uncompressed and consists of a set of known malware files representing a mix of 9 different families.
In addition, the study performed can be useful as a basis for further research in the field of malware analysis with machine learning methods.
\section*{Research timelines}
\scalebox{1.2}{
\begin{tabular}{r |@{\timeline} l}
05 Mar - 15 Apr 2018 & Research about machine learning / deep \\
& learning methods in malware detection. \\
16 Apr - 15 May 2018 & Conduct experiments on various datasets \\
& and configurations. \\
16 May - 15 Jun 2018 & Build applications. \\
16 Jun - 30 Jun 2018 & Completing experiments and writing thesis.
\end{tabular}
}
\vspace{1.5cm}
{\centering{
\begin{tabular}{p{0.55\textwidth}cp{0.45\textwidth}}
\centering{\bfseries{Approved by the advisor}} & \bfseries{Ho Chi Minh City, March 26, 2018} \\
\centering{\textit{Signature of advisor}} & \textit{Signature of student} \\
\vspace{3cm} \\
\centering{\bfseries{
Assoc Prof. Dr. Vũ Thanh Nguyên
}} & \centering{\bfseries{
Phạm Hữu Danh
}}
\end{tabular}
}}
\vfill
\bibliographystyle{acm}
\renewcommand{\refname}{References}
\bibliography{references}
\end{document}
\section{Algorithm description}
\label{sec:alg_desc}
Here we give a high-level description of how the program works, with some pseudocode where useful. Such pseudocode is meant to provide an abstract and concise representation of the algorithm, leaving out many implementation details and optimizations.
\paragraph{Individual}
An individual is described by the following attributes (an illustrative C representation is sketched right after this list):
\begin{itemize}
\item $id$ : unique identifier in the world (assigned incrementally)
\item $pos = (x,y)$ : absolute position in the world
\item $\Delta pos = (\Delta x,\Delta y)$ : displacement vector, representing the movement at each step
\item $status =$ \NotExposed | \Exposed | \Infected | \Immune : current status of the individual. Both \NotExposed and \Exposed individuals are considered susceptible; the distinction is introduced for computational needs, for ease of visualization and debugging
\item $t_{status}$ : time passed since the individual entered its current state. It is used to determine when to change state.
\end{itemize}
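For illustration, an individual could be represented in C along the following lines, together with a matching MPI datatype. The field types, the enumeration values and the resizing choice are assumptions made for this sketch and do not necessarily correspond to the actual \texttt{mpi\_individual} definition used in the code.
\begin{verbatim}
#include <mpi.h>
#include <stddef.h>

typedef enum { NOT_EXPOSED, EXPOSED, INFECTED, IMMUNE } status_t;

typedef struct {
    unsigned long id;        /* unique identifier in the world          */
    double x, y;             /* absolute position                       */
    double dx, dy;           /* displacement applied at each step       */
    double t_status;         /* time spent in the current status        */
    int    status;           /* one of the status_t values              */
} individual_t;

/* Builds the datatype used to ship individuals between countries. */
static MPI_Datatype make_mpi_individual(void)
{
    MPI_Datatype tmp, t;
    int          len[3]  = {1, 5, 1};
    MPI_Aint     disp[3] = {offsetof(individual_t, id),
                            offsetof(individual_t, x),
                            offsetof(individual_t, status)};
    MPI_Datatype typ[3]  = {MPI_UNSIGNED_LONG, MPI_DOUBLE, MPI_INT};

    MPI_Type_create_struct(3, len, disp, typ, &tmp);
    /* force the extent to the true struct size, so that arrays of
       individuals are laid out correctly when sent and received */
    MPI_Type_create_resized(tmp, 0, (MPI_Aint)sizeof(individual_t), &t);
    MPI_Type_commit(&t);
    MPI_Type_free(&tmp);
    return t;
}
\end{verbatim}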
\paragraph{Initialization}
The program starts by reading the simulation parameters from command line and validating them according to the rules we mentioned in section~\ref{sec:intro}; if an error is found, it is logged back to the user and the program exits with \MPIAbort.
The configuration is then broadcast to all the processes with \MPIBcast, using the purpose-built \texttt{mpi\_global\_config} datatype.
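For illustration, such a broadcast could look as follows in C; the struct layout, field names and types are assumptions made for this sketch (the fields mirror the parameter list used in algorithm~\ref{alg:initialization}) and do not necessarily match the actual \texttt{mpi\_global\_config} definition.
\begin{verbatim}
#include <mpi.h>
#include <stddef.h>

typedef struct {                 /* illustrative layout only            */
    unsigned long N, I, W, L, w, l;
    double v, d;
    double t_infection, t_recovery, t_immunity, t_step, t_target;
} global_config_t;

static MPI_Datatype make_mpi_global_config(void)
{
    MPI_Datatype t;
    int          len[2]  = {6, 7};
    MPI_Aint     disp[2] = {offsetof(global_config_t, N),
                            offsetof(global_config_t, v)};
    MPI_Datatype typ[2]  = {MPI_UNSIGNED_LONG, MPI_DOUBLE};

    /* only a single config is ever broadcast, so there is no need to
       resize the extent of the datatype */
    MPI_Type_create_struct(2, len, disp, typ, &t);
    MPI_Type_commit(&t);
    return t;
}

/* Called by every rank; afterwards all ranks hold the root's values. */
static void broadcast_config(global_config_t *cfg, MPI_Datatype t)
{
    MPI_Bcast(cfg, 1, t, 0 /* root */, MPI_COMM_WORLD);
}
\end{verbatim}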
Each country proceeds to autonomously calculate its $x,y$ limits and the indices of its neighbors.
Now the root proceeds to (uniformly) distribute the population among the countries, sending the local number of individuals $N_c$ and infected $I_c$ via \MPIScatter. Each country will then create the requested number of individuals on its own, giving them a random position inside its borders and a random travelling direction.
We also use \MPIExscan to compute the id of the first individual of each country, by summing the numbers of individuals of the previous ones.
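A minimal sketch of this step is shown below (variable and function names are again illustrative). Note that \MPIExscan leaves the receive buffer of rank~0 undefined, so the first country sets its starting id to zero explicitly.
\begin{verbatim}
#include <mpi.h>

/* Returns the number of individuals assigned to this country and, through
   first_id, the global id of its first individual. N_per_country is only
   meaningful on the root rank. */
static unsigned long distribute_population(const unsigned long *N_per_country,
                                           int rank, unsigned long *first_id)
{
    unsigned long N_c = 0;

    /* one count per country, scattered by the root */
    MPI_Scatter(N_per_country, 1, MPI_UNSIGNED_LONG,
                &N_c,          1, MPI_UNSIGNED_LONG, 0, MPI_COMM_WORLD);

    /* exclusive prefix sum: total of the previous countries' counts */
    *first_id = 0;
    MPI_Exscan(&N_c, first_id, 1, MPI_UNSIGNED_LONG, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0)
        *first_id = 0;      /* undefined on rank 0, force it to zero */

    return N_c;
}
\end{verbatim}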
Note that we used blocking and asynchronous communication primitives for two main reasons:
\begin{enumerate*}[label=(\roman*)]
\item the amount of data to be sent is small, so it is unlikely to fill the send-buffers, and
\item the data flows only from the root to the other processes, so there is no need for synchronization
\end{enumerate*}.
Pseudocode of the initialization procedure can be found in algorithm~\ref{alg:initialization}.
\paragraph{Main loop}
Each iteration of the main loop takes a consistent situation at time $t$ and computes a consistent situation at $t + t_{step}$. There is no interpolation: each individual is treated as if it were in the same initial position and status for the whole step. At the beginning of each iteration we assume that each susceptible individual has $status_i = \NotExposed$.
The procedure, summarized in algorithm~\ref{alg:mainloop}, can be broken down in various phases:
\begin{description}
\item[update exposure] Check each susceptible individual $i$ against infected individuals: as soon as $i$ is found within spreading distance $d$ from an infected $j$, it is flagged with $status_i = \Exposed$.
\item[update status] Based on $status$ and $t_{status}$, the status of each individual is updated (for example, an infected individual with $t_{status} \geq t_{recovery}$ will become immune). In case the individual changed status or was not exposed, $t_{status}$ is reset to zero, otherwise it is incremented by $t_{step}$. \Exposed individuals are set back to \NotExposed. In this way $t_{status}$ will accumulate the time an individual has been continuously exposed.
\item[update position] $pos_i$ is updated by summing the displacement $\Delta pos_i$. In case the resulting position is out of the country boundary, there are two possibilities:
\begin{enumerate*}[label=(\roman*)]
\item it is also out of the world boundary, so it will bounce, or
\item it is in a neighbor country $c'$; in such case it is removed from the local individuals and inserted into an outbound list $migrated \mhyphen to_{c'}$ for the specific country.
\end{enumerate*}.
\item[exchange migrated] Country $c$ exchanges the list of individuals that moved to each of its neighbors $c'$. The source country performs an \MPIIsend and the destination will perform a matching \MPIRecv , both with the specific tag \MigratedTag.
We then integrate the received individuals into the local lists and finally wait for the sends to complete.
We chose a \emph{non-blocking} send so that the process immediately starts receiving from its neighbors, without having to wait for all the outbound data to be copied into the buffers, which may fill since a lot of data is potentially sent. For receives we chose a \emph{blocking} approach, with the disadvantage that a process might be stuck waiting for a slower neighbor, while a faster neighbor has already sent its data. However, the only approaches that overcome this limitation are \emph{event-driven} and \emph{polling}, neither of which is available out-of-the-box in MPI, so we chose not to overcomplicate the code for a small gain in performance.
In order to send the struct representing an individual, we created a custom MPI type called \texttt{mpi\_individual}. A sketch of this exchange, in C, is given right after this list.
\item[write summary] If at least one day has passed since the last summary, each country counts how many susceptible, infected and immune individuals are currently in it. This data is packed into a \texttt{mpi\_summary} triple and gathered by the root using \MPIGather; the root will write an entry for each country in a \texttt{csv} file. An alternative to this approach would be collectively writing to an \MPIFile , but this also involves ensuring that each write operation is atomic. In our opinion the chosen approach is simpler, clearer and equivalent in terms of performance.
\item[check termination] Each country counts the local number of infected individuals, then it calls \MPIAllreduce so that the total number of infected in the world is computed and received by each country. If there are no more infected in the whole world, the loop stops.
\end{description}
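The sketch below illustrates the \textbf{exchange migrated} phase. It reuses the illustrative \texttt{individual\_t} and \texttt{mpi\_individual} from the earlier sketch; the tag value, the buffer bounds and the \texttt{integrate} callback are further assumptions, and probing for the message size is just one possible way to handle the unknown number of inbound individuals.
\begin{verbatim}
#include <mpi.h>

#define MIGRATED_TAG  7     /* assumed numeric value of the tag        */
#define MAX_NEIGHBORS 8     /* a country has at most 8 neighbors       */
#define MAX_MIGRATED  4096  /* assumed bound on inbound individuals    */

/* individual_t and mpi_individual: as sketched in the earlier listing */
static void exchange_migrated(individual_t *out[], const int out_count[],
                              const int neighbor[], int n_neighbors,
                              MPI_Datatype mpi_individual,
                              void (*integrate)(const individual_t *, int))
{
    MPI_Request req[MAX_NEIGHBORS];

    /* start non-blocking sends towards every neighbor */
    for (int k = 0; k < n_neighbors; k++)
        MPI_Isend(out[k], out_count[k], mpi_individual,
                  neighbor[k], MIGRATED_TAG, MPI_COMM_WORLD, &req[k]);

    /* blocking receives: probe the size, then receive and integrate */
    for (int k = 0; k < n_neighbors; k++) {
        MPI_Status   st;
        int          n_in;
        individual_t in[MAX_MIGRATED];

        MPI_Probe(neighbor[k], MIGRATED_TAG, MPI_COMM_WORLD, &st);
        MPI_Get_count(&st, mpi_individual, &n_in);
        MPI_Recv(in, n_in, mpi_individual, neighbor[k], MIGRATED_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        integrate(in, n_in);
    }

    /* only now wait for our own sends to complete */
    MPI_Waitall(n_neighbors, req, MPI_STATUSES_IGNORE);
}
\end{verbatim}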
\begin{algorithm}[p]
\If{root}{
\Cfg$\leftarrow \langle N, I, W, L, w, l, v, d, t_{infection}, t_{recovery}, t_{immunity}, t_{step}, t_{target}\rangle$\;
\ValidateConfig{\Cfg}\;
}
\MPIBcast{\Cfg}\;
\If{root}{
$\left\{ N_c \right\} \leftarrow$ \DistributePopulation{$N$}\;
$\left\{ I_c \right\} \leftarrow$ \DistributePopulation{$I$}\;
}
\MPIScatter{$N_c$}\;
\MPIScatter{$I_c$}\;
\ForEach{individual $i$}{
$pos_i \leftarrow$ random position\;
$\Delta pos_i \leftarrow t_{step} \cdot v \cdot
\begin{bmatrix} \cos \theta_i & \sin \theta_i \end{bmatrix}$,
where $\theta_i$ is a random direction\;
$status_i \leftarrow$ \NotExposed | \Infected, based on distribution\;
$t_{status} \leftarrow 0$\;
}
\caption{Initialization}
\label{alg:initialization}
\end{algorithm}
\begin{algorithm}[p]
\For{$t \leftarrow 0$ \KwTo $t_{target}$ \Step $t_{step}$}{
\ForEach{susceptible individual $i$, infected individual $j$}{
\UpdateExposure{$i,j$}\;
}
\ForEach{individual $i$}{
\UpdateStatus{i}\;
\UpdatePosition{i}\;
}
$\left\{ migrated \mhyphen to_{c'} \right\} \leftarrow$ list of individuals to be moved from $c$ to each neighbor $c'$\;
\ForEach{neighbor country $c'$}{
\MPIIsend{$migrated \mhyphen to_{c'}$, $c'$}\;
}
\ForEach{neighbor country $c'$} {
\MPIRecv{$migrated \mhyphen from_{c'}$, $c'$} and integrate it with local individuals\;
}
wait until all the \MPIIsend have completed\;
\If{end of day}{
$\langle susceptible_c, infected_c, immune_c \rangle \leftarrow$ \CountByStatus{$\left\{ status_{i} \right\}$}\;
$summary \leftarrow$ \MPIGather{$\left\{\langle susceptible_c, infected_c, immune_c \rangle\right\}$}\;
\If{root}{
\WriteToCSV{$summary$}\;
}
}
$infected_c \leftarrow$ \Count{infected individuals}\;
$total \mhyphen infected \leftarrow$ \MPIAllreduce{$infected_c$}\;
\If{$total \mhyphen infected = 0$}{
\Break\;
}
}
\caption{Main Loop}
\label{alg:mainloop}
\end{algorithm}
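To complement algorithm~\ref{alg:mainloop}, the sketch below illustrates the end-of-day summary and the termination test. The \texttt{summary\_t} layout, the handling of the \texttt{mpi\_summary} datatype and the CSV format are assumptions made for the sketch, and the two collectives are folded into one helper only for brevity: in the real loop the summary is produced once per day, while the termination test runs at every step.
\begin{verbatim}
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {                 /* illustrative summary triple */
    unsigned long susceptible, infected, immune;
} summary_t;

/* Returns non-zero when no infected individuals are left in the world. */
static int write_summary_and_check_end(summary_t *local,
                                       MPI_Datatype mpi_summary, int rank,
                                       int n_countries, int day, FILE *csv)
{
    summary_t    *all = NULL;
    unsigned long total_infected = 0;

    if (rank == 0)
        all = malloc((size_t)n_countries * sizeof *all);

    /* the root gathers one triple per country and writes one CSV row each */
    MPI_Gather(local, 1, mpi_summary, all, 1, mpi_summary, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        for (int c = 0; c < n_countries; c++)
            fprintf(csv, "%d,%d,%lu,%lu,%lu\n", day, c,
                    all[c].susceptible, all[c].infected, all[c].immune);
        free(all);
    }

    /* every country learns the world-wide number of infected */
    MPI_Allreduce(&local->infected, &total_infected, 1,
                  MPI_UNSIGNED_LONG, MPI_SUM, MPI_COMM_WORLD);
    return total_infected == 0;
}
\end{verbatim}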
\section{Compilation}
\emph{Compilation} is a step which optionally occurs at the end of
interpretation, when T1 is invoked ``as a compiler''; it can also be
triggered explicitly by the source code itself. Compilation takes as
input a list of \emph{entry points} (specific functions), and produces
an executable form of these functions and their transitive dependencies.
This process is meant to fulfill the following:
\begin{itemize}
\item Compiled code is small and self-reliant. It can be invoked and
run without requiring access to a bulky runtime system.
\item Interpretation features, such as defining types or new
functions, are not available in compiled code. Notably, compiled
code cannot access type or function names.
\item Compiled output should be amenable to integration within
applications written in other languages, in particular C.
\item The compiler offers strong guarantees on the usage of memory
resources by compiled code: maximum data stack depth and maximum
storage area for activation contexts (including local variables
and locally allocated instances) are computed; and dynamic
memory allocation, if supported at all, can be made to occur only
in a specific, limited area provided by the caller that invokes
the compiled code.
\item Compiled code is proven not to trigger any error related
to function invocation: whenever a function is invoked, there is
exactly one matching function that is more precise than all other
matching functions; and accessor functions called on instances that
extend the structure on which the accessor was defined find an
unambiguous instance on which the access is to be performed.
\item Similarly, compiled code is proven never to read an
uninitialized local variable, to let a reference to a locally
allocated instance escape its activation context, or to attempt to
write into a statically allocated instance.
\end{itemize}
Compilation can work only on a subset of valid codes; notable among the
restrictions is that compiled code cannot be generally recursive, since
such recursion would prevent computing strong bounds on stack depth.
\begin{rationale}
Banning recursion is controversial, especially since most functional
languages instead strive to use recursion to express most of flow
control. The two main reasons to forbid recursion in T1 are the
following:
\begin{itemize}
\item Not allowing recursion means that the call tree is finite,
which permits the general flow analysis (described below) to
terminate.
\item Recursion allocates memory in spaces which are scarce on
memory resources. T1 aims at being useful for small embedded systems
that have only a few kilobytes of RAM in total; however, even on
bigger systems, stacks are small. For instance, a typical modern
desktop system or laptop will have gigabytes of RAM, but the stack
allocated for a thread is smaller (8 megabytes by default on Linux).
Common sense dictates that if unbounded memory allocation occurs, it
should not be done in an area which is a thousand times smaller than
the heap, and for which the only detection mechanism for allocation
failure is \verb|SIGSEGV|.
\end{itemize}
In a future version, \emph{tail calls} may be implemented, and tail
recursion allowed. In a tail call, the activation context of the caller
is released first, and when the callee returns, control is passed back
not to the caller, but the caller's caller. If a tail call does not
imply undue stack growth, then it won't prevent computing finite bounds
on stack depth, and it \emph{should} be manageable by flow analysis.
\end{rationale}
\subsection{Flow Analysis And Types}
\emph{Flow analysis} is the central step of compilation. Consider
the following code excerpt:
\begin{verbatim}
: triple (object)
dup dup + + ;
: main ()
4i32 triple println
"foo" triple println ;
\end{verbatim}
The flow analysis starts with the entry point (\verb|main|) and an empty
stack. Then, after the \verb|4i32| token, the stack should contain
exactly one element of type \verb|i32|. At that point, the \verb|triple|
function is invoked. There is exactly one matching function of that name
for a stack with one element of type \verb|i32|, and therefore the call
is unambiguous.
Analysis proceeds with the \verb|triple| function. Crucially, analysis
of \verb|main| is not finished; it will be continued when this call to
\verb|triple| is done. Within \verb|triple|, the calls to \verb|dup| and
\verb|+| are followed; notably, when the first \verb|+| call is reached,
the stack is determined to contain three elements of type \verb|i32|.
When the end of \verb|triple| is reached, the stack is back to
containing one element of type \verb|i32|. At that point, flow analysis
jumps back to the caller (\verb|main|) and can proceed to the next call
(\verb|println|) since it is now known that this call is potentially
reachable (the \verb|triple| function may return) and also which stack
contents to expect at that time. The call to \verb|println| is resolved
to the function of that name that expects an element of type \verb|i32|.
Later on, the analysis reaches the second call to \verb|triple| in the
\verb|main| function. For that one, the stack contains one element of
type \verb|string|\footnote{Strictly speaking, \texttt{\textbf{string}}
is merely an alias on \texttt{\textbf{(std::u8 std::array)}}, but we
will use the name \texttt{\textbf{string}} for the clarity of the
exposition.}. There is still one matching function of name
\verb|triple|, and this is the same one as previously (indeed, there's
only one \verb|triple| function defined in this example, so only that
one may be called). However, the flow analysis of \verb|triple| will be
done \emph{again}: everything is done as if that call was a new one.
In that new call to \verb|triple|, the stack initially contains one
\verb|string|; after the two \verb|dup| calls, it contains three
\verb|string| elements; then, the \verb|+| calls will be resolved to the
function that ``adds'' strings (it concatenates them into newly
heap-allocated string values). That second analysis of \verb|triple|
concludes and returns a single \verb|string|. In \verb|main|, the second
\verb|println| call is resolved to the function of that name that
expects one element of type \verb|string| (not the same one as the one
that expects an \verb|i32|).
The salient points of this process are the following:
\begin{itemize}
\item The \verb|triple| function has been \emph{registered} with one
parameter type, which is generic (\verb|object| matches all value
types). It cannot really be called on values of every type; for
instance, it cannot be called on \verb|bool| since there is no
defined \verb|+| function that works on \verb|bool| values. But it
does not matter that the function could \emph{in abstracto} be
invoked on values on which it would not work; what counts is whether
such an invalid call is actually made in the program at hand. The
flow analysis determines that all calls to \verb|triple| will work,
and that is sufficient.
\item Similarly, \verb|triple| could have been registered with no
parameter at all (``\verb|: triple ()|''). During flow analysis, the
compiler would still have known that at the time the function is
invoked, there is a value on the stack, and the first \verb|dup|
call won't underflow. Types for function registration are used
\emph{only} to determine which function is called, not to restrict
the actual usage of values on the stack\footnote{It is still good
software engineering to register a function with exactly the
parameters that it is going to use, if only for better source code
readability.}.
\item The fact that each \verb|triple| call has its own analysis
avoids type merging trouble. If both calls were the same node in the
call graph, then the flow analysis would be faced with calling
\verb|+| on a stack with three elements, each being either a
\verb|string| or an \verb|i32|. Such a call would not succeed
because there is no \verb|+| function that can work over a
\verb|string| and an \verb|i32|; the compiler would reject the code
as making a call which is potentially unsolvable. In this case,
duplicating the \verb|triple| node allows the flow analysis to keep
track of the fact that while all three stack elements at this point
may be of type \verb|i32| or \verb|string|, they all three have the
same type, and cross combinations are not possible.
\end{itemize}
The node duplication means that, as far as flow analysis is concerned,
the ``call graph'' is a \emph{call tree}.
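Purely as an illustration of this call-tree behaviour, the following Python
sketch (not part of the T1 toolchain; the \verb|PROGRAM| table and the few
hard-coded built-ins are our simplifications) re-analyzes each call to a
user-defined word with its own copy of the abstract stack, so \verb|triple|
is analyzed once for \verb|i32| and once for \verb|string|:
\begin{verbatim}
PROGRAM = {
    # word name -> body, as a list of tokens
    "triple": ["dup", "dup", "+", "+"],
    "main": ["4i32", "triple", "println", '"foo"', "triple", "println"],
}

def analyse(word, stack, depth=0):
    """Return the abstract stack left by `word` when entered with `stack`."""
    pad = "  " * depth
    print(f"{pad}analysing {word} with stack {stack}")
    stack = list(stack)
    for tok in PROGRAM[word]:
        if tok == "4i32":
            stack.append("i32")
        elif tok == '"foo"':
            stack.append("string")
        elif tok == "dup":
            stack.append(stack[-1])
        elif tok == "+":
            b, a = stack.pop(), stack.pop()
            if a != b or a not in ("i32", "string"):
                raise TypeError(f"no matching + for {a} {b}")
            stack.append(a)  # i32 i32 -> i32, string string -> string
        elif tok == "println":
            print(f"{pad}  println resolved for {stack.pop()}")
        else:
            # user-defined word: a fresh node in the call tree
            stack = analyse(tok, stack, depth + 1)
    return stack

analyse("main", [])
\end{verbatim}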
\begin{rationale}
Duplication of nodes for function calls is what makes all functions
``generic'' in the Java or C\# sense. But since the analysis is done
only deductively, i.e. based on what may be on the stack at that
point of the program, there is no need for a syntax to express what
type combinations are allowed. Again, T1 does not care whether a given
function could work on all input values that may exist in the universe,
only that it would work with what may actually be present on the stack
at the time of the call.
\end{rationale}
\emph{Type merging} may still occur because of jump opcodes. For
instance, consider this function:
\begin{verbatim}
: muxprint (object object bool)
if drop else swap drop then println ;
\end{verbatim}
Suppose that the top three elements for a call to \verb|muxprint| are
values of type \verb|A|, \verb|B| and \verb|bool|. In the built function,
the call to \verb|println| can be reached from two points: this could
follow the ``\verb|swap drop|'' sequence (the boolean was \verb|false|,
the value of type \verb|A| has been dropped, the stack now contains an
object of type \verb|B|), or be reached through the jump that is implicit
in the \verb|else| construction. In the latter case, the top of the stack
will be a value of type \verb|A|.
Thus, flow analysis will consider that when \verb|println| is called,
the stack may contain one element which is of type \verb|A| or of type
\verb|B|. The call will be accepted only if it is solvable in both
cases. If the two cases are solvable, but lead to distinct functions,
then both functions will be analyzed, each with its own context.
For the purposes of flow analysis, all individual conditional jump
opcodes are considered independent of each other. This means that
the following cannot be compiled successfully:
\begin{verbatim}
: foo (bool)
->{x} {y}
x if 42 ->y then
x if y println then ;
\end{verbatim}
Indeed, this function uses the provided input value (stored in the local
variable \verb|x|) to decide whether to put the integer \verb|42| in the
variable \verb|y| (first ``\verb|if|'' clause), and whether to print the
contents of the \verb|y| variable (second ``\verb|if|'' clause). The
compiler does not notice that both jumps use the same control value;
instead, it considers that the jumps are independent of each other, and,
in particular, the first jump may be taken, thus skipping the
initialization of \verb|y|, while the second would not, leading to the
read of the potentially uninitialized variable \verb|y|.
\begin{rationale}
The idea that conditional jumps are independent of each other has been
borrowed from Java. Indeed, with the equivalent Java code:
\begin{verbatim}
static void foo(boolean x) {
    int y;
    if (x) {
        y = 42;
    }
    if (x) {
        System.out.println(y);
    }
}
\end{verbatim}
Java compilation fails with the error ``variable \verb|y| might not
have been initialized''.
This is not considered a great restriction. In practical Java
development, that kind of error occurs mostly when adding debug code to
an existing function, activated with a global ``debug'' flag.
\end{rationale}
The notion of \emph{type} used for the flow analysis is the combination
of the \verb|std::type| for the value, and the \emph{object allocation
point}. An object allocation point is one of the following:
\begin{itemize}
\item static allocation (conceptually, in ROM, thus non-modifiable);
\item the heap;
\item a specific local slot within the activation context of a
specific function call, i.e. a node in the call tree.
\end{itemize}
This information is the basis for the escape analysis (making sure that
instances allocated in activation contexts are not reachable after the
called function has returned) and for the verification of constant
objects (static allocation corresponds to \verb|const| definitions in C,
and such objects thus normally end up in non-modifiable memory).
Apart from the stack contents, the types of local variables at any point
in a given function (for a given activation context, i.e. within a node
in the call tree) are also maintained. A special \verb|nil| type is used
for uninitialized local variables; any attempt at reading \verb|nil| is
rejected at compilation time.
Types of values written in object fields are tracked. The container
types are distinguished by allocation point, but all instances with the
same allocation point use the same tracking. Therefore, writing a value
of type \verb|int| in the field \verb|x| of a heap-allocated object of
type \verb|T| implies that, from the point of view of the flow analyzer,
all objects of type \verb|T| that are heap-allocated may contain, at all
times, a value of type \verb|int| in their \verb|x| field.
Note that any merging may enrich the list of possible types in a stack
slot, local variable or object field, and trigger further flow analysis
for all parts of the call tree that depend on it.
\subsection{Constraints}
The following constraints are enforced by the flow analyzer; any
violation implies a compilation failure:
\begin{itemize}
\item The call tree must be finite.
\item At every merge point, the stack depth is the same for all
code paths leading to that point.
\item No merge between basic types (booleans and small modular
integers), or between a basic type and a non-basic type, may occur,
whether on the stack, in local variables, or within fields of
a structure type.
\item When a local variable is read, it may not contain \verb|nil|
(which marks the uninitialized state).
\item No write to a field of an object with static allocation may
happen.
\item Whenever a value has an allocation point tied to a given node
$N_1$ in the call tree, and it is written into a field of another
object, then that other object must have an allocation point tied
to a node $N_2$, and $N_2$ must be either equal to $N_1$, or a
descendant of $N_1$ in the call tree.
\item When a function returns, the stack contents must not contain
any value whose allocation point is the node of that function in the
call tree.
\item For every function call, all possible combinations of types on
the stack at that point must be solvable, i.e. lead to a single most
precise function call.
\item When a field accessor is invoked, the field must be
unambiguously located for all possible types of the owner object.
\end{itemize}
Note that some special functions do not return (e.g. \verb|std::fail|).
This is detected during flow analysis. As such, some opcodes may be
unreachable; these will be trimmed during code generation.
\begin{rationale}
Each basic type may be merged only with itself because basic types have
specific storage requirements that may differ from those of ``normal''
values (which are pointers). In the generated code, values of basic
types may be passed around on a different, dedicated stack, or different
registers. Similarly, an object field declared with type \verb|std::u8|
should correspond to a one-byte slot in the memory layout; a feature
of T1 is that object layouts are predictable, so that they can be
accessed from C code.
The finiteness of the call tree is enforced with nested call counters:
when a node is entered that corresponds to a given function $f$, the
counter for $f$ is incremented; it is decremented when leaving $f$. If
the counter goes over a given threshold, then compilation stops with an
explicit message. This necessarily detects all infinite trees, since
there are only a finite number of functions, each with a finite number
of opcodes: an infinite tree can be obtained only through infinite
recursion.
Some finite recursion is still tolerated. This allows for some cases
where the same generic function is used for several levels of a nested
structure, but with distinct types that guarantee against unbounded
recursion.
\end{rationale}
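As a minimal sketch of the counter mechanism described in the rationale
above (Python; the threshold value is a placeholder, the actual limit is a
property of the compiler and not specified here):
\begin{verbatim}
from collections import Counter

THRESHOLD = 32        # placeholder limit
_active = Counter()   # occurrences of each function on the current analysis path

def enter(name):
    """Called when flow analysis enters a node for function `name`."""
    _active[name] += 1
    if _active[name] > THRESHOLD:
        raise RuntimeError(f"recursion too deep while analysing {name}")

def leave(name):
    """Called when flow analysis leaves the node again."""
    _active[name] -= 1
\end{verbatim}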
\subsection{Code Generation}
Code generation occurs after flow analysis has completed successfully.
How code is generated depends on the target type; the compiler may
produce portable threaded code, or native code, or WASM, or anything
else. Generated code includes all functions that are part of the
call tree; other functions are automatically excluded.
A generic \emph{function merging} process occurs during code generation.
In the flow analysis, functions were duplicated: the same piece of code
may yield several distinct nodes in the call tree. When generating code,
these nodes may be merged back. This is subject to some restrictions and
subtleties:
\begin{itemize}
\item Some merge operations may not be feasible. In our example with
the \verb|triple| function, one of the nodes works on \verb|i32|
values while the other uses \verb|string| values. In generated code,
these values use different storage techniques (e.g. stack slots of
different size, or different registers), which may preclude merging.
\item Even when function merging is possible, it may be undesirable
for performance: for instance, the unmerged function may have only
simple calls (each \verb|triple| function node calls a single
well-defined \verb|+| function), while the merged function may need
a type-based dynamic dispatch (if the \verb|i32| and \verb|string|
could be merged, the \verb|triple| function would have to look at
the runtime type of the values to decide which \verb|+| version to
call).
\end{itemize}
\begin{rationale}
In general, in T1, we aim at code compactness, hence apply merging
whenever possible. A future annotation will make it possible to explicitly
tag some functions as prohibiting non-trivial merging, i.e. when the relevant
types are not all strictly identical. This would reproduce the trade-offs
usually seen in C with \verb|inline| functions.
\end{rationale}
\chapter{Prototype set classifier}
\label{ch_classifier}
%
In this chapter, we introduce the prototype set classifier, consider how to fit it to data, test the fitting procedure on benchmark cases, and compare model performance to other algorithms.
%
\section{Definition}
\label{sec_classifier_definition}
%
We observe $N\gg1$ i.i.d.\ pairs $(X_n,Y_n)$ of random variables such that $X_n$ takes values in $\R^D$, $D\geq1$, and $Y_n$ is one of $K>1$ classes represented by the integers from 0 to $K-1$.
Our goal is to estimate the conditional distribution of $Y_n$ given $X_n$.
As proset models are discriminative instead of generative, the distribution of $X_n$ is of no concern and we just deal with the observed realizations $x_n\in\R^D$ in practice.
Nominal or ordinal features can be included in the model via encoding as real vectors.\par
%
A proset classifier is built from $B\geq0$ ``batches'' of points selected from the available samples.
The batches are defined by nonempty subsets $S_1,\dots,S_B\subset\{1,\dots,N\}$ of the observation indices.
The collection of batches is denoted by
%
\begin{equation}
\mathcal{S}:=\{S_b:b\in\{1,\dots,B\}\}\label{eq_batches}
\end{equation}
%
or $\mathcal{S}=\emptyset$ if $B=0$.
The indices of each batch are $S_b=:\{s_{b,1},\dots,s_{b,J_b}\}$ where $J_b:=|S_b|$.
We refer to the samples $(x_{s_{b,j}},y_{s_{b,j}})$ as `prototypes'.
The model treats each as a representative example for the distribution of $Y_n$ when $X_n$ is in a neighborhood of $x_{s_{b,j}}$.
To make this notion more precise, we require additional notation:\par
%
The empirical marginal probabilities of $Y$ are
%
\begin{equation}
\hat{p}_{0,k}:=\frac{1}{N}\sum_{n=1}^N\1_{\{k\}}(y_n)\label{eq_p0k}
\end{equation}
%
where $\1_A$ is the indicator function of a set $A$.\par
%
The unnormalized Gaussian kernel $G_v$ with feature weights (inverse bandwidths) $v\in\R^D$, $v_d\geq0$, is
%
\begin{equation}
G_v:\R^D\rightarrow(0,1],
z\mapsto G_v(z):=\exp\left(-\frac{1}{2}\sum_{d=1}^D(v_dz_d)^2\right)\label{eq_kernel}
\end{equation}
%
We associate each batch $b$ with a vector $v_b\in\R^D$, $v_{b,d}\geq0$, and each prototype with a weight $w_{b,j}>0$ to estimate the conditional distribution as
%
\begin{equation}
P(Y=k|X=x)\approx\hat{p}_k(x):=
\frac{\hat{p}_{0,k}+\sum_{b=1}^B\sum_{j=1}^{J_b}\1_{\{k\}}(y_{s_{b,j}})w_{b,j}G_{v_b}(x-x_{s_{b,j}})}
{1+\sum_{b=1}^B\sum_{j=1}^{J_b}w_{b,j}G_{v_b}(x-x_{s_{b,j}})}
\label{eq_pkx}
\end{equation}
%
\begin{remark}
\begin{enumerate}
\item The sets $S_b$ are not required to be disjoint, so the same point can appear multiple times in (\ref{eq_pkx}).
As each batch is associated with its own $v_b$, the impact on the model may be different every time a sample appears.
%
\item We use unnormalized kernels, i.e., the integral over the kernel function is in general not equal to 1, since the scaling can be considered to be subsumed in the choice of $w_{b,j}$.
This avoids any complications related to the fact that the scaling for a Gaussian kernel depends on the number of features with nonzero coefficients.
%
\item The kernels are parameterized in terms of inverse bandwidth to enable feature selection via an $L_1$ penalty on $v_{b,d}$.
If a weight is forced to zero by the penalty, the values of the corresponding feature in points of batch $b$ cease to affect the model.
Conversely, a large value of $v_{b,d}$ means that the model is very sensitive to variations in the feature.
Kernels are limited to a diagonal bandwidth structure (product kernel) instead of a full semi-positive definite matrix to be able to fit on large feature spaces with reasonable effort.
%
\item The conditional probability (\ref{eq_pkx}) is computed as a locally weighted average similar to the Nadaraya-Watson estimator (\ref{eq_nadaraya_watson}) or the conditional distributions studied in \cite{Hall_04}.
However, there are two important differences:
%
\begin{itemize}
\item The model uses a subset of the training samples with individual weights instead of the entire data with unit weights.
This is done to reduce the computational effort for training and scoring if data size is large.
Also, studying the prototypes selected for the model may help to understand its structure.
%
\item The model adds the marginal probabilities to the contribution of the prototypes.
This sets the scale for the weights $w_{b,j}$, which would otherwise be determined only up to a multiplicative constant.
Additionally, it defines a natural baseline for $B=0$, i.e., the model that treats $Y_n$ as independent of the features.
\end{itemize}
\end{enumerate}
\end{remark}
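To make (\ref{eq_pkx}) concrete, the following Python sketch evaluates the
estimated class probabilities at a single point $x$.
The prototypes of all batches are assumed to be stacked into flat arrays; all
names are ours and not taken from a reference implementation.
\begin{verbatim}
import numpy as np

def gaussian_kernel(z, v):
    """Unnormalized Gaussian kernel G_v(z) with inverse bandwidths v, cf. (eq_kernel)."""
    return np.exp(-0.5 * np.sum((v * z) ** 2))

def predict_proba(x, prototypes, labels, weights, v_per_prototype, p0):
    """Conditional class probabilities as in (eq_pkx).

    prototypes:      (P, D) feature vectors of all prototypes (batches stacked)
    labels:          (P,)   class index of each prototype
    weights:         (P,)   prototype weights w_{b,j}
    v_per_prototype: (P, D) feature weights of the batch each prototype belongs to
    p0:              (K,)   marginal class probabilities
    """
    numerator = p0.astype(float).copy()
    denominator = 1.0
    for xp, y, w, v in zip(prototypes, labels, weights, v_per_prototype):
        g = w * gaussian_kernel(x - xp, v)
        numerator[y] += g
        denominator += g
    return numerator / denominator

# toy usage: two prototypes, two classes
p0 = np.array([0.5, 0.5])
prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
weights = np.array([1.0, 1.0])
v = np.full((2, 2), 2.0)
print(predict_proba(np.array([0.1, 0.0]), prototypes, labels, weights, v, p0))
\end{verbatim}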
%
Finding an optimal representation of the form (\ref{eq_pkx}) for anything but a very small sample appears intractable.
We thus focus on providing a heuristic that results in models of good quality.
The general idea is to iteratively add batches of prototypes to a base model consisting of the marginal probabilities $p_{0,k}$.
In each iteration, the available samples are split into a set of candidates for prototypes and a remainder used for scoring.
The weights for the candidates and the feature weights for the new batch are chosen to maximize a modified log-likelihood function for the scoring data.
Modifications to the likelihood are
%
\begin{enumerate}
\item reweighting the terms such that each class has the same overall weight as in the set of all samples.
This enables us to use a sampling scheme for candidates that does not draw proportionally from each class.
%
\item adding elastic net penalties for both prototype and feature weights to suppress candidates and features with negligible impact on the model.
\end{enumerate}
%
The method is greedy in so far that parameters selected during earlier iterations remain untouched.
Hyperparameters -- penalty weights and the number of batches -- are selected via cross-validation.\par
%
To state the modified likelihood and its derivatives, we make use of the following expression representing the unnormalized class probabilities truncated at batch $c\geq0$:
%
\begin{equation}
\hat{q}_{c,k}(x)=\hat{p}_{0,k}
+\sum_{b=1}^c\sum_{j=1}^{J_b}\1_{\{k\}}(y_{s_{b,j}})w_{b,j}G_{v_b}(x-x_{s_{b,j}})
\label{eq_qckx}
\end{equation}
%
These satisfy
%
\begin{align}
\hat{p}_{k}(x)&=\frac{\hat{q}_{B,k}(x)}{\sum_{l=0}^{K-1}\hat{q}_{B,l}(x)}\label{eq_q_properties}\\
\hat{q}_{0,k}(x)&=\hat{p}_{0,k}\notag\\
\forall c>0:\hat{q}_{c,k}(x)&
=\hat{q}_{c-1,k}(x)+\sum_{j=1}^{J_c}\1_{\{k\}}(y_{s_{c,j}})w_{c,j}G_{v_c}(x-x_{s_{c,j}})
\notag
\end{align}
%
During the training phase, we do not yet know the final number of batches $B$ but grow the model iteratively.
The conditional probabilities using only batches up to $c$ are given by
%
\begin{equation}
\hat{p}_{c,k}(x)=\frac{\hat{q}_{c,k}(x)}{\sum_{l=0}^{K-1}\hat{q}_{c,l}(x)}\label{eq_pckx}
\end{equation}
%
which also satisfy $\hat{p}_{0,k}(x)=\hat{p}_{0,k}$.\par
%
Given the first $c-1$ batches, the parameters for batch $c>0$ are chosen to minimize the following function, which is the negative log-likelihood with regularization as per (\ref{eq_regularization}) and reweighting as discussed above:
%
\begin{align}
f\left(v_c,\{w_{c,j}\}_j|\{x_n\}_n\right)
&=-\frac{1}{N}\sum_{k=0}^{K-1}\frac{N_k}{N_k-J_{c,k}}\sum_{n\in T_c}
\1_{\{k\}}(y_n)\log(\hat{p}_{c,k}(x_n))\label{eq_log_likelihood}\\
&+\lambda_v\left(\frac{\alpha_v}{2}\sum_{d=1}^Dv_{c,d}^2
+(1-\alpha_v)\sum_{d=1}^D|v_{c,d}|\right)\notag\\
&+\lambda_w\left(\frac{\alpha_w}{2}\sum_{j=1}^{J_c}w_{c,j}^2
+(1-\alpha_w)\sum_{j=1}^{J_c}|w_{c,j}|\right)\notag
\end{align}
%
The following expressions still need to be defined:
%
\begin{itemize}
\item $N_k:=\left|\{n\in\{1,\dots,N\}:y_n=k\}\right|$ is the number of all samples that have class $k$.
%
\item $J_{c,k}:=\left|\{j\in\{1,\dots,J_c\}:y_{s_{c,j}}=k\}\right|$
is the number of all candidates in batch $c>0$ that have class $k$.
%
\item $T_c:=\{1,\dots,N\}\setminus S_c$ is the set of samples not included as candidates for prototypes in batch $c>0$.
%
\item $\lambda_v\geq0$ is the weight for the elastic net penalty applied to feature weights.
%
\item $\lambda_w\geq0$ is the weight for the elastic net penalty applied to prototype weights.
%
\item $\alpha_v\in[0,1]$ is the portion of $\lambda_v$ assigned to the $L_2$ penalty for feature weights.
%
\item $\alpha_w\in[0,1]$ is the portion of $\lambda_w$ assigned to the $L_2$ penalty for prototype weights.
\end{itemize}
%
Given a set of prototypes $S_c$, the objective function (\ref{eq_log_likelihood}) is minimized subject to $v_{c,d}\geq0$ and $w_{c,j}\geq0$.
Note that we permit $w_{c,j}=0$ as a solution of the optimization problem, which contradicts our earlier definition.
However, since assigning zero weight to a point is equivalent to excluding it from the model, this does not cause any issues.\par
%
As all parameters are constrained to the first orthant, the fact that the $L_1$ penalty is not differentiable at zero does not pose a problem.
In fact, we can simply replace the absolute value with the identity function and solve the optimization task using a standard solver for continuous optimization with bounds like L-BFGS-B \cite{Byrd_95}.\par
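A minimal sketch of this set-up in Python: the objective below is a stand-in
quadratic rather than (\ref{eq_log_likelihood}), but the joint parameter
vector, the nonnegativity bounds, and the starting values follow the
description in this chapter (dimensions and names are ours).
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

D, J = 5, 20  # number of features and number of candidates in the batch
x0 = np.concatenate([np.full(D, 10.0 / D),  # feature weights v_{c,d} start at 10/D
                     np.ones(J)])           # prototype weights w_{c,j} start at 1

def objective_and_grad(theta):
    # placeholder for the penalized negative log-likelihood and its gradient;
    # since theta >= 0 is enforced by the bounds, |theta| equals theta and the
    # L1 part of the penalty is differentiable on the feasible set
    value = 0.5 * np.sum((theta - 1.0) ** 2)
    grad = theta - 1.0
    return value, grad

result = minimize(objective_and_grad, x0, jac=True, method="L-BFGS-B",
                  bounds=[(0.0, None)] * (D + J))
v_opt, w_opt = result.x[:D], result.x[D:]
\end{verbatim}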
%
To gain the full advantage of using L-BFGS-B, we need to compute the gradient of (\ref{eq_log_likelihood}) analytically.
The partial derivatives of $\log(\hat{p}_{c,k}(x))$, $c>0$, are:
%
\begin{align}
\frac{\partial}{\partial v_{c,d}}\log(\hat{p}_{c,k}(x))&
=\frac{\frac{\partial}{\partial v_{c,d}}
\sum_{j=1}^{J_c}\1_{\{k\}}(y_{s_{c,j}})w_{c,j}G_{v_c}(x-x_{s_{c,j}})}
{\hat{q}_{c,k}(x)}
\label{eq_p_partial_v}\\
&-\frac{\frac{\partial}{\partial v_{c,d}}
\sum_{j=1}^{J_c}w_{c,j}G_{v_c}(x-x_{s_{c,j}})}
{\sum_{l=0}^{K-1}\hat{q}_{c,l}(x)}\notag\\
&=v_{c,d}\left(\frac{\sum_{j=1}^{J_c}w_{c,j}
(x_d-x_{s_{c,j},d})^2G_{v_c}(x-x_{s_{c,j}})}
{\sum_{l=0}^{K-1}\hat{q}_{c,l}(x)}\right.\notag\\
&\left.-\frac{\sum_{j=1}^{J_c}\1_{\{k\}}(y_{s_{c,j}})w_{c,j}
(x_d-x_{s_{c,j},d})^2G_{v_c}(x-x_{s_{c,j}})}
{\hat{q}_{c,k}(x)}\right)\notag\\
\frac{\partial}{\partial w_{c,i}}\log(\hat{p}_{c,k}(x))&
=\frac{\frac{\partial}{\partial w_{c,i}}
\sum_{j=1}^{J_c}\1_{\{k\}}(y_{s_{c,j}})w_{c,j}G_{v_c}(x-x_{s_{c,j}})}
{\hat{q}_{c,k}(x)}
\label{eq_p_partial_w}\\
&-\frac{\frac{\partial}{\partial w_{c,i}}
\sum_{j=1}^{J_c}w_{c,j}G_{v_c}(x-x_{s_{c,j}})}
{\sum_{l=0}^{K-1}\hat{q}_{c,l}(x)}\notag\\
&=\frac{\1_{\{k\}}(y_{s_{c,i}})G_{v_c}(x-x_{s_{c,i}})}
{\hat{q}_{c,k}(x)}
-\frac{G_{v_c}(x-x_{s_{c,i}})}
{\sum_{l=0}^{K-1}\hat{q}_{c,l}(x)}\notag
\end{align}
%
The partial derivatives of the objective function on the first orthant are
%
\begin{align}
\frac{\partial}{\partial v_{c,d}}f(v_c,\{w_{c,j}\}_j|\{x_n\}_n)
&=-\frac{1}{N}\sum_{k=0}^{K-1}\frac{N_k}{N_k-J_{c,k}}\sum_{n\in T_c}
\1_{\{k\}}(y_n)\frac{\partial}{\partial v_{c,d}}\log(\hat{p}_{c,k}(x_n))
\label{eq_l_partial_v}\\
&+\lambda_v(\alpha_v v_{c,d}+(1-\alpha_v))\notag\\
\frac{\partial}{\partial w_{c,j}}f(v_c,\{w_{c,j}\}_j|\{x_n\}_n)
&=-\frac{1}{N}\sum_{k=0}^{K-1}\frac{N_k}{N_k-J_{c,k}}\sum_{n\in T_c}
\1_{\{k\}}(y_n)\frac{\partial}{\partial w_{c,j}}\log(\hat{p}_{c,k}(x_n))
\label{eq_l_partial_w}\\
&+\lambda_w(\alpha_w w_{c,j}+(1-\alpha_w))\notag
\end{align}
%
It remains to consider how to choose the prototypes for each batch and the starting points for optimization.
For the former, we score the model for iteration $c-1$ on all samples to get a probability distribution for each $Y_n$.
A sample is considered correctly classified iff the probability assigned to its true class is greater than that for the other classes.
This enables us to split the samples into $2K$ bins, i.e., the correctly and incorrectly classified samples for each class.
We now draw prototypes from these bins as evenly as possible, subject to the constraint that no bin is depleted, i.e., some samples from each bin should remain for scoring.\par
%
To make the above notion more precise, let $M>0$ be the total number of prototypes we want to consider for the new batch and $\eta\in(0,1)$ the maximum fraction of samples we want to take from one bin.
Also, let $g_1,\dots, g_{2K}\geq0$ be the number of samples actually available in each bin, which satisfy w.l.o.g.\ $g_1\leq g_2\leq\dots\leq g_{2K}$.
To arrive at a number of samples $h_1,\dots,h_{2K}$ to draw from each bin, we use the following algorithm:
%
\begin{samepage}
\begin{algorithm}[Number of prototypes per bin]~
\label{alg_bins}
%
\begin{description}
\item{[1]} Assign $i\leftarrow1$ and $R\leftarrow M$.
%
\item{[2]} Compute $r:=\frac{R}{2K+1-i}$.
%
\item{[3]} If $r\leq\eta g_i$: assign $h_i,\dots,h_{2K}\leftarrow r$ and go to [5].
%
\item{[4]} Assign $h_i\leftarrow\eta g_i$, $R\leftarrow R-h_i$, $i\leftarrow i+1$, and go to [2].
%
\item{[5]} Round each $h_i$ to the closest integer number.
\end{description}
\end{algorithm}
\end{samepage}
%
Thus, we draw the maximum number of samples from the smallest bin that has not been processed until the remaining bins are large enough to draw an equal amount from each.
In case $\sum_{i=1}^{2K}g_i<M$, we draw the maximum admissible number of samples from each bin.
In case $\eta g_i\geq\frac{M}{2K}$ for all $i$, we draw an even amount $\frac{M}{2K}$ from each bin.
Note that rounding or a lack of suitable samples may mean that we do not draw exactly $M$ candidates.\par
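A direct transcription of Algorithm \ref{alg_bins} into Python may clarify
the procedure (function name and example numbers are ours; the bin sizes must
be passed in ascending order):
\begin{verbatim}
def prototypes_per_bin(M, eta, g):
    """Number of candidates to draw from each of the 2K bins (sizes g, ascending)."""
    h = [0.0] * len(g)
    R = float(M)
    for i in range(len(g)):
        r = R / (len(g) - i)      # split the remainder evenly over the remaining bins
        if r <= eta * g[i]:
            for j in range(i, len(g)):
                h[j] = r          # all remaining bins can afford the even share
            break
        h[i] = eta * g[i]         # cap the small bin at its maximum
        R -= h[i]
    return [round(x) for x in h]

# example: K = 2 classes (4 bins), M = 100 candidates, eta = 0.5
print(prototypes_per_bin(100, 0.5, sorted([10, 40, 200, 300])))
\end{verbatim}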
%
\begin{remark}
This choice of sampling prototypes is motivated by the desire to give equal consideration to each class, as well as to `easy wins' -- samples that are classified correctly but could be assigned still higher probability -- and `hard cases' -- samples that are not classified correctly by the current iteration.
For a very unbalanced population, it may not be possible to treat the rare classes exactly like the frequent ones.
However, the rare classes are still assigned greater weight in model building than their proportion in the population.\par
%
Note also that at the start of iteration $c=1$, when the model consists only of the marginal probabilities, a sample is classified correctly iff it belongs to the most frequent class.
Thus, half of all bins are completely empty in this situation.
\end{remark}
%
In order to identify good prototypes for every class, we need to include samples from each class both among the candidates for prototypes and among the remaining samples used for scoring. Using the proposed algorithm for distribution, the following condition is necessary and sufficient:
%
\begin{align}
&\forall k\in\{0,\dots,K-1\}:\forall n\in\{0,\dots,N_k\}:\label{eq_min_cases}\\
&(\eta n\geq0.5\vee\eta(N_k-n)\geq0.5)\wedge(\eta n<n-0.5\vee\eta(N_k-n)<N_k-n-0.5)\notag
\end{align}
%
Here, $n$ represents the number of samples of class $k$ that are correctly classified in iteration $c-1$.
Thus, the first clause states that the fraction $\eta$ of either the correctly or incorrectly classified cases needs to be large enough to result in a single prototype being drawn after rounding.
Likewise, the second clause states that the fraction $\eta$ of either group needs to be small enough such that at least one sample remains for scoring.\par
%
\begin{lemma}
Condition (\ref{eq_min_cases}) can be restated more compactly as
%
\begin{equation}
\forall k\in\{0,\dots,K-1\}:\lceil0.5N_k\rceil\geq0.5\eta^{-1}\wedge\lceil0.5N_k\rceil>0.5(1-\eta)^{-1}\label{eq_min_cases_2}
\end{equation}
%
where the brackets indicate rounding up.
\end{lemma}
%
\paragraph{Proof:} We show first that for any $k$ the following holds:
%
\begin{equation}
\forall n\in\{0,\dots,N_k\}:\eta n\geq0.5\vee\eta(N_k-n)\geq0.5\iff\lceil0.5N_k\rceil\geq0.5\eta^{-1}\label{eq_condition_proof}
\end{equation}
%
The left-hand side implies the right-hand side if we choose $n=\lceil0.5N_k\rceil$.\par
%
The right-hand side implies the left-hand side since for any $n$, either $n$ or $N_k-n$ has to be greater or equal to $\lceil0.5N_k\rceil$.\par
%
For the remaining clauses, note that $\eta n<n-0.5$ is equivalent to $(1-\eta)n>0.5$, so that we can use a similar argument to the one above.$\quad\Box$\par
%
A check for condition (\ref{eq_min_cases_2}) is implemented in the software to ensure that samples of each class are available both as prototypes and for scoring.
However, it does not guarantee that the algorithm is able to find meaningful structure in the data.
For example, the recommended default $\eta=0.5$ requires only that $N_k\geq3$ for all $k$, which is inadequate for supervised learning.\par
%
Regarding the starting values for optimization, we initialize all $v_{c,d}$ to $10D^{-1}$ and all $w_{c,j}$ to 1.
The former choice assumes that the features have a scale on the order of magnitude of 1, e.g., they have been scaled to unit variance.
Thus, setting the inverse bandwidth for a single feature to 10 means the kernels for the starting solution describe a structure that is more granular than the whole distribution.
Dividing the weight by the total number of features ensures that the sum of squares in the exponent of (\ref{eq_kernel}) is always of a similar magnitude and the exponential function does not vanish.
A prototype weight of 1 is of the same order of magnitude as the marginal probabilities.
%
\begin{remark}
The optimization problem does in general have multiple stationary points, including the trivial solution where all weights are equal to zero.
Provided the features are scaled, the proposed starting value for the feature weights appears to work well in practice.\par
%
Scaling the features is also recommended for a different reason.
The size of the elastic net penalty can only be interpreted relative to the scale of the partial derivatives.
Thus, to penalize all features equally, they need to be equally scaled.
\end{remark}
%
Despite regularization, there is one situation in which the selection of prototypes is not parsimonious.
If multiple candidate points have the same target and feature values for all features with positive weights, the $L_2$-penalty causes the algorithm to assign equal weight to all candidates.
If this weight is positive, the model retains multiple copies of what is effectively the same prototype (the original samples can differ in the values of excluded features).
To simplify the model representation, we add a clean-up stage to the algorithm:\par
%
Each prototype in a batch is represented by the combined vector of the active features and target.
Two prototypes are considered equivalent if the maximum norm of the difference of their associated vectors does not exceed some small tolerance.
This notion is generalized from pairs to larger groups by finding all connected components in the graph where each node represents a prototype and an edge indicates pairwise equivalence.
We then replace all prototypes belonging to the same component by a single prototype using the combined weight, as well as the features of the sample appearing first in the training data.
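A sketch of this clean-up stage in Python (union-find over the pairwise
equivalences; the quadratic pair loop, the names, and the default tolerance
are our choices):
\begin{verbatim}
import numpy as np

def merge_duplicates(features, targets, weights, tol=1e-8):
    """Merge prototypes whose active features and target agree up to `tol`.

    features: (P, A) values of the active (positive-weight) features only
    targets:  (P,)   target of each prototype
    weights:  (P,)   prototype weights
    """
    n = len(weights)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):            # mark pairwise equivalences
        for j in range(i + 1, n):
            if targets[i] == targets[j] and \
               np.max(np.abs(features[i] - features[j])) <= tol:
                parent[find(j)] = find(i)

    groups = {}
    for i in range(n):            # collect connected components
        groups.setdefault(find(i), []).append(i)

    kept_f, kept_t, kept_w = [], [], []
    for members in groups.values():
        first = min(members)      # sample appearing first in the training data
        kept_f.append(features[first])
        kept_t.append(targets[first])
        kept_w.append(sum(weights[m] for m in members))
    return kept_f, kept_t, kept_w
\end{verbatim}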
%
\section{Fit strategy}
\label{sec_classifier_fit}
%
Fitting a proset classifier is controlled by seven hyperparameters:
%
\begin{itemize}
\item The number of batches $B$.
%
\item The number $M$ of candidates for prototypes evaluated per batch.
%
\item The maximum fraction $\eta$ of candidates drawn from one bin using Algorithm \ref{alg_bins}.
%
\item The penalty term $\lambda_v$ for the feature weights.
%
\item The penalty term $\lambda_w$ for the prototype weights.
%
\item The ratio $\alpha_v$ of $\lambda_v$ assigned as $L_2$ penalty term.
%
\item The ratio $\alpha_w$ of $\lambda_w$ assigned as $L_2$ penalty term.
\end{itemize}
%
One goal of the case studies in this section is to identify good default values and indicate which parameters are worthwhile to tune.
Some of the key findings are summarized here:
%
\begin{enumerate}
\item Fitting a few large batches is preferable to many small ones.
The first batch has the largest impact on the model score and a large first batch results in a better overall score.
We believe the reason for this is that model quality depends on finding good constellations of prototypes, not just individual prototypes.
Suitable constellations are more likely to occur in larger batches.\par
%
Adding too many batches to a model leads to saturation, not overfitting.
The number of prototypes chosen per batch decreases, possibly down to zero.
The corresponding model scores fluctuate slightly around a common level.
Thus, using a small number of batches is mostly a question of reducing computational effort.\par
%
We recommend $M=1,000$ candidates per batch and a single batch ($B=1$) as default.
If optimizing with respect to the number of batches, $B=10$ is a reasonable upper limit.
%
\item If the number of samples in a bin is too small for drawing the desired number of candidates, a compromise is to use half of the samples as candidates and the others as reference points for computing the likelihood.
Thus, we fix $\eta=0.5$.
%
\item The penalty $\lambda_v$ for the feature weights controls the smoothness of the model.
It is the main safeguard against overfitting.\par
%
In the case study for the classifier, good values lie in the range from $10^{-6}$ to $10^{-1}$.
Choosing $10^{-3}$ as default works for all cases.
However, if an experimenter wants to tune only one parameter, it should be $\lambda_v$.
%
\item The penalty $\lambda_w$ for the prototype weights controls how much weight can be assigned to a single prototype.
We observe that increasing $\lambda_w$ within a certain range can actually lead to more candidate points being included in the model with nonzero weight.
Further increases gradually lead to underfitting.
However, reducing $\lambda_w$ does not lead to appreciable overfitting.
It appears that values below a certain threshold mostly control the preference for few prototypes with large weights versus more prototypes with smaller weights.
Above the threshold, the estimated distribution is shrunk towards the marginal probabilities.
This is desirable to some degree for avoiding overfitting.\par
%
In the case study for the classifier, good values lie in the range from $10^{-9}$ to $10^{-4}$.
Choosing $10^{-8}$ as default works for all cases.
%
\item A dominant $L_2$-penalty ($\alpha_v$ and $\alpha_w$ close to 1.0) yields slightly better results in the case study than either a dominant $L_1$- or balanced penalty.
The default values recommended for the algorithm are $\alpha_v=\alpha_w=0.95$.
\end{enumerate}
%
The case study uses the following procedure for hyperparameter search, subject to small variations described later:
%
\begin{algorithm}[Hyperparameter selection]~
\label{alg_hyperparameters}
%
\begin{enumerate}
\item Choose $M$, $\eta$, $\alpha_v$, and $\alpha_w$ based on the above recommendations.
Choose a range for $\lambda_v$, a range for $\lambda_w$, and a set of candidates for $B$ (e.g., the numbers from 0 to 10).
%
\item Split the data into a training set (70 \%) and test set (30 \%), stratified by class.
%
\item\textbf{Stage 1}
%
\begin{enumerate}
\item Randomly generate 50 pairs $(\lambda_v,\lambda_w)$.
Each $\lambda$ is sampled uniformly on the log-scale from the chosen range.
%
\item Perform five-fold cross-validation on the training set using $B=1$.
%
\item Compute the mean and standard deviation of log-loss for each pair of penalties over the left-out folds.
%
\item Determine a threshold for model quality by taking the minimal mean log-loss and adding the corresponding standard deviation.
%
\item Among all pairs whose mean log-loss is less than or equal to the threshold, choose the one maximizing the geometric mean $\sqrt{\lambda_v\lambda_w}$.
\end{enumerate}
%
\item\textbf{Stage 2}
\begin{enumerate}
\item Perform five-fold cross-validation on the training set using the parameters from stage 1 and the maximal number of batches.
%
\item For each value of $B$ up to the maximum, evaluate the models on the left-out fold and compute mean and standard deviation of log-loss.
%
\item Determine a threshold for model quality by taking the minimal mean log-loss and adding the corresponding standard deviation.
%
\item Among all candidates whose mean log-loss is less than or equal to the threshold, choose the smallest $B$.
\end{enumerate}
%
\item Refit the model with parameters selected in stages 1 and 2 on all training data.
%
\item Score the final model on the test data.
\end{enumerate}
\end{algorithm}
%
The purpose of stage 1 is to identify good values for the most important parameters $\lambda_v$ and $\lambda_w$.
Based on the observation that the first batch has the most impact, we fix $B=1$ during this stage to save computation time.
Controlling $B$ can be treated as a secondary concern, since it appears to be impossible to overfit by increasing $B$.
It is still necessary to test larger values as more complex problems may be underfitted with $B=1$.\par
%
Note that stage 2 fits only five models (one per cross-validation fold) with the maximal number of batches, as these can also be evaluated for smaller choices of $B$.
This introduces a dependency between the means and standard deviations as estimates reuse the same initial batches.
However, as keeping $B$ small is a secondary concern, we consider this time-saving approach acceptable.\par
%
The parameters selected in each stage are not necessarily those that minimize mean cross-validation log-loss.
Instead, we consider all sets of parameters that are `equivalent' to the optimum in the sense that their mean log-loss is within one standard error of the optimum.
From these, we pick the parameters that yield the sparsest model.
This `1 SE rule' is a recommendation from R package \texttt{glmnet} \cite{Friedman_10}.
The authors observe that a model using the optimal parameters tends to overfit the cross-validation sample, which is mitigated by the rule.
The multiplier of 1 is of course arbitrary but a common `rule of thumb' in statistics.
In case multiple parameters control sparseness, we need to decide which set we consider `sparsest'.
For stage 1, maximizing the geometric mean gives equal importance to both penalties.\par
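A compact sketch of the stage-1 selection in Python, taking the
cross-validation summaries for the sampled penalty pairs as input (array
names are ours):
\begin{verbatim}
import numpy as np

def select_one_se(lambda_v, lambda_w, mean_loss, std_loss):
    """Apply the `1 SE rule' to stage-1 cross-validation results.

    All inputs are arrays with one entry per sampled pair (lambda_v, lambda_w).
    """
    best = np.argmin(mean_loss)
    threshold = mean_loss[best] + std_loss[best]
    geo_mean = np.sqrt(lambda_v * lambda_w)
    geo_mean[mean_loss > threshold] = -np.inf  # rule out pairs above the threshold
    chosen = np.argmax(geo_mean)               # sparsest admissible pair
    return chosen, threshold
\end{verbatim}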
%
As discussed in the introduction, we use the threshold obtained via the `1 SE rule' also to compare models derived via different fit strategies or classification algorithms.
For the classifier that performs best on testing data, the threshold found during stage 2 is used as upper bound on log-loss to determine which of the other classifiers are still considered `equivalent'.
%
\section{Benchmarks for hyperparameter selection}
\label{sec_classifier_benchmarks}
%
In this section, we test variations of the fit strategy outlined above on different benchmark cases.
We use four small data sets that come as `toy' examples with Python package \texttt{sklearn} \cite{Pedregosa_11}, plus two slightly larger artificial data sets:
%
\begin{enumerate}
\item\textbf{Iris 2f:} this data set consists of the first two features of Fisher's famous iris data set \cite{Fisher_36}.
We limit the analysis to two of four features (sepal length and width) as this allows us to visualize the decision surface of the classifier as a 2d plot.
The data set comprises 150 samples for three species of iris flower, with 50 samples per class.
One class is linearly separable from the others, but measurements for the remaining two overlap.
%
\item\textbf{Wine:} this data set from the UCI Machine Learning Repository \cite{Dua_19} consists of chemical analysis data for wines from three different cultivators.
It comprises 178 samples with between 48 and 71 samples per class.
The data is known to be separable \cite{Aeberhard_92}.
%
\item\textbf{Cancer:} this data set from the UCI Machine Learning Repository \cite{Dua_19} consists of medical analysis data from breast tissue samples.
The 569 samples are classified as either malignant (212 samples) or benign (357 samples).
%
\item\textbf{Digits:} this data set from the UCI Machine Learning Repository \cite{Dua_19} consists of monochrome images of handwritten digits downsampled to an eight-by-eight grid.
The 1,797 samples are approximately balanced among the 10 digits.
%
\item\textbf{Checker:} for this artificial data set, we sample two features uniformly on the unit square and assign class labels deterministically to create an eight-by-eight checkerboard.
The pattern defeats methods that rely on global properties of the data like correlation, e.g., logistic regression.
It can be recovered successfully by methods that model local structure, e.g., a k-nearest neighbor classifier or decision tree.
The total number of samples is 6,400, so each square of the pattern contains approximately 100 data points.
%
\item\textbf{XOR 6f:} for this artificial data set, we sample six features independently and uniformly on the interval $[-1.0, 1.0]$.
The class label is assigned deterministically based on the sign of the product of features: a positive (or zero) sign is class 1, a negative sign is class 0.
This is similar to the `continuous XOR' problem found in the \texttt{mlbench} library for R \cite{Leisch_21}, except that we only distinguish two classes.
Despite being deterministic, this problem appears to be hard even for classifiers that model local structure.
The total number of samples is 6,400, so each orthant of the feature space contains approximately 100 data points (a sketch for generating both artificial data sets follows this list).
\end{enumerate}
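The two artificial data sets can be generated along the following lines
(Python sketch; the seed, the helper names, and the parity-based cell
labelling for the checkerboard are our choices consistent with the
descriptions above):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(12345)

def make_checker(n=6400, cells=8):
    """Two uniform features on the unit square; class = parity of the grid cell."""
    X = rng.uniform(size=(n, 2))
    y = (np.floor(X[:, 0] * cells) + np.floor(X[:, 1] * cells)) % 2
    return X, y.astype(int)

def make_xor_6f(n=6400, d=6):
    """Six uniform features on [-1, 1]; class 1 iff the product of the features is >= 0."""
    X = rng.uniform(-1.0, 1.0, size=(n, d))
    y = (np.prod(X, axis=1) >= 0).astype(int)
    return X, y
\end{verbatim}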
%
The first three experiments consider the impact of $\alpha_v$ and $\alpha_w$ on model behavior.
Proset classifiers are fitted to all six data sets with fixed $M=1,000$ and $\eta=0.5$.
Penalty weights are sampled from the ranges $\lambda_v\in(10^{-6},10^{-1})$ and $\lambda_w\in(10^{-9},10^{-4})$, while the number of batches is allowed to vary between 0 and 10.
The first experiment uses a dominant $L_1$-penalty ($\alpha_v=\alpha_w=0.05$), the second uses balanced penalties ($\alpha_v=\alpha_w=0.50$), and the third a dominant $L_2$-penalty ($\alpha_v=\alpha_w=0.95$).\par
%
\begin{figure}
\caption{[E1] Parameter search for Iris 2F data}
\label{fig_parameter_search}
%
\begin{center}
\subfloat[\textbf{Stage 1}]{\includegraphics[height=0.4\textheight]{figures/iris_2f_dominant_l1_2d_search.pdf}}\\
\subfloat[\textbf{Both stages}]{\includegraphics[width=0.99\textwidth]{figures/iris_2f_dominant_l1_1d_search.pdf}}
\end{center}
\end{figure}
%
Figure \ref{fig_parameter_search} shows parameter search results for the first experiment using iris data.
In the lower plot, the log-loss for stage 1 as a function of either $\lambda_v$ or $\lambda_w$ appears highly variable.
However, this is mostly due to changes in the other parameter, as evidenced by the surface plot.
Complete results for all data sets are presented in tables \ref{tab_e1}, \ref{tab_e2}, and \ref{tab_e3}.
A comparison of the first six experiments is found in table \ref{tab_e1_to_e6}.
The description of individual experiments comprises the following information:
%
\begin{description}
\item[Data:] the number of classes, as well as the size of the whole data set and train-test split.
%
\item[Candidates:] the approximate number of candidates for prototypes used to build the final model.
While $M=1,000$ candidates are specified for each model, the effective maximum for small data sets is around 35 \% of the available samples: training data is 70 \% of all samples and $\eta=0.5$ allows at most 50 \% of data in one bin to be used as prototypes.
%
\item[Stage 1:] results for selecting $\lambda_v$ and $\lambda_w$ using a single batch.
Lists the optimal and chosen parameters according to the `1 SE rule' (see algorithm \ref{alg_hyperparameters}), together with the achieved mean log-loss from cross-validation.
The given threshold is the one for the `1 SE rule', i.e., the sum of the minimal mean log-loss and corresponding standard deviation.
%
\item[Stage 2:] results for selecting $B$ using the penalty weights chosen in stage 1.
%
\item[Final model:] information about the final model fitted on all training data with parameters determined in stages 1 and 2.
The number of features and prototypes, as well as the scores achieved for the test data.
Apart from log-loss, the measures used for evaluation are ROC-AUC and balanced accuracy.\par
%
For multi-class problems, the stated ROC-AUC value is the unweighted average for all pairwise comparisons of two classes.
This generalization to more than two classes is recommended in \cite{Hand_01} as being robust to class imbalance.\par
%
To compute balanced accuracy, we use the `naive' rule that assigns each sample the class label with the highest estimated probability.
The reported score is the unweighted average of the correct classification rates for each class.
\end{description}
%
\begin{table}
\caption{[E1] Randomized search for $\lambda_v$ and $\lambda_w$ ($\alpha_v=\alpha_w=0.05$)}
\label{tab_e1}
%
\begin{center}
\small
\begin{tabular}{|lrrrrrr|}
\hline
&\multicolumn{6}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Iris 2f}&\textbf{Wine}&\textbf{Cancer}&\textbf{Digits}&\textbf{Checker}&\textbf{XOR 6f}\\
\multicolumn{7}{|l|}{\textbf{Data}}\\
Classes&3&3&2&10&2&2\\
Features&2&13&30&64&2&6\\
Samples&150&178&569&1,797&6,400&6,400\\
Train samples&105&124&398&1,257&4,480&4,480\\
Test samples&45&54&171&540&1,920&1,920\\
\textbf{Candidates}&$\sim50$&$\sim60$&$\sim200$&$\sim630$&1,000&1,000\\
\multicolumn{7}{|l|}{\textbf{Stage 1}}\\
Optimal $\lambda_v$&$1.1\times10^{-2}$&$6.9\times10^{-4}$&$5.5\times10^{-3}$&$1.2\times10^{-5}$&$4.5\times10^{-3}$&$1.0\times10^{-2}$\\
Selected $\lambda_v$&$1.1\times10^{-2}$&$3.5\times10^{-3}$&$1.1\times10^{-2}$&$1.8\times10^{-3}$&$5.5\times10^{-3}$&$1.1\times10^{-2}$\\
Optimal $\lambda_w$&$1.0\times10^{-5}$&$1.1\times10^{-6}$&$8.3\times10^{-8}$&$3.4\times10^{-9}$&$1.3\times10^{-8}$&$4.6\times10^{-7}$\\
Selected $\lambda_w$&$1.8\times10^{-5}$&$3.1\times10^{-5}$&$1.8\times10^{-5}$&$7.7\times10^{-8}$&$8.3\times10^{-8}$&$1.0\times10^{-5}$\\
Optimal log-loss&0.47&0.12&0.09&0.16&0.17&0.53\\
Threshold&0.60&0.17&0.14&0.19&0.18&0.55\\
Selected log-loss&0.54&0.12&0.09&0.19&0.18&0.55\\
\multicolumn{7}{|l|}{\textbf{Stage 2}}\\
Optimal batches&2&5&1&9&1&1\\
Selected batches&2&2&1&1&1&1\\
Optimal log-loss&0.46&0.12&0.10&0.18&0.19&0.55\\
Threshold&0.54&0.16&0.14&0.22&0.20&0.56\\
Selected log-loss&0.46&0.13&0.10&0.19&0.19&0.55\\
\multicolumn{7}{|l|}{\textbf{Final model, scores for test data}}\\
Active features&2&6&3&19&2&6\\
Prototypes&34&43&24&546&257&309\\
Log-loss&0.62&0.14&0.13&0.15&0.17&0.55\\
ROC-AUC&0.86&0.99&0.99&1.00&0.99&0.81\\
Balanced acc.&0.76&0.97&0.94&0.97&0.95&0.71\\
\hline
\end{tabular}
\end{center}
\end{table}
%
\begin{table}
\caption{[E2] Randomized search for $\lambda_v$ and $\lambda_w$ ($\alpha_v=\alpha_w=0.50$)}
\label{tab_e2}
%
\begin{center}
\small
\begin{tabular}{|lrrrrrr|}
\hline
&\multicolumn{6}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Iris 2f}&\textbf{Wine}&\textbf{Cancer}&\textbf{Digits}&\textbf{Checker}&\textbf{XOR 6f}\\
\multicolumn{7}{|l|}{\textbf{Data}}\\
Classes&3&3&2&10&2&2\\
Features&2&13&30&64&2&6\\
Samples&150&178&569&1,797&6,400&6,400\\
Train samples&105&124&398&1,257&4,480&4,480\\
Test samples&45&54&171&540&1,920&1,920\\
\textbf{Candidates}&$\sim50$&$\sim60$&$\sim200$&$\sim630$&1,000&1,000\\
\multicolumn{7}{|l|}{\textbf{Stage 1}}\\
Optimal $\lambda_v$&$5.6\times10^{-3}$&$1.0\times10^{-2}$&$5.5\times10^{-3}$&$1.2\times10^{-5}$&$9.5\times10^{-4}$&$5.5\times10^{-3}$\\
Selected $\lambda_v$&$1.1\times10^{-2}$&$3.5\times10^{-3}$&$1.1\times10^{-2}$&$4.5\times10^{-3}$&$9.5\times10^{-4}$&$1.0\times10^{-2}$\\
Optimal $\lambda_w$&$1.7\times10^{-5}$&$4.6\times10^{-7}$&$8.3\times10^{-8}$&$3.4\times10^{-9}$&$5.6\times10^{-9}$&$8.3\times10^{-8}$\\
Selected $\lambda_w$&$1.0\times10^{-5}$&$3.1\times10^{-5}$&$1.8\times10^{-5}$&$1.3\times10^{-8}$&$5.6\times10^{-9}$&$4.6\times10^{-7}$\\
Optimal log-loss&0.45&0.10&0.09&0.17&0.17&0.53\\
Threshold&0.51&0.17&0.14&0.20&0.17&0.54\\
Selected log-loss&0.46&0.13&0.09&0.19&0.17&0.54\\
\multicolumn{7}{|l|}{\textbf{Stage 2}}\\
Optimal batches&4&10&1&3&1&1\\
Selected batches&3&2&1&1&1&1\\
Optimal log-loss&0.46&0.13&0.10&0.16&0.18&0.54\\
Threshold&0.49&0.16&0.14&0.19&0.19&0.55\\
Selected log-loss&0.47&0.14&0.10&0.18&0.18&0.54\\
\multicolumn{7}{|l|}{\textbf{Final model, scores for test data}}\\
Active features&2&7&3&18&2&6\\
Prototypes&30&69&54&533&281&360\\
Log-loss&0.49&0.20&0.13&0.18&0.16&0.53\\
ROC-AUC&0.90&0.98&0.99&1.00&0.99&0.82\\
Balanced acc.&0.73&0.93&0.95&0.97&0.95&0.72\\
\hline
\end{tabular}
\end{center}
\end{table}
%
\begin{table}
\caption{[E3] Randomized search for $\lambda_v$ and $\lambda_w$ ($\alpha_v=\alpha_w=0.95$)}
\label{tab_e3}
%
\begin{center}
\small
\begin{tabular}{|lrrrrrr|}
\hline
&\multicolumn{6}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Iris 2f}&\textbf{Wine}&\textbf{Cancer}&\textbf{Digits}&\textbf{Checker}&\textbf{XOR 6f}\\
\multicolumn{7}{|l|}{\textbf{Data}}\\
Classes&3&3&2&10&2&2\\
Features&2&13&30&64&2&6\\
Samples&150&178&569&1,797&6,400&6,400\\
Train samples&105&124&398&1,257&4,480&4,480\\
Test samples&45&54&171&540&1,920&1,920\\
\textbf{Candidates}&$\sim50$&$\sim60$&$\sim200$&$\sim630$&1,000&1,000\\
\multicolumn{7}{|l|}{\textbf{Stage 1}}\\
Optimal $\lambda_v$&$5.6\times10^{-3}$&$3.8\times10^{-5}$&$5.5\times10^{-3}$&$7.8\times10^{-6}$&$3.1\times10^{-5}$&$5.5\times10^{-3}$\\
Selected $\lambda_v$&$1.1\times10^{-2}$&$1.1\times10^{-2}$&$1.1\times10^{-2}$&$9.5\times10^{-4}$&$2.9\times10^{-4}$&$5.5\times10^{-3}$\\
Optimal $\lambda_w$&$1.7\times10^{-5}$&$6.9\times10^{-9}$&$8.3\times10^{-8}$&$1.6\times10^{-9}$&$4.3\times10^{-9}$&$8.3\times10^{-8}$\\
Selected $\lambda_w$&$1.0\times10^{-5}$&$1.8\times10^{-5}$&$1.8\times10^{-5}$&$5.6\times10^{-9}$&$6.9\times10^{-8}$&$8.3\times10^{-8}$\\
Optimal log-loss&0.44&0.11&0.09&0.17&0.18&0.53\\
Threshold&0.49&0.18&0.14&0.19&0.20&0.54\\
Selected log-loss&0.46&0.17&0.11&0.18&0.19&0.53\\
\multicolumn{7}{|l|}{\textbf{Stage 2}}\\
Optimal batches&5&5&1&8&1&10\\
Selected batches&3&2&1&2&1&1\\
Optimal log-loss&0.51&0.14&0.10&0.14&0.19&0.51\\
Threshold&0.57&0.17&0.14&0.17&0.20&0.53\\
Selected log-loss&0.56&0.15&0.10&0.16&0.19&0.53\\
\multicolumn{7}{|l|}{\textbf{Final model, scores for test data}}\\
Active features&2&7&4&30&2&6\\
Prototypes&49&52&56&861&328&413\\
Log-loss&0.43&0.14&0.13&0.15&0.18&0.52\\
ROC-AUC&0.91&1.00&0.99&1.00&0.99&0.82\\
Balanced acc.&0.71&0.98&0.98&0.97&0.95&0.72\\
\hline
\end{tabular}
\end{center}
\end{table}
%
\clearpage
%
In the experiments, a dominant $L_1$-penalty yields slightly worse results than balanced penalties or a dominant $L_2$-penalty.
Also, we have occasionally observed (not in the results reported here) that a dominant $L_1$-penalty causes too few features to be selected for the final model.
This happens even though models created during cross-validation do not underfit.
The likely reason is that random candidate selection for small sample sizes can result in a set of candidates that does not fully represent the data.
Since a dominant $L_2$-penalty slightly outperforms the balanced case in the trials, we recommend $\alpha_v=\alpha_w=0.95$ as default values.\par
%
\begin{table}
\caption{[E4] Use optimal hyperparameters ($\alpha_v=\alpha_w=0.95$)}
\label{tab_e4}
%
\begin{center}
\small
\begin{tabular}{|lrrrrrr|}
\hline
&\multicolumn{6}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Iris 2f}&\textbf{Wine}&\textbf{Cancer}&\textbf{Digits}&\textbf{Checker}&\textbf{XOR 6f}\\
\multicolumn{7}{|l|}{\textbf{Data}}\\
Classes&3&3&2&10&2&2\\
Features&2&13&30&64&2&6\\
Samples&150&178&569&1,797&6,400&6,400\\
Train samples&105&124&398&1,257&4,480&4,480\\
Test samples&45&54&171&540&1,920&1,920\\
\textbf{Candidates}&$\sim50$&$\sim60$&$\sim200$&$\sim630$&1,000&1,000\\
\multicolumn{7}{|l|}{\textbf{Stage 1}}\\
Optimal $\lambda_v$&$5.6\times10^{-3}$&$3.8\times10^{-5}$&$5.5\times10^{-3}$&$7.8\times10^{-6}$&$3.1\times10^{-5}$&$5.5\times10^{-3}$\\
Optimal $\lambda_w$&$1.7\times10^{-5}$&$6.9\times10^{-9}$&$8.3\times10^{-8}$&$1.6\times10^{-9}$&$4.3\times10^{-9}$&$8.3\times10^{-8}$\\
Optimal log-loss&0.44&0.11&0.09&0.17&0.18&0.53\\
Threshold&0.49&0.18&0.14&0.19&0.20&0.54\\
\multicolumn{7}{|l|}{\textbf{Stage 2}}\\
Optimal batches&5&5&1&8&1&10\\
Optimal log-loss&0.51&0.14&0.10&0.14&0.19&0.51\\
Threshold&0.57&0.17&0.14&0.17&0.20&0.53\\
\multicolumn{7}{|l|}{\textbf{Final model, scores for test data}}\\
Active features&2&11&3&41&2&6\\
Prototypes&65&204&93&3,053&595&403\\
Log-loss&0.48&0.05&0.11&0.13&0.16&0.52\\
ROC-AUC&0.87&1.00&0.99&1.00&0.99&0.83\\
Balanced acc.&0.69&0.98&0.95&0.97&0.94&0.74\\
\hline
\end{tabular}
\end{center}
\end{table}
%
The fourth experiment shows the effect of using the optimal parameters from cross-validation instead of the equivalent sparser solution (see table \ref{tab_e4}).
In half of the cases, results actually outperform those of the first three experiments.
However, the log-loss scores for [E3] are still below the equivalence thresholds for [E4] in these cases.
This means the metrics for the benchmark cases do not indicate that the `1 SE rule' is necessary to prevent overfitting, but we can use it to obtain a sparser parameterization with negligible loss in quality.
%
\clearpage
%
\begin{figure}
\caption{Proset decision surfaces for case Iris 2f}
\label{fig_proset_decision_iris_2f}
%
\begin{center}
\subfloat[\textbf{[E1] $\alpha_v=\alpha_w=0.05$, `1 SE rule'}]{\includegraphics[width=0.49\textwidth]{figures/iris_2f_dominant_l1_surf_test.pdf}}
\subfloat[\textbf{[E2] $\alpha_v=\alpha_w=0.50$, `1 SE rule'}]{\includegraphics[width=0.49\textwidth]{figures/iris_2f_balanced_surf_test.pdf}}\\
\subfloat[\textbf{[E3] $\alpha_v=\alpha_w=0.95$, `1 SE rule'}]{\includegraphics[width=0.49\textwidth]{figures/iris_2f_dominant_l2_surf_test.pdf}}
\subfloat[\textbf{[E4] $\alpha_v=\alpha_w=0.95$, optimal hyperparameters}]{\includegraphics[width=0.49\textwidth]{figures/iris_2f_dominant_l2_surf_opt_test.pdf}}
\end{center}
\end{figure}
%
If we study the decision surfaces for the Iris 2f benchmark, it does look like [E4] has a slight tendency to overfit.
Figure \ref{fig_proset_decision_iris_2f} shows the surface plots for experiments [E1] to [E4].
The colors indicate the class with the highest estimated probability in that region of the feature space.
Only the result for [E3] appears fully convincing to us.
It separates the feature space near the training data into three contiguous regions with smooth boundaries.
The surfaces for [E1] and [E4] appear overly complex and somewhat arbitrary, which we take as a visual indicator of overfitting.
[E2] oddly prefers class `virginica' in the lower left, although there is no training data in this region.
This is due to the layering of kernels with different bandwidths in a model with multiple batches.\par
%
\clearpage
%
\begin{table}
\caption{[E5] Grid search for $\lambda_v$ ($\lambda_w=10^{-8}$ and $\alpha_v=\alpha_w=0.95$)}
\label{tab_e5}
%
\begin{center}
\small
\begin{tabular}{|lrrrrrr|}
\hline
&\multicolumn{6}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Iris 2f}&\textbf{Wine}&\textbf{Cancer}&\textbf{Digits}&\textbf{Checker}&\textbf{XOR 6f}\\
\multicolumn{7}{|l|}{\textbf{Data}}\\
Classes&3&3&2&10&2&2\\
Features&2&13&30&64&2&6\\
Samples&150&178&569&1,797&6,400&6,400\\
Train samples&105&124&398&1,257&4,480&4,480\\
Test samples&45&54&171&540&1,920&1,920\\
\textbf{Candidates}&$\sim50$&$\sim60$&$\sim200$&$\sim630$&1,000&1,000\\
\multicolumn{7}{|l|}{\textbf{Stage 1}}\\
Optimal $\lambda_v$&$1.0\times10^{-2}$&$3.2\times10^{-6}$&$1.0\times10^{-2}$&$1.0\times10^{-4}$&$3.2\times10^{-4}$&$1.0\times10^{-2}$\\
Selected $\lambda_v$&$1.0\times10^{-2}$&$3.2\times10^{-2}$&$3.2\times10^{-2}$&$3.2\times10^{-2}$&$3.2\times10^{-4}$&$1.0\times10^{-2}$\\
Optimal log-loss&0.51&0.11&0.08&0.20&0.17&0.52\\
Threshold&0.63&0.16&0.12&0.22&0.18&0.53\\
Selected log-loss&0.51&0.14&0.10&0.21&0.17&0.52\\
\multicolumn{7}{|l|}{\textbf{Stage 2}}\\
Optimal batches&10&1&1&2&1&1\\
Selected batches&1&1&1&1&1&1\\
Optimal log-loss&0.60&0.11&0.10&0.21&0.17&0.52\\
Threshold&0.71&0.21&0.15&0.26&0.19&0.53\\
Selected log-loss&0.67&0.11&0.10&0.21&0.17&0.52\\
\multicolumn{7}{|l|}{\textbf{Final model, scores for test data}}\\
Active features&2&4&3&18&2&6\\
Prototypes&25&25&69&410&409&421\\
Log-loss&0.47&0.12&0.13&0.18&0.17&0.53\\
ROC-AUC&0.90&1.00&0.99&1.00&0.99&0.81\\
Balanced acc.&0.73&0.97&0.96&0.97&0.95&0.71\\
\hline
\end{tabular}
\end{center}
\end{table}
%
The fifth experiment shows what happens if we fix $\lambda_w$ and perform cross-validation only with respect to $\lambda_v$ and $B$.
Based on the previous experiments, $\lambda_w=10^{-8}$ appears to be a suitable choice.
As stage 1 now deals with a single parameter, we replace the random search with a grid search using 11 points equidistantly spaced on the log-scale.
The results in table \ref{tab_e5} show that fixing the penalty on prototype weights has only a small impact on model quality.
All log-loss scores except that for the digits case are below the equivalence threshold for the best model found.
Therefore, $\lambda_w=10^{-8}$ is the recommended default for the proset classifier.\par
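%
A minimal sketch of how such a grid can be generated with \texttt{numpy} is shown below.
The search range is an assumption on our part; half-decade steps from $10^{-6}$ to $10^{-1}$ are consistent with the values reported in table \ref{tab_e5} (e.g., $3.2\times10^{-6}$, $3.2\times10^{-4}$, and $3.2\times10^{-2}$).
\begin{verbatim}
import numpy as np

# 11 grid points spaced equidistantly on the log scale; the bounds 1e-6 and
# 1e-1 are assumptions for illustration, not taken from the study itself.
lambda_v_grid = np.logspace(-6.0, -1.0, num=11)
# approximately: 1e-6, 3.2e-6, 1e-5, 3.2e-5, ..., 1e-2, 3.2e-2, 1e-1
\end{verbatim}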
%
\clearpage
%
\begin{table}
\caption{[E6] Stage 2 only ($\lambda_v=10^{-3}$, $\lambda_w=10^{-8}$, and $\alpha_v=\alpha_w=0.95$)}
\label{tab_e6}
%
\begin{center}
\small
\begin{tabular}{|lrrrrrr|}
\hline
&\multicolumn{6}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Iris 2f}&\textbf{Wine}&\textbf{Cancer}&\textbf{Digits}&\textbf{Checker}&\textbf{XOR 6f}\\
\multicolumn{7}{|l|}{\textbf{Data}}\\
Classes&3&3&2&10&2&2\\
Features&2&13&30&64&2&6\\
Samples&150&178&569&1,797&6,400&6,400\\
Train samples&105&124&398&1,257&4,480&4,480\\
Test samples&45&54&171&540&1,920&1,920\\
\textbf{Candidates}&$\sim50$&$\sim60$&$\sim200$&$\sim630$&1,000&1,000\\
\multicolumn{7}{|l|}{\textbf{Stage 2}}\\
Optimal batches&9&8&9&7&1&10\\
Selected batches&4&2&1&4&1&3\\
Optimal log-loss&0.54&0.10&0.10&0.15&0.19&0.50\\
Threshold&0.72&0.17&0.15&0.17&0.20&0.52\\
Selected log-loss&0.71&0.15&0.12&0.16&0.19&0.51\\
\multicolumn{7}{|l|}{\textbf{Final model, scores for test data}}\\
Active features&2&7&4&34&2&6\\
Prototypes&82&72&149&1,270&252&1,261\\
Log-loss&0.45&0.16&0.13&0.12&0.18&0.49\\
ROC-AUC&0.89&0.99&0.99&1.00&0.99&0.84\\
Balanced acc.&0.76&0.95&0.95&0.98&0.95&0.75\\
\hline
\end{tabular}
\end{center}
\end{table}
%
The sixth experiment considers fixing both $\lambda_w$ and $\lambda_v$ (see table \ref{tab_e6}).
The former is again set to $\lambda_w=10^{-8}$, the latter to $\lambda_v=10^{-3}$, which is close to the parameters selected in previous experiments.
The mean scores for the digits and XOR 6f data are the best for any of the experiments.
All other results are equivalent to the best model.
Based on this finding, we recommend $\lambda_v=10^{-3}$ as default for the classifier.\par
%
\begin{table}
\caption{[E1--E6] Comparison of results (best log-loss bold)}
\label{tab_e1_to_e6}
%
\begin{center}
\small
\begin{tabular}{|lrrrrrr|}
\hline
&\multicolumn{6}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Iris 2f}&\textbf{Wine}&\textbf{Cancer}&\textbf{Digits}&\textbf{Checker}&\textbf{XOR 6f}\\
\multicolumn{7}{|l|}{\textbf{[E1] Randomized search for $\lambda_v$ and $\lambda_w$ ($\alpha_v=\alpha_w=0.05$)}}\\
Active features&2&6&3&19&2&6\\
Prototypes&34&43&24&546&257&309\\
Log-loss&0.62&0.14&0.13&0.15&0.17&0.55\\
Threshold stage 2&0.54&0.16&0.14&0.22&0.20&0.56\\
ROC-AUC&0.86&0.99&0.99&1.00&0.99&0.81\\
Balanced acc.&0.76&0.97&0.94&0.97&0.95&0.71\\
\multicolumn{7}{|l|}{\textbf{[E2] Randomized search for $\lambda_v$ and $\lambda_w$ ($\alpha_v=\alpha_w=0.50$)}}\\
Active features&2&7&3&18&2&6\\
Prototypes&30&69&54&533&281&360\\
Log-loss&0.49&0.20&0.13&0.18&\textbf{0.16}&0.53\\
Threshold stage 2&0.49&0.16&0.14&0.19&0.19&0.55\\
ROC-AUC&0.90&0.98&0.99&1.00&0.99&0.82\\
Balanced acc.&0.73&0.93&0.95&0.97&0.95&0.72\\
\multicolumn{7}{|l|}{\textbf{[E3] Randomized search for $\lambda_v$ and $\lambda_w$ ($\alpha_v=\alpha_w=0.95$)}}\\
Active features&2&7&4&30&2&6\\
Prototypes&49&52&56&861&328&413\\
Log-loss&\textbf{0.43}&0.14&0.13&0.15&0.18&0.52\\
Threshold stage 2&0.57&0.17&0.14&0.17&0.20&0.53\\
ROC-AUC&0.91&1.00&0.99&1.00&0.99&0.82\\
Balanced acc.&0.71&0.98&0.98&0.97&0.95&0.72\\
\multicolumn{7}{|l|}{\textbf{[E4] Use optimal hyperparameters ($\alpha_v=\alpha_w=0.95$)}}\\
Active features&2&11&3&41&2&6\\
Prototypes&65&204&93&3,053&595&403\\
Log-loss&0.48&\textbf{0.05}&\textbf{0.11}&0.13&\textbf{0.16}&0.52\\
Threshold stage 2&0.57&0.17&0.14&0.17&0.20&0.53\\
ROC-AUC&0.87&1.00&0.99&1.00&0.99&0.83\\
Balanced acc.&0.69&0.98&0.95&0.97&0.94&0.74\\
\multicolumn{7}{|l|}{\textbf{[E5] Grid search for $\lambda_v$ ($\lambda_w=10^{-8}$ and $\alpha_v=\alpha_w=0.95$)}}\\
Active features&2&4&3&18&2&6\\
Prototypes&25&25&69&410&409&421\\
Log-loss&0.47&0.12&0.13&0.18&0.17&0.53\\
Threshold stage 2&0.71&0.21&0.15&0.26&0.19&0.53\\
ROC-AUC&0.90&1.00&0.99&1.00&0.99&0.81\\
Balanced acc.&0.73&0.97&0.96&0.97&0.95&0.71\\
\multicolumn{7}{|l|}{\textbf{[E6] Stage 2 only ($\lambda_v=10^{-3}$, $\lambda_w=10^{-8}$, and $\alpha_v=\alpha_w=0.95$)}}\\
Active features&2&7&4&34&2&6\\
Prototypes&82&72&149&1,270&252&1,261\\
Log-loss&0.45&0.16&0.13&\textbf{0.12}&0.18&\textbf{0.49}\\
Threshold stage 2&0.72&0.17&0.15&0.17&0.20&0.52\\
ROC-AUC&0.89&0.99&0.99&1.00&0.99&0.84\\
Balanced acc.&0.76&0.95&0.95&0.98&0.95&0.75\\
\hline
\end{tabular}
\end{center}
\end{table}
%
\clearpage
%
\begin{table}
\caption{[E7] Stage 2, vary candidates ($\lambda_v=10^{-3}$, $\lambda_w=10^{-8}$, and $\alpha_v=\alpha_w=0.95$)}
\label{tab_e7}
%
\begin{center}
\small
\begin{tabular}{|lrrrrrr|}
\hline
&\multicolumn{6}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Checker}&\textbf{Checker}&\textbf{Checker}&\textbf{XOR 6f}&\textbf{XOR 6f}&\textbf{XOR 6f}\\
\multicolumn{7}{|l|}{\textbf{Data}}\\
Classes&2&2&2&2&2&2\\
Features&2&2&2&6&6&6\\
Samples&6,400&6,400&6,400&6,400&6,400&6,400\\
Train samples&4,480&4,480&4,480&4,480&4,480&4,480\\
Test samples&1,920&1,920&1,920&1,920&1,920&1,920\\
\textbf{Candidates}&100&300&1,500&100&300&1,500\\
\multicolumn{7}{|l|}{\textbf{Stage 2}}\\
Optimal batches&3&1&1&10&9&10\\
Selected batches&2&1&1&7&4&4\\
Optimal log-loss&0.41&0.29&0.18&0.57&0.53&0.50\\
Threshold&0.42&0.31&0.19&0.58&0.55&0.51\\
Selected log-loss&0.41&0.29&0.18&0.57&0.54&0.51\\
\multicolumn{7}{|l|}{\textbf{Final model, scores for test data}}\\
Active features&2&2&2&6&6&6\\
Prototypes&112&125&300&365&487&2,188\\
Log-loss&0.43&0.28&0.17&0.57&0.54&0.48\\
ROC-AUC&0.90&0.97&0.99&0.78&0.80&0.85\\
Balanced acc.&0.79&0.90&0.96&0.70&0.72&0.76\\
\hline
\end{tabular}
\end{center}
\end{table}
%
The seventh experiment shows the impact of different batch sizes $M$ (see table \ref{tab_e7}).
It deals only with the checkerboard and XOR data, as the maximum batch size for the first four cases is limited by their small sample size.
For each of the two large data sets, we try using 100, 300, and 1,500 candidates instead of the default $M=1,000$.
The model metrics for 100 and 300 candidates are worse than before, while the results for 1,500 candidates are comparable.
Clearly, there is an advantage to using large batches as long as the sample size permits.
%
\section{Comparison with other classifiers}
\label{sec_classifier_comparison}
%
In this section, we compare the proset classifier to the $k$-nearest neighbor (kNN) and XGBoost methods.
The former is conceptually similar to proset, as it scores new points based on their proximity to training samples.
However, there are at least three major differences between proset and kNN:
%
\begin{enumerate}
\item kNN uses all training samples for scoring and gives equal importance to each sample.
Proset selects prototypes and assigns individual weights.
%
\item kNN relies on the user's choice of features and distance metric.
Irrelevant features or a badly scaled metric can degrade performance.
Proset adapts the metric to the problem and removes features that contribute little or nothing.
%
\item kNN has no notion of absolute distance.
The nearest neighbors have the same impact on classification no matter how far away they are from the sample being scored.
If that sample is far away from the training data, estimates become arbitrary.
For proset, remote prototypes have negligible impact on classification.
As the distance of a sample to the set of prototypes increases, the estimated probabilities converge to the marginals of the training data.
\end{enumerate}
%
XGBoost is a particular implementation of gradient boosting for decision trees (we do not consider other base learners in this study) \cite{Chen_16}.
It has become a kind of industry standard for supervised learning outside the domain of deep learning.
In terms of the three points stated above, XGBoost relates to proset and kNN as follows:
%
\begin{enumerate}
\item XGBoost does not retain any training samples.
It generates a set of rules in the form of decision trees and performs classification via weighted voting.
%
\item XGBoost relies only on the ordering of features, not on any kind of distance metric.
When building trees, the algorithm selects features using a greedy heuristic.
It is considered good practice to limit the number of features evaluated at each stage by subsampling \cite{Chen_16}.
This results in a more diverse set of trees with less tendency to overfit.
Even with this approach, features that have negligible impact on performance are unlikely to be selected.
%
\item Using XGBoost to score a sample that is far away from the training data gives arbitrary results.
The decision trees have not been validated in that part of the feature space.
\end{enumerate}
%
To fit kNN and XGBoost models, we follow the same general strategy as for proset.
We determine optimal hyperparameters using cross-validation and then choose an equivalent set of parameters that is less likely to overfit via the `1 SE rule' (see algorithm \ref{alg_hyperparameters}).
This is easy to implement for kNN, since the sole hyperparameter is the number of neighbors $k$.
Note that we choose the equivalent solution with the \textit{largest} $k$ to achieve a high degree of smoothing.\par
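%
A minimal sketch of this procedure, using scikit-learn, is shown below.
The threshold computation is a simplified stand-in for algorithm \ref{alg_hyperparameters}: we take the best mean cross-validated log-loss plus one standard error of that mean.
The feature scaling step and the range of $k$ values are likewise assumptions made for illustration.
\begin{verbatim}
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def fit_knn_1se(features, labels, k_values=range(1, 101), random_state=12345):
    """Grid search for k with a simple variant of the '1 SE rule'."""
    splitter = StratifiedKFold(n_splits=5, shuffle=True, random_state=random_state)
    means, errors = [], []
    for k in k_values:
        model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
        # cross_val_score returns the negative log-loss, so flip the sign
        scores = -cross_val_score(model, features, labels, cv=splitter,
                                  scoring="neg_log_loss")
        means.append(scores.mean())
        errors.append(scores.std(ddof=1) / np.sqrt(len(scores)))
    means = np.array(means)
    best = int(means.argmin())
    threshold = means[best] + errors[best]
    # select the LARGEST k whose mean log-loss does not exceed the threshold
    selected_k = max(k for k, mean in zip(k_values, means) if mean <= threshold)
    final = make_pipeline(StandardScaler(),
                          KNeighborsClassifier(n_neighbors=selected_k))
    return final.fit(features, labels), selected_k
\end{verbatim}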
%
For XGBoost, tuning is more complex as the number of hyperparameters that can be used to control the model fit is quite large.
Based on prior experience, we focus on the following five only, leaving all others at their recommended defaults:
%
\begin{enumerate}
\item Learning rate $\eta$.
Controls the impact of each additional tree on the estimator.
Choosing a smaller $\eta$ can increase model performance slightly but requires more boosting iterations.
%
\item Number of boosting iterations.
Has to be sufficiently large for the model to capture all of the structure in the data.
Increasing it further leads to saturation instead of overfitting, i.e., the model quality on test data fluctuates around a common level.
%
\item Maximum tree depth.
Controls the complexity of each tree.
Our experience is that XGBoost performs best if the individual trees underfit.
%
\item Fraction of features evaluated per split.
Using a random subset of the features in each split results in a more diverse set of trees.
This can result in a more favorable trade-off between bias and variance.
%
\item Fraction of records used for training each tree.
Using a random subset of the training samples to build each tree has a similar effect as randomizing the features.
\end{enumerate}
%
We follow a similar strategy as for proset (see algorithm \ref{alg_hyperparameters}) to fit XGBoost classifiers.
In the first stage, we fix $\eta=0.1$ and use 100 boosting iterations to determine suitable values for maximum tree depth, fraction of features per split, and fraction of records per tree.
Hyperparameter values are sampled randomly to generate 100 trial combinations:
%
\begin{itemize}
\item Maximum tree depth is sampled from 0 (constant model) to 9 with equal probability.
If the maximum was selected or model performance looked inadequate, we increased the upper bound.
%
\item Fraction of features per split is sampled uniformly between 0.1 and 0.9.
Note that for a model with two features, any value of 0.5 or above just means testing both, while values below 0.5 mean selecting one feature at random.
%
\item Fraction of records per tree is sampled uniformly between 0.1 and 0.9.
\end{itemize}
%
After five-fold cross-validation, we determine the parameter combination that minimizes log-loss, compute a threshold, and choose an equivalent combination.
Among all candidates, we use the one that minimizes the depth of the tree first, then the fraction of features per split, and finally the fraction of records per tree.\par
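%
The following sketch illustrates stage 1 using the scikit-learn wrapper of \texttt{xgboost}.
Several details are simplifications on our part: the `fraction of features per split' is mapped to \texttt{colsample\_bylevel}, the threshold is taken to be the mean log-loss plus one standard error, and depth 0 is omitted since representing a constant model via \texttt{max\_depth} is not straightforward in \texttt{xgboost}.
\begin{verbatim}
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier


def xgb_stage_1(features, labels, num_trials=100, max_depth_bound=9,
                random_state=12345):
    """Randomized stage 1 search over tree depth, colsample, and subsample."""
    generator = np.random.default_rng(random_state)
    splitter = StratifiedKFold(n_splits=5, shuffle=True, random_state=random_state)
    trials = []
    for _ in range(num_trials):
        params = {
            # the study also samples depth 0 (constant model), omitted here
            "max_depth": int(generator.integers(1, max_depth_bound + 1)),
            "colsample_bylevel": float(generator.uniform(0.1, 0.9)),
            "subsample": float(generator.uniform(0.1, 0.9)),
        }
        model = XGBClassifier(learning_rate=0.1, n_estimators=100, **params)
        scores = -cross_val_score(model, features, labels, cv=splitter,
                                  scoring="neg_log_loss")
        trials.append((params, scores.mean(),
                       scores.std(ddof=1) / np.sqrt(len(scores))))
    _, best_mean, best_se = min(trials, key=lambda trial: trial[1])
    threshold = best_mean + best_se
    equivalent = [params for params, mean, _ in trials if mean <= threshold]
    # tie-break: smallest depth first, then colsample, then subsample
    return min(equivalent, key=lambda p: (p["max_depth"], p["colsample_bylevel"],
                                          p["subsample"]))
\end{verbatim}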
%
In the second stage, we fix $\eta=0.01$ and use five-fold cross-validation to determine the number of boosting iterations up to a maximum of 10,000.
We again apply the `1 SE rule' to find an equivalent smaller number of iterations.
The model is then re-fitted to all training data and scored on test data.\par
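%
A sketch of stage 2 using the native cross-validation helper of \texttt{xgboost} follows.
The threshold is again assumed to be the best mean log-loss plus one standard error across the five folds, and the stage 1 result is passed in as a dictionary of native \texttt{xgboost} parameter names; the study's implementation may differ in these details.
\begin{verbatim}
import numpy as np
import xgboost as xgb


def xgb_stage_2(features, labels, stage_1_params, max_rounds=10000, seed=12345):
    """Choose the number of boosting iterations via 5-fold CV and '1 SE rule'."""
    num_classes = int(len(np.unique(labels)))
    params = dict(stage_1_params, eta=0.01)
    if num_classes > 2:
        params.update(objective="multi:softprob", num_class=num_classes)
        metric = "mlogloss"
    else:
        params["objective"] = "binary:logistic"
        metric = "logloss"
    params["eval_metric"] = metric
    data = xgb.DMatrix(features, label=labels)
    history = xgb.cv(params, data, num_boost_round=max_rounds, nfold=5,
                     stratified=True, seed=seed)
    mean = history["test-{}-mean".format(metric)].to_numpy()
    error = history["test-{}-std".format(metric)].to_numpy() / np.sqrt(5)
    best = int(mean.argmin())
    threshold = mean[best] + error[best]
    # smallest number of iterations whose mean log-loss is below the threshold
    selected_rounds = int(np.argmax(mean <= threshold)) + 1
    # re-fit on all training data with the selected number of iterations
    return xgb.train(params, data, num_boost_round=selected_rounds), selected_rounds
\end{verbatim}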
%
\begin{table}
\caption{[E8] $k$-nearest neighbor classifier with grid search for $k$}
\label{tab_e8}
%
\begin{center}
\small
\begin{tabular}{|lrrrrrr|}
\hline
&\multicolumn{6}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Iris 2f}&\textbf{Wine}&\textbf{Cancer}&\textbf{Digits}&\textbf{Checker}&\textbf{XOR 6f}\\
\multicolumn{7}{|l|}{\textbf{Data}}\\
Classes&3&3&2&10&2&2\\
Features&2&13&30&64&2&6\\
Samples&150&178&569&1,797&6,400&6,400\\
Train samples&105&124&398&1,257&4,480&4,480\\
Test samples&45&54&171&540&1,920&1,920\\
\multicolumn{7}{|l|}{\textbf{Cross-validation}}\\
Optimal $k$&12&4&29&8&10&14\\
Selected $k$&29&13&51&43&11&20\\
Optimal log-loss&0.47&0.10&0.15&0.16&0.23&0.54\\
Threshold&0.54&0.15&0.17&0.27&0.24&0.56\\
Selected log-loss&0.53&0.15&0.17&0.27&0.24&0.55\\
\multicolumn{7}{|l|}{\textbf{Final model, scores for test data}}\\
Log-loss&0.48&0.11&0.14&0.22&0.21&0.54\\
ROC-AUC&0.93&1.00&1.00&1.00&0.98&0.83\\
Balanced acc.&0.82&0.97&0.95&0.93&0.92&0.73\\
\hline
\end{tabular}
\end{center}
\end{table}
%
\begin{table}
\caption{[E9] XGBoost classifier with randomized parameter search}
\label{tab_e9}
%
\begin{center}
\small
\begin{tabular}{|lrrrrrr|}
\hline
&\multicolumn{6}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Iris 2f}&\textbf{Wine}&\textbf{Cancer}&\textbf{Digits}&\textbf{Checker}&\textbf{XOR 6f}\\
\multicolumn{7}{|l|}{\textbf{Data}}\\
Classes&3&3&2&10&2&2\\
Features&2&13&30&64&2&6\\
Samples&150&178&569&1,797&6,400&6,400\\
Train samples&105&124&398&1,257&4,480&4,480\\
Test samples&45&54&171&540&1,920&1,920\\
\multicolumn{7}{|l|}{\textbf{Stage 1}}\\
Optimal max.\ depth&2&3&4&4&18&1\\
Selected max.\ depth&1&1&1&3&15&1\\
Optimal colsample&0.43&0.48&0.38&0.27&0.79&0.81\\
Selected colsample&0.11&0.27&0.11&0.24&0.39&0.81\\
Optimal subsample&0.58&0.67&0.45&0.77&0.77&0.25\\
Selected subsample&0.46&0.86&0.46&0.72&0.68&0.25\\
Optimal log-loss&0.50&0.07&0.10&0.13&0.10&0.70\\
Threshold&0.62&0.11&0.14&0.15&0.11&0.70\\
Selected log-loss&0.52&0.09&0.12&0.15&0.11&0.70\\
\multicolumn{7}{|l|}{\textbf{Stage 2}}\\
Optimal iterations&1,586&2,344&5,579&6,187&9,973&16\\
Selected iterations&337&887&668&1,168&1,519&1\\
Optimal log-loss&0.50&0.08&0.10&0.11&0.07&0.69\\
Threshold&0.62&0.11&0.14&0.14&0.09&0.69\\
Selected log-loss&0.62&0.11&0.14&0.14&0.09&0.69\\
\multicolumn{7}{|l|}{\textbf{Final model, scores for test data}}\\
Active features&2&13&29&53&2&1\\
Log-loss&0.63&0.11&0.12&0.11&0.06&0.69\\
ROC-AUC&0.89&1.00&0.99&1.00&1.00&0.50\\
Balanced acc.&0.69&0.97&0.96&0.98&0.98&0.50\\
\hline
\end{tabular}
\end{center}
\end{table}
%
\begin{table}
\caption{[E3, E8, E9] Comparison of results (best log-loss bold)}
\label{tab_e3_e8_e9}
%
\begin{center}
\small
\begin{tabular}{|lrrrrrr|}
\hline
&\multicolumn{6}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Iris 2f}&\textbf{Wine}&\textbf{Cancer}&\textbf{Digits}&\textbf{Checker}&\textbf{XOR 6f}\\
\multicolumn{7}{|l|}{\textbf{[E3] Proset with randomized search for $\lambda_v$ and $\lambda_w$ ($\alpha_v=\alpha_w=0.95$)}}\\
Active features&2&7&4&30&2&6\\
Log-loss&\textbf{0.43}&0.14&0.13&0.15&0.18&\textbf{0.52}\\
Threshold stage 2&0.57&0.17&0.14&0.17&0.20&0.53\\
ROC-AUC&0.91&1.00&0.99&1.00&0.99&0.82\\
Balanced acc.&0.71&0.98&0.98&0.97&0.95&0.72\\
\multicolumn{7}{|l|}{\textbf{[E8] $k$-nearest neighbor classifier with grid search for $k$}}\\
Active features&2&13&30&64&2&6\\
Log-loss&0.48&\textbf{0.11}&0.14&0.22&0.21&0.54\\
Threshold CV&0.54&0.15&0.17&0.27&0.24&0.56\\
ROC-AUC&0.93&1.00&1.00&1.00&0.98&0.83\\
Balanced acc.&0.82&0.97&0.95&0.93&0.92&0.73\\
\multicolumn{7}{|l|}{\textbf{[E9] XGBoost classifier with randomized parameter search}}\\
Active features&2&13&29&53&2&1\\
Log-loss&0.63&\textbf{0.11}&\textbf{0.12}&\textbf{0.11}&\textbf{0.06}&0.69\\
Threshold stage 2&0.62&0.11&0.14&0.14&0.09&0.69\\
ROC-AUC&0.89&1.00&0.99&1.00&1.00&0.50\\
Balanced acc.&0.69&0.97&0.96&0.98&0.98&0.50\\
\hline
\end{tabular}
\end{center}
\end{table}
%
Results for fitting kNN and XGBoost classifiers to the six examples used for the proset parameter study are given in tables \ref{tab_e8} and \ref{tab_e9}.
The structure of the tables and reported metrics are essentially the same as for proset in the previous section.
For XGBoost, the number of active features reported is the number of features with a positive importance score according to the method's built-in scoring.\par
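%
For a fitted \texttt{XGBClassifier}, one way to obtain this count is via the model's feature importances.
Depending on the \texttt{xgboost} version, the default importance type is gain- or frequency-based, but features that are never used for a split receive a score of zero either way.
\begin{verbatim}
import numpy as np
from xgboost import XGBClassifier


def count_active_features(model: XGBClassifier) -> int:
    # features never used for a split have an importance of exactly zero
    return int(np.count_nonzero(model.feature_importances_ > 0))
\end{verbatim}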
%
Table \ref{tab_e3_e8_e9} summarizes the performance of all three models.
For proset, reference values are taken from the full parameters search using $\alpha_v=\alpha_w=0.95$ (see table \ref{tab_e3}).\par
%
\begin{enumerate}
\item\textbf{Iris 2f:} proset is best with kNN equivalent and XGBoost worse.
However, the number of test samples is so small that these results are unreliable.
Changing the random seed for the train-test split can have a large impact on the outcome.\par
%
Figure \ref{fig_comparison_decision_iris_2f} shows the decision surfaces resulting from the three algorithms.
The different modeling strategies are apparent from the plots: the surface for proset is composed of many localized kernels, the one for kNN resembles a Voronoi tessellation, and for XGBoost, it is possible to see the splits made by the decision trees perpendicular to the coordinate axes.
%
\item\textbf{Wine:} XGBoost and kNN achieve the same log-loss.
The score for proset is above the threshold for XGBoost but not for kNN.
In this case, we consider the lower threshold for the overall ranking in table \ref{tab_classifier_comparison}.
Proset uses only 7 of 13 features, while XGBoost uses the full set (kNN can perform no selection).
%
\item\textbf{Cancer:} XGBoost is best but both other models are considered `equivalent'.
Proset reduces the number of features considerably from 30 to 4, while XGBoost uses 29.
%
\item\textbf{Digits:} there is a strict ranking with XGBoost being best and proset better than kNN.
However, proset is close to XGBoost in terms of balanced accuracy while kNN is markedly worse.
Proset reduces the number of features from 64 to 30, XGBoost uses 54.
%
\item\textbf{Checker:} there is a strict ranking with XGBoost being best and proset better than kNN.
XGBoost is considerably better than both other models on this data set.
Note that the maximum tree depth for XGBoost was increased to 19 as the fit with a limit of 9 selected the upper bound.
%
\item\textbf{XOR 6f:} proset is best, kNN worse, while XGBoost fails to find any patterns in the data and returns a constant estimator.
Increasing the maximum tree depth for XGBoost to 99 does nothing to improve model quality.
\end{enumerate}
%
\begin{figure}
\caption{Comparison of decision surfaces for case Iris 2f}
\label{fig_comparison_decision_iris_2f}
%
\begin{center}
\subfloat[\textbf{[E3] proset}]{\includegraphics[width=0.49\textwidth]{figures/iris_2f_dominant_l2_surf_test.pdf}}
\subfloat[\textbf{[E8] kNN}]{\includegraphics[width=0.49\textwidth]{figures/iris_2f_knn_surf_test.pdf}}\\
\subfloat[\textbf{[E9] XGBoost}]{\includegraphics[width=0.49\textwidth]{figures/iris_2f_xgb_surf_test.pdf}}
\end{center}
\end{figure}
%
We draw the following conclusions from this study:
%
\begin{itemize}
\item In terms of achieving a low log-loss, proset appears to rank in-between XGBoost and kNN.
%
\item Proset achieves a greater reduction in the number of features than XGBoost, even though the fit strategy for the latter prefers small decision trees.
While feature subsampling means that XGBoost may not always select the smallest possible feature set, the difference is marked.
%
\item The extremely good performance of XGBoost on the checker data is possibly due to the fact that the approximating function is of the same class as the target.
Both are step functions with edges parallel to the main coordinate axes.
%
\item XOR 6f appears to defeat tree ensembles, which choose features one at a time based on a greedy criterion.
No matter how the first split is made, the expected number of cases per class in each resulting subspace is 50 \%, the same as for the whole space (this claim is checked with a short enumeration after this list).
Thus, the first split is always made in response to random fluctuations in the data.
This is also true for the checkerboard with an even number of squares per side, but there the first split is almost sure to break the symmetry.
In contrast, for XOR 6f, at least five splits are required to expose the underlying structure.
The partitions created by five random splits apparently do not contain sufficient information to build a meaningful decision tree.
\end{itemize}
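%
The claim about the first split can be verified with a small enumeration over the $2^6$ orthants of the XOR 6f pattern.
We assume here that the class is given by the parity of positive coordinates and that all orthants carry equal probability mass, which matches the intent of the `continuous XOR' construction even if the exact generator differs.
\begin{verbatim}
from itertools import product

# Enumerate the 64 orthants of XOR 6f; assume the class is the parity of the
# number of positive coordinates and that all orthants are equally likely.
orthants = list(product([0, 1], repeat=6))
labels = [sum(signs) % 2 for signs in orthants]

# Split on any single feature at zero: both halves retain a perfect 50/50
# class balance, so the split carries no information about the target.
for feature in range(6):
    for side in (0, 1):
        half = [label for signs, label in zip(orthants, labels)
                if signs[feature] == side]
        assert sum(half) * 2 == len(half)  # exactly half the orthants are class 1
\end{verbatim}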
%
Based on these observations, we define five additional test cases to further explore the differences between the three algorithms:
%
\begin{enumerate}
\item\textbf{Checker rot:} to determine how much of the good performance of XGBoost on the checker data is due to the axis-parallel steps, we rotate the checkerboard pattern by 45\textdegree\ (one possible way to generate the new patterns is sketched after this list).
%
\item\textbf{XOR 3f, XOR 4f, XOR 5f:} in order to find the point at which XGBoost fails on the `continuous XOR' class of problems, we generate instances with three, four, and five features.
As for XOR 6f, these each have an average of 100 samples per orthant.
%
\item\textbf{XOR 6+6f:} to understand how easy it is to confuse the kNN classifier with irrelevant data, we add six more features to the XOR 6f problem.
These are drawn from the same distribution as the first six but have no impact on the target.
\end{enumerate}
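%
The following sketch shows one possible way to generate the new patterns.
The uniform feature distribution, the parity-based class assignment, the rotation around the center of the unit square, and the number of squares per side are assumptions made for illustration; the actual construction used in the study may differ.
\begin{verbatim}
import numpy as np


def make_xor(num_features, samples_per_orthant=100, irrelevant=0, seed=12345):
    """Sketch of a 'continuous XOR' generator with optional irrelevant features."""
    rng = np.random.default_rng(seed)
    total = samples_per_orthant * 2 ** num_features
    features = rng.uniform(-1.0, 1.0, size=(total, num_features + irrelevant))
    # class label = parity of positive coordinates among the relevant features
    labels = (np.sum(features[:, :num_features] > 0.0, axis=1) % 2).astype(int)
    return features, labels


def make_rotated_checker(num_samples=6400, squares=8, seed=12345):
    """Sketch of the rotated checkerboard: rotate coordinates by 45 degrees
    before assigning the usual checkerboard labels."""
    rng = np.random.default_rng(seed)
    features = rng.uniform(0.0, 1.0, size=(num_samples, 2))
    angle = np.pi / 4.0
    rotation = np.array([[np.cos(angle), -np.sin(angle)],
                         [np.sin(angle), np.cos(angle)]])
    # rotate around the center of the unit square, then assign checkerboard cells
    rotated = (features - 0.5) @ rotation.T + 0.5
    cells = np.floor(rotated * squares).astype(int)
    labels = ((cells[:, 0] + cells[:, 1]) % 2).astype(int)
    return features, labels
\end{verbatim}
With these conventions, \texttt{make\_xor(3)}, \texttt{make\_xor(4)}, \texttt{make\_xor(5)}, and \texttt{make\_xor(6, irrelevant=6)} reproduce the sample sizes of 800, 1,600, 3,200, and 6,400 reported below.\par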
%
\begin{table}
\caption{[E10] New examples -- results for proset classifier}
\label{tab_e10}
%
\begin{center}
\small
\begin{tabular}{|lrrrrr|}
\hline
&\multicolumn{5}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Checker rot}&\textbf{XOR 3f}&\textbf{XOR 4f}&\textbf{XOR 5f}&\textbf{XOR 6+6f}\\
\multicolumn{6}{|l|}{\textbf{Data}}\\
Classes&2&2&2&2&2\\
Features&2&3&4&5&12\\
Samples&6,400&800&1,600&3,200&6,400\\
Train samples&4,480&560&1,120&2,240&4,480\\
Test samples&1,920&240&480&960&1,920\\
\textbf{Candidates}&1,000&$\sim280$&$\sim560$&1,000&1,000\\
\multicolumn{6}{|l|}{\textbf{Stage 1}}\\
Optimal $\lambda_v$&$3.1\times10^{-5}$&$4.5\times10^{-2}$&$1.8\times10^{-3}$&$5.5\times10^{-3}$&$2.2\times10^{-4}$\\
Selected $\lambda_v$&$2.9\times10^{-4}$&$4.2\times10^{-3}$&$1.0\times10^{-2}$&$5.5\times10^{-3}$&$1.9\times10^{-3}$\\
Optimal $\lambda_w$&$4.3\times10^{-9}$&$1.3\times10^{-8}$&$7.7\times10^{-8}$&$8.3\times10^{-8}$&$5.7\times10^{-8}$\\
Selected $\lambda_w$&$6.9\times10^{-8}$&$1.5\times10^{-6}$&$4.6\times10^{-7}$&$8.3\times10^{-8}$&$2.7\times10^{-6}$\\
Optimal log-loss&0.18&0.19&0.30&0.40&0.55\\
Threshold&0.19&0.21&0.34&0.42&0.56\\
Selected log-loss&0.19&0.20&0.32&0.40&0.56\\
\multicolumn{6}{|l|}{\textbf{Stage 2}}\\
Optimal batches&1&6&6&5&1\\
Selected batches&1&1&1&2&1\\
Optimal log-loss&0.19&0.19&0.32&0.39&0.57\\
Threshold&0.20&0.23&0.33&0.40&0.57\\
Selected log-loss&0.19&0.23&0.33&0.39&0.57\\
\multicolumn{6}{|l|}{\textbf{Final model, scores for test data}}\\
Active features&2&3&4&5&6\\
Prototypes&320&89&124&572&425\\
Log-loss&0.18&0.17&0.28&0.39&0.56\\
ROC-AUC&0.99&0.99&0.97&0.91&0.80\\
Balanced acc.&0.95&0.96&0.91&0.83&0.72\\
\hline
\end{tabular}
\end{center}
\end{table}
%
\begin{table}
\caption{[E11] New examples -- results for $k$-nearest neighbor classifier}
\label{tab_e11}
%
\begin{center}
\small
\begin{tabular}{|lrrrrr|}
\hline
&\multicolumn{5}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Checker rot}&\textbf{XOR 3f}&\textbf{XOR 4f}&\textbf{XOR 5f}&\textbf{XOR 6+6f}\\
\multicolumn{6}{|l|}{\textbf{Data}}\\
Classes&2&2&2&2&2\\
Features&2&3&4&5&12\\
Samples&6,400&800&1,600&3,200&6,400\\
Train samples&4,480&560&1,120&2,240&4,480\\
Test samples&1,920&240&480&960&1,920\\
\multicolumn{6}{|l|}{\textbf{Cross-validation}}\\
Optimal $k$&10&7&8&9&100\\
Selected $k$&11&16&9&13&100\\
Optimal log-loss&0.23&0.22&0.34&0.43&0.70\\
Threshold&0.24&0.27&0.35&0.44&0.71\\
Selected log-loss&0.24&0.27&0.34&0.44&0.70\\
\multicolumn{6}{|l|}{\textbf{Final model, scores for test data}}\\
Log-loss&0.21&0.25&0.31&0.45&0.71\\
ROC-AUC&0.98&0.98&0.95&0.88&0.46\\
Balanced acc.&0.92&0.91&0.86&0.80&0.48\\
\hline
\end{tabular}
\end{center}
\end{table}
%
\begin{table}
\caption{[E12] New examples -- results for XGBoost classifier}
\label{tab_e12}
%
\begin{center}
\small
\begin{tabular}{|lrrrrr|}
\hline
&\multicolumn{5}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Checker rot}&\textbf{XOR 3f}&\textbf{XOR 4f}&\textbf{XOR 5f}&\textbf{XOR 6+6f}\\
\multicolumn{6}{|l|}{\textbf{Data}}\\
Classes&2&2&2&2&2\\
Features&2&3&4&5&12\\
Samples&6,400&800&1,600&3,200&6,400\\
Train samples&4,480&560&1,120&2,240&4,480\\
Test samples&1,920&240&480&960&1,920\\
\multicolumn{6}{|l|}{\textbf{Stage 1}}\\
Optimal max.\ depth&21&7&18&17&1\\
Selected max.\ depth&12&6&11&12&1\\
Optimal colsample&0.77&0.81&0.79&0.72&0.81\\
Selected colsample&0.26&0.88&0.88&0.81&0.81\\
Optimal subsample&0.43&0.84&0.77&0.69&0.25\\
Selected subsample&0.54&0.81&0.88&0.49&0.25\\
Optimal log-loss&0.18&0.10&0.30&0.64&0.70\\
Threshold&0.19&0.11&0.32&0.67&0.70\\
Selected log-loss&0.19&0.11&0.31&0.66&0.70\\
\multicolumn{6}{|l|}{\textbf{Stage 2}}\\
Optimal iterations&8,455&2,677&2,926&895&2\\
Selected iterations&3,364&690&1,154&96&1\\
Optimal log-loss&0.13&0.10&0.28&0.65&0.69\\
Threshold&0.14&0.15&0.31&0.68&0.69\\
Selected log-loss&0.14&0.15&0.31&0.68&0.69\\
\multicolumn{6}{|l|}{\textbf{Final model, scores for test data}}\\
Active features&2&3&4&5&1\\
Log-loss&0.12&0.11&0.23&0.67&0.69\\
ROC-AUC&0.99&1.00&0.98&0.65&0.50\\
Balanced acc.&0.95&0.97&0.94&0.60&0.50\\
\hline
\end{tabular}
\end{center}
\end{table}
%
\begin{table}
\caption{[E10, E11, E12] Comparison of results (best log-loss bold)}
\label{tab_e10_e11_e12}
%
\begin{center}
\small
\begin{tabular}{|lrrrrr|}
\hline
&\multicolumn{5}{c|}{\textbf{\hrulefill\ Data set \hrulefill}}\\
&\textbf{Checker rot}&\textbf{XOR 3f}&\textbf{XOR 4f}&\textbf{XOR 5f}&\textbf{XOR 6+6f}\\
\multicolumn{6}{|l|}{\textbf{[E10] Proset with randomized search for $\lambda_v$ and $\lambda_w$ ($\alpha_v=\alpha_w=0.95$)}}\\
Active features&2&3&4&5&6\\
Log-loss&0.18&0.17&0.28&\textbf{0.39}&\textbf{0.56}\\
Threshold stage 2&0.20&0.23&0.33&0.40&0.57\\
ROC-AUC&0.99&0.99&0.97&0.91&0.80\\
Balanced acc.&0.95&0.96&0.91&0.83&0.72\\
\multicolumn{6}{|l|}{\textbf{[E11] $k$-nearest neighbor classifier with grid search for $k$}}\\
Active features&2&3&4&5&12\\
Log-loss&0.21&0.25&0.31&0.45&0.71\\
Threshold CV&0.24&0.27&0.35&0.44&0.71\\
ROC-AUC&0.98&0.98&0.95&0.88&0.46\\
Balanced acc.&0.92&0.91&0.86&0.80&0.48\\
\multicolumn{6}{|l|}{\textbf{[E12] XGBoost classifier with randomized parameter search}}\\
Active features&2&3&4&5&1\\
Log-loss&\textbf{0.12}&\textbf{0.11}&\textbf{0.23}&0.67&0.69\\
Threshold stage 2&0.14&0.15&0.31&0.68&0.69\\
ROC-AUC&0.99&1.00&0.98&0.65&0.50\\
Balanced acc.&0.95&0.97&0.94&0.60&0.50\\
\hline
\end{tabular}
\end{center}
\end{table}
%
\clearpage
%
Results for proset, kNN, and XGBoost are summarized in tables \ref{tab_e10}, \ref{tab_e11}, and \ref{tab_e12}.
A comparison of the three algorithms is shown in table \ref{tab_e10_e11_e12}:
%
\begin{enumerate}
\item\textbf{Checker rot:} XGBoost still yields the best model on this pattern, but the log-loss is higher than for the original checkerboard.
The upper bound for the maximum tree depth used in cross-validation was increased to 29, as smaller bounds resulted in the bound itself being selected.
The metrics for the other two models are very similar to the original case, i.e., they are not affected by the orientation of the pattern.
%
\item\textbf{XOR 3f, XOR 4f, XOR 5f:} with three features, XGBoost performs better than the other models; with four, it is still best but the others are equivalent; with five features, proset is better than the other two models.
%
\item\textbf{XOR 6+6f:} proset correctly selects the six relevant features and produces a model that is slightly worse than for the original XOR 6f.
Neither kNN nor XGBoost is able to find any structure; both return constant estimators.
\end{enumerate}
%
Overall, we believe that these results show that proset is a worthwhile addition to the supervised learning toolbox.
As intended, it performs feature selection as an integral part of model fitting and is able to identify a nonlinear relationship between the features and target.
%
\endinput
\clearpage
\phantomsection
\addcontentsline{toc}{subsection}{RLDUH}
\label{insn:rlduh}
\subsection*{RLDUH: restricted load of an unsigned 16 bit quantity}
\subsubsection*{Format}
\textrm{RLDUH \%rd, \%r1, \%r2}
\begin{center}
\begin{bytefield}[endianness=big,bitformatting=\scriptsize]{32}
\bitheader{0,7,8,15,16,23,24,31} \\
\bitbox{8}{0x4B}
\bitbox{8}{r1}
\bitbox{8}{r2}
\bitbox{8}{rd}
\end{bytefield}
\end{center}
\subsubsection*{Description}
The \instruction{rlduh} instruction performs a privilege check on the
memory it is about to read from before loading an unsigned 16 bit
quantity into \registerop{rd}, indexed by \registerop{r1}.
\subsubsection*{Pseudocode}
\begin{verbatim}
%rd = mem[%r1]
\end{verbatim}
\subsubsection*{Load-time constraints}
The registers \registerop{r1} and \registerop{rd} must be valid registers,
\registerop{r2} must be \registerop{r0} and \registerop{rd} must not be
\registerop{r0}.
\subsubsection*{Failure modes}
\textbf{XXXDS: This depends on dtrace\_canload(). We have to enumerate these.}
\documentclass[11pt]{article}
\usepackage{amssymb} % has to be used before XeTeX unicode trickery.
\usepackage{fullpage}
\usepackage{fontspec}
\usepackage{xunicode}
\usepackage{xltxtra}
\defaultfontfeatures{Mapping=tex-text}
%\setromanfont{Linux Libertine O}
\setromanfont{Brill}
%\newfontfamily\greek{Gentium}
\usepackage{multicol}
\usepackage{ulem}
\usepackage{ifthen}
% Interdocument linking.
\usepackage[xetex]{hyperref}
\hypersetup{bookmarksopen=false,
pdfpagemode=UseNone,
colorlinks=true,
urlcolor=blue,
linkcolor=black, % no links for footnotes; URLs will still have color
pdftitle={Hypmnemata Glossopoetica},
pdfauthor={William S. Annis},
pdfkeywords={conlang}%,
}
% Better sectioning for this document.
\usepackage[compact,rigidchapters,explicit]{titlesec}
\setcounter{secnumdepth}{4}
\titleformat{\section}[display]
{\normalfont\fillast}
{\normalfont\bfseries \thesection. #1}
{1ex minus .1ex}
{\small}
\titlespacing{\section}{3pc}{*4}{-1em}[3pc]
\titleformat{\subsection}[runin]{\normalfont\bfseries}{\thesubsection.}{.5em}{#1. }[ { }]
\titlespacing{\subsection}{1ex}{1.5ex plus .1ex minus .2ex}{0pt}
\titleformat{\subsubsection}[runin]{\normalfont\bfseries\small}{\thesubsubsection.}{.5em}{#1. }[ { }]
\titlespacing{\subsubsection}{1ex}{1.5ex plus .1ex minus .2ex}{0pt}
% If no argument is given, only the section is printed, no title.
\titleformat{\paragraph}[runin]{\normalfont\bfseries\small}{\theparagraph.}{.5em}{\ifthenelse{\equal{#1}{}}{}{#1. }}[ { }]
\titlespacing{\paragraph}{1ex}{1.5ex plus .1ex minus .2ex}{0pt}
% Some utilities.
\newcommand{\LL}[1]{\textbf{#1}} % Other language
\newcommand{\E}[1]{\textit{#1}} % English
\newcommand{\I}[1]{\textsc{#1}} % Interlinears
\newcommand{\note}[1]{\textcolor{magenta}{\small\textit{#1}}}
\newcommand{\tsref}[1]{\hyperref[#1]{\S \textbf{\ref*{#1}}}}
\newcommand{\interlin}[1]{\begin{quotation}{\small\noindent#1}\end{quotation}}
\newcommand{\rara}[1]{$\mathfrak{R}$: #1}
\newenvironment{grammarlist}%
{\begin{itemize}\addtolength{\itemsep}{-0.5\baselineskip}\ignorespaces}%
{\end{itemize}\ignorespacesafterend}
\newenvironment{dlist}%
{\begin{quote}\begin{description}\addtolength{\itemsep}{-0.3\baselineskip}\ignorespaces}%
{\end{description}\ignorespacesafterend\end{quote}}
\newenvironment{examples}{\quote}{\endquote}
\newcommand{\example}[2]{\noindent\LL{#1}\hskip1em\E{#2}}
\newcommand{\longexample}[2]{\noindent\LL{#1}
\indent\E{#2}}
\begin{document}
\frenchspacing
\title{Hypomnemata Glossopoetica}
\author{Wm S. Annis}
\date{\today}
\maketitle
\section{Phonology}
Normally illegal clusters may occur in particular grammatical
contexts, and thus look common (\textit{cf.} Latin \LL{-st} in \I{3sg}
copula \LL{est}).
% http://www.unish.org/upload/word/사본%20-%20MarkVanDam%5B1%5D.pdf
Hierarchy of codas:\footnote{Some attested single-\I{c} coda
inventories: \{n ŋ\}, \{n ŋ t k\}, \{n ŋ m p t k\}, \{n m l r\}, \{n
m w j\}, \{n ŋ m l r j\}, \{n m r d\}, \{d l s x\}, \{m b k l z r\},
\{n l w j t k\}, \{n ŋ m l p t k\}, \{w n m r k t v ʃ ʒ\}.} n < m,
ɳ, ŋ < ɳ << l, ɹ < r, ʎ, ʁ < ɭ, ɽ << t < k, p < s, z, c, q, ʃ < b, d,
g, x h << w, j. There is a slight place hierarchy: alveolar < velar <
retroflex or tap. Classes percolate, such that in complex codas, if
Nasals, Resonants and Stops are permitted you usually expect \I{n, r,
s, nr, ns} and \I{rs} as coda sequences. Other orders are possible,
but the above rule is common-ish.
Hierarchy of clusters (\I{s} = sonorant, \I{o} = obstruent), word
initial: \I{os} < \I{oo} < \I{ss} < \I{so}; word final: \I{so} <
\I{oo} < \I{ss} < \I{os}. Onset clusters tend to avoid identical
places of articulation, which leads to avoidance of things like
\textit{*tl, dl, bw,} etc., in a good number of languages. /j/ is
lightly disfavored as \I{c2} after dentals, alveolars and palatals;
/j/ and palatals are in general disfavored before front vowels.
Languages with \textit{sC-} clusters often have codas. s+\I{stop} <
s+\I{fric} / s+\I{nasal} < s+\I{lat} < s+\I{rhot} (the fricative and
nasal are trickier to order).
Even if a particular \I{c} is a permitted coda, its allowed
environment may be quite restricted. Potential constraints: forbidden
before homorganic stop; or homorganic nasal; geminates
forbidden. Solutions: delete with compensatory vowel lengthening;
debuccalize (become fricative, glottal stop, delete without
compensation); nasal deletion with nasalized vowel remaining;
tone wackiness.
Lower vowels are preferred as syllabic nuclei; high vowels are more
prone to syncope (either midword or finally). Content words less
likely to elide.
% https://linguistics.stonybrook.edu/sites/default/files/uploads/u26/publications/QP1FinalDraft.pdf
Sonority hierarchy:
\begin{quotation}
\noindent low vowels > mid vowels (except /ə/) > high vowels (except /ɨ/) > \\
\indent /ə/ > /ɨ/ > glides > laterals > \\
\indent flaps > trills > nasals > /h/ > \\
\indent voiced fricatives > \\
\indent voiced stops and affricates, voiceless fricatives > \\
\indent voiceless fricatives, voiced stops and affricates > \\
\indent voiceless stops and affricates
\end{quotation}
Vowel devoicing starts with lowest sonority (/i/, /u/).
% https://www.ling.upenn.edu/~gene/courses/530/readings/Casali2011.pdf
Hiatus resolution: 1) elide, contract, add glide /j/ or /w/, with the
glide sharing the same frontness or roundness of \I{v1}; 2)
insert /h/ or /ʔ/; 3) insert coronal consonant, usually /t/ or some
/r/.
Metathesis (where P = /p, b, m/, etc.). Obstruent + resonant can
switch. \I{pk} > \I{kp} occasionally, but not the other way.
\I{t\{pk\}} > \I{\{pk\}t} reasonably common.
% http://linguistics.berkeley.edu/phonlab/annual_report/documents/2007/Hyman_Phono_Universals_PL.pdf
Many things can happen to a consonant following a nasal: NT > ND; NS,
NZ > NTS, NDZ; NT > NTʰ; ND > NN; but also: ND > NT; NTʰ > NT; NN >
ND.
Consonant strength hierarchy (in some African languages, generalized):
continuants, semivowels and ʔ > voiced stops > nasals > voiced and
nasal geminates, voiceless stops > voiceless geminates > nasal
clusters.\footnote{Bilin has consonant ``ablaut'' in the plural, where
the penultimate or ultimate consonant (occasionally both) alternate
in the plural: \textit{b-f, d-t, d-s, d-ʃ, r-t, l-t, r-l, dʒ-ʃ, g-k,
x-k, gʷ-kʷ, xʷ-kʷ, x-k', xʷ-k'ʷ, w-kʷ}.}
% http://daberinet.com/Blin-files/Main-p-files/Blin%20Morphology%20,%20by%20David%20L.%20Appleyard.pdf
Some suffixes may occur only in pausa, or are different there.
Syllable weight: closed syllables may be heavy or light depending on
the coda consonant, with sonorants more often making weight and stops
not (in general CVV > CVC > CV). Also, low vowels are generally
heavier than high, and central vowels less heavy than non-central. You
can get word-length dependent affix allomorphs of the same number of
syllables, but with heavier vowels in one and lighter in the other.
Consider palatalization, labialization, nasalization, etc., as
phenomena which apply to a \textit{syllable} rather than a particular
segment. This can generate substantially differing outcomes in
historical processes (\textit{v.} Chadic family), especially as
differing branches apply the effects more closely or more distantly.
% https://www.eva.mpg.de/lingua/conference/08_springschool/pdf/course_materials/Wolff_Historical_Phonology.pdf
% "Issues in the historical phonology of Chadic languages"
For determining stress accent, Malagasy has some light final
syllables. These are the remnants of final consonants that got vowels
tacked on after a switch to a dominant \I{cv} structure (and the other
final consonants deleted).
\subsection{ATR Harmony}
Some systems are dominated by the feature of the word root, causing
harmony both to the left and the right. Other systems are based on
syllable position, \textit{e.g.,} a system where \I{-atr} vowels can
follow, but not precede, \I{+atr} vowels. An originally larger vowel
inventory may reduce, giving a situation where one apparent vowel can
trigger \I{+atr} in some words and \I{-atr} in others. Different
morphological systems may take different rules within one language.
May apply to all vowels, or just to \I{-atr} /ɪ ʊ/ (or /ɛ ɔ/).
/a/ may be transparent, or may block. Or may be \I{-atr}, with /ə/ the
\I{+atr} version.
Common systems: 9V /iɪeɛaɔoʊu/; 7V(M) /ieɛa(ə)ɔou/ or 7V(H)
/iɪɛa(ə)ɔʊu/. \I{-atr} usually dominant in 7V(M), \I{+atr} usually
dominant in 7V(H).
\subsection{Tone}
In many Dogon languages, postnominal adjectives and demonstratives
(but not numbers or other quantifiers other than one) cause all tones
to lower in the preceding \I{np}. If multiple adjectives follow, all
are lowered except the last. The head noun of an internally headed
\I{rc} is also so lowered.
% http://www.dartmouth.edu/%7Emcpherson/892heath.pdf - "Tonosyntax and
% reference restriction in Dogon NPs".
\subsection{Reduplication} The original word-initial consonant of a
reduplicated root may undergo lenition, or fortition. Malagasy
\LL{fantatra} > \LL{\uline{f}anta\uline{p}antatra}.
The \textbf{inflectional} uses include: number (for nouns; for verbs
marking actor number or event plurality; in noun compounds either or
both elements may reduplicate, sometimes in free variation); very
rarely to encode possession (either of 3rd person, or 1st and 2nd,
usually with partial reduplication); frequent in verbs for
frequentative, habitual or progressive aspect, but also imperfective,
inchoative and perfect(ive); Margi participles are formed by partial
or full \I{red} (seen in a few other languages).
\I{red} may be used to repair a word shape that has taken some affix.
% http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.54.8895&rep=rep1&type=pdf
The \textbf{derivational} uses include: ordinal numbers from cardinal;
distribution (``3 each''); valency reduction in verbs (including
antipassive), as well as reciprocals (and mutuality in other word
classes, ``face to face''); diminutive (and both endearment and
contempt); associatives (``someone with X''); similarity; sort/kind of
N; disposition (``someone prone to X, likely to X''); common in
indefinites (``who'' > ``whoever''); in verbs, a lack of control,
disorder, carelessness, pretense, attempt; incrementality
(``gradually, little by little, one by one''); spread-out or scattered
(``here and there, looking around''); non-uniformity (``zig-zag, now
and then, in several colors, hodge-podge'').
Not infrequently used to name insects and birds, and the word for
``baby.''
Noun to verb (``to wear an X''), adjective or adverb; verb to noun
(agent noun, action noun, instrument), adjective or adverb; adjective
to adverb.
For adjectives and other vocabulary of quality: diminutive
attenuative, augmentative, intensification.
The sense of ``collectivity'' might be used where we would expect a
single term (``broom, flight of stairs,'' etc).
Some affixes may simply require the root take \I{red}, with Makah
having several types of \I{red} and affixes selecting one.
\section{Word Classes}
\begin{center}
\small
\begin{tabular}{|l|l|l|l|}
\hline
& \LL{Reference} & \LL{Modification} & \LL{Predication} \\
\hline
\LL{Object} & referring, argument phrase (\LL{noun}) & nominal attributive phrase &
predicate
nominal \\
\hline
\LL{Property} & deadjectival nominal & attributive phrase (\LL{adjective}) & predicate adjectival\\
\hline
\LL{Action} & complement clause & relative clause & clause (\LL{verb})\\
\hline
\end{tabular}
\end{center}
% http://www.unm.edu/~wcroft/Papers/Morphosyntax-ch1-aug15.pdf
\LL{Universal One}. A lexical class used in a nonprototypical
propositional act function\footnote{Reference, modification or
predication.} will be coded with at least as many morphemes as in
its prototypical function, as in \textit{bright $>$ bright-\LL{ness}}
(prototypical functions are marked in bold in the chart).
\LL{Universal Two}. A lexical class used in a nonprototypical
propositional act function will also have no more grammatical
behavioral potential than in its prototypical function. For example, a
predicate nominal will not have more verbal inflectional possibilities
than a full verb.
Individual languages will break up that 9x9 grid differently in terms
of constructions, and where individual words end up.
\section{The Noun}
Nouns are the most frequently borrowed word class.
\subsection{Possession}
``Possession'' can cover a wide range of relationships:
\begin{grammarlist}
\item Ownership (or temporary possession)
\item Whole-part relationship (body part, part of an object)
\item Kinship
\item Attribute of a person, animal or thing (``Bob's temper'')
\item Orientation or location (``the front of the house''; useful
when body part terms are used for location to use different
marking)
\item Association (``my teacher,'' but also dwellings, house to
homeland, and personal clothing and goods)
\item Nominalization
\end{grammarlist}
\noindent Most languages do not have the wide range of possessive uses
found in English or Greek in the same construction. The first three
in the list above are most central.
Marking for inalienable possession is generally smaller (fewer
syllables, no classifier, simpler construction) than the marking for
alienable.
Genitive marked on R, pertensive marked on D. Affixed possession
markers may also induce pertensive marking. Genitive marking
frequently has other functions, pertensive rarely.
Different systems of marking may occur in two or even three groups
depending on possessor: pronoun, proper noun, kin term, common human
noun, common animate noun, common inanimate noun (with contiguous
constructions between groups).
Splits across whole-part D: external body parts, internal body parts,
genitalia, body fluids, parts of animals, parts of plants, parts of
artifacts.
\subsection{Classifiers}
Number classifiers are most common with ``one'' and ``two,'' possibly
obligatory only with these; rarely with different classifier forms for
higher numbers. More classifier types may be available with lower
numbers.
With stative verbs, the classic Dixon set are most likely to take
classifiers; or postural verbs. With transitive verbs, more likely on
high-agency, high-patientness verbs (handling, cooking, killing and
the like). With any verb class, optional classifiers may mark
salience or ``completeness'' of the classified feature.
When not used with their expected word class, classifiers may be
derivational.
\subsubsection{Gender / Noun Class}
Variable gender assignment may code size and/or shape and/or posture
(upright vs.\ horizontal). Phonological gender assignment (initial,
final, or both, are possible determinates) typically restricted to
nonanimates. Function of referent may change gender (water as drink
vs.\ water as part of the landscape).
Agreement: clefting may interfere or inhibit agreement; mixed class
resolution may default to the least marked form, but the behavior for
animate vs.\ inanimate may pick different forms, and coordination of
mixed animacy may simply be avoided altogether (a different
construction, or repeating the phrase with different subjects, etc.);
phonetics may inhibit some kinds of agreement. Semantic agreement
override: attributive < predicate < relative < personal pronoun.
Different word classes may pick different genders for the least
marked, catch-all form.
It is rare but possible to have two noun class systems operating in a
single language, with different agreement rules operating on different
word classes. In particular, ``pronominal'' agreement for pronouns
and verb agreement\footnote{Paumarí has m./f.\ for demonstratives, but
a separate \textit{ka}/non-\textit{ka} agreement system on the
verbs, showing up as a prefix.} and ``nominal'' agreement for
adjectives (and sometimes numbers). Pronominal agreement in general
is focused on pronouns, is a smaller system, aligns with animacy, sex
or humanness, and has a fairly transparent semantic basis. Nominal
agreement systems tend to be larger, focus on animacy, sex, shape and
size, and the semantics may be much less clear. Demonstratives align
with either pattern, from language to language. Where the two systems
have semantic overlap, the markers may be quite different, or cover
different spaces.
Hierarchy (tendency) for site of marking: pronouns > verbs >
demonstratives > adjectives > numerals. Also possible: verbs >
adjectives > pronouns.
\subsubsection{Noun Classifiers}
Similar to gender, and historically may lead to it, but classifiers
are different. The semantics are generally clearer; some languages
may allow co-occurrence of classifiers (\I{specific general}, as in
``\I{human man} boy''); single \I{n} may take different classifiers to
determine precise meaning; may be used anaphorically (esp.\ across
clauses of a single sentence), possibly with other syntactic functions
(relative clauses). Attend to coreferentiality.
May be small (2--3) or larger (20, or more).
``Social status'' may be encoded in human classifiers (``initiate,
known,'' etc.). Some classifiers encode inherent nature (person, bug,
tree), some function (edible, drinkable, movable, etc.). Multiple
classifiers will usually encode one of each.
Classifier on \I{n} may be omitted once class established, or
obligatory always.
May be separate \I{q} for ``unknown class'' vs.\ ``known class but no
more.''
Noun classifier systems often related to number classifier systems,
but they may be different. Number classifiers are much less likely to
be optional. Both systems may occur in a single \I{np}.
\subsubsection{Number Classifiers}
Sometimes there is a catch-all classifier, sometimes many nouns don't
take any classifier.
May not go beyond ten, or are dropped with 10s units.
Classifiers for humans/animates may have different forms with
different numbers; with low numbers the \I{n + cl} form can be
suppletive.
Affixed classifiers more likely to code animacy, and to be required;
independent classifiers code shape, consistency, etc., and may be more
or less optional, with different speakers at different competencies.
Rarely, affixed and independent classifiers may appear together.
A single noun may take different classifiers to focus salient
characteristic.
\subsubsection{Possessive Classifiers}
Can code: shape, consistence, animacy of possessum; relationship of
possessor to possessum (as in Oceanic indirect possession); or, very
rarely, class of possessor.
There may just be a bunch of words for ``of'' which match class
(possibly highly suppletive).
\subsubsection{Verbal Classifiers}
Many originate from noun incorporation, possibly competing with it.
However, some verbal classifiers may be clearly related to numeral or
other classifiers in the language. Another origin is from verbs, via
grammaticalized serial verb constructions.
Existential verbs may distinguish animate from inanimate; a few
languages elaborate existentials quite a lot (in a container, movable
vs.\ immobile, non-human animal, etc.).
\subsection{Case}
\I{sov} languages more likely to have case marking than \I{svo}.
\subsubsection{Formal Marking}
\begin{center}
\begin{tabular}{ccccl}
A & O & E & peripheral \\
\hline
w & x & y & z & Latin (very common) \\
w & x & \uline{y} & \uline{y} & Jarawara \\
w & \uline{x} & \uline{x} & y & Kinyarwanda \\
w & \uline{x} & \uline{x} & \uline{x} & Creek
\end{tabular}
\end{center}
% http://www.smg.surrey.ac.uk/features/morphosemantic/transitivity/
\noindent In Creek there is \LL{-t} on subjects and \LL{-n} on
nonsubjects.
\subsubsection{Accusative} This may be sensitive to the animacy of the
object, with high animacy objects more likely to take an overt
accusative (so-called ``differential object marking''). \I{dom} may
be sensitive to the relative animacy of \I{a} and \I{o}. Or it may
vary with definiteness. Even in languages with \I{dom}, \I{o} marking
may be required in all ditransitive constructions.
In many languages, \I{acc} may be used on adverbs of distance and
duration. It is also regular on nominalized complement clauses (``I
know he-is-here-\I{acc}'').
Sometimes motion verbs take a goal in the \I{acc}.
Historically, \I{acc} may evolve from \I{dat}, more rarely from
\I{inst}.
% http://www.ling.helsinki.fi/~eniirane/sija/37-Spencer-ch36.pdf
% https://www.researchgate.net/publication/297883853_Case_Polysemy
\subsubsection{Allative} A huge range of possibilities here.
Occurring more often: true allative (``to, towards, reaching for''),
purpose (``use it for that, in order to''), conceptual (``think about,
occur to''), recipient (dative), timepoint (``at \I{time}''),
addressee (``talk to me''), perceptual (``look at, listen to''),
reason (``because of, ran from fear''). Some less frequent
possibilities include: temporal boundary (``by/until \I{time}''),
benefactive, possessive, proportion or rate (``3 out of 4, 3 at a
time''), equivalence (``as, in exchange for''), subordinator
(``although, when, while''), emotional target (``hard for, be angry
at'').
% RiceKabata2007.pdf
\subsubsection{Ablative} An ablative may mark not only the source but
also the path of motion (perlative).
\subsubsection{Locative} Very rarely (Páez) a language may have
several locatives which mark posture (standing, lying, hanging,
leaning, etc.).
% http://www.deniscreissels.fr/public/Creissels-spatial_cases.pdf
\subsubsection{DPM} Differental Place Marking is possible, with the
hierarchy: human > common inanimate > place. Shorter locative marking
becomes more likely as you go down the hierarchy (such as Latin using
bare cases without prepositions for the names of towns and small
islands, and a few things like \textit{domus}). Names of cities seem
to come in for this a bit more than other things. Further, there
might be special additional morphology when animates take locative
marking, such as the Basque \textit{-ga(n)-} extension for \I{loc},
\I{abl}, and \I{all}. Different, often more complex, adpositional
constructions might be required for animate place targets.
% https://dlc.hypotheses.org/2385
% https://www.academia.edu/37762885/Differential_place_marking_and_differential_object_marking
\subsubsection{Vocative} Vocative expressions often have
characteristic vowel lengthening, stress shifts, or tone changes. In
support of such pitch alterations, overt vocative morphology will
usually have fewer consonants than other sorts of case marking. Mid
vowels are weakly preferred to high.
% https://psyarxiv.com/ef52k/
Swedish has both long and short forms of ``father'' and ``mother,''
\textit{fader:far, moder:mor}. The long forms cannot be used as
vocatives (except \textit{fader} in some archaic religious forms).
\section{The Pronoun}
There may be separate forms for ``\I{prn} alone'' and/or ``\I{prn}
also'' unrelated to similar expressions for nouns.
Languages with 1/2 systems are not common, but not rare either.
Sg/du/paucal/pl is far more common than sg/du/trial/pl.
It's not especially common for sg and pl forms to be related (as in
Chinese).
Many languages:
\begin{center}
\begin{tabular}{lll}
& Singular & Plural \\
\I{1st} & 1 & 12, 13 (and 11) \\
\I{2nd} & 2 & 22, 23 \\
\I{3rd} & 3 & 33
\end{tabular}
\end{center}
However, one might get something like:
\begin{center}
\begin{tabular}{ll}
1 & 13 (and 11) \\
2 & 12, 22, 23
\end{tabular}
\end{center}
The basic 1/2/3 sg/pl system may be extended to include a special 12
``me and you'' form. This may involve the innovation of a dual
throughout the pronoun system. Or it may be just a normal system with
an inclusive/exclusive distinction. Or a minimal/augmented system,
\begin{center}
\begin{tabular}{ll}
Minimal & Non-minimal \\
1 & 1 + others \\
1+2 & 1+2 + others \\
2 & 2 + others \\
3 & 3 + others
\end{tabular}
\end{center}
\noindent The non-minimal may include ``unit augmented'' (one other
person, producing a dual in many forms) and just ``augmented'' (the
``plural''). Minimal/augmented are common in Australia, Austronesian,
South America, and a few in North America, though the unit
augmented/augmented distinction is nearly restricted to Australia.
3sg = 3pl is a common neutralization. 2pl = 3pl (and 2du = 3du) can
happen.
Using 2du, 2paucal or 2pl as a mark of respect occurs in Europe,
Oceania, Australia.
There is a relationship between the indefinite ``someone'' and
1non-sg forms (French; Caddo, etc). Usually it's 1pl.inc that does
this.
Question-based indefinites may be marked with some other morpheme,
often related to words for ``be,'' ``want,'' ``perhaps,'' ``or'' or
``also.''
\subsection{Demonstratives and Deixis}
% https://research.jcu.edu.au/lcrc/storeroom/sashas-folder/demonstratives-directionals-summary ``Demonstratives and directionals: summing up''
\begin{grammarlist}
\item two way
\item three way: proximal, distal, remote; proximal, medium, further
away or imprecise; proximal, medial (not far, known to 1 and 2),
distal
\item four way: proximal (to 1), proximal (to 2), medial, distant
\item five way: proximal visible, proximal audible, medial, distant,
imperceptible
\item six way (Godoberi): close to 1, close to 2, that at some
distance from 1, that at some distance from 2, that down there,
that (aforementioned)
\end{grammarlist}
Rarely, pronominal uses may require a nominalizer.
Some systems have forms specifically used for anaphora, or only permit
particular forms to be used anaphorically (slight preference for
distal forms for this?).
In a very few languages, there may be special forms always accompanied
by physical gestures (English ``yea high'').
May have overtones of familiarity, endearment, pejorative,
empathy. Like diminutives, only context may make clear a positive
vs.\ negative interpretation. Even proximal deixis can be
pejorative.
Demonstratives can be recruited as filler, both within a sentence (a
particular form will be preferred), or as ``um, ah.''
Splits: more distinctions for modifier, with fewer for independent,
pronoun-like forms.
Adverbial locatives (``here, there'') often match demonstrative
distinctions (not infrequently by derivation or clear relation), but
may have fewer distinctions. Rarely, particular clause positions may
have fewer options (Jarawara, one way distinction clause initial,
``here/there,'' two way distinction clause final).
Manner adverbials (``thus, like this, like that''), again often
related to pronouns, may make fewer distinctions.
Verbal demonstratives: ``do/be (like) this/that.'' Can be both
anaphoric and cataphoric.
\rara{Demonstrative agreement with addressee (in addition to usual),
with traces of such agreement for \I{adj}.}
% https://orientalberber.wordpress.com/2012/05/29/siwi-addressee-agreement-and-addressing-aljazeera/
\rara{Ik has demonstratives that locate objects in time, with a
five way (non-past, recent past, removed past, remote past, remotest
past) distinction that matches the temporal adverbs.}
% http://langsci-press.org/catalog/book/98
\subsection{Indefinites}
Cross-linguistically, indefinites derived from interrogatives are about
twice as common as those derived from nouns. If a derivational element
is used on interrogative indefinites, it is often related to ``be,
want, perhaps, or,'' or ``also.'' Kannada uses an ``or'' clitic to form
``some'' indefinites and an ``also'' clitic for ``any'' indefinites. A few
languages split function, with interrogatives for ``any'' indefinites
and nouns for ``some'' indefinites.
If a language has \I{q}-derived indefinites, there may still be some
additional marking of a question when used interrogatively, often the
same \I{q} particle used in polar questions. Or an interrogative
mood. Also \I{q}-word focus constructions (fronting, particle, clause
intonation) may distinguish.
% https://ling-blogs.bu.edu/lx500f10/files/2010/10/lx500univf10-07-indefinites-handout.pdf
Indefinites off to the left of the Haspelmath map are more likely to
be appreciative, and those to the right depreciative.
% http://www.philol.msu.ru/~otipl/new/fdsl/abstracts/bylinina.pdf
\section{The Adjective}
Comparatives neither rare nor universal. Can be marked with (1) affix
and adpositional phrase; (2) adposition only; (3) coordination (``X is
big but Y is small'' = ``X is bigger than Y'').
Breaks along word class distinctions follow the adjective class: full
\I{nav} (all of noun, adjective and verb in the lexicon); \I{n[av]}
(i.e., property-concept words go with verbs); \I{[na]v}; and \I{[nav]}
(all three classes conflated).
% http://www.ualberta.ca/~dbeck/StJuste.pdf
Resultative adjectives are most likely in languages in which there are
complex verbal predicates (serial verbs and particle verbs). ``I
hammered it flat'' slots into complex verbal constructions naturally.
French, for example, simply doesn't have these.
% https://www.uni-salzburg.at/fileadmin/multimedia/Linguistik/documents/On_predicting_resultative_adjective_constructions-June-2016.pdf
\section{Numerals}
Bases found in human languages: 2, 3, 4, 5 (hand), 6, 8 (like 4,
counting the spaces between digits), 12, 20, 60. Hybrids: 5+20(+80),
2+5, 10+20. Base 10 is by far the most common. Lower bases are less
common, and will generally only go up to a few powers, with a base 4
system, for example, only reaching 8, 16, or 32.
The higher power units may not be usable alone. That is, unlike
English ``ten,'' some languages may require it be ``one ten'' in the
same way 20 is ``two tens.''
The words for 2 and 2nd often convey sense of ``another''
(``secondary'').
Overcounting: some systems overshoot, then work backwards, so that
``seven thirty'' is 27 (Old Turkic). Subtraction: 8 = two from ten;
31 = one plus ten from $2\times20$.
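As arithmetic, one plausible reading of those two glosses (an
illustrative rendering only, not an analysis of any particular
language's morphology):
\begin{center}
overcounting: ``seven thirty'' $= 20 + 7 = 27$ \\
subtraction: ``two from ten'' $= 10 - 2 = 8$;\quad
``one plus ten from $2\times20$'' $= (2\times20) - 10 + 1 = 31$
\end{center}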
A very few languages (PNG) use different bases for counting different
things (as in Bukiyip, which uses 3 and 4).
% http://sajms.com/wp-content/uploads/2016/03/A-Typology-of-Rare-features-in-Numerals.pdf ``A typology of rare features in numerals''
\section{The Verb}
``Labile transitivity'' is very common. Be clear on \I{s = a}
\textit{vs.} \I{s = o} for intransitive constructions. Some languages
may skew in favor of one or the other, but others do not. English
``cook,'' with transitive and both intransitives (``the chef cooks;
the chef cooks stew; the stew is cooking'') is unusual.
Lability is a dodge! If the language has rich mechanisms for changing
transitivity, it's less frequent.
Labile verbs tend to cluster semantically, 1) destruction and strong
property change, ``break, boil, freeze, dry, go out, melt, dissolve,
burn, destroy, split, kill/die;'' 2) motion and spatial
configuration, ``rock, roll, sink, spread, close, open, connect,
rise/raise, stop, fill, turn;'' 3) phase, ``begin, finish;'' 4)
non-physical effect, ``change, improve, develop;'' and 5) verbs with
an animate patient, ``wake up, learn, gather.'' A single language may
pick several clusters, or only a few items from one.
% Letuchiy, A., "Interpreting the spontaneity scale."
A transitive verb may be used intransitively in an extended
intransitive sense, too.
Verb pairs (``die/kill,'' ``eat/feed'') for which the S/O role is
primarily animate are more likely to pattern with one basic
transitivity for the simplest stem (i.e., ``kill'' is primary with a
derivation for ``die''), while those with primarily inanimate S/O role
verbs (``boil/boil,'' ``burn/burn'', ``fall/drop'') will also have a
primary transitivity (in English, these are often labile or
suppletive). In general, though, the basic role for animate verbs
will be intransitive, with augmentation for the transitive form. On
the other hand, viewed in the large, some languages lean fairly
intransitive, some quite strongly transitive.
Stative verbs often mark fewer tense or aspect distinctions (such as
Eng.\ -ing, ``*I am knowing''). Or, they may take some additional
marking obligatorily (Turkish).
Among other categories, some languages have verbs that mark ease or
difficulty. The marking for ``ease'' on a transitive verb may signal
a small \I{o}.
In richly marked verbs, these may not be marked: \I{3rd} inanimate
objects, all \I{3rd} objects, \I{3rd} topical subjects, \I{3rd}
absolutives, all \I{3rd} of any kind.
\subsection{Affix Order}
The fewer person markers there are, the more likely they are to be
prefixing. But: object marking on the verb is prefixing with more
than chance frequency. Prefixing in general is less common with
\I{ov} than with \I{vo}, except for object marking, with a distinct
preference for object prefixing with \I{ov}.
% https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.616.5686&rep=rep1&type=pdf
% http://cysouw.de/home/presentations_files/cysouwASSYMETRY_handout.pdf
If \I{t} and \I{a} are on the same side of the verb, \I{a} is closer;
if different, the order is \I{t Verb a}.
\subsection{Person Marking}
Ways to express the subject of a verb, with overt subject vs.\ no
overt subject:
\begin{grammarlist}
\item Marta came-she / came-she (70\%)
\item Marta came-she / she came-she (very rare outside Germanic sphere)
\item Marta came / she came (7\%ish)
\item Marta came / came-she (14\%)
\item Marta came / came (7\%ish)
\end{grammarlist}
% https://dlc.hypotheses.org/1340
Marking of both \I{a} and \I{p} is somewhat more common than all other
options: no agreement, marking \I{a} only, marking \I{p} only (rare),
marking of either (very rare). Agreement may be either by affixes, or
by clitics that will tend to attach themselves prosodically to the
\I{vp}, or in Wackernagel position.
With objects especially, but subjects also sometimes, the verbal
marking may only occur as a pro-form replacing a missing overt object;
or according to some feature of the arguments, such as animacy,
definiteness, high referential prominence, word class (\I{np}
vs.\ \I{pro}); or by tense, aspect, polarity, mood, or even clause
type.
Grammaticalization path of \I{o} marking is probably different from
that of \I{s/a}.
% https://dlc.hypotheses.org/1304
\subsection{Tense Aspect Mood}
\subsubsection{Aspect} Perfective marking has a tense hierarchy: past
> future > present (and the present may be reinterpreted, for example
as habitual or future).
\subsubsection{Telicity}
Tense and default aspect interpretations of verbs may be very
dependent on telicity.
\begin{center}
\I{(pfv)} Achievement > Accomplishment > Activity > State \I{(ipfv)}
\end{center}
\noindent The most natural location for perfective marking is on
achievement verbs; the most natural for imperfective is state (though
state verbs themselves can be odd). Cross-marking (\I{ipfv} on
achievement verb) may have other interpretations (such as iterativity
in English ``I was sneezing'').
Im/perfectives may have a default tense interpretation (present/past),
which can split anywhere along the hierarchy above. In Even, aorist
activities are present, aorist achievements and accomplishments are
recent past.
% http://web.philo.ulg.ac.be/lediasema/wp-content/uploads/sites/43/2018/09/Malchukov_2018_LeDiasema.pdf
\subsubsection{Mood}
It's not unusual in a single language for there to be more kinds of
constructions for expressing mood and the future than for tense and
aspect.
``Irrealis'' often associated with future, or becomes future
historically.
\paragraph{Frustrative}
A frustrative may be a separate word, a clitic, or (most common) an
inflection. As usual, a range of meanings possible, with no
expectation that a single language will use a frustrative for all
senses (mostly derived from Amazonian data):
\begin{grammarlist}
\item action was accomplished, but without expected result (often
unspecified, but inferrable from context)
\item action was accomplished but the speaker finds it irritating or
inconvenient (in a few languages, this sense only in negative clauses)
\item speaker's unwilling participation
\item incompletive, the action was started but not accomplished
(``nearly'')
\item incompletive, the expected action didn't even start
\item counterfactual
\end{grammarlist}
% https://www.academia.edu/28327536/A_typology_of_frustrative_marking_in_Amazonian_languages
If cliticized to a noun, often depreciative rather than simply scoping
over clause.
May be restrictions on combining with: tense, polarity, mood (such as,
never with imperative, or never in future).
\subsection{Evidentiality, Mirativity, Egophoricity}
% http://www.diva-portal.org/smash/get/diva2:821317/FULLTEXT01.pdf
All of evidentiality, mirativity, and egophoricity potentially have
polysemous marking, so that certain uses of, say, evidential marking
might be used for a mirative. May be merged with rest of TAM system,
or not.
Evidentials may distinguish: visual evidence, non-visual or sensory,
inference, assumption (including reasoning, assumption, or general
knowledge), hearsay, and quotative for overt reference to the source.
Evidentials may be limited to particular tenses or aspects; least
common in the future. Most common two-way distinction is reported
vs.\ everything else. Common three-way: direct, hear-say, inference.
Egophoric marking\footnote{Some scholars consider egophore a kind of
person marking. This seems less likely as more data comes in.}
prototypically marks \I{ego} for first person and \I{nonego} for
everything else in statements, and marks \I{ego} for second person and
\I{nonego} for everything else in questions. Evidential distinctions
more likely in \I{nonego}. Again, past is most likely to mark the
most distinctions. Might be separate marking for agent
vs.\ experiencer ``subject.'' Egophoric marking might only occur with
volitional predicates (``cook,'' but not ``be sick'', Tibetan).
Direct egophoric marking might be used in a clause where the speaker
has some relevant and intentional involvement (``he's at the
airport-\I{ego},'' e.g., because I took him). Egophore marking likely
to be more distant from verb stem than \I{tam} or person.
Egophoric marking centrally codes epistemic source or authority. It
might also code for control and agency, such that \I{nonego} marking
with first person can indicate unintentional acts (``I ate a
bug-\I{nonego},'' i.e., by accident).
\subsection{Valency}
Possible that applicatives can only be used for animate arguments.
Less often, may only be used with intransitive verbs.
It may be that applicatives in general are less frequent in languages
with rich case systems (less necessary).
Antipassive marker may code for humanness (Rgyalrong).
In languages that restrict causatives in transitives to verbs of
cognition and perception (know > inform), verbs of ingestion and
consumption (eat > feed) may pattern with them.
% https://eprints.soas.ac.uk/5972/1/JAGGAR_BUBA_HAUSA_EAT_DRINK.pdf
% Mine this paper for additional data. http://linguistics.buffalo.edu/people/faculty/dryer/dryer/KeenanDryerPassive.pdf
% 2020 location: https://linguistics.ucla.edu/people/keenan/Papers/KeenanDryerPassiveProofs.pdf
\subsubsection{Passive} Most often synthetic, with the marking usually,
but not always, closer to the verb stem than \I{tam} marking.
If auxiliary, most common intransitives are \textit{be, become,} and
verbs of motion; most common transitives are \textit{get, receive,
suffer, touch;} even \textit{eat} is attested.
Accessibility in promotion of passive: \I{direct object > indirect
object > adjunct}.
Actor most often represented by an oblique: instrumental, locative
(preposition, \textit{by}, ὑπο, etc.), genitive. Rarely, special
marking is employed.
\subsection{Semantic Types and Their Frames}
\begin{center}
\begin{tabular}{lllll}
\I{affect} (\E{hit, cut}) & Agent & Target & Instrument \\
\I{giving} (\E{give, lend}) & Donor & Gift & Recipient \\
\I{speaking} (\E{speak, tell}) & Speaker & Addressee & Message & Medium \\
\I{thinking} (\E{consider}) & Cogitator & Thought \\
\I{attention} (\E{see, hear}) & Perceiver & Impression \\
\I{liking} (\E{like, love, hate}) & Experiencer & Stimulus
\end{tabular}
\end{center}
In some languages, some \E{affect} verbs may require an inanimate
agent, usually specific, such as food making a person sick,
\textit{etc}.
In very few languages are \I{liking} verbs like \I{annoying} verbs,
where the experiencer and stimulus are in \I{o} and \I{a} roles.
However, extended intransitives, with an oblique experiencer, are a
bit more common.
\I{attention} and \I{liking} may have extended intransitive frames.
Or dative subjects. Or oblique objects.
\I{want} verbs may have extended intransitive frames.
The types with more than two arguments may have lexical or
construction splits to determine which of the arguments is in the
primary \I{o} slot (``tell'' vs. ``speak (French)'' vs. ``say'').
For \I{give} there may be i) single verb with different constructions,
ii) gift role is always in \I{o} function, iii) recipient role always
in \I{o} function (rare; amusingly, with \I{dat} for the gift role).
Or, ii and iii may have different lexical items.
\subsection{Associated Motion}
In addition to the basic trans- and cislocative, which link a motion
to the main event of the verb, they may code relative time of motion
(prior: \I{do.arriving}, ``when, after going, X''; simultaneous:
\I{do.going}, ``while going, X''; subsequent: \I{do.and.leave}, ``before
going, X''). They may distinguish simple deictics (``come in order
to'') vs.\ associated motion (``come while doing''). There may be a
distinction between ``go'' and ``come'' vs.\ ``go back'' and ``come
back.'' Arrernte distinguishes a ``quickly'' or hurried series, as
well as ``do coming through.'' No deictic center may be present for a
small subset (e.g., ``do here and there''). In some systems, the
elaboration for \I{s/a} is reduced, but there is a series for \I{o} on
transitive verbs,\footnote{With deixis oriented to the verb subject.
Not sure if that's universal.} ``I saw-\I{go(o)} a man'' for ``I saw
a man going away from me.'' Finally, the target of motion may be
coded for stability (temporary or permanent,
``enter.\I{come.temporary}'' ``come in,'' vs.\ ``sit.\I{go.perm}''
``go and sit and stay there'').
Very rarely the two core associated motion affixes may be used
together for ``away and back'' sense.
Expansion of the basic system often includes ``up'' and ``down''
senses (and these may be the only extensions); and then ``in'' and
``out.'' Erromangan has a misdirective, ``away from expected
direction, off to the side,'' etc., and Northern Paiute includes
random motion in the set.
Rarely these affixes may be confined to imperatives. Affixes might be
different depending on transitivity of verb, number of subject, or,
very rarely, tense or aspect (probably from \I{svc}s).
% Data Handout for A Cross-Linguistic Survey of Associated Motion and
% Directionals, Daniel Ross
These locate an event in space much like tense locates an event in
time. Though using these may add locative arguments to clause, the
main verb is always foreground. Further, they may link discourse,
with things like ``he came, and he argued-\I{cis} with me,'' with
``come'' and \I{cis} marking the same path information.
Directional affixes may be grammaticalized in various ways (inward =
perfective, downward = progressive, upward = imperative; cis =
inceptive, change of state, trans = endpoint of activity); or more
metaphorical senses (down = not fully satisfactory, up = better, back
= return to health or satisfactory state).
Like tense and aspect, these locatives may be restricted or forbidden
in subordinate or nominalized clauses (these are inflectional, not
derivational).
% https://ecommons.cornell.edu/bitstream/handle/1813/13037/Deal.pdf?sequence=2
Associated motion appears to occur in about 1/3rd of languages, with
certain hotspots (Australia, the Andes and Amazonia). It is less
likely in languages with serial verbs. The forms appear to be
diachronically unstable, with potentially wide variation between
otherwise closely related languages.
% http://linguistics.berkeley.edu/~fforum/handouts/vuillermet_AM_fforum_handoutFINAL.pdf
% http://www.academia.edu/7866301/Reconstructing_the_category_of_associated_motion_in_Tacanan_languages_Amazonian_Bolivia_and_Peru_
% https://escholarship.org/uc/item/85r367r3#page-40
% http://www.ddl.ish-lyon.cnrs.fr/trajectoire/23us23efd5ps/Presentations/Pres_Guillaume091107.pdf
\subsection{Participles}
% Ksenia Shagal, ``Towards a typology of Participles,''
% https://helda.helsinki.fi/bitstream/handle/10138/177418/Towardsa.pdf
Participles may have an \textit{inherent orientation} or a
\textit{contextual orientation.} European languages have inherent
orientation, where the role participant is determined by the
participle form (active participle for agent, passive for patient).
In contextually oriented participles, several different participants
(agent, patient, location) can all be encoded, and usage determines
the interpretation. The relative accessibility hierarchy will be in
play here, too. Contextually oriented participles are generally
generous in what they will accept on the hierarchy.
\rara{Separate inherent orientation participles distinguishing \I{s}
from \I{a} role.}
Locative-oriented participles (``live.in-\I{pcpl} city'') and
instrumental-oriented participles (``book write-\I{pcpl} pen'') are
possible, but rare-ish. Instrumental may be interpreted as reason,
attaching to a generic noun meaning ``way'' or ``time.''
It is possible to have passive participles (i.e., oriented on \I{p}) in
languages which do not have finite passive constructions at all. It
might be obligatory to state the agent in such constructions. Passive
participle forms are likely to be the form used to relativize further
down the accessibility hierarchy.
\I{s+p} absolutive participles may only be permitted for low-agency
intransitives (``fallen leaf'' but not ``danced woman''). Telicity
sometimes a core consideration.
Participle forms often have several functions in addition to
adjective-like attribution: adverbial (``watching the children,
...''), clausal argument (``I saw he was not running''), action
nominals (``running is good''). Participles and nominalizations may
be hard to distinguish formally.
Sometimes participle markers include \I{tam} information, sometimes
not. Markers that do not include \I{tam} information are sometimes
restricted in what verbal \I{tam} marking they may occur with. Or,
separate \I{tam} markers are used with participles. When the
participles code \I{tam}, it is normal for fewer distinctions to be
made. Finally, participles may make no \I{tam} contrasts at all.
There may be separate negative participle forms. Also, particular
participle forms may not be permitted with negation at all, requiring
a clause to be recast into a particular participle form to be negated
(such as the absolutive participle in Georgian). Nominal negating
constructions may be preferred. Rarely, it might not be permitted to
negate any participle types.
Arguments to participles may be as for verbs, or as for verb
nominalizations (genitives frequent).
Even languages without an adjective class can have things quite like
participles, which are used in relative and attributive constructions.
Participles in some languages are the only way to manage a relative
clause. Or, some marker plus the participle forms a separate relative
construction.
\subsection{Converbs}
Moderately more likely in SOV languages. Most things called converbs
require same subject, but an argument can be made for inflected
converb-like functions being part of the same phenomenon (cf.\
Coptic). Forms not inflected for person may or may not require same
subject, though same subject is most common. Argument marking may be
identical to a main clause, or different. There may be restrictions
on negation. Rarely, a converb clause may occur at the other margin of
the main clause, often with iconic restrictions, as in a purpose or
intention converb being allowed to follow the clause while an
imperfective converb may not (in an SOV language).
Simplest functions: aspectual, imperfective (simultaneous) and
perfective (anterior, usually). Some languages have many forms (all
from Northern Akhvakh): locative (may take location case marking),
inceptive (``from the moment X-ing began''), immediate (``as soon
as''), anterior (``before X-ing''), imminent (``just before X-ing''),
non-posterior (so fast that the converb event may not have time to
occur at all, ``come down here-\I{cvb} before something bad
happens''), conditional, concessive (``although''), similative (``in
the same way as''), gradual (``the more..., the more...,'' with the
main clause just normally marked), cause (``because''), purposive
(``in order to'').
In languages with rich converb inventories, some may be highly
restricted with respect to the main clause verb: Akhvakh progressive
converb only occurs with ``be, remain, see, find.''
In Turkic languages (and some others), converb forms are the \I{lex}
in auxiliary constructions.
% http://www.deniscreissels.fr/public/Creissels-adv.sub.Akhv.pdf
% http://pj.ninjal.ac.jp/vvsympo/NINJAL2013_Shluinsky_handout_changes.pdf
% http://www.livingtongues.org/docs/AVCsOT1.pdf
\section{The Adverb}
In languages otherwise without a well-defined adverb class, or a
regular way of forming adverbs, there may be root adverbs for the
domains \I{speed} (``quickly, slowly'') and \I{value} (``well,
poorly, bad''). In addition to root adverbs, these can show up as
derivations from adjectives which may differ from other adverb-like
constructions somehow (such as being zero-marked). These two semantic
cores are rarely the only ones in any particular language.
%http://www.ling.su.se/polopoly_fs/1.99116.1346331417!/menu/standard/file/Hallonsten_Halling_Pernilla.pdf
\section{Number}
The distinction between ``paucal'' and ``plural'' is context
dependent.
If there is a trial, its use may signify salience of the number, with
the plural being used for three much of the time.
Number suppletion in verbs: not common, aligns ergatively. Usually
only 1--4 verbs take it, but may reach a couple dozen. If there is a
sg/du/pl distinction, at least some verbs will just have sg/pl (or
non-pl/pl). Among intransitives, the posture verbs are most commonly
suppletive, \textit{sit, stand, lie,} and most likely to show numerous
number distinctions (du). Next: \textit{enter, go, be big, die/be
dead, hang, arrive, run, come, fall, cry, be little}. Transitives:
\textit{kill, put/place, throw, give, break, take, bring, carry}.
\subsection{Countability Classes} In languages with a
singulative-collective distinction the hierarchy is: substances <
aggregates < individuals; with individual most likely to be singular
by default, with a marked plural. In languages with singulatives,
aggregates most likely to take those.
Singulatives are rather common with small fruits and vegetables, small
animals, sometimes paired body parts, and groups of people (Welsh:
moch \textit{pigs} vs.\ mochyn \textit{pig;} mwyar
\textit{blackberries} vs.\ mwyaren \textit{blackberry}).
%https://dlc.hypotheses.org/1554
% \section{Exponence}
% https://www.academia.edu/41951336/A_Typological_Approach_to_the_Morphome_-_PhD_dissertation
\section{Constructions}
Any schematic construction may, like lexical constructions, be
polysemous. For example, the English ditransitive:
\begin{grammarlist}
\item \I{X causes Y to receive Z} (central sense), ``Joe gave Sally
the ball.''
\item Conditions of satisfaction imply \I{X causes Y to receive Z},
``Joe promised Bob a car.''
\item \I{X causes Y not to receive Z}, ``Joe refused Bob a cookie.''
\item \I{X acts to cause Y to receive Z} at some future time, ``Joe
bequeathed Bob a fortune.''
\item \I{X enables Y to receive Z}, ``Joe permitted Chris an
apple.''
\item \I{X intends to cause Y to receive Z}, ``Joe baked Bob a cake.''
\end{grammarlist}
Similarly, a particular language may have one construction for a
particular job, such as a ditransitive construction, but others have
several with different pragmatic or pivot significance (``I gave him
the book'' \textit{vs.} ``I gave the book to him'').
\subsection{Grammatical Relations \& Alignment}
% Bickel, ``Grammatical Relations Typology''
Grammatical relations are sets of arguments that are treated the same
way by some construction, such as being assigned the same case or
causing the same kind of agreement.
Nouns and verbs may have separate alignments, such that nouns may
align ergatively but the verbs nominatively, for example. Grammatical
relations are construction-specific.
\I{s} = intransitive subject, \I{a} = (di)transitive agent, \I{o} =
transitive object (often this is \I{p}), \I{t} = ditransitive theme,
\I{g} = ditransitive goal (or ground).
Rarely, transitive \I{a} is distinguished from ditransitive \I{a} (and
not by a separate case, but by a separate construction).
\begin{grammarlist}
\item \{\I{S}\} intransitive subject, nominative
\item \{\I{S, A}\} subject, nominative; accusative alignment
\item \{\I{A}\} transitive subject, ergative
\item \{\I{O, T}\} direct object, accusative; indirective alignment
\item \{\I{O, G}\} primary object, dative; secundative alignment
\item \{\I{T}\} secondary object
\item \{\I{G}\} indirect object, dative
\item \{\I{S, O, T}\} absolutive; nominative; ergative alignment
\item \{\I{S, O, G}\} absolutive; nominative; ergative alignment
\end{grammarlist}
\I{o} arguments may map to different relations based on: animacy,
humanness, definiteness, specificity or saliency. In Nepali, animate
\I{o} are marked as \I{g}. In Swahili, \I{o} is marked on verb only
if animate and/or known. Or, non-salient \I{o} may be incorporated
into verb.
\I{a} arguments are the inverse of \I{o}: the lower the salience or
animacy the more likely they are to get overt marking, so that a human
may get nominative, a stone ergative. In some languages inanimate
\{\I{S, A}\} is basically impossible, with voice changing or
incorporation to handle things like ``the wind broke it.''
\begin{grammarlist}
\item speech act participant > kin/name > human > animate >
inanimate > mass
\item specific > nonspecific referential > generic/nonreferential
\item known/topical/thematic/definite > new/focal/rhematic/indefinite
\item singular > plural
\end{grammarlist}
Role marking may be sensitive to other roles within the clause. Yurok
only marks \I{o} when there is a \I{3p} subject. Sahaptin only marks
\I{A} if the \I{o} is a SAP, Tauya only if \I{o} is human.
It is unusual for languages to have obligatorily filled grammatical
role positions (very much unlike English).
In Eastern Kiranti, a specific \I{o, g} argument can be an \I{np}
(adjective attribution, etc.), while nonspecific must be a bare
\I{n}.
Floating quantifiers may prefer a particular grammatical relation. In
Tagalog, \textit{lahat} ``all,'' floats to Wackernagel, but goes with
the proximative relation. In Yélî Dnye, quantifiers float to the
preverbal position, but go with \{\I{s, o}\}.
Focus constructions may be different for different relations.
The non-ergative clauses in so-called split ergative systems are
rarely nom-acc (\I{a = nom, p = acc, s = nom}). Instead one can get
\I{a = abs, p = abs, s = abs} (common), \I{a = abs, p = obl, s = abs}
(common) or even \I{a = erg, p = abs, s = erg} (common in Mayan). The
aspect splits often reflect (historically) intransitive \I{aux}
constructions. Thus, in rare circumstances, one may get splits in
future clauses, or with negation, if they reflected an original
\I{aux} construction.
\subsection{Complement Clauses}
These come in three types.
1. Fact type, indicating that something did take place. Marked
similarly to a main clause. If the subject is the same across clauses, it
isn't likely to be omitted. Usually marked with some complementizer
element which will have other functions in the language (often ``say''
or ``be like''). Complementizer may code reliability (sure fact
vs. possible fact).
2. Activity type, indicating extension in time. Often similar to a
noun phrase, but will still have a subject. If the subject is the
same, it may be omitted. Or the verb may have a special form.
Generally less specified in TAM and negation than a main clause; may
not include same bound pronominal elements.
3. Potential type, typically less like a main clause than the Fact
type and less like an NP than the Activity type. In some
languages, the subjects must be the same. Reduced TAM and pronominal
marking. Implicit time reference to the same or posterior time.
Generally a special verb form (``infinitive'') or may take marking
similar to dative or some other case.
Languages may range from one to 5--7 complement types, with subtypes.
Attention verbs (``see, hear, show'') typically take Activity
complement. May take Fact for completed actions or states.
``Find, discover'' are expected to take Fact.
Thinking verbs (``think (of, about, over), consider, imagine'') take
Fact, or sometimes Activity (``think about''). ``Assume, suppose''
take Fact. ``Remember, forget'' take Fact, with English unusual in
taking Potential (``I remembered to shut off the stove''). ``Know,
understand'' take Fact or Potential.
Deciding (``decide, resolve, choose'') take Fact or Potential.
Liking (``love, prefer, regret, fear'') take Activity or sometimes
Fact. ``Enjoy'' takes Activity.
Speaking has several subtypes. ``Say, inform, tell'' usually take
Fact. ``Report'' takes Fact or Activity. ``Describe, refer to''
takes Activity. ``Promise, threaten'' takes Potential, which may be
in the indirect object slot. ``Order, persuade'' generally take
Potential.
\subsubsection{Epistemic Stance}
In addition to things like the subjunctive, the epistemic stance
towards an embedded clause can be affixes, clitics, particles,
adverbs, and auxiliaries. Jacaltec has two different complementation
conjunctions for verbs of speaking, one indicating certainty, one
indicating reservation or hearsay.
% https://lsa2009.berkeley.edu/alt8/vanlier_boye_abstract.pdf
\subsubsection{Expletive Negation}
% ``Fear complement clauses in typological perspective,'' Dobrushina,
% Nina, 2017.
In fear clauses, especially if not indicative, negation may be a
requirement of the construction, though it is not negating the fear
clause.\footnote{French has it optionally: \E{je crains qu'il (ne)
vienne} ``I'm afraid he might come.'' vs.\ \E{je crains qu'il ne
vienne pas} ``I'm afraid he won't come.''} If a language allows
different complement constructions for fear-verbs, one may take
expletive negation while the other does not (cf.\ Russian \E{čto}
vs.\ \E{kak...by ne}).
These may evolve from wish constructions, or from negative
purpose clauses.
When fear clauses look like embedded questions, even if indicative,
expletive negation is more likely.
Fear-verbs likely to take this: \E{fear, prohibit, hinder, prevent,
avoid, deny, refuse}. In a few languages positive predicates
(\E{hope}) might have some constructions with expletive negation.
Expletive negation may also occur in: exclamatives, emphatic
questions, concessive conditionals, before clauses, until clauses,
polite requests, comparatives.
\subsection{Desententialization}
The most desententialized are purpose clauses: purpose $<$ before $<$
after, when $<$ reason, reality condition. Also: phrasal, modal $<$
desiderative, manipulation\footnote{\textit{urge, suggest,} etc.} $<$
perception $<$ propositional attitude,\footnote{\textit{believe} and
the like} knowledge $<$ utterance.
% Add more from: http://www.academia.edu/2099252/Argument_coding_and_clause_linkage_in_Australian_Aboriginal_languages
\subsection{Negation}
% http://youngconfspb.com/application/files/2115/1334/8210/Miestamo.pdf
Negative affix only a bit less common than negative particle. Very
rarely, might be circumfix.
Asymmetric negation (different structure under negation beyond mere
presence of marker) reasonably common, with a mix of symmetric and
asymmetric within a single language a bit more common than either
alone. Possible asymmetries: lexical verb loses finiteness (sometimes
with \I{aux} negator, sometimes other \I{aux} with nominalized,
negated \I{lex}); negatives use irrealis of some sort; negatives have
some sort of marking used for asseveration in non-negated clauses; difference in
grammatical categories (TAM, argument structure; Harar Oromo removes
all person marking distinctions in negative). Negation is stative, so
negated clauses may partake of some stative construction behavior.
The discourse context of negatives \E{contra} affirmatives necessarily
means the details matter less.
Nonstandard negative constructions more likely in: imperatives,
existentials, nonverbal clauses.
Negative imperatives might be (with WALS counts): usual imperative +
usual negation (23\%); usual imperative + different negation (37\%);
different imperative + usual negation (11\%); different imperative +
different negation (29\%).
Circumfix all over planet, just not common.
Possible distinctions, beyond simple ``not:'' noun vs.\ non-noun, verb
vs.\ non-verb (where adj.\ may pattern with either); different negator
for the future; prohibitive; participle negation; negative particle
for non-existence (which may be like ``no'' or plain negation for
nouns). Non-verb negator can be grammaticalized from ``other,
different; refuse, not want.''
\subsection{Questions}
Focus questions (``do \textit{you} want it?'') usually look like polar
questions, but might have a special affix or structure; could have
affinity with alternative question construction.
Rarely, negative polar questions might be distinct from positive polar
questions.
Having unmarked polar questions is extremely rare, whether lacking an
affix declaratives do have (Sheko) or being identical in form and
intonation to a statement (Yélî Dnye). Most common is a marked polar
question and unmarked declarative, though marked declaratives can
happen (Crow, Sabanê).
Intonation is among most important ways to mark questions, though it
usually is accompanied by other markers. Less than 20\% of one survey
(of 955 languages) use intonation alone. The rising pattern, as in
English, is in no way universal. In West Central Africa, an areal
feature of questions involves falling pitch, vowel lengthening, and a
breathy end to the intonation unit.
Word order change to mark questions is extremely rare, basically
confined to Western Europe.
\rara{Koasati uses a glottal stop infix as a question marker (usually
  at a morpheme boundary).}
\rara{Halkomelem uses an auxiliary verb for polar questions.}
% A Typology of Questions in Northeast Asia and beyond, Hölzl, 2018.
\subsection{Conditionals}
The protasis of a conditional and formal topic marking may be
identical or similar (including ``if X \I{adp}'' or the like for
topic marking).
\section{Discourse}
\subsection{Definiteness}
In languages with (in)definite articles, where indefiniteness is
marked by articles within the \I{np}, indefinite nouns tend to be
unrestricted in core argument position.
In languages without (in)definite articles, non-specific arguments are
more likely to have restricted core case functions, and definiteness
distinctions tend to be encoded in the verb phrase, generally with
various valency operations (\I{dom}, incorporation, antipassives,
etc.). In such languages, existential expressions are preferred to
introduce novel discourse topics. Even a language with articles,
such as French, disprefers indefinite subjects, using an existential +
rel.\ expression in colloquial speech. If semi- or de-transitivizing
verb forms are available in an article language, they might be used.
Subjecthood cline: definite > generic > specific indefinite >
indefinite. In some languages, subjects \textit{must} be definite
with event verbs.
Objecthood cline: definite > specific indefinite > generic >
indefinite. Again, indefinite objects may trigger valency reducing
forms, making objects oblique (or incorporated).
Individuation may be part of the clines above, with, for example,
``the stars'' as a mass treated more like an indefinite than ``the
moon.''
% https://hal.archives-ouvertes.fr/hal-02881062/document
\subsection{Topicality}
Generic topicality hierarchies: speaker > hearer > \I{3rd}; human >
animate > inanimate; agent > dative > patient; large > small (and
adult > child); possessor > possessed; definite > indefinite; pronoun
> full \I{np}.
Referential hierarchies: Speech-act participants > Kinship/Name >
Human > Animate > Inanimate; Specific > Non-specific referential >
Generic; Known/Topical/Thematic/Definite > New. Different languages
take different approaches for SAPs, with both 1 > 2 and 2 > 1 found
(the latter a politeness matter, apparently) and number may play a
role, too. Found in the wild: \I{1pl/2pl > 1sg > 2sg}, \I{1pl > 2},
\I{2pl > 1 > 2sg}.
% http://elanguage.net/journals/lsameeting/article/viewFile/2826/pdf
\subsection{Focus}
Focus constructions may differ by argument type, clause type,
polarity, etc.
Focus markedness (from least to most): indefinite \I{np}, definite
\I{np}, pronoun, bound pronoun, zero.
WH-questions generally require a focused answer. Further, if the
language has \textit{ex situ} WH words (initial, pre-verbal, etc.),
the focus phrase will usually end up in the same position.
Clauses with focused elements tend not to have overt topics.
Focused subjects may move into object position. In an \I{sov}
language, the focused subject may move immediately before the verb.
In a very few languages, subjects may be incorporated into an
intransitive verb to focus them.
% https://spot.colorado.edu/~michaeli/courses/LAM7420/Lambrecht.pdf
In some (\I{sov}) Omotic languages, what look like the subject
agreement affixes (clitics?) can move around and attach to a focused
element. With content questions, the often fronted question word
always gets the subject marking. In Huallaga Quechua, the evidential marker
goes on the focus (or the verb, in the most neutral, predicate focus
situation).
\subsection{Functions} Discourse particles, expressions, and adverbs
serve a set of core discourse functions.
% http://www.ling.uni-potsdam.de/~stede/Papers/StedeSchmitz00.pdf
\textbf{Structure.} What are usually called \I{push} and \I{pop}
introduce and recover from minor digressions, ``by the way, but,
anyway.'' The \I{check} function asks the listener to verify, ``isn't
it?'' There are two editing functions, \I{repair} which fixes
information, ``no, rather...,'' and \I{exemplify} which gives an
example, ``for example, for instance.'' The \I{hesitate} function is
filler to let the speaker figure out what they're going to say next.
\I{uptake} indicates that the speaker has heard and understood the
listener, and is ready to move on, ``so.''
\textbf{Coherence.} These functions address knowledge assumptions and
attitudes, with \I{known} indicating assumed shared knowledge, ``as
you know, indeed,'' and \I{revised} indicating an updated assumption,
``after all.'' A speaker might restate something ``in other words.''
\textbf{Attitude.} \I{positive} and \I{negative} ``unfortunately.''
\I{indifferent} ``as far as I'm concerned.'' \I{surprise} ``did you
actually?'' and \I{prefer} ``I would rather.'' The \I{smooth}
function can generate a vast stream of words which tone down, lighten,
and moderate face threats. And \I{emphasize} marks something as high
on a scale, ``really.''
Discourse particles in practice might indicate only one of the major
functions, or might, depending on use, cover several. Alternately,
several functions might be joined into a single discourse construction
that expresses all those functions simultaneously. A discourse
construction may have a particular function interpretation only
in questions.
Different function constructions may be possible in different places
in a discourse: initiation of a turn, reply (or reply-initiate) to a
turn, midclause, or uttered alone, with different nuances in meaning
possible.
% https://www.researchgate.net/profile/Salvador_Borderia/publication/348190527_Using_discourse_segmentation_to_account_for_the_polyfunctionality_of_discourse_markers_The_case_of_well/links/5ff32d1b92851c13feeb11bc/Using-discourse-segmentation-to-account-for-the-polyfunctionality-of-discourse-markers-The-case-of-well.pdf
Discourse construction frequency and selection may vary by register,
genre, politeness, etc.
\section{Lexicon}
Consider a few phonesthemes.
Diminutive/medial/augmentative may code gender of the speaker (Weining
Ahmao, in the classifier system).
Special formal/elevated vocabulary may show several patterns of
systematic relationship between its word shapes and base-level words
(Javanese).
% http://www.linguistics.ucla.edu/faciliti/wpl/issues/wpl17/papers/40_polinsky.pdf
\I{vso} languages are more likely to have even noun-to-verb ratios
(lots of verbs derived from nouns), while \I{sov} languages are more
likely to have \I{n+v} idioms taking up the slack, resulting in more
nouns. This includes \I{svo} languages that lean \I{ov}. Regardless,
there are almost always more nouns than verbs (though perhaps not by
many).
Respect forms may contain a morph for ``lord'' or ``sky.''
Some Austronesian languages have an ``anger'' vocabulary, not
swearing, for verbs and nouns. Some common deformation patterns.
% The Angry Register of the Bikol Languages of the Philippines:
% https://www.sil.org/system/files/reapdata/18/02/03/18020356736639094334766403643144945625/Lobel.pdf
Hunting, fishing, or territorial languages: used to conceal your intent
from animals or spirits, or for tabu considerations; circumlocution and
non-systematic deformation are common practices for this.
Body parts become special concepts in stages: 1) a region of the human
body, 2) a region of an inanimate object, 3) a region in contact with
an object and 4) a region detached from the object. The landmark path
(``extremity, peak'' $>$ ``head'') goes in reverse.
% http://www.sciencedirect.com/science/article/pii/S0388000112000423
Over time words move from external and objective to subjective and
grammaticalized (``boor'' from farmer to oaf, ``feel'' from touch to
experience emotion, ``insist'' from persevere to demand to believe
strongly).
% https://www.reddit.com/r/linguistics/comments/34r1vk/where_can_i_find_a_list_of_metaphors_that_have/
Once a conceptual metaphor has taken root (\I{seeing} is
\I{understanding}) vocabulary merely related to the concept of seeing
may also be dragged into the metaphor later. You don't expect
``brilliant = intelligent'' to happen until the first metaphor is well
established.
% http://www.e-revistes.uji.es/index.php/clr/article/viewFile/1363/1206
\subsection{Movement}
Verbs of rotation may distinguish features: internal vs.\ external
axis; elevation over landmark; control; single vs.\ repeated turn.
% https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2564011
\subsection{The Body}
Many languages have a ``psychic being'' that experiences emotions,
``my psyche became frightened, is happy,'' etc., instead of the entire
person experiencing them. This psychic being may be an internal
organ: liver, heart, stomach, diaphragm, etc.
Some body parts may be conceived of as parts of other body parts,
where ``my lips'' must be ``the lips of my mouth.''
Some languages with obligatory inalienable possessives on body part
nouns may have a special ``derelationalization'' construction.
Tzutujil has a suffix (while possessors are a prefix).
Inessential body parts (``beard, scar'') may be treated differently
than vital ones. Predicate possession of a vital body part is likely
to have additional interpretations (``I sit with a belly'' = satisfied
with food; ``he really has ears'' = he has good ears).
Body part properties may be expressed: ``on/to W there is an A P,''
``W has an A P,'' ``W is provided with an A P,'' ``W's P is A.'' A-P
(or P-A) compounds are not uncommon, as both properties and exocentric
nouns.
Instrumental function of body part terms may be marked in unexpected
ways (``go on/by foot; carry in my hand, on my shoulder'').
A body part may be etymologically related to its most obvious or
common function (such as ``fist'' < ``pound'').
A person experiences and interacts with the world via their body.
Expressions describing that may involve the person or the body part in
many ways across languages.
\begin{grammarlist}
\item The person may be possessor (``wash his face,'' ``he died by
the hand of the executioner''). In some languages the person is
excluded strongly except as possessor (``he hung his neck'' rather
than ``he hung himself'').
\item The person may be indicated with an oblique (often dative),
(``to me the belly aches, I wash to you the hands, I burnt to me
the hand'').
\item In topic-prominent languages, the person may be the topic
(``elephant-\I{top} nose is long'').
\item The body part may be a locative (``I have a pain in my ear''),
ablative (``I have a pain from my ear''), or some other location
construction.
\item Or, the person may be the locative (``at the child the head
aches'').
\item Body part terms may be assumed to be part of the agent or
subject of a clause (``I opened the eyes'' = my eyes, ``she
snapped with the finger'' = her finger).
\item In a few languages, both person and part may take identical
role marking (``dog-\I{acc} you-grabbed tail-\I{acc}'').
\end{grammarlist}
% https://christianlehmann.eu/publ/lehmann_body.pdf
\subsection{Location and Posture}
Some languages are more location precise, some are more posture
precise. In either situation, there is enough information for a
listener to figure out the answer to ``where is X?'' by inspection.
Unmarked location construction for stereotypical, canonical situations
have several types: Type 0: no verb at all in \I{blc} (Basic Locative
Construction); Type 1: single locative verb (1a copula, as English; 1b
locative or existential verb, as Japanese); Type 2, which has a small
set of 3--7 locative verbs; Type 3, which has a large set of
dispositional verbs, 9--100, such as Mayan. A few languages are mixed,
such as Goemai which uses both type 2 and 3 together in \I{svc}s.
\I{blc} hierarchy: animate ground > figure pierced > ground pierced >
adhesion > core scenes (ring on finger > apple on skewer > arrow in
apple > stamp > cup on table / fruit in bowl). The core scenes are
most common \I{blc} use, with the full hierarchy quite rare (of course,
English).
Non-canonical location will use more complex constructions.
Locative copula may make animacy distinctions (Japanese). Or, it may
distinguish from known and unknown (used in questions, or when the
situation is unclear).
Type 2: \textit{sit, stand, lie, squat, hang, be near, be located as a
mass, be supported in a medium (like water);} also dispersion,
\textit{spread, cover,} and attachment \textit{be fixed, be stuck to,
be bound}. Which verb is used for a particular item may have
complex decision trees invoking not only orientation, but
prominence, animacy, respect, containment (``the dog sits in the
living room'' even if it is standing, due to the room), contact,
duration of state of affairs, size relation between figure and ground.
Default may need to be memorized like gender, but non-canonical
orientation means a different verb can always be used. ``Sit'' seems
often the default for uncertain situations. ``Hang'' may get quite
surprising usage (implies permanence, due to attachment, or customary
behavior, such as terrestrial animals going about their normal
business). May be separate ``stand'' verb for inanimates.
Type 2 verbs tend to grammaticalize: imperfective, progressive (>
present), or become articles. The default posture verb for an item
may pervade the language across all constructions in subtle ways: Yélî
Dnye has not only posture verbs, but internal action verbs (\textit{sit
  down, stand up}), verbs of putting (\textit{put standing, put
  sitting}), and verbs of taking (\textit{take sitting X, take hanging
  X}).
\subsection{Perception}
% https://research.jcu.edu.au/lcrc/storeroom/research-projects/evidentiality/folder-2-sashas-publications/perception-cognition-2013
Expressions of cognition frequently resort to verbs of seeing,
hearing, smelling, or feeling. ``Seeing = knowing'' is not as
universal as European languages make it seem, but is still widespread
across various features (evidentials often make the visual evidential
the default, least marked).
A single verb meaning both ``see'' and ``hear'' is possible, with
instrumental (ear) or object nouns (voice, sound) indicating a
``hear'' sense is intended.
Polysemous senses of perception verb \textit{roots} often
disambiguated by different case structure or complementation
constructions. Perception verbs occasionally align with cognition
verbs in using different case relations.
Perception verbs might reject derivation (causative, passive, etc.).
Imperatives of several perception verbs (and ``know'') used as
attention getting discourse markers.
``See'' might: warning, ``look out, beware;'' ``discover;'' suggest
special knowledge; ``know'' in perfective or stative forms; ``look
after;'' ``have an opinion, judge.'' Or: ``desire.'' Expressions of
socializing might use sight verbs. The eye may connote (sexual)
desire or aggression in general (extended eye contact is expected in
some cultures, but is highly aggressive and threatening in others,
where communicants might not even look at each other). Seeing certain
things is a common tabu, but hearing rarely comes in for such tabus.
Sight might be supernatural.
% https://www.degruyter.com/view/journals/cogl/29/3/article-p371.xml
For sight, may be distinct lexical item depending on thing seen,
breaking along: surprising object, new object, old object, fact,
future, experiential perfect (``I had seen this man''), present (more
than one break possible, though not more than three lexical items seen
in the cited paper's data). Some languages may make one end of this continuum the
most common default visual verb, others the other end.
%http://www.youngconfspb.com/application/files/6914/8424/2059/Waelchli.pdf
% Specific and non-specific perception verbs and lexical typology
% Bernhard Wälchli
``Hear'' might: knowing, understanding, remembering. Often, ``obey.''
May be colexicalized with sense of smell. The ear can be the organ of
understanding, memory, but also of emotions, intention, and obligation.
Highly elaborated vocabulary for tastes and smells happen, but are not
common. In the languages that do have richer aroma terms, food smell
vocabulary is common, possibly including cooking method; the smell of
rotting and feces seems to be common; animals, as well as blood and/or
fat in meat preparation; strongly medicinal or herbal aromas; mold and
mustiness.
% https://www.reddit.com/r/conlangs/comments/huedqu/on_smell_terms/
\begin{center}
\begin{tabular}{lll}
Experience & Activity & Phenomenon \\
\hline
see & look (at) & look, seem \\
hear & listen (to) & sound \\
feel & feel/touch & feel (like) \\
smell & smell & smell (of/like) \\
taste & taste & taste (like)
\end{tabular}
\end{center}
There may be different or identical lexical items of any of the types;
or differing by construction (as English for the bottom
three). \I{s=o} lability (as in English ``taste'') applies even to
sight verbs in some languages.
Positive and negative judgement of perceptions are likely to split for
sight (``beautiful, ugly'') with just ``good'' and ``bad'' for the
other senses much of the time, with occasionally specialized terms
following no particular pattern.
Viberg's lexicalization and markedness hierarchy: sight > hearing >
touch, taste, smell (i.e., more lexical distinctions for sight than
for smell).
If there's a lexical difference between experience and activity,
experience verbs will often be more transitive. In ergative
languages this holds less often, and might even have intransitive
constructions for both.
% Competition and Variation in Natural Languages: The Case for Case
% edited by Mengistu Amberber, Helen de Hoop, p.102
May be internal, ``proprioceptive'' sense constructions, for things
felt happening to oneself (``feel'' vs.\ ``feel internally'' is
attested).
Distinction between objective (``look'') and subjective (``seem'') (or
non-mediated vs.\ mediated). May be a single mediated verb across all
senses.
%https://pdfs.semanticscholar.org/cf1d/359040e7c743ede0e63521e748f2867e931e.pdf
A [-control] root such as ``see'' may become [+control] in the
imperative (``look at!'').
\subsection{Pain}
Crosslinguistically few root words to describe pain. Rather, idioms
from four domains are commonly used: burning (including smolder,
shine, boil), destruction or deformation (sharp or pointed instrument;
break, tear; press, pull; burst), sounds (usually for sensations in
the ears or head), and motion (circular movement, twist, jump).
Atelic verbs may be nominalized to produce telic/stative
constructions.
The body part experiencing pain may be locative (``in my hand''), a
subject (``my hand hurts''), or an agent (``my hand hurts me'').
Loss of functionality is often expressed by lack of motion (lock up,
stiffen, be like a stick).
% https://www.academia.edu/7973204/Towards_a_typology_of_pain_predicates_Linguistics_2012_50.3_421_466_соавторы_А.А._Бонч-Осмоловская_и_Т.И._Резникова_
Derivations locating pain or illness (\textit{-osis, -itis, -algia,}
etc.), as in modern medical terminology, are quite rare.
\subsection{Temperature}
Core words for temperature may make only two distinctions (``hot/warm''
vs.\ ``cold/cool''), three (``cold/cool'' vs.\ ``warm'' vs.\ ``hot''),
or four (``cold, cool, warm, hot''). Some languages may have unique
intensifiers for extreme temperatures, usually related to words for
processes or things that exemplify the temperature (such as ice or
burning).
Temperature may refer to three domains: tactile (``the plate is
hot''), ambient (``it's hot here''), and personal-feeling (``I am
hot''). Languages like English or Italian may use a single word
(``cold,'' ``freddo'') for all three; there may be a unique word for each
domain, as in East Armenian; or the domains may be split up among two
words. Expressions for the personal-feeling domain are most likely to
be different in some way. The ambient domain generally makes the most
distinctions, the personal-feeling the fewest. Personal-feeling terms
may be restricted to ``uncomfortably hot'' and ``uncomfortably cold.''
Terms in the ambient domain may mark source of the heat, humidity, the
effect on breathing, windiness, etc., such as one for ``hot'' and
another for ``hot and humid.'' Some terms may be restricted to
season, such as Palula ``cool'' \textit{šidaloó}, which can refer to
pleasant coolness in summer, but is not used to describe not-too-cold
in the winter.
There may be separate tactile domain temperature terms for food and
especially water.
\subsection{Clause Connection}
% http://allegatifac.unipv.it/caterinamauri/GIACALONE_MAURIfinal.pdf
Historical sources for \textit{and:} adv.\ and prep.\ of linear
succession, ``in front, after, before, then'' (as English \textit{and}
from PIE \textit{*hanti} `in front'); focal additive particles, ``also,
too;'' paragraph linkers (``besides, moreover, and then''); comitative
markers (``with'' > ``and''); verbs meaning ``go, bring'' in narrative
contexts (Hdi \textit{là} ``to go'' > ``and''); pronominal roots (PIE
\textit{*tó} > Hittite \textit{ta} ``and'').
Historical sources for \textit{or:} distal ``that, other'' (PIE
\textit{*au-} > Lat. ``aut''); interrogative particle; free choice
verbs (Fr. ``soit... soit...''); dubitative particles ``perhaps,''
etc.; denied conditional clause ``if not, if it is not so.''
Historical sources for \textit{but:} spatial meaning of distance
(separation), closeness or opposition (OE ``be utan'' > ``but,'' ``in
stead,'' German ``sondern'' separate > ``but rather''); temporal
meaning of overlap, ``while;'' temporal meaning of continuity
``always'' (Eng ``still'' constantly > ``nonetheless''); causal and
resultative (``therefore''); comparative meaning ``more, bigger''
(Lat. \textit{magis} > It.\ ``ma'').
Semantic map of contrast: oppositive (``I bought X, but he bought Y'')
> corrective (``I'm not Xing, I'm Ying'') > counterexpectative (``I'm
tall but bad at basketball'').
\subsection{Compounding}
% ``Towards a general model of associative relations,'' https://folk.uio.no/stevepe/Pepper&Arnaud2020.pdf
There are quite a few possible relationships between the head \I{h}
and modifier \I{m} in \I{n+n} compounds. Compounds might be formed by
1) simply cramming nouns together, as in English; 2) root + some
linking element + \I{n} (for euphonic reasons, or some other ligature
required in the language in compounds); 3) \I{n prep n} (as in French
\E{de} and Japanese \E{no}); 4) genitive; 5) \I{n} + head-marked \I{n}
(Turkish); 6) relational adjective + \I{n} (such as ``iron-\I{adj} +
road'' for ``train'' in Russian).
\begin{grammarlist}
\item Similarity
\begin{grammarlist}
\item Taxonomy: an \I{m} is a kind of \I{h}: \E{oak tree;} an
\I{h} is a kind of \I{m}: \E{bear cub}
\item Coordination: an \I{h} that is also an \I{m}: \E{boy king}
  \item Similarity: an \I{h} that is similar to \I{m}: \E{kidney bean}
\end{grammarlist}
\item Containment
\begin{grammarlist}
\item Containment: an \I{h} that is contained in \I{m}:
\E{orange seed;} an \I{h} that contains \I{m}: \E{seed orange}
\item Possession: an \I{h} that is possessed by \I{m}: \E{family
estate;} an \I{h} that possesses \I{m}: \E{career girl}
\item Part: an \I{h} that is part of \I{m}: \E{car motor;} an
\I{h} that \I{m} is part of: \E{motor car}
\item Location: an \I{h} located at/near/in \I{m}: \E{house
music;} an \I{h} that \I{m} is located at/near/in: \E{music
hall}
\item Time: an \I{h} that occurs at/during \I{m}: \E{summer
job;} an \I{h} at/during which \I{m} occurs: \E{hunting season}
\item Composition: an \I{h} that \I{m} is composed of: \E{wheat
flour;} an \I{h} that is composed of \I{m}: \E{sugar cube}
\item Topic: an \I{h} that is about \I{m}: \E{history book}
\end{grammarlist}
\item Causation (source, goal)
\begin{grammarlist}
\item Direction: an \I{h} whose goal is \I{m}: \E{sun worship;}
an \I{h} that is the goal of \I{m}: \E{sales target}
\item Source: an \I{h} that is a source of \I{m}: \E{sugar cane;}
an \I{h} whose source is \I{m}: \E{cane sugar}
\item Causation: an \I{h} that causes \I{m}: \E{tear gas;}
an \I{h} that \I{m} causes: \E{sunburn}
\item Production: an \I{h} that produces \I{m}: \E{song bird;}
an \I{h} that \I{m} produces: \E{birdsong}
\item Usage: an \I{h} that uses \I{m}: \E{oil lamp;}
an \I{h} that \I{m} uses: \E{lamp oil}
\item Function: an \I{h} that serves as \I{m}: \E{buffer state}
\item Purpose: an \I{h} intended for \I{m}: \E{animal doctor}
\end{grammarlist}
\end{grammarlist}
\noindent \I{h} and \I{m} elements in compounds may be metonymic
(``cottage cheese, spaghetti western, godfather, roadkill'') or
metaphorical (``milk tooth'' and the ``-father'' element in
``godfather'').
\subsection{Metaphors and Idiom}
Common source: \I{bodily manifestation for emotion}. The face often
(actions, colors), but also eyes. Other organs get up to shenanigans,
including blood.
Colors seem to often carry emotional connotations. What a
particular color signifies will vary from culture to culture: in Thai,
a green eye or face signifies anger.
Metonymy: result for whole, salient feature for whole (where whole
might be thing or event), instrument for action, effect for cause,
producer for product, both whole for part and part for whole,
perception for thing perceived (sound for event causing it).
Metonymies often chain (eye > vision > attention > desire, as in ``I
have an eye on that new computer'' vs. ``keep an eye on it''), with
idiom and metaphor applicable at all steps. Back, buttocks \textit{>
back part > behind > after;} or \textit{behind > follow, support}.
Belly \textit{> inside part > inside > inclusive, during;} or
\textit{> pregnancy > offspring}. Ear \textit{> hearing > attention,
(dis)regard, obedience, hearsay}. Eye \textit{> vision > attention,
beauty}. Head \textit{> top part > over, beginning, end}. Mouth,
tongue \textit{> speech > word, speech act}.
% http://members.unine.ch/martin.hilpert/CMLG.pdf
\section{καὶ τὰ λοιπά}
In Kolyma Yukaghir there is a separate \I{acc} just for use on \I{1/2
sg/pl} when the subject is \I{1/2 sg/pl}, the ``pronominal
accusative.''
Form dependency: each element's forms may depend on choices made in
the feature above them (such as fewer tense forms in the negative, for
example). The final three are potentially interdependent: polarity
$<$ tense, aspect, evidentiality $<$ person, reference classification
(gender, etc.) $\Leftrightarrow$ number $\Leftrightarrow$ case.
Hierarchy of diminutives and augmentatives: noun $<$ adj., verb $<$
adv., numeral, pronoun, interjection $<$ determiner. These markers
may be distributed over several words in a sentence.
If the noun takes prefixing morphology, so will verbs.
Serial verb constructions more likely in \I{svo} languages than in
\I{sov}. \I{sov} languages are more likely to use converbs, though a
very few have both converbs and SVCs. Having a generalized ``and''
(\textit{i.e.,} the same for \I{np} and \I{vp}) is weakly correlated
with not having SVCs.
Pseudo-coordination (``try \uline{and} see'') is typically restricted
to a very small set of verbs, and in Germanic languages at least is
often used in aspectual constructions (``sit and ...'' in Norwegian
for progressive, ``return and...'' in Arabic for repetition).
Straight-up aspect verbs (``begin'') can also use such a
construction. ``Go'' and ``come'' with purpose may fall into this
construction. Also used for: causative (``make/cause and''),
intention (``plan and''), reason, conditional.
% http://publish.illinois.edu/djross3/files/2014/05/Ross2014_Lisbon.pdf
Writing systems: vertical and horizontal lines, easiest to identify,
most frequent component of most symbols in a set. Obliques and
diagonals do not mix with vertical and horizontals too much (K, A, Z,
less frequent than E, H, F or W and X). Vertical symmetry (M, A, W)
more common than horizontal (K, D, E).
% http://www.shh.mpg.de/654434/legibility-emerges-spontaneously-rather-than-evolving-over-time
\subsection{Oblique Strategies}
Don't ask what it does, but how it moves.
Don't ask what it does, but what it looks like.
Don't ask what it does, but who it is important to.
Don't ask what it does, but who it hangs out with.
Add something which has no history.
Verb: action vs.\ means vs.\ result.
Pick an animal and make it a central metaphor.
Pick an ancient technology and make it a central metaphor source
(stonework, ceramics, weaving, herding, sailing and knots, etc.).
Meaning comes from familiarity.
What if the arguments or referent switch animacy?
\section{Lexical Functions}
\input lexfun
\section{Sound Changes}
{\small
\input changes
}
\section{Sound Systems}
\subsection{Chipaya} Onset clusters: /s ʃ/ + /p/ + (/x/); /s ʃ/ + /k q/ +
(/x xʷ χ χʷ/); /t/ + /x xʷ χ χʷ/; /tʃ l/ + /x/. Possible codas: /x χ/
+ /p t k q l r/ + (/t/); /xʷ χʷ/ + /k q/ + (/t/); C + /t/.
\subsection{Assiniboine} Onset clusters: /p/ + /t s ʃ tʃ/; /tk/; /k/ +
/t s ʃ tʃ m n/; /s ʃ/ + /p t k tʃ m n/; /x/ + /p t tʃ m n/; /mn/. No
codas.
\subsection{Oksampmin} (C)(C)V(C). Onset choice: any C except /w j/
or labialized stop. Onset cluster: C$_{2}$ may be /j w l x/. Onset
/sk/ also seen. All C may be final except prenasalized stops.
\subsection{Pech} (C)(C)V(C)(C). Onset cluster C$_{2}$ must be /r/.
Final single: all C except /p t kʷ b/.  Final cluster must have /r/ as
first element.
\subsection{Ingessana} (C)V(C)(C).  All C in simple codas except /ʔ
k' dʒ/. Coda cluster word finally only, C$_{1}$ is /r l n m/ and
C$_{2}$ is obstruent.
\subsection{Aguacatenango Tzeltal} (C)(C)V(C)(C).  Onset cluster
C$_{1}$ must be /s ʃ h/, with any consonant following. Coda cluster
limited to /h/ + voiceless stop or affricate.
\subsection{Mamaindê} (C)(C)V(C)(C).  Onset cluster: /kʰ tʰ k h/ +
/w/, or /h ʔ/ + /l n j w s/, or /ʔm/. Stops and nasals only for
simple coda, with stop or nasal + /ʔ/ for coda cluster.
\subsection{Qawasqar} (C)(C)(C)(C)V(C)(C)(C).  Biconsonantal onset:
/f q q' s t/ + /tʃ s t t' j w q/.  Triconsonantal onsets have the same
restrictions for C$_1$ (/qsq qst sqw/).  All simple codas allowed.
Complex coda: /f j l m n p q r t w s/ + /s q/. Triconsonantal codas
include /lqs rqs qsq/.
\subsection{Bardi} (C)V(C)(C).  Single coda all except /p/.  Coda
cluster is /l r ɻ/ + nasal homorganic with following stop.
\subsection{Burushaski} (C)(C)V(C)(C).  Complex onset: /p b pʰ t d
tʰ g/ + /ɾ j/. All C except /w j/ in simple coda. Coda clusters:
voiceless fricative + /k/; or sonorant + /t k ʂ ɕ ts tɕ ʈʂ/.
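
Templates like these lend themselves to quick scripting.  The sketch
below generates syllables for the Pech-style template above; the
consonant and vowel inventories are invented placeholders rather than
the actual Pech inventory, ASCII ``kw'' stands in for /kʷ/, and the
word-final constraints are applied per syllable for simplicity.
\begin{verbatim}
# Toy syllable generator for the Pech-style template above:
# (C)(C)V(C)(C); onset C2 must be /r/; no /p t kw b/ as a simple final
# coda; final clusters start with /r/.  Inventories are placeholders.
import random

CONSONANTS = ["p", "t", "k", "kw", "b", "s", "h", "m", "n", "r", "w", "j"]
VOWELS = ["a", "e", "i", "o", "u"]
BANNED_FINAL_SINGLE = {"p", "t", "kw", "b"}

def syllable():
    onset = []
    if random.random() < 0.8:
        onset.append(random.choice(CONSONANTS))
        if random.random() < 0.2:
            onset.append("r")                    # onset C2 must be /r/
    if random.random() < 0.3:
        coda = ["r", random.choice(CONSONANTS)]  # final cluster: /r/ + C
    elif random.random() < 0.4:
        coda = [random.choice([c for c in CONSONANTS
                               if c not in BANNED_FINAL_SINGLE])]
    else:
        coda = []
    return "".join(onset) + random.choice(VOWELS) + "".join(coda)

print(" ".join(syllable() for _ in range(10)))
\end{verbatim}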
\end{document}
% This LaTeX was auto-generated from an M-file by MATLAB.
% To make changes, update the M-file and republish this document.
\subsection*{dynamics\_dp.m}
\begin{par}
\textbf{Summary:} Implements the ODE for simulating the double-pendulum dynamics, where an input torque can be applied to both links; f1: torque at inner joint, f2: torque at outer joint
\end{par} \vspace{1em}
\begin{verbatim} function dz = dynamics_dp(t, z, f1, f2)\end{verbatim}
\begin{par}
\textbf{Input arguments:}
\end{par} \vspace{1em}
\begin{lstlisting}
% t current time step (called from ODE solver)
% z state [4 x 1]
% f1 (optional): torque f1(t) applied to inner pendulum
% f2 (optional): torque f2(t) applied to outer pendulum
%
% *Output arguments:*
%
% dz if 4 input arguments: state derivative wrt time
% if only 2 input arguments: total mechanical energy
%
% Note: It is assumed that the state variables are of the following order:
% dtheta1: [rad/s] angular velocity of inner pendulum
% dtheta2: [rad/s] angular velocity of outer pendulum
% theta1: [rad] angle of inner pendulum
% theta2: [rad] angle of outer pendulum
%
% A detailed derivation of the dynamics can be found in:
%
% M.P. Deisenroth:
% Efficient Reinforcement Learning Using Gaussian Processes, Appendix C,
% KIT Scientific Publishing, 2010.
%
%
% Copyright (C) 2008-2013 by
% Marc Deisenroth, Andrew McHutchon, Joe Hall, and Carl Edward Rasmussen.
%
% Last modified: 2013-03-08
function dz = dynamics_dp(t, z, f1, f2)
\end{lstlisting}
\subsection*{Code}
\begin{lstlisting}
m1 = 0.5; % [kg] mass of 1st link
m2 = 0.5; % [kg] mass of 2nd link
b1 = 0.0; % [Ns/m] coefficient of friction (1st joint)
b2 = 0.0; % [Ns/m] coefficient of friction (2nd joint)
l1 = 0.5; % [m] length of 1st pendulum
l2 = 0.5; % [m] length of 2nd pendulum
g = 9.82; % [m/s^2] acceleration of gravity
I1 = m1*l1^2/12; % moment of inertia around pendulum midpoint (1st link)
I2 = m2*l2^2/12; % moment of inertia around pendulum midpoint (2nd link)
if nargin == 4 % compute time derivatives
A = [l1^2*(0.25*m1+m2) + I1, 0.5*m2*l1*l2*cos(z(3)-z(4));
0.5*m2*l1*l2*cos(z(3)-z(4)), l2^2*0.25*m2 + I2 ];
b = [g*l1*sin(z(3))*(0.5*m1+m2) - 0.5*m2*l1*l2*z(2)^2*sin(z(3)-z(4)) ...
+ f1(t)-b1*z(1);
0.5*m2*l2*(l1*z(1)^2*sin(z(3)-z(4))+g*sin(z(4))) + f2(t)-b2*z(2)];
x = A\b;
dz = zeros(4,1);
dz(1) = x(1);
dz(2) = x(2);
dz(3) = z(1);
dz(4) = z(2);
else % compute total mechanical energy
dz = m1*l1^2*z(1)^2/8 + I1*z(1)^2/2 + m2/2*(l1^2*z(1)^2 ...
+ l2^2*z(2)^2/4 + l1*l2*z(1)*z(2)*cos(z(3)-z(4))) + I2*z(2)^2/2 ...
+ m1*g*l1*cos(z(3))/2 + m2*g*(l1*cos(z(3))+l2*cos(z(4))/2);
end
\end{lstlisting}
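\begin{par}
For reference, using the state ordering noted above ($z_1 = \dot\theta_1$, $z_2 = \dot\theta_2$, $z_3 = \theta_1$, $z_4 = \theta_2$), the linear system assembled and solved via \texttt{x = A\textbackslash b} in the code above corresponds to
\[
\left(\begin{array}{cc}
l_1^2\left(\frac{1}{4}m_1 + m_2\right) + I_1 & \frac{1}{2} m_2 l_1 l_2 \cos(\theta_1-\theta_2) \\
\frac{1}{2} m_2 l_1 l_2 \cos(\theta_1-\theta_2) & \frac{1}{4} m_2 l_2^2 + I_2
\end{array}\right)
\left(\begin{array}{c} \ddot\theta_1 \\ \ddot\theta_2 \end{array}\right)
=
\left(\begin{array}{c}
g l_1 \sin\theta_1 \left(\frac{1}{2}m_1 + m_2\right)
- \frac{1}{2} m_2 l_1 l_2 \dot\theta_2^2 \sin(\theta_1-\theta_2) + f_1(t) - b_1\dot\theta_1 \\
\frac{1}{2} m_2 l_2 \left( l_1 \dot\theta_1^2 \sin(\theta_1-\theta_2) + g \sin\theta_2 \right) + f_2(t) - b_2\dot\theta_2
\end{array}\right),
\]
so that \texttt{x} holds the angular accelerations $(\ddot\theta_1, \ddot\theta_2)$; see the Deisenroth (2010) reference cited above for the full derivation.
\end{par} \vspace{1em}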
\subsection{Missing transverse energy}
Many interesting physics processes involve neutrinos.
Since they do not interact with the detector material, neutrinos cannot be detected directly;
instead, they manifest themselves as an imbalance in the plane transverse to the beam axis, where momentum conservation is assumed.
This imbalance is known as the missing transverse momentum, denoted $E_{T}^{miss}$,
which is obtained from the negative vector sum of the momenta of all particles detected in a proton-proton collision event.
The $E_{T}^{miss}$ is measured using selected, reconstructed and calibrated hard objects in an event.
Its x- and y-components can be calculated as follows:
\begin{equation} \label{eq:met_xy}
E_{x(y)}^{miss} = E_{x(y)}^{miss, e} + E_{x(y)}^{miss, \gamma} + E_{x(y)}^{miss, \tau} + E_{x(y)}^{miss, jets} + E_{x(y)}^{miss, \mu} + E_{x(y)}^{miss, soft}
\end{equation}
where each object term is given by the negative vectorial sum of the momenta of the respective calibrated objects.
The calorimeter signals are associated with the reconstructed objects in the following order: electrons, photons, hadronically decaying taus, jets, muons.
The soft term is reconstructed from detected objects that do not match any hard object passing the selections, but that are associated with the primary vertex.
Details of the selections applied for each term are summarized in table~\ref{tab:met_sele}.
\begin{table}[!htbp]
\begin{center}
\small
\caption{Overview of the contributions to $E_{T}^{miss}$~\cite{Aaboud2018}.}
\label{tab:met_sele}
\begin{tabular}{p{1cm}p{1.5cm}p{5cm}p{1.5cm}p{5cm}}
\toprule
\multicolumn{5}{c}{Objects contributing to $E_{T}^{miss}$} \\
\hline
Priority & Type & Selections & Variables & Comments \\
\hline
(1) & $e$ & $|\eta|<1.37~or~1.52<|\eta|<2.47 \newline p_{T}>10 GeV$ & $E_{T}^{miss, e}$
& all $e^{\pm}$ passing kinematic selections and medium reconstruction quality \\
\hline
(2) & $\gamma$ & $|\eta|<1.37~or~1.52<|\eta|<2.47 \newline p_{T}>25 GeV$ & $E_{T}^{miss, \gamma}$
& all $\gamma$ passing kinematic selections and tight reconstruction quality, and without overlapping with (1) \\
\hline
(3) & $\tau_{had}$ & $|\eta|<1.37~or~1.52<|\eta|<2.47 \newline p_{T}>20 GeV$ & $E_{T}^{miss, \tau}$
& all $\tau_{had}$ passing kinematic selections and medium reconstruction quality, and without overlapping with (1) and (2) \\
\hline
(4) & $\mu$ & $|\eta|<2.7 \newline p_{T}>10 GeV$ & $E_{T}^{miss, \mu}$
& all $\mu$ passing kinematic selections and medium reconstruction quality \\
\hline
(5) & jet & $|\eta|<4.5 \newline p_{T}>60 GeV \newline
--- or --- \newline
2.4<|\eta|<4.5 \newline 20 GeV<p_{T}<60 GeV \newline
--- or --- \newline
|\eta|<2.4 \newline 20 GeV<p_{T}<60 GeV \newline JVT>0.59$ & $E_{T}^{miss, jet}$
    & all jets passing kinematic selections and reconstruction quality (jet cleaning), and without overlap with (1)–(4) \\
\hline
(6) & ID track & $p_{T}>400 MeV \newline
|d_{0}|<1.5 mm \newline
|z_{0}sin\theta|<1.5 mm \newline
\Delta R(track, e/\gamma cluster)>0.05 \newline
\Delta R(track, \tau_{had})> 0.2$ & $E_{T}^{miss, soft}$
& all ID tracks from the hard-scattering vertex passing kinematic selections and reconstruction quality, and not associated with any particle from (1), (3) or (4), or associated with a jet from (5) \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
Based on $E_{x(y)}^{miss}$, the magnitude of $E_{T}^{miss}$ and the azimuthal angle $\phi^{miss}$ are computed:
\begin{equation}
\begin{split}
E_{T}^{miss} &= \sqrt{ \left(E_{x}^{miss}\right)^{2} + \left(E_{y}^{miss}\right)^{2} } \\
    \phi^{miss} &= \arctan \left(E_{y}^{miss}/E_{x}^{miss}\right)
\end{gr_replace}
\end{split}
\end{equation}
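As a quick standalone numerical illustration of these relations (not part of the ATLAS software; the component values below are arbitrary placeholders), the magnitude and azimuth can be computed from the summed components, with \texttt{arctan2} resolving the quadrant:
\begin{verbatim}
# Standalone sketch: combine per-object x/y components (GeV) into the
# missing-transverse-momentum magnitude and azimuth.  Values are arbitrary.
import numpy as np

def met(ex_terms, ey_terms):
    ex, ey = np.sum(ex_terms), np.sum(ey_terms)
    return np.hypot(ex, ey), np.arctan2(ey, ex)

magnitude, phi = met([12.3, -4.1, 0.7], [-8.9, 2.2, 1.5])
print(magnitude, phi)
\end{verbatim}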
In equation~\ref{eq:met_xy}, each object is required to pass the reconstruction and calibration criteria and the selections mentioned above before being taken as input.
In figure~\ref{fig:met_dis}, the left plot shows the observed $E_{T}^{miss}$ distribution for data and MC in $Z \rightarrow \mu\mu$ events, which have no genuine missing transverse momentum;
the right plot shows the $E_{T}^{miss}$ distribution for $W \rightarrow e\nu$ events, which have genuine (true) missing transverse momentum due to a real neutrino.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.42\textwidth]{figures/Simulation/met_Zmm.png}
\includegraphics[width=0.42\textwidth]{figures/Simulation/met_Wev.png}
\caption{Measured $E_{T}^{miss}$ distribution for $Z \rightarrow \mu\mu$ events (left) and $W \rightarrow e\nu$ events (right). }
\label{fig:met_dis}
\end{figure}
\section{Additional Exercises}\label{sec:MoreFunctionExercises}
\Opensolutionfile{solutions}[ex]
\begin{enumialphparenastyle}
%%%%%%%%%%
\begin{ex}
If $f\left( x\right) =\dfrac{1}{x-1},$ then which
of the following is equal to $f\left( \dfrac{1}{x}\right) ?$
\begin{enumerate}
\item $f(x)$
\item $-f(x)$
\item $xf(x)$
\item $-xf(x)$
\item $\dfrac{f(x)}{x}$
\item $-\dfrac{f(x)}{x}$
\end{enumerate}
\begin{sol}
(d)
\end{sol}
\end{ex}
%%%%%%%%%%
\begin{ex}
If $f(x)=\dfrac{x}{x+3}$, then find
and simplify $\dfrac{f(x)-f(2)}{x-2}$.
\begin{sol}
$3/\left[5(x+3)\right]$
\end{sol}
\end{ex}
%%%%%%%%%%
\begin{ex}
If $f(x)=x^2$, then find and
simplify $\dfrac{f(3+h)-f(3)}{h}$.
\begin{sol}
$6+h$
\end{sol}
\end{ex}
%%%%%%%%%%
\begin{ex}
What is the domain of
\begin{enumerate}
\item $f(x)=\dfrac{\sqrt{x-2}}{x^2-9}$?
\item $g(x)=\dfrac{\sqrt[3]{x-2}}{x^2-9}$?
\end{enumerate}
\begin{sol}
\begin{enumerate}
\item $[2,3)\cup(3,\infty)$
\item $(-\infty,-3)\cup(-3,3)\cup(3,\infty)$
\end{enumerate}
\end{sol}
\end{ex}
%%%%%%%%%%
\begin{ex}
Suppose that $f(x)=x^3$ and $g(x)=x$. What is the domain of $\dfrac{f}{g}$?
\begin{sol}
$\left\{x:x\neq 0\right\}$
\end{sol}
\end{ex}
%%%%%%%%%%
\begin{ex}
Suppose that $f(x)=3x-4$. Find a
function $g$ such that $(g\circ f)(x)=5x+2$.
\begin{sol}
$g(x)=(5x+26)/3$
\end{sol}
\end{ex}
%%%%%%%%%%
\begin{ex}
Which of the following functions is one-to-one?
\begin{enumerate}
\item $f(x)=x^2+4x+3$
\item $g(x)=\vert x\vert+2$
\item $h(x)=\sqrt[3]{x+1}$
\item $F(x)=\cos x$, $-\pi\leq x\leq\pi$
\item $G(x)=e^x+e^{-x}$
\end{enumerate}
\begin{sol}
(c)
\end{sol}
\end{ex}
%%%%%%%%%%
\begin{ex}
What is the inverse of $f(x)=\ln\left(\dfrac{e^x}{e^x-1}\right)$? What is the domain of $f^{-1}$?
\begin{sol}
$f^{-1}(x)=f(x)=\ln\left(\dfrac{e^x}{e^x-1}\right)$ and its domain is $(0,\infty)$.
\end{sol}
\end{ex}
%%%%%%%%%%
\begin{ex}
Solve the following equations.
\begin{enumerate}
\item $e^{2-x}=3$
\item $e^{x^2}=e^{4x-3}$
\item $\ln\left(1+\sqrt{x}\right)=2$
\item $\ln(x^2-3)=\ln 2+\ln x$
\end{enumerate}
\begin{sol}
\begin{enumerate}
\item $2-\ln 3$
\item 1,3
\item $(e^2-1)^2$
\item 3
\end{enumerate}
\end{sol}
\end{ex}
%%%%%%%%%%
\begin{ex}
Find the exact value of $\sin^{-1}\left(-\sqrt{2}/2\right)-\cos^{-1}\left(-\sqrt{2}/2\right)$.
\begin{sol}
$-\pi$
\end{sol}
\end{ex}
%%%%%%%%%%
\begin{ex}
Find $\sin^{-1}\left(\sin(23\pi/5)\right)$.
\begin{sol}
$2\pi/5$
\end{sol}
\end{ex}
%%%%%%%%%%
\begin{ex}
It can be proved that $f(x)=x^3+x+e^{x-1}$ is one-to-one. What is the value of $f^{-1}(3)$?
\begin{sol}
1
\end{sol}
\end{ex}
%%%%%%%%%%
\begin{ex}
Sketch the graph of $f(x)=\left\{
\begin{array}{cc}
-x & \text{if }x\leq 0 \\
\tan ^{-1}x & \text{if }x>0%
\end{array}%
\right. $
\end{ex}
\end{enumialphparenastyle}
\section{Berry curvature}
The Berry curvature in the non-interacting crystal
for left and right circularly polarized
($\vc{ϵ}_±$) optical excitations for a given $\vK$
is $± 2 Ω_{+ ↑}^+ \of{k}$, where
\begin{subequations}
\begin{align}
Ω_{τ {\s}}^n \of{k}
& = \vc{\hat{z}} · \vc{Ω}_{τ {\s}}^n \ofK, \\
& = - n τ
\left[ \frac{1}{2 k} \pderiv{}{k} \fnTheta{n} \right]
\sin{\fnTheta{n}}, \\
& = - n τ
\frac{2 {\left( a t \right)}^2 \left( E_g - \tau s E_{\text{soc}} \right)}
{{\left[{\left( 2 a t k \right)}^2
+ {\left( E_g - \tau s E_{\text{soc}} \right)}^2 \right]}^{3/2}}.
\end{align}
\end{subequations}
The BCS ground state %
\footnote{%
Note that the full ground state
also contains the two lower filled bands,
but those contribute zero net Berry curvature and may be ignored
in this section and the next.}
is
\begin{subequations}
\begin{align}
\Ket{Ω}
& = ∏_{\vK} \csc{β_{\vK}} γ_{\vK ↑} γ_{-\vK ↓} \Ket{0}, \\
& = ∏_{\vK} \left( \cos{β_{\vK}} - \sin{β_{\vK}}
c_{\vK ↑}^† c_{-\vK ↓}^† \right) \Ket{0}.
\end{align}
\end{subequations}
This superconducting state is built up
from the quasiparticle eigenstates,
$\Ket{\vK}
= \csc{β_{\vK}} γ_{\vK ↑} γ_{-\vK ↓} \Ket{0}$,
of the $\vK$-dependent Hamiltonian
$λ_{\vK} \left( γ_{\vK ↑}^† γ_{\vK ↑}
+ γ_{-\vK ↓}^† γ_{-\vK ↓} \right)$.
The $z$-component of the Berry curvature of
the correlated state is zero,
\begin{equation}
\vc{\hat{z}} · i ∇_{\vK} ⨯
\Braket{\vK | ∇_{\vK} | \vK}
= Ω_{+ ↑}^- \of{k} + Ω_{- ↓}^- \of{-k} = 0.
\end{equation}
A single optically excited state in the left valley
for a given $\vK$ is
${c_{+ ↑}^+}^† \ofK c_{+ ↑}^- \Ket{\vK}$,
which has a Berry curvature
$+2 \sin^6 {β_{\vK}} Ω_{+ ↑}^+ \of{k}$.
The corresponding excitation in the right valley
has a Berry curvature of the same magnitude but opposite sign.
\section{Proposal}
The body of your proposal. Below are some bullets for questions you should
answer within your proposal. Figure~\ref{fig:example} contains fake information
in a chart that will, with any luck, contain some data from a measurement
you've taken. Make sure that you adhere to good academic standards and cite all
of the work related to your project. For example, if your project involves
performance isolation on SmartNICs you might want to cite~\cite{fairnic}. See
paper.bib for the format of \LaTeX{} references. Many academic papers have
bibtex entries posted near where you can download their PDFs online.
\begin{itemize}
\item{\todo{What is the question you want to answer}}
\item{\todo{Why is it important}}
\item{\todo{What is the related work}}
\item{\todo{How do you plan to complete it?}}
\item{\todo{What software artifact will you complete?}}
\item{\todo{What software packages do you plan to use?}}
\item{\todo{What is your evaluation plan?}}
\end{itemize}
\begin{figure}[H]
\includegraphics[width=0.45\textwidth]{fig/example.pdf}
\caption{Example of a figure made with matplotlib. Found in fig/example.py. If you have a figure use vector graphics}
\label{fig:example}
%%remove a bit of whitespace under the figure (for space saving)
\vspace*{-13.00mm}
\end{figure}
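As a hint for producing such figures, the following is a minimal sketch of what a script like \texttt{fig/example.py} could look like (the actual file is not reproduced here; the data and labels are placeholders). Saving to PDF keeps the figure in vector format.
\begin{verbatim}
# Hypothetical stand-in for fig/example.py: plot placeholder data and save
# it as vector graphics (PDF) for inclusion in the LaTeX document.
import matplotlib
matplotlib.use("Agg")            # render off-screen, e.g. from a Makefile
import matplotlib.pyplot as plt

xs = list(range(1, 9))
ys = [2 ** x for x in xs]        # placeholder measurements

fig, ax = plt.subplots(figsize=(4, 2.5))
ax.plot(xs, ys, marker="o")
ax.set_xlabel("clients")
ax.set_ylabel("throughput (ops/s)")
fig.tight_layout()
fig.savefig("fig/example.pdf")   # vector output referenced by the .tex file
\end{verbatim}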
\begin{center}
\begin{table}
\begin{tabular}{ |c|c|c| }
\hline
& studied & $\neg$ studied \\ \hline
Midterm & Pass \includegraphics{fig/check.png} & Skimmed by \includegraphics{fig/almost.png} \\ \hline
Final & Pass \includegraphics{fig/check.png} & See you next year \includegraphics {fig/fail.png} \\ \hline
\end{tabular}
\caption{Expected outcomes for various studying habits in 223B}
\label{tab:studyhabits}
\end{table}
\end{center}
\documentclass{article} % For LaTeX2e
\usepackage{iclr2020_conference,times}
% Optional math commands from https://github.com/goodfeli/dlbook_notation.
\input{math_commands.tex}
\usepackage{hyperref}
\usepackage{url}
\usepackage{graphicx}
\title{Gaining interpretability: Deep neural networks as teachers for decision trees}
% Authors must not appear in the submitted version. They should be hidden
% as long as the \iclrfinalcopy macro remains commented out below.
% Non-anonymous submissions will be rejected without review.
\author{Antiquus S.~Hippocampus, Natalia Cerebro \& Amelie P. Amygdale \thanks{ Use footnote for providing further information
about author (webpage, alternative address)---\emph{not} for acknowledging
funding agencies. Funding acknowledgements go at the end of the paper.} \\
Department of Computer Science\\
Cranberry-Lemon University\\
Pittsburgh, PA 15213, USA \\
\texttt{\{hippo,brain,jen\}@cs.cranberry-lemon.edu} \\
\And
Ji Q. Ren \& Yevgeny LeNet \\
Department of Computational Neuroscience \\
University of the Witwatersrand \\
Joburg, South Africa \\
\texttt{\{robot,net\}@wits.ac.za} \\
\AND
Coauthor \\
Affiliation \\
Address \\
\texttt{email}
}
% The \author macro works with any number of authors. There are two commands
% used to separate the names and addresses of multiple authors: \And and \AND.
%
% Using \And between authors leaves it to \LaTeX{} to determine where to break
% the lines. Using \AND forces a linebreak at that point. So, if \LaTeX{}
% puts 3 of 4 authors names on the first line, and the last on the second
% line, try using \AND instead of \And before the third author name.
\newcommand{\fix}{\marginpar{FIX}}
\newcommand{\new}{\marginpar{NEW}}
%\iclrfinalcopy % Uncomment for camera-ready version, but NOT for submission.
\begin{document}
\maketitle
\begin{abstract}
Deep neural networks have outstanding performance and flexibility and can be
regularized to generalize well on pretty much any data set. However, without
additional work, they are black boxes and how they come to conclusions is not
transparent or comprehensible. But exactly this right to explanation is well
established by Europe's GDPR, the United States' credit score, and many other
real world applications. Additionally, interpretability helps debugging and
evaluating the performance of a model. On the opposite side, decision trees can
be much more comprehensible, and can be trained either towards high
understandability (simple tree) or high accuracy (complex tree). Unfortunately,
unlike neural networks they tend to overfit when trained on real world data and
are hard to regularize. In this contribution I will show how training decision
trees on data generated by a neural network gives us a dial to be tuned between
predictive power on one side and interpretability and stability on the other
side.
\end{abstract}
\section{Motivation and Problem Statement}
Decision trees are one type of machine learning model that allows for interpretation, given the tree has low complexity. Unfortunately, decision trees tend to overfit when trained on real world data. Real world data often comes from a combination of distributions that differ largely in their density of samples: some parts may be covered by a lot of samples, others just by a few. So small variations in the training data have a high impact on the tree.
\begin{figure}[h]
\begin{center}
\includegraphics[width=4.0in]{dt-reg-all.png}
\end{center}
\caption{Decision Boundaries by regularized decision tree.}
\label{fig:dt_reg_bad}
\end{figure}
Figure ~\ref{fig:dt_reg_bad} shows the classification results of a decision tree for our example use case, where the data is scattered in the way we just described. From two variables we want to learn the risk class of a driver getting into a car accident, given the age of the driver and the top speed of the car driven. The test data is plotted in the foreground while the decision boundaries are plotted as the background; darker background colors indicate higher probabilities of the prediction. Even though we apply strong regularization, there are parts of the decision tree that are simply too complicated. Thus the decision tree overfits in a way that does not allow for a good interpretation story.
So the problem at hand can be stated like this: can we keep the interpretability of a decision tree on one side and mix it with the generalization power of a deep neural network on the other side?
\section{Approach}
We start with a deep neural network as our black box model and train it to high accuracy and generalization, as shown in figure ~\ref{fig:nn-decision-boundaries}. In the next step, to make our model interpretable, we replace it with a regularized decision tree as a global surrogate model. We use the black box model to generate a new training data set by feeding in an equidistant grid of samples over the domains of our input values and using the predictions as our new target variable. This new data set is then used to train the surrogate so that it approximates the predictions of the black box model.
It is important to note that we do not propose to use the deep neural network for prediction afterwards; we only use it as an intermediate means to come up with a decision tree which can, all by itself, be made interpretable. We thus do not need to pay too much attention to making our deep learning model compatible with a decision tree. We are, however, aware of means for such regularization as proposed by ~\citep{schaaf2019enhancing}, and also of some work done by the author himself.
\begin{figure}[h]
\begin{center}
\includegraphics[width=4.0in]{nn.png}
\end{center}
\caption{Decision Boundaries drawn by deep neural network, 72\% accuracy on test and training data.}
\label{fig:nn-decision-boundaries}
\end{figure}
As it turns out, the only thing that matters for our approach is that the network is properly regularized, not so much how this is achieved. I ended up using self-normalizing neural networks as proposed by ~\citep{klambauer2017selfnormalizing}, in combination with standard L1 regularization on the activation level. Decision boundaries of a model trained that way are shown in figure ~\ref{fig:nn-decision-boundaries}.
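To make the procedure concrete, the following is a minimal sketch of the surrogate-training loop described above. It is not the code used for the experiments in this paper: the self-normalizing network is replaced by a plain scikit-learn MLP, and the data, feature bounds and labels are invented placeholders.
\begin{verbatim}
# Sketch of the surrogate-training loop (placeholders, not the paper's code).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# toy data: driver age [18, 90], top speed [100, 250] km/h, binary risk class
X = np.column_stack([rng.uniform(18, 90, 500), rng.uniform(100, 250, 500)])
y = ((X[:, 0] < 25) & (X[:, 1] > 160)).astype(int)   # placeholder labelling

black_box = MLPClassifier(hidden_layer_sizes=(64, 64), alpha=1e-3,
                          max_iter=2000).fit(X, y)

# equidistant grid over the input domains, labelled by the black box model
ages, speeds = np.meshgrid(np.linspace(18, 90, 80), np.linspace(100, 250, 80))
grid = np.column_stack([ages.ravel(), speeds.ravel()])
grid_labels = black_box.predict(grid)

# shallow surrogate: max_depth is the dial between accuracy and interpretability
surrogate = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50)
surrogate.fit(grid, grid_labels)
print("fidelity to black box:", surrogate.score(grid, grid_labels))
\end{verbatim}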
\section{Results}
In figure ~\ref{fig:surrogate-model} you can see the predictions of the resulting surrogate decision tree that has been tuned for interpretability. Setting the maximum depth of the tree gives us a dial between a model as accurate as the original black box model and one as interpretable as the model you are seeing. So, practically, the findings of our work do not back up the claim of ~\citep{rudin2018stop} that there is no trade-off between accuracy and interpretability; even more, this work is based on the contrary belief. To my understanding this is largely due to the unusual definition of accuracy she uses. I do, however, follow her suggestion that explainable models that do not replace the black box models are useless at best. This work also contradicts the judgement that instability of an interpretable model is a good thing. We will show how our approach makes models more stable, even though there is room for improvement.
\begin{figure}[h]
\begin{center}
\includegraphics[width=4.0in]{shallow-surrogate.png}
\end{center}
\caption{Decision Boundaries by shallow surrogate decision tree, still 64\% accuracy.}
\label{fig:surrogate-model}
\end{figure}
The decision tree that could replicate the black box model to 100\% has a maximum depth of 12, while the one shown, which works for interpretation, has only three levels and significantly less accuracy (64\% vs.\ 72\%). In the surrogate model we observe the same amount of overfitting as in the black box neural network (namely none), as the surrogate decision tree can only overfit on what the black box model presents to it, which is already regularized. This also positively impacts the stability of our decision tree.
Having a model like this also allows for the generation of if clauses similar to what humans would write as business logic. Figure ~\ref{fig:rules} shows one of many ways to turn a shallow decision tree into code. It would be just as easy to create code having nested if statements instead of one combined condition for each possible prediction. All the generated code blocks are equivalent in power to our decision tree and could be used to completely replace it. You could even bring such a code-based ``model'' into production.
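As an illustration of one such export (again only a sketch, not the generator behind figure ~\ref{fig:rules}), scikit-learn can print the shallow surrogate from the previous sketch as nested, human-readable conditions:
\begin{verbatim}
# Turn the shallow surrogate tree into nested if/else style rules.
from sklearn.tree import export_text

rules = export_text(surrogate, feature_names=["age", "top_speed"])
print(rules)
\end{verbatim}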
\begin{figure}[h]
\begin{center}
\includegraphics[width=4.0in]{code.png}
\end{center}
\caption{Business logic rules automatically generated from surrogate model.}
\label{fig:rules}
\end{figure}
\section{Conclusions}
The best known way of interpreting a prediction made by a decision tree is to look at the path chosen, as shown in figure ~\ref{fig:prediction-path}. The information provided for this example is already quite complex, but matches what you see in figure ~\ref{fig:surrogate-model}: young drivers (20) with relatively fast cars (110) tend to have more accidents. This is what you can directly read off the plot. Similar observations can be made for the other six leaf nodes of the tree, e.g.\ people within a certain range of age are unlikely to have a lot of accidents regardless of the max speed of their cars. Meaningful variable names, low complexity and shallowness of the tree are preconditions for interpretation, though.
\begin{figure}[h]
\begin{center}
\includegraphics[width=4.0in]{dtreeviz-prediction-path.png}
\end{center}
\caption{Prediction path featuring all kinds of information for interpretation.}
\label{fig:prediction-path}
\end{figure}
Practical issues arise around exactly this area of regularizing the tree to low complexity. Next to good accuracy, you would also want stability of the tree. Decision trees have high variance, which means the parameters are very sensitive to small changes in the input. Since it is hard or even impossible to make training of neural networks totally deterministic, each training run will generate slightly different input data for the decision tree, potentially leading to drastic changes in its split points and even its overall structure. This is undesirable as it makes interpretation much harder. The best results so far arise from manual experiments restricting both the depth and the minimum leaf size, which yields stable results for this use case, but there is no evidence this will be the case for other use cases as well. Special measures to stabilize trees are proposed in ~\citep{arsov2019stability} and ~\citep{last2002stability}.
\subsubsection*{Acknowledgments}
Thanks to Terence Parr for advising on how to regularize my decision trees and providing me with the \url{https://github.com/parrt/dtreeviz} tool used to plot figure ~\ref{fig:prediction-path}. Thanks to Mikio Braun for helping me to write this short paper.
\bibliography{iclr2020_conference}
\bibliographystyle{iclr2020_conference}
\end{document}
\documentclass[11pt]{article}
\usepackage{times}
\usepackage{fullpage}
\title{Audio feedback for gesture recognition}
\author{John Williamson 9804750w}
\begin{document}
\maketitle
\section{Proposal}\label{proposal}
\subsection{Motivation}\label{motivation}
Gesture recognition offers the opportunity to add controls to a myriad of sensing
devices, particularly on mobile devices with limited control inputs. However, the
resulting interfaces are hard to interpret. Adding auditory feedback to indicate
the progress and success of gesture recognition could improve the usability of
gesture recognition systems.
\subsection{Aims}\label{aims}
This project will develop a software framework for systematically exploring
audio feedback options for gesture recognition. This will be a modular visual
programming environment that allows various gesture recognisers to be configured
and their output processed and fed to audio synthesis devices. The effectiveness
of the project in improving gesture usability will be experimentally validated.
\section{Progress}\label{progress}
\begin{itemize}
\tightlist
\item Language and GUI framework chosen: project will be implemented in Java,
using Swing for GUI development.
\item Software architecture outlined and basic class structure written.
\item Background research conducted on gesture recognition technologies and
feedback mechanisms.
\item Interfacing to inertial sensing unit in Java completed.
\item Initial version of GUI developed, which allows basic signal processing to
  be applied to mouse input, with interchangeable blocks.
\item Basic finite state machine gesture recogniser implemented.
\item Initial MIDI note based output implemented and working. Limited to
pitch mapping.
\end{itemize}
\section{Problems and risks}\label{problems-and-risks}
\subsection{Problems}\label{problems}
The following issues were encountered in the project so far.
\begin{itemize}
\tightlist
\item Inertial sensing unit had unsupported and out-of-date drivers. Some tricky
fixes had to be applied to get inputs.
\item Many different types of gesture recognition; not clear which ones to focus
on.
\item Implemented FSM recogniser is not robust.
\item Significant latency issues in rendering audio via Java with default audio
generation libraries.
\end{itemize}
\subsection{Risks}\label{risks}
\begin{itemize}
\tightlist
\item Many different gesture recognisers to explore. \textbf{Mitigation}: will narrow
down to three possibilities by start of next semester.
\item Unclear how to evaluate success of the project. \textbf{Mitigation}: will do
  background research to investigate how the success of audio feedback for
  gesture recognition has been evaluated in the research literature.
\item Inertial sensing device seems to be unreliable. \textbf{No clear mitigation available at this stage}
\end{itemize}
\section{Plan}\label{plan}
\subsection{Semester 2}
\begin{itemize}
\tightlist
\item
Week 1-2: develop visual programming interface. \textbf{Deliverable:}
complete interface that allows components to be added, removed and
rearranged.
\item
Week 3-5: implement three recognisers and test them with a standard
recognition task. \textbf{Deliverable:} tested recognisers with
initial performance metrics and integration with visual programming
environment.
\item
Week 6: research on how to best evaluate performance of final system.
\textbf{Deliverable:} detailed evaluation plan, with participant
numbers, information sheet and analysis plan.
\item
Week 7-9: final implementation and improvements to audio rendering.
\textbf{Deliverable: polished software ready, passing basic tests,
ready for evaluation stage.}
\item
Week 9: evaluation experiments run. \textbf{Deliverable: quantitative
measures of usability and qualitative measures of effectiveness for at
least ten users.}
\item
Week 8-10: Write up. \textbf{Deliverable: first draft submitted to
supervisor two weeks before final deadline.}
\end{itemize}
\section{Ethics}
This project will involve tests with human users. These will be user studies
using standard hardware, and require no personally identifiable information to be captured.
I have verified that the experiments I plan to do comply with the Ethics Checklist.
\end{document}
\section{PVFS2 internal I/O API terminology}
PVFS2 contains several low level interfaces for performing various types
of I/O. None of these are meant to be accessed by end users. However, they
are pervasive enough in the design that it is helpful to describe some of their
common characteristics in a single piece of documentation.
\subsection{Internal I/O interfaces}
The following is a list of the lowest level APIs that share characteristics
that we will discuss here.
\begin{itemize}
\item BMI (Buffered Message Interface): message based network communications
\item Trove: local file and database access
\item Flow: high level I/O API that ties together lower level components
(such as BMI and Trove) in a single transfer; handles buffering and
datatype processing
\item Dev: user level interaction with kernel device driver
\item NCAC (Network Centric Adaptive Cache): user level buffer cache that
works on top of Trove (\emph{currently unused})
\item Request scheduler: handles concurrency and scheduling at the file
system request level
\end{itemize}
\subsection{Job interface}
The Job interface is a single API that binds together all of the above
components. This provides a single point for testing for completion of
any nonblocking operations that have been submitted to a lower level API.
It also handles most of the thread management where applicable.
\subsection{Posting and testing}
All of the APIs listed in this document are nonblocking. The model
used in all cases is to first \texttt{post} a desired operation, then
\texttt{test} until the operation has completed, and finally check the
resulting error code to determine if the operation was successful.
Every \texttt{post} results in the creation of a unique ID that is used
as an input to the \texttt{test} call. This is the mechanism by which
particular posts are matched with the correct test.
It is also possible for certain operations to complete immediately at
post time, therefore eliminating the need to test later if it is not
required. This condition is indicated by the return code of the post call.
A return code of 0 indicates that the post was successful, but that the caller
should test for completion. A return code of 1 indicates that the call
was immediately successful, and that no test is needed. Errors are indicated
by either a negative return code, or else indicated by an output argument
that is specific to that API.
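
To illustrate these semantics, the following is a toy model of the post/test
convention (purely illustrative Python; the real interfaces are C functions
with API-specific prefixes):
\begin{verbatim}
# Toy model of the post/test convention; illustration only, not the real API.
import itertools

_next_id = itertools.count(1)
_pending = {}   # id -> error code reported when the operation completes

def post(operation, completes_immediately=False):
    """Post a nonblocking operation; returns (return_code, id)."""
    if completes_immediately:
        return 1, 0        # immediate success: no test needed
    op_id = next(_next_id)
    _pending[op_id] = 0    # 0 = success, reported later by test()
    return 0, op_id        # post accepted; caller must test for completion

def test(op_id):
    """Test a posted operation; returns (completed, error_code)."""
    if op_id in _pending:  # in this toy model every pending op completes now
        return True, _pending.pop(op_id)
    return False, 0

rc, op_id = post("send-message")
if rc == 0:                # not immediately complete: test until done
    completed, err = test(op_id)
    print(completed, err)
\end{verbatim}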
\subsection{Test variations}
In a parallel file system, it is not uncommon for a client or server to be
carrying out many operations at once. We can improve efficiency in this case
by providing mechanisms for testing for completion of more than one operation
in a single function call. Each API will support the following variants of
the test function (where PREFIX depends on the API):
\begin{itemize}
\item PREFIX\_test(): This is the most simple version of the test function.
It checks for completion of an individual operation based on the ID given
by the caller.
\item PREFIX\_testsome(): This is an expansion of the above call. The
difference is that it takes an array of IDs and a count as input, and
provides an array of status values and a count as output. It checks for
completion of any non-zero ID in the array. The output count indicates
how many of the operations in question completed, which may range from 0
to the input count.
\item PREFIX\_testcontext(): This function is similar to testsome(). However,
it does not take an array of IDs as input. Instead, it tests for completion
of \emph{any} operations that have previously been posted, regardless of the
ID. A count argument limits how many results may be returned to the caller.
A context (discussed in the following subsection) can be used to limit the scope
of IDs that may be accessed through this function.
\end{itemize}
\subsection{Contexts}
Before posting any operations to a particular interface, the caller must
first open a \texttt{context} for that interface. This is a mechanism
by which an interface can differentiate between two different callers (i.e., if
operations are being posted by more than one thread or more than one
higher level component). This context is then used as an input argument
to every subsequent post and test call. In particular, it is very useful
for the testcontext() functions, to ensure that it does not return information
about operations that were posted by a different caller.
\subsection{User pointers}
\texttt{User pointers} are void* values that are passed into an interface
at post time and returned to the caller at completion time through one of the
test functions. These pointers are never stored or transmitted over the
network; they are intended for local use by the interface caller. They
may be used for any purpose. For example, it may
be set to point at a data structure that tracks the state of the system.
When the pointer is returned at completion time, the caller can then map
back to this data structure immediately without searching because it has
a direct pointer.
\subsection{Time outs and max idle time}
The job interface allows the caller to specify a time out with all test
functions. This determines how long the test function is allowed to block
before returning if no operations of interest have completed.
The lower level APIs follow different semantics. Rather than a time out, they
allow the caller to specify a \texttt{max idle time}. The max idle time governs
how long the API is allowed to sleep if it is idle when the test call is made.
It is under no obligation to actually consume the full idle time. It is more
like a hint to control whether the function is a busy poll, or if it should
sleep when there is no work to do.
\chapter{Wrap-up}
This chapter connects the dots that are the many different observations and conclusions made in previous chapters, describing both what has been done and what could still be done and is meant to close the arc between my \acs{CESM} experiments (\chapref{chap:cesm-runs}) and theoretical studies (\chapref{chap:analysis}).
\secref{sec:outro-summary} gives a summary of the main results of my thesis, while \secref{sec:outro-outlook} discusses some open questions and how they could be tackled in future work.
\section{What Have We Learned?}
\label{sec:outro-summary}
The following sections describe the observations I have made when lowering the western boundary layer viscosity at the equator in \acs{CESM} experiments with low (\grid{x3}) and intermediate (\grid{x1}) resolution (\chapref{chap:cesm-runs}), along with my \q{best guess} on how to interpret these results, taking into account the theoretical findings from \chapref{chap:analysis}.
A particular focus lies on the generation of numerical noise (\secref{sec:outro-noise}), cross-equatorial flow in the Atlantic (\secref{sec:outro-amoc}), and the \ac{ITF} (\secref{sec:outro-itf}).
\subsection{Numerical Noise}
\label{sec:outro-noise}
The main result of \secref{sec:cesm-noise} is that, while an intact boundary layer with low grid scale Reynolds numbers is critical for numerical noise suppression in the equatorial regions\sidenote[-2]{Which I hypothesized to be the case because information always travels \emph{westward} in the ocean, hence noise created in the interior basin can not be dissipated without a western boundary layer.}, the smoothed velocity field is still a decent representation of the actual dynamics.
Noise is mostly created on the grid scale and preferentially in the zonal direction, which is also observed in my shallow-water model. This is probably due to the fact that 1) meridional grid spacings are smaller than those in the zonal direction close to the equator, and 2) velocity shears are dominantly in the zonal direction (due to the sharp western boundary layer).
The total amount of generated noise under viscosity reduction seems to depend critically on the given grid spacing --- much less noise is observed in \grid{x1} or the high-resolution shallow-water experiments, even when grid scale Reynolds numbers are high.
\subsection{The \acl{AMOC}}
\label{sec:outro-amoc}
The total transport carried by the \ac{AMOC} only depends weakly on viscosity. When only accounting for contours of the vertical \ac{AMOC} stream function that penetrate far into the opposite hemisphere, the transport changes by a maximum of \SI{-1.5}{\sv} (for \run{x3_nomunk}, where the Munk layer viscosity has been globally reduced to the much lower background value), while integrating directly over the velocity at the equator yields a maximum change of about \SI{-1.2}{\sv} (again, for \run{x3_nomunk}). This is less than anticipated intuitively, considering that, without friction, the required transformation of \ac{PV} to enable cross-equatorial flow is impossible. However, it is shown that the efficiency of \ac{PV} transformation inside a Munk boundary layer is independent of viscosity to a leading order (\secref{sec:killworth}).
On the other hand, I \emph{did} observe a clear correlation between viscosity and cross-equatorial transport, where lower viscosities always led to a decreased transport, and vice versa. In order to understand this behavior, I have tried to produce a similar response in an equatorial shallow-water model that is forced by a constant buoyancy forcing and uniform \q{upwelling} across the domain. After running several simulations in different scenarios (\secref{sec:equatorial-shallow-water}), I was not able to reproduce the observed response in this simple model\sidenote[0]{Instead, transports were either independent of viscosity, or increased for smaller viscosities (if the boundary layer was under-resolved).}. I thus conclude that the response of the ocean in \ac{CESM} depends on effects that were not included in my model (some possible candidates are discussed in \secref{sec:outro-moc-viscosity}).
Looking at the flow field around the equator in the Atlantic in \ac{CESM}, I noticed deep, zonal equatorial jets (similar to those described in \cite{greatbatch-jets}) that emerge along the equator and extend all the way to the eastern boundary, where it seems like some boundary layer interactions took place. In \ac{CESM} experiments with extreme viscosity modifications, a roughly equal amount of water crosses the equator in the eastern and western parts of the basin. \emph{The structure and efficiency of the western boundary thus indeed seem to be affected heavily by strong viscosity modifications.}
\subsection{The \acl{ITF}}
\label{sec:outro-itf}
The total transport crossing the \ac{ITF} shows roughly the same dependence on viscosity as the \ac{AMOC}, with maximum changes of about \SI{-2}{\sv} in \grid{x3}, and \SI{-0.5}{\sv} in \grid{x1}.
On top of these small modifications of the total transport (which are expected to be small considering Munk layer theory as in \secref{sec:killworth}), dramatic changes of the composition of the \ac{ITF} are observed in the \grid{x3} experiments\sidenote[-2]{\grid{x1} experiments seem to be largely unaffected because of a very different geometry around the Indonesian islands.}. The \ac{BSF} suggests that, for extreme viscosity modifications, about half of the total \ac{ITF} originates in the southern hemisphere, versus a purely northern source when using default viscosity parameters. This is reinforced by the salinity profile of this region, which shows a considerably saltier \ac{ITF} for low viscosities, which also hints towards a southern source.
In \secref{sec:island-rule}, I have reviewed Godfrey's Island Rule. Since the Island Rule predicts an \ac{ITF} that only depends on the atmospheric forcing and geometry, the observed dependency of the transport on viscosity seems like a contradiction. An extension of the Island Rule with friction (from \cite{wajsowicz}) predicts a \emph{weaker} transport for \emph{higher} viscosities, \ie the opposite of what is observed in \ac{CESM}. It thus seems like the Island Rule, while it delivers a good first approximation, is indeed not strictly valid at the equator.
Ultimately, the processes that determine the source of the \ac{ITF} remain unclear. Some additional possibilities are discussed in \secref{sec:outro-itf-origin}.
\section{Open Questions}
\label{sec:outro-outlook}
The following sections are meant to point out some loose ends in this study, where future work would be necessary to get a clearer understanding, and present some further ideas that fell outside the scope of this thesis.
In particular, there are two major open questions, one concerning the total strength of the overturning (\secref{sec:outro-moc-viscosity}), and one concerning the \ac{ITF} (\secref{sec:outro-itf-origin}).
\subsection{How is the overturning modified when changing viscosity?}
\label{sec:outro-moc-viscosity}
Since my shallow-water model could not reproduce \ac{CESM}'s dependence of the overturning on viscosity, it is still unclear \emph{how} the total cross-equatorial transport can be modified by changing viscosity, even though theory predicts that it should be constant to a leading order (\secref{sec:killworth}). Apparently, this response is created by an effect that was not modeled in the shallow-water experiments. Some likely candidates are:
%
\begin{enum}
\item boundary topography, which modifies the balance between friction and Coriolis force if the boundary layer is not strictly meridional, and bottom topography (see \eg \cite{nofbc2} and \cite{swaters});
\item interactions between multiple layers, as discussed \eg in \cite{nofbc1}, or the effect of a vanishing layer height (\cf \eqref{eq:munk-friction-balance});
\item additional forcing by the wind; and
\item a feedback between the forcing of the model and the amount of equator-crossing flow.
\end{enum}
%
For starters, an enhanced model could include a realistic geometry of the North Atlantic, and explicitly model both branches of the \ac{AMOC} (northward, shallow and southward, deep). This model should be much closer to reality, and hopefully give a realistic dependence on viscosity\sidenote[-3]{One possibility would be to use a verified \acs{GCM} such as the MIT General Circulation Model (MITgcm), which supports idealized processes and geometries, instead of a self-developed model.}.
Another clue towards the processes in a low-viscosity Atlantic is given by the equatorial jets that are observed in some experiments. These jets are studied and modeled in \cite{kitamura}, and a similar study could reveal their connection to cross-equatorial transport and viscosity.
\subsection{What controls the origin of the \acl{ITF}?}
\label{sec:outro-itf-origin}
The origin of the \ac{ITF} is in fact heavily discussed in the literature\sidenote{\Eg in \cite{nofind}; \cite{godfreyind}; \cite{lukas}; \cite{gordon}.}, and observations suggest a predominantly northern source. However, none of these studies seems to include the dependence of the \ac{ITF} composition on viscosity. Possible explanations for the observed, sensitive dependence of the \ac{ITF} on viscosity are:
%
\begin{items}
\item In \cite{nofind}, \citeauthorfull{nofind}%
\sidefigure{The \ac{ITF} model presented in \cite{nofind}.}[fig:nof-setup]{%
\includegraphics[width=\marginparwidth]{figures/outro/nof}%
}[1]%
considers a nonlinear, frictionless model of the \ac{ITF} with an idealized geometry (\figref{fig:nof-setup}). He shows that the determining factors for the origin of the \ac{ITF} in this model are (1) the geometry of the basin and (2) the undisturbed layer heights at certain points of the current system in the \ac{ITF}. While not taking viscosity into account explicitly, these layer heights are certainly influenced by the Munk layer viscosity in this region, which could yield an explanation for the observed behavior.
\item Since the most direct effect of a reduced viscosity is a lower boundary layer width, it is certainly possible that the ensuing geometrical alteration of the flow path has a direct influence on the solution in the \ac{ITF} region. Even a slightly displaced path may have a large influence eventually, \eg through a spatially dependent wind stress, or the \q{collision} with other currents.
\item In fact, since the efficiency of the cross-equatorial transport in a western boundary layer \emph{was} found to be influenced dramatically by viscosity in the Atlantic, it seems like the same may be the case here --- flow that fails to dump excess vorticity follows the coast of New Guinea without crossing the equator further than a few Rossby radii of deformation, and then curves back into the southern hemisphere. The flow does not need to form a jet as in the Atlantic, since it never actually crosses the equator in \grid{x3}\sidenote{In contrast to the \grid{x1}-grid, where additional islands require the flow to reach further north in order to curve into the \ac{ITF}, which is thus suppressed.}. However, this interpretation still needs careful testing, \eg in another idealized model with a geometry that is similar to that of the Indonesian islands and New Guinea.
\end{items}
%
%\parabreak
%
\emph{It thus seems that frictional control of cross-equatorial flow is indeed possible in low-resolution models, and that the chosen model viscosity does have a decisive influence on the flow in the equatorial regions} (though not so much on its total magnitude).
"alphanum_fraction": 0.7932874355,
"avg_line_length": 154.9333333333,
"ext": "tex",
"hexsha": "68aaba9071cdd83703c49f240a6d8655bab81b07",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2020-05-29T16:00:45.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-09-29T18:31:35.000Z",
"max_forks_repo_head_hexsha": "cc06f14d54f21692ae87a1a4858979841cf531c7",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "dionhaefner/dionsthesis",
"max_forks_repo_path": "msc-thesis/src/chapters/outro.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cc06f14d54f21692ae87a1a4858979841cf531c7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "dionhaefner/dionsthesis",
"max_issues_repo_path": "msc-thesis/src/chapters/outro.tex",
"max_line_length": 909,
"max_stars_count": 8,
"max_stars_repo_head_hexsha": "cc06f14d54f21692ae87a1a4858979841cf531c7",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "dionhaefner/dionsthesis",
"max_stars_repo_path": "msc-thesis/src/chapters/outro.tex",
"max_stars_repo_stars_event_max_datetime": "2020-11-25T09:32:03.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-01-31T00:26:06.000Z",
"num_tokens": 2704,
"size": 11620
} |
\documentclass[10pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[table,xcdraw]{xcolor}
\usepackage[landscape,margin=0.5cm]{geometry}
\usepackage[english]{babel}
\usepackage{subcaption}
\usepackage{tikz-network}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{adjustbox}
\usepackage{booktabs}
\DeclareMathOperator*{\argmax}{argmax} % thin space, limits underneath in displays
\DeclareMathOperator*{\argmin}{argmin} % thin space, limits underneath in displays
% colour themes to come. KnitR?
%-------------------------
\title{\huge{Network Science Cheatsheet}}
\author{\small{Remy Cazabet}}
\date{}
\input{cheatsheet-template.tex}
%--------------------------------------------------------------------------------
\begin{document}
\small
\begin{multicols}{3}
%\maketitle
\thispagestyle{empty}
\scriptsize
%\tableofcontents
\input{Title}
% \input{intro}
% \input{matrices}
% \input{centrality}
% \input{ScaleFree}
%\input{Homophily}
% \input{Random}
% \input{communities}
% \begin{textbox}{Edge Centralities}
% O_{ij} & \textbf{Link Clustering coefficient}
% \end{textbox}
% \begin{textbox}{Networks: Matrix notation}
% Adjacency matrix $A$
% \begin{itemize}
% \item $A_{ij}=0$ if link (i,j) does not exist. Otherwise, 1 (unweighted networks), or a number (weight)
% \item $A$ is symmetric if the network is undirected
% \noindent\rule{4cm}{0.1pt}
% \tiny{More on matrices in \ref{matrix}}
% \end{itemize}
% \end{textbox}
% \begin{textbox}{Random Graphs models}
% \textbf{Erdos Renyi}
% \textbf{Watts-Strogatz}
% \textbf{Configuration Model}
% \textbf{Stochastic Block Model}
% \end{textbox}
% \begin{textbox}{Community Detection}
% \textbf{Modularity}
% Divisive
% Agglomerative
% Louvain
% Overlapping
% CPM
% \textbf{Random Walk Compression - Infomap}
% \textbf{SBM inference}
% \end{textbox}
% \begin{textbox}{Key concepts I}
% \red{Bag-of-words (BOW)}
% \green{\emph{type} vs. \emph{token}}
% \emph{To be or not to be.} = 6 \emph{tokens}, 4 \emph{types}.
% \red{\emph{type}} descriptive criterion\footcite[cf.][12]{stroustrup}
% \red{\emph{token}} unit of analysis\footcites[cf.][12]{stroustrup,attentionMerchants}
% \bigskip
% \underline{Key topics}
% \begin{itemize}
% \item One
% \item Two
% \item Three
% \end{itemize}
% \end{textbox}
% %--------------------------------------------------------------
% \section{Tools}
% \begin{textbox}{Lorem Ipsum}
% test \sep test \sep test \sep test
% \bigskip
% \green{Visualize}
% \begin{enumerate}
% \item \textbf{present} your data
% \item \textbf{analyze} the information
% \item \textbf{explore} the findings
% \end{enumerate}
% \end{textbox}
% \begin{textbox}{Voyant Tools}
% \href{https://voyant-tools.org/}{Voyant Tools} \sep \href{https://voyant-tools.org/}{Voyant Tools}
% \includegraphics[width=\textwidth]{voyant.png}
% \end{textbox}
% \begin{textbox}{Project Gutenberg Texts}
% \begin{tabular}{r|p{0.8\textwidth}}\scriptsize
% 84 & \href{http://www.gutenberg.org/ebooks/84}{Frankenstein; Or, The Modern Prometheus by Mary Wollstonecraft Shelley} \\
% 6087 & \href{https://www.gutenberg.org/ebooks/6087}{The Vampyre; a Tale by John William Polidori} \\
% 696 & \href{https://www.gutenberg.org/ebooks/696}{The Castle of Otranto by Horace Walpole} \\
% 42 & \href{https://www.gutenberg.org/ebooks/42}{The Strange Case of Dr. Jekyll and Mr. Hyde by Robert Louis Stevenson}
% \end{tabular}
% \end{textbox}
% \begin{textbox}{Key concepts}
% \red{Bag-of-words (BOW)}
% \green{\emph{Zipf's Law}}
% \mycommand{_&§!$§/()$}{code}
% \mycommand{shutdown -h now}{to shutdown}
% \end{textbox}
% %--------------------------------------------------------------
% \section{Programming}
% \subsection{Code boxes}
% % first argument: minted programming language name, for example.. css, c, cpp, etc.
% \begin{codebox}{r}{Code box using R}
% # Install
% install.packages("tm") # for text mining
% # Load
% library("tm")
% # text <- readLines(file.choose())
% # Read the text file from internet
% filePath <- "http://www.internet.com/text.txt"
% text <- readLines(filePath)
% \end{codebox}
% % first argument: minted programming language name, for example.. css, c, cpp, etc.
% \begin{codebox}{cpp}{Code box using C++}
% for (auto element : vector)
% {
% sum += element;
% }
% \end{codebox}
% \newpage
% \section{Smaller Subboxes}
% \subsection{Subboxes}
% %---------------------------------------------
% \begin{multibox}{2} % number of boxes in a row
% \begin{subbox}{subbox}{test}
% \tiny
% \bg{alert}{white}{test}
% \bggreen{test}\\
% \end{subbox}
% \begin{subbox}{customcolor}{test}
% \scriptsize
% \bgupper{w3schools}{black}{test}\\
% \tiny
% tiny font
% \mycommand{$\lxor$}{XOR}
% \mycommand{$\lor$}{OR}
% \end{subbox}
% \end{multibox}
% %---------------------------------------------
% \begin{multibox}{2} % number of boxes in a row
% \begin{subbox}{subbox}{ Infos}
% \bg{alert}{white}{bla}\\
% \bgupper{w3schools}{black}{XX}\\
% \end{subbox}
% \begin{subbox}{customcolor}{ Infos}
% \end{subbox}
% \end{multibox}
% \begin{textbox}{Subboxes}
% %---------------------------------------------
% \begin{multibox}{2} % number of boxes in a row
% \begin{subbox}{subbox}{test}
% \red{test}
% \bggreen{test}
% \href{https://latex-ninja.com}{Link}
% \end{subbox}
% \begin{subbox}{customcolor}{test}
% \scriptsize
% \bggreen{test}
% \tiny
% super small font
% \mycommand{$\land$}{AND $\land$}
% \mycommand{$\lor$}{OR $\lor$}
% \end{subbox}
% \end{multibox}
% %---------------------------------------------
% \begin{multibox}{2} % number of boxes in a row
% \begin{subbox}{subbox}{ Info}
% \red{bla}\\
% \bggreen{XX}\\
% \end{subbox}
% \begin{subbox}{customcolor}{Info}
% \end{subbox}
% \end{multibox}
% \end{textbox}
% %--------------------------------------------------------------
% \begin{textbox}{bla}
% \bgupper{w3schools}{black}{test}\\
% \bg{alert}{white}{bla}
% \begin{enumerate}
% \item \emph{bla}: bla.
% \item bla.
% \end{enumerate}
% %---------------------------------------------
% \begin{multibox}{3} % number of boxes in a row
% \begin{subbox}{subbox}{test}
% info
% \end{subbox}
% \begin{subbox}{customcolor}{test}
% $p \to q$
% \mycommand{CTRL+C}{copy}
% \end{subbox}
% \begin{subbox}{subbox}{ Infos}
% bla \\
% more text
% \end{subbox}
% \end{multibox}
% %---------------------------------------------
% \begin{multibox}{3} % number of boxes in a row
% \begin{subbox}{subbox}{ Infos}
% info
% \end{subbox}
% \begin{subbox}{customcolor}{ Infos}
% $p \to q$
% \end{subbox}
% \begin{subbox}{subbox}{ Infos}
% bla \\
% more text
% \end{subbox}
% \end{multibox}
% \end{textbox}
% %---------------------------------------------
\AtNextBibliography{\footnotesize}
\printbibliography
\end{multicols}
\end{document}
\chapter{Discussions \& Implications} \label{ch:discussion}
This thesis has presented a set of techniques that enable the construction of
interactive, tangible devices without requiring any domain experience (e.g.,
assembly of printed parts and circuits, or calibration of machine learning
models). To conclude, I discuss how each of the proposed techniques moves
the state-of-the-art closer to the \papf ideal, and highlight some paths for
future work.
\section{Discussion of Projects}
The main takeaway from this thesis is this: in order to realize the
fabrication revolution, and enable designers to construct on-demand tangible
devices, the fabrication of these devices should be as streamlined as
possible. Ideally, digital fabrication equipment should abstract most, if
not all, of these complications and enable designers to fabricate tangible,
interactive devices that are immediately usable after fabrication. This
ideal scenario is what I call \emph{\papf}: a fabrication paradigm where
tangible devices are printed, not assembled.
In this thesis, I have presented four \pap techniques: \at, \bh, \al, and
\mp; each aimed to address specific barriers in sensing, processing, and
providing output to user's interactions. While this thesis is written for
the general scientific community, the design and schematics of each of
these projects have been open-sourced upon the publication of their
respective manuscripts, enabling not only researchers but also non-expert
designers to experiment with the proposed techniques.
I envision that, mirroring the process of creating graphical user
interfaces, a \papf toolkit will abstract the interactive mechanisms from
their respective techniques. For example, designers can specify ``a touch
location here'', ``a linear actuator there'', and the system would create
the desired tangible device, with the specified interaction modes.
Continuing, once the tangible device has been designed successfully, the
toolkit must be able to aid with the required software deployment. For
tangible devices that can sense user's interactions, I envision this to be
approached in two manners. For non-expert users, the \papf toolkit can
allow some form of programming by example, where designers specify locations
in the model and link them to specific actions; similarly to how \bh's
design environment operates. More expert users, on the other hand, can
obtain pre-trained machine learning models to embed into their applications.
In addition to contributing to the digital fabrication community, the work I
present in this thesis builds on the larger trend in computer science,
specifically Human-Computer Interaction, where computing devices are imbued
with domain expertise to support users. Although previous explorations of
this concept in a digital fabrication setting have been aimed at preserving
user creativity and agency~\cite{Zoran:2013}, the work discussed above aims
to abstract the complexity of creating tangible devices into the fabrication
process using 3D printers. By removing the complexities of constructing
tangible, interactive devices and instead abstracting them into
fabrication pipelines, we can enable non-expert designers and hobbyists to
create on-demand tangible devices without requiring extensive domain
expertise, bringing the fabrication revolution ideal closer to reality.
A potential development that might change the performance of the projects
discussed above is the advent of new fabrication machines, or the increased
accessibility of higher-end, current ones. While some of the barriers that
previous endeavors face may be overcome by improvements in digital
fabrication equipment (e.g., complex support material
removal~\cite{Laput:2015}, or manual fabrication procedures~\cite{He:2017}),
other efforts present intrinsic barriers in the way they enable the
construction of tangible devices. For example, in Sauron~\cite{Savage:2013},
Savage \etal use video cameras placed in the interior of the object to track
user's interactions. Although fabrication methods will improve over time,
to create Sauron-enabled objects, designers will be required to embed
cameras, mirrors, and reflectors inside their fabricated objects, with all
the complications this process entails. Another effort that presents similar
barriers is \texttt{./trilaterate}~\cite{Schmitz:2019}, by Schmitz \etal
While their fabricated objects are printed as a single structure using
multi-material printers, the chosen way to identify user's interactions,
trilateration, will require post-print calibration per user and per object.
This stands in contrast to the techniques included in this thesis, where the
advent of new fabrication technologies will only improve their performance.
For example, higher resolution printing can enable \at to embed more
touch-sensitive locations throughout the model, or breakthroughs in flexible
materials can allow \mp to construct a wider range of shapes without the use
of internal support.
Last, in this thesis I have presented a series of projects that enable \papf
using air-powered objects, via custom internal structures. As mentioned
previously, the use of air as a driving mechanism for the proposed
techniques was motivated by two main factors. First, the ample body of
research on fluid behavior allows for the design and construction of custom interior
structures to leverage various physical phenomena for constructing tangible
devices that can sense, process, and provide output to user's interactions.
Second, thanks to the broad understanding of fluid behavior, we can employ
these concepts to construct pre-trained machine learning models, or use
their respective mathematical equations, to identify user's interactions.
While these principles of fluid behavior are applicable to most fluids
(e.g., water, oil, other gases), the use of air permits for a ``cleaner''
operation. Compressed air sources are widely available, and used air can be
safely discharged into the atmosphere. This does not mean, however, that
only air-powered approaches can be \pap. As mentioned previously, any
technique that can allow the construction of tangible, interactive devices
without the need for complex post-print activities or prohibitive
fabrication pipelines is a \papf technique. For example, recent work from
Schmitz and colleagues presents a very interesting way to tackle assembly:
using magnets and conductive filament~\cite{Schmitz:2021}. The objects
fabricated using this technique are constructed in two parts: an ``Oh Snap!
board'' which houses a microcontroller with the logic, and the printed object
itself.
\section{Directions for Future Work}
Throughout this thesis, I have laid out possible directions in which future
researchers can improve each project. Moving beyond per-project improvements,
this section aims to discuss how \papf can be improved by future
explorations.
\subsection{Support for multi-material printing}
Throughout the projects that make up this thesis, I have made a conscious
decision to construct tangible, interactive devices using single-material
fabrication pipelines in order to increase the potential audience of each
technique. Future efforts researching new \pap techniques for constructing
tangible devices can explore the use of multi-material fabrication
pipelines. Significant work has already been carried out by
Schmitz~\cite{Schmitz:2019a} using a combination of conductive and
nonconductive materials to construct tangible devices. While these efforts
are not truly \pap, as they require post-print activities including
carefully pouring liquids into the object, machine learning
calibrations, or attaching multiple points of contact, they highlight the
promising possibilities of using multi-material fabrication pipelines to
construct interactive objects.
In addition to using conductive and nonconductive material to construct
tangible devices, future work can explore the use of metamaterials to
create new composite materials, tailored for specific functions. The use
of multi-material fabrication pipelines has the potential for enriching
the vocabulary of interaction modalities, while still remaining \pap. For
example, the use of auxetic metamaterials can enable a variety of haptic
feedback when interacting with the printed device.
\subsection{Support for other interaction modalities}
The work discussed above focuses on demonstrating the possibilities of
constructing tangible devices with little designer intervention
post-print, rather than exploring a variety of interaction modalities. An
interesting direction for future work is to investigate \papf techniques
to construct tangible devices that expand on the interaction modalities
presented in this thesis. I speculate future research can approach this in
three main ways: techniques that provide richer touch input, techniques
that provide rich output, and techniques for constructing tangibles that
can sense their environments.
In order to enable richer touch interaction on tangible devices, future
research on \papf techniques should explore two avenues: deformation sensing,
and multi-touch. All the sensing techniques presented in this thesis are
tailored for interaction on hard tangible devices. An interesting
direction for future work to explore is the implementation of \pap
techniques that enable reliable deformation sensing in soft, and flexible
objects. Regardless of the technique chosen to implement this interaction
modality (e.g., pneumatic, acoustic, or electric sensing), I believe the
main challenge in implementing this interaction modality in a \pap fashion
will be the removal of per-object calibration of machine learning models.
Because each object's geometry varies, interacting with different
objects should yield different results. Inspired by \at's success in
reusing machine learning models for tangible devices of varying
geometries, future work can explore the use of custom, stable interior
structures for deformation (e.g., press, squeeze) sensing.
Continuing, I have presented two techniques for the construction of
devices that are able to respond to touch interactions: \at and \al. While
successful, these efforts can only reliably sense single touch locations.
I have laid out guidelines on how to pneumatically sense more than one
location in Chapter \ref{ch:airtouch}, but this approach limits the number
of interaction locations if a multi-touch setup is desired. Future
explorations can investigate other \pap-friendly techniques for
constructing tangible devices capable of sensing touch interaction in
various locations.
Another direction future work on \papf should explore is providing richer
output to user's interactions. With \al I explored ways to physicalize the
output of logical operations. These outputs can also be expanded by future
work. For example, \al presents a vibrotactile motor to provide haptic
feedback to its users; future work can explore other haptic interaction
modalities as output, like temperature. Similarly, \al presents acoustic
feedback to its users using whistles. Future work can explore \pap ways to
construct speakers, similar to those presented in \cite{Ishiguro:2014},
and provide richer acoustic output. Continuing, while with \al and \mp I
explored different ways to provide visual output to users, future work can
enrich this output modality by investigating novel ways to embed
displays into fabricated objects. Similarly to how \cite{VanDerHeyden:1969}
employs techniques closely related to \al to develop a seven-segment
display without requiring electronics or moving parts, future work can
investigate novel, \pap-friendly techniques to incorporate rich visual
output in tangible devices.
Last, an interesting avenue to explore is the construction of tangible
devices that not only respond to user's interactions, but to changes in
the environment they sit in. The inclusion of ``smart'' materials can aid
in the creation of such devices. The use of materials whose properties
change depending on their environment can enable the easy construction of
``tangible sensors''. These sensors can respond to environmental
temperature, humidity, or sound, and act accordingly.
\subsection{Support for other fabrication equipment}
In this thesis I have presented four \pap ways to construct tangible,
interactive devices using 3D printers. This does not mean, however, that
\pap devices can only be fabricated using 3D printers. The main benefit of
using 3D printers instead of other fabrication devices is their capability
for constructing three-dimensional shapes without the need for user
intervention, whereas other fabrication devices require manual
assembly after fabrication. A possible research avenue to construct
tangible devices using laser cutters and CNC routers is to not only use
these devices for the fabrication of the object, but for the assembly
tasks as well.
Although early explorations of this concept have yielded promising
results~\cite{Katakura:2019, Nisser:2021}, they remain far from the \papf
ideal: \cite{Katakura:2019} is not able to construct tangible devices, but
rather objects that can be mechanically actuated, and \cite{Nisser:2021}
requires intricate calibration of the laser cutter mid-print to solder
conductive traces and combine previously cut parts. Future endeavors
expanding this concept can explore the use of other materials, e.g.,
conductive filaments, to simplify the connections between electronic components.
"alphanum_fraction": 0.7691507799,
"avg_line_length": 68.0424528302,
"ext": "tex",
"hexsha": "713f20d33bc3fa4febef619b97c45e7b163ac256",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "fce13ba8e72d14cde1bc79e700400af7b8a3984d",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "ctejada10/print-and-play",
"max_forks_repo_path": "2.main/chapters/iii.conclusions/discussions.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "fce13ba8e72d14cde1bc79e700400af7b8a3984d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "ctejada10/print-and-play",
"max_issues_repo_path": "2.main/chapters/iii.conclusions/discussions.tex",
"max_line_length": 83,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "fce13ba8e72d14cde1bc79e700400af7b8a3984d",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "ctejada10/print-and-play",
"max_stars_repo_path": "2.main/chapters/iii.conclusions/discussions.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2874,
"size": 14425
} |
\chapter{General Introduction}
\label{sec:intro}
In his famous ``On the origin of species [...]'' from 1859, Charles Darwin noted
the outstanding variety of individuals within a species. Especially domesticated
species, as he describes, show remarkable differences---imagine a bloodhound,
terrier, spaniel, and bull-dog next to one another. But also in nature such
differences occur. These differences can be passed on to offspring over
generations and accumulate to such an extent that a distinct species is formed.
It was this observation as well as a great amount of preceding research by him
and others that led Darwin to the formulation of his famous evolutionary theory.
More than 150 years later, we have understood many of the molecular mechanisms
behind this principle. We know that genetic information is encoded in DNA and
that it is subject to mutations, which are partly inheritable. These changes in
DNA create the variability of genetic material that is so essential to
evolution. Today, we are able to study the genetic differences,
which we also call genetic \emph{variants},
between human individuals or between individuals of other species and can investigate
their consequences. Many fundamental associations between the presence of
genetic variants and certain traits have been found since, for example why
red-green color blindness affects more males than females \citep{Nathans1986}.
Other traits, such as the expected height of a person, are less easily conceived
as they appear to be affected by the combination of many genetic variants
\citep{Wood2014,Marouli2017}. We also gained a much better understanding of the
causal role of genetic variation in many diseases, including Mendelian disorders
and cancer \citep{Stankiewicz2010}.
In order to study the consequences of genetic variants we require
methods to accurately detect them. Nowadays, this has been largely enabled with
the advance of DNA sequencing technologies. Based on these methods, the average
genome of many species, including humans, could be charted for the first time
\citep{Lander2001,Venter2001}, and functional units such as genes and regulatory
elements were identified in great detail \citep{Dunham2012}. Then, population
genetics studies gained further insight into the variability within the human
population \citep{Auton2015,Sudmant2015}.
At last, the identified variants could be analyzed for a potential effect on
phenotypes or disease using, for example, genome-wide association studies
\citep{Ott2015,MacArthur2017}.
Variant detection has become a standard procedure in genomics research. However,
not all types of genetic variants have been studied equally comprehensively.
Especially larger genomic rearrangements, so-called \emph{\aclp{sv}} (which are
introduced in \cref{sec:sv}), remained
difficult to ascertain using available assays, which is why their functional
role is still not as well explored as for smaller types of genetic variation.
However, \aclp{sv} are known to have extraordinary impact on our genetic
material---after all, they constitute the majority of genetic
differences within the human population \citep{Sudmant2015}.
The limitations of the current standard techniques had long been noticed and
encouraged further method development \citep{Onishi-Seebacher2011}. Over the
last couple of years, new technologies for DNA sequencing as well as new methods
based on DNA sequencing have become available. These techniques hold promise to
improve our ability to study such rearrangements. In this work, I utilize such
emerging technologies to characterize \aclp{sv} beyond what was possible previously.
In the rest of this chapter, I explain the relevant terminology around genetic
variation (\cref{sec:variation}), introduce structural variants and the biology
behind them (\cref{sec:sv}), give an overview of previous and emerging DNA
sequencing technologies (\cref{sec:sequencing}), and outline the methodology
of SV detection (\cref{sec:sv_detection}). I continue to elaborate the current
limitations of \acl{sv} detection (\cref{sec:limitations}). Then, I formulate
the goals of my research and provide an overview of the studies covered in this
dissertation (\cref{sec:motivation}).
\section{Terminology around genetic variation}
\label{sec:variation}
Alterations in the DNA of an organism, or of a cell, can arise spontaneously
via chemical or biological processes. If not repaired faithfully, they leave
traces in the genetic material that we call genetic \emph{variants}. The process
of altering the DNA sequence is called \emph{mutation}, but the terms mutation
and variant are often used interchangeably.
We already know the nucleotide sequence of the genomes of many species,
including humans. This sequence, which is supposed to represent an average
of the individuals, is stored in a so-called \emph{reference} genome (also
\emph{reference assembly}, or simply \emph{reference}).
Thus, variants are usually defined as a difference to this reference. Some
variants only affect a single nucleotide, e.g. by changing a cytosine into a
thymine, and they are called \emph{\acfp{snv}}. Other variants delete or insert
nucleotides, or fully rearrange their order, as will be introduced later.
Most \explain{metazoa}{Animals. Not all animals are diploid,
though. Bees and ants, for example, produce fully haploid males, yet still
diploid females. Mammals, on the other hand, are believed to always be
diploid \citep{Svartman2005}} are diploid, so they carry two non-identical
copies of each chromosome in their cells: one of maternal and one of paternal origin. Any new
variant (within a cell or organism) typically arises only on one of the two
homologous chromosomes. Such a variant is said to be \emph{heterozygous}. The
genomic locus harboring this variant exists in two different versions, which we
call \emph{alleles}. Specifically it harbors a \emph{reference allele}, which is
in agreement with the reference assembly, and
an \emph{alternative allele} that describes the non-reference variant form. A
site with exactly two alleles seen across a population is termed
\emph{bi-allelic}, but there are also sites that contain multiple different
alleles and are thus \emph{multi-allelic}. Chromosomes can contain many variants
and, depending on the detection strategies, it is often unclear which variants
reside on the same homologue. If this is known, we refer to the ensemble of
variants present along a single homologue as a haploid genotype, or short
\emph{haplotype}.
Alleles that are present in the germ line, i.e. in cells
carrying inheritable genetic material, can be propagated to offspring. This way,
an individual can end up carrying the same variant of a genomic locus on both
homologues, which makes it a \emph{homozygous} variant. Variants that are seen
more often in a population, specifically in at least 1\% of the homologues,
are also called \emph{polymorphisms}.
When a variant is present in an individual, but not in their parents, we call it
a \emph{de novo} variant (or \emph{de novo} mutation). The mutation could
either have occurred within the
zygote during the first few cell divisions, or already beforehand in the
parental germ line. The latter can sometimes be inferred if other offspring of
those parents carry the same variant. Variants that occur not in the germ line
but in cells of the non-inheritable part of an organism are \emph{somatic}
variants. When such an affected cell undergoes repeated divisions, a somatic variant
can be present in a relevant fraction of the total cells of an individual---this
is called \emph{somatic mosaicism}. Depending on how early in development the
variant occurred (and on its effect on the fitness of the cell), it can be present in all cells of the
same lineage \citep{Youssoufian2002}. Cancer is a pathological form of somatic
mosaicism, which arises through the clonal expansion of cells \citep{Campbell2007}.
\section{Structural Variation}
\label{sec:sv}
\input{svs}
\section{DNA sequencing technologies}
\label{sec:sequencing}
\input{seq_tech}
\section{Structural variant detection}
\label{sec:sv_detection}
\input{sv_detection}
\section{Research goals and thesis overview}
\label{sec:motivation}
Studies of structural variation are still fundamentally hampered by the
shortcomings of current \sv detection methods. This especially affects studies
of balanced or complex rearrangements, which had often remained cryptic in
previous studies. In this dissertation, I aim at uncovering and further examining
\acp{sv} that had been difficult to ascertain beforehand.
In order to do so, I
utilize emerging sequencing technologies and protocols---namely the
techniques introduced in \crefrange{sec:long_read_seq}{sec:strandseq}.
My work is structured into three separate research projects, in which I explore
one of the techniques, each. Below, the specific aims of each project are
outlined. A common theme throughout these projects is the design of new
bioinformatics methods for data analysis and visualization. I believe that the
full potential of novel technology can only be unlocked by the simultaneous
development of new computational approaches. I hence strive to advance the
state of the art of current methodology by building---and making
available---software tools that future research benefits from.
In \cref{sec:complex_invs}, I present my work for the final phase of the
1000 Genomes Project, in which my role was to validate inversion predictions in
the human population. These inversion predictions had initially remained
inconclusive in \pcr-based validation experiments. We then utilized targeted
long-read sequencing on both \pacbio and \ont MinION platforms to examine the
respective loci more closely. My immediate goal in this project was to verify
inversion calls based on long-read information. Moreover, the subsequent goal
was to further characterize these loci and investigate why previous validations
had failed. Afterwards, an optional goal was to investigate the validated
inversions deeper to understand the biological mechanisms that created
them.
In \cref{sec:balancer}, I focused on the functional aspects of structural variation.
Together with my collaborators, we set out to study the consequences of
chromosomal rearrangements on gene expression and on the three-dimensional
organization of chromosomes---the latter had gained much attention in recent
years and is introduced in depth in \cref{sec:balancer_background}. We utilized \hic
to study chromatin conformation of highly rearranged chromosomes of
\textit{Drosophila melanogaster}. A crucial initial step of the project was to
map the rearrangements and other variation present in these chromosomes. My
first goal thus was to characterize \acp{sv}, including large pericentric
inversions, which had been known only at cytogenetic resolution beforehand.
As a part of this goal, I aimed at exploring the usability of
\hic data for validation and characterization purposes. The second and more
general goal of this study was to find out whether, and how these rearrangements
affect gene expression. My specific milestone towards this goal was to robustly
determine differential gene expression between the rearranged and a
non-rearranged (wild type) chromosome. At last, this information should be
integrated with findings on chromatin structure (provided by collaborators) to
be able to conclude on the impact of rearrangements and on the mechanisms that
are involved.
In \cref{sec:mosaicatcher}, my collaborators and I explored the potential of
Strand-seq to discover \acp{sv} in single cells. While this had been previously
done for inversions, we aimed at extending this principle---for the very first
time---to as many \sv classes as possible. Unlike previous efforts using Strand-seq,
the particular goal was to enable this detection also for variants that are present in only
a low fraction of cells.
Here, I present the current state of this ongoing endeavor. The first
goal in this study was to develop a general concept of how different \sv classes
can be revealed based on the signals available in single-cell Strand-seq data.
Then, the next goal was to prove feasibility of this concept by implementing the
approach and applying it to available as well as newly sequenced cell line data
harboring structural variation. A third important aspect was to test the
limitations of our approach in a controlled environment.
At last, \cref{sec:conclusions} concludes on the achievements resulting from my
work. This includes the specific questions of the three projects as well as a
more general view on how emerging technologies improved \sv characterization in
the three different scenarios. I then briefly review recent developments by others in
the community that occurred around the same time as my research. This should give a
broader impression on how emerging technologies are about to change studies on
structural variation and places my work within the broader context of recent
advances in the field.
\section{Catalog}
\label{sec:catalog}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\textwidth]{images/Catalog_Overview.eps}
\label{catalog_overview}
\caption{Catalog - Class Overview}
\end{figure}
The \code{Catalog} interface was designed to manage \code{Product}s and \code{ProductFeature}s in the system.
It provides functionality to add, remove, and find \code{Product}s.
\code{Product}s can be searched by their name or category.
\code{Product}s and \code{ProductFeature}s are more closely described in Section \ref{sec:product}.
The \code{PersistentCatalog} is an implementation of the \code{Catalog} interface.
Additionally, \code{PersistentCatalog} provides an \code{update()} method to update and merge existing \code{PersistentProduct}s into the database.\footnote{\code{update()} is not part of the \code{Catalog} interface; misusing \code{add()} for updates seems inconsistent.}
The \code{find()} methods query the database using \code{CriteriaQuery}s, which are processed by JPA; results are returned as \code{Iterable}s. The reason for this is to make the returned objects immutable without making it difficult to iterate over the results.
%If you want to sell your products at your shop, you must presenting them very good to stimulate customers interests. A helpful solution to giving this people a clearly overview could be
%a catalog. With the \code{PersistentCatalog}-class, you can implemented such a thing. This class is an implementation of the interface \code{Catalog}.\\
%After you created a new catalog, you can add \code{ProductTypes} to it or remove them from it. Also you can checked, whether the catalog contains a defined \code{ProductType}. Aside from you
%this class provides methods to find productTypes by their name or their category at your catalog.\\
%In case you have added a \code{ProductType} to the catalog and now you are changing this type, you do not need remove the old type from this catalog and add the new to it. Only you need to
%used the method \code{update(PersistentProductType productType)} from the \code{PersistentCatalog}-class and this \code{productType} will be updated and persist to the \code{PersistentCatalog}
%and the Database.
\documentclass[preprint,12pt]{elsarticle}
\usepackage{geometry}
%\geometry{letterpaper} % ... or a4paper or a5paper or ...
\usepackage{graphicx}
\usepackage{xspace}
\usepackage{amssymb}
\usepackage{epstopdf}
%% Use the option review to obtain double line spacing
%% \documentclass[preprint,review,12pt]{elsarticle}
%% Use the options 1p,twocolumn; 3p; 3p,twocolumn; 5p; or 5p,twocolumn
%% for a journal layout:
%% \documentclass[final,1p,times]{elsarticle}
%% \documentclass[final,1p,times,twocolumn]{elsarticle}
%% \documentclass[final,3p,times]{elsarticle}
%% \documentclass[final,3p,times,twocolumn]{elsarticle}
%% \documentclass[final,5p,times]{elsarticle}
%% \documentclass[final,5p,times,twocolumn]{elsarticle}
%% if you use PostScript figures in your article
%% use the graphics package for simple commands
%% \usepackage{graphics}
%% or use the graphicx package for more complicated commands
%% \usepackage{graphicx}
%% or use the epsfig package if you prefer to use the old commands
%% \usepackage{epsfig}
%% The amssymb package provides various useful mathematical symbols
\usepackage{amssymb}
%% The amsthm package provides extended theorem environments
%% \usepackage{amsthm}
%% The lineno packages adds line numbers. Start line numbering with
%% \begin{linenumbers}, end it with \end{linenumbers}. Or switch it on
%% for the whole article with \linenumbers after \end{frontmatter}.
%% \usepackage{lineno}
%% natbib.sty is loaded by default. However, natbib options can be
%% provided with \biboptions{...} command. Following options are
%% valid:
%% round - round parentheses are used (default)
%% square - square brackets are used [option]
%% curly - curly braces are used {option}
%% angle - angle brackets are used <option>
%% semicolon - multiple citations separated by semi-colon
%% colon - same as semicolon, an earlier confusion
%% comma - separated by comma
%% numbers- selects numerical citations
%% super - numerical citations as superscripts
%% sort - sorts multiple citations according to order in ref. list
%% sort&compress - like sort, but also compresses numerical citations
%% compress - compresses without sorting
%%
%% \biboptions{comma,round}
% \biboptions{}
\journal{Journal of Systems and Software}
%%% OUR MACROS %%%
\newcommand{\COMMENT}[1]{ }
\usepackage[usenames,dvipsnames]{xcolor}
\usepackage{amsmath}
\usepackage[thmmarks,amsmath]{ntheorem}
\newcommand{\openbox}{\leavevmode
\hbox to.77778em{%
\hfil\vrule
\vbox to.675em{\hrule width.6em\vfil\hrule}%
\vrule\hfil}}
\theoremstyle{plain}
\theoremheaderfont{\normalfont\bfseries}
\theorembodyfont{\normalfont}
\theoremseparator{}
\theoremindent0cm
\theoremnumbering{arabic}
\newtheorem{algo}{Algorithm}
\theoremstyle{plain}
%\theoremheaderfont{\normalfont\itshape}
\theoremheaderfont{\normalfont\bfseries}
\theorembodyfont{\normalfont}
\theoremseparator{}
\theoremindent0cm
\theoremnumbering{arabic}
\theoremsymbol{\ensuremath{\openbox}}
\newtheorem{example}{Example}
\theoremstyle{plain}
\theoremheaderfont{\normalfont\bfseries}
\theorembodyfont{\normalfont}
\theoremseparator{.}
\theoremindent0cm
\theoremnumbering{arabic}
\theoremsymbol{\ensuremath{\Box}}
\newtheorem{defi}{Definition}
\theoremstyle{plain}
\theoremsymbol{\ensuremath{\Box}}
\theoremseparator{.}
\newtheorem{prop}{Property}
\def\FlyingPig{\textsl{FlyingPig}}
\newcounter{numberInTrivlist}
\newenvironment{numtrivlist}{\begin{list}{\rm \arabic{numberInTrivlist})}
{\usecounter{numberInTrivlist}
\setlength{\leftmargin}{0pt}
\setlength{\rightmargin}{0pt}
\setlength{\itemindent}{12pt}
\setlength{\listparindent}{0pt}}}
{\end{list}}
\newenvironment{itemizedTrivlist}{\begin{list}{\rm ~\hspace{2mm} $\bullet$\ }
{\setlength{\leftmargin}{0pt}
\setlength{\rightmargin}{0pt}
\setlength{\itemindent}{12pt}
\setlength{\listparindent}{0pt}}}
{\end{list}}
\usepackage{listings}
\lstset{numbers=right, numbersep=5pt, numberstyle=\tiny, stepnumber=1,escapechar=\!,columns=fullflexible,
morekeywords={procedure,let,for,do,if,then,else,add,choose,end,while,
true,false,rise,exception,extend,resume,to,return,function}}
\newcommand{\pisodm}[0]{$\pi$SOD-M\xspace}
\begin{document}
\begin{frontmatter}
%% Title, authors and addresses
%% use the tnoteref command within \title for footnotes;
%% use the tnotetext command for the associated footnote;
%% use the fnref command within \author or \address for footnotes;
%% use the fntext command for the associated footnote;
%% use the corref command within \author for corresponding author footnotes;
%% use the cortext command for the associated footnote;
%% use the ead command for the email address,
%% and the form \ead[url] for the home page:
%%
%% \title{Title\tnoteref{label1}}
%% \tnotetext[label1]{}
%% \author{Name\corref{cor1}\fnref{label2}}
%% \ead{email address}
%% \ead[url]{home page}
%% \fntext[label2]{}
%% \cortext[cor1]{}
%% \address{Address\fnref{label3}}
%% \fntext[label3]{}
\title{Designing Service-Oriented Applications in
the Presence of Non-Functional Properties: a mapping study}
%% use optional labels to link authors explicitly to addresses:
%% \author[label1,label2]{<author name>}
%% \address[label1]{<address>}
%% \address[label2]{<address>}
\author[inst3]{Umberto Souza da Costa}
\author[inst3]{Martin A. Musicante}
\author[inst5]{Pl\'acido A. Souza Neto}
\author[inst6,inst4]{Genoveva Vargas-Solar}
\address[inst3]{Universidade Federal do Rio Grande do Norte -- Natal, Brazil}
%\address[inst4]{Universidad de las Am\'ericas-Puebla, LAFMIA -- Cholula, Mexico}
\address[inst5]{Instituto Federal de Educa\c{c}\~{a}o, Ci\^{e}ncia e Tecnologia do Rio Grande do Norte -- Natal, Brazil}
\address[inst6]{CNRS, LIG-LAFMIA, Saint Martin d'H\`eres, France}
\begin{abstract}
\end{abstract}
\begin{keyword}
%% keywords here, in the form: keyword \sep keyword
MDA \sep Non-Functional Requirements \sep Service-based software process.
%% MSC codes here, in the form: \MSC code \sep code
%% or \MSC[2008] code \sep code (2000 is the default)
\end{keyword}
\end{frontmatter}
%%
%% Start line numbering here if you want
%%
% \linenumbers
%% main text
%*********************************************************************************************************
\section{Introduction}
\label{sec:intro}
\input{intro}
\section{Background}
\label{sec:background}
%Existing concepts in NFR and similar methodologies
\input{background}
%..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--
\section{Mapping process}\label{sec:mappingprocess}
\input{mappingprocess}
%..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--
%..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--
\section{Outcomes}
\label{sec:outcomes}
\input{outcomes}
%..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--..--
%*********************************************************************************************************
\section{Concluding remarks}\label{sec:conclusions}
\input{conclusions}
%% References with bibTeX database:
\bibliographystyle{plain}
\bibliography{biblio}
\end{document}
%%
%% End of file `elsarticle-template-1a-num.tex'.
| {
"alphanum_fraction": 0.6258732535,
"avg_line_length": 32.1927710843,
"ext": "tex",
"hexsha": "cbc71571f2472c60ba3a577ce33bf47e97d52707",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "06e4ff673766438a1c6c61731036454e2087e240",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "mmusicante/PlacidoConfArtigos",
"max_forks_repo_path": "SystematicMapping/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "06e4ff673766438a1c6c61731036454e2087e240",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "mmusicante/PlacidoConfArtigos",
"max_issues_repo_path": "SystematicMapping/main.tex",
"max_line_length": 153,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "06e4ff673766438a1c6c61731036454e2087e240",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mmusicante/PlacidoConfArtigos",
"max_stars_repo_path": "SystematicMapping/main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2309,
"size": 8016
} |
\documentclass[a4paper]{article}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{float}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{subfigure}
\usepackage{multirow}
\usepackage{natbib}
\usepackage[colorinlistoftodos]{todonotes}
\title{\bfseries{COMP90016 -- Assignment 2 }}
\author{Haonan Li}
\date{\today}
\begin{document}
\maketitle
\section{Introduction}
\label{sec:introduction}
This assignment consists of three tasks.
In the first task, we discuss the application of HMMs to CNV detection in theory. The second task is an implementation of a CNV detection algorithm using circular binary segmentation. In the final task, we discuss the biology of a particular cancer sample.
\section{Task 1}
The lectures introduced an HMM designed to detect CNVs in a haploid organism. It consisted of three states: one for the normal copy number (CP1) and two alternate copy number states (CP0, CP2).\\
\noindent\textbf{Question 1} How would you adapt this approach to a diploid organism, such as human (with states representing different integer copy numbers)? \\
\noindent\textbf{Answer}: For a diploid organism, the normal copy number of a chromosome is 2. To account for copy number changes, four alternate copy number states should be used (CP0, CP1, CP3, CP4). All of them may exist in real data; however, the state set should be adjusted according to the dataset. If some bins have 5 copies, we might add CP5 to the state set. More details are given in the CNV methods document\footnote{http://www.completegenomics.com/documents/CNV+Methods.pdf}. In principle, the model parameters can all be estimated by expectation-maximization (EM) via the Baum-Welch algorithm. The basic assumptions for building an HMM for a diploid organism are: a) coverage depends on the number of copies of a given reference segment in the genome of interest; b) copy number is assumed to be integer-valued; c) coverage is assumed to be linearly related to copy number; d) for the state corresponding to ploidy = 0, a separate variance estimate is used to allow for the impact of mis-mappings and non-unique mappings.
\\
\noindent\textbf{Question 2}: Explain the trade-off between the sensitivity of such a HMM and the computational complexity to solve the Viterbi algorithm for it.\\
\noindent\textbf{Answer}: The complexity of solving the Viterbi algorithm for such an HMM is $O(nm^2)$, where $n$ is the length of the sequence (the number of bins) and $m$ is the number of states. A larger $m$ lets the model represent more possible copy numbers, which makes the result more sensitive, but decoding also takes more time: because the cost grows quadratically in the number of states, increasing $m$ from 4 to 6 more than doubles the running time. So if there are almost no bins with 5 or 6 copies, it is not worth adding CP5 and CP6 states.\\
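To make the quadratic dependence on the number of states concrete, a standard log-space Viterbi recursion is sketched below (an illustration only, not part of the submitted code; the function name and array layout are my own choices). The loop over the $n$ bins contains an $m \times m$ comparison, which is where the $O(nm^2)$ cost comes from.
\begin{verbatim}
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    # log_init : (m,)   log initial state probabilities
    # log_trans: (m, m) log transition probabilities
    # log_emit : (n, m) log emission probability of each bin's read
    #                   depth under each copy-number state
    n, m = log_emit.shape
    score = np.empty((n, m))
    back = np.zeros((n, m), dtype=int)
    score[0] = log_init + log_emit[0]
    for t in range(1, n):                         # n bins
        cand = score[t - 1][:, None] + log_trans  # m x m candidates
        back[t] = np.argmax(cand, axis=0)
        score[t] = cand[back[t], np.arange(m)] + log_emit[t]
    path = np.empty(n, dtype=int)                 # backtrack
    path[-1] = int(np.argmax(score[-1]))
    for t in range(n - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path
\end{verbatim}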
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{fg1}
\caption{Density plot of ratio of read depth in 5kb bins over the average read depth in a diploid organism.}
\label{fig1}
\end{figure}
\noindent\textbf{Question 3}: Consider the data shown in Figure \ref{fig1}. The plot shows read depth of bins normalized by the average in a diploid organism. How would you use this data to parametrize the emission probabilities of your HMM? Explain what about Figure 1 is general and what is specific to the data that this plot was derived from. How does this affect the HMM in terms of its application to different data sets? Also, how could this data be utilized to derive the transition probabilities for a CNV detection HMM?\\
\noindent\textbf{Answer}: To parametrize the emission probabilities we need, for each bin, its copy number label and its normalized read depth; the emission table can then be estimated by counting. For a given copy number state CPX (X = 0, 1, 2, ...), count the number of bins labelled CPX, say $N$ ($N\geq0$). For each read depth value D (or depth range), count the number of those bins whose depth falls in D, say $M$. The emission probability of depth D given state CPX is then $M/N$.
In Figure 1, the overall shape of the plot is a Gaussian distribution; this is the general feature of such read-depth density plots. What is specific to the data behind this plot is the position of the peaks: the main peak is centered at a ratio of 1, which corresponds to the normal diploid copy number, and the additional peak near a ratio of $0.5$ shows that this particular sample has lost copies in part of the genome.
If we parametrize an HMM from this sample, the model will assign relatively high probability to the loss states (CN $<$ 2). Applying it unchanged to other data sets would therefore bias the predictions towards copy number losses.
Transition probabilities can be derived by similar counting: for each pair of adjacent bins, record the pair of copy number states, and for every start state compute the proportion of transitions into each end state.\\
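A minimal counting sketch of this estimation, assuming the training bins come as a list of copy-number labels and a parallel list of read depths, with depths discretized by some binning function (all names here are illustrative, not the marked code):
\begin{verbatim}
from collections import Counter, defaultdict

def estimate_hmm_params(labels, depths, depth_bin):
    # labels   : copy-number state of each bin, e.g. 0, 1, 2, 3, 4
    # depths   : normalized read depth of each bin
    # depth_bin: function mapping a depth to a discrete depth range
    emit_counts = defaultdict(Counter)   # state -> depth range -> count
    trans_counts = defaultdict(Counter)  # state -> next state -> count
    for state, depth in zip(labels, depths):
        emit_counts[state][depth_bin(depth)] += 1
    for prev, nxt in zip(labels, labels[1:]):
        trans_counts[prev][nxt] += 1
    emit = {s: {d: c / sum(cnt.values()) for d, c in cnt.items()}
            for s, cnt in emit_counts.items()}
    trans = {s: {t: c / sum(cnt.values()) for t, c in cnt.items()}
             for s, cnt in trans_counts.items()}
    return emit, trans
\end{verbatim}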
\noindent\textbf{Question 4}: Describe why an HMM, such as the one discussed in this task, is not very useful for non-clonal data (such as shown in Figure 2 below). Could this shortcoming be alleviated by introducing non-integer CN states?\\
\noindent\textbf{Answer}: As mentioned above, the HMM assumes that the copy number state is always an integer, so it effectively considers only two levels of clonality, 0\% or 100\%. For non-clonal data (such as shown in Figure \ref{fig}), the clonality can take any value between 0 and 100\%, and the HMM cannot recover this value.
The shortcoming cannot really be alleviated by introducing non-integer CN states. First, it is unclear which non-integer values should be used as states, and how many of them would be needed. Moreover, introducing non-integer CN states inevitably increases the number of states, so the complexity of the model explodes and decoding becomes very time consuming.
\section{Task 2}
The implementation of circular binary segmentation proceeds as follows.
First, input. Read the data from file and build two lists: one stores the tuples of start and end positions, the other stores the read depths of all bins.
Second, initialization. We use numpy to speed up the computation, so the list of read depths is converted to a numpy array. We then take the median of the first third of the input read depths as the `normal' depth $m$ and transform the read depth $b$ of each bin into the log-ratio $\log_2(b/m)$. Extreme values are removed: any log2-ratio larger than 2 or smaller than $-5$ is set to 0. After this, a segment-mark array is initialized with zeros for later recursive use.
Third, the recursive cbs function, which is the most important part of the algorithm. The function has five input parameters: the log-ratio array $X$, the segment-mark array $I$, the start position $a$, the end position $b$, and the z-threshold $t$. Each call first cuts $X$ from $a$ to $b$, which is the current interval to analyze; call it $X'$.
Then compute the cumulative sums of the bins, $S_i = X_1+X_2+\dots+X_i$.
After computing the cumulative sums, we build a 2D array $Z$ and compute its entries by:
\begin{equation*}
Z_{ij}=(\frac{1}{j-i}+\frac{1}{n-j+i})^{-\frac{1}{2}}\times (\frac{S_j-S_i}{j-i}-\frac{S_n-S_j+S_i}{n-j+i})
\end{equation*}
Then find the maximum of $Z$; if it is larger than the threshold $t$, its coordinates, say $x, y$, are new segment points. We mark them in the segment-mark array $I$ (setting the corresponding positions to 1) and make three recursive cbs calls on the new intervals $(a,x)$, $(x,y)$ and $(y,b)$.
Finally, the segment-mark array has been marked with 1 at the chosen breakpoints. For each interval between two adjacent marks, compute the average log-ratio and output it.
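A condensed sketch of this recursive function is given below (an illustration of the approach rather than the exact submitted code; variable names are schematic, and the absolute value of $Z$ is used so that both gains and losses can trigger a split):
\begin{verbatim}
import numpy as np

def cbs(X, I, a, b, t):
    # X: log2-ratio array for all bins, I: segment-mark array,
    # [a, b): current interval, t: z-threshold
    n = b - a
    if n < 3:
        return
    S = np.cumsum(X[a:b])            # S[k] = X_1 + ... + X_{k+1}
    best_z, best_i, best_j = 0.0, None, None
    for i in range(1, n):            # candidate breakpoints i < j
        for j in range(i + 1, n):
            w = (1.0 / (j - i) + 1.0 / (n - j + i)) ** -0.5
            z = w * ((S[j - 1] - S[i - 1]) / (j - i)
                     - (S[n - 1] - S[j - 1] + S[i - 1]) / (n - j + i))
            if abs(z) > abs(best_z):
                best_z, best_i, best_j = z, i, j
    if abs(best_z) > t:              # accept the split and recurse
        I[a + best_i] = I[a + best_j] = 1
        cbs(X, I, a, a + best_i, t)
        cbs(X, I, a + best_i, a + best_j, t)
        cbs(X, I, a + best_j, b, t)
\end{verbatim}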
As for the theoretical complexity of the algorithm: for an input with $n$ bins we compute an $n\times n$ matrix, split the interval into segments of roughly half the size, and recurse, so the complexity is:
\begin{equation*}
Complexity = O(n^2 + 2^{1}(\frac{n}{2^{1}})^2 + 2^{2}(\frac{n}{2^{2}})^2+...+2^{t}(\frac{n}{2^{t}})^2)=O(n^2)
\end{equation*}
The complexity therefore scales mainly with the number of input bins. The read depths also influence the running time, because they determine whether a piece needs to be segmented further and hence how deep the recursion goes.
\section{Task 3}
\noindent 1. Figure \ref{fig} visualizes the reported CNVs. The results for Z-thresholds of 10, 1 and 8 are shown in Figures \ref{10}, \ref{1} and \ref{8}, respectively. In these figures, the x axis is the index of the bins; multiplying the x coordinate by 5000 gives the corresponding position in the reference. The y axis is the log-ratio of each bin.
\begin{figure}[h]
\centering
\subfigure[Z=10]{
\label{10}
\includegraphics[width=0.8\textwidth]{z10.png}
}
\subfigure[Z=1]{
\label{1}
\includegraphics[width=0.45\textwidth]{z1.png}
}
\subfigure[Z=8]{
\label{8}
\includegraphics[width=0.45\textwidth]{z8.png}
}
\caption{Visualization of the CNVs (the red parts are the reported CNVs).}
\label{fig}
\end{figure}
\noindent 2. For $Z=10$, as shown in Figure \ref{10}, not all of the identified CNVs are real. The first reported segment, $x$ from 0 to 1800 (0-90Mbp), is not entirely a real CNV; in fact, only $x$ from 1000 to 1800 (50-90Mbp) should be called as a CNV. In addition, $x$ from 3500 to 3600 (175Mbp-180Mbp) might be a false negative: its average logR is lower than $-0.1$, yet this part has not been detected. To increase sensitivity we can decrease the Z-threshold; this produces more segment boundaries and therefore shorter segments on average. See Figures \ref{8} and \ref{1}.
\noindent 3. Biological analysis of the three large CNVs in the data:
\textbf{a. 50-90Mbp}
\begin{equation*}
logR = -0.3
\end{equation*}
\begin{equation*}
observed\ copy\ number = 2^{1-0.3}=1.6
\end{equation*}
assume $clonality = c$, with $CN = x$, there is:
\begin{equation*}
xc+2(1-c) = 1.6
\end{equation*}
possible $(x,c)$ pairs:
\begin{equation*}
x = 1, c = 0.4
\end{equation*}
\begin{equation*}
x = 0, c = 0.2
\end{equation*}
\textbf{b. 90-125Mbp}
\begin{equation*}
logR = -1.3
\end{equation*}
\begin{equation*}
observed\ copy\ number = 2^{1-1.3}=0.8
\end{equation*}
assume $clonality = c$, with $CN = x$, there is:
\begin{equation*}
xc+2(1-c) = 0.8
\end{equation*}
possible $(x,c)$ pairs:
\begin{equation*}
x = 0, c = 0.6
\end{equation*}
\textbf{c. 125-140Mbp}
\begin{equation*}
logR = -0.8
\end{equation*}
\begin{equation*}
observed\ copy\ number = 2^{1-0.8}=1.1
\end{equation*}
assume $clonality = c$, with $CN = x$, there is:
\begin{equation*}
xc+2(1-c) = 1.1
\end{equation*}
possible $(x,c)$ pairs:
\begin{equation*}
x = 1, c = 0.9
\end{equation*}
\begin{equation*}
x = 0, c = 0.45
\end{equation*}
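These $(x, c)$ pairs can be reproduced by solving $xc + 2(1-c) = \mathrm{CN}_{\mathrm{obs}}$ for $c$ at each candidate integer copy number $x$; a small sketch of the enumeration is shown below (illustrative only; rounding the observed copy number to one decimal mirrors the calculations above):
\begin{verbatim}
def clonality_candidates(log_r):
    # Enumerate (copy number x, clonality c) pairs consistent with
    # x*c + 2*(1 - c) = 2**(1 + log_r), assuming a diploid background
    # and considering only losses (x = 0 or 1) with 0 < c <= 1.
    observed = round(2 ** (1 + log_r), 1)
    pairs = []
    for x in (0, 1):
        c = (2 - observed) / (2 - x)
        if 0 < c <= 1:
            pairs.append((x, round(c, 2)))
    return pairs

# clonality_candidates(-0.3) -> [(0, 0.2), (1, 0.4)]
# clonality_candidates(-1.3) -> [(0, 0.6)]
# clonality_candidates(-0.8) -> [(0, 0.45), (1, 0.9)]
\end{verbatim}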
Based on the computed results above, the first and third changes could each be either a single or a double copy loss of DNA, while the second change must be a double copy loss. Assuming the three CNVs happen independently on two clones, there are four possible cases for these three CNVs, shown in Table \ref{tb:1}.
\begin{table}[h]
\centering
\begin{tabular}{c|cccccc}
case\# & $x_1$ & $c_1$ & $x_2$ & $c_2$ & $x_3$ & $c_3$ \\
\hline
1 & 0 & 0.2 & 0 & 0.6 & 0 & 0.45 \\
2 & 0 & 0.2 & 0 & 0.6 & 1 & 0.9 \\
3 & 1 & 0.4 & 0 & 0.6 & 0 & 0.45 \\
4 & 1 & 0.4 & 0 & 0.6 & 1 & 0.9 \\
\end{tabular}
\caption{Possible copy number ($x_i$) and clonality ($c_i$) combinations for the three changes.}
\label{tb:1}
\end{table}
From the table we find that the first and second changes may have happened in different cells, because $c_1+c_2$ is never larger than 1, whereas some cells must contain both the second and third changes, because $c_2+c_3$ is always larger than 1. The first and third changes may also have happened in different cells, since there are cases where $c_1+c_3$ is less than 1.
As discussed in the lectures, the B-allele frequency (BAF) could be computed from the original BAM file to substantiate this interpretation.
In reality there are many possible clonal structures and many possible distributions over the cell population. Figure \ref{clonal} shows two possible results. From the figure we see that subclones must exist in these cells; for example, in Figure \ref{11}, 90 percent of the cells lose part 3 (the third change) on one chromosome, and within these cells some also lose parts 1 and 2 (the first and second changes).
\begin{figure}[h]
\centering
\subfigure[Ten cells in total. 1 (10\%) is normal. 9 (90\%) lose the third change on one chromosome (within the azure square). 6 (60\%) lose the second change on both chromosomes (within the purple square) and 2 (20\%) lose the first part on both chromosomes (within the yellow square).]{
\includegraphics[width=0.75\textwidth]{clonal1.pdf}
\label{11}
}
\subfigure[Ten cells in total. 1 (10\%) is normal. 9 (90\%) lose the third change on one chromosome. 6 (60\%) lose the second change on both chromosomes and 4 (40\%) lose the first part on one chromosome.]{
\includegraphics[width=0.75\textwidth]{clonal2.pdf}
}
\caption{Possible clonal structures and their distributions. The red, blue and green parts represent the first, second and third changes, respectively. The proportion of each kind of cell represents its proportion in the overall population of cells.}
\label{clonal}
\end{figure}
%\bibliographystyle{plainnat}
%\bibliography{report}
\end{document} | {
"alphanum_fraction": 0.7451871246,
"avg_line_length": 58.2331838565,
"ext": "tex",
"hexsha": "df11c25286eb6c2c7948550ba9eaa8daed83c6a2",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-06-14T11:59:13.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-06-14T11:59:13.000Z",
"max_forks_repo_head_hexsha": "07bdb49fd4c50035b7f2e80ca218ac2b620098e4",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "hidara2000/Unimelb-CS-Subjects",
"max_forks_repo_path": "Conputional_Genonics/Assignment/assignment3/report/report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "07bdb49fd4c50035b7f2e80ca218ac2b620098e4",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "hidara2000/Unimelb-CS-Subjects",
"max_issues_repo_path": "Conputional_Genonics/Assignment/assignment3/report/report.tex",
"max_line_length": 1048,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "07bdb49fd4c50035b7f2e80ca218ac2b620098e4",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "infinityglow/Unimelb-CS-Subjects",
"max_stars_repo_path": "Conputional_Genonics/Assignment/assignment3/report/report.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-14T16:31:07.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-02-14T16:31:07.000Z",
"num_tokens": 3718,
"size": 12986
} |
%\subsection{Complexity Analysis: Not Feasible}%
%\pdfbookmark[2]{Complexity Analysis: Not Feasible}{complexityNotFeasible}%
\appendix{Complexity Analysis: Not Feasible}%
%
\begin{frame}[t]%
\frametitle{Algorithm Analysis and Comparison}%
\begin{itemize}%
\item \alert{Which of the algorithms is best (for my problem)?}%
\item<2-> Traditional Approach: Complexity Analysis, Theoretical Bounds of Runtime and Solution Quality%
\item<3-> \alert<-9>{Usually not feasible}\uncover<4->{%
\begin{itemize}%
\item analysis extremely complicated\uncover<5->{ since%
\item<5-> algorithms are usually randomized\uncover<6->{ and%
\item<6-> have many parameters (e.g., crossover rate, population size)\uncover<7->{ and%
\item<7-> \inQuotes{sub-algorithms} (e.g., crossover operator, mutation operator, selection algorithm)%
\item<8-> optimization problems also differ in many aspects%
\item<9-> theoretical results only available for toy problems and extremely simplified algorithms\scitep{WWCTL2016GVLSTIOPSOEAP}.%
\item<10-> \inQuotes{performance} has two dimensions (time, result quality), not one\dots%
}}}%
\end{itemize}%
}%
%
\item<11-> \alert{Experimental analysis and comparison is the only practical alternative.}%
%
\end{itemize}%
\end{frame}% | {
"alphanum_fraction": 0.7598710717,
"avg_line_length": 47.7307692308,
"ext": "tex",
"hexsha": "bfb3af8c8539d4f0b07a6995f31ea0adcddaace3",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2ed532fc6f0cd284b4f1965a24e5f22cf8361b92",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "optimizationBenchmarking/documentation-intro-slides",
"max_forks_repo_path": "appendix_complexityAnalysis.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2ed532fc6f0cd284b4f1965a24e5f22cf8361b92",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "optimizationBenchmarking/documentation-intro-slides",
"max_issues_repo_path": "appendix_complexityAnalysis.tex",
"max_line_length": 130,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2ed532fc6f0cd284b4f1965a24e5f22cf8361b92",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "optimizationBenchmarking/documentation-intro-slides",
"max_stars_repo_path": "appendix_complexityAnalysis.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 342,
"size": 1241
} |
%\begin{document}
\section{Integrated Autocorrelation Time}%
\label{sec:integrated_autocorrelation_time}
%
We include below plots of the integrated autocorrelation time for the
topological charge, \(\tau_{\mathrm{int}}^{\mathcal{Q}}\), for \(\mathcal{Q}
\in \mathbb{Z}\).
%
\begin{figure}[htpb]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{autocorrs/tau_int_vs_draws}%
\caption{\label{fig:tau_int_vs_draws}Estimate of \(\tau_\mathrm{int}\) vs
number of draws (length of chain) across \(\beta = 5\) (left), \(\beta =
6\) (center), and \(\beta = 7\) (right), for both HMC (grayscale) and the
trained sampler (pink).}
% \caption{Estimate of \(\tau_{\mathrm{int}}^{\mathcal{Q}}\) vs number of
% samples, $N$ for \(\beta = 5\) (left), \(6\) (middle), and \(7\)
% (right).}%
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{autocorrs/tau_int_vs_traj_len}
\caption{\label{fig:tau_int_vs_traj_len}Estimate of \(\tau_{\mathrm{int}}\)
vs trajectory length, \(\lambda\) across \(\beta = 5\) (left), \(\beta =
6\) (center), and \(\beta = 7\) (right), for both HMC (grayscale) and the
trained sampler (pink).}
% \caption{Estimate of \(\tau_{\mathrm{int}}^{\mathcal{Q}}\) vs trajectory
% length, \(N_\mathrm{LF} \cdot \varepsilon\) for \(\beta = 5\) (left), \(6\)
% (middle), and \(7\) (right).}%
% \label{fig:tau_int_vs_traj_len}
\end{subfigure}%
\caption{Intermediate steps in the calculation of the integrated
  autocorrelation time \(\tau_{\mathrm{int}}^{\mathcal{Q}}\), shown for
  \(\beta = 5\) (left), \(6\) (middle), and \(7\) (right).}
% \begin{subfigure}{\textwidth}
% \centering
% \includegraphics[width=0.7\textwidth]{autocorrs/tau_int_vs_beta}
% \caption{Estimate of \(\tau_{\mathrm{int}}^{\mathcal{Q}}\) vs \(\beta\) for
% both HMC and L2HMC samplers.}%
% \label{fig:tau_int_vs_beta}
% \end{subfigure}
\end{figure}
%
\begin{figure}[htpb]
\centering
\includegraphics[width=0.475\textwidth]{autocorrs/tau_int_vs_beta}
\caption{Estimate of \(\tau_{\mathrm{int}}^{\mathcal{Q}}\) vs \(\beta\) for both
HMC and L2HMC samplers.}%
\label{fig:tau_int_vs_beta}
\end{figure}
| {
"alphanum_fraction": 0.6463414634,
"avg_line_length": 43.320754717,
"ext": "tex",
"hexsha": "5302a2e0751084471e09f17223fb36085e991d71",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2021-05-25T00:49:14.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-10-31T02:25:04.000Z",
"max_forks_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "saforem2/l2hmc-qcd",
"max_forks_repo_path": "doc/autocorrs/autocorrs.tex",
"max_issues_count": 21,
"max_issues_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc",
"max_issues_repo_issues_event_max_datetime": "2022-02-26T17:43:51.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-09-09T21:10:48.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "saforem2/l2hmc-qcd",
"max_issues_repo_path": "doc/autocorrs/autocorrs.tex",
"max_line_length": 83,
"max_stars_count": 32,
"max_stars_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "saforem2/l2hmc-qcd",
"max_stars_repo_path": "doc/autocorrs/autocorrs.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-31T18:30:48.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-04-18T18:50:28.000Z",
"num_tokens": 804,
"size": 2296
} |