\subsection{Aggregate Demand (AD)}
For any given price level there is a corresponding IS-LM equilibrium, and hence an equilibrium level of output.
The Aggregate Demand curve traces out the relationship between the price level and this equilibrium output.
As the price level rises, the real money supply falls, so the nominal interest rate must rise to restore equilibrium in the money market (the LM curve shifts inwards).
The higher interest rate reduces investment, moving the economy along the IS curve to a lower level of output.
\(Y_d=Y_d(\dfrac{M}{P},G,T)\)
The Aggregate Demand curve therefore slopes downwards: a higher price level is associated with a lower level of equilibrium output.
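As a minimal sketch (assuming, purely for illustration, a linear IS relation \(Y = a - br + G - cT\) and a linear LM relation \(\frac{M}{P} = kY - hr\), neither of which is given above), solving the two equations for output yields
\[
Y_d=\frac{h(a+G-cT)+b\frac{M}{P}}{h+bk},
\]
so \(\frac{\partial Y_d}{\partial P}<0\): a higher price level lowers \(\frac{M}{P}\) and hence equilibrium output, which is why the Aggregate Demand curve slopes downwards.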
\section{Final Evaluation}
We have presented research work from several authors related to HRS. The topic is of chief importance since it addresses one of our proposal goals: a recommendation system for medical imaging diagnosis. As described above, these works contribute substantially to our own. Based on the evidence of their results, we are now encouraged to explore their work in greater depth by applying their alternative methods and studying the effects of these alternatives on our system and techniques.
"alphanum_fraction": 0.8195777351,
"avg_line_length": 173.6666666667,
"ext": "tex",
"hexsha": "9280e2ec0b3a5acf7abddd61191aaa7c3a8986ac",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f65c20947ba85df1f75aa86eab2b622230d8eda7",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "mida-project/reading-reports",
"max_forks_repo_path": "state-of-the-art/report_3/sections/final_evaluation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f65c20947ba85df1f75aa86eab2b622230d8eda7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "mida-project/reading-reports",
"max_issues_repo_path": "state-of-the-art/report_3/sections/final_evaluation.tex",
"max_line_length": 493,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "f65c20947ba85df1f75aa86eab2b622230d8eda7",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mida-project/reading-reports",
"max_stars_repo_path": "state-of-the-art/report_3/sections/final_evaluation.tex",
"max_stars_repo_stars_event_max_datetime": "2020-05-19T09:55:38.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-03-26T14:14:02.000Z",
"num_tokens": 96,
"size": 521
} |
\section{SVD as an eigen problem}
\label{sec:svd-lanczos-eigen}
The need to compute SVD factorizations numerically led
researchers to reformulate that problem as the closely related
eigen decomposition (or eigenproblem). That problem consists in
finding, for a square matrix $A$, its eigenvalues and
eigenvectors. If we arrange the eigenvectors in an orthogonal matrix
$Q$ and the eigenvalues in a diagonal matrix $\Lambda$, the eigen
problem can be restated as the following factorization: \\
\[
A = Q \Lambda \trans{Q}
\]
\hfill
In order to see the connection between the SVD and the eigenproblem,
we need to recall the gramian matrix $\trans{A}A$ from the theory
chapter. It was the gramian that provided the matrix $V$ in the
first place, because the vectors
$\vec{v}$ were its eigenvectors (see \cref{cha:svd-theory}). Finding
the matrix $V$ can then be thought of as the eigenproblem for the matrix
$\trans{A}A$, which can be stated as finding its diagonal
factorization such that:
\begin{equation}
\label{eq:eigenprob-doc}
\trans{A}A = V\Sigma^2\trans{V}
\end{equation}
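This equivalence follows directly by substituting the SVD $A = U\Sigma\trans{V}$ into the gramian and using the orthogonality $\trans{U}U = I$:
\[
\trans{A}A = \trans{(U\Sigma\trans{V})}\,(U\Sigma\trans{V})
           = V\Sigma\trans{U}U\Sigma\trans{V}
           = V\Sigma^2\trans{V}
\]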
\hfill
The same is true for the matrix $U$ if we now consider the matrix
$A\trans{A}$, which can be diagonalized by finding its eigenvectors
and placing them into the matrix $U$ (the eigenvalues are the same as
those of the gramian):
\begin{equation}
\label{eq:eigenprob-term}
A\trans{A} = U\Sigma^2\trans{U}
\end{equation}
\hfill
It is this second eigenproblem equivalence that is used for the
distributed algorithm of \cref{cha:svd-dist}. Given the SVD
factorization $A = U\Sigma\trans{V}$, if we have the original matrix
$A$, the diagonal matrix $\Sigma$ and the matrix $U$, we can
reconstruct the matrix $V$ (if required):
\[
\trans{V} = \inv{\Sigma} \trans{U} A = P A
\]
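The reconstruction follows by pre-multiplying the SVD by $\inv{\Sigma}\trans{U}$ and again using $\trans{U}U = I$:
\[
\inv{\Sigma}\trans{U}A = \inv{\Sigma}\trans{U}U\Sigma\trans{V} = \trans{V}
\]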
\hfill
The matrix $P = \inv{\Sigma} \trans{U}$ is called the projection matrix, and
is used in LSI for ``folding-in'' new document vectors $\vec{x}$, by
calculating $P\vec{x}$; that is, the matrix $P$ is used as a
predictive (rather than descriptive) model, to predict where
document $\vec{x}$ will be positioned in the latent space. \\
For this chapter, though, we could use either eigenproblem from
\cref{eq:eigenprob-doc} or \cref{eq:eigenprob-term}. The literature
originally reported the former, perhaps due to the early shape of the
matrices used for LSI (more terms than documents). Today's LSI
applications have many more documents than terms, but these
early algorithms are still useful, as we will see in \cref{cha:svd-dist}
(where the original matrix is split into several submatrices, which do
have the shape expected by the Lanczos algorithm that we document here).
\documentclass[titlepage, a4paper, 12pt]{article}
%\usepackage{ctex}
\usepackage{lmodern}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e}
\usepackage{amsmath}
\usepackage{txfonts}
\usepackage{amssymb}
\usepackage{times}
\usepackage{graphicx}
\usepackage{epsfig,tabularx,amssymb,amsmath,subfigure,multirow}
%\usepackage{algorithmic}
\usepackage[linesnumbered,ruled,noend]{algorithm2e}
\usepackage[noend]{algorithmic}
\usepackage{multirow}
\usepackage{graphicx,floatrow}
\usepackage{listings}
\usepackage{threeparttable}
%\usepackage{tikz}
\usepackage[T1]{fontenc}
\usepackage{pgfplots}
\usepackage{filecontents}
\usepackage{comment}
\lstset{%
alsolanguage=Java,
%language={[ISO]C++}, % the language could also be {[Visual]C++}
%alsolanguage=[ANSI]C, % multiple alsolanguage entries may be added, e.g. alsolanguage=matlab, alsolanguage=VHDL
%alsolanguage= tcl,
alsolanguage= XML,
tabsize=4, %
frame=shadowbox, % surround the code with a shadowed box
commentstyle=\color{red!50!green!50!blue!50},% light grey comments
rulesepcolor=\color{red!20!green!20!blue!20},% pale colour for the code-block frame
keywordstyle=\color{blue!90}\bfseries, % keywords in bold blue
showstringspaces=false,% do not mark spaces inside strings
stringstyle=\ttfamily, % typewriter style for strings
keepspaces=true, %
breakindent=22pt, %
numbers=left,% line numbers on the left (can also be right, or none for no numbering)
stepnumber=1,% number every line; 2 would number lines 1, 3, 5, ... (default is 1)
%numberstyle=\tiny, % tiny font for line numbers
numberstyle={\color[RGB]{0,192,192}\tiny} ,% colour and size of line numbers (tiny, scriptsize, footnotesize, small, normalsize, large, ...)
numbersep=8pt, % distance between line numbers and code (default 5pt)
basicstyle=\footnotesize, % base font size of the code
showspaces=false, %
flexiblecolumns=true, %
breaklines=true, % wrap overlong lines automatically
breakautoindent=true,%
breakindent=4em, %
% escapebegin=\begin{CJK*}{GBK}{hei},escapeend=\end{CJK*},
aboveskip=1em, % space above the code block
tabsize=2,
showstringspaces=false, % do not show spaces inside strings
backgroundcolor=\color[RGB]{245,245,244}, % background colour of the code
%backgroundcolor=\color[rgb]{0.91,0.91,0.91} % alternative background colour
escapeinside=``, % text between backticks is escaped to LaTeX (used for CJK text)
%% added by http://bbs.ctex.org/viewthread.php?tid=53451
fontadjust,
captionpos=t,
framextopmargin=2pt,framexbottommargin=2pt,abovecaptionskip=-3pt,belowcaptionskip=3pt,
xleftmargin=4em,xrightmargin=4em, % left/right margins of the listing
texcl=true,
% settings for CJK handling, line breaking, column mode, math escapes and the style of digits in listings
extendedchars=false,columns=flexible,mathescape=true
% numbersep=-1em
}
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\else % if luatex or xelatex
\ifxetex
\usepackage{mathspec}
\usepackage{xltxtra,xunicode}
\else
\usepackage{fontspec}
\fi
\defaultfontfeatures{Mapping=tex-text,Scale=MatchLowercase}
\newcommand{\euro}{€}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\usepackage{longtable,booktabs}
\ifxetex
\usepackage[setpagesize=false, % page size defined by xetex
unicode=false, % unicode breaks when used with xetex
xetex]{hyperref}
\else
\usepackage[unicode=true]{hyperref}
\fi
\hypersetup{breaklinks=true,
bookmarks=true,
pdfauthor={},
pdftitle={Gstore System},
colorlinks=true,
citecolor=blue,
urlcolor=blue,
linkcolor=magenta,
pdfborder={0 0 0}}
\urlstyle{same} % don't use monospace font for urls
%\setlength{\parskip}{6pt plus 2pt minus 1pt}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\setcounter{secnumdepth}{0}
\setlength{\parindent}{0pt}
%\setlength{\parindent}{2em}
\addtolength{\parskip}{3pt}
\linespread{1.3}
\begin{document}
\title{\includegraphics[scale=0.3, bb=0 0 385 567]{logo.png} \\
The handbook of gStore System}
%\author{Bookug Lobert\footnote{EECS of Peking University, [email protected]}\\[2ex]}
\author{Edited by gStore team \footnote{The mailing list is given in Chapter 13.}}
\date{\today}
%\begin{figure}[b]
% \centering
% \includegraphics[scale=0.3,bb=0 0 385 567]{../logo.png}
%\caption{Some description about the picture}
% \label{logo}
%\end{figure}
\maketitle
\hyperdef{}{MathJaxux5fSVGux5fHidden}{}
\hyperdef{}{wmd-preview}{}
\setcounter{tocdepth}{4}
\tableofcontents
\clearpage
\section{Preface}
The RDF (\emph{R}esource \emph{D}escription \emph{F}ramework) is a family of specifications proposed by W3C for modeling Web objects as part of developing the semantic web. In the RDF model, each Web object is modeled as a uniquely named \emph{resource} and denoted by a URI (\emph{U}niform \emph{R}esource \emph{I}dentifier). RDF also uses URIs to name the properties of resources and the relationships between resources; a statement that links two resources through a property is usually referred to as a ``triple''. Hence, an RDF dataset can be represented as a directed, labeled graph where resources are vertices, and triples are
edges with property or relationship names as edge labels. For more details, please go to \href{https://www.w3.org/RDF/}{RDF Introduction}\\
To retrieve and manipulate an RDF graph, W3C also proposes a structured query language, SPARQL (\emph{S}imple \emph{P}rotocol \emph{A}nd \emph{R}DF \emph{Q}uery \emph{L}anguage), to access RDF repository. SPARQL contains capabilities for querying required and optional graph patterns along with their conjunctions and disjunctions. SPARQL also supports aggregation, subqueries, negation, creating values by expressions, extensible value testing, and constraining queries by source RDF graph. Similar to RDF graphs, a SPARQL query can also be modeled as a graph, which is a query graph with some variables. Then, evaluating a SPARQL query is equivalent to finding subgraph (homomorphism) matches of a query graph over an RDF graph. You can have a better understanding of SPARQL at \href{https://www.w3.org/TR/sparql11-query/}{SPARQL Introduction}.\\
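For instance (a purely illustrative triple and query, not taken from any dataset shipped with gStore), the RDF triple
\begin{verbatim}
<http://example.org/Alice>  <http://example.org/knows>  <http://example.org/Bob> .
\end{verbatim}
states that the resource Alice is related to the resource Bob by the property knows, and the SPARQL query
\begin{verbatim}
select ?who where { <http://example.org/Alice> <http://example.org/knows> ?who . }
\end{verbatim}
matches this triple and binds the variable ?who to \texttt{<http://example.org/Bob>}.\\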
Although there are some RDF data management systems (like Jena, Virtuoso, Sesame) that store the RDF data in relational systems, few existing systems exploit the native graph pattern
matching semantics of SPARQL. \textbf{Here, we implement a graph-based RDF triple store named gStore, which is a joint research project by Peking University, University of Waterloo and Hong Kong University of Science and Technology. The system is developed and maintained by the database group in the Institute of Computer Science and Technology, Peking University, China.} A detailed description of gStore can be found in our papers {[}Zou et al., VLDB 11{]} and {[}Zou et al., VLDB Journal 14{]} in the \hyperref[chapter09]{Publication} chapter. This HELP document covers system installation, usage, APIs, use cases and FAQ. gStore is an open-source project on GitHub under the BSD license. You are welcome to use gStore, report bugs or suggestions, or join us to make gStore better. It is also allowed for you to build all kinds of applications based on gStore, while respecting our work.\\
\textbf{Please make sure that you have read \hyperref[chapter18]{Legal Issues} before using gStore.}
\clearpage
\part{Start}
\hyperdef{}{chapter00}{\subsection{Chapter 00: A Quick Tour}\label{chapter00}}
Gstore System (also called gStore) is an open-source graph database engine for managing large graph-structured data, targeting Linux operating systems. The whole project is written in C++, with the help of some libraries such as readline, antlr, and so on. Only source tarballs are provided currently, which means you have to compile the source code if you want to use our system.
\hyperdef{}{getting-started}{\subsubsection{Getting
Started}\label{getting-started}}
This system is really user-friendly and you can pick it up in several minutes. Remember to check the platform where you want to run this system against the \hyperref[chapter01]{System Requirements}. Once everything is verified, please get this project's source code. There are several ways to do this:
\begin{itemize}
\item
download the zip from this repository and extract it
\item
fork this repository in your github account
\item
type \texttt{git\ clone\ git@github.com:Caesar11/gStore.git} in your
terminal or use git GUI to acquire it
\end{itemize}
Then you need to compile the project: just type \texttt{make} in the gStore root directory, and all executables will be generated. To run gStore, please type \texttt{bin/gbuild\ database\_name\ dataset\_path} to build a database with a name of your choice. You can then use the \texttt{bin/gquery\ database\_name} command to query an existing database. What is more, \texttt{bin/gconsole} is a convenient tool that provides all operations you need to use gStore. A minimal example session is shown below.
Notice that all commands should be typed in the root directory of gStore.
\emph{A detailed description can be found at Chapter 04
\hyperref[chapter04]{How to use} in this document.}
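For example, assuming the sample dataset \texttt{./data/LUBM\_10.n3} and query \texttt{./data/LUBM\_q0.sql} shipped with the source (the same files used in the examples of Chapter 04), a minimal session looks like this:
\begin{verbatim}
[bookug@localhost gStore]$ make
[bookug@localhost gStore]$ bin/gbuild LUBM10 ./data/LUBM_10.n3
[bookug@localhost gStore]$ bin/gquery LUBM10
gsql>sparql ./data/LUBM_q0.sql
gsql>quit
\end{verbatim}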
\hyperdef{}{advanced-help}{\subsubsection{Advanced
Help}\label{advanced-help}}
If you want to understand the details of the gStore system, or you want to try some advanced operations(for example, using the API, server/client), please see the chapters below.
\begin{itemize}
\item
\hyperref[chapter02]{Basic Introduction}: introduce the theory and features of gStore
\item
\hyperref[chapter03]{Install Guide}: instructions on how to install this system
\item
\hyperref[chapter04]{How To Use}: detailed information about using the gStore system
\item
\hyperref[chapter05]{Socket API Explanation}: guide you to develop applications based on our Socket API
\item
\hyperref[chapter06]{HTTP API Explanation}: guide you to develop applications based on our HTTP API
\item
\hyperref[chapter08]{Project Structure}: show the whole structure and sequence of this project
\item
\hyperref[chapter09]{Publications}: contain essays and publications
related with gStore
\item
\hyperref[chapter10]{Update Logs}: keep the logs of the system updates
\item
\hyperref[chapter15]{Test Result}: present the test results of a series of experiments
\end{itemize}
\hyperdef{}{other-business}{\subsubsection{Other Business}\label{other-business}}
We have written a series of short essays addressing recurring challenges in using gStore to realize applications, which are placed in
\hyperref[chapter12]{Recipe Book}.
You are welcome to report suggestions or errors in the GitHub Issues section of this repository if you do not require a timely reply. However, if you need us to deal with your report urgently, please email your suggestions and bug reports to us directly. A full list of our whole team is in \hyperref[chapter13]{Contributors}.
There are some restrictions when you use the current gStore project; you can see them in \hyperref[chapter10]{Limitations}.
Sometimes you may encounter strange phenomena (which are not necessarily errors), or something hard to understand or solve; in that case, do not hesitate to visit the \hyperref[chapter11]{Frequently Asked Questions} page.
Graph database engines are a new area and we are still trying to go further. Things we plan to do next are listed in the \hyperref[chapter16]{Future Plan} chapter, and we hope more and more people will support or even
join us. You can support us in many ways:
\begin{itemize}
\item
watch/star our project
\item
fork this repository and submit pull requests to us
\item
download and use this system, report bugs or suggestions
\item
\ldots{}
\end{itemize}
People who inspire us or contribute to this project will be listed in the \hyperref[chapter17]{Thanks List} chapter.
\clearpage
\hyperdef{}{chapter01}{\subsection{Chapter 01: System Requirements}\label{chapter01}}
\emph{We have tested on Linux servers with CentOS 6.2 x86\_64 and CentOS 6.6 x86\_64. The version of GCC should be 4.4.7 or later.}
\begin{longtable}[c]{@{}ll@{}}
\toprule
Item & Requirement\tabularnewline
\midrule
\endhead
operation system & Linux, such as CentOS, Ubuntu and so on\tabularnewline
architecture & x86\_64\tabularnewline
disk size & according to size of dataset\tabularnewline
memory size & according to size of dataset\tabularnewline
glibc & version \textgreater{}= 2.14\tabularnewline
gcc & version \textgreater{}= 4.4.7\tabularnewline
g++ & version \textgreater{}= 4.4.7\tabularnewline
make & need to be installed\tabularnewline
boost & version >= 1.54\tabularnewline
readline & need to be installed\tabularnewline
readline-devel & need to be installed\tabularnewline
openjdk & needed if using Java api\tabularnewline
openjdk-devel & needed if using Java api\tabularnewline
realpath & needed if using gconsole\tabularnewline
ccache & optional, used to speed up the compilation\tabularnewline
\bottomrule
\caption{software requirement}
\end{longtable}
NOTICE:
\begin{enumerate}
\item
The names of some packages may differ across platforms; just install the corresponding one for your own operating system.
\item
To install readline and readline-devel, just type \texttt{dnf\ install\ readline-devel} in Redhat/CentOS/Fedora, or \texttt{apt-get\ install\ libreadline-dev} in Debian/Ubuntu. Please use the corresponding commands on other systems. If you use ArchLinux, just type \texttt{pacman\ -S\ readline} to install readline and readline-devel (the same applies to other packages).
\item
You do not have to install realpath to use gStore, but if you want to use the gconsole for its convenience, please do so by using \texttt{dnf\ install\ realpath} or \texttt{apt-get\ install\ realpath}.
\item
Our programs use regEx functions, which are provided by GNU/Linux by default.
\item
ANTLR 3.4 is used in gStore to produce the lexer and parser code for SPARQL queries. However, you do not need to install the corresponding antlr libraries because we have merged libantlr3.4 into our system.
\item
When you type \texttt{make} in the root directory of the gStore project, the Java api will also be compiled. You can modify the makefile if you do not have JDK in your system. However, you are advised to install openjdk-devel in your Linux system.
\item
To install ccache, you need to add the epel repository if using CentOS, while in Ubuntu you can install it directly with the 'apt-get install ccache' command. If you cannot install ccache (or do not want to), please modify the makefile (just change the CC variable to g++).
\item If you need to use the HTTP server in gStore, then Boost Library(like boost-devel, including boost headers for developing) must be installed and the version should not be less than 1.54. Remember to check the makefile for your installed path of Boost.
\item
Any other questions, please go to \hyperref[chapter11]{FAQ} page.
\end{enumerate}
\clearpage
\hyperdef{}{chapter02}{\subsection{Chapter 02: Basic Introduction}\label{chapter02}}
\textit{The first paper to present the gStore system is
\href{run:../pdf/gStoreVLDBJ.pdf}{gStore\_VLDBJ}, and you can find related publications in
\hyperref[chapter09]{Publications}.}
\hyperdef{}{what-is-gstore}{\subsubsection{What Is
gStore}\label{what-is-gstore}}
gStore is a graph-based RDF data management system (or what is commonly called a ``triple store'') that maintains the graph structure of the original \href{http://www.w3.org/TR/rdf11-concepts/}{RDF} data. Its data model is a labeled, directed multi-edge graph, where each vertex corresponds to a subject or an object.
We represent a given \href{http://www.w3.org/TR/sparql11-overview/}{SPARQL} query by a query graph Q. Query processing involves finding subgraph matches of Q over the RDF graph G, instead of joining tables in relational data management system. gStore incorporates an index over the RDF graph (called VS-tree) to speed up query processing. VS-tree is a height balanced tree with a number of associated pruning techniques to speed up subgraph matching.
\textbf{The gStore project is supported by the National Science Foundation of China (NSFC), Natural Sciences and Engineering Research Council (NSERC) of Canada, and Hong Kong RGC.}
\hyperdef{}{why-gstore}{\subsubsection{Why gStore}\label{why-gstore}}
After a series of tests, we analysed the results and recorded them in \hyperref[chapter15]{Test Results}. gStore answers complicated queries (for example, those containing cycles) faster than other database systems. For simple queries, both gStore and other database systems work
well.
In addition, this is the big data era and more and more structured data is arriving, while relational database systems (or database systems based on relational tables) cannot deal with it efficiently. In contrast, gStore can exploit the features of graph data structures to improve performance.
What is more, gStore is a highly extensible project. Many new ideas for graph databases have been proposed, and most of them can be used in gStore. For example, our group is also designing a distributed gStore system, which is expected to be released at the end of 2016.
\hyperdef{}{open-source}{\subsubsection{Open Source}\label{open-source}}
The gStore source code is available as open-source code under the BSD license. You are welcome to use gStore, report bugs or suggestions, or join us to make gStore better. It is also allowed for you to build all kinds of applications based on gStore, while respecting our work.
\clearpage
\hyperdef{}{chapter03}{\subsection{Chapter 03: Install Guide}\label{chapter03}}
You are advised to read the init.conf file and modify it as you wish (this file configures the basic options of the gStore system).
gStore is portable (``green'') software, and you just need to compile it with one command. Please run \texttt{make} in the gStore root directory to compile the gStore code, link the ANTLR lib, and build the executables ``gbuild'', ``gquery'', ``gserver'', ``gclient'', ``gconsole''. What is more, the API of gStore is also built now.
If you want to use API examples of gStore, please run \texttt{make\ APIexample} to compile example codes for both C++ API and Java API. For details of API, please visit \hyperref[chapter05]{API} chapter.
Use \texttt{make\ clean} command to clean all objects, executables, and use \texttt{make\ dist} command to clean all objects, executables, libs, datasets, databases, debug logs, temp/text files in the gStore root directory.
You are free to modify the source code of gStore and create your own project while respecting our work, and type \texttt{make\ tarball} command to compress all useful files into a .tar.gz file, which is easy to carry.
Type \texttt{make\ gtest} to compile the gtest program if you want to use this test utility. You can see the \hyperref[chapter04]{HOW TO USE} for details of gtest program.
\clearpage
\hyperdef{}{chapter04}{\subsection{Chapter 04: How To Use}\label{chapter04}}
\textit{gStore currently provides a number of executables and utilities, described below.}
\textbf{All gStore commands should be run from the root directory of gStore, e.g. bin/gconsole, because the executables are placed in bin/ and they may use files whose paths are given in the code as relative, not absolute, paths. We will later make all paths absolute by asking users to give the absolute paths on their own systems when installing/configuring gStore. For now, however, you must do as described above to avoid errors.}
\hyperdef{}{0-gconsole}{\paragraph{0. gconsole}\label{0-gconsole}}
gconsole is the main console of gStore, which integrates all functions for operating on gStore, as well as some system commands. Completion of command names, line-editing features and access to the history list are all provided. Feel free to try it, and you may have a wonderful tour! (Spaces or tabs at the beginning or end are fine, and there is no need to type any special characters as separators.)
\begin{verbatim}
[bookug@localhost gStore]$ bin/gconsole
Gstore Console(gconsole), an interactive shell based utility to communicate with
gStore repositories.
usage: start-gconsole [OPTION]
-h,--help print this help
-s,--source source the SPARQL script
For bug reports and suggestions, see https://github.com/Caesar11/gStore
notice that commands are a little different between native mode and remote mode!
now is in native mode, please type your commands.
please do not use any separators in the end.
gstore>help
gstore>help drop
drop Drop a database according to the given path.
gstore>connect 127.0.0.1 3305
now is in remote mode, please type your commands.
server>disconnect
now is in native mode, please type your commands.
gstore>build lubm_10 ./data/LUBM_10.n3
...
import RDF file to database done.
gstore>unload
gstore>load lubm_10
...
database loaded successfully!
gstore>show
lubm_10
gstore>query ./data/LUBM_q0.sql
...
final result is :
?x
<http://www.Department0.University0.edu/FullProfessor0>
<http://www.Department1.University0.edu/FullProfessor0>
<http://www.Department2.University0.edu/FullProfessor0>
<http://www.Department3.University0.edu/FullProfessor0>
<http://www.Department4.University0.edu/FullProfessor0>
<http://www.Department5.University0.edu/FullProfessor0>
<http://www.Department6.University0.edu/FullProfessor0>
<http://www.Department7.University0.edu/FullProfessor0>
<http://www.Department8.University0.edu/FullProfessor0>
<http://www.Department9.University0.edu/FullProfessor0>
<http://www.Department10.University0.edu/FullProfessor0>
<http://www.Department11.University0.edu/FullProfessor0>
<http://www.Department12.University0.edu/FullProfessor0>
<http://www.Department13.University0.edu/FullProfessor0>
<http://www.Department14.University0.edu/FullProfessor0>
gstore>query "select distinct ?x ?y where { ?x <rdf:type>
<ub:UndergraduateStudent> .
?x <ub:takesCourse> ?y . ?y <ub:name> <FullProfessor1> . }"
final result is :
?x ?y
[empty result]
gstore>unload
gstore>quit
\end{verbatim}
Just type \texttt{bin/gconsole} in the root directory of gStore to use this console, and you will find a \texttt{gstore\textgreater{}} prompt, which indicates that you are in native mode and can type native commands now. There is another mode of this console, called remote mode. Just type \texttt{connect} in native mode to enter remote mode, and type \texttt{disconnect} to exit back to native mode. (By default the console connects to a gStore server whose IP is `127.0.0.1' and port is 3305; you can specify them by typing \texttt{connect\ gStore\_server\_ip\ gStore\_server\_port}.)
You can use \texttt{help} or \texttt{?} in either native mode or remote mode to see the help information, or you can type \texttt{help\ command\_name} or \texttt{?\ command\_name} to see the information for a given command. Notice that there are some differences between the commands in native mode and those in remote mode. For example, system commands like \texttt{ls}, \texttt{cd} and \texttt{pwd} are provided in native mode, but not in remote mode. Also take care that not all commands listed on the help page are fully implemented, and we may change some functions of the console in the future.
What we have done should make gStore convenient to use; just enjoy it!
\hyperdef{}{1-gbuild}{\paragraph{1. gbuild}\label{1-gbuild}}
gbuild is used to build a new database from an RDF triple format file.
\texttt{bin/gbuild\ db\_name\ rdf\_triple\_file\_path}
For example, we build a database from LUBM\_10.n3, which can be found in
the data folder.
\begin{verbatim}
[bookug@localhost gStore]$ bin/gbuild LUBM10 ./data/LUBM_10.n3
gbuild...
argc: 3 DB_store:LUBM10 RDF_data: ./data/LUBM_10.n3
begin encode RDF from : ./data/LUBM_10.n3 ...
\end{verbatim}
\hyperdef{}{2-gquery}{\paragraph{2. gquery}\label{2-gquery}}
gquery is used to query an existing database with files containing
SPARQL queries.(each file contains exact one SPARQL query)
Type \texttt{bin/gquery\ db\_name\ query\_file} to execute the SPARQL
query retrieved from query\_file in the database named db\_name.
Use \texttt{bin/gquery\ -\/-help} for detailed information on gquery
usage.
To enter the gquery console, type \texttt{bin/gquery\ db\_name}. The
program shows a command prompt (``gsql\textgreater{}''), and you can type
in a command here. Use \texttt{help} to see basic information on all
commands, while \texttt{help\ command\_t} shows details of a specified
command.
Type \texttt{quit} to leave the gquery console.
For the \texttt{sparql} command, input a file path which contains a single
SPARQL query. (\emph{Redirecting the answer to a file is supported.})
When the program finishes answering the query, it shows the command prompt
again.
\emph{gStore 2.0 only supports simple ``select'' queries (not on
predicates) for now.}
We also take LUBM\_10.n3 as an example.
\begin{verbatim}
[bookug@localhost gStore]$ bin/gquery LUBM10
gquery...
argc: 2 DB_store:LUBM10/
loadTree...
LRUCache initial...
LRUCache initial finish
finish loadCache
finish loadEntityID2FileLineMap
open KVstore
finish load
finish loading
Type `help` for information of all commands
Type `help command_t` for detail of command_t
gsql>sparql ./data/LUBM_q0.sql
... ...
Total time used: 4ms.
final result is :
<http://www.Department0.University0.edu/FullProfessor0>
<http://www.Department1.University0.edu/FullProfessor0>
<http://www.Department2.University0.edu/FullProfessor0>
<http://www.Department3.University0.edu/FullProfessor0>
<http://www.Department4.University0.edu/FullProfessor0>
<http://www.Department5.University0.edu/FullProfessor0>
<http://www.Department6.University0.edu/FullProfessor0>
<http://www.Department7.University0.edu/FullProfessor0>
<http://www.Department8.University0.edu/FullProfessor0>
<http://www.Department9.University0.edu/FullProfessor0>
<http://www.Department10.University0.edu/FullProfessor0>
<http://www.Department11.University0.edu/FullProfessor0>
<http://www.Department12.University0.edu/FullProfessor0>
<http://www.Department13.University0.edu/FullProfessor0>
<http://www.Department14.University0.edu/FullProfessor0>
\end{verbatim}
Notice:
\begin{itemize}
\item
``{[}empty result{]}'' will be printed if there is no answer, and there is an
empty line after all results.
\item
the readline lib is used, so you can use the up and down arrow keys to browse the
command history, and the left and right arrow keys to move within and modify your entire
command.
\item
path completion is supported for convenience (but not built-in command
completion)
\end{itemize}
\hyperdef{}{3-ghttp}{\paragraph{3. ghttp}\label{3-ghttp}}
ghttp is a daemon. It should be launched first when accessing gStore via the HTTP protocol. It uses port 9000.
Just type \texttt{bin/ghttp} to start the server. After the server is started, you can access it by visiting the URL in a browser or by using the RESTful API in your program. You can press Ctrl-C to stop the server. (Multiple connections are supported by the HTTP server.)
\begin{verbatim}
[bookug@localhost gStore]$ bin/ghttp
the current settings are as below:
key : value
-----------------------------------------------------------
BackupTime : 2000 # 4 am (GMT+8)
buffer_maxium : 100
db_home : .
db_suffix : .db
debug_level : simple
gstore_mode : single
operation_logs : true
thread_maxium : 1000
enter initialize.
server port: 9000 database name:
\end{verbatim}
URL rules are listed below:
parameters: operation, db\_name, ds\_path, format, sparql
NOTICE: URL-encode the request before sending it to the database server; an encoded example is given after the list below.
operation: build, load, unload, query, monitor, show, checkpoint
\begin{itemize}
\item
db\_name: the name of database, like lubm
\item
format: html, json, txt, csv
\item
sparql: select ?s where \{ ?s ?p ?o . \}
\item
ds\_path in the server: like /home/data/test.n3
\end{itemize}
Examples:
\begin{itemize}
\item
to build a database from a dataset:\\
http://localhost:9000/?operation=build\&db\_name=[db\_name]\&ds\_path=[ds\_path]
\item
to load a database:\\
http://localhost:9000/?operation=load\&db\_name=[db\_name]
\item
to query a database:\\
http://localhost:9000/?operation=query\&format=[format]\&sparql=[sparql]
\item
to unload a database:\\
http://localhost:9000/?operation=unload\&db\_name=[db\_name]
\item
to monitor the server:\\
http://localhost:9000/?operation=monitor
\item
to show the database used:\\
http://localhost:9000/?operation=show
\item
to save the database currently:\\
http://localhost:9000/?operation=checkpoint
\end{itemize}
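For instance, the sample SPARQL query listed above (\texttt{select ?s where \{ ?s ?p ?o . \}}), URL-encoded as required, could be sent to the server as:
\begin{verbatim}
http://localhost:9000/?operation=query&format=json&sparql=select%20%3Fs%20where%20%7B%20%3Fs%20%3Fp%20%3Fo%20.%20%7D
\end{verbatim}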
\hyperdef{}{4-gserver}{\paragraph{4. gserver}\label{4-gserver}}
gserver is a daemon. It should be launched first when accessing gStore
by gclient or API. It communicates with client through socket.
\begin{verbatim}
[bookug@localhost gStore]$ bin/gserver -s
Server started at port 3305
\end{verbatim}
\begin{verbatim}
[bookug@localhost gStore]$ bin/gserver -t
Server stopped at port 3305
\end{verbatim}
You can also assign a custom port for listening.
\begin{verbatim}
[bookug@localhost gStore]$ bin/gserver -p 3307
Port changed to 3307.
\end{verbatim}
Notice: Multiple threads are not supported by gserver. If you start up
gclient in more than one terminal at the same time, gserver will go
down.
\hyperdef{}{5-gclient}{\paragraph{5. gclient}\label{5-gclient}}
gclient is designed as a client to send commands and receive feedbacks.
\begin{verbatim}
[bookug@localhost gStore]$ bin/gclient
ip=127.0.0.1 port=3305
gsql>help
help - print commands message
quit - quit the console normally
import - build a database for a given dataset
load - load an existen database
unload - unload an existen database
sparql - load query from the second argument
show - show the current database's name
gsql>import lubm data/LUBM_10.n3
import RDF file to database done.
gsql>load lubm
load database done.
gsql>sparql "select ?s ?o where { ?s <rdf:type> ?o . }"
[empty result]
gsql>quit
\end{verbatim}
You can also assign gserver's ip and port.
\begin{verbatim}
[bookug@localhost gStore]$ bin/gclient 172.31.19.15 3307
ip=172.31.19.15 port=3307
gsql>
\end{verbatim}
We can use the following commands:
\begin{itemize}
\item
\texttt{help} shows the information of all commands
\item
\texttt{import\ db\_name\ rdf\_triple\_file\_name} builds a database
from an RDF triple file
\item
\texttt{load\ db\_name} loads an existing database
\item
\texttt{unload\ db\_name} unloads the database without deleting it from
disk, so you can load it again later
\item
\texttt{sparql\ "query\_string"} queries the current database with a
SPARQL query string (enclosed in double quotes)
\item
\texttt{show} displays the name of the currently loaded database
\end{itemize}
Notice:
\begin{itemize}
\item
at most one database can be loaded in the gclient console
\item
you can place ` ' or `\textbackslash{}t' between different parts of a
command, but do not use characters like `;'
\item
you should not place any space or tab ahead of the start of any
command
\end{itemize}
\hyperdef{}{6-test-utilities}{\paragraph{6. test
utilities}\label{6-test-utilities}}
A series of test programs are placed in the test/ folder, and we will
introduce the two most useful ones: gtest.cpp and full\_test.sh
\textbf{gtest is used to test gStore with multiple datasets and
queries.}
To use gtest utility, please type \texttt{make\ gtest} to compile the
gtest program first. Program gtest is a test tool to generate structural
logs for datasets. Please type \texttt{./gtest\ -\/-help} in the working
directory for details.
\textbf{Please change paths in the test/gtest.cpp if needed.}
You should place the datasets and queries in this way:
\begin{verbatim}
DIR/WatDiv/database/*.nt
DIR/WatDiv/query/*.sql
\end{verbatim}
Notice that DIR is the root directory where you place all datasets to
be used by gtest, and WatDiv is a class of datasets, as is
LUBM. Inside WatDiv (or LUBM, etc.), please place all datasets (named
with .nt) in a database/ folder, and place all queries (corresponding to
the datasets, named with .sql) in a query/ folder.
Then you can run the gtest program with the specified parameters, and the
output will be sorted into three logs in the gStore root directory:
load.log/ (for database loading time and size), time.log/ (for query time)
and result.log/ (which does not keep the entire output strings of the
queries, but records whether the results of the two selected database
systems matched or not).
All logs produced by this program are in TSV format (separated with
`\textbackslash{}t'), so you can load them into Calc/Excel/Gnumeric
directly. Notice that the time unit is ms and the space unit is KB.
\textbf{full\_test.sh is used to compare the performance of gStore and
other database systems on multiple datasets and queries.}
To use the full\_test.sh utility, please download the database system which
you want to test and compare, and set the exact positions of the database
systems and datasets in this script. The naming strategy should be the
same as the requirements of gtest, as should the logging strategy.
Only gStore and Jena are tested and compared in this script, but it is
easy to add other database systems, if you would like to spend some time
on reading this script. You may go to
\href{run:../pdf/gstore���Ա���.pdf}{test
report} or \hyperref[chapter11]{Frequently Asked Questions} for help if
you encounter a problem.
\hyperdef{}{7-gadd}{\paragraph{7. gadd}\label{7-gadd}}
gadd is used to add triples in a file to an existing database.
Usage: \texttt{bin/gadd db\_name rdf\_triple\_file\_path}.
\begin{verbatim}
[bookug@localhost gStore]$ bin/gadd lubm ./data/LUBM_10.n3
...
argc: 3 DB_store:lubm insert file:./data/LUBM_10.n3
get important pre ID
...
insert rdf triples done.
inserted triples num: 99550
\end{verbatim}
\hyperdef{}{8-gsub}{\paragraph{8. gsub}\label{8-gsub}}
gsub is used to remove triples from an existing database.
Usage: \texttt{bin/gsub db\_name rdf\_triple\_file\_path}.
\begin{verbatim}
[bookug@localhost gStore]$ bin/gsub lubm data/LUBM_10.n3
...
argc: 3 DB_store:lubm remove file: data/LUBM_10.n3
...
remove rdf triples done.
removed triples num: 99550
\end{verbatim}
\hyperdef{}{9-gmonitor}{\paragraph{9. gmonitor}\label{9-gmonitor}}
After starting ghttp, go into gStore/bin/ and type \texttt{./gmonitor ip port} to check current status of gStore.
\begin{verbatim}
[bookug@localhost bin]$ ./gmonitor 127.0.0.1 9000
parameter: ?operation=monitor
request: http://127.0.0.1:9000/%3Foperation%3Dmonitor
null--->[HTTP/1.1 200 OK]
Content-Length--->[127]
database: lubm
triple num: 99550
entity num: 28413
literal num: 0
subject num: 14569
predicate num: 17
connection num: 7
\end{verbatim}
\hyperdef{}{10-gshow}{\paragraph{10. gshow}\label{10-gshow}}
After starting ghttp, go into gStore/bin and type \texttt{./gshow ip port} to check loaded database.
\begin{verbatim}
[bookug@localhost gStore]$ ./gshow 127.0.0.1 9000
parameter: ?operation=show
request: http://127.0.0.1:9000/%3Foperation%3Dshow
null--->[HTTP/1.1 200 OK]
Content-Length--->[4]
lubm
\end{verbatim}
\clearpage
\part{Advanced}
\hyperdef{}{chapter05}{\subsection{Chapter 05: Socket API Explanation}\label{chapter05}}
\textbf{This chapter guides you to use the socket API for accessing gStore, which can be used when the server runs gserver. We also provide an HTTP API for ghttp; please see \hyperref[chapter06]{HTTP API Explanation}.}
\hyperdef{}{easy-examples}{\subsubsection{Easy
Examples}\label{easy-examples}}
We now provide Java, C++, PHP and Python APIs for gStore. Please refer to the example
code in \texttt{api/socket/cpp/example}, \texttt{api/socket/java/example}, \texttt{api/socket/php} and \texttt{api/socket/python/example}. To try the four examples, please first ensure that the executables have already been generated; for Java and C++, just type \texttt{make\ APIexample} in the root directory of gStore to compile the example code, as well as the API itself.
Next, \textbf{start up a gStore server by using the \texttt{./gserver}
command.} It is fine if you know a running, usable gStore server and try to
connect to it, but notice that \textbf{the server IP and port used by the server
and the client must match.} (You do not need to change anything if you use the
examples with their defaults.) Then, for the Java and C++ code, you need to compile the example code
in the directory gStore/api/socket/. We provide a utility to do this: you
just need to type \texttt{make\ APIexample} in the root directory of
gStore. Or you can compile the code yourself; in this case please go
to gStore/api/socket/cpp/example/ and gStore/api/socket/java/example/, respectively.
Finally, go to the example directory and run the corresponding
executables. For C++, just use the \texttt{./example} command to run it. And
for Java, use the \texttt{make\ run} command or \texttt{java\ -cp\ ../lib/GstoreJavaAPI.jar:.\ JavaAPIExample} to run
it. For PHP, use \texttt{php ./PHPAPIExample}. For Python, use \texttt{python ./PythonAPIExample}. All four examples will connect to a specified gStore server
and do some load or query operations. Be sure that you see the query
results in the terminal where you run the examples; otherwise please go
to \hyperref[chapter11]{Frequently Asked Questions} for help or report
it to us (the report approach is described in the
\hyperref[chapter00]{README}).
You are advised to read the example code carefully, as well as the
corresponding Makefile. This will help you to understand the API,
especially if you want to write your own programs based on the API
interface.
\hyperdef{}{api-structure}{\subsubsection{API structure}\label{api-structure}}
The API of gStore is placed in api/socket/ directory in the root directory of
gStore, whose contents are listed below:
\begin{itemize}
\item
gStore/api/socket/
\begin{itemize}
\item
cpp/ (the C++ API)
\begin{itemize}
\item
src/ (source code of C++ API, used to build the
lib/libgstoreconnector.a)
\begin{itemize}
\item
GstoreConnector.cpp (interfaces to interact with gStore server)
\item
GstoreConnector.h
\item
Makefile (compile and build lib)
\end{itemize}
\item
lib/ (where the static lib lies in)
\begin{itemize}
\item
.gitignore
\item
libgstoreconnector.a (only exist after compiled, you need to
link this lib when you use the C++ API)
\end{itemize}
\item
example/ (small example program to show the basic idea of using
the C++ API)
\begin{itemize}
\item
CppAPIExample.cpp
\item
Makefile
\end{itemize}
\end{itemize}
\item
java/ (the Java API)
\begin{itemize}
\item
src/ (source code of Java API, used to build the
lib/GstoreJavaAPI.jar)
\begin{itemize}
\item
jgsc/GstoreConnector.java (the package which you need to import when you use the Java API)
\item
Makefile (compile and build lib)
\end{itemize}
\item
lib/
\begin{itemize}
\item
.gitignore
\item
GstoreJavaAPI.jar (only exist after compiled, you need to
include this JAR in your class path)
\end{itemize}
\item
example/ (small example program to show the basic idea of using
the Java API)
\begin{itemize}
\item
JavaAPIExample.cpp
\item
Makefile
\end{itemize}
\end{itemize}
\item
php/ (the PHP API)
\begin{itemize}
\item
GstoreConnector.php (source code of PHP API, you need to include this file when you use the PHP API)
\item
PHPAPIExample.php (small example program to show the basic idea of using the PHP API)
\end{itemize}
\item
python/ (the Python API)
\begin{itemize}
\item
src/ (source code of Python API)
\begin{itemize}
\item
GstoreConnector.py (the package which you need to import when you use the Python API)
\end{itemize}
\item
example/ (small example program to show the basic idea of using the Python API)
\begin{itemize}
\item
PythonAPIExample.py
\end{itemize}
\end{itemize}
\end{itemize}
\end{itemize}
\hyperdef{}{c-api}{\subsubsection{C++ API}\label{c-api}}
\hyperdef{}{interface}{\paragraph{Interface}\label{interface}}
To use the C++ API, please place the phrase
\texttt{\#include\ "GstoreConnector.h"} in your cpp code. Functions in
GstoreConnector.h should be called like below:
\begin{verbatim}
// initialize the Gstore server's IP address and port.
GstoreConnector gc("127.0.0.1", 3305);
// build a new database by a RDF file.
// note that the relative path is related to gserver.
gc.build("LUBM10", "example/LUBM_10.n3");
// then you can execute SPARQL query on this database.
std::string sparql = "select ?x where \
{\
?x <rdf:type> <ub:UndergraduateStudent>. \
?y <ub:name> <Course1>. \
?x <ub:takesCourse> ?y. \
?z <ub:teacherOf> ?y. \
?z <ub:name> <FullProfessor1>. \
?z <ub:worksFor> ?w. \
?w <ub:name> <Department0>. \
}";
std::string answer = gc.query(sparql);
// unload this database.
gc.unload("LUBM10");
// also, you can load some exist database directly and then query.
gc.load("LUBM10");
// query a SPARQL in current database
answer = gc.query(sparql);
\end{verbatim}
The original declarations of these functions are as below:
\begin{verbatim}
GstoreConnector();
GstoreConnector(string _ip, unsigned short _port);
GstoreConnector(unsigned short _port);
bool load(string _db_name);
bool unload(string _db_name);
bool build(string _db_name, string _rdf_file_path);
string query(string _sparql);
\end{verbatim}
Notice:
\begin{enumerate}
\item
When using GstoreConnector(), the default value for ip and port is
127.0.0.1 and 3305, respectively.
\item
When using build(), the rdf\_file\_path (the second parameter) should
be relative to the location where gserver runs.
\item
Please remember to unload the database you have loaded, otherwise
things may go wrong. (The errors may not be reported!)
\end{enumerate}
\hyperdef{}{compile}{\paragraph{Compile}\label{compile}}
You are advised to see gStore/api/socket/cpp/example/Makefile for instructions on how to compile your code with the C++ API. Generally, what you must do is compile your own code to an object file using the headers of the C++ API, and then link the object file with the static lib of the C++ API.
Let us assume that your source code is placed in test.cpp, whose position is \$\{GSTORE\}/gStore/ (if using devGstore as the name instead of gStore, then the path is \$\{GSTORE\}/devGstore/). Go to that directory first:
\begin{quote}
Use \texttt{g++\ -c\ -I\$\{GSTORE\}/gStore/api/socket/cpp/src/\ test.cpp\ -o\ test.o} to compile your test.cpp into test.o, relative API header is placed in api/socket/cpp/src/.
Use \texttt{g++\ -o\ test\ test.o\ -L\$\{GSTORE\}/gStore/api/socket/cpp/lib/\ -lgstoreconnector} to link your test.o with the libgstoreconnector.a(a static lib) in api/socket/cpp/lib/.
\end{quote}
Then you can type \texttt{./test} to execute your own program, which uses our C++ API. You are also advised to place the relevant compile commands in a Makefile, along with any other commands you like.
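For reference, here is a minimal sketch of what such a test.cpp might contain, using only the calls documented in the Interface section above (the database name and query are placeholders taken from the earlier examples):
\begin{verbatim}
// test.cpp -- minimal sketch using the socket C++ API documented above
#include <iostream>
#include <string>
#include "GstoreConnector.h"

int main()
{
    // connect to a gserver running at the default address and port
    GstoreConnector gc("127.0.0.1", 3305);
    // load an existing database (build it first with gbuild or build())
    gc.load("LUBM10");
    // run a SPARQL query and print the answer
    std::string sparql = "select ?x where { ?x <rdf:type> <ub:UndergraduateStudent>. }";
    std::string answer = gc.query(sparql);
    std::cout << answer << std::endl;
    // remember to unload the database
    gc.unload("LUBM10");
    return 0;
}
\end{verbatim}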
\hyperdef{}{java-api}{\subsubsection{Java API}\label{java-api}}
\hyperdef{}{interface-1}{\paragraph{Interface}\label{interface-1}}
To use the Java API, please place the phrase
\texttt{import\ jgsc.GstoreConnector;} in your java code. Functions in
GstoreConnector.java should be called like below:
\begin{verbatim}
// initialize the Gstore server's IP address and port.
GstoreConnector gc = new GstoreConnector("127.0.0.1", 3305);
// build a new database by a RDF file.
// note that the relative path is related to gserver.
gc.build("LUBM10", "example/LUBM_10.n3");
// then you can execute SPARQL query on this database.
String sparql = "select ?x where " + "{" +
"?x <rdf:type> <ub:UndergraduateStudent>. " +
"?y <ub:name> <Course1>. " +
"?x <ub:takesCourse> ?y. " +
"?z <ub:teacherOf> ?y. " +
"?z <ub:name> <FullProfessor1>. " +
"?z <ub:worksFor> ?w. " +
"?w <ub:name> <Department0>. " +
"}";
String answer = gc.query(sparql);
//unload this database.
gc.unload("LUBM10");
//also, you can load some exist database directly and then query.
gc.load("LUBM10");// query a SPARQL in current database
answer = gc.query(sparql);
\end{verbatim}
The original declarations of these functions are as below:
\begin{verbatim}
GstoreConnector();
GstoreConnector(string _ip, unsigned short _port);
GstoreConnector(unsigned short _port);
bool load(string _db_name);
bool unload(string _db_name);
bool build(string _db_name, string _rdf_file_path);
string query(string _sparql);
\end{verbatim}
Notice:
\begin{enumerate}
\item
When using GstoreConnector(), the default value for ip and port is
127.0.0.1 and 3305, respectively.
\item
When using build(), the rdf\_file\_path (the second parameter) should
be relative to the location where gserver runs.
\item
Please remember to unload the database you have loaded, otherwise
things may go wrong. (The errors may not be reported!)
\end{enumerate}
\hyperdef{}{compile-1}{\paragraph{Compile}\label{compile-1}}
You are advised to see gStore/api/socket/java/example/Makefile for instructions on how to compile your code with the Java API. Generally, what you must do is compile your own code against the jar file of the Java API.
Let us assume that your source code is placed in test.java, whose position is \$\{GSTORE\}/gStore/ (if using devGstore as the name instead of gStore, then the path is \$\{GSTORE\}/devGstore/). Go to that directory first:
\begin{quote}
Use \texttt{javac\ -cp\ \$\{GSTORE\}/gStore/api/socket/java/lib/GstoreJavaAPI.jar\ test.java} to compile your test.java into test.class with the GstoreJavaAPI.jar(a jar package used in Java) in api/socket/java/lib/.
\end{quote}
Then you can type \texttt{java\ -cp\ \$\{GSTORE\}/gStore/api/socket/java/lib/GstoreJavaAPI.jar:.\ test} to execute your own program (notice that the ``:.'' in the command cannot be omitted), which uses our Java API. You are also advised to place the relevant compile commands in a Makefile, along with any other commands you like.
\hyperdef{}{php-api}{\subsubsection{PHP API}\label{php-api}}
\hyperdef{}{interface-2}{\paragraph{Interface}\label{interface-2}}
To use the PHP API, please place the phrase
\texttt{include('GstoreConnector.php');} in your PHP code. Functions in
GstoreConnector.php should be called like below:
\begin{verbatim}
// initialize the Gstore server's IP address and port.
$gc = new Connector("127.0.0.1", 3305);
// build a new database by a RDF file.
// note that the relative path is related to gserver.
$gc->build("LUBM10", "example/LUBM_10.n3");
// then you can execute SPARQL query on this database.
$sparql = "select ?x where " . "{" .
    "?x <rdf:type> <ub:UndergraduateStudent>. " .
    "?y <ub:name> <Course1>. " .
    "?x <ub:takesCourse> ?y. " .
    "?z <ub:teacherOf> ?y. " .
    "?z <ub:name> <FullProfessor1>. " .
    "?z <ub:worksFor> ?w. " .
    "?w <ub:name> <Department0>. " .
    "}";
$answer = $gc->query($sparql);
// unload this database.
$gc->unload("LUBM10");
// also, you can load some existing database directly and then query.
$gc->load("LUBM10");
// query a SPARQL in the current database
$answer = $gc->query($sparql);
\end{verbatim}
The original declarations of these functions are as below:
\begin{verbatim}
class Connector {
public function __construct($host, $port);
public function send($data);
public function recv();
public function build($db_name, $rdf_file_path);
public function load($db_name);
public function unload($db_name);
public function query($sparql);
public function __destruct();
}
\end{verbatim}
Notice:
\begin{enumerate}
\item
When using Connector(), the default value for ip and port is
127.0.0.1 and 3305, respectively.
\item
When using build(), the rdf\_file\_path (the second parameter) should
be relative to the location where gserver runs.
\item
Please remember to unload the database you have loaded, otherwise
things may go wrong. (The errors may not be reported!)
\end{enumerate}
\hyperdef{}{run-2}{\paragraph{Run}\label{run-2}}
You can see gStore/api/socket/php/PHPAPIExample for instructions on how to use the PHP API. PHP scripts do not need compiling; you can run the PHP file directly or use it in your web project.
\hyperdef{}{python-api}{\subsubsection{Python API}\label{python-api}}
\hyperdef{}{interface-3}{\paragraph{Interface}\label{interface-3}}
To use the Python API, please place the phrase \texttt{from GstoreConnector import GstoreConnector} in your python code. Functions in GstoreConnector.py should be called like below:
\begin{verbatim}
# initialize the Gstore server's IP address and port.
gc = GstoreConnector('127.0.0.1', 3305)
# build a new database by a RDF file.
# note that the relative path is related to gserver.
gc.build('LUBM10', 'data/LUBM_10.n3')
# then you can execute SPARQL query on this database.
sparql = ("select ?x where "
          "{"
          "?x <rdf:type> <ub:UndergraduateStudent>. "
          "?y <ub:name> <Course1>. "
          "?x <ub:takesCourse> ?y. "
          "?z <ub:teacherOf> ?y. "
          "?z <ub:name> <FullProfessor1>. "
          "?z <ub:worksFor> ?w. "
          "?w <ub:name> <Department0>. "
          "}")
answer = gc.query(sparql)
# unload this database.
gc.unload('LUBM10')
# also, you can load some existing database directly and then query.
gc.load('LUBM10')
# query a SPARQL in the current database
answer = gc.query(sparql)
\end{verbatim}
The original declarations of these functions are as below:
\begin{verbatim}
class GstoreConnector {
def _connect(self)
def _disconnect(self)
def _send(self, msg):
def _recv(self)
def _pack(self, msg):
def _communicate(f):
def __init__(self, ip='127.0.0.1', port=3305):
@_communicate
def test(self)
@_communicate
def load(self, db_name)
@_communicate
def unload(self, db_name)
@_communicate
def build(self, db_name, rdf_file_path)
@_communicate
def drop(self, db_name)
@_communicate
def stop(self)
@_communicate
def query(self, sparql)
@_communicate
def show(self, _type=False)
}
\end{verbatim}
Notice:
\begin{enumerate}
\item
When using GstoreConnector(), the default value for ip and port is
127.0.0.1 and 3305, respectively.
\item
When using build(), the rdf\_file\_path (the second parameter) should
be relative to the location where gserver runs.
\item
Please remember to unload the database you have loaded, otherwise
things may go wrong. (The errors may not be reported!)
\end{enumerate}
\hyperdef{}{run-3}{\paragraph{Run}\label{run-3}}
You are advised to see gStore/api/socket/python/example/PythonAPIExample for examples of how to use the Python API. Python files do not need compiling, and you can run them directly.
\clearpage
\hyperdef{}{chapter06}{\subsection{Chapter 06: HTTP API Explanation}\label{chapter06}}
\textbf{This chapter provides the API for ghttp. Compared with the socket API, the HTTP API is more stable and more standard, and can maintain a connection. The socket API cannot guarantee correct transmission, so its network transmission is faster.}
\hyperdef{}{easy-http-examples}{\subsubsection{Easy Examples}\label{easy-http-examples}}
We provide Java and C++ APIs for ghttp now. Please see \texttt{api/http/cpp} and \texttt{api/http/java}. To use these examples, please make sure that the executables have already been generated.
Next, \textbf{start up the ghttp service by using the \texttt{./ghttp} command.} It is also fine to connect to a running, usable ghttp server that you know of. (you don't need to change anything if using the examples, just keep the defaults)
Then, for the Java and C++ code, you need to compile the example code in the directory gStore/api/http/. We provide a utility to do this; you just need to type \texttt{make\ APIexample} in the root directory of gStore. Or you can compile the code by yourself; in this case please go to gStore/api/http/cpp/ and gStore/api/http/java/, respectively.
Finally, go to the example directory and run the corresponding executables. All these executables will connect to a specified ghttp server and do some load or query operations. Be sure that you see the query results in the terminal where you run the examples; otherwise please go to \hyperref[chapter11]{Frequently Asked Questions} for help or report it to us (the report approach is described in \hyperref[chapter00]{README}).
You are advised to read the example code carefully, as well as the corresponding Makefile. This will help you understand the API, especially if you want to write your own programs based on the API interface.
\hyperdef{}{http-api-structure}{\subsubsection{API Structure}\label{http-api-structure}}
The HTTP API of gStore is placed in api/http/ directory in the root directory of gStore, whose contents are listed below:
\begin{itemize}
\item
gStore/api/http/
\begin{itemize}
\item
cpp/ (C++ API)
\begin{itemize}
\item
client.cpp (source code of C++ API)
\item
client.h
\item
example.cpp (example program to show the basic idea of using the C++ API)
\item
Makefile (compile)
\end{itemize}
\item
java/ (Java API)
\begin{itemize}
\item
src/ (source code of Java API, used to build the
lib/GstoreJavaAPI.jar)
\begin{itemize}
\item
jgsc/GstoreConnector.java (the package which you need to import when you use the Java API)
\item
Makefile (compile and build lib)
\end{itemize}
\item
lib/
\begin{itemize}
\item
.gitignore
\item
GstoreJavaAPI.jar (only exists after compiling; you need to
include this JAR in your class path)
\end{itemize}
\item
example/ (small example program to show the basic idea of using
the Java API)
\begin{itemize}
\item
JavaAPIExample.cpp
\item
Makefile
\end{itemize}
\end{itemize}
\end{itemize}
\end{itemize}
\hyperdef{}{http-c-api}{\subsubsection{C++ API}\label{http-c-api}}
\hyperdef{}{http-interface}{\paragraph{Interface}\label{http-interface}}
To use the C++ API, please place the phrase \texttt{\#include\ "Client.h"} in your cpp code. Functions in Client.h should be called like below:
\begin{verbatim}
CHttpClient hc;
string res;
int ret;
// build a new database from an RDF file.
ret = hc.Get("127.0.0.1:9000/build/lumb/data/LUBM_10.n3", res);
cout<<res<<endl;
// load database
ret = hc.Get("127.0.0.1:9000/load/lumb", res);
cout<<res<<endl;
// then you can execute SPARQL query on this database.
ret = hc.Get("127.0.0.1:9000/query/data/ex0.sql", res);
cout<<res<<endl;
// output information of current database
ret = hc.Get("127.0.0.1:9000/monitor", res);
cout<<res<<endl;
// unload this database
ret = hc.Get("127.0.0.1:9000/unload", res);
cout<<res<<endl;
\end{verbatim}
The original declarations of these functions are as below:
\begin{verbatim}
CHttpClient();
int Post(const std::string & strUrl, const std::string & strPost, std::string & strResponse);
int Get(const std::string & strUrl, std::string & strResponse);
int Posts(const std::string & strUrl, const std::string & strPost, std::string & strResponse, const char * pCaPath = NULL);
int Gets(const std::string & strUrl, std::string & strResponse, const char * pCaPath = NULL);
\end{verbatim}
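For reference, a minimal self-contained program along the lines of the example above might look like the sketch below. The header name, server address and URLs are taken from the example and the API structure listing (the directory listing uses client.h while the interface text writes Client.h; use whichever matches your checkout), and the exact compile command should be checked against gStore/api/http/cpp/example.cpp and its Makefile:
\begin{verbatim}
#include <iostream>
#include <string>
#include "client.h"   // the C++ HTTP API client (see api/http/cpp)

int main()
{
    CHttpClient hc;
    std::string res;

    // load an existing database on a running ghttp server
    hc.Get("127.0.0.1:9000/load/lumb", res);
    std::cout << res << std::endl;

    // run a SPARQL query stored in a file on the server side
    hc.Get("127.0.0.1:9000/query/data/ex0.sql", res);
    std::cout << res << std::endl;

    // unload the database when done
    hc.Get("127.0.0.1:9000/unload", res);
    std::cout << res << std::endl;

    return 0;
}
\end{verbatim}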
\hyperdef{}{http-java-api}{\subsubsection{Java API}\label{http-java-api}}
\hyperdef{}{http-interface-1}{\paragraph{Interface}\label{http-interface-1}}
To use the Java API, please place the phrase
\texttt{import\ jgsc.GstoreConnector;} in your java code. Functions in
GstoreConnector.java should be called like below:
\begin{verbatim}
// initialize the Gstore server's IP address and port.
GstoreConnector gc = new GstoreConnector("127.0.0.1", 9000);
// build a new database from an RDF file.
// note that the relative path is relative to the directory of the server.
gc.build("LUBM10", "example/LUBM_10.n3");
gc.load("LUBM10");
// then you can execute SPARQL query on this database.
String sparql = "select ?x where " + "{" +
"?x <rdf:type> <ub:UndergraduateStudent>. " +
"?y <ub:name> <Course1>. " +
"?x <ub:takesCourse> ?y. " +
"?z <ub:teacherOf> ?y. " +
"?z <ub:name> <FullProfessor1>. " +
"?z <ub:worksFor> ?w. " +
"?w <ub:name> <Department0>. " +
"}";
String answer = gc.query(sparql);
//unload this database.
gc.unload("LUBM10");
//also, you can load an existing database directly and then query it.
gc.load("LUBM10"); // query a SPARQL in the current database
answer = gc.query(sparql);
gc.unload("LUBM10");
\end{verbatim}
The original declarations of these functions are as below:
\begin{verbatim}
GstoreConnector();
GstoreConnector(int _port);
GstoreConnector(String _ip, int _port);
boolean load(String _db_name);
boolean unload(String _db_name);
boolean build(String _db_name, String _rdf_file_path);
boolean drop(String _db_name);
String query(String _sparql);
String show();
String show(boolean _type);
\end{verbatim}
\clearpage
\hyperdef{}{chapter07}{\subsection{Chapter 07: Use gStore in Web}\label{chapter07}}
\textbf{This chapter provides a specific example of how to use our API in a web project.}
\hyperdef{}{example}{\subsubsection{Example}\label{example}}
Now you have the basic idea of how to use our APIs to connect to gStore. Yet you might still be a little confused. Here we provide a simple demo to show you what to do explicitly.
Let's say you need to use gStore in a web project. PHP is a popular general-purpose scripting language that is especially suited to web development, so using our PHP API can meet your requirements. Here is what we implemented: http://59.108.48.18/Gstore/form.php.
First, get your web server ready so it can run PHP files. We won't give detailed instructions on this step here; you can easily google it for your web server (for example, Apache or Nginx).
Next, go to your web document root (usually /var/www/html or apache/htdocs; you can check it in the config file), and create a folder named ``Gstore''. Then copy the GstoreConnector.php file into it. Create a ``PHPAPI.php'' file and edit it like below:
\begin{verbatim}
<?php
include( 'GstoreConnector.php');
$host = '127.0.0.1';
$port = 3305;
$dbname = $_POST["databasename"];
$sparql = $_POST["sparql"];
$format = $_POST["format"];
$load = new Connector($host, $port);
$load->load($dbname);
$query = new Connector($host, $port);
$result = $query->query($sparql);
switch ($format) {
case 1:
$array = explode("<", $result);
$html = '<html><table class="sparql" border="1"><tr><th>' .
$array[0] . "</th></tr>";
for ($i = 1; $i < count($array); $i++) {
$href = str_replace(">", "", $array[$i]);
$html.= '<tr><td><a href="' . $href . '">' .
$href . '</a></td></tr>';
}
$html.= '</table></html>';
echo $html;
exit;
case 2:
$filename = 'result.txt';
header("Content-Type: application/octet-stream");
header('Content-Disposition: attachment;
filename="' . $filename . '"');
echo $result;
exit;
case 3:
$filename = 'result.csv';
header("Content-Type: application/octet-stream");
header('Content-Disposition: attachment;
filename="' . $filename . '"');
$array = explode("<", $result);
echo $array[0];
for ($i = 1; $i < count($array); $i++) {
$href = str_replace(">", "", $array[$i]);
echo $href;
}
exit;
}
?>
\end{verbatim}
This PHP file gets three parameters from the web page: the database name, the SPARQL query and the output format. Then it uses our PHP API to connect to gStore and run the query. Finally, the ``switch'' part produces the output.
After that, we need a web page to collect this information (database name, SPARQL and output format). We create an HTML file and use a form to do it, just like below:
\begin{verbatim}
<form id="form_1145884" class="appnitro" method="post" action="PHPAPI.php">
<div class="form_description">
<h2>Gstore SPARQL Query Editor</h2>
<p></p>
</div>
<ul>
<li id="li_1" >
<label class="description" for="element_1">
Database Name
</label>
<div>
<input id="element_1" name="databasename" class="element text medium"
type="text" maxlength="255" value="dbpedia_2014_reduce">
</input>
</div>
</li>
<li id="li_3">
<label class="description" for="element_3">Query Text </label>
<div>
<textarea id="element_3" name="sparql" class="element textarea large">
SELECT DISTINCT ?uri
WHERE {
?uri <type> <Astronaut> .
{ ?uri <nationality> <Russia> . }
UNION
{ ?uri <nationality> <Soviet_Union> . }
}
</textarea>
</div>
</li>
<li id="li_5" >
<label class="description" for="element_5">
Results Format
</label>
<div>
<select class="element select medium" id="element_5" name="format">
    <option value="1" selected="true">HTML</option>
    <option value="2">Text</option>
    <option value="3">CSV</option>
</select>
</div>
</li>
<li class="buttons">
<input type="hidden" name="form_id" value="1145884" />
<input id="saveForm" class="button_text" type="submit"
name="submit" value="Run Query" />
</li>
</ul>
</form>
\end{verbatim}
As you can see in the code, we use an \texttt{<input>} element to get the database name, a \texttt{<textarea>} for the SPARQL query, and a \texttt{<select>} for the output format. The \texttt{<form>} tag has an attribute ``action'' which specifies which file to execute. So, when you click the ``submit'' button, it will call the PHPAPI.php file and post the values from the form.
Finally, don't forget to start gserver on your server.
\clearpage
\hyperdef{}{chapter08}{\subsection{Chapter 08: Project Structure}\label{chapter08}}
\textbf{This chapter introduce the whole structure of the gStore system project.}
\hyperdef{}{the-core-source-codes}{\paragraph{The core source codes are listed below:}\label{the-core-source-codes}}
\begin{itemize}
\item
Database/ (calling other core parts to deal with requests from
interface part)
\begin{itemize}
\item
Database.cpp (achieve functions)
\item
Database.h (class, members and functions definitions)
\item
Join.cpp (join the node candidates to get results)
\item
Join.h (class, members, and functions definitions)
\end{itemize}
\item
KVstore/ (a key-value store to swap between memory and disk)
\begin{itemize}
\item
KVstore.cpp (interact with upper layers)
\item
KVstore.h
\item
heap/ (a heap of nodes whose contents are in memory)
\begin{itemize}
\item
Heap.cpp
\item
Heap.h
\end{itemize}
\item
node/ (all kinds of nodes in B+-tree)
\begin{itemize}
\item
Node.cpp (the base class of IntlNode and LeafNode)
\item
Node.h
\item
IntlNode.cpp (internal nodes in B+-tree)
\item
IntlNode.h
\item
LeafNode.cpp (leaf nodes in B+-tree)
\item
LeafNode.h
\end{itemize}
\item
storage/ (swap contents between memory and disk)
\begin{itemize}
\item
file.h
\item
Storage.cpp
\item
Storage.h
\end{itemize}
\item
tree/ (implement all tree operations and interfaces)
\begin{itemize}
\item
Tree.cpp
\item
Tree.h
\end{itemize}
\end{itemize}
\item
Query/ (needed to answer SPARQL query)
\begin{itemize}
\item
BasicQuery.cpp (basic type of queries without aggregate operations)
\item
BasicQuery.h
\item
IDList.cpp (candidate list of a node/variable in query)
\item
IDList.h
\item
ResultSet.cpp (keep the result set corresponding to a query)
\item
ResultSet.h
\item
SPARQLquery.cpp (deal with an entire SPARQL query)
\item
SPARQLquery.h
\item
Varset.cpp
\item
Varset.h
\item
QueryTree.cpp
\item
QueryTree.h
\item
GeneralEvaluation.cpp
\item
GeneralEvaluation.h
\item
RegexExpression.h
\end{itemize}
\item
Signature/ (assign signatures for nodes and edges, but not for
literals)
\begin{itemize}
\item
SigEntry.cpp
\item
SigEntry.h
\item
Signature.cpp
\item
Signature.h
\end{itemize}
\item
VSTree/ (a tree index used to prune more efficiently)
\begin{itemize}
\item
EntryBuffer.cpp
\item
EntryBuffer.h
\item
LRUCache.cpp
\item
LRUCache.h
\item
VNode.cpp
\item
VNode.h
\item
VSTree.cpp
\item
VSTree.h
\end{itemize}
\end{itemize}
\hyperdef{}{the-parser-part}{\paragraph{The parser part is listed below:}\label{the-parser-part}}
\begin{itemize}
\item
Parser/
\begin{itemize}
\item
DBParser.cpp
\item
DBParser.h
\item
RDFParser.cpp
\item
RDFParser.h
\item
SparqlParser.c (auto-generated, slightly modified manually,
compressed)
\item
SparqlParser.h (auto-generated, slightly modified manually,
compressed)
\item
SparqlLexer.c (auto-generated, slightly modified manually, compressed)
\item
SparqlLexer.h (auto-generated, slightly modified manually, compressed)
\item
TurtleParser.cpp
\item
TurtleParser.h
\item
Type.h
\item
QueryParser.cpp
\item
QueryParser.h
\end{itemize}
\end{itemize}
\hyperdef{}{the-utilities}{\paragraph{The utilities are listed below:}\label{the-utilities}}
\begin{itemize}
\item
Util/
\begin{itemize}
\item
Util.cpp (headers, macros, typedefs, functions\ldots{})
\item
Util.h
\item
Bstr.cpp (represent strings of arbitrary length)
\item
Bstr.h (class, members and functions definitions)
\item
Stream.cpp (store and use temp results, which may be very large)
\item
Stream.h
\item
Triple.cpp (deal with triples; a triple can be divided into
subject (entity), predicate (entity), object (entity or literal))
\item
Triple.h
\item
BloomFilter.cpp
\item
BloomFilter.h
\end{itemize}
\end{itemize}
\hyperdef{}{the-interface-part}{\paragraph{The interface part is listed below:}\label{the-interface-part}}
\begin{itemize}
\item
Server/ (client and server mode to use gStore)
\begin{itemize}
\item
Client.cpp
\item
Client.h
\item
Operation.cpp
\item
Operation.h
\item
Server.cpp
\item
Server.h
\item
Socket.cpp
\item
Socket.h
\end{itemize}
\item
Main/ (a series of applications/main programs to operate on gStore)
\begin{itemize}
\item
gbuild.cpp (import an RDF dataset)
\item
gquery.cpp (query a database)
\item
gserver.cpp (start up the gStore server)
\item
gclient.cpp (connect to a gStore server and interact)
\end{itemize}
\end{itemize}
\hyperdef{}{more-details}{\paragraph{More details}\label{more-details}}
To acquire a deep understanding of the gStore code, please go to
\href{run:../pdf/code_overview.pdf}{Code
Detail}. See
\href{run:../pdf/Gstore2.0_useCaseDoc.pdf}{use
case} to understand the design of use cases, and see
\href{run:../pdf/OOA_class.pdf}{OOA}
and
\href{run:../pdf/OOD_class.pdf}{OOD}
for OOA design and OOD design, respectively.
If you want to see the operation sequence diagrams of a running gStore, please view the
list below:
\begin{itemize}
\item
\href{run:../jpg/A01-connectServer.jpg}{connect
to server}
\item
\href{run:../jpg/A02-disconnectServer.jpg}{disconnect
server}
\item
\href{run:../jpg/A03-loadDatabase.jpg}{load
database}
\item
\href{run:../jpg/A04-unloadDatabase.jpg}{unload
database}
\item
\href{run:../jpg/A05-buildDatabase.jpg}{create
database}
\item
\href{run:../jpg/A06-deleteDatabase.jpg}{delete
database}
\item
\href{run:../jpg/A07-connectDatabase.jpg}{connect
to database}
\item
\href{run:../jpg/A08-disconnectDatabase.jpg}{disconnect
database}
\item
\href{run:../jpg/A09-showDatabase.jpg}{show
databases}
\item
\href{run:../jpg/A10-querySPARQL.jpg}{SPARQL
query}
\item
\href{run:../jpg/A11-loadRDF.jpg}{import
RDF dataset}
\item
\href{run:../jpg/A12-insertRDF.jpg}{insert
a triple}
\item
\href{run:../jpg/A13-deleteRDF.jpg}{delete
a triple}
\item
\href{run:../jpg/B01-createAccount.jpg}{create
account}
\item
\href{run:../jpg/B02-deleteAccount.jpg}{delete
account}
\item
\href{run:../jpg/B03-changeAccount.jpg}{modify
account authority}
\item
\href{run:../jpg/B04-removeDatabase.jpg}{compulsively
unload database}
\item
\href{run:../jpg/B05-showAccount.jpg}{see
account authority}
\end{itemize}
It is not strange to see some differences between the original design and
the source code, and some designed functions may not have been implemented
so far.
\hyperdef{}{others}{\paragraph{Others}\label{others}}
The api/ folder in gStore is used to store API programs, libraries and
examples; please go to \hyperref[chapter05]{API} for details. And test/
is used to store a series of test programs and utilities, such as gtest,
full\_test and so on. Chapters related to test/ are
\hyperref[chapter04]{How To Use} and \hyperref[chapter15]{Test Result}.
This project needs an ANTLR library to parse SPARQL queries; its code is
placed in tools/ (also archived there) and the compiled libantlr.a is
placed in the lib/ directory.
We place some datasets and queries in the data/ directory as examples, and
you can try them to see how gStore works. Related instructions are in
\hyperref[chapter04]{How To Use}. The docs/ directory contains all kinds
of documents of gStore, including a series of markdown files and two
folders, pdf/ and jpg/. Files whose type is pdf are placed in pdf/
folder, while files with jpg type are placed in jpg/ folder.
You are advised to start from the \hyperref[chapter00]{README} in the
gStore root directory, and visit other chapters only when needed. At
last, you will see all documents from link to link if you are really
interested in gStore.
\clearpage
\hyperdef{}{chapter09}{\subsection{Chapter 09: Publications}\label{chapter09}}
\hyperdef{}{publications-related-with-gstore-are-listed-here}{\paragraph{Publications related to gStore are listed here:}\label{publications-related-with-gstore-are-listed-here}}
\begin{itemize}
\item
Lei Zou, M. Tamer {\"O}zsu, Lei Chen, Xuchuan Shen, Ruizhe Huang, Dongyan
Zhao,
\href{http://www.icst.pku.edu.cn/intro/leizou/projects/papers/gStoreVLDBJ.pdf}{gStore:
A Graph-based SPARQL Query Engine}, VLDB Journal , 23(4): 565-590,
2014.
\item
Lei Zou, Jinghui Mo, Lei Chen, M. Tamer {\"O}zsu, Dongyan Zhao,
\href{http://www.icst.pku.edu.cn/intro/leizou/projects/papers/p482-zou.pdf}{gStore:
Answering SPARQL Queries Via Subgraph Matching}, Proc. VLDB 4(8):
482-493, 2011.
\item
Xuchuan Shen, Lei Zou, M. Tamer {\"O}zsu, Lei Chen, Youhuan Li, Shuo Han,
Dongyan Zhao,
\href{http://www.icst.pku.edu.cn/intro/leizou/projects/papers/demo.pdf}{A
Graph-based RDF Triple Store}, ICDE 2015: 1508-1511.
\item
Peng Peng, Lei Zou, M. Tamer {\"O}zsu, Lei Chen, Dongyan Zhao: \href{http://arxiv.org/pdf/1411.6763v4.pdf}{Processing
SPARQL queries over distributed RDF graphs}. VLDB Journal 25(2): 243-268 (2016).
\item
Dong Wang, Lei Zou, Yansong Feng, Xuchuan Shen, Jilei Tian, and
Dongyan Zhao,
\href{http://www.icst.pku.edu.cn/intro/leizou/projects/papers/Store.pdf}{S-store:
An Engine for Large RDF Graph Integrating Spatial Information}, in
Proc. 18th International Conference on Database Systems for Advanced
Applications (DASFAA), pages 31-47, 2013.
\item
Dong Wang, Lei Zou and Dongyan Zhao,
\href{http://www.icst.pku.edu.cn/intro/leizou/projects/papers/edbtdemo2014.pdf}{gst-Store:
An Engine for Large RDF Graph Integrating Spatiotemporal Information},
in Proc. 17th International Conference on Extending Database
Technology (EDBT), pages 652-655, 2014 (demo).
\item
Lei Zou, Yueguo Chen,
\href{http://www.icst.pku.edu.cn/intro/leizou/documentation/pdf/2012CCCF.pdf}{A
Survey of Large-Scale RDF Data Management}, Communications of CCCF
Vol.8(11): 32-43, 2012 (Invited Paper, in Chinese).
\end{itemize}
\clearpage
\hyperdef{}{chapter10}{\subsection{Chapter 10: Limitations}\label{chapter10}}
\begin{enumerate}
\item
  Queries related to unbounded predicates are not supported.
\item
  This version only supports SPARQL SELECT queries.
\item
  Only RDF files in N3 format are supported. More file formats will be
  supported in the next version.
\end{enumerate}
\clearpage
\hyperdef{}{chapter11}{\subsection{Chapter 11: Frequently Asked Questions}\label{chapter11}}
\hyperdef{}{when-i-use-the-newer-gstore-system-to-query-the-original-database-why-error}{\paragraph{When
I use the newer gStore system to query the original database, why do I get an
error?}\label{when-i-use-the-newer-gstore-system-to-query-the-original-database-why-error}}
\quad\\
The database produced by gStore contains several indexes, whose
structures may have been changed in the new gStore version. So, please
rebuild your dataset just in case.
\hyperdef{}{why-error-when-i-try-to-write-programs-based-on-gstore-just-like-the-maingconsolecpp}{\paragraph{Why
do I get errors when I try to write programs based on gStore, just like
Main/gconsole.cpp?}\label{why-error-when-i-try-to-write-programs-based-on-gstore-just-like-the-maingconsolecpp}}
\quad\\
You need to add these lines at the beginning of your main program,
otherwise gStore will not run correctly:\\ //NOTICE: this is needed to
set several debug files\\ Util util;
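A minimal sketch of such a program might look like the following (the include paths and the rest of the body are assumptions; see Main/gconsole.cpp or Main/gquery.cpp in the source tree for real, complete examples):
\begin{verbatim}
#include "Util/Util.h"
#include "Database/Database.h"

int main(int argc, char* argv[])
{
    //NOTICE: this is needed to set several debug files
    Util util;

    // ... your own logic based on gStore goes here, for example
    // creating a Database object, loading it and running SPARQL queries ...

    return 0;
}
\end{verbatim}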
\hyperdef{}{why-does-gstore-report-garbage-collection-failed-error-when-i-use-teh-java-api}{\paragraph{\texorpdfstring{Why
does gStore report ``garbage collection failed'' error when I use the
Java
API?}{Why does gStore report garbage collection failed error when I use the Java API?}}\label{why-does-gstore-report-garbage-collection-failed-error-when-i-use-teh-java-api}}
\quad\\
You need to adjust the parameters of the JVM; see
\href{http://www.cnblogs.com/edwardlauxh/archive/2010/04/25/1918603.html}{url1}
and
\href{http://www.cnblogs.com/redcreen/archive/2011/05/04/2037057.html}{url2}
for details.
\hyperdef{}{when-i-compile-the-code-in-archlinux-why-the-error-that-no-ltermcap-is-reported}{\paragraph{\texorpdfstring{When
I compile the code in ArchLinux, why the error that ``no -ltermcap'' is
reported?}{When I compile the code in ArchLinux, why the error that no -ltermcap is reported?}}\label{when-i-compile-the-code-in-archlinux-why-the-error-that-no-ltermcap-is-reported}}
\quad\\
In ArchLinux, you only need to use \texttt{-lreadline} to link the
readline library. Please remove the \texttt{-ltermcap} in the makefile
which is located in the root of the gStore project if you would like to
use ArchLinux.
\hyperdef{}{why-does-gstore-report-errors-that-the-format-of-some-rdf-datasets-are-not-supported}{\paragraph{Why
does gStore report errors that the format of some RDF datasets are not
supported?}\label{why-does-gstore-report-errors-that-the-format-of-some-rdf-datasets-are-not-supported}}
\quad\\
gStore does not support all RDF formats currently; please see
\href{run:../../test/format_question.txt}{formats}
for details. However, it is quite easy for you to convert your RDF data to the N3 file format that is used in gStore.
\hyperdef{}{when-i-read-on-github-why-are-some-documents-unable-to-be-opened}{\paragraph{When
I read on GitHub, why are some documents unable to be
opened?}\label{when-i-read-on-github-why-are-some-documents-unable-to-be-opened}}
\quad\\
Code, markdown and other text files, as well as pictures, can be read directly
on GitHub. However, if you are using a lightweight browser like
Midori, please download files of PDF type and read them on your
computer or other devices.
\hyperdef{}{why-sometimes-strange-characters-appear-when-i-use-gstore}{\paragraph{Why
sometimes strange characters appear when I use
gStore?}\label{why-sometimes-strange-characters-appear-when-i-use-gstore}}
\quad\\
Some documents' names are in Chinese; you don't need to
worry about this.
\hyperdef{}{in-centos7-if-the-watdivdba-generated-database-after-gbuild-is-copied-or-compresseduncompressed-the-size-of-watdivdb-will-be-differentgenerally-increasing-if-using-du-h-command-to-check}{\paragraph{\texorpdfstring{In
centos7, if the watdiv.db(a generated database after gbuild) is copied or
compressed/uncompressed, the size of watdiv.db will be
different(generally increasing) if using \texttt{du\ -h} command to
check?}{In centos7, if the watdiv.db(a generated database after gbuild) is copied or compressed/uncompressed, the size of watdiv.db will be different(generally increasing) if using du -h command to check?}}\label{in-centos7-if-the-watdivdba-generated-database-after-gbuild-is-copied-or-compresseduncompressed-the-size-of-watdivdb-will-be-differentgenerally-increasing-if-using-du-h-command-to-check}}
\quad\\
It is the change of the B+-trees' size in watdiv/kv\_store/ that causes the
change of the whole database's size. The reason is that in
storage/Storage.cpp, many operations use fseek to move the file pointer. As
everyone knows, files are organized in blocks, and if we request a new
block, the file pointer may be moved beyond the end of the file (file
operations are all done with C in gStore, and no errors are reported);
contents will then be written at the new position!
In \textbf{Advanced Programming In The Unix Environment}, the term ``file hole''
is used to describe this phenomenon. A ``file hole'' is filled with
0, and it is also part of the file. You can use \texttt{ls\ -l} to
see the size of a file (the size of holes is counted), while the \texttt{du\ -h}
command shows the size of the blocks that a directory/file occupies in the system.
Generally, the output of \texttt{du\ -h} is larger than that of
\texttt{ls\ -l}, but if ``file holes'' exist, the opposite is the case
because the size of the holes is neglected.
The actual size of a file containing holes is fixed, while in some
operating systems holes will be transformed into contents (also 0) when
copied. The \texttt{mv} operation will not affect the size if it is not across
different devices (it only needs to adjust the file tree index). However,
\texttt{cp} and all kinds of compression methods need to scan the file and
transfer data (there are two ways to implement the \texttt{cp} command,
neglecting holes or not, while the output size reported by \texttt{ls\ -l} does not
vary).
It is valid to use ``file holes'' in C, and this is not an error, which
means you can go on using gStore. We provide a small program to demonstrate
``file holes''; you can download it and try it yourself.
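If you cannot find that program, the small sketch below (written here purely as an illustration, not taken from the gStore sources) reproduces the phenomenon; after running it, compare the output of \texttt{ls -l hole.dat} and \texttt{du -h hole.dat}:
\begin{verbatim}
#include <cstdio>

int main()
{
    FILE* fp = fopen("hole.dat", "w");
    if (fp == NULL) return 1;
    fputs("begin", fp);
    // move the file pointer far beyond the current end of the file;
    // the skipped range becomes a "file hole" filled with 0
    fseek(fp, 10 * 1024 * 1024, SEEK_SET);
    fputs("end", fp);
    fclose(fp);
    return 0;
}
\end{verbatim}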
\hyperdef{}{in-gclient-console-a-database-is-built-queried-and-then-i-quit-the-console-next-time-i-enter-the-console-load-the-originally-imported-database-but-no-output-for-any-queriesoriginally-the-output-is-not-empty}{\paragraph{In
gclient console, a database is built, queried, and then I quit the
console. Next time I enter the console, load the originally imported
database, but no output for any queries(originally the output is not
empty)?}\label{in-gclient-console-a-database-is-built-queried-and-then-i-quit-the-console-next-time-i-enter-the-console-load-the-originally-imported-database-but-no-output-for-any-queriesoriginally-the-output-is-not-empty}}
\quad\\
You need to unload the database you are using before quitting the gclient
console, otherwise errors will occur.
\hyperdef{}{if-query-results-contain-null-value-how-can-i-use-the-fulltest-utility-tab-separated-method-will-cause-problem-here-because-null-value-cannot-be-checked}{\paragraph{\texorpdfstring{If
query results contain null value, how can I use the
\href{run:../../test/full_test.sh}{full\_test}
utility? Tab separated method will cause problem here because null value
cannot be
checked!}{If query results contain null value, how can I use the full\_test utility? Tab separated method will cause problem here because null value cannot be checked!}}\label{if-query-results-contain-null-value-how-can-i-use-the-fulltest-utility-tab-separated-method-will-cause-problem-here-because-null-value-cannot-be-checked}}
\quad\\
You may use another programming language (for example, Python) to deal with
the null-value cases. For example, you can change null values in the output
to a special character like `,'; afterwards you can use the
\href{run:../../test/full_test.sh}{full\_test}
utility.
\hyperdef{}{when-i-compile-and-run-the-api-examples-it-reports-the-unable-to-connect-to-server-error}{\paragraph{\texorpdfstring{When
I compile and run the API examples, it reports the ``unable to connect
to server''
error?}{When I compile and run the API examples, it reports the unable to connect to server error?}}\label{when-i-compile-and-run-the-api-examples-it-reports-the-unable-to-connect-to-server-error}}
\quad\\
Please use the \texttt{./gserver} command to start up a gStore server first,
and notice that the server IP and port must match.
\hyperdef{}{when-i-use-the-java-api-to-write-my-own-program-it-reports-not-found-main-class-error}{\paragraph{\texorpdfstring{When
I use the Java API to write my own program, it reports ``not found main
class''
error?}{When I use the Java API to write my own program, it reports not found main class error?}}\label{when-i-use-the-java-api-to-write-my-own-program-it-reports-not-found-main-class-error}}
\quad\\
Please ensure that you include the location of your own program in the Java
class path. The whole command should be something like
\texttt{java\ -cp\ /home/bookug/project/devGstore/api/java/lib/GstoreJavaAPI.jar:.\ JavaAPIExample},
and the ``:.'' in this command cannot be omitted.
%\begin{center}\rule{0.5\linewidth}{\linethickness}\end{center}
\clearpage
\hyperdef{}{chapter12}{\subsection{Chapter 12: Recipe Book}\label{chapter12}}
\textbf{This chapter introduces some useful tricks if you are using
gStore to implement applications.}
\emph{no tips available now}
\clearpage
\part{Others}
\hyperdef{}{chapter13}{\subsection{Chapter 13: Contributors}\label{chapter13}}
Please contact Lei Zou ([email protected]), Li Zeng ([email protected]), Jiaqi Chen ([email protected]) or Peng Peng ([email protected]) if you have suggestions or comments about gStore, or if you need help when using gStore.
\hyperdef{}{faculty}{\paragraph{Faculty}\label{faculty}}
\begin{itemize}
\item
Lei Zou (Peking University) Project Leader
\item
M. Tamer {\"O}zsu (University of Waterloo)
\item
Lei Chen (Hong Kong University of Science and Technology)
\item
Dongyan Zhao (Peking University)
\item
Zhiyuan Deng (Wuhan University)
\end{itemize}
\hyperdef{}{students}{\paragraph{Students}\label{students}}
\quad \\
\textit{Li Zeng and Jiaqi Chen are responsible for the gStore system optimization. Peng Peng is responsible for the distributed version of gStore, which is expected to be released before October.}
\begin{itemize}
\item
Peng Peng (Peking University) (PhD student)
%email: \href{mailto:[email protected]}{[email protected]}
\item
Youhuan Li (Peking University) (PhD student)
%email: \href{mailto:[email protected]}{[email protected]}
\item
Shuo Han (Peking University) (PhD student)
%email: \href{mailto:[email protected]}{[email protected]}
\item
Li Zeng (Peking University) (Master student)
%email: \href{mailto:[email protected]}{[email protected]}
\item
Jiaqi Chen (Peking University) (Master student)
%email: \href{mailto:[email protected]}{[email protected]}
\end{itemize}
\hyperdef{}{alumni}{\paragraph{Alumni}\label{alumni}}
\begin{itemize}
\item
Xuchuan Shen (Peking University) (Master's student, graduated)
%email: \href{mailto:[email protected]}{[email protected]}
\item
Dong Wang (Peking University) (PhD student, graduated)
%email: \href{mailto:[email protected]}{[email protected]}
\item
Ruizhe Huang (Peking University) (Undergraduate intern, graduated)
\item
Jinhui Mo (Peking University) (Master's, graduated)
\end{itemize}
\clearpage
\hyperdef{}{chapter14}{\subsection{Chapter 14: Updated Logs}\label{chapter14}}
\hyperdef{}{jan-10-2017}{\subsubsection{Jan 10,
2017}\label{jan-10-2017}}
The preFilter() function in the Join module is optimized using the pre2num structure, as is the choose\_next\_node() function.
A global string buffer is added to lower the cost of getFinalResult(), and the time for answering queries is reduced greatly.
In addition, we assign buffers of different sizes to the B+ trees (some of them are more important and more frequently used).
WangLibo merged several B+ trees into one, and the number of B+ trees is reduced from 17 to 9.
This strategy not only reduces the space cost, but also reduces the memory cost, meanwhile speeding up the build process and the query process.
What is more, ChenJiaqi has done a lot of work to optimize SPARQL queries.
For example, some unconnected SPARQL query graphs are handled specially.
\hyperdef{}{sep-15-2016}{\subsubsection{Sep 15,
2016}\label{sep-15-2016}}
ZengLi split the KVstore into 3 parts according to the types of key and value, i.e. int2string, string2int and string2string.
In addition, updates are supported now.
You can insert, delete or modify some triples in the gStore database.
In fact, only insert() and remove() are implemented, while modify() is supported by removing the old triple first and then inserting the new one, as sketched below.
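Conceptually, the modification works like the sketch below (this is only an illustration; the class and member names are assumptions and may differ from the actual Database interface):
\begin{verbatim}
// conceptual sketch: modify == remove the old triple, then insert the new one
bool modify(Database& db, const Triple& old_triple, const Triple& new_triple)
{
    if (!db.remove(old_triple))    // assumed remove() interface
        return false;
    return db.insert(new_triple);  // assumed insert() interface
}
\end{verbatim}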
\hyperdef{}{jun-20-2016}{\subsubsection{Jun 20,
2016}\label{jun-20-2016}}
ZengLi enabled gStore to answer queries with predicate variables.
In addition, the structures of many queries have been studied to speed up query processing.
ChenJiaqi rewrote the SPARQL query plan to acquire a more efficient one, which brings many benefits.
\hyperdef{}{apr-01-2016}{\subsubsection{Apr 01,
2016}\label{apr-01-2016}}
The structure of this project has changed a lot now. A new join method
has been implemented and we use it to replace the old one. The test results
show that speed is improved and the memory cost is lower. We also made
some changes to Parser/Sparql*, which are all generated by ANTLR. They
had to be modified because the code is in C, which brings several
multiple-definition problems, and its size is too large.
There was a bug in the original Stream module, which brought some control
characters into the output, such as \^{}C, \^{}V and so on. We have fixed
it now and enabled the Stream to sort the output strings (both internally
and externally). In addition, SPARQL queries which are not BGP (Basic Graph
Pattern) are also supported now, using a naive method.
A powerful interactive console, which is named \texttt{gconsole} now, has
been implemented to bring convenience to users. What is more, we used
valgrind tools to test our system, and dealt with several memory leaks.
The docs and API have also changed, but this is of little importance.
\hyperdef{}{nov-06-2015}{\subsubsection{Nov 06,
2015}\label{nov-06-2015}}
We merged several classes (like Bstr) and adjusted the project structure,
as well as the debug system.
In addition, most warnings are removed, except for warnings in the Parser
module, which are due to the use of ANTLR.
What is more, we changed the RangeValue module to Stream, and added Stream
support for ResultSet. We also improved the gquery console, so now you can
redirect query results to a specified file in the gsql console.
We were unable to add Stream support for IDList due to complex operations,
but this is not necessary. Realpath is used to support soft links in the
gquery console, but it does not work in gStore (though it works outside
gStore).
\hyperdef{}{oct-20-2015}{\subsubsection{Oct 20,
2015}\label{oct-20-2015}}
We added a gtest tool as a utility; you can use it to query several
datasets with their own queries.
In addition, the gquery console is improved. The readline library is used
for input instead of fgets, and the gquery console now supports command
history, command editing and command completion.
What is more, we found and fixed a bug in Database/ (a pointer to the
debugging log was not set to NULL after the fclose operation, so if you
closed one database and opened another, the system would fail entirely
because it thought that the debugging log was still open).
\hyperdef{}{sep-25-2015}{\subsubsection{Sep 25,
2015}\label{sep-25-2015}}
We implemented a new version of the B+ tree and replaced the old one.
After testing on the DBpedia, LUBM, and WatDiv benchmarks, we conclude that
the new B+ tree performs more efficiently than the old version. For the
same triple file, the new version spends less time executing the gload
command.
Besides, the new version can handle long literal objects efficiently,
while triples whose object's length exceeds 4096 bytes result in frequent,
inefficient split operations in the old version of the B+ tree.
\hyperdef{}{feb-2-2015}{\subsubsection{Feb 2, 2015}\label{feb-2-2015}}
We modified the RDF parser and the SPARQL parser.
With the new RDF parser, we also redesigned the encoding strategy, which
reduces the number of RDF file scans.
Now we can parse the standard SPARQL v1.1 grammar correctly, and can
support basic graph pattern (BGP) SPARQL queries written in this standard
grammar.
\hyperdef{}{dec-11-2014}{\subsubsection{Dec 11,
2014}\label{dec-11-2014}}
We added APIs for C/C++ and Java.
\hyperdef{}{nov-20-2014}{\subsubsection{Nov 20,
2014}\label{nov-20-2014}}
We shared our gStore 2.0 code as an open-source project under the BSD license
on GitHub.
\clearpage
\hyperdef{}{chapter15}{\subsection{Chapter 15: Test Result}\label{chapter15}}
\hyperdef{}{preparation}{\subsubsection{Preparation}\label{preparation}}
We have compared the performance of gStore with several other database
systems, such as \href{http://jena.apache.org/}{Jena},
\href{http://www.rdf4j.org/}{Sesame},
\href{http://virtuoso.openlinksw.com/}{Virtuoso} and so on. The aspects
compared are the time to build the database, the size of the built
database, the time to answer a single SPARQL query and whether the
results of each query match. In addition, if the memory cost is very
large (\textgreater{}20G), we also record the memory cost when running
these database systems (not accurate, just for your reference). \\
To ensure all database systems can run correctly on all datasets and
queries, the format of the datasets must be supported by all database
systems and the queries should not contain update operations, aggregate
operations or operations related to uncertain predicates. Notice that
when measuring the time to answer queries, the time for loading the
database index should not be included. To ensure this principle, we load
the database index first for some database systems, and warm up several
times for the others. \\
The datasets used here are WatDiv, LUBM, BSBM and DBpedia. Some of them
are provided by websites, and others are generated by algorithms. The
queries are generated by algorithms or written by us. Table \ref{table:datasets} summarizes the statistics of these datasets.
The experiment environment is a CentOS server, whose memory size is 82G
and disk size is 7T. We use
\href{run:../../test/full_test.sh}{full\_test}
to do this test.
\begin{table}[b]
\small
\centering
%\vspace{-0.1in}
\caption{Datasets}
\begin{tabular}{|c|r|r|r|}
\hline
Dataset & Number of Triples & RDF N3 File Size (B) & Number of Entities\\
\hline
\hline
%WatDiv 10M & 109,164,587 & 1542624409 & 0 \\
%\hline
%WatDiv 100M & 108,997,714 & 15,599,074,048 & 5,212,745 \\
%\hline
WatDiv 300M & 329,539,576 & 47,670,221,085 & 15,636,385 \\
\hline
%LUBM 500 & 6,652,613 & 801,112,089 & 1,648,692 \\
%\hline
LUBM 5000 & 66,718,642 & 8,134,671,485 & 16,437,950 \\
\hline
DBpedia 2014 & 170,784,508 & 23,844,158,944 & 7,123,915 \\
\hline
Bsbm 10000 & 34,872,182 & 912,646,084 & 526,590 \\
\hline
\end{tabular}
% \vspace{-0.1in}
\label{table:datasets}
\end{table}
%BETTER:using bsbm_100000?
\hyperdef{}{result}{\subsubsection{Result}\label{result}}
\begin{comment}
Table \ref{table:loading} shows the index size and loading time of the datasets
for different systems.
\begin{table}[htcp]
\small
\begin{threeparttable}
\begin{tabular}{|c||c|c|c||c|c|c|}
\hline
& \multicolumn{3}{c||}{Index Size(KB)}& \multicolumn{3}{c|}{Loading Time(ms)}\\
\hline
\hline
Datasets & gStore & Jena& Virtuoso& gStore & Jena& Virtuoso\\
\hline
DBpedia 2014 & 42,415,852& 23,151,272 & -\tnote{$1$} & 8,639,666 &15,555,000 & - \\
\hline
Bsbm 10000 & 1,814,480 & 718,024 & 2,080,000 & 244,153 & 76,000 & 59999 \\
\hline
LUBM 500 &2,171,084 &1,022,528 & 38,000,000 & 291,382& 94,000 &100,532 \\
\hline
%LUBM 5000 & 23,397,548& 10,262,524 & - & 3,767,764 &1,098,000 & - \\
%\hline
%WatDiv 10M & 2,563,168& 1,315,764 & 10,320,000 & 532,542 &304,000 &225,464 \\
%\hline
WatDiv 100M & 26,566,780& 13,286,608 & 8,615,100 & 7,879,602 &20,969,000 &16,981,470 \\
\hline
%WatDiv 300M & 80,166,500& 38,108,940 & - & 19,864,431 &25,041,000 & - \\
%\hline
\end{tabular}
\begin{tablenotes}
\small
\item[$1$] ``-'' means that loading does not terminate in 10 hour
\end{tablenotes}
\end{threeparttable}
\caption{Offline Performance}
\label{table:loading}
\end{table}
\end{comment}
The performance of the different database management systems is shown in Figures \ref{fig:dbpedia2014Performance}, \ref{fig:Bsbm10000Performance}, \ref{fig:LUBMPerformance} and \ref{fig:WatDivPerformance}.
Notice that Sesame and Virtuoso are unable to operate on DBpedia 2014 and
WatDiv 300M, because their size is too large. In addition, we do not use
Sesame and Virtuoso to test on LUBM 5000 due to format issues.
Generally speaking, Virtuoso does not scale well, and Sesame performs poorly. \\
\begin{figure}[b]%
\resizebox{0.48\columnwidth}{!}{
\input{dbpedia2014_comparison}
}
\caption{Query Performance over DBpedia 2014}%
\label{fig:dbpedia2014Performance}
\end{figure}
\begin{figure}%
\resizebox{0.8\columnwidth}{!}{
\input{bsbm10000_comparison}
}
\caption{Query Performance over Bsbm 10000}%
\label{fig:Bsbm10000Performance}
\end{figure}
\begin{figure}[h]%
%\subfigure[LUBM 500]{%
%\resizebox{0.98\columnwidth}{!}{
%\input{LUBM500_comparison}
%}
%\label{fig:LUBM500Performance}%
%}
%\\
\subfigure[LUBM 5000]{%
\resizebox{0.98\columnwidth}{!}{
\input{LUBM5000_comparison}
}
\label{fig:LUBM5000Performance}%
}%
\caption{Query Performance over LUBM}%
\label{fig:LUBMPerformance}
\end{figure}
\begin{figure}[h]%
%\subfigure[WatDiv 10M]{%
%\resizebox{0.8\columnwidth}{!}{
%\input{WatDiv10M_comparison}
%}
%\label{fig:WatDiv10MPerformance}%
%}
%\subfigure[WatDiv 100M]{%
%\resizebox{0.8\columnwidth}{!}{
%\input{WatDiv100M_comparison}
%}
%\label{fig:WatDiv100MPerformance}%
%}
\subfigure[WatDiv 300M]{%
\resizebox{0.8\columnwidth}{!}{
\input{WatDiv300M_comparison}
}
\label{fig:WatDiv300MPerformance}%
}%
\caption{Query Performance over WatDiv}%
\label{fig:WatDivPerformance}
\end{figure}
This program produces many logs, placed in result.log/, load.log/ and
time.log/. By viewing the files in result.log/ you can see that the results
of all queries match, and by viewing the files in load.log/ you can see that
the time and space cost for gStore to build a database are larger than for
the others. More precisely, there is an order of magnitude difference between
gStore and the others in the time/space cost of building a database.
By analysing time.log/, we find that gStore behaves better than the
others on very complicated queries (many variables, cycles, etc.). For
other, simpler queries, there is not much difference between the times of
these database systems.
Generally speaking, the memory cost of gStore when answering queries is
higher than that of the others. The more complicated the query and the
larger the dataset, the more apparent this phenomenon is.
You can find more detailed information in the \href{run:../pdf/gstore_test_report.pdf}{original test report}. Notice that some questions in the test report have already been solved.
The latest test report is \href{run:../test/formal_experiment.pdf}{formal experiment}.
\clearpage
\hyperdef{}{chapter16}{\subsection{Chapter 16: Future Plan}\label{chapter16}}
\hyperdef{}{improve-the-core}{\subsubsection{Improve The
Core}\label{improve-the-core}}
\begin{itemize}
\item
  optimize the join operation on node candidates. Multiple methods should
  be implemented, and a score module designed to select the best one
\item
  add a numeric value query function. It needs to answer numeric range
  queries efficiently, and the space consumption cannot be too large
\item
  add a control module to heuristically select a kind of index for a
  SPARQL query to filter with (not always the VS-tree)
\item
  typedef all frequently used types, to avoid inconsistency and high
  modification cost
\end{itemize}
\hyperdef{}{better-the-interface}{\subsubsection{Better The
Interface}\label{better-the-interface}}
\begin{itemize}
\item
  build a console named gconsole, which provides all operations
  supported by gStore (a parser and auto-completion are required)
\item
  write a web interface for gStore, and a web page to operate on it, just
  like Virtuoso
\end{itemize}
\hyperdef{}{idea-collection-box}{\subsubsection{Idea Collection
Box}\label{idea-collection-box}}
\begin{itemize}
\item
  to support soft links in the console: realpath does not work\ldots{}
  (redefined in ANTLR?)
\item
  store command history for consoles
\item
  warnings remain when using Parser/ (ANTLR)! (modify sparql.g 1.1 and
  regenerate). Change names to avoid the redefinition problem, or use an
  executable to parse
\item
  build a compression module (such as a key-value module and a stream
  module); the latter just needs one-pass read/write, which may allow the
  compression method to be used both on disk and in memory. All operations
  on strings in memory can be changed to operations after compression:
  provide a compress/archive interface and a compare function. There are
  many compression algorithms to choose from, so how to choose? What about
  the UTF-8 encoding problem? This method can lower the consumption of
  memory and disk, but consumes more CPU. However, the time is decided by
  isomorphism. Simple compression is not good, but a too complicated
  method will consume too much time; how to balance? (merge runs of
  identical characters, Huffman tree)
\item
  mmap to speed up KVstore?
\item
  the strategy for Stream: is 85\% valid? Consider sampling; analyse the
  size of the result set and decide the strategy? How to support order by:
  sort in memory if the results are not put in a file; otherwise, partially
  sort in memory, then put into a file, then proceed with external sorting
\end{itemize}
\clearpage
\hyperdef{}{chapter17}{\subsection{Chapter 17: Thanks List}\label{chapter17}}
\textit{This chapter lists people who inspire us or contribute to this project.}
\paragraph{GitHub user zhangxiaoyang \\
https://github.com/zhangxiaoyang \\
1. added the Python API \\
2. fixed logger messages}
%\begin{center}\rule{0.5\linewidth}{\linethickness}\end{center}
\clearpage
\hyperdef{}{chapter18}{\subsection{Chapter 18: Legal Issues}\label{chapter18}}
%\textbf{We are trying our best to avoid errors. However, if you encounter any unrecovable disaster when using this system, we shall not be responsible for it.}
%below is the BSD LICENSE: http://baike.baidu.com/link?url=a7XUsshp1Sd_DvF7oIJ_CpHTOZryu4ACSSj1AyQl1GU9XL5pPEj9RxIEMF1nC213VvJ2quhWTK9OCZot-CS0LK
%The following is a BSD license template. To generate your own license, change the values of OWNER, ORGANIZATION and YEAR from their original values as given here, and substitute your own.
%Note: The advertising clause in the license appearing on BSD Unix files was officially rescinded by the Director of the Office of Technology Licensing of the University of California on July 22 1999. He states that clause 3 is "hereby deleted in its entirety."
%Note the new BSD license is thus equivalent to the MIT License, except for the no-endorsement final clause.
%<OWNER> = gStore team
%<ORGANIZATION> = Peking University
%<YEAR> = 2016
%In the original BSD license, both occurrences of the phrase "COPYRIGHT HOLDERS AND CONTRIBUTORS" in the disclaimer read "REGENTS AND CONTRIBUTORS".
%Here is the license template:
%Copyright (c) <YEAR>, <OWNER>
Copyright (c) 2016 gStore team \\
All rights reserved. \\
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Neither the name of the Peking University nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. \\
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. \\
What's more, you need to include the label ``powered by gStore'', as well as the logo of gStore, in any software product of yours which uses gStore.
We would be very grateful if you are willing to tell us your name, institution, purpose and email. Such information can be sent to us by emailing \href{mailto:[email protected]}{[email protected]}, and we promise not to reveal your privacy.
%using gmail or website
\clearpage
\section{End}
\textbf{Thank you for reading this document. If any question or advice, or you have interests in this project, please don't hesitate to get in touch with us.}
\clearpage
\end{document}
| {
"alphanum_fraction": 0.7404241135,
"avg_line_length": 39.216070742,
"ext": "tex",
"hexsha": "66a0eb8a5faf3ec5aaba64adcb3cc1a891a582a6",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "af9fe84667c650c61d22881029d241e9a7ca88e7",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "imbajin/docker",
"max_forks_repo_path": "docs/help/gStore_help.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "af9fe84667c650c61d22881029d241e9a7ca88e7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "imbajin/docker",
"max_issues_repo_path": "docs/help/gStore_help.tex",
"max_line_length": 888,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "af9fe84667c650c61d22881029d241e9a7ca88e7",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "imbajin/docker",
"max_stars_repo_path": "docs/help/gStore_help.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 28674,
"size": 102001
} |
\documentclass[main.tex]{subfiles}
\begin{document}
% \marginpar{Monday\\ 2019-11-08, \\ compiled \\ \today}
% \section*{Fri Nov 08 2019}
% We continue the discussion from yesterday on the dynamics of inflation.
% The Lagrangian for a scalar field in GR is
% %
% \begin{align}
% \mathscr{L} = \frac{1}{2} g^{\mu \nu } \nabla_{\mu } \Phi \nabla_{\nu } \Phi - V(\Phi )
% \,.
% \end{align}
% The ``contravariant derivative'' does not exist.
% \todo[inline]{Why?}
Actions are dimensionless since \(\hbar =1\), and since \(\dd{s^2} = g_{\mu \nu } \dd{x^{\mu }} \dd{x^{\nu }}\) the metric \(g_{\mu \nu }\) is also dimensionless.
The Riemann tensor is given by the second derivatives of the metric, so its dimension is a length to the \(-2\), or a mass squared.
So, the dimensional analysis of \(\int \dd[4]{x} \sqrt{-g} \mathscr{L}\) gives us that \(\mathscr{L}\) must have dimensions of a length to the \(-4\), or a mass to the 4.
The field \(\Phi \) has the dimensions of a mass, which is an inverse length.
The coupling constants are conventionally taken to be dimensionless: therefore if we are to add a term to the Lagrangian, it must be \( \xi \Phi^2 \) times an inverse square length, which is often \(m^2\), \(m \) being the mass of the field, while \(\xi \) is a real number.
With all of this said, in terms of dimensionality we can add to our Lagrangian a term \(\xi R \Phi^2\), where \(R\) is the Ricci scalar.
This is a prototype for modified GR theories.
The value of \(\xi \) is undetermined: setting \(\xi = 1/6 \) gives us conformal symmetry, while in other cases it is useful to set it as \(\xi = 1/4 \).
A Weyl transformation (a local rescaling of the metric) allows us \cite[]{faraoniConformalTransformationsClassical1998} to remove this term: we move from the Jordan frame (where we \emph{do} have coupling between our scalar field and the curvature, with a term such as the one we described) to the Einstein frame, in which we do not have this term, but we \emph{do} have an additional matter-like term in the Einstein equations, a new component in the stress-energy tensor, which will look like:
%
\begin{align}
T_{\mu \nu } (\Phi ) = \Phi_{, \mu } \Phi_{, \nu } - g_{\mu \nu } \qty(\frac{1}{2} g^{\rho \sigma } \Phi_{, \rho } \Phi_{, \sigma } - V(\Phi ))
\,,
\end{align}
%
% and then \(G_{\mu \nu } = 8 \pi G T_{\mu \nu }\).
where commas denote partial derivatives: \(\Phi_{,\alpha } = \partial_{\alpha } \Phi \).
% Varying the Einstein-Hilbert action with respect to the metric gives the LHS of the Einstein equations. If we vary with respect to something else we get the equations of motion of that other thing.
% This means that the stress-energy tensor is just the functional variation of everything but the action of the metric in the global acction, with respect to the metric.
% We get for our scalar field:
We can get an explicit solution for the solution of the equations of motion of this field by using the symmetries of our spacetime: we assume that, because of homogeneity, \(\Phi (x^{\mu }) = \varphi (t)\).
There is another issue: in QFT any field \(\Phi \) is an operator acting on a Fock space, while the left-hand side of the Einstein equation is a simple tensor --- we are not quantizing space!
The solution to this problem is a semiclassical mean-field approximation, which is similar to the Hartree-Fock mean-field method:
we assume we are ``close'' to the ground state, and so we substitute the stress-energy tensor on the right-hand side with its mean value computed in the ground state of the theory:
%
\begin{align}
G_{\mu \nu } = 8 \pi G \expval{\hat{T}_{\mu \nu } }_{0}
\,,
\end{align}
%
where we define the ground state \(\ket{0}\) as that one with the most symmetry allowed (that is, it should be invariant under rotations and translations, the symmetries of the FLRW metric).
% The symmetries we must consider are only rotations and translations.
% There are no issues of commutation, since we do not quantize space unlike the quantum loop gravity people.
If we perturb the state of the field \(\Phi \), we get \(\Phi = \varphi + \delta \Phi \): so \(\expval{\Phi^2 } = \varphi^2 + 2 \expval{\varphi \delta \Phi } + \expval{ \delta \Phi^2}\), but the second term is zero since \(\expval{ \delta \Phi } = 0 \) and \(\varphi \) is constant.
The last term diverges. We do not know how to deal with it.
We therefore assume that it is small.
\todo[inline]{What? is this about renormalization?}
When computing the stress energy tensor we get only diagonal terms: this scalar acts like a perfect fluid!
The energy density is equal to the Hamiltonian:
%
\begin{align}
\rho = T_{00 } = \frac{1}{2} \dot{\varphi }^2 + V(\varphi ) = H
\,,
\end{align}
%
while the pressure is the Lagrangian:
%
\begin{align}
P = \frac{1}{2} \dot{\varphi}^2 - V(\varphi ) = \mathscr{L}
\,.
\end{align}
We assume that anything in the universe which is not our field behaves like radiation, with energy density \(\rho_r\). Then, the Friedmann equations (assuming zero spatial curvature) will read
%
\begin{subequations}
\begin{align}
H^2 &= \frac{8 \pi G }{3} \qty(\frac{1}{2} \dot{\varphi }^2 + V + \rho_r) \\
\frac{\ddot{a}}{a} &= - \frac{8 \pi G }{3} \qty(\dot{\varphi }^2 - V + \rho_r) \marginnote{\(P_r = \rho _r / 3\).}\\
\dot{\rho} _{\text{tot}} &= - 3 \frac{\dot{a}}{a} \qty(\rho _{\text{tot}} + P _{\text{tot}})
\,,
\end{align}
\end{subequations}
%
but in the continuity equation we can split the contributions by inserting an unknown factor \(\Gamma \), the transfer of energy between the field and radiation, so that the respective energy densities evolve as
%
\begin{subequations}
\begin{align}
\dot{\rho} _\varphi &= - 3 \frac{\dot{a}}{a} \dot{\varphi }^2 + \Gamma \\
\dot{\rho} _r &= - 4 \frac{\dot{a}}{a} \rho_r - \Gamma
\,.
\end{align}
\end{subequations}
%
% so, denoting \(' = \partial_\varphi \), the first of these equations can be written as
% %
% \begin{subequations}
% \begin{align}
% \dot{\rho }_\varphi &= \dot{\varphi } \ddot{\varphi } + V^{\prime } \dot{\varphi } \\
% \ddot{\varphi} \dot{\varphi } + V^{\prime } \dot{\varphi } &= -3 \frac{\dot{a}}{a} + \Gamma
% \,.
% \end{align}
% \end{subequations}
In order to see what the evolution of the field looks like we drop \(\Gamma \), assuming that there is little energy transfer between the field and radiation.
The equation of motion of the field reads
%
\begin{align}
\ddot{\varphi } + 3 H \dot{\varphi} = - V^{\prime }
\,,
\end{align}
%
where \(V' = \partial_{\varphi } V\).\footnote{This may look peculiar, but it is in fact just the Klein-Gordon equation written in curved spacetime: the difference comes about because the Dalambertian operator now reads \cite[]{natarioDecaySolutionsKleinGordon2019}
%
\begin{align}
    \square \varphi &= \frac{1}{\sqrt{-g}} \partial_{\mu } \qty(\sqrt{-g} \partial^{\mu } \varphi ) = \partial_{\mu } \partial^{\mu } \varphi +
\frac{\partial_{\mu } \sqrt{-g}}{\sqrt{-g}}
\qty(\partial^{\mu } \varphi ) = \ddot{\varphi} + \dot{\varphi} \frac{\partial_{t} (a^3)}{a^3} = \ddot{\varphi} + 3 \dot{\varphi} \frac{\dot{a}}{a}
\,.
\end{align}
}
We have already mentioned that this field behaves like a fluid: so, what is its equation of state? The definition of \(w\) is
%
\begin{align}
% w = - \frac{1}{3}
w
= \frac{P}{\rho }
= \frac{\frac{1}{2} \dot{\varphi }^2 - V}{\frac{1}{2} \dot{\varphi }^2 + V}
\,,
\end{align}
%
% so one possibility we have is
% %
% \begin{align}
% \dot{\varphi }^2 \gg 2 \abs{V} \implies w = 1
% \,,
% \end{align}
% %
% or else
% %
% \begin{align}
% \dot{\varphi }^2 \ll 2 \abs{V} \implies w = - 1
% \,.
% \end{align}
whose limiting case, when \(\dot{\varphi}^2 \ll 2 \abs{V}\), is \(w = -1\).
% A very simple solution, which was one of the first ones people proposed, is \(\varphi = \const\).
% This model seems so fit the data.
As we have seen, this corresponds to an evolution of the universe with \(a \sim \exp(Ht)\): an exponential expansion!
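Explicitly: with \(w = -1\) the continuity equation gives \(\dot{\rho } = - 3 H \qty(\rho + P) = 0\), so \(\rho \) stays constant; then \(H^2 = 8 \pi G \rho / 3\) is constant as well, and \(\dot{a}/a = H\) integrates to \(a \propto \exp(Ht)\).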
% The continuity equation gives us the Klein-Gordon equation again: it is tautological.
% Several proposals were made in the late seventies, early eighties.
% A very simple model for a symmetry-breaking potential is the Ginzburg-Landau:
% %
% \begin{align}
% V \propto \qty(\Phi^2 - \sigma^2)^2
% \,,
% \end{align}
% %
% which gives a seeming ``mass term'' \(- 2 \varphi^2 \sigma^2\), which has the wrong sign: it is ``tachyonic''!
% The configuration at \(\Phi = 0\) is unstable. The one at \(\abs{\Phi } = \sigma \) is not symmetric under \(\Phi \rightarrow - \Phi \).
% People realized that QFT is a subcase of a condensed matter approach in which we have a thermal bath, an \emph{environment}.
% This is \emph{finite temperature QFT}.
% We consider then an \emph{effective potential} for the temperature: \(V_T (\Phi ) = V(\Phi ) + \) functions of \(T\).
% This might be \(V(\Phi ) + \alpha \varphi^2 T^2 + \gamma T^{4}\), with positive \(\alpha \). The quadratic term then gives us a \emph{positive} mass term: at temperatures larger than some critical temperature we get stability at \(\Phi = 0\), but what happens if we lower the temperature?
% Then, there is symmetry breaking.
% Let us see how our Friedmann equations account for this situation.
% The temperature of radiation is \(\rho _r = \frac{\pi^2}{30 } g_{*} (r) T^{4}\). If we start with a universe which is radiation dominate, then it ends up to be De Sitter.
% This is a consequence of the \emph{No Hair Cosmic Theorem}.
% There is a potential barrier between the metastable ``\(\Phi = 0\)'' state, and the symmetry breaking other ones.
% (even though it does not show in the fourth degree potential model).
% This can happen through quantum tunneling, but there is a delay: a \emph{first order phase transition with supercooling} (by ``super'' what is meant is just that the temperature goes below \(T_C\) even though we still are in the center symmetric state).
% We get bubbles of symmetry broken by fluctuation, expanding through the universe but never meeting because of the expansion.
In the ``old inflation'' model, inflation is driven by the breaking of a certain symmetry through quantum tunneling,
while ``new inflation'' models instead involve a ``slow roll'' of the field.
The equation \(\ddot{\varphi } + 3 H \dot{\varphi } = - V'\) looks like a regular equation of motion with a kinetic term and a friction term: after a time of order \(1/H\) the velocity-dependent ``friction'' term dominates.
Then we get a slow-roll (friction-dominated) regime, in which \(H^2 = \frac{8 \pi G}{3} V \) and \(\dot{\varphi} \approx - V^{\prime }/3H\).
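As a rough numerical illustration of this friction-dominated behaviour, here is a minimal sketch (assuming for concreteness a quadratic potential \(V = m^2 \varphi^2 / 2\), units in which \(8 \pi G / 3 = 1\), and arbitrary parameter values) which integrates the equation of motion together with the Friedmann constraint; during the slow roll, \(\dot{\varphi }\) settles onto the value \(- V' / 3H\):
\begin{verbatim}
from math import sqrt

# Toy slow-roll integration: V(phi) = m^2 phi^2 / 2, units with 8*pi*G/3 = 1.
m = 1e-2                       # assumed field mass (arbitrary)
phi, dphi = 20.0, 0.0          # field displaced from the minimum, initially at rest
dt = 0.1
for step in range(60001):
    V, dV = 0.5 * m**2 * phi**2, m**2 * phi
    H = sqrt(0.5 * dphi**2 + V)        # Friedmann constraint: H^2 = rho_phi
    if step % 10000 == 0:
        # during slow roll, dphi should track the attractor value -V'/(3H)
        print(phi, dphi, -dV / (3.0 * H))
    ddphi = -3.0 * H * dphi - dV       # equation of motion with Hubble friction
    dphi += ddphi * dt
    phi += dphi * dt
\end{verbatim}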
% We exploit the flatness of the potential.
% There are quantum fluctuations during inflation.
\todo[inline]{Mention of chaotic inflation and many more things\dots could not really form a coherent narrative. Should try again after reading \cite[sec.\ 7.11]{colespCosmology2002}.}
\todo[inline]{This section could probably do with some pictures of potentials\dots}
% The model of \emph{chaotic inflation}, by Linde, describes the dynamics of
% Since \(\Delta E \Delta t \approx \hbar\), we do not know at which state we are actually.
% As time passes, the energy uncertainty decreases.
% The initial condition for the distribution of the universe is then detemined by the uncertainty principle.
% An alternative is \emph{eternal} chaotic inflation.
% If a fluctuation increases the potential universe, then \(H^2\) increases, then the region feels a larger volume. The case where the field goes towards the minimum is unlikely.
% Why did it happen? This can only be answered with the anthropic principle.
\end{document} | {
"alphanum_fraction": 0.6905049113,
"avg_line_length": 52.7545454545,
"ext": "tex",
"hexsha": "fca738bde9bc20b97b2558188b7ab64fc32a488b",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z",
"max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "jacopok/notes",
"max_forks_repo_path": "ap_first_semester/astrophysics_cosmology/08nov.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "jacopok/notes",
"max_issues_repo_path": "ap_first_semester/astrophysics_cosmology/08nov.tex",
"max_line_length": 495,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "jacopok/notes",
"max_stars_repo_path": "ap_first_semester/astrophysics_cosmology/08nov.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z",
"num_tokens": 3466,
"size": 11606
} |
\subsection{Matrix rank}
\subsubsection{Rank function}
The rank of a matrix is the dimension of the span of its component columns.
\(rank (M)=\dim (span (m_1,m_2,...,m_n))\), where \(m_1,m_2,...,m_n\) are the columns of \(M\).
\subsubsection{Column and row span}
The dimension of the span of the rows is the same as the dimension of the span of the columns: the row rank of a matrix always equals its column rank.
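For example (an illustrative matrix): if the rows of \(M\) are \((1,2,3)\) and \((2,4,6)\), the second row is twice the first, so the rows span a \(1\)-dimensional space. The columns \((1,2)\), \((2,4)\) and \((3,6)\) are all multiples of \((1,2)\), so they also span a \(1\)-dimensional space, and \(rank(M)=1\).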
| {
"alphanum_fraction": 0.7333333333,
"avg_line_length": 19.2857142857,
"ext": "tex",
"hexsha": "55424160dd351f203157b4cb43e16842daf7dc78",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/algebra/linearSystems/02-01-rank.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/algebra/linearSystems/02-01-rank.tex",
"max_line_length": 75,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/algebra/linearSystems/02-01-rank.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 72,
"size": 270
} |
% Default to the notebook output style
% Inherit from the specified cell style.
\definecolor{orange}{cmyk}{0,0.4,0.8,0.2}
\definecolor{darkorange}{rgb}{.71,0.21,0.01}
\definecolor{darkgreen}{rgb}{.12,.54,.11}
\definecolor{myteal}{rgb}{.26, .44, .56}
\definecolor{gray}{gray}{0.45}
\definecolor{lightgray}{gray}{.95}
\definecolor{mediumgray}{gray}{.8}
\definecolor{inputbackground}{rgb}{.95, .95, .85}
\definecolor{outputbackground}{rgb}{.95, .95, .95}
\definecolor{traceback}{rgb}{1, .95, .95}
% ansi colors
\definecolor{red}{rgb}{.6,0,0}
\definecolor{green}{rgb}{0,.65,0}
\definecolor{brown}{rgb}{0.6,0.6,0}
\definecolor{blue}{rgb}{0,.145,.698}
\definecolor{purple}{rgb}{.698,.145,.698}
\definecolor{cyan}{rgb}{0,.698,.698}
\definecolor{lightgray}{gray}{0.5}
% bright ansi colors
\definecolor{darkgray}{gray}{0.25}
\definecolor{lightred}{rgb}{1.0,0.39,0.28}
\definecolor{lightgreen}{rgb}{0.48,0.99,0.0}
\definecolor{lightblue}{rgb}{0.53,0.81,0.92}
\definecolor{lightpurple}{rgb}{0.87,0.63,0.87}
\definecolor{lightcyan}{rgb}{0.5,1.0,0.83}
% commands and environments needed by pandoc snippets
% extracted from the output of `pandoc -s`
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\newenvironment{Shaded}{}{}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}}
\newcommand{\RegionMarkerTok}[1]{{#1}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}}
\newcommand{\NormalTok}[1]{{#1}}
% Define a nice break command that doesn't care if a line doesn't already
% exist.
\def\br{\hspace*{\fill} \\* }
% Math Jax compatability definitions
\def\gt{>}
\def\lt{<}
% Document parameters
\title{}
% Pygments definitions
\makeatletter
\def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax%
\let\PY@ul=\relax \let\PY@tc=\relax%
\let\PY@bc=\relax \let\PY@ff=\relax}
\def\PY@tok#1{\csname PY@tok@#1\endcsname}
\def\PY@toks#1+{\ifx\relax#1\empty\else%
\PY@tok{#1}\expandafter\PY@toks\fi}
\def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{%
\PY@it{\PY@bf{\PY@ff{#1}}}}}}}
\def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}}
\expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}}
\expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf}
\expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}}
\expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit}
\expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}}
\expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}}
\expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}}
\expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}}
\expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}}
\expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}}
\expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}}
\expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}}
\expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}}
\expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}}
\expandafter\def\csname PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}}
\expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}}
\expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}}
\expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}}
\expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\def\PYZbs{\char`\\}
\def\PYZus{\char`\_}
\def\PYZob{\char`\{}
\def\PYZcb{\char`\}}
\def\PYZca{\char`\^}
\def\PYZam{\char`\&}
\def\PYZlt{\char`\<}
\def\PYZgt{\char`\>}
\def\PYZsh{\char`\#}
\def\PYZpc{\char`\%}
\def\PYZdl{\char`\$}
\def\PYZhy{\char`\-}
\def\PYZsq{\char`\'}
\def\PYZdq{\char`\"}
\def\PYZti{\char`\~}
% for compatibility with earlier versions
\def\PYZat{@}
\def\PYZlb{[}
\def\PYZrb{]}
\makeatother
% Exact colors from NB
\definecolor{incolor}{rgb}{0.0, 0.0, 0.5}
\definecolor{outcolor}{rgb}{0.545, 0.0, 0.0}
% Prevent overflowing lines due to hard-to-break entities
\sloppy
% Setup hyperref package
\hypersetup{
breaklinks=true, % so long urls are correctly broken across lines
colorlinks=true,
urlcolor=blue,
linkcolor=darkorange,
citecolor=darkgreen,
}
% Slightly bigger margins than the latex defaults
\begin{document}
\maketitle
\section{The Zen Of Python}\label{the-zen-of-python}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}1}]:} \PY{k+kn}{import} \PY{n+nn}{this}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
\end{Verbatim}
\section{Variables}\label{variables}
A name that is used to refer to a value is called a variable.
In Python, variables can be declared and values can be assigned to them as
follows,
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}2}]:} \PY{n}{x} \PY{o}{=} \PY{l+m+mi}{2}
\PY{n}{y} \PY{o}{=} \PY{l+m+mi}{5}
\PY{n}{xy} \PY{o}{=} \PY{l+s}{\PYZsq{}}\PY{l+s}{Hey}\PY{l+s}{\PYZsq{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}3}]:} \PY{k}{print} \PY{n}{x}\PY{o}{+}\PY{n}{y}\PY{p}{,} \PY{n}{xy}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
7 Hey
\end{Verbatim}
Multiple variables can be assigned the same value.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}4}]:} \PY{n}{x} \PY{o}{=} \PY{n}{y} \PY{o}{=} \PY{l+m+mi}{1}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}5}]:} \PY{k}{print} \PY{n}{x}\PY{p}{,}\PY{n}{y}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
1 1
\end{Verbatim}
\section{Operators}\label{operators}
\subsection{Arithmetic Operators}\label{arithmetic-operators}
\begin{longtable}[c]{@{}ll@{}}
\toprule
Symbol & Task Performed\tabularnewline
\midrule
\endhead
+ & Addition\tabularnewline
- & Subtraction\tabularnewline
/ & Division\tabularnewline
\% & Modulus (remainder)\tabularnewline
* & Multiplication\tabularnewline
// & Floor division\tabularnewline
** & Exponentiation (to the power of)\tabularnewline
\bottomrule
\end{longtable}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}6}]:} \PY{l+m+mi}{1}\PY{o}{+}\PY{l+m+mi}{2}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}6}]:} 3
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}7}]:} \PY{l+m+mi}{2}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}7}]:} 1
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}8}]:} \PY{l+m+mi}{1}\PY{o}{*}\PY{l+m+mi}{2}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}8}]:} 2
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}9}]:} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{2}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}9}]:} 0
\end{Verbatim}
0? This is because both the numerator and the denominator are integers, so
Python 2 performs integer (floor) division and the fractional part is
discarded. By changing either the numerator or the denominator to a float,
the expected answer is obtained.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}10}]:} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mf}{2.0}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}10}]:} 0.5
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}11}]:} \PY{l+m+mi}{15}\PY{o}{\PYZpc{}}\PY{k}{10}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}11}]:} 5
\end{Verbatim}
Floor division rounds the result of a division down to the nearest whole
number (the floor).
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}12}]:} \PY{l+m+mf}{2.8}\PY{o}{/}\PY{o}{/}\PY{l+m+mf}{2.0}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}12}]:} 1.0
\end{Verbatim}
\subsection{Relational Operators}\label{relational-operators}
\begin{longtable}[c]{@{}ll@{}}
\toprule
Symbol & Task Performed\tabularnewline
\midrule
\endhead
== & True, if it is equal\tabularnewline
!= & True, if not equal to\tabularnewline
\textless{} & less than\tabularnewline
\textgreater{} & greater than\tabularnewline
\textless{}= & less than or equal to\tabularnewline
\textgreater{}= & greater than or equal to\tabularnewline
\bottomrule
\end{longtable}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}13}]:} \PY{n}{z} \PY{o}{=} \PY{l+m+mi}{1}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}14}]:} \PY{n}{z} \PY{o}{==} \PY{l+m+mi}{1}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}14}]:} True
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}15}]:} \PY{n}{z} \PY{o}{\PYZgt{}} \PY{l+m+mi}{1}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}15}]:} False
\end{Verbatim}
\subsection{Bitwise Operators}\label{bitwise-operators}
\begin{longtable}[c]{@{}ll@{}}
\toprule
Symbol & Task Performed\tabularnewline
\midrule
\endhead
\& & Bitwise AND\tabularnewline
\textbar{} & Bitwise OR\tabularnewline
\^{} & Bitwise XOR\tabularnewline
\textasciitilde{} & Bitwise NOT\tabularnewline
\textgreater{}\textgreater{} & Right shift\tabularnewline
\textless{}\textless{} & Left shift\tabularnewline
\bottomrule
\end{longtable}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}16}]:} \PY{n}{a} \PY{o}{=} \PY{l+m+mi}{2} \PY{c}{\PYZsh{}10}
\PY{n}{b} \PY{o}{=} \PY{l+m+mi}{3} \PY{c}{\PYZsh{}11}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}17}]:} \PY{k}{print} \PY{n}{a} \PY{o}{\PYZam{}} \PY{n}{b}
\PY{k}{print} \PY{n+nb}{bin}\PY{p}{(}\PY{n}{a}\PY{o}{\PYZam{}}\PY{n}{b}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
2
0b10
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}18}]:} \PY{l+m+mi}{5} \PY{o}{\PYZgt{}\PYZgt{}} \PY{l+m+mi}{1}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}18}]:} 2
\end{Verbatim}
0000 0101 -\textgreater{} 5
Shifting the bits by 1 to the right (with zero padding on the left) gives
0000 0010 -\textgreater{} 2
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}19}]:} \PY{l+m+mi}{5} \PY{o}{\PYZlt{}\PYZlt{}} \PY{l+m+mi}{1}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}19}]:} 10
\end{Verbatim}
0000 0101 -\textgreater{} 5
Shifting the bits by 1 to the left (with zero padding on the right) gives
0000 1010 -\textgreater{} 10
\section{Built-in Functions}\label{built-in-functions}
Python comes loaded with many pre-built (built-in) functions.
\subsection{Conversion from one system to
another}\label{conversion-from-one-system-to-another}
Conversion from hexadecimal to decimal is done by adding the prefix
\textbf{0x} to the hexadecimal value; the reverse is done with the built-in
function \textbf{hex( )}. Conversion from octal to decimal is done by adding
the prefix \textbf{0} to the octal value; the reverse uses the built-in
function \textbf{oct( )}.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}20}]:} \PY{n+nb}{hex}\PY{p}{(}\PY{l+m+mi}{170}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}20}]:} '0xaa'
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}21}]:} \PY{l+m+mh}{0xAA}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}21}]:} 170
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}22}]:} \PY{n+nb}{oct}\PY{p}{(}\PY{l+m+mi}{8}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}22}]:} '010'
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}23}]:} \PY{l+m+mo}{010}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}23}]:} 8
\end{Verbatim}
\textbf{int( )} accepts two arguments when used for conversion: the value in
a different number system and its base. Note that the value in the other
number system must be given as a string.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}24}]:} \PY{k}{print} \PY{n+nb}{int}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{010}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{8}\PY{p}{)}
\PY{k}{print} \PY{n+nb}{int}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{0xaa}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{16}\PY{p}{)}
\PY{k}{print} \PY{n+nb}{int}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{1010}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
8
170
10
\end{Verbatim}
\textbf{int( )} can also be used to truncate a float to its integer part, or
to convert a number given as a string into an integer. Similarly, the
function \textbf{str( )} can be used to convert an integer back into a
string.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}25}]:} \PY{k}{print} \PY{n+nb}{int}\PY{p}{(}\PY{l+m+mf}{7.7}\PY{p}{)}
\PY{k}{print} \PY{n+nb}{int}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{7}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
7
7
\end{Verbatim}
Also note that the function \textbf{bin( )} is used for binary conversion and
\textbf{float( )} for decimal/float values. \textbf{chr( )} converts an ASCII
code into its character equivalent, and \textbf{ord( )} does the reverse.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}26}]:} \PY{n+nb}{chr}\PY{p}{(}\PY{l+m+mi}{98}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}26}]:} 'b'
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}27}]:} \PY{n+nb}{ord}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{b}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}27}]:} 98
\end{Verbatim}
\subsection{Simplifying Arithmetic
Operations}\label{simplifying-arithmetic-operations}
\textbf{round( )} function rounds the input value to a specified number
of places or to the nearest integer.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}28}]:} \PY{k}{print} \PY{n+nb}{round}\PY{p}{(}\PY{l+m+mf}{5.6231}\PY{p}{)}
\PY{k}{print} \PY{n+nb}{round}\PY{p}{(}\PY{l+m+mf}{4.55892}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
6.0
4.56
\end{Verbatim}
\textbf{complex( )} is used to define a complex number and \textbf{abs(
)} outputs the absolute value of the same.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}29}]:} \PY{n}{c} \PY{o}{=}\PY{n+nb}{complex}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{5+2j}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\PY{k}{print} \PY{n+nb}{abs}\PY{p}{(}\PY{n}{c}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
5.38516480713
\end{Verbatim}
\textbf{divmod(x,y)} returns the quotient and the remainder in a tuple (a
data type covered in a later chapter), in the format (quotient, remainder).
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}30}]:} \PY{n+nb}{divmod}\PY{p}{(}\PY{l+m+mi}{9}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}30}]:} (4, 1)
\end{Verbatim}
\textbf{isinstance( )} returns True if the first argument is an instance of
the class given as the second argument. Multiple classes can also be checked
at once.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}31}]:} \PY{k}{print} \PY{n+nb}{isinstance}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n+nb}{int}\PY{p}{)}
\PY{k}{print} \PY{n+nb}{isinstance}\PY{p}{(}\PY{l+m+mf}{1.0}\PY{p}{,}\PY{n+nb}{int}\PY{p}{)}
\PY{k}{print} \PY{n+nb}{isinstance}\PY{p}{(}\PY{l+m+mf}{1.0}\PY{p}{,}\PY{p}{(}\PY{n+nb}{int}\PY{p}{,}\PY{n+nb}{float}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
True
False
True
\end{Verbatim}
\textbf{cmp(x,y)}
\begin{longtable}[c]{@{}ll@{}}
\toprule
x ? y & Output\tabularnewline
\midrule
\endhead
x \textless{} y & -1\tabularnewline
x == y & 0\tabularnewline
x \textgreater{} y & 1\tabularnewline
\bottomrule
\end{longtable}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}32}]:} \PY{k}{print} \PY{n+nb}{cmp}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)}
\PY{k}{print} \PY{n+nb}{cmp}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{)}
\PY{k}{print} \PY{n+nb}{cmp}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
-1
1
0
\end{Verbatim}
\textbf{pow(x,y,z)} can be used to compute the power \(x^y\); if the third
argument z is given, the result is returned modulo z, i.e.\ (\(x^y\) \% z).
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}33}]:} \PY{k}{print} \PY{n+nb}{pow}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{,}\PY{l+m+mi}{3}\PY{p}{)}
\PY{k}{print} \PY{n+nb}{pow}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{,}\PY{l+m+mi}{3}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
27
2
\end{Verbatim}
The \textbf{range( )} function outputs the integers of the specified range.
It can also be used to generate a series by specifying the step between
consecutive numbers within a particular range. The elements are returned in a
list (discussed in detail later).
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}34}]:} \PY{k}{print} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{)}
\PY{k}{print} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{9}\PY{p}{)}
\PY{k}{print} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{27}\PY{p}{,}\PY{l+m+mi}{8}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[0, 1, 2]
[2, 3, 4, 5, 6, 7, 8]
[2, 10, 18, 26]
\end{Verbatim}
\subsection{Accepting User Inputs}\label{accepting-user-inputs}
\textbf{raw\_input( )} accepts input and stores it as a string. Hence, if the
user inputs an integer, the code should convert the string to an integer and
then proceed.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}35}]:} \PY{n}{abc} \PY{o}{=} \PY{n+nb}{raw\PYZus{}input}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{Type something here and it will be stored in variable abc }\PY{l+s+se}{\PYZbs{}t}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Type something here and it will be stored in variable abc Hey
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}36}]:} \PY{n+nb}{type}\PY{p}{(}\PY{n}{abc}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}36}]:} str
\end{Verbatim}
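For instance, a small snippet (hypothetical, not part of the original notebook) that converts the returned string before doing arithmetic on it could look like this:

\begin{Verbatim}[commandchars=\\\{\}]
age = int(raw_input("Enter your age "))  # raw_input returns a str; int() converts it
print age + 1
\end{Verbatim}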
\textbf{input( )}, by contrast, evaluates what is typed, so an entered
integer is stored directly as an integer.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}37}]:} \PY{n}{abc1} \PY{o}{=} \PY{n+nb}{input}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{Only integer can be stored in in variable abc }\PY{l+s+se}{\PYZbs{}t}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Only integer can be stored in in variable abc 275
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}38}]:} \PY{n+nb}{type}\PY{p}{(}\PY{n}{abc1}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}38}]:} int
\end{Verbatim}
Note that \textbf{type( )} returns the type of a variable or a value.
% Add a bibliography block to the postdoc
\newpage
\input{02}
\end{document}
| {
"alphanum_fraction": 0.6357178914,
"avg_line_length": 39.4509246088,
"ext": "tex",
"hexsha": "225a5e0bb08f1f8915c8aada7c173c4343af74c1",
"lang": "TeX",
"max_forks_count": 240,
"max_forks_repo_forks_event_max_datetime": "2022-03-24T16:18:18.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-09-07T01:01:50.000Z",
"max_forks_repo_head_hexsha": "b04b84fb5b82fe7c8b12680149e25ae0d27a0960",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "webdevhub42/Lambda",
"max_forks_repo_path": "WEEKS/CD_Sata-Structures/_JUPYTER/Python-Lectures/tex/01.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "b04b84fb5b82fe7c8b12680149e25ae0d27a0960",
"max_issues_repo_issues_event_max_datetime": "2015-10-08T16:29:27.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-10-08T15:39:14.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "webdevhub42/Lambda",
"max_issues_repo_path": "WEEKS/CD_Sata-Structures/_JUPYTER/Python-Lectures/tex/01.tex",
"max_line_length": 236,
"max_stars_count": 247,
"max_stars_repo_head_hexsha": "b04b84fb5b82fe7c8b12680149e25ae0d27a0960",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "webdevhub42/Lambda",
"max_stars_repo_path": "WEEKS/CD_Sata-Structures/_JUPYTER/Python-Lectures/tex/01.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-28T17:02:15.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-09-14T19:36:07.000Z",
"num_tokens": 10998,
"size": 27734
} |
%\include{preambleSDL}\include{newcommandsSDL}\include{hyphenationSDL}\begin{document}\tableofcontents\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ALL THE ABOVE TO BE COMMENTED OUT FOR COMPLETE DOCUMENT! %%%%%%%%%%%
\chapter{Overview of the syntax of sentences}\label{overviewSyntax}
In describing \PS\ clauses, it is useful to begin with basic clauses that contain a full predicate and its arguments, complements and/or adjuncts, before moving on to describe complex clauses which consist of two or more clauses linked to one another.
Therefore, basic clauses are described in Chapter \ref{basicClauses}, including declarative, interrogative and imperative clauses. Chapter \ref{complexClauses} then deals with complex clauses, covering coordination and subordination.
However, in order to better understand the syntax of sentences, it is sensible to begin with two general discussions that provide a framework for understanding the syntactic descriptions that follow. The first of these, in \SEC\ref{grammaticalRelations} below, covers grammatical relations in Pite Saami. This leads to the second discussion in \SEC\ref{constituentOrderClauses}, which concerns clause-level constituent ordering, and the likely role that information structure plays in determining this.
\section{Grammatical relations}\label{grammaticalRelations}\is{grammatical relations|(}
\PS\ is an accusative language because %the subject and only nominal argument %delete; cf.~Comrie 1989
the only argument of an intransitive\is{valency!intransitive} verb (S) is marked in the same way as the most-agent-like argument of a transitive\is{valency!transitive} verb (A): by the nominative\is{case!nominative} case.
The most-patient-like argument of a transitive verb (P) is marked differently: by the accusative\is{case!accusative} case. This is illustrated by the following examples, with an intransitive verb in \REF{intrans1} and a (mono-)transitive verb in \REF{monotrans1}.
\ea\label{intrans1}
\glll så mån tjielka sinne vällahiv\\
så mån tjielka sinne vällahi-v\\
so \Sc{1sg.nom} sled\BS\Sc{gen.sg} in lie-\Sc{1sg.pst}\\\nopagebreak
\Transl{So I lay in the sled.}{} \Corpus{100404}{303}
\z
\ea\label{monotrans1}
\glll dä almatj biejaj risev dále nåvte\\
dä almatj bieja-j rise-v dále nåvte\\
then person\BS\Sc{nom.sg} put-\Sc{3sg.prs} stick-\Sc{acc.sg} now so\\\nopagebreak
\Transl{Then one places the stick like this.}{} \Corpus{100404}{216}
\z
The direct object of a ditransitive\is{valency!ditransitive} verb is also in the \is{case!accusative}accusative case, while the indirect object %’goal’ is semantic role, not grammatical relation which is needed here
of a ditransitive verb is in an oblique case (usually in the illative\is{case!illative} case, which prototypically indicates that the noun refers to the goal of a movement), as illustrated in \REF{ditrans1}.% or even in a postpositional phrase\marginpar{find examples!}.
\ea\label{ditrans1}
\glll mån vaddav suhta buhtsujda biebmov\\
mån vadda-v suhta buhtsu-jda biebmo-v\\
\Sc{1sg.nom} give-\Sc{1sg.prs} several reindeer-\Sc{ill.pl} food-\Sc{acc.sg}\\\nopagebreak
\Transl{I give food to several reindeer.}{} \CorpusE{110413b}{157}
\z
Grammatical relations in \PS\ are thus marked by morphological means. Constituent ordering\is{constituent order} does not indicate grammatical relations in any way. For instance, in \REF{monotrans2} the object precedes the subject. % when the object is in focus, as illustrated by \REF{monotrans2}.%
\ea\label{monotrans2}
\glll ja dáv aj mån vuojnav vinndegest muv dåbest\\
ja d-á-v aj mån vuojna-v vindege-st muv dåbe-st\\
and \Sc{dem}-\Sc{prox}-\Sc{acc.sg} also \Sc{1sg.nom} see-\Sc{1sg.prs} window-\Sc{elat.sg} \Sc{1sg.gen} house-\Sc{elat.sg}\\\nopagebreak
\Transl{And I also see this from the window from my house.}{} \Corpus{100310b}{030}
%\glll sågijd mån anav\\
% sågi-jd mån ana-v\\
% birch-\Sc{acc.pl} \Sc{1sg.nom} use-\Sc{1sg.prs}\\\nopagebreak
%\Transl{I use birch wood}{} \Corpus{090702}{149}
\z
The following section %\ref{constituentOrderClauses}
provides more examples illustrating the flexibility of constituent ordering.
\is{grammatical relations|)}
\section{Constituent order at clause level}\label{constituentOrderClauses}\is{constituent order|(}
Clause-level constituent ordering in \PS\ is not determined syntactically.
That being said, in elicited clauses from the corpus, some ordering patterns do occur more frequently than others, and indicate that SVO ordering is preferred in context-free elicited clauses, everything else being equal.
This is illustrated by the examples in \REF{standardConstOrder1} through \REF{standardConstOrder3}.
\ea\label{standardConstOrder1}%
\glll sån usjuda\\
sån usjuda\\
\Sc{3sg.nom} think\BS\Sc{3sg.prs} \\\nopagebreak
\Transl{He thinks.}{} \CorpusE{081011}{154}
\z
\ea\label{standardConstOrder2}%
\glll mån vuojnav bierdnav\\
mån vuojna-v bierdna-v\\
\Sc{1sg.nom} see-\Sc{1sg.prs} bear-\Sc{acc.sg} \\\nopagebreak
\Transl{I see a bear.}{} \CorpusE{080926}{01m24s}
\z
\ea\label{standardConstOrder3}%
\glll ålmaj vaddá blåmåv kuijdnaj\\
ålmaj vaddá blåmå-v kuijdna-j\\
man\BS\Sc{nom.sg} give\BS\Sc{3sg.prs} flower-\Sc{acc.sg} woman-\Sc{ill.sg}\\\nopagebreak
\Transl{The man gives the flower to the woman.}{} \CorpusE{100324}{65m25s}
\z
It is possible that this ordering is triggered by typical Swedish\is{language contact} constituent ordering, which is generally SVO, as Swedish was often used as the meta-language in elicitation sessions.
More significantly, the part of the \PS\ corpus consisting of natural language situations confirms the lack of any set constituent ordering based on syntactic criteria.\footnote{Note that Sammallahti claims that at least for North Saami (although it is not entirely clear whether he means North Saami or is generalizing for all Saami languages here), the “order of the main constituents […] is largely free from formal restrictions and guided by pragmatic principles”, but then states that the “basic order is SVO” \citep[95]{Sammallahti1998}. This seems to reflect the data from the \PS\ corpus to the extent that context-free elicited clauses tend to be SVO, while in fact no syntactic criteria for constituent ordering can be ascertained in natural language.
Lagercrantz takes several pages to describe a variety of tendencies in constituent ordering for \PS\ declarative clauses, even after describing ordering preferences concerning topic\is{information structure!topic} and focus\is{information structure!focus} within a discourse and summarizing the actual situation by stating quite vaguely that the position of the subject in a clause has a ‘certain stylistic effect’ (“Die Stellung des Satzgegenstandes hat eine gewisse stilistische Wirkung”) \citep[46]{Lagercrantz1926}. %(in pre-information-structure terms).
Perhaps current descriptions of the syntax of the Saami languages would be better served if linguists %(including myself)
would cease trying to force these languages into an inaccurate (but typologically neat) label such as SVO.}
To illustrate this syntactic flexibility, examples of SOV and OSV %and VS %removed: Adv+V+S - no ex. with V in first w/o dä or some other adv! (with overt subject and only one verb)
constituent ordering are provided in \REF{deviantConstituentOrder1} and \REF{deviantConstituentOrder2}, % and \REF{deviantConstituentOrder3},
respectively.
\ea\label{deviantConstituentOrder1}
\glll mån vuostasj vierbmev biejav Áktjuotjålbmáj \\
mån vuostasj vierbme-v bieja-v Áktjuotjålbmá-j \\
\Sc{1sg.nom} first net-\Sc{acc.sg} put-\Sc{1sg.prs} Áktjuotjålbme-\Sc{ill.sg}\\\nopagebreak
\Transl{First I’ll put out the net in Áktjuotjålbme.}{} \Corpus{090702}{024}
\z
\ea\label{deviantConstituentOrder2}%
\glll sågijd mån anav\\
sågi-jd mån ana-v\\
birch-\Sc{acc.pl} \Sc{1sg.nom} use-\Sc{1sg.prs} \\\nopagebreak
\Transl{I use birchwood.}{} \Corpus{090702}{149}
\z
The example in \REF{deviantConstituentOrder4} has VSO constituent order, and additionally has the non-finite verb \is{complement!phrase}complement (with OV ordering) in clause-final position.
\ea\label{deviantConstituentOrder4}%
\glll dä galgav mån gåvåjd vuosedit\\
dä galga-v mån gåvå-jd vuosedi-t\\
then will-\Sc{1sg.prs} \Sc{1sg.nom} picture-\Sc{acc.pl} show-\Sc{inf} \\\nopagebreak
\Transl{Then I will show some pictures.}{} \Corpus{080825}{036}
\z
Attempting to determine constituent order patterns is further complicated by the fact that it is sometimes impossible to tell what the constituent ordering is because NPs\is{phrase!nominal} referring to %presupposed
information provided by context alone are frequently not realized overtly, as in \REF{deviantConstituentOrder4} above and \REF{missingNP1} through \REF{missingNP4} below.
\ea\label{missingNP1}
\glll ber aktak tjårvev adna \\
ber aktak tjårve-v adna \\
only one antler-\Sc{acc.sg} have\BS\Sc{3sg.prs}\\\nopagebreak
\Transl{(The reindeer) only has one antler.}{} \Corpus{100405b}{019}
\z
\ea\label{missingNP2}%
\glll gallga giesset ulgus\\
gallga giesse-t ulgus\\
will\BS\Sc{3sg.prs} pull-\Sc{inf} out\\\nopagebreak
\Transl{(The reindeer herder) will pull (the reindeer buck) out.}{} \Corpus{080909}{017}
\z
\ea\label{missingNP3}%
\glll mån biejav dut\\
mån bieja-v dut\\
\Sc{1sg.nom} put-\Sc{1sg.prs} there\\\nopagebreak
\Transl{I’ll put (the pole) there.}{} \Corpus{100404}{218}
\z
\ea\label{missingNP4}%
\glll ja vadde, Eva-Karin!\\
ja vadde Eva-Karin\\
and give\BS\Sc{2sg.imp} Eva-Karin\\\nopagebreak
\Transl{And give (me) (a sausage), Eva-Karin!}{} \Corpus{090519}{208}
\z
While person\is{person} and number\is{number} markers on the finite verb indicate grammatical information about the subject, there is no overt subject in \REF{missingNP1} or \REF{missingNP2}. The clauses in \REF{missingNP2} through \REF{missingNP4} are lacking overt objects. The final example is also missing the indirect object.
Indeed, a complete clause can consist of nothing more than an inflected verb, as in the response %\marginpar{the question is of course whether ‘beautiful weather’ is not also a complete clause.}
in \REF{missingNP5}, which contains only the copular verb\is{verb!copular} inflected for \Sc{3sg.prs}.
%\clearpage
\ea\label{missingNP5}
\glll \Tn{A:} ja tjábba dállke!\\
{} ja tjábba dállke\\
{} and beautiful weather\BS\Sc{nom.sg} \\\nopagebreak
\TranslMulti{and such beautiful weather!}{}\\ %\Corpus{Hilpert}
\glll \Tn{B:} lä.\\
{} lä\\
{} be\BS\Sc{3sg.prs}\\\nopagebreak
\TranslMulti{yes, it is}{(lit.: ‘is’)\footnotemark} %\Corpus{pit05HilpertDialog}{0m38s}%\footnotemark
%\footnotetext{The recording ‘\hyperlink{pit05HilpertDialog}{pit05HilpertDialog}’ was collected by linguist Martin Hilpert during a pilot project he completed on \PS\ in 2005. I am very grateful to him for providing me with his recordings and annotations. Note that Martin’s recordings are not included in the \PS\ documentation corpus.}
\z
\footnotetext{The source of the example in \REF{missingNP5} is a recording %‘\hyperlink{pit05HilpertDialog}{pit05HilpertDialog}’, which was
collected by linguist Martin Hilpert\aimention{Hilpert, Martin} during a pilot project he completed on \PS\ in 2005. I am very grateful to him for providing me with his recordings and annotations. Note that Martin’s recordings are not included in the \PS\ documentation corpus.}
However, there are no examples in the corpus of verb-initial clauses featuring both an overt subject and a VC with a single verb. While there are plenty of examples of the finite verb preceding the subject and most other clausal constituents, some constituent always precedes the verb. Frequently, it is the adverb \It{dä}, as in \REF{ConstOrderAdvVex1}.
\ea\label{ConstOrderAdvVex1}%
\glll ja dä båhta reksak\\
ja dä båhta reksak\\
and then come\BS\Sc{3sg.prs} ptarmigan\BS\Sc{nom.sg} \\\nopagebreak
\Transl{And then a ptarmigan comes.}{} \Corpus{100404}{241}
\z
If the subject is not realized overtly, or if the VC contains more than one verb form, then the finite verb can be clause initial, as in \REF{ConstOrderAdvVex2} and \REF{ConstOrderAdvVex3}, respectively.
\ea\label{ConstOrderAdvVex2}%
\glll bisij dajd dä såbe nanne\\
bisi-j d-a-jd dä såbe nanne\\
grill-\Sc{3sg.pst} \Sc{dem}-\Sc{dist}-\Sc{acc.pl} then stick\BS\Sc{gen.sg} on\\\nopagebreak
\Transl{He grilled them then on a stick.}{} \Corpus{100404}{125}
\z
\ea\label{ConstOrderAdvVex3}%
\glll ittjiv mån mujte\\
ittji-v mån mujte\\
\Sc{neg}-\Sc{1sg.pst} \Sc{1sg.nom} remember\BS\Sc{conneg} \\\nopagebreak
\Transl{I didn’t remember.}{} \Corpus{100404}{227}
\z
\subsection{Information structure}\label{infoStructure}\is{information structure|(}
Considering the syntactic flexibility described above, it is only reasonable to consider information structure as a constituent ordering\is{constituent order} strategy.
While a thorough investigation of information structure in \PS\ is beyond the scope of the present study %, it seems that constituent order is likely determined by the pragmatic structure of a clause and the text it is part of. While the following is of a preliminary nature,
and must be left for more thorough future research, some preliminary observations can be made. % and so the following is of a preliminary nature.
Specifically, declarative\is{clause!declarative} clauses typically begin with the topic\is{information structure!topic} (frequently the subject in the nominative\is{case!nominative} case) and end with a comment on that topic. If the comment involves a transitive\is{valency!transitive} verb, the object or \is{clause!complement}complement clause (the focus\is{information structure!focus}) normally follows the verb, as in \REF{standardConstOrder2} and \REF{standardConstOrder3} above. However, clausal elements in focus can be moved from their ‘default’ position, which results in significant deviations from the %basic/normal/unmarked/
preferred SVO constituent order. This is reflected in constituent interrogative\is{clause!interrogative!constituent} clauses; here, the interrogative pronoun is in focus and always in clause-initial position %, as evidenced by the fact that interrogative pronouns and adverbs are always clause-initial
(cf.~\SEC\ref{constituentQs}).
The short example text presented in \REF{mouseText4} and \REF{mouseText5} below should serve to give an impression of how information structure may be the driving force behind constituent ordering at clause-level. Here, the speaker is talking about looking inside her mother’s shoes after discovering that a mouse had been in them.
\ea\label{mouseText4}%
\glll ja danne vuojdniv unna jåŋåtjav.\\
ja danne vuojdni-v unna jåŋå-tja-v\\
and there see-\Sc{1sg.pst} small lingonberry-\Sc{dim}-\Sc{acc.sg}\\\nopagebreak
\Transl{And there I saw a little lingonberry.}{} \Corpus{100404}{353}
\ex\label{mouseText5}%
\glll jahkav skafferijav lä danne adnam,\\
jahka-v skafferija-v lä danne adna-m\\
believe-\Sc{1sg.prs} pantry-\Sc{acc.sg} be\BS\Sc{3sg.prs} there have-\Sc{prf}\\\nopagebreak
\Transl{I think (the mouse) had a pantry there.}{} \Corpus{100404}{354}
\z
In the first clause \REF{mouseText4}, the topic\is{information structure!topic} is \It{danne} ‘there’, which refers to the shoes (the topic of the anecdote at this point) and is clause-initial. The constituent \It{jåŋåtjav} ‘lingonberry’ is the focus\is{information structure!focus}, but it is not particularly significant in the anecdote, and it follows the finite verb \It{vuojdniv} ‘I saw’.
However, when particular emphasis is placed on the focus, as in the following clause in \REF{mouseText5}, the constituent in focus can be fronted. Here, \It{skafferijav} ‘pantry’ is in focus, and receives particular emphasis\footnote{The NP \It{skafferijav} ‘pantry’ probably receives special emphasis because it personifies the behavior of the mouse in a light-hearted way by claiming that a mouse can have a pantry.} by occurring before the verb complex \It{lä danne adnam} ‘has had there’, while the topic (the mouse) is not realized overtly at all, but implied by the context and by the finite verb form inflected for \Sc{3sg}.
This fronting of a constituent is often accompanied by higher acoustic intensity, as is the case here.%; such constituents are presented in \Bf{bold} face in the example text.
\is{information structure|)}\is{constituent order|)}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% B A S I C C L A U S E S
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Basic clauses}\label{basicClauses}
A basic clause is a syntactic unit at text-level consisting minimally of a finite\is{verb!finite} verb. In declarative clauses and interrogative clauses, this finite verb is marked morphologically for person\is{person}, number\is{number}, tense\is{tense} and/or mood\is{mood}. Aspect\is{aspect} can be expressed analytically at the clause level using an auxiliary verb and a non-finite verb form\is{verb!non-finite}.
In all basic clauses, the finite verb agrees\is{agreement} in number and, with the exception of imperative \is{mood!imperative}mood, in person with the syntactic subject of the sentence, which is a nominal phrase in the nominative case.
NPs referring to information provided by context alone are not necessarily realized overtly. As a result, the syntactic subject and other verbal arguments are often not overtly present.
The following sections first present basic declarative clauses with intransitive and transitive verbs, existential clauses, copular clauses and complex verbal constructions consisting of more than one verb (\SEC\ref{declClauses}). Then, \SEC\ref{interrogClauses} deals with interrogative clauses, %highlighting the differences between interrogative and declarative clauses, %Negation, which is in fact a special case of complex verbal construction, is treated in \SEC\ref{},
before \SEC\ref{imperClauses} and \SEC\ref{potClauses} cover syntactic aspects of the imperative mood and the potential mood, respectively.
\section{Declarative clauses}\label{declClauses}\is{clause!declarative|(}
Declarative clauses are the most common type of clause in the \PS\ corpus. In the following, declarative clauses with a single verb are dealt with first, covering intransitive and transitive verbs, and two special cases (existential clauses and copular clauses). Then, declarative clauses featuring a modal or auxiliary verb in addition to the lexical head verb are described; because negation\is{negation} is expressed by an auxiliary verb, it is covered in the same section. While constituent ordering is mentioned in the following sections, it mostly refers to tendencies only, and the flexible nature of \PS\ constituent ordering should always be kept in mind, as discussed in \SEC\ref{constituentOrderClauses}.
\subsection{Basic intransitive declaratives}\label{basicIntransDeclaratives}
The subject of an intransitive declarative clause is in the nominative case, %. Such clauses typically have SV constituent ordering when the subject is overt,
as in \REF{basicIntransDeclaratives1} through \REF{basicIntransDeclaratives3}.
\ea\label{basicIntransDeclaratives1}
\glll almatj usjut ja...\\
almatj usjut ja\\
person\BS\Sc{nom.sg} think\BS\Sc{3sg.prs} and \\%odd-syllable verbs for ER have no -a in 3sg.prs, but do have -a for DS, IF
\Transl{One thinks, and...}{} \Corpus{100404}{172}
\z
\ea\label{basicIntransDeclaratives2}
\glll mån tjájmav\\
mån tjájma-v\\
\Sc{1sg.nom} laugh-\Sc{1sg.prs} \\\nopagebreak
\Transl{I laugh.}{} \CorpusE{100323a}{005}
\z
\ea\label{basicIntransDeclaratives3}
\glll dáj Skaile ello såkoj\\
d-á-j Skaile ello såko-j\\
\Sc{dem}-\Sc{prox}-\Sc{gen.pl} Skaile\BS\Sc{gen.pl} reindeer.herd\BS\Sc{nom.sg} drown-\Sc{3sg.pst} \\\nopagebreak
\Transl{The Skaile family’s reindeer herd drowned.}{} \CorpusLink{0906_Ahkajavvre_b}{0906\_Ahkajavvre\_b}{002}
\z
\subsubsection{Clauses with a passive verb}\label{passiveVoice}\is{passive}
When a transitive verb is passivized,\footnote{Transitive verbs can be passivized using the derivational suffix\is{suffix!derivational} \It{-duvv}; cf.~\SEC\ref{VdervPassives} on derivational morphology and \SEC\ref{passiveVinflection} on inflectional morphology for passives.}
its valency is reduced, and it becomes intransitive. In this construction, the patient is the subject of the verb complex\is{phrase!verb complex}, and is therefore marked by the nominative\is{case!nominative} case; the agent may be left out. This is illustrated by the pair of elicited examples in \REF{passVoiceEx1} and \REF{passVoiceEx2}, in which the former is a transitive clause in active voice, and the latter is an intransitive clause in passive voice. %The first clause is a transitive verb in the active, while the second
\ea\label{passVoiceEx1}%
\glll máná lä tsiggim gådev\\
máná lä tsiggi-m gåde-v\\
child\BS\Sc{nom.pl} be\BS\Sc{3pl.prs} build-\Sc{prf} hut-\Sc{acc.sg}\\\nopagebreak
\Transl{Children have built the hut.}{} \CorpusE{110518a}{28m14s}
\z
\ea\label{passVoiceEx2}%
\glll dat lä tsiggiduvvum\\
d-a-t lä tsiggi-duvvu-m\\
\Sc{dem}-\Sc{dist}-\Sc{nom.sg} be\BS\Sc{3sg.prs} build-\Sc{pass}-\Sc{prf}\\\nopagebreak
\Transl{That (hut) was built.}{} \CorpusE{110518a}{27m41s}
\z
Similarly, the example from a narrative in \REF{passVoiceEx2b} presents an intransitive\is{valency!intransitive} passive construction.
\ea\label{passVoiceEx2b}%
\glll dát lä vanj dä gajk vuorasumos dágaduvvum\\
d-á-t lä vanj dä gajk vuorasu-mos dága-duvvu-m\\
\Sc{dem}-\Sc{prox}-\Sc{nom.sg} be\BS\Sc{3sg.prs} probably then all old-\Sc{superl}\BS\Sc{sg} make-\Sc{pass}-\Sc{prf}\\\nopagebreak
\Transl{This was probably the absolute oldest made.}{}\\ \CorpusLink{0906_Ahkajavvre_a}{0906\_Ahkajavvre\_a}{120}
\z
The NP referring to the agent of an event can optionally occur obliquely in the elative case if the verb is passivized, as in \REF{passVoiceEx3}.
\ea\label{passVoiceEx3}%
\glll gåhte lä tsiggiduvvum mánájst\\
gåhte lä tsiggi-duvvu-m máná-jst\\
hut\BS\Sc{nom.sg} be\BS\Sc{3sg.prs} build-\Sc{pass}-\Sc{prf} child-\Sc{elat.pl}\\\nopagebreak
\Transl{The hut has been built by children.}{} \CorpusE{110518a}{28m41s}
\z
\subsection{Basic transitive declaratives}\label{basicMonotransDeclaratives}
In declarative clauses featuring a monotransitive\is{valency!transitive} verb, %typically have SVO constituent ordering. The
the subject is in nominative\is{case!nominative} and is typically the agent, while the object is in the accusative\is{case!accusative} case and is typically the patient of the predicate. Examples can be seen in \REF{basicMonotransDeclaratives1} through \REF{basicMonotransDeclaratives3}.
\ea\label{basicMonotransDeclaratives1}
\glll ja mån vuojnav muähtagav danne\\
ja mån vuojna-v muähtaga-v danne\\
and \Sc{1sg.nom} see-\Sc{1sg.prs} snow-\Sc{acc.sg} here\\\nopagebreak
\Transl{And I see snow here.}{} \Corpus{100404}{020}
\z
\ea\label{basicMonotransDeclaratives2}%maybe not a good example here because ’danne’ is fronted due to focus, more like: “there is where the Saami have their reindeer in the summer”
\glll danne sáme edne båhtsujd giesen\\
danne sáme edne båhtsu-jd giese-n\\
there Saami\BS\Sc{nom.pl} have\BS\Sc{3pl.prs} reindeer-\Sc{acc.pl} summer-\Sc{iness.sg} \\\nopagebreak
\Transl{The Saami keep the reindeer there in the summer.}{} \Corpus{100404}{011}
\z
\ea\label{basicMonotransDeclaratives3}
\glll almatj bedja virbmijd ehket\\
almatj bedja virbmi-jd ehket\\
person\BS\Sc{nom.sg} put\BS\Sc{3sg.prs} fishing.net-\Sc{acc.pl} evening\\%\BS\Sc{nom.sg}\\\nopagebreak
\Transl{One puts out fishing nets in the evening.}{} \Corpus{100310b}{020}
\z
In declarative clauses with a ditransitive\is{valency!ditransitive} verb, the direct object is also in the \is{case!accusative}accusative case and is typically the theme, while the indirect object, typically the recipient, is normally in the illative\is{case!illative} case. Examples can be seen in \REF{basicDitransDeclaratives1} and \REF{basicDitransDeclaratives2}.
\ea\label{basicDitransDeclaratives1}
\glll mån vaddav gålbmå buhtsujda biebmov\\
mån vadda-v gålbmå buhtsu-jda biebmo-v\\
\Sc{1sg.nom} give-\Sc{1sg.prs} three reindeer-\Sc{ill.pl} food-\Sc{acc.sg}\\\nopagebreak
\Transl{I give food to three reindeer.}{} \CorpusE{110413b}{156}
\z
\ea\label{basicDitransDeclaratives2}%
\glll mån vaddav dunje fahtsajt\\
mån vadda-v dunje fahtsa-jt\\
\Sc{1sg.nom} give-\Sc{1sg.prs} \Sc{2sg.ill} glove-\Sc{acc.pl} \\\nopagebreak
\Transl{I give gloves to you.}{} \CorpusE{080926}{01m01s}
\z
\subsection{Existential clauses}\label{existentialClauses}\is{clause!existential|(}
The verb \It{gävdnut}\footnote{Much like its Swedish\is{language contact} counterpart \It{finnas}%\marginpar{is the Swedish comparison relevant? especially without further comment?}
, which is a derivation of the verb \It{finna} ‘find’, the \PS\ verb \It{gävdnut} is derived from the verb \It{gávdnat} ‘find’.}
is used as an existential verb.\footnote{Note that a \is{clause!copular}copular clause containing an adjunct can also be used as an existential; cf.~\SEC\ref{copulaClauses}.}
The item whose existence is posited is the syntactic subject of the sentence, and is thus in the nominative\is{case!nominative} case; \It{gävdnut} agrees\is{agreement} with it in person and number.
Examples can be found in \REF{existential1} and \REF{existential2}.%, as well as in \REF{existential3} below.
\ea\label{existential1}
\glll váren gävdnu aj juomo\\
váre-n gävdnu aj juomo\\
mountain-\Sc{iness.sg} exist\BS\Sc{3pl.prs} also sorrel\BS\Sc{nom.pl}\\\nopagebreak
\Transl{There is sorrel in the mountains, too.}{(lit.: there are sorrels…)} \Corpus{080924}{178}
\z
\ea\label{existential2}
\glll dal itjij gävndoj aktak tjårvebielle\\
dal itji-j gävndoj aktak tjårve-bielle\\
now \Sc{neg}-\Sc{3sg.pst} exist\BS\Sc{conneg} any horn-half\BS\Sc{nom.sg}\\\nopagebreak
\Transl{There isn’t a single \It{tjårvebielle}\footnotemark\ now.}{} \Corpus{100405b}{021}
\z
\footnotetext{A \It{tjårvebielle} (lit.: horn-half) is a term used to describe a reindeer with only one antler remaining after the other antler has broken off.}
The subject frequently follows the verb because the subject is often the focus\is{information structure!focus} of the clause, but as the clause in \REF{existential3} shows, the subject may occur before the verb if it is the topic\is{information structure!topic} and/or is presupposed knowledge (cf.~\SEC\ref{infoStructure} on information structure).
\ea\label{existential3}
\glll motora vadnasij. “motora” ij gävdnu sáme gielan\\
motora vadnasi-j motora ij gävdnu sáme giela-n\\
motor\BS\Sc{gen.sg} boat-\Sc{com.pl} “motor” \Sc{neg}\BS\Sc{3sg.prs} exist\BS\Sc{conneg} Saami\BS\Sc{gen.sg} language-\Sc{iness.sg}\\\nopagebreak
\Transl{…with motor boats. There is no (word for) “motor” in the Saami language.}{} \Corpus{080924}{482,484}
\z
\is{clause!existential|)}
\subsection{Copular clauses}\label{copulaClauses}\is{clause!copular|(}
There are several types of copular clauses in Pite Saami. All of these feature the copular verb\is{verb!copular} \It{årrot} ‘be’ %, which inflects for number and person of the subject and is marked for tense or mood when
(see \SEC\ref{theCopulaVerb} for more details).
Copular clauses can be used to express a variety of information about the subject referent, and these are discussed below.
When the complement of the copula is an NP\is{phrase!nominal} in nominative\is{case!nominative} case, it identifies or classifies the subject referent, as in \REF{copula1} and \REF{copula2}, respectively.
\ea\label{copula1}
\glll Mattijá lij morbror munje\\
Mattijá li-j {morbror\footnotemark} munje\\
Matthias\BS\Sc{nom.sg} be-\Sc{3sg.pst} maternal.uncle\BS\Sc{nom.sg} \Sc{1sg.ill}\\\nopagebreak
\Transl{Matthias was my maternal uncle.}{} \CorpusLink{0906_Ahkajavvre_a}{0906\_Ahkajavvre\_a}{007}%(lit.: uncle to me)
\z
\ea\label{copula2}
\glll mån lev sábme\\
mån le-v sábme\\
\Sc{1sg.nom} be-\Sc{1sg.prs} Saami\BS\Sc{nom.sg}\\\nopagebreak
\Transl{I am a Saami.}{} \Corpus{080813}{00m35s}
\z
\footnotetext{Note that \It{morbror} is a nonce borrowing from \is{language contact}Swedish. Cf.~Swedish \It{morbror} ‘maternal uncle’.}
The complement of a copular clause can be one or more \is{phrase!adjectival}adjectival phrases headed by a predicative adjective which ascribes properties to the subject referent, as in \REF{copula3}.
\ea\label{copula3}
\glll buhtsoj lä nav buojde ja tjábbe\\
buhtsoj lä nav buojde ja tjábbe\\
reindeer\BS\Sc{nom.pl} be\BS\Sc{3pl.prs} so fat\BS\Sc{pl} and beautiful\BS\Sc{pl}\\\nopagebreak
\Transl{The reindeer are so fat and beautiful.}{} \Corpus{080703}{014}
\z
The complement of a copular clause can be an NP in the inessive\is{case!inessive} case, in which case it describes the location of the subject referent, as in \REF{copula4}.
\ea\label{copula4}
\glll måj lijmen Fuordnagin\\
måj lij-men Fuordnagi-n\\
\Sc{1du.nom} be-\Sc{1du.pst} Fuordnak-\Sc{iness.sg}\\\nopagebreak
\Transl{We two were in Fuordnak.}{} \Corpus{080924}{590}
\z
The complement of a copular clause can be an NP in the elative\is{case!elative} case, in which case it describes the material which the subject referent is made of, as in \REF{copula5}.
\ea\label{copula5}
\glll ja dát lä aj struvdast\\
ja d-á-t lä aj struvda-st\\
and \Sc{dem}-\Sc{prox}-\Sc{nom.sg} be\BS\Sc{3sg.prs} also cloth-\Sc{elat.sg}\\\nopagebreak
\Transl{And this is also (made) of cloth.}{} \CorpusLink{080708_Session08}{080708\_Session08}{015}
\z
A copular clause can also function as an \is{clause!existential}existential clause when it includes a temporal adjunct. In such cases, the existence of the subject referent is posited at the particular time indicated by the adjunct. Pragmatically, this usually announces an event connected to the subject referent. Typically, the temporal referent occurs first\is{constituent order} in the sentence, then the copular verb, and the subject is last (just as with the existential verb \It{gävdnut}; cf.~\SEC\ref{existentialClauses}), as it is usually the \is{information structure!focus}focus.
This is illustrated by the example in \REF{copulaNr}.
\ea\label{copulaNr}
\glll ja dále'l káffa\\
ja dále=l káffa\\
and now=be\BS\Sc{3sg.prs} coffee\BS\Sc{nom.sg}\\\nopagebreak
\Transl{And now it’s coffee (time).}{} \Corpus{090519}{313}
\z
Possession\is{possession} can also be expressed by a copular construction. In such a construction, the possessed NP is the subject of the clause and is in the nominative\is{case!nominative} case; the finite verb agrees\is{agreement} with it in person and number. The possessor NP is in the inessive\is{case!inessive} case. Such a construction is illustrated by the example in \REF{copula6}.
\ea\label{copula6}
\glll muvne lä akta mánná\\
muvne lä akta mánná\\
\Sc{1sg.iness} be\BS\Sc{3sg.prs} one child\BS\Sc{nom.sg}\\\nopagebreak
\Transl{I have one child.}{} \CorpusE{080621}{30m54s}
\z
In the corpus, such possessive constructions always have the constituent order
\Sc{possessor}\PLUS\Sc{copula}\PLUS\Sc{possessed}.
%\Sc{possessor} \PLUS\ \Sc{copula} \PLUS\ \Sc{possessed}.
%\hfill\Sc{possessor} \PLUS\ \Sc{copula} \PLUS\ \Sc{possessed}\hfill\hbox{}
While this type of possessive construction is the native Saamic structure \citep[9]{Bergsland1977}, it is very uncommon in the \PS\ corpus, and almost exclusively limited to elicitation sessions. The elicitation scenario may have had an effect on the constituent order,\footnote{The equivalent \is{language contact}Swedish structure is also normally \Sc{possessor \PLUS\ verb \PLUS\ possessed}.}
but it is more likely the case that the constituent order reflects information structure preferences, specifically the tendency for the topic\is{information structure!topic} (more often the possessor, which is animate) to come before the \is{information structure!focus}focus\footnote{Cf.~\SEC\ref{infoStructure} on information structure.}
(more often the possessed, which is inanimate).
In any case, a clause-level construction using the monotransitive\is{valency!transitive} verb \It{adnet} ‘have’\footnote{Historically, \It{adnet} meant ‘use’ or ‘keep’ \citep[cf.][10]{Lehtiranta2001}, but synchronically it is most frequently used to indicate possession in a transitive verb construction which is essentially identical to the equivalent \is{language contact}Swedish verb \It{ha} or the English verb \It{have}.}
expressing possession, as in \REF{copula7}, is now the standard in Pite Saami.\footnote{While this construction using the transitive verb \It{adnet} ‘have’ is clearly not a copular clause, it is worth pointing this out here, particularly for any readers from Uralic studies, because the copular construction (as in the example in \REF{copula6}) is no longer the true standard for expressing possession at clause level in \PS.}
\ea\label{copula7}
\glll ja dä inijmä gusajd \\
ja dä ini-jmä gusa-jd \\
and then have-\Sc{1pl.pst} cow-\Sc{acc.pl}\\\nopagebreak
\Transl{And then we had cows.}{} \Corpus{080924}{091}
\z
\is{clause!copular|)}
\subsection{Multi-verb declarative clauses}\label{multiVdeclarativeClauses}
Verbs which govern non-finite\is{verb!non-finite} verbal complements\is{clause!complement} can be classified into three groups based on the type of non-finite complement verb form they co-occur with, as illustrated in Table~\vref{auxVerbTable}. % are three kinds of such verbs, with each category triggering a different kind of non-finite verb form:
\begin{table}[ht]\centering
\caption{Verbs accompanied by a non-finite complement verb}\label{auxVerbTable}
\begin{tabular}{ll}\mytoprule
{} &{non-finite form of complement} \\\hline
modals & \INF \\
aspectual auxiliary & \PRF\ / \PROG \\%\hline
negation verb & \CONNEG \\\mybottomrule
\end{tabular}
\end{table}
The finite verb occurs before the non-finite lexical complement verb, unless the complement is in \is{information structure!focus}focus, in which case it can occur before the finite verb.
These verb types are dealt with in \SEC\ref{modalVs}, \SEC\ref{auxV} and \SEC\ref{negation}, respectively.
\subsubsection{Modal verbs}\label{modalVs}\is{verb!modal|(}
Modal verbs are used to express modality for the event denoted by the verbal complement. The \is{complement!phrase}complementing verb is in the infinitive (marked by the \mbox{suffix \It{-t}).} Modal verbs include \It{máhttat} ‘can, be able to’,
\It{ådtjot}\footnote{The modal verb \It{ådtjot} ‘be allowed to’ is homophonous with the full verb \It{ådtjot} ‘get, receive’. This pattern is found in \is{language contact}Swedish, as well, with the verb \It{få} ‘be allowed to’ and ‘receive’, and in English, e.g.: ‘I get to go to the movies’.}
‘may, be allowed to’,
\It{virrtit} ‘must’, \It{hähttut}\footnote{The word \It{hähttut} ‘must’ is likely limited to northern dialects of Pite Saami.}
‘must’, \It{sihtat} ‘want’ and \It{gallgat} ‘will/shall’.
Some examples are provided in \REF{modalVerbs1} through \REF{modalVerbs3}. %\marginpar{what about bedjat ‘put’ used as let+INF (causative)?}
\ea\label{modalVerbs1}%
\glll tjátsev ådtjobihtet juhgat dasste\\
tjátse-v ådtjo-bihtet juhga-t d-a-sste\\
water-\Sc{acc.sg} may-\Sc{2pl.prs} drink-\Sc{inf} \Sc{dem}-\Sc{dist}-\Sc{elat.sg}\\\nopagebreak
\Transl{You all may drink water from that.}{} \Corpus{090519}{022}
\z
\ea\label{modalVerbs2a}%
\glll ja dä del virrtin allget bäbbmat\\
ja dä del virrti-n allge-t bäbbma-t\\
and then well must-\Sc{3pl.pst} begin-\Sc{inf} feed-\Sc{inf}\\\nopagebreak
\Transl{And then they had to start to feed (the reindeer).}{} \Corpus{100405a}{029}
\z
\ea\label{modalVerbs3}%
\glll mij máhttep ságastit Bidumsáme gielav\\
mij máhtte-p ságasti-t Bidum-sáme giela-v\\
\Sc{1pl.nom} can-\Sc{1pl.prs} speak-\Sc{inf} Pite-Saami\BS\Sc{gen.sg} language-\Sc{acc.sg}\\\nopagebreak
\Transl{We can speak the Pite Saami language.}{} \Corpus{110517b2}{022}
\z
The modal verb \It{sihtat} ‘want’ behaves the same when the subject of the complementing verb complex is coreferential with the subject of the matrix clause, as in \REF{modalVerbs4}.
\ea\label{modalVerbs4}%
\glll mån sidav gulijd adnet\\
mån sida-v guli-jd adne-t\\
\Sc{1sg.nom} want-\Sc{1sg.prs} fish-\Sc{acc.pl} have-\Sc{inf}\\\nopagebreak
\Transl{I want to have fish.}{} \Corpus{090702}{012}
\z
However, when the subject of the modal verb \It{sihtat} is not coreferential with the subject of the \is{complement!phrase}complementing verb complex\is{phrase!verb complex}, then a finite verb clause %with a finite verb inflected for the second subject
is the complement to the modal verb, as in the negated clause in \REF{modalVerbs5}. Note that, here, \It{sihtat} is in the connegative non-finite form \It{sida}. %Note that, Here the complement of the main verb \It{sida} (the connegative non-finite form) is the
\ea\label{modalVerbs5}%
\glll dä ij del almatj sida nagin sadjáj vuällget\\
dä i-j del almatj sida nagin sadjá-j vuällge-t\\
then \Sc{neg}-\Sc{3sg.prs} obviously person\BS\Sc{nom.sg} want\BS\Sc{conneg} some place-\Sc{ill.sg} go-\Sc{inf}\\\nopagebreak
\Transl{Then one obviously doesn’t want to go anywhere.}{} \Corpus{080924}{052}
%%following not good because ‘sida’ has an object-argument (mav) AND complement argument (galgav…)
%\glll mav sida galgav enabujt ságastit\\
% ma-v sida galga-v ena-bu-jt ságasti-t\\
% what-\Sc{acc.sg} want\BS\Sc{2sg.prs} will-\Sc{1sg.prs} much-\Sc{comp-acc.pl} say-\Sc{inf}\\\nopagebreak
%\Transl{what more do you want me to say?}{} \Corpus{100404}{179}
\z
The modal verb \It{gallgat} ‘will’ can also be used to locate events in the future\is{tense!future}, as in \REF{future1} through \REF{future3} below.
\ea\label{future1}%not really sure about the parsing of ‘såme-s’ as some-ATTR, don’t have PRED example!
\glll dä galgav såmes mujjtemuv ságastit\\
dä galga-v såmes mujjtemu-v ságasti-t\\
then will-\Sc{1sg.prs} some memory-\Sc{acc.sg} say-\Sc{inf}\\\nopagebreak
\Transl{Then I will tell (you) a memory.}{} \Corpus{100703a}{001}
\z
\ea\label{future2}%
\glll jo, da lä akta vuoberis, gallga giesset ulgus\\
jo da lä akta {vuoberis\footnotemark} gallga giesse-t ulgus\\
yes then be\BS\Sc{3sg.prs} one buck\BS\Sc{nom.sg} will\BS\Sc{3sg.prs} pull-\Sc{inf} out\\\nopagebreak
\Transl{Yes, it’s a 3-year-old reindeer buck, (he) will pull (it) out.}{} \Corpus{080909}{016-017}
\footnotetext{Specifically, a \It{vuoberis} is a three-year-old reindeer buck, but the gloss has been shortened to save space.}
\z
\ea\label{future3}%
\glll gallgap dav girjev ådtjot\\
gallga-p d-a-v girje-v ådtjo-t\\
will-\Sc{1pl.prs} \Sc{dem}-\Sc{dist}-\Sc{acc.sg} book-\Sc{acc.sg} get-\Sc{inf}\\\nopagebreak
\Transl{We will get that book.}{} \Corpus{110517b2}{022}
\z
The modal verb \It{gallgat} ‘will’ is often used in conditional\is{mood!conditional} clauses%\marginpar{does gallgat in conditionals need more explanation?}
, as in \REF{condClause1}.
\ea\label{condClause1}%
\glll jus galga sáme viesov valldet, dä galga mielagav dal navt rutastit\\
jus galga sáme vieso-v vallde-t dä galga mielaga-v dal navt rutasti-t\\
if will\BS\Sc{3sg.prs} Saami\BS\Sc{gen.sg} life-\Sc{acc.sg} take-\Sc{inf} then will\BS\Sc{3sg.prs} sternum-\Sc{acc.sg} then so pull-\Sc{inf}\\\nopagebreak
\Transl{If I choose Saami style, then I will pull the sternum like this.}{} \Corpus{080909}{097}
\z
\is{verb!modal|)}
\subsubsection{The aspectual auxiliary verb \It{årrot}}\label{auxV}\is{verb!auxiliary|(}
The auxiliary verb \It{årrot} ‘be’ %\marginpar{check to see if årrot can really be claimed to be the infinitive form, see if “will have gone” is possible, for instance}
together with a non-finite \is{complement!phrase}complement verb\is{verb!non-finite} is used to mark the perfective and progressive aspects\is{aspect}. This auxiliary verb is homophonous with the copular verb\is{verb!copular}, and is also glossed as ‘be’.
In the perfective aspect, the complement verb is in a non-finite form marked by the suffix \It{-m}, as in \REF{perfClause1} and \REF{perfClause2}, while in the progressive aspect, the complement verb is marked by the suffix \It{-min}, %\marginpar{should progress be analyzed as -m-in ‘perf-prog’? if so, why?}
as in \REF{progClause1} and \REF{progClause2}.
\ea\label{perfClause1}%
\glll denne liv riegadam\\
denne li-v riegada-m\\
there be-\Sc{1sg.prs} be.born-\Sc{prf}\\\nopagebreak
\Transl{I was born there.}{} \Corpus{090702}{008}
\z
\ea\label{perfClause2}%maybe move this NEG example to the NEG section?!
\glll lä dån mannam nagin bále ja tjuvvum Vistegij?\\
lä dån manna-m nagin bále ja tjuvvu-m Visteg-ij\\
be\BS\Sc{2sg.prs} \Sc{2sg.nom} go-\Sc{prf} some time\BS\Sc{gen.sg} and accompany-\Sc{prf} Vistek-\Sc{ill.sg}\\\nopagebreak
\Transl{Have you ever gone and accompanied (them) to Vistek?}{} \Corpus{080924}{630}
\z
\ea\label{progClause1}%not sure about spelling/translation for ‘rudastet’
\glll men mån lev tjåjvev ruhtastemin ullgus\\
men mån le-v tjåjve-v ruhtaste-min ullgus\\
but \Sc{1sg.nom} be-\Sc{1sg.prs} stomach-\Sc{acc.sg} cut-\Sc{prog} out\\\nopagebreak
\Transl{But I’m cutting out the stomach.}{} \Corpus{080909}{054}
\z
\ea\label{progClause2}
\glll nå, mav lä låhkåmin?\\
nå ma-v lä låhkå-min\\
well what-\Sc{acc.sg} be\BS\Sc{3sg.prs} read-\Sc{prog}\\\nopagebreak
\Transl{Well, what is he studying?}{(lit.: reading)} \Corpus{080924}{667}
\z
\is{verb!auxiliary|)}
\enlargethispage{\baselineskip}
\subsubsection{The negation verb}\label{negation}\is{verb!negation verb|(}
\PS\ clause negation is expressed by a special finite negation\is{negation} verb. Unlike other verbs, the negation verb does not have an infinitive or any other non-finite form\is{verb!non-finite}, but is always a finite verb (cf.~\SEC\ref{theNegationVerb}). As such, it always agrees\is{agreement} in person and number with the subject of the clause, and inflects for tense and mood as well.\footnote{In this respect, \PS\ differs significantly from, for instance, North Saami negative clauses, in which the main verb and not the finite negation verb inflects for tense \citep[cf.][92]{Svonni2009}.}
The \is{complement!phrase}complement verb occurs in a special non-finite form called the connegative.
Examples of the negation verb can be found in \REF{negation1} through \REF{negation3}.
\ea\label{negation1}%
\glll iv jáhke\\
i-v jáhke\\
\Sc{neg}-\Sc{1sg.prs} believe\BS\Sc{conneg} \\\nopagebreak
\Transl{I don’t believe so.}{} \Corpus{090702}{411}
\z
\ea\label{negation2}%
\glll ittjij åbbå gävdno vuodja åsstet\\
ittji-j åbbå gävdno vuodja åsste-t\\
\Sc{neg}-\Sc{3sg.pst} at.all exist\BS\Sc{conneg} butter\BS\Sc{nom.sg} buy-\Sc{inf}\\\nopagebreak
\Transl{There wasn’t any butter to buy at all.}{} \CorpusLink{080708_Session03}{080708\_Session03}{006}
\z
\ea\label{negation3}%
\glll men ijtjin del bårå dan sisste \\
men ijtji-n del bårå d-a-n sisste \\
but \Sc{neg}-\Sc{3pl.pst} then eat\BS\Sc{conneg} \Sc{dem}-\Sc{dist}-\Sc{gen.sg} out\\\nopagebreak
\Transl{But they didn’t eat out of this.}{} \CorpusLink{080708_Session03}{080708\_Session03}{019}
\z
In the examples above, the non-finite \is{complement!phrase}complement in the connegative\is{verb!non-finite!connegative} form is a lexical verb. In the following examples in \REF{negation4} through \REF{negation6}, the complement connegative verb is a \is{verb!modal}modal or \is{verb!auxiliary}auxiliary verb whose own complement then follows in the appropriate non-finite form.
\ea\label{negation4}%
\glll ij vanj dä máhte ilá stuor dålåv adnet\\
ij vanj dä máhte ilá stuor dålå-v adne-t\\
\Sc{neg}\BS\Sc{3sg.prs} well then can\BS\Sc{conneg} too big fire-\Sc{acc.sg} have-\Sc{inf}\\\nopagebreak
\Transl{But you can’t have too big of a fire.}{} \Corpus{090702}{176}
\z
\ea\label{negation5}%
\glll dä iv lä åbbå gullam dav\\
dä i-v lä åbbå gulla-m d-a-v\\
then \Sc{neg}-\Sc{1sg.prs} be\BS\Sc{conneg} at.all hear-\Sc{prf} \Sc{dem}-\Sc{dist}-\Sc{acc.sg}\\\nopagebreak
\Transl{I haven’t heard that at all.}{} \Corpus{090702}{203}
\z
\ea\label{negation6}%
\glll nej, mån iv lä bårråm, men Jåssjå'l bårråm\\
nej mån i-v lä bårrå-m men Jåssjå=l bårrå-m\\
no \Sc{1sg.nom} \Sc{neg}-\Sc{1sg.prs} be\BS\Sc{conneg} eat-\Sc{prf} but Josh\BS\Sc{nom.sg}=be\BS\Sc{3sg.prs} eat-\Sc{prf}\\\nopagebreak
\Transl{No, I haven’t eaten (it), but Josh has eaten (it).}{} \Corpus{090519}{147}
\z
While \PS\ constituent order\is{constituent order} is generally flexible (cf.~\SEC\ref{constituentOrderClauses}), there are no examples in the corpus of the negation verb occurring after its negated complement; instead, the connegative complement verb always follows the finite negation verb in a clause.
\is{clause!declarative|)}
\is{verb!negation verb|)}
\section{Interrogative clauses}\label{interrogClauses}
Constituent interrogative clauses in \PS\ are consistently marked as such syntactically, and thus are distinct from declarative clauses. Polar interrogative clauses, on the other hand, do not differ significantly from declarative clauses, although some syntactic constructions are more common than others. The following sections deal first with constituent interrogative clauses, then polar interrogative clauses in more detail.
\subsection{Constituent interrogative clauses}\label{constituentQs}\is{clause!interrogative!constituent|(}
Constituent interrogative clauses are the only type of independent clause in \PS\ which is consistently marked syntactically as a clause type. Specifically, every constituent interrogative clause is marked as such by having an interrogative word or phrase in clause-initial position. When it is an interrogative pronoun, it inflects for case and number consistent with its grammatical role in the clause (as with any pronoun), while the humanness of its (expected) referent determines the choice of the root.\footnote{Interrogative pronouns referring to non-human NPs all feature a \It{ma-} root, while those referring to human NPs have a \It{ge-} root. Interrogative adverbs also begin with \It{g-}. Cf.~\SEC\ref{interrogativePronouns}.}
Some examples can be found in \REF{questionWordQ1} through \REF{questionWordQ4}.
\ea\label{questionWordQ1}%
\glll mav dån hålå?\\
ma-v dån hålå\\
what-\Sc{acc.sg} \Sc{2sg.nom} say\BS\Sc{2sg.prs}\\\nopagebreak
\Transl{What are you saying?}{} \Corpus{090519}{329}
\z
\ea\label{questionWordQ2}%
\glll majd dä viehkedi?\\
ma-jd dä viehkedi\\
what-\Sc{acc.pl} then help\BS\Sc{2sg.pst}\\\nopagebreak
\Transl{What then did you help (with)?}{} \Corpus{080924}{615}
\z
\ea\label{questionWordQ3}%
\glll gejna dä tjuovvo\\
ge-jna dä tjuovvo\\
who-\Sc{com.sg} then accompany\BS\Sc{2sg.pst}\\\nopagebreak
\Transl{Who did you go with?}{} \Corpus{080924}{071}
\z
\ea\label{questionWordQ4}%
%\glll nå, man ednak biejve galga danne årrot\\
% nå man ednak biejve galga danne årro-t\\
% well how many day\BS\Sc{nom.pl} will\BS\Sc{2sg.prs} there be-\Sc{2sg.pst}\\\nopagebreak
%\Transl{well, how many days will you be there?}{} \Corpus{080924}{658}
\glll man mällgadav ana dajd riehpenen\\
man mällgada-v ana d-a-jd riehpene-n\\
how long-\Sc{acc.sg} have\BS\Sc{2sg.prs} \Sc{dem}-\Sc{dist}-\Sc{acc.pl} smoke.hole-\Sc{iness.sg}\\\nopagebreak
\Transl{How long do you have those in the smoke hole for?}{} \Corpus{090702}{168}
\z
Alternatively, the interrogative can be an adverb\is{adverb}, as in \REF{questionWordQ6} through \REF{questionWordQ8}. %\Red{intonation? nothing special? (not syntactic anyway).}
\ea\label{questionWordQ6}%
\glll gukt lä dát\\
gukte lä d-á-t\\
how be\BS\Sc{3sg.prs} \Sc{dem}-\Sc{prox}-\Sc{nom.sg}\\\nopagebreak
\Transl{How is it?}{} \Corpus{080924}{130}
\z
\ea\label{questionWordQ7}%
\glll guste dån bådá\\
guste dån bådá\\
from.where \Sc{2sg.nom} come\BS\Sc{2sg.prs}\\\nopagebreak
\Transl{Where are you coming from?}{} \Corpus{080924}{003}
\z
\ea\label{questionWordQ8}%
\glll gånne dajt tjogidä\\
gånne d-a-jt tjogi-dä\\
where \Sc{dem}-\Sc{dist}-\Sc{acc.pl} pick-\Sc{2pl.pst}\\\nopagebreak
\Transl{Where did you pick those?}{} \Corpus{080924}{168}
\z
Assuming that any constituent which is the pragmatic focus\is{information structure!focus} can be marked by fronting, as preliminarily asserted in \SEC\ref{infoStructure}, the fronting of the interrogative word is consistent with focus-marking. However, for constituent interrogative clauses, fronting is obligatory.
The rest of the clause is constructed syntactically just as freely as any declarative\is{clause!declarative} clause would be. While subject-verb inversion can occur, the flexible nature of \PS\ constituent ordering prevents this from necessarily marking a clause as interrogative.
It is worth noting that the particle \It{nå}, which can be translated as ‘well’ or sometimes ‘yes’, frequently precedes constituent interrogative clauses, as in \REF{nåQ1}. However, it is not obligatory, nor is it restricted to interrogative clauses. It is likely a discourse marker, perhaps simply indicating the speaker’s active interest in the conversation.
\ea\label{nåQ1}%
\glll nå gukte lij Áhkabákten gu dånnå lidje mánná?\\
nå gukte li-j Áhkabákte-n gu dånnå lidje mánná\\
well how be-\Sc{3sg.pst} Áhkkabákkte-\Sc{iness.sg} when \Sc{2sg.nom} be\BS\Sc{2sg.pst} child\BS\Sc{nom.sg}\\\nopagebreak
\Transl{Well what was Áhkkabákkte like when you were a child?}{} \Corpus{080924}{063}
\z
\is{clause!interrogative!constituent|)}
\subsection{Polar interrogative clauses}\label{polarQs}\is{clause!interrogative!polar|(}
Because of flexible constituent ordering in Pite Saami, there is no reliable syntactic test for whether a clause is a polar interrogative. The intonation\is{intonation} of polar questions does not seem to differ significantly from that of other clause types, either.
However, polar interrogative clauses frequently have a constituent order\is{constituent order} in which the finite\is{verb!finite} verb occurs before the subject.
Furthermore, this finite verb is generally the first element in a clause. The examples in \REF{polarQinversion1} through \REF{polarQinversion3} illustrate this.
\ea\label{polarQinversion1}%enabu? enabuV?
\glll galga dån ságastit enabuv?\\
galga dån ságasti-t ena-bu-v\\
will\BS\Sc{2sg.prs} \Sc{2sg.nom} say-\Sc{inf} much-\Sc{comp}-\Sc{acc.sg}\\\nopagebreak
\Transl{Are you going to say more?}{} \CorpusLink{0906_Ahkajavvre_b}{0906\_Ahkajavvre\_b}{041}
\z
\ea\label{polarQinversion2}%from elicitation, but prompted by DS herself%not sure of the derivational morphology in suovadit, either
\glll suovade dån?\\
suovade dån\\
smoke\BS\Sc{2sg.prs} \Sc{2sg.nom}\\\nopagebreak
\Transl{Do you smoke?}{} \Corpus{080702b}{073}
\z
\ea\label{polarQinversion3}%
\glll lij sån uktu jala lij Halvar aj maŋŋen\\
li-j sån uktu jala li-j Halvar aj maŋŋen\\
be-\Sc{3sg.pst} \Sc{3sg.nom} alone or be-\Sc{3sg.pst} Halvar\BS\Sc{nom.sg} also along\\\nopagebreak
\Transl{Was he alone or was Halvar also along?}{} \Corpus{080924}{308}
\z
As with any \PS\ clause, the syntactic subject does not have to be realized overtly. In such cases, the finite verb is also usually clause-initial, as in \REF{polarQinversion4}.
\ea\label{polarQinversion4}%
\glll udtju sáme gielav danne sagastit?\\
udtju sáme giela-v danne sagasti-t\\
may\BS\Sc{2sg.pst} Saami\BS\Sc{gen.sg} language-\Sc{acc.sg} there speak-\Sc{inf}\\\nopagebreak
\Transl{Were you allowed to speak Saami there?}{} \Corpus{080924}{351}
\z
However, it is also possible to front other elements which normally occur after the finite verb, as in \REF{polarQinversion5} and \REF{polarQinversion6}. Here, the non-finite\is{verb!non-finite} perfect form of the \is{complement!phrase}complement verb immediately precedes the aspect-marking\is{aspect} auxiliary verb.
\ea\label{polarQinversion5}%
\glll juhkum lä gajtsa mielkev?\\
juhku-m lä gajtsa mielke-v\\
drink-\Sc{prf} be\BS\Sc{2sg.prs} goat\BS\Sc{gen.sg} milk-\Sc{acc.sg}\\\nopagebreak
\Transl{Have you ever drunk goat’s milk?}{} \Corpus{080924}{128}
\z
\ea\label{polarQinversion6}
\glll bårråm lä dån biergov danne?\\
bårrå-m lä dån biergo-v danne\\
eat-\Sc{prf} be\BS\Sc{2sg.prs} \Sc{2sg.nom} meat-\Sc{acc.sg} there\\\nopagebreak
\Transl{Have you eaten meat there?}{} \Corpus{090519}{130}
\z
\subsubsection{Polar interrogatives and the question marker}\label{Qparticle}\is{interrogative!question marker}
It is possible for polar interrogative clauses to be identified as such by a question marker \It{gu}\TILDE\It{gus} following the finite verb. However, the use of the question marker in polar interrogatives is exceptionally uncommon and can hardly be considered obligatory in current \PS\ usage; this is reflected in the data from the corpus, which contain only three tokens. See \SEC\ref{QpartWordform} for a preliminary discussion of the question marker, including the three tokens from the corpus. % in examples \REF{Qpart1} through \REF{Qpart3} on page \pageref{Qpart1}.
\is{clause!interrogative!polar|)}
\section{Clauses in the imperative mood}\label{imperClauses}\is{mood!imperative}
Clauses in the imperative mood stand out syntactically by lacking an overt subject NP. Furthermore, they are marked by special portmanteau morphemes %\footnote{These imperative portmanteau morphemes indicate, in addition to imperative, the person and number of the syntactic subject, which is generally not overtly realized in imperative clauses.}
on the finite verb which express imperative mood as well as the number of the implied subject of the clause, which is always 2\superS{nd} person.
The finite verb tends to be in clause-initial position, as shown by the examples in \REF{impClause1}\footnote{\It{Låddávvre} is the name of a lake.}
and \REF{impClause2}.
\ea\label{impClause1}%
\glll giehto naginav dan Låddávre birra\\
giehto nagina-v d-a-n Låddávre birra\\
tell\BS\Sc{sg.imp} something-\Sc{acc.sg} \Sc{dem}-\Sc{dist}-\Sc{gen.sg} Låddávvre\BS\Sc{gen.sg} about\\\nopagebreak
\Transl{Say something about this Låddávvre!}{} \Corpus{080924}{314}
\z
\ea\label{impClause2}%from elicitation!
\glll bieja pirunav bävvdaj\\
bieja piruna-v bävvda-j\\
put\BS\Sc{sg.imp} potato-\Sc{acc.sg} table-\Sc{ill.sg}\\\nopagebreak
\Transl{Put the potato on the table!}{} \CorpusE{101208}{478}
\z
%However, other constituents may occur clause-initially as well, as in the standard phrase for ‘thank you’, shown in \REF{impClause3}.
Nonetheless, the standard phrase for ‘thank you’, shown with dual marking in \REF{impClause3}, indicates that a constituent other than the finite verb may occur before the finite verb in the imperative mood.
\ea\label{impClause3}
\glll gijtov adnen\\
gijto-v adne-n\\
thank-\Sc{acc.sg} have-\Sc{du.imp}\\\nopagebreak
\Transl{Thank you (two)!}{(lit.: have thanks!)} \CorpusE{101208}{292}
\z
However, no other examples of such constructions are found in the corpus, and this constituent ordering may be due to this phrase being a common, non-productive lexicalized expression calqued from the \is{language contact}Swedish expression \It{tack ska du ha!} (literally ‘thanks you shall have!’).
The adverb \It{dále} ‘now’ is common in imperative\is{mood!imperative} clauses, and is frequently abbreviated to \It{dál}, as in \REF{impClause5}.
\ea\label{impClause5}
\glll årren dál\\
årre-n dál\\
sleep-\Sc{du.imp} now\\\nopagebreak
\Transl{Go to sleep now!}{} \CorpusE{110518a}{06m55s}
\z
\section{Clauses in the potential mood}\label{potClauses}\is{mood!potential|(}
Aside from featuring a finite verb inflected for the potential mood\footnote{Cf.~\SEC\ref{POTmood} on the usage and the morphology of the potential mood.} by the \It{-tj} suffix, clauses in the potential mood generally lack an overt subject argument, as in %\marginpar{\REF{potSyntaxEx2} and \REF{potSyntaxEx3} already in other section on \POTs!}
\REF{potSyntaxEx2} and \REF{potSyntaxEx3}.
\ea\label{potSyntaxEx2}
\glll nå hålåv, vuolgetjip del\\
nå hålå-v vuolge-tji-p del\\
well say-\Sc{1sg.prs} go-\Sc{pot}-\Sc{1pl} obviously\\\nopagebreak
\Transl{Well then I say we really should probably go.}{} \Corpus{090702}{013}
\z
\ea\label{potSyntaxEx3}
\glll nä, virtitjav nuollat\\
nä virti-tja-v nuolla-t\\
no must-\Sc{pot}-\Sc{1sg} undress-\Sc{inf}\\\nopagebreak
\Transl{Oh no, I’ll probably have to take off some clothes.}{} \Corpus{090519}{029}
\z
However, as the clause in \REF{potSyntaxEx1} makes clear, it is possible to have an overt subject argument.
\ea\label{potSyntaxEx1}
\glll jus sån vuosjatja káfav\\
jus sån vuosja-tj-a káfa-v\\
if \Sc{3sg.nom} prepare.coffee-\Sc{pot}-\Sc{3sg} coffee-\Sc{acc.sg}\\\nopagebreak
\Transl{If he will perhaps make coffee.}{} \CorpusE{110404}{269}
\z
With this in mind, clauses in the potential mood do not differ syntactically from declarative\is{clause!declarative} clauses. %, although the frequency of lacking overt subject arguments seems to be higher for potential mood than declaratives.
As mentioned in \SEC\ref{POTmood}, the potential mood can also be used to express a less severe command. This use resembles clauses in the imperative\is{mood!imperative} mood in that it also never occurs with an overt subject, as shown %\marginpar{\REF{potSyntaxEx4} already in \SEC on \POTs!}
in example \REF{potSyntaxEx4}.
\ea\label{potSyntaxEx4}%
\glll vuosjatja káfav\\
vuosja-tj-a káfa-v\\
prepare.coffee-\Sc{pot}-\Sc{2sg} coffee-\Sc{acc.sg}\\\nopagebreak
\Transl{Perhaps you could make some coffee.}{} \CorpusE{110404}{267}
\z
\is{mood!potential|)}
%Future research could be used to shed more light on the potential syntactic constraints on clauses in the potential mood.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%% C O M P L E X C L A U S E S
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Complex clauses}\label{complexClauses}%
Two or more clauses can be conjoined by coordination or subordination. %In coordination, clauses are connected at the discourse level, while in subordination, one clause - as a unit - fulfills a constituent role in the matrix clause.
After coordination is covered in \SEC\ref{clausalCoordination}, complement clauses are presented in \SEC\ref{complementClauses} and adverbial clauses are dealt with in \SEC\ref{adverbialClauses}.
Finally, relative clauses, which do not form a constituent of a matrix clause but are instead part of a nominal phrase, are also described in detail in this chapter (in \SEC\ref{relativeClauses}).
\section{Clausal coordination}\label{clausalCoordination}\is{coordination|(}
There are several coordinating conjunctions that are used to syntactically join the basic clauses described in Chapter \ref{basicClauses}. In such cases, a coordinating conjunction occurs between the two clauses it connects. The clauses themselves are otherwise not marked in any way for coordination. The coordinating conjunctions are \It{ja} ‘and’, \It{vala} ‘but’, \It{men}\footnote{Note that \It{men} is a borrowing from \is{language contact}Swedish (< \It{men} ‘but’) and is used almost exclusively in the corpus, while the native Saamic word \It{vala} is only found in a \PS\ reading based on a Lule Saami translation of the New Testament (recording \hyperlink{pit100403}{pit100403}). Several examples in \citet{Lagercrantz1926} include \It{men} (e.g. on p. 20), so it has been part of \PS\ for at least a century.}
‘but’, \It{jala} ‘or’ and \It{eller}\footnote{Just as with \It{men}, \It{eller} is a borrowing from \is{language contact}Swedish (< \It{eller} ‘or’); however, the native Saamic word \It{jala} is rather common as well in the corpus. Furthermore, unlike \It{men}, \It{eller} is not mentioned in \citet{Lagercrantz1926}, so it seems that \It{eller} is likely a more recent word-choice development, perhaps due to increased dominance of the Swedish language over the last century.}
‘or’. The examples in \REF{coordination1} and \REF{coordination2} illustrate clausal coordination using the coordinators \It{ja} and \It{men}, respectively.
\ea\label{coordination1}
\glll mån anav Árjepluove gaptev nanne, ja Ivan adna Arrvehavre gaptev\\
mån ana-v Árjepluove gapte-v nanne ja Ivan adna Arrvehavre gapte-v\\
\Sc{1sg.nom} have-\Sc{1sg.prs} Arjeplog\BS\Sc{gen.sg} frock-\Sc{acc.sg} on and Ivan have-\Sc{3sg.prs} Arvidsjaur\BS\Sc{gen.sg} frock-\Sc{acc.sg}\\\nopagebreak
\Transl{I have an Arjeplog frock on, and Ivan has an Arvidsjaur frock.}{} \Corpus{080825}{047}
\z
\ea\label{coordination2}
\glll men ijtjin del bårå dan siste, men ednen biebmojd biergojd ja dále návte deggara sinne\\
men ijtji-n del bårå d-a-n siste men edne-n biebmo-jd biergo-jd ja dále návte deggara sinne\\
but \Sc{neg}-\Sc{3pl.pst} obviously eat\BS\Sc{conneg} \Sc{dem}-\Sc{dist}-\Sc{gen.sg} out but have-\Sc{3pl.pst} food-\Sc{acc.pl} meat-\Sc{acc.pl} and now like.this such\BS\Sc{gen.pl} in\\\nopagebreak
\Transl{But they obviously didn't eat out of that, but they had food, meat and so on in such things.}{} \CorpusLink{080708_Session03}{080708\_Session03}{019}
\z
When \It{jala} and \It{eller} function as clausal coordinators in the corpus,
they are mostly used to indicate metalinguistic commentary showing that the second clause is an alternate or amended version of the first clause, as in \REF{ORcoordination1}, rather than to provide clause-level alternatives.
\ea\label{ORcoordination1}%
\glll dále’l gámbal dåhpe, jala almatj hållå “unna dåbátj”\\
dále=l gámbal dåhpe jala almatj hållå unna dåbá-tj\\
now=be\BS\Sc{3sg.prs} old house\BS\Sc{nom.sg} or person\BS\Sc{nom.sg} say\BS\Sc{3sg.prs} small house-\Sc{dim}\BS\Sc{nom.sg}\\\nopagebreak
\Transl{Now this is the old house, or one says “the little house”.}{} \Corpus{100310b}{047-049}%
\z
\is{coordination|)}
\enlargethispage{\baselineskip}
\section{Clausal subordination}\label{clausalSubordination}\is{subordination|(}
Certain types of \PS\ clauses can be subordinate to another clause or to a nominal phrase\is{phrase!nominal}. %are normally marked as being subordinate to a matrix phrase.
When embedded at clause level, a subordinate clause can be either a complement clause or an adverbial clause, depending on whether it fills an argument or an adverbial role.
These two types of subordinate clause are described in \SEC\ref{complementClauses} and \SEC\ref{adverbialClauses}, respectively.
Subordinate clauses featuring non-finite verb forms\is{verb!non-finite} are likely also found in \PS, a possibility which is dealt with briefly in \SEC\ref{otherSubclauses}.
Finally, relative clauses are covered in \SEC\ref{relativeClauses}.
%A final type of clausal subordination, when embedded in an NP, a subordinate clause is a relative clause, and covered in \SEC\ref{relativeClauses}.
\subsection{Complement clauses}\label{complementClauses}\is{clause!complement|(}\is{complement!clause|see {clause}}
A complement clause fills an argument slot of the verbal predicate in the matrix clause it belongs to.
There are a variety of complement clause constructions, and both finite and infinitive predicates are possible. Complement\is{clause!complement} clauses can be marked by a complementizer %, a question word or question pronoun,
or can stand in juxtaposition to the matrix clause. %In most cases, the complement clause contains a fully inflected finite verb; however, an infinite verb form can also be part of the juxtaposition strategy.
The different complement clause marking strategies are summarized in Table~\vref{complementClauseSummary} and described in the following sections.
\begin{table}[ht]\centering
\caption{Types of complement clause marking}\label{complementClauseSummary}
\begin{tabular}{ll}\mytoprule
{predicate type} &{subordination strategy} \\\hline%&\it typical verbs \\\mybottomrule
{finite} & complementizer \It{att} \\%\cline{2-2}%\hline%& hållåt, diehtet \\\hline
%finite & fronted question word/pronoun \\\cline{2-2}%\hline%& diehtet \\\hline
& juxtaposition \\%question/relative pronoun & finite \\\hline%& skenet \\\hline
infinitive & juxtaposition \\\mybottomrule
\end{tabular}
\end{table}
\subsubsection{Complement clauses with a finite predicate}\label{finiteComplementClauses}
Complement clauses with a fully inflected finite\is{verb!finite} predicate are attested using one of two strategies.
First, the borrowed complementizer\is{complementizer} \It{att}\footnote{Cf.~the \is{language contact}Swedish marker \It{att}, which is also a complementizer.}
can mark a complement clause. In such cases, the complement clause typically follows the matrix clause.
\enlargethispage{\baselineskip}
The complementizer is in clause-initial position in the complement clause.
Examples can be found in \REF{complementizer1} and \REF{complementizer2}.
\ea\label{complementizer1}
\glll ja dä mån hålåv att sidav bajket\\
ja dä mån hålå-v att sida-v bajke-t\\
and then \Sc{1sg.nom} say-\Sc{1sg.prs} \Sc{subord} want-\Sc{1sg.prs} poop-\Sc{inf}\\\nopagebreak
\Transl{And then I say that I want to poop.}{} \Corpus{080924}{591}
\z
\ea\label{complementizer2}
\glll men mån diedav att háre lä jávren\\
men mån dieda-v att háre lä jávre-n\\
but \Sc{1sg.nom} know-\Sc{1sg.prs} \Sc{subord} greyling\BS\Sc{nom.pl} be\BS\Sc{3pl.prs} lake-\Sc{iness.sg}\\\nopagebreak
\Transl{But I know that there are greyling in the lake.}{} \Corpus{100404}{052}
\z
Second, complement clauses with a finite predicate may be juxtaposed to the matrix clause they belong to. The complement clause typically follows the matrix clause.
Verbs hosting such complements include \It{jáhkket} ‘believe’, \It{diehtet} ‘know’, \It{hållåt} ‘say’ and \It{tuhtjet} ‘like’. %, \It{} ‘’, \It{} ‘’
Examples can be found in \REF{complClauseJuxFin1} through \REF{complClauseJuxFin3}.%\marginpar{other verbs for this? skenet(understand)?}
\ea\label{complClauseJuxFin1}
\glll mån jáhkav stuor tjuovtja lä danne\\
mån jáhka-v stuor tjuovtj-a lä danne\\
\Sc{1sg.nom} believe-\Sc{1sg.prs} big whitefish-\Sc{nom.pl} be\BS\Sc{3pl.prs} there\\\nopagebreak
\Transl{I believe there are big whitefish there.}{} \Corpus{090702}{123}
\z
\ea\label{complClauseJuxFin2}%
\glll men mån tuhtjiv dat lij nav suohtas tieltajn viessot\\
men mån tuhtji-v d-a-t lij nav suohtas tielta-jn viesso-t\\
but \Sc{1sg.nom} think-\Sc{1sg.prs} \Sc{dem}-\Sc{dist}-\Sc{nom.sg} be\BS\Sc{3sg.pst} so nice tent-\Sc{iness.pl} live-\Sc{inf}\\\nopagebreak
\Transl{But I think it was so nice to stay in tents.}{} \Corpus{080924}{644}
\z
\ea\label{complClauseJuxFin3}%
\glll men hålåv, vuhtjijmä mija sárvav\\
men hålå-v vuhtji-jmä mija sárva-v\\
but say-\Sc{1sg.prs} shoot-\Sc{1pl.pst} \Sc{1pl.nom} moose-\Sc{acc.sg}\\\nopagebreak
\Transl{But then I say we shot a moose.}{} \Corpus{090702}{404}
\z
Constituent interrogative clauses\is{clause!interrogative!constituent} can also function as juxtaposed complement clauses. As with any such interrogative clause, an interrogative pronoun\is{pronoun!interrogative} or other question word occurs as the initial element of the complement clause. %The question word or interrogative pronoun is in clause-initial position in the complement clause.
This strategy typically coincides with complements of epistemic verbs such as \It{diehtet} ‘know’ or \It{skenit} ‘understand’. %, \It{} ‘’, \It{} ‘’, \It{} ‘’, \It{} ‘’
Some examples are provided in \REF{Qsubordination1} through \REF{Qsubordination3}.
\ea\label{Qsubordination1}
\glll mån iv diede gåsse gillgin gávnadit maŋep bále\\
mån i-v diede gåsse gillgi-n gávnadi-t maŋe-p bále\\
\Sc{1sg.nom} \Sc{neg}-\Sc{1sg.prs} know\BS\Sc{conneg} when will-\Sc{1du.prs} meet-\Sc{inf} after-\Sc{comp} time\BS\Sc{gen.sg}\\\nopagebreak
\Transl{I don’t know when we’ll meet next time.}{} \Corpus{081011}{183}
\z
\ea\label{Qsubordination2}
\glll mån iv skene mav dån hålå\\
mån i-v skene ma-v dån hålå\\
\Sc{1sg.nom} \Sc{neg}-\Sc{1sg.prs} understand\BS\Sc{conneg} what-\Sc{acc.sg} \Sc{2sg.nom} say\BS\Sc{2sg.prs}\\\nopagebreak
\Transl{I don’t understand what you’re saying.}{} \CorpusE{080926}{05m14s}
\z
\ea\label{Qsubordination3}
\glll mån diedav gie lä\\
mån dieda-v gie lä\\
\Sc{1sg.nom} know-\Sc{1sg.prs} who\BS\Sc{nom.sg} be\BS\Sc{3sg.prs}\\\nopagebreak
\Transl{I know who she is.}{} \Corpus{090702}{460}
\z
While the complement clause typically follows the matrix clause, this does not necessarily have to be the case, as illustrated by \REF{Qsubordination4}.% and \REF{Qsubordination5}.
\ea\label{Qsubordination4}
\glll man mälgat lij gu lij hiejman, iv mån diede\\
man mälgat li-j gu li-j hiejma-n i-v mån diede\\
how far be-\Sc{3sg.pst} when be-\Sc{3sg.pst} home-\Sc{iness.sg} \Sc{neg}-\Sc{1sg.prs} \Sc{1sg.nom} know\BS\Sc{conneg}\\\nopagebreak
\Transl{I don’t know how far it was to get home.}{} \Corpus{100404}{317}
\z
\subsubsection{Complement clauses with an infinitive predicate}\label{infinitiveComplementClauses}
Complement clauses with an infinitive\is{verb!non-finite} predicate can be juxtaposed to the matrix clause they belong to. The complement clause typically follows the matrix clause.
While not particularly common in the corpus, verbs such as \It{állget} ‘begin’ and \It{vajáldahtet}/\It{åjaldahtet}\footnote{The word \It{vajáldahtet} ‘forget’ is likely limited to the northern dialects of \PS, while \It{åjaldahtet} is preferred in the south.}
‘forget’ %\marginpar{other verbs for this?} %\It{} ‘’, \It{} ‘’, \It{} ‘’
are accompanied by complement clauses headed by an infinitive verb, as in \REF{complClauseJuxInf1} through \REF{complClauseJuxInf4}.
\ea\label{complClauseJuxInf1}
\glll nå gosse dijá älgijdä Örnvikast vuodjet vadnásav?\\
nå gosse dijá älgi-jdä Örnvika-st vuodje-t vadnása-v\\
well when \Sc{2pl.nom} begin-\Sc{2pl.pst} Örnvik-\Sc{elat.sg} drive-\Sc{inf} boat-\Sc{acc.sg}\\\nopagebreak
\Transl{Well when did you start to take the boat from Örnvik?}{} \Corpus{080924}{563}
\z
\ea\label{complClauseJuxInf2}
\glll ja dä del virrtin allget biebmat, fodderijd vuodjet\\
ja dä del virrti-n allge-t biebma-t fodderi-jd vuodje-t\\
and then obviously must-\Sc{3pl.pst} begin-\Sc{inf} feed-\Sc{inf} feed-\Sc{acc.pl} drive-\Sc{inf}\\\nopagebreak
\Transl{And so they obviously had to start to feed, to transport the feed.}{} \Corpus{100405a}{029}
\z
\ea\label{complClauseJuxInf3}%
\glll nä, mån liv åjaldahtam valldet maŋen\\
nä mån li-v åjaldahta-m vallde-t maŋen\\
no \Sc{1sg.nom} be-\Sc{1sg.prs} forget-\Sc{prf} take-\Sc{inf} with\\\nopagebreak
\Transl{No, I forgot to take it along.}{} \Corpus{090519}{322}
\z
\ea\label{complClauseJuxInf4}%
\glll vajálduhtiv hållåt, gu vusjkonijd dihkiv...\\
vajálduhti-v hållå-t gu vusjkoni-jd dihki-v\\
forget-\Sc{1sg.prs} say-\Sc{inf} when perch-\Sc{acc.pl} do-\Sc{1sg.pst}\\\nopagebreak
\Transl{I am forgetting to say, when I did the perch...}{} \Corpus{090702}{079}
\z
\is{clause!complement|)}
\subsection{Adverbial clauses}\label{adverbialClauses}\is{clause!adverbial|(}
An adverbial clause is a subordinate clause that fills an adverbial function in the matrix clause. %, e.g., indicating when, how or why something happens.
Adverbial clauses begin with a subordinating element such as \It{gu} ‘when, once’, \It{jus} ‘if’, \It{maŋŋel} ‘after’, \It{åvdål} ‘before’, \It{innan}\footnote{Note that the conjunction \It{innan} is a borrowing from Swedish\is{language contact} and is only attested once in the corpus.} ‘before’ %\marginpar{add to list of subordinators!}
or \It{gukte} ‘how’, %\marginpar{\It{danen} ‘because’ requires following \It{gu}, so danen is not really a subordinator! cf.~\ref{adverbialClause2}}
but otherwise are not marked syntactically as subordinate clauses. The adverbial clause itself is headed by a fully inflected finite verb\is{verb!finite}.% agreeing in person and number with the syntactic subject and inflecting for tense\marginpar{are other moods possible?}.
For instance, the example in \REF{adverbialClause1} shows that the adverbial clause can follow the matrix clause.%, while the clause in \REF{adverbialClause3} provides an example for an adverbial with
\ea\label{adverbialClause1}%
\glll hihtu vanj dä baktjat innan mån stärtiv motorav\\
hihtu vanj dä baktja-t innan mån stärti-v motora-v\\
must\BS\Sc{2sg.prs} well then back-\Sc{inf} before \Sc{1sg.nom} start-\Sc{1sg.prs} motor-\Sc{acc.sg}\\\nopagebreak
\Transl{Well you have to back up then before I start the motor.}{} \Corpus{090702}{018-019}
\z
In the example in \REF{adverbialClause2}, the dependent \is{clause!adverbial}adverbial clause \It{gu lidjin sladjim} ‘once they had harvested’ precedes the matrix clause \It{dä båhtin da bajás} ‘then they came up’. %could stand alone syntactically, but is supplemented by the preceding .
\ea\label{adverbialClause2}%
\glll gu lidjin sladjim, dä båhtin da bajás\\
gu lidji-n sladji-m dä båhti-n d-a bajás\\
when be-\Sc{3pl.pst} harvest-\Sc{prf} then come-\Sc{3pl.pst} \Sc{dem}-\Sc{dist}\BS\Sc{sg.nom} up\\\nopagebreak
\Transl{Once (the farmers) had harvested, then they (the plants) came up.}{} \Corpus{080924}{173}
\z
Adverbial clauses introduced by the subordinator \It{jus} ‘if’ set a condition for the matrix clause. Other than this conjunction, there is no special marking for the \is{mood!conditional}conditional.
%Conditional clauses are adverbial clauses introduced by the subordinator \It{jus} ‘if, whether’. The finite verb in both the subordinate clause and the superordinate clause is not marked in any way for conditionality, as in \Ref {conditionalClause1} and \REF{conditionalClause2}.
The conditional clause can occur before or after the matrix clause, as shown in \REF{conditionalClause1} through \REF{conditionalClause3}.%
%Examples for conditional clauses in \ref{conditionalClause1} through \ref{conditionalClause2}.
%jus gussa dajd olli dä borre dajd dija gabmasuijnid%if the cows reach up to it, then they eat your shoe hay%pit080924.221
\ea\label{conditionalClause1}%
\glll jus gussa dajd ulli, dä bårre dajd, dija gamasuijnijd\\
jus gussa d-a-jd ulli dä bårre d-a-jd dija gama-suijni-jd\\
if cow\BS\Sc{nom.pl} \Sc{dem}-\Sc{dist}-\Sc{acc.pl} reach\BS\Sc{3pl.prs} then eat\BS\Sc{3pl.prs} \Sc{dem}-\Sc{dist}-\Sc{acc.pl} \Sc{2pl.gen} shoe-hay-\Sc{acc.pl}\\\nopagebreak
\Transl{If the cows reach up to it, then they eat it, your shoe hay.\footnotemark}{} \Corpus{080924}{221}\footnotetext{Note that in example \REF{conditionalClause1}, ‘hay’ and both pronouns referring to ‘hay’ are plural; however, for ease of reading, these are singular in the English translation. \It{Gamasuäjdne} ‘shoe-hay’ refers to a special type of grass placed inside shoes to insulate one’s feet from cold and moisture.}
\z
\ea\label{conditionalClause2}%
\glll ja dat lij samma, jus del lij guallbana vaj bijjadaga vaj smav biehtsasdaga\\
ja d-a-t li-j samma jus del li-j guallban-a vaj bijjadag-a vaj smav biehtsasdag-a\\
and \Sc{dem}-\Sc{dist}-\Sc{nom.sg} be-\Sc{3sg.pst} same if then be-\Sc{3sg.pst} flat.pine.heath-\Sc{nom.pl} or high.ground-\Sc{nom.pl} or small pine.forest-\Sc{nom.pl}\\\nopagebreak
\Transl{And that was the same whether it was flat-pine-heath or higher-ground or small pine-forests.}{} \Corpus{100405a}{009}
\z
\ea\label{conditionalClause3}%
\glll jus galga njuallga dajd njuovvat dä galga dajd valdet olgus åvdål gádsastam\\
jus galga njuallga d-a-jd njuovva-t dä galga d-a-jd valde-t olgus åvdål gádsasta-m\\
if will\BS\Sc{2sg.prs} correct \Sc{dem}-\Sc{dist}-\Sc{acc.pl} slaughter-\Sc{inf} then will\BS\Sc{2sg.prs} \Sc{dem}-\Sc{dist}-\Sc{acc.pl} take-\Sc{inf} out before hang-\Sc{prf}\\\nopagebreak
\Transl{If you slaughter them correctly, then you take them out before hanging (them) up.}{} \Corpus{080909}{105}
\z
\is{clause!adverbial|)}
\subsection{Other subordinate clauses with non-finite verb forms}\label{otherSubclauses}\is{verb!non-finite}
The literature on Saami languages often mentions other non-finite verb forms in addition to those described above, which can be considered part of non-finite subordinate clauses, often in adverbial function. These include the verb genitive, verb abessive or gerunds, for instance (cf.~\citet[103--104]{Sammallahti1998} and \citet[67--73]{Svonni2009} for North Saami, or \citet[104--111]{Spiik1989} for Lule Saami).
For \PS, \citet[95--106]{Lehtiranta1992} describes the morphological form for a number of such non-finite forms,\footnote{The non-finite forms are also included in the verb paradigms in \citet[150--155]{Lehtiranta1992}.}
but does not go into how these are used syntactically, and provides only one or two example clauses. \citet{Lagercrantz1926} does not describe such verb forms.
With this in mind, it is certainly plausible that \PS\ can make use of non-finite verb forms other than those mentioned above. However, there is little evidence of such forms in the corpus, and even this is limited to progressive forms in elicitation sessions. For instance, as mentioned in \SEC\ref{ADVverbs}, the progressive non-finite verb form can be used adverbially. One example featuring the progressive form \It{gullamin} ‘listening’ is repeated here in \REF{ADVverbsEx2repeat}. Even here, it is not clear whether such non-finite forms can include arguments or adjuncts.
\ea\label{ADVverbsEx2repeat}%
\glll gullamin mån tjálav\\
gulla-min mån tjála-v\\
listen-\Sc{prog} \Sc{1sg.nom} write-\Sc{1sg.prs}\\\nopagebreak
\Transl{I write while listening.}{} \CorpusE{110404}{089}
\z
Ultimately, the syntactic behavior of such non-finite verbs, and whether these can be part of subordinate clauses, must be left for future study.
\subsection{Relative clauses}\label{relativeClauses}\is{clause!relative|(}
\PS\ relative clauses are marked by a clause-initial relative pronoun\is{pronoun!relative}. The fact that this relative pronoun
%pivot NP within the relative clause itself is never a full NP, but is instead reduced to a relative pronoun. This relative pronoun
is always the initial constituent\is{constituent order} in the relative clause is the only internal syntactic marking for relative clauses; otherwise, relative clauses are ordinary clauses with a fully inflected finite verb\is{verb!finite}. %The relative clause is in most respects a regular finite clause containing a fully inflected finite verb, however it is syntactically marked by always having a relative pronoun in clause-initial position.
The relative pronoun %links the relativized noun phrase to the the relative clause, and
inflects for case according to the syntactic function it fills within the relative clause, and for the number of the NP\is{phrase!nominal} that it modifies, as illustrated by \REF{relClause1} through \REF{relClause3}.
\ea\label{relClause1}%
\glll dä inijmä aktav vuoksav majna vuojadijmä muorajd\\
dä ini-jmä akta-v vuoksa-v ma-jna vuojadi-jmä muora-jd\\
then have-\Sc{1pl.pst} one-\Sc{acc.sg} bull-\Sc{acc.sg} \Sc{rel}-\Sc{com.sg} drive-\Sc{1pl.pst} wood-\Sc{acc.pl}\\\nopagebreak
\Transl{We had one bull with which we transported firewood.}{} \CorpusLink{0906_Ahkajavvre_a}{0906\_Ahkajavvre\_a}{020}
\z
\ea\label{relClause2}%
\glll ja dä maŋŋemus skoterijd majd iniga\\
ja dä maŋŋe-mus skoteri-jd ma-jd ini-ga\\
and then after-\Sc{superl} snowmobile-\Sc{acc.pl} \Sc{rel}-\Sc{acc.pl} have-\Sc{3du.pst}\\\nopagebreak
\Transl{…and the last snowmobiles which they had.}{} \Corpus{100404}{281}
\z
\ea\label{relClause3}%
\glll dä lä ájge ma lä urrum\\
dä lä ájge ma lä urru-m \\
then be\BS\Sc{3pl.prs} time\BS\Sc{nom.pl} \Sc{rel}\BS\Sc{nom.pl} be\BS\Sc{3pl.prs} be-\Sc{prf}\\\nopagebreak
\Transl{Those are times which have been.}{(i.e.: ‘those were the good old days’)} \Corpus{090702}{409}
\z
Like demonstrative\is{pronoun!demonstrative} and interrogative\is{pronoun!interrogative} pronouns, relative pronouns only inflect for singular and plural, but not for dual\is{number!dual} number, as illustrated by the example in \REF{relClause4}.
\ea\label{relClause4}%
\glll måj ma lin båhtam\\
måj ma li-n båhta-m\\
\Sc{1du.nom} \Sc{rel}\BS\Sc{nom.pl} be-\Sc{3pl.pst} come-\Sc{prf}\\\nopagebreak
\Transl{We two who had come.}{} \CorpusE{110329}{32m45s}
\z
Note that the relative pronouns are homophonous with the %\marginpar{is ‘impersonal’ the right term for (-hum)?}
set of interrogative pronouns referring to non-human NPs.\footnote{Cf.~\SEC\ref{interrogativePronouns} and \SEC\ref{relativePronouns} for more on interrogative and relative pronouns, respectively.}
However, not only do relative pronouns have a different syntactic function than interrogative pronouns in general, they are not sensitive to the humanness of the referent, unlike interrogative pronouns. For instance, the relative pronoun \It{ma} is the same in both \REF{relClause3} above and \REF{relClause5} below, although the former has ‘times’ as an antecedent and the latter refers to ‘young people’.
\ea\label{relClause5}%
\glll dä lin nuora álmatja ma lin riejdnohimen\\
dä li-n nuora álmatj-a ma li-n riejdnohi-men\\
then be-\Sc{3pl.pst} young\BS\Sc{pred.pl} people-\Sc{nom.pl} \Sc{rel}\BS\Sc{nom.pl} be-\Sc{3pl.pst} herd-\Sc{prog}\\\nopagebreak
\Transl{They were young people who were herding.}{} \CorpusLink{0906_Ahkajavvre_b}{0906\_Ahkajavvre\_b}{017}
\z
In the previous examples, relative clauses immediately follow the head of the noun phrase they modify. However, it is possible for a postposition\is{adposition!postposition} to occur between the modified NP and the relative clause, as illustrated by \REF{relClause6}.
\ea\label{relClause6}%
\glll dat lij duv gugu masa båhten\\
d-a-t li-j duv gugu ma-sa båhte-n\\
\Sc{dem}-\Sc{dist}-\Sc{nom.sg} be-\Sc{3sg.pst} \Sc{2sg.gen} to \Sc{rel}-\Sc{ill.sg} come-\Sc{3pl.pst}\\\nopagebreak
\Transl{It was to you they came.}{} \CorpusE{110329}{37m04s}
\z
This shows that it is possible, in this case, for a relative clause to not be embedded syntactically in the modified NP, as the relative clause can occur outside the postpositional phrase\is{phrase!postpositional} which the modified NP is a constituent of. It should be emphasized that only a postposition can split a relative clause from the matrix NP it modifies.
There does not appear to be any restriction on the syntactic function that a relative pronoun can fill within the relative clause. With the exception of abessive\is{case!abessive} and essive\is{case!essive}, which are rare in the corpus for all nominals, relative pronouns are attested for all grammatical cases, as well as being the dependent NP in a postpositional phrase (in genitive\is{case!genitive} case), as in \REF{relClause7}, or similarly as the possessor NP (also in genitive case) modifying a noun, as in \REF{relClause8}.
\ea\label{relClause7}%
\glll dat lä náhppe man sisa båhtjen buhtsujd\\
d-a-t lä náhppe ma-n sisa båhtje-n buhtsu-jd\\
\Sc{dem}-\Sc{dist}-\Sc{nom.sg} be\BS\Sc{3sg.prs} milking.bowl\BS\Sc{nom.sg} \Sc{rel}-\Sc{gen.sg} into milk-\Sc{3pl.pst} reindeer-\Sc{acc.pl}\\\nopagebreak
\Transl{This is a milking bowl into which they milked reindeer.}{}\nopagebreak\CorpusLink{080708_Session02}{080708\_Session02}{003}
\z
\ea\label{relClause8}%
\glll men dä lä danne urrum dat pluovve man namma lä, mij lä namma dan, dáv mijá Árjepluovev\\
men dä lä danne urru-m d-a-t pluovve ma-n namma lä mij lä namma d-a-n d-á-v mijá Árjepluove-v\\
but then be\BS\Sc{3sg.prs} there be-\Sc{prf} \Sc{dem}-\Sc{dist}-\Sc{nom.sg} pond\BS\Sc{nom.sg} \Sc{rel}-\Sc{gen.sg} name\BS\Sc{nom.sg} be\BS\Sc{3sg.prs} \Sc{rel}\BS\Sc{nom.sg} be\BS\Sc{3sg.prs} name\BS\Sc{nom.sg} \Sc{dem}-\Sc{dist}-\Sc{gen.sg} \Sc{dem}-\Sc{prox}-\Sc{acc.sg} \Sc{1pl.gen} Arjeplog-\Sc{acc.sg}\\\nopagebreak
\Transl{But here was the pond whose name was, which was the name of that, this, our Arjeplog.}{} \Corpus{090915}{013}
\z
This latter example is clear evidence for such a structure, but it is part of a false start, as it also contains a semantically driven self-correction just after the targeted example; however, as the single instance in the corpus of a relative pronoun modifying a noun, it must suffice as evidence at this point. % until further research is done.
%A summary of the syntactic functions that relative pronouns can fulfill within a relative clause is provided in Table~\vref{syntacticFunctionsRelPronouns}. %, including the numbers of examples which provide evidence for each function.
%\begin{table}[ht]\centering
%\caption[Possible syntactic functions of relative pronouns]{Possible syntactic functions of relative pronouns}\label{syntacticFunctionsRelPronouns}
%\begin{tabular}{lc}\mytoprule
% &{possible for} \\%\dline
%{syntactic function} &{relative pronoun} \\\hline
%argument NP &\CH \\
%adjunct NP &\CH \\
%dependent of PP &\CH \\
%possessor of NP &\CH \\\mybottomrule
%\end{tabular}
%\end{table}
In summary, relative pronouns can fulfill a variety of syntactic functions in a relative clause. Specifically, relative pronouns can be
an argument NP,
an adjunct NP,
a dependent of a PP, or
a possessor of an NP.
\is{clause!relative|)}
\is{subordination|)}
%\include{postambleSDL}
| {
"alphanum_fraction": 0.7577872621,
"avg_line_length": 76.069955157,
"ext": "tex",
"hexsha": "7bd5b9fddfe19831bda40e7a347a913155ff7eea",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ca113bd66d56345895af9a6d5bd9adbcde69fc22",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "langsci/sidl",
"max_forks_repo_path": "finished/Wilbur/syntaxSentencesSDL.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ca113bd66d56345895af9a6d5bd9adbcde69fc22",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "langsci/sidl",
"max_issues_repo_path": "finished/Wilbur/syntaxSentencesSDL.tex",
"max_line_length": 822,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ca113bd66d56345895af9a6d5bd9adbcde69fc22",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "langsci/sidl",
"max_stars_repo_path": "finished/Wilbur/syntaxSentencesSDL.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 27643,
"size": 84818
} |
The previous chapter developed a mathematical relationship between the position of a point $P$ in a scene (expressed in world frame coordinates $P_W$), and the corresponding point $p$ in pixel coordinates that gets projected onto the image plane of the camera. This relationship was derived based on the pinhole camera model, and required knowledge about the camera's intrinsic and extrinsic parameters. Nonetheless, even in the case where all of these camera parameters are known it is still impossible to reconstruct the depth of $P$ with a single image (without additional information).
However, in the context of robotics, recovering 3D information about the structure of the robot's environment through computer vision is often a very important task (e.g. for obstacle avoidance). Two approaches for using cameras to gather 3D information are therefore presented in this chapter, namely \textit{stereo vision} and \textit{structure from motion}\cite{SiegwartNourbakhshEtAl2011}\cite{ForsythPonce2011}.
\notessection{Stereo Vision and Structure From Motion}
Recovering scene structure from images is extremely important for mobile robots to safely operate in their environment and successfully perform tasks. While a number of other sensors can also be used to recover 3D scene information, such as ultrasonic sensors or laser rangefinders, cameras capture a broad range of information that goes beyond depth sensing. Additionally, cameras are a well developed technology and can be an attractive option for robotics based on cost or size.
Unfortunately, unlike sensors that are specifically designed to measure depth like laser rangefinders, the camera's projection of 3D data onto a 2D image makes it impossible to gather some information from a single image\footnote{Unless you are willing to make some strong assumptions, for example that you know the physical dimensions of the objects in the environment.}. Techniques for extracting 3D scene information from 2D images have therefore been developed that leverage \textit{multiple} images of a scene. Examples of such techniques include \textit{depth-from-focus} (uses images with different focuses), \textit{stereo vision} (uses images from different viewpoints), or \textit{structure from motion} (uses images captured by a moving camera).
\subsection{Stereo Vision}
Stereopsis (from \textit{stereo} meaning solidity, and \textit{opsis} meaning vision or sight) is the process in visual perception leading to the sensation of depth from two slightly different projections of the world onto the retinas of the two eyes. The difference in the two retinal images is called horizontal \textit{disparity}, retinal disparity, or binocular disparity, and arises from the eyes' different positions in the head. It is the disparity that makes our brain fuse (perceive as a single image) the two retinal images, making us perceive the object as one solid object. For example, if you hold your finger vertically in front of you and alternate closing each eye, you will see that the finger jumps from left to right. The distance between the left and right appearance of the finger is the disparity.
Computational stereopsis, or \textit{stereo vision}, is the process of obtaining depth information of a 3D scene via images from two cameras which look at the same scene from different perspectives. This process consists of two major steps: fusion and reconstruction. Fusion is a problem of correspondence: in other words, determining how each point in the 3D environment maps to its corresponding pixel in \textit{each} camera image. Reconstruction is then a problem of \textit{triangulation}, which uses the pixel correspondences to determine the full position of the source point in the scene (including depth).
\subsubsection{Epipolar Constraints}
As previously mentioned, the first step in the stereo vision process is to fuse the two (or more) images and generate point correspondences\footnote{This generally assumes that the perspective of each image is only a slight variation from the other, such that the features appear similarly in each.}. This task can be quite challenging, and erroneously matching features can lead to large errors in the reconstruction step. Therefore, several techniques are leveraged to make this task simpler. The most important simplifying technique is to impose an \textit{epipolar constraint}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.8\textwidth]{tex/figs/ch09_figs/stereo.png}
\end{center}
\caption{The point $P$ in the scene, the optical centers $O$ and $O'$ of the two cameras, and the two images $p$ and $p'$ of $P$ all lie in the same plane, referred to as the epipolar plane. The lines $l$ and $l'$ are the epipolar lines of the points $p$ and $p'$, respectively. Note that if the point $p$ is observed in one image, the corresponding point in the second image must lie on the epipolar line $l'$!}
\label{fig:epi}
\end{figure}
Consider the images $p$ and $p'$ of a point $P$ observed by two cameras with optical centers $O$ and $O'$ (see Figure \ref{fig:epi}). These five points all belong to the \textit{epipolar plane} defined by the two intersecting rays $OP$ and $O'P$.
In particular, the point $p$ lies on the line $l$ where the epipolar plane and the image plane intersect. The line $l$ is referred to as the \textit{epipolar line} associated with the point $p$, and it passes through the point $e$ (referred to as the \textit{epipole}).
Based on this geometry, if $p$ and $p'$ are images of the same point $P$, then $p$ must lie on the epipolar line $l$ and $p'$ must lie on the epipolar line $l'$.
Therefore, when searching for correspondences between $p$ and $p'$ for a particular point $P$ in the scene it makes sense to restrict the search to the corresponding epipolar line. This is referred to as an \textit{epipolar constraint}, and greatly simplifies the correspondence problem by restricting the possible candidate points to a line rather than the entire image (i.e. a one dimensional search rather than a two dimensional search).
Mathematically, the epipolar constraints can be written as:
\begin{equation}
\overline{Op} \cdot [\overline{OO'} \times \overline{O'p'}] = 0,
\end{equation}
since $\overline{Op}$, $\overline{O'p'}$, and $\overline{OO'}$ are coplanar. Assuming the world reference frame is co-located with camera 1 (with an origin at point $O$) this constraint can be written as:
\begin{equation} \label{eq:epiconst}
p^\top F p'=0,
\end{equation}
where $F$, referred to as the \textit{fundamental matrix}, has seven degrees of freedom and is singular. For a derivation of the epipolar constraint see Section 7.1 from Forsyth et al.\cite[]{ForsythPonce2011}. Additionally, the matrix $F$ is only dependent on the intrinsic camera parameters for each camera and the geometry that defines their relative positioning, and can be assumed to be constant. The expression for the fundamental matrix in terms of the camera intrinsic parameters is:
\begin{equation}
F = K^{-\top}EK'^{-1}, \quad E = \begin{bmatrix}
0 & -t_3 & t_2 \\
t_3 & 0 & -t_1 \\
-t_2 & t_1 & 0
\end{bmatrix}R,
\end{equation}
where $K$ and $K'$ are the intrinsic parameter matrices for cameras 1 and 2 respectively, and $R$ and $t = [t_1, t_2, t_3]^\top $ are the rotation matrix and translation vector that map camera 2 frame coordinates into camera 1 frame coordinates.
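To make this relationship concrete, the following NumPy sketch (an illustration, not code from the referenced texts) assembles $E$ and $F$ from an assumed calibration pair $K$, $K'$ and relative pose $(R,t)$:
\begin{verbatim}
import numpy as np

def skew(t):
    # 3x3 matrix [t]_x such that skew(t) @ x == np.cross(t, x)
    return np.array([[0.0,  -t[2],  t[1]],
                     [t[2],  0.0,  -t[0]],
                     [-t[1], t[0],  0.0]])

def fundamental_from_pose(K1, K2, R, t):
    # Essential matrix E = [t]_x R, then F = K1^{-T} E K2^{-1}
    E = skew(t) @ R
    return np.linalg.inv(K1).T @ E @ np.linalg.inv(K2)
\end{verbatim}
A quick sanity check is that, for any scene point, the corresponding homogeneous pixel coordinates satisfy $p^\top F p' \approx 0$ up to noise.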
Note that with the epipolar constraint defined by the fundamental matrix \eqref{eq:epiconst}, the epipolar lines $l$ and $l'$ can be expressed by $l = Fp'$ and $l' = F^\top p$. Additionally, it can be shown that $F^\top e = Fe' = 0$ where $e$ and $e'$ are the epipoles in the image frames of cameras 1 and 2, since by definition the translation vector $t$ is parallel to the coordinate vectors of the epipoles in the camera frames. This in turn guarantees that the fundamental matrix $F$ is singular.
If the parameters $K$, $K'$, $R$, and $t$ are not already known, the fundamental matrix $F$ can be determined in a manner similar to the intrinsic parameter matrix $K$ in the previous chapter. Suppose a number of corresponding points $p^h = [u, v, 1]^\top $ and $(p^h)'= [u',v',1]^\top $ are known and are expressed as homogeneous coordinates. Each pair of points has to satisfy the epipolar constraint \eqref{eq:epiconst}, which can be written as:
\begin{equation*}
\begin{bmatrix}
u & v & 1
\end{bmatrix} \begin{bmatrix}
F_{11} & F_{12} & F_{13} \\
F_{21} & F_{22} & F_{23} \\
F_{31} & F_{32} & F_{33}
\end{bmatrix} \begin{bmatrix}
u' \\ v' \\ 1
\end{bmatrix} = 0
\end{equation*}
This constraint can then be equivalently expressed by reparameterizing the matrix $F$ in vector form $f$ as:
\begin{equation}
\begin{bmatrix}
uu' & uv' & u & vu' & vv' & v & u' & v' & 1
\end{bmatrix}f = 0
\end{equation}
where $f = [F_{11}, \:F_{12} , \: F_{13}, \:F_{21}, \:F_{22}, \:F_{23}, \:F_{31}, \:F_{32}, \:F_{33}]^\top $. For $n$ known correspondences $(p,p')$ these constraints can be stacked to give:
\begin{equation}
Wf = 0,
\end{equation}
where $W \in \R^{n \times 9}$.
Given $n \geq 8$ correspondences, an estimate $\tilde{F}$ of the fundamental matrix is given by:
\begin{equation} \label{eq:fopt}
\begin{split}
\min_{f} \:\:& \lVert Wf \rVert^2, \\
\text{s.t.} \:\:& \lVert f \rVert^2 = 1.
\end{split}
\end{equation}
Note that the estimate $\tilde{F}$ computed by \eqref{eq:fopt} is not guaranteed to be singular. A second step is therefore taken to enforce this additional condition. In particular, it is desirable to find the rank-two matrix $F$ that is closest to the estimate $\tilde{F}$:
\begin{equation}
\begin{split}
\min_F \:\:& \lVert F-\tilde{F}\rVert^2, \\
\text{s.t.} \:\:& \text{det}(F) = 0,
\end{split}
\end{equation}
which can be accomplished by computing a singular value decomposition of the matrix $\tilde{F}$.
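As an illustration of these two steps, the following NumPy sketch (illustrative only, not code from the referenced texts) builds $W$ from $n \geq 8$ correspondences, solves \eqref{eq:fopt}, and then projects the estimate onto the set of rank-two matrices:
\begin{verbatim}
import numpy as np

def estimate_F(p1, p2):
    # p1, p2: (n, 2) arrays of corresponding pixel coordinates, n >= 8
    u, v   = p1[:, 0], p1[:, 1]
    up, vp = p2[:, 0], p2[:, 1]
    W = np.column_stack([u*up, u*vp, u, v*up, v*vp, v,
                         up, vp, np.ones(len(u))])
    # Minimizer of ||W f||^2 subject to ||f|| = 1: right singular vector
    # of W associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(W)
    F_tilde = Vt[-1].reshape(3, 3)
    # Enforce det(F) = 0 by zeroing the smallest singular value of F_tilde.
    U, S, Vt2 = np.linalg.svd(F_tilde)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt2
\end{verbatim}
In practice the pixel coordinates are usually normalized before building $W$ to improve numerical conditioning, a detail omitted in this sketch.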
\subsubsection{Image Rectification}
Given a pair of stereo images, epipolar rectification is a transformation of each image plane such that all corresponding epipolar lines become collinear and parallel to one of the image axes, for convenience usually the horizontal axis. The resulting rectified images can be thought of as acquired by a new stereo camera obtained by rotating the original cameras about their optical centers. The great advantage of the epipolar rectification is that the correspondence search becomes simpler and computationally less expensive because the search is done along the horizontal lines of the rectified images. The steps of the epipolar rectification algorithm are illustrated in Figure \ref{fig:rect}. Observe that after the rectification, all the epipolar lines in the left and right image are collinear and horizontal.
For an in-depth discussion on algorithms for image rectification see \cite{Fusiello2000}\cite{LoopZhang1999}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.7\textwidth]{tex/figs/ch09_figs/rectification.png} \end{center}
\caption{Epipolar rectification example from Loop et al. (1999).}
\label{fig:rect}
\end{figure}
\subsubsection{Correspondence Problem}
Epipolar constraints and image rectification are commonly used in stereo vision to address the problem of correspondence, which is the problem of determining the pixels $p$ and $p'$ from two different cameras with different perspectives that correspond to the same scene feature $P$. While these concepts make finding correspondences easier, there are still several challenges that must be overcome. These include challenges related to feature occlusions, repetitive patterns, distortions, and others.
\subsubsection{Reconstruction Problem}
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{tex/figs/ch09_figs/triangulation.png}
\caption{Triangulation with rectified images (horizontal view on the left, top-down view on the right).}
\label{fig:recttri}
\end{figure}
In a stereo vision setup, once a correspondence between the two images is identified, it is possible to reconstruct the 3D scene point based on \textit{triangulation}. This process of triangulation has already been covered by the discussion on the epipolar geometry. However, if the images have also been rectified such that the epipolar lines become parallel to the horizontal image axis, the triangulation problem becomes simpler. This occurs, for example, when the two cameras have the same orientation, are placed with their optical axes parallel, and are separated by some distance $b$ called the \textit{baseline} (see Figure \ref{fig:recttri}).
In Figure \ref{fig:recttri}, a point $P$ on the object is described as being at coordinate $(x,y,z)$ with respect to the origin located in the left camera at point $O$. The horizontal pixel coordinate in the left and right image are denoted by $p_u$ and $p'_u$ respectively. Based on the geometry the depth of the point $P$ can be computed from the properties of similar triangles:
\begin{align}
\frac{z}{b} &= \frac{z-f}{b-p_u+p'_u},
\end{align}
which can be algebraically simplified to:
\begin{equation}
z = \frac{bf}{p_u-p'_u},
\end{equation}
where $f$ is the focal length. Generally a small baseline $b$ will lead to larger depth errors, but a large baseline $b$ may cause features to be visible from one camera but not the other. The difference in the image coordinates, $p_u-p'_u$, is referred to as \textit{disparity}. This is an important term in stereo vision, because it is only by measuring disparity that depth information can be recovered. The disparity can also be visually represented in a \textit{disparity map} (for example see Figure \ref{fig:disparity}), which is simply a map of the disparity values for each pixel in an image. The largest disparities occur from nearby objects (i.e. since disparity is inversely proportional to $z$).
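The depth recovery itself then reduces to a single division per pixel; the short sketch below (with assumed units: baseline in meters, focal length in pixels) turns a disparity map into a depth map:
\begin{verbatim}
import numpy as np

def depth_from_disparity(disparity, baseline, focal_length):
    # disparity: array of (p_u - p'_u) values in pixels; non-positive
    # entries mark pixels without a valid correspondence (e.g. occlusions)
    z = np.full(disparity.shape, np.nan)
    valid = disparity > 0
    z[valid] = baseline * focal_length / disparity[valid]
    return z
\end{verbatim}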
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\textwidth]{tex/figs/ch09_figs/disparitymap.png}
\caption{Disparity map from a pair of stereo images. Notice that the lighter values of the disparity map represent larger disparity, and correspond to the point in the scene that are closer to the cameras. The black points represent points that were occluded from one of the images and therefore no correspondence could be made. Images from Scharstein et al. (2003) \nocite{ScharsteinSzeliski2003}.}
\label{fig:disparity}
\end{figure}
\subsection{Structure From Motion (SFM)}
The structure from motion (SFM) method uses a similar principle as stereo vision, but uses \textit{one} camera to capture multiple images from different perspectives while moving within the scene. In this case, the intrinsic camera parameter matrix $K$ will be constant, but the extrinsic parameters (i.e. the rotation matrix $R$ and relative position vector $t$) will be different for each image.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.75\textwidth]{tex/figs/ch09_figs/sfm.png} \end{center}
\caption{A depiction of the structure from motion (SFM) method. A single camera is used to take multiple images from different perspectives, which provides enough information to reconstruct the 3D scene.}
\label{sfm}
\end{figure}
Consider a case where $m$ images of $n$ fixed 3D points are taken from different perspectives. This would involve $m$ homography matrices $M_k$ and $n$ 3D points $P_j$ that would need to be determined by leveraging the relationships:
\begin{equation*}
p_{j,k}^h = M_k P^h_j, \quad j = 1,\dots,n, \quad k=1,\dots,m.
\end{equation*}
However, SFM also has some unique disadvantages, such as an ambiguity in the absolute scale of the scene that cannot be determined. For example a bigger object at a longer distance and a smaller object at a closer distance may yield the same projections.
One application of the SFM concept is known as \textit{visual odometry}. Visual odometry estimates the motion of a robot by using visual inputs (and possibly additional information). This approach is commonly used in practice, for example by rovers on Mars, and is useful because it allows not only for 3D scene reconstruction but also for recovering the motion of the camera.
| {
"alphanum_fraction": 0.7707362535,
"avg_line_length": 111,
"ext": "tex",
"hexsha": "a2c497e9d4cb23021a69cf3f8cfb348a76a21d1e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "852ce0fd1361d95576f72558d2c29d8610ced652",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "StanfordASL/Principles-of-Robot-Autonomy",
"max_forks_repo_path": "tex/source/ch09.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "852ce0fd1361d95576f72558d2c29d8610ced652",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "StanfordASL/Principles-of-Robot-Autonomy",
"max_issues_repo_path": "tex/source/ch09.tex",
"max_line_length": 817,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "852ce0fd1361d95576f72558d2c29d8610ced652",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "StanfordASL/Principles-of-Robot-Autonomy",
"max_stars_repo_path": "tex/source/ch09.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-10T14:15:38.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-03-23T16:03:45.000Z",
"num_tokens": 4077,
"size": 16095
} |
\documentclass[main.tex]{subfiles}
\begin{document}
\section*{Thu Oct 10 2019}
Last lecture we introduced tensors.
An example of those is the EM tensor \(F_{\mu \nu}\):
\begin{equation}
F_{\mu \nu} = \left[\begin{array}{cccc}
 0 & E_{x}/c & E_y/c & E_z/c \\
 -E_x/c & 0 & -B_z & B_y \\
 -E_y/c & B_z & 0 & -B_x \\
 -E_z/c & -B_y & B_x & 0
\end{array}\right]\,,
\end{equation}
%
which, it can be checked, transforms as a \((0,2)\) tensor. Also, we can define the current vector \(j^{\mu} = (c \rho, j^{i})\). Then, the Maxwell equations read:
\begin{equation}
\partial_{\mu} F^{\mu \nu }= \mu_0 j^{\nu}
\qquad \text{and} \qquad
\partial_{[\mu} F_{\nu \rho]} = 0\,.
\end{equation}
They are covariant!
\subsection{Particles in motion}
In Newtonian mechanics, the motion of a particle is described by a function of time \(x^{i} = x^{i}(t)\).
In special relativity, we introduce the concept of \emph{worldline} of a particle: the one-dimensional set of spacetime events in which the particle is found.
It must be parametrized with respect to some parameter \(\lambda\), such that \(x^{\mu} = x^{\mu}(\lambda)\). A preferred choice for \(\lambda\) is the proper time of the particle, \(\lambda = \tau\), which is defined by \( \dd{s^2} = - c^2 \dd{\tau^2}\).
We then define the 4-velocity:
%
\begin{equation}
u^\mu = \dv{x ^\mu}{\tau} \,.
\end{equation}
It is a tensor since it is the product of a scalar and a tensor.
Multiplying \(u^\mu u_{\mu}\) we always get \(-c^{2}\), since:
\begin{equation}
u^{\mu} u_{\mu} = \frac{\dd{x^{\mu}} \dd{x_{\mu}}}{\dd{\tau^{2}}} = \frac{\dd{s^{2}}}{\dd{\tau^{2}}} = -c^2
\end{equation}
We can make the expression of the 4-velocity explicit using \(\dd{t} = \gamma \dd{\tau}\), which gives us \(u^{\mu} = (\gamma c, \gamma v^{i})\).
In the frame of the particle the particle is not moving, therefore \(u^{\mu} = (c, 0)\).
The \emph{four-momentum} of a massive particle with mass \(m\) is defined as:
\begin{equation}
p^{\mu} = m u^{\mu} = (m \gamma c, m \gamma v^{i})\,.
\end{equation}
The component \(p^{0}\) is equal to \(mc\) at \(v=0\). What does it look like in the nonrelativistic approximation? we can expand it for small \(v/c\):
\begin{equation}
\frac{mc}{\sqrt{1 - \frac{v^{2}}{c^{2}}}} \sim
mc \qty(1 + \frac{v^{2}}{2 c^{2} } )
= mc + \frac{1}{c} \frac{mv^{2}}{2}\,.
\end{equation}
We get the mass, plus a kinetic energy term: more explicitly, \(cp^0 = mc^{2} + \frac{1}{2} m v^{2}\).
We can rewrite Newton's first law in SR:
\begin{proposition}[Newton I]
A free particle moves with constant \(u^{\mu}\), or
\begin{equation}
\dv{u^{\mu}}{\tau} = 0
\end{equation}
\end{proposition}
To express this in an easier way we introduce the 4-acceleration:
\begin{equation}
a^{\mu} \defeq \dv{u^{\mu}}{\tau} = \dv[2]{x^{\mu}}{\tau}
\end{equation}
We now wish to introduce the concept of a path extremizing proper time.
Recall Snell's law, which allows us to relate the angles of incidence of light when it passes between one medium to another, if they have different indices of refraction:
\begin{equation}
n_1 \sin(\theta_1 ) = n_2 \sin(\theta_2 )
\qquad \text{or} \qquad
\frac{\sin(\theta_{2})}{\sin(\theta_{1})} = \frac{n_1}{n_2 } = \frac{v_2}{v_1 }\,.
\end{equation}
It can be shown that a beam of light following this law is equivalent to the beam minimizing the time it takes to move from a point in one medium to a point in the other.
Analogously, saying that a massive particle travels along the worldline which extremizes \(\tau\) (in fact, maximizes it) is equivalent to Newton's first principle.
We want to perturb a generic worldline \(x^{\mu}\) with some \(\dd{x^{\mu}}\), and consider the proper time functional \(\tau\) which gives the proper time of a generic trajectory: we impose
\begin{equation}
\frac{\tau \qty[x^{\mu} + \varepsilon^\mu] - \tau \qty[x^{\mu}]}{\abs{\varepsilon ^\mu} } = \frac{\delta \tau}{\delta x^\mu} \overset{!}{=} 0\,,
\end{equation}
%
where a limit \(\abs{\varepsilon^{\mu}} \rightarrow 0 \) is implied, and only the linear terms are considered.
The proper time functional for paths between \(A\) and \(B\) is given by \(\tau = \int _A^B \dd{\tau}\).
We can rewrite it as:
\begin{equation}
\tau = \int_A^B \dd{\tau} \frac{\dd{\tau^2}}{\dd{\tau^2}}
= \int_A^B \dd{\tau} \frac{-\eta_{\mu \nu}\dd{x^{\mu}} \dd{x^{\nu}}}{c^2 \dd{\tau^2}}\,.
\end{equation}
We now consider a perturbation \(\varepsilon^{\mu} = \delta^{\mu}_1 \delta x\):
\begin{equation}
\tau_{AB}[x + \varepsilon] = \int_A^B \dd{\tau}
\qty[\qty(\dv{t}{\tau})^2
- \frac{1}{c^{2}} \qty(\dv{x}{\tau} + \dv{\delta x}{\tau})^2
- \frac{1}{c^{2}} \qty(\dv{y}{\tau})^2
- \frac{1}{c^{2}} \qty(\dv{z}{\tau})^2]\,.
\end{equation}
We can discard a second order term \(\qty(\dv*{\delta x}{\tau})^2\), and subtract off \(\tau_{AB}[x]\): we are left with
\begin{equation}
\delta \tau = - \frac{2}{c^{2}} \int_A^B \dd{\tau} \dv{x}{\tau} \dv{\delta x}{\tau}
\end{equation}
Now, we integrate by parts, disregard the boundary terms since the endpoints of the path cannot be deformed, and get:
\begin{equation}
\frac{\delta \tau_{AB}}{\delta x} = + \frac{2}{c^{2}} \int_A^B \dd{\tau} \dv[2]{x}{\tau} \,,
\end{equation}
which proves the equivalence for this type of perturbation; the others are analogous.
\begin{bluebox}
Let us write a more general sequence of equations, somewhat informally:
%
\begin{subequations}
\begin{align}
\delta \qty(\int_{A}^{B} \dd{\tau }) &=
- \delta \int \frac{ \dd{x^{\mu }} \dd{x_{\mu }}}{ \dd{\tau^2}} \dd{\tau } \marginnote{Expand \(\dd{s^2}\)} \\
&= - \int \frac{\qty( \dd{x_{\mu }} + \dd{\epsilon_{\mu }}) \qty( \dd{x^{\mu }} + \dd{\epsilon^{\mu }})}{ \dd{\tau^2}} \dd{\tau } \marginnote{Perturb}\\
&= - 2 \int \frac{ \dd{\epsilon_{\mu }} \dd{x^{\mu }}}{ \dd{\tau^2}} \dd{\tau } \marginnote{Discard terms of order different than 1 in the perturbation}\\
&= + 2 \int \epsilon_{\mu } \dv[2]{x^{\mu }}{\tau } \dd{\tau }
\overset{!}{=} 0
\marginnote{Integrate by parts}
\,,
\end{align}
\end{subequations}
%
which must hold for any \(\epsilon_{\mu }\): so, by the fundamental lemma of variational calculus, we must have
%
\begin{align}
\dv[2]{x^{\mu }}{\tau } = 0
\,.
\end{align}
\end{bluebox}
The generalization of Newton's second law, which at low speeds is \(F^{i} = m a^{i}\), can be similarly restated as \(\delta S = 0\), for the action \(S = \int L \dd{\tau}\), where \(L\) is the Lagrangian for the system.
\subsection{Motion of light rays}
For light we cannot compute \(u^{\mu}\) with the previous definition, since its proper time is always zero.
Instead, we \emph{define} \(u^{\mu}\) to be a normalized null-like vector, such that locally \(x^{\mu} = \lambda u^{\mu}\) for some \(\lambda\).
We know from quantum mechanics that \(E = \hbar \omega\), where \(\hbar = h / (2 \pi)\) and \(\omega = 2 \pi / T = 2 \pi f\).
The momentum is proportional to the wavevector \(k^{i}\): \(p^{i} = \hbar k^{i} / c\) (note that here \(k^{i}\) carries units of angular frequency, i.e.\ it is \(c\) times the usual wavenumber). The relativistic generalization of this fact is
\begin{equation}
p^{\mu} = \qty(\frac{\hbar \omega}{c}, \frac{\hbar k^{i}}{c}) = \frac{\hbar k^{\mu}}{c}\,.
\end{equation}
Since the momentum of light must be null-like (\(p_{\mu } p^{\mu } = 0\)) we have that necessarily \(\omega = \abs{k} \).
\subsection{Doppler effect}
We take a special case: radiation goes in the same direction as the observer.
In the \(O\) frame we have \(k^{\mu} = (\omega, \omega,0,0)\).
The observer, moving with velocity \(v\), measures \(k^{\prime \mu}\). This can be easily computed with a Lorentz transformation: \(k^{\prime \mu} = \tensor{\Lambda}{^\mu_\nu} k^\nu\).
We are mostly interested in \(k^{\prime 0} = \omega'\): it comes out to be \(\omega' = \gamma \omega + (- \gamma \beta)\omega = (1- v/c) \gamma \omega\).
Some notes: at slow speeds \(\omega' \approx (1-v/c) \omega\); we have \(f'<f\) when source and observer are moving away from each other. This can be readily proven by considering either \(k^{\mu } = (\omega , -\omega , 0,0)\) or \(\beta \rightarrow - \beta \).
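Equivalently, the result above can be rewritten in the familiar relativistic Doppler form:
\begin{equation}
\omega' = \gamma (1 - \beta) \omega = \omega \sqrt{\frac{1-\beta}{1+\beta}}\,,
\end{equation}
so that, for instance, an observer receding at \(\beta = 1/2\) measures \(\omega' = \omega / \sqrt{3} \approx 0.58\, \omega\).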
\end{document}
| {
"alphanum_fraction": 0.6315789474,
"avg_line_length": 41.6424870466,
"ext": "tex",
"hexsha": "264a01fa6df11988ba47b83bc0876c3859381e90",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z",
"max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "jacopok/notes",
"max_forks_repo_path": "ap_first_semester/general_relativity/10oct.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "jacopok/notes",
"max_issues_repo_path": "ap_first_semester/general_relativity/10oct.tex",
"max_line_length": 262,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "jacopok/notes",
"max_stars_repo_path": "ap_first_semester/general_relativity/10oct.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z",
"num_tokens": 2837,
"size": 8037
} |
\section{Task B}
\label{sec:task-b}
For a constant strain triangle (CST) element, the shape functions can be defined as
\begin{equation}
\label{eq:shape-func}
N_{1} = 1 - \xi - \eta, \quad N_{2} = \xi, \quad N_{3} = \eta
\end{equation}
According to equations \eqref{eq:material-shape-grad}, the \textit{material}
gradients of the shape functions read
\begin{equation} \tag{9.6ab} \label{eq:material-shape-grad}
\ubar{\nabla}_{0}N_{a} = \frac{\partial N_{a}}{\partial \ubar{\bm{X}}} =
\left( \frac{\partial \ubar{\bm{X}}}{\partial \ubar{\bm{\xi}}} \right)^{-T}
\frac{\partial N_{a}}{\partial \ubar{\bm{\xi}}} =
\left( \frac{\partial \ubar{\bm{X}}}{\partial \ubar{\bm{\xi}}} \right)^{-T}
\cdot \ubar{\nabla}_{\xi} N_{a}, \quad
\frac{\partial \ubar{\bm{X}}}{\partial \ubar{\bm{\xi}}}
= \sum_{a=1}^{n} \ubar{\bm{X}}_{a} \otimes \ubar{\nabla}_{\xi}N_{a}
\end{equation}
%
\begin{equation}
\ubar{\nabla}_{\xi} N_{a} = \left[
\begin{array}{c}
\dfrac{\partial N_{a}}{\partial \xi} \\
\dfrac{\partial N_{a}}{\partial \eta} \\
\end{array} \right] =
\left[
\begin{array}{c c c}
\dfrac{\partial N_{1}}{\partial \xi} & \dfrac{\partial N_{2}}{\partial \xi} & \dfrac{\partial N_{3}}{\partial \xi}\\
\dfrac{\partial N_{1}}{\partial \eta} & \dfrac{\partial N_{2}}{\partial \eta} & \dfrac{\partial N_{3}}{\partial \eta}\\
\end{array} \right] =
\left[
\begin{array}{c c c}
-1 & 1 & 0 \\
-1 & 0 & 1 \\
\end{array} \right]
\end{equation}
Similarly, the \textit{spatial} shape function gradients are given as
\begin{equation} \tag{9.11ab}
\label{eq:spatial-shape-grad}
\ubar{\nabla} N_{a} = \frac{\partial N_{a}}{\partial \ubar{\bm{x}}} =
\left( \frac{\partial \ubar{\bm{x}}}{\partial \ubar{\bm{\xi}}} \right)^{-T}
\cdot \ubar{\nabla}_{\xi} N_{a}, \quad
\frac{\partial \ubar{\bm{x}}}{\partial \ubar{\bm{\xi}}}
= \sum_{a=1}^{n} \ubar{\bm{x}}_{a} \otimes \ubar{\nabla}_{\xi}N_{a}
\end{equation}
Lastly, the deformation gradient can be expressed as
\begin{equation} \tag{9.5}
\label{eq:deform-grad}
\utilde{F} = \sum_{a=1}^{n} \ubar{\bm{x}}_{a} \otimes \ubar{\nabla}_{0} N_{a}
\end{equation}
The Matlab implementation of this task can be found in \texttt{shape\_gradients.m}
(see section \ref{app:matlab-code}).
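For illustration only, a minimal NumPy sketch of the same computations is given below; the reference implementation for this task remains the Matlab file above, and the nodal coordinate arrays are placeholders to be supplied by the caller.
\begin{verbatim}
import numpy as np

# Parametric shape-function gradients of the CST element; row a = grad_xi N_a
dN_dxi = np.array([[-1.0, -1.0],
                   [ 1.0,  0.0],
                   [ 0.0,  1.0]])

def cst_gradients(X_nodes, x_nodes):
    # X_nodes, x_nodes: (3, 2) arrays of material and spatial nodal coordinates
    J0 = X_nodes.T @ dN_dxi                  # dX/dxi, Eq. (9.6b)
    j  = x_nodes.T @ dN_dxi                  # dx/dxi, Eq. (9.11b)
    grad0_N = dN_dxi @ np.linalg.inv(J0)     # rows: material gradients, Eq. (9.6a)
    grad_N  = dN_dxi @ np.linalg.inv(j)      # rows: spatial gradients, Eq. (9.11a)
    F = x_nodes.T @ grad0_N                  # deformation gradient, Eq. (9.5)
    return grad0_N, grad_N, F
\end{verbatim}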
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../main"
%%% End:
| {
"alphanum_fraction": 0.6105442177,
"avg_line_length": 38.5573770492,
"ext": "tex",
"hexsha": "7fa326f572ea02c2d8d95773bbc6b17b5ccc13b5",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-09-14T03:19:55.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-09-14T03:19:55.000Z",
"max_forks_repo_head_hexsha": "b4f33e0c4f79473df47f29cce398b61e264f204b",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "iamrosk/hyperelasticity",
"max_forks_repo_path": "doc/sec/task_b.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b4f33e0c4f79473df47f29cce398b61e264f204b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "iamrosk/hyperelasticity",
"max_issues_repo_path": "doc/sec/task_b.tex",
"max_line_length": 125,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "b4f33e0c4f79473df47f29cce398b61e264f204b",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "iamrosk/hyperelasticity",
"max_stars_repo_path": "doc/sec/task_b.tex",
"max_stars_repo_stars_event_max_datetime": "2020-09-14T00:14:05.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-09-14T00:14:05.000Z",
"num_tokens": 914,
"size": 2352
} |
% chapters/tree-traversal.tex
\chapter{Traversal on Trees} \label{chapter:tree-traversal}
\input{chapters/tree-dfs}
\input{chapters/tree-bfs}
| {
"alphanum_fraction": 0.7777777778,
"avg_line_length": 20.5714285714,
"ext": "tex",
"hexsha": "7b20e027e5c053097c5afaacd25f8015500fc2b9",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5c8265b6368f851337ca9c0dd1476c07b6e29f83",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "hengxin/algorithms-pseudocode",
"max_forks_repo_path": "chapters/tree-traversal.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5c8265b6368f851337ca9c0dd1476c07b6e29f83",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "hengxin/algorithms-pseudocode",
"max_issues_repo_path": "chapters/tree-traversal.tex",
"max_line_length": 59,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "5c8265b6368f851337ca9c0dd1476c07b6e29f83",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "hengxin/algorithms-pseudocode",
"max_stars_repo_path": "chapters/tree-traversal.tex",
"max_stars_repo_stars_event_max_datetime": "2019-10-27T13:01:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-11-06T08:52:25.000Z",
"num_tokens": 42,
"size": 144
} |
\section{Results}
\label{section:results}
This section describes results from evaluating the Naive Bayes Classifier against various datasets using a few different
tokenizing approaches.
\subsection{Datasets}
\label{subsection:datasets}
In addition to the Dickens and Hardy books required for the assignment, some further datasets
were taken from the UCI Machine Learning Repository \cite{uci}. Specifically, classification datasets stored in an
easy-to-use text format were selected. The datasets used are described in table \ref{table:datasets}. For each dataset,
its ``type" is specified, indicating the structure in which the data is stored. The available dataset types are
inline\footnotemark[1] and guten\footnotemark[2].
\begin{table}
\begin{tabular}{lll}
\hline
\textbf{Dataset} & \textbf{Source} & \textbf{Type} \\ [0.5ex]
\hline\hline
SMS & UCI - SMS Spam Collection & inline\footnotemark[1] \\
Badges & UCI - Badges & inline \\
Main & Gutenberg - Dickens \& Hardy & guten\footnotemark[2] \\
\hline
\end{tabular}
\caption{Datasets used for this paper}
\label{table:datasets}
\end{table}
\footnotetext[1]{Dataset is stored as a single file in which each line represents a training point. The first word in
each line is the class/category, while the rest of the line is a list of words used as the training "text blob."}
\footnotetext[2]{Dataset is stored as a list of directories representing classes/categories (e.g. "dickens", "hardy").
Each file within the class directories represent a training point. These files are actually books, but are abstractly
considered to be "text blobs," just like the inline dataset type.}
\subsection{Performance Measurements}
\label{subsection:performanceMeasurements}
Evaluations of the classifier are recorded in terms of ``True Positive'' (TP), ``False Negative'' (FN), ``False Positive''
(FP), \& ``True Negative'' (TN), indicating what was predicted versus the data's actual class. Then, the results are
presented in terms of some commonly used formulas: ``Precision'', ``Recall'', ``Specificity'', and ``Accuracy''
\cite{measures}. A description of these formulas is shown in table \ref{table:measures}.
\begin{table}
\begin{tabular}{lll}
\hline
\textbf{Measure} & \textbf{Formula} & \textbf{Meaning} \\ [0.5ex]
\hline\hline
Precision & TP / (TP + FP) & \% of correct +'s \\
Accuracy & (TP + TN) / (total) & \% correct \\
Recall & TP / (TP + FN) & \% of +'s predicted as + \\
Specificity & TN / (TN + FP) & \% of -'s predicted as - \\
\hline
\end{tabular}
\caption{The measurement terms used in evaluations for this project, with their formulas and intuitive meanings}
\label{table:measures}
\end{table}
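The formulas in table \ref{table:measures} translate directly into a few lines of code; the following Python sketch (illustrative only, not the project's source) computes all four measures from the raw counts:
\begin{verbatim}
def measures(tp, fn, fp, tn):
    total = tp + fn + fp + tn
    return {
        'precision':   tp / (tp + fp) if (tp + fp) else 0.0,
        'recall':      tp / (tp + fn) if (tp + fn) else 0.0,
        'specificity': tn / (tn + fp) if (tn + fp) else 0.0,
        'accuracy':    (tp + tn) / total if total else 0.0,
    }
\end{verbatim}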
\subsection{Evaluation of the Basic Classifier}
\label{subsection:basicResults}
Here, the basic classifier is evaluated. The basic classifier simply tokenizes text by splitting on spaces. It does
nothing to filter or alter the tokens before training and classifying on them. Here, the results of evaluating SMS,
Badge, and Dickens-vs-Hardy data are discussed.
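To make the ``basic'' pipeline concrete, the following Python sketch (an illustrative reconstruction, not the project's actual code) trains on whitespace-separated tokens and classifies using log-probabilities with add-one smoothing; the real implementation may differ in these details:
\begin{verbatim}
import math
from collections import defaultdict

class NaiveBayes(object):
    def __init__(self):
        self.class_docs = defaultdict(int)                    # docs per class
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()

    def train(self, label, text):
        self.class_docs[label] += 1
        for token in text.split():                            # split on spaces
            self.word_counts[label][token] += 1
            self.vocab.add(token)

    def classify(self, text):
        total_docs = sum(self.class_docs.values())
        best, best_score = None, float('-inf')
        for label, ndocs in self.class_docs.items():
            nwords = sum(self.word_counts[label].values())
            score = math.log(ndocs / total_docs)              # log prior
            for token in text.split():
                count = self.word_counts[label][token]
                score += math.log((count + 1.0) /
                                  (nwords + len(self.vocab)))  # add-one smoothing
            if score > best_score:
                best, best_score = label, score
        return best
\end{verbatim}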
\subsubsection{SMS Spam Collection}
\label{subsection:smsBasic}
The SMS Spam Collection dataset \cite{sms} contains 5,574 data samples, of which 4,827 (86.6\%) are not spam (``ham'') and 747 (13.4\%)
are spam (``spam''). The precision \ref{fig:smsBasicPrecision} and accuracy \ref{fig:smsBasicAccuracy} of the classifier
are shown for varying ratios of the dataset used to train with 5 random distributions for each ratio. Recall and
specificity are not shown because they were found to always be 100\% and 0\%, respectively. Specific numbers
for a few train:test ratios are shown in table \ref{table:smsBasicResults}.
\begin{figure}[ht!]
\centering
\includegraphics[width=90mm]{img/sms_basic-precision.png}
\caption{SMS Precision: Overall, spam-specific, and ham-specific precisions found at varying portions of the dataset used to train}
\label{fig:smsBasicPrecision}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=90mm]{img/sms_basic-accuracy.png}
\caption{SMS Accuracy: Overall, spam-specific, and ham-specific accuracies found at varying portions of the dataset used to train}
\label{fig:smsBasicAccuracy}
\end{figure}
\begin{table}
\begin{tabular}{rrrrrrrr}
\hline
\textbf{Train} &
\textbf{\begin{math}P_{All}\end{math}} & \textbf{\begin{math}P_{Ham}\end{math}} & \textbf{\begin{math}P_{Spam}\end{math}} &
\textbf{\begin{math}A_{All}\end{math}} & \textbf{\begin{math}A_{Ham}\end{math}} & \textbf{\begin{math}A_{Spam}\end{math}} &
\textbf{t} \\ [0.5ex]
\hline\hline
1\% & 44.7 & 36.3 & 98.7 & 44.7 & 36.3 & 98.7 & 79s \\
5\% & 78.0 & 75.1 & 97.1 & 78.0 & 75.1 & 97.1 & 66s \\
15\% & 88.8 & 87.6 & 96.7 & 88.8 & 87.6 & 96.7 & 68s \\
25\% & 91.7 & 91.0 & 96.6 & 91.7 & 91.0 & 96.6 & 69s \\
50\% & 94.4 & 94.1 & 96.5 & 94.4 & 94.1 & 96.5 & 79s \\
75\% & 95.5 & 96.5 & 95.3 & 95.5 & 96.5 & 95.3 & 115s \\
\hline
\end{tabular}
\caption{Mean \% precision (P) and accuracy (A) found over 1000 iterations at a number of train/test ratios,
and the time (t) to split and evaluate each}
\label{table:smsBasicResults}
\end{table}
\subsubsection{Badge Problem}
\label{subsection:badgesBasic}
The Badge Problem dataset \cite{badge} contains 294 data samples, of which 210 (71.4\%) are positive (P) and 84 (28.6\%)
are negative (N). The dataset contains a list of names, each of which has a class of P or N (originally ``+'' and ``-'',
but it was converted for this project to avoid special characters). This is an interesting dataset because the original
algorithm determining how to assign P and N is unknown. The original intent was to create a challenge for computer
scientists to generically apply machine learning to discover the target function. The precision
\ref{fig:badgeBasicPrecision} and accuracy \ref{fig:badgeBasicAccuracy} of the classifier are shown for varying ratios
of the dataset used to train with 5 random distributions for each ratio. Recall and specificity are not shown because
they were found to always be 100\% and 0\%, respectively. Since the basic approach simply looks at words, it did a
poor job on this dataset. Improved results could be possible by evaluating gender and length of name, or perhaps
simply by tokenizing on characters rather than words.
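The character-level variant suggested above would only require swapping the tokenizer, as in the hypothetical sketch below:
\begin{verbatim}
def word_tokens(name):
    return name.split()                    # basic approach: whitespace tokens

def char_tokens(name):
    return list(name.replace(" ", ""))     # character-level alternative
\end{verbatim}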
\begin{figure}[ht!]
\centering
\includegraphics[width=90mm]{img/badge_basic-precision.png}
\caption{Badge Precision: Overall, P-specific, and N-specific precisions found at varying portions of the dataset used to train}
\label{fig:badgeBasicPrecision}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=90mm]{img/badge_basic-accuracy.png}
\caption{Badge Accuracy: Overall, P-specific, and N-specific accuracies found at varying portions of the dataset used to train}
\label{fig:badgeBasicAccuracy}
\end{figure}
\subsubsection{Dickens vs. Hardy Novels}
\label{subsection:novelsBasic}
Finally, a data set of 15 Charles Dickens books \cite{gutenberg:dickens} and 15 Thomas Hardy books \cite{gutenberg:hardy}
was evaluated against the naive bayes classifier. Only slight alterations were made to the original books downloaded
from the Gutenberg Project. First, all preamble preceding ``Chapter 1'' or similar was removed so as not to train the
classifier on the author's name. Additionally all postamble added by Project Gutenberg was removed for the same reasons.
Without applying any special tokenizing, the stringified naive bayes classifier was found to be about 6 MB (6,125,536 B)
after training on 5 of 30 books simply from storing and tracking every word seen in the training set. Obviously the
basic approach is not scalable yet, for the sake of completeness, results are shown here as was done for the SMS Spam
Collection and the Badge Problem. In general, the classifier did much better with Dickens than Hardy. This likely
stems from the fact that the Dickens novels used here were larger than the Hardy books and thus any given word is more
likely to be a Dickens word. Again, the precision \ref{fig:novelBasicPrecision} and accuracy \ref{fig:novelBasicAccuracy}
are shown for varying ratios. A more limited range of ratios was used because the dataset was only of size 30.
\begin{figure}[ht!]
\centering
\includegraphics[width=90mm]{img/novels_basic-precision.png}
\caption{Novels Basic Precision: Overall, Dickens-specific, and Hardy-specific precisions found at varying portions of the dataset used to train}
\label{fig:novelBasicPrecision}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=90mm]{img/novels_basic-accuracy.png}
\caption{Novels Basic Accuracy: Overall, Dickens-specific, and Hardy-specific accuracies found at varying portions of the dataset used to train}
\label{fig:novelBasicAccuracy}
\end{figure}
\subsection{Extending the Classifier}
\label{subsection:advancedResults}
After the poor results from the basic classifier on the Dickens versus Hardy novels, several attempts were made to
improve the accuracy and precision of the classifier on these large bodies of text.
First, a stemming step was added to the tokenizing process, in hopes of slimming the dataset and removing noise;
however, this had little impact on the effectiveness of either the precision \ref{fig:novelsStemPrecision} or accuracy
\ref{fig:novelsStemAccuracy}.
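The stemming step amounts to one extra pass over the tokens; a sketch of such a tokenizer is shown below, using NLTK's Porter stemmer as a stand-in since the stemmer actually used is not named here:
\begin{verbatim}
from nltk.stem.porter import PorterStemmer

_stemmer = PorterStemmer()

def stem_tokens(text):
    # whitespace tokenization followed by stemming of each token
    return [_stemmer.stem(token) for token in text.lower().split()]
\end{verbatim}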
\begin{figure}[ht!]
\centering
\includegraphics[width=90mm]{img/novels_stem-precision.png}
\caption{Novels Stem Precision: Overall, Dickens-specific, and Hardy-specific precisions found at varying portions of the dataset used to train}
\label{fig:novelsStemPrecision}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=90mm]{img/novels_stem-accuracy.png}
\caption{Novels Stem Accuracy: Overall, Dickens-specific, and Hardy-specific accuracies found at varying portions of the dataset used to train}
\label{fig:novelsStemAccuracy}
\end{figure}
| {
"alphanum_fraction": 0.7366049564,
"avg_line_length": 55.4836956522,
"ext": "tex",
"hexsha": "b52508ee3a029450b8689a8885abedb3400d8077",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "7a8f8c91c864d2db4a3265836108fb632e24e481",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ross-nordstrom/cs5860-naive_bayes",
"max_forks_repo_path": "paper/results.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7a8f8c91c864d2db4a3265836108fb632e24e481",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ross-nordstrom/cs5860-naive_bayes",
"max_issues_repo_path": "paper/results.tex",
"max_line_length": 149,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "7a8f8c91c864d2db4a3265836108fb632e24e481",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ross-nordstrom/cs5860-naive_bayes",
"max_stars_repo_path": "paper/results.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2732,
"size": 10209
} |
\documentclass[a4paper,11pt]{article}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{array}
\usepackage{float}
\usepackage{subfigure}
\usepackage{color}
\usepackage{listings}
\usepackage[utf8x]{inputenc}
\usepackage{hyperref}
\usepackage{rotating}
\usepackage[english]{babel}
\usepackage{amsfonts,amsmath,amsthm,amssymb}
\usepackage{algorithm}
\usepackage{algpseudocode}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
%% --------- Algorithms -------------
\providecommand{\BeginAlgSize}[0]{\begin{scriptsize}}
\providecommand{\EndAlgSize}[0]{\end{scriptsize}}
\providecommand{\ForEach}[1]{\For{\textbf{each} #1}}
\providecommand{\Or}[0]{O}
\providecommand{\A}[0]{\mathbf{\alpha}}
%% --------- End of Algorithms -------------
% Read MRPT version:
\newread\file
\openin\file=../../version_prefix.txt
\read\file to\MRPTVERSION % Reads a line of the file
\closein\file
% Title Page
\title{PbMap brief description and user guide.\\C++ implementation in \texttt{libmrpt-pbmap}}
\author{Eduardo Fern\'andez-Moral \\ [email protected] \\
%Jose-Luis Blanco-Claraco \\ [email protected] \\
\\
\texttt{http://www.mrpt.org/} }
\date{MRPT version: \MRPTVERSION \\ Document build: \today }
% C++ listings settings
\lstset{ %
language=C++, % choose the language of the code
basicstyle=\scriptsize, % the size of the fonts that are used for the code
numbers=none, % where to put the line-numbers
numberstyle=\footnotesize, % the size of the fonts that are used for the line-numbers
stepnumber=1, % the step between two line-numbers. If it is 1 each line will be numbered
numbersep=5pt, % how far the line-numbers are from the code
backgroundcolor=\color{white}, % choose the background color. You must add \usepackage{color}
commentstyle=\color{blue},
showspaces=false, % show spaces adding particular underscores
showstringspaces=false, % underline spaces within strings
showtabs=false, % show tabs within strings adding particular underscores
frame=single, % adds a frame around the code
tabsize=2, % sets default tabsize to 2 spaces
captionpos=b, % sets the caption-position to bottom
breaklines=true, % sets automatic line breaking
breakatwhitespace=false, % sets if automatic breaks should only happen at whitespace
escapeinside={\%*}{*)} % if you want to add a comment within your code
}
\begin{document}
\maketitle
\vfill
\begin{scriptsize}
\begin{center}
\includegraphics[width=3cm]{imgs/by-sa.pdf}
\\
This work is licensed under Attribution-ShareAlike 3.0 International (CC BY-SA 3.0) License.
\end{center}
\end{scriptsize}
\newpage
\tableofcontents
\newpage
\begin{figure}[t!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.42\textwidth]{imgs/scene.pdf} & \includegraphics[width=0.45\textwidth]{imgs/pbmap.pdf} \\
\scriptsize a) & \scriptsize b) \\
\end{tabular}
\end{center}
\caption{Plane-based representation. a) RGB image of the scene. b) Point cloud representation with the segmented planar patches superimposed.}
\label{fig:pbmap}
\end{figure}
\section{Introduction}
A PbMap is a highly compact representation of the scene based on a planar model of it. This map representation is proposed to avoid the high memory requirements and processing cost of traditional point cloud representations, whose use has risen considerably with the appearance of low cost RGB-D (Kinect like) sensors. A PbMap compresses the point cloud into a set of planar patches, neglecting the non-planar data. In this way, it offers an enormous data compression at the cost of losing the non-planar details, but we argue that such details have little importance for some applications, such as building lifelong maps, since only the large planes belonging to the scene structure are persistent over time, while the non-planar, generally small objects are more likely to move or disappear from the scene.
We define a PbMap as a set of planar patches described by geometric features (shape, relative position, etc.) and/or radiometric features (dominant color). It is organized as an annotated, undirected graph, where nodes stand for planar patches and edges connect neighbor planes when the distance between their closest points is under a threshold. This graph structure permits efficiently finding the closest neighbors of a plane, or selecting groups of nearby planes representing part of the scene.
The input data to construct a PbMap is given by organized point clouds (depth images) together with poses. The process to build a PbMap can run online from the streaming of RGB-D images. This is possible thanks to efficient algorithms for segmenting planes from organized point clouds \cite{holz2012fast}. The poses of those RGB-D images can be obtained in a number of ways: e.g. visual odometry, robot localization, etc. Thus, the planes are efficiently segmented from the organized point clouds, and these planes are integrated into the PbMap according to the sensor pose, either by updating an already existing plane or by initializing a new one when it is first observed. Figure \ref{fig:pbmap_construction} depicts a 2D scheme of the PbMap construction process.
An application for recognising previous places using PbMaps is presented in \cite{fdez2013PbMap}. This method relies on an interpretation tree to efficiently match sets of neighboring planes. Such an interpretation tree applies geometric and radiometric restrictions in the form of both unary and binary constraints \cite{grimson1990}. The unary constraints check the individual correspondence of two planes by directly comparing their features (e.g. size, color), while the binary constraints validate if two pairs of connected planes present the same geometric relationship (e.g. perpendicularity). For further details refer to our paper \emph{“Fast place recognition with plane-based maps”} \cite{fdez2013PbMap}. Some results of this work are shown in this video. %\href{http://www.youtube.com/watch?v=uujqNm_WEIo}
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{imgs/pbmap_construction.pdf}
\caption{2D representation of the map construction procedure. a) RGB-D capture with segmentated planes (blue). b) Current PbMap with segmented planes (blue) superimposed according to the sensor pose. c) PbMap updated: the planes updated are highlighted d) PbMap graph updated: the planes updated are highlighted in blue, the new plane P7 is marked in green and, the new edges are represented with dashed lines.}
\label{fig:pbmap_construction}
\end{figure}
%\section{Library installation}
%
\section{Setting the parameters}
There are some heuristic parameters which govern the plane segmentation process, the PbMap construction and the place recognition and re-localisation methods. These parameters are set in two configuration files: \emph{configPbMap.ini} (for plane segmentation and PbMap construction) which is read by the class \texttt{PbMapMaker}; and \emph{configLocaliser.ini} (for place recognition and localisation) which is read by the class \texttt{heuristicParamenters}. Each parameter is described below: \\
\\
\underline{Plane segmentation} (in \emph{configPbMap.ini})
\begin{itemize}
\item \emph{float dist\_threshold} → Set the maximum distance perpendicular to the plane between two 3D-points (default is set to 0.04 m).
\item \emph{float angle\_threshold} → Set the maximum angle between the normal vectors of two neighbor 3D-points (default is set to 4 deg).
\item \emph{float minInliersRate} → Set the minimum number of inliers as a fraction of the image pixels to segment a plane (default is set to 0.005).
\end{itemize}
\underline{Map construction} (in \emph{configPbMap.ini}) \\
\\
\emph{Global settings:}
\begin{itemize}
\item \emph{bool use\_color} → Choose whether to add color information to the planes (default is set to true);
\item \emph{int graph\_mode} → Choose between establishing edges in the graph according to distance (0) or to visibility (1) (default is set to 0);
\item \emph{float proximity\_neighbor\_planes} → Set the maximum distance between two planar patches to consider them as neighbors (default is set to 1 m);
\end{itemize}
\emph{Parameters to merge two planes representing the same planar surface:}
\begin{itemize}
\item \emph{float max\_cos\_normal} → set the maximum angle (actually the minimum angle cosine) between two planes to merge them (default is set to 0.9848 = 10deg);
\item \emph{float max\_dist\_center\_plane} → Set the maximum distance between a plane and another plane's center to merge the planes (default is set to 0.1 m);
\item \emph{float proximity\_threshold} → Set the maximum distance between two planes to merge them (default is set to 0.15 m);
\end{itemize}
\emph{Parameters to infer some simple semantic knowledge to the planar patches:}
\begin{itemize}
\item \emph{bool inferStructure} → Choose whether to infer if the planes correspond to structural entities such as the floor, ceiling or walls (default is set to true);
\item \emph{bool makeCovisibilityClusters} → Choose whether PbMapMaker clusters groups of planes according to their co-visibility (default is set to true);
\end{itemize}
\emph{Loop closure:}
\begin{itemize}
\item \emph{bool detect\_loopClosure} → If set to true it runs the PbMapLocaliser functionality in a different thread to detect loop closures or to recognise previous PbMaps (default is set to true)
\item \emph{string config\_localiser} → Path to the configuration file containing the heuristic parameters which control the place recognition functionality;
\\
\end{itemize}
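To make the layout of the file concrete, the plane segmentation and map construction parameters above could be collected in a \emph{configPbMap.ini} along the following lines. This is only an illustrative sketch: the grouping into sections and the comments are assumptions, while the parameter names and default values are the ones listed in this guide (the merging and semantic parameters are omitted for brevity).
\begin{lstlisting}
# configPbMap.ini -- illustrative sketch (section names are assumptions)
[segmentation]
dist_threshold  = 0.04   # max point-to-plane distance [m]
angle_threshold = 4.0    # max angle between neighbor normals [deg]
minInliersRate  = 0.005  # min fraction of image pixels

[construction]
use_color                 = 1
graph_mode                = 0     # 0 = distance, 1 = visibility
proximity_neighbor_planes = 1.0   # [m]
detect_loopClosure        = 1
config_localiser          = path_to/configLocaliser.ini
\end{lstlisting}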
\underline{Place recognition} (in \emph{configLocaliser.ini}) *These parameters are required only if detect\_loopClosure=true \\
\\
\emph{Global settings:}
\begin{itemize}
\item \emph{string path\_prev\_pbmaps} → Path to previous PbMaps which are to be detected while constructing the current PbMap;
\item \emph{int min\_planes\_recognition} → Minimum number of planes to accept a match between two places (default is set to 4);
\item \emph{bool use\_structure} → Choose whether to employ or not the semantic knowledge inferred to the planes (default is set to true);
\item \emph{use\_completeness} → Choose whether to differentiate between fully detected planes and partial observations to set different constraints for matching planes (default set to true);
\end{itemize}
\emph{Unary constraints:}
\begin{itemize}
\item \emph{color\_threshold} → Maximum color difference to match two planes (default set to 0.1);
\item \emph{elongation\_threshold} → Maximum ratio between the elongations of two planes to match them (default set to 2.8)
\item \emph{area\_threshold} → Maximum areas ratio to match two planes (default set to 3.0)
\item \emph{area\_full\_threshold} → Used only if use\_completeness is true. Maximum areas ratio to match two planes (default set to 3.0);
\item \emph{area\_half\_threshold} → Used only if use\_completeness is true. Maximum areas ratio to match two planes (default set to 2.5);
\end{itemize}
\emph{Binary constraints:}
\begin{itemize}
\item \emph{angle\_threshold} → Maximum difference between the angles formed by two pairs of planes to match such pairs (default set to 7.0)
\item \emph{dist\_threshold} → Maximum ratio between the distances of two pairs of planes to match such pairs (default set to 2.0)
\item \emph{height\_threshold} → Maximum difference between the height of two pairs of planes (almost parallel) to match such pairs (default set to 0.2 m)
\item \emph{cos\_angle\_parallel} → Maximum angle difference to consider two planes almost parallel (default set to 0.985)
\end{itemize}
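Analogously, the place recognition parameters could be gathered in a \emph{configLocaliser.ini} sketched as follows; again the section names are assumptions and the values are simply the defaults listed above.
\begin{lstlisting}
# configLocaliser.ini -- illustrative sketch (section names are assumptions)
[global]
path_prev_pbmaps       = path_to/previous_pbmaps/
min_planes_recognition = 4
use_structure          = 1
use_completeness       = 1

[unary]
color_threshold      = 0.1
elongation_threshold = 2.8
area_threshold       = 3.0

[binary]
angle_threshold    = 7.0
dist_threshold     = 2.0
height_threshold   = 0.2
cos_angle_parallel = 0.985
\end{lstlisting}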
%\section{Programmer's first steps}
%\label{sect:program_first}
\section{Implementation in MRPT (lib \texttt{mrpt-pbmap})}
\label{sect:implementation}
This library implements the functionality to build Plane-based Maps (PbMaps) from a set of point clouds plus their corresponding poses, which might be given by e.g. the odometry of a robot.
\subsection{Application examples}
Two application examples are provided: one for creating PbMaps (\texttt{pbmap\_example}) and one for visualising them (\texttt{pbmap\_visualizer}). To build the examples within MRPT, the CMake option BUILD\_EXAMPLES must be set to ON (default is OFF).
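For instance, from a build directory the option can be enabled as follows (a minimal sketch; the source path is a placeholder and the exact target names depend on the MRPT version):
\begin{lstlisting}
$ cmake -DBUILD_EXAMPLES=ON <path_to_mrpt_sources>
$ make pbmap_example pbmap_visualizer
\end{lstlisting}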
\subsection{How to create a simple program to build a PbMap}
The central class in mrpt-pbmap is PbMapMaker. This class creates its own thread in which it runs. The input data are passed through its public member frameQueue, which stores a vector of pairs of point cloud plus pose. Thus, in order to create a PbMap, the user just needs to create a PbMapMaker object and fill the vector frameQueue:
\begin{lstlisting}
#include <mrpt/pbmap.h>
using namespace mrpt::pbmap;
using namespace std;

int main(int argc, char** argv)
{
  // Create a PbMapMaker object specifying the configPbMap.ini
  PbMapMaker pbmap_maker("path_to/config_files/pbmap/configPbMap.ini");

  // While a certain condition is fulfilled (e.g. while exploring)
  while (...) {
    ...
    // Get the pair point_cloud + pose
    frameRGBDandPose cloudAndPose;
    pcl::copyPointCloud(point_cloud, *cloudAndPose.cloudPtr);
    cloudAndPose.pose << pose;

    // Detect planes and build the PbMap
    pbmap_maker.frameQueue.push_back(cloudAndPose);
    ...
  }
}
\end{lstlisting}
Also, loop closure is run if it was activated in the configuration file configPbMap.ini. Note that we treat loop closure and place recognition in an equivalent manner, the only difference being that place recognition implies searching previous PbMaps which have no relation yet with the current one being built.
%% ---------------------------------------------------------------
%% BIBLIOGRAPHY
%% ---------------------------------------------------------------
%\newpage
\bibliographystyle{plain}
\bibliography{cites}
\end{document}
| {
"alphanum_fraction": 0.7579918472,
"avg_line_length": 58.5062761506,
"ext": "tex",
"hexsha": "e2e211570808b79386326855ba9ec1beb2371089",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2022-02-22T08:56:28.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-06-08T07:55:51.000Z",
"max_forks_repo_head_hexsha": "4af5fdf7e45b00be4a64c3d4f009acb9ef415ec7",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "gao-ouyang/mrpt",
"max_forks_repo_path": "doc/pbmap-guide/pbmap-guide.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4af5fdf7e45b00be4a64c3d4f009acb9ef415ec7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "gao-ouyang/mrpt",
"max_issues_repo_path": "doc/pbmap-guide/pbmap-guide.tex",
"max_line_length": 813,
"max_stars_count": 9,
"max_stars_repo_head_hexsha": "1baff7cf8ec9fd23e1a72714553bcbd88c201966",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "zarmomin/mrpt",
"max_stars_repo_path": "doc/pbmap-guide/pbmap-guide.tex",
"max_stars_repo_stars_event_max_datetime": "2020-07-16T02:13:43.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-11-19T16:18:09.000Z",
"num_tokens": 3482,
"size": 13983
} |
\subsection{Electroweak theory}
\label{ewktheory}
The electroweak interaction is the unified description of two of the four known fundamental interactions of nature: electromagnetism and the weak interaction.
It is based on the gauge group $SU(2)_{L} \times U(1)_{Y}$, where the subscript $L$ indicates that $SU(2)$ acts on left-handed fields and $Y$ denotes the weak hypercharge~\cite{Langacker:2009my}.
Its Lagrangian reads
\begin{equation} \label{eq:Lew}
L_{EW} = L_{gauge} + L_{Higgs} + L_{fermion} + L_{Yukawa}
\end{equation}
The first term, $L_{gauge}$, is the \textbf{gauge} part:
\begin{equation}
L_{gauge} = -\frac{1}{4} W^{i}_{\mu\nu} W^{\mu\nu i} - \frac{1}{4} B_{\mu\nu} B^{\mu\nu}
\end{equation}
where $W^{i}_{\mu}$ and $B_{\mu}$ represent the $SU(2)_{L}$ and $U(1)_{Y}$ gauge fields respectively, with the corresponding field strength tensors
\begin{equation}
\begin{split}
& B_{\mu\nu} = \partial_{\mu} B_{\nu} - \partial_{\nu} B_{\mu} \\
& W^{i}_{\mu\nu} = \partial_{\mu} W^{i}_{\nu} - \partial_{\nu} W^{i}_{\mu} - g \epsilon_{ijk} W^{j}_{\mu} W^{k}_{\nu}
\end{split}
\end{equation}
In the equations above, $g$ denotes the $SU(2)_{L}$ gauge coupling and $\epsilon_{ijk}$ denotes the totally antisymmetric tensor.
The gauge Lagrangian contains three- and four-point self-interactions of $W^{i}$, which result in triple and quartic gauge boson couplings.
The second term is the \textbf{scalar part}:
\begin{equation} \label{eq:Lhiggs}
{L}_{Higgs} = \left(D^{\mu}\phi\right)^{\dagger}D_{\mu}\phi - V(\phi)
\end{equation}
where $\phi = \binom{\phi^{+}}{\phi^{0}}$ denotes a complex Higgs scalar,
and $V(\phi)$ is the Higgs potential which is restricted into the form of
\begin{equation} \label{eq:Vhiggs}
V(\phi) = +\mu^{2}\phi^{\dagger}\phi + \lambda\left(\phi^{\dagger}\phi\right)^{2}
\end{equation}
due to the combination of $SU(2)_{L} \times U(1)_{Y}$ invariance and renormalizability.
In Eq.~\ref{eq:Vhiggs}, $\mu$ is a mass parameter and $\lambda$ is the quartic Higgs scalar coupling,
which represents a quartic self-interaction between the scalar fields.
When $\mu^{2} < 0$, there will be spontaneous symmetry breaking (more details in section~\ref{symbreaking}).
To maintain vacuum stability, $\lambda > 0$ is required.
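Both requirements can be read off directly from the potential: minimising Eq.~\ref{eq:Vhiggs} with respect to $\phi^{\dagger}\phi$ gives (the vacuum expectation value $v$ is introduced here only for illustration)
\begin{equation}
\frac{\partial V}{\partial\left(\phi^{\dagger}\phi\right)} = \mu^{2} + 2\lambda\,\phi^{\dagger}\phi = 0
\quad\Rightarrow\quad
\left\langle\phi^{\dagger}\phi\right\rangle = -\frac{\mu^{2}}{2\lambda} \equiv \frac{v^{2}}{2},
\end{equation}
which is positive (i.e. a non-trivial vacuum exists) only if $\mu^{2}<0$, while the potential is bounded from below only if $\lambda>0$.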
In Eq.~\ref{eq:Lhiggs}, the gauge covariant derivative is defined as~\cite{Langacker:2009my}
\begin{equation}
D_{\mu}\phi = \left(\partial_{\mu} +ig\frac{\tau^{i}}{2}W_{\mu}^{i} + \frac{ig'}{2}B_{\mu}\right)\phi
\end{equation}
in which $\tau^{i}$ represents the Pauli matrices, and $g'$ is the $U(1)_{Y}$ gauge coupling.
The square of the covariant derivative results in three- and four-point interactions between the gauge and scalar fields.
The third term of the Lagrangian is the \textbf{fermion part}
\begin{equation} \label{eq:Lfermion}
\begin{split}
  {L}_{fermion} = \sum_{m=1}^{F} & ( \bar{q}_{mL}^{0}\, i\gamma^{\mu}D_{\mu} q_{mL}^{0} + \bar{l}_{mL}^{0}\, i\gamma^{\mu}D_{\mu} l_{mL}^{0} + \bar{u}_{mR}^{0}\, i\gamma^{\mu}D_{\mu} u_{mR}^{0} \\
  & + \bar{d}_{mR}^{0}\, i\gamma^{\mu}D_{\mu} d_{mR}^{0} + \bar{e}_{mR}^{0}\, i\gamma^{\mu}D_{\mu} e_{mR}^{0} + \bar{\nu}_{mR}^{0}\, i\gamma^{\mu}D_{\mu} \nu_{mR}^{0})
\end{split}
\end{equation}
In Eq.~\ref{eq:Lfermion}, $m$ is the fermion family index and $F$ is the number of families.
The subscripts $L$ ($R$) stand for the left (right) chiral projection $\psi_{L(R)} \equiv \left(1 \mp \gamma_{5} \right) \psi/2$.
\begin{equation}
q_{mL}^{0} = \binom{u_{m}^{0}}{d_{m}^{0}}_{L} \qquad l_{mL}^{0} = \binom{\nu_{m}^{0}}{e_{m}^{-0}}_{L}
\end{equation}
are the $SU(2)$ doublets of left-handed quarks and leptons, while
$u_{mR}^{0}$, $d_{mR}^{0}$, $e_{mR}^{-0}$ and $\nu_{mR}^{0}$ are the right-handed singlets.
The last term in Eq.~\ref{eq:Lew} is the \textbf{Yukawa} term
\begin{equation}
\begin{split}
{L}_{Yukawa} =& -\sum_{m,n=1}^{F} [\Gamma_{mn}^{u}\bar{q}_{mL}^{0}\widetilde{\phi}u_{nR}^{0} + \Gamma_{mn}^{d}\bar{q}_{mL}^{0}\phi d_{nR}^{0} \\
  & + \Gamma_{mn}^{e}\bar{l}_{mL}^{0}\phi e_{nR}^{0} + \Gamma_{mn}^{\nu}\bar{l}_{mL}^{0}\widetilde{\phi}\nu_{nR}^{0}]+h.c.
\end{split}
\end{equation}
The matrices $\Gamma_{mn}$ are the Yukawa couplings between the single Higgs doublet, $\phi$, and the various families of quarks and leptons ($m$ and $n$ being family indices).
| {
"alphanum_fraction": 0.6634250474,
"avg_line_length": 61.1014492754,
"ext": "tex",
"hexsha": "be06c17e2543e5e002ea72d75e4b71002357e87e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "55ec32affb5c105143798989d78043467c88da8e",
"max_forks_repo_licenses": [
"LPPL-1.3c"
],
"max_forks_repo_name": "zhuhel/PhDthesis",
"max_forks_repo_path": "chapters/Theory/ewktheory.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "55ec32affb5c105143798989d78043467c88da8e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"LPPL-1.3c"
],
"max_issues_repo_name": "zhuhel/PhDthesis",
"max_issues_repo_path": "chapters/Theory/ewktheory.tex",
"max_line_length": 194,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "55ec32affb5c105143798989d78043467c88da8e",
"max_stars_repo_licenses": [
"LPPL-1.3c"
],
"max_stars_repo_name": "zhuhel/PhDthesis",
"max_stars_repo_path": "chapters/Theory/ewktheory.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1631,
"size": 4216
} |
\documentclass[a4paper,12pt]{article}
% Margins
\usepackage{a4wide}
% Write english
\usepackage[english]{babel}
% Used for images
\usepackage{graphicx}
% Used to show eps on Windows
\usepackage{epstopdf}
\usepackage{float}
% Text encoding
% Needed to use headers
\usepackage{fancyhdr}
\usepackage{hyperref}
% Used for the euro symbol
\usepackage[gen]{eurosym}
% Used for optimal usage of gensymb4
\usepackage{textcomp}
% Used for the degree symbol
%\usepackage{gensymb}
% Used for captions and subcaptions on images
\usepackage{caption}
\usepackage{subcaption}
% Colors
\usepackage{color}
% Code fragments
\usepackage{listings}
% No tab at start of paragraph
\setlength{\parindent}{0em}
\setlength{\parskip}{1em}
% Defines the todo macro
\newcommand{\todo}[1]{\textcolor{red}{\textbf{TODO: #1}}\PackageWarning{TODO:}{#1!}}
\usepackage{booktabs}% http://ctan.org/pkg/booktabs
\usepackage{colortbl}% http://ctan.org/pkg/colortbl
\usepackage{amsmath}% http://ctan.org/pkg/amsmath
\usepackage{xcolor}% http://ctan.org/pkg/xcolor
\usepackage{graphicx}% http://ctan.org/pkg/graphicx
\colorlet{tableheadcolor}{gray!25} % Table header colour = 25% gray
\newcommand{\headcol}{\rowcolor{tableheadcolor}} %
\colorlet{tablerowcolor}{gray!10} % Table row separator colour = 10% gray
\newcommand{\rowcol}{\rowcolor{tablerowcolor}} %
% Command \topline consists of a (slightly modified) \toprule followed by a \heavyrule rule of colour tableheadcolor (hence, 2 separate rules)
\newcommand{\topline}{\arrayrulecolor{black}\specialrule{0.1em}{\abovetopsep}{0pt}%
\arrayrulecolor{tableheadcolor}\specialrule{\belowrulesep}{0pt}{0pt}%
\arrayrulecolor{black}}
% Command \midline consists of 3 rules (top colour tableheadcolor, middle colour black, bottom colour white)
\newcommand{\midline}{\arrayrulecolor{tableheadcolor}\specialrule{\aboverulesep}{0pt}{0pt}%
\arrayrulecolor{black}\specialrule{\lightrulewidth}{0pt}{0pt}%
\arrayrulecolor{white}\specialrule{\belowrulesep}{0pt}{0pt}%
\arrayrulecolor{black}}
% Command \rowmidlinecw consists of 3 rules (top colour tablerowcolor, middle colour black, bottom colour white)
\newcommand{\rowmidlinecw}{\arrayrulecolor{tablerowcolor}\specialrule{\aboverulesep}{0pt}{0pt}%
\arrayrulecolor{black}\specialrule{\lightrulewidth}{0pt}{0pt}%
\arrayrulecolor{white}\specialrule{\belowrulesep}{0pt}{0pt}%
\arrayrulecolor{black}}
% Command \rowmidlinewc consists of 3 rules (top colour white, middle colour black, bottom colour tablerowcolor)
\newcommand{\rowmidlinewc}{\arrayrulecolor{white}\specialrule{\aboverulesep}{0pt}{0pt}%
\arrayrulecolor{black}\specialrule{\lightrulewidth}{0pt}{0pt}%
\arrayrulecolor{tablerowcolor}\specialrule{\belowrulesep}{0pt}{0pt}%
\arrayrulecolor{black}}
% Command \rowmidlinew consists of 1 white rule
\newcommand{\rowmidlinew}{\arrayrulecolor{white}\specialrule{\aboverulesep}{0pt}{0pt}%
\arrayrulecolor{black}}
% Command \rowmidlinec consists of 1 tablerowcolor rule
\newcommand{\rowmidlinec}{\arrayrulecolor{tablerowcolor}\specialrule{\aboverulesep}{0pt}{0pt}%
\arrayrulecolor{black}}
% Command \bottomline consists of 2 rules (top colour
\newcommand{\bottomline}{\arrayrulecolor{white}\specialrule{\aboverulesep}{0pt}{0pt}%
\arrayrulecolor{black}\specialrule{\heavyrulewidth}{0pt}{\belowbottomsep}}%
\newcommand{\bottomlinec}{\arrayrulecolor{tablerowcolor}\specialrule{\aboverulesep}{0pt}{0pt}%
\arrayrulecolor{black}\specialrule{\heavyrulewidth}{0pt}{\belowbottomsep}}%
\begin{document}
\begin{titlepage}
\fontsize{12pt}{14pt}\selectfont
\begin{center}
% The logo of the University of Ghent
\includegraphics[height=4cm]{figures/logo}
\vspace{1cm}
\fontsize{14pt}{17pt}\selectfont
% The course:
\textsc{Advanced Multimedia Applications}
\fontsize{12pt}{14pt}\selectfont
\vspace{1.5cm}
% De auteur van de thesis:
Feliciaan De Palmenaer\\
Wouter Pinnoo\\
Stefaan Vermassen\\
Titouan Vervack
\vspace{2.8cm}
\fontsize{17.28pt}{21pt}\selectfont
% The title:
\textsc{Academic Data - Automatic Course Assembly}
\fontseries{m}
\fontsize{12pt}{14pt}\selectfont
\vspace{3cm}
2015-2016
\vspace{2cm}
\end{center}
\end{titlepage}
\thispagestyle{empty}
\tableofcontents
\newpage
% ---------------------------------------------------------------------------- %
% Titlepage %
% ---------------------------------------------------------------------------- %
% ---------------------------------------------------------------------------- %
% Body %
% ---------------------------------------------------------------------------- %
\fontsize{12pt}{16pt}\selectfont
\section{General Information}
Onsophic, a Silicon Valley-based startup focused on transforming learning, offers an intuitive, data-driven, online training platform. The platform continuously gathers and analyses learning data and can
therefore be seen as the equivalent of Google Analytics for learning: it provides organizations with a toolset to measure, analyse and discover what works and, more importantly, what doesn't work in training. In this business process, integrating existing content into the e-learning
platform is crucial.
The goal in this project is to automate the course assembly process, by transforming raw learning materials into structured courses.
\section{General project decisions}
This section will outline all of the important decisions we have made during the development of the project.
\subsection{Decisions}
\textbf{Iteration 1 (Course introduction - 18/02/2016)}
\begin{itemize}
\item Stefaan will fulfil the role of team lead. (18/02)
\item Stefaan will be responsible for the communication with the other parties. (18/02)
\item Git will be used as distributed versioning control system for the code, with the UGent GitHub platform as tool. (20/02)
\item Titouan will be responsible for the progress report and will make sure it is always up to date. (22/02)
\item Wouter will be responsible for git and over the code quality. (22/02)
\item Feliciaan will be responsible for the planning of the project. (22/02)
\item For scalability purposes, a web service is chosen over a native application (22/02)
\end{itemize}
\textbf{Iteration 2 (Starting 24/02/2016)}
\begin{itemize}
\item Play web application framework is chosen. Play is written in Scala and Java and is a clean alternative to the legacy Enterprise Java stacks focusing on predictable, minimal resource consumption (25/2)
\item Akka is chosen to create the asynchronous task system. Akka is a toolkit and runtime for building highly concurrent, distributed, and resilient message-driven applications on the JVM (25/2)
\item For Named-Entity Recognition, the NERD API was chosen for its abstraction of other NER APIs. (25/2)
\item We chose to use jsoup for our HTML parser as it doesn't require us to create the syntax tree ourselves and because it implements the WHATWG HTML5 specification. Doing this on our own, using ANTLR4, would take quite some time. The license used for jsoup is the MIT license, strengthening our choice. (25/2)
\end{itemize}
\textbf{Iteration 3 (Starting 09/03/2016)}
\begin{itemize}
\item To decrease the impact of the risk \textit{``NER processing is not fast enough for practical usage of our tool"}, use several NER services in parallel and only choose the fastest one (12/03)
\item Implement threading for parallel processing of the several documents that have to be recognised in one task (12/03)
\end{itemize}
\textbf{Iteration 4 (Starting 16/03/2016)}
\begin{itemize}
\item We have defined a set of supported input formats. We noticed that most user manuals come with a table of contents (ToC). The web service will expect this ToC only, and will dynamically decide which formatting is used (nested unordered lists, nested tables, divs, ...).
\item Error handling implementation
\item Get partial JSON before NER to avoid long waiting times before any result
\item Chapters are meant to group modules, no other metadata is attached to a chapter. Chapters are optional. So if you are able to detect an extra level on top of modules, you can use chapters to reflect that extra level.
\item The only tags that matter for now are the section tags, so we only focus on these.
\end{itemize}
\subsection{Responsibilities}
\begin{tabular}{lp{12cm}}
\topline
\headcol Responsible & Task \\
\midline
\textbf{Iteration 2} \\\\
\rowcol Titouan & - Responsible for the initial architecture\\
\rowcol & - Digitalising the component-connector diagram \\
\rowcol & - Initial wireframe of classes \\
Feliciaan & - Library research for input parsing\\
& - Adding HTML parser to the solution \\
& - Handle multiple URLs \\
\rowcol Wouter & - Research on Named-Entity Recognition libraries\\
\rowcol & - Making a motivated decision on which NER tool or tools are most suitable for what you need, and which will be the easiest to integrate into the solution, and be most future-proof \\
\rowcol & - Implement the most optimal solution for NER.\\
Stefaan & - Setting up the initial Play project\\
& - Creating an asynchronous task system using Akka to come up with a scalable approach \\
& \\\hline
\textbf{Iteration 3} \\\\
\rowcol Titouan & - Implement threading (parallel functionality for different documents in the same task)\\
Feliciaan & - Use the parsed HTML document to fill in section names in the Onsophic JSON document\\
\rowcol Wouter & - Risk list, planning and presentation\\
Stefaan & - Research on Onsophic JSON format, progress report\\
& \\\hline
\textbf{Iteration 4} \\\\
\bottomlinec
\end{tabular}
\subsection{Requirements}
\subsubsection{Functional requirements}
\textbf{Must-haves}
\begin{itemize}
\item parse the input source (HTML)
\item detect the learning modules and their learning activities
\item tag these detected modules and activities based on named entity recognition
\item estimate the Bloom level for each of the detected activities
\end{itemize}
\textbf{Nice-to-haves}
\begin{itemize}
\item Detect chapters in the HTML document, as these are optional (property of a module).
\item Parse other types of documents than user manuals.
\end{itemize}
\subsubsection{Non-functional requirements}
\textbf{Must-haves}
\begin{itemize}
\item Interoperability. Our tool must be able to interchange information with third-party services correctly. See Quality-Attribute Scenario in Figure \ref{fig:qas-interoperability}.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{figures/QAS-interoperability}
\caption{QAS: interoperability}
\label{fig:qas-interoperability}
\end{figure}
\item Modifiability. A developer must be able to change formats of input/output documents without it having an impact on other modules. See Quality-Attribute Scenario in Figure \ref{fig:qas-modifiability}.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{figures/QAS-modifiability}
\caption{QAS: modifiability}
\label{fig:qas-modifiability}
\end{figure}
\end{itemize}
\subsection{Assumptions}
\subsubsection{Input}
After surveying multiple different help sites, we didn't find any similarities that worked for all sites.
Some sites can be parsed by counting the depth of the node in the parse tree.\\
Some sites have a first page which is the index, other sites have the index on every page, and some even use JavaScript to create the whole index of the site.\\
At the moment we have implemented our parser for sites that have a table of contents.\\
We assume that every HTML document contains exactly one document.
\subsection{Risk list}
\begin{tabular}{p{5.5cm}llp{5.5cm}}
\topline
\headcol Risk & Probability & Impact & Mitigation \\
\midline
\rowcol Not finding a generic line in the different input sources & M & H & Review the assumptions about the input data with the client\\
No suitable NER tool that is free to use and license to use in closed-source software & L & H & Looking into commercial tools\\
\rowcol Time-management issues due to other projects and master dissertation & M & M & Discuss planning and requirements with client\\
NER processing is not fast enough for practical usage of our tool & M & M & Use several services in parallel and only use the fastest one\\
\rowcol Not able to fill in all metadata (not fully compatible JSON) & H & L & Use simplified JSON\\
\bottomlinec
\end{tabular}
\subsection{APIs \& frameworks}
\begin{enumerate}
\item Play Framework: web application framework, written in Scala and Java, clean alternative to the legacy Enterprise Java stacks, focussing on predictable, minimal resource consumption
\item Akka: toolkit and runtime for building highly concurrent, distributed, and resilient message-driven applications on the JVM
\item For the parsing of HTML we use jsoup. It implements the WHATWG HTML5 specification and creates the same DOM (Document Object Model) as modern browsers create.
\item Named-Entity recognition will be done using the NERD API.
\end{enumerate}
\section{Prototype}
\subsection*{Iteration 2 (Starting 24/02/2016)}
\subsubsection*{Architecture}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{design/components}
\caption{Initial high-level architecture of the Automatic Course Assembly service}
\label{fig:intelligence}
\end{figure}
Our initial architecture is modular and has as goal to easily allow to extends our application with more features later on. Adding a new type of input or output format for example should be as easy as possible.\\
First and foremost we have the REST API; this is the starting point of our application. A URL will be passed to our REST API, after which it will use the right DocumentGrabber to grab the document from the URL. An example of a DocumentGrabber would be an HTMLGrabber that knows how to grab HTML documents and returns an unprocessed HTMLDocument.\\
Throughout the application we will work with Documents to pass around the (un)processed data.\\
The REST API then passes the Document it grabbed to the DocumentProcessor. The Processor calls the DocumentParser, which will create a parse tree, analyse it and create a new Document; during this analysis the learning modules and activities are created. In this Document only the useful parts of the input document are left over.\\
The DocumentProcessor then passes the parsed Document to the EntityRecogniser, which will perform Named Entity Recognition (NER) on the Document. This module will add tags to the Document and pass it back to the DocumentProcessor.\\
Last but not least, the DocumentProcessor passes the Document to a DocumentTranslator, which will translate a completely processed Document into a Document fit for outputting. In our current case, this will translate an HTMLDocument into a JSONDocument.\\
The only component that can be accessed from the outside is the REST API; everything else is hidden from the user. The user can query the API to start a new import, get the status of a previous task or get the generated output document.
\subsubsection*{NER}
In the assembly process of the JSON-representation of a course document, tags will be created in addition to the parsed content of the (HTML-) documents. For each document, these tags will represent the subject of the document. For this purpose, the content of the whole document will have to be scanned, since the title of the document is not always sufficient to serve as tag for the document. For example, user manuals often contain \textit{Prerequisites} as a title for the first section of the manual. This section will probably contain more useful information than only this subject line.\par
For the automatic scanning of the document for useful tags, Named-Entity Recognition (NER) will be used. NER uses machine learning techniques to find (sequences of) words in the text, i.e. entities, that can be related to concepts from the semantic web. For example, in the sentence \textit{Trump and Clinton are two candidates for the presidential elections of 2016}, the entity \textit{Trump} will be linked to \textit{http://dbpedia.org/page/Donald\_Trump}. The used algorithm will make decisions based on relevance of the word to the concept of the semantic web and its confidence.\par
We think NER is the best solution for the generation of tags, since
\begin{itemize}
\item NER finds sequences of words that are relevant (i.e. it leaves out words like \textit{'and'}, \textit{'or'}, \textit{'the'}, etc.)
\item NER provides a measure of relevance, which we can use to sort all entities on. Only the 5 most relevant concepts in the document will be used as tag for the document.
\end{itemize}\par
Several NER libraries exist that serve an API for easy access. We chose to use the NERD API\footnote{see http://nerd.eurecom.fr/}. This API acts as an abstraction layer for other existing NER APIs, one of which is the most commonly used NER API: AlchemyAPI. The NERD API provides easy methods to submit a document, choose a NER library and retrieve the results of the chosen extractor.\par
This abstraction layer will be a big advantage in this project, because we can easily switch between extractors to tune performance and correctness of the tags.\par
A disadvantage of the NERD API is its license, which prohibits any commercial use of its services. However, we think that the benefit of the abstraction layer outweighs the disadvantage of the license, since one can use the learned results of all the different used extractors gained in this project course to eventually implement the best performing extractor in the project (which will take more effort than implementing this abstraction layer).\par
\subsection*{Demo}
\begin{itemize}
\item Entering new URL in the system:
\begin{lstlisting}
$ http http://localhost:9000/start\
?url=https://docs.oracle.com/cd/E18727_01/doc.121/e13522/toc.htm
HTTP/1.1 200 OK
Content-Length: 27
Content-Type: application/json; charset=utf-8
{
"id": 1,
"status": "QUEUED"
}
\end{lstlisting}
\item Checking progress of a task
\begin{lstlisting}
$ http localhost:9000/check?id=1
HTTP/1.1 200 OK
{
"id": 1,
"status": "RECOGNISING"
}
\end{lstlisting}
\item Retrieving the results of a task
\begin{lstlisting}
$ http localhost:9000/result?id=1
HTTP/1.1 200 OK
{
"tags": ["tag1", "tag2"],
"text": "..."
}
\end{lstlisting}
\end{itemize}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/screenshot-queueing}
\caption{Screenshot of the queueing process}
\label{fig:screenshot-queueing}
\end{figure}
\subsection*{Iteration 3 (Starting 09/03/2016)}
\subsubsection*{Architecture}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{design/componentsv2}
\caption{Revised high-level architecture of the Automatic Course Assembly service}
\label{fig:intelligence}
\end{figure}
A few changes were made in this iteration because we noticed that the document we receive as input has links to other documents that should also be grabbed and parsed.\\
In our second architecture the DocumentGrabber has moved to the DocumentProcessor. The API just passes the URL of the first document to the DocumentProcessor.\\
In the DocumentGrabber two new threads are created, one in which the DocumentGrabber grabs new documents and one in which the DocumentParser parses the grabbed Document. In the latter, more URLs can be derived from the document and these are added to the list of documents to grab.\\
The EntityRecogniser and DocumentTranslator remain the same but they are called once for every document that went through the parser.
\subsubsection*{Retrieve section headers}
As a first attempt in parsing, section headers from a HTML manual document were retrieved. Those headers are used in the \verb|embeddedSections|-\verb|title| field in the JSON format of Onsophic, and the tags (retrieval implementation made in iteration 2) are used in the corresponding JSON field. The code below shows the output of our tool at the end of this iteration.
\begin{lstlisting}
$ http localhost:9000/result?id=1
HTTP/1.1 200 OK
{
"@type":"CourseEdit",
"title":"AMMA test course",
"embeddedSections":[
{
"@type":"SectionEdit",
"title":"Oracle Receivables User Guide",
"activity":{
"url":"https://docs.oracle.com/cd/E18727_01/doc.121
/e13522/toc.htm",
"modality":"text",
"activityType":{
"id":"READ",
"title":"Read"
},
"title":"Oracle Receivables User Guide"
},
"tags":[
"Receipts",
"Transactions",
"Bills Receivable",
"Customer",
"Receivables"
],
"visible":true,
"optionality":"required"
},
{
"@type": "SectionEdit",
"title": "...",
...
}
]
}
\end{lstlisting}
\subsection*{Iteration 4 (Starting 16/03/2016)}
We have defined a set of supported input formats. We noticed that most user manuals come with a table of contents. The web service will expect this table of contents only, and will dynamically decide which formatting is used (nested unordered lists, nested tables, divs,..).
\par
As Onsophic indicated that they would like to test the service during the development to give more feedback and evaluate the integration in the learning platform, we decided to expose stable versions of this web service on a web server. For this purpose, an Azure web service has been created that always reflects the stable version on the master branch and is accessible on the following endpoint: \textbf{http://13.94.197.112:9000}.
\par
The supported input formats will be explained in detail in the sections below. For each format, an example is provided.
\subsubsection*{HTML unordered lists}
The table of contents lists all the sections in the form of a nested unordered list.
\begin{lstlisting}
<ul>
<li>List item one</li>
<li>List item two with subitems:
<ul>
<li>Subitem 1</li>
<li>Subitem 2</li>
</ul>
</li>
<li>Final list item</li>
</ul>
\end{lstlisting}
Example: \url{http://13.94.197.112:9000/assets/examples/lists.html}.
\begin{lstlisting}
Optional<Element> toc = document.getHtmlDocument()
    .select("ul").stream()
    .filter(l -> !l.parent().tagName().equals("li"))  // keep only top-level lists
    .sorted((e1, e2) ->
        e1.childNodes().size() > e2.childNodes().size() ? -1 : 1)
    .findFirst();
\end{lstlisting}
To find the table of contents on the page, we first get all the lists that are not nested inside another list. The list with the most child nodes will be selected. Each child in the parent list will be treated as a Module. If it seems that a Module node has no children, a Section and an Activity with the same name will be created artificially. The URL of the activity will be the URL of the Module node.
\begin{lstlisting}
// No sublevels found, create a new activity with this element's content.
// If there is a link assigned to this list element:
if (!e.select(":root > a").isEmpty() &&
        !e.select(":root > a").get(0).attr("href").isEmpty()) {
    Section section = new Section();
    Element linkElement = e.select(":root > a").get(0);
    section.setTitle(linkElement.ownText());
    setActivity(linkElement, section);
    module.addSection(section);
}
\end{lstlisting}
However, as lists can be nested, the children of the Module nodes will be treated as sections. If the Section node has no children, an activity node will be created artificially. The children of Section nodes will be treated as activities.
\subsubsection*{Description lists}
The table of contents lists all the sections in the form of a description list.
\begin{lstlisting}
<dl>
<dt>Firefox</dt>
<dt>Mozilla Firefox</dt>
<dt>Fx</dt>
<dd>A free, open source, cross-platform, graphical web browser
developed by the Mozilla Corporation and hundreds of volunteers.</dd>
</dl>
\end{lstlisting}
Example: \url{http://13.94.197.112:9000/assets/examples/definitionlists.html}.
\subsubsection*{Raw weblinks}
The table of contents lists all the sections in the form of hyperlinks. The parser will automatically follow each link and parse the pages on the deeper levels.
\begin{lstlisting}
<a href="http://help.apple.com/ipad/5/voiceover/en/iPad73fccd85.html"
alt="At a Glance" class="voiceoverListLink">At a Glance</a><br />
<a href="http://help.apple.com/ipad/5/voiceover/en/iPad741db878.html"
alt="Getting Started" class="voiceoverListLink">Getting Started</a><br />
<a href="http://help.apple.com/ipad/5/voiceover/en/iPad743b0e91.html"
alt="Basics" class="voiceoverListLink">Basics</a>
\end{lstlisting}
Example: \url{http://13.94.197.112:9000/assets/examples/rawlists.html}.
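As an illustration only, the snippet below sketches how such a page could be traversed with jsoup by collecting every anchor and resolving its absolute URL before queueing it for grabbing. It is a simplified sketch rather than the actual implementation, and \texttt{tocUrl}, \texttt{urlsToGrab} and \texttt{sectionTitles} are hypothetical names.
\begin{lstlisting}
Document page = Jsoup.connect(tocUrl).get();    // fetch the ToC page
for (Element link : page.select("a[href]")) {   // every hyperlink on the page
    String title = link.text();                 // e.g. "Getting Started"
    String target = link.absUrl("href");        // resolves relative links too
    if (!target.isEmpty()) {
        // queue the target so the DocumentGrabber fetches and parses it as well
        urlsToGrab.add(target);
        sectionTitles.put(target, title);       // remember the section title
    }
}
\end{lstlisting}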
\subsubsection*{Error handling}
Since we decided that the parsing will be handled asynchronously, it is possible that an error occurs that the web service consumer doesn't know about. Therefore, we implemented better error handling. If an unhandled exception occurs in the parsing process, this will be visible to the consumer, as the error message will now be shown in the response of the check method. \par
Before this implementation, the status of a task would not change and the consumer could wait forever.
\subsubsection*{NER after writing out the JSON}
As the NER lookup is the slowest operation in the process, we decided to make the JSON result accessible before the NER is completed. As soon as the parsing part is done, querying the \verb|result| endpoint will result in the status \verb|DONE_WITHOUT_TAGS|. The returned JSON document will be complete except that the tags array will be empty. As soon as the NER processing is done, the tags array will be filled and the status will be \verb|DONE_WITH_TAGS|.
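For illustration, the \verb|check| endpoint then reports the intermediate and final states along these lines (the output format follows the earlier demo; the values are only an example):
\begin{lstlisting}
$ http localhost:9000/check?id=1
HTTP/1.1 200 OK
{
    "id": 1,
    "status": "DONE_WITHOUT_TAGS"
}
(later, once NER processing has finished)
$ http localhost:9000/check?id=1
HTTP/1.1 200 OK
{
    "id": 1,
    "status": "DONE_WITH_TAGS"
}
\end{lstlisting}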
\subsubsection*{Prefix handling}
We encountered some problems with documents that contain relative links to other documents. Instead of having to modify all links in a document before submitting it to the system, the prefix of the links can be passed to the system in the \verb|start| endpoint. For example, using \verb|?prefix=http://example.com/| will make sure that relative links (e.g. \verb|subpage/page1.html/|) are retrieved correctly.\par
In case no prefix is provided in the \verb|start| endpoint, a prefix will be automatically generated based on the URL of the page that contains the table of contents. For example, submitting \verb|http://example.com/subpage/toc.html| without \verb|prefix| parameter will cause the system to use the prefix \verb|http://example.com/subpage/|.
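For example, a request that sets the prefix explicitly could look as follows, using httpie's \verb|key==value| syntax for query parameters (the URLs are the placeholders used above and the response values are only illustrative):
\begin{lstlisting}
$ http http://13.94.197.112:9000/start \
    url==http://example.com/subpage/toc.html \
    prefix==http://example.com/subpage/
HTTP/1.1 200 OK
{
    "id": 2,
    "status": "QUEUED"
}
\end{lstlisting}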
\section{Application}
The application we create is a web API that takes a url of an HTML document. The document is then processed and a JSON document compatible with the JSON format used in Onsophic is created. Multiple requests can be issued simultaneously and the status of a document can also be queried.
\section{Overview of the different meetings}
\subsection{Initial meeting with client (24/02/2016)}
\begin{itemize}
\item \textbf{Present:} Stefaan, Titouan and Wouter
\item \textbf{Excused:} Feliciaan
\item \textbf{Goal:} Getting to know the specific details and requirements for the project.
\item \textbf{Meeting notes:} We met with Davy, the CTO of Onsophic. Onsophic has seven employees, of which two to three are technical employees. The platform that Onsophic produces is an educational platform, aimed at different companies such as: software, retail,\ldots They are based in Europe \& the USA.\\
The platform offers courses and learning activities. A course consists of modules, and each module contains learning activities, e.g. a YouTube clip, a Drive document, \ldots. Onsophic hosts nothing by itself; everything is hosted somewhere else.
Documents that have to be imported into the platform are for example user manuals about products. The support staff can then learn more about the products through the courses. We have to convert our input documents to a series of modules and learning activities.
\item \textbf{Decisions:} This week we received a sample JSON file and test logins for the Onsophic platform. We will have a meeting with the client weekly, the day before meeting with the teaching staff.
\end{itemize}
\subsection{Review with teaching staff (25/02/2016)}
\begin{itemize}
\item \textbf{Meeting notes}
Make sure to discuss these topics next time:
\begin{itemize}
\item Overview of architecture
\item Who is doing what
\item Risk list
\item What can go wrong, what problems do you have to tackle first, prioritize
\item What features are we going to offer
\item Use your client
\item Include assumptions about inputs in the report!
\item Agile approach, get a demo ready as fast as possible
\item Planning in a powerpoint, what technologies to work together
\end{itemize}
\item \textbf{Decisions:} Next time, come up with a presentation that gives a summary of the above subjects.
\end{itemize}
\subsection{Review with teaching staff (10/03/2016)}
\begin{itemize}
\item \textbf{Meeting notes:}
\begin{itemize}
\item Define more risks. Not only encountered events, but also risks of events that may happen in the future.
\item Make sure a good planning for the next six weeks is defined and presented on the next meeting (17/03/2016).
\end{itemize}
\end{itemize}
\subsection{Review with client (16/03/2016)}
\begin{itemize}
\item \textbf{Meeting notes:}
\begin{itemize}
\item An external NER service can be down. Think about mitigations for this risk.
\item Think about possibilities to use our system in a more user-friendly way.
\end{itemize}
\end{itemize}
\subsection{Review with teaching staff (17/03/2016)}
\begin{itemize}
\item \textbf{Meeting notes:}
\begin{itemize}
\item A more detailed sprint planning must be added to the progress report.
\item A self-reflection on the proposed sprint must be discussed during the next meeting.
\end{itemize}
\end{itemize}
\subsection{Extra internal meeting (28/03/2016)}
\begin{itemize}
\item \textbf{Meeting notes:}
\begin{itemize}
\item Several sources were researched, and a categorisation was made into
\begin{enumerate}
\item sources with navigation based on simple HTML list tags;
\item sources with navigation places in HTML tables;
\item sources with raw (hierarchical) links.
\end{enumerate}
\end{itemize}
\item \textbf{Decisions:}
\begin{itemize}
\item Stefaan implements a parser based on category 1.
\item Titouan implements a parser based on category 2.
\item Wouter implements a parser based on category 3.
\item Feliciaan implements the detection of each category and executes the corresponding parser.
\end{itemize}
\end{itemize}
\subsection{Review with client (30/03/2016)}
\begin{itemize}
\item \textbf{Meeting notes:}
\begin{itemize}
\item There are a lot of possible formats. Make a list of the formats we want to support and provide some examples for them.
\item The parsing algorithm should be described more with all relevant constraints.
\item Sometimes a \textit{parent} in the hierarchical tree has content too. In that case, make a separate section that contains that content, e.g. an \textit{``Introduction"} section.
\item Make a better error handling system, where a user gets notified if things go wrong.
\item Units are not yet implemented. Without units, we cannot import our JSON into the client's system.
\item Both \textit{Bloomlevels} and \textit{Modalities} have to be implemented.
\end{itemize}
\item \textbf{Decisions:}
\begin{itemize}
\item Stefaan provides the client with a link to the Azure production server.
\item Wouter checks whether the used NER service has a usage limitation.
\item The client provides the team with correct user permissions on his platform.
\end{itemize}
\end{itemize}
\subsection{Review with client (13/04/2016)}
\begin{itemize}
\item \textbf{Meeting notes:}
\begin{itemize}
\item The proposed categorisation of Bloomlevels is too wide. The client will provide the team with the exact list of keywords that will have to be used in the application.
\item Testing is not yet possible because there was an error while deploying on Azure. This will be fixed as soon as possible.
\item For the Bloom level detection: the title of an activity is more important than keywords in the content of that activity.
\end{itemize}
\item \textbf{Decisions:}
\begin{itemize}
\item As a temporary solution for the Bloom levels, use only the first two categorisation columns from the proposed levels.
\item A draft of the progress report will be sent to the client before next week.
\item Set a priority: deploy on Azure.
\end{itemize}
\end{itemize}
\end{document}
| {
"alphanum_fraction": 0.7485716022,
"avg_line_length": 56.829015544,
"ext": "tex",
"hexsha": "acd01daf36e9073d17c7c95bde3fa9b8400931f2",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-07-18T11:23:49.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-07-18T11:23:49.000Z",
"max_forks_repo_head_hexsha": "5561ce3bb73d5bc5bf31bcda2be7e038514c7072",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "FlashYoshi/UGentProjects",
"max_forks_repo_path": "AMMA/ProgressReport.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5561ce3bb73d5bc5bf31bcda2be7e038514c7072",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "FlashYoshi/UGentProjects",
"max_issues_repo_path": "AMMA/ProgressReport.tex",
"max_line_length": 598,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5561ce3bb73d5bc5bf31bcda2be7e038514c7072",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "FlashYoshi/UGentProjects",
"max_stars_repo_path": "AMMA/ProgressReport.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 8092,
"size": 32904
} |
\subsubsection{\stid{5.05} Argo}
\paragraph{Overview}
The Argo project~\cite{perarnau2017argo} is building portable, open source system software that improves
the performance and scalability of Exascale applications and runtime
systems and provides them with increased functionality.
We focus on four areas of the OS/R stack where the need from the ECP
applications and facilities is perceived to be the most urgent:
1) support for hierarchical memory;
2) dynamic and hierarchical power management to meet performance
targets;
3) containers for managing resources within a node; and
4) internode interfaces for collectively managing resources across groups
of nodes.
\paragraph{Key Challenges}
Many ECP applications have a complex runtime structure, ranging from in
situ data analysis, through an ensemble of largely independent individual
subjobs, to arbitrarily complex workflow structures~\cite{dreher2017situ}. At the same time, HPC
hardware complexity increases as well, from deeper memory hierarchies
encompassing on-package DRAM and byte-addressable NVRAM, to heterogeneous
compute resources and performance changing dynamically based on
power/thermal constraints.
To meet the emerging needs of ECP workloads while providing optimal
performance and resilience, the compute, memory, and interconnect resources
must be managed in cooperation with applications and runtime systems; yet
existing resource management solutions lack the necessary capabilities and
vendors are reluctant to innovate in this space in the absence of clear
directions from the community.
\paragraph{Solution Strategy}
Our approach is to augment and optimize for HPC the existing open source
offerings provided by vendors. We are working with ECP applications and
runtime systems to distill the needed new interfaces and to build, test,
and evaluate the newly implemented functionality with ECP workloads. This
needs to be done in cooperation with facilities, who can provide early
hardware testbeds where the newly implemented functionality can be
demonstrated to show benefits, tested at scale, and matured. Over the
years we have cultivated an excellent relationship with the vendors
providing HPC platforms because our approach has been to augment and
improve, rather than develop our own OS/R from scratch. IBM, Cray, and
Intel are eager to integrate the components we develop for ECP that can
help applications.
Our work in each area focuses on the following:
\begin{enumerate}
\item \textbf{Hierarchical memory:} Incorporate NVRAM into the memory hierarchy
using UMap: a user-space \texttt{mmap} replacement for out-of-core data,
leveraging recent \texttt{userfaultfd} mechanism of the Linux kernel for page fault
handling, featuring application-class specific prefetching and eviction
algorithms. Expose deep DRAM hierarchy by treating high-bandwidth memory
(MCDRAM, HBM) as a scratchpad~\cite{perarnau2016exploring}, managed by the Argonne Memory Library (AML),
which provides applications with asynchronous memory migration
between memory tiers and other convenience mechanisms.
\item \textbf{Power management:}
\emph{PowerStack} explores hierarchical interfaces for power management
at three specific
levels~\cite{Ellsworth:argo,ellsworth_e2sc2016,patki2016,sakamoto2017}: the
global level of batch job schedulers (which we refer to as the Global
Resource Manager or GRM), the enclave level of job-level runtime systems
(open-source solution of Intel GEOPM and the ECP Power Steering project
are being leveraged here), and the node-level through measurement and control
mechanisms integrated with the NRM (described below).
At the node level, we are developing low-level, vendor-neutral
monitoring/controlling capabilities to monitor power/energy consumption,
core temperature and other hardware status~\cite{osti_1353371,zhang2015minimizing}, and control the hardware power
capping and the CPU frequencies.
\item \textbf{Containers:} Develop a Node Resource Manager (NRM) that leverages
technologies underlying modern container runtimes
(primarily \texttt{cgroups}) to partition resources on compute nodes~\cite{zounmevo2015container},
arbitrating between application components and runtime services.
\item \textbf{Hierarchical resource management:} Develop a set of distributed
services and user-facing interfaces~\cite{perarnau2015distributed} to allow applications and runtimes to
resize, subdivide, and reconfigure their resources inside a job. Provide
the enclave abstraction: recursive groups of nodes that are managed as a
single entity; those enclaves can then be used to launch new services or to
create subjobs that can communicate with each other.
\end{enumerate}
\paragraph{Recent Progress}
We developed the first stable version of UMap, the user-space memory map
page fault handler for NVRAM. The UMap handler maps application threads'
virtual address ranges to persistent data sets, transparently pages in
active pages and evicts unused pages. We evaluated the costs and overheads
of various approaches and characterized end-to-end performance for simple
I/O intensive applications.
%
Further work on performance evaluation and capability improvements is
ongoing. UMap API has been extended to allow for application-specific I/O
strategies. Read-only support was added to enable the use with released
enterprise versions of the Linux kernel. UMap now runs on LLNL Sierra. We
are studying locality-aware eviction and replacement algorithms and are
conducting scaling studies using astronomy application and data.
We developed AML, a memory library for explicit management of deep memory
architectures. Its main feature is a flexible and composable API, allowing
applications to implement algorithms similar to out-of-core for deep
memory. We provided multiple optimized versions of memory migration
facilities, ranging from a regular copy to a transparent move of memory
pages, using synchronous and asynchronous interfaces and single- and
multithreaded backends. We validated the initial implementation on Intel's
Knights Landing using a pipelining scheme for stencil applications. We
also identified interaction points between UMap and AML. Further
performance and capability improvements are underway. In particular, we
performed exhaustive studies comparing performance of various approaches
for block-based DGEMM and task-based Cholesky decomposition.
We developed an API between Node Power and Node Resource Manager (NRM),
which in turn allows Global Resource Manager (GRM) to control and monitor
power and other node-local resources. Additionally, we studied the effect
of power capping on different applications using the NodePower API and
developed power regression models required for a demand-response policy.
We also developed a variation-aware scheduler to address manufacturing
variability under power constraints with Flux infrastructure, and extended
SLURM to support power scheduling plugins. This was tested on two systems
up to 1,200 nodes and resulted in two publications. PowerStack is now a
community-wide effort encompassing five industry partners and multiple
academic and research labs across the US, Europe, and Asia. It enables
prioritization of the critical path, application performance, and
throughput.
We developed the first version of the unified Node Resource Manager. The
NRM provides high level of control over node resources, including initial
allocation at job launch and dynamic reallocation at the request of the
application and other services. The initial set of managed resources
includes CPU cores and memory; they can be allocated to application
components via a container abstraction, which is used to describe
partitions of physical resources (to decrease interference), and more. NRM
integrates dynamic power control using COOLR and libMSR, and provides
support for tracking and reporting of application progress. Such
functionality is needed by the BSP-oriented power policy, which we are
currently evaluating and scaling up. Work is also ongoing to support
third-party container technologies such as Docker, Singularity, and
Shifter. At the job level, we enabled enclave-aware MPI facilities that
can be used to create inter-communicators between MPI jobs launched in
separate enclaves.
On the overall project integration front, we succeeded in creating a first
integrated release, internal for now. Sources from all the individual
components were pulled into one location. Testsuites and Spack packages
were created. We used ChameleonCloud as the first integration platform,
thanks to its capability to provision raw hardware resources. We also
developed custom CI infrastructure, running on development KNL boxes,
on the ChameleonCloud, and on systems at LLNL.
%\begin{figure}[h]
%\centering
%\includegraphics[height=.15\textheight]{projects/2.3.5-Ecosystem/2.3.5.05-Argo/argo-global}\hspace{1em}%
%\includegraphics[height=.15\textheight]{projects/2.3.5-Ecosystem/2.3.5.05-Argo/argo-node}
%\caption{Global and node-local components of the Argo software stack and
% interactions between them and the surrounding HPC system components.}
%\end{figure}
\paragraph{Next Steps}
As outlined above, we are actively working on performance and capability
improvements of individual software components and on scaling them up to
leadership-class systems; significant work remains in these areas.
We need to intensify our engagement with ECP applications and runtimes so
that they can benefit from the technologies we have been developing. We
expect that the newly expanded project team at ANL will enable us to make
significant progress in this area in the near future.
Final software release will take place when the scalability and performance
is validated on a set of relevant ECP workloads.
Looking further into the future, we want to improve resource management for
emerging hardware accelerators, expand dynamic resource management and
placement making it the default, add advanced AI-based autotuning
techniques, use container abstraction to target coupled codes, workflows,
ensembles, and so on. True byte-addressable NVRAM will enable new,
flexible management and access.
| {
"alphanum_fraction": 0.8238011006,
"avg_line_length": 53.8412698413,
"ext": "tex",
"hexsha": "06738779a79e0d4f8c3217dbd527c32138697824",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5628f542d152aaab2d98c60c8eb16e39226e08f8",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "dhrogers/ECP-ST-CAR-PUBLIC",
"max_forks_repo_path": "projects/2.3.5-Ecosystem/2.3.5.05-Argo/2.3.5.05-Argo.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5628f542d152aaab2d98c60c8eb16e39226e08f8",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "dhrogers/ECP-ST-CAR-PUBLIC",
"max_issues_repo_path": "projects/2.3.5-Ecosystem/2.3.5.05-Argo/2.3.5.05-Argo.tex",
"max_line_length": 114,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5628f542d152aaab2d98c60c8eb16e39226e08f8",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "dhrogers/ECP-ST-CAR-PUBLIC",
"max_stars_repo_path": "projects/2.3.5-Ecosystem/2.3.5.05-Argo/2.3.5.05-Argo.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2170,
"size": 10176
} |
\section{Introduction}
Functions are a great asset, and so are objects. As mentioned in the previous lesson, objects have methods, and this is where they really come into their own. This document will also bring up names and namespaces for objects.
It is recommended to start Blockland in singleplayer so that these instructions are a bit easier to follow.
| {
"alphanum_fraction": 0.8050139276,
"avg_line_length": 59.8333333333,
"ext": "tex",
"hexsha": "de4975af6cfbba7a3906c9a917b7ee3edf10ffe5",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4d5c0d29440de2920bb98c570fbab87ab6668473",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "McTwist/Blockland-TorqueScript-Lessons",
"max_forks_repo_path": "src/lesson3/section1.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "4d5c0d29440de2920bb98c570fbab87ab6668473",
"max_issues_repo_issues_event_max_datetime": "2018-04-26T01:02:39.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-02-27T08:36:01.000Z",
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "McTwist/Blockland-TorqueScript-Lessons",
"max_issues_repo_path": "src/lesson3/section1.tex",
"max_line_length": 218,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4d5c0d29440de2920bb98c570fbab87ab6668473",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "McTwist/Blockland-TorqueScript-Lessons",
"max_stars_repo_path": "src/lesson3/section1.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 71,
"size": 359
} |
% Default to the notebook output style
% Inherit from the specified cell style.
\documentclass[11pt]{article}
\usepackage[T1]{fontenc}
% Nicer default font (+ math font) than Computer Modern for most use cases
\usepackage{mathpazo}
% Basic figure setup, for now with no caption control since it's done
% automatically by Pandoc (which extracts  syntax from Markdown).
\usepackage{graphicx}
% We will generate all images so they have a width \maxwidth. This means
% that they will get their normal width if they fit onto the page, but
% are scaled down if they would overflow the margins.
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth
\else\Gin@nat@width\fi}
\makeatother
\let\Oldincludegraphics\includegraphics
% Set max figure width to be 80% of text width, for now hardcoded.
\renewcommand{\includegraphics}[1]{\Oldincludegraphics[width=.8\maxwidth]{#1}}
% Ensure that by default, figures have no caption (until we provide a
% proper Figure object with a Caption API and a way to capture that
% in the conversion process - todo).
\usepackage{caption}
\DeclareCaptionLabelFormat{nolabel}{}
\captionsetup{labelformat=nolabel}
\usepackage{adjustbox} % Used to constrain images to a maximum size
\usepackage{xcolor} % Allow colors to be defined
\usepackage{enumerate} % Needed for markdown enumerations to work
\usepackage{geometry} % Used to adjust the document margins
\usepackage{amsmath} % Equations
\usepackage{amssymb} % Equations
\usepackage{textcomp} % defines textquotesingle
% Hack from http://tex.stackexchange.com/a/47451/13684:
\AtBeginDocument{%
\def\PYZsq{\textquotesingle}% Upright quotes in Pygmentized code
}
\usepackage{upquote} % Upright quotes for verbatim code
\usepackage{eurosym} % defines \euro
\usepackage[mathletters]{ucs} % Extended unicode (utf-8) support
\usepackage[utf8x]{inputenc} % Allow utf-8 characters in the tex document
\usepackage{fancyvrb} % verbatim replacement that allows latex
\usepackage{grffile} % extends the file name processing of package graphics
% to support a larger range
% The hyperref package gives us a pdf with properly built
% internal navigation ('pdf bookmarks' for the table of contents,
% internal cross-reference links, web links for URLs, etc.)
\usepackage{hyperref}
\usepackage{longtable} % longtable support required by pandoc >1.10
\usepackage{booktabs} % table support for pandoc > 1.12.2
\usepackage[inline]{enumitem} % IRkernel/repr support (it uses the enumerate* environment)
\usepackage[normalem]{ulem} % ulem is needed to support strikethroughs (\sout)
% normalem makes italics be italics, not underlines
% Colors for the hyperref package
\definecolor{urlcolor}{rgb}{0,.145,.698}
\definecolor{linkcolor}{rgb}{.71,0.21,0.01}
\definecolor{citecolor}{rgb}{.12,.54,.11}
% ANSI colors
\definecolor{ansi-black}{HTML}{3E424D}
\definecolor{ansi-black-intense}{HTML}{282C36}
\definecolor{ansi-red}{HTML}{E75C58}
\definecolor{ansi-red-intense}{HTML}{B22B31}
\definecolor{ansi-green}{HTML}{00A250}
\definecolor{ansi-green-intense}{HTML}{007427}
\definecolor{ansi-yellow}{HTML}{DDB62B}
\definecolor{ansi-yellow-intense}{HTML}{B27D12}
\definecolor{ansi-blue}{HTML}{208FFB}
\definecolor{ansi-blue-intense}{HTML}{0065CA}
\definecolor{ansi-magenta}{HTML}{D160C4}
\definecolor{ansi-magenta-intense}{HTML}{A03196}
\definecolor{ansi-cyan}{HTML}{60C6C8}
\definecolor{ansi-cyan-intense}{HTML}{258F8F}
\definecolor{ansi-white}{HTML}{C5C1B4}
\definecolor{ansi-white-intense}{HTML}{A1A6B2}
% commands and environments needed by pandoc snippets
% extracted from the output of `pandoc -s`
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\newenvironment{Shaded}{}{}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}}
\newcommand{\RegionMarkerTok}[1]{{#1}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}}
\newcommand{\NormalTok}[1]{{#1}}
% Additional commands for more recent versions of Pandoc
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{{#1}}}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{{#1}}}
\newcommand{\ImportTok}[1]{{#1}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{{#1}}}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{{#1}}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{{#1}}}
\newcommand{\BuiltInTok}[1]{{#1}}
\newcommand{\ExtensionTok}[1]{{#1}}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{{#1}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{{#1}}}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
% Define a nice break command that doesn't care if a line doesn't already
% exist.
\def\br{\hspace*{\fill} \\* }
% Math Jax compatability definitions
\def\gt{>}
\def\lt{<}
% Document parameters
\title{Assignment 1 - Sayantan Raha}
% Pygments definitions
\makeatletter
\def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax%
\let\PY@ul=\relax \let\PY@tc=\relax%
\let\PY@bc=\relax \let\PY@ff=\relax}
\def\PY@tok#1{\csname PY@tok@#1\endcsname}
\def\PY@toks#1+{\ifx\relax#1\empty\else%
\PY@tok{#1}\expandafter\PY@toks\fi}
\def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{%
\PY@it{\PY@bf{\PY@ff{#1}}}}}}}
\def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}}
\expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}}
\expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}}
\expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}}
\expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}}
\expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}}
\expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}}
\expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}}
\expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}}
\expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}}
\expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}}
\expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}}
\expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}}
\expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}}
\expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit}
\expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf}
\expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}}
\expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}}
\expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}}
\expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@fm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@vm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@sa\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@dl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@ch\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cpf\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\def\PYZbs{\char`\\}
\def\PYZus{\char`\_}
\def\PYZob{\char`\{}
\def\PYZcb{\char`\}}
\def\PYZca{\char`\^}
\def\PYZam{\char`\&}
\def\PYZlt{\char`\<}
\def\PYZgt{\char`\>}
\def\PYZsh{\char`\#}
\def\PYZpc{\char`\%}
\def\PYZdl{\char`\$}
\def\PYZhy{\char`\-}
\def\PYZsq{\char`\'}
\def\PYZdq{\char`\"}
\def\PYZti{\char`\~}
% for compatibility with earlier versions
\def\PYZat{@}
\def\PYZlb{[}
\def\PYZrb{]}
\makeatother
% Exact colors from NB
\definecolor{incolor}{rgb}{0.0, 0.0, 0.5}
\definecolor{outcolor}{rgb}{0.545, 0.0, 0.0}
% Prevent overflowing lines due to hard-to-break entities
\sloppy
% Setup hyperref package
\hypersetup{
breaklinks=true, % so long urls are correctly broken across lines
colorlinks=true,
urlcolor=urlcolor,
linkcolor=linkcolor,
citecolor=citecolor,
}
% Slightly bigger margins than the latex defaults
\geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in}
\begin{document}
\maketitle
\subsubsection{SAYANTAN RAHA}\label{sayantan-raha}
\subsubsection{IIMB - BAI09 - Assignment
1}\label{iimb---bai09---assignment-1}
\paragraph{Submission Date: August, 02,
2018}\label{submission-date-august-02-2018}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}1}]:} \PY{k+kn}{from} \PY{n+nn}{IPython}\PY{n+nn}{.}\PY{n+nn}{display} \PY{k}{import} \PY{n}{HTML}
\PY{n}{HTML}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}\PYZsq{}\PYZsq{}}\PY{l+s+s1}{\PYZlt{}script\PYZgt{}}
\PY{l+s+s1}{code\PYZus{}show=true; }
\PY{l+s+s1}{function code\PYZus{}toggle() }\PY{l+s+s1}{\PYZob{}}
\PY{l+s+s1}{ if (code\PYZus{}show)}\PY{l+s+s1}{\PYZob{}}
\PY{l+s+s1}{ \PYZdl{}(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{div.input}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{).hide();}
\PY{l+s+s1}{ \PYZcb{} else }\PY{l+s+s1}{\PYZob{}}
\PY{l+s+s1}{ \PYZdl{}(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{div.input}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{).show();}
\PY{l+s+s1}{ \PYZcb{}}
\PY{l+s+s1}{ code\PYZus{}show = !code\PYZus{}show}
\PY{l+s+s1}{\PYZcb{} }
\PY{l+s+s1}{\PYZdl{}( document ).ready(code\PYZus{}toggle);}
\PY{l+s+s1}{\PYZlt{}/script\PYZgt{}}
\PY{l+s+s1}{\PYZlt{}form action=}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{javascript:code\PYZus{}toggle()}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{\PYZgt{}\PYZlt{}input type=}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{submit}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{ value=}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{Toggle on/off Code}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{\PYZgt{}\PYZlt{}/form\PYZgt{}}\PY{l+s+s1}{\PYZsq{}\PYZsq{}\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}1}]:} <IPython.core.display.HTML object>
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}2}]:} \PY{k+kn}{import} \PY{n+nn}{warnings}
\PY{n}{warnings}\PY{o}{.}\PY{n}{filterwarnings}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ignore}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{o}{\PYZpc{}}\PY{k}{load\PYZus{}ext} rpy2.ipython
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}3}]:} \PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k}{as} \PY{n+nn}{pd}
\PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k}{as} \PY{n+nn}{np}
\PY{k+kn}{import} \PY{n+nn}{scipy} \PY{k}{as} \PY{n+nn}{sp}
\PY{k+kn}{import} \PY{n+nn}{matplotlib}\PY{n+nn}{.}\PY{n+nn}{pyplot} \PY{k}{as} \PY{n+nn}{plt}
\PY{k+kn}{import} \PY{n+nn}{seaborn} \PY{k}{as} \PY{n+nn}{sns}
\PY{o}{\PYZpc{}}\PY{k}{matplotlib} inline
\end{Verbatim}
\section{Q1}\label{q1}
\subsection{Q1 - a}\label{q1---a}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}4}]:} \PY{n}{cgpaList} \PY{o}{=} \PY{p}{[}\PY{l+m+mf}{3.36}\PY{p}{,} \PY{l+m+mf}{1.56}\PY{p}{,} \PY{l+m+mf}{1.48}\PY{p}{,} \PY{l+m+mf}{1.43}\PY{p}{,} \PY{l+m+mf}{2.64}\PY{p}{,} \PY{l+m+mf}{1.48}\PY{p}{,} \PY{l+m+mf}{2.77}\PY{p}{,} \PY{l+m+mf}{2.20}\PY{p}{,} \PY{l+m+mf}{1.38}\PY{p}{,} \PY{l+m+mf}{2.84}\PY{p}{,}
\PY{l+m+mf}{1.88}\PY{p}{,} \PY{l+m+mf}{1.83}\PY{p}{,} \PY{l+m+mf}{1.87}\PY{p}{,} \PY{l+m+mf}{1.95}\PY{p}{,} \PY{l+m+mf}{3.43}\PY{p}{,} \PY{l+m+mf}{1.28}\PY{p}{,} \PY{l+m+mf}{3.67}\PY{p}{,} \PY{l+m+mf}{2.23}\PY{p}{,} \PY{l+m+mf}{1.71}\PY{p}{,} \PY{l+m+mf}{1.68}\PY{p}{,}
\PY{l+m+mf}{2.57}\PY{p}{,} \PY{l+m+mf}{3.74}\PY{p}{,} \PY{l+m+mf}{1.98}\PY{p}{,} \PY{l+m+mf}{1.66}\PY{p}{,} \PY{l+m+mf}{1.66}\PY{p}{,} \PY{l+m+mf}{2.96}\PY{p}{,} \PY{l+m+mf}{1.77}\PY{p}{,} \PY{l+m+mf}{1.62}\PY{p}{,} \PY{l+m+mf}{2.74}\PY{p}{,} \PY{l+m+mf}{3.35}\PY{p}{,}
\PY{l+m+mf}{1.80}\PY{p}{,} \PY{l+m+mf}{2.86}\PY{p}{,} \PY{l+m+mf}{3.28}\PY{p}{,} \PY{l+m+mf}{1.14}\PY{p}{,} \PY{l+m+mf}{1.98}\PY{p}{,} \PY{l+m+mf}{2.96}\PY{p}{,} \PY{l+m+mf}{3.75}\PY{p}{,} \PY{l+m+mf}{1.89}\PY{p}{,} \PY{l+m+mf}{2.16}\PY{p}{,} \PY{l+m+mf}{2.07}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{List of cgpa scores: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
List of cgpa scores: [3.36, 1.56, 1.48, 1.43, 2.64, 1.48, 2.77, 2.2, 1.38, 2.84, 1.88, 1.83, 1.87, 1.95, 3.43, 1.28, 3.67, 2.23, 1.71, 1.68, 2.57, 3.74, 1.98, 1.66, 1.66, 2.96, 1.77, 1.62, 2.74, 3.35, 1.8, 2.86, 3.28, 1.14, 1.98, 2.96, 3.75, 1.89, 2.16, 2.07]
\end{Verbatim}
\textbf{MEAN}
The Mean of a list of numbers is defined as the sum of all the numbers
in the list divided by the number of elements of the list.
n = number of elements in list
\begin{equation*}
Mean = \frac{( \sum_{k=1}^n x_k )} {n}
\end{equation*}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}5}]:} \PY{n}{meanList} \PY{o}{=} \PY{n+nb}{sum}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)} \PY{o}{/} \PY{n+nb}{len}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Mean of CGPA: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{meanList}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Mean of CGPA: 2.2652499999999995
\end{Verbatim}
\textbf{MEDIAN}
The Median is the middle value of the sorted list of numbers. Here n (the
number of items in the list) is even, hence the median is the average of the
\begin{equation*} \frac{n}{2} \end{equation*}
th and
\begin{equation*} \frac{n + 2}{2} \end{equation*}
th elements of the list.
First we sort all items in the list in ascending order, then take the
average of the 20th and 21st elements of the list to find the median.
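A small general-purpose helper (an illustrative sketch, not part of the
original notebook run) that handles both odd and even n:
\begin{verbatim}
def median(values):
    """Middle element for odd n, mean of the two middle elements for even n."""
    xs = sorted(values)
    n = len(xs)
    mid = n // 2
    if n % 2 == 1:
        return xs[mid]
    return (xs[mid - 1] + xs[mid]) / 2

print(median([3.36, 1.56, 1.48, 1.43]))   # 1.52
\end{verbatim}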
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}6}]:} \PY{n}{cgpaList}\PY{o}{.}\PY{n}{sort}\PY{p}{(}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Sorted list is : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{20th Element of list is: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{[}\PY{l+m+mi}{19}\PY{p}{]}\PY{p}{)}\PY{p}{)}
         \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{21st Element of list is: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{[}\PY{l+m+mi}{20}\PY{p}{]}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
         \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Median of list is: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{[}\PY{l+m+mi}{19}\PY{p}{]} \PY{o}{+} \PY{n}{cgpaList}\PY{p}{[}\PY{l+m+mi}{20}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Sorted list is : [1.14, 1.28, 1.38, 1.43, 1.48, 1.48, 1.56, 1.62, 1.66, 1.66, 1.68, 1.71, 1.77, 1.8, 1.83, 1.87, 1.88, 1.89, 1.95, 1.98, 1.98, 2.07, 2.16, 2.2, 2.23, 2.57, 2.64, 2.74, 2.77, 2.84, 2.86, 2.96, 2.96, 3.28, 3.35, 3.36, 3.43, 3.67, 3.74, 3.75]
20th Element of list is: 1.98
21st Element of list is: 1.98
Median of list is: 1.98
\end{Verbatim}
\textbf{MODE}
The Mode is the most frequently occurring element in the list. Sort the
elements and count the most frequent ones. The mode(s) of the list are:
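As a cross-check (an illustrative sketch, not part of the original notebook
run), the counting step can be done with collections.Counter; note that a list
can have several modes when multiple values tie for the highest count:
\begin{verbatim}
from collections import Counter

values = [1.48, 1.66, 1.98, 2.96, 1.48, 1.66, 1.98, 2.96, 1.14]   # illustrative subset
counts = Counter(values)                      # value -> frequency
top = max(counts.values())
modes = sorted(v for v, c in counts.items() if c == top)
print(modes)   # every value tied for the highest frequency
\end{verbatim}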
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}7}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Mode is:}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{;}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{pd}\PY{o}{.}\PY{n}{Series}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)}\PY{o}{.}\PY{n}{mode}\PY{p}{(}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Mode is:
0 1.48
1 1.66
2 1.98
3 2.96
dtype: float64
\end{Verbatim}
\textbf{Standard Deviation}
The standard deviation of a list of numbers is calculated by:
\begin{equation*} \sigma = \sqrt{\frac{ \sum_{k=1}^n (x_k - \bar{x})^2} {n - 1}} \end{equation*}
Instead of dividing by n, we divide by n - 1. This is called Bessel's
correction, which is needed to remove the \textbf{downward bias}: because the
deviations are measured from the sample mean rather than the true mean, the
numerator tends to be underestimated.
Another way to see it is that, since we compute the mean from the sample
itself, we lose \textbf{one degree of freedom}. Hence the denominator
is n - 1.
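As a quick cross-check (an illustrative sketch, not part of the original
notebook run), NumPy and the standard library apply the same n - 1 denominator
when asked:
\begin{verbatim}
import statistics
import numpy as np

cgpaList = [3.36, 1.56, 1.48, 1.43, 2.64, 1.48, 2.77, 2.20]   # first few values; use the full list above

m = sum(cgpaList) / len(cgpaList)
manual_sd = (sum((x - m) ** 2 for x in cgpaList) / (len(cgpaList) - 1)) ** 0.5

print(manual_sd)
print(np.std(cgpaList, ddof=1))      # ddof=1 gives the same n - 1 denominator
print(statistics.stdev(cgpaList))    # statistics.stdev also divides by n - 1
\end{verbatim}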
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}8}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Sample Mean: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Sum of Squared Difference from Mean: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{p}{(}\PY{n+nb}{sum}\PY{p}{(}\PY{p}{(}\PY{n}{cgpaList} \PY{o}{\PYZhy{}} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Standard Deviation: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n+nb}{sum}\PY{p}{(}\PY{p}{(}\PY{n}{cgpaList} \PY{o}{\PYZhy{}} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)}\PY{o}{/}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mf}{0.5}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Sample Mean: 2.26525
Sum of Squared Difference from Mean: 22.1391975
Standard Deviation: 0.7534399317591488
\end{Verbatim}
\subsection{Q1 - b}\label{q1---b}
\textbf{Percentile}
Position corresponding to percentile (x) is given by the following
formula:
\begin{equation*} P_x = {\frac{x * (n + 1)} {100}} \end{equation*}
The value at that position is computed as:
\begin{equation*} Val_x = Val_{\lfloor P_x \rfloor} + \{P_x\} \times (Val_{\lfloor P_x \rfloor + 1} - Val_{\lfloor P_x \rfloor}) \end{equation*}
where \(\lfloor P_x \rfloor\) is the integer part and \(\{P_x\}\) the fractional part of \(P_x\).
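A small helper (an illustrative sketch, not part of the original notebook run)
that implements this (n + 1) position rule directly:
\begin{verbatim}
import math

def percentile_nplus1(values, p):
    """Percentile p (0-100) by the (n + 1) position rule with linear interpolation."""
    xs = sorted(values)
    pos = p * (len(xs) + 1) / 100            # 1-based position
    lo = int(math.floor(pos))
    frac = pos - lo
    if lo < 1:                               # below the first element
        return xs[0]
    if lo >= len(xs):                        # above the last element
        return xs[-1]
    return xs[lo - 1] + frac * (xs[lo] - xs[lo - 1])

print(percentile_nplus1([1.14, 1.28, 1.38, 1.43, 1.48], 50))   # 1.38
\end{verbatim}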
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}9}]:} \PY{k+kn}{import} \PY{n+nn}{math}
\PY{n}{cgpaList}\PY{o}{.}\PY{n}{sort}\PY{p}{(}\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(cgpaList)}
\PY{n}{pos} \PY{o}{=} \PY{l+m+mi}{90}\PY{o}{*} \PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)} \PY{o}{+} \PY{l+m+mi}{1}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{100}
\PY{n}{dec} \PY{o}{=} \PY{n+nb}{round}\PY{p}{(}\PY{n}{math}\PY{o}{.}\PY{n}{modf}\PY{p}{(}\PY{n}{pos}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{val} \PY{o}{=} \PY{n}{cgpaList}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{pos}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{]} \PY{o}{+} \PY{n}{dec} \PY{o}{*} \PY{p}{(}\PY{n}{cgpaList}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{pos}\PY{p}{)}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{cgpaList}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{pos}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Position corresponding to Percentile 90: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{math}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{n}{pos}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Value corresponding to Percentile 90: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{val}\PY{p}{)}\PY{p}{)}
\PY{n}{pos} \PY{o}{=} \PY{l+m+mi}{95}\PY{o}{*} \PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)} \PY{o}{+} \PY{l+m+mi}{1}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{100}
\PY{n}{dec} \PY{o}{=} \PY{n+nb}{round}\PY{p}{(}\PY{n}{math}\PY{o}{.}\PY{n}{modf}\PY{p}{(}\PY{n}{pos}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{val} \PY{o}{=} \PY{n}{cgpaList}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{pos}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{]} \PY{o}{+} \PY{n}{dec} \PY{o}{*} \PY{p}{(}\PY{n}{cgpaList}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{pos}\PY{p}{)}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{cgpaList}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{pos}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Position corresponding to Percentile 95: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{math}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{n}{pos}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Value corresponding to Percentile 95: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n+nb}{round}\PY{p}{(}\PY{n}{val}\PY{p}{,}\PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Position corresponding to Percentile 90: 36
Value corresponding to Percentile 90: 3.423
Position corresponding to Percentile 95: 38
Value corresponding to Percentile 95: 3.737
\end{Verbatim}
\subsection{Q1 - c}\label{q1---c}
\textbf{IQR}
The IQR is the range between the \textbf{25th percentile} and the \textbf{75th
percentile}. The formula for computing a percentile is the same as above.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}10}]:} \PY{n}{cgpaList}\PY{o}{.}\PY{n}{sort}\PY{p}{(}\PY{p}{)}
\PY{n}{pos} \PY{o}{=} \PY{l+m+mi}{25} \PY{o}{*} \PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)} \PY{o}{+} \PY{l+m+mi}{1}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{100}
\PY{n}{dec} \PY{o}{=} \PY{n+nb}{round}\PY{p}{(}\PY{n}{math}\PY{o}{.}\PY{n}{modf}\PY{p}{(}\PY{n}{pos}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{val25} \PY{o}{=} \PY{n}{cgpaList}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{pos}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{]} \PY{o}{+} \PY{n}{dec} \PY{o}{*} \PY{p}{(}\PY{n}{cgpaList}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{pos}\PY{p}{)}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{cgpaList}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{pos}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Position corresponding to Percentile 25: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{math}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{n}{pos}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Value corresponding to Percentile 25: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n+nb}{round}\PY{p}{(}\PY{n}{val25}\PY{p}{,}\PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n}{pos} \PY{o}{=} \PY{l+m+mi}{75}\PY{o}{*} \PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)} \PY{o}{+} \PY{l+m+mi}{1}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{100}
\PY{n}{dec} \PY{o}{=} \PY{n+nb}{round}\PY{p}{(}\PY{n}{math}\PY{o}{.}\PY{n}{modf}\PY{p}{(}\PY{n}{pos}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{val75} \PY{o}{=} \PY{n}{cgpaList}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{pos}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{]} \PY{o}{+} \PY{n}{dec} \PY{o}{*} \PY{p}{(}\PY{n}{cgpaList}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{pos}\PY{p}{)}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{cgpaList}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{pos}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Position corresponding to Percentile 75: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{math}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{n}{pos}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Value corresponding to Percentile 75: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n+nb}{round}\PY{p}{(}\PY{n}{val75}\PY{p}{,}\PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{IQR: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{val75} \PY{o}{\PYZhy{}} \PY{n}{val25}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Position corresponding to Percentile 25: 10
Value corresponding to Percentile 25: 1.665
Position corresponding to Percentile 75: 30
Value corresponding to Percentile 75: 2.855
IQR: 1.19
\end{Verbatim}
\subsection{Q1 - d}\label{q1---d}
Plotting a histogram or computing the skewness will both indicate whether the
distribution has a right tail or not.
\textbf{SKEW} Skewness is computed by the following equation
(\textbf{Pearson moment coefficient of skewness}):
\begin{equation*} g_1 = {\frac{ \sum_{k=1}^n \frac{(x_k - \bar{x})^3} {n}} {\sigma^3}} \end{equation*}
\begin{equation*} \sigma = Standard Deviation\end{equation*}
For samples with n observations the formula is adjusted as follows:
\begin{equation*} G_1 = \sqrt{\frac{n * (n - 1)} {n - 2}} * g_1\end{equation*}
If G1 \textless{} 0 the distribution is left skewed; if G1 \textgreater{} 0 it
is right skewed; for G1 near 0, the distribution is considered to be symmetric.
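For reference (an illustrative sketch, not part of the original notebook run),
the adjusted G1 can be derived from g1, and scipy reports it when bias
correction is requested:
\begin{verbatim}
import numpy as np
from scipy import stats

x = np.array([3.36, 1.56, 1.48, 1.43, 2.64, 1.48, 2.77, 2.20, 1.38, 2.84])   # illustrative subset
n = len(x)

g1 = np.mean((x - x.mean()) ** 3) / np.std(x) ** 3     # population form, as in the cell below
G1 = np.sqrt(n * (n - 1)) / (n - 2) * g1               # bias-adjusted sample skewness

print(g1, stats.skew(x))                 # scipy's default (bias=True) matches g1
print(G1, stats.skew(x, bias=False))     # bias=False applies the same adjustment
\end{verbatim}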
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}11}]:} \PY{n}{x\PYZus{}bar} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)}
\PY{n}{numerator} \PY{o}{=} \PY{n+nb}{sum}\PY{p}{(}\PY{p}{(}\PY{n}{cgpaList} \PY{o}{\PYZhy{}} \PY{n}{x\PYZus{}bar}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{3}\PY{p}{)} \PY{o}{/} \PY{n+nb}{len}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)}
\PY{n}{sigma} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)}
\PY{n}{skewness} \PY{o}{=} \PY{n}{numerator} \PY{o}{/} \PY{n}{sigma} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{3}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Mean: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{x\PYZus{}bar}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Numerator: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{numerator}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Sigma: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{sigma}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Skewness: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n+nb}{round}\PY{p}{(}\PY{n}{skewness}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This distribution is slightly Right Tailed}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Mean: 2.26525
Numerator: 0.23074712128124997
Sigma: 0.7439623226346883
Skewness: 0.56
This distribution is slightly Right Tailed
\end{Verbatim}
\subsection{Q1 - e}\label{q1---e}
The optimal number of bins is computed using Sturges' rule:
\textbf{N = 1 + 3.322 * log10(n)}, where n is the number of observations.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}12}]:} \PY{n}{N} \PY{o}{=} \PY{l+m+mi}{1} \PY{o}{+} \PY{l+m+mf}{3.322}\PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{log10}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Number of bins: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n+nb}{int}\PY{p}{(}\PY{n}{N}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{distplot}\PY{p}{(}\PY{n}{cgpaList}\PY{p}{,} \PY{n}{bins}\PY{o}{=} \PY{n+nb}{int}\PY{p}{(}\PY{n}{N}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Number of bins: 6
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}12}]:} <matplotlib.axes.\_subplots.AxesSubplot at 0x7feec0ed1e48>
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_26_2.png}
\end{center}
{ \hspace*{\fill} \\}
\section{Q2}\label{q2}
\subsection{Q2 - a}\label{q2---a}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}13}]:} \PY{n}{data} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{read\PYZus{}excel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{./Chapter 2 BKB.xlsx}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{sheet\PYZus{}name}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{DKB Bank Data}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{data}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}13}]:} Applicant ID Loan Type Gender Marital Status Accomodation Type \textbackslash{}
0 1 Home Loan Male Married Family Other
1 2 Home Loan Male Married Family Other
2 3 Home Loan Male Married Company Provided
3 4 Home Improvement Male Married Owned
4 5 Home Loan Male Married Family Other
No of years in the current address No. of Years in the current job \textbackslash{}
0 32 7
1 3 17
2 7 8
3 15 7
4 4 10
Monthly Salary Balance in Savings Account Loan Amount Requested Term \textbackslash{}
0 8000 1406 500000 120
1 30000 42235 1000000 180
2 8000 15217 700000 120
3 20462 29551 250000 84
4 13534 2056 400000 180
Down Payment EMI Affordable
0 700000 8690
1 300000 16000
2 955000 15000
3 250000 5856
4 700000 6157
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}14}]:} \PY{n}{data}\PY{o}{.}\PY{n}{describe}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}14}]:} Applicant ID No of years in the current address \textbackslash{}
count 3864.000000 3864.000000
mean 1932.500000 10.600414
std 1115.585048 10.905176
min 1.000000 0.000000
25\% 966.750000 2.000000
50\% 1932.500000 6.000000
75\% 2898.250000 15.000000
max 3864.000000 92.000000
No. of Years in the current job Monthly Salary \textbackslash{}
count 3864.000000 3864.000000
mean 10.926501 22618.984472
std 8.086977 19783.322389
min 0.000000 0.000000
25\% 5.000000 12200.750000
50\% 10.000000 19000.000000
75\% 15.000000 28500.000000
max 65.000000 500000.000000
Balance in Savings Account Loan Amount Requested Term \textbackslash{}
count 3.864000e+03 3864.000000 3864.000000
mean 3.158295e+04 609055.261905 160.158126
std 1.274989e+05 245075.074723 37.995737
min 0.000000e+00 50000.000000 15.000000
25\% 1.500000e+03 400000.000000 180.000000
50\% 6.357500e+03 600000.000000 180.000000
75\% 2.500000e+04 800000.000000 180.000000
max 5.388413e+06 1000000.000000 180.000000
Down Payment EMI Affordable
count 3.864000e+03 3.864000e+03
mean 4.274711e+05 1.288153e+04
std 6.380023e+05 3.295702e+04
min 0.000000e+00 8.400000e+01
25\% 2.000000e+05 7.696000e+03
50\% 3.000000e+05 1.077400e+04
75\% 5.000000e+05 1.500000e+04
max 1.700000e+07 1.200000e+06
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}15}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize} \PY{o}{=} \PY{p}{(}\PY{l+m+mi}{10}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{)}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{countplot}\PY{p}{(} \PY{n}{y} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Loan Type}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{n}{data} \PY{o}{=} \PY{n}{data}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{countplot}\PY{p}{(} \PY{n}{y} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Gender}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{n}{data} \PY{o}{=} \PY{n}{data}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{countplot}\PY{p}{(} \PY{n}{y} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Marital Status}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{n}{data} \PY{o}{=} \PY{n}{data}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{countplot}\PY{p}{(} \PY{n}{y} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Accomodation Type}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{n}{data} \PY{o}{=} \PY{n}{data}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{tight\PYZus{}layout}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_31_0.png}
\end{center}
{ \hspace*{\fill} \\}
\paragraph{Observations from above
graphs}\label{observations-from-above-graphs}
\begin{itemize}
\tightlist
\item
Most Loans are Home Loans
\item
Most Loan Applicants are Male
\item
  Most loan applicants are married
\item
  Most people applying for loans live in rented accommodation
\end{itemize}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}16}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize} \PY{o}{=} \PY{p}{(}\PY{l+m+mi}{10}\PY{p}{,} \PY{l+m+mi}{12}\PY{p}{)}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{distplot}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{No of years in the current address}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{distplot}\PY{p}{(} \PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{No. of Years in the current job}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{distplot}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{distplot}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{distplot}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Loan Amount Requested}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{distplot}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Term}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{7}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{distplot}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Down Payment }\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{8}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{distplot}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{EMI Affordable }\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{tight\PYZus{}layout}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_33_0.png}
\end{center}
{ \hspace*{\fill} \\}
\paragraph{Observations}\label{observations}
\begin{itemize}
\tightlist
\item
  Number of years in the current address - right skewed
\item
  No. of years in the current job - right skewed; jumps at 5, 10, 15, 20,
  25, 30 suggest that people tend to round off to the nearest 5-year mark
\item
  Salary is right skewed; most salaries are within 100K, with a few
  high-value outliers
\item
  Balance in savings account - mostly less than 50K, with a few high
  values
\item
  Loan amount - multimodal distribution; the values cluster around rounded
  figures such as 200K, 300K, 400K, 500K, 600K, 700K, 800K and 1000K
\item
  Loan tenure - most apply for a 180-month tenure; since 180 is also the
  maximum, the distribution is left skewed, with a few short-tenure outliers
\item
  Very few people can afford a high EMI
\end{itemize}
\subsubsection{Q2 - b}\label{q2---b}
\textbf{The formulas and rules for the mean, median, mode, standard deviation
and skewness are provided in the answer to Q1.}
The Kurtosis is given by the equation:
\begin{equation*} Kurtosis = {\frac{ \sum_{k=1}^n \frac{(x_k - \bar{x})^4} {n}} {\sigma^4}} \end{equation*}
\begin{equation*} \sigma = Standard Deviation\end{equation*}
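Note (an aside, not part of the original notebook run): this is the raw
(Pearson) kurtosis; many libraries report excess kurtosis, which is the same
quantity minus 3. That is also why the scipy values in Q2-d below are exactly
3 lower than the values computed manually here. A quick sketch:
\begin{verbatim}
import numpy as np
from scipy import stats

x = np.array([8000.0, 30000.0, 8000.0, 20462.0, 13534.0, 25000.0, 18000.0])   # illustrative salaries

raw = np.mean((x - x.mean()) ** 4) / np.std(x) ** 4   # Pearson kurtosis, as in the formula above
print(raw)
print(stats.kurtosis(x, fisher=False))   # raw kurtosis, matches the manual value
print(stats.kurtosis(x))                 # default fisher=True: excess kurtosis = raw - 3
\end{verbatim}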
\textbf{Monthly Salary}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}17}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Mean: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \PY{o}{/} \PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Total elements: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Mean: 22618.98447204969
Total elements: 3864
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}18}]:} \PY{n}{data}\PY{o}{.}\PY{n}{sort\PYZus{}values}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{inplace} \PY{o}{=} \PY{k+kc}{True}\PY{p}{)}
\PY{n}{data}\PY{o}{.}\PY{n}{reset\PYZus{}index}\PY{p}{(}\PY{n}{inplace} \PY{o}{=} \PY{k+kc}{True}\PY{p}{,} \PY{n}{drop} \PY{o}{=} \PY{k+kc}{True}\PY{p}{)}
\PY{n}{x} \PY{o}{=} \PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Median: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{p}{(}\PY{n}{x}\PY{p}{[}\PY{n}{x}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{/} \PY{l+m+mi}{2} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{]} \PY{o}{+}
\PY{n}{x}\PY{p}{[}\PY{n}{x}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{/} \PY{l+m+mi}{2}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Median: 19000.0
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}19}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Mode :}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{ }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{mode}\PY{p}{(}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Mode :
0 15000
dtype: int64
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}20}]:} \PY{n}{x\PYZus{}bar} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{numerator} \PY{o}{=} \PY{n+nb}{sum}\PY{p}{(}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{x\PYZus{}bar}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{3}\PY{p}{)} \PY{o}{/} \PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n}{sigma} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{skewness} \PY{o}{=} \PY{n}{numerator} \PY{o}{/} \PY{n}{sigma} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{3}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Calculation for computing Skew: }\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Numerator: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{numerator}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Sigma: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{sigma}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Skewness: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n+nb}{round}\PY{p}{(}\PY{n}{skewness}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Calculation for computing Skew:
Numerator: 61538297730342.375
Sigma: 19780.762269971405
Skewness: 7.951
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}21}]:} \PY{n}{x\PYZus{}bar} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{numerator} \PY{o}{=} \PY{n+nb}{sum}\PY{p}{(}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{x\PYZus{}bar}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{4}\PY{p}{)} \PY{o}{/} \PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n}{sigma} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{skewness} \PY{o}{=} \PY{n}{numerator} \PY{o}{/} \PY{n}{sigma} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{4}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Numerator: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{numerator}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Sigma: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{sigma}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Kurtosis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n+nb}{round}\PY{p}{(}\PY{n}{skewness}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Numerator: 2.0575588117373972e+19
Sigma: 19780.762269971405
Kurtosis: 134.394
\end{Verbatim}
\textbf{Balance in Savings Account}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}22}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Mean: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \PY{o}{/} \PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Total elements: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Mean: 31582.945910973085
Total elements: 3864
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}23}]:} \PY{n}{data}\PY{o}{.}\PY{n}{sort\PYZus{}values}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{inplace} \PY{o}{=} \PY{k+kc}{True}\PY{p}{)}
\PY{n}{data}\PY{o}{.}\PY{n}{reset\PYZus{}index}\PY{p}{(}\PY{n}{inplace} \PY{o}{=} \PY{k+kc}{True}\PY{p}{,} \PY{n}{drop} \PY{o}{=} \PY{k+kc}{True}\PY{p}{)}
\PY{n}{x} \PY{o}{=} \PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{sort\PYZus{}values}\PY{p}{(}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Median: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{p}{(}\PY{n}{x}\PY{p}{[}\PY{n}{x}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{/} \PY{l+m+mi}{2} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{]} \PY{o}{+}
\PY{n}{x}\PY{p}{[}\PY{n}{x}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{/} \PY{l+m+mi}{2} \PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Median: 6357.5
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}24}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Mode :}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{ }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{mode}\PY{p}{(}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Mode :
0 0
dtype: int64
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}25}]:} \PY{n}{x\PYZus{}bar} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{numerator} \PY{o}{=} \PY{n+nb}{sum}\PY{p}{(}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{x\PYZus{}bar}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{3}\PY{p}{)} \PY{o}{/} \PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n}{sigma} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{skewness} \PY{o}{=} \PY{n}{numerator} \PY{o}{/} \PY{n}{sigma} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{3}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Calculation for computing Skew: }\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Numerator: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{numerator}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Sigma: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{sigma}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Skewness: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n+nb}{round}\PY{p}{(}\PY{n}{skewness}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Calculation for computing Skew:
Numerator: 4.8671220825693384e+16
Sigma: 127482.44692023737
Skewness: 23.492
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}26}]:} \PY{n}{x\PYZus{}bar} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{numerator} \PY{o}{=} \PY{n+nb}{sum}\PY{p}{(}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{x\PYZus{}bar}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{4}\PY{p}{)} \PY{o}{/} \PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n}{sigma} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Balance in Savings Account}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}
\PY{n}{skewness} \PY{o}{=} \PY{n}{numerator} \PY{o}{/} \PY{n}{sigma} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{4}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Numerator: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{numerator}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Sigma: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{sigma}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Kurtosis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n+nb}{round}\PY{p}{(}\PY{n}{skewness}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Numerator: 2.2645418005570515e+23
Sigma: 127482.44692023737
Kurtosis: 857.391
\end{Verbatim}
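As a cross-check (not part of the original notebook), the hand-computed third and fourth moments above can be reproduced with scipy's biased (population) estimators; the sketch below assumes the \texttt{data} frame loaded earlier.

\begin{verbatim}
# Cross-check sketch: scipy's biased (population) estimators should match the
# manual skewness (~23.49) and the manual, non-excess kurtosis (~857.39).
from scipy.stats import skew, kurtosis

col = data['Balance in Savings Account']         # assumes `data` is already loaded
print(skew(col, bias=True))                      # manual skewness above
print(kurtosis(col, fisher=False, bias=True))    # manual (non-excess) kurtosis above
\end{verbatim}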
\subsection{Q2 - c}\label{q2---c}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}27}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize} \PY{o}{=} \PY{p}{(}\PY{l+m+mi}{10}\PY{p}{,} \PY{l+m+mi}{12}\PY{p}{)}\PY{p}{)}
\PY{k}{for} \PY{n}{i}\PY{p}{,} \PY{n}{source} \PY{o+ow}{in} \PY{n+nb}{enumerate}\PY{p}{(}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Loan Amount Requested}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Down Payment }\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{EMI Affordable }\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{p}{:}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{i} \PY{o}{+} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{boxplot}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{n}{source}\PY{p}{]}\PY{p}{)}
\PY{c+c1}{\PYZsh{} Label the plots}
\PY{n}{plt}\PY{o}{.}\PY{n}{title}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Boxplot of }\PY{l+s+si}{\PYZpc{}s}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{n}{source}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{tight\PYZus{}layout}\PY{p}{(}\PY{n}{h\PYZus{}pad} \PY{o}{=} \PY{l+m+mf}{2.5}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_49_0.png}
\end{center}
{ \hspace*{\fill} \\}
\textbf{Observations:}
\begin{itemize}
\item Loan Amount Requested does not have outliers.
\item Down Payment and EMI Affordable have outliers.
\end{itemize}
\subsection{Q2 - d}\label{q2---d}
The continuous variables are the following:
\begin{itemize}
\item Monthly Salary
\item Loan Amount Requested
\item Down Payment
\item EMI Affordable
\end{itemize}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}28}]:} \PY{n}{colLst} \PY{o}{=} \PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Monthly Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Loan Amount Requested}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Down Payment }\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{EMI Affordable }\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}
\PY{k}{for} \PY{n}{col} \PY{o+ow}{in} \PY{n}{colLst}\PY{p}{:}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Column }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ has skew : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{col}\PY{p}{,} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{skew}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{n}{col}\PY{p}{]}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Column }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ has Kurtosis : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{col}\PY{p}{,} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{kurtosis}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{n}{col}\PY{p}{]}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Column Monthly Salary has skew : 7.950902138083359
Column Monthly Salary has Kurtosis : 131.39408769367247
Column Loan Amount Requested has skew : 0.0073934349150113765
Column Loan Amount Requested has Kurtosis : -1.018197531983543
Column Down Payment has skew : 13.041420458301284
Column Down Payment has Kurtosis : 257.61353462521816
Column EMI Affordable has skew : 25.533691859591546
Column EMI Affordable has Kurtosis : 760.4961457368507
\end{Verbatim}
Hence, in descending order of skewness and kurtosis, the columns can be
arranged as follows:
\begin{itemize}
\item EMI Affordable: highest skewness
\item Down Payment: second highest skewness
\item Monthly Salary: third highest skewness
\item Loan Amount Requested: negligible skewness
\end{itemize}
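A small, hypothetical sketch of the same ranking done programmatically (it assumes the \texttt{data} frame and the trailing-space column names used earlier in this notebook):

\begin{verbatim}
# Rank the continuous columns by absolute skewness to confirm the ordering above.
from scipy.stats import skew

cols = ['EMI Affordable ', 'Down Payment ', 'Monthly Salary', 'Loan Amount Requested']
ranking = sorted(cols, key=lambda c: abs(skew(data[c])), reverse=True)
print(ranking)   # expected: EMI Affordable, Down Payment, Monthly Salary, Loan Amount Requested
\end{verbatim}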
\section{Q3}\label{q3}
\subsection{Q3 - a}\label{q3---a}
The Z-statistic is computed using the following equation:
\begin{equation*} z = (x - \mu)/ \sigma \end{equation*}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}29}]:} \PY{n}{mu} \PY{o}{=} \PY{l+m+mi}{68}
\PY{n}{sigma} \PY{o}{=} \PY{l+m+mi}{14}
\PY{n}{x} \PY{o}{=} \PY{l+m+mi}{90}
\PY{n}{z} \PY{o}{=} \PY{p}{(}\PY{n}{x} \PY{o}{\PYZhy{}} \PY{n}{mu}\PY{p}{)} \PY{o}{/} \PY{n}{sigma}
\PY{n}{p90} \PY{o}{=} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{norm}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{n}{z}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Z Statistic for mean: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, variance: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, x: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, Z\PYZhy{}Stat: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{mu}\PY{p}{,} \PY{n}{sigma}\PY{p}{,} \PY{n}{x}\PY{p}{,} \PY{n}{z}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Corresponding CDF: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{p90}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Proportion of orders delivered after 90 mins = 1 \PYZhy{} CDF(90): }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ }\PY{l+s+s2}{\PYZpc{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n+nb}{round}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{\PYZhy{}}\PY{n}{p90}\PY{p}{)}\PY{o}{*} \PY{l+m+mi}{100}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Z Statistic for mean: 68, variance: 14, x: 90, Z-Stat: 1.5714285714285714
Corresponding CDF: 0.9419584331306725
Proportion of orders delivered after 90 mins = 1 - CDF(90): 5.8 \%
\end{Verbatim}
\subsection{Q3 - b}\label{q3---b}
For 99\% of orders to be fulfilled within 90 minutes, the target mean delivery
time is given by the equation:
\begin{equation*} \mu = 90 - Z_c \, \sigma \end{equation*}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}30}]:} \PY{n}{z} \PY{o}{=} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{norm}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{o}{.}\PY{l+m+mi}{99}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Critical Zc: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{z}\PY{p}{)}\PY{p}{)}
\PY{n}{mu} \PY{o}{=} \PY{l+m+mi}{90} \PY{o}{\PYZhy{}} \PY{n}{z} \PY{o}{*} \PY{n}{sigma}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Mean Delivery Time :}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{mu}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Critical Zc: 2.3263478740408408
Mean Delivery Time :57.43112976342823
\end{Verbatim}
\subsection{Q3 - c}\label{q3---c}
To meet 99\% of orders while maintaining the same mean and standard deviation,
the committed time should be calculated as follows:
\begin{equation*} x = \mu + Z_c \, \sigma \end{equation*}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}31}]:} \PY{n}{mu} \PY{o}{=} \PY{l+m+mi}{68}
\PY{n}{x} \PY{o}{=} \PY{n}{z} \PY{o}{*} \PY{n}{sigma} \PY{o}{+} \PY{n}{mu}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{For Mean }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, variance }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, committed time is }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{mu}\PY{p}{,} \PY{n}{sigma}\PY{p}{,} \PY{n}{x}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
For Mean 68, variance 14, committed time is 100.56887023657177
\end{Verbatim}
\section{Q4}\label{q4}
\subsection{Q4 - a}\label{q4---a}
For the exponential distribution:
\begin{equation*} \text{Mean} = \frac{1}{\lambda} \end{equation*}
\begin{equation*} \lambda = \text{rate of decay} \end{equation*}
The probability of missing the flight is \(P(X \ge 25) = 1 - P(X \le 25)\).
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}32}]:} \PY{n}{lmbda} \PY{o}{=} \PY{l+m+mf}{1.0}\PY{o}{/}\PY{l+m+mi}{20}
\PY{n}{time\PYZus{}left} \PY{o}{=} \PY{l+m+mi}{40} \PY{o}{\PYZhy{}} \PY{l+m+mi}{15}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{lambda : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{lmbda}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Time Left for Security check for Ms. TKW is }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ mins}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{time\PYZus{}left}\PY{p}{)}\PY{p}{)}
\PY{n}{p25} \PY{o}{=} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{expon}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{n}{time\PYZus{}left}\PY{p}{,} \PY{n}{scale} \PY{o}{=} \PY{l+m+mi}{1} \PY{o}{/} \PY{n}{lmbda}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Probability of missing flight: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n+nb}{round}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{p25}\PY{p}{)}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
lambda : 0.05
Time Left for Security check for Ms. TKW is 25 mins
Probability of missing flight: 0.287
\end{Verbatim}
\subsection{Q4 - b}\label{q4---b}
The probability of waiting for more than 40 minutes is \(P(X \ge 40) = 1 - P(X \le 40)\).
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}33}]:} \PY{n}{time\PYZus{}left} \PY{o}{=} \PY{l+m+mi}{40}
\PY{n}{p40} \PY{o}{=} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{expon}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{l+m+mi}{40}\PY{p}{,} \PY{n}{scale} \PY{o}{=} \PY{l+m+mi}{1} \PY{o}{/} \PY{n}{lmbda}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Probability of Ms. TKW waiting for }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ mins at security check: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{l+m+mi}{40}\PY{p}{,} \PY{n+nb}{round}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{p40}\PY{p}{)}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Probability of Ms. TKW waiting for 40 mins at security check: 0.135
\end{Verbatim}
\subsection{Q4 - c}\label{q4---c}
Exponential distributions are memoryless.
\begin{equation*} P(X \ge 20 + 20 \mid X \ge 20) = P(X \ge 20) = e^{-20\lambda}\end{equation*}
\begin{equation*} \lambda = \text{rate of decay} \end{equation*}
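The memoryless property can also be verified numerically; the short sketch below (an illustration, not part of the original solution) re-uses \(\lambda = 1/20\):

\begin{verbatim}
# Numerical check of memorylessness: P(X > 40 | X > 20) should equal P(X > 20).
from scipy.stats import expon

lmbda = 1.0 / 20
p_gt = lambda t: expon.sf(t, scale=1 / lmbda)    # P(X > t) via the survival function
print(p_gt(40) / p_gt(20))                       # conditional probability
print(p_gt(20))                                  # unconditional P(X > 20) ~ 0.368
\end{verbatim}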
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}34}]:} \PY{n}{PXgt20} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{exp}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*}\PY{n}{lmbda} \PY{o}{*} \PY{l+m+mi}{20}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Probability that she would wait for another 20 mins : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n+nb}{round}\PY{p}{(}\PY{n}{PXgt20}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Probability that she would wait for another 20 mins : 0.368
\end{Verbatim}
\subsection{Q4 - d}\label{q4---d}
Setting \(P(X \le t) = 0.99\), i.e.\ \(1 - e^{-\lambda t} = 0.99\), and solving for the lead time t gives:
\begin{equation*} t = -\frac{1}{\lambda} \ln (0.01)\end{equation*}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}35}]:} \PY{n}{t} \PY{o}{=} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1} \PY{o}{*} \PY{p}{(}\PY{l+m+mi}{1}\PY{o}{/}\PY{n}{lmbda}\PY{p}{)} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{l+m+mf}{0.01}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{To ensure she catches flight 99}\PY{l+s+si}{\PYZpc{} o}\PY{l+s+s2}{f times she should reach airport }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ hours before departure}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n+nb}{round}\PY{p}{(}\PY{n}{t}\PY{o}{/}\PY{l+m+mi}{60}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
To ensure she catches flight 99\% of times she should reach airport 1.54 hours before departure
\end{Verbatim}
\section{Q5}\label{q5}
\subsection{Q5 - 1}\label{q5---1}
We will use a T-test for this hypothesis test as the population standard
deviation is unknown.
T-statistic equation:
\begin{equation*} T = {\frac{\bar x - \mu} {S / \sqrt n}} \end{equation*}
where \(S\) is the sample standard deviation.
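As a hedged cross-check (not in the original notebook), scipy's one-sample T-test can be applied to the \texttt{d6} subset constructed in the next cell; it returns a two-sided p-value, so the right-tailed p-value is half of it when the T-statistic is positive.

\begin{verbatim}
# Cross-check sketch: one-sample t-test against the hypothesized mean of 30.
from scipy.stats import ttest_1samp

t_stat, p_two_sided = ttest_1samp(d6.D6, popmean=30)  # `d6` is built in the next cell
print(t_stat, p_two_sided / 2)                        # should match the manual T-stat and p-value
\end{verbatim}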
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}36}]:} \PY{n}{data} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{read\PYZus{}excel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{./Jayalaxmi Agro Case Data .xlsx}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{sheet\PYZus{}name}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{App Usage Data}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{d6} \PY{o}{=} \PY{n}{data}\PY{p}{[}\PY{n}{data}\PY{o}{.}\PY{n}{Year} \PY{o}{\PYZgt{}}\PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{2015\PYZhy{}10\PYZhy{}01 00:00:00}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{We assume that the correct data is in column D6}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{c+c1}{\PYZsh{}sns.distplot(d6.D6, bins= 5)}
\PY{n}{mu} \PY{o}{=} \PY{l+m+mi}{30}
\PY{n}{xbar} \PY{o}{=} \PY{n}{d6}\PY{o}{.}\PY{n}{D6}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S} \PY{o}{=} \PY{n}{d6}\PY{o}{.}\PY{n}{D6}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n}{n} \PY{o}{=} \PY{n}{d6}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Mean of accessing D6 info:}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{xbar}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Sigma of accessing D6 info:}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{S}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Number of observations of accessing D6 info:}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{n}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}\PYZdq{}\PYZdq{}}\PY{l+s+s2}{Since the population standard deviation is not known, we will compute same from data.}\PY{l+s+se}{\PYZbs{}n}
\PY{l+s+s2}{We will use T\PYZhy{}Test for the hypothesis}\PY{l+s+s2}{\PYZdq{}\PYZdq{}\PYZdq{}}\PY{p}{)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ho = Mean of accessing D6 \PYZlt{}= 30}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ha = Mean of accessing D6 \PYZgt{} 30}\PY{l+s+s2}{\PYZdq{}}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Null hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is a Right sided T\PYZhy{}Test}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{tstat} \PY{o}{=} \PY{p}{(}\PY{n}{xbar} \PY{o}{\PYZhy{}} \PY{n}{mu}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{S} \PY{o}{/} \PY{n}{n} \PY{o}{*}\PY{o}{*} \PY{l+m+mf}{0.5}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Value of T\PYZhy{}statistics: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{tstat}\PY{p}{)}\PY{p}{)}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.05}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{alpha/ Significance: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Significant t\PYZhy{}value at alpha \PYZhy{} }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ @ df }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{n}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{n}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{p\PYZhy{}value:}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is less than alpha(}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{)}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{n}{tstat}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{n}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Hence we can reject the NULL Hypothesis (ho)}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
We assume that the correct data is in column D6
Mean of accessing D6 info:40.863046476139324
Sigma of accessing D6 info:19.21537805197856
Number of observations of accessing D6 info:20
Since the population standard deviation is not known, we will compute same from data.
We will use T-Test for the hypothesis
Null hypothesis: ho = Mean of accessing D6 <= 30
Alternate hypothesis: ha = Mean of accessing D6 > 30
This is a Right sided T-Test
Value of T-statistics: 2.5282365298960063
alpha/ Significance: 0.05
Significant t-value at alpha - 0.05 is : 1.7291328115213678 @ df 19
p-value:0.010240954188212248 is less than alpha(0.05)
Hence we can reject the NULL Hypothesis (ho)
\end{Verbatim}
\subsection{Q5 - 2}\label{q5---2}
We will use a Z-test for this hypothesis test on a proportion.
Z-statistic equation:
\begin{equation*} Z = \frac{\hat p - p} {\sqrt{\frac{p (1 - p)}{n}}} \end{equation*}
where
\begin{equation*} \hat p = \text{estimated sample proportion}, \qquad p = \text{expected (hypothesized) proportion} \end{equation*}
The above is valid for a large sample size n and if
\begin{equation*} n \, p \, (1-p) \ge 10 \end{equation*}
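A minimal sketch of the one-sample proportion Z-test, with \texttt{p0} the hypothesized proportion (the function and names below are illustrative, not from the notebook; values comparable to the cell below are used in the example call):

\begin{verbatim}
# Right-tailed z-test for a single proportion, using the null value p0 in the
# standard error; returns the z-statistic and the right-tailed p-value.
from math import sqrt
from scipy.stats import norm

def prop_ztest_right(p_hat, p0, n):
    se = sqrt(p0 * (1 - p0) / n)
    z = (p_hat - p0) / se
    return z, 1 - norm.cdf(z)

print(prop_ztest_right(0.14997, 0.15, 24))
\end{verbatim}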
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}37}]:} \PY{n}{d6users} \PY{o}{=} \PY{n}{data}\PY{o}{.}\PY{n}{D6}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)}
\PY{n}{totusers} \PY{o}{=} \PY{n}{data}\PY{o}{.}\PY{n}{D1}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)} \PY{o}{+} \PY{n}{data}\PY{o}{.}\PY{n}{D2}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)} \PY{o}{+} \PY{n}{data}\PY{o}{.}\PY{n}{D3}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)} \PY{o}{+} \PY{n}{data}\PY{o}{.}\PY{n}{D4}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)} \PY{o}{+} \PY{n}{data}\PY{o}{.}\PY{n}{D5}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)} \PY{o}{+} \PY{n}{data}\PY{o}{.}\PY{n}{D6}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)} \PY{o}{+} \PY{n}{data}\PY{o}{.}\PY{n}{D7}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)} \PY{o}{+} \PY{n}{data}\PY{o}{.}\PY{n}{D8}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)} \PY{o}{+} \PY{n}{data}\PY{o}{.}\PY{n}{D1}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)} \PY{o}{+} \PY{n}{data}\PY{o}{.}\PY{n}{D10}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)} \PY{o}{+} \PY{n}{data}\PY{o}{.}\PY{n}{D11}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{d6 users }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{d6users}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{total Disease application users (Sum of D1..D11): }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{totusers}\PY{p}{)}\PY{p}{)}
\PY{n}{p} \PY{o}{=} \PY{n}{d6users} \PY{o}{/} \PY{n}{totusers}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{proportion of d6 users }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{p}\PY{p}{)}\PY{p}{)}
\PY{n}{n} \PY{o}{=} \PY{n}{data}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{n = }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{n}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{n * p * (1\PYZhy{} p)= }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{n} \PY{o}{*} \PY{n}{p} \PY{o}{*} \PY{p}{(}\PY{l+m+mi}{1}\PY{o}{\PYZhy{}}\PY{n}{p}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(\PYZdq{}n * q =() \PYZob{}\PYZcb{}\PYZdq{}.format(n * (1 \PYZhy{} p)))}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{n * p * (1 \PYZhy{} p) is less than 10, hence the power of the Z test for proportions may not be correct. }\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{Proceeding to calculate the metrics even though the assumptions are incorrect.}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{phat} \PY{o}{=} \PY{o}{.}\PY{l+m+mi}{15}
\PY{n}{S} \PY{o}{=} \PY{p}{(}\PY{n}{p} \PY{o}{*} \PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{p}\PY{p}{)} \PY{o}{/} \PY{n}{n}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mf}{0.5}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Standard Error: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{S}\PY{p}{)}\PY{p}{)}
\PY{n}{zstat} \PY{o}{=} \PY{p}{(}\PY{n}{phat} \PY{o}{\PYZhy{}} \PY{n}{p}\PY{p}{)} \PY{o}{/} \PY{n}{S}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Z\PYZhy{}Stat }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{zstat}\PY{p}{)}\PY{p}{)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Proportion of D6 users \PYZlt{}= 15}\PY{l+s+s2}{\PYZpc{}}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Proportion of D6 users \PYZgt{} 15}\PY{l+s+s2}{\PYZpc{}}\PY{l+s+s2}{\PYZdq{}}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Null hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is a Right sided T\PYZhy{}Test}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{alpha/ Significance: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Significant Z\PYZhy{}value at alpha \PYZhy{} }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{norm}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{n}{alpha}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{p\PYZhy{}value:}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is greater than alpha(}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{)}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{norm}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{n}{zstat}\PY{p}{)}\PY{p}{,} \PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Hence we can retain the NULL Hypothesis (ho) but as the assumptions are not valid, the test results may not be valid}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
d6 users 856.806242628309
total Disease application users (Sum of D1..D11): 5713.156874519798
proportion of d6 users 0.14997071871938142
n = 24
n * p * (1- p)= 3.0595080539081665
n * p * (1 - p) is less than 10, hence the power of the Z test for proportions may not be correct.
Proceeding to calculate the metrics even though the assumptions are incorrect.
Standard Error: 0.07288103955710223
Z-Stat 0.00040176815254719557
Null hypothesis: Proportion of D6 users <= 15\%
Alternate hypothesis: Proportion of D6 users > 15\%
This is a Right sided T-Test
alpha/ Significance: 0.05
Significant Z-value at alpha - 0.05 is : 1.6448536269514729
p-value:0.49983971770134217 is greater than alpha(0.05)
Hence we can retain the NULL Hypothesis (ho) but as the assumptions are not valid, the test results may not be valid
\end{Verbatim}
\subsection{Q5 - 3}\label{q5---3}
We will use a two-sample T-test (with unknown and unequal standard deviations)
to compare the two sample means.
T-statistic equation:
\begin{equation*} T = \frac{(\bar x_1 - \bar x_2) - (\mu _1 - \mu _2)} {\sqrt {\frac {S_1 ^ 2}{n_1} + \frac {S_2 ^ 2} {n_2}}} \end{equation*}
DF equation:
\begin{equation*} df = \left \lfloor{\frac{S_u ^ 4} {\frac{( \frac{S_1 ^ 2}{n_1} )^2}{n_1 - 1} + {\frac{( \frac{S_2 ^ 2}{n_2} )^2}{n_2 - 1}}}} \right \rfloor\end{equation*}
SD (Su):
\begin{equation*} S_u = \sqrt{\frac{S_1 ^ 2}{n_1} + \frac{S_2 ^ 2}{n_2}} \end{equation*}
\(S_1, S_2\) are the sample standard deviations and \(n_1, n_2\) are the sample sizes.
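As a cross-check (not part of the original notebook), scipy's Welch test (\texttt{equal\_var=False}) applied to the two series built in the following cell should reproduce the manual T-statistic; halving its two-sided p-value gives the right-tailed p-value.

\begin{verbatim}
# Welch two-sample t-test cross-check (unequal variances).
from scipy.stats import ttest_ind

t_stat, p_two_sided = ttest_ind(jun16may17, jun15jun16, equal_var=False)
print(t_stat, p_two_sided / 2)   # expected t ~ 3.04; one-sided p ~ 0.006
\end{verbatim}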
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}38}]:} \PY{n}{jun15jun16} \PY{o}{=} \PY{n}{data}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{data}\PY{o}{.}\PY{n}{Year} \PY{o}{\PYZgt{}}\PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{2015\PYZhy{}06\PYZhy{}01 00:00:00}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{data}\PY{o}{.}\PY{n}{Year} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{2016\PYZhy{}06\PYZhy{}01 00:00:00}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{,}
\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Number of users}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}
\PY{n}{n1} \PY{o}{=} \PY{n}{jun15jun16}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n}{x1} \PY{o}{=} \PY{n}{jun15jun16}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S1} \PY{o}{=} \PY{n}{jun15jun16}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Total Usage data for Time \PYZgt{}= June 2015 \PYZam{} Time \PYZlt{} Jul\PYZhy{}2016 :: len (n2) = }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, mean (x1) = }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, se (S1) = }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{n1}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{)}\PY{p}{)}
\PY{n}{jun16may17} \PY{o}{=} \PY{n}{data}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{data}\PY{o}{.}\PY{n}{Year} \PY{o}{\PYZgt{}} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{2016\PYZhy{}06\PYZhy{}01 00:00:00}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Number of users}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}\PY{c+c1}{\PYZsh{}Number of users}
\PY{n}{n2} \PY{o}{=} \PY{n}{jun16may17}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n}{x2} \PY{o}{=} \PY{n}{jun16may17}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S2} \PY{o}{=} \PY{n}{jun16may17}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Total Usage data for Time \PYZgt{}= Jul\PYZhy{}2016 :: len (n2) = }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, mean (x2) = }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, se (S2) = }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{n2}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{)}\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(x1)}
\PY{c+c1}{\PYZsh{}print(x2)}
\PY{c+c1}{\PYZsh{}print(S1)}
\PY{c+c1}{\PYZsh{}print(S2)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is 2 Sample T test, with unknown population SD and the SD of the two are unequal}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{Su} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1} \PY{o}{+} \PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mf}{0.5}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{SE }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{Su}\PY{p}{)}\PY{p}{)}
\PY{n}{df} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{math}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{n}{Su} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{4} \PY{o}{/} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n1} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)} \PY{o}{+} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n2} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{DF }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{df}\PY{p}{)}\PY{p}{)}
\PY{n}{tstat} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{x2} \PY{o}{\PYZhy{}} \PY{n}{x1}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{0}\PY{p}{)} \PY{o}{/}\PY{p}{(}\PY{n}{Su}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T\PYZhy{}stat }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{tstat}\PY{p}{)}\PY{p}{)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{The proportions x2 \PYZhy{} x1 \PYZlt{}= 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x2 \PYZhy{} x1 \PYZgt{} 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Null hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is a right sided T\PYZhy{}Test}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{alpha/ Significance: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Significant t\PYZhy{}value at alpha \PYZhy{} }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha} \PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{p\PYZhy{}value:}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is less than alpha(}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ for t\PYZhy{}stat }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{)}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{n}{tstat}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{,} \PY{n}{alpha}\PY{p}{,} \PY{n}{tstat}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Hence we can reject the NULL Hypothesis (ho)}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{We have proven statistically that number of users have increased over the years}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Total Usage data for Time >= June 2015 \& Time < Jul-2016 :: len (n2) = 13, mean (x1) = 198.53846153846155, se (S1) = 73.04977228417094
Total Usage data for Time >= Jul-2016 :: len (n2) = 11, mean (x2) = 409.3636363636364, se (S2) = 219.79775828123124
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
SE 69.29932393687176
DF 11
T-stat 3.0422399938161893
Null hypothesis: The proportions x2 - x1 <= 0
Alternate hypothesis: x2 - x1 > 0
This is a right sided T-Test
alpha/ Significance: 0.05
Significant t-value at alpha - 0.05 is : 1.7958848187036696
p-value:0.005600843093913066 is less than alpha(0.05 for t-stat 3.0422399938161893)
Hence we can reject the NULL Hypothesis (ho)
We have proven statistically that number of users have increased over the years
\end{Verbatim}
\subsection{Q5 - 4}\label{q5---4}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}39}]:} \PY{n}{tblmean} \PY{o}{=} \PY{n}{data}\PY{p}{[}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D1}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D2}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D3}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D4}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D5}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D6}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D7}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D8}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D9}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D10}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D11}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}\PY{p}{]}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{tblstd} \PY{o}{=} \PY{n}{data}\PY{p}{[}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D1}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D2}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D3}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D4}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D5}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D6}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D7}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D8}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D9}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D10}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D11}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}\PY{p}{]}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n}{usgdata} \PY{o}{=} \PY{n}{data}\PY{p}{[}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D1}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D2}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D3}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D4}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D5}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D6}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D7}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D8}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D9}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D10}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D11}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}\PY{p}{]}
\PY{c+c1}{\PYZsh{}print(tblmean)}
\PY{c+c1}{\PYZsh{}print(tblstd)}
\PY{c+c1}{\PYZsh{}print(usgdata.melt())}
\PY{n}{sns}\PY{o}{.}\PY{n}{distplot}\PY{p}{(}\PY{n}{usgdata}\PY{o}{.}\PY{n}{melt}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{value}\PY{p}{)}
\PY{k+kn}{import} \PY{n+nn}{matplotlib}\PY{n+nn}{.}\PY{n+nn}{mlab} \PY{k}{as} \PY{n+nn}{mlab}
\PY{k+kn}{import} \PY{n+nn}{math}
\PY{k+kn}{import} \PY{n+nn}{statsmodels}\PY{n+nn}{.}\PY{n+nn}{api} \PY{k}{as} \PY{n+nn}{sm}
\PY{n}{mu} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{usgdata}\PY{o}{.}\PY{n}{melt}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{value}\PY{p}{)}
\PY{n}{sigma} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{usgdata}\PY{o}{.}\PY{n}{melt}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{value}\PY{p}{)}
\PY{n}{x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linspace}\PY{p}{(}\PY{n}{mu} \PY{o}{\PYZhy{}} \PY{l+m+mi}{3}\PY{o}{*}\PY{n}{sigma}\PY{p}{,} \PY{n}{mu} \PY{o}{+} \PY{l+m+mi}{3}\PY{o}{*}\PY{n}{sigma}\PY{p}{,} \PY{l+m+mi}{100}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{x}\PY{p}{,}\PY{n}{mlab}\PY{o}{.}\PY{n}{normpdf}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{mu}\PY{p}{,} \PY{n}{sigma}\PY{p}{)}\PY{p}{)}
\PY{n}{pp\PYZus{}x} \PY{o}{=} \PY{n}{sm}\PY{o}{.}\PY{n}{ProbPlot}\PY{p}{(}\PY{n}{usgdata}\PY{o}{.}\PY{n}{melt}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{value}\PY{p}{,} \PY{n}{fit}\PY{o}{=}\PY{k+kc}{True}\PY{p}{)}
\PY{n}{pp\PYZus{}x}\PY{o}{.}\PY{n}{ppplot}\PY{p}{(}\PY{n}{line}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{45}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_77_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_77_1.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}40}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Its close to normal distribution, with close SD.}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Performing Anova}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ho = means of groups are same}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ha = means of groups are not same}\PY{l+s+s2}{\PYZdq{}}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{NULL Hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate Hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ha}\PY{p}{)}\PY{p}{)}
\PY{c+c1}{\PYZsh{} Anova by hand}
\PY{n}{data2} \PY{o}{=} \PY{n}{usgdata}\PY{o}{.}\PY{n}{melt}\PY{p}{(}\PY{p}{)}
\PY{n}{k} \PY{o}{=} \PY{n+nb}{len}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{unique}\PY{p}{(}\PY{n}{data2}\PY{o}{.}\PY{n}{variable}\PY{p}{)}\PY{p}{)}
\PY{n}{N} \PY{o}{=} \PY{n}{data2}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n}{n} \PY{o}{=} \PY{n}{data2}\PY{o}{.}\PY{n}{groupby}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{variable}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{c+c1}{\PYZsh{}print(k)}
\PY{n}{DFbetween} \PY{o}{=} \PY{n}{k} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}
\PY{n}{DFwithin} \PY{o}{=} \PY{n}{N} \PY{o}{\PYZhy{}} \PY{n}{k}
\PY{n}{DFtotal} \PY{o}{=} \PY{n}{N} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}
\PY{n}{SSbetween} \PY{o}{=} \PY{p}{(}\PY{n+nb}{sum}\PY{p}{(}\PY{n}{data2}\PY{o}{.}\PY{n}{groupby}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{variable}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{value}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}\PY{o}{/}\PY{n}{n}\PY{p}{)} \PYZbs{}
\PY{o}{\PYZhy{}} \PY{p}{(}\PY{n}{data2}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{value}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}\PY{o}{/}\PY{n}{N}
\PY{n}{sum\PYZus{}y\PYZus{}squared} \PY{o}{=} \PY{n+nb}{sum}\PY{p}{(}\PY{p}{[}\PY{n}{value}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{k}{for} \PY{n}{value} \PY{o+ow}{in} \PY{n}{data2}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{value}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{values}\PY{p}{]}\PY{p}{)}
\PY{n}{SSwithin} \PY{o}{=} \PY{n}{sum\PYZus{}y\PYZus{}squared} \PY{o}{\PYZhy{}} \PY{n+nb}{sum}\PY{p}{(}\PY{n}{data2}\PY{o}{.}\PY{n}{groupby}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{variable}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{value}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}\PY{o}{/}\PY{n}{n}
\PY{n}{SStotal} \PY{o}{=} \PY{n}{sum\PYZus{}y\PYZus{}squared} \PY{o}{\PYZhy{}} \PY{p}{(}\PY{n}{data2}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{value}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}\PY{o}{/}\PY{n}{N}
\PY{n}{MSbetween} \PY{o}{=} \PY{n}{SSbetween}\PY{o}{/}\PY{n}{DFbetween}
\PY{n}{MSwithin} \PY{o}{=} \PY{n}{SSwithin}\PY{o}{/}\PY{n}{DFwithin}
\PY{n}{F} \PY{o}{=} \PY{n}{MSbetween}\PY{o}{/}\PY{n}{MSwithin}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{F\PYZhy{}Statistic: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{F}\PY{p}{)}\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(sp.stats.f.cdf(F, DFbetween, DFwithin))}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Critical F }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, for DFBetween }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ and DFWithin }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{f}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{o}{.}\PY{l+m+mi}{95}\PY{p}{,} \PY{n}{DFbetween}\PY{p}{,} \PY{n}{DFwithin}\PY{p}{)}\PY{p}{,} \PY{n}{DFbetween}\PY{p}{,} \PY{n}{DFwithin}\PY{p}{)}\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(sp.stats.f\PYZus{}oneway(usgdata.D1, usgdata.D2, usgdata.D3, usgdata.D4, usgdata.D5, usgdata.D6, }
\PY{c+c1}{\PYZsh{} usgdata.D7, usgdata.D8, usgdata.D9, usgdata.D10, usgdata.D11))}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{P\PYZhy{}Value: 1.54e\PYZhy{}05 is less that 0.05}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{We reject the NUll Hypothesis, means are not same}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Its close to normal distribution, with close SD.
Performing Anova
NULL Hypothesis: ho = means of groups are same
Alternate Hypothesis: ha = means of groups are not same
F-Statistic: 4.287407508171504
Critical F 1.8682470831185125, for DFBetween 10 and DFWithin 253
P-Value: 1.54e-05 is less that 0.05
We reject the NUll Hypothesis, means are not same
\end{Verbatim}
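The same ANOVA can be cross-checked with scipy's one-way ANOVA helper, which the notebook hints at in a commented-out line; the sketch below assumes the \texttt{usgdata} frame built in the cell above.

\begin{verbatim}
# One-way ANOVA cross-check across the D1..D11 usage columns.
from scipy.stats import f_oneway

f_stat, p_value = f_oneway(*[usgdata[c] for c in usgdata.columns])
print(f_stat, p_value)   # expected F ~ 4.29, p ~ 1.5e-05
\end{verbatim}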
\subsection{Q5 - 5}\label{q5---5}
We will use a two-sample T-test (with unknown and unequal standard deviations)
to compare the two sample means.
T-statistic equation:
\begin{equation*} T = \frac{(\bar x_1 - \bar x_2) - (\mu _1 - \mu _2)} {\sqrt {\frac {S_1 ^ 2}{n_1} + \frac {S_2 ^ 2} {n_2}}} \end{equation*}
DF equation:
\begin{equation*} df = \left \lfloor{\frac{S_u ^ 4} {\frac{( \frac{S_1 ^ 2}{n_1} )^2}{n_1 - 1} + {\frac{( \frac{S_2 ^ 2}{n_2} )^2}{n_2 - 1}}}} \right \rfloor\end{equation*}
SD (Su):
\begin{equation*} S_u = \sqrt{\frac{S_1 ^ 2}{n_1} + \frac{S_2 ^ 2}{n_2}} \end{equation*}
\(S_1, S_2\) are the sample standard deviations and \(n_1, n_2\) are the sample sizes.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}41}]:} \PY{n}{jun15jun16} \PY{o}{=} \PY{n}{data}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{data}\PY{o}{.}\PY{n}{Year} \PY{o}{\PYZgt{}}\PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{2015\PYZhy{}06\PYZhy{}01 00:00:00}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{data}\PY{o}{.}\PY{n}{Year} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{2016\PYZhy{}06\PYZhy{}01 00:00:00}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{,}
\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Usage}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}
\PY{n}{n1} \PY{o}{=} \PY{n}{jun15jun16}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n}{x1} \PY{o}{=} \PY{n}{jun15jun16}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S1} \PY{o}{=} \PY{n}{jun15jun16}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Usage data for Time \PYZgt{}= June 2015 \PYZam{} Time \PYZlt{} Jul\PYZhy{}2016 :: len (n2) = }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, mean (x1) = }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, se (S1) = }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{n1}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{)}\PY{p}{)}
\PY{n}{jun16may17} \PY{o}{=} \PY{n}{data}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{data}\PY{o}{.}\PY{n}{Year} \PY{o}{\PYZgt{}} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{2016\PYZhy{}06\PYZhy{}01 00:00:00}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Usage}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}\PY{c+c1}{\PYZsh{}Number of users}
\PY{n}{n2} \PY{o}{=} \PY{n}{jun16may17}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n}{x2} \PY{o}{=} \PY{n}{jun16may17}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S2} \PY{o}{=} \PY{n}{jun16may17}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Usage data for Time \PYZgt{}= Jul\PYZhy{}2016 :: len (n2) = }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, mean (x2) = }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, se (S2) = }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{n2}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is 2 Sample T test, with unknown population SD and the SD of the two are unequal}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{The usage means x2 \PYZhy{} x1 \PYZlt{}= 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x2 \PYZhy{} x1 \PYZgt{} 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Null hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is a right sided T\PYZhy{}Test}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{Su} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1} \PY{o}{+} \PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mf}{0.5}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{SE Adjusted: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{Su}\PY{p}{)}\PY{p}{)}
\PY{n}{df} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{math}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{n}{Su} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{4} \PY{o}{/} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n1} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)} \PY{o}{+} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n2} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{DF: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{df}\PY{p}{)}\PY{p}{)}
\PY{n}{tstat} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{x2} \PY{o}{\PYZhy{}} \PY{n}{x1}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{0}\PY{p}{)} \PY{o}{/}\PY{p}{(}\PY{n}{Su}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T\PYZhy{}stat }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{tstat}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{alpha/ Significance: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Significant t\PYZhy{}value at alpha \PYZhy{} }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha} \PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{p\PYZhy{}value:}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is less than alpha(}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{)}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{n}{tstat}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{,} \PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Hence we can reject the NULL Hypothesis (ho)}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{In Q5 \PYZhy{} 3, we have already proven statistically that number of app users have increased over the years}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Correlation Analysis:}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Since both are Quantitative variables we will use Pearson Coefficient.}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(data[[\PYZdq{}Usage\PYZdq{}, \PYZdq{}Number of users\PYZdq{}]].corr())}
\PY{n}{rho} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{(}\PY{n}{data}\PY{o}{.}\PY{n}{Usage}\PY{o}{.}\PY{n}{values} \PY{o}{\PYZhy{}} \PY{n}{data}\PY{o}{.}\PY{n}{Usage}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}\PY{p}{)}\PY{o}{*}
\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Number of users}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}\PY{o}{.}\PY{n}{values} \PY{o}{\PYZhy{}} \PY{n}{data}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Number of users}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}\PY{p}{)}\PY{p}{)} \PY{o}{/}\PY{p}{(}
\PY{p}{(}\PY{n}{data}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{)} \PY{o}{*} \PY{n}{data}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Number of users}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)} \PY{o}{*} \PY{n}{data}\PY{o}{.}\PY{n}{Usage}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Correlation : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{rho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Correlation is positive and non\PYZhy{}zero}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s1}{We will perform hypothesis test for Significance of correlation.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{ho} \PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ho : rho == 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ha : rho != 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Null Hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate Hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ha}\PY{p}{)}\PY{p}{)}
\PY{n}{tstat} \PY{o}{=} \PY{n}{rho} \PY{o}{*}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{22}\PY{p}{)}\PY{o}{/}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{rho} \PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mf}{0.5}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T\PYZhy{}stat }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{tstat}\PY{p}{)}\PY{p}{)}
\PY{n}{tcrit} \PY{o}{=} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{l+m+mf}{0.025}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{l+m+mi}{22}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T\PYZhy{}critical }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ at alpha 0.05}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{tcrit}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T\PYZhy{}stat \PYZlt{} T\PYZhy{}critical for Two sided T\PYZhy{}test, hence we retain Null hypothesis (ho), the correlation between App users and total users is not statistically relevant.}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Usage data for Time >= June 2015 \& Time < Jul-2016 :: len (n2) = 13, mean (x1) = 152.15384615384616, se (S1) = 98.2766894655477
Usage data for Time >= Jul-2016 :: len (n2) = 11, mean (x2) = 296.6363636363636, se (S2) = 99.8982209323797
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
Null hypothesis: The usage means x2 - x1 <= 0
Alternate hypothesis: x2 - x1 > 0
This is a right sided T-Test
SE Adjusted: 40.62250691274703
DF: 21
T-stat 3.556711007345043
alpha/ Significance: 0.05
Significant t-value at alpha - 0.05 is : 1.7207429028118777
p-value:0.0009325540627762585 is less than alpha(0.05)
Hence we can reject the NULL Hypothesis (ho)
In Q5 - 3, we have already proven statistically that number of app users have increased over the years
Correlation Analysis:
Since both are Quantitative variables we will use Pearson Coefficient.
Correlation : 0.3503700784608535
Correlation is positive and non-zero
We will perform hypothesis test for Significance of correlation.
Null Hypothesis: ho : rho == 0
Alternate Hypothesis: ha : rho != 0
T-stat 1.7546032832085956
T-critical 2.073873067904015 at alpha 0.05
T-stat < T-critical for Two sided T-test, hence we retain Null hypothesis (ho), the correlation between App users and total users is not statistically relevant.
\end{Verbatim}
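As a hedged cross-check of the correlation result above, scipy's Pearson correlation helper returns both \(r\) and a two-sided p-value:

\begin{verbatim}
# Pearson correlation and its two-sided significance test.
from scipy.stats import pearsonr

r, p_two_sided = pearsonr(data['Usage'], data['Number of users'])
print(r, p_two_sided)   # r ~ 0.35; p > 0.05, consistent with retaining the null hypothesis
\end{verbatim}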
\subsection{Q5 - 6}\label{q5---6}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}42}]:} \PY{n}{newAppRel} \PY{o}{=} \PY{n}{data}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{data}\PY{o}{.}\PY{n}{Year} \PY{o}{\PYZgt{}}\PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{2016\PYZhy{}08\PYZhy{}01 00:00:00}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{,} \PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Year}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Usage}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}\PY{p}{]}
\PY{n}{newAppRel}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Year}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{=} \PY{n}{newAppRel}\PY{o}{.}\PY{n}{Year}\PY{o}{.}\PY{n}{dt}\PY{o}{.}\PY{n}{strftime}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZpc{}}\PY{l+s+s2}{Y\PYZhy{}}\PY{l+s+s2}{\PYZpc{}}\PY{l+s+s2}{m}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{10}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Year}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Usage}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{data}\PY{o}{=}\PY{n}{newAppRel}\PY{p}{,} \PY{n}{marker}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{o}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{tight\PYZus{}layout}\PY{p}{(}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{From the graph it is clear that October \PYZhy{} 2016 the app usage shifts}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
From the graph it is clear that October - 2016 the app usage shifts
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_82_1.png}
\end{center}
{ \hspace*{\fill} \\}
\subsection{Q5 - 7}\label{q5---7}
We will perform a T-test between the mean numbers of disease-app users during
favourable and unfavourable weather periods, for the 5 diseases provided (with
weather data) in the 2 regions.

We will use a two-sample T-test (with unknown and unequal standard deviations)
to compare the two sample means.
T-statistic equation:
\begin{equation*} T = \frac{(\bar x_1 - \bar x_2) - (\mu _1 - \mu _2)} {\sqrt {\frac {S_1 ^ 2}{n_1} + \frac {S_2 ^ 2} {n_2}}} \end{equation*}
DF equation:
\begin{equation*} df = \left \lfloor{\frac{S_u ^ 4} {\frac{( \frac{S_1 ^ 2}{n_1} )^2}{n_1 - 1} + {\frac{( \frac{S_2 ^ 2}{n_2} )^2}{n_2 - 1}}}} \right \rfloor\end{equation*}
SD (Su):
\begin{equation*} S_u = \sqrt{\frac{S_1 ^ 2}{n_1} + \frac{S_2 ^ 2}{n_2}} \end{equation*}
\(S_1, S_2\) are the sample standard deviations and \(n_1, n_2\) are the sample sizes.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}43}]:} \PY{c+c1}{\PYZsh{} Define Functions}
\PY{k}{def} \PY{n+nf}{prepareData}\PY{p}{(}\PY{n}{dharwad}\PY{p}{,} \PY{n}{source}\PY{p}{,} \PY{n}{ignoreT} \PY{o}{=} \PY{k+kc}{False}\PY{p}{,}
\PY{n}{ignoreH} \PY{o}{=} \PY{k+kc}{False}\PY{p}{,} \PY{n}{T2} \PY{o}{=} \PY{k+kc}{True}\PY{p}{,} \PY{n}{H2} \PY{o}{=} \PY{k+kc}{False}\PY{p}{,}
\PY{n}{lowerT} \PY{o}{=} \PY{l+m+mi}{20}\PY{p}{,} \PY{n}{higherT} \PY{o}{=} \PY{l+m+mi}{24}\PY{p}{,} \PY{n}{lowerH} \PY{o}{=} \PY{l+m+mi}{80}\PY{p}{,} \PY{n}{higherH} \PY{o}{=} \PY{k+kc}{None}\PY{p}{)}\PY{p}{:}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Disease type: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{source}\PY{p}{)}\PY{p}{)}
\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{d1\PYZus{}weather}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{0}
\PY{k}{if} \PY{p}{(}\PY{p}{(}\PY{n}{ignoreH} \PY{o}{==} \PY{k+kc}{False}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{ignoreT} \PY{o}{==} \PY{k+kc}{False}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{T2} \PY{o}{==} \PY{k+kc}{True}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{H2} \PY{o}{==} \PY{k+kc}{False}\PY{p}{)}\PY{p}{)}\PY{p}{:}
\PY{n}{dharwad}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Temperature}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{\PYZgt{}}\PY{o}{=} \PY{n}{lowerT}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Temperature}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{n}{higherT}\PY{p}{)}
\PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Relative Humidity}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{n}{lowerH}\PY{p}{)}\PY{p}{,}
\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{d1\PYZus{}weather}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{1}
\PY{k}{elif} \PY{p}{(}\PY{p}{(}\PY{n}{ignoreH} \PY{o}{==} \PY{k+kc}{False}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{ignoreT} \PY{o}{==} \PY{k+kc}{False}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{T2} \PY{o}{==} \PY{k+kc}{False}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{H2} \PY{o}{==} \PY{k+kc}{False}\PY{p}{)}\PY{p}{)}\PY{p}{:}
\PY{n}{dharwad}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Temperature}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{n}{lowerT}\PY{p}{)}
\PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Relative Humidity}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{n}{lowerH}\PY{p}{)}\PY{p}{,}
\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{d1\PYZus{}weather}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{1}
\PY{k}{elif} \PY{p}{(}\PY{p}{(}\PY{n}{ignoreH} \PY{o}{==} \PY{k+kc}{True}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{ignoreT} \PY{o}{==} \PY{k+kc}{False}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{T2} \PY{o}{==} \PY{k+kc}{True}\PY{p}{)}\PY{p}{)}\PY{p}{:}
\PY{n}{dharwad}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Temperature}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{\PYZgt{}}\PY{o}{=} \PY{n}{lowerT}\PY{p}{)}
\PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Temperature}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{n}{higherT}\PY{p}{)}\PY{p}{,}
\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{d1\PYZus{}weather}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{1}
\PY{k}{elif} \PY{p}{(}\PY{p}{(}\PY{n}{ignoreH} \PY{o}{==} \PY{k+kc}{False}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{ignoreT} \PY{o}{==} \PY{k+kc}{False}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{T2} \PY{o}{==} \PY{k+kc}{True}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{H2} \PY{o}{==} \PY{k+kc}{True}\PY{p}{)}\PY{p}{)}\PY{p}{:}
\PY{n}{dharwad}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Temperature}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{\PYZgt{}}\PY{o}{=} \PY{n}{lowerT}\PY{p}{)}
\PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Temperature}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{n}{higherT}\PY{p}{)}
\PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Relative Humidity}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{\PYZgt{}}\PY{o}{=} \PY{n}{lowerH}\PY{p}{)}
\PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Relative Humidity}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{n}{higherH}\PY{p}{)}\PY{p}{,}
\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{d1\PYZus{}weather}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{x1} \PY{o}{=} \PY{n}{dharwad}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{d1\PYZus{}weather}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{source}\PY{p}{]}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S1} \PY{o}{=} \PY{n}{dharwad}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{d1\PYZus{}weather}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{source}\PY{p}{]}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n}{n1} \PY{o}{=} \PY{n}{dharwad}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{d1\PYZus{}weather}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{source}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{c+c1}{\PYZsh{}print(x1, S1, n1)}
\PY{n}{x2} \PY{o}{=} \PY{n}{dharwad}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{d1\PYZus{}weather}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{0}\PY{p}{,} \PY{n}{source}\PY{p}{]}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S2} \PY{o}{=} \PY{n}{dharwad}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{d1\PYZus{}weather}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{0}\PY{p}{,} \PY{n}{source}\PY{p}{]}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n}{n2} \PY{o}{=} \PY{n}{dharwad}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{dharwad}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{d1\PYZus{}weather}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{0}\PY{p}{,} \PY{n}{source}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{c+c1}{\PYZsh{}print(x2, S2, n2)}
\PY{k}{return}\PY{p}{(}\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{pairwiseT\PYZus{}test}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}\PY{p}{:}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is 2 Sample T test, with unknown population SD and the SD of the two are unequal}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{Su} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1} \PY{o}{+} \PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mf}{0.5}
\PY{c+c1}{\PYZsh{}print(Su)}
\PY{n}{df} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{math}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{n}{Su} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{4} \PY{o}{/} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n1} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)} \PY{o}{+} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n2} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(\PYZdq{}Degrees of freedom: \PYZob{}\PYZcb{}\PYZdq{}.format(df))}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{X1 (Mean of users checking Disease in fav condition) }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, SE }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, Len }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{X2 (Mean of users checking Disease in un\PYZhy{}fav condition) }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, SE }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, Len }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}\PY{p}{)}
\PY{n}{tstat} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{x1} \PY{o}{\PYZhy{}} \PY{n}{x2}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{0}\PY{p}{)} \PY{o}{/}\PY{p}{(}\PY{n}{Su}\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(tstat)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{The proportions x1 \PYZhy{} x2 \PYZlt{}= 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x1 \PYZhy{} x2 \PYZgt{} 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Null hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is a right sided T\PYZhy{}Test}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(\PYZdq{}alpha/ Significance: \PYZob{}\PYZcb{}\PYZdq{}.format(alpha))}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Significant t\PYZhy{}value at alpha \PYZhy{} }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ and df: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{df} \PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n}{tcrit} \PY{o}{=} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}
\PY{k}{if}\PY{p}{(}\PY{n}{tstat} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{n}{tcrit}\PY{p}{)}\PY{p}{:}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T Statistics:}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is less than T Critical: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ at alpha(}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{)}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{tstat}\PY{p}{,} \PY{n}{tcrit}\PY{p}{,} \PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Hence we can retain the NULL Hypothesis (ho)}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{k}{else}\PY{p}{:}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T Statistics:}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is gt than T Critical: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ at alpha(}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{)}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{tstat}\PY{p}{,} \PY{n}{tcrit}\PY{p}{,} \PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Hence we can reject the NULL Hypothesis (ho)}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}44}]:} \PY{n}{dharwad} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{read\PYZus{}excel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{./Jayalaxmi Agro Case Data .xlsx}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{sheet\PYZus{}name}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Dharwad\PYZus{}weather}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Analysing for Dharwad}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{source} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{D1}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{ignoreT} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{ignoreH} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{T2} \PY{o}{=} \PY{k+kc}{True}
\PY{n}{H2} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{lowerT} \PY{o}{=} \PY{l+m+mi}{20}
\PY{n}{higherT} \PY{o}{=} \PY{l+m+mi}{24}
\PY{n}{lowerH} \PY{o}{=} \PY{l+m+mi}{80}
\PY{n}{higherH} \PY{o}{=} \PY{k+kc}{None}
\PY{c+c1}{\PYZsh{}dharwad.head()}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.1}
\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2} \PY{o}{=} \PY{n}{prepareData}\PY{p}{(}\PY{n}{dharwad}\PY{p}{,} \PY{n}{source}\PY{p}{,} \PY{n}{ignoreT}\PY{p}{,}
\PY{n}{ignoreH}\PY{p}{,} \PY{n}{T2}\PY{p}{,} \PY{n}{H2}\PY{p}{,} \PY{n}{lowerT}\PY{p}{,} \PY{n}{higherT}\PY{p}{,} \PY{n}{lowerH}\PY{p}{,} \PY{n}{higherH}\PY{p}{)}
\PY{n}{pairwiseT\PYZus{}test}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}
\PY{n}{source} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{D2}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{ignoreT} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{ignoreH} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{T2} \PY{o}{=} \PY{k+kc}{True}
\PY{n}{H2} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{lowerT} \PY{o}{=} \PY{l+m+mf}{21.5}
\PY{n}{higherT} \PY{o}{=} \PY{l+m+mf}{24.5}
\PY{n}{lowerH} \PY{o}{=} \PY{l+m+mi}{83}
\PY{n}{higherH} \PY{o}{=} \PY{k+kc}{None}
\PY{c+c1}{\PYZsh{}dharwad.head()}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.1}
\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2} \PY{o}{=} \PY{n}{prepareData}\PY{p}{(}\PY{n}{dharwad}\PY{p}{,} \PY{n}{source}\PY{p}{,} \PY{n}{ignoreT}\PY{p}{,}
\PY{n}{ignoreH}\PY{p}{,} \PY{n}{T2}\PY{p}{,} \PY{n}{H2}\PY{p}{,} \PY{n}{lowerT}\PY{p}{,} \PY{n}{higherT}\PY{p}{,} \PY{n}{lowerH}\PY{p}{,} \PY{n}{higherH}\PY{p}{)}
\PY{n}{pairwiseT\PYZus{}test}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}
\PY{n}{source} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{D4}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{ignoreT} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{ignoreH} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{T2} \PY{o}{=} \PY{k+kc}{True}
\PY{n}{H2} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{lowerT} \PY{o}{=} \PY{l+m+mi}{22}
\PY{n}{higherT} \PY{o}{=} \PY{l+m+mi}{26}
\PY{n}{lowerH} \PY{o}{=} \PY{l+m+mi}{85}
\PY{n}{higherH} \PY{o}{=} \PY{k+kc}{None}
\PY{c+c1}{\PYZsh{}dharwad.head()}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.1}
\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2} \PY{o}{=} \PY{n}{prepareData}\PY{p}{(}\PY{n}{dharwad}\PY{p}{,} \PY{n}{source}\PY{p}{,} \PY{n}{ignoreT}\PY{p}{,}
\PY{n}{ignoreH}\PY{p}{,} \PY{n}{T2}\PY{p}{,} \PY{n}{H2}\PY{p}{,} \PY{n}{lowerT}\PY{p}{,} \PY{n}{higherT}\PY{p}{,} \PY{n}{lowerH}\PY{p}{,} \PY{n}{higherH}\PY{p}{)}
\PY{n}{pairwiseT\PYZus{}test}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}
\PY{n}{source} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{D3}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{ignoreT} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{ignoreH} \PY{o}{=} \PY{k+kc}{True}
\PY{n}{T2} \PY{o}{=} \PY{k+kc}{True}
\PY{n}{H2} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{lowerT} \PY{o}{=} \PY{l+m+mi}{22}
\PY{n}{higherT} \PY{o}{=} \PY{l+m+mi}{24}
\PY{n}{lowerH} \PY{o}{=} \PY{k+kc}{None}
\PY{n}{higherH} \PY{o}{=} \PY{k+kc}{None}
\PY{c+c1}{\PYZsh{}dharwad.head()}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.1}
\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2} \PY{o}{=} \PY{n}{prepareData}\PY{p}{(}\PY{n}{dharwad}\PY{p}{,} \PY{n}{source}\PY{p}{,} \PY{n}{ignoreT}\PY{p}{,}
\PY{n}{ignoreH}\PY{p}{,} \PY{n}{T2}\PY{p}{,} \PY{n}{H2}\PY{p}{,} \PY{n}{lowerT}\PY{p}{,} \PY{n}{higherT}\PY{p}{,} \PY{n}{lowerH}\PY{p}{,} \PY{n}{higherH}\PY{p}{)}
\PY{n}{pairwiseT\PYZus{}test}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}
\PY{n}{source} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{D5}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{ignoreT} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{ignoreH} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{T2} \PY{o}{=} \PY{k+kc}{True}
\PY{n}{H2} \PY{o}{=} \PY{k+kc}{True}
\PY{n}{lowerT} \PY{o}{=} \PY{l+m+mi}{22}
\PY{n}{higherT} \PY{o}{=} \PY{l+m+mf}{24.5}
\PY{n}{lowerH} \PY{o}{=} \PY{l+m+mi}{77}
\PY{n}{higherH} \PY{o}{=} \PY{l+m+mi}{85}
\PY{c+c1}{\PYZsh{}dharwad.head()}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.1}
\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2} \PY{o}{=} \PY{n}{prepareData}\PY{p}{(}\PY{n}{dharwad}\PY{p}{,} \PY{n}{source}\PY{p}{,} \PY{n}{ignoreT}\PY{p}{,}
\PY{n}{ignoreH}\PY{p}{,} \PY{n}{T2}\PY{p}{,} \PY{n}{H2}\PY{p}{,} \PY{n}{lowerT}\PY{p}{,} \PY{n}{higherT}\PY{p}{,} \PY{n}{lowerH}\PY{p}{,} \PY{n}{higherH}\PY{p}{)}
\PY{n}{pairwiseT\PYZus{}test}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}
\PY{n}{source} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{D7}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{ignoreT} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{ignoreH} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{T2} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{H2} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{lowerT} \PY{o}{=} \PY{l+m+mi}{25}
\PY{n}{higherT} \PY{o}{=} \PY{k+kc}{None}
\PY{n}{lowerH} \PY{o}{=} \PY{l+m+mi}{80}
\PY{n}{higherH} \PY{o}{=} \PY{k+kc}{None}
\PY{c+c1}{\PYZsh{}dharwad.head()}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.1}
\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2} \PY{o}{=} \PY{n}{prepareData}\PY{p}{(}\PY{n}{dharwad}\PY{p}{,} \PY{n}{source}\PY{p}{,} \PY{n}{ignoreT}\PY{p}{,}
\PY{n}{ignoreH}\PY{p}{,} \PY{n}{T2}\PY{p}{,} \PY{n}{H2}\PY{p}{,} \PY{n}{lowerT}\PY{p}{,} \PY{n}{higherT}\PY{p}{,} \PY{n}{lowerH}\PY{p}{,} \PY{n}{higherH}\PY{p}{)}
\PY{n}{pairwiseT\PYZus{}test}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Analysing for Dharwad
Disease type: D1
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
X1 (Mean of users checking Disease in fav condition) 31.590651412079982, SE 19.50886987470224, Len 7
X2 (Mean of users checking Disease in un-fav condition) 6.515126050420168, SE 6.330623182247867, Len 15
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
Significant t-value at alpha - 0.1 and df: 6 is : 1.4397557472577691
T Statistics:3.3200927651509353 is gt than T Critical: 1.4397557472577691 at alpha(0.1)
Hence we can reject the NULL Hypothesis (ho)
Disease type: D2
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
X1 (Mean of users checking Disease in fav condition) 40.134920634920626, SE 33.565536651346264, Len 5
X2 (Mean of users checking Disease in un-fav condition) 6.096486366382561, SE 7.463629544788599, Len 17
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
Significant t-value at alpha - 0.1 and df: 4 is : 1.5332062737131438
T Statistics:2.251261246997875 is gt than T Critical: 1.5332062737131438 at alpha(0.1)
Hence we can reject the NULL Hypothesis (ho)
Disease type: D4
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
X1 (Mean of users checking Disease in fav condition) 39.166666666666664, SE 45.468242885679125, Len 3
X2 (Mean of users checking Disease in un-fav condition) 12.108752272838958, SE 12.79395896519898, Len 19
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
Significant t-value at alpha - 0.1 and df: 2 is : 1.88561808316415
T Statistics:1.0243513460539664 is less than T Critical: 1.88561808316415 at alpha(0.1)
Hence we can retain the NULL Hypothesis (ho)
Disease type: D3
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
X1 (Mean of users checking Disease in fav condition) 40.269712430426715, SE 51.14705832340868, Len 14
X2 (Mean of users checking Disease in un-fav condition) 11.961659663865547, SE 16.825323356036495, Len 8
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
Significant t-value at alpha - 0.1 and df: 17 is : 1.3333793897216264
T Statistics:1.8988640725750916 is gt than T Critical: 1.3333793897216264 at alpha(0.1)
Hence we can reject the NULL Hypothesis (ho)
Disease type: D5
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
X1 (Mean of users checking Disease in fav condition) 14.177489177489177, SE 14.069264069264067, Len 4
X2 (Mean of users checking Disease in un-fav condition) 13.067252827056748, SE 19.183808487079045, Len 18
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
Significant t-value at alpha - 0.1 and df: 5 is : 1.475884048782027
T Statistics:0.13276358065762542 is less than T Critical: 1.475884048782027 at alpha(0.1)
Hence we can retain the NULL Hypothesis (ho)
Disease type: D7
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
X1 (Mean of users checking Disease in fav condition) 35.0, SE 21.213203435596427, Len 2
X2 (Mean of users checking Disease in un-fav condition) 19.908221925133688, SE 28.31825542990713, Len 20
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
Significant t-value at alpha - 0.1 and df: 1 is : 3.077683537207806
T Statistics:0.9269123652944348 is less than T Critical: 3.077683537207806 at alpha(0.1)
Hence we can retain the NULL Hypothesis (ho)
\end{Verbatim}
\textbf{For Diseases D1, D2, and D3 in Dharwad, the claim that more people
access the disease information during favourable periods holds true. For
the rest of the Diseases it does not.}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}45}]:} \PY{n}{dharwad} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{read\PYZus{}excel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{./Jayalaxmi Agro Case Data .xlsx}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{sheet\PYZus{}name}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Belagavi\PYZus{}weather}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Analysing for Belgavi}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{source} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{D1}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{ignoreT} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{ignoreH} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{T2} \PY{o}{=} \PY{k+kc}{True}
\PY{n}{H2} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{lowerT} \PY{o}{=} \PY{l+m+mi}{20}
\PY{n}{higherT} \PY{o}{=} \PY{l+m+mi}{24}
\PY{n}{lowerH} \PY{o}{=} \PY{l+m+mi}{80}
\PY{n}{higherH} \PY{o}{=} \PY{k+kc}{None}
\PY{c+c1}{\PYZsh{}dharwad.head()}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.1}
\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2} \PY{o}{=} \PY{n}{prepareData}\PY{p}{(}\PY{n}{dharwad}\PY{p}{,} \PY{n}{source}\PY{p}{,} \PY{n}{ignoreT}\PY{p}{,}
\PY{n}{ignoreH}\PY{p}{,} \PY{n}{T2}\PY{p}{,} \PY{n}{H2}\PY{p}{,} \PY{n}{lowerT}\PY{p}{,} \PY{n}{higherT}\PY{p}{,} \PY{n}{lowerH}\PY{p}{,} \PY{n}{higherH}\PY{p}{)}
\PY{n}{pairwiseT\PYZus{}test}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}
\PY{n}{source} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{D2}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{ignoreT} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{ignoreH} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{T2} \PY{o}{=} \PY{k+kc}{True}
\PY{n}{H2} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{lowerT} \PY{o}{=} \PY{l+m+mf}{21.5}
\PY{n}{higherT} \PY{o}{=} \PY{l+m+mf}{24.5}
\PY{n}{lowerH} \PY{o}{=} \PY{l+m+mi}{83}
\PY{n}{higherH} \PY{o}{=} \PY{k+kc}{None}
\PY{c+c1}{\PYZsh{}dharwad.head()}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.1}
\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2} \PY{o}{=} \PY{n}{prepareData}\PY{p}{(}\PY{n}{dharwad}\PY{p}{,} \PY{n}{source}\PY{p}{,} \PY{n}{ignoreT}\PY{p}{,}
\PY{n}{ignoreH}\PY{p}{,} \PY{n}{T2}\PY{p}{,} \PY{n}{H2}\PY{p}{,} \PY{n}{lowerT}\PY{p}{,} \PY{n}{higherT}\PY{p}{,} \PY{n}{lowerH}\PY{p}{,} \PY{n}{higherH}\PY{p}{)}
\PY{n}{pairwiseT\PYZus{}test}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}
\PY{n}{source} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{D4}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{ignoreT} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{ignoreH} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{T2} \PY{o}{=} \PY{k+kc}{True}
\PY{n}{H2} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{lowerT} \PY{o}{=} \PY{l+m+mi}{22}
\PY{n}{higherT} \PY{o}{=} \PY{l+m+mi}{26}
\PY{n}{lowerH} \PY{o}{=} \PY{l+m+mi}{85}
\PY{n}{higherH} \PY{o}{=} \PY{k+kc}{None}
\PY{c+c1}{\PYZsh{}dharwad.head()}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.1}
\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2} \PY{o}{=} \PY{n}{prepareData}\PY{p}{(}\PY{n}{dharwad}\PY{p}{,} \PY{n}{source}\PY{p}{,} \PY{n}{ignoreT}\PY{p}{,}
\PY{n}{ignoreH}\PY{p}{,} \PY{n}{T2}\PY{p}{,} \PY{n}{H2}\PY{p}{,} \PY{n}{lowerT}\PY{p}{,} \PY{n}{higherT}\PY{p}{,} \PY{n}{lowerH}\PY{p}{,} \PY{n}{higherH}\PY{p}{)}
\PY{n}{pairwiseT\PYZus{}test}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}
\PY{n}{source} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{D3}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{ignoreT} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{ignoreH} \PY{o}{=} \PY{k+kc}{True}
\PY{n}{T2} \PY{o}{=} \PY{k+kc}{True}
\PY{n}{H2} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{lowerT} \PY{o}{=} \PY{l+m+mi}{22}
\PY{n}{higherT} \PY{o}{=} \PY{l+m+mi}{24}
\PY{n}{lowerH} \PY{o}{=} \PY{k+kc}{None}
\PY{n}{higherH} \PY{o}{=} \PY{k+kc}{None}
\PY{c+c1}{\PYZsh{}dharwad.head()}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.1}
\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2} \PY{o}{=} \PY{n}{prepareData}\PY{p}{(}\PY{n}{dharwad}\PY{p}{,} \PY{n}{source}\PY{p}{,} \PY{n}{ignoreT}\PY{p}{,}
\PY{n}{ignoreH}\PY{p}{,} \PY{n}{T2}\PY{p}{,} \PY{n}{H2}\PY{p}{,} \PY{n}{lowerT}\PY{p}{,} \PY{n}{higherT}\PY{p}{,} \PY{n}{lowerH}\PY{p}{,} \PY{n}{higherH}\PY{p}{)}
\PY{n}{pairwiseT\PYZus{}test}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}
\PY{n}{source} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{D5}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{ignoreT} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{ignoreH} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{T2} \PY{o}{=} \PY{k+kc}{True}
\PY{n}{H2} \PY{o}{=} \PY{k+kc}{True}
\PY{n}{lowerT} \PY{o}{=} \PY{l+m+mi}{22}
\PY{n}{higherT} \PY{o}{=} \PY{l+m+mf}{24.5}
\PY{n}{lowerH} \PY{o}{=} \PY{l+m+mi}{77}
\PY{n}{higherH} \PY{o}{=} \PY{l+m+mi}{85}
\PY{c+c1}{\PYZsh{}dharwad.head()}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.1}
\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2} \PY{o}{=} \PY{n}{prepareData}\PY{p}{(}\PY{n}{dharwad}\PY{p}{,} \PY{n}{source}\PY{p}{,} \PY{n}{ignoreT}\PY{p}{,}
\PY{n}{ignoreH}\PY{p}{,} \PY{n}{T2}\PY{p}{,} \PY{n}{H2}\PY{p}{,} \PY{n}{lowerT}\PY{p}{,} \PY{n}{higherT}\PY{p}{,} \PY{n}{lowerH}\PY{p}{,} \PY{n}{higherH}\PY{p}{)}
\PY{n}{pairwiseT\PYZus{}test}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}
\PY{n}{source} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{D7}\PY{l+s+s1}{\PYZsq{}}
\PY{n}{ignoreT} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{ignoreH} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{T2} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{H2} \PY{o}{=} \PY{k+kc}{False}
\PY{n}{lowerT} \PY{o}{=} \PY{l+m+mi}{25}
\PY{n}{higherT} \PY{o}{=} \PY{k+kc}{None}
\PY{n}{lowerH} \PY{o}{=} \PY{l+m+mi}{80}
\PY{n}{higherH} \PY{o}{=} \PY{k+kc}{None}
\PY{c+c1}{\PYZsh{}dharwad.head()}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.1}
\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2} \PY{o}{=} \PY{n}{prepareData}\PY{p}{(}\PY{n}{dharwad}\PY{p}{,} \PY{n}{source}\PY{p}{,} \PY{n}{ignoreT}\PY{p}{,}
\PY{n}{ignoreH}\PY{p}{,} \PY{n}{T2}\PY{p}{,} \PY{n}{H2}\PY{p}{,} \PY{n}{lowerT}\PY{p}{,} \PY{n}{higherT}\PY{p}{,} \PY{n}{lowerH}\PY{p}{,} \PY{n}{higherH}\PY{p}{)}
\PY{n}{pairwiseT\PYZus{}test}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Analysing for Belgavi
Disease type: D1
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
X1 (Mean of users checking Disease in fav condition) 37.593051847437806, SE 32.135791146151945, Len 9
X2 (Mean of users checking Disease in un-fav condition) 11.916694121951826, SE 13.214657338143653, Len 15
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
Significant t-value at alpha - 0.1 and df: 9 is : 1.3830287383964925
T Statistics:2.2839245979002736 is gt than T Critical: 1.3830287383964925 at alpha(0.1)
Hence we can reject the NULL Hypothesis (ho)
Disease type: D2
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
X1 (Mean of users checking Disease in fav condition) 29.380223212460052, SE 14.257675698003663, Len 8
X2 (Mean of users checking Disease in un-fav condition) 9.173547458456326, SE 11.634011544592797, Len 16
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
Significant t-value at alpha - 0.1 and df: 11 is : 1.3634303180205214
T Statistics:3.4720833038193164 is gt than T Critical: 1.3634303180205214 at alpha(0.1)
Hence we can reject the NULL Hypothesis (ho)
Disease type: D4
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
X1 (Mean of users checking Disease in fav condition) 24.28983784246942, SE 17.12396482492092, Len 5
X2 (Mean of users checking Disease in un-fav condition) 12.973835703960896, SE 11.293745993618758, Len 19
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
Significant t-value at alpha - 0.1 and df: 4 is : 1.5332062737131438
T Statistics:1.3997159471060217 is less than T Critical: 1.5332062737131438 at alpha(0.1)
Hence we can retain the NULL Hypothesis (ho)
Disease type: D3
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
X1 (Mean of users checking Disease in fav condition) 30.957728757809885, SE 24.56035201007001, Len 13
X2 (Mean of users checking Disease in un-fav condition) 11.61232644458533, SE 16.414430640928842, Len 11
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
Significant t-value at alpha - 0.1 and df: 20 is : 1.325340706985046
T Statistics:2.297579720814241 is gt than T Critical: 1.325340706985046 at alpha(0.1)
Hence we can reject the NULL Hypothesis (ho)
Disease type: D5
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
X1 (Mean of users checking Disease in fav condition) 36.574074074074076, SE 18.326406324471435, Len 4
X2 (Mean of users checking Disease in un-fav condition) 10.515473835553788, SE 11.909006454368914, Len 20
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
Significant t-value at alpha - 0.1 and df: 3 is : 1.6377443572159065
T Statistics:2.7308507082037217 is gt than T Critical: 1.6377443572159065 at alpha(0.1)
Hence we can reject the NULL Hypothesis (ho)
Disease type: D7
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
X1 (Mean of users checking Disease in fav condition) 72.42328042328042, SE 15.394217855891272, Len 3
X2 (Mean of users checking Disease in un-fav condition) 21.006418135849568, SE 25.02226505813728, Len 21
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
Significant t-value at alpha - 0.1 and df: 3 is : 1.6377443572159065
T Statistics:4.929164561043593 is gt than T Critical: 1.6377443572159065 at alpha(0.1)
Hence we can reject the NULL Hypothesis (ho)
\end{Verbatim}
\textbf{For Diseases D1, D2, D3, D5, and D7 in Belgavi, the claim that more
people access the disease information during favourable periods holds
true. For the rest of the Diseases it does not.}
\subsection{Q5 - 8}\label{q5---8}
\textbf{Conclusions:}
\begin{itemize}
\tightlist
\item
  Farmers in Belgavi are more aware of Diseases D5 and D7 and of the
  impact of weather on disease. They seem to check for diseases during
  favourable weather conditions more than Dharwad farmers do
\item
  Farmers are not checking for Disease D4. They may need to be made more
  aware of D4 and of the weather conditions favourable to it
\end{itemize}
\section{Q6}\label{q6}
\subsection{Q6 - 1}\label{q6---1}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}46}]:} \PY{n}{deandata} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{read\PYZus{}excel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{./IMB 485 Deans Dilemma Data.xlsx}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{deandata}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}46}]:} SlNo Gender Gender-B Percent\_SSC Board\_SSC Board\_CBSE Board\_ICSE \textbackslash{}
0 1 M 0 62.00 Others 0 0
1 2 M 0 76.33 ICSE 0 1
2 3 M 0 72.00 Others 0 0
3 4 M 0 60.00 CBSE 1 0
4 5 M 0 61.00 CBSE 1 0
Percent\_HSC Board\_HSC Stream\_HSC {\ldots} Percentile\_ET Percent\_MBA \textbackslash{}
0 88.00 Others Commerce {\ldots} 55.0 58.80
1 75.33 Others Science {\ldots} 86.5 66.28
2 78.00 Others Commerce {\ldots} 0.0 52.91
3 63.00 CBSE Arts {\ldots} 75.0 57.80
4 55.00 ISC Science {\ldots} 66.0 59.43
S-TEST*SCORE Specialization\_MBA Marks\_Communication Marks\_Projectwork \textbackslash{}
0 55.0 Marketing \& HR 50 65
1 86.5 Marketing \& Finance 69 70
2 0.0 Marketing \& Finance 50 61
3 75.0 Marketing \& Finance 54 66
4 66.0 Marketing \& HR 52 65
Marks\_BOCA Placement Placement\_B Salary
0 74 Placed 1 270000
1 75 Placed 1 200000
2 59 Placed 1 240000
3 62 Placed 1 250000
4 67 Placed 1 180000
[5 rows x 26 columns]
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}47}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize} \PY{o}{=} \PY{p}{(}\PY{l+m+mi}{10}\PY{p}{,} \PY{l+m+mi}{20}\PY{p}{)}\PY{p}{)}
\PY{k}{for} \PY{n}{i}\PY{p}{,} \PY{n}{source} \PY{o+ow}{in} \PY{n+nb}{enumerate}\PY{p}{(}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}MBA}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}
\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Marks\PYZus{}Communication}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Marks\PYZus{}Projectwork}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{p}{:}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{6}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{i} \PY{o}{+} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{boxplot}\PY{p}{(}\PY{n}{x} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{y}\PY{o}{=}\PY{n}{source}\PY{p}{,} \PY{n}{data}\PY{o}{=} \PY{n}{deandata}\PY{p}{)}
\PY{c+c1}{\PYZsh{} Label the plots}
\PY{n}{plt}\PY{o}{.}\PY{n}{title}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Boxplot of }\PY{l+s+si}{\PYZpc{}s}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{n}{source}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{tight\PYZus{}layout}\PY{p}{(}\PY{n}{h\PYZus{}pad} \PY{o}{=} \PY{l+m+mf}{2.5}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_93_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}48}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{15}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{barplot}\PY{p}{(}\PY{n}{x} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Course\PYZus{}Degree}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{y}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{data} \PY{o}{=} \PY{n}{deandata}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}48}]:} <matplotlib.axes.\_subplots.AxesSubplot at 0x7fee82dd2048>
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_94_1.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}49}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{10}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{barplot}\PY{p}{(}\PY{n}{x} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Specialization\PYZus{}MBA}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{y}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{data} \PY{o}{=} \PY{n}{deandata}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}49}]:} <matplotlib.axes.\_subplots.AxesSubplot at 0x7fee833b84a8>
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_95_1.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}50}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{10}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{barplot}\PY{p}{(}\PY{n}{x} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Gender}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{y}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{data} \PY{o}{=} \PY{n}{deandata}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}50}]:} <matplotlib.axes.\_subplots.AxesSubplot at 0x7fee831b6978>
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_96_1.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}51}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{10}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{countplot}\PY{p}{(}\PY{n}{x} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Gender}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{data} \PY{o}{=} \PY{n}{deandata}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}51}]:} <matplotlib.axes.\_subplots.AxesSubplot at 0x7fee833f6f60>
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_97_1.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{itemize}
\tightlist
\item
  The median SSC score of students with placement is greater than that of
  students without placement
\item
  Students who perform better in project work seem to be placed more often
\item
  There are significantly more Male students than Female students
\item
  Marketing students seem to be doing worse than others in terms of
  placements
\item
  In percentage terms, Science students get better placements than others
\item
  The percentage of Male students getting placed is greater than the
  percentage of Female students getting placed
\end{itemize}
\subsection{Q6 - 2}\label{q6---2}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}52}]:} \PY{n}{p} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{Placement\PYZus{}B}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)}\PY{o}{/}\PY{n}{deandata}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Proportion of students not placed: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{p}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Prob = comb(20,i) * (p) ** i * (1\PYZhy{}p) ** (20 \PYZhy{} i)}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{k+kn}{from} \PY{n+nn}{scipy}\PY{n+nn}{.}\PY{n+nn}{special} \PY{k}{import} \PY{n}{comb}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Exactly 5 out of randomly selected 20 Students is: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{comb}\PY{p}{(}\PY{l+m+mi}{20}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{)} \PY{o}{*} \PY{p}{(}\PY{l+m+mi}{1}\PY{o}{\PYZhy{}}\PY{n}{p}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{5} \PY{o}{*} \PY{n}{p} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{15}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{x} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n}{np}\PY{o}{.}\PY{n}{arange}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{6}\PY{p}{)}\PY{p}{:}
\PY{n}{x}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n+nb}{round}\PY{p}{(}\PY{n}{comb}\PY{p}{(}\PY{l+m+mi}{20}\PY{p}{,}\PY{n}{i}\PY{p}{)} \PY{o}{*} \PY{p}{(}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{\PYZhy{}}\PY{n}{p}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{n}{i}\PY{p}{)} \PY{o}{*} \PY{p}{(}\PY{n}{p} \PY{o}{*}\PY{o}{*} \PY{p}{(}\PY{l+m+mi}{20} \PY{o}{\PYZhy{}} \PY{n}{i}\PY{p}{)}\PY{p}{)}\PY{p}{,}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Prob of 0, 1, 2, 3, 4,5 students not been selected is (P): }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{x}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{At least 5 out of randomly selected 20 Students is (1 \PYZhy{} P): }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{np}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{n}{x}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Proportion of students not placed: 0.20204603580562663
Prob = comb(20,i) * (p) ** i * (1-p) ** (20 - i)
Exactly 5 out of randomly selected 20 Students is: 0.17675144725366596
Prob of 0, 1, 2, 3, 4,5 students not been selected is (P): [0.011, 0.0555, 0.1334, 0.2027, 0.2181, 0.1768]
At least 5 out of randomly selected 20 Students is (1 - P): 0.20250000000000012
\end{Verbatim}
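The same binomial probabilities can also be read off scipy's binomial
distribution; a minimal sketch, where \texttt{q} denotes the proportion of
students not placed computed above:
\begin{verbatim}
# Hypothetical sketch: binomial probabilities via scipy.
# `q` is the proportion of students not placed (about 0.202 above).
from scipy import stats

p_exactly_5 = stats.binom.pmf(5, n=20, p=q)   # exactly 5 of 20 not placed
p_at_least_5 = stats.binom.sf(4, n=20, p=q)   # P(X >= 5) = 1 - P(X <= 4)
\end{verbatim}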
\subsection{Q6 - 3 - a}\label{q6---3---a}
The optimal number of bins for the histogram is computed using Sturges'
rule (this will be used later in the solution): \textbf{N = 1 + 3.322 *
log10(n)}, where n = number of observations.
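A minimal sketch of this rule in Python, using the dataset loaded above:
\begin{verbatim}
# Hypothetical sketch: Sturges' rule for the number of histogram bins.
import math

n = deandata.shape[0]                         # number of observations
bins = math.floor(1 + 3.322 * math.log10(n))  # 9 bins, matching the R computation below
\end{verbatim}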
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}53}]:} \PY{n}{sslc} \PY{o}{=} \PY{n}{deandata}\PY{p}{[}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{]}
\PY{c+c1}{\PYZsh{}print(sslc.shape[0])}
\PY{n}{mu} \PY{o}{=} \PY{n}{deandata}\PY{p}{[}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{]}\PY{o}{.}\PY{n}{values}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{sigma} \PY{o}{=} \PY{n}{deandata}\PY{p}{[}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{]}\PY{o}{.}\PY{n}{values}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n}{x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linspace}\PY{p}{(}\PY{n}{mu} \PY{o}{\PYZhy{}} \PY{l+m+mi}{3}\PY{o}{*}\PY{n}{sigma}\PY{p}{,} \PY{n}{mu} \PY{o}{+} \PY{l+m+mi}{3}\PY{o}{*}\PY{n}{sigma}\PY{p}{,} \PY{l+m+mi}{100}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{x}\PY{p}{,}\PY{n}{mlab}\PY{o}{.}\PY{n}{normpdf}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{mu}\PY{p}{,} \PY{n}{sigma}\PY{p}{)}\PY{p}{,} \PY{n}{color} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{red}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{distplot}\PY{p}{(}\PY{n}{sslc}\PY{p}{,} \PY{n}{bins}\PY{o}{=}\PY{l+m+mi}{9}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)}
\PY{n}{pp\PYZus{}x} \PY{o}{=} \PY{n}{sm}\PY{o}{.}\PY{n}{ProbPlot}\PY{p}{(}\PY{n}{sslc}\PY{o}{.}\PY{n}{values}\PY{o}{.}\PY{n}{reshape}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{fit}\PY{o}{=}\PY{k+kc}{True}\PY{p}{)}
\PY{n}{pp\PYZus{}x}\PY{o}{.}\PY{n}{ppplot}\PY{p}{(}\PY{n}{line}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{45}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_102_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_102_1.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}54}]:} \PY{o}{\PYZpc{}\PYZpc{}}R \PY{o}{\PYZhy{}}i sslc
\PY{k+kn}{library}\PY{p}{(}data.table\PY{p}{)}
\PY{k+kp}{options}\PY{p}{(}warn\PY{o}{=}\PY{l+m}{\PYZhy{}1}\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}\PY{k+kp}{paste}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{ho = The distributions are same (normal)\PYZdq{}}\PY{p}{)}\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}\PY{k+kp}{paste}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{ha = The distributions are dissimilar\PYZdq{}}\PY{p}{)}\PY{p}{)}
propUndVot \PY{o}{=} \PY{k+kp}{unlist}\PY{p}{(}sslc\PY{o}{\PYZdl{}}Percent\PYZus{}SSC\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(propUndVot[propUndVot \PYZgt{}=76 \PYZam{} propUndVot \PYZlt{}81.6])}
Xmin \PY{o}{=} \PY{k+kp}{min}\PY{p}{(}propUndVot\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}\PY{k+kp}{paste}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{Min Range:\PYZdq{}}\PY{p}{,} Xmin\PY{p}{)}\PY{p}{)}
Xmax \PY{o}{=} \PY{k+kp}{max}\PY{p}{(}propUndVot\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}\PY{k+kp}{paste}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{Max Range:\PYZdq{}}\PY{p}{,} Xmax\PY{p}{)}\PY{p}{)}
N \PY{o}{=} \PY{l+m}{1} \PY{o}{+} \PY{l+m}{3.3}\PY{o}{*}\PY{k+kp}{log10}\PY{p}{(}\PY{k+kp}{length}\PY{p}{(}propUndVot\PY{p}{)}\PY{p}{)} \PY{c+c1}{\PYZsh{} Number of bins in the range}
N \PY{o}{=} \PY{k+kp}{floor}\PY{p}{(}N\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}\PY{k+kp}{paste}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{Total Num Bins:\PYZdq{}}\PY{p}{,} \PY{k+kp}{floor}\PY{p}{(}N\PY{p}{)}\PY{p}{)}\PY{p}{)}
obsdistribution \PY{o}{=} as.data.table\PY{p}{(}\PY{k+kp}{table}\PY{p}{(}\PY{k+kp}{cut}\PY{p}{(}propUndVot\PY{p}{,} breaks \PY{o}{=} \PY{k+kp}{seq}\PY{p}{(}\PY{l+m}{37}\PY{p}{,} \PY{l+m}{87.2}\PY{p}{,} by\PY{o}{=}\PY{p}{(}\PY{p}{(}\PY{l+m}{87.2} \PY{o}{\PYZhy{}} \PY{l+m}{37}\PY{p}{)}\PY{o}{/}N\PY{p}{)}\PY{p}{)}\PY{p}{)}\PY{p}{)}\PY{p}{)}\PY{c+c1}{\PYZsh{}breaks = N)))}
\PY{k+kp}{print}\PY{p}{(}obsdistribution\PY{p}{)}
cutpoint \PY{o}{=} \PY{k+kp}{unique}\PY{p}{(}\PY{k+kt}{c}\PY{p}{(}\PY{k+kp}{as.numeric}\PY{p}{(} \PY{k+kp}{sub}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{\PYZbs{}\PYZbs{}((.+),.*\PYZdq{}}\PY{p}{,} \PY{l+s}{\PYZdq{}}\PY{l+s}{\PYZbs{}\PYZbs{}1\PYZdq{}}\PY{p}{,} obsdistribution\PY{o}{\PYZdl{}}V1\PY{p}{)} \PY{p}{)}\PY{p}{,}
\PY{k+kp}{as.numeric}\PY{p}{(} \PY{k+kp}{sub}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{[\PYZca{},]*,([\PYZca{}]]*)\PYZbs{}\PYZbs{}]\PYZdq{}}\PY{p}{,} \PY{l+s}{\PYZdq{}}\PY{l+s}{\PYZbs{}\PYZbs{}1\PYZdq{}}\PY{p}{,} obsdistribution\PY{o}{\PYZdl{}}V1\PY{p}{)} \PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(cutpoint)}
meandist \PY{o}{=} \PY{k+kp}{mean}\PY{p}{(}propUndVot\PY{p}{)}
std \PY{o}{=} sd\PY{p}{(}propUndVot\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(meandist)}
\PY{c+c1}{\PYZsh{}print(std)}
normaldist \PY{o}{=} pnorm\PY{p}{(}cutpoint\PY{p}{,} meandist \PY{p}{,} std\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(normaldist)}
probval \PY{o}{=} \PY{k+kt}{c}\PY{p}{(}\PY{p}{)}
\PY{k+kr}{for}\PY{p}{(}i \PY{k+kr}{in} \PY{k+kp}{seq}\PY{p}{(}\PY{l+m}{1}\PY{o}{:}\PY{k+kp}{length}\PY{p}{(}normaldist\PY{p}{)}\PY{l+m}{\PYZhy{}1}\PY{p}{)}\PY{p}{)}\PY{p}{\PYZob{}}
probval \PY{o}{=} \PY{k+kt}{c}\PY{p}{(}probval\PY{p}{,} normaldist\PY{p}{[}i\PY{l+m}{+1}\PY{p}{]} \PY{o}{\PYZhy{}} normaldist\PY{p}{[}i\PY{p}{]}\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(normaldist[i+1])}
\PY{c+c1}{\PYZsh{}print(normaldist[i])}
\PY{p}{\PYZcb{}}
\PY{c+c1}{\PYZsh{}print(probval)}
normfreq \PY{o}{=} probval \PY{o}{*} \PY{k+kp}{length}\PY{p}{(}propUndVot\PY{p}{)}
obsdistribution\PY{o}{\PYZdl{}}ExpectedNorm \PY{o}{=} \PY{k+kp}{as.integer}\PY{p}{(}normfreq\PY{p}{[}\PY{l+m}{1}\PY{o}{:}\PY{l+m}{9}\PY{p}{]}\PY{p}{)}
obsdistribution\PY{o}{\PYZdl{}}ExpectedNormDev \PY{o}{=} \PY{p}{(}obsdistribution\PY{o}{\PYZdl{}}N \PY{o}{\PYZhy{}} obsdistribution\PY{o}{\PYZdl{}}ExpectedNorm\PY{p}{)}\PY{o}{\PYZca{}}\PY{l+m}{2}\PY{o}{/}
\PY{k+kp}{ifelse}\PY{p}{(}obsdistribution\PY{o}{\PYZdl{}}ExpectedNorm\PY{o}{==}\PY{l+m}{0}\PY{p}{,} \PY{l+m}{0}\PY{p}{,}
obsdistribution\PY{o}{\PYZdl{}}ExpectedNorm\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}obsdistribution\PY{p}{)}
obsdistribution\PY{o}{\PYZdl{}}ExpectedNormDev\PY{p}{[}\PY{k+kp}{is.infinite}\PY{p}{(}obsdistribution\PY{o}{\PYZdl{}}ExpectedNormDev\PY{p}{)}\PY{p}{]} \PY{o}{=} \PY{l+m}{0}
\PY{c+c1}{\PYZsh{}print(sum(obsdistribution\PYZdl{}ExpectedNormDev))}
\PY{k+kp}{print}\PY{p}{(}\PY{k+kp}{paste0}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{Chisq:\PYZdq{}}\PY{p}{,}\PY{k+kp}{sum}\PY{p}{(}obsdistribution\PY{o}{\PYZdl{}}ExpectedNormDev\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}\PY{k+kp}{paste}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{Distribution is Normal \PYZhy{} Chisq Critical = 15.5, DF = \PYZdq{}}\PY{p}{,}\PY{p}{(}N\PY{l+m}{\PYZhy{}1}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{We retain the Null Hypothesis\PYZdq{}}\PY{p}{)}
\PY{c+c1}{\PYZsh{}install.packages(\PYZdq{}fitdistrplus\PYZdq{})}
\PY{c+c1}{\PYZsh{}library(fitdistrplus)}
\PY{c+c1}{\PYZsh{}x = fitdist(propUndVot, \PYZdq{}norm\PYZdq{})}
\PY{c+c1}{\PYZsh{}summary(x)}
\end{Verbatim}
\begin{verbatim}
[1] "ho = The distibutions are same (normal)"
[1] "ha = The distibutions are dissimilar"
[1] "Min Range: 37"
[1] "Max Range: 87.2"
[1] "Total Num Bins: 9"
V1 N
1: (37,42.6] 6
2: (42.6,48.2] 20
3: (48.2,53.7] 35
4: (53.7,59.3] 73
5: (59.3,64.9] 64
6: (64.9,70.5] 72
7: (70.5,76] 50
8: (76,81.6] 47
9: (81.6,87.2] 23
V1 N ExpectedNorm ExpectedNormDev
1: (37,42.6] 6 6 0.00000000
2: (42.6,48.2] 20 17 0.52941176
3: (48.2,53.7] 35 36 0.02777778
4: (53.7,59.3] 73 60 2.81666667
5: (59.3,64.9] 64 76 1.89473684
6: (64.9,70.5] 72 75 0.12000000
7: (70.5,76] 50 57 0.85964912
8: (76,81.6] 47 34 4.97058824
9: (81.6,87.2] 23 16 3.06250000
[1] "Chisq:14.2813304093567"
[1] "Distribution is Normal - Chisq Critical = 15.5, DF = 8"
[1] "We retain the Null Hypothesis"
\end{verbatim}
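The chi-square critical value quoted above can be reproduced in Python; a
minimal sketch:
\begin{verbatim}
# Hypothetical sketch: critical value for the chi-square goodness-of-fit test.
from scipy import stats

df = 9 - 1                               # number of bins minus one
chi2_crit = stats.chi2.ppf(0.95, df=df)  # about 15.51 at alpha = 0.05
\end{verbatim}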
\subsection{Q6 - 3 - b}\label{q6---3---b}
First, a solution comparing the salaries of placed students with SSC marks
exactly equal to 60 against those with marks greater than 60. Since the
problem is not clearly defined, both options are given.
We will use a 2-sample T-test (with unknown and unequal SDs) for this
hypothesis test of the difference in means.
T-statistic equation:
\begin{equation*} T = \frac{(\bar x_1 - \bar x_2) - (\mu _1 - \mu _2)} {\sqrt {\frac {S_1^2}{n_1} + \frac {S_2^2} {n_2}}} \end{equation*}
DF equation :
\begin{equation*} df = \left \lfloor{\frac{S_u ^ 4} {\frac{( \frac{S_1 ^ 2}{n_1} )^2}{n_1 - 1} + {\frac{( \frac{S_2 ^ 2}{n_2} )^2}{n_2 - 1}}}} \right \rfloor\end{equation*}
SD (Su):
\begin{equation*} S_u = \sqrt{\frac{S_1 ^ 2}{n_1} + \frac{S_2 ^ 2}{n_2}} \end{equation*}
S1, S2 are the sample SDs; n1, n2 are the sample sizes
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}55}]:} \PY{n}{x1} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{60}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S1} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{60}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n}{n1} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{60}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{X1 = Salary Stats for Percent == 60:: Mean: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, STD: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, Len: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{)}\PY{p}{)}
\PY{n}{x2} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{l+m+mi}{60}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S2} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{l+m+mi}{60}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n}{n2} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{l+m+mi}{60}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{X2 = Salary Stats for Percent \PYZgt{} 60:: Mean: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, STD: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, Len: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}\PY{p}{)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{The proportions x1 \PYZhy{} x2 \PYZlt{}= 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x1 \PYZhy{} x2 \PYZgt{} 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Null hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is a right sided T\PYZhy{}Test}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is 2 Sample T test, with unknown population SD and the SD of the two are unequal}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{Su} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1} \PY{o}{+} \PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mf}{0.5}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{SE }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{Su}\PY{p}{)}\PY{p}{)}
\PY{n}{df} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{math}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{n}{Su} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{4} \PY{o}{/} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n1} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)} \PY{o}{+} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n2} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{DF }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{df}\PY{p}{)}\PY{p}{)}
\PY{n}{tstat} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{x1} \PY{o}{\PYZhy{}} \PY{n}{x2}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{0}\PY{p}{)} \PY{o}{/}\PY{p}{(}\PY{n}{Su}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T\PYZhy{}stat }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{tstat}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{alpha/ Significance: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Significant t\PYZhy{}value at alpha \PYZhy{} }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha} \PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*}\PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{p\PYZhy{}value:}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is greater than alpha(}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{)}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{n}{tstat}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{,} \PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Hence we can retain the NULL Hypothesis (ho)}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
X1 = Salary Stats for Percent == 60:: Mean: 276230.76923076925, STD: 127706.14305646763, Len: 13
X2 = Salary Stats for Percent > 60:: Mean: 279859.40594059404, STD: 80594.30269880143, Len: 202
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
SE 35870.36750625153
DF 12
T-stat -0.1011597304987852
alpha/ Significance: 0.1
Significant t-value at alpha - 0.1 is : 1.3562173340231973
p-value:0.5394528767472087 is greater than alpha(0.1)
Hence we can retain the NULL Hypothesis (ho)
\end{Verbatim}
\subsection{Q6 - 3 - b}\label{q6---3---b}
Alternate solution comparing SSC marks \textless{}= 60 against marks
greater than 60. Since the problem statement does not define the cut-off
precisely, both options are shown. The 2-sample T-test equations remain
the same as above.
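As a quick cross-check of the manual Welch computation in the next cell, the same comparison can be run with \texttt{scipy}'s built-in test; this is a minimal sketch, assuming \texttt{deandata} and the column names used throughout this notebook (\texttt{sp} is \texttt{scipy} as imported earlier).

\begin{verbatim}
# Hedged sketch: cross-check of the manual Welch T-test below with scipy.
# Assumes deandata with columns 'Percent_SSC', 'Placement_B', 'Salary'
# and that sp (scipy) and its stats module are already imported.
placed = deandata['Placement_B'] == 1
low  = deandata.loc[(deandata['Percent_SSC'] <= 60) & placed, 'Salary']
high = deandata.loc[(deandata['Percent_SSC'] >  60) & placed, 'Salary']

# equal_var=False selects Welch's T-test (unequal variances)
tstat, p_two_sided = sp.stats.ttest_ind(low, high, equal_var=False)
# Convert to a right-sided p-value for ha: x1 - x2 > 0
p_right = p_two_sided / 2 if tstat > 0 else 1 - p_two_sided / 2
print("Welch t = {:.4f}, right-sided p = {:.4f}".format(tstat, p_right))
\end{verbatim}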
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}56}]:} \PY{n}{x1} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{l+m+mi}{60}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S1} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{l+m+mi}{60}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n}{n1} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{l+m+mi}{60}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{X1 = Salary Stats for Percent \PYZlt{}= 60:: Mean: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, STD: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, Len: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{)}\PY{p}{)}
\PY{n}{x2} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{l+m+mi}{60}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S2} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{l+m+mi}{60}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n}{n2} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}SSC}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{l+m+mi}{60}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{X2 = Salary Stats for Percent \PYZgt{} 60:: Mean: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, STD: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, Len: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is 2 Sample T test, with unknown population SD and the SD of the two are unequal}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{The proportions x1 \PYZhy{} x2 \PYZlt{}= 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x1 \PYZhy{} x2 \PYZgt{} 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Null hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is a right sided T\PYZhy{}Test}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{Su} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1} \PY{o}{+} \PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mf}{0.5}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{SE }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{Su}\PY{p}{)}\PY{p}{)}
\PY{n}{df} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{math}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{n}{Su} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{4} \PY{o}{/} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n1} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)} \PY{o}{+} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n2} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{DF }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{df}\PY{p}{)}\PY{p}{)}
\PY{n}{tstat} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{x1} \PY{o}{\PYZhy{}} \PY{n}{x2}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{0}\PY{p}{)} \PY{o}{/}\PY{p}{(}\PY{n}{Su}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T\PYZhy{}Statistics: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{tstat}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{alpha/ Significance: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Significant t\PYZhy{}value at alpha \PYZhy{} }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha} \PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*}\PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Hence we can retain the NULL Hypothesis (ho)}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
X1 = Salary Stats for Percent <= 60:: Mean: 264800.0, STD: 112817.20214983808, Len: 110
X2 = Salary Stats for Percent > 60:: Mean: 279859.40594059404, STD: 80594.30269880143, Len: 202
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
SE 12159.860487859336
DF 170
T-Statistics: -1.2384521973446707
alpha/ Significance: 0.1
Significant t-value at alpha - 0.1 is : 1.286551282624592
Hence we can retain the NULL Hypothesis (ho)
\end{Verbatim}
\subsection{Q6 - 3 - c}\label{q6---3---c}
We will use a 2-sample T-test (with unknown and unequal SDs) for this
hypothesis test on the difference in mean salaries.
T-statistic equation:
\begin{equation*} T = \frac{(\bar x_1 - \bar x_2) - (\mu _1 - \mu _2)} {\sqrt {\frac {S_1 ^ 2}{n_1} + \frac {S_2 ^ 2} {n_2}}} \end{equation*}
DF equation:
\begin{equation*} df = \left \lfloor{\frac{S_u ^ 4} {\frac{( \frac{S_1 ^ 2}{n_1} )^2}{n_1 - 1} + {\frac{( \frac{S_2 ^ 2}{n_2} )^2}{n_2 - 1}}}} \right \rfloor\end{equation*}
Standard error ($S_u$):
\begin{equation*} S_u = \sqrt{\frac{S_1 ^ 2}{n_1} + \frac{S_2 ^ 2}{n_2}} \end{equation*}
$S_1$, $S_2$ are the sample standard deviations; $n_1$, $n_2$ are the sample sizes.
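Since these formulas only need summary statistics, the test can also be reproduced directly from the means, standard deviations and sample sizes; a minimal sketch, assuming the \texttt{x1}, \texttt{S1}, \texttt{n1}, \texttt{x2}, \texttt{S2}, \texttt{n2} values computed in the next cell:

\begin{verbatim}
# Hedged sketch: Welch's T-test from summary statistics only.
# x1, S1, n1 / x2, S2, n2 are the sample means, SDs and sizes
# computed in the cell below; equal_var=False selects Welch's test.
tstat, p_two_sided = sp.stats.ttest_ind_from_stats(x1, S1, n1,
                                                   x2, S2, n2,
                                                   equal_var=False)
# Right-sided p-value for ha: x1 - x2 > 0
p_right = p_two_sided / 2 if tstat > 0 else 1 - p_two_sided / 2
print(tstat, p_right)
\end{verbatim}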
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}57}]:} \PY{n}{x1} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Gender}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{M}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}} \PY{p}{]}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S1} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Gender}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{M}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n}{n1} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Gender}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{M}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Stats for male: X1:Mean }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, STD }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, Len }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{)}\PY{p}{)}
\PY{n}{x2} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Gender}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{F}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S2} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Gender}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{F}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n}{n2} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Gender}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{F}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{o}{\PYZam{}} \PY{p}{(}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Stats for Female: X2:Mean }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, STD }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, Len }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}\PY{p}{)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{The proportions x1 \PYZhy{} x2 \PYZlt{}= 10000}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x1 \PYZhy{} x2 \PYZgt{} 10000}\PY{l+s+s2}{\PYZdq{}}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Null hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is a right sided T\PYZhy{}Test}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is 2 Sample T test, with unknown population SD and the SD of the two are unequal}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{Su} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1} \PY{o}{+} \PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mf}{0.5}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{SU }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{Su}\PY{p}{)}\PY{p}{)}
\PY{n}{df} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{math}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{n}{Su} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{4} \PY{o}{/} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n1} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)} \PY{o}{+} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n2} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Degrees of freedom: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{df}\PY{p}{)}\PY{p}{)}
\PY{n}{tstat} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{x1} \PY{o}{\PYZhy{}} \PY{n}{x2}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{10000}\PY{p}{)} \PY{o}{/}\PY{p}{(}\PY{n}{Su}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T\PYZhy{}stat }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{tstat}\PY{p}{)}\PY{p}{)}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.05}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{alpha/ Significance: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Significant t\PYZhy{}value at alpha \PYZhy{} }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha} \PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{p\PYZhy{}value:}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is less than alpha(}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{)}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{n}{tstat}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{,} \PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Hence we can reject the NULL Hypothesis (ho), Male Salary \PYZgt{} Female Salary by 10K}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Stats for male: X1:Mean 284241.8604651163, STD 99430.42049537445, Len 215
Stats for Female: X2:Mean 253068.04123711342, STD 74190.54233637161, Len 97
Null hypothesis: The proportions x1 - x2 <= 10000
Alternate hypothesis: x1 - x2 > 10000
This is a right sided T-Test
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
SU 10135.482345249799
Degrees of freedom: 243
T-stat 2.08907859604002
alpha/ Significance: 0.05
Significant t-value at alpha - 0.05 is : 1.6511484017734768
p-value:0.018870644990801377 is less than alpha(0.05)
Hence we can reject the NULL Hypothesis (ho), Male Salary > Female Salary by 10K
\end{Verbatim}
\subsection{Q6 - 3 - d}\label{q6---3---d}
The assumption is that effectiveness is measured by the outcome variable
Placement, i.e.\ how frequently CBSE students get placed compared with
non-CBSE students. The correlations of the other KPIs with CBSE /
non-CBSE status are weak, and even graphically we do not see any
significant deviations. We will therefore perform a T-test only for CBSE
vs Placement, after the plots below.
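Because Placement is a 0/1 indicator, a two-proportion z-test is an alternative to the two-sample T-test used below; a minimal sketch, assuming \texttt{statsmodels} is available and that \texttt{deandata} has the columns used in this notebook.

\begin{verbatim}
# Hedged sketch: two-proportion z-test as an alternative to the T-test
# on the 0/1 placement indicator. Assumes statsmodels is installed.
from statsmodels.stats.proportion import proportions_ztest

cbse    = deandata.loc[deandata['Board_CBSE'] == 1, 'Placement_B']
noncbse = deandata.loc[deandata['Board_CBSE'] == 0, 'Placement_B']

count = [cbse.sum(), noncbse.sum()]        # number placed in each group
nobs  = [cbse.shape[0], noncbse.shape[0]]  # group sizes

# alternative='larger' tests ha: p_cbse - p_noncbse > 0
zstat, pval = proportions_ztest(count, nobs, alternative='larger')
print("z = {:.4f}, p = {:.4f}".format(zstat, pval))
\end{verbatim}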
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}58}]:} \PY{c+c1}{\PYZsh{}deandata.loc[deandata[\PYZdq{}Board\PYZus{}CBSE\PYZdq{}] == 1]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Correlation Board\PYZus{}CBSE and Placement}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{deandata}\PY{p}{[}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Board\PYZus{}CBSE}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Placement\PYZus{}B}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}\PY{p}{]}\PY{o}{.}\PY{n}{corr}\PY{p}{(}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{Correlation Board\PYZus{}CBSE and Percent MBA Marks Obtained}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{deandata}\PY{p}{[}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Board\PYZus{}CBSE}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Percent\PYZus{}MBA}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}\PY{p}{]}\PY{o}{.}\PY{n}{corr}\PY{p}{(}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Correlation Board\_CBSE and Placement
Board\_CBSE Placement\_B
Board\_CBSE 1.000000 0.053834
Placement\_B 0.053834 1.000000
Correlation Board\_CBSE and Percent MBA Marks Obtained
Board\_CBSE Percent\_MBA
Board\_CBSE 1.000000 -0.090726
Percent\_MBA -0.090726 1.000000
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}59}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize} \PY{o}{=} \PY{p}{(}\PY{l+m+mi}{10}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{)}\PY{p}{)}
\PY{k}{for} \PY{n}{i}\PY{p}{,} \PY{n}{source} \PY{o+ow}{in} \PY{n+nb}{enumerate}\PY{p}{(}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Salary}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Percent\PYZus{}MBA}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Marks\PYZus{}Projectwork}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{p}{:}
\PY{n}{plt}\PY{o}{.}\PY{n}{subplot}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{n}{i} \PY{o}{+} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{barplot}\PY{p}{(}\PY{n}{x} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Board\PYZus{}CBSE}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{y}\PY{o}{=}\PY{n}{source}\PY{p}{,} \PY{n}{data} \PY{o}{=} \PY{n}{deandata}\PY{p}{)}
\PY{c+c1}{\PYZsh{} Label the plots}
\PY{n}{plt}\PY{o}{.}\PY{n}{title}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Barplot of }\PY{l+s+si}{\PYZpc{}s}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{n}{source}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{tight\PYZus{}layout}\PY{p}{(}\PY{n}{h\PYZus{}pad} \PY{o}{=} \PY{l+m+mf}{2.5}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_112_0.png}
\end{center}
{ \hspace*{\fill} \\}
We will use a 2-sample T-test (with unknown and unequal SDs) for this
hypothesis test on the placement proportions (the 0/1 placement indicator).
T-statistic equation:
\begin{equation*} T = \frac{(\bar x_1 - \bar x_2) - (\mu _1 - \mu _2)} {\sqrt {\frac {S_1 ^ 2}{n_1} + \frac {S_2 ^ 2} {n_2}}} \end{equation*}
DF equation:
\begin{equation*} df = \left \lfloor{\frac{S_u ^ 4} {\frac{( \frac{S_1 ^ 2}{n_1} )^2}{n_1 - 1} + {\frac{( \frac{S_2 ^ 2}{n_2} )^2}{n_2 - 1}}}} \right \rfloor\end{equation*}
Standard error ($S_u$):
\begin{equation*} S_u = \sqrt{\frac{S_1 ^ 2}{n_1} + \frac{S_2 ^ 2}{n_2}} \end{equation*}
$S_1$, $S_2$ are the sample standard deviations; $n_1$, $n_2$ are the sample sizes.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}60}]:} \PY{n}{x1} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Board\PYZus{}CBSE}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S1} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Board\PYZus{}CBSE}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n}{n1} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Board\PYZus{}CBSE}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Stats for CBSE Placement: X1:Mean }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, STD }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, Len }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1}\PY{p}{)}\PY{p}{)}
\PY{n}{x2} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Board\PYZus{}CBSE}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{S2} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Board\PYZus{}CBSE}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n}{n2} \PY{o}{=} \PY{n}{deandata}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{deandata}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Board\PYZus{}CBSE}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Placement\PYZus{}B}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Stats for NonCBSE Placement: X1:Mean }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, STD }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, Len }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is 2 Sample T test, with unknown population SD and the SD of the two are unequal}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{Su} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1} \PY{o}{+} \PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mf}{0.5}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{SE }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{Su}\PY{p}{)}\PY{p}{)}
\PY{n}{df} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{math}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{n}{Su} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{4} \PY{o}{/} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n1} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)} \PY{o}{+} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n2} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Degrees of freedom: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{df}\PY{p}{)}\PY{p}{)}
\PY{n}{tstat} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{x1} \PY{o}{\PYZhy{}} \PY{n}{x2}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{0}\PY{p}{)} \PY{o}{/}\PY{p}{(}\PY{n}{Su}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T\PYZhy{}stat }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{tstat}\PY{p}{)}\PY{p}{)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{The proportions x1 \PYZhy{} x2 \PYZlt{}= 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x1 \PYZhy{} x2 \PYZgt{} 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Null hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is a right sided T\PYZhy{}Test}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{alpha/ Significance: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Significant t\PYZhy{}value at alpha \PYZhy{} }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha} \PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{p\PYZhy{}value:}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is gt than alpha(}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{)}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{n}{tstat}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{,} \PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Hence we can retain the NULL Hypothesis (ho)}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Conclusions:}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{1. There is no statistical proof that CBSE students get more jobs}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{2. They do not get more marks, if anything there the graph shows on avg CBSE students get less marks in MBA}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Hence giving more preference to CBSE students adds no extra value to the institution}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Stats for CBSE Placement: X1:Mean 0.831858407079646, STD 0.37565787215935875, Len 113
Stats for NonCBSE Placement: X1:Mean 0.7841726618705036, STD 0.4121369848339013, Len 278
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
SE 0.04312580767108984
Degrees of freedom: 226
T-stat 1.1057357017596072
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
alpha/ Significance: 0.05
Significant t-value at alpha - 0.05 is : 1.651623859318793
p-value:0.1350083082855783 is gt than alpha(0.05)
Hence we can retain the NULL Hypothesis (ho)
Conclusions:
1. There is no statistical proof that CBSE students get more jobs
2. They do not get more marks, if anything there the graph shows on avg CBSE students get less marks in MBA
Hence giving more preference to CBSE students adds no extra value to the institution
\end{Verbatim}
\subsection{Q6 - 3 - e}\label{q6---3---e}
\textbf{Recommendations:}
\begin{itemize}
\tightlist
\item
  Giving priority to CBSE students should be discontinued, as there is
  no evidence that they get better placements, do better in Project
  Work, or obtain a higher overall percentage of marks
\item
  More emphasis should be placed on doing Project Work well. Students
  with better Project Work marks appear to get placed better (this has
  not been statistically proven, but the graphs provide some visual
  evidence)
\item
  Marketing students seem to be faring worse than the other streams in
  terms of placements
\item
  Male students are paid significantly more than female students on
  average. There is a possibility of bias and a gender pay gap (more
  tests are needed to provide complete evidence, controlling for other
  variables such as experience). Hence the Dean might need to work with
  recruiters on this if further studies do confirm a gender pay bias
\end{itemize}
\section{Q7}\label{q7}
\subsection{Q7 - a}\label{q7---a}
The optimal number of bins for a histogram is computed using Sturges'
rule (this will be used later in the solution): \textbf{N = 1 + 3.322
* log10(n)}, where n = number of observations.
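For example, with the 50 observations used below, this rule gives 6 bins; a minimal sketch:

\begin{verbatim}
# Hedged sketch: Sturges' rule for the number of histogram bins.
import numpy as np

n = 50                          # number of observations (as below)
N = 1 + 3.322 * np.log10(n)     # Sturges' rule
print(int(np.floor(N)))         # -> 6 bins
\end{verbatim}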
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}61}]:} \PY{n}{propUndVot} \PY{o}{=} \PY{p}{[}\PY{l+m+mi}{12}\PY{p}{,} \PY{l+m+mi}{16}\PY{p}{,} \PY{l+m+mi}{12}\PY{p}{,} \PY{l+m+mi}{10}\PY{p}{,} \PY{l+m+mi}{14}\PY{p}{,} \PY{l+m+mi}{9}\PY{p}{,} \PY{l+m+mi}{8}\PY{p}{,} \PY{l+m+mi}{13}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,}
\PY{l+m+mi}{19}\PY{p}{,} \PY{l+m+mi}{8}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{,} \PY{l+m+mi}{11}\PY{p}{,} \PY{l+m+mi}{19}\PY{p}{,} \PY{l+m+mi}{14}\PY{p}{,} \PY{l+m+mi}{10}\PY{p}{,} \PY{l+m+mi}{20}\PY{p}{,} \PY{l+m+mi}{11}\PY{p}{,} \PY{l+m+mi}{10}\PY{p}{,}
\PY{l+m+mi}{6}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{l+m+mi}{12}\PY{p}{,} \PY{l+m+mi}{16}\PY{p}{,} \PY{l+m+mi}{9}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{l+m+mi}{9}\PY{p}{,} \PY{l+m+mi}{17}\PY{p}{,} \PY{l+m+mi}{18}\PY{p}{,}
\PY{l+m+mi}{15}\PY{p}{,} \PY{l+m+mi}{17}\PY{p}{,} \PY{l+m+mi}{18}\PY{p}{,} \PY{l+m+mi}{13}\PY{p}{,} \PY{l+m+mi}{18}\PY{p}{,} \PY{l+m+mi}{11}\PY{p}{,} \PY{l+m+mi}{7}\PY{p}{,} \PY{l+m+mi}{20}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{,} \PY{l+m+mi}{11}\PY{p}{,}
\PY{l+m+mi}{23}\PY{p}{,} \PY{l+m+mi}{10}\PY{p}{,} \PY{l+m+mi}{24}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{,} \PY{l+m+mi}{24}\PY{p}{,} \PY{l+m+mi}{18}\PY{p}{,} \PY{l+m+mi}{7}\PY{p}{,} \PY{l+m+mi}{8}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{l+m+mi}{15}\PY{p}{]}
\PY{n}{mu} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{propUndVot}\PY{p}{)}
\PY{n}{sigma} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{propUndVot}\PY{p}{)}
\PY{n}{x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linspace}\PY{p}{(}\PY{n}{mu} \PY{o}{\PYZhy{}} \PY{l+m+mi}{3}\PY{o}{*}\PY{n}{sigma}\PY{p}{,} \PY{n}{mu} \PY{o}{+} \PY{l+m+mi}{3}\PY{o}{*}\PY{n}{sigma}\PY{p}{,} \PY{l+m+mi}{100}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{x}\PY{p}{,}\PY{n}{mlab}\PY{o}{.}\PY{n}{normpdf}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{mu}\PY{p}{,} \PY{n}{sigma}\PY{p}{)}\PY{p}{,} \PY{n}{color} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{red}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{sns}\PY{o}{.}\PY{n}{distplot}\PY{p}{(}\PY{n}{propUndVot}\PY{p}{,} \PY{n}{bins}\PY{o}{=}\PY{l+m+mi}{6}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)}
\PY{n}{pp\PYZus{}x} \PY{o}{=} \PY{n}{sm}\PY{o}{.}\PY{n}{ProbPlot}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{n}{propUndVot}\PY{p}{)}\PY{p}{,} \PY{n}{fit}\PY{o}{=}\PY{k+kc}{True}\PY{p}{)}
\PY{n}{pp\PYZus{}x}\PY{o}{.}\PY{n}{ppplot}\PY{p}{(}\PY{n}{line}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{45}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_118_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_118_1.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}62}]:} \PY{o}{\PYZpc{}\PYZpc{}}R \PY{o}{\PYZhy{}}i propUndVot
\PY{k+kp}{print}\PY{p}{(}\PY{k+kp}{paste}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{ho = The distibutions are same (normal)\PYZdq{}}\PY{p}{)}\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}\PY{k+kp}{paste}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{ha = The distibutions are dissimilar\PYZdq{}}\PY{p}{)}\PY{p}{)}
propUndVot \PY{o}{=} \PY{k+kp}{unlist}\PY{p}{(}propUndVot\PY{p}{)}
\PY{k+kn}{library}\PY{p}{(}data.table\PY{p}{)}
Xmin \PY{o}{=} \PY{k+kp}{min}\PY{p}{(}propUndVot\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}\PY{k+kp}{paste}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{Min Range:\PYZdq{}}\PY{p}{,} Xmin\PY{p}{)}\PY{p}{)}
Xmax \PY{o}{=} \PY{k+kp}{max}\PY{p}{(}propUndVot\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}\PY{k+kp}{paste}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{Max Range:\PYZdq{}}\PY{p}{,} Xmax\PY{p}{)}\PY{p}{)}
N \PY{o}{=} \PY{l+m}{1} \PY{o}{+} \PY{l+m}{3.3}\PY{o}{*}\PY{k+kp}{log10}\PY{p}{(}\PY{k+kp}{length}\PY{p}{(}propUndVot\PY{p}{)}\PY{p}{)} \PY{c+c1}{\PYZsh{} Number of bins in the range}
N \PY{o}{=} \PY{k+kp}{floor}\PY{p}{(}N\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}\PY{k+kp}{paste}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{Total Num Bins:\PYZdq{}}\PY{p}{,} \PY{k+kp}{floor}\PY{p}{(}N\PY{p}{)}\PY{p}{)}\PY{p}{)}
obsdistribution \PY{o}{=} as.data.table\PY{p}{(}\PY{k+kp}{table}\PY{p}{(}\PY{k+kp}{cut}\PY{p}{(}propUndVot\PY{p}{,} breaks \PY{o}{=} \PY{k+kp}{seq}\PY{p}{(}\PY{l+m}{4.9}\PY{p}{,}\PY{l+m}{24.1}\PY{p}{,} by \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{l+m}{24.1}\PY{l+m}{\PYZhy{}5}\PY{p}{)}\PY{o}{/}N\PY{p}{)}\PY{p}{)}\PY{p}{)}\PY{p}{)}\PY{p}{)}\PY{c+c1}{\PYZsh{}breaks = N)))}
\PY{k+kp}{print}\PY{p}{(}obsdistribution\PY{p}{)}
cutpoint \PY{o}{=} \PY{k+kp}{unique}\PY{p}{(}\PY{k+kt}{c}\PY{p}{(}\PY{k+kp}{as.numeric}\PY{p}{(} \PY{k+kp}{sub}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{\PYZbs{}\PYZbs{}((.+),.*\PYZdq{}}\PY{p}{,} \PY{l+s}{\PYZdq{}}\PY{l+s}{\PYZbs{}\PYZbs{}1\PYZdq{}}\PY{p}{,} obsdistribution\PY{o}{\PYZdl{}}V1\PY{p}{)} \PY{p}{)}\PY{p}{,}
\PY{k+kp}{as.numeric}\PY{p}{(} \PY{k+kp}{sub}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{[\PYZca{},]*,([\PYZca{}]]*)\PYZbs{}\PYZbs{}]\PYZdq{}}\PY{p}{,} \PY{l+s}{\PYZdq{}}\PY{l+s}{\PYZbs{}\PYZbs{}1\PYZdq{}}\PY{p}{,} obsdistribution\PY{o}{\PYZdl{}}V1\PY{p}{)} \PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(cutpoint)}
meandist \PY{o}{=} \PY{k+kp}{mean}\PY{p}{(}propUndVot\PY{p}{)}
std \PY{o}{=} sd\PY{p}{(}propUndVot\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(meandist)}
\PY{c+c1}{\PYZsh{}print(std)}
normaldist \PY{o}{=} pnorm\PY{p}{(}cutpoint\PY{p}{,} meandist \PY{p}{,} std\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(normaldist)}
probval \PY{o}{=} \PY{k+kt}{c}\PY{p}{(}\PY{p}{)}
\PY{k+kr}{for}\PY{p}{(}i \PY{k+kr}{in} \PY{k+kp}{seq}\PY{p}{(}\PY{l+m}{1}\PY{o}{:}\PY{k+kp}{length}\PY{p}{(}normaldist\PY{p}{)}\PY{l+m}{\PYZhy{}1}\PY{p}{)}\PY{p}{)}\PY{p}{\PYZob{}}
probval \PY{o}{=} \PY{k+kt}{c}\PY{p}{(}probval\PY{p}{,} normaldist\PY{p}{[}i\PY{l+m}{+1}\PY{p}{]} \PY{o}{\PYZhy{}} normaldist\PY{p}{[}i\PY{p}{]}\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(normaldist[i+1])}
\PY{c+c1}{\PYZsh{}print(normaldist[i])}
\PY{p}{\PYZcb{}}
normfreq \PY{o}{=} probval \PY{o}{*} \PY{k+kp}{length}\PY{p}{(}propUndVot\PY{p}{)}
obsdistribution\PY{o}{\PYZdl{}}ExpectedNorm \PY{o}{=} \PY{k+kp}{as.integer}\PY{p}{(}normfreq\PY{p}{[}\PY{l+m}{1}\PY{o}{:}\PY{l+m}{6}\PY{p}{]}\PY{p}{)}
obsdistribution\PY{o}{\PYZdl{}}ExpectedNormDev \PY{o}{=} \PY{p}{(}obsdistribution\PY{o}{\PYZdl{}}N \PY{o}{\PYZhy{}} obsdistribution\PY{o}{\PYZdl{}}ExpectedNorm\PY{p}{)}\PY{o}{\PYZca{}}\PY{l+m}{2}\PY{o}{/}
\PY{k+kp}{ifelse}\PY{p}{(}obsdistribution\PY{o}{\PYZdl{}}ExpectedNorm\PY{o}{==}\PY{l+m}{0}\PY{p}{,} \PY{l+m}{0}\PY{p}{,}
obsdistribution\PY{o}{\PYZdl{}}ExpectedNorm\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}obsdistribution\PY{p}{)}
obsdistribution\PY{o}{\PYZdl{}}ExpectedNormDev\PY{p}{[}\PY{k+kp}{is.infinite}\PY{p}{(}obsdistribution\PY{o}{\PYZdl{}}ExpectedNormDev\PY{p}{)}\PY{p}{]} \PY{o}{=} \PY{l+m}{0}
\PY{k+kp}{print}\PY{p}{(}\PY{k+kp}{paste0}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{ChiSq:\PYZdq{}}\PY{p}{,}\PY{p}{(}\PY{k+kp}{sum}\PY{p}{(}obsdistribution\PY{o}{\PYZdl{}}ExpectedNormDev\PY{p}{)}\PY{o}{\PYZhy{}} \PY{l+m}{0.5}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{We have rejected the last bucket as the number of records in the bin is \PYZlt{} 5\PYZdq{}}\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{Not Normal \PYZhy{} Chisq Critical = 9.2, DF = 5\PYZdq{}}\PY{p}{)}
\PY{k+kp}{print}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{We reject the Null Hypothesis, the distribution is statistically not normal\PYZdq{}}\PY{p}{)}
\PY{c+c1}{\PYZsh{}install.packages(\PYZdq{}fitdistrplus\PYZdq{})}
\PY{c+c1}{\PYZsh{}library(fitdistrplus)}
\PY{c+c1}{\PYZsh{}x = fitdist(propUndVot, \PYZdq{}norm\PYZdq{})}
\PY{c+c1}{\PYZsh{}summary(x)}
\end{Verbatim}
\begin{verbatim}
[1] "ho = The distibutions are same (normal)"
[1] "ha = The distibutions are dissimilar"
[1] "Min Range: 5"
[1] "Max Range: 24"
[1] "Total Num Bins: 6"
V1 N
1: (4.9,8.08] 15
2: (8.08,11.3] 11
3: (11.3,14.5] 7
4: (14.5,17.6] 6
5: (17.6,20.8] 8
6: (20.8,24] 3
V1 N ExpectedNorm ExpectedNormDev
1: (4.9,8.08] 15 6 13.500000
2: (8.08,11.3] 11 10 0.100000
3: (11.3,14.5] 7 11 1.454545
4: (14.5,17.6] 6 8 0.500000
5: (17.6,20.8] 8 5 1.800000
6: (20.8,24] 3 2 0.500000
[1] "ChiSq:17.3545454545455"
[1] "We have rejected the last bucket as the number of records in the bin is < 5"
[1] "Not Normal - Chisq Critical = 9.2, DF = 5"
[1] "We reject the Null Hypothesis, the distribution is statistically not normal"
\end{verbatim}
\subsection{Q7 - b}\label{q7---b}
We will calculate the lower and upper bounds of the confidence interval
using the following equation:
\begin{equation*} \bar X \pm T \cdot \frac {S}{\sqrt n} \end{equation*}
where $\bar X$ = sample mean, $T$ = critical T value at the chosen
alpha/significance level, $S$ = standard deviation, and $n$ = sample size.
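The same 90\% interval can also be obtained directly from \texttt{scipy}; a minimal sketch, assuming \texttt{propUndVot} is the sample defined in Q7 - a and that \texttt{np}/\texttt{sp} are the imports used throughout the notebook.

\begin{verbatim}
# Hedged sketch: 90% t-based confidence interval for the mean,
# using the propUndVot sample from Q7 - a.
n    = len(propUndVot)
xbar = np.mean(propUndVot)
s    = np.std(propUndVot)        # same (population) SD as used below
lo, hi = sp.stats.t.interval(0.90, df=n - 1,
                             loc=xbar, scale=s / np.sqrt(n))
print(lo, hi)
\end{verbatim}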
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}63}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T distribution with 90}\PY{l+s+s2}{\PYZpc{}}\PY{l+s+s2}{ CI (Population variance in unknown)}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{n} \PY{o}{=} \PY{n+nb}{len}\PY{p}{(}\PY{n}{propUndVot}\PY{p}{)}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.1}
\PY{n}{T} \PY{o}{=} \PY{n+nb}{abs}\PY{p}{(}\PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{n}{alpha} \PY{o}{/} \PY{l+m+mi}{2}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{n} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T\PYZhy{}stat }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ at alpha }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{T}\PY{p}{,} \PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n}{Xbar} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{propUndVot}\PY{p}{)}
\PY{n}{S} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{propUndVot}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{X\PYZus{}bar }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{Xbar}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{SE }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{S}\PY{p}{)}\PY{p}{)}
\PY{n}{lowerBound} \PY{o}{=} \PY{n}{Xbar} \PY{o}{\PYZhy{}} \PY{n}{T} \PY{o}{*} \PY{n}{S} \PY{o}{/} \PY{n}{n} \PY{o}{*}\PY{o}{*} \PY{l+m+mf}{0.5}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{lowerBound }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{lowerBound}\PY{p}{)}\PY{p}{)}
\PY{n}{upperBound} \PY{o}{=} \PY{n}{Xbar} \PY{o}{+} \PY{n}{T} \PY{o}{*} \PY{n}{S} \PY{o}{/} \PY{n}{n} \PY{o}{*}\PY{o}{*} \PY{l+m+mf}{0.5}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{upperBound }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{upperBound}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
T distribution with 90\% CI (Population variance in unknown)
T-stat 1.6765508919142635 at alpha 0.1
X\_bar 12.22
SE 5.397369729785055
lowerBound 10.940283092282368
upperBound 13.499716907717634
\end{Verbatim}
\section{Q8}\label{q8}
We will use a paired 2-sample T-test.
T-statistic equation:
\begin{equation*} T = \frac{\bar D - \mu _d} {\frac {S_d}{\sqrt n}}\end{equation*}
where $\bar D$ is the mean of the paired differences and $S_d$ is their
standard deviation. Under the null hypothesis here,
\begin{equation*} \mu _d = 0\end{equation*}
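As a cross-check, \texttt{scipy}'s paired test gives the same statistic; a minimal sketch, assuming the \texttt{students} DataFrame constructed in the next cell:

\begin{verbatim}
# Hedged sketch: paired T-test cross-check with scipy.
# students.meditation / students.no_meditation come from the next cell.
tstat, p_two_sided = sp.stats.ttest_rel(students.meditation,
                                        students.no_meditation)
# Right-sided p-value for ha: mean difference > 0
p_right = p_two_sided / 2 if tstat > 0 else 1 - p_two_sided / 2
print(tstat, p_right)
\end{verbatim}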
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}64}]:} \PY{n}{students} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{\PYZob{}}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{pair\PYZus{}num}\PY{l+s+s2}{\PYZdq{}}\PY{p}{:} \PY{n}{np}\PY{o}{.}\PY{n}{arange}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{11}\PY{p}{)}\PY{p}{,}
\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{meditation}\PY{l+s+s2}{\PYZdq{}}\PY{p}{:} \PY{p}{[}\PY{l+m+mf}{4.0}\PY{p}{,} \PY{l+m+mf}{2.65}\PY{p}{,} \PY{l+m+mf}{3.65}\PY{p}{,} \PY{l+m+mf}{2.55}\PY{p}{,} \PY{l+m+mf}{3.2}\PY{p}{,} \PY{l+m+mf}{3.6}\PY{p}{,} \PY{l+m+mf}{2.9}\PY{p}{,} \PY{l+m+mf}{3.41}\PY{p}{,} \PY{l+m+mf}{3.33}\PY{p}{,} \PY{l+m+mf}{2.90}\PY{p}{]}\PY{p}{,}
\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{no\PYZus{}meditation}\PY{l+s+s2}{\PYZdq{}}\PY{p}{:} \PY{p}{[}\PY{l+m+mf}{3.75}\PY{p}{,} \PY{l+m+mf}{2.75}\PY{p}{,} \PY{l+m+mf}{3.45}\PY{p}{,} \PY{l+m+mf}{2.11}\PY{p}{,} \PY{l+m+mf}{3.21}\PY{p}{,} \PY{l+m+mf}{3.25}\PY{p}{,} \PY{l+m+mf}{2.58}\PY{p}{,} \PY{l+m+mf}{3.28}\PY{p}{,} \PY{l+m+mf}{3.35}\PY{p}{,} \PY{l+m+mf}{2.65}\PY{p}{]}
\PY{p}{\PYZcb{}}\PY{p}{)}
\PY{n}{students}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{D}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{=} \PY{n}{students}\PY{o}{.}\PY{n}{meditation} \PY{o}{\PYZhy{}} \PY{n}{students}\PY{o}{.}\PY{n}{no\PYZus{}meditation}
\PY{n}{students}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}64}]:} meditation no\_meditation pair\_num D
0 4.00 3.75 1 0.25
1 2.65 2.75 2 -0.10
2 3.65 3.45 3 0.20
3 2.55 2.11 4 0.44
4 3.20 3.21 5 -0.01
5 3.60 3.25 6 0.35
6 2.90 2.58 7 0.32
7 3.41 3.28 8 0.13
8 3.33 3.35 9 -0.02
9 2.90 2.65 10 0.25
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}65}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{We will use Paired Sample T test}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Difference in CGPA between students performing meditation and not performing meditation \PYZlt{}= 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Difference in CGPA between students performing meditation and not performing meditation \PYZgt{} 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{alpha} \PY{o}{=}\PY{l+m+mf}{0.05}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Null hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is a right sided T\PYZhy{}Test}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{dmean} \PY{o}{=} \PY{n}{students}\PY{o}{.}\PY{n}{D}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}
\PY{n}{dstd} \PY{o}{=} \PY{n}{students}\PY{o}{.}\PY{n}{D}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Mean of difference: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{dmean}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{SD of difference: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{dstd}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Null Hypothesis: ho = mean \PYZlt{}= 0}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate Hypothesis: ha = mean \PYZgt{} 0}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Tstat Formula : tsat = (dmean \PYZhy{} 0) / dstd / n ** 0.5}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{tstat} \PY{o}{=} \PY{p}{(}\PY{n}{dmean} \PY{o}{\PYZhy{}} \PY{l+m+mi}{0}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{dstd} \PY{o}{/} \PY{n}{students}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{*}\PY{o}{*} \PY{l+m+mf}{0.5}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T\PYZhy{}Statistic Value: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{tstat}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{For Alpha }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{students}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{We reject null hypothesis ho, proves meditation helps}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
We will use Paired Sample T test
Null hypothesis: Difference in CGPA between students performing meditation and not performing meditation <= 0
Alternate hypothesis: Difference in CGPA between students performing meditation and not performing meditation > 0
This is a right sided T-Test
Mean of difference: 0.181
SD of difference: 0.17741664709566196
Null Hypothesis: ho = mean <= 0
Alternate Hypothesis: ha = mean > 0
Tstat Formula : tsat = (dmean - 0) / dstd / n ** 0.5
T-Statistic Value: 3.2261474098417446
For Alpha 0.05 is : 1.8331129326536337
We reject null hypothesis ho, proves meditation helps
\end{Verbatim}
\section{Q9}\label{q9}
We will use a 2-sample T-test (with unknown and unequal SDs) for this
hypothesis test on the difference in mean returns.
T-statistic equation:
\begin{equation*} T = \frac{(\bar x_1 - \bar x_2) - (\mu _1 - \mu _2)} {\sqrt {\frac {S_1 ^ 2}{n_1} + \frac {S_2 ^ 2} {n_2}}} \end{equation*}
DF equation:
\begin{equation*} df = \left \lfloor{\frac{S_u ^ 4} {\frac{( \frac{S_1 ^ 2}{n_1} )^2}{n_1 - 1} + {\frac{( \frac{S_2 ^ 2}{n_2} )^2}{n_2 - 1}}}} \right \rfloor\end{equation*}
Standard error ($S_u$):
\begin{equation*} S_u = \sqrt{\frac{S_1 ^ 2}{n_1} + \frac{S_2 ^ 2}{n_2}} \end{equation*}
$S_1$, $S_2$ are the sample standard deviations; $n_1$, $n_2$ are the sample sizes.
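The DF formula above (the Welch-Satterthwaite approximation) can be wrapped in a small helper; a minimal sketch:

\begin{verbatim}
# Hedged sketch: Welch-Satterthwaite degrees of freedom, as used below.
import math

def welch_df(S1, n1, S2, n2):
    """Floor of the Welch-Satterthwaite df for two samples."""
    v1 = S1 ** 2 / n1
    v2 = S2 ** 2 / n2
    return math.floor((v1 + v2) ** 2 /
                      (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1)))

# With the two 50-observation return samples below this gives df = 95,
# matching the notebook output.
\end{verbatim}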
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}66}]:} \PY{n}{emergingMkt} \PY{o}{=} \PY{p}{[}
\PY{l+m+mf}{11.20}\PY{p}{,} \PY{l+m+mf}{12.10}\PY{p}{,} \PY{l+m+mf}{13.33}\PY{p}{,} \PY{l+m+mf}{16.40}\PY{p}{,} \PY{l+m+mf}{15.00}\PY{p}{,} \PY{l+m+mf}{10.00}\PY{p}{,} \PY{l+m+mf}{12.00}\PY{p}{,} \PY{l+m+mf}{13.00}\PY{p}{,} \PY{l+m+mf}{12.00}\PY{p}{,} \PY{l+m+mf}{13.00}\PY{p}{,}
\PY{l+m+mf}{8.25}\PY{p}{,} \PY{l+m+mf}{7.00}\PY{p}{,} \PY{l+m+mf}{10.00}\PY{p}{,} \PY{l+m+mf}{11.46}\PY{p}{,} \PY{l+m+mf}{11.00}\PY{p}{,} \PY{l+m+mf}{7.70}\PY{p}{,} \PY{l+m+mf}{7.00}\PY{p}{,} \PY{l+m+mf}{12.00}\PY{p}{,} \PY{l+m+mf}{18.00}\PY{p}{,} \PY{l+m+mf}{10.00}\PY{p}{,}
\PY{l+m+mf}{13.11}\PY{p}{,} \PY{l+m+mf}{9.00}\PY{p}{,} \PY{l+m+mf}{14.00}\PY{p}{,} \PY{l+m+mf}{9.90}\PY{p}{,} \PY{l+m+mf}{16.00}\PY{p}{,} \PY{l+m+mf}{9.00}\PY{p}{,} \PY{l+m+mf}{6.00}\PY{p}{,} \PY{l+m+mf}{11.40}\PY{p}{,} \PY{l+m+mf}{7.00}\PY{p}{,} \PY{l+m+mf}{16.00}\PY{p}{,}
\PY{l+m+mf}{8.41}\PY{p}{,} \PY{l+m+mf}{17.21}\PY{p}{,} \PY{l+m+mf}{14.00}\PY{p}{,} \PY{l+m+mf}{15.00}\PY{p}{,} \PY{l+m+mf}{17.20}\PY{p}{,} \PY{l+m+mf}{18.00}\PY{p}{,} \PY{l+m+mf}{9.00}\PY{p}{,} \PY{l+m+mf}{7.00}\PY{p}{,} \PY{l+m+mf}{15.45}\PY{p}{,} \PY{l+m+mf}{15.00}\PY{p}{,}
\PY{l+m+mf}{13.00}\PY{p}{,} \PY{l+m+mf}{18.60}\PY{p}{,} \PY{l+m+mf}{16.00}\PY{p}{,} \PY{l+m+mf}{9.60}\PY{p}{,} \PY{l+m+mf}{12.00}\PY{p}{,} \PY{l+m+mf}{6.00}\PY{p}{,} \PY{l+m+mf}{15.00}\PY{p}{,} \PY{l+m+mf}{8.00}\PY{p}{,} \PY{l+m+mf}{16.29}\PY{p}{,} \PY{l+m+mf}{9.00}\PY{p}{]}
\PY{n}{derivativeMkt} \PY{o}{=} \PY{p}{[}
\PY{l+m+mf}{17.65}\PY{p}{,} \PY{l+m+mf}{10.20}\PY{p}{,} \PY{l+m+mf}{19.00}\PY{p}{,} \PY{l+m+mf}{14.00}\PY{p}{,} \PY{l+m+mf}{11.00}\PY{p}{,} \PY{l+m+mf}{4.97}\PY{p}{,} \PY{l+m+mf}{11.00}\PY{p}{,} \PY{l+m+mf}{7.00}\PY{p}{,} \PY{l+m+mf}{5.12}\PY{p}{,} \PY{l+m+mf}{4.90}\PY{p}{,}
\PY{l+m+mf}{19.00}\PY{p}{,} \PY{l+m+mf}{11.45}\PY{p}{,} \PY{l+m+mf}{16.00}\PY{p}{,} \PY{l+m+mf}{6.87}\PY{p}{,} \PY{l+m+mf}{14.00}\PY{p}{,} \PY{l+m+mf}{8.00}\PY{p}{,} \PY{l+m+mf}{10.78}\PY{p}{,} \PY{l+m+mf}{16.00}\PY{p}{,} \PY{l+m+mf}{18.00}\PY{p}{,} \PY{l+m+mf}{11.00}\PY{p}{,}
\PY{l+m+mf}{13.00}\PY{p}{,} \PY{l+m+mf}{17.00}\PY{p}{,} \PY{l+m+mf}{18.00}\PY{p}{,} \PY{l+m+mf}{16.00}\PY{p}{,} \PY{l+m+mf}{12.00}\PY{p}{,} \PY{l+m+mf}{13.26}\PY{p}{,} \PY{l+m+mf}{19.00}\PY{p}{,} \PY{l+m+mf}{10.00}\PY{p}{,} \PY{l+m+mf}{17.00}\PY{p}{,} \PY{l+m+mf}{5.56}\PY{p}{,}
\PY{l+m+mf}{8.00}\PY{p}{,} \PY{l+m+mf}{15.55}\PY{p}{,} \PY{l+m+mf}{11.22}\PY{p}{,} \PY{l+m+mf}{6.78}\PY{p}{,} \PY{l+m+mf}{10.00}\PY{p}{,} \PY{l+m+mf}{19.00}\PY{p}{,} \PY{l+m+mf}{14.00}\PY{p}{,} \PY{l+m+mf}{15.00}\PY{p}{,} \PY{l+m+mf}{14.00}\PY{p}{,} \PY{l+m+mf}{7.00}\PY{p}{,}
\PY{l+m+mf}{14.00}\PY{p}{,} \PY{l+m+mf}{15.00}\PY{p}{,} \PY{l+m+mf}{18.00}\PY{p}{,} \PY{l+m+mf}{7.78}\PY{p}{,} \PY{l+m+mf}{10.00}\PY{p}{,} \PY{l+m+mf}{15.00}\PY{p}{,} \PY{l+m+mf}{16.20}\PY{p}{,} \PY{l+m+mf}{15.00}\PY{p}{,} \PY{l+m+mf}{11.65}\PY{p}{,} \PY{l+m+mf}{13.00}
\PY{p}{]}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Emerging Market: Mean (x1): }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, SD: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{emergingMkt}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{emergingMkt}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Derivative Market: Mean (x2): }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, SD: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{derivativeMkt}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{derivativeMkt}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T\PYZhy{}test with unknown population SD and unequal variance will be used for hypothesis testing}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{n1} \PY{o}{=} \PY{n+nb}{len}\PY{p}{(}\PY{n}{emergingMkt}\PY{p}{)}
\PY{n}{x1} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{emergingMkt}\PY{p}{)}
\PY{n}{S1} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{emergingMkt}\PY{p}{)}
\PY{n}{n2} \PY{o}{=} \PY{n+nb}{len}\PY{p}{(}\PY{n}{derivativeMkt}\PY{p}{)}
\PY{n}{x2} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{derivativeMkt}\PY{p}{)}
\PY{n}{S2} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{derivativeMkt}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is 2 Sample T test, with unknown population SD and the SD of the two are unequal}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{Su} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1} \PY{o}{+} \PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mf}{0.5}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{SE }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{Su}\PY{p}{)}\PY{p}{)}
\PY{n}{df} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{math}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{n}{Su} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{4} \PY{o}{/} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n1} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)} \PY{o}{+} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n2} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{DF }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{df}\PY{p}{)}\PY{p}{)}
\PY{n}{tstat} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{x1} \PY{o}{\PYZhy{}} \PY{n}{x2}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{0}\PY{p}{)} \PY{o}{/}\PY{p}{(}\PY{n}{Su}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T\PYZhy{}stat }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{tstat}\PY{p}{)}\PY{p}{)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{The proportions x1 \PYZhy{} x2 \PYZlt{}= 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x1 \PYZhy{} x2 \PYZgt{} 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Null hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is a right sided T\PYZhy{}Test}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{alpha/ Significance: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Significant t\PYZhy{}value at alpha \PYZhy{} }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{alpha} \PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*}\PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{n}{alpha}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{)}\PY{p}{)}
         \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{p\PYZhy{}value:}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is greater than alpha(}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{)}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{n}{tstat}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{,} \PY{n}{alpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Hence we can retain the NULL Hypothesis (ho)}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Emerging Market: Mean (x1): 12.0322, SD: 3.4964898341050556
Derivative Market: Mean (x2): 12.6588, SD: 4.158908818428219
T-test with unknown population SD and unequal variance will be used for hypothesis testing
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
SE 0.7684004648619104
DF 95
T-stat -0.8154602042212539
Null hypothesis: The proportions x1 - x2 <= 0
Alternate hypothesis: x1 - x2 > 0
This is a right sided T-Test
alpha/ Significance: 0.05
Significant t-value at alpha - 0.05 is : 1.6610518172519093
p-value:0.7915756317112152 is greater than alpha(0.05)
Hence we can retain the NULL Hypothesis (ho)
\end{Verbatim}
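For reference, the standard error and degrees of freedom computed above follow the usual Welch (unequal-variance) approximation, with $S_1, S_2$ the standard deviations and $n_1, n_2$ the sample sizes:
\[
SE = \sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}, \qquad
df = \left\lfloor \frac{SE^4}{\frac{(S_1^2/n_1)^2}{n_1 - 1} + \frac{(S_2^2/n_2)^2}{n_2 - 1}} \right\rfloor .
\]
As a rough cross-check, the same test can be run with scipy's built-in Welch t-test (a sketch only, assuming the \texttt{emergingMkt} and \texttt{derivativeMkt} lists defined earlier in the notebook; scipy uses the sample standard deviation, so the numbers will differ slightly from the hand computation above):
\begin{verbatim}
# Sketch: Welch's t-test via scipy as a cross-check of the manual computation.
# Assumes emergingMkt and derivativeMkt are the lists used above.
from scipy import stats

t_stat, p_two_sided = stats.ttest_ind(emergingMkt, derivativeMkt, equal_var=False)
# Convert to the right-sided alternative (x1 - x2 > 0):
p_right = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
print(t_stat, p_right)
\end{verbatim}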
\section{Q10}\label{q10}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}69}]:} \PY{n}{diet} \PY{o}{=} \PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{10}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{2}\PY{p}{]}
\PY{n}{exercise} \PY{o}{=} \PY{p}{[}\PY{o}{\PYZhy{}}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{8}\PY{p}{,}\PY{l+m+mi}{4} \PY{p}{,}\PY{l+m+mi}{2} \PY{p}{,}\PY{l+m+mi}{3}\PY{p}{]}
\PY{n}{modificbeh} \PY{o}{=} \PY{p}{[}\PY{l+m+mi}{10}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{12}\PY{p}{,} \PY{l+m+mi}{18}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{]}
\PY{n}{pp\PYZus{}x} \PY{o}{=} \PY{n}{sm}\PY{o}{.}\PY{n}{ProbPlot}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{n}{diet} \PY{o}{+} \PY{n}{exercise} \PY{o}{+} \PY{n}{modificbeh}\PY{p}{)}\PY{p}{,} \PY{n}{fit}\PY{o}{=}\PY{k+kc}{True}\PY{p}{)}
\PY{n}{pp\PYZus{}x}\PY{o}{.}\PY{n}{ppplot}\PY{p}{(}\PY{n}{line}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{45}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)}
         \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{PP Plot may not be normal by visual inspection. Moreover, the sample size is small.}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_128_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
PP Plot may not be normal by visual inspection. Moreover, the sample size is small.
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}68}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Diet == Mean (x1): }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, SD: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, Count: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{diet}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{diet}\PY{p}{)}\PY{p}{,} \PY{n+nb}{len}\PY{p}{(}\PY{n}{diet}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Exer == Mean (x2): }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, SD: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, Count: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{exercise}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{exercise}\PY{p}{)}\PY{p}{,} \PY{n+nb}{len}\PY{p}{(}\PY{n}{exercise}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ModB == Mean (x3): }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, SD: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{, Count: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{modificbeh}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{modificbeh}\PY{p}{)}\PY{p}{,} \PY{n+nb}{len}\PY{p}{(}\PY{n}{modificbeh}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{We cannot use Anova as the variances are not similar, we will use multiple pairwise t tests.}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}
\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{We will use 3 pairwise tests. Since this increases the Type 1 error, we will use Bonferroni}\PY{l+s+s2}{\PYZsq{}}\PY{l+s+s2}{s Correction}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{x1}\PY{p}{,} \PY{n}{S1}\PY{p}{,} \PY{n}{n1} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{diet}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{diet}\PY{p}{)}\PY{p}{,} \PY{n+nb}{len}\PY{p}{(}\PY{n}{diet}\PY{p}{)}
\PY{n}{x2}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n2} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{exercise}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{exercise}\PY{p}{)}\PY{p}{,} \PY{n+nb}{len}\PY{p}{(}\PY{n}{exercise}\PY{p}{)}
\PY{n}{x3}\PY{p}{,} \PY{n}{S3}\PY{p}{,} \PY{n}{n3} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{modificbeh}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{modificbeh}\PY{p}{)}\PY{p}{,} \PY{n+nb}{len}\PY{p}{(}\PY{n}{modificbeh}\PY{p}{)}
\PY{n}{alpha} \PY{o}{=} \PY{l+m+mf}{0.05}
\PY{n}{adjustedAlpha} \PY{o}{=} \PY{n}{alpha} \PY{o}{/} \PY{l+m+mi}{3}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Adjusted Alpha: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{adjustedAlpha}\PY{p}{)}\PY{p}{)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x1 \PYZhy{} x2 \PYZlt{}= 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x1 \PYZhy{} x2 \PYZgt{}0}\PY{l+s+s2}{\PYZdq{}}
\PY{k}{def} \PY{n+nf}{pairwise\PYZus{}t\PYZus{}test}\PY{p}{(}\PY{n}{S1}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{n2}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{adjustedAlpha}\PY{p}{)}\PY{p}{:}
             \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Null Hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ho}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Alternate Hypothesis: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{ha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is 2 Sample T test, with unknown population SD and the SD of the two are unequal}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{Su} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1} \PY{o}{+} \PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mf}{0.5}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{SE }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{Su}\PY{p}{)}\PY{p}{)}
\PY{n}{df} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{math}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{n}{Su} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{4} \PY{o}{/} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S1} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n1}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n1} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)} \PY{o}{+} \PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{n}{S2} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{n}{n2}\PY{p}{)} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{n2} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{DF }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{df}\PY{p}{)}\PY{p}{)}
\PY{n}{tstat} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{x1} \PY{o}{\PYZhy{}} \PY{n}{x2}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{l+m+mi}{0}\PY{p}{)} \PY{o}{/}\PY{p}{(}\PY{n}{Su}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T\PYZhy{}stat }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{tstat}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{This is a two sided T\PYZhy{}Test}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{c+c1}{\PYZsh{}print(\PYZdq{}alpha/ Significance: \PYZob{}\PYZcb{}\PYZdq{}.format(adjustedAlpha / 2))}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Significant t\PYZhy{}value at alpha \PYZhy{} }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is : }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{adjustedAlpha} \PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{o}{*}\PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{ppf}\PY{p}{(}\PY{n}{adjustedAlpha}\PY{p}{,}
\PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{p\PYZhy{}value:}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ is greater than alpha(}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{)}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{sp}\PY{o}{.}\PY{n}{stats}\PY{o}{.}\PY{n}{t}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{n}{tstat}\PY{p}{,} \PY{n}{df} \PY{o}{=} \PY{n}{df}\PY{p}{)}\PY{p}{,} \PY{n}{adjustedAlpha}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Hence we can retain the NULL Hypothesis (ho)}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\PY{n}{pairwise\PYZus{}t\PYZus{}test}\PY{p}{(}\PY{n}{S1}\PY{p}{,} \PY{n}{S2}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{n2}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{adjustedAlpha}\PY{p}{)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x2 \PYZhy{} x3 \PYZlt{}= 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x2 \PYZhy{} x3 \PYZgt{}0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{pairwise\PYZus{}t\PYZus{}test}\PY{p}{(}\PY{n}{S2}\PY{p}{,} \PY{n}{S3}\PY{p}{,} \PY{n}{n2}\PY{p}{,} \PY{n}{n3}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{x3}\PY{p}{,} \PY{n}{adjustedAlpha}\PY{p}{)}
\PY{n}{ho} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x1 \PYZhy{} x3 \PYZlt{}= 0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{ha} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x1 \PYZhy{} x3 \PYZgt{}0}\PY{l+s+s2}{\PYZdq{}}
\PY{n}{pairwise\PYZus{}t\PYZus{}test}\PY{p}{(}\PY{n}{S1}\PY{p}{,} \PY{n}{S3}\PY{p}{,} \PY{n}{n1}\PY{p}{,} \PY{n}{n3}\PY{p}{,} \PY{n}{x1}\PY{p}{,} \PY{n}{x3}\PY{p}{,} \PY{n}{adjustedAlpha}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{There is not enough statistically significant difference in weight loss between three weight loss programs}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Diet == Mean (x1): 2.3333333333333335, SD: 3.80058475033046, Count: 9
Exer == Mean (x2): 2.1666666666666665, SD: 3.531603350069515, Count: 6
ModB == Mean (x3): 5.5, SD: 5.80086200490927, Count: 10
We cannot use Anova as the variances are not similar, we will use multiple pairwise t tests. We will use 3 pairwise tests. Since this increases the Type 1 error, we will use Bonferroni's Correction
Adjusted Alpha: 0.016666666666666666
Null Hypothesis: x1 - x2 <= 0
Alternate Hypothesis: x1 - x2 >0
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
SE 1.9192816300138555
DF 11
T-stat 0.08683804610033379
This is a two sided T-Test
Significant t-value at alpha - 0.016666666666666666 is : 2.431291192825905
p-value:0.4661804170878926 is greater than alpha(0.016666666666666666)
Hence we can retain the NULL Hypothesis (ho)
Null Hypothesis: x2 - x3 <= 0
Alternate Hypothesis: x2 - x3 >0
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
SE 2.3331745977752507
DF 13
T-stat -1.428668620218891
This is a two sided T-Test
Significant t-value at alpha - 0.016666666666666666 is : 2.3795918574465182
p-value:0.9116578507983054 is greater than alpha(0.016666666666666666)
Hence we can retain the NULL Hypothesis (ho)
Null Hypothesis: x1 - x3 <= 0
Alternate Hypothesis: x1 - x3 >0
This is 2 Sample T test, with unknown population SD and the SD of the two are unequal
SE 2.229335836433115
DF 15
T-stat -1.420452950567223
This is a two sided T-Test
Significant t-value at alpha - 0.016666666666666666 is : 2.342924621576576
p-value:0.9120359852133058 is greater than alpha(0.016666666666666666)
Hence we can retain the NULL Hypothesis (ho)
There is not enough statistically significant difference in weight loss between three weight loss programs
\end{Verbatim}
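A brief note on the correction used above: with $m = 3$ pairwise comparisons at a family-wise significance level of $\alpha = 0.05$, the Bonferroni-adjusted per-comparison level is $\alpha / m = 0.05 / 3 \approx 0.0167$, which matches the adjusted alpha printed in the output.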
% Add a bibliography block to the postdoc
\end{document}
| {
"alphanum_fraction": 0.5227601868,
"avg_line_length": 79.5023027326,
"ext": "tex",
"hexsha": "0c3fcf8e3455270cd2d1f4b25a7910e97a31a459",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e052da538df84034ec5a0fe3b19c4287de307286",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "rahasayantan/Work-For-Reference",
"max_forks_repo_path": "IIMB-Assignments/Assgn-1/Module 1 Assignment Cases and Data Sets-20180708/notebook.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e052da538df84034ec5a0fe3b19c4287de307286",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "rahasayantan/Work-For-Reference",
"max_issues_repo_path": "IIMB-Assignments/Assgn-1/Module 1 Assignment Cases and Data Sets-20180708/notebook.tex",
"max_line_length": 908,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "e052da538df84034ec5a0fe3b19c4287de307286",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "rahasayantan/Work-For-Reference",
"max_stars_repo_path": "IIMB-Assignments/Assgn-1/Module 1 Assignment Cases and Data Sets-20180708/notebook.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 120343,
"size": 258939
} |
\documentclass[11pt]{article}
\usepackage{url}
\usepackage{pgfplots}
\pgfplotsset{compat=newest}
\setlength\topmargin{-0.6cm}
\setlength\textheight{23.4cm}
\setlength\textwidth{17.0cm}
\setlength\oddsidemargin{0cm}
\begin{document}
\title{Ling 572 HW2 }
\author{Daniel Campos \tt {[email protected]}}
\date{01/23/2019}
\maketitle
\section{ Q1 Mallet DT Learner}
\begin{description}
\item [(a)] The command lines:\\
mallet import-file --input train.vectors.txt --output train.vectors\\
mallet import-file --input test.vectors.txt --output test.vectors --use-pipe-from train.vectors \\
vectors2classify --training-file train.vectors --testing-file test.vectors --trainer DecisionTree --report test:raw test:accuracy test:confusion train:confusion train:accuracy \textgreater{} de1.stdout 2\textgreater{} de1.stderr \\
tail de1.stdout
\item [(b)] What are the training accuracy and the test accuracy? \\
Train:0.1781 \\
Test: 0.1
\end{description}
\section{ Q2 Mallet Different depth}
See table 1. \\ B) Looking at table 1, we can see that as we train deeper, our model learns the distribution of the train set much better, but that does not necessarily mean that it will perform better on our test set. This, as is common with most ML, shows that we must be careful when focusing on performance on the train set, because it may mean nothing in terms of true model performance.
\begin{table}[h]
\centering
\caption{Mallet's DT learner with different depths}
\label{table1}
\begin{tabular}{|l|l|l|} \hline
Depth & Training accuracy & Test accuracy \\ \hline
1 & 0.1393 & 0.1033 \\ \hline
2 & 0.1419 & 0.12 \\ \hline
4 & 0.1781 & 0.10 \\ \hline
10 & 0.6285 & 0.1133 \\ \hline
20 & 0.9970 & 0.1367 \\ \hline
50 & 1 & 0.1367 \\ \hline
\end{tabular}
\end{table}
\section{ Q3 build\_dt.sh}
See table 2 and table 3
\begin{table}[hbtp]
\centering
\caption{build\_dt.sh min\_gain=0}
\label{table2}
\begin{tabular}{|l|l|l|l|} \hline
Depth & Training accuracy & Test accuracy & CPU time (in minutes)\\ \hline
1 & 0.4530 & 0.4167 & 1 \\ \hline
2 & 0.5207 & 0.53 & 2 \\ \hline
4 & 0.6377 & 0.5267 & 3 \\ \hline
10 &0.7511 & 0.5933 & 10 \\ \hline
20 & 0.8541 & 0.6733 & 11 \\ \hline
50 & 0.96333 & 0.6933 & 33 \\ \hline
\end{tabular}
\end{table}
\begin{table}[hbtp]
\centering
\caption{build\_dt.sh min\_gain=0.1}
\label{table3}
\begin{tabular}{|l|l|l|l|} \hline
Depth & Training accuracy & Test accuracy & CPU time (in minutes)\\ \hline
1 & 0.6014& 0.54 &3 \\ \hline
2 & 0.52 & 0.53 & 2 \\ \hline
4 & 0.6014 &0.54 &4 \\ \hline
10 & 0.6014 &0.54& 3 \\ \hline
20 & 0.6014 &0.54& 3 \\ \hline
50 & 0.6014 & 0.54 & 2 \\ \hline
\end{tabular}
\end{table}
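For reference, min\_gain is compared against the information gain of a candidate split. Assuming the standard definition, the information gain of splitting a node with examples $S$ on attribute $A$ is
\[
IG(S, A) = H(S) - \sum_{v \in \mathrm{values}(A)} \frac{|S_v|}{|S|} H(S_v),
\]
where $H$ is the entropy of the class distribution and $S_v$ is the subset of $S$ taking value $v$ for $A$. Assuming a node is only split when the best candidate split's gain exceeds min\_gain, this explains why the min\_gain$=0.1$ runs in table 3 stop growing early and are largely insensitive to the maximum depth.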
\section{ Q4}
\begin{tikzpicture}
\begin{axis}[
axis x line=center,
axis y line=center,
xtick={-20,-10,...,30},
ytick={-10,0,...,30},
xlabel={$x$},
ylabel={$y$},
xlabel style={below right},
ylabel style={above left},
xmin=-22,
xmax=32,
ymin=-12,
ymax=32]
\draw[red, dashed, very thick] (20,10) rectangle (30,30) node {L1};
\draw[blue, dashed, very thick] (10,10) rectangle (20,30) node {L2};
\draw[green, dashed, very thick] (0,-10) rectangle (10,30) node {L3};
\draw[orange, dashed, very thick](0,-10) rectangle (10,30) node {L4};
\draw[pink, dashed, very thick] (-20,20) rectangle (0,30) node {L5};
\draw[yellow, dashed, very thick] (-20,-10) rectangle (0,20) node {L6};
\draw(-20,-10) rectangle (-10,20) node {L7};
\end{axis}
\end{tikzpicture}
\section{Q5: Notes}
I believe everything is working great except that when the depth of the tree is high, the run time is slow. It took me a while to move from pandas data frames to numpy arrays, and when I did so my programs started running much faster. This assignment really showcases how important some of the hyperparameters, like max depth and min gain, are in affecting both the runtime and the accuracy.
\end{document}
| {
"alphanum_fraction": 0.6535239034,
"avg_line_length": 40.1782178218,
"ext": "tex",
"hexsha": "62b33f43047f0d5b9d89155ee54bc2659ada6759",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-12-26T01:28:41.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-12-26T01:28:41.000Z",
"max_forks_repo_head_hexsha": "f0380de9912c984ec21607cdb3b1f190853c5ca8",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "spacemanidol/CLMS572",
"max_forks_repo_path": "Assignments/hw2/hw2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f0380de9912c984ec21607cdb3b1f190853c5ca8",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "spacemanidol/CLMS572",
"max_issues_repo_path": "Assignments/hw2/hw2.tex",
"max_line_length": 385,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "f0380de9912c984ec21607cdb3b1f190853c5ca8",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "spacemanidol/CLMS572",
"max_stars_repo_path": "Assignments/hw2/hw2.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1419,
"size": 4058
} |
\section{Other Riemann surfaces}
| {
"alphanum_fraction": 0.7714285714,
"avg_line_length": 8.75,
"ext": "tex",
"hexsha": "226758b488b18c6d8b1e084f8a7255368e63559a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/analysis/riemann/02-00-Other_Riemann_surfaces.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/analysis/riemann/02-00-Other_Riemann_surfaces.tex",
"max_line_length": 32,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/analysis/riemann/02-00-Other_Riemann_surfaces.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 10,
"size": 35
} |
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newcommand{\G}{\mathbb{G}}
\begin{document}
Public parameters are supposed to consist of $g_1^{\alpha^i}$ and $g_2^{\alpha^i}$ for $i$ from $1$ to $2N$, except $N+1$ is omitted. This document describes and argues security of two procedures, one deterministic and one probabilistic, for determining whether an alleged set of parameters is indeed of this form.
\subsection*{Notation}
For $1 \le i \le 2N, i \ne N+1$, write
\begin{align*}
f_i \; &\text{to denote the $\G_1$ element that is allegedly} \; g_1^{\alpha^i} \\
h_i \; &\text{to denote the $\G_2$ element that is allegedly} \; g_2^{\alpha^i} \\
\intertext{and for notational convenience, write}
f_0 &= g_1 \\
h_0 &= g_2.
\end{align*}
A set of (alleged) parameters \[
\{f_i, h_i\}_{1 \le i \le 2N, i \ne N+1}
\]
are \textit{consistent} if $\exists \alpha \notin\{0,1\}$ such that $\forall i, f_i = g_1^{\alpha^i}$ and $h_i = g_2^{\alpha^i}$.
\subsection*{Deterministic consistency check}
Given an alleged set of parameters, the following procedure determines whether they are consistent. In the below discussion, without loss of generality, let $\alpha$ be such that $f_1 = g_1^{\alpha}$.
\begin{enumerate}
\item Check that none of the $f_i$ or $h_i$ are 1, $g_1$, or $g_2$. (This ensures $\alpha \notin \{0,1\}$.)
\item Check that \[ e(f_i, h_0) = e(f_0, h_i) \quad\text{for all $1\le i \le N$ and $N+2 \le i \le 2N$}\]
This ensures that $\log_{g_1} f_i = \log_{g_2} h_i$ for all $i$ --- in other words, the exponents of each $f_i$ and corresponding $h_i$ match. In particular we now know $h_1 = g_2^\alpha$.
\item Check that \[e(f_i, h_1) = e(f_{i+1}, h_0) \quad \text{for all $1 \le i \le N-1$}\]
This ensures $f_i = g_1^{\alpha^i}$ (and also $h_i = g_2^{\alpha^i}$ by the previous check) for $1 \le i \le N$. (To see this, consider the $i = 1$ case, where we check $e(f_1, h_1) = e(f_2, h_0)$. $e(f_1, h_1) = e(g_1^\alpha, g_2^\alpha)$, so $f_2$ must be $g_1^{\alpha^2}$. Then the $i=2$ check forces $f_3$ to be $g_1^{\alpha^3}$ and so on.)
\item Check that \[e(f_i, h_N) = e(f_{i+N}, h_0) \quad \text{for all $2 \le i \le N$}\]
In other words, check $e(f_{i+N}, g_2) = e(g_1^{\alpha^{i}}, g_2^{\alpha^{N}})$, which ensures $f_{i+N} = g_1^{\alpha^{i+N}}$ for $2\le i \le N$. (This also ensures $h_{i+N} = g_2^{\alpha^{i+N}}$ for $2 \le i \le N$ because we know the exponents of the $f$'s and $h$'s match.)
\end{enumerate}
If all checks pass, then we know that $f_i = g_1^{\alpha^i}$ and $h_i = g_2^{\alpha^i}$ for all $1 \le i \le 2N, i\ne N+1$, and we know $\alpha \notin \{0,1\}$. So the parameters are consistent.
\subsection*{Probabilistic consistency check}
A randomized version of the above algorithm can check consistency much more efficiently, using $O(1)$ rather than $O(N)$ pairings, with a soundness error of $O(\frac1{|\G_1|})$.
\begin{enumerate}
\item Generate $N$ random scalars $r_1, \dots, r_N$ and compute the following:
\begin{align*}
R_1 &= \prod_{i=1}^{N} f_i^{r_i} \\
R_2 &= \prod_{i=1}^{N} h_i^{r_i} \\
S & = \prod_{i=1}^{N-1} f_i^{r_i} \left( = \frac{R_1}{f_N^{r_N}}\right)\\
T &= \prod_{i=1}^{N-1} f_{i+1}^{r_i} \\
U_1 &= \prod_{i=1}^{N-1} f_{i+N+1}^{r_i} \\
U_2 &= \prod_{i=1}^{N-1} h_{i+N+1}^{r_i} \\
\end{align*}
\item Check that none of the $f$'s or $h$'s are 1, $g_1$, or $g_2$, just like in the deterministic procedure.
\item Check that
\begin{align*}
e(R_1, h_0) &= e(f_0, R_2) \\
\intertext{In other words, check that}
e(\prod_{i=1}^N f_i^{r_i}, h_0) &= e(f_0, \prod_{i=1}^N h_i^{r_i})
\end{align*}
This ensures $e(f_i, h_0) = e(f_0, h_i)$ for all $1 \le i \le N$ with soundness error $\frac1{|\G_1|}$.
\item Check that
\[
e(U_1, h_0) = e(f_0, U_2)
\]
This ensures $e(f_i, h_0) = e(f_0, h_i)$ for all $N+2 \le i \le 2N$ with soundness error $\frac1{|\G_1|}$. Combined with the previous step, this is equivalent to check 2 of the deterministic procedure.
\item Check that
\begin{align*}
e(S, h_1) &= e(T, h_0)
\intertext{or in other words,}
e(\prod_{i=1}^{N-1} f_i^{r_i}, h_1) &= e(\prod_{i=1}^{N-1} f_{i+1}^{r_i}, h_0)
\end{align*}
This ensures that $e(f_i, h_1) = e(f_{i+1}, h_0)$ for all $1 \le i \le N-1$ with soundness error $\frac1{|\G_1|}$. This is equivalent to check 3 of the deterministic procedure.
\item Check that
\begin{align*}
e(U_1, h_0) &= e(T, h_N)
\intertext{or in other words,}
e(\prod_{i=1}^{N-1} f_{i+N+1}^{r_i}, h_0) &= e(\prod_{i=1}^{N-1} f_{i+1}^{r_i}, h_N)
\end{align*}
This ensures that $e(f_{i+N+1}, h_0) = e(f_{i+1}, h_N)$ for all $1 \le i \le N-1$ (with soundness error $\frac1{|\G_1|}$). This is equivalent to check 4 of the deterministic procedure.
\end{enumerate}
If all of the above checks pass, then the parameters are consistent with high probability.
This randomized procedure is equivalent to the deterministic procedure in the previous section but with soundness error $\frac4{|\G_1|}$.
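The following is a structural sketch of the probabilistic check, written against a hypothetical pairing interface (it is not tied to any particular implementation): \texttt{e(a, b)} denotes the pairing, \texttt{rand\_scalar()} samples a uniform scalar, \texttt{one\_g1} and \texttt{one\_g2} are the identities of $\G_1$ and $\G_2$, and group elements are assumed to support \texttt{*}, \texttt{**} and \texttt{==} with the obvious meanings.
\begin{verbatim}
# Structural sketch of the probabilistic consistency check (hypothetical
# pairing interface; see the assumptions stated above the listing).
def probabilistic_check(f, h, N, e, rand_scalar, one_g1, one_g2):
    """f, h are dicts with f[0] = g1, h[0] = g2, and f[i], h[i] the alleged
    g1^(alpha^i), g2^(alpha^i) for 1 <= i <= 2N, i != N+1."""
    indices = list(range(1, N + 1)) + list(range(N + 2, 2 * N + 1))
    # Step 2: none of the elements may be the identity, g1, or g2.
    for i in indices:
        if f[i] in (one_g1, f[0]) or h[i] in (one_g2, h[0]):
            return False
    # Step 1: random scalars and the aggregated products.
    r = {i: rand_scalar() for i in range(1, N + 1)}
    R1, R2 = one_g1, one_g2
    S, T, U1, U2 = one_g1, one_g1, one_g1, one_g2
    for i in range(1, N + 1):
        R1 = R1 * f[i] ** r[i]
        R2 = R2 * h[i] ** r[i]
    for i in range(1, N):
        S = S * f[i] ** r[i]
        T = T * f[i + 1] ** r[i]
        U1 = U1 * f[i + N + 1] ** r[i]
        U2 = U2 * h[i + N + 1] ** r[i]
    # Steps 3-6: the four pairing equations from the text.
    return (e(R1, h[0]) == e(f[0], R2)
            and e(U1, h[0]) == e(f[0], U2)
            and e(S, h[1]) == e(T, h[0])
            and e(U1, h[0]) == e(T, h[N]))
\end{verbatim}
A concrete implementation would substitute the pairing, scalar sampling, and group arithmetic of the chosen curve library for the hypothetical interface above.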
\end{document}
| {
"alphanum_fraction": 0.6514041514,
"avg_line_length": 51.1875,
"ext": "tex",
"hexsha": "52597ffa26c5f2ac4e61dcbeb4dae52afca9084e",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2021-11-19T12:17:09.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-06-04T03:24:12.000Z",
"max_forks_repo_head_hexsha": "43a92ebf4430fed4e13abc8145f7935d8db26461",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "sourcenetwork/pointproofs-paramgen",
"max_forks_repo_path": "consistencycheck.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "43a92ebf4430fed4e13abc8145f7935d8db26461",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "sourcenetwork/pointproofs-paramgen",
"max_issues_repo_path": "consistencycheck.tex",
"max_line_length": 344,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "43a92ebf4430fed4e13abc8145f7935d8db26461",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "sourcenetwork/pointproofs-paramgen",
"max_stars_repo_path": "consistencycheck.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-19T12:16:52.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-05-08T12:54:30.000Z",
"num_tokens": 1941,
"size": 4914
} |
\subsubsection{Influence of Time Series Distance Measure} \label{influence_of_time_series_distance_measure}
A total of 2346 different simulations were tested that contain \textit{Mid} as window size determination and
\textit{HAveD} as threshold determination in their configuration. One third of those simulations used plain DTW, one
third used $\eta$DTW and one third used $\eta '$DTW. Figure \ref{fig:distance_measure_result}
illustrates the distribution of all three subsets. It is clearly visible that $\eta$DTW reaches better $Recall_{\mu}$
values than DTW in the experiment, although at the cost of slightly worse $Precision_{\mu}$ values. Furthermore, $\eta$DTW
dominates $\eta '$DTW in both $Precision_{\mu}$ and $Recall_{\mu}$. The blob of poorly performing simulations in
the lower left corner of figure \ref{fig:distance_measure_result} corresponds to simulations whose Sakoe-Chiba bands are too small.
\definecolor{light-gray}{gray}{0.8}
\begin{figure}[H]
\begin{center}
\resizebox {\textwidth} {!} {
\begin{tabular}{cc}
\resizebox {!} {\height} {
\begin{tikzpicture}
\begin{axis}[
legend pos=south west,
xmin=0.4,
xmax=0.9,
ymin=0.2,
ymax=0.7,
width=\axisdefaultwidth,
height=\axisdefaultwidth,
xlabel=$Precision_{\mu}$,
ylabel=$Recall_{\mu}$]
\addplot[blue, only marks, mark size=1] table {../data/fig/distance_measure_result/dtw.dat};
\addlegendentry{DTW}
\addplot[red, only marks, mark size=1] table {../data/fig/distance_measure_result/ndtw.dat};
\addlegendentry{$\eta$DTW}
\addplot[green, only marks, mark size=1] table {../data/fig/distance_measure_result/n1dtw.dat};
\addlegendentry{$\eta '$DTW}
\addplot[gray, domain=0.4:0.9] {(0.3 * x) / (2 * x - 0.3)};
\addplot[gray, domain=0.4:0.9] {(0.4 * x) / (2 * x - 0.4)};
\addplot[gray, domain=0.4:0.9] {(0.5 * x) / (2 * x - 0.5)};
\addplot[gray, domain=0.4:0.9] {(0.6 * x) / (2 * x - 0.6)};
\addplot[gray, domain=0.4:0.9] {(0.7 * x) / (2 * x - 0.7)};
\end{axis}
\end{tikzpicture}
} &
\resizebox {!} {\height} {
\begin{tikzpicture}
\begin{axis}[
xmin=0,
xmax=1,
ymin=0,
ymax=1,
width=\axisdefaultwidth,
height=\axisdefaultwidth,
xlabel=$Precision_{\mu}$,
ylabel=$Recall_{\mu}$,
samples=100]
\addplot[blue, only marks, mark size=1] table {../data/fig/distance_measure_result/dtw.dat};
\addplot[red, only marks, mark size=1] table {../data/fig/distance_measure_result/ndtw.dat};
\addplot[green, only marks, mark size=1] table {../data/fig/distance_measure_result/n1dtw.dat};
\addplot[gray, domain=0.051:1] {(0.1 * x) / (2 * x - 0.1)};
\addplot[gray, domain=0.11:1] {(0.2 * x) / (2 * x - 0.2)};
\addplot[gray, domain=0.16:1] {(0.3 * x) / (2 * x - 0.3)};
\addplot[gray, domain=0.21:1] {(0.4 * x) / (2 * x - 0.4)};
\addplot[gray, domain=0.26:1] {(0.5 * x) / (2 * x - 0.5)};
\addplot[gray, domain=0.31:1] {(0.6 * x) / (2 * x - 0.6)};
\addplot[gray, domain=0.36:1] {(0.7 * x) / (2 * x - 0.7)};
\addplot[gray, domain=0.41:1] {(0.8 * x) / (2 * x - 0.8)};
\addplot[gray, domain=0.46:1] {(0.9 * x) / (2 * x - 0.9)};
\end{axis}
\end{tikzpicture}
}
\end{tabular}
}
\end{center}
    \caption{$Precision_{\mu}$ and $Recall_{\mu}$ of all simulations with \textit{Mid} as window determination and
        \textit{HAveD} as threshold determination, separated by the time series distance measure used (DTW, $\eta$DTW
        or $\eta '$DTW). The left plot is a zoomed version of the right plot. Gray lines illustrate constant
        $F_{1}score_{\mu}$ levels in steps of $\frac{1}{10}$.}
\label{fig:distance_measure_result}
\end{figure}
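The gray reference lines in figure \ref{fig:distance_measure_result} are curves of constant $F_{1}score_{\mu}$: for a fixed value $F$, solving $F = \frac{2 \cdot Precision_{\mu} \cdot Recall_{\mu}}{Precision_{\mu} + Recall_{\mu}}$ for $Recall_{\mu}$ yields
\[
Recall_{\mu} = \frac{F \cdot Precision_{\mu}}{2 \cdot Precision_{\mu} - F},
\]
which is the expression plotted for $F$ in steps of $\frac{1}{10}$.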
\input{experiment/evaluation/influence_of_time_series_distance_measure/influence_of_sakoe-chiba_band_size.tex}
| {
"alphanum_fraction": 0.4906682721,
"avg_line_length": 62.2875,
"ext": "tex",
"hexsha": "cd5107641fde38f59dca7962f61d3beb9424ff15",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-01-11T23:15:57.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-01-11T23:15:57.000Z",
"max_forks_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "GordonLesti/SlidingWindowFilter",
"max_forks_repo_path": "bachelor-thesis/experiment/evaluation/influence_of_time_series_distance_measure.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "GordonLesti/SlidingWindowFilter",
"max_issues_repo_path": "bachelor-thesis/experiment/evaluation/influence_of_time_series_distance_measure.tex",
"max_line_length": 123,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "GordonLesti/SlidingWindowFilter",
"max_stars_repo_path": "bachelor-thesis/experiment/evaluation/influence_of_time_series_distance_measure.tex",
"max_stars_repo_stars_event_max_datetime": "2021-03-14T11:43:53.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-06-22T09:37:30.000Z",
"num_tokens": 1323,
"size": 4983
} |
\documentclass[10pt,a4paper,titlepage]{report}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{xcolor}
\usepackage{minted}
\setlength{\arrayrulewidth}{0pt}
\setlength{\tabcolsep}{18pt}
\nonstopmode
\begin{document}
\newpage
\begin{center}
\textbf{Project Report}\\\vspace{0.4cm}
on\\\vspace{1cm}
\begin{LARGE}
PHP Login Form
\end{LARGE}\\\vspace{0.4cm}
\textit{using LAMP Stack}\vspace{1cm}
\\
\textbf{Computer Science and Engineering}\\\vspace{1cm}
Submitted by\\\vspace{1cm}
\begin{tabular}{l|r}
\textbf{Roll No.}&\textbf{Name}\\
38& Justine Biju\\
53& Rwithik Manoj\\
\end{tabular}
\vfill
\textsc{College of Engineering, Trivandrum\\Semester 4}
\end{center}
\newpage
\tableofcontents
\chapter{Introduction}
LAMP is an archetypal model of web service stacks, named as an acronym of the names of its original four open-source components: the Linux operating system, the Apache HTTP Server, the MySQL relational database management system (RDBMS), and the PHP programming language. The LAMP components are largely interchangeable and not limited to the original selection. As a solution stack, LAMP is suitable for building dynamic web sites and web applications.
\newline
\par It is called a stack because each layer derives from its base layers. Linux, the OS, forms the base layer. Apache, the web daemon, sits on top of the OS. The database is hosted by the web daemon. PHP (or any scripting language) is used to drive and display all the data, and allow for user interaction.
\newline
\par The code of our project is hosted at https://github.com/justine05/foss-php
\section{Linux}
Linux is a family of open source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. Linux is typically packaged in a Linux distribution (or distro for short).
\par Popular Linux distributions include Debian, Fedora, and Ubuntu. Commercial distributions include Red Hat Enterprise Linux and SUSE Linux Enterprise Server. Desktop Linux distributions include a windowing system such as X11 or Wayland, and a desktop environment such as GNOME or KDE Plasma. Distributions intended for servers may omit graphics altogether, and include a solution stack such as LAMP. Because Linux is freely redistributable, anyone may create a distribution for any purpose.
\section{Apache HTTP Server}
The role of LAMP's web server has been traditionally supplied by Apache, and has since included other web servers such as Nginx.
\par Apache is developed and maintained by an open community of developers under the auspices of the Apache Software Foundation. Released under the Apache License, Apache is open-source software. A wide variety of features are supported, and many of them are implemented as compiled modules which extend the core functionality of Apache. These can range from server-side programming language support to authentication schemes.
\section{MySQL}
MySQL's original role as the LAMP's relational database management system (RDBMS) has since been alternately provisioned by other RDBMSs such as MariaDB or PostgreSQL, or even NoSQL databases such as MongoDB.
\par MySQL is a multithreaded, multi-user, SQL database management system (DBMS), acquired by Sun Microsystems in 2008, which was then acquired by Oracle Corporation in 2010. Since its early years, the MySQL team has made its source code available under the terms of the GNU General Public License, as well as under a variety of proprietary agreements.
\par This application uses MariaDB, a community-developed fork of MySQL led by its original developers.
\section{PHP}
PHP's role as the LAMP's application programming language has also been performed by other languages such as Perl and Python.
\par PHP is a server-side scripting language designed for web development but also used as a general-purpose programming language. PHP code is interpreted by a web server via a PHP processor module, which generates the resulting web page. PHP commands can optionally be embedded directly into an HTML source document rather than calling an external file to process data. It has also evolved to include a command-line interface capability and can be used in standalone graphical applications.
\chapter{The Database Schema}
\par The database schema is as shown below\newline
\includegraphics[width=\linewidth]{assets/schema.png}
A user table with the following attributes:
\begin{center}
\begin{tabular}{|l|l|l|l|}
uid& User ID& int& Primary Key, Auto Increment\\
username& Username& varchar(50)& Unique\\
password& Hash of the password& varchar(256)& \\
\end{tabular}
\end{center}
A tasks table with the following attributes:
\begin{center}
\begin{tabular}{|l|l|l|l|}
tid& Task ID& int& Primary Key, Auto Increment\\
title& Task Title& varchar(50)& \\
descr& Task Description& varchar(100)& \\
uid& The User ID of the user& int& Foreign Key to users.uid\\
priority& Task Priority& int& \\
done& Whether the task is done& int& \\
\end{tabular}
\end{center}
\par The database is created using the phpmyadmin. The SQL query used is:
\begin{minted}[tabsize=4,breaklines]{sql}
CREATE TABLE users (
    uid int(11) NOT NULL AUTO_INCREMENT,
    username varchar(50) NOT NULL,
    password varchar(256) NOT NULL,
    PRIMARY KEY (uid),
    UNIQUE (username)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
\end{minted}
\begin{minted}[tabsize=4]{sql}
CREATE TABLE tasks (
tid int(11) NOT NULL AUTO_INCREMENT,
title varchar(50) NOT NULL,
descr varchar(100) DEFAULT NULL,
uid int(11) NOT NULL,
priority int(11) NOT NULL DEFAULT 3,
done tinyint(1) NOT NULL DEFAULT 0,
PRIMARY KEY(tid),
FOREIGN KEY (uid) REFERENCES users(uid)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
\end{minted}
\chapter{The To-Do Application}
\section{Introduction}
\par What we have made is a simple to-do application in which a user can login to see his/her personalized to-do list.
\section{index.php}
\par This page is the front page of our application. It contains a login form and a link to the register page.
\begin{minted}[tabsize=4]{php}
if (isset($_POST['submit']))
\end{minted}
\par This line checks if the user has come to the page after clicking the Login button. If so, the code checks that the inputs are not empty and are valid.
\par If the inputs are valid, it gets the row from the database and verifies the password, redirecting to the {\color{red}tasks.php} page if the username and password match. If not, the corresponding errors are printed onto the page.
\begin{minted}[tabsize=4,breaklines]{php}
$dbpass = mysqli_query($db,"SELECT password FROM users WHERE username = '$username' ");
$p = mysqli_fetch_array($dbpass)["password"];
if(empty($p)){
$error = -1;
}
else if (password_verify($password, $p)){
$error = 0;
$_SESSION['username'] = $username;
header('location: tasks.php');
}
\end{minted}
\section{register.php}
\par This page is used to register a new user. A registration form is present on the page. This page contains similar code to check if the inputs are valid and if the user came to the page after clicking the Register button. It also checks that the password and the re-entered password are the same, and that the username does not already exist in the database. If everything is OK, the username and hashed password are inserted into the database.
\par The user is then redirected to the login page.
\section{tasks.php}
\par It is the page that the user is redirected to after a successful login. Here the user can view and add tasks to his/her personalized to-do list. Each task has a title, an optional description and a priority (default 3). The task can be marked as done. A task marked as done will have different CSS.
\par There is also an option to clear completed tasks from the list. It displays the same page after a redirect via drop.php.
\par Clicking the logout button redirects the user to the login page, index.php, via a logout.php file.
\section{logout.php}
\par This file clears the session and redirects to index.php file.
\section{drop.php}
\par This file runs a query to drop all the tasks for the current user that are done.
\chapter{Conclusion}
\par We were able to successfully create a login page using PHP. After learning the basics of PHP, SQL and HTML, we could complete the login page within 1 day. This project has helped us improve our knowledge of PHP and SQL. We sincerely thank our teachers for giving us the opportunity to form groups and complete the project, as we could share our ideas and have fruitful discussions!
\chapter{References}
\begin{enumerate}
\item digitalocean.com
\item stackoverflow.com
\item w3schools.com
\end{enumerate}
\end{document}
| {
"alphanum_fraction": 0.7714805963,
"avg_line_length": 52.9337349398,
"ext": "tex",
"hexsha": "4b60fc36eba0c55cae1b1dc1b0768cb713c9d3d3",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d6e9972aa94dadc2585af759358faef1f664a192",
"max_forks_repo_licenses": [
"PostgreSQL"
],
"max_forks_repo_name": "justine05/foss-php",
"max_forks_repo_path": "LoginForm.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d6e9972aa94dadc2585af759358faef1f664a192",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"PostgreSQL"
],
"max_issues_repo_name": "justine05/foss-php",
"max_issues_repo_path": "LoginForm.tex",
"max_line_length": 494,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d6e9972aa94dadc2585af759358faef1f664a192",
"max_stars_repo_licenses": [
"PostgreSQL"
],
"max_stars_repo_name": "justine05/foss-php",
"max_stars_repo_path": "LoginForm.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2177,
"size": 8787
} |
\documentclass[journal]{vgtc} % final (journal style)
%\let\ifpdf\relax
%\documentclass[review,journal]{vgtc} % review (journal style)
%\documentclass[widereview]{vgtc} % wide-spaced review
%\documentclass[preprint,journal]{vgtc} % preprint (journal style)
%% Uncomment one of the lines above depending on where your paper is
%% in the conference process. ``review'' and ``widereview'' are for review
%% submission, ``preprint'' is for pre-publication, and the final version
%% doesn't use a specific qualifier.
%% Please use one of the ``review'' options in combination with the
%% assigned online id (see below) ONLY if your paper uses a double blind
%% review process. Some conferences, like IEEE Vis and InfoVis, have NOT
%% in the past.
%% Please use the ``preprint'' option when producing a preprint version
%% for sharing your article on an open access repository
%% Please note that the use of figures other than the optional teaser is not permitted on the first page
%% of the journal version. Figures should begin on the second page and be
%% in CMYK or Grey scale format, otherwise, colour shifting may occur
%% during the printing process. Papers submitted with figures other than the optional teaser on the
%% first page will be refused. Also, the teaser figure should only have the
%% width of the abstract as the template enforces it.
%% These few lines make a distinction between latex and pdflatex calls and they
%% bring in essential packages for graphics and font handling.
%% Note that due to the \DeclareGraphicsExtensions{} call it is no longer necessary
%% to provide the the path and extension of a graphics file:
%% \includegraphics{diamondrule} is completely sufficient.
%%
\ifpdf% % if we use pdflatex
\pdfoutput=1\relax % create PDFs from pdfLaTeX
\pdfcompresslevel=9 % PDF Compression
\pdfoptionpdfminorversion=7 % create PDF 1.7
\ExecuteOptions{pdftex}
\usepackage{graphicx} % allow us to embed graphics files
\DeclareGraphicsExtensions{.pdf,.png,.jpg,.jpeg} % for pdflatex we expect .pdf, .png, or .jpg files
\else% % else we use pure latex
\ExecuteOptions{dvips}
\usepackage{graphicx} % allow us to embed graphics files
\DeclareGraphicsExtensions{.eps} % for pure latex we expect eps files
\fi%
%% it is recomended to use ``\autoref{sec:bla}'' instead of ``Fig.~\ref{sec:bla}''
\graphicspath{{figures/}{pictures/}{images/}{./}} % where to search for the images
\usepackage{microtype} % use micro-typography (slightly more compact, better to read)
\PassOptionsToPackage{warn}{textcomp} % to address font issues with \textrightarrow
\usepackage{textcomp} % use better special symbols
\usepackage{mathptmx} % use matching math font
\usepackage{times} % we use Times as the main font
\renewcommand*\ttdefault{txtt} % a nicer typewriter font
%\usepackage{cite} % needed to automatically sort the references
\usepackage{tabu} % only used for the table example
\usepackage{booktabs} % only used for the table example
\usepackage[numbers]{natbib} % for citations
\usepackage{anyfontsize} %
%% We encourage the use of mathptmx for consistent usage of times font
%% throughout the proceedings. However, if you encounter conflicts
%% with other math-related packages, you may want to disable it.
%% In preprint mode you may define your own headline. If not, the default IEEE copyright message will appear in preprint mode.
%\preprinttext{To appear in IEEE Transactions on Visualization and Computer Graphics.}
%% In preprint mode, this adds a link to the version of the paper on IEEEXplore
%% Uncomment this line when you produce a preprint version of the article
%% after the article receives a DOI for the paper from IEEE
%\ieeedoi{xx.xxxx/TVCG.201x.xxxxxxx}
%% If you are submitting a paper to a conference for review with a double
%% blind reviewing process, please replace the value ``0'' below with your
%% OnlineID. Otherwise, you may safely leave it at ``0''.
\onlineid{0}
%% declare the category of your paper, only shown in review mode
\vgtccategory{Research}
%% please declare the paper type of your paper to help reviewers, only shown in review mode
%% choices:
%% * algorithm/technique
%% * application/design study
%% * evaluation
%% * system
%% * theory/model
\vgtcpapertype{vgtcpapertype here}
%% Paper title.
\title{Absence Makes The Chart Grow Stronger: Blank Space and Axis Range Influence Interpretations of Magnitude in Risk Communication}
%% This is how authors are specified in the journal style
%% indicate IEEE Member or Student Member in form indicated below
\author{Duncan Bradley, Gabriel Strain, Caroline Jay, Andrew J. Stewart}
\authorfooter{
%% insert punctuation at end of each item
\item{Duncan Bradley is with the Division of Neuroscience and Experimental Psychology, The University of Manchester, UK. Email: [email protected].}
\item{Gabriel Strain and Caroline Jay are with the Department of Computer Science, The University of Manchester, UK. Email: \{gabriel.strain \textbar{} caroline.jay\}@manchester.ac.uk.}
\item{Andrew J. Stewart is with the Division of Neuroscience and Experimental Psychology and the Department of Computer Science, The University of Manchester, UK. Email: [email protected].}
}
%other entries to be set up for journal
\shortauthortitle{Duncan Bradley \MakeLowercase{\textit{et al.}}: Absence Makes The Chart Grow Stronger}
%\shortauthortitle{Firstauthor \MakeLowercase{\textit{et al.}}: Paper Title}
%% Abstract section.
\abstract{When visualizing data, chart designers have the freedom to choose the upper and lower limits of their numerical axes. Axis limits determine the physical positions of plotted values, and can introduce substantial blank space. For charts presenting data on the chance of negative events occurring, manipulating axis limits affects viewers' interpretations of plotted values' magnitudes, influencing understanding of the risk information being communicated. Across three experiments (total N=420), we demonstrate that, surprisingly, participants did not simply equate values presented at higher vertical positions with greater magnitudes. Instead, they used the numerical context supplied by axis limits to assess the magnitude of data points by contrasting these values against accompanying blank space. Data points were considered larger when they were numerically greater than the plausible values implied by blank space, \emph{even} when they were presented at the \emph{bottom} of a chart. Chart designers must consider the role of their axis range in viewers' interpretations of the magnitudes of plotted data points. We recommend displaying the range of relevant values in order to communicate the specific context for each dataset.}
%% Keywords that describe your work. Will show as 'Index Terms' in journal
%% please capitalize first letter and insert punctuation after last keyword
\keywords{Data cognition, framing effects, chart design, axis range, magnitude judgements.}
%% ACM Computing Classification System (CCS).
%% See <http://www.acm.org/class/1998/> for details.
%% The ``\CCScat'' command takes four arguments.
\CCScatlist{ % not used in journal version
\CCScat{code here}{title here}%
}
%% A teaser figure can be included as follows
%\teaser{
% \centering
% \includegraphics[width=\linewidth]{CypressView}
% \caption{caption here}
% \label{fig:teaser}
%}
%% Uncomment below to disable the manuscript note
%\renewcommand{\manuscriptnotetxt}{}
%% Copyright space is enabled by default as required by guidelines.
%% It is disabled by the 'review' option or via the following command:
% \nocopyrightspace
\vgtcinsertpkg
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%% START OF THE PAPER %%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%% The ``\maketitle'' command must be the first command after the
%% ``\begin{document}'' command. It prepares and prints the title block.
%% the only exception to this rule is the \firstsection command
\firstsection{Introduction}
\maketitle
%% \section{Introduction} %for journal use above \firstsection{..} instead
{Context is crucial for effectively judging the magnitude of numbers. A 40\% probability is twice as great as a 20\% probability, but in the absence of context, it is unclear whether this value should be considered large or small. For the chance of experiencing post-surgery complications, 40\% may be considered large, but may be considered small for the chance that a laboratory test can detect a disease.
In charts, numerical axes often provide contextual cues for judging the magnitude of plotted values. The range of values on an axis provides a frame of reference for assessing whether a data point is numerically large or small. Figure \ref{fig:senators-chart} (a reproduction of a similar bar chart from the New York Times), which plots over time the number of Black members of the U.S. senate \citep{history_art__archives_us_house_of_representatives_office_of_the_historian_black-american_nodate}, provides a striking illustration. Unusually, the continuous y-axis does not terminate just above the highest plotted value. Instead, it extends all the way to the maximum possible number of senators: 100. As a result, bars representing Black senators are confined to the very bottom, visible just above the x-axis, and a significant expanse of blank space looms above them. This highlights the absent data points: the vast majority of senators who are not Black. The visual arrangement communicates the magnitude of the plotted values in context.
It is unclear exactly how an axis range influences a viewer's inferences about magnitude. One possible explanation is that the unfilled area indicates the range of plausible values. That is, plotted values may be judged as small in magnitude because the potential for substantially larger values is clearly displayed. Alternatively, viewers' assessments may be influenced by the appearance of plotted values only, and not by contrast with blank space. Viewers may simply interpret the magnitude of data points at higher positions as `high' and those at lower position as `low', ignoring the plausible alternative values implied by blank space. The present set of experiments explores which of these two accounts explains how axis ranges contribute to the communication of magnitude.}
\hypertarget{effects-of-context-on-magnitude-judgments}{%
\subsection{Effects of Context on Magnitude Judgments}\label{effects-of-context-on-magnitude-judgments}}
Empirical evidence demonstrates that judgment of a value's magnitude can
depend on its relationship to a grand total or to surrounding values.
This can influence interpretation of verbal approximations, and also
absolute values. For example, participants instructed to take `a few'
marbles picked up more when the total number available was larger
\citep{borges_common_1974} and rated satisfaction with the same salary as
higher when it appeared in the upper end of a range, compared to the
lower end \citep{brown_does_2008}.~
\hypertarget{effects-of-axis-limits-on-comparison-judgements}{%
\subsection{Effects of Axis Limits on Comparison Judgements}\label{effects-of-axis-limits-on-comparison-judgements}}
Several studies have explored how axis limits can alter impressions of
the \emph{relationships between} presented values, rather than the magnitudes
of values themselves. When axis ranges are expanded to create blank
space around a cluster of data points, correlation between those points
is judged as stronger \citep{cleveland_variables_1982}. In bar charts,
participants rate the differences between values as greater when the
vertical gap between bars is larger, due to a truncated y-axis
\citep{pandey_how_2015}. Correll et al.'s \citep{correll_truncating_2020}
experiments found that greater truncation resulted in higher effect-size
judgments in both line charts and bar charts. Truncation effects
persisted even when participants estimated the values of specific data
points, suggesting this bias is driven by initial impressions, rather
than a misinterpretation of the values portrayed by graphical markings.
\citet{correll_truncating_2020} found no reduction in effect size judgments
when truncation was communicated using graphical techniques (e.g., axis
breaks and gradients). The unavoidable consequence, they suggest, is
that designers' choices will influence viewers' interpretations whether
axes are truncated or not.
Choosing an appropriate axis range involves a trade-off between
participants' bias (over-reliance on the visual appearance of
differences) and their sensitivity (capacity to visually recognize
actual differences). Just as a highly truncated y-axis can exaggerate
trivial differences between values, an axis spanning the entire range of
possible values can conceal important differences \citep{witt_graph_2019}.
Based on participants' judgments of effect size, \citet{witt_graph_2019} found
that bias was reduced and sensitivity increased when using an axis range
of approximately 1.5 standard deviations of the plotted data, compared
to axes which spanned only the range of the data, or the full range of
possible values. This provides further evidence of a powerful
association between the appearance of data, when plotted, and subjective
interpretations of differences between data points.
Further evidence of truncation effects, provided by
\citet{yang_truncating_2021} improves on the design of previous studies which
employed only a few observations per condition \citep{pandey_how_2015} or
very small sample sizes \citep{witt_graph_2019}. Participants' ratings of
the difference between two bars consistently provided evidence of the
exaggerating effects of y-axis truncation. \citet{yang_truncating_2021} noted
that increasing awareness does not eliminate the effect, which may
function like an anchoring bias, where numerical judgments are
influenced by reference points \citep{tversky_judgment_1974}. Another
potential explanation discussed draws upon Grice's cooperative principle
\citep{grice_logic_1975}. According to this account of effective
communication, speakers are assumed to be in cooperation, and so will
communicate in a manner that is informative, truthful, relevant, and
straightforward. Analogously, a viewer will assume that a numerical
difference in a chart must be genuinely large if it appears large, else
it would not be presented that way. Effective visualizations should be
designed so a viewer's instinctive characterization of the data
corresponds closely to their interpretation following a more detailed
inspection \citep{yang_truncating_2021}.
\begin{figure}
\includegraphics[width=250px]{position_magnitude_files/figure-latex/senators-chart-1} \caption{In this chart, the y-axis limit is the largest possible value, rather than the largest observed value, so plotted values appear to have particularly small magnitudes.}\label{fig:senators-chart}
\end{figure}
\hypertarget{effects-of-axis-limits-on-magnitude-judgements}{%
\subsection{Effects of Axis Limits on Magnitude Judgments}\label{effects-of-axis-limits-on-magnitude-judgements}}
The above research consistently demonstrates that the magnitude of \emph{the
difference between values} is interpreted differently depending on the
appearance of the data points when plotted. The present investigation is
concerned with how interpretations of the magnitude of \emph{the values
themselves} are affected by their visual properties. From a cognitive
processing perspective, vertical position is a strong indicator of
magnitude. For example, children appear to intuitively understand the
relationship between height and value \citep{gattis_structure_2002}. Both the
physical world and language (e.g., spatial metaphors) provide
countless examples where `higher' is associated with `more', and `lower'
with `less', and this principle has been adopted as a convention in data
visualization \citep{tversky_cognitive_1997}.
Research on data visualizations has identified cases where the
relationship between magnitude and vertical position can influence
interpretation. For example, inversions of this mapping in charts can
lead to misinterpretations \citep{okan_when_2012, pandey_how_2015, woodin_conceptual_2022}. Furthermore, when a company's financial
performance was displayed entirely in the bottom fifth of a line chart,
the company was perceived as less successful than when no blank space
appeared above the maximum value \citep{taylor_misleading_1986}.
\citet{sandman_high_1994} investigated assessments of magnitude in risk
ladders, where greater risks are presented at physically higher
positions on a vertical scale. Participants rated the threat of asbestos
exposure higher when it was plotted at a higher position.~
The above findings can be regarded as preliminary evidence that changing
axis limits may affect appraisals of data points' magnitudes. However,
the evidence is not substantial. \citet{taylor_misleading_1986} did not
disclose how judgments were elicited, or provide details of their sample
size. \citet{sandman_high_1994} only explored responses to one specific risk
(asbestos), and each participant only took part in a single trial. In
addition, the `threat' was a composite of several separate ratings,
preventing diagnosis of whether manipulations affected interpretations
of the plotted information in particular, or just related concepts.
Further, both studies introduced a confounding variable by adjusting the
difference between the minimum and maximum y-axis values across
conditions. To understand how different displays of the same values
elicit different inferences about magnitude, and to provide
recommendations for best practice, stronger evidence is required, as is
investigation into the cognitive mechanisms involved in generating these
inferences.~
\hypertarget{the-present-experiments}{%
\subsection{The Present Experiments}\label{the-present-experiments}}
In a set of three experiments employing a large number of observations,
we investigate how employing different axis limits affects
interpretations of the magnitude of plotted values. This manipulation
changes the context surrounding data points, and their physical
positions, but crucially the numerical values themselves remain the
same.
All data visualizations used in the present set of experiments displayed
the chance of negative events occurring. This provides participants with
a purpose in the experiments; evaluating information in such risk
scenarios is a more meaningful task than assessing, in an abstract
manner, how `large' a value is. Furthermore, charts are frequently used
for the communication of such risks, and manipulating aspects of a chart
can change interpretations of the risks displayed
\citep{elting_influence_1999, feldman-stewart_perception_2000, keller_effect_2009, okan_probability_2020, zikmund-fisher_whats_2005}.
Risk events are composed of two core components: 1) chance of occurrence
and 2) outcome magnitude (severity). Individuals' assessments of chance
and severity are not necessarily independent. An event is perceived as
more likely when it is described as having more severe consequences
\citep{harris_estimating_2009, harris_communicating_2011}. In a similar
manner, an event is associated with more substantial consequences when
it is described as more likely \citep{kupor_probable_2020}. One account
suggests that perceptions of probability and outcome magnitude are
related because they are both assumed to reflect the potency of the
event's cause (the probability--outcome correspondence principle; \citep{keren_probabilityoutcome_2001}). According to this account,
probabilities can occasionally provide meaningful indications of outcome
magnitude (e.g., rainfall), but it is inappropriate to apply this
perspective to all situations (e.g., volcanic eruptions). Therefore,
even though charts in the present set of experiments only display the
chance of events occurring, assessments of the severity of events'
consequences may also differ between conditions. Collecting separate
judgments of chance and severity of consequences for each scenario
provides a clearer picture of how the manipulation affects distinct
aspects of participants' representations of risk. Use of Likert scales
(with discrete options) rather than visual analogue scales (with
continuous options; \citep{sung_visual_2018}) prevents participants from
simply mapping probability percentages directly onto a linear scale. We
also administered a subjective graph literacy measure, to determine
the degree to which our manipulation(s) affect interpretations after
accounting for differences in graph literacy. Previous
research examining responses to visualizations that violate
graphical conventions by using atypical scales suggests that individuals
with lower graph literacy are more likely to draw on data
points' physical positions when making inferences about their magnitudes
\citep{okan_how_2016, okan_when_2012}.
\hypertarget{open-research-statement}{%
\subsubsection{Open Research Statement}\label{open-research-statement}}
This research has been conducted following the principles of open and reproducible research. All data and analysis code are available at \url{https://duncanbradley.github.com/position_magnitude/tree/ieee_vis_22}. This link also provides all necessary resources for running a Docker container, within which the computational environment used for analysis is recreated, meaning a fully reproducible version of this paper can be generated.
All experiments in this paper were pre-registered (\url{https://osf.io/qn46s/}). There are no deviations from pre-registered experimental designs, exclusion criteria,
or sample size. However, the reported analyses differ in some respects
from the pre-registered protocol. For full transparency, we outline
these deviations here.
Consistent with our pre-registration, when building models for our main
analyses, we sought the most complex random effects structures that
would successfully converge. These model structures were identified by
the \emph{buildmer} package in R \citep{voeten_buildmer_2022}, which subsequently removed terms that did
not contribute substantially to explaining variance in ratings. This
means that the final model used in analysis was not always the most
complex converging model.
In pre-registrations for Experiment 2 and Experiment 3, we proposed
testing for an interaction between our manipulation(s) and graph
literacy. However, this was motivated by a concern about whether
accounting for graph literacy could explain the presence or absence of
effects of our manipulation(s). Therefore, we substitute these planned
analyses with a more appropriate approach, treating graph literacy as a
co-variate only (no interaction). This matches the pre-registered
analysis from Experiment 1, providing consistency across the three
experiments. Due to this revision, pre-registered hypotheses about graph
literacy are not discussed.
\hypertarget{experiment-1}{%
\section{Experiment 1}\label{experiment-1}}
\hypertarget{introduction}{%
\subsection{Introduction}\label{introduction}}
Our initial experiment investigated whether changing axis limits affects
interpretation of data points' magnitudes. For the different versions of
each chart, we presented the same data points at different vertical
positions by altering both the upper and lower y-axis limits.
We predicted that ratings of data points'
magnitudes (chance of occurrence) and/or ratings of the severity of
consequences would be greater when data points were presented at higher
physical positions, compared to when the same data points were presented
at lower positions. Experimental code, materials and a link to run the experiment are available at \url{https://gitlab.pavlovia.org/ExPrag_UoM/riskE1}.
\hypertarget{methods}{%
\subsection{Methods}\label{methods}}
\hypertarget{materials}{%
\subsubsection{Materials}\label{materials}}
Text and an accompanying chart were presented in each trial. Two
sentences outlined a scenario involving a risk, and explained what the
chart depicted. For example:
\begin{quote}
\emph{You are going on a camping trip next week. The graph below shows the
chance of heavy rainfall for three randomly selected days next week.}
\end{quote}
The accompanying dot plot displayed the chance (as a percentage) of a
negative outcome occurring, for three options associated with the
scenario (Figure \ref{fig:example-charts}). The label `Chance' was used
instead of `Probability' to avoid confusion with the standard 0-1 scale
for probabilities, and to reflect casual usage.
\begin{figure}
\includegraphics[width=250px]{position_magnitude_files/figure-latex/example-charts-1} \caption{Example Charts. The `high physical position' condition (left) presents data points near the top of the chart; the `low physical position' condition (right) presents the same data points near the bottom of the chart.}\label{fig:example-charts}
\end{figure}
In experimental trials (n = 40), all three data points were either
plotted in the top third of the chart (high physical position: Figure
\ref{fig:example-charts}, left) or in the bottom third of the chart
(low physical position: Figure \ref{fig:example-charts}, right). The
plotted dataset differed for each distinct scenario, but was identical
for the two charts associated with a given scenario. In filler trials (n
= 15) and attention check trials (n = 5), data points were plotted in
the middle third of the chart.
The y-axis range in each chart was 10 percentage points. Horizontal
gridlines appeared at one-unit increments. In all trials, the gridline
1.5 percentage points above the bottom of the chart was labelled with a
numerical value, as was the gridline 1.5 percentage points below the top
of the chart.
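To make the stimulus construction concrete, the sketch below shows how a chart of this kind could be generated in R with \emph{ggplot2}. The plotted values, axis limits, and labelling choices shown here are illustrative assumptions rather than the exact values used in any experimental trial.

\begin{verbatim}
library(ggplot2)

# Hypothetical values for one 'high physical position' trial:
# three chances plotted in the top third of a 10-point window.
rainfall <- data.frame(day    = factor(c("Day 1", "Day 2", "Day 3")),
                       chance = c(27, 28, 26.5))

y_min  <- 20.5                                  # assumed lower axis limit
y_max  <- y_min + 10                            # 10-percentage-point range
breaks <- seq(ceiling(y_min), floor(y_max), 1)  # gridlines at 1-unit increments
labelled <- c(y_min + 1.5, y_max - 1.5)         # only these two gridlines are labelled

ggplot(rainfall, aes(day, chance)) +
  geom_point(size = 3) +
  scale_y_continuous(limits = c(y_min, y_max),
                     breaks = breaks,
                     labels = ifelse(breaks %in% labelled,
                                     paste0(breaks, "%"), "")) +
  labs(x = NULL, y = "Chance of heavy rainfall") +
  theme_minimal()
\end{verbatim}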
\hypertarget{procedure}{%
\subsubsection{Procedure}\label{procedure}}
The experiment was programmed in PsychoPy (version 2021.1.4, \citep{peirce_psychopy2_2019}) and hosted on
pavlovia.org. Participants were instructed to complete the experiment on
a desktop computer or laptop, not a tablet or mobile phone. After
providing informed consent, participants submitted their age and gender,
and completed a five-item subjective graph literacy scale
\citep{garcia-retamero_measuring_2016}. They were reminded that the
experiment involved information about risks, and could cause distress,
so were entitled to withdraw from the experiment at any time. Following
this, instructions explained that their task involved assessing the
chance and severity of negative outcomes in various scenarios involving
risks. The instructions noted that some scenarios might appear similar
to other scenarios. Participants were asked to complete the task as
quickly and accurately as possible. Two practice trials were presented
before the experiment proper began.
Two responses were required for each trial: a rating of the chance of
the negative event occurring; and a rating of the severity of the
consequences if that negative event occurred. Above these Likert scales,
a short phrase indicated that the questions should be answered in
response to the plotted data (e.g., \emph{``If you camp on one of these
days\ldots{}''}).
Each Likert scale had two anchors at its extremes, but all other points
were unlabeled. The leftmost option in the `chance' Likert scale was
\emph{`Very unlikely'}, and the rightmost option \emph{`Very likely'}. The
leftmost option in the `severity' Likert scale was \emph{`Very mild'} and the
rightmost option \emph{`Very severe'}. Likert scales appeared on the same
screen as the text and chart (Figure \ref{fig:example-trial}).
Participants were permitted to change their responses as many times as
they wished before proceeding to the next trial, but could not return to
previous trials.
\begin{figure}
\includegraphics[width=250px]{images/example_trial} \caption{Example Trial. Participants rated the chance and severity of negative outcomes in each trial.}\label{fig:example-trial}
\end{figure}
Attention check trials (n = 5) followed the same layout, with text, a
chart, and Likert scales, but the task differed. Participants were
instructed not to attend to the chart, and instead to provide specified
responses on the Likert scales. For example:
\begin{quote}
\emph{You are expected to stay on task throughout this experiment. For this
trial, ignore the graph below. Respond `Very unlikely' on the top
scale, and `Very mild' on the bottom scale.}
\end{quote}
For attention check trials, the questions above the Likert scales were
\emph{``What is the chance response specified above?''} and \emph{``What is the
severity response specified above?''}.
Before exiting the experiment, participants were informed that all data
presented was fictional and were offered guidance in case of any
distress.
\hypertarget{design}{%
\subsubsection{Design}\label{design}}
We employed a repeated-measures, within-participants design. In
experimental trials, participants encountered each scenario twice: once
with data presented at a high physical position and once with data
presented at a low physical position. In each trial, participants rated
the chance of a negative event occurring, and the severity of its
consequences, on seven-point Likert scales.
Materials were divided into two lists to minimize the likelihood of
different versions of the same scenario appearing in close succession.
In one list, half of the experimental scenarios were accompanied by
charts displaying data at high physical positions, and half were
accompanied by charts displaying data at low physical positions. The
other list contained the alternate versions of each of the experimental
scenarios. Fillers and attention check questions were split between the
two lists, and did not appear more than once. The order of the two lists
was counterbalanced across participants, and within each list, scenarios
were presented in a random order.
\hypertarget{participants}{%
\subsubsection{Participants}\label{participants}}
The experiment was advertised on Prolific.co, a platform for recruiting
participants for online studies. A viral social media post on 24th July
2021 endorsing the website attracted many new users from a narrow
demographic, skewing studies' participant distributions
\citep{charalambides_we_2021}; however, data for this experiment were
collected prior to this. Normal or corrected-to-normal vision and
English fluency were required for participation.
Data were returned by 160 participants. Per pre-registered exclusion
criteria, 10 participants' submissions were rejected because they
answered more than two of 10 attention check questions incorrectly. This
left a total of 150 participants whose submissions were used for
analysis (52.00\% male, 45.33\%
female, 2.67\% non-binary). Mean age was
31.49 (\emph{SD} = 12.47)\footnote{Age data was unavailable for one participant, but was available
for all other participants in the dataset.}. The mean
graph literacy score was 21.28 (\emph{SD} =
4.58), out of a maximum of 30. Participants
whose submissions were approved were paid £3.55, and average completion
time was 25 minutes\footnote{Timing data was unavailable for two participants, but was
available for all other participants in the dataset.}. Ethical
approval was granted by The University of Manchester's Division of
Neuroscience \& Experimental Psychology Ethics Committee (Ref.
2021-11115-18258).
\hypertarget{analysis}{%
\subsection{Analysis}\label{analysis}}
Analyses were conducted using R (version 4.1.2, \citep{r_core_team_r_2021}).
Likert scales only express granularity at the level of ordinal data.
They record whether one rating is higher or lower than another, but
do not record the magnitude of this difference. Therefore, Likert scales
do not capture values from latent distributions (mental representation)
in a linear manner. On a Likert scale, the distance between one pair of
points and another pair may appear equal, but may represent very
different distances on the latent distribution. Therefore, it is
inappropriate to analyse Likert scale data with metric models, such as
linear regression \citep{liddell_analyzing_2018}. Throughout this paper, we
construct cumulative link mixed-effects models, using the \emph{ordinal}
package (version 2019.12-10, \citep{christensen_ordinalregression_2019}) to analyse Likert scale
ratings.
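In a cumulative link model, the probability of a rating falling at or below category \(k\) is modelled through ordered thresholds \(\theta_k\) on a latent scale, e.g. \(\mathrm{logit}\,P(Y_{ij} \le k) = \theta_k - (\beta\,x_{ij} + u_i + v_j)\), where \(x_{ij}\) codes the position manipulation and \(u_i\) and \(v_j\) are participant and scenario random intercepts. A minimal sketch of such a model, using hypothetical variable and data frame names and a simplified random effects structure, is:

\begin{verbatim}
library(ordinal)

# ratings: one row per trial, with the Likert response stored as an
# ordered factor, plus the position condition and grouping identifiers.
ratings$chance_rating <- factor(ratings$chance_rating, ordered = TRUE)

chance_model <- clmm(chance_rating ~ position +
                       (1 | participant) + (1 | scenario),
                     data = ratings, link = "logit")
summary(chance_model)
\end{verbatim}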
Selection of random effects structures for models was automated using
the \emph{buildmer} package (version 2.3, \citep{voeten_buildmer_2022}). The maximal random
effects structure included random intercepts for participants and
scenarios, plus corresponding slopes for fixed effects terms
\citep{barr_random_2013}. From this formula, \emph{buildmer} initially identified
the most complex model which could successfully converge, prioritizing
the terms which explained the most variance in the data, then eliminated
terms which did not provide significant contributions (assessed using
likelihood ratio tests).
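A sketch of this selection procedure, assuming \emph{buildmer}'s \texttt{buildclmm} interface and the hypothetical names introduced above, is:

\begin{verbatim}
library(buildmer)

# Supply the maximal formula; buildmer first identifies the most complex
# converging model, then eliminates terms that do not significantly
# improve fit (assessed with likelihood ratio tests).
selected <- buildclmm(chance_rating ~ position +
                        (position | participant) +
                        (position | scenario),
                      data = ratings)

final_model <- selected@model  # the retained cumulative link mixed model
summary(final_model)
\end{verbatim}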
\begin{figure}
\includegraphics[width=250px]{position_magnitude_files/figure-latex/r1-c-plot-1} \caption{Participants rated the chance of each negative event occurring on a 7-point Likert scale. The distribution of ratings, ranging from ``Very unlikely'' (far left, dark green) to ``Very likely'' (far right, red), is shown separately for charts where values were presented at a high physical position (top) and a low physical position (bottom). Note that data points at high physical positions elicited a larger proportion of ratings on the right-hand side (which represents greater magnitudes), compared to data points at low physical positions, which elicited a larger proportion of ratings on the left-hand side (representing smaller magnitudes).}\label{fig:r1-c-plot}
\end{figure}
Figure \ref{fig:r1-c-plot} plots the distribution of participants'
ratings of data points' magnitudes, for data points presented at high
and low physical positions. A likelihood ratio test reveals that a model
including physical position as a fixed effect explains significantly
more variability in ratings than a model which does not include physical
position as a fixed effect (\(\chi^2\)(1) =
74.21, p \textless{} .001). Data
points' magnitudes were rated as greater when those data points were
presented at high physical positions, compared to when the same data
points were presented at low physical positions (z =
8.57, p
\textless{} .001). This model employed random
intercepts for each scenario and each participant. Estimated marginal
means, calculated using the \emph{emmeans} package (version 1.7.0, \citep{lenth_emmeans_2021}) for these ratings are plotted in Figure \ref{fig:r1-c-emm-plot}.
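The model comparison and the estimated marginal means could be obtained along the following lines (again using the hypothetical object names from the sketches above):

\begin{verbatim}
# Likelihood ratio test: does adding physical position improve fit?
null_model <- clmm(chance_rating ~ 1 +
                     (1 | participant) + (1 | scenario),
                   data = ratings)
anova(null_model, chance_model)  # chi-squared test on the change in deviance

# Estimated marginal means for each position, on the latent scale
library(emmeans)
emmeans(chance_model, ~ position, mode = "latent")
\end{verbatim}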
For ratings of the severity of consequences, a likelihood ratio test
reveals that a model including physical position as a fixed effect
explains significantly more variability in ratings than a model which
does not include physical position as a fixed effect
(\(\chi^2\)(1) = 6.16, p
= .013). The severity of consequences was
rated as greater when data points representing the chance of an event
occurring were presented at high physical positions, compared to when
the same data points were presented at low physical positions (z =
2.50, p
= .012). This model employed random
intercepts for each scenario, plus random intercepts and slopes for each
participant. The slopes modeled, for each participant, the average
difference between responses to data presented at different positions
(henceforth referred to as `by-position slopes').
We also generate two additional models, to test whether or not the above
results could be explained by differences in graph literacy. These
models were identical to the above models except for the inclusion of
participants' graph literacy scores as an additional fixed effect.
Adjusting for participants' graph literacy scores did not eliminate the
effects of data points' positions on ratings of the magnitude of data
points themselves (z = 8.57, p
\textless{} .001) or severity of
consequences (z = 2.51, p
= .012).
The above analysis employs models with \emph{flexible} thresholds. This
allows for variable distances between decision thresholds in the models
(points on the latent distribution dividing responses between two
categories). Comparison with models that specify \emph{equidistant}
thresholds reveals that models with flexible thresholds are superior,
for ratings of the magnitude of data points themselves
(\(\chi^2\)(4) = 609.44, p
\textless{} .001) and ratings of the severity of
consequences (\(\chi^2\)(4) =
142.84, p \textless{} .001). This suggests participants treated
intervals between response categories as irregular, and validates the
use of flexible thresholds in model construction.
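This check amounts to refitting the same model with a constrained threshold structure and comparing fits, roughly as follows:

\begin{verbatim}
# Identical fixed and random effects, but decision thresholds are
# constrained to be evenly spaced on the latent scale.
equidistant_model <- clmm(chance_rating ~ position +
                            (1 | participant) + (1 | scenario),
                          data = ratings,
                          threshold = "equidistant")

# Flexible thresholds (the clmm default) vs. equidistant thresholds
anova(equidistant_model, chance_model)
\end{verbatim}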
\begin{figure}
\includegraphics[width=250px]{position_magnitude_files/figure-latex/r1-c-emm-plot-1} \caption{Estimated marginal means for ratings of data points' magnitudes (generated by the cumulative-link mixed model). Magnitudes were rated as greater when data points were presented at high physical positions. Translucent bars show 95\% confidence intervals.}\label{fig:r1-c-emm-plot}
\end{figure}
\hypertarget{discussion}{%
\subsection{Discussion}\label{discussion}}
Participants rated the magnitudes of data points as greater when those
data points were presented near the top of the chart, compared to when
the same data points were presented near the bottom.
Higher bars and ascending lines typically represent higher numbers and
ascending trends, so within a single chart, inferring that values
presented higher up are greater than those lower down will often be
correct in normal usage. This experiment, however, establishes that
inferences about the magnitude of \emph{the same value} can change depending
on its position. Modeling differences in participants' graph literacy
did not remove the influence of our experimental manipulation on
interpretations.
Ratings of both the chance of an event occurring, and the severity of
consequences, were affected by the manipulation of axis limits and data
points' positions, even though the charts only displayed data on the
former. This accords with previous reports of an interplay between
properties of presented information and impressions of related but
distinct concepts, in particular the finding that higher prior
probabilities were associated with impressions of greater event
magnitudes \citep{kupor_probable_2020}. However, it is unclear whether the
effects of different axis ranges on interpretations of magnitude are
driven by an association between a data point's \emph{absolute} position and
its magnitude, or an association between its \emph{relative} position and its
magnitude. If absolute position influences interpretations, mentally
representing the magnitude of a data point may simply involve
associating data points at higher positions with higher values (and
lower positions with lower values). In contrast, if relative position
influences interpretations, mentally representing the magnitude of a
data point would involve a comparison with plausible alternative values,
which are not plotted, but implied through use of blank space. This
important distinction is explored in Experiment 2.
\hypertarget{experiment-2}{%
\section{Experiment 2}\label{experiment-2}}
\hypertarget{introduction-1}{%
\subsection{Introduction}\label{introduction-1}}
Experiment 1 (E1) found that participants associated data points with
greater magnitudes when those data points were positioned near the \emph{top}
of a chart and substantial blank space appeared \emph{below} them, compared
to when the same data points were positioned near the \emph{bottom} of a
chart, with substantial blank space \emph{above}.
One possible explanation for this finding is that participants made
simple associations between absolute position and magnitude, equating
physically higher data points with larger magnitudes and physically
lower data points with smaller magnitudes. This relates to
well-established conceptual metaphors for magnitude, where greater
vertical positions denote greater magnitudes \citep{tversky_cognitive_1997}.
An alternative explanation is that participants used blank space as a
reference point when assessing the magnitude of plotted values. For
example, when viewing substantial blank space above plotted data points,
participants may have recognized the potential for values larger than
those observed, consequently associating plotted data points with
smaller magnitudes.
E1 does not provide a means of differentiating these competing
explanations. Drawing inferences from data points' absolute positions
would orient magnitude judgments in the same direction as drawing
inferences from their positions relative to blank space. A high
magnitude is implied by a data point's high physical position \emph{and} the
presence of substantial blank space below. Therefore, an additional
experiment is required in order to distinguish between the two competing
explanations.~
Inverting a vertical axis changes the relationship between physical
position and numerical value: increasingly \emph{lower} positions represent
increasingly \emph{higher} numerical values. This means data points presented
near the \emph{bottom} of a chart, with substantial blank space above, are
numerically \emph{larger} than the plausible values represented by this blank
space. This is illustrated in Figure \ref{fig:r2-rationale-plot}.
Therefore, inferences invoking blank space would generate the opposite
impressions to inferences invoking data points' physical positions only.
In E2, we manipulate data points' physical positions by changing axis
limits (as in E1), but \emph{also} manipulate axis orientation, by employing
conventional and inverted axes (in a \(2 \times 2\) design). If interpretations
of magnitude differ according to whether data points are smaller or
larger than other plausible values implied by the chart (regardless of
physical position), this will demonstrate that interpretations are
driven by positions relative to blank space, rather by absolute
position.
\begin{figure}
\includegraphics[width=250px]{position_magnitude_files/figure-latex/r2-rationale-plot-1} \caption{Rationale for Experiment 2: Distinguishing the Roles of Absolute and Relative Position.
In charts with conventional axis orientations (left column), there is congruity between data points’ absolute positions and their relative positions in the chart.
In charts with inverted axis orientations (right column), there is incongruity between data points’ absolute positions and their relative positions in the chart.
For example, at high absolute positions in conventional charts (top left), data points are relatively higher than implied alternatives. But at the same absolute positions, in inverted charts, the same values are relatively lower than alternatives (top right).}\label{fig:r2-rationale-plot}
\end{figure}
Previous research suggests that charts with inverted axes can be prone
to misinterpretation when viewers are not informed about the inversion
\citep{pandey_how_2015, woodin_conceptual_2022}. In E2, we provide explicit
instruction to ensure participants are aware that inverted charts are
presented.
For charts with conventional axis orientations,
we predicted in our pre-registration that results from E1 would be replicated. That is, data
points presented at higher physical positions would be associated with
greater magnitude ratings, compared to data points presented at lower
physical positions. For charts with inverted axis orientations, we
outlined what different patterns of magnitude ratings would signal about
the mechanism used to interpret magnitude. Specifically, use of absolute
position would be indicated by greater magnitude ratings for data points
at \emph{higher} physical positions (and therefore no difference compared to
conventional charts). Alternatively, use of position relative to blank
space would be indicated by greater magnitude ratings for data points at
\emph{lower} physical positions (and therefore the opposite pattern compared
to conventional charts). Experimental code, materials and a link to run the experiment are available at \url{https://gitlab.pavlovia.org/ExPrag_UoM/riskE2}.
\hypertarget{method}{%
\subsection{Method}\label{method}}
\hypertarget{materials-1}{%
\subsubsection{Materials}\label{materials-1}}
For this experiment, we used a Latin-squared design in which participants
viewed only one chart per scenario. To compensate in part for the reduced
experimental power caused by a reduction in the number of observations per
participant (as well as a reduction in participant numbers), we increased
the number of scenarios.
Two scenarios which were fillers in E1 were used as experimental
scenarios\footnote{For one of these scenarios, the mean of the plotted data was also
modified.} and three additional scenarios were created. One filler
scenario was removed due to a concern about its quality (it concerned
the risk to others as well as the risk to oneself). This gave a total of
24 experimental scenarios, 12 filler scenarios, and 5 attention check
questions (41 trials in total).
\hypertarget{procedure-1}{%
\subsubsection{Procedure}\label{procedure-1}}
The experiment used PsychoPy version 2021.2.3. Participants specified the highest level of education they had received,
in addition to answering demographic questions on age and gender. An
additional slide in the instructions explained how to identify and
interpret the different axis orientations, and encouraged participants
to pay attention to this:
\begin{quote}
\emph{You should pay attention to the direction of the arrow on the
`Chance' axis. If the arrow points upwards, the numbers in the graph
get bigger as the axis goes up. Alternatively, if the arrow points
downwards, the numbers get bigger as the axis goes down.}
\end{quote}
Otherwise, the procedure was identical to E1.
\hypertarget{design-1}{%
\subsubsection{Design}\label{design-1}}
We employed a Latin-squared, within-participants design. Participants
encountered each individual scenario only once, but were exposed to all
combinations of position and axis orientation throughout the experiment.
\hypertarget{participants-1}{%
\subsubsection{Participants}\label{participants-1}}
The experiment was not advertised on Prolific.co to those who had
participated in E1, or those who signed up to Prolific.co after 24th
July 2021 (due to the shift in participant demographics). Normal or
corrected-to-normal vision and English fluency were required for
participation.
Data were returned by 129 participants. Per pre-registered exclusion
criteria, five participants' submissions were rejected because they
answered more than two of 10 attention check questions incorrectly.
Submissions from four other participants were excluded from the final
dataset for the following reasons: maximum completion time (67 minutes)
was exceeded (two participants); the submission constituted a second
attempt following a saving error on the first attempt (one participant);
data were collected prior to pre-registration (one participant). This
left a total of 120 participants whose submissions were used in the
analysis (49.17\% male, 50.83\%
female). Mean age was 29.32 (\emph{SD} =
10.45). All participants had
completed at least secondary education. The mean graph literacy score
was 21.73 (\emph{SD} =
4.70). Participants whose submissions were
approved were paid £2.37, and average completion time was
21 minutes. Ethical approval was
granted by The University of Manchester's Division of Neuroscience \&
Experimental Psychology Ethics Committee (Ref. 2021-11115-20464).
\hypertarget{analysis-1}{%
\subsection{Analysis}\label{analysis-1}}
\begin{figure}
\includegraphics[width=250px]{position_magnitude_files/figure-latex/r2-c-plot-1} \caption{Participants rated the chance of each negative event occurring on a 7-point Likert scale. The distribution of ratings, ranging from ``Very unlikely'' (far left, dark green) to ``Very likely'' (far right, red), is shown separately for each combination of the levels of each condition (axis orientation: conventional, inverted; data points' physical position: high, low). Note that the pattern of responses to data presented at different positions in the Conventional Axis condition appears to be the opposite of the pattern for the Inverted Axis condition. When charts used conventional axes, greater magnitude ratings were more common for data presented at high physical positions, whereas when charts used inverted axes, greater magnitude ratings were more common for data presented at low physical positions.}\label{fig:r2-c-plot}
\end{figure}
Figure \ref{fig:r2-c-plot} plots the distribution of participants'
ratings of data points' magnitudes, for data points presented at high
and low physical positions, in charts with conventional axis
orientations and inverted axis orientations. A likelihood ratio test
reveals that a model including the interaction between physical position
and axis orientation as a fixed effect explains significantly more
variability in ratings than a model without this interaction as a fixed
effect (\(\chi^2\)(1) = 8.22, p
= .004). There was a significant
interaction between physical position and y-axis orientation (z =
2.91, p
= .004). This
interaction is plotted in Figure \ref{fig:r2-int-plot}. This model
employed random intercepts and by-position slopes for each scenario.
Random intercepts were included for each participant, as well as slopes
capturing differences in participants' responses to data presented at
different positions, different orientations, and the interaction between
these.
Pairwise comparisons (with Sidak adjustment) reveal that the effect of
position in charts with conventional y-axis orientations (E1) was
replicated (z = 3.56, p
= .001). Data points'
magnitudes were rated as greater when they were presented at high
physical positions, compared to when they were presented at low physical
positions. There was no significant difference between magnitude ratings
for data points plotted at different positions when inverted axes were
used (z = -1.39, p
= .512). Therefore, we
observe a different pattern of results when an inverted axis is used,
compared to when a conventional axis is used. This suggests that
differences in ratings for data points at different positions in
physical space are not due to simple associations between vertical
position and magnitude. The interaction remained when controlling for
graph literacy: z = 2.91, p
= .004, and when controlling for
list number: z = 2.92, p
= .004.
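These pairwise comparisons can be computed from the fitted interaction model (here \texttt{interaction\_model}, a hypothetical name for a cumulative link mixed model containing the position-by-orientation interaction) using \emph{emmeans}:

\begin{verbatim}
library(emmeans)

# Estimated marginal means for each combination of position and axis
# orientation, followed by all pairwise comparisons of the four cells,
# with Sidak adjustment for multiple comparisons.
emm <- emmeans(interaction_model, ~ position * orientation,
               mode = "latent")
pairs(emm, adjust = "sidak")
\end{verbatim}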
For ratings of the severity of consequences, a likelihood ratio test
reveals that a model including the interaction between physical position
and axis orientation as a fixed effect explains significantly more
variability in ratings than a model without this interaction as a fixed
effect (\(\chi^2\)(1) = 5.13, p
= .024). There was a significant
interaction between physical position and y-axis orientation (z =
2.28, p
= .022). This
model employed random intercepts for each scenario. Random intercepts
were included for each participant, as well as slopes capturing
differences in participants' responses to data presented at different
positions, different orientations, and the interaction between these.
Despite the interaction, the main effect in severity ratings from E1,
different responses to data points at different positions in
conventional charts, was not replicated
(z = 1.53, p
= .414). There was also
no evidence of different responses to data points at different positions
in inverted charts (z = -1.54, p
= .412). This
interaction appears to be driven by a weak and likely spurious
difference between ratings for data points at high physical positions in
inverted and conventional charts (z = -2.52,
p = .047). The
interaction remained when controlling for graph literacy: z =
2.29, p
= .022, and when controlling for
list number: z = 2.28, p
= .023.
Models employing flexible decision thresholds (as above) were superior
to models employing equidistant thresholds, for ratings of the magnitude
of data points themselves (\(\chi^2\)(4) =
346.93, p \textless{} .001), and
ratings of the severity of consequences
(\(\chi^2\)(4) = 74.10, p
\textless{} .001).
\begin{figure}
\includegraphics[width=250px]{position_magnitude_files/figure-latex/r2-int-plot-1} \caption{Estimated marginal means for ratings of data points' magnitudes (generated by the cumulative-link mixed model). The slope for conventional charts differs from the slope of inverted charts. Thus, the effect of position on interpretation of data points' magnitudes differs according to axis orientation. Translucent bars show 95\% confidence intervals.}\label{fig:r2-int-plot}
\end{figure}
\hypertarget{discussion-1}{%
\subsection{Discussion}\label{discussion-1}}
In E1, when using conventional charts only, we found that displaying
data within different axis limits affected magnitude judgments. However,
it was unclear whether judgments were based on data points' absolute
positions, or their positions relative to blank space, because both
would generate similar interpretations. Therefore, in E2, for half of
trials, we reversed the mapping of values in physical space, so these
two features would imply different magnitudes for a given value.
In E2, we replicated the primary finding from E1. In charts with
conventional axis orientations, the same data points elicited different
chance judgments when presented at different positions. These
differences were consistent with magnitudes implied by data points'
absolute positions and their positions relative to blank space. However,
in charts with inverted axis orientations, the same pattern was not
observed. Therefore, we can conclude that interpretations of magnitude
are affected by a chart's physical arrangement of values. The pattern of
differences in magnitude judgments for data points presented at distinct
physical positions depends on how axes are oriented.
Figure \ref{fig:r2-int-plot} suggests that the pattern of results for
inverted charts is the reverse of the pattern for conventional charts.
However, our analysis indicates that the same data points did not elicit
significantly different magnitude judgments when presented at different
positions in \emph{inverted} charts. Therefore, we cannot conclude from this
analysis that magnitude judgments are driven solely by data points'
positions relative to blank space. The lack of significant difference is
likely due to a lack of experimental power. An additional experiment is
required to confirm whether there is a genuine difference.
\hypertarget{experiment-3}{%
\section{Experiment 3}\label{experiment-3}}
\hypertarget{introduction-2}{%
\subsection{Introduction}\label{introduction-2}}
The interaction in E2 revealed that the influence of position on
magnitude judgments depends on how different numerical values are
arranged in a chart (axis orientation). The pattern of responses in
inverted charts appeared to be the inverse of the pattern for
conventional charts. This suggests that participants may not have based
inferences about magnitude on data points' absolute positions, but on
their positions relative to blank space. However, the absence of a
significant difference between ratings for data points at different
positions in inverted charts prohibits the conclusion that
interpretations are driven entirely by comparisons with plausible values
implied by blank space.
It is possible that no significant effect was detected due to
insufficient experimental power. Unlike E1, with 150 participants in a
single-factor design, E2 involved 120 participants in a Latin-squared
\(2 \times 2\) design. Despite an increase in the number of experimental scenarios
(from 20 to 24), there were still fewer observations for each unique
condition (3000 in E1 vs.~720 in E2).
In E3 we increase the experimental power and focus only on inverted
charts. This will provide a clearer account of how magnitude is
interpreted in inverted charts, furthering understanding of the
mechanism by which axis ranges influence interpretations of magnitude.
We outlined in our pre-registration what
different patterns of magnitude ratings would signal about the
mechanisms used to interpret magnitude. Specifically, use of absolute
position would be indicated by higher magnitude ratings for data points
at \emph{high} physical positions (mirroring the finding for conventional
charts). Alternatively, use of position relative to blank space would be
indicated by higher magnitude ratings for data points at \emph{low} physical
positions (the reverse of the finding for conventional charts). Experimental code, materials and a link to run the experiment are available at \url{https://gitlab.pavlovia.org/ExPrag_UoM/riskE3}.
\hypertarget{method-1}{%
\subsection{Method}\label{method-1}}
\hypertarget{materials-2}{%
\subsubsection{Materials}\label{materials-2}}
Materials were identical to E1, except for the inversion of the y-axis
in all charts, including practice trials. There were 60 trials in total
(40 experimental trials, 15 fillers, 5 attention check questions).
\hypertarget{procedure-2}{%
\subsubsection{Procedure}\label{procedure-2}}
The experiment used PsychoPy version 2021.2.3. As in E2, participants were asked to indicate their education level. One
slide in the instructions explained to participants how charts with
inverted axes function: \emph{``In all graphs in this experiment, the arrow on
the `Chance' axis points downwards, meaning the numbers get bigger as
the axis goes down.''} Otherwise, the procedure was identical to E1.
\hypertarget{design-2}{%
\subsubsection{Design}\label{design-2}}
As in E1, we employed a repeated-measures, within-participants design.
Participants encountered each experimental scenario twice: once with
data presented at a high physical position and once with data presented
at a low physical position.
\hypertarget{participants-2}{%
\subsubsection{Participants}\label{participants-2}}
The experiment was not advertised on Prolific.co to those who had
participated in E1 or E2, or those who signed up to Prolific.co after
24th July 2021. Normal or corrected-to-normal vision and English fluency
were required for participation.
Data were returned by 161 participants. Per pre-registered exclusion
criteria, 10 participants' submissions were rejected because they
answered more than two of 10 attention check questions incorrectly. One
additional participant was excluded from the final dataset because they
exceeded the maximum completion time (87 minutes). This left a total of
150 participants whose submissions were used for analysis
(60.00\% male, 40.00\% female).
Mean age was 29.64 (\emph{SD} =
9.56)\footnote{Age data was unavailable for two participants, but was available
for all other participants in the dataset.}. All participants had
completed at least secondary education. The mean graph literacy score
was 21.87 (\emph{SD} =
4.28). Participants whose submissions were
approved were paid £3.45, and average completion time was
24 minutes. Ethical approval was
granted by The University of Manchester's Division of Neuroscience \&
Experimental Psychology Ethics Committee (Ref. 2021-11115-20745).
\hypertarget{analysis-2}{%
\subsection{Analysis}\label{analysis-2}}
Figure \ref{fig:r3-c-plot} plots the distribution of participants'
ratings of data points' magnitudes, for data points presented at
different physical positions in inverted charts.
\begin{figure}
\includegraphics[width=250px]{position_magnitude_files/figure-latex/r3-c-plot-1} \caption{Participants rated the chance of each negative event occurring on a 7-point Likert scale. The distribution of ratings, ranging from ``Very unlikely'' (far left, dark green) to ``Very likely'' (far right, red), is shown separately for charts where values were presented at a high physical position (top) and a low physical position (bottom). Note that data points at high physical positions elicited a larger proportion of ratings on the left-hand side (which represents smaller magnitudes), compared to data points at low physical positions, which elicited a larger proportion of ratings on the right-hand side (representing greater magnitudes).}\label{fig:r3-c-plot}
\end{figure}
A likelihood ratio test
reveals that a model including physical position as a fixed effect
explains significantly more variability in ratings of data points' magnitudes than a model which
does not include physical position as a fixed effect
(\(\chi^2\)(1) = 46.45, p
\textless{} .001). Data points' magnitudes were
rated as greater when those data points were presented at low physical
positions, compared to when the same data points were presented at high
physical positions (z = 6.80, p
\textless{} .001). This model employed random intercepts for each
scenario. This effect remained when adjusting for participants' graph
literacy scores (z = 6.83, p
\textless{} .001). Estimated marginal means
for these ratings are plotted in Figure \ref{fig:r3-c-emm-plot}.
For ratings of the severity of consequences, a likelihood ratio test
reveals that a model including physical position as a fixed effect did
not explain significantly more variability in ratings than a model
without this fixed effect (\(\chi^2\)(1) =
3.40, p = .065). This
model employed random intercepts for each scenario, plus random
intercepts and by-position slopes for each participant. This finding
remained when adjusting for participants' graph literacy scores (z =
1.85, p
= .064).
Models employing flexible decision thresholds (as above) were superior
to models employing equidistant thresholds, for ratings of the magnitude
of data points themselves (\(\chi^2\)(4) =
752.74, p \textless{} .001), and ratings of the severity of consequences
(\(\chi^2\)(4) = 177.78, p
\textless{} .001).
\begin{figure}
\includegraphics[width=250px]{position_magnitude_files/figure-latex/r3-c-emm-plot-1} \caption{Estimated marginal means for ratings of data points' magnitudes (generated by the cumulative-link mixed model). Magnitudes were rated as greater when data points in inverted charts were presented at low physical positions. Translucent bars show 95\% confidence intervals.}\label{fig:r3-c-emm-plot}
\end{figure}
\hypertarget{discussion-2}{%
\subsection{Discussion}\label{discussion-2}}
When viewing charts with inverted axes, participants judged data points'
magnitudes according to whether accompanying blank space implied the
existence of higher or lower plausible values. Participants ignored
conventional associations between position and magnitude to interpret
magnitude in the context of the chart.
In the previous experiment (E2), we did not observe a significant
difference between magnitude ratings for data points at different
positions in inverted charts, although the pattern was consistent with
the use of blank space in the interpretation of the plotted data.
However, E3, with increased experimental power, demonstrates that such a
difference is statistically significant.
E2 involved switching between conventional and inverted charts, whereas
E3 presented inverted charts in isolation. However, the differences in
estimated marginal means for inverted charts, which represent the
differences in ratings of data points' magnitudes when presented at
different positions, are almost identical for these two experiments (E2:
0.34; E3:
0.33). This suggests inverted charts
were not treated differently in the different experiments. Therefore,
the presence or absence of switching should not prohibit the use of E3's
data in explaining E2's interaction.
In light of this, we can interpret the results of E2 more easily. The
same data points, presented at the same positions in a chart, convey
different magnitudes depending on how they compare to plausible values
implied by blank space. Viewers do not draw upon simple associations
between vertical position and magnitude, but recognize the context in
which values are plotted.
\hypertarget{general-discussion}{%
\section{General Discussion}\label{general-discussion}}
Over three experiments, we demonstrate how judgments of data points'
magnitudes are influenced by the presence of blank space in a chart.
Regardless of their physical positions, data points were associated with
greater magnitudes when they were numerically greater than the plausible
values represented by blank space. This was observed for charts with
both conventional and inverted axes. This highlights viewers'
sensitivity to context in the interpretation of information in data
visualizations, suggesting designers should consider this aspect when
creating charts.
When comparing data points within a single chart, it is appropriate to
infer that data points which appear at different positions between two
axis limits have distinct magnitudes. The results we report indicate
that magnitude judgments can vary when \emph{the same value} appears at
different positions between two axis limits. Interpretation of an
absolute value is biased by its relative position.~
The impact of surrounding information on assessments of data is an
example of a framing effect. We illustrate that this effect occurs in
the absence of contrasting data points: the presence of blank space is
sufficient for implying the relative status of plotted data.
The present data complement findings from prior research on y-axis
truncation, which has found that the choice of axis limits can impact
interpretation of data. The results we report reinforce the notion that
the amount of blank space surrounding plotted values influences viewers'
impressions of those values. While previous investigations have shown
that y-axis limits affect \emph{comparisons} of plotted values
\citep{correll_truncating_2020, witt_graph_2019, yang_truncating_2021}, the
present findings show that they also affect \emph{magnitude judgments}.
A previous study addressing a similar question also concluded that a
data point's location within a range of values affects interpretation of
its magnitude \citep{sandman_high_1994}. The present set of experiments
builds upon this research by identifying the mechanism behind this
effect and removing the confound of variable axis ranges. It also
extends the finding beyond a single scenario to a wider range of
situations, and separately analyses specific judgments, rather than
using a combined measure, to verify that different presentations affect
judgments of the specific variable plotted in a chart.
This set of experiments was not concerned with endorsing or opposing
inverted charts; the sole function of these charts was in distinguishing
competing explanations. However, when explicit instruction was provided,
our data provide evidence of comprehension, contrary to the typical
finding of misinterpretation resulting from associating higher positions
with higher values \citep{woodin_conceptual_2022, pandey_how_2015}.
Visualization rhetoric involves presenting numerical information in a
way that provokes a particular interpretation
\citep{hullman_visualization_2011}. The manipulation of visualization
components examined in the present set of experiments is related to two
rhetorical strategies: \emph{axis thresholding} and \emph{contrast}. The former is
an instance of `information access' rhetoric, and involves setting an
axis range that provides an incomplete picture of the data. The latter
is an instance of `mapping' rhetoric, and employs visual properties to
promote comparisons.
We did not find consistent evidence that assessments of the severity of
consequences are affected by the positioning of values representing the
chance of events occurring. Prior research has found that probability
estimates change as a function of outcome magnitude
\citep{harris_communicating_2011, harris_estimating_2009} and that outcome
magnitude estimates change as a function of event probability
\citep{kupor_probable_2020}. However, whereas prior research focused on the
potency of an event, we asked participants to evaluate another feature:
the severity of its consequences. How affected parties are impacted by
an event is one step removed from a core component of risk, outcome
magnitude. In addition, unlike prior work which substantially
manipulated underlying scenarios, our more subtle manipulation retained
the same probability values, changing only the surrounding context. The
effect of relative position on interpretation of chance data does not
consistently extend to judgments about the severity of consequences.
Adjusting for data visualization literacy did not remove the influence
of axis range on interpretations. \citet{yang_truncating_2021} also observed
that data visualization literacy could not sufficiently explain variance
in the degree of bias caused by y-axis truncation. This measure captures
comprehension of the conventions of data visualization, indicating
receipt of elementary instruction \citep{okan_how_2016}. Therefore, it is
perhaps better suited to measuring ability to decipher more complicated
designs, but is not well-placed to predict susceptibility to differences
in presentation format \citep{yang_truncating_2021}.
\hypertarget{implications-for-visualization-design}{%
\subsection{Implications for Visualization Design}\label{implications-for-visualization-design}}
Our findings highlight an opportunity for data visualization designers
to construct axes creatively for dramatic effect. Introducing blank
space when setting axis limits allows designers to persuasively convey
large or small magnitudes. However, even those avoiding creative use of
blank space should be sensitive to our finding that axis ranges are
likely to be considered representative of relevant values for assessing
the magnitude of plotted data. Designers should consider what is \emph{not}
plotted and reflect on the impression(s) of magnitude resulting from
their choice of axis limits. To avoid misleading displays, axes should
present appropriate values. Like \citet{correll_truncating_2020}, we
acknowledge that there is no objectively correct method for achieving
this. Ultimately, the designer decides what context is appropriate,
based on the chart's purpose and content. This may involve taking into
account historical data, comparable scenarios, established baselines,
current objectives, \emph{etc.} Our findings are also relevant for assessing
the quality of data visualizations; one should consider whether a chart
appropriately portrays magnitude, in addition to standard
considerations.
Setting an axis range that extends far beyond the range of the plotted
data impacts discrimination ability \citep{witt_graph_2019}, and may distract
attention from meaningful variance within the data. Witt recommends
setting an axis range to 1.5--2 times the plotted data's standard
deviation. This guidance is broadly consistent with our suggestions in
its recommendation that axis limits should take into account relevant
values to provide context. The present experiment has demonstrated that
magnitude is communicated by the relative position of data points within
the space of all plausible values.
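As a purely illustrative reading of Witt's rule (the notation here is ours, not Witt's), data plotted with mean $\bar{y}$ and standard deviation $s$ would, under an axis range of $2s$, receive limits of approximately
\[
\left[\,\bar{y} - s,\; \bar{y} + s\,\right],
\]
so the displayed interval is determined by variability in the data rather than by contextually relevant reference values.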
When following Witt's \citep{witt_graph_2019} suggestions, data points'
positions are determined solely by the size of the numerical difference
between two conditions. A large difference between conditions would
result in data points being located near the two extremes of the chart,
which may capture genuine small and large magnitudes. At other times,
applying Witt's guidance will create an inaccurate impression of
individual magnitudes. For example, with a small difference between
conditions, no data points will be displayed near the extremes, even
though they may be genuinely large or small when considered within a
larger context. This occurs because Witt's guidance was created for the
sole purpose of managing bias and sensitivity when comparing two
conditions (in fields with standardized effect sizes). Accordingly,
setting axes which provide context for \emph{individual} magnitudes, is not
considered pertinent. Again, designers must consider their dataset and
the message they intend to relate in order to reach a trade-off between
suitable communication of variability and individual magnitudes. A
possible compromise may involve displaying values against blank space to
convey magnitude in context, and also in a focused display to facilitate
comparisons between values. This resembles an approach for communicating
differences discussed by \citet{correll_truncating_2020}, and reported to
benefit users by \citet{ritchie_lie_2019}. Its suitability for conveying
magnitude should be investigated in future work.
\hypertarget{limitations}{%
\subsection{Limitations}\label{limitations}}
To reduce the likelihood of misinterpretation, participants were given
instructions on how to read inverted charts. This may have suppressed a
spontaneous interpretation of magnitude, based on physical position, in
favor of a learned interpretation. Our investigation therefore only
explains how viewers interpret magnitude when they know how to interpret
a given chart.
In addition to associations between vertical position and magnitude,
vertical position is also a common conceptual metaphor for emotional
valence. Lower physical positions are typically associated with negative
valence and higher physical positions with positive valence.
\citet{woodin_conceptual_2022} found that comprehension is facilitated when the
physical arrangement of data is consistent with the conceptual metaphor
for valence, but that associations between vertical position and
numerical magnitude affect interpretations more strongly. In the present
set of experiments, charts displayed negative outcomes, so data were
aligned with the conceptual metaphor for valence in inverted charts, and
misaligned in conventional charts. Participants evidently did not use
valence metaphors to interpret values in conventional charts; this would
have produced the opposite pattern of results to those observed. The
simplest explanation for our results is that participants relied on
relative position when interpreting both conventional and inverted
charts, rather than sometimes generating inferences based on a
conceptual metaphor for valence.
In analyses employing graph literacy as a co-variate, graph literacy
scores were calculated as the average of five Likert scale responses.
This means that responses to graph literacy questions were modeled as
continuous data, whereas Likert scale ratings from experimental trials
were modeled as ordinal data. This approach was used by the scale's
developers \citep{garcia-retamero_measuring_2016}, but is not the most
appropriate method \citep{liddell_analyzing_2018}.
\hypertarget{conclusion}{%
\subsection{Conclusion}\label{conclusion}}
The position of data points in a chart affects interpretation of how big
or small their values are. We demonstrate that this relationship between
physical position and inferences about magnitude critically depends on
whether accompanying blank space represents higher or lower alternatives
to the plotted data. Viewers take into account the context in which data
appears, even when comparison values are not explicitly displayed. Axis
limits and blank space warrant consideration from data visualization
designers.
\hypertarget{acknowledgments}{%
\section*{Acknowledgments}\label{acknowledgments}}
\addcontentsline{toc}{section}{Acknowledgments}
Duncan Bradley was supported by the Economic and Social Research Council
(Grant Number ES/P000665/1). This work was supported in part by a BPS
Cognitive Section Postgraduate Rapid Project Grant. We thank Jen McBride
and Paul Warren for comments on an earlier draft, and Paul Stott for
assistance with manuscript formatting.
\setlength{\bibsep}{0.0pt}
%\bibliographystyle{bib_styles/abbrv}
\bibliographystyle{bib_styles/abbrvnatdoi}
%\bibliographystyle{bib_styles/abbrv-doi}
%\bibliographystyle{bib_styles/abbrv-doi-narrow}
%\bibliographystyle{bib_styles/abbrv-doi-hyperref}
%\bibliographystyle{bib_styles/abbrv-doi-hyperref-narrow}
{\fontsize{8pt}{9.6pt}\selectfont \bibliography{bibliography}}
\end{document}
| {
"alphanum_fraction": 0.8077615915,
"avg_line_length": 58.5240641711,
"ext": "tex",
"hexsha": "30f35b9c9fa7fdf22191e6d76f4d7481cac1d23f",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2022-02-11T12:30:58.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-02-11T12:11:51.000Z",
"max_forks_repo_head_hexsha": "f8d86d659478ae817b9f22b415cc76b1f5114e4e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "duncanbradley/position_magnitude",
"max_forks_repo_path": "position_magnitude.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "f8d86d659478ae817b9f22b415cc76b1f5114e4e",
"max_issues_repo_issues_event_max_datetime": "2022-02-11T14:57:48.000Z",
"max_issues_repo_issues_event_min_datetime": "2022-02-11T14:57:48.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "duncanbradley/position_magnitude",
"max_issues_repo_path": "position_magnitude.tex",
"max_line_length": 1247,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "f8d86d659478ae817b9f22b415cc76b1f5114e4e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "duncanbradley/position_magnitude",
"max_stars_repo_path": "position_magnitude.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 16617,
"size": 76608
} |
\section{Throwing Training}\label{perk:throwingTraining}
\textbf{Cost:} 80CP\\
\textbf{Requirements:} Simple Weapon Training\\
\textbf{Passive, Repeatable, Source(80 Gold)}\\
You are trained with throwing weapons.
This includes all weapons that have the ``Throwing'' descriptor.
You add your level to attack and block rolls made with these weapons.\\
If you already have another perk that grants you proficiency with a certain weapon, only the one with the highest bonus counts.\\
\\
Level Progression:\\
\\
\rowcolors{2}{lightgray}{white}
\begin{tabular}{l | l | l | l}
Level & CP Cost & Gold Cost & Effect\\
II & 250CP & 250 Gold & You also add 1d4 to attack and block rolls.\\
III & 1,200CP & 1,200 Gold & You also add 2d4 to attack and block rolls.\\
IV & 4,500CP & 4,500 Gold & You also add 3d4 to attack and block rolls.\\
\end{tabular} | {
"alphanum_fraction": 0.7319098458,
"avg_line_length": 46.8333333333,
"ext": "tex",
"hexsha": "9b0ab20e2e18954d151bfec8b924b85f4ef74675",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_forks_repo_path": "perks/martial/weapontraining/throwntraining.tex",
"max_issues_count": 155,
"max_issues_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_issues_repo_issues_event_max_datetime": "2022-03-03T13:49:05.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-03-18T13:19:57.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_issues_repo_path": "perks/martial/weapontraining/throwntraining.tex",
"max_line_length": 129,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_stars_repo_path": "perks/martial/weapontraining/throwntraining.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-03T09:32:08.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-03-13T09:33:31.000Z",
"num_tokens": 256,
"size": 843
} |
\begin{frame}{\ft{Example Use-Cases}}
\section{Examples}
\vspace{-5em}
{\Large\fontfamily{uhv}\selectfont
\vspace{1em}
\begin{center}
\begin{minipage}{.9\textwidth}
\vspace{1em}
\fcolorbox{lqboutercolor!30!purple}{lqbinnercolor}{\begin{minipage}{\textwidth}%
\begin{lightquadblockc}{0.5,0.2,0.1}{Inter-Application Networking and Workflow Management}
\begin{center}\begin{minipage}{.98\textwidth}
{\bf \begin{itemize}
\sqitem {\lsep} Export data and instructions between Qt-based applications
{\color[rgb]{0.3,0,0.1}{(slides 16-17)}}.\vspace{1em}
\sqitem {\lsep} \parbox[t]{15cm}{Embed document or multi-media viewers inside scientific or
dataset applications
{\color[rgb]{0.3,0,0.1}{(slides 28-31)}}.}\\\vspace{1.5em}
\end{itemize}}\end{minipage}
\end{center}
\end{lightquadblockc}
\end{minipage}}
\vspace{.7em}
\color[rgb]{0,0.24,0.25}{\hrule height 7pt}
\vspace{-.5em}
\color[rgb]{0.5,0.2,0.1}{\hrule height 3pt}
\vspace{.8em}
\fcolorbox{lqboutercolor!30!blue}{lqbinnercolor}{\begin{minipage}{\textwidth}%
\begin{lightquadblockc}{0,0.24,0.25}{Responsive desktop-style applications for enhanced UX}
\vspace{-9pt}
\hspace{24pt}{\setlength{\fboxsep}{10pt}\colorbox[rgb]{1,1,0.75}{\begin{minipage}{18cm}
{\sl\LARGE\setstretch{1.05}
\textbf{\color[rgb]{0.2,0,0.5}{Native applications offer superior User Experience, leveraging distinct interactive
features of desktop GUIs: context menus,
dialog boxes, tool tips, Multiple Window Display,
dock windows, and so on:}}\par}
\end{minipage}}}
\vspace{1em}
\begin{itemize}
\sqitem {\lsep} \parbox[t]{17cm}{Compelling front-ends for e-commerce
(Note: \q{46\% of global online retail orders happen on desktop},
source: leftronic.com),
Real Estate, VR, etc.
{\color[rgb]{0.3,0,0.1}{(slides 21-27)}}.}\vspace{1em}
\sqitem {\lsep} \parbox[t]{17cm}{For scientists and researchers,
build innovative data-collection instruments
as well as interactive Research Object applications
{\color[rgb]{0.3,0,0.1}{(slides 18-20)}}.}\vspace{1.5em}
\end{itemize}
\end{lightquadblockc}
\end{minipage}}
\end{minipage}
\end{center}
}
\end{frame}
| {
"alphanum_fraction": 0.7269689737,
"avg_line_length": 35.5084745763,
"ext": "tex",
"hexsha": "41015fb8974d48707ec460ac2f48113830040316",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b",
"max_forks_repo_licenses": [
"BSL-1.0"
],
"max_forks_repo_name": "ScignScape-RZ/ntxh",
"max_forks_repo_path": "NA3/presentation/slide5.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSL-1.0"
],
"max_issues_repo_name": "ScignScape-RZ/ntxh",
"max_issues_repo_path": "NA3/presentation/slide5.tex",
"max_line_length": 116,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b",
"max_stars_repo_licenses": [
"BSL-1.0"
],
"max_stars_repo_name": "ScignScape-RZ/ntxh",
"max_stars_repo_path": "NA3/presentation/slide5.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 759,
"size": 2095
} |
\subsection{\label{ref:ZXBox}ZXBox}
\screenshot{plugins/images/ss-zxbox}{ZXBox}{img:zxbox}
ZXBox is a port of the ``Spectemu'' ZX Spectrum 48k emulator for Rockbox
(\url{https://sourceforge.net/projects/spectemu/}).
To start a game, open a tape file or snapshot saved as
\fname{.tap}, \fname{.tzx}, \fname{.z80} or \fname{.sna} in the file browser.\\
\note{As ZXBox is a 48k emulator only loading of 48k z80 snapshots is possible.}
\subsubsection{Default keys}
The emulator is set up for 5 different buttons: Up, Down, Left, Right and
Jump/Fire. Each one of these can be mapped to one key of the Spectrum Keyboard
or they can be used like a ``Kempston'' joystick. Per default the buttons,
including an additional but fixed menu button, are assigned as follows:
\begin{btnmap}
\opt{IPOD_3G_PAD,IPOD_4G_PAD}{\ButtonMenu/\ButtonPlay/}
\opt{RECORDER_PAD,ONDIO_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD,GIGABEAT_PAD,GIGABEAT_S_PAD%
,IAUDIO_X5_PAD,SANSA_C200_PAD,SANSA_CLIP_PAD,SANSA_E200_PAD,SANSA_FUZE_PAD,MROBE100_PAD%
,PBELL_VIBE500_PAD,SANSA_FUZEPLUS_PAD,SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}%
{\ButtonUp/\ButtonDown/}
\opt{IRIVER_H10_PAD}{\ButtonScrollUp/\ButtonScrollDown/}
\opt{IPOD_3G_PAD,IPOD_4G_PAD,RECORDER_PAD,ONDIO_PAD,IRIVER_H100_PAD%
,IRIVER_H300_PAD,GIGABEAT_PAD,GIGABEAT_S_PAD,IAUDIO_X5_PAD%
,SANSA_C200_PAD,SANSA_CLIP_PAD,SANSA_E200_PAD,SANSA_FUZE_PAD,MROBE100_PAD%
,IRIVER_H10_PAD,PBELL_VIBE500_PAD,SANSA_FUZEPLUS_PAD,SAMSUNG_YH92X_PAD%
,SAMSUNG_YH820_PAD}{\ButtonLeft/\ButtonRight}
\opt{COWON_D2_PAD}{\TouchTopMiddle{}/\TouchBottomMiddle{}/\TouchMidLeft{}/\TouchMidRight}
\opt{MPIO_HD200_PAD}{\ButtonVolDown / \ButtonVolUp / \ButtonRew / \ButtonFF}
\opt{MPIO_HD300_PAD}{\ButtonRew / \ButtonFF / \ButtonScrollUp / \ButtonScrollDown}
\opt{HAVEREMOTEKEYMAP}{& }
& Directional movement\\
%
\opt{IPOD_3G_PAD,IPOD_4G_PAD,GIGABEAT_PAD,GIGABEAT_S_PAD,IAUDIO_X5_PAD%
,SANSA_C200_PAD,SANSA_CLIP_PAD,SANSA_E200_PAD,SANSA_FUZE_PAD,MROBE100_PAD
,SANSA_FUZEPLUS_PAD}{\ButtonSelect}
\opt{RECORDER_PAD}{\ButtonPlay}
\opt{SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}{\ButtonPlay{} or \ButtonFF}
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOn}
\opt{ONDIO_PAD}{\ButtonMenu}
\opt{IRIVER_H10_PAD}{\ButtonRew}
\opt{COWON_D2_PAD}{\TouchCenter}
\opt{PBELL_VIBE500_PAD}{\ButtonOK}
\opt{MPIO_HD200_PAD}{\ButtonFunc}
\opt{MPIO_HD300_PAD}{\ButtonEnter}
\opt{HAVEREMOTEKEYMAP}{& }
& Jump/Fire\\
%
\opt{RECORDER_PAD}{\ButtonFOne}
\opt{ONDIO_PAD}{\ButtonOff}
\opt{IPOD_3G_PAD,IPOD_4G_PAD}{\ButtonHold{} switch}
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonMode}
\opt{GIGABEAT_PAD,GIGABEAT_S_PAD,COWON_D2_PAD}{\ButtonMenu}
\opt{SANSA_C200_PAD,SANSA_CLIP_PAD,SANSA_E200_PAD,MROBE100_PAD}{\ButtonPower}
\opt{SANSA_FUZE_PAD}{Long \ButtonHome}
\opt{IAUDIO_X5_PAD}{\ButtonPlay}
\opt{IRIVER_H10_PAD}{\ButtonFF}
\opt{PBELL_VIBE500_PAD}{\ButtonCancel}
\opt{MPIO_HD200_PAD}{\ButtonRec + \ButtonPlay}
\opt{MPIO_HD300_PAD}{Long \ButtonMenu}
\opt{SANSA_FUZEPLUS_PAD}{\ButtonBack}
\opt{SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}{\ButtonRew}
\opt{HAVEREMOTEKEYMAP}{& }
& Open ZXBox menu\\
\end{btnmap}
\subsubsection{ZXBox menu}
\begin{description}
\item[ Vkeyboard.]
This is a virtual keyboard representing the Spectrum keyboard. Controls are
the same as in standard Rockbox, but you just press one key instead of
entering a phrase.
\item[Play/Pause Tape.] Toggles playing of the tape (if it is loaded).
\item[Save Quick Snapshot.] Saves snapshot into \fname{/.rockbox/zxboxq.z80}.
\item[Load Quick Snapshot.] Loads snapshot from \fname{/.rockbox/zxboxq.z80}.
\item[Save Snapshot.]
Saves a snapshot of the current state. You would enter the full path and
desired name - for example \fname{/games/zx/snapshots/chuckie.sna}. The
snapshot format will be chosen after the extension you specified, per
default \fname{.z80} will be taken in case you leave it open.
\item[Toggle Fast Mode.]
Toggles fastest possible emulation speed (no sound, maximum frameskip etc.).
This is useful when loading tapes with some specific loaders.
\item[Options.]
\begin{description}
\item[Map Keys To Kempston.]
Controls whether the \daps{} buttons should simulate a ``Kempston''
joystick or some assigned keys of the Spectrum keyboard.
  \item[Display Speed.] Toggles display of the emulation speed (in percent).
\item[Invert Colours.]
Inverts the Spectrum colour palette, sometimes helps visibility.
\item[Frameskip]
Sets the number of frames to skip before displaying one. With zero
frameskip ZXBox tries to display 50 frames per second.
  \item[Sound.] Turns sound on or off.
  \item[Volume.] Controls volume of sound output.
\item[Predefined Keymap]
Select one of the predefined keymaps. For example \setting{2w90z} means:
map ZXBox's \btnfnt{Up} to \setting{2}, \btnfnt{Down} to \setting{w},
\btnfnt{Left} to \setting{9}, \btnfnt{Right} to \setting{0} and
\btnfnt{Jump/Fire} to \setting{z}. This example keymap is used in the
``Chuckie Egg'' game.
\item[Custom Keymap]
This menu allows you to map one of the Spectrum keys accessible through the
plugin's virtual keyboard to each one of the buttons.
\item[Quit.] Quits the emulator.
\end{description}
\end{description}
\nopt{ipodvideo}{% no scaling for here, still include it?
\subsubsection{Hacking graphics}
Due to ZXBox's simple (but fast) scaling to the screen by dropping lines and
columns, some games can become unplayable. It is possible to hack graphics to
make them more visible with the help of a utility such as the ``Spectrum
Graphics Editor''. Useful tools can be found at the ``World of Spectrum'' site
(\url{http://www.worldofspectrum.org/utilities.html}).}
| {
"alphanum_fraction": 0.7391158178,
"avg_line_length": 51.2844827586,
"ext": "tex",
"hexsha": "b19fc8e302f01c9deb693f87c4799d70d1e772b9",
"lang": "TeX",
"max_forks_count": 15,
"max_forks_repo_forks_event_max_datetime": "2020-11-04T04:30:22.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-01-21T13:58:13.000Z",
"max_forks_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC",
"max_forks_repo_path": "manual/plugins/zxbox.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2",
"max_issues_repo_issues_event_max_datetime": "2018-05-18T05:33:33.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-07-04T18:15:33.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC",
"max_issues_repo_path": "manual/plugins/zxbox.tex",
"max_line_length": 96,
"max_stars_count": 24,
"max_stars_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC",
"max_stars_repo_path": "manual/plugins/zxbox.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-05T14:09:46.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-03-10T08:43:56.000Z",
"num_tokens": 1862,
"size": 5949
} |
%\documentclass[12pt]{article}
\documentclass[12pt,landscape]{article}
\include{preamble}
\newcommand{\instr}{\small Your answer will consist of a lowercase string (e.g. \texttt{aebgd}) where the order of the letters does not matter. \normalsize}
\title{Math 342W / 650 Fall \the\year{} \\ Midterm Examination Two}
\author{Professor Adam Kapelner}
\date{Thursday, May 13, \the\year{}}
\begin{document}
\maketitle
%\noindent Full Name \line(1,0){410}
\thispagestyle{empty}
\section*{Code of Academic Integrity}
\footnotesize
Since the college is an academic community, its fundamental purpose is the pursuit of knowledge. Essential to the success of this educational mission is a commitment to the principles of academic integrity. Every member of the college community is responsible for upholding the highest standards of honesty at all times. Students, as members of the community, are also responsible for adhering to the principles and spirit of the following Code of Academic Integrity.
Activities that have the effect or intention of interfering with education, pursuit of knowledge, or fair evaluation of a student's performance are prohibited. Examples of such activities include but are not limited to the following definitions:
\paragraph{Cheating} Using or attempting to use unauthorized assistance, material, or study aids in examinations or other academic work or preventing, or attempting to prevent, another from using authorized assistance, material, or study aids. Example: using an unauthorized cheat sheet in a quiz or exam, altering a graded exam and resubmitting it for a better grade, etc.
\\
\noindent By taking this exam, you acknowledge and agree to uphold this Code of Academic Integrity. \\
%\begin{center}
%\line(1,0){250} ~~~ \line(1,0){100}\\
%~~~~~~~~~~~~~~~~~~~~~signature~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ date
%\end{center}
\normalsize
\section*{Instructions}
This exam is 100 minutes (variable time per question) and closed-book. You are allowed \textbf{two} 8.5 $\times$ 11'' pages (front and back) of a \qu{cheat sheet}, blank scrap paper and a graphing calculator. Please read the questions carefully. No food is allowed, only drinks. %If the question reads \qu{compute,} this means the solution will be a number otherwise you can leave the answer in \textit{any} widely accepted mathematical notation which could be resolved to an exact or approximate number with the use of a computer. I advise you to skip problems marked \qu{[Extra Credit]} until you have finished the other questions on the exam, then loop back and plug in all the holes. I also advise you to use pencil. The exam is 100 points total plus extra credit. Partial credit will be granted for incomplete answers on most of the questions. \fbox{Box} in your final answers. Good luck!
\pagebreak
\problem\timedsection{11} Consider the following causal diagrams where all events have other causes that are not displayed. It is assumed that the timing of events is known and to scale. Assume you have a training set $\mathbb{D}$ with a large sample size $n$ with columns x, y, z (and w if included).
\vspace{-.3cm}
\begin{figure}[htp]
\centering
\includegraphics[width=7in]{basic_causal_diagrams}
\end{figure}
\vspace{-.4cm}
\vspace{-0.3cm}\benum\truefalsesubquestionwithpoints{11}
\begin{enumerate}[(a)]
%\item In diagram A, x and z could be spuriously correlated.
%\item In diagram D, w and y could be spuriously correlated.
%\item In diagram E, w and y could be spuriously correlated.
%\item Assume the data generating process that $\mathbb{D}$ is sampled from is diagram A. When running the OLS model y $\sim$ x you find a strong correlation. This correlation is spurious.
\item Assume diagram A is the data generating process that $\mathbb{D}$ is sampled from. When running the OLS model y $\sim$ x you find a strong correlation. In out-of-sample data there would likely be a near-zero correlation between y and x.
\item In diagram A, z is a lurking variable when analyzing the causal effect of x on y.
\item In diagram B, z is a lurking variable when analyzing the causal effect of x on y.
\item In diagram D, w is a lurking variable when analyzing the causal effect of x on y.
\item In diagram E, w is a lurking variable when analyzing the causal effect of x on y.
\item In diagram C, x causes y (according to our in-class definition of causality).
\item In diagram B, an OLS model of y $\sim$ x demonstrates that x and y are correlated but this correlation disappears if you run an OLS model of y $\sim$ x + z
\item In diagram D, an OLS model of y $\sim$ x demonstrates that x and y are correlated but this correlation disappears if you run an OLS model of y $\sim$ x + w
\item In diagram D, x can be used to predict y better than $g_0$ (out of sample)
\item In diagram D, w can be used to predict y better than $g_0$ (out of sample)
\item In diagram E, an OLS model of y $\sim$ x demonstrates that x and y are correlated but this correlation disappears if you run an OLS model of y $\sim$ x + w
\end{enumerate}
\eenum\instr\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\problem\timedsection{7} \ingray{Consider the following causal diagrams where all events have other causes that are not displayed. It is assumed that the timing of events is known and to scale.} Assume now you have a training set $\mathbb{D}$ with a large sample size $n$ with columns x, y, z (and w if included).
%
%
%\begin{figure}[htp]
%\centering
%\includegraphics[width=7in]{basic_causal_diagrams}
%\end{figure}
%
%\vspace{-.5cm}
%
%\vspace{-0.2cm}\benum\truefalsesubquestionwithpoints{14}
%
%\begin{enumerate}[(a)]
%\item Assume the data generating process that $\mathbb{D}$ is sampled from is diagram A. When running the OLS model y $\sim$ x you find a strong correlation. This correlation is spurious.
%\item Assume the data generating process that $\mathbb{D}$ is sampled from is diagram A. When running the OLS model y $\sim$ x you find a strong correlation. In out-of-sample data there would likely be a near-zero correlation betweeen y and x.
%\item Assume the data generating process that $\mathbb{D}$ is sampled from is diagram E. When running the OLS model y $\sim$ x you find a strong correlation. This correlation is spurious.
%\item Assume the data generating process that $\mathbb{D}$ is sampled from is diagram E. When running the OLS model y $\sim$ x you find a strong correlation. In out-of-sample data there would likely be a near-zero correlation betweeen y and x.
%\end{enumerate}
%\eenum\instr\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\problem\timedsection{7} Consider the \texttt{iris} data frame, a famous dataset of four measurements on $n = 150$ iris flowers where the response is \texttt{Species}, a categorical variable with three levels: versicolor, virginica and setosa (where each species has 50 observations). Below is a sample:
\lstset{
basicstyle=\footnotesize,
xleftmargin=.2\textwidth, xrightmargin=.2\textwidth
}
\begin{lstlisting}
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
5.5 4.2 1.4 0.2 setosa
7.6 3.0 6.6 2.1 virginica
7.2 3.2 6.0 1.8 virginica
5.6 3.0 4.1 1.3 versicolor
5.2 4.1 1.5 0.1 setosa
...
\end{lstlisting}
\vspace{-0.7cm}
\noindent And also consider the \texttt{common\_name} data frame which provides the common names:
\lstset{
basicstyle=\footnotesize,
xleftmargin=.3\textwidth, xrightmargin=.3\textwidth
}
\begin{lstlisting}
Species English_Name
aphylla table iris
setosa bristle-pointed iris
reichenbachii rock iris
versicolor blue flag iris
flavescens lemonyellow iris
\end{lstlisting}
\vspace{-1cm}
\vspace{-0.2cm}\benum\truefalsesubquestionwithpoints{8}
%\begin{changemargin}{-0.8cm}{0cm}
Joining the two data frames on the \texttt{Species} column via a ...
\begin{enumerate}[(a)]
%\item These two data frames can be joined by either a left join, a right join, an inner join and a full join.\\
\item ... left join (where \texttt{iris} was on the left) would yield a new data frame with 150 rows.
\item ... right join (where \texttt{iris} was on the right) would yield a new data frame with 150 rows.
\item ... left join (where \texttt{common\_name} was on the left) would yield a new data frame with 150 rows.
\item ... right join (where \texttt{common\_name} was on the right) would yield a new data frame with 150 rows.
\item ... inner join would yield a new data frame with 150 rows.
\item ... full join would yield a new data frame with 150 rows.
\item ... inner join would yield a new data frame with 100 rows.
\item ... full join would yield a new data frame with 100 rows.
\end{enumerate}
%\end{changemargin}
\eenum\instr\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\problem\timedsection{6} \ingray{Consider the \texttt{iris} data frame, a famous dataset of four measurements on $n = 150$ iris flowers where the response is \texttt{Species}, a categorical variable with three levels: versicolor, virginica and setosa (where each species has 50 observations).} Consider the subset of the observations for only y = setosa. Below is a sample:
\lstset{
basicstyle=\footnotesize,
xleftmargin=.25\textwidth, xrightmargin=.25\textwidth
}
\begin{lstlisting}
Sepal.Length Sepal.Width Petal.Length Petal.Width
5.2 4.1 1.5 0.1
5.4 3.4 1.7 0.2
5.3 3.7 1.5 0.2
4.5 2.3 1.3 0.3
4.8 3.0 1.4 0.3
...
\end{lstlisting}
\vspace{-1cm}
\vspace{-0.2cm}\benum\truefalsesubquestionwithpoints{7}
If you were to convert this dataset from wide format to long format ...
\begin{enumerate}[(a)]
\item ... there would be two columns
\item ... there would be four columns
\item ... all columns would have continuous features
\item ... all columns would have categorical features
\item ... there would be 50 rows
\item ... there would be 200 rows
\item ... the resulting data frame would be easier to manipulate using a visualization library such as \texttt{ggplot2}
\end{enumerate}
\eenum\instr\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\problem\timedsection{9} \ingray{Consider the \texttt{iris} data frame, a famous dataset of four measurements on $n = 150$ iris flowers where the response is \texttt{Species}, a categorical variable with three levels: versicolor, virginica and setosa (where each species has 50 observations).} We fit a CART model with \texttt{Nodesize} = 1 to this dataset. The result is a model with 17 internal nodes and 9 leaf nodes with $\hat{y}$ values of 1 (= setosa), 2 (= virginica) and 3 (= versicolor). Below is an abridged illustration of the tree. Ignore the ``M$\rightarrow$'' and \qu{$\leftarrow$M} notation. Numbers in parentheses indicate the number of observations in a node. The left direction means the split condition is true.
\vspace{-0.2cm}
\begin{figure}[htp]
\centering
\includegraphics[width=5.3in]{classification_tree.png}
\end{figure}
\vspace{-0.3cm}
\vspace{-0.2cm}\benum\truefalsesubquestionwithpoints{14}
\vspace{-0.2cm}
\begin{enumerate}[(a)]
\item This is a regression tree model
\item This model can be written as a linear model of the form $g(\x) = b_0 + b_1 x_1 + b_2 x_2 + b_3 x_3 + b_4 x_4$ where $x_1, x_2, x_3, x_4$ are the four measurements \texttt{Sepal.Length}, \texttt{Sepal.Width}, \texttt{Petal.Length} and \texttt{Petal.Width}
\item This model is overfit
\item If the stump is considered depth zero, then this tree has a depth of 3
\item For all $\mathbb{D}$, a binary split on \texttt{Petal.Length} is the best overall split to reduce heterogeneity in the response
\item For future data, this model will likely predict setosa correctly with high probability
\item If \texttt{Petal.Length} $> 2.45$, there exists a second binary split that is able to isolate all virginica observations in one leaf and all versicolor observations in the other leaf
\item If \texttt{Petal.Length} $> 4.95$, $\hat{y}$ will always be the same
\end{enumerate}
\eenum\instr\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\problem\timedsection{6} Consider the following data frame displayed in random order, where missingness is visualized in red. The last column is the response $y$ and there is no missingness in the response. The feature that has the most missingness is $x_{34}$ and the observation with the most missingness is row \#10.
\begin{figure}[htp]
\centering
\includegraphics[width=7in]{missingness.png}
\end{figure}
\vspace{-0.2cm}\benum\truefalsesubquestionwithpoints{7}
\begin{enumerate}[(a)]
\item Dropping all observations with missingness (listwise deletion) will seriously impact future performance of any model fit with the data
\item The dataset exhibits one MCAR missingness mechanism and this mechanism is the same for all entries
\item The dataset may exhibit a different independent NMAR missingness mechanism for each feature
\item When building a predictive model, the most prudent thing to do is to drop the feature $x_{34}$ and to drop observation \#10
\item Imputing missing values using the columns' sample averages will result in a data frame with no missingness
\item The recommended procedure is to impute with the \texttt{missForest} and then use only the imputed matrix $\X$ to construct your model ignoring the original $\X$ matrix
\end{enumerate}
\eenum\instr\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\problem\timedsection{6} \ingray{Consider the following data frame, where missingness is visualized in red. The last column is the response $y$ and there is no missingness in the response. The feature that has the most missingness is $x_{34}$ and the observation with the most missingness is row \#10.}
\begin{figure}[htp]
\centering
\includegraphics[width=7in]{missingness.png}
\end{figure}
\noindent Now assume that $x_{34}$ and observation \#10 were dropped and the remaining missing values were imputed via the \texttt{missForest} algorithm.
\vspace{-0.2cm}\benum\truefalsesubquestionwithpoints{6}
\begin{enumerate}[(a)]
\item An OLS model cannot be fit on the imputed data frame
\item A ridge model with $\lambda = 10^{-6}$ cannot be fit on the imputed data frame
\item A lasso model with $\lambda = 10^{-6}$ cannot be fit on the imputed data frame
\item An elastic net model with $\lambda = 10^{-6}$ and $\alpha = 0.1$ cannot be fit on the imputed data frame
\item After using the model selection procedure to select $\lambda$ for the lasso based on oos $s_e$, the number of nonzero linear coefficients in the resulting model will likely be small relative to $p$
\item After using the model selection procedure to select $\lambda$ for the ridge based on oos $s_e$, the number of nonzero linear coefficients in the resulting model will likely be small relative to $p$
\end{enumerate}
\eenum\instr\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\problem\timedsection{7} Consider the \texttt{diamonds} data frame, which has $n = 53940$ diamonds with 9 measurements each and the response variable \texttt{price} measured in USD. Below is a sample of five observations:
%
%\lstset{
% basicstyle=\footnotesize,
% xleftmargin=.2\textwidth, xrightmargin=.2\textwidth
%}
%\begin{lstlisting}
% carat cut color clarity depth table price x y z
% 0.41 Very Good D SI2 62.3 61 638 4.72 4.75 2.95
% 0.50 Very Good F VS2 62.8 57 1402 5.05 5.08 3.18
% 1.03 Fair I SI2 65.2 56 3530 6.42 6.35 4.16
% 1.10 Ideal I SI1 62.1 57 5037 6.60 6.64 4.11
% 1.51 Very Good E VS2 63.3 61 13757 7.24 7.17 4.56
%...
%\end{lstlisting}
%\vspace{-.5cm}
%
%\noindent We wish to fit many models to this dataset: (1) OLS with the formula $y \sim .$ (2) OLS with the formula $y \sim . * .$ (3) OLS with the formula $y \sim . * . * .$ (4) a regression tree with \texttt{nodesize} = 10 (5) a Random Forest (RF) with 500 trees. The dataset is sampled into a distinct training set of size $n=3,000$, a distinct select set of $n=3,000$ and a distinct test set of $n=3,000$ and these three subsets are used in accordance with the modeling selection procedure we learned about in class.
%
%\vspace{-0.2cm}\benum\truefalsesubquestionwithpoints{14}
%
%\begin{enumerate}[(a)]
%\item The training set is fit to each of the five models above
%\item The select set is fit to each of the five models above
%\item Predicting on the training set is \qu{out of sample}
%\item Predicting on the select set is \qu{out of sample}
%\item Each of the five models above predict on the select set
%\item Each of the five models above predict on the test set
%\item The model that predicts on the test set was fit on $n=6000$ observations
%\item The final model was fit on $n=6000$ observations
%\item For the RF model, there is no need to use a separate out-of-sample set as you can assess the oos performance using the oob error estimate
%\end{enumerate}
%\eenum\instr\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\problem\timedsection{14} Consider the \texttt{diamonds} data frame, which has $n = 53940$ diamonds with 9 measurements each and the response variable \texttt{price} measured in USD. We wish to fit many models to this dataset: (1) OLS with the formula $y \sim .$ (2) OLS with the formula $y \sim . * .$ (3) OLS with the formula $y \sim . * . * .$ (4) a regression tree with \texttt{nodesize} = 10 (5) a Random Forest (RF) with 500 trees. Consider two model selection procedures:
\vspace{-0.1cm}
\begin{enumerate}[A.]
\item The three-split dataset model selection procedure where the original dataset is sampled into a distinct training set of size $n=3,000$, a distinct select set of $n=3,000$ and a distinct test set of $n=3,000$.
\item The \textit{nested resampling procedure} for model selection where $K_{select} = 3$ and $K_{test} = 5$ on a subset of $n=9,000$ observations from the original data frame.
\end{enumerate}
\vspace{-0.2cm}\benum\truefalsesubquestionwithpoints{16}
\begin{enumerate}[(a)]
\item In procedure [A], the training set is fit to each of the five models above
\item In procedure [A], the select set is fit to each of the five models above
\item In procedure [A], predicting on the training set is \qu{out of sample}
\item In procedure [A], predicting on the select set is \qu{out of sample}
\item In procedure [A], each of the five models above predict on the test set
\item In procedure [A], the model that predicts on the test set was fit on $n=6000$ observations
\item In procedure [A], the final model was fit on $n=6000$ observations
%\item For the RF model, there is no need to use a separate out-of-sample set as you can assess the oos performance using the oob error estimate
\item Procedure [B] is more computationally costly than procedure [A]
\item Procedure [B] has lower variance in its estimate of future performance
\item Procedure [B] is likely to select a model with better future performance than procedure [A]
\item There are a total of 16 models fit during procedure [B] including the final model
\item There are a total of 26 models fit during procedure [B] including the final model
\item There are a total of 65 models fit during procedure [B] including the final model
\item There are a total of 76 models fit during procedure [B] including the final model
\item During procedure [B], for each of the $K_{test}$ outer resamplings, there could be a different model that predicts on the test set
\item In procedure [B], the final model will be the modal model out of the $K_{test}$ models selected
\end{enumerate}
\eenum\instr\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\problem\timedsection{8} \ingray{Consider the \texttt{diamonds} data frame, which has $n = 53940$ diamonds with 9 measurements each and the response variable \texttt{price} measured in USD.} Beginning with the design matrix $\tilde{\X}$ created by the OLS model of $y \sim . * . * .$ we consider the \emph{greedy forward stepwise} linear model algorithm. The dataset is sampled into a distinct training set of size $n=3,000$, a distinct select set of $n=3,000$ and a distinct test set of $n=3,000$ and these three subsets are used in accordance with the model selection procedure we learned about in class.
\vspace{-0.2cm}\benum\truefalsesubquestionwithpoints{7}
\begin{enumerate}[(a)]
\item This algorithm at most has $p+1$ iterations where $p+1$ is the number of columns in $\tilde{\X}$ and in each iteration, it fits a different linear model
\item This algorithm likely has less than $p+1$ iterations
\item The prediction error in the training set is monotonically decreasing
\item The prediction error in the select set is monotonically decreasing
\item The prediction error in the test set is monotonically decreasing
\item When predicting on the test set, the model will definitely have the same number of degrees of freedom as the final model
\item The future performance of the model selected by the \emph{greedy forward stepwise} linear model algorithm will be better than the future performance of the OLS model of $y \sim .$ based on your knowledge of this dataset from class
\end{enumerate}
\eenum\instr\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\problem\timedsection{7} \ingray{Consider the \texttt{diamonds} data frame, which has $n = 53,940$ diamonds with 9 measurements each and the response variable \texttt{price} measured in USD. Beginning with the design matrix $\tilde{\X}$ created by the OLS model of $y \sim . * . * .$ we consider the \emph{greedy forward stepwise} linear model algorithm. The dataset is sampled into a distinct training set of size $n=3,000$, a distinct select set of $n=3,000$ and a distinct test set of $n=3,000$ and these three subsets are used in accordance with the model selection procedure we learned about in class.} The design matrix $\tilde{\X}$ has $p+1 = 1477$ columns. Consider the case where we run through all of the $t = 1, 2, \ldots, 1477$ iterations of this stepwise algorithm.
\vspace{-0.2cm}\benum\truefalsesubquestionwithpoints{7}
Let $g_t$ denote the model fit at every iteration $t$.
\begin{enumerate}[(a)]
\item As $t$ increases the bias of $g_t$ increases monotonically
\item As $t$ increases the variance of $g_t$ increases monotonically
\item As $t$ increases the MSE of $g_t$ increases monotonically
\item $g_1, g_2, \ldots, g_{1477}$ are independent models\\
Consider the model $g_{avg}$ that averages models $g_1, g_2, \ldots, g_{1477}$.
\item $g_{avg}$ is called a \qu{bagged} model
\item $g_{avg}$ has lower bias than $g_{1477}$
\item $g_{avg}$ always has lower MSE than the model that was selected by the greedy forward stepwise linear model algorithm
\end{enumerate}
\eenum\instr\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\problem\timedsection{12} Consider the \texttt{adult} data frame, which has data on $n = 32,560$ people with 14 measurements each and the response variable \texttt{income} which is binary (1 = the person has an income $>$50K and 0 = the person has an income $\leq$50K). Consider the following model fit to \texttt{adult\_train}, a training set of $n=10,000$ leaving the remainder of the data as a holdout set. The $\b$ vector for the fitted model is displayed below on the last line.
\lstset{
basicstyle=\footnotesize,
xleftmargin=.0\textwidth, xrightmargin=.0\textwidth
}
\begin{lstlisting}
> lmod = glm(income ~ age + hours_per_week + capital_gain + education_num, adult_train, family = "binomial")
Warning message:
glm.fit: fitted probabilities numerically 0 or 1 occurred
> coef(lmod)
(Intercept) age hours_per_week capital_gain education_num
-8.1582 0.0454 0.0380 0.0003 0.3200
\end{lstlisting}
\vspace{-0.8cm}\benum\truefalsesubquestionwithpoints{12}
\begin{enumerate}[(a)]
\item The vector $\b$ was computed solely using linear algebra calculations
\item For a future person $\x_*$, the quantity $\x_*\b$ can be used to produce probability predictions in the set (0, 1)
\item For a future person $\x_*$, if $\x_*\b < 0$, this model is predicting that it is impossible for this person's income to be $>$50K
\item The Brier score for this model's predictions in-sample will likely be closer to zero than the Brier score for predictions out-of-sample.
\item The probability predictions of this model will be the same predictions as a probit model fit to the same data
\item \qu{For a future person $\x_*$, the probability that this person's income $>$50K will increase by 0.044 if the person becomes one year older}.
\item \qu{For a future person $\x_*$, the probability that this person's income $>$50K will increase by 0.044 if the person becomes one year older as long as the three other measurements for this person do not change}.
\item If \texttt{age} increases and the three other variables remain the same, the predicted probability will increase for any $\x$
\item The warning message means there was numerical underflow and/or overflow during the computation of $\b$
\item For a 20yr-old with no job, capital gains or education, the probability he has an income of $>$50K is 0.9993 to the nearest four significant digits
\item For a 20yr-old with no job, capital gains or education, the probability he has an income of $>$50K is 0.0007095 to the nearest four significant digits
\item You have all the information necessary in this problem statement to trace out a DET curve
\end{enumerate}
\eenum\instr\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\problem\timedsection{14} \ingray{Consider the \texttt{adult} data frame, which has data on $n = 32,560$ people with 14 measurements each and the response variable \texttt{income} which is binary (1 = the person has an income $>$50K and 0 = the person has an income $\leq$50K).} Considering the model from the previous question fit to \texttt{adult\_train}, a training set of $n=10,000$ leaving the remainder of the data as a holdout set, we now predict on \texttt{adult\_test} and perform binary classification:
\lstset{
basicstyle=\footnotesize,
xleftmargin=.15\textwidth, xrightmargin=.15\textwidth
}
\begin{lstlisting}
> yhat = as.numeric(predict(lmod, adult_test, type = "response") > 0.9)
> table(y_test, yhat)
yhat
y_test 0 1
0 15107 30
1 4439 585
\end{lstlisting}
\vspace{-0.8cm}\benum\truefalsesubquestionwithpoints{9}
\begin{enumerate}[(a)]
\item This is likely an asymmetric cost classification model
\item The FDR, FOR, FPR calculated using the above table are honest estimates of future performance
\item This model makes mistakes 22.2\% of the time to the nearest three significant digits
\item This classification model is absolutely the best classification model you could build from \texttt{lmod} to minimize costs when the cost of a FP is \$9 and the cost of a FN is \$9
\item This model implies the point (0.00198, 0.116) on an ROC curve to the nearest three significant digits
\item When predicting in the future using this classification model, if $\hat{y} = 1$, then the probability of making an error is 4.88\% to the nearest three significant digits
\item If the cost of a false positive is \$100 and the cost of false negatives was negligible and there is no reward for classifying correctly, then you expect to lose \$0.15 per prediction to the nearest two significant digits
\item If the probability threshold was increased within the binary classification model, then the number of FP's would decrease
\item You have all the information necessary in this problem statement to compute the AUC for \texttt{lmod}
\end{enumerate}
\eenum\instr\pagebreak
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
| {
"alphanum_fraction": 0.7108842348,
"avg_line_length": 62.5842450766,
"ext": "tex",
"hexsha": "ec5b103a3399559a87245a2682fc5b2fab1c1cbb",
"lang": "TeX",
"max_forks_count": 19,
"max_forks_repo_forks_event_max_datetime": "2021-12-25T05:20:23.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-02-01T05:00:39.000Z",
"max_forks_repo_head_hexsha": "09afc327958f6c0aee12960617a49deff5d1304a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "HubertMajewski/QC_MATH_342W_Spring_2021",
"max_forks_repo_path": "exams/midterm2/midterm2.tex",
"max_issues_count": 12,
"max_issues_repo_head_hexsha": "09afc327958f6c0aee12960617a49deff5d1304a",
"max_issues_repo_issues_event_max_datetime": "2021-02-25T05:05:01.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-02-04T03:46:34.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "HubertMajewski/QC_MATH_342W_Spring_2021",
"max_issues_repo_path": "exams/midterm2/midterm2.tex",
"max_line_length": 893,
"max_stars_count": 7,
"max_stars_repo_head_hexsha": "b78a71aab88d8ad2401c16a0b6746356dbbc4afb",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "kapelner/QC_MATH_342W_Spring_2021",
"max_stars_repo_path": "exams/midterm2/midterm2.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-14T17:58:59.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-02-02T15:13:09.000Z",
"num_tokens": 7524,
"size": 28601
} |
\chapter{Sandbox}
This is inline tikz graphics \tikz \draw (0pt,0pt) -- (20pt,6pt); and the text continues after this.
\begin{tikzpicture}
\draw (-1.5,0) -- (1.5,0);
\draw (0,-1.5) -- (0,1.5);
%\draw (-1,0) .. controls (-1,1) and (1,1) .. (1,0);
\draw (0,0) circle [radius=10pt];
\end{tikzpicture}
\begin{tikzpicture}
\draw (0,0) .. controls (1,1) and (2,1) .. (2,0);
\end{tikzpicture}
| {
"alphanum_fraction": 0.5862944162,
"avg_line_length": 26.2666666667,
"ext": "tex",
"hexsha": "a72ded750d6db1ce9704390b42092a870636c625",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1a6a02bd6156e2beda44172b26d7e271ab07cc1a",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "mixignal/book-basic-eng",
"max_forks_repo_path": "content/sandbox.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1a6a02bd6156e2beda44172b26d7e271ab07cc1a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "mixignal/book-basic-eng",
"max_issues_repo_path": "content/sandbox.tex",
"max_line_length": 95,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1a6a02bd6156e2beda44172b26d7e271ab07cc1a",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "mixignal/book-basic-eng",
"max_stars_repo_path": "content/sandbox.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 177,
"size": 394
} |
% \documentclass[SE,authoryear,toc]{lsstdoc}
\documentclass[SE,lsstdraft,authoryear,toc]{lsstdoc}
% lsstdoc documentation: https://lsst-texmf.lsst.io/lsstdoc.html
\input{meta}
% Package imports go here.
% Local commands go here.
%If you want glossaries
%\input{aglossary.tex}
%\makeglossaries
\title{Rubin Observatory Construction Documentation Inventory}
% Optional subtitle
\setDocSubtitle{(Report from the Rubin Observatory Documentation Working Group)}
\author{%
Chuck Claver (chair),
David Cabrera (co-Chair),
Rob McKercher (co-chair),
John Andrew,
Diane Hascall,
Patrick Ingraham,
Tony Johnson,
Kristen Metzger,
Austin Roberts,
Jonathan Sick,
Matthew Rumore.
}
\setDocRef{SITCOMTN-012}
\setDocUpstreamLocation{\url{https://github.com/lsst-sitcom/sitcomtn-012}}
\date{\vcsDate}
% Optional: name of the document's curator
% \setDocCurator{The Curator of this Document}
\setDocAbstract{%
This technical note primarily brings together the sources of critical and historical documentation developed across the Rubin Observatory Construction Project; some sources developed by Rubin Observatory Pre-operations and Operations Teams are also included. This inventory is meant as a guide for the Rubin Operations Team to determine how and what documentation is transferred as a deliverable from the Construction Project.
}
% Change history defined here.
% Order: oldest first.
% Fields: VERSION, DATE, DESCRIPTION, OWNER NAME.
% See LPM-51 for version number policy.
\setDocChangeRecord{%
\addtohist{1}{2021-06-01}{Initial draft from Confluence pages.}{Chuck Claver}
}
\begin{document}
% Create the title page.
\maketitle
% Frequently for a technote we do not want a title page uncomment this to remove the title page and changelog.
% use \mkshorttitle to remove the extra pages
% ADD CONTENT HERE
% You can also use the \input command to include several content files.
\input{Intro}
\input{DocuShare}
\input{TechNotes}
\input{ConfluencePages}
\input{EngineeringModels}
\input{GitHub}
\input{SystemDatabases}
\input{VerificationReports}
\input{EPO}
\input{InfoTech}
% \input{OtherSources}
\newpage
\appendix
% Include all the relevant bib files.
% https://lsst-texmf.lsst.io/lsstdoc.html#bibliographies
\section{References} \label{sec:bib}
\renewcommand{\refname}{} % Suppress default Bibliography section
\bibliography{local,lsst,lsst-dm,refs_ads,refs,books}
% Make sure lsst-texmf/bin/generateAcronyms.py is in your path
\section{Acronyms} \label{sec:acronyms}
\input{acronyms.tex}
% If you want glossary uncomment below -- comment out the two lines above
%\printglossaries
\end{document}
| {
"alphanum_fraction": 0.7843212237,
"avg_line_length": 27.2395833333,
"ext": "tex",
"hexsha": "ad9eed6d2439ed530e7da37f66f1c4448be3216e",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-06-06T16:52:01.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-06-06T16:52:01.000Z",
"max_forks_repo_head_hexsha": "2a2e1503a0ea736a91156d6934ca16d7b32a7a13",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "lsst-sitcom/sitcomtn-012",
"max_forks_repo_path": "SITCOMTN-012.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2a2e1503a0ea736a91156d6934ca16d7b32a7a13",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "lsst-sitcom/sitcomtn-012",
"max_issues_repo_path": "SITCOMTN-012.tex",
"max_line_length": 426,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2a2e1503a0ea736a91156d6934ca16d7b32a7a13",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "lsst-sitcom/sitcomtn-012",
"max_stars_repo_path": "SITCOMTN-012.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 696,
"size": 2615
} |
\documentclass[12pt]{article}
\usepackage{graphicx}
\title{E-Vote}
\author{}
\begin{document}
\maketitle
\section{Introduction}
In this paper we describe E-Vote, a simple platform for online voting. Many online groups occasionally need to vote to elect a president, admit new members, or make collective decisions. These groups tend to have similar characteristics: members of the group communicate primarily via email (mailing lists), members do not personally know each other and may not trust other members or managers of the group, and members have various levels of interest and engagement in the decisions of the community and are easily turned off by complex voting procedures.
We developed the E-Vote system to provide a service to these communities. The system has the following characteristics:
\begin{itemize}
\item The system is open source and anybody can check the source code. The code is small and written in the Python language. This makes it easy for professionals in the field to check it.
\item The system can run as a service and one installation can run multiple elections. Anybody can log into the system, create a new election, register voters and managers, and customize the ballot using an easy-to-use WYSIWYG interface.
\item The system communicates with voters and managers by email.
\item Voters do not need to log into the system to vote. They only need to click on the link in the notification email, fill in a web form and submit it.
\item Each voter can only vote once per election.
\item Results are computed automatically at closing of the election and published.
\item Voting is completely anonymous. Even a hacker with a complete database dump of the system would not be able to link voters to ballots.
\item Each voter can check at any time that his vote has been properly recorded and not altered.
\item Each voter can independently and at any time perform an election recount.
\item Upon voting, each voter receives an email receipt containing a copy of their filled and anonymized ballot.
\item Managers are notified by email when a new vote is cast and receive a copy of the anonymized ballot.
\item All ballots, anonymized and digitally signed, are published, along with instructions to verify the digital signature.
\end{itemize}
\section{Workflow}
When a new user of the system wishes to create and run a new election, the user needs to log in (voting does not require login; only managing an election requires login). We will refer to this user as the election officer.
\begin{center}
\includegraphics[width=3in]{images/elections.png}
\end{center}
The election officer can create a new election by clicking on a button, editing the content of the model ballot, listing the emails of voters and those of the election managers, and declaring the election deadline. Election managers are users to be notified when a new vote is cast. The officer is also a manager.
\begin{center}
\includegraphics[width=4in]{images/edit.png}
\end{center}
The model ballot is created using a WYSIWYG editor similar to MS Word but using only the browser. Special tags like \{\{0\}\} allow the author to insert radio boxes into the ballot. The voters will check these radio boxes. Two or more radio boxes identified by the same code belong to the same radio group and are exclusive. For example:
\begin{verbatim}
Do you agree? {{0}} yes or {{0}} no
\end{verbatim}
Groups of radio boxes identified by different codes are independent. For example:
\begin{verbatim}
Do you agree with A? {{0}} yes or {{0}} no
Do you agree with B? {{1}} yes or {{1}} no
\end{verbatim}
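Internally, tags of this form can be expanded into regular HTML radio buttons. The following Python snippet is only an illustrative sketch of how such an expansion could be implemented (the function name and the counter logic are our assumptions, not necessarily the actual E-Vote code); it produces inputs in the same style as the ballots shown later ({\tt ck\_0}, {\tt ck\_1}, \dots):
\begin{verbatim}
import re

def expand_tags(model_ballot_html, disabled=False):
    # replaces every {{n}} tag with a radio box in group ck_n;
    # boxes in the same group get increasing values 1, 2, 3, ...
    counters = {}
    def repl(match):
        group = match.group(1)
        counters[group] = counters.get(group, 0) + 1
        extra = ' disabled="disabled"' if disabled else ''
        return ('<input type="radio" name="ck_%s" value="%s"%s />'
                % (group, counters[group], extra))
    return re.sub(r'\{\{(\d+)\}\}', repl, model_ballot_html)
\end{verbatim}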
After the election is created, it can be tested and edited. New voters and managers can be added. Voters and managers do not have to be existing users of the system and do not need an account. The election starts by emailing the members:
\begin{center}
\includegraphics[width=4in]{images/start.png}
\end{center}
Each email listed by the officer corresponds to a voter. When the election starts a new record is created in the system for each voter. A voter record is uniquely identified by a {\tt voter-uuid} code. UUID stands for Universal Unique IDentifier. It is a long random sequence of characters that is impossible to guess. The {\tt voter-uuid} is used to build a unique, one-time link, which is emailed to the voter and allows the voter to vote.
\begin{center}
\includegraphics[width=4in]{images/vote.png}
\end{center}
The system recognizes the user from the {\tt voter-uuid} in the URL.
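As an illustration, a one-time link of the kind listed at the end of this section could be built as in the following sketch (the helper name and the use of Python's {\tt uuid4} are our assumptions; only the URL layout is taken from the examples below):
\begin{verbatim}
import uuid

def make_voting_link(base_url, election_id):
    # a long random identifier that is impossible to guess
    voter_uuid = 'voter-' + uuid.uuid4().hex.upper()
    link = '%s/vote/%s/%s' % (base_url, election_id, voter_uuid)
    return voter_uuid, link

# make_voting_link('https://127.0.0.1:8000/evote/default', 2)
\end{verbatim}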
When the election starts, the system also generates one ballot for each voter. A ballot is a blank form with a unique identifier of the form {\tt ballot-id}, where the {\tt id} is composed of an election id and a ballot number. Ballots are not linked to the voters at this time.
When a user fills in and submits the voting form, the system picks a random ballot and records the vote on that ballot. The filled ballot contains the vote and a unique {\tt ballot-id}. The ballot is then digitally signed with the RSA algorithm, using a private key associated with the election and known only to the system. The completed ballot, anonymized and signed, is published online, emailed to the user as a receipt, and emailed to the managers.
\begin{center}
\includegraphics[width=4in]{images/ballot.png}
\end{center}
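The signing step can be sketched as follows. This is only an illustration of the mechanism described above (the function name, the hash choice and the key handling are our assumptions); it mirrors the verification code of the Validation Process section, since the published signature is the base16 encoding of an RSA signature over the anonymized ballot:
\begin{verbatim}
import base64, rsa

# the election key pair is created once, when the election is created
# public_key, private_key = rsa.newkeys(1024)

def sign_ballot(filled_ballot_html, private_key):
    # sign the anonymized ballot with the election's private RSA key
    signature = rsa.sign(filled_ballot_html, private_key, 'SHA-1')
    return base64.b16encode(signature)   # published as "5CF7...E246"
\end{verbatim}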
At this point the system has recorded that this voter has voted and his {\tt voter-uuid} will no longer be useful for voting. The voter can check that his vote was recorded because he received a receipt with a copy of the completed ballot.
When the election starts, the system also publishes the complete list of {\tt ballot-id}s. This allows everybody to check that new ballots have not been forged and that only existing ballots have been used for voting.
\begin{center}
\includegraphics[width=2in]{images/ballots.png}
\end{center}
When the election is closed, results become public. The election is completely recounted every time the {\tt results} page is visited. The system counts how many times each checkbox has been clicked.
\begin{center}
\includegraphics[width=4in]{images/results.png}
\end{center}
When the election closes, some users may not have voted. They still need proof that their ballot is empty and has not been used by somebody else. For this purpose, at the closing of the election, the system emails every user who has not voted one of the remaining blank ballots as a receipt.
\begin{center}
\includegraphics[width=4in]{images/ballot-blank.png}
\end{center}
In this way every user receives a receipt with a copy of their ballot (blank or filled).
\begin{verbatim}
https://127.0.0.1:8000/evote/default/elections
https://127.0.0.1:8000/evote/default/start/2
https://127.0.0.1:8000/evote/default/close_election/2
https://127.0.0.1:8000/evote/default/ballots/2
https://127.0.0.1:8000/evote/default/results/2
https://127.0.0.1:8000/evote/default/edit/2
https://127.0.0.1:8000/evote/default/vote/2/voter-160D...BE91
https://127.0.0.1:8000/evote/default/ballot/ballot-2-1
\end{verbatim}
\begin{verbatim}
ballot-2-1
\end{verbatim}
\begin{verbatim}
signature-5CF7...E246
\end{verbatim}
\begin{verbatim}
-----BEGIN RSA PUBLIC KEY-----
MIGJAoGBAMTZTDA2xdl1DFisptWMVFdEt3gj+ROKrP3lYbntvJnPBzqf1/gmTqKb
rqzmQ+uSv6P5FIR1nuFX+17LBDjg0Ovz5bFYHxWYdRy4ObktUlvH+wOs0Byi7k0z
8ZAKIKJYJQgpZptS0rqxXRqREHmEyPKT5aE6oiL9AAEvKoVxuVmrAgMBAAE=
-----END RSA PUBLIC KEY-----
\end{verbatim}
\section{Validation Process}
\begin{verbatim}
# import required libraries
import base64, rsa # install module with "pip install rsa".
# this is the ballot to verify
ballot = """
<div class="ballot"><h2>Election Title</h2>
<p>This is a ballot!</p>
<table>
<tbody><tr><td>Candidate 1</td><td><input disabled="disabled" name="ck_0" type="radio" value="1" /></td></tr>
<tr><td>Candidate 2</td><td><input checked="checked" disabled="disabled" name="ck_0" type="radio" value="2" /></td></tr>
<tr><td>Candidate 3</td><td><input disabled="disabled" name="ck_0" type="radio" value="3" /></td></tr>
</tbody></table>
<p>or</p>
<table>
<tbody><tr><td>Candidate 1</td><td><input checked="checked" disabled="disabled" name="ck_1" type="radio" value="1" /> yes</td><td><input disabled="disabled" name="ck_1" type="radio" value="2" /> no</td><td><input disabled="disabled" name="ck_1" type="radio" value="3" /> abstain</td></tr>
<tr><td>Candidate 2</td><td><input checked="checked" disabled="disabled" name="ck_2" type="radio" value="1" /> yes</td><td><input disabled="disabled" name="ck_2" type="radio" value="2" /> no</td><td><input disabled="disabled" name="ck_2" type="radio" value="3" /> abstain</td></tr>
<tr><td>Candidate 3</td><td><input checked="checked" disabled="disabled" name="ck_3" type="radio" value="1" /> yes</td><td><input disabled="disabled" name="ck_3" type="radio" value="2" /> no</td><td><input disabled="disabled" name="ck_3" type="radio" value="3" /> abstain</td></tr>
</tbody></table><pre>
ballot-2-1
</pre></div>
""".strip()
# this is the ballot RSA signature
signature = base64.b16decode("5CF72361F326D5980C8B031DA8D7B4279AE29E592DD49633C73232F7CD752D05DE32C09A359D44AB2D243FBBAA2185872CC3CB0A26EFDBAFDB69B207FD97B4D13B87E8421B848C817D191A97A3B5BB377637975AA4C52C81DF35E2B29B8B9F9D9B0E6E0473071E1D2279BC5A7434DB620AD4D7A7E22DBB711782CF614EAEE246")
# this is the election public key
pk_pem = """
-----BEGIN RSA PUBLIC KEY-----
MIGJAoGBAMTZTDA2xdl1DFisptWMVFdEt3gj+ROKrP3lYbntvJnPBzqf1/gmTqKb
rqzmQ+uSv6P5FIR1nuFX+17LBDjg0Ovz5bFYHxWYdRy4ObktUlvH+wOs0Byi7k0z
8ZAKIKJYJQgpZptS0rqxXRqREHmEyPKT5aE6oiL9AAEvKoVxuVmrAgMBAAE=
-----END RSA PUBLIC KEY-----
"""
# this is the code that verifies the signature
public_key = rsa.PublicKey.load_pkcs1(pk_pem)
try:
    # rsa.verify raises rsa.VerificationError when the signature
    # does not match the ballot
    rsa.verify(ballot, signature, public_key)
    print 'valid'
except rsa.VerificationError:
    print 'invalid'
\end{verbatim}
\section{Conclusions}
\end{document}
| {
"alphanum_fraction": 0.7711493434,
"avg_line_length": 57.1104651163,
"ext": "tex",
"hexsha": "cc04ae9befba2c5661bdf39a24b1a6444d2c5c3e",
"lang": "TeX",
"max_forks_count": 52,
"max_forks_repo_forks_event_max_datetime": "2021-10-19T16:23:46.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-01-07T17:51:02.000Z",
"max_forks_repo_head_hexsha": "e598a1dffdc104bfbdcd98d80d8a7a730f3944e4",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "SoulDev64/evoteGJ",
"max_forks_repo_path": "docs/evote_paper.tex",
"max_issues_count": 17,
"max_issues_repo_head_hexsha": "e598a1dffdc104bfbdcd98d80d8a7a730f3944e4",
"max_issues_repo_issues_event_max_datetime": "2021-05-02T15:50:29.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-02-02T00:29:41.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "SoulDev64/evoteGJ",
"max_issues_repo_path": "docs/evote_paper.tex",
"max_line_length": 560,
"max_stars_count": 104,
"max_stars_repo_head_hexsha": "e598a1dffdc104bfbdcd98d80d8a7a730f3944e4",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "SoulDev64/evoteGJ",
"max_stars_repo_path": "docs/evote_paper.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-12T06:56:42.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-04-16T00:13:54.000Z",
"num_tokens": 2774,
"size": 9823
} |
\documentclass{beamer}
\usepackage{graphicx}
\usepackage[latin1]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[english]{babel}
\usepackage{listings}
\usepackage{xcolor}
\usepackage{eso-pic}
\usepackage{mathrsfs}
\usepackage{url}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{multirow}
\usepackage{hyperref}
\usepackage{booktabs}
% \usepackage{bbm}
\usepackage{cooltooltips}
\usepackage{colordef}
\usepackage{beamerdefs}
\usepackage{lvblisting}
\usepackage{multimedia}
\usepackage{algorithmicx}
\usepackage[noend]{algpseudocode}
\usepackage{algorithm}
\pgfdeclareimage[height=2cm]{logobig}{hulogo}
\pgfdeclareimage[height=0.7cm]{logosmall}{Figures/LOB_Logo}
\renewcommand{\titlescale}{1.0}
\renewcommand{\titlescale}{1.0}
\renewcommand{\leftcol}{0.6}
\title[Eigenvalue Problems - Numerical Solutions]{Eigenvalues and Eigenvectors}
\authora{Thomas Siskos}
\authorb{}
\authorc{}
\def\linka{[email protected]}
\def\linkb{http://github.com/thsis/NIS18}
\def\linkc{}
\institute{Numerical Introductory Seminar \\
Humboldt--University Berlin \\}
\hypersetup{pdfpagemode=FullScreen}
\begin{document}
% 0-1
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame[plain]{
\titlepage
}
\frame{
\frametitle{Agenda}
\tableofcontents
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Motivation}
\frame[containsverbatim]{
\frametitle{PCA}
\begin{columns}[onlytextwidth]
\begin{column}{0.25\textwidth}
\begin{itemize}
\item The iris dataset is already linearly separable.
\item With various techniques we can show this in even more detail.
\end{itemize}
\end{column}
\begin{column}{0.75\textwidth}
\begin{figure}
\begin{center}
\includegraphics[scale=0.20]{../media/plots/iris_raw.png}
\caption{\href {https://github.com/thsis/NIS18/blob/master/tests/tests_models.py}{Iris Pairplot} \protect\includegraphics[scale=0.05]{qletlogo.pdf}}
\end{center}
\end{figure}
\end{column}
\end{columns}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame[containsverbatim, label=PCA-prop]{
\frametitle{PCA}
\begin{columns}[onlytextwidth]
\begin{column}{0.65\textwidth}
\begin{itemize}
\item objective:
\begin{equation}
\label{pca_obj}
max\ \delta^{\prime} Var \left(X\right) \delta \; s.t. \; \sum \delta_i^2 = 1.
\end{equation}
where $X \in \mathbb{R}^{n \times m}; m,n \in \mathbb{N}; \delta \in \mathbb{R}^m$
\item solution \hyperlink{PCA-proof}{\beamergotobutton{Proof}}
:
\begin{equation}
\label{pca_sol}
Y = \Gamma^{\prime} \left(X - \mu\right)
\end{equation}
where $Y \in \mathbb{R}^{n \times m}$ is the matrix of rotations,
$\Gamma \in \mathbb{R}^{m \times m}$ is the matrix of eigenvectors,
$\mu \in \mathbb{R}^m$ is the vector of sample means.
\end{itemize}
\end{column}
\begin{column}{0.35\textwidth}
\begin{figure}
\includegraphics[width=3.5cm, height=5cm]{../media/plots/iris_pca.png}
\caption{\href {https://github.com/thsis/NIS18/blob/master/tests/tests_models.py}{Iris PCA} \protect\includegraphics[scale=0.05]{qletlogo.pdf}}
\end{figure}
\end{column}
\end{columns}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame{
\frametitle{LDA}
\begin{itemize}
\item objective:
\begin{equation}
\label{lda_obj}
max \ \frac{w^{\prime}S_B w}{w^{\prime}S_W w},
\end{equation}
where
\begin{align*}
S_B &= \sum\limits_{c}^{C} (\mu_c - \mu)(\mu_c - \mu)^{\prime}, \\
S_W &= \sum\limits_{c}^{C} \sum\limits^{n}_{i=1} (x_i - \mu_c)(x_i - \mu_c)^{\prime}
\end{align*}
and $x_i \in \mathbb{R}^m$, $\mu_c$ is the vector of class means.
\end{itemize}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame[containsverbatim, label=LDA-prop]{
\frametitle{LDA}
\begin{columns}[onlytextwidth]
\begin{column}{0.65\textwidth}
\begin{itemize}
\item solution \hyperlink{LDA-proof}{\beamergotobutton{Proof}}:
\begin{equation}
\label{lda_sol}
S_B^{\frac{1}{2}}S_{W}^{-1}S_B^{\frac{1}{2}} v = \lambda v
\end{equation}
with $$ v = S_B^{\frac{1}{2}} w, $$ where this is again an Eigenvalue problem and its solution will provide the rotation that ensures the largest possible (linear) separability.
\item Now how do we get the Eigenvalues?
\end{itemize}
\end{column}
\begin{column}{0.35\textwidth}
\begin{figure}
\begin{center}
\includegraphics[width=3.5cm, height=5cm]{../media/plots/iris_lda.png}
\caption{\href {https://github.com/thsis/NIS18/blob/master/tests/tests_models.py}{Iris LDA} \protect\includegraphics[scale=0.05]{qletlogo.pdf}}
\end{center}
\end{figure}
\end{column}
\end{columns}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Key Idea \& Definitions}
\frame{
\frametitle{Eigenvalue}
If $A$ is an $n \times n$ matrix, $v$ is a non-zero vector and $\lambda$ is a scalar, such that
\begin{equation}
\label{eigenvalue-def}
Av = \lambda v
\end{equation}
then $v$ is called an \textit{eigenvector} and $\lambda$ is called an \textit{eigenvalue} of the matrix $A$.
An eigenvalue of A is a root of the characteristic equation,
\begin{equation}
\label{eigenvalue-solve}
det\left(A - \lambda I \right) = 0
\end{equation}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Characteristic Polynomial \& Diagonal Matrices}
\frame[label=companion_prop]{
\frametitle{The characteristic polynomial}
Consider the polynomial
\begin{equation}
\label{characteristic-polynomial}
f(\lambda) = \lambda^p + a_{p-1}\lambda^{p-1} + \dots + a_1 \lambda + a_0
\end{equation}
We now construct a matrix $A \in \mathbb{R}^{p \times p}$ such that the eigenvalues of $A$ are the roots of the polynomial $f(\lambda)$ \hyperlink{companion_exmpl}{\beamergotobutton{Example}}:
\begin{equation}
\label{companion-matrix}
A = \begin{bmatrix}
0 & 1 & 0 & \dots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
& & & \ddots & \\
0 & 0 & 0 & \dots & 1 \\
-a_0 & -a_1 & -a_2 & \dots & -a_{p-1} \\
\end{bmatrix}
\end{equation}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Similarity Transformations}
\frame[containsverbatim]{
\frametitle{General Idea}
What are the eigenvalues of $X_1$ and $X_2$?
\begin{columns}[onlytextwidth]
\begin{column}{0.3\textwidth}
\begin{equation*}
X_1 = \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 3 & 0 \\
0 & 0 & 0 & 4 \\
\end{bmatrix}
\end{equation*}
\end{column}
\begin{column}{0.7\textwidth}
\begin{equation*}
X_2 = \begin{bmatrix}
2.297 & -0.461 & -0.459 & 0.225 \\
-0.461 & 1.4 & -0.097 & -0.829 \\
-0.459 & -0.097 & 2.672 & 0.224 \\
0.225 & -0.829 & 0.224 & 3.631 \\
\end{bmatrix}
\end{equation*}
\end{column}
\end{columns}
}
\frame[label=similarity_prop]{
\frametitle{Similarity Transformations}
Two $n \times n$ matrices $A$ and $B$, are said to be \textit{similar} if there exists a nonsingular matrix $P$ such that
\begin{equation}
\label{similarity-prop}
A = P^{-1} B P
\end{equation}
If the two matrices $A$ and $B$ are similar, then they also share the same eigenvalues. \hyperlink{similarity_proof}{\beamergotobutton{Proof}}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Householder Reflections}
\frame[label=householder_prop]{
\frametitle{Householder Reflections}
Let $u$ and $v$ be orthonormal vectors and let $x$ be a vector in the space spanned by $u$ and $v$, such that
$$x = c_1 u + c_2 v$$
for some scalars $c_1$ and $c_2$. The vector
$$\tilde{x}=-c_1 u + c_2 v$$
is a \textit{reflection} of x through the line defined by the vector $v$. Now consider the matrix
\begin{equation}
Q = I - 2 uu^{\prime}.
\end{equation}
Note that \hyperlink{householder_proof}{\beamergotobutton{Proof}}:
$$Qx = \tilde{x}$$
}
\frame{
\frametitle{Householder Reflections}
We will use Householder-Reflections to transform a vector
$$a = (a_1, \dots, a_n)$$
into
$$\hat{a} = (\hat{a}_1, 0, \dots, 0)$$
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Givens Rotations}
\frame[containsverbatim]{
\frametitle{Givens Rotations}
Using orthogonal transformations we can also rotate a vector in such a way that a specified element becomes 0 and only one other element in the vector is changed.
\begin{columns}[onlytextwidth]
\begin{column}{0.5\textwidth}
$$
Q = \begin{bmatrix}
\cos\theta & \sin\theta \\
-\sin\theta & \cos\theta
\end{bmatrix}
$$
\end{column}
\begin{column}{0.5\textwidth}
\includegraphics[scale=.5]{../media/plots/givens.png}
\end{column}
\end{columns}
}
\frame[label=givens_prop]{
\frametitle{Givens Rotations}
\begin{equation}
V_{pq}(\theta) = \begin{bmatrix}
1 & & & & & & & & \\
& \ddots & & & & & & & \\
& & 1 & & & & & & \\
& & & \cos\theta & & \sin\theta & & & \\
& & & & \ddots & & & & \\
& & & -\sin\theta & & \cos\theta & & & \\
& & & & & & 1 & & \\
& & & & & & & \ddots & \\
& & & & & & & & 1 \\
\end{bmatrix}
\end{equation}
where $\cos\theta = \frac{x_p}{||x||}$ and $\sin\theta = \frac{x_q}{||x||}$
}
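%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[containsverbatim]
\frametitle{Givens Rotations: sketch}
A minimal NumPy sketch of the rotation just defined (our own illustration, not part of the cited sources); it assumes the normalization $r=\sqrt{x_p^2+x_q^2}$, so that entry $q$ of $V_{pq}\,x$ becomes exactly $0$:
\begin{verbatim}
import numpy as np

def givens(x, p, q):
    # cos and sin as on the previous slide
    r = np.hypot(x[p], x[q])
    c, s = x[p] / r, x[q] / r
    V = np.eye(len(x))
    V[p, p] = V[q, q] = c
    V[p, q], V[q, p] = s, -s
    return V        # (V @ x)[q] == 0, (V @ x)[p] == r
\end{verbatim}
\end{frame}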
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Algorithms}
\subsection{Jacobi-Method}
\frame{
\frametitle{Jacobi Method}
The Jacobi method for determining the eigenvalues of a symmetric matrix $A$ uses a sequence of orthogonal similarity transformations that result in the transformation:
$$A = P \Lambda P^{-1}$$ or rather: $$\Lambda=P^{-1}AP$$
where we use Givens Rotations to obtain $P$. The Jacobi iteration is:
\begin{equation}
\label{j-meth}
A^{k} = V_{p_k q_k}(\theta_k)^{\prime}A^{k-1}V_{p_k q_k}(\theta_k)
\end{equation}
The Jacobi Method is of $O(n^3)$
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame{
\frametitle{Jacobi-Method}
\begin{algorithm}[H]
\caption{\texttt{jacobi}}
\label{j-algo}
\begin{algorithmic}
\Require symmetric matrix $A$
\Ensure $0 < precision < 1$
\Statex \textbf{initialize: } $L \gets A$; $U \gets I$; $L_{max} \gets 1$
\While{$L_{max} > precision$}
\State Find indices $i$, $j$ of largest value in lower triangle of $abs(L)$
\State $L_{max} \gets L_{i,j}$
\State $\alpha \gets \frac{1}{2}\cdot \arctan(\frac{2A_{i, j}}{A_{i, i}-A_{j, j}})$
\State $V \gets I$
\State $V_{i, i}, V_{j, j} \gets \cos \alpha$; $V_{i, j}, V_{j, i} \gets -\sin \alpha, \sin \alpha$
\State $A \gets V^{\prime} A V$; $U \gets UV$
\EndWhile
\Return $diag(A), U$
\end{algorithmic}
\end{algorithm}
}
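%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[containsverbatim]
\frametitle{Jacobi-Method: sketch}
A small NumPy sketch of Algorithm \ref{j-algo} (our own illustration); \texttt{arctan2} is used only to guard the case $A_{ii}=A_{jj}$, everything else follows the pseudocode:
\begin{verbatim}
import numpy as np

def jacobi(A, precision=1e-9):
    L, U = A.astype(float), np.eye(len(A))
    while True:
        low = np.abs(np.tril(L, k=-1))
        i, j = np.unravel_index(low.argmax(), low.shape)
        if low[i, j] <= precision:
            return np.diag(L), U
        a = 0.5 * np.arctan2(2 * L[i, j], L[i, i] - L[j, j])
        V = np.eye(len(A))
        V[i, i] = V[j, j] = np.cos(a)
        V[i, j], V[j, i] = -np.sin(a), np.sin(a)
        L, U = V.T @ L @ V, U @ V
\end{verbatim}
\end{frame}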
\subsection{QR-Method}
\frame{
\frametitle{QR-Method}
The QR-Method is the most common algorithm for obtaining eigenvalues and eigenvectors of a matrix $A$. It relies on the so called QR-Factorization:
\begin{equation}
A = QR,
\end{equation}
where $Q$ is an orthogonal and $R$ is an upper triangular matrix. \\
The QR iteration is:
\begin{equation}
A^k = Q_{k-1}^{\prime} A_{k-1} Q_{k-1} = R_{k-1}Q_{k-1}
\end{equation}
The QR Method is of $O(n^3)$
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Basic Variant}
\frame{
\frametitle{Basic QR-Method}
\begin{algorithm}[H]
\caption{\texttt{QRM1}}
\label{qr1-meth}
\begin{algorithmic}
\Require square matrix $A$
\Statex \textbf{initialize: } $conv \gets False$
\While{not $conv$}
\State $Q, R \gets$ QR-Factorization of $A$
\State $A \gets RQ$
\If{$A$ is diagonal}
\State $conv \gets \texttt{True}$
\Statex
\EndIf
\EndWhile
\Return $diag\left(A\right), Q$
\end{algorithmic}
\end{algorithm}
}
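%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[containsverbatim]
\frametitle{Basic QR-Method: sketch}
A small NumPy sketch of \texttt{QRM1} (our own illustration); the test ``$A$ is diagonal'' is implemented with a tolerance, and the orthogonal factors are accumulated to return the eigenvector estimate:
\begin{verbatim}
import numpy as np

def qrm1(A, tol=1e-10):
    A, Qacc = A.astype(float), np.eye(len(A))
    while np.abs(A - np.diag(np.diag(A))).max() > tol:
        Q, R = np.linalg.qr(A)     # QR-Factorization of A
        A = R @ Q
        Qacc = Qacc @ Q
    return np.diag(A), Qacc
\end{verbatim}
\end{frame}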
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Hessenberg Variant}
\frame{
\frametitle{Refined QR-Method}
\begin{columns}[onlytextwidth]
\begin{column}{0.5\textwidth}
For faster convergence it is common to convert the matrix first into a so called upper Hessenberg form.
\end{column}
\begin{column}{0.5\textwidth}
$$
\begin{bmatrix}
X & X & X & X & X & X & X \\
X & X & X & X & X & X & X \\
0 & X & X & X & X & X & X \\
0 & 0 & X & X & X & X & X \\
0 & 0 & 0 & X & X & X & X \\
0 & 0 & 0 & 0 & X & X & X \\
0 & 0 & 0 & 0 & 0 & X & X \\
\end{bmatrix}
$$
\end{column}
\end{columns}
\begin{algorithm}[H]
\caption{\texttt{QRM2}}
\label{qr2-meth}
\begin{algorithmic}
\Require square matrix $A$
\State $A \gets \texttt{hessenberg(}A\texttt{)}$
\State continue with \Call{QRM1}{$A$}
\end{algorithmic}
\end{algorithm}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame{
\frametitle{QRM2 Visualized}
\begin{center}
\movie[width=8cm, height=5.35cm]{\includegraphics[width=8cm, height=5.35cm]{Figures/placeholder.jpg}}{Figures/qrm_symmetric.mp4}
\href {https://github.com/thsis/NIS18/blob/master/analysis/animation.py}{QR-Method} \includegraphics[scale=0.05]{qletlogo.pdf}
\end{center}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Accelerated Variant}
\frame{
\frametitle{Accelerated QR-Method}
We can accelerate the QR-Method by creating an artificial zero on the main diagonal of the Hessenberg form $T$ of $A^k$ at an iteration step $k$:
\begin{align*}
T^{\star} &= T - t _{p-1, p-1} I \\
T^{\star} &= QR \\
T &= T^{\star} + t _{p-1, p-1} I
\end{align*}
}
\frame{
\frametitle{Accelerated QR-Method}
\begin{algorithm}[H]
\begin{algorithmic}
\Require square matrix $A \in \mathbb{R}^{p \times p}$
\State $T \gets \texttt{hessenberg}(A),\ conv \gets False$
\While{not $conv$}
\State $Q, R \gets$ QR-Factorization of $T - t_{p-1, p-1} I$
\State $T \gets RQ + t_{p-1, p-1}I$
\If{$T$ is diagonal}
\State $conv \gets True$
\EndIf
\EndWhile
\Return $diag\left(T\right), Q$
\end{algorithmic}
\end{algorithm}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Analysis}
\subsection{Accuracy}
\frame{
\frametitle{Unit tests - Idea}
\begin{enumerate}
\item Construct a $p \times p$ matrix, with known Eigenvalues $\lambda_{true} \in \mathbb{R}^p$. To do this we can invert the spectral decomposition.
\item Run the implemented algorithm on it, obtain the computed Eigenvalues $\lambda_{algo} \in \mathbb{R}^p$.
\item Assess the $L_1$-norm $\|\lambda_{true} - \lambda_{algo}\|_1$, and pass the test if it is smaller than a threshold $\epsilon$
\end{enumerate}
Repeat the procedure $1000$ times.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Efficiency}
\frame{
\frametitle{Time taken}
\begin{figure}
\begin{center}
\caption{\href {https://github.com/thsis/NIS18/blob/master/tests/tests_eigen.py}{Unit-tests: Time} \includegraphics[scale=0.05]{qletlogo.pdf}}
\includegraphics[width=11.5cm, height=5cm]{../media/plots/time_boxplot.png}
\end{center}
\end{figure}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame{
\frametitle{Iterations needed}
\begin{figure}
\begin{center}
\caption{\href {https://github.com/thsis/NIS18/blob/master/tests/tests_eigen.py}{Unit-tests: Iterations} \protect\includegraphics[scale=0.05]{qletlogo.pdf}}
\includegraphics[width=11.5cm, height=5cm]{../media/plots/iterations_boxplot.png}
\end{center}
\end{figure}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame[label=PCA-proof]{
\frametitle{PCA: proof}
The objective of \hyperlink{PCA-prop}{\beamergotobutton{PCA}}:
$$
max\ \delta^{\prime} Var \left(X\right) \delta \; s.t. \; \sum \delta_i^2 = 1
$$
Corresponding Lagrangean:
$$
\mathcal{L}(Var \left(X\right), \delta, \lambda) =
\delta^{\prime} Var \left(X\right) \delta - \lambda \left(\delta^{\prime}\delta - 1\right),
$$
where $\lambda \in \mathbb{R}^m$ \\
First order condition:
\begin{align}
\frac{\partial \mathcal{L}}{\partial \delta} &\stackrel{!}{=} 0 \notag\\
2Var(X)\delta - 2\lambda_k \delta &\stackrel{!}{=} 0 \notag\\
Var(X)\delta &= \lambda_k \delta \notag
\end{align}
Which is now reduced to a common Eigenvalue problem.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame[label=LDA-proof]{
\frametitle{LDA: proof}
The objective of \hyperlink{LDA-prop}{\beamergotobutton{LDA}}:
$$ max \ \frac{w^{\prime}S_B w}{w^{\prime}S_W w},$$
Which we can reformulate to:
$$ max \ w^{\prime}S_B w \quad s.t. \quad w^{\prime}S_W w = 1. $$
Corresponding Lagrangean:
$$ \mathcal{L}(w, S_B, S_W, \lambda) = w^{\prime}S_B w - \lambda \left(w^{\prime}S_W w - 1\right)$$
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame{
\frametitle{LDA: proof}
First order condition:
\begin{align*}
\frac{\partial \mathcal{L}}{\partial w} &\stackrel{!}{=} 0 \notag \\
2S_B w - 2 \lambda S_W w &\stackrel{!}{=} 0 \notag \\
S_{W}^{-1}S_B w &= \lambda w, \\
\end{align*}
which is known as a generalized Eigenvalue problem. We can redefine
\begin{align*}
S_B &= S_B^{\frac{1}{2}} S_B^{\frac{1}{2}} \\
v &= S_B^{\frac{1}{2}} w
\end{align*}
}
\frame{
\frametitle{LDA: proof}
We then get:
\begin{align*}
S_{W}^{-1}S_B w &= \lambda w \\
S_{W}^{-1}S_B^{\frac{1}{2}} \underbrace{S_B^{\frac{1}{2}} w}_v &= \lambda w \\
S_B^{\frac{1}{2}} S_{W}^{-1}S_B^{\frac{1}{2}} v &=\lambda \underbrace{S_B^{\frac{1}{2}} w}_v
\end{align*}
We can also rewrite this as:
$$S_B^{\frac{1}{2}}S_{W}^{-1}S_B^{\frac{1}{2}} v = \lambda v,$$
which is now an Eigenvalue problem of a symmetric, positive semidefinite matrix \hyperlink{LDA-prop}{\beamergotobutton{back}}.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame[label=similarity_proof]{
\frametitle{Eigenvalues of similar matrices}
From the definition in (\ref{similarity-prop}) it follows immediately that a diagonalizable matrix $A$ with Eigenvalues $\lambda_1, \dots, \lambda_n$ is similar to the matrix $diag(\lambda_1, \dots, \lambda_n)$.
If $A$ and $B$ are similar, as in (\ref{similarity-prop}), it holds:
\begin{align*}
A - \lambda I &= P^{-1} B P - \lambda P^{-1} I P \\
&= P^{-1}\left(B - \lambda I\right)P.
\end{align*}
Taking determinants yields $det\left(A - \lambda I\right) = det\left(B - \lambda I\right)$, hence $A$ and $B$ have the same eigenvalues. Additionally, important transformations are based around orthogonal matrices. If $Q$ is orthogonal and
\begin{equation*}
A = Q^{\prime} B Q,
\end{equation*}
$A$ and $B$ are said to be \textit{orthogonally similar} \hyperlink{similarity_prop}{\beamergotobutton{back}}.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame[label=companion_exmpl]{
\frametitle{Companion-Matrix: Example}
Demonstrate that the companion matrix \hyperlink{companion_prop}{\beamergotobutton{back}}:
\begin{enumerate}
\item corresponds to a polynomial.
\item has eigenvalues equal to the roots of the polynomial.
\end{enumerate}
$$A = \begin{bmatrix}
0 & 1 \\
-a_0 & -a_1 \\
\end{bmatrix}$$
\begin{align*}
det \left(A - \lambda I\right) &= \begin{bmatrix}
0 - \lambda & 1 \\
-a_0 & -a_1- \lambda \\
\end{bmatrix} \\
&= -\lambda\left(-a_1 - \lambda\right) + a_0 \\
&= \lambda^2 + a_1 \lambda + a_0
\end{align*}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame[label=householder_proof]{
\frametitle{Householder Reflections: proof}
\hyperlink{householder_prop}{\beamergotobutton{back}}
Remember:
\begin{itemize}
\item $Q = I - 2 uu^{\prime}$.
\item $u$ and $v$ are orthonormal.
\end{itemize}
\begin{align*}
Qx &= c_1 u + c_2 v - 2c_1 uu^{\prime}u - 2 c_2 uu^{\prime}v \\
&= c_1 u + c_2 v - 2c_1 (u^{\prime}u)u - 2 c_2 (u^{\prime}v)u \\
&= -c_1 u + c_2 v\\
&= \tilde{x}
\end{align*}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame{
\frametitle{Sources}
\begin{thebibliography}{3}
\bibitem{NME}
Seffen B{\"o}rm and Christian Mehl.
Numerical Methods for Eigenvalue Problems.
Walter de Gruyter GmbH \& Co.KG, Berlin/Boston, 2012.
\bibitem{NLA}
James E. Gentle.
Numerical Linear Algebra for Applications in Statistics.
Springer Science + Business Media, New York, 2003.
\bibitem{MVA}
Wolfgang K. H{\"a}rdle and L{\'e}opold Simar.
Applied Multivariate Statistical Analysis.
Springer-Verlag Gmbh, Berlin, Heidelberg, 2015.
\end{thebibliography}
}
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
| {
"alphanum_fraction": 0.5698587861,
"avg_line_length": 32.3051359517,
"ext": "tex",
"hexsha": "8b2ffecf8cd002032f78f2b3054de143ab25a3c1",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1f2a7be1ab209fa7c0a25cb8eace744336b07c1f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "thsis/NIS18",
"max_forks_repo_path": "presentation/presentation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1f2a7be1ab209fa7c0a25cb8eace744336b07c1f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "thsis/NIS18",
"max_issues_repo_path": "presentation/presentation.tex",
"max_line_length": 196,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1f2a7be1ab209fa7c0a25cb8eace744336b07c1f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "thsis/NIS18",
"max_stars_repo_path": "presentation/presentation.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6681,
"size": 21386
} |
%%
%% This is file `lexample.tex',
%% Sample file for siam macros for use with LaTeX 2e
%%
%% October 1, 1995
%%
%% Version 1.0
%%
%% You are not allowed to change this file.
%%
%% You are allowed to distribute this file under the condition that
%% it is distributed together with all of the files in the siam macro
%% distribution. These are:
%%
%% siamltex.cls (main LaTeX macro file for SIAM)
%% siamltex.sty (includes siamltex.cls for compatibility mode)
%% siam10.clo (size option for 10pt papers)
%% subeqn.clo (allows equation numbners with lettered subelements)
%% siam.bst (bibliographic style file for BibTeX)
%% docultex.tex (documentation file)
%% lexample.tex (this file)
%%
%% If you receive only some of these files from someone, complain!
%%
%% You are NOT ALLOWED to distribute this file alone. You are NOT
%% ALLOWED to take money for the distribution or use of either this
%% file or a changed version, except for a nominal charge for copying
%% etc.
%% \CharacterTable
%% {Upper-case \A\B\C\D\E\F\G\H\I\J\K\L\M\N\O\P\Q\R\S\T\U\V\W\X\Y\Z
%% Lower-case \a\b\c\d\e\f\g\h\i\j\k\l\m\n\o\p\q\r\s\t\u\v\w\x\y\z
%% Digits \0\1\2\3\4\5\6\7\8\9
%% Exclamation \! Double quote \" Hash (number) \#
%% Dollar \$ Percent \% Ampersand \&
%% Acute accent \' Left paren \( Right paren \)
%% Asterisk \* Plus \+ Comma \,
%% Minus \- Point \. Solidus \/
%% Colon \: Semicolon \; Less than \<
%% Equals \= Greater than \> Question mark \?
%% Commercial at \@ Left bracket \[ Backslash \\
%% Right bracket \] Circumflex \^ Underscore \_
%% Grave accent \` Left brace \{ Vertical bar \|
%% Right brace \} Tilde \~}
\documentclass[final,leqno]{siamltex}
%\newcommand\floor[1]{\lfloor#1\rfloor}
\usepackage{amsmath}
\usepackage{svg}
\usepackage{graphicx}
% definitions used by included articles, reproduced here for
% educational benefit, and to minimize alterations needed to be made
% in developing this sample file.
\usepackage{caption}
\usepackage{subfig}
\usepackage{pdflscape}
\usepackage{mathtools}
\DeclarePairedDelimiter\floor{\lfloor}{\rfloor}
%\usepackage{listings}
%\lstset{basicstyle=\small\sffamily,%
% frame=shadowbox}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{soul}
\usepackage{color}
\usepackage{varioref}
\newcommand{\hlgray}[1]{{\sethlcolor{lightgray}\hl{#1}}}
\newcommand{\pe}{\psi}
\def\d{\delta}
\def\ds{\displaystyle}
\def\e{{\epsilon}}
\def\eb{\bar{\eta}}
\def\enorm#1{\|#1\|_2}
\def\Fp{F^\prime}
\def\fishpack{{FISHPACK}}
\def\fortran{{FORTRAN}}
\def\gmres{{GMRES}}
\def\gmresm{{\rm GMRES($m$)}}
\def\Kc{{\cal K}}
\def\norm#1{\|#1\|}
\def\wb{{\bar w}}
\def\zb{{\bar z}}
% some definitions of bold math italics to make typing easier.
% They are used in the corollary.
\def\bfE{\mbox{\boldmath$E$}}
\def\bfG{\mbox{\boldmath$G$}}
\title{Evaluating a sublinear-time algorithm \\ for the minimum spanning tree weight problem}
% The thanks line in the title should be filled in if there is
% any support acknowledgement for the overall work to be included
% This \thanks is also used for the received by date info, but
% authors are not expected to provide this.
\author{Gabriele Santi\thanks{Master's Degree in Computer Science, University of Rome ``Tor Vergata'' ({\tt [email protected]}).}
\and Leonardo De Laurentiis\thanks{Master's Degree in Computer Science, University of Rome ``Tor Vergata'' ({\tt [email protected]}).}}
\begin{document}
\maketitle
\begin{abstract}
We present an implementation and an experimental evaluation of an algorithm that, given a connected graph G (represented by adjacency lists), estimates in sublinear time, with a relative error $\varepsilon$, the Minimum Spanning Tree Weight of G (see~\cite{crt} for a theoretical exposition of the algorithm).
Since the theoretical performances have already been shown and demonstrated in the above-mentioned paper of Chazelle et al., our goal is, exclusively, to experimentally evaluate the algorithm and finally to present the results.
Some technical insights are given on the implementation of the algorithm and on the dataset used in the test phase, in order to show how the experiment has been carried out, also for reproducibility purposes;
the results are then evaluated empirically and widely discussed, comparing them with the performances of the Prim algorithm and the Kruskal algorithm, over several runs on a heterogeneous set of graphs and different theoretical models for them.
We assume hereafter that the reader has knowledge about the cited paper as we will just recap the theoretical results.
\end{abstract}
\begin{keywords}
minimum spanning tree, sublinear time algorithms, randomized algorithm, approximation algorithm, minimum spanning tree weight, experimental evaluation
\end{keywords}
\begin{AMS}
68W20, 68W25, 68R10
\end{AMS}
\pagestyle{myheadings}
\thispagestyle{plain}
\markboth{\scriptsize GABRIELE SANTI AND LEONARDO DE LAURENTIIS}{\scriptsize EVALUATING A SUBLINEAR-TIME ALGORITHM FOR THE MST WEIGHT PROBLEM}
\section{Introduction}
We will discuss here some preliminary observations and assumptions. First of all, we observe that we need a set of graphs that satisfies the following points:
\begin{itemize}
\item they should be finite and should not be multigraphs;
\item they should be undirected;
\item they should have weighted edges;
\item the weights on the edges should be integers; since the graph is finite, it is enough to show that there exists a $W$ which is the maximum weight on the edges of the graph;\footnote{more generally, it is enough to have a numerable set of values for the weights, which is always true when the graph is finite}
\item they might contain self-loops;
\item they have to be connected (the graph has only one connected component);
\item they should be represented with adjacency lists;
\item they should be represented in the same manner (we need an unvarying file format).
\end{itemize}
Unfortunately, the graphs and their representations, which can easily be found on the Internet, do not satisfy all of these requirements at the same time, although many standards for the file format are available.
Given this observation, our choice was to use randomly generated graphs, hence to implement our own graph generator. This gives us the opportunity to generate a wide set of connected graphs with tunable parameters, carefully chosen with the tests in mind; these parameters include the number of nodes, the number of edges and the edge weights, as well as the distribution law for the edges. The edges between the nodes are randomly constructed step by step, respecting the connectivity requirement. The different types of graphs that we use in our experimental evaluation are presented afterwards.
After studying the paper we made some assumptions. One of the problems we encountered is that the theoretical algorithm assumes to have as input only the graph $G$ and to have direct access to the family of graphs $G_i$\footnote{we recall that $G_i$ is the induced subgraph of $G$ such that the maximum weight on its edges is $i$ (e.g. $G_w$ is exactly $G$)}; by ``direct'' we mean that no computation is required to extract $G_i$, which is not true. In fact, we can easily show that a lower bound for the extraction of the whole family is, at least, $O(m)$ (i.e. it is linear in the number of edges). A na\"{\i}ve approach would cost $O(m)$ for the extraction of $G_i$ for a given $i$, hence $O(w m)$ for the whole family; a better approach could order the edges in $O(m \log m)$ and build the family in a single pass over the edges, achieving $O(m + m \log m)$. Having as input only $G$, it would seem that the algorithm is responsible for the extraction of the family, but this is not desirable due to this lower bound, which would sabotage the overall performance. Finally we decided to consider this cost as a part of the construction of the data structure of the graph, to be done prior to the call of the algorithm.
\section{Design studies and choices}
\subsection{Random Graphs Generator}
As mentioned before, our choice is to implement our own graph generator. The aim is to generate \emph{connected random graphs} with a specific set of parameters, like the number of nodes, the number of edges, the maximum edge weight and the average degree. Moreover, we want to test how our algorithm behaves in different environments, so we want to control the distribution law of the edges. Keep in mind that the graph has to satisfy all the points of the previous section, among which the connectivity requirement. Given a desired number of nodes $n$ and $e \geq n-1$ edges, the key concepts underlying the implementation of our connected graph generator are the following:
\begin{itemize}
\item we begin by generating a random permutation of the vertices $v_0, v_1, \dots v_n$
\item we generate a random spanning tree by iteratively adding the vertices of this vector in a uniformly random manner; supposing we have added $\tau$ vertices to the tree $T$, we randomly select a vertex $s$ in the set $\lbrace T_0, \dots, T_\tau \rbrace$ and add a new edge $<T_s, v_{\tau+1}>$. At the end we will have an acyclic graph with $n-1$ edges.
\item following a certain probability law, we add the remaining $e - (n-1)$ edges
\end{itemize}
Note that every time we add an edge, a weight is associated to it, with a value uniformly chosen in $\left[ 1, w \right]$.
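The three steps above can be sketched as follows; the actual generator is written in C++ (see the next sections), so this Python fragment is only an illustration with names of our own choosing:
\begin{verbatim}
import random

def random_connected_graph(n, e, w):
    # assumes e >= n - 1
    perm = list(range(n))
    random.shuffle(perm)              # random permutation of the vertices
    edges = set()
    for tau in range(1, n):           # random spanning tree, n - 1 edges
        s = perm[random.randrange(tau)]
        edges.add((min(s, perm[tau]), max(s, perm[tau])))
    while len(edges) < e:             # remaining edges (uniform law here)
        u, v = random.sample(range(n), 2)
        edges.add((min(u, v), max(u, v)))
    return [(u, v, random.randint(1, w)) for u, v in edges]
\end{verbatim}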
We also decided to use a custom file format to save the data, which is a \texttt{.ssv} file, standing for \emph{space separated values}: every line of the file corresponds to two edges, reporting the values $<v_s, v_t, w>$, that is source and target nodes and edge weight. Being the graph undirected, it is implicit that the edge $<v_t, v_s, w>$ also exists.
\subsection{Graph Data Structures}
We implemented two versions of the algorithm: the first using the well-known \emph{Boost} BGL\footnote{\texttt{Boost Graph Library}, \cite{bst}} for the graph data structures and efficient implementations of Kruskal's and Prim's algorithms; the second instead embodies our own implementation of both the structures and all the side algorithms. We decided to do so in order to obtain more control over the code for testing purposes and because, at the moment (version \texttt{1.61.0}), the BGL subgraphs framework presents some still unfixed bugs.
Unfortunately, our Kruskal algorithm, although using \emph{union-find} structures with path compression\footnote{it is known that the Kruskal algorithm time complexity is $O(m \log m)$, but it can be improved to $O(m \log n)$ using union-find structures with some heuristics conveniently applied}, is not as fine-tuned as the one provided by the Boost libraries; we did not take further time on this because, on the other side, our version of Prim's algorithm shows the \emph{same exact performances} as the Boost version and it counts as a theoretical lower bound, being an optimal algorithm and much faster than Kruskal's solution. Our data structures have been called FastGraph for their optimization over the operations required for the tests. In any case our data structures neither \emph{can} nor \emph{do} ``help'' the CRT\footnote{Short for B. Chazelle, R. Rubinfeld, and L. Trevisan.\cite{crt}} algorithm in improving its time complexity.
\subsection{Tuning the algorithm}
In the implementation of the algorithm, the graph stored in the text file is read into a \texttt{FastGraph} structure, which is of course a representation by adjacency lists.
We want to compare the performances of the CRT algorithm with a standard MST algorithm that in addition computes the total weight.\\
We want to emphasize now that the CRT algorithm is based on probabilistic assumptions; some parameters, which depend asymptotically on $\varepsilon$, should be selected carefully in order to
provide a good approximation very fast. These include:
\begin{itemize}
\item $r$, the number of vertices uniformly chosen in the ``approx-number-connected-components''. This number is critical for the performances, as it directly determines the running time of the entire algorithm.
\item $C$, the large constant that we use to pick the number of vertices that is determinant for the application of the original paper's Lemma 4.
\end{itemize}
Both these values largely affect the overall performances because they indirectly decide how many BFSes will be carried out by the CRT algorithm; the BFSes represent the fundamental cost driver of the entire run.
In our opinion, the choice of these parameters must initially depend on the number of vertices, so that they are dynamic while keeping their dependency on $\varepsilon$. We tried to untie this bond to $n$, but having static values for these parameters showed poor performances and a relative error exploding too fast as we grow the input instance.
As the paper says, the only requirement for $r$ is that it is $O(\frac{1}{\varepsilon^{2}})$ and $C \in O(\frac{1}{\varepsilon})$.
The demonstration reported in the original paper bounds these values in a way that helped us in choosing the right function to compute them; in fact, we have that\footnote{see Theorem 7 and Theorem 8 of \cite{crt}}
\[
r \sqrt{\frac{w}{n}} < \varepsilon < 1/2, \qquad \frac{C}{\sqrt{n}} < \varepsilon < 1/2
\]
hence we choose
\[
r = \floor*{ \frac{\floor*{ \sqrt{\frac{n}{w}} \varepsilon - 1 }}{\varepsilon^2}} \in O \left( \frac{1}{\varepsilon^2} \right)
\]
\[
C = \floor*{ \frac{\floor*{ \sqrt{n} \varepsilon - 1}}{\varepsilon}} \in O \left( \frac{1}{\varepsilon} \right)
\]
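For concreteness, the two formulas can be evaluated as in the following sketch (the function name is ours; the double floor mirrors the definitions above):
\begin{verbatim}
from math import floor, sqrt

def crt_parameters(n, w, eps):
    r = floor(floor(sqrt(float(n) / w) * eps - 1) / eps ** 2)
    C = floor(floor(sqrt(n) * eps - 1) / eps)
    return r, C
\end{verbatim}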
\subsection{Random Sequences}
Another problem we encountered was the random selection of $k$ distinct vertices out of $n$ with $k<n$; after some research, we found that this kind of problem cannot be solved in sublinear time in the number of vertices, for the same reasons exposed in the introduction about the subgraph family. The problem here is that we cannot extract $k$ values \emph{without reinsertion} without using an external data structure; this structure has a linear maintenance cost that depends on $n$. A special case is that in which $n$ is a prime that fulfills certain properties; the key idea is to make use of the properties of the quadratic residues of $n$\footnote{this is widely used in cryptography for ``format preserving encryption''; see~\cite{bbs} and~\cite{cafd}}. That permits us to extract a non-repeating random value in constant time in the range $0 \dots n-1$; sadly, this solution is not affordable here because we need a dynamic range for each call, so that we cannot fulfill such constraints for $n$.
The solution we found is to use Fisher-Yates sequences\footnote{originally described in \cite{fys} and then redesigned for computer use in \cite{fysc}}, which permit us to prepare the sequences in advance and then get different, distinct values at each call in constant time and with dynamic bounds. The cost of the preparation of those sequences is linear and is not considered in the total cost.
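The part of this idea that matters for the analysis, distinct values in constant time per call after a linear preparation, can be sketched as follows (Python for brevity, names ours; the dynamic-bounds handling of the actual C++ implementation is not reproduced):
\begin{verbatim}
import random

class FisherYatesSequence(object):
    def __init__(self, n):
        # prepared in advance: linear cost, not charged to the algorithm
        self.seq = list(range(n))
        random.shuffle(self.seq)      # Fisher-Yates shuffle
        self.pos = 0
    def draw(self):
        # each call returns a new distinct vertex in O(1)
        v = self.seq[self.pos]
        self.pos += 1
        return v
\end{verbatim}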
\section{Implementation choices}
We chose to implement the algorithm in C++ because we wanted a language that offers the advantages of an OOP language, not least faster development, that has a standard library and is sufficiently ``near'' the machine to allow tighter control over performances and memory occupation (e.g. avoiding the high variance in performance due to the overhead of a VM or an interpreter). We considered that absolute performances are not important here, whereas the relative performances of the CRT algorithm and of the other MST algorithms are the focus, so in the end the choice of the language is not of the greatest importance. The IDE tool used for the implementation is JetBrains CLion.
We also used GitHub, as our code versioning control system.
Also, as already mentioned, we did use the Boost library to extend the STL that C++ already offers; we implemented FastGraph in place of BGL also to address memory issues related to the method Boost uses to store subgraphs of a graph; we in fact used an advanced method to store the family of subgraphs $G_i, i = 0,1,\dots,w$ that simply stores the difference of vertices between them, $\Delta_{G_i} := V(G_i) - V(G_{i-1})$. It is always possible to reconstruct every $G_i$ because this family only has \emph{induced} subgraphs, and this cost is not taken into account in the computation of the total time.\\
The main function of the implementation is in the ``AlgoWEB.cpp'' file.
Launching the program from this file allows us to run either the CRT algorithm, the Prim algorithm, or the Kruskal algorithm, and to view either the running time or the computed weight of the Minimum Spanning Tree. It takes some arguments in input, namely the file containing the graph, a suitable value for $\varepsilon$ and the path where to save the results of the single run.
Besides, it is possible to run the random graph generator by the \texttt{grandom} utility we developed apart. It takes many arguments in input, that you can find just by calling
\begin{verbatim}
$> grandom -h|--help
\end{verbatim}
\subsection{Random Graphs Models}
In order to give exhaustive results, we designed \texttt{grandom} to produce 4 classes of random graphs:
\begin{itemize}
\item Erd\H{o}s-R\'{e}nyi model, which builds random graphs using a uniform distribution of the edges over the vertices; its average degree is $d \approx \frac{2m}{n}$;
\item Gaussian model, which builds random graphs with a ``cluster zone'' where the edges are more frequent, hence the gaussian shape of the degree distribution; we still have $d \approx \frac{2m}{n}$;
\item Barab\'{a}si-Albert model, which builds random \emph{scale-free} graphs. The average degree is $d \approx \frac{m}{n}$;
\item Watts-Strogatz model, which builds random graphs with \emph{small-world} properties and $d \approx \frac{2m}{n}$.
\end{itemize}
\label{ddd}
We want to emphasize here the fact that for the Barab\'{a}si-Albert model the average degree turns out to be different with respect to the other models; this is due to the algorithm the scientific literature offers to build such graphs. For the other models it is always possible to add an arbitrary number of edges keeping the theoretical properties valid; having, on the contrary, for the Barab\'{a}si-Albert model a certain probability $p_k$ at each step to successfully add an edge, it is not possible to build a graph with an arbitrary number of edges; the user can solely give the number of vertices. But then again, the theory states that if the algorithm starts with a complete graph of $m_0$ vertices (hence $m_0 - 1$ edges), it will produce a Barab\'{a}si-Albert graph whose average degree is scaled by this quantity. Our initial complete graph has $m_0 = 2$ vertices, so we will have $d \approx \frac{2m}{m_0 n} = \frac{m}{n}$. A little insight on the proof is the following: the distribution of the degree in the Barab\'{a}si-Albert model is a power law with a cubic exponent. Fortunately, in that case this distribution has a well defined mean. Applying a little arithmetic we can easily see the truthfulness of what was stated previously.
This difference in the models has to be borne in mind when reading the test results, because when comparing the performances over a Barab\'{a}si-Albert graph with $n$ vertices and $m$ edges and any other one of a different model with the same number of vertices and edges, we will have different average degrees. Since the CRT algorithm is designed to depend only on the latter and not on $m$ nor $n$, a multiplicative factor of $2$ is to be taken into account.
Our implementation provides a \emph{memory-aware}\footnote{it calculates the average using the \emph{progressive mean} technique, avoiding arithmetic overflow} subroutine that calculates the exact value of $d$ at each run; the values obtained so far agree with the statements above.
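The \emph{progressive mean} mentioned in the footnote is the usual running-mean update; a minimal Python sketch (the C++ subroutine itself is not reproduced here) is:
\begin{verbatim}
def progressive_mean(values):
    # running mean: never accumulates the raw sum of the degrees,
    # so the average cannot overflow on large graphs
    mean = 0.0
    for k, x in enumerate(values, 1):
        mean += (x - mean) / k
    return mean
\end{verbatim}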
\section{Tests}
The following section reports the results we had launching the algorithm over a variegate dataset of graphs, as described in the next paragraph.
\subsection{Dataset}
For each random graph model listed in the previous section we decided to produce a dataset; each dataset has a ``family'' of graphs that differ from each other following a pattern for the parameters. Precisely, we composed the dataset grouping sets of graphs based on the values of the parameters, following the rules:
\begin{itemize}
\item build one set of graphs for each model (\textsf{Uniform, Gaussian, Small-World, Scale-Free});
\item every set contains in turn other sets, one for each selected value of $n$, i.e. the number of nodes ($5000$, $30000$, $55000$, $80000$, $105000$, $130000$, $155000$, $180000$, $205000$, $230000$);
\item then again, for each selected value of $d$, i.e. the average degree ($20, 100, 200$); this way we also determine the desired value for $m$,
since we have, for our models and for the fact that $m \propto n$, a proportion between the two. It's easy to see that $d \, \xrightarrow{n \rightharpoonup + \infty} \, \frac{2m}{n}$, so we have the selected values for $m$ ($10, 50, 100$ times $n$)\footnote{operatively we fixed the desired number of edges $m$, so $d$ is a random variable with mean $\frac{2m}{n}$};
\item finally, a set for each selected value of $w$, i.e. the weight ($20, 40, 60, 80$).
\end{itemize}
In conclusion we have \textsf{the number of models} $\times$ \textsf{the number of selected values for $n$} $\times$ \textsf{the number of selected values for $d$ (hence $m$)} $\times$ \textsf{the number of selected values for $w$} for a total of $4 \cdot 10 \cdot 3 \cdot 4 = 480$ graphs. This dataset, made of plain text files as already described, is quite heavy: $42.1$ GiB of data.
\subsection{Runs}
Using the dataset just described, we used a pattern for the runs, in order to have a complete view of the behavior of the algorithm in the domain of the various parameters; every single result consists of three plots, and we obtained, inspired by the structure of the dataset:
\begin{itemize}
\item a set of results for each model;
\item this set containing in turn a set for each selected value of $\varepsilon$ ($0.2, 0.3, 0.4, 0.49999$);
\item then a set for each selected value of $d/2 \simeq \frac{m}{n}$ ($10, 50, 100$);
\item finally a set for each selected value of $w$.
\end{itemize}
The first two plots report the absolute and relative error of the CRT result compared to the correct one, calculated with Prim's algorithm; the third reports the trend of the running time. As we mentioned, we indeed used Kruskal too, but it is not reported here since it was not faster than Prim's algorithm and was not useful for the computation of the MST weight, which was already obtained with Prim's method.
We had this way a total of $3 \cdot 4 \cdot 3 \cdot 4 = 144$ plots, or better $48$ different cases of study. A comparison between those cases we consider meaningful finally concludes our work, as reported below in section~\ref{sec:results}.
As mentioned before, the parameters should be selected carefully in order to provide a good approximation very fast. In this sense $\varepsilon$ plays a crucial role here: since the CRT algorithm is a probabilistic algorithm and $\varepsilon$ is indeed the driver parameter both for the performances and for the accuracy of the estimated MST weight, it does not make sense to choose values too small for this parameter, as the performances could dramatically degrade (as, indeed, is expected).
So, although it could be of interest to study the right limit of $1/2$, we empirically noted that values of $\varepsilon$ below $0.2$ show undesired behaviour of the CRT algorithm, both for the computed MST weights and for the running times.
This is not unusual when dealing with theoretical algorithms that show \emph{asymptotical} properties; the class this algorithm belongs to is known as \emph{property testing algorithms}, whose requirement is to have a query complexity much smaller than the instance size of the problem. Given that, and the fact that the algorithm is not required to compute an \emph{exact} value, but to \emph{estimate} a value \emph{probabilistically near} to the correct one\footnote{iff the property we are looking for is a \emph{probabilistically checkable proof}, see~\cite{sta}}, we are not surprised that if the instance of the problem is small, the algorithm shows worse performances with respect to a ``deterministic'' method. Because of this, results for $\varepsilon < 0.2$ were neither comprehensive nor interesting and are not reported.
Given the intrinsically probabilistic nature of the CRT algorithm, we had to launch several runs on the same graph to have a good estimate of the CRT running time. For this purpose we decided to launch 10 runs for each graph of the dataset, and to take the average running time as the estimate of the running time; the number of runs has been decided as a compromise between having a low variance between the execution time's mode and mean values, and keeping a restrained number of tests over the whole dataset. For the output value of the approximated MST weight we instead took one of the computed values, chosen at random, to preserve the information on the tolerance of the estimate.
\section{Results}\label{sec:results}
Following there are considerations about the meaningful line charts regarding the results.
We know that the CRT time complexity is $O(dw\varepsilon^{-2} \log{\dfrac{dw}{\varepsilon}})$, where $d$ is the average degree of a graph; on the other hand, we know that the accuracy depends on $\varepsilon$ also, so we expect an inversely proportional relation with the running time. Therefore what we expect is that:
\begin{itemize}
\item by increasing one or more of the parameters $d$, $w$, there should be a worsening of the average running time;
\item keeping all the other parameters unchanged, if we consider an increase in the value of $\varepsilon$ there must be an improvement of the running time (as well as a worsening of the result, though);
\item vice versa, we expect the opposite behaviour if we decrease $d$ and/or $w$ or decrease $\varepsilon$ with the other parameters unchanged.
\end{itemize}
Let us call the above \emph{crucial parameters}.
What we still need to know is what happens to the error; we should expect a direct proportion with the running times, so the above considerations could be also valid for the error. On the contrary, we'll see that this is not exactly respected.
The first thing to show is that the CRT algorithm is sublinear in the number of edges, hence better than any other \emph{exact} algorithm. This can be easily seen in figures from~\ref{U_03_50_40_time_kruskal} to~\ref{U_03_50_40_rel}. For the rest of the plots listed from now on it is possible to see, above each graph, the values of the parameters for the presented run.
It is interesting to note that the correct value, computed with Prim's algorithm, is linear in the number of edges (figure~\ref{U_03_50_40_abs}); we want to stress the fact that this is not the general case, and that this trend is due to the uniform distribution of the weights over the edges. We will dissect this point further in section~\ref{rtn}.
\begin{figure}[htbp]
\centering
\subfloat[][\emph{Running time; for completeness, Kruskal's time is also reported}.]
{\includegraphics[width=.8\textwidth]{plots/uniform_03_50_40_time_kruskal}\label{U_03_50_40_time_kruskal}} \\
\subfloat[][\emph{Absolute error}.]
{\includegraphics[width=.45\textwidth]{plots/uniform_03_50_40_abs}\label{U_03_50_40_abs}} \quad
\subfloat[][\emph{Relative error}.]
{\includegraphics[width=.45\textwidth]{plots/uniform_03_50_40_rel}\label{U_03_50_40_rel}}
\caption{Sublinearity of the CRT algorithm and related error trends.}
\label{is_sublin}
\end{figure}
Given that the CRT algorithm respects the sub-linearity constraint, let us now see the variations that occur when selectively changing other parameters. For the sake of completeness, in figure~\ref{is_sublin} information about Kruskal's runs is reported, yet we will not report it in the charts that follow.
\subsection{Variations of crucial parameters}
\subsubsection{Average degree $d$}
Let us now see the behaviour for the variation of $d$; we will initially concentrate on the running time and solely on the \textsf{uniform} model. The selected values of $\varepsilon$ and $w$ are respectively $0.3$ and $40$.
\begin{figure}[htbp]
\centering
\subfloat[][$d \simeq 20$]
{\includegraphics[width=.65\textwidth]{plots/uniform_03_10_40_time}\label{U_03_10_40_time}} \\
\subfloat[][$d \simeq 100$]
{\includegraphics[width=.65\textwidth]{plots/uniform_03_50_40_time}\label{U_03_50_40_time}} \\
\subfloat[][$d \simeq 200$]
{\includegraphics[width=.65\textwidth]{plots/uniform_03_100_40_time}\label{U_03_100_40_time}}
\caption{Behaviour for the increase of $d$. ({\sf Uniform model})}
\label{d_increase_time}
\end{figure}
As we can see in figures~\ref{U_03_10_40_time} to~\ref{U_03_100_40_time}, there is a worsening in the performance \emph{for small instances}: an increase of the average degree $d$ is somewhat correlated to a loss of performance, to the point that our property testing algorithm needs more time than the deterministic one; still, that seems to be true only \emph{below} a certain size of the graph instance, so we don't lose the truthfulness of the theoretical time complexity: for a fixed $d^*$ it will always be possible to find empirically a certain number of edges $m^* \propto d^*$ beyond which the running time function is always below $C \cdot dw\varepsilon^{-2} \log{\dfrac{dw}{\varepsilon}}$ for a certain $C$.\footnote{that comes directly from the definition of \emph{asymptotic complexity}}
We want here to highlight the crucial fact that the algorithm behaves better on \emph{big} instances, where by ``\emph{big}'' we refer to the parameters the performance depends on, namely $d$ and $w$. Almost all the trends reported in this paper, in fact, show this initial ``bad'' curve and, from a certain value onward, a sublinear behaviour. We will discuss this further in section~\vref{gbb}.
\begin{figure}[htbp]
\centering
\subfloat[][$d \simeq 20$]
{\includegraphics[width=.65\textwidth]{plots/uniform_03_10_40_rel}\label{U_03_10_40_rel}} \\
\subfloat[][$d \simeq 100$]
{\includegraphics[width=.65\textwidth]{plots/uniform_03_50_40_rel}\label{U_03_50_40_rel2}} \\
\subfloat[][$d \simeq 200$]
{\includegraphics[width=.65\textwidth]{plots/uniform_03_100_40_rel}\label{U_03_100_40_rel}}
\caption{Error behaviour for the increase of $d$. ({\sf Uniform model})}
\label{d_increase_rel}
\end{figure}
We would speculate that the error could be related to this. Indeed in figure~\ref{d_increase_rel} we can see that an increase of $d$ corresponds to a dramatic annihilation of the error; to explain this point we use a simplified example. Let us consider a minimally connected graph\footnote{a connected graph with the least number of edges, i.e. $n-1$}; the algorithm launches some BFSes on a \emph{strict subset} of the graph, in several steps that can be interrupted according to certain rules, based in part on a stochastic process. It is easily provable that in this kind of graph we have a worst case of $n-1$ hops between two vertices $u$ and $v$; if we add a random edge to this graph, the length of this path decreases by \emph{at least} one hop. By induction we can hence prove that the \emph{diameter} of the graph decreases as $\left| E(G) \right|$ grows. In other words, having a stronger connection within the graph (i.e. a greater probability to visit a vertex $v$ from $u$ in $k$ hops of a random walk) increases the probability to have a complete view of the graph, that is, more \emph{information} about the MST weight.
Moreover, we saw in our study of all the results shown in this paper that the performance of the CRT is completely untied from the number of vertices $n$ and from the number of edges $m$ of the input graph; this also suggests that the error is in turn driven solely by the parameters responsible for the algorithm's complexity, as the results that follow are in fact going to prove.
In figure~\ref{d_increase_time2} we summarize the results for the Gaussian and Small-World models, noticing that they match the ones we showed for the uniform model. This suggests that the algorithm complexity does not depend on the dimension and clustering coefficient of the graphs, those being the main differences from one model to another.
\begin{figure}[htbp]
\centering
\subfloat[][gaussian, $d \simeq 20$]
{\includegraphics[width=.65\textwidth]{plots/gaussian_03_10_40_time}\label{G_03_10_40_time}}
\subfloat[][small-world, $d \simeq 20$]
{\includegraphics[width=.65\textwidth]{plots/smallworld_03_10_40_time}\label{SM_03_10_40_time}} \\
\subfloat[][gaussian, $d \simeq 100$]
{\includegraphics[width=.65\textwidth]{plots/gaussian_03_50_40_time}\label{G_03_50_40_time}}
\subfloat[][small-world, $d \simeq 100$]
{\includegraphics[width=.65\textwidth]{plots/smallworld_03_50_40_time}\label{SM_03_50_40_time}} \\
\subfloat[][gaussian, $d \simeq 200$]
{\includegraphics[width=.65\textwidth]{plots/gaussian_03_100_40_time}\label{G_03_100_40_time}}
\subfloat[][small-world, $d \simeq 200$]
{\includegraphics[width=.65\textwidth]{plots/smallworld_03_100_40_time}\label{SM_03_100_40_time}}
\caption{Execution times behaviour for the increase of $d$, different models.}
\label{d_increase_time2}
\end{figure}
In figure~\ref{d_increase_rel2} we summarize instead the trend of the relative error; we see here a slightly different evolution. We cannot conclude, as we did for the time complexity, that the error does not suffer from the different graph model. The error in fact depends on the clustering coefficient, because it grows with the number of unaccomplished BFSes: each of them in fact causes a loss of information. The algorithm, as well explained in~\cite{crt}, during the BFS phase avoids exploring nodes that show a high degree and even stops when it encounters hubs.
In other words, for equal values of $d$ in two different runs of the algorithm, we see that the time complexity trend remains the same; assuming that $\delta$ denotes the degree of a vertex, this tells us that the time complexity is tied to the \emph{mean} of $\delta$ and, since we observe a growth of the error on graphs that contain hubs, we also conclude that the relative error is tied to the \emph{variance} of $\delta$.
\begin{figure}[htbp]
\centering
\subfloat[][gaussian, $d \simeq 20$]
{\includegraphics[width=.65\textwidth]{plots/gaussian_03_10_40_rel}\label{G_03_10_40_rel}}
\subfloat[][small-world, $d \simeq 20$]
{\includegraphics[width=.65\textwidth]{plots/smallworld_03_10_40_rel}\label{SM_03_10_40_rel}} \\
\subfloat[][gaussian, $d \simeq 100$]
{\includegraphics[width=.65\textwidth]{plots/gaussian_03_50_40_rel}\label{G_03_50_40_rel}}
\subfloat[][small-world, $d \simeq 100$]
{\includegraphics[width=.65\textwidth]{plots/smallworld_03_50_40_rel}\label{SM_03_50_40_rel}} \\
\subfloat[][gaussian, $d \simeq 200$]
{\includegraphics[width=.65\textwidth]{plots/gaussian_03_100_40_rel}\label{G_03_100_40_rel}}
\subfloat[][small-world, $d \simeq 200$]
{\includegraphics[width=.65\textwidth]{plots/smallworld_03_100_40_rel}\label{SM_03_100_40_rel}}
\caption{Error behaviour for the increase of $d$, different models.}
\label{d_increase_rel2}
\end{figure}
\subsubsection{Maximum weight $w$}
Here we manipulate the value of $w$, similarly to what we have already done with the average degree. Figures~\ref{w_increase_time} and~\ref{w_increase_rel} are hereafter proposed; this time the other fixed values are $\varepsilon = 0.4$, $d = 50$. These graphs too have been built using the uniform model.
\begin{figure}[htbp]
\centering
\subfloat[][$w = 20$]
{\includegraphics[width=.65\textwidth]{plots/uniform_04_50_20_time}\label{U_04_50_20_time}}
\subfloat[][$w = 40$]
{\includegraphics[width=.65\textwidth]{plots/uniform_04_50_40_time}\label{U_04_50_40_time}} \\
\subfloat[][$w = 60$]
{\includegraphics[width=.65\textwidth]{plots/uniform_04_50_60_time}\label{U_04_50_60_time2}}
\subfloat[][$w = 80$]
{\includegraphics[width=.65\textwidth]{plots/uniform_04_50_80_time}\label{U_04_50_80_time}}
\caption{Behaviour for the increase of $w$. ({\sf Uniform model})}
\label{w_increase_time}
\end{figure}
This time we see the error growing as $w$ increases. So here the error is in direct proportion with the execution time, unlike the inverse relation it had when varying $d$. This is due to the fact that every iteration of the subroutine \texttt{approx-number-connected-components}, which the reader can find in the original paper and which is recalled in pseudocode~\ref{alg}, adds a further approximation to the final result, because it approximates the addend $\hat{c}$ that contributes to the total approximation, that is, the final error.
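To make this accumulation explicit, recall from pseudocode~\ref{alg} (writing $\hat{c}_i$ for the value returned by the $i$-th call) that the final estimate is assembled as
\[
\hat{v} \;=\; n - w + \sum_{i=1}^{w-1} \hat{c}_i ,
\]
where each $\hat{c}_i$ approximates the number of connected components of the subgraph $G^{(i)}$; a larger maximum weight $w$ therefore means more approximate addends, and their individual deviations accumulate into the deviation of $\hat{v}$.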
We also see that the size of the initial curve described so far grows proportionally to the worsening of the execution time trend. This is also observable in all the trends studied.
\begin{figure}[htbp]
\centering
\subfloat[][$w = 20$]
{\includegraphics[width=.65\textwidth]{plots/uniform_04_50_20_rel}\label{U_04_50_20_rel}}
\subfloat[][$w = 40$]
{\includegraphics[width=.65\textwidth]{plots/uniform_04_50_40_rel}\label{U_04_50_40_rel}} \\
\subfloat[][$w = 60$]
{\includegraphics[width=.65\textwidth]{plots/uniform_04_50_60_rel}\label{U_04_50_60_rel2}}
\subfloat[][$w = 80$]
{\includegraphics[width=.65\textwidth]{plots/uniform_04_50_80_rel}\label{U_04_50_80_rel}}
\caption{Error behaviour for the increase of $w$. ({\sf Uniform model})}
\label{w_increase_rel}
\end{figure}
\subsubsection{Error tolerance $\varepsilon$}
We test our algorithm for the values $\varepsilon = 0.2, 0.3, 0.4, 0.49999$ over uniformly generated graphs. As already explained, no values below $0.2$ are investigated. We see the trends in figures~\ref{e_increase_time} and~\ref{e_increase_rel}.
\begin{figure}[htbp]
\centering
\subfloat[][$\varepsilon = 0.2$]
{\includegraphics[width=.65\textwidth]{plots/uniform_02_50_40_time}\label{U_02_50_40_time}}
\subfloat[][$\varepsilon = 0.3$]
{\includegraphics[width=.65\textwidth]{plots/uniform_03_50_40_time}\label{U_03_50_40_time2}} \\
\subfloat[][$\varepsilon = 0.4$]
{\includegraphics[width=.65\textwidth]{plots/uniform_04_50_40_time}\label{U_04_50_40_time2}}
\subfloat[][$\varepsilon = 0.49999$]
{\includegraphics[width=.65\textwidth]{plots/uniform_049999_50_40_time}\label{U_049999_50_40_time}}
\caption{Behaviour for the increase of $\varepsilon$. ({\sf Uniform model})}
\label{e_increase_time}
\end{figure}
As expected, we note that the time trends tend to decrease as $\varepsilon$ increases: since we tolerate a higher error on the computed value, the algorithm is less aggressive on computation and takes less time.
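Quantitatively, a rough computation on the bound (ours, not a measured figure) gives a feeling for the magnitude: halving the tolerance from $\varepsilon = 0.4$ to $\varepsilon = 0.2$ multiplies the $\varepsilon^{-2}$ factor by four and slightly increases the logarithmic factor, so with $d = 50$ and $w = 40$ the bound grows by a factor of about
\[
\left(\frac{0.4}{0.2}\right)^{2} \frac{\log\frac{50\cdot 40}{0.2}}{\log\frac{50\cdot 40}{0.4}} \;=\; 4\,\frac{\log 10000}{\log 5000} \;\approx\; 4.3 ,
\]
which gives an idea of the gap one should expect between the $\varepsilon = 0.2$ and $\varepsilon = 0.4$ runs in figure~\ref{e_increase_time}.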
\begin{landscape}
\begin{figure}[htbp]
\centering
\subfloat[][$\varepsilon = 0.2$]
{\includegraphics[width=.411\textwidth]{plots/uniform_02_50_40_rel}\label{U_02_50_40_rel}}
\subfloat[][$\varepsilon = 0.3$]
{\includegraphics[width=.411\textwidth]{plots/uniform_03_50_40_rel}\label{U_03_50_40_rel3}}
\subfloat[][$\varepsilon = 0.4$]
{\includegraphics[width=.411\textwidth]{plots/uniform_04_50_40_rel}\label{U_04_50_40_rel2}}
\subfloat[][$\varepsilon = 0.49999$]
{\includegraphics[width=.411\textwidth]{plots/uniform_049999_50_40_rel}\label{U_049999_50_40_rel}} \\
\subfloat[][$\varepsilon = 0.2$]
{\includegraphics[width=.411\textwidth]{plots/uniform_02_50_40_abs}\label{U_02_50_40_abs}}
\subfloat[][$\varepsilon = 0.3$]
{\includegraphics[width=.411\textwidth]{plots/uniform_03_50_40_abs}\label{U_03_50_40_abs2}}
\subfloat[][$\varepsilon = 0.4$]
{\includegraphics[width=.411\textwidth]{plots/uniform_04_50_40_abs}\label{U_04_50_40_abs}}
\subfloat[][$\varepsilon = 0.49999$]
{\includegraphics[width=.411\textwidth]{plots/uniform_049999_50_40_abs}\label{U_049999_50_40_abs}} \\
\caption{Absolute and relative value of the error in a {\sf Uniform} dataset, as $\varepsilon$ changes; fixed values are $\bar{d} = 50$, $w_\textup{max} = 40$.}
\label{e_increase_rel}
\end{figure}
\end{landscape}
For the error trend we instead note an increase, still as expected; figures~\ref{U_02_50_40_rel} to~\ref{U_049999_50_40_rel} have to be read with attention because every function graph has a different scale. The reader must not be misled by the apparent lowering of the error, since that is not the case. Looking at the other graphs, about the absolute error, from~\ref{U_02_50_40_abs} to~\ref{U_049999_50_40_abs}, we see an expansion of the tolerance cone (light blue and ochre yellow lines), as increasing $\varepsilon$ means admitting a higher deviation from the correct MST weight value. Here too the different scaling must not obscure the increasing trend of the error.
We see that the result is coherent with the theoretical model, as the error increases with $\varepsilon$, but its variation is, after all, contained.
\subsection{Variations of graph model}
As a last comparison, instead of varying the crucial parameters one by one and observing the time and error trends, we fix all of them and change the model the graph belongs to. Figures~\vref{model_variation} show those results. We see here that both the uniform and small-world models keep the same trend for the error, but the small-world one behaves slightly better on small instances. On the contrary, the gaussian model shows the same time trend as the uniform case, but its error has a higher growth curve. The scale-free model seems to be the worst case regarding both time and error trends, but we should remember, as observed earlier at~\vref{ddd}, that a fair comparison requires considering a scale factor of $2$, as done in figure~\vref{model_variation_part}. We see in fact that, compared to the uniform case, the scale-free model has an even better behaviour, looking carefully at the function graph scale: but after all, both of them show sublinear complexity for instances of more than $10^7$ edges, so both have the same transient as well as the same steady-state trend.
Another thing we note is that a bad trend in execution times is always bound to an explosion of the error, as we can see in the charts so far. This means that using more time to compute the value does not mean it will be nearer to the correct one.
\begin{landscape}
\begin{figure}[htbp]
\centering
\subfloat[][uniform, time trend]
{\includegraphics[width=.411\textwidth]{plots/uniform_04_50_60_time}\label{U_04_50_60_time}}
\subfloat[][gaussian, time trend]
{\includegraphics[width=.411\textwidth]{plots/gaussian_04_50_60_time}\label{G_04_50_60_time}}
\subfloat[][small world, time trend]
{\includegraphics[width=.411\textwidth]{plots/smallworld_04_50_60_time}\label{SW_04_60_40_time}}
\subfloat[][scale free, time trend]
{\includegraphics[width=.411\textwidth]{plots/scalefree_04_50_60_time}\label{SF_04_60_40_time}} \\
\subfloat[][uniform, relative error]
{\includegraphics[width=.411\textwidth]{plots/uniform_04_50_60_rel}\label{U_04_50_60_rel}}
\subfloat[][gaussian, relative error]
{\includegraphics[width=.411\textwidth]{plots/gaussian_04_50_60_rel}\label{G_04_50_60_rel}}
\subfloat[][small world, relative error]
{\includegraphics[width=.411\textwidth]{plots/smallworld_04_50_60_rel}\label{SW_04_50_60_rel}}
\subfloat[][scale free, relative error]
{\includegraphics[width=.411\textwidth]{plots/scalefree_04_50_60_rel}\label{SF_04_50_60_rel}}
\caption{Different behaviours for different graph models; fixed values are $\varepsilon = 0.4$, $\bar{d} = 50$, $w_\textup{max} = 60$.}
\label{model_variation}
\end{figure}
\end{landscape}
\begin{figure}[htbp]
\centering
\subfloat[][{\sf uniform}, $\bar{d} = 100$]
{\includegraphics[width=.65\textwidth]{plots/uniform_04_100_60_time}\label{U_04_100_60_time}}
\subfloat[][{\sf scale free}, $\bar{d} = 50$]
{\includegraphics[width=.65\textwidth]{plots/scalefree_04_50_60_time}\label{SF_04_50_60_time2}}
\caption{A more correct comparison between the general case and scale-free graphs ($\varepsilon = 0.4$, $w_\textup{max} = 60$.)}
\label{model_variation_part}
\end{figure}
\section{A specific case of study}\label{gbb}
At this point, all of our graphs show a bad initial curve every time we burden the input instance; in one specific case of study this anomalous curve was so persistent that the whole function graph was more than linear; we decided to deepen this special case of study, performing a longer run to see the tail of this particular trend. The results are reported in figure~\vref{long_run}.
\begin{figure}[htbp]
\centering
\subfloat[][Heavy case of \textsf{uniform}, $\varepsilon = 0.3$, $d \simeq 200$, $w = 80$]
{\includegraphics[width=.5\textwidth]{plots/uniform_03_100_80_time}\label{U_03_100_80_time}}
\subfloat[][Long run, up to 50 millions edges]
{\includegraphics[width=.7\textwidth]{plots/bg_time}\label{bg_time}} \\
\subfloat[][Long run, error trend]
{\includegraphics[width=.75\textwidth]{plots/bg_err}\label{bg_err}}
\caption{Long run results; figure (a) shows the original case of study, while (b) shows its extension.}
\label{long_run}
\end{figure}
We can see that the original case of study hinted at a sublinear trend beyond instances of 20 million edges, but we decided to investigate further, and figure~\ref{bg_time} confirms the sublinearity. We also have a good behaviour of the error trend (figure~\ref{bg_err}).
\section{Conclusions and final thoughts}
As expected, a probabilistic algorithm like the CRT allows us to compute an approximation of the Minimum Spanning Tree weight in sublinear time on the number of edges, under certain conditions.
Tunable parameters, which depend on $\varepsilon$, allow us to perform either a better or a worse approximation, implying respectively a very slow or a very fast computation. The choice of a small value of $\varepsilon$ can lead to terrible running times, and for these values it does not make sense to compare the CRT algorithm with any deterministic algorithm.\\
For the other values of $\varepsilon$, instead, we showed the good performance of the CRT. The reader can easily see the better performance of the CRT algorithm versus Prim's or Kruskal's algorithm by looking at the line charts in the previous section of this paper.
More in general, we see that execution time and error depend on the number of BFSes successfully completed during the computation. The more BFSes are completed, the more information the algorithm has to predict a correct value for the MST weight; on the other hand, completing the BFSes takes time. If instead we have a lot of interrupted BFSes, we waste a large amount of time without gathering information, hence resulting in both a high execution time and a high error.
We considered so far different theoretical graph models, and we conclude that a high \emph{clustering coefficient} tends to increase the probability of having interrupted BFSes. This is because one of the reasons the algorithm has to interrupt the search is having encountered a hub, i.e. a vertex with a high degree. We saw in fact that when changing the model there is a slight perturbation of the trend, although it remains sublinear. The key concept is the ``distance'' between the hubs of the graph and the root vertex from which our BFS starts.
Generally we also concluded that increasing the average degree lets our algorithm gather more information, because the probability of visiting a generic node $u$ of our graph grows. On the other side, increasing the maximum weight corresponds to an increase in the number of iterations our algorithm performs, which leads to summing more intermediate approximated results and implies a higher final approximation error.
\subsection{Parallel implementation}
We observe that the CRT algorithm lends itself very well to a parallel implementation. Indeed the majority of the algorithm's code is organized into independent sections, and in most cases they don't need to communicate with each other. We also observe that three levels of parallelism can be achieved within the code. At the first level we parallelize each of the $w-1$ independent calls to \texttt{approx-number-connected-components}, as depicted in pseudocode~\ref{alg} below; each of these calls internally performs $r$ independent BFSes from $r$ different roots, which could in turn run in parallel, achieving a second level of parallelism. Moreover, considering that several parallel implementations of the BFS algorithm already exist in the academic literature, we can use one of them to obtain an additional third level of parallelism.
\begin{algorithm}
\caption{First level parallelism for the CRT algorithm}
\label{alg}
\begin{algorithmic}[*]
\Function{approx-MST-weight}{$G$, $\varepsilon$}\Comment{$G$ - input graph, $\varepsilon$ - error tolerance}
\State $d^* \gets $ \Call{approx-avg-degree}{$\varepsilon$}\Comment{sequential, runs in $O(d/\varepsilon)$ as shown in~\cite{crt}}
\State $\hat{c} \gets 0$
\For{$i = 1, \dots, w-1$}
\Comment{{\small \hlgray{these $w-1$ calls can be run in $w-1$ parallel threads}}}
\State $\hat{c} \mathrel{+}=$ \Call{approx-number-connected-components}{$G^{(i)}$, $\varepsilon$, $d^*$}
\EndFor
\State \Return $\hat{v} \gets n - w + \hat{c}$
\EndFunction
\end{algorithmic}
\end{algorithm}
To make it even simpler, the number of different flows is known a priori, so a static pre-instantiation and a smart scheduling of the threads can be performed. At the first and second levels a \emph{master-slave} model can be used in a \emph{fork-join} structure, while at the third level a shared variable is needed between the different BFSes. As a final remark, parallelizing the BFSes could have too much overhead, given that the algorithm is optimized to run them very fast and to stop the ones that seem to cost too much.
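To fix ideas, the following is a minimal sketch of the first parallelism level using standard C++ threads; it is our own illustration, not part of the tested implementation, and \texttt{Graph} and \texttt{approx\_num\_cc} are placeholders standing in for the real data structures and for \texttt{approx-number-connected-components}.
\begin{verbatim}
#include <thread>
#include <vector>
#include <numeric>

struct Graph { /* adjacency lists and edge weights (placeholder) */ };

// Stub standing in for approx-number-connected-components on G^(i);
// the real routine runs the bounded random BFSes described in the paper.
double approx_num_cc(const Graph&, int /*i*/, double /*eps*/, double /*d_star*/)
{
    return 0.0; // placeholder
}

double approx_mst_weight(const Graph& g, int n, int w,
                         double eps, double d_star)
{
    std::vector<double> c(w - 1, 0.0);   // one slot per subgraph G^(i)
    std::vector<std::thread> workers;
    workers.reserve(w - 1);
    for (int i = 1; i <= w - 1; ++i)     // first level: w-1 independent calls
        workers.emplace_back([&c, &g, i, eps, d_star] {
            c[i - 1] = approx_num_cc(g, i, eps, d_star);
        });
    for (auto& t : workers) t.join();    // fork-join, master/slave structure
    double c_hat = std::accumulate(c.begin(), c.end(), 0.0);
    return n - w + c_hat;                // same estimator as in the pseudocode
}
\end{verbatim}
Each worker could in turn split its $r$ BFSes among further threads to obtain the second level described above.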
\section{Future prospects}\label{rtn}
During an e-mail exchange with one of the original authors of \cite{crt}, Dr. Ronitt Rubinfeld, another topic of discussion and study emerged, concerning the distribution of the weights on the edges.
\begin{figure}[htbp]
\centering
\subfloat[][Assuming a uniform distribution]
{\includegraphics[width=.525\textwidth]{plots/same_gap}\label{same_gap}}
\subfloat[][Assuming a generic distribution]
{\includegraphics[width=.525\textwidth]{plots/diff_gap}\label{diff_gap}}
\caption{Expected behaviour with different laws for the distribution of weights.}
\label{gaps}
\end{figure}
In our code, the generation of random graphs only assumes a uniform distribution of the weights on the edges, i.e. a given edge can have an integer weight $k \in [1, w]$ with probability $\frac{1}{w}$. That implies a linear growth of the size of $E(G_i), \forall i \in [1, w]$, namely the set of $G_i$'s edges; this is well depicted in figure~\ref{same_gap}, where, as $i$ grows, the size of $E(G_i)$ increases at each step by a quantity ``near'' $\frac{ \left| E(G) \right| }{w}$, and this is the more accurate the bigger $\left| E(G) \right|$ is, by the law of large numbers. On the other hand, having a generic distribution law for the edge weights implies a different behavior, as depicted in figure~\ref{diff_gap}.
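In formulas (our own back-of-the-envelope reasoning): under the uniform law each edge has weight at most $i$ with probability $\frac{i}{w}$, hence the expected size of $E(G_i)$ is $\frac{i}{w} \left| E(G) \right|$ and the expected increment from $E(G_i)$ to $E(G_{i+1})$ is exactly $\frac{\left| E(G) \right|}{w}$, the constant step of figure~\ref{same_gap}; any other weight law makes these increments non-uniform, as in figure~\ref{diff_gap}.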
This difference could be of interest because it means that the inputs of the different subsequent iterations of \texttt{approx-number-connected-components} will have a non-regularly increasing size, or even the same size for some calls; it can indeed be easily shown that the function in~\ref{gaps} is nondecreasing. Since the cost of those calls finally determines the overall cost, we might argue that this could lead to a minor difference, but at the same time we think that only with another set of tests could we conclude something relevant about this observation.
\begin{thebibliography}{10}
\bibitem{crt} {\sc B. Chazelle, R. Rubinfeld, and L. Trevisan}, Approximating the minimum spanning tree weight in sublinear time. {\em SIAM Journal on Computing}, 34, 2005.
\bibitem{sta} {\sc R. Rubinfeld, A. Shapira}, Sublinear Time Algorithms. SIAM {\em Journal on Discrete Mathematics}, 2011, Vol. 25, No. 4 : pp. 1562-1588.
\bibitem{bst} {\sc Beman Dawes, David Abrahams, Rene Rivera}, Boost C++ Libraries. http://www.boost.org/.
\bibitem{fys} {\sc Fisher, Ronald A.; Yates, Frank (1948) [1938]}, Statistical tables for biological, agricultural and medical research (3rd ed.). London: Oliver \& Boyd. {\em pp. 26–27}.
\bibitem{bbs} {\sc Lenore Blum, Manuel Blum, Mike Shub}; Comparison of Two Pseudo-Random Number Generators. Advances in Cryptology: {\em Proceedings of CRYPTO '82 pp. 61-78}, Plenum 1982.
\bibitem{cafd} {\sc John Black, Phillip Rogaway}; Ciphers with Arbitrary Finite Domains. {\em Topics in Cryptology - {CT-RSA} 2002, The Cryptographer's Track at the {RSA} Conference, 2002, San Jose, CA, USA, February 18-22, 2002, Proceedings, pp. 114-130}
\bibitem{fysc} {\sc Durstenfeld, R. (July 1964)}, ``Algorithm 235: Random permutation''. {\em Communications of the ACM}.
\end{thebibliography}
\end{document}
| {
"alphanum_fraction": 0.7649111734,
"avg_line_length": 87.0258481422,
"ext": "tex",
"hexsha": "a30ba5dab2153578f4989d01de6bda8ed953fa6f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1915f2593c39b28a8ea9dd84ea2e8850046f69a9",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "LeGazzelle/AlgoWEB",
"max_forks_repo_path": "paper/santi-laurentiis_evaluating_sublinear_algorithm.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1915f2593c39b28a8ea9dd84ea2e8850046f69a9",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "LeGazzelle/AlgoWEB",
"max_issues_repo_path": "paper/santi-laurentiis_evaluating_sublinear_algorithm.tex",
"max_line_length": 1241,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "1915f2593c39b28a8ea9dd84ea2e8850046f69a9",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "LeGazzelle/AlgoWEB",
"max_stars_repo_path": "paper/santi-laurentiis_evaluating_sublinear_algorithm.tex",
"max_stars_repo_stars_event_max_datetime": "2020-06-22T09:45:01.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-06-22T09:45:01.000Z",
"num_tokens": 14138,
"size": 53869
} |
\documentclass{article}
\begin{document}
\section{section}
Hello world
\subsection{subsection}
structuring a document is easy
\subsection{Subsection}
\paragraph{paragraph}
Some more text
\subparagraph{subparagraph}
Even more text
\section{another section}
\end{document}
| {
"alphanum_fraction": 0.75,
"avg_line_length": 19.7333333333,
"ext": "tex",
"hexsha": "6c75be1359690a05dad0d2e0a4f5e79a28c3c6c5",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d1286e3914f8f685a0b91916a3bcfd558438f978",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ebosetaaa/ebosetaCSC101",
"max_forks_repo_path": "my latex files/practice 4.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d1286e3914f8f685a0b91916a3bcfd558438f978",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ebosetaaa/ebosetaCSC101",
"max_issues_repo_path": "my latex files/practice 4.tex",
"max_line_length": 32,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d1286e3914f8f685a0b91916a3bcfd558438f978",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ebosetaaa/ebosetaCSC101",
"max_stars_repo_path": "my latex files/practice 4.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 88,
"size": 296
} |
\section{Introduction}
The functional programming paradigm is nowadays an upward trend
\cite{ford-2013-ibm}, whose popularity can be traced to the several advantages
it offers: an arguably easier way of reasoning about its concepts, the use of
high-level abstractions that can be used for both reducing the workload on the
developers and including optimizations in the compiled code, or the fact that
its characteristics and principles make turning purely functional
code into concurrent code a trivial task \cite{hammond-2012-parallel}. This is
done by radically stripping the side effects from the code and keeping the
state of the machine immutable, or more recently, by isolating them and keeping
the referential transparency intact through as much of the code base as possible.\\
This trend is present in new languages being developed with the functional
paradigm: for instance, Scala \cite{scala} is a functional, strongly-typed
language based on the Java virtual machine, which makes it able to be used for
Android application development. Another interesting example is Elm
\cite{elm}, which compiles to HTML, CSS and JavaScript, allowing functional web
development. But not only new languages are being inspired by functional
programming: long-established languages are also introducing functional
concepts such as map-reduce, folds, higher-order or anonymous functions into
traditional workflows. Some examples of this include Java \cite{java8},
C++ \cite{cpp-lambdas, cpp-high-level} or Python \cite{python-mrs}.\\
Arguably, the flagship language of the pure functional paradigm is Haskell
\cite{haskell-98}. Among its main features we can highlight the fact that it is
purely functional (and allows complete referential transparency by isolating
the side effects), lazy (contrary to strict evaluation, laziness implies that
only the required results are computed among all definitions), strongly and
statically typed (all types are checked at compile time) and provides a great
set of abstractions that can be compiled into concurrent code. These
particularities have placed on Haskell the stigma of being an academic
language, with hardly any practical use in real life; a stigma that the
community has worked exhaustively to get rid of.\\
Ironically enough, some Computer Science fields seem like unexplored territory
in Haskell. For example, a quick query for heuristic search packages on the
most popular Haskell package manager returns a scarce set of algorithms, with
different specifications and
barely any cohesion between them. That makes the task of studying the behavior
of different search algorithms in functional programming too much of a hassle,
compared to other languages with solid developed frameworks.\\
Out of this necessity the idea of \emph{Agis} is born: a full-fledged,
functional heuristic search framework. Agis not only provides a heuristic
search library with well known algorithms, but also the building blocks used to
implement them and a suite of different interfaces to performance and
automation tools for Haskell. The library is divided in two different flavors:
\texttt{Search.Pure}, which provides a set of completely pure functional
algorithms implemented and ready to be used; and \texttt{Search.Monadic}, which
adds a new layer of complexity using a monad that wraps the search. This causes
a time and space overhead with the advantage of being able to gather run-time
statistics, that can be used for educational, algorithm design or research
purposes.
\newpage
%%% Local Variables:
%%% TeX-master: "tfg"
%%% End:
| {
"alphanum_fraction": 0.8127929418,
"avg_line_length": 59.4590163934,
"ext": "tex",
"hexsha": "3f51c46317e7466aed5303855e1ed819ed37c927",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "feb1657ef4082402434d5e6519ec57eac85ac7a6",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "DiegoVicen/bachelor-thesis",
"max_forks_repo_path": "thesis/1-introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "feb1657ef4082402434d5e6519ec57eac85ac7a6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "DiegoVicen/bachelor-thesis",
"max_issues_repo_path": "thesis/1-introduction.tex",
"max_line_length": 83,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "feb1657ef4082402434d5e6519ec57eac85ac7a6",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "DiegoVicen/bachelor-thesis",
"max_stars_repo_path": "thesis/1-introduction.tex",
"max_stars_repo_stars_event_max_datetime": "2019-04-02T18:49:06.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-01-25T13:07:49.000Z",
"num_tokens": 757,
"size": 3627
} |
% !TeX spellcheck = de_DE
\chapter{Background}\label{background}
This chapter gives a short introduction to topics regarding Agile processes and their definitions, Model Driven Architecture, Unified Modeling Language and Petri Nets. Basic knowledge of these topics helps us to appreciate the various design and implementation decisions taken in Chapters \ref{relatedwork} and \ref{proposedsolution}. This chapter also acts as a guide to understanding the various terminologies used throughout this work.
\section{Evolution of Agile Software Development}
Agile Software Development was introduced in the year 2001 by the Agile Alliance to keep up with the growing demands of the software industry. Traditional software development methods like the Waterfall model could not deliver the expected results under circumstances of frequent requirement changes and increased software complexity. The main reason that can be attributed to the failure of traditional development methods is the single flow of sequential development phases without any iterative phases.
For example, in Waterfall Model, the business analyst along with the client creates a set of requirements and a model of the final product. A requirement specification document is created which acts as the base document for the next development phases like Analysis, Design, Development and Testing of the product. The client is never involved during the development process and would view the product only after the final testing is completed. If the requirements of the client were not captured correctly or if the client has a changed requirement, then the entire development process has to be repeated. This kind of rework increases the cost, time and resources needed for the project and subsequently leads to its failure. (\cite{versionone}, \cite{getzephyr})
Agile Software Development, on the other hand, is built on the foundation of iterative approaches, in which product development happens incrementally and the stakeholders participate in the entire \gls{sdlc} from the conception to the delivery of the product. One such methodology is the \textit{Sprint}, in which a larger functionality is broken into smaller pieces called \textit{User Stories} which can be delivered in a time span of just two weeks. This approach has several advantages: for example, the time-consuming documentation is replaced with face-to-face communication with the stakeholders, reducing misinterpretation and hence providing a better understanding of stakeholder requirements.
Agile also provides separate methodologies for testing, which makes testing easier with quick feedback from stakeholders. A user story is marked complete if it passes all the acceptance tests. These are then evaluated to decide whether to retain them for regression testing. A difference in interpretation of a requirement can be found immediately through feedback, and only a small rework will be needed in case of a difference. Test team members create the test plan, write test specifications and test cases and manage the testing activities within a sprint. The testing activities can be classified as follows.
\subsection{Requirements-Based Testing}
In this testing methodology, the objective is to verify whether the deliverables’ design and code meets the application’s requirements. Hence the test cases, conditions and data are generally derived from the requirements specification document. The testing can be done to verify either the functional attributes or the non-functional attributes like performance, reliability or usability. \cite{tahat2001requirement}
\subsection{Model-Based Testing}
This methodology involves the generation of test cases from models describing the system requirements and behavior. Even though this methodology requires the building of models up-front in the development cycle, it has several advantages like finding inconsistencies in the requirements itself and detection of discrepancies even before the code is generated. \cite{dias2007survey}
\subsection{Code-Based Testing}
In this methodology, test cases are used to evaluate the code to verify whether different test paths of the system are covered. The benefits of this methodology are that the parts of the software not tested in other techniques are covered. \cite{prasanna2005survey}
The main objective of this thesis is to simplify the existing test management process. As the bulk of the effort is spent on the elicitation and generation of test cases, this thesis aims to simplify the task of test case generation by using an automated tool. Most of the existing approaches for automated generation of test cases can be put under the above three categories. This thesis tries to automate test case generation for the first category, which is Requirements-Based Testing.
\section{Model Driven Software Engineering}
An important paradigm shift happened in the field of software development after the introduction of \gls{mda} by \gls{omg}. The underlying idea of \gls{mda} is to make use of models, a key tool in engineering, for software development. In general, models can be defined as abstraction of the system and its environment. An advantage with models is that it can be used to provide different levels of abstraction, with each level highlighting specific aspects or features of the system. Hence a model is essentially a representation of the necessary characteristics of the underlying system, leaving out the rest and thus contains less complexity. Less complexity of the models provides an easy way of system behavior prediction, examination of specific properties and unambiguous communication among software developers.
One of the motivations for \gls{mda} approach is that the developed software will be deployed to one or more platforms. The volatility of the platforms is higher than the higher-level models of the system and the objective of \gls{mda} is to create models that are increasingly independent of the target platform. In \gls{mda}, \glspl{pim} are initially designed in a platform independent language (e.g. \gls{uml}), which are then translated into \glspl{psm} by mapping them to some implementation language (e.g. C++, Java) as illustrated in Figure \ref{fig:Overview_MDA}.
\begin{figure}[htb!]
\centering
\fbox{\includegraphics[width=0.8\textwidth]{content/images/Chapter2/Overview_MDA.pdf}}
\caption{Overview of transformations in MDA \cite{cerny2015separation}}
\label{fig:Overview_MDA}
\end{figure}
A number of \gls{omg} standards such as The \gls{uml}, \gls{xmi}, \gls{cwm}, \gls{mof} form the core of \gls{mda} concepts. Among these standards, \gls{uml} is used to define the \gls{pim} models which will be discussed in detail in the Section \ref{secuml} whereas the other standards are not in the scope of this thesis.
The term \gls{mdsd} or \gls{mdd} describes the family of engineering approaches that use models and their transformations for creating software products. \gls{mdsd} takes advantage of the \gls{mda} facilities, as a result of which the code can either be automatically or semi-automatically generated from models that are described in standard specification languages. The main priority of \gls{mdsd} is to enable the validation of software by the end users and customer stakeholders as early as possible in \gls{sdlc}.
\section{Unified Modeling Language} \label{secuml}
\gls{uml} is a standard from \gls{omg} and is a de-facto industry standard for modeling business applications in \gls{mdsd} \cite{cerny2015separation}. \gls{uml} models help to understand the requirements of the system graphically. They also provide aids to design the system and its components and to model the relationships existing between them right from the early stages of software development. \gls{uml} also helps the developers and end users to maintain consistency in their interpretation of the design specification. \gls{uml}’s rapid gain in popularity in object-oriented software development has attracted a great deal of research on \gls{uml} in software testing (\cite{nebut2003requirements}, \cite{badri2003use}).
\gls{uml} can be classified broadly into two categories namely structural and behavioral models. The structural diagrams represent the static structure of the system and the relationship between them. The different structural diagrams represent different abstraction and implementation levels. On the other hand, behavioral diagrams represent the dynamic behavior of the objects in the system i.e. the changes that happen in the system over time. Some of the important \gls{uml} diagrams which are used in upcoming sections are elaborated below.
Class Diagram is one example of structural diagrams that acts as a blueprint of the system or subsystem under development. It is extensively used to model the objects that make up the system, the relationships between them, their description and the functions they provide. Class Diagrams are used across different levels in the development cycle: in the analysis stage, to understand the requirements of the problem domain, whereas in the implementation stage, to create the actual software classes. An example of a Class Diagram is shown in Figure \ref{fig:umldiagrams_subfigA}.
Activity Diagram is one example of behavioral diagrams which illustrates the sequence of actions in a process. Activity Diagrams are likewise used across different levels in the development cycle: in the requirements stage, to model the flow of events that the use cases describe, and in the design and implementation stages, to define the behavior of operations. An example of an Activity Diagram is shown in Figure \ref{fig:umldiagrams_subfigB}.
\begin{figure}[htb!]
\centering
\subfloat[A UML Class Diagram]{\includegraphics[width=0.45\textwidth,frame]{content/images/Chapter2/ClassDiagram.pdf} \label{fig:umldiagrams_subfigA}}
\subfloat[A UML Activity Diagram]{\includegraphics[width=0.45\textwidth,frame]{content/images/Chapter2/ActivityDiagram.pdf} \label{fig:umldiagrams_subfigB}}
\caption{Examples of UML diagrams.}
\label{fig:umldiagrams}
\end{figure}
\section{Petri Net}
Petri Net is a graphical and mathematical modeling tool with well-defined semantics suitable for formal analysis. The concept of the Petri Net was first introduced by Carl Adam Petri in the year 1961, after which it has grown tremendously to support different domains like workflow management, manufacturing, real-time systems, distributed systems, embedded systems, protocols and networks, performance evaluation and much more.
In the domain of computer science engineering, Petri nets are mainly used in information processing systems that can be categorized as parallel, concurrent, distributed, asynchronous, non-deterministic and stochastic. Petri nets can be used as communication documents like flow charts, block diagrams and networks but can also simulate the dynamic and concurrent activities of the system with the concept of tokens. Also, it can model the mathematical representations of the systems using algebraic and state equations.
\subsection{Basics of Petri Net}
There are different kinds of Petri Nets depending upon the amount of information that the nets can carry. One example of a low-level Petri Net is the Place/Transition Net (PT-Net), which is used in this thesis.
A Petri Net is a directed, bipartite and weighted graph with two kinds of nodes called places and transitions. Arcs run between places and transitions, and each place can hold specific items called tokens. An example Petri Net is shown in Figure \ref{fig:petrinet_example} and the various elements are elaborated below.
\begin{figure}[htb!]
\centering
\fbox{\includegraphics[width=0.6\textwidth]{content/images/Chapter2/Petrinetwithlabels.pdf}}
\caption{An example of Petri Net}
\label{fig:petrinet_example}
\end{figure}
\textbf{\textit{Places}} are the passive component of the net, and they represent the buffer, communication medium or in general a geographical location. The current state of the system is determined by the number of tokens present in a place and this state is represented by the term \textit{Marking} in Petri Nets.
\textbf{\textit{Transitions}} are the active components of the net, which represent the activities that change the state of the system. Transitions are fired only when certain preconditions are met and these conditions are represented in the net by means of tokens.
\textbf{\textit{Tokens}} are present inside the places and each token represents a condition to be fulfilled or a synchronization condition. In general, tokens represent a physical or information object.
\textbf{\textit{Arcs}} run only between places and transitions or vice versa. In the first case, these are called \textit{input arcs} whereas in the second case these are called \textit{output arcs}. Each arc is also associated with a specific weight that determines the number of tokens required for firing the particular transition.
\subsection{Formal Definition and Basic Terminology}
The terms defined in the above section can be formally defined as the following. The definitions are an excerpt from Calisaya et. al., 2015 \cite{calisaya2016analysis}.
\begin{definition}
\label{def:def1}
A \textbf{place-transition Petri Net} \cite{reisig2012petri} is a five-tuple PN = (P, T, F, W, M$_{0}$) where P = $ \lbrace $p$_{1}$, p$_{2}$, ..., p$_{n} \rbrace $ is a finite set of places, T = $ \lbrace $t$_{1}$, t$_{2}$,..., t$_{n} \rbrace $ is a finite set of transitions, F $ \subseteq $ (P $\times$ T) $ \cup $ (T $\times$ P) is a set of arcs, W : F $ \rightarrow$ $ \lbrace $1, 2,...$ \rbrace $ is a weight function, M$_{0}$ : P $ \rightarrow\lbrace $0, 1, 2, ...$ \rbrace $ is the initial marking, and P $ \cap $ T = $\emptyset $ and P $ \cup $ T $ \neq $ $ \emptyset $.
\end{definition}
\begin{definition}
\label{def:def2}
For a PN = (P, T, F, W, M$_{0}$), a \textbf{marking} is a function M: P $ \rightarrow$ $ \lbrace $0, 1, 2,...$ \rbrace $, where M (p) is the number of tokens in p. M$_{0}$ represents the initial marking of PN.
\end{definition}
\begin{definition}
\label{def:def3}
A \textbf{transition} t is enabled for firing at a marking M if M (p) $ \geq $ W (p, t) for any p $ \in $ p$ _{in} $ where p$ _{in} $ is the set of input places of t. On firing t, M is changed to M' such that $ \forall $p $ \in $ P: M' (p) = M (p) - W (p,t) + W (t,p).
\end{definition}
\begin{definition}
\label{def:def4}
For a PN, a sequence of transitions $ \sigma $ = $ \langle $t$_{1}$, t$_{2}$, ..., t$_{n} \rangle $ is called a \textbf{firing sequence} if and only if M$_{0}$ [t$_{1} \rangle $ M$_{1}$ [t$_{2} \rangle \cdots $ [t$_{n} \rangle $M$_{n}$. In notation, M$_{0}$ [PN, $ \sigma \rangle $M$_{n}$ or M$_{0}$ [$ \sigma \rangle $M$_{n}$.
\end{definition}
\begin{definition}
\label{def:def5}
For a PN = (P, T, F, W, M$_{0}$), a marking M is said to be \textbf{reachable} if and only if there exist a firing sequence $ \sigma $ such that M$_{0}$ [$ \sigma \rangle $M.
\end{definition}
\subsection{Analysis of Petri Nets}
One of the main features of Petri Nets is the ability to perform analysis of model properties, i.e. they can detect defects related to structural and dynamic properties \cite{murata1989petri}. A simple traversal of the flow relation between places and transitions can detect the structural properties, whereas initial markings and the markings reached after transitions can detect the dynamic properties. As defined in \cite{reisig2012petri}, the defects related to the dynamic properties of boundedness, liveness, deadlock freedom and reachability can be detected using methods like simulation, reachability/coverability or invariant analysis.
The transition behavior in a Petri Net is shown in Figure \ref{fig:petridiagrams}. Figures \ref{fig:petridiagrams} (a \& b) show Petri Nets with markings M0 and M1 respectively. P0, P1 and P2 are the places and T0 is the transition. The arcs from P0 and P1 into T0 have weights two and one respectively, and the places contain tokens. This marking, which satisfies the required precondition, enables the firing of the transition T0 and subsequently results in the marking shown in Figure \ref{fig:petrinet_subfigB}. The final marking consists of place P2 with three tokens, since the arc from T0 to P2 has weight 3.
\begin{figure}[htb!]
\centering
\subfloat[before transition]{\includegraphics[width=0.4\textwidth,frame]{content/images/Chapter2/Petrineta.pdf} \label{fig:petrinet_subfigA}}
\subfloat[after transition]{\includegraphics[width=0.4\textwidth,frame]{content/images/Chapter2/Petrinetb.pdf} \label{fig:petrinet_subfigB}}
\caption{Concept of transitions in a Petri Net.}
\label{fig:petridiagrams}
\end{figure}
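As a small worked example of Definition~\ref{def:def3} applied to Figure~\ref{fig:petridiagrams}, using the arc weights described above: with $W(P0, T0) = 2$, $W(P1, T0) = 1$ and $W(T0, P2) = 3$, the transition T0 is enabled at a marking $M$ if and only if $M(P0) \geq 2$ and $M(P1) \geq 1$; firing it yields the marking $M'$ with $M'(P0) = M(P0) - 2$, $M'(P1) = M(P1) - 1$ and $M'(P2) = M(P2) + 3$, which matches the change from Figure \ref{fig:petrinet_subfigA} to Figure \ref{fig:petrinet_subfigB}, where P2 ends up with three tokens.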
| {
"alphanum_fraction": 0.7905126518,
"avg_line_length": 127.6106870229,
"ext": "tex",
"hexsha": "177116c9cae751752edaf2fdd558de3afca020fe",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "26b739c9ed1e619fa33102d7245ee9f23edcc73a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "sureshh90/Master-Thesis-Report",
"max_forks_repo_path": "content/background.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "26b739c9ed1e619fa33102d7245ee9f23edcc73a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "sureshh90/Master-Thesis-Report",
"max_issues_repo_path": "content/background.tex",
"max_line_length": 821,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "26b739c9ed1e619fa33102d7245ee9f23edcc73a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "sureshh90/Master-Thesis-Report",
"max_stars_repo_path": "content/background.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3870,
"size": 16717
} |
\documentclass[11pt,british,faculty=ea,layout=titlefont,underline=false,titleUppercase=true,titleUnderline=true,hidelinks]{ugent2016-report}
\usepackage[a4paper, total={6in, 10in}]{geometry}
\usepackage[hidelinks]{hyperref}
\usepackage{graphicx}
\graphicspath{./images/}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{wrapfig}
\usepackage{amsmath}
\usepackage[
backend=biber,
style=ieee,
sorting=none
]{biblatex}
\addbibresource{library.bib}
\title{Network and Computer Security}
\subtitle{Summary}
\academicyear{2020-2021}
\author{Lorenzo De Bie}
\begin{document}
\maketitle
\tableofcontents
\chapter{Introduction} \label{cha:introduction}
\section{Security in the media} \label{sec:security-in-the-media}
\begin{itemize}
\item Security vs.\ user-friendliness: the work of security personnel goes unnoticed when everything is good, but they get blamed when things go wrong.
\item Users remains a security risk:
\begin{itemize}
\item Due to lack of knowledge: \citetitle{Rodriguez2014} \cite{Rodriguez2014}
\item Due to incompetence
\item Information can still be shared non-digitally
\end{itemize}
\item Nobody is safe: \citetitle{Eeckhaut2014} \cite{Eeckhaut2014}
\item Privacy vs Security: sacrificing privacy so data can be used for security.
\begin{itemize}
\item \citetitle{Derix2013} \cite{Derix2013}
\item \citetitle{Follorou2013} \cite{Follorou2013}
\end{itemize}
\item Check yourself using \url{https://haveibeenpwned.com/}
\item Privacy vs Health: tracing apps in times of COVID-19
\item Journalists aren't always exactly IT experts $\rightarrow$ remain critical, remain sceptical
\item Future trends: blockchains
\begin{itemize}
\item mainly used for data integrity through \textbf{public ledgers}
\item Used to log activity.
\begin{itemize}
\item Detect malicious operations, hackers, foreign surveillance, database modifications
\item Equally important as access restrictions
\end{itemize}
\end{itemize}
\item Future trends: cyber warface
\begin{itemize}
\item Nation wide actions to cause damage or disruption. Can include physical impact and/or harm to human persons
\item Interesting targets: traffic lights, electricity systems, water filtration, power plants
\item Stuxnet:
\begin{itemize}
\item Worm that targeted Iranian nuclear facilities, damaging centrifuges and other hardware
\item Most likely an American-Israeli cyberweapon
\end{itemize}
\item Petya: ransomware or state attack?
\begin{itemize}
\item Focused strongly on Ukraine systems
\item Made very little money
\item Either very buggy, or damaging on purpose: permanent removal of files, nuclear power plants, ministries, metros and banks offline, possible link with the assassination of Maksym Shapoval
\end{itemize}
\item Future trends: IoT: \citetitle{Ford2013} \cite{Ford2013}
\end{itemize}
\end{itemize}
% end section Security in the media
\section{Example Incidents} \label{sec:example-incidents}
\begin{itemize}
\item Ashley Madison (2015)
\item DNC email leak (2016)
\item Mirai (2016)
\item Twitter hack (2020)
\end{itemize}
% end section Example Incidents
\section{Why do we need security? Why Information Security?} \label{sec:why-do-we-need-security}
\begin{itemize}
\item Counterpart of securing material objects
\begin{itemize}
\item Material object have some \textbf{value}
\item Can be stolen or damaged
\item Cost for security/protection takes into account value and risk of theft/damage
\end{itemize}
\item Risk of threats against information security is \textbf{much} greater
\item Value of information sometimes hard to assess, best estimated by damage caused. Losses cannot be undone
\item Threats against information include:
\begin{itemize}
\item \textbf{Loss} of information
\item \textbf{Forged} information
\item \textbf{Unauthorised release} of information
\item \textbf{Repudiation} of information
\end{itemize}
\item Value of information systems is hard to assess. Systems are used to enable a service $\rightarrow$ damage when the service is unavailable or unreliable
\item Threats against information systems include:
\begin{itemize}
\item \textbf{Unavailability}/disruption of service
\item \textbf{Unauthorised acces} to service
\item Threats against exchanged information
\end{itemize}
\item Security measures for information systems:
\begin{itemize}
\item \textbf{Information Security}: encryption, virus scanners, firewalls\dots
\item Carry some cost (installation, maintenance, computation time)
\item dependent on risk and potential damage
\end{itemize}
\end{itemize}
% end sec:why-do-we-need-security
\chapter{Basic Concepts} \label{cha:basic-concepts}
\section{A security model} \label{sec:a-security⁻model}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{images/network-security-model.png}
\end{figure}
% end sec:a-security⁻model
\section{Security Goals} \label{sec:security-goals}
Possible exam questions:
\begin{itemize}
\item \textbf{Which security goals does this protocol fullfill?}
\item \textbf{Which security goals per chapter?}
\end{itemize}
\subsection{Confidentiality} \label{sub:confidentiality}
\begin{itemize}
\item Data can only be read by those who are allowed to read the data
\item Applications:
\begin{itemize}
\item Communicating confidential data between branches of a corporation
\item Passwords
\item Storage of health data
\end{itemize}
\end{itemize}
\begin{figure}[h]
\centering
\begin{subfigure}{.40\textwidth}
\centering
\includegraphics[width=\linewidth]{images/data-confidentiality-threat.png}
\caption{Passive attack by Carol: \textbf{eavesdropping} upon information channel}
\label{fig:eavesdropping}
\end{subfigure}
\begin{subfigure}{.50\textwidth}
\centering
\includegraphics[width=\linewidth]{images/data-confidentiality-solution.png}
\caption{Solution to eavesdropping}
\label{fig:eavesdropping-solution}
\end{subfigure}
\end{figure}
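A minimal way to realise the solution sketched in Figure \ref{fig:eavesdropping-solution} (the figure does not prescribe a specific scheme; symmetric encryption is one standard option): Alice and Bob share a secret key $K$; Alice sends the ciphertext $C = E_K(m)$ instead of the message $m$, Bob recovers $m = D_K(C)$, and Carol, who does not know $K$, learns only $C$. Note that Carol still learns \emph{that} Alice and Bob are communicating, which is exactly the traffic-flow issue discussed next.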
\subsubsection{Traffic-flow confidentiality} \label{subsub:traffic-flow-confidentiality}
\begin{itemize}
\item Keeping secret who's communicating with whom
\item Much harder to achieve than data confidentiality
\item In \figurename{} \ref{fig:eavesdropping-solution} data confidentiality is OK, traffic-flow confidentiality is NOT OK: Carol can still see that Alice is communicating with Bob
\end{itemize}
% end subsub:traffic-flow-confidentiality
\subsubsection{Confidentiality vs Privacy} \label{subsub:confidentiality-vs-privacy}
Privacy is the right to choose what information you give away.
It is a fundamental right and has long been legally protected.
Not every confidentiality requirement involves privacy: intellectual property in a business requires confidentiality, not privacy.
% end subsub:confidentiality-vs-privacy
% end sub:confidentiality
\subsection{Authentication} \label{sub:authentication}
Authentication is related to \textbf{identification}: it is the \textit{electronic world} equivalent. \textit{Is the person at the other end of the communication who he claims he is?}
Guaranteeing the authenticity of a communication is based on:
\begin{itemize}
	\item \textbf{Entity} authentication: distinguish each entity from another based on a collection of data. Each entity has a unique identity.
	\item \textbf{Attribute} authentication. Attribute = characteristic of an entity. Entities are often authenticated through authentication of some of their attributes. Do the communicating parties exhibit the characteristics they claim to have?
	\item \textbf{Data-origin} authentication: does the data indeed originate from the specified source? Important to evaluate whether data is reliable (\textbf{Data Integrity}, see \ref{sub:data-integrity}). Different from entity authentication: \textbf{no interaction with data source}.
\end{itemize}
\begin{figure}[h]
\centering
\begin{subfigure}{.40\textwidth}
\centering
\includegraphics[width=\linewidth]{images/authentication-threat.png}
\caption{Threat}
\label{fig:authentication-threat}
\end{subfigure}
\begin{subfigure}{.50\textwidth}
\centering
\includegraphics[width=\linewidth]{images/authentication-threat-solution.png}
\caption{Solution}
\label{fig:authentication-threat-solution}
\end{subfigure}
\end{figure}
% end sub:authentication
\subsection{Access Control/authorization} \label{sub:access-control-authorization}
\begin{itemize}
\item Determines which user may access which resource (data, computation time, etc.)
\item Requires \textbf{authentication of the entity} requesting access to these resources
\begin{itemize}
\item System determines to what extent entity may access those resources
\item Access rights may \textbf{depend on entity itself or its attributes}
\end{itemize}
\end{itemize}
\subsubsection{Illustration 1: access control in OS} \label{subsub:access-control-in-os}
\begin{itemize}
\item Authentication through login and password
\item Access control determined for this user (entity)
\begin{itemize}
\item Full access to own files
		\item Limited access to some other files
		\item No access to other files
\end{itemize}
\item Access rights different from user to user
\end{itemize}
% end subsub:access-control-in-os
\subsubsection{Illustration 2: access control to medical database} \label{subsub:access-control-to-medical-database}
\begin{itemize}
\item Different rights for different types of Users
\item Requires authentication based on specific \textbf{attributes}
\item Access rights depend on attributes of the user
\item Access rights different from user type to user type (\textbf{roles})
\end{itemize}
% end subsub:access-control-to-medical-database
% end sub:access-control-authorization
\subsection{Data integrity} \label{sub:data-integrity}
\begin{itemize}
\item Guarantee that sent data and received data are identical
\begin{itemize}
\item No tampering with data en route
\item Nothing was added
\item Nothing was deleted
\item Nothing was modified
\item Nothing was replayed
\end{itemize}
\item stronger requirement than data origin authentication: data originates from specified source \textbf{AND} isn't changed on the way
\item Threats
\begin{itemize}
\item Messages can be replayed
\item Messages can be altered
\item Cannot be solved with confidentiality (encryption): encrypted messages can also be replayed
\end{itemize}
\end{itemize}
\subsubsection{Solution} \label{subsub:data-integrity-solution}
\begin{figure}[ht]
\begin{minipage}{0.4\textwidth}
A security footer containing a sequence number which can only be generated by the sender. This footer has to be generated based on the whole message to prevent tampering with the message itself.
No need to encrypt the whole message for data integrity, but the message is not confidential if it isn't encrypted.
\end{minipage}
\begin{minipage}{0.58\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{images/data-integrity-solution.png}
\caption{Data integrity solution}
\end{minipage}
\end{figure}
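As an illustration (not part of the original solution above), such a footer can be realised with a keyed message authentication code. The sketch below uses Python's standard \texttt{hmac} module; the key, sequence number and message are made-up values.
\begin{verbatim}
import hmac, hashlib

def make_footer(key: bytes, seq: int, message: bytes) -> bytes:
    # The MAC covers the sequence number AND the whole message, so neither
    # modification nor replay goes undetected.
    data = seq.to_bytes(8, "big") + message
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, seq: int, message: bytes, footer: bytes) -> bool:
    return hmac.compare_digest(make_footer(key, seq, message), footer)

key = b"shared-secret"  # hypothetical key known only to sender and receiver
footer = make_footer(key, 1, b"transfer 100 EUR to Bob")
assert verify(key, 1, b"transfer 100 EUR to Bob", footer)
assert not verify(key, 2, b"transfer 100 EUR to Bob", footer)  # replayed seq fails
\end{verbatim}
Note that the message itself is still sent in the clear: the footer provides integrity and data-origin authentication, not confidentiality.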
% end subsub:data-integrity-solution
% end sub:data-integrity
\subsection{Non-repudiation} \label{sub:non-repudiation}
\begin{itemize}
\item Sender can't deny having sent the message. Important for receiver. \textit{Prove order has been placed}
\item Receiver can't deny having received the message. Important for sender. \textit{Prove invoice has been paid}
\item Both sides need to communicate and `sign' their messages to guarantee non-repudiation for both sides.
\end{itemize}
% end sub:non-repudiation
\subsection{Availability} \label{sub:availability}
\begin{itemize}
\item System/service is accessible and usable for authorised Users
\item Security context $\leftrightarrow$ System design context
\end{itemize}
\subsubsection{Threats} \label{subsub:availability-threats}
\begin{itemize}
\item DoS: denial-of-service: target swamped by torrent of messages from attacker
	\item DDoS: distributed denial-of-service: target swamped by torrent of messages from multiple (and numerous) senders (botnets).
\end{itemize}
% end subsub:availability-threats
% end sub:availability
% end sec:security-goals
\section{Security Threats} \label{sec:security-threats}
Possible exam questions:
\begin{itemize} \bfseries
	\item Explain the difference between confidentiality, authentication, access control/authorization, data integrity, non-repudiation and availability.
\item Which of the above security goals are realized in the network protocols from Chapter 4?
\item Why are sequence numbers (or nonces) added to messages? Is it a good idea to use a time stamp for this purpose?
	\item Which countermeasures can be taken against DoS and DDoS attacks?
\item Give 5 examples of active attacks that can be used to compromise the security of a network protocol.
\end{itemize}
\begin{itemize}
\item \textbf{Passive} attacks
\begin{itemize}
\item Eavesdropping
\item Traffic analysis
\end{itemize}
\item \textbf{Active} attacks
\begin{itemize}
\item Message insertion/modification
\item Impersonation/masquerade
\item Replay
\item DoS
\item Hijacking (taking over existing connection, where attacker replaces sender or receiver)
\end{itemize}
\end{itemize}
Hackers first seek a weak point in a network (for example through social engineering); second, they will use passive attacks to gain more information; lastly, they will use active attacks.
\subsection{Possible attacks} \label{sub:security-threats-possible-attacks}
\begin{itemize}
\item Brute force: Trying all possible keys
\item Cryptanalysis: using knowledge about structure of algorithm, pairs of plaintext and secure messages in order to recover plaintext message or key itself, or to forge secure message
\item Side-channel attacks use physical properties or fault injection in order to recover plaintext or key
\begin{itemize}
\item \citetitle{Cade2014} \cite{Cade2014}
\item \citetitle{Anthony2013} \cite{Anthony2013}
\end{itemize}
\end{itemize}
% end sub:security-threats-possible-attacks
\subsection{Categories of attacks} \label{sub:categories-of-attacks}
\begin{itemize}
\item \textbf{Ciphertext only}: only secure message is known to attacker. Hardest one to break.
\item \textbf{Known plaintext}: one or more pairs obtained with a single key are known to attacker. Easier to break, but still safe.
\item \textbf{Chosen plaintext}: one or more pairs obtained with a single key, plaintext chosen by attacker. Harder to get, easier to break.
\item \textbf{Chosen ciphertext}: one or more pairs obtained with a single key, ciphertext chosen by attacker (plaintext can be garbage). Even harder to get, easier to break.
\item \textbf{Chosen text}: combination of chosen plaintext and chosen ciphertext.
\end{itemize}
% end sub:categories-of-attacks
\subsection{Desired degree of security?} \label{sub:desired-degree-of-security}
\begin{itemize}
\item Unconditionally secure is \textbf{not achieved by any practical security mechanism}.
	\item Computationally secure means that the \textbf{time required for breaking is longer than the useful lifetime} of the information, or that the \textbf{cost of breaking the encryption is larger than the value} of the information.
\end{itemize}
% end sub:desired-degree-of-security
% end sec:security-threats
% end cha:basic-concepts
\chapter{Encryption Algorithms} \label{cha:encryption-algorithms}
\section{Steganography} \label{sec:steganography}
\begin{itemize}
\item Steganography: conceal the existence of the message.
\item Cryptography: render message unintelligible
\item As old as (or older) than cryptography. Used heavily in history
	\item Alter digital files (audio, sound, text, pictures\dots) to a certain extent without losing their functionality, exploiting the human inability to distinguish minor changes (a toy sketch is given below the list).
\end{itemize}
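As a toy illustration (not from the course text), the sketch below hides a byte string in the least-significant bits of pixel values; the pixel list is a stand-in for real image data, which a practical implementation would read with an image library.
\begin{verbatim}
def hide(pixels, secret: bytes):
    # overwrite the least-significant bit of each pixel with one secret bit
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # visually imperceptible change
    return out

def reveal(pixels, length: int) -> bytes:
    bits = [p & 1 for p in pixels[:8 * length]]
    return bytes(sum(bits[8 * i + j] << j for j in range(8)) for i in range(length))

pixels = list(range(64))              # stand-in for grayscale pixel values
assert reveal(hide(pixels, b"hi"), 2) == b"hi"
\end{verbatim}
The example also shows the overhead typical of such schemes: eight cover pixels are consumed for every hidden byte.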
\subsection{Watermarking} \label{sub:watermarking}
Noise can be used as a watermark: the statistic distribution gives info about creator (and copyright information).
Try to implement the noise so that it cannot be removed or modified by any signal processing operation without degrading perceptual quality.
The watermark should be perceptually invisible. \\
Applications include:
\begin{itemize}
\item Covert information exchanges
\item Establish identity
\item Combat illegal copying
\end{itemize}
The biggest drawback is the high overhead to hide relatively few info bits.
% end sub:watermarking
% end sec:steganography
\section{Encryption throughout history} \label{sec:encryption-throughout-history}
Encryption is an alternative for steganography.
\subsection{Substitution ciphers} \label{sub:substitution-ciphers}
\subsubsection{Monoalphabetic Substitution Ciphers} \label{subsub:monoalphabetic-substitution-ciphers}
Simplest version: shift letters $X$ places (Caesar Cipher: $X=3$). Disadvantage = only 25 keys ($X=26$ is no cipher), which is easily brute-forced.
\begin{align*}
E(p) = (p + k)\ \%\ 26 \\
D(c) = (c - k)\ \%\ 26
\end{align*}
By shuffling the letters arbitrarily (each plaintext letter maps to a different random ciphertext letter) the key becomes a permutation of the 26 letters ($26!$ possible keys), which makes brute force infeasible.
Still not fully secure because after analysing the letter frequencies it is relatively easy to decrypt the ciphertext.
\textbf{Monoalphabetic substitution ciphers do not change relative letter frequencies}.
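A minimal sketch of the shift cipher described above (illustrative Python, not part of the original notes):
\begin{verbatim}
def caesar_encrypt(plaintext: str, k: int) -> str:
    # E(p) = (p + k) mod 26, applied letter by letter (non-letters dropped)
    return "".join(chr((ord(c) - ord("a") + k) % 26 + ord("a"))
                   for c in plaintext.lower() if c.isalpha())

def caesar_decrypt(ciphertext: str, k: int) -> str:
    # D(c) = (c - k) mod 26
    return caesar_encrypt(ciphertext, -k)

print(caesar_encrypt("attack at dawn", 3))   # dwwdfndwgdzq
\end{verbatim}
With only 25 useful keys, trying every key (or comparing the ciphertext letter frequencies with typical English frequencies) breaks it immediately.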
% end subsub:monoalphabetic-substitution-ciphers
\subsubsection{Polyalphabetic Substitution Ciphers} \label{subsub:polyalphabetic-substitution-ciphers}
Further security improvements by using multiple cipher alphabets sequentially.
\begin{itemize}
	\item \textbf{Alberti cipher}: Rotate an encryption disk every few letters
\item \textbf{Vigenère cipher}
\begin{itemize}
\item More generic
\item key = multiple letters ($K = k_1 k_2 \dots k_d$)
\item $i^{th}$ letter specifies $i^{th}$ alphabet to use
\item repeat from start after d letters in message
\item Decryption simply works in reverse, and is thus relatively fast
		\item Relatively safe, frequency analysis not possible anymore
\end{itemize}
\end{itemize}
Rotor machines such as the \textit{Hagelin machine} or the \textit{Enigma} are examples of polyalphabetic ciphers.
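A sketch of the Vigenère scheme (illustrative Python; the key \texttt{lemon} is a textbook example, not taken from these notes):
\begin{verbatim}
def vigenere_encrypt(plaintext: str, key: str) -> str:
    # the i-th key letter selects the shift for the i-th plaintext letter,
    # repeating the key after len(key) letters (assumes lowercase letters only)
    out = []
    for i, c in enumerate(plaintext):
        shift = ord(key[i % len(key)]) - ord("a")
        out.append(chr((ord(c) - ord("a") + shift) % 26 + ord("a")))
    return "".join(out)

print(vigenere_encrypt("attackatdawn", "lemon"))   # lxfopvefrnhr
\end{verbatim}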
% end subsub:polyalphabetic-substitution-ciphers
\subsubsection{Digraph Substitution Ciphers} \label{subsub:digraph-substitution-ciphers}
\textbf{Playfair cipher}: reduces predictability of language by encrypting multiple letters simultaneously. It uses a 5x5 matrix of letters based on a keyword (without duplicate letters).
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{images/playfair-cipher-encryption.png}
\caption{Playfair cipher encryption}
\end{figure}
Security is much improved since we have 676 ($26\times26$) digrams versus 26 letters for a monoalphabetic cipher. Another advantage is that this cipher is easier to use since no machinery is needed.
It was widely used, e.g.\ by the US \& British military in WW1, but can now easily be broken, given a few hundred letters.
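A sketch of how the $5\times5$ key matrix is typically constructed (illustrative Python; the digraph encryption rules themselves are omitted):
\begin{verbatim}
def playfair_matrix(keyword: str):
    # keyword letters first (without duplicates), then the remaining
    # alphabet; I and J share one cell, giving 25 cells in total
    seen = []
    for c in keyword.lower().replace("j", "i") + "abcdefghiklmnopqrstuvwxyz":
        if c.isalpha() and c not in seen:
            seen.append(c)
    return [seen[5 * r:5 * r + 5] for r in range(5)]

for row in playfair_matrix("monarchy"):
    print(" ".join(row))
# m o n a r / c h y b d / e f g i k / l p q s t / u v w x z
\end{verbatim}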
% end subsub:digraph-substitution-ciphers
% end sub:substitution-ciphers
\subsection{Transposition ciphers} \label{sub:transposition-ciphers}
Rearranging the letter order. Not susceptible to frequency analysis since ciphertext has same frequency distribution as plaintext. \\
Some examples:
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/scytale.png}\\
The message is written across a strip wound around a rod; the unrolled strip is the ciphertext.
	\caption{Scytale}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/rail-fence-cipher.jpeg} \\
Write message letters diagonally over a number of rows, then read off cipher row by row.\\
defend the east wall $\rightarrow$ dnetleedheswlxftaax
\caption{Rail Fence cipher}
\end{figure}
\begin{figure}[h!t]
\centering
\includegraphics[width=0.5\textwidth]{images/columnar-transposition-cipher.png} \\
A more complex transposition. Write letters of message out in rows over a specified number of columns, then reorder the columns according to some key before reading off the columns.
\caption{Columnar Transposition Cipher}
\end{figure}
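The rail-fence example above can be reproduced with a short sketch (illustrative Python; three rails, message padded with \texttt{x} to fill the last cycle):
\begin{verbatim}
def rail_fence_encrypt(plaintext: str, rails: int = 3) -> str:
    # write the letters diagonally (zig-zag) over `rails` rows,
    # then read the cipher off row by row
    rows = [[] for _ in range(rails)]
    row, step = 0, 1
    for ch in plaintext:
        rows[row].append(ch)
        if row == 0:
            step = 1
        elif row == rails - 1:
            step = -1
        row += step
    return "".join("".join(r) for r in rows)

print(rail_fence_encrypt("defendtheeastwallxx"))   # dnetleedheswlxftaax
\end{verbatim}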
% end sub:transposition-ciphers
\subsection{Combination ciphers} \label{sub:combination-ciphers}
Substitution and transposition are vulnerable to frequency analysis and pattern analysis, respectively. What about applying multiple ciphers in succession?
\begin{itemize}
	\item Multiple substitutions just make a more complex substitution
	\item Multiple transpositions just make a more complex transposition
\item \textbf{Substitution followed by a transposition makes a new much harder cipher}
\end{itemize}
% end sub:combination-ciphers
% end sec:encryption-throughout-history
\section{Modern cryptography} \label{sec:modern-cryptography}
\subsection{Symmetric encryption algorithms} \label{sub:symmetric-encryption-algorithms}
% end sub:symmetric-encryption-algorithms
\subsection{Asymmetric encryption algorithms} \label{sub:asymetric-encryption-algorithms}
% end sub:asymetric-encryption-algorithms
\subsection{HASH algorithms} \label{sub:hash-algorithms}
% end sub:hash-algorithms
% end sec:modern-cryptography
% end cha:encryption-algorithms
\printbibliography
\end{document} | {
"alphanum_fraction": 0.7532824979,
"avg_line_length": 48.1804347826,
"ext": "tex",
"hexsha": "0a55be34faa2b9ba244e82e6ac7dcb40f7283970",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "64798540a0ae17c18a2b9cd3a24c4f84d5ec1fcb",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "LorenzoDeBie/Network-and-Computer-Security",
"max_forks_repo_path": "Theorie/Theorie.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "64798540a0ae17c18a2b9cd3a24c4f84d5ec1fcb",
"max_issues_repo_issues_event_max_datetime": "2020-10-10T13:41:25.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-10-10T13:41:25.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "LorenzoDeBie/Network-and-Computer-Security",
"max_issues_repo_path": "Theorie/Theorie.tex",
"max_line_length": 286,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "64798540a0ae17c18a2b9cd3a24c4f84d5ec1fcb",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "LorenzoDeBie/Network-and-Computer-Security",
"max_stars_repo_path": "Theorie/Theorie.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 5986,
"size": 22163
} |
\pdfoptionpdfminorversion=4
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% This is a (brief) model paper using the achemso class
%% The document class accepts keyval options, which should include
%% the target journal and optionally the manuscript type.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[journal=jpcbfk,manuscript=article,layout=traditional]{achemso}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Place any additional packages needed here. Only include packages
%% which are essential, to avoid problems later.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{chemformula} % Formula subscripts using \ch{}
\usepackage[T1]{fontenc} % Use modern font encodings
\usepackage{graphicx}
\usepackage{caption}
\usepackage{amsmath}
\usepackage{mathptmx}
\usepackage[scaled=0.92]{helvet}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% If issues arise when submitting your manuscript, you may want to
%% un-comment the next line. This provides information on the
%% version of every file you have used.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%\listfiles
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Place any additional macros here. Please use \newcommand* where
%% possible, and avoid layout-changing macros (which are not used
%% when typesetting).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand*\mycommand[1]{\texttt{\emph{#1}}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Meta-data block
%% ---------------
%% Each author should be given as a separate \author command.
%%
%% Corresponding authors should have an e-mail given after the author
%% name as an \email command. Phone and fax numbers can be given
%% using \phone and \fax, respectively; this information is optional.
%%
%% The affiliation of authors is given after the authors; each
%% \affiliation command applies to all preceding authors not already
%% assigned an affiliation.
%%
%% The affiliation takes an option argument for the short name. This
%% will typically be something like "University of Somewhere".
%%
%% The \altaffiliation macro should be used for new address, etc.
%% On the other hand, \alsoaffiliation is used on a per author basis
%% when authors are associated with multiple institutions.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\author{Sree Ganesh Balasubramani}
\affiliation{Department of Chemistry and Biochemistry, University of Arizona, Tucson, Arizona 85721, United States}
\author{Steven D. Schwartz}
\affiliation{Department of Chemistry and Biochemistry, University of Arizona, Tucson, Arizona 85721, United States}
%\author{I. Ken Groupleader}
%\altaffiliation{A shared footnote}
\email{[email protected]}
%\phone{+123 (0)123 4445556}
%\fax{+123 (0)123 4445557}
%\affiliation[Unknown University]
%{Department of Chemistry, Unknown University, Unknown Town}
%\alsoaffiliation[Second University]
%{Department of Chemistry, Second University, Nearby Town}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% The document title should be given as usual. Some journals require
%% a running title from the author: this should be supplied as an
%% optional argument to \title.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title[]
{Transition path sampling based calculations of free energies for enzymatic
reactions: the case of human methionine adenosyl transferase and plasmodium
vivax adenosine deaminase}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Some journals require a list of abbreviations or keywords to be
%% supplied. These should be set up here, and will be printed after
%% the title and author information, if needed.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\abbreviations{TPS,WHAM}
\keywords{American Chemical Society, \LaTeX}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% The manuscript does not need to include \maketitle, which is
%% executed automatically.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% The "tocentry" environment can be used to create an entry for the
%% graphical table of contents. It is given here as some journals
%% require that it is printed as part of the abstract page. It will
%% be automatically moved as appropriate.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{tocentry}
%Some journals require a graphical entry for the Table of Contents.
%This should be laid out ``print ready'' so that the sizing of the
%text is correct.
%Inside the \texttt{tocentry} environment, the font used is Helvetica
%8\,pt, as required by \emph{Journal of the American Chemical
%Society}.
%The surrounding frame is 9\,cm by 3.5\,cm, which is the maximum
%permitted for \emph{Journal of the American Chemical Society}
%graphical table of content entries. The box will not resize if the
%content is too big: instead it will overflow the edge of the box.
%This box and the associated title will always be printed on a
%separate page at the end of the document.
%\end{tocentry}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% The abstract environment will automatically gobble the contents
%% if an abstract is not used by the target journal.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{abstract}
Transition path sampling (TPS) is widely used for the
calculations of reaction rates, transition state structures and reaction
coordinates of condensed phase systems. Here we discuss a scheme
for the calculation of free energies using the ensemble of TPS reactive
trajectories in combination with a window based sampling technique
for enzyme catalyzed reactions. We calculate the free energy profiles of
the reactions catalyzed by the human methionine S-adenosyltransferase (MAT2A)
enzyme and the plasmodium vivax adenosine deaminase ($pv$ADA) enzyme to assess
the accuracy of this method. MAT2A catalyzes the formation of S-adenosine-L-methionine
following a S$_{\text{N}}2$ mechanism and using our method we estimate
the free energy barrier for this reaction to
be $16\;\text{Kcal}\;\text{mol}^{-1}$ which
is in excellent agreement with the experimentally measured activation
energy of $17.27\;\text{Kcal}\;\text{mol}^{-1}$. Furthermore, for the
$pv$ADA enzyme catalyzed reaction we estimate a free energy barrier of
$23\;\text{Kcal}\;\text{mol}^{-1}$ and the calculated free energy profile is
similar to that predicted from experimental observations.
Calculating free energies employing our simple method within TPS
provides significant advantages over such methods as umbrella sampling
since it is free from any applied external bias, accurate compared to
experimental measurements and has a reasonable computational cost.
\end{abstract}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Start the main part of the manuscript here.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
Molecular dynamics (MD) simulations are increasingly being
used to calculate quantitative estimates of experimentally measurable
kinetic and thermodynamic parameters of enzyme catalyzed reactions.
\cite{Karplus02NatStructMolBiol9p646,
Schramm98AnnuRevBiochem67p693,Zhang05AccChemRes38p379,
Schramm11AnnuRevBiochem80p703,Schramm18ChemRev118p11194}
This has helped in proposing mechanisms and explanations for
enzyme activity, fast high throughput screening of thousands of inhibitor
molecules in drug lead discovery, \cite{Jorgensen09AccChemRes42p724,Sliwoski14PharmacolRev66p334}
directed evolution for designing enzymes with enhanced catalytic
activities, \cite{Thyme09Nature461p1300,Bloom09PNAS106p9995,Schafer19JAmChemSoc141p10431} etc.
Straightforward MD simulations adequately sample
the long-lived stable reactant and product states
but the high energy barrier-crossing events that occur rarely and at long
time scales are not easily accessible using this method alone.
To access such states at a higher frequency using MD simulations
several enhanced sampling techniques
have been developed in the past few decades. These include
umbrella sampling, \cite{Kastner11WileyInterdiscipRevComputMolSci1p932}
metadynamics, \cite{Barducci11WileyInterdiscipRevComputMolSci1p826}
steered MD, \cite{Park04JChemPhys120p5946}
milestoning, \cite{Faradjian04JChemPhys120p10880} and transition path sampling
(TPS). \cite{Pratt86JChemPhys85p5045,Bolhuis02AnnRevPhysChem53p291,
Dellago98JChemPhys108p1964,Bolhuis21AdvTheory4p2000237}
TPS is an attractive method since it samples reactive trajectories
in which the rare barrier-crossing events are guaranteed to occur and
it does not require a priori knowledge of the reaction coordinate which can be
complex for large biological systems.
TPS is based on Monte Carlo importance sampling in the
space of trajectories. It uses the shooting and shifting algorithm
to collect an ensemble of reactive pathways that are real dynamical
trajectories generated without the application of any bias forces. \cite{dellago02AdvChemPhys123}
Therefore the TPS ensemble can be used to obtain
kinetic information such as the rate of the reaction as well as to elucidate
the transition state, reaction coordinate and the reaction mechanism.
TPS has been successfully applied to study problems such as enzyme catalysis,
\cite{Antoniou01JPhysChemB105p5553,Hay12NatChem4p161,Schwartz09NatChemBiol5p551}
and protein folding. \cite{Bolhuis03ProcNatlAcadSci100p12129,Juraszek12ChemPhys396p30}
Calculating equilibrium properties such as the free energy of an enzymatic
catalysis reaction within TPS is not straightforward.
The Monte Carlo algorithm used in TPS has an acceptance criterion that
is only satisfied if the trajectory begins in the reactant basin and ends in
the product basin or vice versa, hence trajectories that are localized in the
reactant or product basins, without visiting both are rejected.
If one defines an order parameter $\zeta$, which separates reactant from
product basins, then all trajectories, reactive and non-reactive,
contribute to the probability density distribution function
($P(\zeta)$) which is the probability of a system at equilibrium to have
a particular value of the order parameter. \cite{Dellago09AdvCompSimAppp167}
Frenkel has shown that inclusion of rejected states in a Markov chain Monte
Carlo algorithm can help speed up the convergence of the calculations of
the order parameter distribution of a two-dimensional
Ising model. \cite{Frenkel04ProcNatAcadSci101p17571}
Inclusion of rejected trajectories within TPS was first discussed by Radhakrishnan
et al. \cite{Radhakrishnan04JChemPhys121p2436} who developed `BOLAS' which combines
the shooting and shifting algorithms of TPS with a window based sampling technique
in the spirit of umbrella sampling to calculate equilibrium free energy
differences. Peters et al. used a variant of the BOLAS scheme called the
equilibrium path sampling (EPS) where they use aimless shooting
combined with window based sampling for free energy calculations.
\cite{Peters08JAmChemSoc130p17342,Beckham10epsbook}
More recently Brotzakis et al., \cite{Brotzakis19JChemPhys151p174111}
have developed a re-weighted path ensemble scheme within the standard TPS method
called virtual interface exchange TPS (VIE-TPS) to calculate the free energy
landscape. VIE-TPS is shown to produce reasonably accurate free energy profiles for
chemical reactions that are diffusive in nature whereas it is less accurate otherwise.
Transition interface sampling (TIS) has been used to obtain
a re-weighted path ensemble which can give access to free energies
and committor surfaces. \cite{Rogal10JChemPhys17p174109}
Free energy calculations based on the TPS shooting algorithm are
appealing but rare in the literature. Applications of this
method to study enzymatic reactions and comparisons of the calculated
free energies to experimental measurements can shine light on the
accuracy of this approach compared to the standard methods used for
free energy calculations.
In this manuscript we implement and apply the
algorithm developed by Radhakrishnan et al. \cite{Radhakrishnan04JChemPhys121p2436}
to calculate the free energies of reactions catalyzed by two systems:
the human MAT2A enzyme and the plasmodium vivax adenosine deaminase
enzyme. \cite{Luo07JAmChemSoc129p8008,Ho09Biochemistry48p9618}
Human MAT2A catalyzes the formation of S-adenosyl L-methionine (SAM).
SAM is an essential metabolite which is
distributed to almost all body tissues and fluids, is a universal
methyl donor and is of fundamental importance to the metabolism of
compounds such as hormones, neurotransmitters, proteins,
and nucleic acids. \cite{Friedel89Drugs38p389}
Plasmodium vivax is a parasite that is responsible for the largest number
of cases of malaria globally. Targeting proteins that are essential to
the metabolism of this parasite, such as the adenosine deaminase enzyme,
is a possible anti-malarial drug development strategy. \cite{Madrid08JBiolChem283p35899}
These enzymes were chosen not only because of their importance in
biochemical applications but also because of the availability of experimental
results with which comparisons can be drawn. This manuscript is organized as
follows: first we discuss the TPS method and the algorithm that we use
to calculate the free energies, then we show calculations which apply this
method on the two enzymatic reactions. Finally we compare our results to experiments
and provide conclusions.
%------------------------------------------------------------------------------
\section{Methods}
%------------------------------------------------------------------------------
\subsection{Transition path sampling}
%------------------------------------------------------------------------------
Here we briefly summarize the equations of the TPS method, both to introduce
notation and for the sake of completeness.
Consider the molecular dynamics simulation of a molecular system consisting
of $N$ atoms. Starting from the initial positions and momenta of each atom at time $t_0$,
the time evolution is carried out by finding the position ($\textbf{q}$) and
momenta ($\textbf{p}$) of each atom at regular intervals of time $t_i = t_0 + i\Delta t$
($i = 0,1,2,\ldots$). At each of the time slices, the positions and
momenta can be collectively represented as
the set $z = \{\textbf{q},\textbf{p}\}$. If the MD simulation is
run for a total
time of $\mathcal{T}$ the number of time slices is given by
$L = \mathcal{T}/\Delta t +1$ and for this sequence of
times the state of the system can be represented as
\begin{equation}
z(\mathcal{T}) = \{z_0, z_{\Delta t}, z_{2\Delta t},\ldots,z_{\mathcal{T}}\}
\end{equation}
The probability to obtain a particular sequence of states is determined by the initial
conditions and the type of dynamics used for the time evolution. If the time evolution is
Markovian, the probability to go from $z_{i\Delta t}$ to $z_{(i+1)\Delta t}$ depends only
on $z_{i\Delta t}$ and not on conditions before the time $i\Delta t$, and the total path probability can
be expressed as the product of individual probabilities $p(i\rightarrow j)$ as
\begin{equation}
\mathcal{P}[z(\mathcal{T})] = \rho(z_0)\Pi_{i=0}^{\mathcal{T}/\Delta t-1} p(z_{i\Delta t}\rightarrow z_{(i+1)\Delta t}),
\end{equation}
where $\rho(z_0)$ is the equilibrium distribution of the initial conditions and for a canonical ensemble this can be
expressed as
\begin{equation}
\rho(z_0) = \exp(-\beta H(z_0))/Z
\end{equation}
where
\begin{equation}
Z = \int dz \exp(-\beta H(z))
\end{equation}
is the canonical partition function.
Within the transition path sampling, the trajectories of interest start in the reactant region of the
phase space ($\mathcal{A}$) and end in the product region ($\mathcal{B}$). Such reactive trajectories have
a restricted probability distribution function given by
\begin{equation}
\mathcal{P}_{\mathcal{AB}}[z_{\mathcal{T}}] = h_{\mathcal{A}}(z_0)\mathcal{P}[z(\mathcal{T})]
h_{\mathcal{B}}(z_{\mathcal{T}})/Z_{\mathcal{AB}}(\mathcal{T})\label{eqn:tpsensem}
\end{equation}
where
\[
h_{\mathcal{A}/\mathcal{B}}(z)=
\begin{cases}
1, & \text{if } z\in \mathcal{A}/\mathcal{B}\\
0, & \text{otherwise}
\end{cases}
\]
and $Z_{\mathcal{AB}}(\mathcal{T})$ is the normalization factor for this
probability distribution function.
The set of all reactive trajectories characterized by the probability
distribution function given by Eq. \ref{eqn:tpsensem} is the
transition path ensemble.
\subsection{Equilibrium distribution of order parameters}
The free energy of a molecular system as a function of the order parameter $\zeta$
is defined as
\begin{equation}
\text{A}(\zeta) = -k_{\text{B}}T\text{ln}(P(\zeta)) + C, \label{eqn:fenergy}
\end{equation}
where $P(\zeta)$ is the probability distribution function of the reaction coordinate
$\zeta$, $C$ is an arbitrary constant, $k_{\text{B}}$ is the Boltzmann constant
and $T$ is the temperature. At equilibrium the probability distribution of the
reaction coordinate can be obtained from the distribution function for the phase
space $\rho(\textbf{q})$ as
\begin{equation}
P(\zeta) = \int d\textbf{V} \rho(\textbf{q})\delta\left[\zeta-\tilde{\zeta}(\textbf{q})\right].
\end{equation}
The integration is over the entire phase space. In practice, the calculation of this distribution function
often proceeds using histogram based methods where the reaction coordinate is
divided into bins and the frequency of the occurrence of the reaction coordinate
within a particular window during the course of MD simulation is used to calculate
the probability distribution function. Since some values of the reaction coordinates
particularly close to the transition state are
rarely sampled in a conventional MD simulation, enhanced sampling techniques such as umbrella sampling
are necessary to sample the low probability regions. TPS is designed to sample these regions in phase space
without the application of any external bias, but the trajectories
sampled with TPS are only reactive and hence
they are not distributed according to the equilibrium distribution function.
The restriction that the pathways start from the reactant state and end in the
product state can be relaxed to obtain an ensemble of trajectories that will
be distributed according to the equilibrium distribution.
%------------------------------------------------------------------------------
\subsection{Algorithm for free energy calculations}\label{ssec:algorithm}
%------------------------------------------------------------------------------
We follow the steps proposed by Radhakrishnan et al. \cite{Radhakrishnan04JChemPhys121p2436}
to sample TPS trajectories within windows of the order parameter; a schematic code sketch of the procedure is given after the list below.
\begin{itemize}
\item An appropriate order parameter is chosen for the reaction of interest
that can distinguish the reactant and product states sufficiently well, and the TPS ensemble,
which consists only of reactive trajectories, is harvested.
\item The order parameter is divided into windows $\{\zeta_i\}$ and within each window,
$\zeta_{i}^{min} < \zeta < \zeta_{i}^{max}$ is to be sampled.
\item Choose a reactive trajectory $\tilde{z}(\mathcal{T})$ from the TPS ensemble as
a guiding trajectory for the window sampling. Pick a time slice from this trajectory
$\tilde{z}_{j\Delta t}$ such that the order parameter calculated for this time slice
$\tilde{\zeta}$ satisfies $\zeta_{i}^{min} < \tilde{\zeta} < \zeta_{i}^{max}$.
\item Using the shooting and shifting algorithm familiar from TPS simulations, \cite{dellago02AdvChemPhys123}
harvest dynamics trajectories starting from the time slice $\tilde{z}_{j\Delta t}$ of the trajectory
$\tilde{z}(\mathcal{T})$ and accept the new trajectory if for any time slice the calculated
order parameter $\hat{\zeta}$ satisfies $\zeta_{i}^{min} < \hat{\zeta} < \zeta_{i}^{max}$.
\item Calculate the probability distribution ($P_i(\zeta)$) within the window $\zeta_{i}^{min}
< \zeta < \zeta_{i}^{max}$ by constructing histograms obtained from the ensemble of
accepted trajectories for the window referenced by the index $i$.
\item Calculate the free energies for each of the windows according to Eq. \ref{eqn:fenergy}
and combine them such that the function $\text{A}(\zeta)$ is continuous
by adjusting the constants $C$.
\end{itemize}
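As an illustration of the bookkeeping behind these steps, a schematic and deliberately simplified sketch is given below. The functions \texttt{order\_parameter} and \texttt{shoot\_from} are placeholders for the actual order-parameter evaluation and the short unbiased shooting move described in the text; this is not the production code used in this work.
\begin{verbatim}
import random
import numpy as np

def sample_window(guide_traj, zeta_min, zeta_max, n_paths, bins):
    # seed the window with a time slice of a guiding TPS trajectory whose
    # order parameter lies inside [zeta_min, zeta_max]
    seed = next(z for z in guide_traj
                if zeta_min < order_parameter(z) < zeta_max)
    hist = np.zeros(len(bins) - 1)
    for _ in range(n_paths):
        path = shoot_from(seed)            # perturb momenta, integrate +/- a few steps
        zetas = [order_parameter(z) for z in path]
        if any(zeta_min < z < zeta_max for z in zetas):
            # acceptance criterion: at least one time slice lies in the window
            hist += np.histogram(zetas, bins=bins)[0]
            seed = random.choice(path)     # continue the chain from the accepted path
    return hist / hist.sum()               # P_i(zeta) within this window
\end{verbatim}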
%------------------------------------------------------------------------------
\section{Computational details}
%------------------------------------------------------------------------------
\subsection{System preparation}
%------------------------------------------------------------------------------
We used the crystal structure of the human MAT2A enzyme
bound to methylthioadenosine and di-imido triphosphate (PNPNP)
(protein data bank ID: 7L1A) reported by Niland
et al. \cite{Niland21Biochem60p791,Ghosh21JAmChemSoc143p18325} and the crystal
structure of \textit{pv}ADA in complex with MT-coformycin (protein data bank ID: 3EWC)
reported by Ho et al. \cite{Ho09Biochemistry48p9618} as the starting points of our two
simulations. For calculations involving the MAT2A enzyme the substrates were modified to
L-methionine and adenosyl triphosphate (ATP) whereas for calculations with the \textit{pv}ADA
enzyme the substrates were modified to $2^{'}-$deoxyadenosine and hydroxide ion.
We solvated both the systems using TIP3P water molecules \cite{Jorgensen83JChemPhys79p926}
in a spherical nanodroplet whose boundary extended $15$ {\AA}
beyond the protein's surface, and the total charge was neutralized
using potassium ions.
MD simulations within classical and hybrid quantum mechanics/molecular mechanics
(QMMM) approximations were carried out using the CHARMM program
package. \cite{Brooks83JComputChem4p187}
For the MM calculations the CHARMM36 force field \cite{Brooks09JComputChem30p1545}
was used and for the QM calculations the approximate PM3 semiempirical
method \cite{Repasky02JComputChem23p1601} was used.
For the hybrid QMMM simulations, the MAT2A system was partitioned into a QM
region consisting of the ATP and MET molecules along with three Mg$^{2+}$ ions
and the MM region consists of the rest of the molecules.
For the $pv$ADA system the QM region consisted of
the adenosine molecule, a hydroxide anion, a Zn$^{2+}$ ion and
a portion of the Glu229 residue which was partitioned within the generalized
hybrid orbital (GHO) scheme. \cite{Gao98JPhysChemA102p4714}
The energies of both the systems were minimized using 50 steps of the
steepest descent method, followed by 2000 steps of the
adopted basis Newton-Raphson method where only classical molecular mechanics
was used for the dynamics.
The minimized systems were
heated slowly to 300 K for 35 ps beginning with harmonic
constraints on all atoms except on the H atoms and the TIP3P
water molecules with gradual reduction of the restraint forces.
15 ps of equilibration was carried out starting with harmonic
restraint forces followed by 20 ps of constraint free
equilibration to prepare the systems for TPS simulations.
During the heating and equilibration steps, bonds that contain
hydrogen atoms in the MM region were restrained to their equilibrium values
using the SHAKE procedure. \cite{Ryckaert77JComputPhys23p327}
%------------------------------------------------------------------------------
\subsection{Transition path sampling}
%------------------------------------------------------------------------------
TPS simulations require the definition of unique reactant and product states
to collect reactive trajectories based on the Monte Carlo algorithm. Here we
discuss the details of the reactions involving the MAT2A and the $pv$ADA enzymes
to identify and define the order parameters characterizing the reactant and product basins.
The reaction catalyzed by the human MAT2A enzyme is
between adenosyl $5'$-triphosphate (ATP) and L-methionine (MET) molecules
resulting in the formation of S-adenosyl-L-methionine (SAM) which is
depicted in Fig. \ref{fig:mat2a-reaction}. This reaction follows a S$_{\text{N}}2$
mechanism and was experimentally characterized by
Firestone et al. \cite{Firestone17JAmChemSoc139p13754}. The key bond
parameters for this reaction consist of the distance between the
nucleophilic sulfur atom of MET and the $5'-C$ atom of the ATP (denoted as d$_{\mathrm{SC}}$),
and the distance between oxygen atom of the phosphate group and the $5'-C$
atom of ATP (d$_{\mathrm{OC}}$). For the TPS calculations we define the
reactant state to have d$_{\mathrm{OC}}$ to be less than 1.7 {\AA} and the product
state to have the d$_{\mathrm{SC}}$ less than 2.0 {\AA}.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/mat2a-reaction.png}
\caption{The reaction catalyzed by the human MAT2A enzyme between ATP and L-methionine resulting in the
formation of S-adenosyl L-methionine and triphosphate leaving group.}
\label{fig:mat2a-reaction}
\end{figure}
$pv$ADA catalyzes the reaction between adenosine and OH$^{-}$ ion
resulting in the formation of inosine and ammonia as shown in
Fig. \ref{fig:ada-reaction}.
The nucleophilic O atom of the OH$^{-}$ anion
attacks the electrophilic C6 atom of the adenosyl ring leading to the
formation of a Meisenheimer complex like structure while the N1 atom
on the adenosyl moiety gets protonated by the Glu229 residue of $pv$ADA.
The key bond parameters are the distance between the approaching hydroxyl
oxygen and the C6 atom (denoted as d$_{\text{OC}}$) and the
dissociating C6$-$N6 (d$_{\text{NC}}$) bond distance. For the TPS simulations
we define the reactant state to have d$_{\text{NC}}$ being less than $1.8$ {\AA}
and the product state to have the d$_{\text{OC}}$ less than $1.8$ {\AA}.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/ada-new-reaction.png}
\caption{The reaction catalyzed by the $pv$ADA enzyme between adenosine and hydroxide ion
resulting in the formation of inosine and ammonia.}
\label{fig:ada-reaction}
\end{figure}
A biased initial reactive trajectory is obtained by using harmonic restraint forces
for both the MAT2A and $pv$ADA systems.
Using the biased trajectory as the starting point and after removing the restraint
forces, the shooting algorithm \cite{dellago02AdvChemPhys123} was used to generate
new trajectories. The shooting algorithm begins with the choice of a
random time slice from the biased initial reactive trajectory and perturbs
the momenta of all the atoms in the system. Then the dynamics is propagated for
250 steps forwards and backwards using $1\;fs$ time steps
resulting in a trajectory with a time period of $0.5\;ps$.
A total of 150 reactive trajectories were generated for both the MAT2A and $pv$ADA
systems.
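A schematic outline of a single shooting move as used here is given below (illustrative Python pseudocode; \texttt{perturb\_momenta} and \texttt{propagate} stand in for the CHARMM dynamics calls, and the basin tests encode the MAT2A definitions given above; this is not the actual implementation).
\begin{verbatim}
import numpy as np

def in_reactant(frame):               # MAT2A reactant definition from the text
    return frame["d_OC"] < 1.7        # angstrom
def in_product(frame):                # MAT2A product definition from the text
    return frame["d_SC"] < 2.0        # angstrom

def shooting_move(current_path, n_steps=250):
    i = np.random.randint(len(current_path))          # random time slice
    z = perturb_momenta(current_path[i])              # perturb momenta of all atoms
    forward  = propagate(z, n_steps)                  # 250 x 1 fs forwards
    backward = propagate(z, n_steps, reverse=True)    # 250 x 1 fs backwards
    new_path = backward[::-1] + [z] + forward         # 0.5 ps trial trajectory
    # accept only if the trial path connects the reactant and product basins
    if (in_reactant(new_path[0]) and in_product(new_path[-1])) or \
       (in_product(new_path[0]) and in_reactant(new_path[-1])):
        return new_path
    return current_path                               # otherwise keep the old path
\end{verbatim}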
%------------------------------------------------------------------------------
\subsection{Free energy calculations}
%------------------------------------------------------------------------------
We follow the algorithm discussed previously to calculate the free energy
profile for the two enzymatic reactions using the TPS reactive trajectories.
The order parameter for the MAT2A system
is defined to be d$_{\text{OC}}-$d$_{\text{SC}}$ and that for the $pv$ADA is defined as
d$_{\text{NC}}-$d$_{\text{OC}}$. For the MAT2A system, configurations are sampled
for order parameter values within the range $[-2.0,2.0]$ {\AA} and this range is divided
into 14 overlapping windows ($0.09$ {\AA} between neighboring windows).
Configurations are sampled for the $pv$ADA system for order parameter values
within the interval $[-3.0, 3.0]$ {\AA} which is divided into 20 overlapping
windows ($0.09$ {\AA} between neighboring windows). A reactive
trajectory from the TPS ensemble is chosen to guide the sampling
within each of the windows in the same spirit as a steered MD trajectory
is used to guide the window based umbrella sampling simulations (though in this case,
of course, this is an unbiased dynamically exact reactive trajectory).
To sample in a particular window $\zeta_i$, a frame from this reactive
trajectory is chosen such that the value of its
order parameter ($\zeta$) falls within
$\zeta_{i}^{min} < \zeta < \zeta_{i}^{max}$. Using the shooting algorithm
the momenta of all the atoms in the system are perturbed
and dynamics is run for 10 steps forwards and backwards with a
$1\;fs$ time step to obtain a trajectory with a time period of $20\;fs$.
Longer trajectories in our experience end up sampling outside the window
more often and hence we make use of very short trajectories in this study.
Within each window 3000 TPS trajectories are collected. The
probability distribution function for the order parameter within window $w$
is then calculated as normalized histograms which represent the populations
of the configurations within the window at equilibrium.
Finally we calculate the free energy profile in a similar fashion to the WHAM
procedure \cite{Kumar92JComputChem13p1011} which is generally used to obtain the
potential of mean force from umbrella sampling calculations.
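The stitching of the per-window profiles can be illustrated schematically as follows (illustrative Python; \texttt{windows} is a placeholder for the per-window bin centres and normalized histograms produced above, with neighbouring windows sharing bin centres taken from one common grid):
\begin{verbatim}
import numpy as np

kT = 0.0019872 * 300.0          # Boltzmann constant in kcal/(mol K) times 300 K

def stitch(windows):
    # windows: list of (bin_centers, P_i) ordered along the order parameter
    profile = {}
    prev = None
    for centers, P in windows:
        A = -kT * np.log(np.clip(P, 1e-12, None))       # A_i = -kT ln P_i
        if prev is not None:
            # choose the constant C_i so that this window matches the
            # previous (already shifted) one on their shared bins
            common = [c for c in centers if c in prev]
            A = A + np.mean([prev[c] - A[list(centers).index(c)] for c in common])
        prev = dict(zip(centers, A))
        profile.update(prev)
    return profile               # {zeta: A(zeta)}, continuous up to one overall constant
\end{verbatim}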
%------------------------------------------------------------------------------
\section{Results and discussion}
%------------------------------------------------------------------------------
\subsection{Human MAT2A}
The equilibrated structure of the MAT2A enzyme in complex with
ATP, MET and two Mg$^{2+}$ ions is shown in Fig. \ref{fig:mat2a-equil}.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.2]{figures/mat2a-equil.png}
\caption{Equilibrated structure of the MAT2A enzyme in complex with ATP and
MET along with 2 Mg$^{2+}$ ions.}
\label{fig:mat2a-equil}
\end{figure}
The variation of the key distance parameters of the reaction calculated for
a reactive trajectory from the TPS ensemble is shown in Fig. \ref{fig:mat2a-reactive-traj}.
\begin{figure}
\includegraphics[scale=1.0]{figures/mat2a-diff167.pdf}
\caption{Bond breaking ($\mathrm{d}_{\mathrm{OC}}$), bond forming
($\mathrm{d}_{\mathrm{SC}}$) distances for a typical
reactive trajectory for the reaction catalyzed by the MAT2A enzyme.}
\label{fig:mat2a-reactive-traj}
\end{figure}
Committor analysis and committor distribution analysis are used to deduce
the transition states and the reaction coordinates, respectively.
Committor analysis is carried out for a reactive trajectory
from the TPS ensemble by choosing a time slice randomly from the trajectory, assigning
momenta picked at random from the Boltzmann distribution for all the atoms, and running
dynamics for $250\;fs$. The commitment probability of a time slice is calculated by
repeating this procedure many times and finding the fraction of trajectories that go
to the reactant, product or neither. The transition state for the reaction is defined as
the structure from the time slice that has equal probability to commit to the reactant
and the product state. Repeating this process for several trajectories from the TPS
ensemble results in a collection of transition states called the separatrix.
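Schematically, the committor estimate for a stored time slice can be written as follows (illustrative Python; \texttt{draw\_boltzmann\_momenta}, \texttt{run\_md}, and \texttt{in\_product} are placeholders for the MD engine and the basin definitions, not the actual code used here):
\begin{verbatim}
def committor(time_slice, n_trials=50, t_fs=250):
    # fraction of fleeting trajectories, started with fresh Boltzmann
    # momenta, that commit to the product basin within t_fs
    n_product = 0
    for _ in range(n_trials):
        frame = draw_boltzmann_momenta(time_slice, T=300.0)
        trajectory = run_md(frame, t_fs)
        if in_product(trajectory[-1]):
            n_product += 1
    return n_product / n_trials   # ~0.5 identifies a transition-state structure
\end{verbatim}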
\begin{figure}[ht!]
\includegraphics[scale=0.12]{blender-images/ada/mat2a-trans-labelled.png}
\caption{Transition state structure of the reaction catalyzed by the human MAT2A enzyme.
The bond parameters d$_{\text{SC}}$ and d$_{\text{OC}}$ are $2.31$ {\AA} and $2.09$ {\AA},
respectively.}
\label{fig:mat2a-trans-struct}
\end{figure}
The transition state resembles a S$_{\text{N}}2$ structure with the
d$_{\text{SC}}$ and d$_{\text{OC}}$ distances being $2.31$ {\AA} and $2.09$ {\AA}, respectively.
Experimentally the transition state structures of enzymatic reactions
can be elucidated using kinetic isotope effects (KIE). \cite{Schramm99MetEnzym308p301}
Firestone et al. \cite{Firestone17JAmChemSoc139p13754} used a combination of experimental
KIE measurements using the whole protein system and computational density functional
theory (DFT) based gas phase simulations of representative molecules to elucidate the
transition state structure for this reaction. From their findings they report
d$_{\text{SC}}$ and d$_{\text{OC}}$ distances of $2.03$ {\AA} and $2.32$ {\AA},
respectively and our calculated values are in close agreement to this result.
The reaction coordinate might contain not just the atoms from the molecules that
are directly involved in the reaction but also residues from side chains that can
promote the reaction. \cite{Schramm18Biochem57p3299} Taking this into account the
committor distribution analysis
begins by constraining the residues and molecules that are hypothesized to be part of the
reaction coordinate and using this constrained system as the starting point
trajectories are launched assigning momenta at random from the Boltzmann distribution at
$300\;\text{K}$ to all the atoms in the system.
First we constrained just the QM region in our committor distribution analysis
which consists of ATP and MET molecules along with two Mg$^{2+}$ ions and the
resulting distribution, shown in Fig. \ref{fig:mat2a-comm-dist}, is not peaked
at 0.5, as would be expected for the correct reaction coordinate.
Next we constrained the residues Gln113, Ser114, Arg249 and Arg264 along with the
QM region and obtain a distribution that is peaked at 0.5 and hence we
conclude that the reaction coordinate consists of these residues as well.
\begin{figure*}[ht!]
\centering
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[width=\textwidth]{figures/comm-60-mat2a-nocons.pdf}
\label{fig:minipage1}
\end{minipage}
\quad
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[width=\textwidth]{figures/comm-60-mat2a.pdf}
\label{fig:minipage2}
\end{minipage}
\caption{Committor distribution analysis for obtaining the reaction coordinate of the
MAT2A catalyzed reaction. The figure on the left has the QM region constrained and the figure
on the right has the QM region along with the Gln113, Ser114, Arg249 and Arg264 residues constrained.}
\label{fig:mat2a-comm-dist}
\end{figure*}
The activation energy of this reaction was experimentally measured by
Niland et al. \cite{Niland21Biochem60p791} where they measured the rate
constant of the reaction at various temperatures while keeping the
substrate concentrations near saturation. Using the Arrhenius equation
they estimated the activation energy to be $17.27\;\pm\;1.5\;
\text{Kcal mol}^{-1}$.
\begin{figure}
\includegraphics[scale=1.0]{figures/mat2a-fenergy.pdf}
\caption{The free energy as a function of the order
parameter for the reaction catalyzed
by the human MAT2A enzyme. The activation barrier is
calculated to be $16\;\text{Kcal mol}^{-1}$. The solid colored disks represent the free energies
calculated within the individual windows, which are adjusted to make the free energy a continuous
function of the order parameter.}
\label{fig:mat2a-fenergy}
\end{figure}
The free energy profile from our TPS based calculations is shown in
Fig. \ref{fig:mat2a-fenergy} where the barrier is $16\;\text{Kcal mol}^{-1}$.
This is in excellent agreement with the experimental result.
%------------------------------------------------------------------------------
\subsection{$pv$ADA}
%------------------------------------------------------------------------------
The equilibrated structure for the ADA enzyme complexed with adenosine,
OH$^{-}$ anion and Zn$^{2+}$ cation is shown in Fig. \ref{fig:ada-equil}.
The Zn$^{2+}$ cation interacts with the His42, His44 and His226 residues
and stabilizes the OH$^{-}$ anion in the pocket. The N1 atom of the adenosyl
molecule hydrogen bonds with the carboxylic acid hydrogen atom of the Glu229
residue.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.2]{./figures/ada-equil.png}
\caption{Equilibrated structure of $pv$ADA enzyme complexed with adenosine,
OH$^{-}$ anion and Zn$^{2+}$.}
\label{fig:ada-equil}
\end{figure}
The bond forming (d$_{\text{OC}}$) and bond breaking (d$_{\text{NC}}$) distances
for a reactive trajectory from the TPS ensemble are depicted
in Fig. \ref{fig:ada-reactive-traj}.
\begin{figure}[ht!]
\includegraphics[scale=1.0]{figures/ada-diff60.pdf}
\caption{Bond breaking ($\mathrm{d}_{\mathrm{NC}}$), bond forming
($\mathrm{d}_{\mathrm{OC}}$) distances for a typical
reactive trajectory for the reaction catalyzed by the $pv$ADA enzyme.}
\label{fig:ada-reactive-traj}
\end{figure}
The deamination is an aromatic nucleophilic substitution reaction
where an intermediate Meisenheimer complex like structure is formed
at the transition state as shown in Fig. \ref{fig:ada-trans}.
The approaching O atom of the OH$^{-}$ anion is at a distance of $1.49$ {\AA}
from the C6 atom whereas the N atom of the amine leaving group is at $1.62$ {\AA}
from the C6 atom.
The N$1-$H bond distance is $1.1\;${\AA}
suggesting that the proton transfer from the Glu229 residue is almost complete.
\begin{figure}
\centering
\includegraphics[scale=0.12]{blender-images/ada/new-ada-trans.png}
\caption{The transition state structure of the reaction catalyzed by the ADA enzyme.
The formation of the Meisenheimer complex like structure with
d$_{\text{NC}}=1.62$ {\AA} and d$_{\text{OC}}=1.49$ {\AA} is observed.}
\label{fig:ada-trans}
\end{figure}
These observations are consistent with the
experimental KIE studies from Luo et al. \cite{Luo07JAmChemSoc129p8008}
who predict the transition state structure for the deamination reaction
catalyzed by the plasmodium falciparum ADA ($pf$ADA) enzyme which shares
$72\%$ identity with the $pv$ADA enzyme that we analyze here. The next
step in the reaction is the proton transfer from the OH$^{-}$ ion to the
amine leaving group resulting in the formation of inosine and ammonia.
%At the transition state, experimental findings
%suggest that the dissociation of the C6$-$N6 bond as well as the protonation of the N1
%atom are almost complete. \cite{Luo07JAmChemSoc129p8008}
The free energy profile that we calculated is shown in Fig. \ref{fig:ada-fenergy}.
\begin{figure}[h!]
\centering
\includegraphics[scale=1.0]{./figures/ada-fenergy.pdf}
\caption{The free energy as a function of the reaction
parameter for the ADA catalyzed reaction. The activation barrier is calculated as
$21\;\text{Kcal}\;\text{mol}^{-1}$. The solid colored disks represent the free energies
calculated within the individual windows, which are adjusted to make the free energy a continuous
function of the order parameter.}
\label{fig:ada-fenergy}
\end{figure}
The free energy barrier is calculated to be $21\;\text{Kcal}\;\text{mol}^{-1}$
and the overall shape of the function is very similar to that hypothesized
by Luo et al. \cite{Luo07JAmChemSoc129p8008}
%------------------------------------------------------------------------------
\section{Conclusions}
%------------------------------------------------------------------------------
In this study, we presented an application of the TPS method for the calculation
of free energy profiles of enzymatic reactions. The order parameter along which the
free energy profile is to be obtained is partitioned into windows as is done in
umbrella sampling, but instead of using bias potentials to sample the configurations
within the windows, the shooting algorithm is used to
obtain an ensemble of trajectories according to the Monte Carlo algorithm. A
trajectory is accepted if it visits the window in at least one of its time slices.
Using this ensemble of trajectories the unbiased equilibrium probability distribution
function of the order parameter of the reaction within each window is obtained.
Furthermore the free energy as a function of the order parameter is computed within
each of the windows and the entire profile is obtained by shifting the free energies
such that the function is continuous at the boundaries.
The MAT2A enzymatic reaction was analyzed in detail to elucidate the transition state,
reaction coordinate and the free energy profile. The reaction proceeds through a
S$_\text{N}$2 mechanism; at the transition state d$_{\text{OC}}$ and d$_{\text{SC}}$ are $2.09$ {\AA}
and $2.31$ {\AA}, respectively. The reaction coordinate is determined to involve
the Gln113, Ser114, Arg249 and Arg264 residues.
The free energy barrier for this reaction was calculated to be
$16\;\text{Kcal}\;\text{mol}^{-1}$ which is in good agreement
with the experimentally measured activation energy of $17.27\;\pm\;1.5\;\text{Kcal mol}^{-1}$.
Furthermore the reaction catalyzed by the $pv$ADA enzyme is analyzed in detail to obtain the
transition state and calculate the free energy profile.
The transition state has a Meisenheimer complex like structure with the
d$_\text{NC}$ and d$_\text{OC}$ being $1.62$ {\AA} and $1.49$ {\AA}, respectively.
The free energy barrier for this reaction was found to be $21\;\text{Kcal}\;\text{mol}^{-1}$
with a double well shape similar to that measured from experiments.
These results show that free energy simulations based on unbiased transition
path sampling computations can result in accurate and computationally
efficient predictions of both free energy profiles and mechanisms.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% The "Acknowledgement" section can be given in all manuscript
%% classes. This should be given within the "acknowledgement"
%% environment, which will make the correct section or running title.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{acknowledgement}
SGB acknowledges funding from the NIH through grant number GM127594.
\end{acknowledgement}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% The same is true for Supporting Information, which should use the
%% suppinfo environment.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{suppinfo}
%\end{suppinfo}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% The appropriate \bibliography command should be placed here.
%% Notice that the class file automatically sets \bibliographystyle
%% and also names the section correctly.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliography{manuscript}
\end{document}
% Copyright 2019 by Till Tantau
%
% This file may be distributed and/or modified
%
% 1. under the LaTeX Project Public License and/or
% 2. under the GNU Free Documentation License.
%
% See the file doc/generic/pgf/licenses/LICENSE for more details.
\section{Shadings Library}
\label{section-library-shadings}
\begin{pgflibrary}{shadings}
The package defines a number of shadings in addition to the ball and axis
shadings that are available by default.
\end{pgflibrary}
In the following, the shadings defined in the library are listed in
alphabetical order. The colors of some of these shadings can be configured
using special options (like |left color|). These options implicitly select the
shading.
The three shadings |axis|, |ball|, and |radial| are always defined, even when
this library is not used.
\begin{shading}{axis}
In this always-defined shading the colors change gradually between three
horizontal lines. The top line is at the top (uppermost) point of the path,
the middle line is in the middle, the bottom line is at the bottom of the path.
%
\begin{key}{/tikz/top color=\meta{color}}
This option sets the color to be used at the top in an |axis| shading.
When this option is given, several things happen:
%
\begin{enumerate}
\item The |shade| option is selected.
\item The |shading=axis| option is selected.
\item The middle color of the axis shading is set to the average of
the given top color \meta{color} and of whatever color is
currently selected for the bottom.
\item The rotation angle of the shading is set to 0.
\end{enumerate}
%
\begin{codeexample}[preamble={\usepgflibrary{shadings}}]
\tikz \draw[top color=red] (0,0) rectangle (2,1);
\end{codeexample}
\end{key}
\begin{key}{/tikz/bottom color=\meta{color}}
This option works like |top color|, only for the bottom color.
\end{key}
\begin{key}{/tikz/middle color=\meta{color}}
This option specifies the color for the middle of an axis shading. It
also sets the |shade| and |shading=axis| options, but it does not
change the rotation angle.
\emph{Note:} Since both |top color| and |bottom color| change the
middle color, this option should be given \emph{last} if all of these
options need to be given:
%
\begin{codeexample}[preamble={\usepgflibrary{shadings}}]
\tikz \draw[top color=white,bottom color=black,middle color=red]
(0,0) rectangle (2,1);
\end{codeexample}
\end{key}
\begin{key}{/tikz/left color=\meta{color}}
This option does exactly the same as |top color|, except that the
shading angle is set to $90^\circ$.
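    For instance, combining this option with |right color| yields a
    horizontal sweep between the two colors:
    %
\begin{codeexample}[preamble={\usepgflibrary{shadings}}]
\tikz \draw[left color=red,right color=blue] (0,0) rectangle (2,1);
\end{codeexample}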
\end{key}
\begin{key}{/tikz/right color=\meta{color}}
Works like |left color|.
\end{key}
\end{shading}
\begin{shading}{ball}
This always-defined shading fills the path with a shading that ``looks like
a ball''. The default ``color'' of the ball is blue (for no particular
reason).
\begin{key}{/tikz/ball color=\meta{color}}
This option sets the color used for the ball shading. It sets the
|shade| and |shading=ball| options. Note that the ball will never
``completely'' have the color \meta{color}. At its ``highlight'' spot a
certain amount of white is mixed in, at the border a certain amount of
black. Because of this, it also makes sense to say |ball color=white|
or |ball color=black|.
%
\begin{codeexample}[preamble={\usepgflibrary{shadings}}]
\begin{tikzpicture}
\shade[ball color=white] (0,0) circle (2ex);
\shade[ball color=red] (1,0) circle (2ex);
\shade[ball color=black] (2,0) circle (2ex);
\end{tikzpicture}
\end{codeexample}
\end{key}
\end{shading}
\begin{shading}{bilinear interpolation}
This shading fills a rectangle with colors that are bilinearly interpolated
between the colors in the four corners of the rectangle. These four colors
are called |lower left|, |lower right|, |upper left|, and |upper right|. By
changing these colors, you can change the way the shading looks. The library
also defines four options, called the same way, that can be used to set
these colors and select the shading implicitly.
%
\begin{codeexample}[preamble={\usepgflibrary{shadings}}]
\tikz
\shade[upper left=red,upper right=green,
lower left=blue,lower right=yellow]
(0,0) rectangle (3,2);
\end{codeexample}
\begin{key}{/tikz/lower left=\meta{color} (initially white)}
Sets the color to be used in a |bilinear interpolation| shading for the
lower left corner. Also, this options selects this shading and sets the
|shade| option.
\end{key}
\begin{key}{/tikz/upper left=\meta{color} (initially white)}
Works like |lower left|.
\end{key}
\begin{key}{/tikz/upper right=\meta{color} (initially white)}
Works like |lower left|.
\end{key}
\begin{key}{/tikz/lower right=\meta{color} (initially white)}
Works like |lower left|.
\end{key}
\end{shading}
\begin{shading}{color wheel}
\label{shading-color-wheel}%
This shading fills the path with a color wheel.
%
\begin{codeexample}[preamble={\usepgflibrary{shadings}}]
\tikz \shade[shading=color wheel] (0,0) circle (1.5);
\end{codeexample}
%
To produce a color ring, cut out a circle from the color wheel:
%
\begin{codeexample}[preamble={\usepgflibrary{shadings}}]
\tikz \shade[shading=color wheel] [even odd rule]
(0,0) circle (1.5)
(0,0) circle (1);
\end{codeexample}
%
\end{shading}
\begin{shading}{color wheel black center}
This shading looks like a color wheel, but the brightness drops to zero in
the center.
%
\begin{codeexample}[preamble={\usepgflibrary{shadings}}]
\tikz \shade[shading=color wheel black center] (0,0) circle (1.5);
\end{codeexample}
%
\end{shading}
\begin{shading}{color wheel white center}
This shading looks like a color wheel, but the saturation drops to zero in
the center.
%
\begin{codeexample}[preamble={\usepgflibrary{shadings}}]
\tikz \shade[shading=color wheel white center] (0,0) circle (1.5);
\end{codeexample}
%
\end{shading}
\makeatletter
\begin{shading}{Mandelbrot set}
This shading is just for fun. It fills the path with a zoomable Mandelbrot
set. Note that this is \emph{not} a bitmap graphic. Rather, the Mandelbrot
set is \emph{computed by the \textsc{pdf} renderer} and can be zoomed
arbitrarily (give it a try, if you have a fast computer).
%
\begin{codeexample}[preamble={\usepgflibrary{shadings}}]
\tikz \shade[shading=Mandelbrot set] (0,0) rectangle (2,2);
\end{codeexample}
%
\end{shading}
\makeatother
\begin{shading}{radial}
This always-defined shading fills the path with a gradual sweep from a
certain color in the middle to another color at the border. If the path is
a circle, the outer color will be reached exactly at the border. If the
path is not a circle, the outer color will continue a bit towards the
corners. The default inner color is gray, the default outer color is white.
%
\begin{key}{/tikz/inner color=\meta{color}}
This option sets the color used at the center of a |radial| shading.
When this option is used, the |shade| and |shading=radial| options are
set.
%
\begin{codeexample}[preamble={\usepgflibrary{shadings}}]
\tikz \draw[inner color=red] (0,0) rectangle (2,1);
\end{codeexample}
\end{key}
\begin{key}{/tikz/outer color=\meta{color}}
This option sets the color used at the border and outside of a |radial|
shading.
%
\begin{codeexample}[preamble={\usepgflibrary{shadings}}]
\tikz \draw[outer color=red,inner color=white]
(0,0) rectangle (2,1);
\end{codeexample}
\end{key}
\end{shading}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "pgfmanual-pdftex-version"
%%% End:
\XtoCBlock{ManualSwitch}
\label{block:ManualSwitch}
\begin{figure}[H]\includegraphics{ManualSwitch}\end{figure}
\begin{XtoCtabular}{Inports}
In1 & Input \#1\tabularnewline
\hline
In2 & Input \#2\tabularnewline
\hline
\end{XtoCtabular}
\begin{XtoCtabular}{Outports}
Out & \tabularnewline
\hline
\end{XtoCtabular}
\begin{XtoCtabular}{Mask Parameters}
Toggle & Toggle\tabularnewline
\hline
\end{XtoCtabular}
\subsubsection*{Description:}
Toggle between inputs by double-clicking on the block.
% include optional documentation file
\InputIfFileExists{\XcHomePath/Library/General/Doc/ManualSwitch_Info.tex}{\vspace{1ex}}{}
\subsubsection*{Implementations:}
\begin{tabular}{l l}
\textbf{FiP8} & 8 Bit Fixed Point Implementation\tabularnewline
\textbf{FiP16} & 16 Bit Fixed Point Implementation\tabularnewline
\textbf{FiP32} & 32 Bit Fixed Point Implementation\tabularnewline
\textbf{Float32} & 32 Bit Floating Point Implementation\tabularnewline
\textbf{Float64} & 64 Bit Floating Point Implementation\tabularnewline
\end{tabular}
\XtoCImplementation{FiP8}
\index{Block ID!144}
\nopagebreak[0]
% Implementation details
\begin{tabular}{l l}
\textbf{Name} & FiP8 \tabularnewline
\textbf{ID} & 144 \tabularnewline
\textbf{Revision} & 1 \tabularnewline
\textbf{C filename} & ManualSwitch\_FiP8.c \tabularnewline
\textbf{H filename} & ManualSwitch\_FiP8.h \tabularnewline
\end{tabular}
\vspace{1ex}
8 Bit Fixed Point Implementation
\begin{XtoCtabular}{Controller Parameters}
Toggle & Toggle info\tabularnewline
\hline
\end{XtoCtabular}
% Implementation data structure
\XtoCDataStruct{Data Structure:}
\begin{lstlisting}
typedef struct {
uint16 ID;
int8 *In1;
int8 *In2;
int8 Out;
int8 Toggle;
} MANUALSWITCH_FIP8;
\end{lstlisting}
\ifdefined \AddTestReports
\InputIfFileExists{\XcHomePath/Library/General/Doc/Test_ManualSwitch_FiP8.tex}{}{}
\fi
\XtoCImplementation{FiP16}
\index{Block ID!145}
\nopagebreak[0]
% Implementation details
\begin{tabular}{l l}
\textbf{Name} & FiP16 \tabularnewline
\textbf{ID} & 145 \tabularnewline
\textbf{Revision} & 1 \tabularnewline
\textbf{C filename} & ManualSwitch\_FiP16.c \tabularnewline
\textbf{H filename} & ManualSwitch\_FiP16.h \tabularnewline
\end{tabular}
\vspace{1ex}
16 Bit Fixed Point Implementation
\begin{XtoCtabular}{Controller Parameters}
Toggle & Toggle info\tabularnewline
\hline
\end{XtoCtabular}
% Implementation data structure
\XtoCDataStruct{Data Structure:}
\begin{lstlisting}
typedef struct {
uint16 ID;
int16 *In1;
int16 *In2;
int16 Out;
int8 Toggle;
} MANUALSWITCH_FIP16;
\end{lstlisting}
\ifdefined \AddTestReports
\InputIfFileExists{\XcHomePath/Library/General/Doc/Test_ManualSwitch_FiP16.tex}{}{}
\fi
\XtoCImplementation{FiP32}
\index{Block ID!146}
\nopagebreak[0]
% Implementation details
\begin{tabular}{l l}
\textbf{Name} & FiP32 \tabularnewline
\textbf{ID} & 146 \tabularnewline
\textbf{Revision} & 1 \tabularnewline
\textbf{C filename} & ManualSwitch\_FiP32.c \tabularnewline
\textbf{H filename} & ManualSwitch\_FiP32.h \tabularnewline
\end{tabular}
\vspace{1ex}
32 Bit Fixed Point Implementation
\begin{XtoCtabular}{Controller Parameters}
Toggle & Toggle info\tabularnewline
\hline
\end{XtoCtabular}
% Implementation data structure
\XtoCDataStruct{Data Structure:}
\begin{lstlisting}
typedef struct {
uint16 ID;
int32 *In1;
int32 *In2;
int32 Out;
int8 Toggle;
} MANUALSWITCH_FIP32;
\end{lstlisting}
\ifdefined \AddTestReports
\InputIfFileExists{\XcHomePath/Library/General/Doc/Test_ManualSwitch_FiP32.tex}{}{}
\fi
\XtoCImplementation{Float32}
\index{Block ID!147}
\nopagebreak[0]
% Implementation details
\begin{tabular}{l l}
\textbf{Name} & Float32 \tabularnewline
\textbf{ID} & 147 \tabularnewline
\textbf{Revision} & 0.1 \tabularnewline
\textbf{C filename} & ManualSwitch\_Float32.c \tabularnewline
\textbf{H filename} & ManualSwitch\_Float32.h \tabularnewline
\end{tabular}
\vspace{1ex}
32 Bit Floating Point Implementation
\begin{XtoCtabular}{Controller Parameters}
Toggle & Toggle info\tabularnewline
\hline
\end{XtoCtabular}
% Implementation data structure
\XtoCDataStruct{Data Structure:}
\begin{lstlisting}
typedef struct {
uint16 ID;
float32 *In1;
float32 *In2;
float32 Out;
int8 Toggle;
} MANUALSWITCH_FLOAT32;
\end{lstlisting}
\ifdefined \AddTestReports
\InputIfFileExists{\XcHomePath/Library/General/Doc/Test_ManualSwitch_Float32.tex}{}{}
\fi
\XtoCImplementation{Float64}
\index{Block ID!148}
\nopagebreak[0]
% Implementation details
\begin{tabular}{l l}
\textbf{Name} & Float64 \tabularnewline
\textbf{ID} & 148 \tabularnewline
\textbf{Revision} & 0.1 \tabularnewline
\textbf{C filename} & ManualSwitch\_Float64.c \tabularnewline
\textbf{H filename} & ManualSwitch\_Float64.h \tabularnewline
\end{tabular}
\vspace{1ex}
64 Bit Floating Point Implementation
\begin{XtoCtabular}{Controller Parameters}
Toggle & Toggle info\tabularnewline
\hline
\end{XtoCtabular}
% Implementation data structure
\XtoCDataStruct{Data Structure:}
\begin{lstlisting}
typedef struct {
uint16 ID;
float64 *In1;
float64 *In2;
float64 Out;
int8 Toggle;
} MANUALSWITCH_FLOAT64;
\end{lstlisting}
\ifdefined \AddTestReports
\InputIfFileExists{\XcHomePath/Library/General/Doc/Test_ManualSwitch_Float64.tex}{}{}
\fi
This thesis presents a collection of static and dynamic snapshots of diverse
microbial systems. Although our new methods are in no way specific to the human
gut, part of our focus has been to bridge the gap between computation and
biology, to get us closer towards a future where drug prescription, diet and
recreational activities can be further \textit{personalized} -- through a layer
invisible to the naked eye but perceivable in most respects of life.
It seems impossible to accurately predict the impact of scientific discoveries.
Historically, we have seen great evidence of seemingly anachronistic findings
(for example neural networks \cite{Tem10}) that later, usually when other
technologies catch up, become new fields of research or cornerstones in
consumer applications. Therefore, while acknowledging this is a tough problem,
I present my personal view on future directions that can build up from the
contributions in this thesis. I divide these into three sections: diagnostic
methods, treatment and analysis.
\section{Diagnostic Methods}
In Chapter~\ref{chapter_dogs} and Chapter~\ref{chapter_ibds}, we presented
examples of biomarkers for \gls{ibd} and \gls{cd} from two different analytical
perspectives. First, in a dog cohort, we showed that we can reformulate the
dysbiosis index we originally developed for humans \cite{RN154}, and make
it specific for a different host\hyp{}species. Both in humans and dogs, this
log-ratio of bacterial groups is associated with decreased phylogenetic
diversity, and in humans, it is also associated with increased inflammation.
Next, after noticing the increased volatility in the microbiome of subjects
with \gls{ibd}, we benchmarked microbial variability in fecal samples as an
effective classifier for disease. By collecting more samples per subject, we
can overcome the low classification accuracy of fecal samples.
Although both approaches are encouraging, translating research into
consumer\hyp{}level applications commonly presents formidable challenges that
might only be solvable by future generations. Take for example the \gls{ecg}:
it took almost 125 years between the first observation of
biopotentials\footnote{Credited to Luigi Galvani in 1787.} and the moment when
the first table \gls{ecg}\footnote{Willem Einthoven's electrocardiograph was
manufactured by the Cambridge Scientific Instrument Company of London in 1911.}
became commercially available \cite{ECGZywietz}. Even after the tremendous
progress, proper validation of computer-generated diagnostics only appeared
more than 200 years after the initial discovery \cite{njem_ecg}.
I use the example of the \gls{ecg}, because much like raw heart biopotentials,
(\textit{i}) microbiome data is plagued with noise, and (\textit{ii})
determining the appropriate filters and thresholds directly depends on the
use-case. However, unlike with the early developments of the \gls{ecg}, we live
in a digital and connected world. As such, a focus on the following areas will
shorten the time between future discoveries and innovation:
\begin{description}
\item[Mechanistic Studies]The novelty characteristic of microbiome studies
has produced a large number of descriptive studies. In contrast,
mechanistic experiments have lagged in validating several of these
findings (likely due to the more expensive requirements). If the goal
is to relate the presence or abundance of a microbiome feature to a
disease state or biochemical process, the underlying methods must be
informed by biological inferences in order for these biomarkers to gain
credibility.
\item[Open Data]Medical research is especially affected by human factors:
being able to consistently collect samples from a subject is not always
easy or deterministic (think of bowel problems and fecal samples). A
powerful practice used to counter underpowered studies is to improve on
the current sample size through the reuse of previously published data.
This approach is only possible through open resources, like
Qiita\footnote{\url{https://qiita.ucsd.edu/}}, that make data reuse
seamless. Importantly, the work in this thesis was only possible
through the reuse of existing datasets (see
Chapter~\ref{exploratory_chapter} through Chapter~\ref{chapter_fmts}).
Although making data openly available is an important first step,
proper standardization of processing protocols will also be key to
maximizing data re-usability. A remarkable example of this practice is
the \gls{emp} \cite{RN4267}.
\end{description}
\section{Treatment}
Microbiome\hyp{}based treatments are generally based on the transplantation of
microbial communities from a \textit{healthy donor} into an \textit{affected
recipient}. Three spearheading examples are: \glspl{fmt} to treat
\gls{cdi} \cite{RN4129}, skin microbiome transplants to treat atopic
dermatitis \cite{GalloSkin}, and (although still in early stages) a capsule
full of microbial spores to treat \gls{uc} by Seres
Therapeutics\footnote{\url{http://www.serestherapeutics.com}}.
Although most \glspl{fmt} succeed at treating \gls{cdi}, what makes a
successful transplant is not clear yet. The number of variables implicated in
answering this question is immense. From a computational complexity standpoint,
\textit{a priori} determination of whether or not a new community can colonize
an existing ecosystem is considered a hard computational problem, belonging to
the \textbf{\#-P} class \cite{RN4266}. As such, our focus should be on
strengthening our systematic understanding of the transplant, and on characterizing
not only what makes a successful transplant but also what makes a failed one.
For example, the impact of the work presented in Chapter~\ref{section_fmt}
could have been magnified if we had included subjects for which the \gls{fmt}
failed. With appropriate sample sizes, we could have applied a number of
techniques to single out (if any) the common features that lead to a failed
transplant. With this knowledge, we could pre-screen \textit{donors} and
\textit{recipients} to ensure a successful treatment and avoid unexpected
side-effects.
\section{Analysis}
Much as our microbial symbionts depend on the host environment (and vice
versa), the development and funding of analytical tools depend on the
ever-growing necessity to unravel complex patterns in a number of experimental
setups (cross-sectional, longitudinal, multi-'omic, etc). Thus, these tools
must be designed to be flexible, scalable, and, when possible, interactive.
Novel analytical methods and software infrastructure should be built making
scalability a priority. We have seen a steady increase in our capability to
generate data, and it is likely that this trend will continue. Emperor
(Chapter~\ref{section_emperor}) was partly a response to the limitations in
existing software, and more recently we had to re-architect the underlying
implementation to cope with modern and larger datasets.
Flexibility and compliance with community standards make software available
to a wider audience. Take the count table, often acting as the core
data-structure for metabolomic, transcriptomic, proteomic, (and other 'omics)
analyses. If the software producing this data complies with a standard, like
the \gls{biom}-format, the end user is now free to select from a variety of
methods as opposed to being limited to \textit{niche-software}. This idea is
being taken a step further with \gls{qiime}-2, where a semantic type system
defines the methods and visualizations that can be applied to any given dataset
(regardless of its biological origin).
Finally, the value of interactively exploring a dataset lies in our ability to
quickly test hypotheses and iteratively develop new ones. Future exploratory
analysis tools should be developed with interactivity and interoperability in
mind. For example, brushing to select a group of samples in a view of data
might act as a filter for a different representation. Modern web technologies,
and software development frameworks like \gls{qiime}-2, will likely be the
pioneers of these global overviews of microbial diversity.
\documentclass{article}
\input{include}
\usepackage{bytefield}
\usepackage{color}
\usepackage{fullpage}
% \usepackage[margin=12mm]{geometry}
\usepackage{hyperref}
\usepackage[underline=true]{pgf-umlsd}
\usetikzlibrary{calc}
\newcommand{\entid}{{\rm Ent}_{\rm ID}}
\begin{document}
\title{Entanglement Generation Protocol: Notes on Link+Physical Layer}
\author{Stephanie, Axel, Matthew, Erwin, Ronald}
\maketitle
The objective of this document is to define the link layer in quantum networks connecting quantum processing nodes, and to propose a concrete link layer protocol
based on an existing implementation of the physical layer with certain properties. In analogy to classical networks, the objective of the link layer will be to enable communication between two nodes $A$ and $B$ connected by a \emph{link} on the same network. Here, enabling communication corresponds to producing entanglement between $A$ and $B$, and we will hence refer to such protocols as Entanglement Generation Protocols (EGP).
We propose the desired service, interface to the higher layer, as well as a concrete EGP. We first discuss an EGP between two nodes $A$ and $B$, and discuss extensions
to a proposed architecture connecting many nodes at the end.
To fit the link layer EGP into the future envisioned network stack we briefly sketch the stack framework here, going from higher to lower layer:
\begin{description}
\item[QTP - Qubit Transport Protocol] (Transport Layer) Responsible for the end-to-end transmission of qubits.
\item[EMP - Entanglement Management Protocol] (Network Layer) Responsible for the generation of entanglement between two nodes that are not directly connected by a link, i.e. not on the same local network.
\item[EGP - Entanglement Generation Protocol] (Link Layer) Responsible for the generation of entanglement between two nodes connected by a direct link.
\end{description}
\section{Entanglement Generation Protocols}
Let us first describe the interface, service, as well as performance criteria of entanglement generation protocols.
\subsection{Higher layer to EGP}
An EGP supports a single command from the higher layer, namely a request to produce entanglement, which we call a CREATE command.
This command includes some desired properties of the entanglement, such as a minimum fidelity and a maximum waiting time.
In an actual physical implementation, there is a tradeoff between these parameters. More time, for example, may allow the underlying
implementation to use entanglement distillation to produce higher quality pairs.
\begin{description}
\item[CREATE] Produce entanglement with a node on the same network (i.e. connected by a link). Arguments supplied are:\\
\noindent
\begin{tabular}{ll}
Partner ID & ID of the node to generate entanglement with. \\
Number $k$ & Number of pairs we want to create.\\
$F_{\min}$ & Minimum acceptable fidelity (with high confidence). \\
$t_{\max}$ & Maximum acceptable waiting time before request is completed. \\
Purpose ID & Identifying the purpose or application at this node (optional, default 0). \\
Priority & Manual setting of a priority for entanglement production (optional).\\
create ID & Sequence number identifying this CREATE command.
\end{tabular}
\end{description}
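Purely as an illustration of this interface (the field and type names below are ours
and do not define a wire format), a CREATE request could be represented as follows:
\begin{verbatim}
from dataclasses import dataclass
from typing import Optional

@dataclass
class CreateRequest:
    # arguments passed from the higher layer to the EGP with a CREATE command
    partner_id: int                 # node to generate entanglement with
    num_pairs: int                  # number of pairs k to create
    min_fidelity: float             # minimum acceptable fidelity F_min
    max_time: float                 # maximum acceptable waiting time t_max
    purpose_id: int = 0             # purpose/application ID (optional)
    priority: Optional[int] = None  # manual priority (optional)
    create_id: int = 0              # sequence number of this CREATE command
\end{verbatim}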
\subsection{EGP to higher layer}
Following the reception of the CREATE command, several actions of the EGP are possible. Let us start with the positive outcome, and then consider possible
errors.
\begin{description}
\item[OK] Entangled pair has successfully been produced deterministically (heralded). One message per pair created, delivered immediately (best effort) following pair creation.
With high confidence, the minimum acceptable fidelity $F_{\min}$ has been met, and the entanglement has been generated
within the specified time frame $t_{\max}$. Information about the entanglement generation is provided, including an entanglement identifier. This identifier is required
to be globally unique, and agreed upon by $A$ and $B$. That is, $A$ and $B$ can locally use this entanglement identifier to determine which of their qubits is entangled with the remote
node, and also which qubit belongs to which entangled pair. Entanglement identifiers are meant to be shared in the network by higher layer protocols and carry meaning beyond the nodes
$A$ and $B$. An entanglement identifier ($\entid$) consists of:\\
\noindent
\begin{tabular}{ll}
(Node $A$ ID, Node $B$ ID) & IDs of the two nodes between which this entanglement is shared.\\
seqID & Sequence number. Unique (up to wrap around) between $A$ and $B$, and \\
& globally unique when combined with the node IDs.\\
Goodness & Heuristic estimate for the fidelity of the generated pair.\\
$t_{Goodness}$ & Time when this goodness was established (in EGP, usually \\
& the same as generation time).\\
$t_{Create}$ & Time the pair was produced.\\
\end{tabular}\\
\smallskip
\noindent
In addition the OK message also includes the following local information. We remark that Qubit IDs are exclusively local information (akin to the memory address
in a computer) and not in general shared between network nodes.\\
\noindent
\begin{tabular}{ll}
Qubit ID & Logical Qubit ID of the entangled pair can locally be found.
\end{tabular}
\end{description}
Entanglement generation may fail for a wide number of reasons, some of which result in an immediate error. It may also be that the entanglement later expires or is discarded,
of which the EGP will inform the higher layer. Let us start by listing the immediate failure modes, where in all such cases the create ID will be included, allowing the
higher layer to identify which request has failed.\\
\begin{description}
\item[ERR\_UNSUPP] Operation not supported. For example, creation of entanglement with the specified minimum fidelity is unattainable, or
unattainable within the given time frame, even if the node is not loaded.
\item[ERR\_NOTIME] Cannot meet the desired fidelity demand within the given time frame due to high load.
\item[ERR\_TIMEOUT] Failure to produce entanglement within the specified time frame.
\item[ERR\_OTHER] Failure for unspecified reasons, such as hardware failures.
\end{description}
In addition, the following failure mode can occur later when an entangled pair expires. The primary use case of this will be to deal with extremely improbable failures
in which recognition of the failure only becomes available after the higher layer has already received an OK message. This allows for
a tradeoff between speed and certainty in recognizing failure modes. Since entanglement is very short lived, increased certainty can, if desired, be sacrificed for speed.
\begin{description}
\item[EXPIRE] Expire Qubit ID. Any entanglement associated with Qubit ID has become unavailable.
\end{description}
\subsubsection{Questions}
\begin{itemize}
\item The term ``high confidence'' is not defined; we need to decide what we mean by it, and also whether this is a parameter and, if so, where and by whom it is determined.
\end{itemize}
\subsection{Performance metrics}
Apart from correctly fulfilling requests, a variety of performance metrics can be considered for EGPs. Not all of these can be simultaneously optimized; rather, they occasionally impose tradeoffs.
We hereby also draw a distinction between performance metrics of interest to a specific ``user'' requesting entanglement from the EGP, and the overall performance of the network.
Evidently, for all metrics below the average, variance, and worst-case behaviour are of interest. Once more data is available on how quantum networks are used in practice, one may also consider ``typical'' values for these metrics.
Let us first consider ``user''-centric metrics, measuring the experience of one individual user rather than the behaviour of the network as a whole. We remark that nevertheless these metrics
are a consequence of the total usage of the network.
\begin{description}
\item[Fidelity] Quality of the entanglement produced. By design the fidelity has to exceed the minimum requested fidelity $F_{\min}$.
\item[Latency] Time between submission of a CREATE request, and an OK response when successful. By design this time may not exceed $t_{\max}$.
\end{description}
In addition, we can consider measures defined by the behaviour of the network when dealing with a large number of requests.
\begin{description}
\item[Throughput] Number of pairs/s. Refined variants of throughput to be measured include: instantaneous throughput and sustained throughput.
\item[Fairness] Difference in performance metrics between requests originating at $A$ and $B$.
\item[Availability] Availability is a concern here if a network node requires resetting and two nodes require resynchronization at certain time intervals.
\end{description}
We remark that measured values like throughput evidently depend on the request behaviour, including what we will call the \emph{request ratio}, i.e. the number of pairs requested/number of requests total.
\section{Protocol classes}
Before proposing a specific EGP, let us first consider a general class of EGPs that are built
on top of a system supporting heralded entanglement generation at the physical layer.
More precisely, we will consider physical layer protocols that produce entanglement between two nodes $A$ and $B$, by means of a heralding station $M$ between them.
\smallskip
\begin{sequencediagram}
\newinst{a}{Node $A$}
\newinst[3]{mid}{Heralding Station $M$}
\newinst[3]{b}{Node $B$}
\end{sequencediagram}
\section{Sending classical messages}\label{sec:classicalMessages}
\subsection{Starting situation}
It will be assumed that there exists a means to transmit classical data between $A$, $B$ and $M$. How this is realized is not the objective of this document, and it could be achieved both by a dedicated fiber (possibly using two wavelength for bidirectional communication), or interspersed with quantum signals on the same fiber. Of interest are merely standard numbers:
\begin{itemize}
\item Classical channels are bidirectional, meaning data could in principle be sent in both direction at the same time (and, as a relevant consequence, messages can cross and are not ordered in both directions)
\item Likelihood of losses: $p_{\rm loss}$ probability of loss (e.g. following standard fiber loss plus electronics if applicable).
\item Likelihood of errors: $p_{\rm err}$ probability of error - where we remark that as in other classical communication burst errors are probably dominant.
\item Standard delays of interest: propagation delay (over the fiber), transmission delay (incl. delays of the electronics in putting the packets on the fiber), and processing delay, if known. We will assume that, given the highly sophisticated electronics and the fact that the rate of classical communication is low due to the relatively low repetition rate of entanglement generation attempts, the transmission and processing delays are essentially negligible.
\end{itemize}
\subsection{Enhanced situation}
Two standard methods exist to enhance this situation to the following, whose exact form and choice depends on the parameters above:
\begin{itemize}
\item Error detection: This can be achieved using standard methods, where probably a simple CRC depending on the length of the headers is most appropriate. This will add a number of bits to the message headers below if employed. For example, for a standard CRC-32 as used in Ethernet, the CRC is computed over the message and stored in a $32$ bit header field.
\item Message authentication: In this case, this refers to the fact that $A$ knows the messages originate with $B$ (and vice versa).
Similarly, $M$ can authenticate messages if desired. Such authentication can be realized using a message authentication code (MAC) (see e.g.~\cite{UMAC}). These can be realized with varying levels of security. If $A$ and $B$ share consumable key material (such as, for example, key generated by QKD), they can afford to use a one-time MAC which - similar to a one-time pad - offers information-theoretic security. Such MACs, for example based on two-universal hashing, can in principle be fast (see e.g.~\cite{UMAC}, needing however various amounts of key), although it is a question whether they are fast enough to be useful in this context.
\steph{Note that this does not automatically imply the entanglement is authenticated: this would only be the case if the midpoint is trusted which is evidently against all principles here. Given the local nodes take actions, like initiating entanglement generation for example, based on classical messages received - it strikes me as highly desirable to do this in order to ensure some form of robustness, given that this can even affect the quality of the qubits already stored etc}
\end{itemize}
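As a concrete illustration of the error detection option only (header framing and field layout are left unspecified here), a CRC-32 of the kind used in Ethernet could be appended to and checked against a header as follows:
\begin{verbatim}
import zlib

def add_crc(header: bytes) -> bytes:
    # append a 32 bit CRC (standard CRC-32, as used in Ethernet) to the header
    return header + zlib.crc32(header).to_bytes(4, "big")

def check_crc(message: bytes) -> bool:
    # verify that the trailing 4 bytes match the CRC of the remainder
    header, crc = message[:-4], message[-4:]
    return zlib.crc32(header).to_bytes(4, "big") == crc
\end{verbatim}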
\section{Entanglement Generation}\label{sec:entanglementGeneration}
\subsection{Starting situation}
In the following, I will highly abstract away from the present entanglement generation protocols to focus only on the relevant elements for the final protocol, namely the exchange
of classical messages, what information is available where and at what time and who can make decisions (such as choice of fidelity).
The following model also applies, with minor modifications, to the memory-assisted scheme based on entanglement distillation, but for simplicity I will assume that it is
single click or BK.
General assumptions are:
\begin{itemize}
\item An association between the classical control messages $m$ below, and the entanglement generation. For this reason, I will write classical message transmission as simply $m$, and $q$ for arbitrary quantum signal $q$. To make it clear, how I will use this abstract description later, I will always take $m = (req, pass)$ where $req$ is request information only for the mid point and later protocol specific, and $pass$ is something that will by default always be passed onto the other side (also protocol specific). Midpoint will provide
a response $resp$, for example, success or failure.
\item Variables $v_1,\ldots,v_k$, where $v_j$ specifies the number of pairs yet to be created of quality choice $j$ (for example, forming an aggregate of a particular choice of bright state population $\alpha$, or other parameters). It is assumed (see Section~\ref{sec:queue}) that $A$ and $B$ agree on the values of these variables.
For now simply take $k=1$.
\item Generation proceeds automatically in each time step, depending on the value of the variables above.
\end{itemize}
Very abstractly, a generation protocol thus takes the following form with respect to the classical states and message exchanges:
\smallskip
\begin{sequencediagram}
\newinst{mema}{:State $A$}
\newinst[1]{a}{Node $A$}
\newinst[3]{mid}{Heralding Station $M$}
\newinst[3]{b}{Node $B$}
\newinst[1]{memb}{:State $B$}
\begin{call}{a}{generate?}{mema}{\shortstack{$j$,yes/no}}
\end{call}
\prelevel
\prelevel
\begin{call}{b}{generate?}{memb}{\shortstack{$j$,yes/no}}
\end{call}
\mess[1]{a}{{$m_{AM} = (req_{AM}, pass_{AM})$, $q$}}{mid}
\prelevel
\prelevel
\mess[1]{b}{{$m_{BM} = (req_{BM}, pass_{BM})$, $q$}}{mid}
\mess[1]{mid}{{$resp_{MA}$, $pass_{BM}$}}{a}
\prelevel
\prelevel
\mess[1]{mid}{{$resp_{MB}$, $pass_{AM}$}}{b}
\end{sequencediagram}
As a simple example, consider the single click protocol used in continuous mode as in~\cite{peterPaper} for the generation of one pair ($v_1 = 1$) of an agreed upon quality: $m_{AM},m_{BM}$ are empty, and $resp_{MA}, resp_{MB} \in \{OK, FAIL\}$. The protocol proceeds until success is reached, in which case $v_1 = 0$ and generation stops. (Note that e.g. NV reset is not pictured here, as I'm only interested in the classical message exchange.)
\section{Enhanced situation}
Based on the general shape of such protocols above, one can now consider a slight ``enhancement'' of a protocol of this form - like single-click -
that makes explicit some (probably obvious) failure modes, and produces a total ordering of pairs that $A$ and $B$ agree upon, even if some messages may go missing.
\bigskip
\begin{bytefield}[bitwidth=1.1em]{32}
\bitheader{0-31} \\
\begin{rightwordgroup}{To be filled later}
\bitbox{32}{Rest of header}
\end{rightwordgroup} \\
\bitbox{32}{Error detection CRC}\\
\bitbox{32}{Message authentication (MAC)}
\end{bytefield}
\end{document}
\bibliographystyle{plain}
\chapter{The Global Positioning System in a Nutshell}
The Global Positioning System is actually a U.S. government satellite navigation system that provides a civilian signal. As of this writing, the signal is broadcast simultaneously by a constellation of 32 satellites each with a 12 hour orbit. From any given position on the Earth, 8 to 12 satellites are usually visible at a time.
\section{GPS in a Nutshell}
Each satellite broadcasts spread spectrum signals at 1575.42 and 1227.6 MHz, also known as L1 and L2, respectively. Currently the civil signal is broadcast only on L1. The signal contains two components: a time code and a navigation message. By differencing the received time code with an internal time code, the receiver can determine the distance, or range, that the signal has traveled. This range observation is offset by errors in the (imperfect) receiver clock; therefore it is called a pseudorange. The navigation message contains the satellite ephemeris, which is a numerical model of the satellite's orbit.
GPS receivers record, besides the pseudorange, a measurement called the carrier phase (or just phase); it is also a range observation like the pseudorange, except (1) it has an unknown constant added to it (the phase ambiguity) and (2) it is much smoother (about 100 times less measurement noise than the pseudorange!), which makes it useful for precise positioning. Because of the way it is measured, the phase is subject to random, sudden jumps; these discrete changes always come in multiples of the wavelength of the GPS signal, and are called cycle slips.
\subsection{The Position Solution}
The standard solution for the user location requires a pseudorange measurement and an ephemeris for each satellite in view. At least four measurements are required as there are four unknowns: 3 coordinates of position plus the receiver clock offset. The basic algorithm for the solution is described in the official GPS Interface Control Document, or ICD-GPS-200. The position solution is corrupted due to two sources of error: errors in the observations and errors in the ephemeris.
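As a rough sketch only (an illustration, not the exact procedure of the ICD-GPS-200), the position and clock offset can be estimated from four or more pseudoranges by iterated linearized least squares; satellite positions and pseudoranges are assumed to be given in meters in an Earth-centered frame:
\begin{verbatim}
import numpy as np

def solve_position(sat_pos, pseudoranges, iterations=10):
    # sat_pos: (N, 3) satellite positions, pseudoranges: (N,) measurements
    x = np.zeros(4)                     # unknowns: [X, Y, Z, c*dt]
    for _ in range(iterations):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)      # geometric ranges
        predicted = rho + x[3]                             # plus clock offset
        H = np.hstack([(x[:3] - sat_pos) / rho[:, None],   # line-of-sight part
                       np.ones((len(rho), 1))])            # clock bias column
        dx, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
        x = x + dx
    return x[:3], x[3]   # receiver position and clock offset (both in meters)
\end{verbatim}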
\subsubsection{Reducing Measurement Errors}
The GPS signal travels through every layer of the Earth's atmosphere. Each layer affects the signal differently. The ionosphere, which is the high-altitude, electrically charged part of the atmosphere, introduces a delay, and therefore a range error, into the signal. The ionosphere delay can be predicted using a model. However, the accuracy of ionosphere models is limited. A better alternative is to measure and remove the ionosphere delay. Measurement of the ionosphere delay is possible by taking advantage of the fact that the delay is frequency dependent. It can be directly computed if you have data on both the GPS frequencies. There is also a delay due to the troposphere, the lower part of the atmosphere. Like the ionosphere delay, the atmosphere delay can be either predicted or derived from measurements. There are many other errors associated with the GPS signal: multipath reflections and relativistic effects are two examples.
More precise applications reduce the effect of error sources by a technique referred to as differential GPS (DGPS). By differencing measurements simultaneously collected by the user and a nearby reference receiver, the errors that are common to both receivers (most of them) are removed. The result of DGPS positioning is a position relative to the reference receiver; adding the reference position to the DGPS solution results in the absolute user position.
The alternative to DGPS is to explicitly model and remove errors. Creating new and robust models of phenomena that affect the GPS signal is an area of active research at ARL:UT and other laboratories. The positioning algorithm can be used to explore such models. Essentially, the basic approach is to turn the positioning algorithm inside out to look at the corrections themselves. For example, observations from a network of receivers can create a global map or model of the ionosphere.
\subsubsection{Improved Ephemerides}
The GPS position solution can be directly improved by using an improved satellite ephemeris. The U.S. National Geospatial-Intelligence Agency (NGA) generates and makes publicly available a number of precise ephemerides, which are more accurate satellite orbits \cite{ion:gnss04}, \cite{nga:website}. Satellite orbits described by the broadcast navigation message have an error on the order of meters; the precise ephemeris has decimeter accuracy. The International GNSS Service (IGS) is a global, civil cooperative effort that also provides free precise ephemeris products \cite{igs:reference}. Global networks of tracking stations produce the observations that make generation of the precise ephemerides possible.
\section{GPS Data Sources}
GPS observation data from many tracking stations are freely available on the Internet. Many such stations contribute their data to the IGS. In addition, many networks of stations also post their data to the Internet; for example the Australian Regional GPS Network (ARGN) \cite{argn:website} and global cooperatives such as NASA's Crust Dynamics Data Information System (CDDIS) \cite{cddis:website}.
\subsection{GPS File Formats}
Typically GPS observations are recorded in a standardized format developed by and for researchers. Fundamental to this format is the idea that the data should be independent of the type of receiver that collected it. For this reason the format is called Receiver INdependent Exchange, or RINEX. Another format associated with GPS is SP-3, which records the precise ephemeris. The GPSTk supports both RINEX and SP-3 formats.
\subsection{Receiver Protocols}
GPS receivers have become less expensive and more capable over the years, in particular handheld and mobile GPS receivers. The receivers have many features in common. All of the receivers output a position solution every few seconds. All receivers store a list of positions, called waypoints. Many can display maps that can be uploaded. Many can communicate with a PC or handheld to store information or provide position estimates to plotting software.
Typically communication with a PC and other systems follows a standard provided by the National Marine Electronics Association called NMEA-0183. NMEA-0183 defines an ASCII based format for communication of position solutions, waypoints and a variety of receiver diagnostics. Here is an example of a line of NMEA data, or sentence:
\begin{verbatim}
$GPGLL,5133.81,N,00042.25,W*75
\end{verbatim}
The data here is a latitude, longitude fix at 51 deg 33.81 min North, 0 deg 42.25 min West; the last part is a checksum.
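The checksum is the bitwise exclusive-or of all characters between the `\$' and the `*', written as two hexadecimal digits; a minimal sketch in Python (the function name is ours):
\begin{verbatim}
def nmea_checksum(sentence: str) -> str:
    # XOR the characters between '$' and '*' and format as two hex digits
    data = sentence.split("$", 1)[1].split("*", 1)[0]
    crc = 0
    for ch in data:
        crc ^= ord(ch)
    return "%02X" % crc

# nmea_checksum("$GPGLL,5133.81,N,00042.25,W*75") returns "75"
\end{verbatim}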
As a public standard, the NMEA-0183 format has given the user of GPS freedom of choice. NMEA-0183 is the format most typically used by open source applications that utilize receiver-generated positions.
Closed standards are also common. SiRF is a proprietary protocol that is licensed to receiver manufacturers. Many receiver manufacturers implement their own binary protocols. While some of these protocols have been opened to the public, some have been reverse engineered.
%\bibliography{gpstk}
\putbib[gpstk]
%\end{document}
\begin{frame}
\frametitle{Random variables}
A variable quantity whose possible values depend, in a random manner, on a set of random events\footnote{\href{https://en.wikipedia.org/wiki/Random_variable}{Wikipedia}}\vspace{1em}
Every random variable is defined over a \emph{probability space}: $(\Omega, \mathcal{F}, \mathcal{P})$ \footnote{Additional information can be found, among others, in \href{http://vfu.bg/en/e-Learning/Math--Bertsekas_Tsitsiklis_Introduction_to_probability.pdf}{D.P. Bertsekas and J.N. Tsitsiklis. Introduction to Probability}}\vspace{1em}
Consider the toss of a coin
\begin{itemize}
\item $\Omega=\{H, T\}$ is the set of possible outcomes. In this case, head or tail
\item $\mathcal{F}=\{\{\}, \{H\}, \{T\}, \{H,T\}\}$ is the set of events we consider
	\item $\mathcal{P}$ is the probability function. It associates elements of $\mathcal{F}$ with a probability value. For example $$\mathcal{P}(\{\})=0,\quad\mathcal{P}(\{H\})=0.5, \quad\mathcal{P}(\{T\})=0.5, \quad \mathcal{P}(\{H, T\})=1$$
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Moments}
$X$ is a random variable, taking values in $\mathbb{R}$, and having probability density function $f(x)$
\begin{itemize}
\item Mean: $\mu=E[X]=\int_\mathbb{R} sf(s)ds$
\item $n^{th}$ moment: $\int_\mathbb{R} s^nf(s)ds$
\item $n^{th}$ central moment $E[(X-E[X])^n]=\int_\mathbb{R} (s-\mu)^nf(s)ds$
\begin{itemize}
\item Variance: second central moment $E[(X-E[X])^2]=\int_\mathbb{R} (s-\mu)^2f(s)ds$
\end{itemize}
\end{itemize}
\vspace{1em}
Some random variables have moments of infinite value, for example \emph{heavy-tailed distributions} \footnote{\href{https://en.wikipedia.org/wiki/Heavy-tailed_distribution}{\small{ Wikipedia. Heavy-tailed distribution}}}
$$f(x)=\left\{\begin{aligned}
\frac{1}{x^2} && x \geq 1\\
0&&\textrm{otherwise}
\end{aligned}
\right.$$
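For this density, already the mean diverges:
$$E[X]=\int_1^\infty s\,\frac{1}{s^2}\,ds=\int_1^\infty \frac{1}{s}\,ds=\infty$$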
\end{frame}
\begin{frame}
\frametitle{Mean and mode (most likely outcome)}
\emph{Mean} and \emph{mode} are different concepts\vspace{0.5em}
\begin{itemize}
\item Mean: weighted sum of all of the possible outcomes
\begin{itemize}
\item Mean value could lie outside the set of possible outcomes
\end{itemize}
\item Mode: An outcome with the highest probability value
\end{itemize}
\vspace{0.5em}
\begin{columns}
\column{0.5\textwidth}
$X$ is a random variable taking values
\begin{itemize}
\item 0 with probability 0.2
\item 1 with probability 0.8
\end{itemize}
\vspace*{0.5em}
Mean value: $\mu=0.2\cdot 0 + 0.8 \cdot 1=0.8$\\\vspace{0.5em}
Mode: $\argmax_{x\in\{0,1\}} p(x) \;=1$
\column{0.4\textwidth}
\begin{block}{Probability of $X$}
\centering
\begin{tikzpicture}
%\draw[help lines, color=gray!30, dashed] (-4.9,-4.9) grid (4.9,4.9);
\draw[->, thick] (-0.5,0)--(1.5,0) node[right]{$x$};
\draw[->, thick] (-0.5,0)--(-0.5,1) node[above]{$p(x)$};
\draw (0, 0) -- (0, 0.2) node [above] {$0.2$};
\draw (1, 0) -- (1, 0.8) node [above] {$0.8$};
\node [below] at (0,0) {0};
\node [below] at (1,0) {1};
\filldraw[fill=black!40, draw=black](0,0.2) circle (0.05cm);
\filldraw[fill=black!40, draw=black](1,0.8) circle (0.05cm);
\end{tikzpicture}
\end{block}
\end{columns}
%Under certain assumptions (e.g., Ergodicity) the average of
\takeaway{\bf{The mean value is not even an element of the possible outcomes}}
\end{frame}
\begin{frame}
\frametitle{Sum of independent random variable}
\onslide<1->The distribution of the sum of two random variables must always be carefully computed\vspace{0.5em}
\begin{columns}\onslide<1->
\column{0.5\textwidth}
\begin{itemize}
\item <1->$X$ uniformly distributed between 0 and 1%, $X\sim\mathcal{U}(0, 1)$
\item <1->$Y$ uniformly distributed between 0 and 1
\item <1->$Z=X+Y$ \emph{is not} uniformly distributed between 0 and 1
\item <2-> The distribution of $Z$ depends on the joint distribution of $X$ and $Y$
\item <3-> If $X$ and $Y$ are independent variables, then $Z$ has a triangular distribution\\
\begin{itemize}
\item $X$ and $Y$ are independent if
for all $x$ and $y$ $$P(X\leq x,Y\leq y)=P(X\leq x)\cdot P(Y \leq y)$$
\end{itemize}
\end{itemize}
\column{0.46\textwidth}
\begin{block}{Probability density functions}
\onslide<1->{
\begin{tikzpicture}
\draw[->] (-0.5,0)--(1.5,0) node[right]{$x$};
\draw[->] (-0.5,0)--(-0.5,1.5) node[above]{$p(x)$};
\draw[very thin, dashed] (-0.5, 1)node[left] {1} -- (0, 1);
\draw[thin](0, 0) node[below] {0}-- (0, 1);
\draw[thick] (0, 1) -- (1, 1);
\draw[thin] (1, 0)node[below] {1} -- (1, 1);
\end{tikzpicture}}
\onslide<3->{
\begin{tikzpicture}
\draw[->] (-0.5,0)--(2.5,0) node[right]{$z$};
\draw[->] (-0.5,0)--(-0.5,1.5) node[above]{$p(z)$};
\draw[very thin, dashed] (-0.5, 1)node[left] {1} -- (1, 1);
\draw[thick](0, 0) node[below] {0}-- (1, 1);
\draw[very thin, dashed] (1, 0)node[below]{1} -- (1, 1);
\draw[thick] (2, 0)node[below] {2} -- (1, 1);
\end{tikzpicture}}
\end{block}
\end{columns}
\end{frame}
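\begin{frame}
	\frametitle{Worked example: the triangular distribution}
	For the independent case on the previous slide, the density of $Z=X+Y$ follows from the convolution formula
	$$f_Z(z)=\int_\mathbb{R} f_X(s)f_Y(z-s)\,ds=\int_0^1 f_Y(z-s)\,ds$$
	\begin{itemize}
		\item For $0\leq z\leq 1$ the integrand equals 1 only for $0\leq s\leq z$, so $f_Z(z)=z$
		\item For $1\leq z\leq 2$ the integrand equals 1 only for $z-1\leq s\leq 1$, so $f_Z(z)=2-z$
		\item Elsewhere $f_Z(z)=0$: exactly the triangular density shown on the previous slide
	\end{itemize}
\end{frame}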
\begin{frame}
	\frametitle{Sum of normally distributed random variables}
	The Gaussian distribution is also called the \emph{Normal} distribution and is denoted with the symbol $\mathcal{N}$\vspace{0.5em}
\begin{itemize}
\item The probability density function of a normally distributed random variable $X$ is
	$$f_X(x;\mu,\sigma)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
	\item <2-> If $X\sim\mathcal{N}(\mu_x, \sigma_x^2)$ and $Y\sim\mathcal{N}(\mu_y, \sigma_y^2)$ are independent, then
\item <2-> $Z=X+Y$ is also normally distributed;\vspace{1em} $Z\sim\mathcal{N}(\mu_x+\mu_y, \sigma_x^2+\sigma_y^2)$
%\item <2-> If $\alpha$ is a non-zero constant, then $\alpha Z$ is normally distributed
%\begin{itemize}
% \item mean: $\alpha \mu_z$, variance: $\alpha^2\sigma_z^2$
%\end{itemize}
\item <3-> If $\bm{Z}$ has a multivariate normal distribution with mean $\bm{\mu}_z$ and covariance $L=E[(Z-\bm{\mu}_z)(Z-\bm{\mu}_z)^T]$
and $A$ is a matrix, then
	\item <3-> The variable $\bm{S}=A\bm{Z}$ is normally distributed with mean $A \bm{\mu}_z$ and covariance matrix $ALA^T$
\end{itemize}
\onslide<2>{
\takeaway{\large Note however that $X\cdot Y$ is \emph{not} normally distributed}}
\end{frame}
\section{Random variables and dynamic systems}
\separatorslide
\begin{frame}
\frametitle{Gaussian distributions and linear systems}
Assume
\begin{columns}
\column{0.6\textwidth}
\begin{itemize}
\item $X(0)$ is normally distributed, $X(0)\sim\mathcal{N}(\mu_0, \sigma_0^2)$
\item $W(k)$ is normally distributed, $W(k)\sim\mathcal{N}(\mu_{w,k}, \sigma_{w,k}^2)$
\item $X(0)$ and $W(k)$ are independent for all $k$
\item for $k\geq0$ the following recursive equation holds: $X(k+1) =X(k) + W(k)$
\end{itemize}
\column{0.4\textwidth}
\begin{block}{Graphical representation}
\begin{tikzpicture}
\node(X0){$X(0)$};
\node[above=1em of X0](W0){$W(0)$};
\node[right=1em of X0](sum1){$+$};
\node[right=1em of sum1](X1){$X(1)$};
\node[above=1.2em of X1](W1){$W(1)$};
\node[right=1em of X1](sum2){$+$};
\node[right=1em of sum2](X3){$\cdots$};
\draw[->] (W0) to (sum1);
\draw[->] (X0) to (sum1);
\draw[->] (sum1) to (X1);
\draw[->] (W1) to (sum2);
\draw[->] (X1) to (sum2);
\draw[->] (sum2) to (X3);
\end{tikzpicture}
\end{block}
\end{columns}
\vspace*{0.5em}
\onslide<1-> What is the distribution of $X(1)$ ?
\begin{itemize}\onslide<2->
\item $X(1)$ is normally distributed (sum of two independent Gaussian variables)
\item Mean $\mu_1=\mu_0+\mu_{w,0}$, variance $\sigma_1^2 = \sigma_0^2 + \sigma_{w,0}^2$
\end{itemize}
\vspace*{0.5em}
\onslide<2-> What is the distribution of $X(k)$ ?
\begin{itemize}\onslide<3->
\item $X(k)$ is Gaussian distributed% (sum of independent Gaussian variables)
\item Mean $\mu_k=\mu_0+\sum_{j=0}^{k-1}\mu_{w,j}$, variance $\sigma_k^2=\sigma_0^2 + \sum_{j=0}^{k-1}\sigma_{w,j}^2$
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Non Gaussian distributions and linear systems}
Assume
\begin{columns}
\column{0.6\textwidth}
\begin{itemize}
\item $X(0)$ is uniformly distributed, $X(0)\sim\mathcal{U}(0, 1)$
\item $W(k)$ is uniformly distributed, $W(k)\sim\mathcal{U}(0, 1)$
\item $X(0)$ and $W(k)$ are independent for all $k$
\item for $k\geq0$ the following recursive equation holds: $X(k+1) =X(k) + W(k)$
\end{itemize}
\column{0.4\textwidth}
\begin{block}{Graphical representation}
\begin{tikzpicture}
\node(X0){$X(0)$};
\node[above=1em of X0](W0){$W(0)$};
\node[right=1em of X0](sum1){$+$};
\node[right=1em of sum1](X1){$X(1)$};
\node[above=1.2em of X1](W1){$W(1)$};
\node[right=1em of X1](sum2){$+$};
\node[right=1em of sum2](X3){$\cdots$};
\draw[->] (W0) to (sum1);
\draw[->] (X0) to (sum1);
\draw[->] (sum1) to (X1);
\draw[->] (W1) to (sum2);
\draw[->] (X1) to (sum2);
\draw[->] (sum2) to (X3);
\end{tikzpicture}
\end{block}
\end{columns}
\vspace*{0.5em}
\onslide<2-> What is the distribution of $X(1)$ ?
\begin{itemize}\onslide<3->
\item $X(1)$ has a triangular distribution
\item Mean $\mu_1=\mu_0+\mu_{w,0}$, variance $\sigma_1^2 = \sigma_0^2 + \sigma_{w,0}^2\;$\footnote{\href{http://eli.thegreenplace.net/2009/01/07/variance-of-the-sum-of-independent-variables}{See also here}}
\end{itemize}
\vspace*{0.5em}
\onslide<4-> What is the distribution of $X(k)$ ?
\begin{itemize}\onslide<4->
\item The distribution of $X(k)$ depends on the distribution of $X(k-1)$ and of $W(k-1)$
\item Mean $\mu_k=\mu_0+\sum_{j=0}^{k-1}\mu_{w,j}$, variance $\sigma_k^2=\sigma_0^2 + \sum_{j=0}^{k-1}\sigma_{w,j}^2$
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Gaussian distributions and non linear systems}
Assume
\begin{columns}
\column{0.6\textwidth}
\begin{itemize}
\item $X(0)$ is normally distributed $X(0)\sim\mathcal{N}(0, 1)$
\item $W(k)$ is normally distributed $W(k)\sim\mathcal{N}(0, 1)$
\item $X(0)$ and $W(k)$ are independent for all $k$
\item for $k\geq0$ we have $X(k+1) =X^2(k) + W(k)$
\end{itemize}
\column{0.4\textwidth}
\begin{block}{Graphical representation}
\begin{tikzpicture}
\node(X0){$X(0)$};
\node[above=1em of X0](W0){$W(0)$};
\node[right=1em of X0](sum1){$+$};
\node[right=1em of sum1](X1){$X(1)$};
\node[above=1.2em of X1](W1){$W(1)$};
\node[right=1em of X1](sum2){$+$};
\node[right=1em of sum2](X3){$\cdots$};
\draw[->] (W0) to (sum1);
\draw[->] (X0) to (sum1);
\draw[->] (sum1) to (X1);
\draw[->] (W1) to (sum2);
\draw[->] (X1) to (sum2);
\draw[->] (sum2) to (X3);
\end{tikzpicture}
\end{block}
\end{columns}
\vspace*{0.5em}
\onslide<2-> What is the distribution of $X(1)$ ?
\begin{itemize}\onslide<3->
\item $X(1)$ is not normally distributed.
\item Mean $\mu_1=E[X(0)^2]+\mu_{w,0}$
		\item Variance $\sigma_1^2=var(X(0)^2)+\sigma_{w,0}^2$
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Few remarks}
\begin{itemize}
\setlength\itemsep{2em}
\item Mean of the sum is always equal to the sum of the means (linear operator)
\item If two random variables are independent, then the variance of the sum is equal to the sum of the variances
%\item Variables $X(k)$ are normally distributed for all $k\geq0$ \emph{only} in the case of linear systems and if the variables $w(k)$ are normally distributed
		\item The knowledge of mean and variance rarely completely characterizes the distribution of a random variable
\end{itemize}
\end{frame}
\section{Marginal and conditional density functions}
\separatorslide
\begin{frame}
\frametitle{Marginal and conditional density functions}
Assume $X$ and $Y$ are two random variables taking values in the interval $[0, 1]$\vspace{0.5em}\\
and $f_{X,Y}(x,y)$ is the joint probability density function of $X$ and $Y$\vspace{0.5em}
%The following probabilities can be computed from the joint distribution of $X$ and $Y$
\begin{itemize}%\setlength\itemsep{0.5em}
\item <1-> Marginal probability density functions are given by
$$f_{X}(x)=\int_0^1f_{X,Y}(x,z) dz\;, \qquad f_{Y}(y)=\int_0^1f_{X,Y}(z,y) dz$$
%Probability of $X$ taking value $x$ independently of the value taken by $Y$
%\item Marginal probability density function
%Probability of $X$ taking value $x$ independently of the value taken by $Y$
%\item <1-> %Probability of $Y$ taking value $y$ independently of the value taken by $X$
%$P(Y\leq y)=\int_0^y\int_0^1 P(X=z, Y=y) dz$ (Marginal distribution)
%\item <2-> %Probability of $Y$ taking value $y$ once we know that $X$ has value $x$
% $P(Y\leq y|X=x)=\int_0^y \frac{P(X=x, Y=y)}{P(X=x)}$ (Conditional distribution)
%\item <2->Probability of $X$ taking value $x$ once we know that $Y$ has value $y$ %$P(X=x|Y=y)=\frac{P(X=x, Y=y)}{P(Y=y)}$ (Conditional distribution)
		\item <2-> Conditional probability density function: $f_{X|Y}(x;y)=\frac{f_{X,Y}(x,y)}{f_{Y}(y)},$ if $f_Y(y)>0$
\item <3-> Conditional mean and variance of $X$ given $Y$
$$E[X|Y=y]\!=\int_0^1\!\! zf_{X|Y}(z;y)dz,\quad var(X|Y=y)\!=\int_0^1 \!\!(z-E[X|Y=y])^2f_{X|Y}(z;y)dz$$
\end{itemize}
\end{frame}
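\begin{frame}
	\frametitle{Worked example: marginal and conditional densities}
	As an illustrative choice, take the joint density $f_{X,Y}(x,y)=x+y$ on $[0,1]^2$ (it integrates to 1)
	\begin{itemize}
		\item Marginal density: $f_{Y}(y)=\int_0^1 (z+y)\,dz=y+\frac{1}{2}$
		\item Conditional density: $f_{X|Y}(x;y)=\dfrac{x+y}{y+\frac{1}{2}}$ for $x\in[0,1]$
		\item Conditional mean: $E[X|Y=y]=\int_0^1 z\,\dfrac{z+y}{y+\frac{1}{2}}\,dz=\dfrac{\frac{1}{3}+\frac{y}{2}}{y+\frac{1}{2}}$
	\end{itemize}
	\vspace{0.5em}
	The conditional mean changes with the observed value $y$, as expected
\end{frame}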
\begin{frame}
\frametitle{Conditional distribution and expected conditional risk}
Consider the case of estimating the value of a variable $X$ from its measurement $Y$
\begin{itemize}
\item For example $Y=X+W$, where $W$ is random noise independent of $X$
\item The conditional distribution of $X|Y$ provides information about the probability of different values of $X$ given that $Y=y$
\item <3-> For selecting a best value, we need a cost function, e.g., the \emph{expected conditional risk}
$$R(t,y)=E[c(t-X)|Y=y]=\int_{\mathbb{R}}c(t-z)f_{X|Y}(z;y)dz$$
\end{itemize}
\onslide <3->If
\begin{itemize}
		\item $c(t-x)$ is symmetric, e.g., $c(t-x)=(t-x)^2$, and
\item the conditional density function $f_{X|Y}(z;y)$ is symmetric around $E(X|Y=y)$,
\end{itemize}
then
\begin{itemize}
		\item the minimum of $R(t,y)$ does not depend on the specific cost function $c(t-X)$
\item \emph{the minimum of $R(t,y)$ is given by $E[X|Y=y]$}
\end{itemize}
%\onslide<2->
%\takeaway{Under these assumption, the best estimate of $X$ given $Y=y$ is $E[X|Y=y]$}
\onslide<2>\takeaway{\bf{How could we select a best \emph{value} for $X$?}}
\end{frame}
\begin{frame}
\frametitle{The Maximum a posteriori estimator}
	Another approach for selecting a best value for the estimation of $X$ given $Y=y$ could be to take the value of $z$ that maximises $f_{X|Y}(z;y)$ \vspace{2em}
\begin{itemize}
\item We can define $$\hat{x}(y)=\argmax_{z\in\mathbb{R}}f_{X|Y}(z;y)$$
\item This estimator is called \emph{Maximum a posteriori} (MAP)
\end{itemize}
\begin{itemize}
		\item For unimodal and symmetric distributions, e.g., the Gaussian distribution, the MAP estimator and the expected conditional value estimator coincide
\end{itemize}
\takeaway{In general, MAP and the expected conditional value are different}
\end{frame}
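\begin{frame}
	\frametitle{Example: MAP and conditional mean can differ}
	Suppose, for illustration, that the conditional density happens to be exponential
	$$f_{X|Y}(z;y)=\lambda e^{-\lambda z}, \qquad z\geq 0$$
	\begin{itemize}
		\item MAP estimate: the density is maximal at $z=0$, so $\hat{x}(y)=0$
		\item Conditional mean: $E[X|Y=y]=\int_0^{+\infty} z\,\lambda e^{-\lambda z}\,dz=\frac{1}{\lambda}$
	\end{itemize}
	\vspace{0.5em}
	The two estimators disagree because the density is not symmetric around its mode
\end{frame}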
%\begin{frame}
% \frametitle{Some properties of the conditional expectation}
% Assumptions:
% \begin{itemize}
% \item $c(z-x)$ is symmetric, e.g., $c(z-x)=(z-x)^2$
% \item the conditional distribution of $(X|Y=y)$ is symmetric around $E(X|Y=y)$
% \end{itemize}\vspace{1em}
% then
% \begin{itemize}
% \item the minimum of $R(z,y)$ does not depends on the specific cost function $c(z-X)$
% \item the minimum of $R(z,y)$ is given by $E[X|Y=y]$
% \end{itemize}\vspace{1.5 em}
%
% \takeaway{Under these assumptions mean and mode of the conditional distribution coincide}
%\end{frame} | {
"alphanum_fraction": 0.6463344605,
"avg_line_length": 42.018766756,
"ext": "tex",
"hexsha": "7ec56f2a43921d3cf02f50296f06beb631f8f54f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "68ac7ad2c4dabc568e47cb07102703527aee3386",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "lparolin/state_estimation",
"max_forks_repo_path": "slides/random_variables.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "68ac7ad2c4dabc568e47cb07102703527aee3386",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "lparolin/state_estimation",
"max_issues_repo_path": "slides/random_variables.tex",
"max_line_length": 338,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "68ac7ad2c4dabc568e47cb07102703527aee3386",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "lparolin/state_estimation",
"max_stars_repo_path": "slides/random_variables.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6121,
"size": 15673
} |
\chapter{Work and Energy}
In this chapter, we are going to talk about how engineers define work
and energy. We have already talked about force. Force is measured in
newtons, and one newton is equal to the force necessary to accelerate one
kilogram at a rate of $1 m/s^2$.
When you lean on a wall, you are exerting a force on the wall, but you
aren't doing any work. On the other hand, if you push a car for a mile,
you are clearly doing work. Work, to an engineer, is the force you
apply to something multiplied by the distance that it moves in the direction
of the applied force. We measure work in \textit{joules}. A joule is one
newton of force over one meter.
\includegraphics[width=0.8\textwidth]{Work_vs.png}
For example, if you push a car uphill with a force of 10 newtons for 12
meters, you have done 120 joules of work.\index{work}
% ADD: We can represent this with the equations, Work Energy Therom
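We can write this definition as a formula:

\begin{mdframed}[style=important, frametitle={Formula for Work}]
$$W = F d$$
where $W$ is the work in joules, $F$ is the force in newtons, and $d$ is
the distance moved in the direction of the force, in meters.
\end{mdframed}

For the car above, $W = (10)(12) = 120$ joules, just as we said.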
Work is how energy is transferred from one thing to another. When you
push the car, you also burn sugars (energy of the body) in your blood. That energy is then
transferred to the car as it is pushed uphill.
Thus, we measure the energy something consumes or generates in
units of work: joules, kilowatt-hours, horsepower-hours, foot-pounds,
BTUs (British Thermal Units), and calories.
Let's go over a few different forms that energy can take.
% KA: https://www.khanacademy.org/science/ms-physics/x1baed5db7c1bb50b:energy/x1baed5db7c1bb50b:changes-in-energy/a/changes-in-energy
\section{Heat}\index{heat}
When you heat something, you are transferring energy to it. The BTU
is a common unit for heat: One BTU is the
amount of heat required to raise the temperature of one pound of water
by one degree Fahrenheit. One BTU is about 1,055 joules. In fact, when you buy and sell
natural gas as fuel, it is priced by the BTU.\index{heat} \index{BTU}
\section{Electricity}\index{electricity}
Electricity is the movement of electrons. When you push electrons
through a space that resists their passage (like a light bulb),
energy is transferred from the power source (a battery)
into the source of the resistance.
Let's say your lightbulb consumes 60 watts of electricity, and you leave it on for 24 hours.
We would say that you have consumed 1.44 kilowatt-hours, which is 5,184,000 joules.
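To check that conversion: a watt is one joule per second, and 24 hours is
$24 \times 3600 = 86{,}400$ seconds, so the bulb uses
$60 \times 86{,}400 = 5{,}184{,}000$ joules. That is the same amount of energy
written as 1.44 kilowatt-hours.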
% KA: https://www.khanacademy.org/science/in-in-class10th-physics/in-in-electricity/in-in-electric-current-circuit/v/intro-to-charge
\section{Chemical Energy}\index{chemical energy}
As mentioned earlier, some chemical reactions consume energy and some
produce energy. Thus, energy can be stored in the structure of a
molecule. When a plant uses photosynthesis to rearrange water and
carbon dioxide into a sugar molecule, it converts the energy in
the sunlight (solar energy) into chemical energy. Remember, photosynthesis is a process that stores energy.
Therefore, the sugar molecule has more chemical energy than the carbon dioxide and water molecules that were
used in its creation.
% ADD: photosythesis equation
% KA: https://www.khanacademy.org/science/ap-biology/cellular-energetics/photosynthesis/a/intro-to-photosynthesis
In our diet, we measure this energy in \textit{kilocalories}. A
calorie is the energy necessary to raise one gram of water one degree
Celsius: it is about 4.19 joules. This is a very small unit: an apple
has about 100,000 calories (100 kilocalories), so people working with food started
measuring everything in kilocalories.\index{calories}
% ADD: Conversion chapter should come before this chapter
Here is where things get confusing: People who work with food got tired of
saying ``kilocalories'', so they just started using ``Calorie'' to
mean 1,000 calories. This has created terrible confusion over the
years. So if the C is capitalized, ``Calorie'' probably means kilocalorie.
\section{Kinetic Energy}\index{kinetic energy}
A mass in motion has energy. For example, if you are in a moving car
and you slam on the brakes, the energy from the motion of the
car will be converted into heat in the brakes and under the tires.
How much energy does the car have?
% ADD: section specifically about KE AND U, use roller coaster diagram
\begin{mdframed}[style=important, frametitle={Formula for Kinetic Energy}]
$$E = \frac{1}{2} m v^2$$
where $E$ is the energy in joules, $m$ is the mass in kilograms, and
$v$ is the speed in meters per second.
\end{mdframed}
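To see the formula in action: a 1,000 kg car moving at 20 meters per second
(roughly 45 miles per hour) has
$$E = \frac{1}{2} (1000) (20)^2 = 200{,}000 \textrm{ joules}$$
of kinetic energy. Because the speed is squared, doubling the speed quadruples
the energy.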
\section{Gravitational Potential Energy}\index{potential energy!gravitational}
% KA: https://youtu.be/oGzwVYPxKjg
When you lift something heavy onto a shelf, you are giving it
\textit{potential energy}. The amount of energy that you transferred
to it is proportional to its weight and the height that you lifted it.
On the surface of the earth, gravity will accelerate a heavy object downward at
a rate of $9.8 m/s^2$.
\begin{mdframed}[style=important, frametitle={Formula for Gravitational Potential Energy}]
On earth, then, gravitational potential energy is given by
$$E = (9.8)mh$$
where $E$ is the energy in joules, $m$ is the mass of the object you
lifted, and $h$ is the height that you lifted it.
\end{mdframed}
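For example, lifting a 10 kg box onto a shelf 2 meters up gives it
$$E = (9.8)(10)(2) = 196 \textrm{ joules}$$
of gravitational potential energy. If the box later falls, that potential
energy is converted back into kinetic energy on the way down.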
There are other kinds of potential energy. For example, when you draw
a bow, you have given that bow potential energy. When you release it,
the potential energy is transferred to the arrow, which expresses it
as kinetic energy.
% ADD: section about KE and U
\section{Conservation of Energy}
The first law of thermodynamics says ``Energy is neither created nor
destroyed.''\index{energy!conservation of}
Energy can change forms: Your cells consume chemical energy to give
gravitational potential energy to a car you push up a hill. However, the total amount of
energy in a closed system stays constant.
% ADD: Create Systems chapter before introducing concept here
\begin{Exercise}[title={The Energy of Falling}, label=energy_falling]
A 5 kg cannonball falls off the top of a 3 meter ladder. Just before
it hits the floor, all of its gravitational potential energy has been
converted into kinetic energy. How fast is the cannonball going when
it hits the floor?
\end{Exercise}
\begin{Answer}[ref=energy_falling]
At the top of the ladder, the cannonball has $(9.8)(5)(3) = 147$ joules of potential energy.
At the bottom, the kinetic energy $\frac{1}{2}(5)v^2$ must be equal
to 147 joules. So $v^2 = \frac{294}{5}$. Thus it is going about
$7.7$ meters per second.
(Yes, a tiny amount of energy is lost to air resistance. For a dense
object moving at these relatively slow speeds, this energy is
negligible.)
\end{Answer}
\section{Efficiency}
% KA: https://www.khanacademy.org/science/ap-biology/cellular-energetics/cellular-energy/a/the-laws-of-thermodynamics
Although energy is always conserved as it moves through different
forms, scientists aren't always that good at controlling it.\index{efficiency}
For example, a car engine consumes the chemical energy in gasoline. Only
about 20\% of the energy consumed is used to turn the wheels. Most of
the energy is actually lost as heat. If you run a car for a while, the engine
gets very hot, and the exhaust coming out of the tailpipe is hot.
A human is about 25\% efficient. Most of the loss is in the heat produced
during the chemical reactions that turn food into motion.
% ADD: Cellular Respiration
In general, if you are trying to increase efficiency in any system,
the losses are usually easy to identify because that is where heat is produced. Reduce the wasted heat, and you increase efficiency.
Light bulbs are an interesting case. To get the light of a 60 watt
incandescent bulb, you can use an 8 watt LED or a 16 watt fluorescent
light. Thus, we say that the LED light is much more efficient: If you
run both for 24 hours, the incandescent bulb will consume 1.44 kilowatt-hours. The
LED will consume only 0.192 kilowatt-hours.
Besides light, the incandescent bulb is producing a lot of heat. If it
is inside your house, what happens to the heat? It warms your house.
In the winter, when you want light and heat, the incandescent bulb is
100\% efficient!
In the summer, if you are running the air conditioner, the
incandescent bulb is worse than just ``inefficient at making light'' --
it is actually counteracting the air conditioner!
| {
"alphanum_fraction": 0.7752343845,
"avg_line_length": 43.6861702128,
"ext": "tex",
"hexsha": "b706310d2502ac2cc15fdb65534223c1cc5db9ab",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2022-01-05T00:43:58.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-01-05T00:43:58.000Z",
"max_forks_repo_head_hexsha": "b7b4896d804c49cbc93fe86a0d2fce531afbcc1f",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "hillegass/sequence",
"max_forks_repo_path": "Modules/MatterEnergy/work_energy-en_US.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b7b4896d804c49cbc93fe86a0d2fce531afbcc1f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "hillegass/sequence",
"max_issues_repo_path": "Modules/MatterEnergy/work_energy-en_US.tex",
"max_line_length": 133,
"max_stars_count": 10,
"max_stars_repo_head_hexsha": "b7b4896d804c49cbc93fe86a0d2fce531afbcc1f",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "hillegass/sequence",
"max_stars_repo_path": "Modules/MatterEnergy/work_energy-en_US.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-05T00:43:44.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-06-13T17:19:16.000Z",
"num_tokens": 2089,
"size": 8213
} |
\subsection{Abelian groups}\label{subsec:abelian_groups}
\begin{definition}\label{def:abelian_group}
A \hyperref[def:magma/commutative]{commutative} \hyperref[def:group]{group} is usually called an \term{abelian group}. We denote by \( \cat{Ab} \) the category of abelian groups.
By \fullref{thm:ring_is_integer_algebra}, the abelian groups are precisely the \hyperref[def:algebra_over_ring]{rings} over \( \BbbZ \), and we have an \hyperref[rem:category_similarity/isomorphism]{isomorphism of categories} \( \cat{Ab} \cong \cat{Mod}_\BbbZ \).
\end{definition}
\begin{remark}\label{rem:additive_magma}
General groups often arise as \hyperref[def:automorphism_group]{automorphism groups}, which are, for the most part, non-commutative, while abelian groups are usually used as the main building block for \hyperref[def:ring]{rings} and \hyperref[def:module]{modules}.
To make a further distinction, if the operation is denoted by \( \cdot \) or juxtaposition, we say that the group is a \term{multiplicative group}, and if the operation is denoted by \( + \), we say that the group is an \term{additive group}. This terminology usually, but not necessarily, coincides with the group (or, more generally, the \hyperref[def:magma]{magma}) being \hyperref[def:magma/commutative]{commutative}.
To make things explicit, a \term{multiplicative magma} is any magma as defined in \fullref{def:magma}. Compare this to \term{additive magmas}, where
\begin{thmenum}
\thmitem{rem:additive_magma/addition} The magma operation is denoted by \( + \) and called \term{addition}.
		\thmitem{rem:additive_magma/multiplication} The magma \hyperref[def:magma/exponentiation]{exponentiation operation} \( x^n \) is denoted by \( n \cdot x \) or juxtaposition and called \term{multiplication}. Thus, multiplication is not defined for two elements of the magma, but defined for an integer and an element of the magma. That is,
\begin{equation}\label{eq:rem:additive_magma/multiplication}
\begin{aligned}
					&\cdot: \BbbZ \times M \to M \\
					&n \cdot x \coloneqq \begin{cases}
						0_M, &n = 0, \T{initial condition if} M \T{is a monoid} \\
						x, &n = 1, \T{initial condition if} M \T{is not a monoid} \\
						(n - 1) \cdot x + x, &n > 1 \\
						-((-n) \cdot x), &n < 0, \\
\end{cases}
\end{aligned}
\end{equation}
In the case of a \hyperref[def:magma/commutative]{commutative} \hyperref[def:monoid]{monoid}, if multiplication is extended to two elements of the monoid, we instead talk about \hyperref[def:semiring]{semirings}.
\thmitem{rem:additive_magma/identity} The \hyperref[def:monoid]{identity} is usually denoted by \( 0 \).
\thmitem{rem:additive_magma/inverse} If an \hyperref[def:monoid_inverse]{inverse} of \( x \) exists, it is denoted by \( -x \) rather than \( x^{-1} \).
\end{thmenum}
\end{remark}
\begin{proposition}\label{thm:abelian_outer_automorphism_group}
In an \hyperref[def:abelian_group]{abelian group}, the full \hyperref[def:automorphism_group]{automorphism group} \( \aut(G) \) is isomorphic to the \hyperref[def:inner_and_outer_automorphisms]{outer automorphism group} \( \op{out}(G) \).
\end{proposition}
\begin{proof}
If the group operation is \hyperref[def:magma/commutative]{commutative}, then \( xyx^{-1} = yxx^{-1} = y \), which makes the \hyperref[def:inner_and_outer_automorphisms]{conjugation action} trivial. Thus, the \hyperref[def:inner_and_outer_automorphisms]{inner automorphism group} \( \op{int}(G) \) is trivial, and hence \( \aut(G) \cong \op{out}(G) \).
\end{proof}
\begin{proposition}\label{thm:abelian_normal_subgroups}
All subgroups of an abelian group are \hyperref[thm:normal_subgroup_equivalences]{normal}.
\end{proposition}
\begin{proof}
Let \( G \) be abelian and \( H \) be a subgroup of \( G \). Then \( x H x^{-1} = xx^{-1} H = H \) for any \( x \in G \) and thus \( H \) is normal.
\end{proof}
\begin{definition}\label{def:congruence_modulo_normal_subgroup}
Given a \hyperref[thm:normal_subgroup_equivalences]{normal subgroup} \( N \) of an \hyperref[def:abelian_group]{abelian group} \( G \), we say that two elements \( x \) and \( y \) of \( G \) are \term{congruent modulo} \( N \) and write \( x \cong y \pmod N \) if \( x - y \in N \).
If \( N = \braket{ z } \), this implies that \( x \cong y \pmod z \) if and only if \( x - y \in \braket{ z } \).
This concept also extends to \hyperref[def:semiring_ideal]{ring ideals} rather than normal subgroups, in which case \( \braket{ z } \) is the \hyperref[def:semiring_ideal/generated]{ideal generated} by \( z \) rather than the \hyperref[def:cyclic_group]{cyclic subgroup} of \( z \).
\end{definition}
\begin{proposition}\label{thm:group_of_integers_modulo}
The \hyperref[def:set_of_integers]{integers} \( \BbbZ \) form an abelian group under addition. For every positive integer \( n \), we define the group
\begin{equation*}
\BbbZ_n \coloneqq \set{ 0, 1, \ldots, n - 1 }
\end{equation*}
with the operation
\begin{equation*}
x \oplus y \coloneqq \rem(x + y, n)
\end{equation*}
so that
\begin{equation*}
x \oplus y \cong x + y \pmod n.
\end{equation*}
The group \( \BbbZ_n \) is called the \term{group of integers modulo} \( n \). Compare this result with \fullref{thm:ring_of_integers_modulo}.
\end{proposition}
\begin{proof}
We will prove that \( \BbbZ_n \) is an abelian group.
\SubProofOf[def:magma/associative]{associativity} Addition in \( \BbbZ_n \) is associative since
\begin{balign*}
(x \oplus y) \oplus z
&=
\rem((x \oplus y) + z, n)
= \\ &=
\rem(\rem(x + y, n) + z, n)
= \\ &=
\rem(x + y - n \quot(x + y, n) + z, n)
= \\ &=
\rem(x + y + z, n)
= \\ &=
\ldots
= \\ &=
x \oplus (y \oplus z).
\end{balign*}
\SubProofOf[def:monoid]{identity} The zero is the identity.
\SubProofOf[def:monoid_inverse]{inverse} Fix \( x \in \BbbZ_n \). If \( x = 0 \), its inverse is \( 0 \). If \( x > 0 \), its inverse is \( n - x \) since \( n - x \in \BbbZ_n \) and
\begin{equation*}
x \oplus (n - x) = x + (n - x) - n = 0.
\end{equation*}
\SubProofOf[def:magma/commutative]{commutativity} Follows from
\begin{equation*}
x \oplus y
=
\rem(x + y, n)
=
\rem(y + x, n)
=
y \oplus x.
\end{equation*}
\end{proof}
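\begin{example}
	As a small concrete illustration of \fullref{thm:group_of_integers_modulo}, consider \( \BbbZ_6 \). We have
	\begin{equation*}
		4 \oplus 5 = \rem(4 + 5, 6) = 3,
	\end{equation*}
	that is, \( 4 + 5 \cong 3 \pmod 6 \) in the sense of \fullref{def:congruence_modulo_normal_subgroup} since \( 9 - 3 = 6 \in \braket{ 6 } \). The inverse of \( 4 \) is \( 6 - 4 = 2 \) because \( 4 \oplus 2 = \rem(6, 6) = 0 \).
\end{example}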
\begin{proposition}\label{thm:integers_modulo_isomorphic_to_quotient_group}
The group \( \BbbZ_n \) of \hyperref[thm:group_of_integers_modulo]{integers modulo \( n \)} is isomorphic to the quotient of \( \BbbZ \) by \( n\BbbZ = \set{ nz \given z \in \BbbZ } \). That is,
\begin{equation*}
\BbbZ_n \cong \BbbZ / n\BbbZ.
\end{equation*}
\end{proposition}
\begin{proof}
Define the function
\begin{align*}
&\varphi: \BbbZ_n \to \BbbZ / n\BbbZ \\
&\varphi(x) \coloneqq x + n\BbbZ.
\end{align*}
It is a homomorphism because
\begin{balign*}
\varphi(x \oplus y)
&=
\varphi(\rem(x + y, n))
= \\ &=
\varphi(x + y - n \quot(x + y, n))
= \\ &=
x + y - n \quot(x + y, n) + n\BbbZ
= \\ &=
x + y + n\BbbZ
= \\ &=
(x + n\BbbZ) + (y + n\BbbZ)
= \\ &=
\varphi(x) + \varphi(y).
\end{balign*}
Furthermore, this shows that \( \varphi \) is also an isomorphism.
\end{proof}
\begin{example}\label{ex:lagranges_theorem_for_groups/direct_product_zn}
\Fullref{thm:lagranges_theorem_for_groups} and \fullref{thm:integers_modulo_isomorphic_to_quotient_group} imply that, for any positive integer \( n \), \( (nm, k) \mapsto nm + k \) is a bijection between \( n \BbbZ \times \BbbZ_n \) and \( \BbbZ \). This bijection, however, is not necessarily a group isomorphism because \eqref{eq:def:magma/homomorphism} may not hold.
Consider the tuples \( (nm_1, k_1) \) and \( (nm_2, k_2) \) in \( n \BbbZ \times \BbbZ_n \). We have
\begin{equation*}
(nm_1, k_1) + (nm_2, k_2) = (nm_1 + nm_2, \rem(k_1 + k_2, n)).
\end{equation*}
Therefore, if \( k_1 + k_2 \geq n \),
\begin{equation*}
nm_1 + nm_2 + \rem(k_1 + k_2, n) < (nm_1 + k_1) + (nm_2 + k_2).
\end{equation*}
\end{example}
\begin{proposition}\label{thm:cyclic_group_isomorphic_to_integers_modulo_n}
The \hyperref[def:cyclic_group]{cyclic group} \( C_n \) is isomorphic to the group \hyperref[thm:group_of_integers_modulo]{\( \BbbZ_n \)} of integers modulo \( n \).
\end{proposition}
\begin{proof}
The homomorphism
\begin{equation*}
\begin{aligned}
&\varphi: \BbbZ_n \to C_n \\
&\varphi(k) \coloneqq a^k,
\end{aligned}
\end{equation*}
and the analogous homomorphism for the infinite group, are isomorphisms.
\end{proof}
\begin{definition}\label{def:monoid_grothendieck_completion}\mcite{nLab:grothendieck_group_of_a_commutative_monoid}
Let \( M \) be a \hyperref[def:magma/commutative]{commutative} \hyperref[def:monoid]{monoid}. Define the \hyperref[def:equivalence_relation]{equivalence relation} \( \sim \) on tuples of members of \( M \) to hold for \( (a, b) \sim (a', b') \) if there exists an element \( u \) of \( M \) such that
\begin{equation*}
a + b' + u = a' + b + u.
\end{equation*}
Define addition on the \hyperref[thm:equivalence_partition]{equivalence partition} \( G \coloneqq (M \times M) / {\sim }\) componentwise as
\begin{equation*}
[(a, b)] \oplus [(c, d)] \coloneqq [(a + c, b + d)]
\end{equation*}
and fix a canonical embedding
\begin{equation*}
\begin{aligned}
&\iota_M: M \to G \\
&\iota_M(m) \coloneqq [(m, 0)].
\end{aligned}
\end{equation*}
We call the obtained \hyperref[def:abelian_group]{abelian group} \( (G, \oplus) \) the \term{Grothendieck completion} of \( M \).
\end{definition}
\begin{defproof}
\SubProof{Proof that \( \sim \) is an equivalence relation}
\SubProofOf*[def:binary_relation/reflexive]{reflexivity}
\begin{equation*}
(a, b) \sim (a, b) \T{if and only if} a + b + 0 = a + b + 0
\end{equation*}
\SubProofOf*[def:binary_relation/symmetric]{symmetry} By commutativity, if \( (a, b) \sim (a', b') \), then there exists \( u \) such that
\begin{equation*}
				a + b' + u = a' + b + u
				\T{and therefore}
				a' + b + u = a + b' + u,
\end{equation*}
hence \( (a', b') \sim (a, b) \).
			\SubProofOf*[def:binary_relation/transitive]{transitivity} Suppose that \( (a, b) \sim (a', b') \) and \( (a', b') \sim (a^\dprime, b^\dprime) \). Thus, there exist elements \( u \) and \( v \) of \( M \) such that
\begin{align*}
a + b' + u &= a' + b + u, \\
a' + b^\dprime + v &= a^\dprime + b' + v.
\end{align*}
Summing both sides, we obtain
\begin{equation*}
(a + b' + u) + (a' + b^\dprime + v) = (a' + b + u) + (a^\dprime + b' + v)
\end{equation*}
We reorder both sides to obtain
\begin{equation*}
(a + b^\dprime) + (a' + b' + u + v) = (a^\dprime + b) + (a' + b' + u + v),
\end{equation*}
			which implies \( (a, b) \sim (a^\dprime, b^\dprime) \).
\SubProof{Proof that \( (G, \oplus) \) is an abelian group}
		\SubProof*{Proof that \( \oplus \) is well-defined} The addition operation on \( G \) does not depend on the representative of the equivalence class. Indeed, let \( (a, b) \sim (a', b') \) and \( (c, d) \sim (c', d') \). Then there exist \( u \) and \( v \) such that
\begin{align*}
a + b' + u &= a' + b + u, \\
c + d' + v &= c' + d + v.
\end{align*}
			When combined, these give
\begin{equation*}
(a + c) + (b' + d') + (u + v)
=
(a' + c') + (b + d) + (u + v),
\end{equation*}
which implies that
\begin{equation*}
(a + c, b + d) \sim (a' + c', b' + d').
\end{equation*}
		\SubProofOf*[def:magma/associative]{associativity} Associativity of addition in \( G \) is inherited from associativity of addition in \( M \).
\SubProofOf*[def:monoid]{identity} The equivalence class \( [(0, 0)] \) is an identity in \( G \) and contains the pairs \( (x, x) \) of identical elements.
\SubProofOf*[def:monoid_inverse]{inverse} For each member \( (a, b) \in M \times M \), its inverse is \( (b, a) \) because
\begin{equation*}
				[(a, b)] \oplus [(b, a)] = [(a + b, b + a)],
\end{equation*}
which, by commutativity, belongs to \( [(0, 0)] \).
\SubProofOf*[def:magma/commutative]{commutativity} Commutativity of the group operation \( \oplus \) is also inherited from the monoid operation \( + \).
\end{defproof}
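\begin{example}
	The motivating example of \fullref{def:monoid_grothendieck_completion} is the completion of the additive monoid \( \BbbN \) of natural numbers. Since addition in \( \BbbN \) is cancellative, \( (a, b) \sim (a', b') \) holds if and only if \( a + b' = a' + b \), so each equivalence class is determined by the formal difference of its components, and the map
	\begin{equation*}
		[(a, b)] \mapsto a - b
	\end{equation*}
	is a group isomorphism between the Grothendieck completion of \( \BbbN \) and the additive group \( \BbbZ \) of integers.
\end{example}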
\begin{theorem}[Grothendieck monoid completion universal property]\label{thm:grothendieck_monoid_completion_universal_property}\mcite{nLab:grothendieck_group_of_a_commutative_monoid}
The \hyperref[def:monoid_grothendieck_completion]{Grothendieck completion} \( \overline{M} \) of a commutative monoid \( M \) satisfies the following \hyperref[rem:universal_mapping_property]{universal mapping property}:
\begin{displayquote}
For every abelian group \( G \) and every monoid homomorphism \( \varphi: M \to G \), there exists a unique group homomorphism \( \widetilde{\varphi}: \overline{M} \to G \) such that the following diagram commutes:
\begin{equation}\label{eq:thm:grothendieck_monoid_completion_universal_property/diagram}
\begin{aligned}
\includegraphics[page=1]{output/thm__monoid_grothendieck_completion_universal_property.pdf}
\end{aligned}
\end{equation}
\end{displayquote}
Via \fullref{rem:universal_mapping_property}, \( \overline{\anon} \) becomes \hyperref[def:category_adjunction]{left adjoint} to the \hyperref[def:concrete_category]{forgetful functor}
\begin{equation*}
U: \cat{Ab} \to \cat{CMon}.
\end{equation*}
Compare this result to \fullref{thm:grothendieck_semiring_completion_universal_property}.
\end{theorem}
\begin{proof}
Let \( \varphi: M \to G \) be a monoid homomorphism into an abelian group \( G \). We want to define a homomorphism \( \overline{\varphi} \) such that
\begin{equation*}
\overline{\varphi}(\iota_M(a)) = \overline{\varphi}([(a, 0)]) = \varphi(a).
\end{equation*}
Each equivalence class \( C \) in \( G \) has a unique member \( a \) such that \( (a, 0) \in C \), hence the above condition is well-posed.
Fix pairs \( (a, b) \) and \( (a', b') \) from \( M \times M \). Suppose that \( (a, b) \sim (a', b') \). Then there exists \( u \in M \) such that
\begin{equation*}
a + b' + u = a' + b + u.
\end{equation*}
An additional restriction on \( \overline{\varphi} \) is then
\begin{equation*}
\overline{\varphi}\parens[\Big]{ [(a, b)] }
=
\overline{\varphi}\parens[\Big]{ [(a', b')] }.
\end{equation*}
We need to cancel out \( u \). This uniquely determines \( \overline{\varphi} \) as
\begin{equation*}
\overline{\varphi}([(a, b)]) \coloneqq \varphi(a) - \varphi(b).
\end{equation*}
\end{proof}
\begin{definition}\label{def:group_commutator}\mcite[313]{Knapp2016BasicAlgebra}
Let \( G \) be an arbitrary group. We define the \term{commutator} of the elements \( x \) and \( y \) as
\begin{equation*}
[x, y] \coloneqq xyx^{-1}y^{-1}.
\end{equation*}
The \term{commutator subgroup} \( [G, G] \) of \( G \) is the subgroup \hyperref[def:group/submodel]{generated} by all the commutators in \( G \).
\end{definition}
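\begin{example}
	The commutator measures the failure of two elements to commute: \( [x, y] \) equals the identity if and only if \( xy = yx \). Consequently, \( [G, G] \) is the trivial subgroup if and only if \( G \) is abelian. For a nonabelian example, the commutator subgroup of the symmetric group \( S_3 \) is its alternating subgroup \( A_3 \).
\end{example}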
\begin{theorem}[Group abelianization universal property]\label{thm:group_abelianization_universal_property}\mcite[prop. 7.4]{Knapp2016BasicAlgebra}
	The commutator group \( [G, G] \) of any group \( G \) is \hyperref[thm:normal_subgroup_equivalences]{normal} and the quotient \( G / [G, G] \) is an abelian group, which we call the \term{abelianization} of \( G \), and satisfies the following \hyperref[rem:universal_mapping_property]{universal mapping property}:
\begin{displayquote}
For every abelian group \( H \), every group homomorphism \( \varphi: G \to H \) \hyperref[def:factors_through]{uniquely factors through} \( G / [G, G] \). That is, there exists a unique group homomorphism \( \widetilde{\varphi}: G / [G, G] \to H \) such that the following diagram commutes:
\begin{equation}\label{eq:thm:group_abelianization_universal_property/diagram}
\begin{aligned}
\includegraphics[page=1]{output/thm__group_abelianization_universal_property.pdf}
\end{aligned}
\end{equation}
\end{displayquote}
Via \fullref{rem:universal_mapping_property}, the abelianization functor becomes \hyperref[def:category_adjunction]{left adjoint} to the \hyperref[def:concrete_category]{forgetful functor}
\begin{equation*}
U: \cat{Ab} \to \cat{Grp}.
\end{equation*}
This result extends to \fullref{thm:ring_abelianization_universal_property}.
\end{theorem}
\begin{proof}
Let \( C \coloneqq [G, G] \).
	\SubProof{Proof that \( G / C \) is abelian} Normality of \( C \) easily follows from
\begin{equation*}
a xyx^{-1}y^{-1} a^{-1}
=
(a x a^{-1}) (a y a^{-1}) (a x a^{-1})^{-1} (a y a^{-1})^{-1}.
\end{equation*}
Then for the cosets \( a C \) and \( b C \), we have
\begin{equation*}
a C \cdot b C
=
a b C
=
a b (b^{-1} a^{-1} b a) C
=
b a C.
\end{equation*}
Therefore, the quotient group \( G / C \) is abelian.
\SubProof{Proof of universal mapping property} Let \( H \) be an abelian group and let \( \varphi: G \to H \) be a group homomorphism.
Observe that \( \varphi(C) = e_H \). Indeed, since \( H \) is abelian, for \( [x, y] = xyx^{-1}y^{-1} \in C \) we have
\begin{equation*}
		\varphi([x, y]) = \varphi(x) \varphi(y) \varphi(x^{-1}) \varphi(y^{-1}) = \varphi(x) \varphi(x^{-1}) \varphi(y) \varphi(y^{-1}) = e_H.
\end{equation*}
We want \( \overline{\varphi}: G / C \to H \) to satisfy
\begin{equation*}
\overline{\varphi}(\underbrace{\pi_G(x)}_{xC}) = \varphi(x).
\end{equation*}
This suggests the definition
\begin{equation*}
\overline{\varphi}(xC) \coloneqq \varphi(x).
\end{equation*}
It is well-defined because if \( xC = yC \), we have
\begin{equation*}
\varphi(x)
=
\varphi(x) e_H
=
\varphi(x) \varphi(C)
=
\varphi(x C)
=
\varphi(y C)
=
\ldots
=
\varphi(y).
\end{equation*}
\end{proof}
| {
"alphanum_fraction": 0.6455534584,
"avg_line_length": 46.3084832905,
"ext": "tex",
"hexsha": "0c98e32035c42df0c000625a05a5fbd04f5fda4a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "v--/anthology",
"max_forks_repo_path": "src/abelian_groups.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "v--/anthology",
"max_issues_repo_path": "src/abelian_groups.tex",
"max_line_length": 423,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "v--/anthology",
"max_stars_repo_path": "src/abelian_groups.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6266,
"size": 18014
} |
\chapter{Existing proposals for improving kilohertz sensitivity} % Background: existing proposals for improving kilohertz sensitivity
\label{chp:proposals}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% chapter introduction
% Beyond better external squeezing, there are many\jam{(are there?)} existing proposals to improve the kilohertz sensitivity of gravitational-wave detectors. In the next two chapters, I examine two of the front-runners: degenerate internal squeezing and stable optomechanical filtering.
% demonstrate modelling again, for the third and last time before my model -- much of the maths should be quite familiar by now
% set-up the problem: how do these configs get past the four factors in the introduction and why aren't they good enough for kilohertz sensitivity
In this chapter, I critically examine two of the existing configurations that address the problem of increasing kilohertz sensitivity: degenerate internal squeezing in Section~\ref{sec:dIS} and stable optomechanical filtering in Section~\ref{sec:sWLC}.
% I will use the tools of the previous chapter to understand degenerate internal squeezing and demonstrate how I will approach my work. Then, I will discuss stable optomechanical filtering using the same ideas. %, although I do not present a separate model of it for reasons that will become clear.
%Stable optomechanical filtering is the optomechanical analogue of my all-optical configuration which implies that there is .
I present the limitations of these two proposals to motivate my work in subsequent chapters on a configuration that combines their strengths and might be able to overcome their limitations to better improve kilohertz sensitivity.
% I will argue that, while promising, these existing proposals have limited tolerance to optical and mechanical loss, respectively. This will motivate my work in the following chapters into a configuration that combines these two proposals but might be able to overcome their limitations and better improve kilohertz sensitivity.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Degenerate internal squeezing}
\label{sec:dIS}
\begin{figure}[ht]
\centering
\includegraphics[angle=-90,width=0.9\textwidth]{dIS_config.pdf}
% squeezing ellipse and signal arrow plot (+ show the effect of optical loss: detection and intracavity)
\caption{Degenerate internal squeezing configuration (left panel), and the squeezer's effect on the measured signal and noise (right panel) using a noise ellipse and signal arrow representation where the height of the arrow represents the signal response. The signal-recycling cavity resembles the degenerate OPO in Fig.~\ref{fig:OPOs_config}. Detection loss ($R_\text{PD}$) is included in the output field ($\hat{B}_\text{out}$) via the beamsplitter convention in Fig.~\ref{fig:beamsplitter_loss}. The signal enters the detector by moving the end test masses and the noise enters as vacuum from the readout port and losses. The sensitivity is improved by squeezing the noise more than the signal is decreased.}
\label{fig:dIS_config}
\end{figure}
%\jam{(Explain why it works.)}
% has been explained many times before, but do it again here because it is the main reference
% how does this configuration beat the Mizuno limit (four factors in the introduction)? --> squeezing
Degenerate internal squeezing consists of a degenerate squeezer placed inside the signal-recycling cavity of the detector in Fig.~\ref{fig:DRFPMI} such that it squeezes the signal mode as shown in Fig.~\ref{fig:dIS_config}~\cite{korobkoQuantumExpanderGravitationalwave2019}. In this configuration, the vacuum entering the readout port is squeezed (as with external squeezing) and the vacuum from the intra-cavity losses and the gravitational-wave signal are also squeezed (unlike external squeezing)~\footnote{Here, by squeezing the signal I mean that the signal in one output quadrature is decreased while the signal in the other quadrature is increased.}.
%Although a single-pass of the squeezer squeezes the signal and noise equally~\cite{},
% The overall sensitivity, i.e.\ the signal-to-noise ratio, is improved because
The signal comes from the test masses in the arms and the noise comes predominantly from the vacuum entering the readout port. This means that the signal and noise ``see'' the signal-recycling cavity and the squeezer differently as shown in Fig.~\ref{fig:dIS_config}. %, which changes the average number of single-passes of the squeezer each experiences~\cite{}.
Therefore, degenerate internal squeezing improves sensitivity, i.e.\ the signal-to-noise ratio, by squeezing the noise more than the signal as shown in Fig.~\ref{fig:dIS_config}~\cite{korobkoQuantumExpanderGravitationalwave2019}.
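Schematically, and leaving aside the frequency dependence captured by the full model: if the configuration scales the signal amplitude in the measured quadrature by a factor $g_\text{sig}<1$ while scaling the noise standard deviation in the same quadrature by a smaller factor $g_\text{noise}<g_\text{sig}$, then the amplitude signal-to-noise ratio changes as
\begin{equation*}
	\frac{S}{N} \rightarrow \frac{g_\text{sig} S}{g_\text{noise} N} = \frac{g_\text{sig}}{g_\text{noise}} \frac{S}{N} > \frac{S}{N},
\end{equation*}
i.e.\ the sensitivity improves even though the signal itself is de-amplified. Here $g_\text{sig}$ and $g_\text{noise}$ are only illustrative placeholders; their frequency dependence is what a full input-output model, such as the Hamiltonian model discussed in Section~\ref{sec:dIS_results}, determines.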
%The overall effect is to de-amplify the signal but squeeze the noise more such that the sensitivity improves, as shown in Fig.~\ref{fig:dIS_ellipse_and_arrow}. Where the improvement overcomes the Mizuno Limit from Section~\ref{} through the use of squeezing. %However, this improvement only occurs around the ``sloshing'' frequency~\cite{}, the coupling frequency of the two cavities, which will be shown.
% In this section, I present a model of degenerate internal squeezing and discuss its behaviour. These results exist in the literature~\cite{korobkoQuantumExpanderGravitationalwave2019}, but I demonstrate them, abridged, here to justify the approach I use in my work and establish what is expected from internal squeezing. In particular, I demonstrate how to consider radiation pressure, the gravitational-wave signal, and stability, which I have not yet explained.
%bridge from the OPOs in the previous chapter to my work on nondegenerate internal squeezing in the next chapter.
%The structure of the results presented here will be mirrored in the next chapter which should make those later results more convincing. % what do I intend the reader to get out of it?
%\jam{(I do not just show this model/results because I spent time with them. Is it clear what I want the audience to get out of them?)}
% A proposed technology to further reduce shot noise is degenerate internal squeezing where a degenerate squeezer is included inside the signal-recycling cavity as shown in the left panel of Fig.~\ref{fig:coupled_cavities}~\cite{korobkoQuantumExpanderGravitationalwave2019,adyaQuantumEnhancedKHz2020}.
% To simplify modelling this configuration, the two coupled optical modes of concern, the differential mode from the arm cavities and the mode in the signal-recycling cavity, are approximated as coming from a pair of coupled cavities as shown in the right panel of Fig.~\ref{fig:coupled_cavities}. The quantum noise enters with the vacuum into the signal-recycling cavity, but the signal arises in the arm cavity and so sees the squeezer differently. This difference leads to no change in sensitivity at low frequencies and improved sensitivity at high frequencies around the resonance of the signal-recycling cavity as shown in Fig.~\ref{fig:sensitivity_curve}. The improvement occurs around the signal-recycling cavity resonance as this is where the most quantum noise field is present inside the cavity and so is where the internal squeezer produces the most squeezing. The resulting noise reduction is sufficient to overcome the de-amplification of the signal due to squeezing. The overall sensitivity improvement from degenerate internal squeezing is limited by optical loss in the interferometer and at the photodetector~\cite{korobkoQuantumExpanderGravitationalwave2019}.
% analytic model
\begin{comment}
\subsection{Analytic model}
\label{sec:dIS_model}
\jam{(Is showing this model necessary?)}
% I have verified this model against Korobko
I construct a Hamiltonian model of degenerate internal squeezing using the formalism from Chapter~\ref{chp:background_theory}. This model is based on and verified against Ref.~\cite{korobkoQuantumExpanderGravitationalwave2019}. %I include it here to demonstrate the Hamiltonian method that I later use in my work, particularly the radiation pressure, gravitational-wave signal, and the system's stability\jam{(repetition with section intro)}.
Since degenerate internal squeezing is a degenerate OPO coupled to another cavity, as shown in Fig.~\ref{fig:dIS_config}, I use the degenerate OPO model in Section~\ref{sec:dOPO_model} and add the extra modes associated with the arm cavity. Let the signal-recycling cavity and output modes be labelled as in Section~\ref{sec:dOPO_model} and let the arm cavity mode at carrier frequency $\omega_0$ be $\hat a$ (which is resonant in the single-mode approximation) with an intra-cavity loss port with transmissivity $T_{l,a}$ into vacuum $\hat n^L_a$. Let the gravitational-wave signal $h(t)$ from Section~\ref{sec:gravWaves} be coupled to the arm cavity mode by the test mass mechanical mode given by displacement $\hat x$ and momentum $\hat p$ (approximated as free-falling horizontally as explained in Section~\ref{sec:qnoise_GW_IFO}).
The Hamiltonian of this system is $\hat H = \hat H_0 + \hat H_I + \hat H_\gamma + \hat H_\text{GW}$ where~\cite{}
\jam{(fill in Langevin Hamiltonian)}
\begin{align}
\hat H_0 &= \hbar \omega_0 \hat a^\dag \hat a + \hbar \omega_0 \hat b^\dag \hat b + \hbar 2\omega_0 \hat u^\dag \hat u\\
\hat H_I &= i\hbar\omega_s(\hat a\hat b^\dag-\hat a^\dag\hat b) +\hbar \frac{x}{2} (e^{i\phi} \hat u (\hat b^\dag)^2 + e^{-i\phi} \hat u^\dag \hat b^2)\\
\hat H_\gamma &= \int \ldots \\
\hat H_\text{GW} &= -\alpha (\hat{x}-L_\mathrm{arm}h(t))\left(\frac{\hat{a}+\hat{a}^\dag}{\sqrt{2}}\right)+\frac{1}{2\mu}\hat{p}^2.
\end{align}
% \beta = \frac{\alpha L_\mathrm{arm}}{\sqrt{2}\hbar}, \beta = \sqrt{\frac{2 P_\text{circ} L_\text{arm} \omega_0}{\hbar c}}, \alpha = \sqrt{2} \hbar/L_arm= \sqrt{\frac{4 P_\text{circ} \omega_0 \hbar}{c L_\text{arm}}}
% multiple different hamiltonians for RP
% problem with omega_s formula, only valid below one FSR of the arms
Where $\alpha=\sqrt{\frac{2 P_\text{circ} \omega_0 \hbar}{c L_\text{arm}}}$\jam{(clarify that this is the Li formula which Korobko disagrees with by a factor of rt2 for reasons unknown. Why do they disagree? I use the Li value for comparison to sWLC but this affects the feasibility of nIS! Ask the supervisors.)} is the coupling strength to the gravitational-wave signal~\cite{}, $\mu=M/4$ is the reduced mass\jam{(Explain reduced mass.)} of the test mass with mass $M$~\cite{}, and $\omega_s\approx c\sqrt{\frac{T_\text{ITM}}{4 L_\text{arm} L_\text{SRM}}}$ is an approximation to the sloshing frequency between the coupled cavities (also known as the coupled cavity pole) which holds when it is below one FSR of the arm cavities\jam{(check this)}~\cite{}. I use $\hat H_\text{GW}$ from Ref.~\cite{liBroadbandSensitivityImprovement2020,original source?} which couples the mirror position and gravitational wave length displacement $L_\text{arm} h(t)$ to the amplitude quadrature of the cavity mode~\footnote{Although, perhaps, a more natural formulation would couple the gravitational-wave strain to the mirror position and the mirror position to the cavity mode, this is equivalent~\cite{}.}.
The Heisenberg-Langevin equations-of-motion for $\hat a, \hat b, \hat x$ and $\hat p$ can be found using the bosonic commutation relations, the canonical commutation relation $[\hat x,\hat p]=i\hbar$~\cite{}, and with all other commutators zero. As in Section~\ref{sec:dOPO_model}, I make semi-classical and no-pump-depletion approximations to the pump mode, enter the Interaction Picture, and take fluctuating components of the operators. The resulting equations are
\begin{equation}\label{eq:dIS_EoM}\begin{cases}
\dot{\hat a}=-\omega_s \hat b- \gamma_a \hat{a} + \sqrt{2\gamma_a}\hat{n}^L_a+\frac{i \alpha}{\hbar\sqrt2}(\hat{x}-L_\mathrm{arm}h(t)) \\
\dot{\hat b}=\omega_s \hat a-i\chi e^{i\phi} \hat b^\dag - \gamma^b_\mathrm{tot} \hat{b} + \sqrt{2\gamma^b_R}\hat{B}_\mathrm{in} + \sqrt{2\gamma_b}\hat{n}^L_b\\
\dot{\hat x}=\frac{1}{\mu}\hat p\\
\dot{\hat p}=\alpha\left(\frac{\hat{a}+\hat{a}^\dag}{\sqrt{2}}\right).
\end{cases}\end{equation}
By entering the Fourier domain, I solve these equations for $\vec{\hat b}(\Omega)=[\hat b(\Omega),\hat b^\dag(-\Omega)]^\text{T}$ in terms of similar vectors of the vacuum sources and signal, i.e.\ $\vec h(\Omega)=[\tilde h(\Omega),\tilde h^*(-\Omega)]^\text{T}$, I find that
\begin{align}
\text{M}_a\vec{\hat a}(\Omega)&=-\omega_s\vec{\hat b}(\Omega) + \sqrt{2\gamma_a}\vec{\hat n}^L_a(\Omega)-i\beta\begin{bsmallmatrix}1 & 0 \\0 & -1\end{bsmallmatrix}\vec h(\Omega) \\
\text{M}_a&= (\gamma_a-i \Omega)\text{I}+\frac{i \rho}{\Omega^2 \sqrt 2}\begin{bsmallmatrix}1 & 1 \\-1 & -1\end{bsmallmatrix} \\
\therefore\text{M}_b\vec{\hat b}(\Omega)&=\sqrt{2\gamma^b_R}\vec{\hat B}_\mathrm{in}(\Omega) + \sqrt{2\gamma_b}\vec{\hat n}^L_b(\Omega)+\omega_s \text{M}_a^{-1}\left(\sqrt{2\gamma_a}\vec{\hat n}^L_a(\Omega)-i\beta\begin{bsmallmatrix}1 & 0 \\0 & -1\end{bsmallmatrix}\vec h(\Omega)\right)\\
\text{M}_b&= (\gamma^b_\mathrm{tot}-i \Omega)\text{I}+\chi \begin{bsmallmatrix}0 & i e^{i\phi}\\-i e^{-i\phi} & 0\end{bsmallmatrix}+\omega_s^2\text{M}_a^{-1}.
\end{align}
Where the re-scaled coupling constants for the gravitational-wave signal and the radiation pressure, respectively~\footnote{As expected, $\mu=M/4\rightarrow\infty$ turns off the radiation pressure. Therefore, I will use $\rho=0$ to turn off the radiation-pressure noise.}, are
\begin{equation}\label{eq:beta_and_rho}
\beta = \frac{\alpha L_\mathrm{arm}}{\sqrt{2}\hbar}=\sqrt{\frac{ P_\text{circ}L_\text{arm} \omega_0 }{c \hbar}},\quad \rho = \frac{\alpha^2}{\sqrt{2}\hbar\mu}=\frac{\sqrt{2} P_\text{circ} \omega_0}{c \mu L_\text{arm}}.
\end{equation}
\jam{(this disagrees with Korobko since I use the alpha from Li, note this somewhere)}
Using the same input/output relations as Section~\ref{sec:dOPO_model} and $\Gamma=\frac{1}{\sqrt2}\begin{bsmallmatrix}1 & 1 \\-i & i\end{bsmallmatrix}$, I find the quadratures at the photodetector to be
\begin{align}
\vec{\hat X}_\mathrm{PD}(\Omega)&=\text{T}\vec h(\Omega)+\text{R}_\text{in}\vec{\hat X}_\mathrm{in}(\Omega)+\text{R}^L_a\vec{\hat X}^L_a(\Omega)+\text{R}^L_b\vec{\hat X}^L_b(\Omega)+\text{R}^L_\text{PD}\vec{\hat X}^L_\text{PD}(\Omega)\\
\text{T}&=-\sqrt{1-R_\text{PD}}\omega_s(-i\beta)\Gamma \sqrt{2\gamma^b_R}\text{M}_b^{-1}\text{M}_a^{-1}\begin{bsmallmatrix}1 & 0 \\0 & -1\end{bsmallmatrix}\\
\text{R}_\text{in}&=\sqrt{1-R_\text{PD}}\Gamma\left(\text{I}-2\gamma^b_R\text{M}_b^{-1}\right)\Gamma^{-1}\\
\text{R}^L_a&=-\sqrt{1-R_\text{PD}}\omega_s\Gamma 2\sqrt{\gamma^b_R \gamma_a}\text{M}_b^{-1}\text{M}_a^{-1}\Gamma^{-1}\\
\text{R}^L_b&=-\sqrt{1-R_\text{PD}}\Gamma 2\sqrt{\gamma^b_R \gamma_b}\text{M}_b^{-1}\Gamma^{-1}\\
\text{R}^L_\text{PD}&=\sqrt{R_\text{PD}} \text{I}.
\end{align}
Where the noise transfer matrices $\text{R}_\text{in},\text{R}^L_b,\text{R}^L_\text{PD}$ are the same as Eq.~\ref{eq:dOPO_PD_as_fn_of_vac} except for the difference in $\text{M}_b$ which accounts for the arm cavity and the radiation-pressure noise. %, as expected.
The total quantum noise is given by %the spectral density matrix %, i.e.\ shot noise plus quantum radiation-pressure noise,
\begin{equation}
\text{S}_X=\text{R}_\text{in}\text{R}_\text{in}^\dag+\text{R}^L_a{\text{R}^L_a}^\dag+\text{R}^L_b{\text{R}^L_b}^\dag+\text{R}^L_\text{PD}{\text{R}^L_\text{PD}}^\dag.
\end{equation}
This matrix has a similar form to Eq.~\ref{eq:dOPO_full_freedom} for the degenerate OPO but with terms for the radiation pressure noise such that the variances and covariances no longer reduce to 1 and 0, respectively, when the squeezer is off.\jam{(Display the matrix in some shortened form? Maybe in an appendix?)}
The signal transfer function (matrix) is defined with respect to $\tilde h(\Omega)$ ($\vec h(\Omega)$) but since $h(t)$ is real $\tilde h(\Omega)=\tilde h(-\Omega)^*$ and $\vec h(\Omega)=\tilde h(\Omega) \begin{bsmallmatrix}1 \\1\end{bsmallmatrix}$.
% \begin{align}
% \text{T}\begin{bsmallmatrix}1 \\1\end{bsmallmatrix}=\frac{1}{\rho \chi \cos (\phi ) (\ldots)+\Omega ^4 (\ldots)}\begin{bsmallmatrix}4 \beta ^2 \gamma_R (1-R_\text{PD}) \chi ^2 \Omega ^4 \omega_s^2 \left(\gamma_a^2+\Omega ^2\right) \cos ^2(\phi ) \\2 \beta ^2 \gamma_R (1-R_\text{PD}) \Omega ^4 \omega_s^2 \left(\left(\gamma_a^2+\Omega ^2\right) \left(2 {\gamma^b_\text{tot}}^2+\chi ^2+2 \Omega ^2\right)-4 \chi \sin (\phi ) \left({\gamma^b_\text{tot}} \left(\gamma_a^2+\Omega ^2\right)+\gamma_a \omega_s^2\right)-\chi ^2 \left(\gamma_a^2+\Omega ^2\right) \cos (2 \phi )+4 \omega_s^2 \left(\gamma_a {\gamma^b_\text{tot}}-\Omega ^2\right)+2 \omega_s^4\right)\end{bsmallmatrix}
% \end{align}
Inspecting $\text{T}\begin{bsmallmatrix}1 \\1\end{bsmallmatrix}$, i.e.\ the vector of signal transfer functions to each quadrature, shows that there are two terms: (1) rotates between the quadratures with the pump phase and (2) stays in the second quadrature and never vanishes with the pump phase\jam{(is it worth showing this?)}. I consider measuring the second quadrature at the photodetector since the signal is always there~\footnote{This does not mean that it is necessarily optimal to do so since the profile of the noise between the two quadratures is different to the signal, but it will suffice here~\cite{}.\jam{(What happens if I use $\phi=\pi$ and observe the first quadrature instead?)}}, and therefore the sensitivity ($\sqrt{S_h}$ is the noise-to-signal ratio) is
\begin{equation}
S_h = \frac{(\text{S}_X)_{2,2}}{\abs{(\text{T}\begin{bsmallmatrix}1 \\1\end{bsmallmatrix})_2}^2}.
\end{equation}
\end{comment}
\subsection{Understanding the behaviour using a Hamiltonian model}
\label{sec:dIS_results}
% priorities: radiation pressure, gravitational-wave signal, stability
% threshold, radiation pressure, and pump phase
\begin{figure}
\centering
\includegraphics[width=\textwidth]{dIS_lossless_N_S_NSR_annotated.pdf}
\caption{\jam{(Explain units.)} Degenerate internal squeezing's quantum noise response (upper-left panel), gravitational-wave signal response (bottom-left panel), and sensitivity (right panel) without optical losses. The squeezer (red curves) improves sensitivity around the sloshing frequency compared to the detector without squeezing (blue curves). The quantum noise response is shown in dB and the signal response is unitless~\cite{danilishinQuantumMeasurementTheory2012}. The sensitivity is conventionally shown as the noise-to-signal ratio in $\text{Hz}^{-1/2}$ and henceforth the goal is to lower the sensitivity curve. I use the parameters in Table~\ref{tab:dIS_parameters}. The readout rate ($\gamma^b_R$) determines the width of the squeezing peak centred on the sloshing frequency ($\omega_s$). %, this value of 5~kHz is at the upper end of the kilohertz sensitivity improvement regime compared to, for example, 90~kHz for broadband sensitivity~\cite{korobkoQuantumExpanderGravitationalwave2019}.
The radiation-pressure coupling constant ($\rho$, explained later) controls whether radiation-pressure noise is seen below $\sim10$~Hz ($\rho\neq0$) or not ($\rho=0$).
}
\label{fig:dIS_sensitivity}
\end{figure}
\begin{table}
\centering
\begin{tabular}{@{}ll|ll@{}}
\toprule
carrier wavelength, $\lambda_0$ & 2 $\mu\text{m}$ & sloshing frequency, $\omega_s$ & 5 kHz \\
arm cavity length, $L_\text{arm}$ & 4 km & signal mode transmissivity, $T_{\text{SRM},b}$ & 0.046 \\
signal-recycling cavity length, $L_\text{SRC}$ & 112.4 m & signal readout rate, $\gamma^b_R$ & 5 kHz \\
circulating arm power, $P_\text{circ}$ & 3 MW & arm intra-cavity loss, $T_{l,a}$ & 100 ppm \\
test mass mass, $M$ & 200 kg & signal mode intra-cavity loss, $T_{l,b}$ & 1000 ppm \\
input test mass transmissivity, $T_\text{ITM}$ & 0.0197 & detection loss, $R_\text{PD}$ & $10\%$ \\ \bottomrule
\end{tabular}
\caption{Parameter set based on LIGO~Voyager~\cite{LIGO_Voyager} but with deviations to achieve the sloshing frequency and readout rate shown. In particular, the signal-recycling cavity is made longer to increase the peak sensitivity via narrowing the peak. I use realistic future optical losses~\cite{zhangBroadbandSignalRecycling2021,Danilishin_2019} in parts-per-million (ppm). There is debate about $2~\mu\text{m}$ versus $1.064~\mu\text{m}$ as the preferred carrier wavelength~\cite{wills2018gravitational}, but I will only consider $2~\mu\text{m}$.}
\label{tab:dIS_parameters}
\end{table}
% In particular, as a baseline I will use\jam{(tabulate these parameters to reference later)} $3$~MW of circulating power, a $4$~km arm cavity, a $56$~m signal-recycling cavity, $200$~kg test masses, $2~\mu\text{m}$ carrier wavelength~\footnote{T}, $0.002$ transmission for the input test mass, and $0.046$ transmission for the signal-recycling mirror.
My work in the following chapters will follow a similar method to that presented below for degenerate internal squeezing, which I therefore show in detail.\jam{(is this necessary?)}
%\jam{(''Focus on the science, not the equations'' - VA. Consider what order these results should come in.)}
% comment on general performance, signal
% similarly to external squeezing, improving shot noise worsens radiation-pressure noise.
Using the \emph{Hamiltonian model} from Ref.~\cite{korobkoQuantumExpanderGravitationalwave2019}~\footnote{With an added factor of $\sqrt{2}$ to $G_0$ from that reference to match the convention for the gravitational-wave coupling constant $\alpha_\text{GW}$ from Ref.~\cite{liBroadbandSensitivityImprovement2020} that I use in Chapter~\ref{chp:nIS_analytics}.}, let the interaction Hamiltonian be
\begin{equation}
\hat{H}_I=i\hbar\omega_s(\hat{a}\hat{b}^\dag-\hat{a}^\dag\hat{b})+\frac{i\hbar\chi}{2}(e^{i\phi} (\hat b^\dag)^2 - e^{-i\phi} \hat b^2).
\end{equation}
Here $\hat a$ is the differential arm cavity mode~\footnote{I assume that the Michelson interferometer is tuned such that the light from the arms destructively interferes at the output of the beamsplitter, called the ``dark port''~\cite{bond_2010}.} at the carrier frequency $\omega_0$ coupled to the signal-recycling cavity signal mode $\hat b$ with coupling rate (called the ``sloshing'' frequency) $\omega_s$ determined by the transmission through the input test mass and the lengths of the two coupled cavities~\cite{korobkoQuantumExpanderGravitationalwave2019}. %\jam{(should I state the approximation here or leave it for the nIS analytics?)}.
The second term in the Hamiltonian is the same as the degenerate OPO in Section~\ref{sec:dOPO_model}~\footnote{Up to the phase of the pump.}.
The model also includes an intra-cavity loss port in the arms with transmissivity $T_{l,a}$ to vacuum $\hat n^L_a$ as shown in Fig.~\ref{fig:dIS_config}~\footnote{Ref.~\cite{korobkoQuantumExpanderGravitationalwave2019} does not include this but I include it in my model for completeness.}.
Using this model, the sensitivity of the detector to a gravitational-wave strain $h(t)$ is the signal-to-noise ratio $\frac{\abs{T}}{\sqrt{S_X}}$ in the measured quadrature $\hat X_\text{PD}(\Omega)=\sum_i R_i \hat X_i^\text{vac}(\Omega) + T \tilde h(\Omega)$, where $\sqrt{S_X}$ and $T$ are the noise and signal responses (also called ``transfer functions''). However, throughout this thesis I will plot the noise-to-signal ratio, i.e.\ $\sqrt{S_h}=\frac{\sqrt{S_X}}{\abs{T}}$, which has units of $\text{Hz}^{-1/2}$~\cite{danilishinQuantumMeasurementTheory2012}\jam{(why if $S_X$ and T are unitless?)}, as is the convention in the gravitational-wave literature (e.g.\ see Ref.~\cite{AdvancedLIGO:2015}); therefore, smaller values in $\text{Hz}^{-1/2}$ indicate better sensitivity. %~\footnote{This is the inverse quantity to the signal-to-noise ratios quoted in Section~\ref{sec:qnoise_GW_IFO}.}.
The resulting \emph{noise and signal responses and the sensitivity} are shown in Fig.~\ref{fig:dIS_sensitivity} without optical losses and for the parameter set of LIGO~Voyager but with a readout rate of 5~kHz~\footnote{I increase the readout rate from $0.5$ to 5~kHz but keep the sloshing frequency fixed at 5~kHz by shortening the signal-recycling cavity and decreasing the transmission through the input test mass.} as shown in Table~\ref{tab:dIS_parameters}.
LIGO~Voyager~\cite{LIGO_Voyager} is a planned series of upgrades to the Advanced~LIGO detectors; I use its parameters throughout this thesis to represent the next generation of gravitational-wave detectors. %~\footnote{Similar parameters are used in the literature~\cite{liBroadbandSensitivityImprovement2020,miaoDesignGravitationalWaveDetectors2018,korobkoQuantumExpanderGravitationalwave2019} but might be biased against one configuration over another. I have partially mitigated this by varying the coupling rates between the modes which appears to best characterise the different classes of parameter sets~\cite{}.}.
% From this baseline parameter set, I will change the readout rate through the signal-recycling mirror by varying the length of the signal-recycling cavity~\footnote{I will also vary the input test mass's transmissivity to counteract the effect of changing the signal-recycling cavity's length on the coupling rate to the arm cavity (known as the sloshing frequency~\cite{}).} as there is much variation of this parameter in the literature that I compare to~\cite{liBroadbandSensitivityImprovement2020,miaoDesignGravitationalWaveDetectors2018,korobkoQuantumExpanderGravitationalwave2019,}.
The squeezer parameter $\chi$ affects the sensitivity as shown in Fig.~\ref{fig:dIS_sensitivity} and needs to be optimised for each configuration separately. With the squeezer turned off, the configuration becomes the detector in Fig.~\ref{fig:DRFPMI}. %and the noise, signal, and sensitivity reduce to the baseline curves~\cite{,}, where the signal transfer function peaks at $\omega_s$ with bandwidth of the peak $\gamma^b_R$~\cite{}~\footnote{Not to be confused with the overall bandwidth of the signal response which is still the arm cavity free-spectral range.}.
Turning the squeezer on (1) squeezes the shot noise and de-amplifies the signal around the sloshing frequency $\omega_s$, (2) does not affect the radiation-pressure noise below 100~Hz, and therefore (3) improves sensitivity around the sloshing frequency while not affecting it at lower frequencies.
Threshold in the lossless case is $\chi_\text{thr}=\gamma^b_R$ where the squeezed noise goes to zero at $\Omega=\omega_s$~\footnote{And the anti-squeezed quadrature diverges.}. In the lossy case, the situation is more complicated and the threshold is not quoted in the literature, which I will address in Section~\ref{sec:singularity_threshold}. Since configurations must be stable to be feasible, I confirm that degenerate internal squeezing is stable below threshold in Appendix~\ref{app:dIS_stability}.
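To see the lossless threshold explicitly, the lossless noise response of the squeezed quadrature~\cite{korobkoQuantumExpanderGravitationalwave2019},
\begin{equation*}
S_X=1-\frac{4 \gamma^b_R \chi \Omega ^2}{\Omega ^2 (\gamma^b_R+\chi )^2+(\Omega ^2-\omega_s^2)^2},
\end{equation*}
evaluated at $\Omega=\omega_s$ reduces to $\left[(\gamma^b_R-\chi)/(\gamma^b_R+\chi)\right]^2$, which vanishes exactly when $\chi=\gamma^b_R$.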
% does not affect RP, do not make this mistake!
% Increasing the squeezer parameter $\chi$ further increases the squeezing of the noise and signal but it might not continue to improve the sensitivity, which I will cover shortly.
%\jam{(Explain the ``two regimes it can operate it I.e. broadening sensitivity (Korobko) or kHz sensitivity (Adya).'' -- VA)}
Degenerate internal squeezing can be operated in two regimes depending on the choice of the sloshing frequency ($\omega_s$) and the bandwidth of the signal-recycling cavity ($\gamma^b_R$)~\footnote{Not to be confused with the overall bandwidth of the signal response which is from the arm cavities.}: (1) broadband sensitivity when $\gamma^b_R$ is large (e.g.\ $L_\text{SRC}$ is short) and the sensitivity is improved typically from around 0.1 to 10~kHz~\cite{korobkoQuantumExpanderGravitationalwave2019} or (2) kilohertz sensitivity when $\gamma^b_R$ is small (e.g.\ $L_\text{SRC}$ is long) and the sensitivity is narrowly, but strongly, improved around $\omega_s$ (e.g.\ 5~kHz) by more than an order of magnitude at the peak as shown in Fig.~\ref{fig:dIS_sensitivity}~\cite{adyaQuantumEnhancedKHz2020}. I consider this latter regime because I am interested in kilohertz improvement.
% why improvement only around sloshing frequency
With degenerate internal squeezing, the squeezing of the shot noise and signal is localised to the sloshing frequency because of the \emph{resonance structure of the coupled cavity system}. At the sloshing frequency, energy is strongly coupled from the arm cavity into the signal-recycling cavity which becomes resonant~\footnote{The phase acquired upon reflecting off the input test mass depends on whether the arm cavity is resonant and means that the signal-recycling cavity can be chosen to be resonant at frequencies where the arm cavity is not resonant. At the sloshing frequency, the arm cavity is not resonant, as seen in the falling signal response in Fig.~\ref{fig:dIS_sensitivity}.}~\cite{korobkoTamingQuantumNoiseHow2020}. As the squeezer is only effective when the cavity is resonant, e.g.\ the squeezing drops off beyond $\gamma^b_R$ away from the peak in Fig.~\ref{fig:dOPO_variances}, the signal and noise are only squeezed around the sloshing frequency.\jam{(I do not understand this)} Away from the sloshing frequency, the cavity is not resonant and degenerate internal squeezing does not affect the sensitivity~\footnote{Assuming that the arm cavity loss is realistically small.}.
%\jam{(Which of these results are necessary to know for the results chapters, none of them?)}
% There is much behaviour to analyse for this configuration~\cite{korobkoQuantumExpanderGravitationalwave2019,adyaQuantumEnhancedKHz2020,korobkoTamingQuantumNoiseHow2020} that motivates how I analyse the configuration in my work and which are useful to compare to.
% To later compare to the behaviour of the configuration in my work\jam{(do I compare all of this behaviour or just the loss tolerance?)}, I will briefly discuss degenerate internal squeezing's (1) threshold, (2) stability, and (3) tolerance to optical loss.
% threshold, lossless threshold but leave lossy threshold to research chapters? ``To not conflate existing knowledge with my work, I leave the lossy threshold to Section~\ref{}.'' just state result and justify later?
% \begin{equation}
% \label{eq:dIS_lossless}
% (\text{S}_X)_{2,2}=1-\frac{4 \gamma^b_R \chi \Omega ^2}{\Omega ^2 (\gamma^b_R+\chi )^2+(\Omega ^2-\omega_s^2)^2}.
% \end{equation}
% Firstly, for the lossless noise response as shown in the top panel of Fig.~\ref{fig:dIS_sensitivity}~\cite{korobkoQuantumExpanderGravitationalwave2019}, the squeezed noise goes to zero at $\chi_\text{thr}=\gamma^b_R$ and $\Omega=\omega_s$\jam{(do I need to show this?)}, which defines threshold~\cite{}. In the anti-squeezed quadrature, the noise instead diverges at $\Omega=\omega_s$ at threshold. This behaviour is similar to the lossless degenerate OPO in Fig.~\ref{fig:dOPO_variances}, except that the peak occurs at the sloshing frequency instead of at DC. From the perspective of gains and losses inside the signal-recycling cavity, any light lost to the arms must return since the arm loss is zero and, therefore, the net loss from the cavity mode in the steady-state is still $\gamma^b_R$\jam{(is this argument correct?)}. In the lossy case, the situation is more complicated and the threshold is not quoted in the literature, which I will address in Section~\ref{sec:singularity_threshold}.
% \begin{figure}
% \centering
% % \includegraphics[width=\textwidth]{}
% \caption{\jam{(Make sure that x-axis is consistent with nIS plot. Is this plot necessary?)} Degenerate internal squeezing sensitivity versus quantum noise, varying the squeezer parameter up to threshold for squeezing and anti-squeezing and different detection losses\jam{(show for other losses as well?)}. The optimal value of the squeezer parameter is shown.}
% \label{fig:dIS_optimal_squeezing}
% \end{figure}
% pump phase, foreshadow that maximum of anti-squeezing is not necessarily the minimum of squeezing
% RP is not (anti)squeezed, look at the plots!
% A complication when the system is lossy is that maximising the anti-squeezed quadrature is not necessarily the same as minimising the squeezed quadrature, as shown in Fig.~\ref{fig:dIS_sensitivity}, the two move apart at high arm losses. This is a result of worsening the uncertainty product in the Heisenberg Uncertainty Principle.\jam{(But why does this happen?)}
% optimal squeezing curves against loss, in really high loss should antisqueeze
% difference to caves's amplifier
% \subsubsection{Stability}
% establish how stability is analysed
\subsection{Limitation: tolerance to optical loss}
% Problems with proposal -
\label{sec:dIS_optical_loss}
% \begin{figure}
% \centering
% \includegraphics[width=\textwidth]{dIS_noise_budget.pdf}
% \caption{\jam{(Cut all these noise budget figures?)} Degenerate internal squeezing breakdown of noise sources, showing the contribution to the total quantum noise response from each vacuum input. The radiation-pressure noise and the squeezing around $\omega_s$ (compared to the noise without squeezing which is not shown) appear in all of the interferometer noise sources and not in the detection loss. The squeezing affects the readout port vacuum more than the intra-cavity noises\jam{(why?)}. The detection loss dominates the other losses except below 10~Hz where the radiation-pressure noise is present and the signal loss dominates, however, except around the peak, the readout port vacuum dominates all the losses. However, Fig.~\ref{fig:dIS_loss_tolerance} provides a better understanding of the tolerance to losses because it also accounts for the signal response.}
% \label{fig:dIS_noise_budget}
% \end{figure}
%\jam{(``Focus on the science not the equations. Once you explain the behaviour, explain the effect of losses on this system.'' -- VA.)}
% intracavity loss behaves differently to dOPO
Degenerate internal squeezing has different tolerances to the three types of optical loss it experiences: detection loss in the readout, intra-cavity loss in the signal mode of the signal-recycling cavity, and intra-cavity loss in the arms. %, but the tolerance is similar for the two regimes of broadband and kilohertz improvement. %, where now the signal's tolerance must be considered.
% The squeezer squeezes the vacuum from the intra-cavity losses and the readout port~\cite{}. %, as shown in Fig.~\ref{fig:dIS_noise_budget}.
The general effects of these losses are shown in Fig.~\ref{fig:dIS_loss_tolerance}. Firstly, detection loss uniformly pulls the noise towards the vacuum and pulls the signal towards zero because it uniformly mixes in vacuum. Secondly, signal intra-cavity loss behaves differently to the degenerate OPO, as the noise response remains within the lossless envelope, the radiation-pressure noise is increased, and the signal and noise are worsened around the sloshing frequency\jam{(why doesn't the response broaden like before?)}. Finally, arm intra-cavity loss diminishes the squeezing of the noise, worsens the DC response to the signal, but improves the radiation-pressure noise\jam{(check this)}.
% and moves the peak frequency away from the sloshing frequency
%With arm loss, the signal response remains within the lossless signal envelope but worsens the DC response to the signal.
% \begin{figure}
% \centering
% % \includegraphics[width=\textwidth]{}
% \caption{\jam{(Remove this plot)} Degenerate internal squeezing shot noise response in the limit of large arm loss compared to the theoretical limiting degenerate OPO with fully reflective input test mass.}
% \label{fig:dIS_limit_dOPO}
% \end{figure}
% \subsubsection{Reduction to degenerate OPO}
% arm cavity loss gives reduction to dOPO
% The behaviour against different parameters will be similar to that seen for the degenerate OPO in Section~\ref{}.
% The behaviour in the high arm loss limit can be understood as the degenerate OPO limit of degenerate internal squeezing, i.e.\ when the arm cavity is removed. Formally taking the limit $\gamma_a\rightarrow\infty$ of the equations-of-motion in Ref.~\cite{korobkoQuantumExpanderGravitationalwave2019} shows that the system reduces to a degenerate OPO between the signal-recycling mirror and a fully reflective input test mass~\footnote{This is why the sensitivity away from the sloshing frequency is affected when the arm loss is high.}.\jam{(give evidence? check if equivalent to sending sloshing frequency to zero)} Although I initially expected the input test mass to instead become another loss port with its original transmissivity, this can be explained as the disappearance of the arm cavity mode altogether, vacuum or otherwise\jam{(check this)}. For $\gamma_a\rightarrow\infty$, the equation-of-motion for $\hat a$~\cite{korobkoQuantumExpanderGravitationalwave2019} becomes $\dot{\hat a}\approx -\gamma_a \hat a$ which quickly decays, and therefore any vacuum $\hat n^L_a$ cannot couple to $\hat b$, nor any of $\hat b$ couple out through $\omega_s$, because all terms involving $\hat a$ vanish~\footnote{$\hat a$ is implicitly the fluctuating component $\delta \hat a$ throughout this discussion.}.
% However, I suspect that this is a false consequence of the single-mode approximation and that if a ``transfer matrix'' approach~\cite{finesse,}~\footnote{Not to be confused with the transfer matrices describing the signal and noise responses.} was instead used where the fields at a point are propagated inside the cavities and the cavity modes, e.g.\ $\hat a$, are not explicit, the limit would instead be a degenerate OPO with added intra-cavity loss to account for the open input test mass port. I leave verifying this to future work\jam{(check?)}.
% But, since realistic arm losses for future detectors are below this high loss regime\jam{(quantify)}, this behaviour is not of concern for applications.
% This is confusing, I expected the initial test mass to become a loss port, but this can be understood as there no longer being an arm cavity mode, vacuum or otherwise, since it can not make a round-trip.
With losses included in the model, the \emph{optimal squeezing} can be below threshold~\cite{korobkoCompensatingQuantumDecoherenceTalk2021}. For example, once the noise is limited by detection loss, which is not squeezed by the internal squeezer, further squeezing will only decrease the signal and therefore worsen the sensitivity. Moreover, in the high loss regime, any amount of squeezing is detrimental and it is instead optimal to anti-squeeze internally~\cite{korobkoCompensatingQuantumDecoherenceTalk2021}. This improves sensitivity because the signal is anti-squeezed more than the noise, and also improves the tolerance to losses because they now decrease the signal and the noise since the noise is above the vacuum value~\footnote{This will be explained more in the next chapter.}. However, future detectors do not belong to this high loss regime. %, unlike using a Caves's amplifier.
% \begin{figure}
% \centering
% % \includegraphics[width=\textwidth]{}
% \caption{\jam{(Combine with Fig.~\ref{fig:dIS_loss_tolerance})} Degenerate internal squeezing sensitivity for realistic losses, using the same parameters as Fig.~\ref{fig:dIS_sensitivity} and a squeezer parameter close to threshold.\jam{(Show that detection loss dominates.)}}
% \label{fig:dIS_realistic_loss}
% \end{figure}
If the \emph{realistic losses} in Table~\ref{tab:dIS_parameters}~\footnote{What losses are realistic for future detectors is inexact given the unknown progress of future technology, but the literature suggests that these values, which might be better called the ``best-estimated future'' losses, are conservative~\cite{zhangBroadbandSignalRecycling2021,flaminio2010study}.} are assumed, then the sensitivity improvement significantly degrades to less than a factor of two at the sloshing frequency as shown in Fig.~\ref{fig:dIS_loss_tolerance} compared to the lossless case in Fig.~\ref{fig:dIS_sensitivity} that improved it by over an order of magnitude. For these realistic losses, detection loss is responsible for most of the degradation seen in Fig.~\ref{fig:dIS_loss_tolerance} since it dominates the noise apart from the readout port vacuum. %, as seen in Fig.~\ref{fig:dIS_noise_budget}\jam{(check)}.
% \subsection{Connection to nondegenerate internal squeezing}
% at high enough losses, it becomes optimal to instead anti-squeeze internally, this means that you might as well use nondegenerate internal squeezing because then you can anti-squeeze and potentially exploit the correlations using a combined readout
% conclusions about dIS?
The low tolerance to detection loss of degenerate internal squeezing motivates investigating other methods which might improve sensitivity more given the same losses.
%, such as internal anti-squeezing using nondegenerate internal squeezing~\footnote{Which should anti-squeeze by comparison to the nondegenerate OPO.}.\jam{(What is the optimal squeezing value given realistic losses? I need to rule out degenerate anti-squeezing.)} %This does not mean that degenerate internal squeezing is not useful, it is worth further investigation, especially in low loss applications~\cite{}, but I will consider the nondegenerate case and whether it fares better.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{dIS_loss_tolerance.pdf}
\caption{Degenerate internal squeezing sensitivity for realistic losses. The dashed curves show the effect on the sensitivity of increasing the loss from the realistic value in Table~\ref{tab:dIS_parameters} to show the tolerance to each loss. Increasing each of the three losses separately shows that realistic arm loss ($T_{l,a}$) is negligible (the cyan curve lies on top of the red curve), signal mode loss ($T_{l,b}$) decreases the peak sensitivity (i.e.\ diminishes the trough shown around 4~kHz but I will refer to this as a ``peak'' henceforth), and detection loss ($R_\text{PD}$) broadly decreases sensitivity. This means that the detection loss dominates the losses but the signal loss would matter around the peak if it was worse than the desired 1000~ppm. I use the parameters in Table~\ref{tab:dIS_parameters}.}
\label{fig:dIS_loss_tolerance}
\end{figure}
% Nondegenerate internal squeezing, where the internal squeezer is instead nondegenerate, has
% been proposed as an alternative to degenerate internal squeezing~\cite{yapadyaPersonalCommunication}, although a comprehensive analysis of nondegenerate internal squeezing is yet to be done~\cite{liBroadbandSensitivityImprovement2020}. Because nondegenerate squeezing results in two entangled photons with different frequencies, these photons will not interfere with each other in the same manner as the degenerate case. Without this interference, nondegenerate internal squeezing increases the signal and the noise instead of decreasing them like the degenerate case. Therefore, nondegenerate internal squeezing is predicted to be more resistant to photodetector loss since the signal amplitude is greater. This project aims to investigate the potential benefits of nondegenerate internal squeezing over degenerate internal squeezing.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Stable optomechanical filtering}
\label{sec:sWLC}
% the point of this section is to explain the existing design, what is wrong with it, and how it connects to nIS
\begin{figure}[ht]
\centering
\includegraphics[angle=-90,width=0.8\textwidth]{sWLC_config.pdf}
\caption{Stable optomechanical filtering configuration with a mechanical idler mode $\hat{c}_m$, e.g.\ a suspended mirror, at a mechanical resonance frequency of $\omega_m$ coupled to the signal-recycling cavity optical mode $\hat b$ at $\omega_0$ via radiation pressure. $\hat b$ and $\hat{c}_m$ are driven by a blue-detuned pump mode at $\omega_0+\omega_m$.}
\label{fig:sWLC_config}
\end{figure}
% \jam{(``No need to go into details about White light cavities - introduce it, mention the challenges for it, cite references.'' -- VBA)}
% Read liEnhancingInterferometerSensitivity2021 to check if this is still current.
% explain the design
Stable optomechanical filtering uses a modified signal-recycling cavity compared to the conventional detector shown in Fig.~\ref{fig:DRFPMI}~\cite{liBroadbandSensitivityImprovement2020}. Here, the signal-recycling cavity acts as an optomechanical filter cavity which couples the signal mode to the mechanical ``idler'' mode of a suspended mirror~\footnote{The exact position and type of the mechanical oscillator do not matter in this simplified model; compare Refs.~\cite{liBroadbandSensitivityImprovement2020,liEnhancingInterferometerSensitivity2021} which have different positions but the same Hamiltonian.}, as shown in Fig.~\ref{fig:sWLC_config}. The mechanical mode and signal mode are driven by a blue-detuned pump mode, and this configuration, where the signal mode is measured, is dynamically stable~\cite{liBroadbandSensitivityImprovement2020}~\footnote{The previous, unstable configuration measured the arm cavity mode instead~\cite{miaoEnhancingBandwidthGravitationalWave2015}.}.
% is it necessary to talk about unstable design? --> not the focus, just give it a mention, Kramers-Kronig relations?
% This configuration is dynamically stable~\cite{} and improved upon an unstable design~\cite{} by changing which mode was read out~\footnote{The unstable configuration read out the arm cavity mode directly instead of the signal-recycling cavity mode. The addition of a control system was required to stabilise the naturally unstable system~\cite{}.}. The stable design is more relevant to this thesis because its mode structure is more closely related to my work. %, which I will elaborate on shortly.
The filter cavity amplifies/suppresses certain frequencies given the choice of cavity parameters and can be designed to partially counter the arm cavity's resonance that decreases the signal response at kilohertz. %Although the signal-recycling cavity already filters the signal, e.g.\ the length of the cavity affects the signal transfer function's peak and bandwidth~\footnote{Given the role of $\omega_s$ and $\gamma^b_R$ as peak frequency and bandwidth of the peak, respectively.}, the optomechanical coupling changes the resonance behaviour and can be used to more selectively filter the signal~\cite{}. %I use the term in this context to be consistent with the literature~\cite{}. % waste of time explaining this?
% how does this configuration beat the Mizuno limit (four factors in the introduction)? --> cancels arm cavity resonance
% the lossless system does not affect the noise, therefore is not fundamentally squeezing, but the lossy system will antisqueeze % idler loss complicates this later
%, where the parameter of interest is the optomechanical coupling compared to the optical coupling between the arm and signal-recycling cavities at the input test mass. % Again, without increasing the circulating power in the arms.
This partially achieves the ``white-light cavity'' idea: to broaden the resonance by changing the filter cavity's phase response at each frequency to be opposite to that of the arm cavity~\cite{miaoEnhancingBandwidthGravitationalWave2015,WICHT1997431}.\jam{(Why does this avoid the Mizuno limit?)} % However, the literature has only considered the case without optical loss~\cite{,} and, in the following chapters, my work will suggest that the behaviour of the system is complicated when optical loss is introduced. %~\footnote{To preview the results, this is because the quantum noise becomes anti-squeezed and therefore the Mizuno limit is beaten by squeezing instead of/along with the countering-the-arm-resonance explanation above.}.\jam{(Should not talk about results here?)}
The behaviour of the configuration is more complicated when optical losses are introduced, but, for this section, it should just be considered as changing the signal response. % through the resonance structure of the interferometer.
% explain the results of the design
% I omit the model of this configuration here, because in the lossless case it is exactly the model in the next chapter, see Section~\ref{}.
I emphasise two aspects of this configuration: (1) its dependence on the optomechanical and optical coupling rates and (2) its vulnerability to mechanical loss. %, and (3) its connection to nondegenerate internal squeezing. Because of this last point,
% \begin{figure}
% \centering
% % \includegraphics[width=\textwidth]{}
% \caption{\jam{(Remove this plot since the results are compared to later. Need to show on-threshold behaviour. Is this figure redundant with the later comparison?)} Stable optomechanical filtering's sensitivity, showing the effect of mechanical loss, the dominant noise source. Optical loss is not included in this model. This figure was generated using the code from Ref.~\cite{} with permission from the authors~\cite{LiPersonalCommunication}\jam{(check this)}, the parameters used are\jam{(... fill this in)} and $T_\text{env}=4$~K and $Q_m=8\times10^9$.}
% \label{fig:sWLC_sensitivity}
% \end{figure}
% but I need to talk about their results? in particular, the exceptional point of PT-symmetric, stability, and sensitivity at threshold. maybe do show a lossless, shot-noise only model -- better yet, cite the results in their paper and reference the next section for further explanation?
% exceptional point, stability and PT symmetry
In the lossless case, comparing the coupling rates of the arm and the signal, and the signal and the mechanical idler, shows that when the two coupling rates are equal, the behaviour is exceptional.
Let the interaction Hamiltonian of the system be~\cite{liBroadbandSensitivityImprovement2020}~\footnote{This is similar to the Hamiltonian I will use in my work which I will detail in Chapter~\ref{chp:nIS_analytics}.}
\begin{equation}\label{eq:sWLC_HI}
\hat{H}_I=i\hbar\omega_s(\hat{a}\hat{b}^\dag-\hat{a}^\dag\hat{b})+i\hbar\chi_m(\hat{b}^\dag\hat{c}_m^\dag-\hat{b}\hat{c}_m).
\end{equation}
Here $\hat a$, $\hat b$, and $\omega_s$ have the same meaning as for degenerate internal squeezing, $\hat{c}_m$ annihilates the mechanical mode, and $\chi_m$ is the optomechanical coupling rate. When this coupling rate $\chi_m$ equals the sloshing frequency ($\omega_s$), the interaction Hamiltonian becomes invariant under the transformation $\hat a\mapsto\hat{c}_m^\dag, \hat{c}_m\mapsto\hat a^\dag$, which corresponds to the composition of a parity transformation, $\hat a\leftrightarrow \hat{c}_m$, and a time transformation, $\hat a\leftrightarrow \hat a^\dag,\hat {c}_m\leftrightarrow \hat {c}_m^\dag$, and leaves $\hat b$ invariant. This \emph{parity-time (PT) symmetry} causes other changes in the system,
% Although, the most notable feature of PT-symmetric systems is that quantum mechanics can\jam{(be reformulated to)} also allow non-Hermitian, PT-symmetric Hamiltonians~\cite{}. And since this system is still Hermitian, the importance of PT-symmetry is lessened.\jam{(What did Carl Bender say? --> Hermitiancy is more complicated and requires further examination, see Section~\ref{sec:future_work}.)}}.
namely, the lossless, PT-symmetric system is marginally stable, with one complex-$\Omega$ pole on the real axis; it is at an Exceptional Point of its eigenmodes, where two or more eigenvalues are degenerate; and the shot noise--limited, integrated sensitivity becomes unbounded~\cite{liBroadbandSensitivityImprovement2020}~\footnote{A review of PT-symmetry is given in Ref.~\cite{el2018non}. I leave checking that the PT-symmetry causes this sensitivity improvement to future work.}. With radiation pressure included in the model~\footnote{There is a complication with radiation pressure coupling the arm cavity mode to the test mass mechanical mode, as for PT-symmetry to be maintained the filter cavity mechanical mode must then be coupled to a back-action evasion mode with negative effective mass~\cite{liBroadbandSensitivityImprovement2020}, but I will not consider this for simplicity.}, the integrated sensitivity becomes bounded and, although the kilohertz sensitivity improves, the main improvement is from 100--1000~Hz. %, as shown for the lossless case\jam{in Fig.~\ref{fig:sWLC_sensitivity} (talk about results without figure)}. %This foreshadows an aspect of my work, that I will discuss later, about broadband versus kilohertz sensitivity improvement.
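To check the claimed invariance explicitly, applying $\hat a\mapsto\hat{c}_m^\dag$ and $\hat{c}_m\mapsto\hat a^\dag$ (and hence $\hat a^\dag\mapsto\hat{c}_m$ and $\hat{c}_m^\dag\mapsto\hat a$) to Eq.~\ref{eq:sWLC_HI}, while leaving $\hat b$ unchanged, maps the two terms of the interaction Hamiltonian into each other (using that operators of different modes commute),
\begin{equation*}
i\hbar\omega_s(\hat{a}\hat{b}^\dag-\hat{a}^\dag\hat{b})\mapsto i\hbar\omega_s(\hat{b}^\dag\hat{c}_m^\dag-\hat{b}\hat{c}_m), \qquad
i\hbar\chi_m(\hat{b}^\dag\hat{c}_m^\dag-\hat{b}\hat{c}_m)\mapsto i\hbar\chi_m(\hat{a}\hat{b}^\dag-\hat{a}^\dag\hat{b}),
\end{equation*}
so that $\hat{H}_I$ is unchanged precisely when $\chi_m=\omega_s$.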
% A recent proposal uses a stable optomechanical filter cavity to avoid this limit and increase high-frequency sensitivity without fully sacrificing low-frequency sensitivity nor increasing the power~\cite{liBroadbandSensitivityImprovement2020}.
% Stable optomechanical filtering consists of an auxiliary filter cavity inside the signal-recycling cavity. One of the filter cavity's mirrors is a mechanical oscillator, such as a suspended mirror, driven by a laser whose frequency is appropriate to excite the mechanical mode~\cite{liBroadbandSensitivityImprovement2020}. This design is dynamically stable unlike previous designs for optomechanical filter cavities~\cite{miaoEnhancingBandwidthGravitationalWave2015,pageEnhancedDetectionHigh2018,miaoDesignGravitationalWaveDetectors2018}. This system has a parity-time symmetry between the differential optical mode of the interferometer and the mechanical mode;
\subsection{Limitation: tolerance to mechanical loss}
% Problems with proposal -
\label{sec:sWLC_loss}
% thermal noise and mechanical quality factor
Stable optomechanical filtering could potentially improve the sensitivity of future detectors but addressing its vulnerability to \emph{mechanical loss} demands progress beyond current technology. Mechanical loss damps the mechanical mode due to the dissipation of energy into the thermal bath of the mass and its surroundings. This raises the temperature of the mass and increases the thermal noise, which becomes radiation pressure noise in the filter cavity mode, and then degrades sensitivity. The thermal noise from mechanical loss dominates the losses of stable optomechanical filtering~\cite{liBroadbandSensitivityImprovement2020}. The results in Ref.~\cite{liBroadbandSensitivityImprovement2020} assume the ratio of the environmental temperature $T_\text{env}$ to quality factor $Q_m$ to be small, i.e.\
% \begin{equation}\label{eq:sWLC_mechanical_loss}
$T_\text{env}/Q_m\leq \hbar \gamma_\text{single-cavity}/(8 k_B) \approx 6\times 10^{-10}~\text{K}$~\cite{miaoEnhancingBandwidthGravitationalWave2015}.
% \end{equation}
Here $\gamma_\text{single-cavity}$ is the bandwidth of the Fabry-Perot Michelson interferometer, i.e.\ without the signal-recycling cavity, and $k_B$ is the Boltzmann constant.
The quality factor required to satisfy this bound is $Q_m=8\times10^9$~\cite{miaoEnhancingBandwidthGravitationalWave2015} which is beyond that possible with current technology~\cite{masonetal2019,pageEnhancedDetectionHigh2018}. % and would require improvements, i.e.\ dilution factors, that have not been experimentally demonstrated to date~\cite{pageEnhancedDetectionHigh2018,,}.
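For example, at the cryogenic environmental temperature of $T_\text{env}=4$~K assumed in Refs.~\cite{liBroadbandSensitivityImprovement2020,miaoEnhancingBandwidthGravitationalWave2015}, this quality factor gives $T_\text{env}/Q_m=5\times10^{-10}~\text{K}$, which only just satisfies the bound above.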
Therefore, an all-optical alternative~\footnote{Since all other systems that use optomechanical filtering have the same requirement~\cite{miaoEnhancingBandwidthGravitationalWave2015}.} is appealing because the losses required might be more realistic; this is the focus of my work in the following chapters.
% For example, this is satisfied for the LIGO~Voyager parameter set\jam{(check this)} for $T_\text{env}=4$~K and $Q_m=8\times10^9$~\cite{liBroadbandSensitivityImprovement2020}. But this quality factor $Q_m$ is beyond that possible with current technology~\cite{pageEnhancedDetectionHigh2018,,} and if the mechanical loss is higher than the above bound, then the sensitivity significantly degrades\jam{(quantify)}. Therefore, for stable optomechanical filtering to be feasible, the mechanical loss must improve\jam{(by how much?)}, towards which there is research in the literature~\cite{pageEnhancedDetectionHigh2018,,}\jam{(what citations?)}.Alternatively, it could be made feasible by finding a configuration with the same mode structure but different losses, which I will discuss in the next chapter.
% I will not discus this configuration further because without optical and mechanical loss it is equivalent to the lossless model in the next chapter, which I will discuss later.
% optical loss --> bridge into next subsection?
% However, it requires cryogenic (around 4~K) environmental temperature and a higher mechanical quality factor than is currently possible. An all-optical alternative to this optomechanical proposal without these requirements is desirable.
% It is designed for implementation in later-generation detectors and assumes technological improvements in the near future in arm length, power, optical loss, Brownian noise, and, most stringently, in the thermal noise and quality factor of the mechanical oscillator. These assumptions are necessary to achieve the target sensitivity for astrophysical applications~\cite{miaoDesignGravitationalWaveDetectors2018}. By investigating nondegenerate internal squeezing, I aim to find more realistic requirements for a future detector.
\begin{comment}
\subsection{Connection to nondegenerate internal squeezing}
\label{sec:modal_equivalence}
\begin{figure}
\centering
% \includegraphics[width=\textwidth]{}
\caption{\jam{(Add idler readout)} Mode diagrams of the OPOs, degenerate internal squeezing, stable optomechanical filtering, and nondegenerate internal squeezing. The latter two configurations are modally the same but are optomechanical and all-optical, respectively, which means that their performance might be different given the different losses they encounter. The idler readout scheme and the unstable case of optomechanical filtering are also shown. The parallels between the degenerate OPO and degenerate internal squeezing, and the nondegenerate OPO and nondegenerate internal squeezing, are also shown.}
\label{fig:mode_diagram}
\end{figure}
\jam{(Break out this section into motivation for nIS either here or at the start of chapter 4 (which might be cleaner).)}
% However,
% all-optical alternative
Finally, there is an alternative route to progress, to replace the optomechanical interaction with an all-optical one and replace the mechanical loss with optical loss. This is to consider nondegenerate internal squeezing, where the internal squeezer squeezes signal and idler modes and the idler is not resonant in the arms~\footnote{So that the idler mode is not coupled to the arm cavity mode.}.
% although nIS is closer on first inspection to dIS, the underlying structure is closer to sWLC, but both motivate it.
% equivalent mode structures, just the different noise sources
Although it might seem closer to degenerate internal squeezing than stable optomechanical filtering, the underlying mode structure of nondegenerate internal squeezing is equivalent to the latter, by mapping the idler optical mode and squeezer parameter $\hat c, \chi$ to the mechanical mode and optomechanical coupling $\hat{c}_m, \chi_m$, respectively, in the Hamiltonian~\cite{}, as shown in Fig.~\ref{fig:mode_diagram}. Although this is only true in the case with no optical or mechanical loss, as thermal noise in $\hat{c}_m$ behaves differently to shot noise in $\hat c$~\cite{}\jam{(it does experimentally and the parameter regimes are very different in application, but the Langevin terms are the same)}, it is reasonable to predict that the lossy configurations behave similarly to each other because the abstract dynamics are the same. Therefore, nondegenerate internal squeezing might achieve the sensitivity improvement of stable optomechanical filtering but at more realistic optical loss than the mechanical loss required above.
\jam{(different experimental constraints on optical loss versus mechanical loss)}
\end{comment}
% the lossless system does not affect the noise, therefore is not fundamentally squeezing, but the lossy system will antisqueeze % idler loss complicates this later --> cover this in results if it matters
% replicating this [PT-]symmetry with an internal squeezer requires the squeezer to be nondegenerate to mimic the distinction between the filter cavity optical mode and the mechanical mode~\cite{liBroadbandSensitivityImprovement2020}. Stable optomechanical filtering improves high-frequency sensitivity by cancelling the effect of the resonance behaviour of the interferometer cavities.
% Nondegenerate internal squeezing is also motivated by a connection between it and the use of
% a stable optomechanical filter cavity to improve high-frequency sensitivity~\cite{yapadyaPersonalCommunication,liBroadbandSensitivityImprovement2020}. The Hamiltonians of the two systems are equivalent under some mapping of the creation and annihilation operators of certain optical fields to certain mechanical fields. This connection exploits the fact that the nondegenerate squeezer interacts with three distinct frequencies to introduce a symmetry into the all-optical system that the optomechanical system has. This means that using nondegenerate internal squeezing may achieve the benefits of a stable optomechanical filter cavity without the optomechanical drawbacks of requiring cryogenic (around 4~K~\cite{miaoEnhancingBandwidthGravitationalWave2015}) environmental temperature and high mechanical quality factor. Therefore, understanding stable optomechanical filtering should lead to a better understanding of what nondegenerate internal squeezing might achieve.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Chapter summary}
% there are solutions in the literature to improving kilohertz sensitivity, but they have their problems, and motivate considering a combined configuration
In this chapter, I have reviewed two configurations proposed to improve the kilohertz sensitivity of future gravitational-wave detectors: degenerate internal squeezing and stable optomechanical filtering.
The low tolerance to realistic losses limits the feasibility of these two configurations. % unless further technological progress is made.
% Firstly, I reviewed the literature of possible configurations to improve kilohertz sensitivity, from which I focussed on two configurations: degenerate internal squeezing and stable optomechanical filtering.
% Firstly, I examined degenerate internal squeezing and discussed its sensitivity, stability, and how it is limited by a low tolerance to optical loss. % and how, in the high loss regime, it becomes beneficial to anti-squeeze instead of squeeze internally. %, which suggests that nondegenerate internal squeezing might be more resilient to optical loss than degenerate internal squeezing.
% Secondly, I examined stable optomechanical filtering. % which deviates from all other configurations in this thesis by use of a mechanical oscillator instead of an optical squeezer, and avoids the Mizuno limits through partially cancelling the arm cavity resonance.
% I have also explained that PT-symmetry in the lossless limit of stable optomechanical filtering, at a particular value of the coupling rates, leads to an Exceptional Point and enhanced sensitivity, but that this sensitivity improvement is limited by requiring low mechanical loss.
% However, this optomechanical configuration is equivalent to the all-optical nondegenerate internal squeezing, and this equivalence might mean that nondegenerate internal squeezing can achieve the same sensitivity improvement but be more feasible due to the technological differences between optical and mechanical loss.
This motivates investigating configurations that are more resistant to loss. %; I will combine these two proposals to try and improve the tolerance to loss.
| {
"alphanum_fraction": 0.7840964461,
"avg_line_length": 167.8654353562,
"ext": "tex",
"hexsha": "cc49bf96f891ba96ab0c63af60bf3aa56478a2ca",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "570598b59ac8c70dee6387088698ed768a2d1247",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "daccordeon/nondegDog",
"max_forks_repo_path": "thesis/existing_proposals.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "570598b59ac8c70dee6387088698ed768a2d1247",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "daccordeon/nondegDog",
"max_issues_repo_path": "thesis/existing_proposals.tex",
"max_line_length": 1308,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "570598b59ac8c70dee6387088698ed768a2d1247",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "daccordeon/nondegDog",
"max_stars_repo_path": "thesis/existing_proposals.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-24T23:42:29.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-02-22T05:25:14.000Z",
"num_tokens": 16016,
"size": 63621
} |
\nonstopmode
\documentclass[a4paper]{article}
\usepackage{times}
\usepackage{xspace}
\usepackage{ifthen}
\usepackage{amsmath}
\usepackage{amssymb}
%\usepackage{amsthm}
\usepackage{stmaryrd}
%\usepackage[all]{xy}
% More space for floats
\renewcommand{\topfraction}{0.99}
\renewcommand{\bottomfraction}{0.99}
\renewcommand{\floatpagefraction}{0.90}
\renewcommand{\textfraction}{0.1}
\ifdefined\LONGVERSION
\relax \else
% short version:
\newcommand{\LONGVERSION}[1]{}
\newcommand{\SHORTVERSION}[1]{#1}
% % long version:
% \newcommand{\LONGVERSION}[1]{#1}
% \newcommand{\SHORTVERSION}[1]{}
% \newcommand{\SHORTVERSION}[1]{BEGIN~SHORT\ #1 \ END~SHORT}
\fi
\newcommand{\LONGSHORT}[2]{\LONGVERSION{#1}\SHORTVERSION{#2}}
\newtheorem{theorem}{Theorem}
%\newtheorem*{theorem*}{Theorem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{example}[theorem]{Example}
%\newcommand{\qed}{\hfill\ensuremath{\Box}}
% from llncs.cls
%\def\squareforqed{\hbox{\rlap{$\sqcap$}$\sqcup$}}
\def\squareforqed{\ensuremath{\Box}}
\def\qed{\ifmmode\squareforqed\else{\unskip\nobreak\hfil
\penalty50\hskip1em\null\nobreak\hfil\squareforqed
\parfillskip=0pt\finalhyphendemerits=0\endgraf}\fi}
\newenvironment{proof}[1][]{\noindent\ifthenelse{\equal{#1}{}}{{\it
Proof.}}{{\it Proof #1.}}\hspace{2ex}}{\qed\bigskip}
\newenvironment{proof*}[1][]{\noindent\ifthenelse{\equal{#1}{}}{{\it
Proof.}}{{\it Proof #1.}}\hspace{2ex}}{\bigskip}
%\input{prooftree}
\newcommand{\inst}{}
\newcommand{\institute}[1]{}
\input{macros}
%\renewenvironment{gather*}{\begin{displaymath}\begin{array}{c}}{%
% \end{array}\end{displaymath}}
\newcommand{\OTm}{\mathsf{OTm}}
\newcommand{\OSubst}{\mathsf{OSubst}}
\newcommand{\oann}[1]{{}^{#1}\kern-0.15ex}
\newcommand{\ovar}{\mathord{\bullet}}
\newcommand{\oapp}[1]{\,\oann{#1}}
\newcommand{\olam}[1]{\lambda^{#1}.\,}
\newcommand{\opi}[2]{\Pi\,#1\,^{#2}}
%\newcommand{\oprint}[1]{\langle#1\rangle}
%\newcommand{\oparse}[3]{#1 \leadsto \oprint{#2} #3}
\newcommand{\oprint}[2]{{}^{\lhd}\esubst{#1}{#2}}
\newcommand{\oprintp}[2]{\oprint{(#1)}{#2}}
\newcommand{\oparse}[1]{#1^{\rhd}}
\newcommand{\lrhd}{\mathrel{\mathord{\lhd}\mathord{\rhd}}}
\newcommand{\osyn}[3]{#1 \lrhd \esubst{#2}{#3}}
\newcommand{\osynp}[3]{\osyn{#1}{(#2)}{#3}}
\renewcommand{\esubst}[2]{#1[#2]}
\newcommand{\esubstp}[2]{\esubst{(#1)}{#2}}
\renewcommand{\cempty}{\varepsilon}
\renewcommand{\sempty}{\varepsilon}
\newcommand{\oclos}[2]{#1[#2]}
\newcommand{\oclosp}[2]{(#1)[#2]}
\newcommand{\ochk}[4]{#1 \der #2 \jchk \oclos{#4}{#3}}
\newcommand{\oinf}[4]{#1 \der #2 \jinf \oclos{#4}{#3}}
\newcommand{\Head}{\mathsf{Head}}
\newcommand{\appa}{\mathrel{@}}
\newcommand{\vapp}[1]{\mathrel{@^{#1}}}
\renewcommand{\eval}[2]{\dent{#1}{}[#2]}
\newcommand{\evalid}[2]{\dent{#1}{}[\ovar^{#2}]}
\renewcommand{\funT}[2]{\Pi\,#1\of#2.\,}
\title{A Nameless Term Representation Based on Ordered Logic}
\author{Andreas Abel, Nicolai Kraus}
\institute{
Department of Computer Science \\
Ludwig-Maximilians-University Munich \\
\email{[email protected]}
}
\date{October 2010}
\begin{document}
\maketitle
\begin{abstract}
We introduce a new nameless representation of lambda terms based on
ordered logic. Sharing of variables is explicit and happens at
lambda-abstraction. Our term representation enables access to the
set of free variables in constant time. To maintain the free
variable set we have a logarithmic overhead in the decomposition of
applications and abstractions.
We get evaluation using
explicit substitutions without memory leaks.
\end{abstract}
\section{Syntax}
\paradot{Ordered syntax}
\[
\begin{array}{lllrl@{\qquad}l}
\OTm & \ni & t,u & ::= & \ovar & \mbox{variable (nameless)} \\
&&& \mid & t \oapp k u & \mbox{application} \\
&&& \mid & \olam {\vec k} t & \mbox{abstraction} \\
\end{array}
\]
Contexts $\Gamma,\Delta$ are lists of types. We write $L^n$ to
indicate that list $L$ has length $n$.
\paradot{Simple typing} Simple types follow the grammar $A, B ::= X
\mid A \to B$. The typing judgement $\Gamma \der t : A$ is inductively
defined by the following rules.
\begin{gather*}
\ru{}{A \der \ovar : A}
\qquad
\ru{\Gamma \der t : A \to B \qquad
\Delta \der u : A
}{\Gamma,\Delta^k \der t \oapp k u : B}
\qquad
\ru{\Gamma_0,A,\Gamma_1,\dots,A,\Gamma_n \der t : B
}{\Gamma_0^{k_0},\Gamma_1^{k_1},\dots,\Gamma_n^{k_n} \der
\olam {\vec k} t : A \to B}
\end{gather*}
The typing rules for variable and application are linear, but the
abstraction rule has explicit sharing built in. Explicit sharing is,
on a global scale, as powerful as structural rules for weakening,
contraction, and exchange.
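For example, the named term $\lambda x.\,f\,x\,x$, in which the bound variable is shared between two occurrences, corresponds to the ordered term $\olam{(1,0,0)}((\ovar \oapp 1 \ovar) \oapp 1 \ovar)$: the block sizes $(1,0,0)$ record that one context entry (for $f$) precedes the first occurrence of the bound variable and that nothing separates or follows the two occurrences. Using the variable rule three times, the application rule twice, and the abstraction rule once, it is typed as
\[
  A \to A \to B \der \olam{(1,0,0)}((\ovar \oapp 1 \ovar) \oapp 1 \ovar) : A \to B .
\]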
\paradot{Printing} We define a conversion $\oprint t {\vec x}$
from ordered syntax $t$ to named syntax $M$.
\[
\begin{array}{lll}
\oprint \ovar x
& = & x \\
\oprintp {t \oapp k u} {\vec x,\vec y^k}
& = & \oprint t {\vec x} \, \oprint u {\vec y} \\
\oprintp {\olam {\vec k} t}
{\vec x_0^{k_0},\vec x_1^{k_1},\dots,\vec x_n^{k_n}}
& = & \lambda x.\, \oprint t {\vec x_0,x,\vec x_1,\dots,x,\vec x_n} \\
\end{array}
\]
\para{Parsing} of named syntax into ordered syntax is defined via the
judgement $\osyn M t {\vec x}$, given inductively by the following rules.
\begin{gather*}
\ru{}{\osyn x \ovar x}
\qquad
\ru{\osyn M t {\vec x} \qquad
\osyn N u {\vec y^k}
}{\osynp {M\,N} {t \oapp k u} {\vec x, \vec y}}
\\[2ex]
\rux{\osyn M t {\vec x_0^{k_0},x,\vec x_1^{k_1},\dots,x,\vec x_n^{k_n}}
}{\osynp {\lambda x.M} {\olam {\vec k} t}
{\vec x_0,\vec x_1,\dots,\vec x_n}
}{x \not\in \vec x_0,\dots,\vec x_n}
\end{gather*}
Parsing and printing are inverses of each other.
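The following Haskell fragment is an illustrative sketch (not part of the formal development; bound-variable freshness is handled naively by a counter) of the ordered term grammar and the printing function:
\begin{verbatim}
import Data.List (intercalate)

-- Ordered terms and named terms.
data OTm = OVar               -- nameless variable (uses one context entry)
         | OApp OTm Int OTm   -- t @^k u: last k context entries go to u
         | OLam [Int] OTm     -- lambda^(k0,...,kn) t: sharing block sizes
  deriving Show

data Tm = Var String | App Tm Tm | Lam String Tm
  deriving Show

-- Split a list into consecutive blocks of the given sizes.
blocks :: [Int] -> [a] -> [[a]]
blocks []     _  = []
blocks (k:ks) xs = let (b, rest) = splitAt k xs in b : blocks ks rest

-- oprint n t xs: print t against the occurrence context xs;
-- n is a counter used to generate fresh bound-variable names.
oprint :: Int -> OTm -> [String] -> Tm
oprint _ OVar [x]        = Var x
oprint n (OApp t k u) xs = App (oprint n t ys) (oprint n u zs)
  where (ys, zs) = splitAt (length xs - k) xs
oprint n (OLam ks t) xs  = Lam x (oprint (n + 1) t ctx)
  where x   = "x" ++ show n
        ctx = intercalate [x] (blocks ks xs)  -- x0s,x,x1s,x,...,x,xns
\end{verbatim}
For instance, \verb|oprint 0 (OLam [1,0,0] (OApp (OApp OVar 1 OVar) 1 OVar)) ["f"]| yields (a renaming of) $\lambda x.\,f\,x\,x$ from the example above.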
\section{Explicit Substitutions}
\para{Substitutions} $\sigma$ are lists of pairs $\oann k t$ of a term $t$ together
with a natural number $k$. Substitutions are (linearly) typed by the judgement
$\Gamma \der \sigma : \Delta$, which is given inductively by the
following rules:
% \[
% \begin{array}{lllrl@{\qquad}l}
% \Subst & \ni & \sigma,\tau & ::= &
% \end{array}
% \]
\begin{gather*}
\ru{
}{\cempty \der \sempty : \cempty}
\qquad
\ru{\Gamma_0 \der \sigma : \Delta \qquad
\Gamma^k \der t : A
}{\Gamma_0,\Gamma \der \sigma, \oann k t : \Delta,A}
\end{gather*}
We extend the term grammar by $\esubst t \sigma$, an explicit substitution of
$\sigma$ in term $t$, with the usual typing rule:
\begin{gather*}
\ru{\Gamma \der \sigma : \Delta \qquad
\Delta \der t : A
}{\Gamma \der \esubst t \sigma : A}
\end{gather*}
The identity substitution $\sid^n$ for context $\Gamma^n$ is the list
$\oann 1 \ovar,\dots,\oann 1 \ovar$ of length $n$. Clearly, $\Gamma^n \der \sid^n : \Gamma^n$.
\paradot{Reduction}
\[
\begin{array}{lll}
\esubstp {\olam {\vec k} t} {\sigma_0^{k_0},\dots,\sigma_n^{k_n}} \oapp k u
& \red & \esubst t {\sigma_0,\oann k u,\sigma_1, \dots, \oann k u,\sigma_n}
\\[1ex]
\esubst \ovar {\oann k u}
& \red & u
\\
\esubstp {t \oapp k u} {\sigma,\oann l {\tau^k}}
& \red & \esubst t \sigma \oapp l \esubst u \tau
\\
\esubstp {\olam {\vec k} t} {\oann{l_0}{\sigma_0^{k_0}},\dots,\oann{l_n}{\sigma_n^{k_n}}}
& \red & \olam {\vec l} \esubst t {\sigma_0,\oann 1
\ovar,\sigma_1,\dots,\oann 1 \ovar,\sigma_n}
\end{array}
\]
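For example, the ordered identity function $\olam{0,0}\ovar$ under the empty substitution, applied to an argument $u$ whose context has length $k$, reduces in two steps:
\[
  \esubstp{\olam{0,0}\ovar}{\sempty} \oapp k u
  \;\red\; \esubst \ovar {\oann k u}
  \;\red\; u .
\]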
\section{Dependent Types}
\paradot{Named syntax}
\[
\begin{array}{lllrl@{\qquad}l}
\Sort & \ni & s & ::= & \ttype \mid \tkind & \mbox{sort} \\[1ex]
\Const & \ni & c &&& \mbox{constant} \\
\Tm & \ni & M,N,A,B & ::= & s \mid c \mid x \mid \lam x A M
\mid \lambda x M \mid M \, N \mid \Pi\,A\,\lambda x B
& \mbox{named term}\\
\end{array}
\]
\paradot{Ordered syntax} O-terms are given by the following grammar:
\[
\begin{array}{lllrl@{\qquad}l}
\OTm & \ni & t,u,T,U & ::= & \ovar & \mbox{variable (nameless)} \\
&&& \mid & t \oapp k u & \mbox{application} \\
&&& \mid & \olam {\vec k} t & \mbox{abstraction} \\
&&& \mid & c \mid s & \mbox{constant, sort} \\
&&& \mid & \opi U k T & \mbox{function type} \\
&&& \mid & \esubst t \sigma & \mbox{explicit substitution}
\\[1ex]
\OSubst & \ni & \sigma,\tau & ::= & \vec t & \mbox{sequence of terms}\\
\end{array}
\]
\para{Meaning} of ordered syntax is defined in terms of named
syntax via the
judgement $\osyn M t {\vec x}$, given inductively by the following rules.
\begin{gather*}
\ru{}{\osyn s s \epsilon}
\qquad
\ru{}{\osyn c c \epsilon}
\qquad
\ru{}{\osyn x \ovar x}
\qquad
\ru{\osyn M t {\vec x} \qquad
\osyn N u {\vec y^k}
}{\osynp {M\,N} {t \oapp k u} {\vec x, \vec y}}
\\[2ex]
\rux{\osyn M t {\vec x_0^{k_0},x,\vec x_1^{k_1},\dots,x,\vec x_n^{k_n}}
}{\osynp {\lambda x.M} {\olam {\vec k} t}
{\vec x_0,\vec x_1,\dots,\vec x_n}
}{x \not\in \vec x_0,\dots,\vec x_n}
\\[2ex]
\rux{\osyn A U {\vec y} \qquad
\osyn B T {\vec x_0^{k_0},x,\vec x_1^{k_1},\dots,x,\vec x_n^{k_n}}
}{\osynp {\funT x A B} {\opi U k {\olam {\vec k} T}}
{\vec y,\vec x_0,\vec x_1,\dots,\vec x_n}
}{x \not\in \vec x_0,\dots,\vec x_n;
k = \sum_{i=0}^n k_i}
\end{gather*}
Parsing and printing are inverses of each other.
\subsection{Values without Abstraction}
\para{Values} are o-closed o-terms in weak head normal form with named
free variables.
\[
\begin{array}{lllrl@{\qquad}l}
\Head & \ni & h & ::= & x \mid c & \mbox{head} \\[1ex]
\Val & \ni & f,v,w,V,W & ::= & h \, {\vec v} & \mbox{neutral
value} \\
&&& \mid & \esubst{(\olam {\vec k} t)}{\rho} & \mbox{function closure} \\
&&& \mid & s & \mbox{sort} \\
&&& \mid & \piT V W & \mbox{function type} \\[1ex]
\Env & \ni & \rho & ::= & \vec v & \mbox{environment} \\[1ex]
C\Val& \ni & P,Q & ::= & \esubst V {\vec x} & \mbox{name-closed value} \\
\end{array}
\]
\para{Application and evaluation} $f \appa v$ and $\eval t \rho$.
\[
\begin{array}{lll}
h \,{\vec v} \appa v
& = & h \, {(\vec v,v)} \\
\esubst{(\olam {\vec k} t)}{\rho_0^{k_0},\dots,\rho_n^{k_n}} \appa v
& = & \eval t
{\rho_0^{k_0},v,\dots,v,\rho_n^{k_n}}
\\[1ex]
\eval c {\sempty} & = & c \\
\eval \ovar {v} & = & v \\
\eval {\olam{\vec k}t} \rho & = & \esubst {(\olam{\vec k}t)} \rho
\\
\eval {t \oapp k u} {v_n,\dots,v_{k+1},
v_k,\dots,v_1}
& = & \eval t {v_n,\dots,v_{k+1}} \appa \eval u {v_k,\dots,v_1}
\\
\eval s \sempty & = & s \\
\eval {\opi U k T} {v_n,\dots,v_{k+1},
v_k,\dots,v_1}
& = & \piT {\eval U {v_n,\dots,v_{k+1}}} {\eval T {v_k,\dots,v_1}}
\\
\end{array}
\]
\paradot{Contexts}
\[
\begin{array}{lllrl@{\qquad}l}
&& \Gamma,\Delta & ::= & \cempty \mid \Delta, x \of \oclos{T}{\vec y}
& \mbox{typing context } (\vec y \subseteq \dom(\Delta))
\\
&& \vec x &&& \mbox{occurrence context} \\
\end{array}
\]
\paradot{Judgements}
\[
\begin{array}{ll}
% \ochk \Delta M {\vec x} W & \mbox{term $M$ checks against type $W$}
% \\
% \oinf \Delta M {\vec x} W & \mbox{term $M$ has inferred type $W$}
% \\[1ex]
\Delta \der M \jchk P & \mbox{term $M$ checks against type $P$}
\\
\Delta \der M \jinf P & \mbox{term $M$ has inferred type $P$}
\\[1ex]
\Delta \der P = Q & \mbox{types $P$ and $Q$ are equal}
\\
\Delta \der p = p' \jchk P & \mbox{values $p$ and $p'$ are equal at
type $P$}
\\
\Delta \der p = p' \jinf P & \mbox{neutral values $p$ and $p'$ are
equal, inferring type $P$}
\\
\end{array}
\]
Rules for inference:
\begin{gather*}
\ru{}{\Delta \der x \jinf \Delta(x)}
\\[2ex]
\ru{\oinf \Delta M {\vec x,{\vec y}^n} {(\opi V n W)} \qquad
\ochk \Delta N {\vec x} V \qquad
\osyn N u {{\vec z}^m} \qquad
\eval u {\ovar^m} = v
}{\oinf \Delta {M\,N} {\vec y,\vec z} {(W \vapp m v)}}
\\[2ex]
\rux{\ochk \Delta A {\sempty} \ttype \qquad
\osyn A U {{\vec y}^m} \qquad
\evalid U m = V \\
\oinf {\Delta, x \of \oclos V {\vec y}} M
{{\vec x_0}^{k_0},x,\dots,x,{\vec x_n}^{k_n}} W
}{\oinf \Delta {\lambda x \of A.\,M} {\vec y,\vec x_0,\dots,\vec x_n}
{(\opi V {k} {\olam {\vec k} W})}
}{k = \sum_{i=0}^n k_i}
\\[2ex]
\ru{}{\oinf \Delta \ttype {\sempty} \tkind}
\\[2ex]
\ru{\ochk \Delta A {\sempty} \ttype \qquad
\osyn A U {{\vec y}^m} \qquad
\evalid U m = V \qquad
\oinf {\Delta, x \of \oclos V {\vec y}} B {} s
}{\oinf \Delta {\Pi x \of A.\,B} {} s}
\end{gather*}
Rules for checking:
\begin{gather*}
% \ru{\oinf \Delta M {\vec x} W \qquad
% \Delta \der \oclos W {\vec x} = \oclos V {\vec y}
% }{\ochk \Delta M {\vec y} V}
% \qquad
\ru{\Delta \der M \jinf P \qquad
\Delta \der P = Q
}{\Delta \der M \jchk Q}
\qquad
\ru{\ochk {\Delta, x \of \oclos V {\vec y}} M {\vec z,x} {(W \vapp 1 \ovar)}
}{\ochk \Delta {\lambda x.M} {\vec y,{\vec z}^n} {(\opi V n W)}}
% \ru{\ochk {\Delta, x \of \oclos V {\vec y}} M {}
% }{\ochk \Delta {\lambda x.M} {\vec y,{\vec z_i}^{k_i}} {(\opi V n
% (\olam{\vec k} W)}
\end{gather*}
Rules for equality (inference mode):
\begin{gather*}
\ru{}{\Delta \der \oclos c \sempty = \oclos c \sempty \jinf \Sigma(c)}
\qquad
\ru{}{\Delta \der \oclos \ovar x = \oclos \ovar x \jinf \Delta(x)}
\\[2ex]
\ru{\Delta \der \oclos f {\vec x} = \oclos {f'}{\vec x'} \jinf
\oclosp{\opi V l F}{\vec z,{\vec {z'}}^l} \qquad
\Delta \der \oclos v {\vec y} = \oclos {v'}{\vec y'} \jchk
\oclos V {\vec z}
}{\Delta \der \oclosp {f \oapp k v}{\vec x,\vec y^k}
= \oclosp {f' \oapp {k'} v'}{\vec x',{\vec {y'}}^{k'}}
\jinf \oclosp{F \vapp{k} v}{\vec z',\vec y}
}
\end{gather*}
Rules for equality (checking mode):
\begin{gather*}
\ru{\Delta, x \of \oclos V {\vec y} \der
\oclosp {f \vapp 1 \ovar}{\vec x ,x} =
\oclosp {f' \vapp 1 \ovar}{\vec x',x} \jchk
\oclosp {F \vapp 1 \ovar}{\vec z, x}
}{\Delta \der \oclos f {\vec x} = \oclos {f'} {\vec x'}
\jchk \oclosp {\opi V k F}{\vec y,\vec z^k}}
\\[2ex]
\rux{\Delta \der p = p' \jinf P
}{\Delta \der p = p' \jchk Q
}{Q \mbox{ not a function type}}
\end{gather*}
Rules for equality (type mode):
\begin{gather*}
\ru{}{\Delta \der \oclos s \sempty = \oclos s \sempty}
\qquad
\ru{\Delta \der P = P' \jinf \oclos \ttype \sempty
}{\Delta \der P = P'}
\\[2ex]
\ru{\Delta \der \oclos V {\vec x} = \oclos {V'}{\vec x'} \qquad
\Delta, x\of\oclos V {\vec x} \der
\oclosp{F \vapp 1 \ovar}{\vec y,x} =
\oclosp{F' \vapp 1 \ovar}{\vec y',x}
}{\Delta \der \oclosp {\opi {V }{k }{F }}{\vec x ,\vec y^k}
= \oclosp {\opi {V'}{k'}{F'}}{\vec x',\vec {y'}^{k'}}
}
\end{gather*}
\end{document}
| {
"alphanum_fraction": 0.5924038592,
"avg_line_length": 33.1486486486,
"ext": "tex",
"hexsha": "e89df9286d332d269bcfed617d21353bf7760a73",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2019-04-22T16:23:59.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-10-01T10:55:18.000Z",
"max_forks_repo_head_hexsha": "e8f19dbec8f9c5db303ec2e711e325644d1784b0",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "andreasabel/helf",
"max_forks_repo_path": "notes/ordered.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e8f19dbec8f9c5db303ec2e711e325644d1784b0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "andreasabel/helf",
"max_issues_repo_path": "notes/ordered.tex",
"max_line_length": 91,
"max_stars_count": 21,
"max_stars_repo_head_hexsha": "e8f19dbec8f9c5db303ec2e711e325644d1784b0",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "andreasabel/helf",
"max_stars_repo_path": "notes/ordered.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-06T12:15:40.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-09-15T23:21:21.000Z",
"num_tokens": 5868,
"size": 14718
} |
\section{Conclusions}
\begin{frame}
\frametitle{Conclusions}
The static type and effect system solves the following challenges:
\begin{itemize}
\item dynamic lock creation
\item passing of lock references
\item aliasing of lock references
\end{itemize}
Assuring the absence of lock errors is thus essentially a sequential problem, since interference can be ignored (a parallel program can be dealt with compositionally).
\end{frame}
\begin{frame}
\frametitle{Conclusions}
The main assumption restricts passing lock references via instance fields.
\\ \newline
There are basically only two possible ways to hand over the identity of a lock:
\begin{itemize}
\item via the thread constructor
\item via an instance field
\end{itemize}
\end{frame}
| {
"alphanum_fraction": 0.7752956636,
"avg_line_length": 27.1785714286,
"ext": "tex",
"hexsha": "9dd585c81cadec88b23ee8c146b38fd9b7fe8704",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b438fe5f02ef543a50d549e53450e22e6463e69f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "alexandrustoica/university.ubb.master",
"max_forks_repo_path": "Methodologies Software Processes/Assignments/Safe Locking for Multi-Threaded Java Presentation/slides/conclusions.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b438fe5f02ef543a50d549e53450e22e6463e69f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "alexandrustoica/university.ubb.master",
"max_issues_repo_path": "Methodologies Software Processes/Assignments/Safe Locking for Multi-Threaded Java Presentation/slides/conclusions.tex",
"max_line_length": 158,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "b438fe5f02ef543a50d549e53450e22e6463e69f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "alexandrustoica/university.ubb.master",
"max_stars_repo_path": "Methodologies Software Processes/Assignments/Safe Locking for Multi-Threaded Java Presentation/slides/conclusions.tex",
"max_stars_repo_stars_event_max_datetime": "2021-01-28T18:25:57.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-01-28T18:25:57.000Z",
"num_tokens": 178,
"size": 761
} |
\input{"preamble.tex"}
\addbibresource{UGA\_Algebra\_Qual\_Solutions.bib}
\let\Begin\begin
\let\End\end
\newcommand\wrapenv[1]{#1}
\makeatletter
\def\ScaleWidthIfNeeded{%
\ifdim\Gin@nat@width>\linewidth
\linewidth
\else
\Gin@nat@width
\fi
}
\def\ScaleHeightIfNeeded{%
\ifdim\Gin@nat@height>0.9\textheight
0.9\textheight
\else
\Gin@nat@width
\fi
}
\makeatother
\setkeys{Gin}{width=\ScaleWidthIfNeeded,height=\ScaleHeightIfNeeded,keepaspectratio}%
\title{
\textbf{
UGA Algebra Qualifying Exam Solutions (Spring 2011 -- Spring 2021)
}
}
\begin{document}
\date{}
\maketitle
\newpage
% Note: addsec only in KomaScript
\addsec{Table of Contents}
\tableofcontents
\newpage
\hypertarget{preface}{%
\section{Preface}\label{preface}}
I'd like to extend my gratitude to the following people for helping
supply solutions and proofs:
\begin{itemize}
\tightlist
\item
Paco Adajar
\item
Swaroop Hegde
\end{itemize}
Many other solutions contain input and ideas from other graduate
students and faculty members at UGA, along with questions and answers
posted on Math Stack Exchange or Math Overflow.
\hypertarget{group-theory-general}{%
\section{Group Theory: General}\label{group-theory-general}}
\hypertarget{cosets}{%
\subsection{Cosets}\label{cosets}}
\hypertarget{spring-2020-2-done}{%
\subsubsection{\texorpdfstring{Spring 2020 \#2
\(\done\)}{Spring 2020 \#2 \textbackslash done}}\label{spring-2020-2-done}}
Let \(H\) be a normal subgroup of a finite group \(G\) where the order
of \(H\) and the index of \(H\) in \(G\) are relatively prime. Prove
that no other subgroup of \(G\) has the same order as \(H\).
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
  Division algorithm / Bézout: \(\gcd(a,b)= 1\implies as+bt =1\) for some \(s, t\).
\item
  Coset containment trick: \(X\subseteq N \iff xN = N\) for all \(x\in X\).
\end{itemize}
\end{concept}
\begin{strategy}
Recognize that it suffices to show \(hN = N\). Context cue: coprimality
hints at division algorithm. Descend to quotient so you can leverage
both the order of \(h\) \emph{and} the order of cosets simultaneously.
\end{strategy}
\begin{solution}
\envlist
\begin{itemize}
\tightlist
\item
For ease of notation, replace \(H\) in the problem with \(N\) so we
remember which one is normal.
\item
Write \(n\coloneqq\# N\) and \(m \coloneqq[G:N] = \#G/N\), where the
quotient makes sense since \(N\) is normal.
\item
Let \(H \leq G\) with \(\# H = n\), we'll show \(H=N\).
\begin{itemize}
\tightlist
\item
Since \(\# H = \# N\) it suffices to show \(H \subseteq N\).
\item
It further suffices to show \(hN = N\) for all \(h\in H\).
\end{itemize}
\item
Noting \(\gcd(m, n)=1\), use the division algorithm to write
\(1 = ns + mt\) for some \(s,t\in {\mathbb{Z}}\).
\item
The result follows from a computation:
\begin{align*}
hN
&= h^1 N \\
&= h^{ns + mt}N \\
&= h^{ns} N \cdot h^{mt}N \\
&= \qty{h^n N}^s \cdot \qty{h^t N}^m \\
&= (eN)^s \cdot N \\
&= N
,\end{align*}
\begin{itemize}
\tightlist
\item
We've used that \(h\in H \implies o(h) \divides \# H = n\) by
Lagrange, so \(h^n = e\).
\item
    We've also used that \(\# G/N = m\), so \((xN)^m = N\) for any
    \(xN\in G/N\).
\end{itemize}
\end{itemize}
\end{solution}
\hypertarget{fall-2014-6-done}{%
\subsubsection{\texorpdfstring{Fall 2014 \#6
\(\done\)}{Fall 2014 \#6 \textbackslash done}}\label{fall-2014-6-done}}
Let \(G\) be a group and \(H, K < G\) be subgroups of finite index. Show
that
\begin{align*}
[G: H\cap K] \leq [G: H] ~ [G:K]
.\end{align*}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
For \(H, K\leq G\), intersection is again a subgroup of everything:
\(H\cap K \leq H, K, G\) by the one-step subgroup test.
\item
Counting in towers: \(A\leq B \leq C \implies [C:A] = [C:B][B:A]\).
\item
  Fundamental theorem of cosets: \(xH = yH \iff x^{-1}y\in H\).
\item
Common trick: just list out all of the darn cosets!
\end{itemize}
\end{concept}
\begin{strategy}
Count in towers, show that distinct coset reps stay distinct.
\end{strategy}
\begin{solution}
\envlist
\begin{itemize}
\tightlist
\item
\(H \cap K \leq H \leq G \implies [G: H \cap K] = [G: H] [H : H \cap K]\)
\item
So it suffices to show \([H: H \cap K] \leq [G: K]\)
\item
  Write \(H/H \cap K = \left\{{ h_1 J, \cdots, h_m J }\right\}\) as
  distinct cosets where \(J \coloneqq H \cap K\).
\item
  Then \(h_i J\neq h_j J \iff h_i^{-1} h_j \not\in J = H \cap K\).
\item
  \(H\) is a subgroup, so \(h_i^{-1} h_j \in H\), which forces
  \(h_i^{-1} h_j \not\in K\).
\item
But then \(h_i K \neq h_j K\), so these are distinct cosets in
\(G/K\).
\item
So \(\#G/K \geq m\).
\end{itemize}
\end{solution}
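As a quick sanity check (not needed for the proof), take \(G = S_3\) with
\(H = \left\langle{ (1\,2) }\right\rangle\) and
\(K = \left\langle{ (1\,3) }\right\rangle\): then
\(H \cap K = \left\{{ e }\right\}\) and
\begin{align*}
[G: H \cap K] = 6 \leq [G:H]\,[G:K] = 3\cdot 3 = 9
,\end{align*}
while \(H = A_3\) and \(K = \left\langle{ (1\,2) }\right\rangle\) give
\(6 = 2 \cdot 3\), so the bound can be attained.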
\hypertarget{spring-2013-3-done}{%
\subsubsection{\texorpdfstring{Spring 2013 \#3
\(\done\)}{Spring 2013 \#3 \textbackslash done}}\label{spring-2013-3-done}}
Let \(P\) be a finite \(p{\hbox{-}}\)group. Prove that every nontrivial
normal subgroup of \(P\) intersects the center of \(P\) nontrivially.
\todo[inline]{Clean up, sketchy argument.}
\begin{solution}
\envlist
\begin{itemize}
\tightlist
\item
  Let \(N{~\trianglelefteq~}P\) be nontrivial. Since \(N\) is normal, it is
  closed under conjugation, so for each conjugacy class \([g_i]\) of \(P\)
  we have \(N \cap[g_i] = [g_i]\) or \(N \cap [g_i] = \emptyset\).
\item
  \(P = {\textstyle\coprod}_{i\leq M} [g_i]\) is a disjoint union of
  conjugacy classes, so \(N\) is the disjoint union of the classes it meets.
\item
  Then pull out the central classes (the singleton classes, whose elements
  lie in \(Z(P)\)):
  \begin{align*}
  N = \displaystyle\coprod_{i\leq M} [g_i] \cap N = \qty{ Z(P) \cap N } {\textstyle\coprod}\displaystyle\coprod_{i\leq M'} [g_i]
  ,\end{align*}
  where the second coproduct runs over the noncentral classes contained in
  \(N\).
\item
  Taking cardinalities,
  \begin{align*}
  \# N = \# \qty{ Z(P) \cap N} + \sum_{i\leq M'} \# [g_i]
  .\end{align*}
\item
  \(p\) divides \(\# N\) since \(N\leq P\) is nontrivial and \(P\) is a
  \(p{\hbox{-}}\)group.
\item
  Each noncentral class has \(\# [g_i] \geq 2\), and \(\# [g_i]\) divides
  \(\# P = p^k\) by orbit-stabilizer, so \(p\) divides each \(\# [g_i]\).
\item
  So \(p\) must divide the remaining term \(\# \qty{Z(P) \cap N}\); since
  this term is at least \(1\) (the identity lies in it), it is at least
  \(p\), making \(Z(P) \cap N\) nontrivial.
\end{itemize}
\end{solution}
\hypertarget{burnside-class-equation}{%
\subsection{Burnside / Class Equation}\label{burnside-class-equation}}
\hypertarget{spring-2019-4-done}{%
\subsubsection{\texorpdfstring{Spring 2019 \#4
\(\done\)}{Spring 2019 \#4 \textbackslash done}}\label{spring-2019-4-done}}
For a finite group \(G\), let \(c(G)\) denote the number of conjugacy
classes of \(G\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that if two elements of \(G\) are chosen uniformly at
random,then the probability they commute is precisely
\begin{align*}
\frac{c(G)}{{\left\lvert {G} \right\rvert}}
.\end{align*}
\item
State the class equation for a finite group.
\item
Using the class equation (or otherwise) show that the probability in
part (a) is at most
\begin{align*}
\frac 1 2 + \frac 1 {2[G : Z(G)]}
.\end{align*}
\end{enumerate}
\begin{quote}
Here, as usual, \(Z(G)\) denotes the center of \(G\).
\end{quote}
\begin{warnings}
(DZG) This is a slightly anomalous problem! It's fun and worth doing,
because it uses the major counting formulas. Just note that the
techniques used in this problem perhaps don't show up in other group
theory problems.
\end{warnings}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Notation: \(X/G\) is the set of \(G{\hbox{-}}\)orbits
\item
Notation:
\(X^g = \left\{{x\in X{~\mathrel{\Big|}~}g\cdot x = x}\right\}\)
\item
Burnside's formula: \(\#{X/G} = {1 \over \# G} \sum \# {X^g}\).
\item
Definition of conjugacy class:
\(C(g) = \left\{{ hgh^{-1}{~\mathrel{\Big|}~}h\in G }\right\}\).
\end{itemize}
\end{concept}
\begin{strategy}
Fixed points of the conjugation action are precisely commuting elements.
Apply Burnside. Context clue: \(1/[G:Z(G)]\) is weird, right? Use that
\([G:Z(G)] = \# G/\# Z(G)\), so try to look for \(\#Z(G)/\#(G)\)
somewhere. Count sizes of centralizers.
\end{strategy}
\begin{solution}
\envlist
\begin{proof}[Part a]
\envlist
\begin{itemize}
\item
Define a sample space \(\Omega = G \times G\), so
\(\# {\Omega} = (\# {G})^2\).
\item
Identify the event we want to analyze:
\begin{align*}
A \coloneqq\left\{{(g,h) \in G\times G {~\mathrel{\Big|}~}[g,h] = 1}\right\} \subseteq \Omega
.\end{align*}
\item
Note that the slices are centralizers:
\begin{align*}
A_g \coloneqq\left\{{(g, h) \in \left\{{ g }\right\} \times G {~\mathrel{\Big|}~}[g, h] = 1}\right\} = Z(g) \implies A = \displaystyle\coprod_{g\in G} Z(g)
.\end{align*}
\item
  Let \(n\) be the number of conjugacy classes; note we want to show
\(P(A) = n / {\left\lvert {G} \right\rvert}\).
\item
Let \(G\) act on itself by conjugation, which partitions \(G\) into
conjugacy classes.
\begin{itemize}
\item
What are the orbits?
\begin{align*}
\mathcal{O}_g = \left\{{hgh^{-1}{~\mathrel{\Big|}~}h\in G}\right\}
,\end{align*}
which is the \textbf{conjugacy class} of \(g\). In particular, the
number of orbits is the number of conjugacy classes.
\item
What are the fixed points?
\begin{align*}X^g = \left\{{h\in G {~\mathrel{\Big|}~}hgh^{-1}= g}\right\},\end{align*}
which are the elements of \(G\) that commute with \(g\), which is
isomorphic to \(A_g\).
\end{itemize}
\item
Identifying centralizers with fixed points,
\begin{align*}
\#{A} = \#{\displaystyle\coprod_{g\in G} Z(g) } = \sum_{g\in G} \#{Z(g)} = \sum_{g\in G}\# {X^g}
.\end{align*}
\item
Apply Burnside
\begin{align*}
  \# {X/G} = \frac{1}{\# G} \sum_{g \in G} \# X^{g},
\end{align*}
\item
Note \(\#{X/G} = n\), i.e.~the number of conjugacy classes is the
number of orbits.
\item
Rearrange and use definition:
\begin{align*}
n \cdot \#{G}
= \qty{\#{X/G} }\cdot \#{G}
= \sum _ { g \in G } \# X ^ { g }
\end{align*}
\item
Compute probability:
\begin{align*}
P(A)
= {\# A \over \# \Omega}
= \displaystyle\sum _{ g \in G } \frac{\# X ^ { g }}{ ( \# {G} )^2}
= \frac{\qty{ \# {X/G}} \cdot \#{G}}{ (\#{G})^2}
= \frac{n \cdot \#{G}}{( \#{G} )^2}
= \frac n {\# G}
.\end{align*}
\end{itemize}
\end{proof}
\begin{proof}[Part b]
Statement of the class equation:
\begin{align*}
{\left\lvert {G} \right\rvert} = {\left\lvert {Z(G)} \right\rvert} + \sum_{\substack{\text{one $x$ from each} \\ \text{noncentral conjugacy class}}}[G: Z(x)]
\end{align*}
where \(Z(x) = \left\{{g\in G {~\mathrel{\Big|}~}[g, x] = 1}\right\}\)
is the centralizer of \(x\) in \(G\).
\end{proof}
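For example, in \(S_3\) the center is trivial and the noncentral classes are
represented by \((1\,2)\) and \((1\,2\,3)\), giving
\begin{align*}
\# S_3 = 6 = 1 + [S_3 : Z((1\,2))] + [S_3 : Z((1\,2\,3))] = 1 + 3 + 2
.\end{align*}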
\begin{proof}[Part c]
\envlist
\begin{quote}
(DZG): I couldn't convince myself that a previous proof using the class
equation actually works. Instead, I'll borrow the proof from
\href{https://math.berkeley.edu/~tb65536/Commuting_Probability.pdf}{this
note}
\end{quote}
\begin{itemize}
\tightlist
\item
Write the event as
\(A = \displaystyle\coprod_{g\in G} \left\{{g}\right\} \times Z(g)\),
then
\begin{align*}
P(A)
= {\# A\over (\# G)^2}
= {1\over (\# G)^2} \sum_{g\in G} \# Z(g)
.\end{align*}
\item
Attempt to estimate the sum: pull out central elements \(g\in Z(G)\).
\begin{itemize}
\tightlist
\item
Note \(Z(g) = G\) for central \(g\), so \(\# Z(g) = \# G\)
\item
Note
\begin{align*}
g\not\in Z(G)\implies \# Z(g) \leq {1\over 2} \# G
,\end{align*}
since \(Z(g) \leq G\) is a subgroup, and
\begin{align*}
[G:Z(g)] \neq 1 \implies [G: Z(g)] \geq 2
.\end{align*}
\end{itemize}
\item
Use these facts to calculate:
\begin{align*}
P(A)
  &= {1\over (\# G)^2 } \qty{ \sum_{g\in Z(G)} \# Z(g) + \sum_{g\not\in Z(G)} \# Z(g) } \\
  &= {1\over (\# G)^2 } \qty{ \sum_{g\in Z(G)} \# G + \sum_{g\not\in Z(G)} \# Z(g) } \\
  &= {1\over (\# G)^2 } \qty{ \# Z(G) \cdot \# G + \sum_{g\not\in Z(G)} \# Z(g) } \\
  &\leq {1\over (\# G)^2 } \qty{ \# Z(G) \cdot \# G + \sum_{g\not\in Z(G)} {1\over 2} \# G } \\
  &= {1\over (\# G)^2 } \qty{ \# Z(G) \cdot \# G + \qty{ \sum_{g\not\in Z(G)} {1\over 2} } \cdot \# G } \\
  &= {1\over (\# G) } \qty{ \# Z(G) + \sum_{g\not\in Z(G)} {1\over 2} } \\
  &= {1\over (\# G) } \qty{ \# Z(G) + {1\over 2} \sum_{g\not\in Z(G)} 1 } \\
  &= {1\over (\# G) } \qty{ \# Z(G) + {1\over 2} \#(G \setminus Z(G) ) } \\
&= {1\over (\# G) } \qty{ \# Z(G) + {1\over 2} \#G - {1\over 2} \# Z(G) } \\
&= {1\over (\# G) } \qty{ {1\over 2} \# Z(G) + {1\over 2} \#G } \\
&= {1\over 2} \qty{1 + { \# Z(G) \over \# G }} \\
&= {1\over 2} \qty{1 + { 1 \over [G : Z(G)] }}
.\end{align*}
\end{itemize}
\end{proof}
\todo[inline]{Redo part c}
\end{solution}
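As a concrete check of both parts (not required by the problem), take
\(G = S_3\): there are \(c(S_3) = 3\) conjugacy classes, so the probability
that two uniformly random elements commute is \(3/6 = 1/2\), and the bound
from part (c) reads
\begin{align*}
{1\over 2} + {1\over 2\,[S_3 : Z(S_3)]} = {1\over 2} + {1\over 12} = {7\over 12} \geq {1\over 2}
.\end{align*}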
\hypertarget{group-actions-representations}{%
\subsection{Group Actions /
Representations}\label{group-actions-representations}}
\hypertarget{spring-2017-1-done}{%
\subsubsection{\texorpdfstring{Spring 2017 \#1
\(\done\)}{Spring 2017 \#1 \textbackslash done}}\label{spring-2017-1-done}}
Let \(G\) be a finite group and \(\pi: G\to \operatorname{Sym}(G)\) the
Cayley representation.
\begin{quote}
(Recall that this means that for an element \(x\in G\), \(\pi(x)\) acts
by left translation on \(G\).)
\end{quote}
Prove that \(\pi(x)\) is an odd permutation \(\iff\) the order
\({\left\lvert {\pi(x)} \right\rvert}\) of \(\pi(x)\) is even and
\({\left\lvert {G} \right\rvert} / {\left\lvert {\pi(x)} \right\rvert}\)
is odd.
\begin{warnings}
(DZG): This seems like an unusually hard group theory problem. My guess
is this year's qual class spent more time than usual on the proof of
Cayley's theorem.
\end{warnings}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
\(\operatorname{Sym}(G) \coloneqq\mathop{\mathrm{Aut}}_{\mathsf{Set}}(G, G)\)
is the group of set morphisms from \(G\) to itself, i.e.~permutations
of elements of \(G\).
\item
More standard terminology: this is related to the \textbf{left regular
representation} where \(g\mapsto \phi_g\) where \(\phi_g(x) = gx\),
regarded instead as a permutation representation.
\begin{itemize}
\tightlist
\item
This action is transitive!
\end{itemize}
\item
Cayley's theorem: every \(G\) is isomorphic to a subgroup of a
permutation group. In particular, take
\(\left\{{ \phi_g {~\mathrel{\Big|}~}G\in G }\right\}\) with function
composition as a subgroup of
\(\mathop{\mathrm{Aut}}_{\mathsf{Set}}(G)\).
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{quote}
(DZG): Warning!! I haven't checked this solution very carefully, and
this is kind of a delicate parity argument. Most of the key ideas are
borrowed
\href{https://math.stackexchange.com/questions/3028603/show-that-phig-is-an-even-permutation}{from
here}.
\end{quote}
\begin{itemize}
\tightlist
\item
Write \(k \coloneqq o(\pi_g)\), then since \(\pi\) is injective,
\(k = o(g)\) in \(G\).
\item
Since \(\pi_g\) as a cycle is obtained from the action of \(g\), we
can pick an element \(x_0\) in \(G\), take the orbit under the action,
and obtain a cycle of length \(k\) since the order of \(g\) is \(k\).
Then continue by taking any \(x_1\) not in the first orbit and taking
\emph{its} orbit. Continuing this way exhausts all group elements and
yields a decomposition into disjoint cycles:
\begin{align*}
\pi_g =
(x_0, gx_0, g^2 x_0, \cdots, g^{k-1} x_0)
(x_1, gx_1, g^2 x_1, \cdots, g^{k-1} x_1)
\cdots
  (x_{m-1}, gx_{m-1}, g^2 x_{m-1}, \cdots, g^{k-1} x_{m-1})
.\end{align*}
\item
So there are \(m\) orbits all of length exactly \(k\). Proceed by
casework.
\item
  If \(k\) is odd:
  \begin{itemize}
  \tightlist
  \item
    Each cycle has odd length, hence is an even permutation, so \(\pi_g\)
    is a product of even permutations.
  \item
    Thus \(\pi_g \in \ker \operatorname{sgn}\) and is an even permutation,
    regardless of \(m\).
  \end{itemize}
\item
  If \(k\) is even:
  \begin{itemize}
  \tightlist
  \item
    Each cycle has even length, hence is an odd permutation, so
    \(\operatorname{sgn}(\pi_g) = (-1)^m\): \(\pi_g\) is odd iff \(m\) is
    odd.
  \end{itemize}
\item
  The claim is that the number of orbit representatives \(m\) is equal
  to \([G:H] = \# G/ \# H\) for \(H = \left\langle{ g }\right\rangle\);
  note that the orbits of left translation by \(g\) are exactly the right
  cosets \(Hx\).
  \begin{itemize}
  \tightlist
  \item
    Proof: define a map
    \begin{align*}
    \left\{{ \text{Orbit representatives } x_i }\right\} &\to \left\{{ \text{right cosets } Hx }\right\} \\
    x &\mapsto Hx
    .\end{align*}
  \item
    This is injective and surjective because
    \begin{align*}
    Hx = Hy &\iff xy^{-1}\in H = \left\langle{ g }\right\rangle \\
    &\iff xy^{-1}= g^\ell \\
    &\iff x=g^\ell y \\
    &\iff x\in {\mathcal{O}}_y
    ,\end{align*}
so \(y\) and \(x\) are in the same orbit and have the same orbit
representative.
\end{itemize}
\item
  We now have
  \begin{align*}
  \pi_g \text{ is an even permutation } \iff
  \begin{cases}
  k \text{ is odd} &
  \\
  \text{ or } & \\
  k \text{ is even and } m \text{ is even}
  & .
  \end{cases}
  \end{align*}
\item
  Everything was an iff, so negating both sides:
  \begin{align*}
  \pi_g \text{ is an odd permutation } \iff
  k \text{ is even and } m \text{ is odd}
  .\end{align*}
\item
Then just recall that \(k\coloneqq o(\pi_g)\) and
\begin{align*}
m= [G: \left\langle{ g }\right\rangle] = \# G / \# \left\langle{ g }\right\rangle= \# G / o(g) = \# G/ o(\pi_g)
.\end{align*}
\end{itemize}
\end{solution}
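As a sanity check of the criterion (not part of the argument above), take
\(G = {\mathbb{Z}}_4\). Left translation by \(1\) is the \(4\)-cycle
\((0\,1\,2\,3)\), an odd permutation, and indeed \(o(\pi(1)) = 4\) is even
while \(\# G / o(\pi(1)) = 1\) is odd. Left translation by \(2\) is
\((0\,2)(1\,3)\), an even permutation, and correspondingly
\(\# G / o(\pi(2)) = 2\) fails to be odd.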
\hypertarget{fall-2015-1-done}{%
\subsubsection{\texorpdfstring{Fall 2015 \#1
\(\done\)}{Fall 2015 \#1 \textbackslash done}}\label{fall-2015-1-done}}
Let \(G\) be a group containing a subgroup \(H\) not equal to \(G\) of
finite index. Prove that \(G\) has a normal subgroup which is contained
in every conjugate of \(H\) which is of finite index.
\begin{quote}
(DZG) A remark: it's not the conjugates that should be finite index
here, but rather the normal subgroup.
\end{quote}
\begin{solution}
\envlist
\begin{itemize}
\tightlist
\item
Let \(H\leq G\) and define \(n\coloneqq[G:H]\).
\item
Write \(G/H = \left\{{ x_1 H, \cdots, x_n H }\right\}\) for the
finitely many cosets.
\item
Let \(G\) act on \(G/H\) by left translation, so
  \(g\cdot xH \coloneqq gxH\). Call the action
\(\psi: G\to \operatorname{Sym}(G/H)\).
\item
Then \({\operatorname{Stab}}(xH) = xHx^{-1}\) is a subgroup conjugate
to \(H\), and
  \(K\coloneqq\ker \psi = \displaystyle\bigcap_{i=1}^n x_i H x_i^{-1}\) is the
intersection of all conjugates of \(H\).
\item
Kernels are normal, so \(K{~\trianglelefteq~}G\), and
\(K\subseteq xHx^{-1}\) for all \(x\), meaning \(K\) is contained in
every conjugate of \(H\).
\item
The index \([G:K]\) is finite since
\(G/K \cong \operatorname{im}\psi\) by the first isomorphism theorem,
and
\(\# \operatorname{im}\psi \leq \# \operatorname{Sym}(G/H) = \# S_n = n! < \infty\).
\end{itemize}
\end{solution}
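For instance, with \(G = S_3\) and \(H = \left\langle{ (1\,2) }\right\rangle\),
the conjugates of \(H\) are the three subgroups generated by the
transpositions, and the construction above produces
\(K = \ker \psi = \left\{{ e }\right\}\): normal, of finite index, and
contained in every conjugate of \(H\).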
\hypertarget{conjugacy-classes}{%
\subsection{Conjugacy Classes}\label{conjugacy-classes}}
\hypertarget{spring-2021-2-done}{%
\subsubsection{\texorpdfstring{Spring 2021 \#2
\(\done\)}{Spring 2021 \#2 \textbackslash done}}\label{spring-2021-2-done}}
Let \(H {~\trianglelefteq~}G\) be a normal subgroup of a finite group
\(G\), where the order of \(H\) is the smallest prime \(p\) dividing
\({\left\lvert {G} \right\rvert}\). Prove that \(H\) is contained in the
center of \(G\).
\begin{quote}
Solution due to Swaroop Hegde, typed up + modifications added by DZG.
\end{quote}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
\(x\in Z(G)\) iff \(\# C_x = 1\), i.e.~the size of its conjugacy class
is one.
\item
Normal subgroups are disjoint unions of (some) conjugacy classes in
\(G\).
\begin{itemize}
\tightlist
\item
In fact, this is a characterization of normal subgroups (i.e.~\(H\)
is normal iff \(H\) is a union of conjugacy classes in \(G\)).
\item
Why: if \(H{~\trianglelefteq~}G\) then \(ghg^{-1}\in H\) for all
\(g\), so \(C_h \subseteq H\) and
\(\displaystyle\bigcup_h C_h = H\). Conversely, if
\(H = \displaystyle\bigcup_{h\in H} C_h\), then
\(ghg^{-1}\in C_h \subseteq H\) and thus \(gHg^{-1}= H\).
\end{itemize}
\item
  Orbit-stabilizer theorem: \(\# C_g = \# G/ \# Z(g)\) where \(C_g\) is
  the conjugacy class and \(Z(g)\) is the centralizer of \(g\).
  \begin{itemize}
  \tightlist
  \item
    In particular, \(\# C_g\) divides \(\#G\).
\end{itemize}
\end{itemize}
\end{concept}
\begin{strategy}
Show an element \(x\) is central by showing \(\# C_x = 1\).
\end{strategy}
\begin{proof}[?]
\envlist
\begin{itemize}
\item
Let \(p \coloneqq\#H\).
\item
Let \(\left\{{ C_i }\right\}_{i\leq n}\) be the conjugacy classes in
\(G\), then \(G = {\textstyle\coprod}_{i\leq n} C_i\)
\item
  By the second fact, there is a sub-collection
\(\left\{{ C_{i_j}}\right\}_{j\leq k }\) such that
\begin{align*}
H = {\textstyle\coprod}_{j\leq k} C_{i_j}
.\end{align*}
\item
  The identity always forms a singleton conjugacy class, so
  \(C_e = \left\{{ e }\right\}\).
\item
Since \(e\in H\), without loss of generality, label
\(C_{i_1} = \left\{{ e }\right\}\).
\item
So
\begin{align*}
H
= \displaystyle\coprod_{j\leq k} C_{i_j}
= C_{i_1}{\textstyle \coprod} \displaystyle\displaystyle\coprod_{\substack{ j\leq k \\ j\neq 1} } C_{i_j}
.\end{align*}
\item
Take cardinality in the above equation
\begin{align*}
p = 1 + \sum_{\substack{ j\leq k \\ j\neq 1 }} \# C_{i_j}
.\end{align*}
\item
So \(\# C_{i_j} \leq p-1\) for all \(j\neq 1\).
\item
Every \(\# C_{i_j}\) divides \(\# G\), but \(p\) was the
\emph{minimal} prime dividing \(\# G\), forcing \(\# C_{i_j} = 1\) for
all \(j \neq 1\).
\begin{itemize}
\tightlist
\item
This rules out \(\# C_{i_j}\) being a prime less than \(p\), but
also rules out composites: if a prime \(q\divides \# C_{i_j}\), then
\(q<p\) and \(q\divides \# G\), a contradiction.
\end{itemize}
\item
  By the first fact, each class in \(H\) is a singleton, so each \(x\in C_{i_j}\) satisfies \(x\in Z(G)\).
\item
\(\cup C_{i_j} = H\), so \(H \subseteq Z(G)\).
\end{itemize}
\end{proof}
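As an example, in \(Q_8\) the subgroup \(\left\{{ \pm 1 }\right\}\) is normal
of order \(2\), the smallest prime dividing \(\# Q_8 = 8\), and it is exactly
the center \(Z(Q_8)\), as the statement predicts.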
\hypertarget{spring-2015-1-done}{%
\subsubsection{\texorpdfstring{Spring 2015 \#1
\(\done\)}{Spring 2015 \#1 \textbackslash done}}\label{spring-2015-1-done}}
For a prime \(p\), let \(G\) be a finite \(p{\hbox{-}}\)group and let
\(N\) be a normal subgroup of \(G\) of order \(p\). Prove that \(N\) is
contained in the center of \(G\).
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Definition of conjugacy class:
\([x] = \left\{{gxg^{-1}{~\mathrel{\Big|}~}g\in G}\right\}\).
\item
A conjugacy class \([x]\) is trivial iff
\([x] = \left\{{ x }\right\}\) iff \(x\in Z(G)\).
\item
Sizes of conjugacy classes divide the order of the group they live in.
\begin{itemize}
\tightlist
\item
This is orbit-stabilizer: \(G\curvearrowright G\) by
\(g\cdot x \coloneqq gxg^{-1}\), so \({\mathcal{O}}(x) = [x]\). Then
\(\# {\mathcal{O}}(x) = \# G / \# {\operatorname{Stab}}(x)\), so
\(\# {\mathcal{O}}(x)\) divides \(\# G\).
\end{itemize}
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{itemize}
\tightlist
\item
Use that \(N{~\trianglelefteq~}G \iff N = {\textstyle\coprod}' [n_i]\)
is a \emph{disjoint} union of (full) conjugacy classes.
\item
Take cardinalities:
\begin{align*}
  p = \# N = \sum_{i=1}^m \# [n_i] = 1 + \sum_{i=2}^m \# [n_i]
.\end{align*}
\item
  The size of each conjugacy class divides \(\# G\) by
  orbit-stabilizer, so each \(\# [n_i]\) is a power of \(p\).
\item
  But the entire second term must sum to \(p-1 < p\) for this equality to
  hold, which forces \(\#[n_i] = 1\) for every \(i\) (and incidentally \(m = p\)).
\item
Then \([n_i] = \left\{{ n_i }\right\} \iff n_i \in Z(G)\), and this
holds for all \(i\), so \(N \subseteq Z(G)\).
\end{itemize}
\end{solution}
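For a concrete instance, take \(P\) to be the dihedral group of order \(8\),
generated by a rotation \(r\) of order \(4\) and a reflection \(s\), and
\(N = \left\langle{ r^2 }\right\rangle\): then \(N\) is normal of order \(2\),
and indeed \(r^2\) commutes with both \(r\) and \(s\), so
\(N \subseteq Z(P)\).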
\hypertarget{unsorted-counting-arguments}{%
\subsection{Unsorted / Counting
Arguments}\label{unsorted-counting-arguments}}
\hypertarget{fall-2019-midterm-5-done}{%
\subsubsection{\texorpdfstring{Fall 2019 Midterm \#5
\(\done\)}{Fall 2019 Midterm \#5 \textbackslash done}}\label{fall-2019-midterm-5-done}}
Let \(G\) be a nonabelian group of order \(p^3\) for \(p\) prime. Show
that \(Z(G) = [G, G]\).
\begin{quote}
Note: this is a good problem, it tests several common theorems at once.
Proof due to Paco Adajar.
\end{quote}
\begin{concept}
\envlist
Important notations and definitions:
\begin{itemize}
\item
The \textbf{center} of \(G\), denoted by \(Z(G)\), is the subset of
elements of \(G\) which commute with all elements of \(G\). That is,
if \(x \in Z(G)\), then for all \(g \in G\), \(gx = xg\):
\begin{align*}Z(G) = \{ x \in G : gx = xg \, \text{for all } g \in G \}.\end{align*}
In fact, \(Z(G)\) is not just a subset of \(G\), but a normal subgroup
of \(G\).
\item
The \textbf{commutator subgroup} of \(G\), denoted by \([G, G]\), is
the subgroup of \(G\) generated by the commutators of \(G\), i.e., the
elements of the form \(ghg^{-1}h^{-1}\):
\begin{align*}[G, G] = \langle ghg^{-1}h^{-1} : g, h \in G \rangle.\end{align*}
The commutator subgroup \([G,G]\) is the smallest normal subgroup of
\(G\) whose quotient is abelian. That is, if \(H\) is a normal
subgroup of \(G\) for which \(G/H\) is abelian, then \([G, G] \le H\).
Moreover, \(G\) is abelian if and only if \([G,G]\) is trivial.
\end{itemize}
Theorems to remember and know how to prove:
\begin{itemize}
\item
\textbf{\(G/Z(G)\) Theorem}: If \(G/Z(G)\) is cyclic, then \(G\) is
abelian, i.e., \(G/Z(G)\) is in fact trivial.
\item
\textbf{Lagrange's Theorem}: If \(G\) is a group of finite order and
\(H\) is a subgroup of \(G\), then the order of \(H\) divides that of
\(G\).
\begin{itemize}
\tightlist
\item
One consequence of this is that every group of prime order is
cyclic.
\end{itemize}
\item
A \(p\)-group (a group of order \(p^n\) for some prime \(p\) and some
positive integer \(n\)) has nontrivial center.
\item
A consequence of the theorems above: every group of order \(p^2\)
(where \(p\) is prime) is abelian.
\end{itemize}
\end{concept}
\begin{solution}
Since \(Z(G)\) is a subgroup of \(G\) and \(|G| = p^3\), by Lagrange's
theorem, \(|Z(G)| \in \{1, p, p^2, p^3\}\).
Since we stipulated that \(G\) is nonabelian, \(|Z(G)| \ne p^3\). Also,
since \(G\) is a \(p\)-group, it has nontrivial center, so
\(|Z(G)| \ne 1\). Finally, by the \(G/Z(G)\) theorem,
\(|Z(G)| \ne p^2\): if \(|Z(G)| = p^2\), then \(|G/Z(G)| = p\) and so
\(G/Z(G)\) would be cyclic, meaning that \(G\) is abelian. Hence,
\(|Z(G)| = p\).
Then, since \(|Z(G)| = p\), we have that \(|G/Z(G)| = p^2\), and so
\(G/Z(G)\) is abelian. Thus, \([G, G] \le Z(G)\). Since \(|Z(G)| = p\),
we get \(|[G,G]| \in \{ 1, p\}\) again by Lagrange's theorem. If
\(|[G,G]| = p\) then \([G,G] = Z(G)\) and we are done. And, indeed, we
must have \(|[G,G]| = p\), because \(G\) is nonabelian and so
\(|[G,G]| \ne 1\).
\end{solution}
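For \(p = 2\), both nonabelian groups of order \(8\) illustrate the result:
in \(Q_8\) one has \(Z(Q_8) = [Q_8, Q_8] = \{\pm 1\}\), and in the dihedral
group of order \(8\) (generated by a rotation \(r\) of order \(4\) and a
reflection) both the center and the commutator subgroup equal
\(\langle r^2 \rangle\).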
\hypertarget{spring-2012-2-done}{%
\subsubsection{\texorpdfstring{Spring 2012 \#2
\(\done\)}{Spring 2012 \#2 \textbackslash done}}\label{spring-2012-2-done}}
Let \(G\) be a finite group and \(p\) a prime number such that there is
a normal subgroup \(H{~\trianglelefteq~}G\) with
\({\left\lvert {H} \right\rvert} = p^i > 1\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(H\) is a subgroup of any Sylow \(p{\hbox{-}}\)subgroup of
\(G\).
\item
Show that \(G\) contains a nonzero abelian normal subgroup of order
divisible by \(p\).
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
\(p\) groups have nontrivial centers.
\item
Definition of maximality and \(p{\hbox{-}}\)groups
\item
Sylows are conjugate
\item
\(Z(G) \operatorname{ch}G\) always.
\item
Transitivity of characteristic: \(A \operatorname{ch}B\) and
\(B{~\trianglelefteq~}C\) implies \(A {~\trianglelefteq~}C\).
\end{itemize}
\end{concept}
\begin{strategy}
Just use maximality for (a). For (b), centers are always abelian, so
\(Z(H)\) is good to consider, just need to ensure it's normal in \(G\).
Use transitivity of characteristic.
\end{strategy}
\begin{solution}
\envlist
\begin{proof}[of a]
\envlist
\begin{itemize}
\tightlist
\item
  By definition, \(S\in {\operatorname{Syl}}_p(G) \iff S\) is a
  \emph{maximal} \(p{\hbox{-}}\)subgroup: \(S\leq G\) is a
  \(p{\hbox{-}}\)group, so \(\#S = p^k\) for some \(k\), and \(S\) is
  maximal in the sense that there is no \(p{\hbox{-}}\)subgroup \(S'\)
  with \(S \subsetneq S' \leq G\).
\item
Since \(\# H = p^i\), \(H\) is a \(p{\hbox{-}}\)subgroup of \(G\).
\item
If \(H\) is maximal, then by definition
\(H\in {\operatorname{Syl}}_p(G)\)
\item
Otherwise, if \(H\) is not maximal, there exists an \(H' \supseteq H\)
with \(H'\leq G\) a \(p{\hbox{-}}\)subgroup properly containing \(H\).
\begin{itemize}
\tightlist
\item
In this apply the same argument to \(H'\): this yields a proper
superset containment at every stage, and since \(G\) is finite,
there is no infinite ascending chain of proper supersets.
\item
So this terminates in some maximal \(p{\hbox{-}}\)subgroup \(S\),
i.e.~a Sylow \(p{\hbox{-}}\)subgroup.
\end{itemize}
\item
So \(H \subseteq S\) for some \(S\in {\operatorname{Syl}}_p(G)\).
\item
All Sylows are conjugate, so for any
\(S' \in {\operatorname{Syl}}_p(G)\) we can write \(S' = gSg^{-1}\)
for some \(g\).
\item
Then using that \(H\) is normal,
\(H \subseteq S \implies H = gHg^{-1}\subseteq gSg^{-1}\coloneqq S'\).
So \(H\) is contained in every Sylow \(p{\hbox{-}}\)subgroup.
\end{itemize}
\end{proof}
\begin{proof}[of b]
\envlist
\begin{itemize}
\tightlist
\item
Claim: \(Z(H) \leq H\) works.
\begin{itemize}
\tightlist
\item
It is nontrivial since \(H\) is a \(p{\hbox{-}}\)group and
\(p{\hbox{-}}\)groups have nontrivial centers
\item
It is abelian since \(Z(Z(H)) = Z(H)\).
\item
\(\#Z(H) = p^\ell\) for some \(\ell \leq i\) by Lagrange
\end{itemize}
\item
It thus remains to show that \(Z(H) {~\trianglelefteq~}G\).
\item
Use that \(Z(H) \operatorname{ch}H\) and use transitivity of
  characteristic to conclude \(Z(H) {~\trianglelefteq~}G\).
\item
That \(Z(H) \operatorname{ch}H\): let
\(\psi \in \mathop{\mathrm{Aut}}(H)\) and \(x=\psi(y)\in \psi(Z(H))\)
so \(y\in Z(H)\), then for arbitrary \(h\in H\),
\begin{align*}
\psi(y)h
&= \psi(y) (\psi \circ \psi^{-1})(h) \\
&= \psi( y \cdot \psi^{-1}(h) ) \\
&= \psi( \psi^{-1}(h) \cdot y ) && \text{since } \psi^{-1}(h)\in H, \, y\in Z(H) \\
&= h\psi(y)
.\end{align*}
\item
That
\(A \operatorname{ch}B {~\trianglelefteq~}C \implies A{~\trianglelefteq~}C\):
\begin{itemize}
\tightlist
\item
\(A\operatorname{ch}B\) iff \(A\) is fixed by every
    \(\psi\in \mathop{\mathrm{Aut}}(B)\); we want to show \(cAc^{-1}= A\) for all
\(c\in C\).
\item
Since \(B{~\trianglelefteq~}C\), the automorphism
\(\psi({-}) \coloneqq c({-})c^{-1}\) descends to an element of
\(\mathop{\mathrm{Aut}}(B)\).
\item
Then \(\psi(A) = A\) since \(A\operatorname{ch}B\), so
\(cAc^{-1}= A\) and \(A{~\trianglelefteq~}C\).
\end{itemize}
\end{itemize}
\end{proof}
\end{solution}
\hypertarget{fall-2016-1-done}{%
\subsubsection{\texorpdfstring{Fall 2016 \#1
\(\done\)}{Fall 2016 \#1 \textbackslash done}}\label{fall-2016-1-done}}
Let \(G\) be a finite group and \(s, t\in G\) be two distinct elements
of order 2. Show that subgroup of \(G\) generated by \(s\) and \(t\) is
a dihedral group.
\begin{quote}
Recall that the dihedral groups of order \(2m\) for \(m\geq 2\) are of
the form
\begin{align*}
D_{2m} = \left\langle{\sigma, \tau {~\mathrel{\Big|}~}\sigma^m = 1 = \tau^2, \tau \sigma = \sigma^{-1}\tau}\right\rangle
.\end{align*}
\end{quote}
\begin{solution}
\envlist
\begin{itemize}
\item
Suppose \(G = \left\langle{ a, b}\right\rangle\) with
\(a^2 = b^2 = e\), satisfying some unknown relations.
\item
  Consider \(ab\). Since \(G\) is finite, \(ab\) has some finite order \(n\),
  so \((ab)^n = e\); moreover \(n\geq 2\), since \(ab = e\) would force
  \(b = a^{-1}= a\), contradicting that \(a\) and \(b\) are distinct.
\item
Note
\(\left\langle{ab, b}\right\rangle \subseteq \left\langle{a, b}\right\rangle\),
since any finite word in \(ab, b\) is also a finite word in \(a, b\).
\item
  Since \((ab)b = ab^2 = a\), we also have
  \(\left\langle{a, b}\right\rangle \subseteq \left\langle{ab, b}\right\rangle\),
so
\(\left\langle{ab, b}\right\rangle = \left\langle{a, b}\right\rangle\).
\item
Write \(D_{2n} = F(r, s) / \ker \pi\) for \(\pi: F(r, s)\to D_{2n}\)
the canonical presentation map.
\item
Define
\begin{align*}
\psi: F(r, s) &\to G \\
r &\mapsto ab \\
t &\mapsto b
.\end{align*}
\item
This is clearly surjective since it hits all generators.
\item
  We'll show that \(ab, b\) satisfy all of the relations defining
\(D_{2n}\), which factors \(\psi\) through \(\ker \pi\), yielding a
surjection \(\tilde \psi: D_{2n} \twoheadrightarrow G\).
\begin{itemize}
\tightlist
\item
\((ab)^n = e\) by construction, \(b^2 = e\) by assumption, and
\begin{align*}
b (ab) b^{-1}= babb^{-1}= ba = b^{-1}a^{-1}= (ab)^{-1}
,\end{align*}
corresponding to the relation \(srs^{-1}= r^{-1}\). Here we've used
that \(o(a) = o(b) = 2\) implies \(a=a^{-1}, b=b^{-1}\).
\end{itemize}
\item
Surjectivity of \(\tilde \psi\) yields \(2n = \# D_{2n} \geq \# G\).
\item
The claim is that \(\# G \geq 2n\), which forces \(\# G = 2n\). Then
\(\tilde \psi\) will be a surjective group morphism between groups of
the same order, and thus an isomorphism.
\begin{itemize}
\tightlist
\item
We have \(\left\langle{ ab }\right\rangle\leq G\), so
\(n\divides \# G\).
\item
    Since \(b\not\in \left\langle{ ab }\right\rangle\) (otherwise
    \(G = \left\langle{ ab }\right\rangle\) would be cyclic and would contain
    at most one element of order \(2\), contradicting \(a \neq b\)), this
    forces \(\# G > n\), so \(\# G \geq 2n\).
\end{itemize}
\end{itemize}
\begin{quote}
Remark: see a more direct proof in
\href{https://kconrad.math.uconn.edu/blurbs/grouptheory/dihedral2.pdf}{Theorem
2.1 and Theorem 1.1 here}
\end{quote}
\end{solution}
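As an illustration, take \(s = (1\,2)\) and \(t = (2\,3)\) in \(S_3\): then
\(st = (1\,2\,3)\) has order \(3\), and
\(\left\langle{ s, t }\right\rangle = S_3\) is dihedral of order \(6\),
matching the presentation with \(\sigma = st\) and \(\tau = t\).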
\hypertarget{fall-2019-midterm-1-done}{%
\subsubsection{\texorpdfstring{Fall 2019 Midterm \#1
\(\done\)}{Fall 2019 Midterm \#1 \textbackslash done}}\label{fall-2019-midterm-1-done}}
Let \(G\) be a group of order \(p^2q\) for \(p, q\) prime. Show that
\(G\) has a nontrivial normal subgroup.
\begin{solution}
\envlist
\begin{itemize}
\item
Write \(\# G = p^2 q\)
\item
  Cases: first assume \(p>q\), then do \(p<q\).
\item
In any case, we have
\begin{align*}
n_p \divides q &\implies n_p \in \left\{{ 1,q }\right\} \\ \\
n_q \divides p^2 &\implies n_q \in \left\{{ 1, p, p^2}\right\}
.\end{align*}
\item
If \(n_p=1\) or \(n_q=1\), we're done, so suppose otherwise.
\item
\textbf{Case 1:} \(:p>q\).
\begin{itemize}
\tightlist
  \item
    Using that \(n_p \equiv 1 \operatorname{mod} p\), consider reducing the
    candidates \(\left\{{1, q}\right\} \operatorname{mod}p\).
  \item
    Since \(1 < q<p\), we have \(q\operatorname{mod}p = q \neq 1\), so
    \(q\not\equiv 1\operatorname{mod}p\) and thus \(n_p \neq q\). Combined
    with the assumption \(n_p \neq 1\), no value of \(n_p\) remains, a
    contradiction.
    \(\contradiction\)
\end{itemize}
\item
\textbf{Case 2:} \(p< q\):
\begin{itemize}
\item
Using that \([n_q]_q \equiv 1\), consider reducing
\(\left\{{1, p, p^2}\right\}\operatorname{mod}q\).
\item
Since now \(p<q\), we have \(p\operatorname{mod}q = p\) itself, so
\(p\operatorname{mod}q \neq 1\) and we can rule it out.
\item
The remaining possibility is \(n_q = p^2\).
\item
Supposing that \(n_p \neq 1\), we have \(n_p=q\), so we can count
\begin{align*}
    \text{Non-identity elements from Sylow } q: n_q( \# S_q - 1) &= p^2(q-1)
,\end{align*}
where we've used that distinct Sylow \(q\)s can only intersect at
the identity, and although Sylow \(p\)s \emph{can} intersect
trivially, they can also intersect in a subgroup of size \(p\).
\item
Suppose all Sylow \(p\)s intersect trivially, we get at least
\begin{align*}
\text{Elements from Sylow } p: n_p( \# S_p - 1) &= q(p^2-1)
.\end{align*}
Then we get a count of how many elements the Sylow \(p\)s and \(q\)s
contribute:
\begin{align*}
q(p^2-1) + p^2(q-1) + 1
= p^2q - q + p^2q - p^2 + 1
= p^2q + (p^2-1)(q-1)
> p^2q = \# G
,\end{align*}
provided \((p^2-1)(q-1) \neq 0\), which is fine for \(p\geq 2\)
since this is at least \((2^2-1)(3-2) = 3\) (since \(p<q\) and
\(q=3\) is the next smallest prime). \(\contradiction\)
\item
Otherwise, we get two Sylow \(p\)s intersecting nontrivially, which
must be in a subgroup of order at least \(p\) since the intersection
is a subgroup of both. In this case, just considering these two
subgroups, we get
\begin{align*}
    \text{Non-identity elements from Sylow } p &\geq (p^2 + p^2 - p) - 1 = 2p^2-p -1
.\end{align*}
Then a count:
\begin{align*}
p^2(q-1) + (2p^2-p - 1) + 1
&= p^2 q- p^2 + 2p^2 -p \\
&= p^2 q + p^2 -p \\
&= p^2q + p(p-1) \\
&> p^2q = \# G
,\end{align*}
a contradiction since this inequality is strict provided
\(p\geq 2\). \(\contradiction\)
\end{itemize}
\end{itemize}
\end{solution}
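For instance, every group of order \(12 = 2^2\cdot 3\) fits this pattern:
\(A_4\) has a normal Sylow \(2\)-subgroup (the Klein four group), while the
remaining groups of order \(12\) each have a normal Sylow \(3\)-subgroup.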
\hypertarget{fall-2019-midterm-4-work}{%
\subsubsection{\texorpdfstring{Fall 2019 Midterm \#4
\(\work\)}{Fall 2019 Midterm \#4 \textbackslash work}}\label{fall-2019-midterm-4-work}}
Let \(p\) be a prime. Show that
\(S_p = \left\langle{\tau, \sigma}\right\rangle\) where \(\tau\) is a
transposition and \(\sigma\) is a \(p{\hbox{-}}\)cycle.
\hypertarget{groups-group-actions}{%
\section{Groups: Group Actions}\label{groups-group-actions}}
\hypertarget{fall-2012-1-work}{%
\subsection{\texorpdfstring{Fall 2012 \#1
\(\work\)}{Fall 2012 \#1 \textbackslash work}}\label{fall-2012-1-work}}
Let \(G\) be a finite group and \(X\) a set on which \(G\) acts.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Let \(x\in X\) and
\(G_x \coloneqq\left\{{g\in G {~\mathrel{\Big|}~}g\cdot x = x}\right\}\).
Show that \(G_x\) is a subgroup of \(G\).
\item
Let \(x\in X\) and
\(G\cdot x \coloneqq\left\{{g\cdot x {~\mathrel{\Big|}~}g\in G}\right\}\).
Prove that there is a bijection between elements in \(G\cdot x\) and
the left cosets of \(G_x\) in \(G\).
\end{enumerate}
\hypertarget{fall-2015-2-work}{%
\subsection{\texorpdfstring{Fall 2015 \#2
\(\work\)}{Fall 2015 \#2 \textbackslash work}}\label{fall-2015-2-work}}
Let \(G\) be a finite group, \(H\) a \(p{\hbox{-}}\)subgroup, and \(P\)
a sylow \(p{\hbox{-}}\)subgroup for \(p\) a prime. Let \(H\) act on the
left cosets of \(P\) in \(G\) by left translation.
Prove that there is an orbit of length 1 under this action.
Prove that \(xP\) is an orbit of length 1 \(\iff H\) is contained in
\(xPx^{-1}\).
\hypertarget{spring-2016-5-work}{%
\subsection{\texorpdfstring{Spring 2016 \#5
\(\work\)}{Spring 2016 \#5 \textbackslash work}}\label{spring-2016-5-work}}
Let \(G\) be a finite group acting on a set \(X\). For \(x\in X\), let
\(G_x\) be the stabilizer of \(x\) and \(G\cdot x\) be the orbit of
\(x\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that there is a bijection between the left cosets \(G/G_x\) and
\(G\cdot x\).
\item
Prove that the center of every finite \(p{\hbox{-}}\)group \(G\) is
nontrivial by considering that action of \(G\) on \(X=G\) by
conjugation.
\end{enumerate}
\hypertarget{fall-2017-1-work}{%
\subsection{\texorpdfstring{Fall 2017 \#1
\(\work\)}{Fall 2017 \#1 \textbackslash work}}\label{fall-2017-1-work}}
Suppose the group \(G\) acts on the set \(A\). Assume this action is
faithful (recall that this means that the kernel of the homomorphism
from \(G\) to \(\operatorname{Sym}(A)\) which gives the action is
trivial) and transitive (for all \(a, b\) in \(A\), there exists \(g\)
in \(G\) such that \(g \cdot a = b\).)
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
For \(a \in A\), let \(G_a\) denote the stabilizer of \(a\) in \(G\).
Prove that for any \(a \in A\),
\begin{align*}
\displaystyle\bigcap_{\sigma\in G} \sigma G_a \sigma^{-1}= \left\{{1}\right\}
.\end{align*}
\item
Suppose that \(G\) is abelian. Prove that \(|G| = |A|\). Deduce that
every abelian transitive subgroup of \(S_n\) has order \(n\).
\end{enumerate}
\hypertarget{fall-2018-2-done}{%
\subsection{\texorpdfstring{Fall 2018 \#2
\(\done\)}{Fall 2018 \#2 \textbackslash done}}\label{fall-2018-2-done}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Suppose the group \(G\) acts on the set \(X\) . Show that the
stabilizers of elements in the same orbit are conjugate.
\item
Let \(G\) be a finite group and let \(H\) be a proper subgroup. Show
that the union of the conjugates of \(H\) is strictly smaller than
\(G\), i.e.
\begin{align*}
\displaystyle\bigcup_{g\in G} gHg^{-1}\subsetneq G
\end{align*}
\item
Suppose \(G\) is a finite group acting transitively on a set \(S\)
with at least 2 elements. Show that there is an element of \(G\) with
no fixed points in \(S\).
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Orbit:
\(G\cdot x \coloneqq\left\{{g\cdot x {~\mathrel{\Big|}~}g\in G}\right\} \subseteq X\)
\item
Stabilizer:
\(G_x \coloneqq\left\{{g\in G{~\mathrel{\Big|}~}g\cdot x = x}\right\} \leq G\)
\item
Orbit-Stabilizer: \(G\cdot x \simeq G/G_x\).
\item
\(abc\in H \iff b\in a^{-1}H c^{-1}\)
\item
Set of orbits for \(G\curvearrowright X\), notated \(X/G\).
\item
Set of fixed points for \(G\curvearrowright X\), notated \(X^g\).
\item
Burnside's Lemma:
\({\left\lvert {X/G} \right\rvert} \cdot {\left\lvert {G} \right\rvert} = \sum_{g\in G} {\left\lvert {X^g} \right\rvert}\)
\begin{itemize}
\tightlist
\item
Number of orbits equals average number of fixed points.
\end{itemize}
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[of a]
\envlist
\begin{itemize}
\tightlist
\item
Fix \(x\), then \(y\in {\mathrm{Orb}}(x) \implies g\cdot x = y\) for
some \(g\), and \(x = g^{-1}\cdot y\).
\item
Then
\begin{align*}
h \in {\operatorname{Stab}}(x)
&\iff h\cdot x = x && \text{by being in the stabilizer} \\
&\iff h\cdot (g^{-1}\cdot y) = g^{-1}\cdot y \\
&\iff (g h g^{-1}) \cdot y = y \\
&\iff ghg^{-1}\in G_y && \text{by definition}\\
&\iff h\in g ^{-1} {\operatorname{Stab}}(y) g
,\end{align*}
so \({\operatorname{Stab}}(x) = g^{-1}{\operatorname{Stab}}(y) g\).
\end{itemize}
\end{proof}
\begin{proof}[of b]
Let \(G\) act on its subgroups by conjugation,
\begin{itemize}
\item
The orbit \(G\cdot H\) is the set of all subgroups conjugate to \(H\),
and
\item
The stabilizer of \(H\) is \(G_H = N_G(H)\).
\item
By orbit-stabilizer,
\begin{align*}
G\cdot H = [G: G_H] = [G: N_G(H)]
.\end{align*}
\item
  Write \({\left\lvert {H} \right\rvert} = n\); all of its conjugates
  also have order \(n\).
\item
Note that
\begin{align*}
H\leq N_G(H) \implies {\left\lvert {H} \right\rvert} \leq {\left\lvert {N_G(H)} \right\rvert} \implies {1\over {\left\lvert {N_G(H)} \right\rvert}} \leq {1\over {\left\lvert {H} \right\rvert}}
,\end{align*}
\item
  If \(H\) has only one conjugate (i.e.~\(H{~\trianglelefteq~}G\)), the
  union is just \(H\), which is proper by assumption. Otherwise there are
  at least two conjugates, which overlap at least in the identity, so we
  can \emph{strictly} bound the size of the union by overcounting their
  intersections at the identity:
\begin{align*}
{\left\lvert {\displaystyle\bigcup_{g\in G}gHg^{-1}} \right\rvert}
&< (\text{Number of Conjugates of } H) \cdot (\text{Size of each conjugate}) \\
& \text{strictly overcounts since they intersect in at least the identity} \\
&= [G: N_G(H)] {\left\lvert {H} \right\rvert} \\
&= {{\left\lvert {G} \right\rvert} \over {\left\lvert {N_G(H)} \right\rvert}} {\left\lvert {H} \right\rvert} \\
& \text{since $G$ is finite} \\
&\leq {{\left\lvert {G} \right\rvert} \over {\left\lvert {H} \right\rvert}} {\left\lvert {H} \right\rvert} \\
&= {\left\lvert {G} \right\rvert}
.\end{align*}
\end{itemize}
\end{proof}
\begin{proof}[of c]
\envlist
\begin{itemize}
\tightlist
\item
Let \(G\curvearrowright X\) transitively where
\({\left\lvert {X} \right\rvert} \geq 2\).
\item
An action is transitive iff there is only one orbit, so
\({\left\lvert {X/G} \right\rvert} = 1\).
\item
Apply Burnside's Lemma
\begin{align*}
1 = {\left\lvert {X/G} \right\rvert} = \frac{1}{{\left\lvert {G} \right\rvert}} \sum_{g\in G} {\left\lvert { \mathrm{Fix} (g)} \right\rvert} \implies {\left\lvert {G} \right\rvert} = \sum_{g\in G} {\left\lvert { \mathrm{Fix} (g)} \right\rvert} = \mathrm{Fix} (e) + \sum_{\substack{g\in G \\ g\neq e}} {\left\lvert { \mathrm{Fix} (g)} \right\rvert}
\end{align*}
\item
Note that \(\mathrm{Fix} (e) = X\), since the identity must fix every
element, so \({\left\lvert { \mathrm{Fix} (e)} \right\rvert} \geq 2\).
\item
If \({\left\lvert { \mathrm{Fix} (g)} \right\rvert} > 0\) for all
\(g\neq e\), the remaining term is at least
  \({\left\lvert {G} \right\rvert} -1\). But then the right-hand side
  is at least
\(2 + ({\left\lvert {G} \right\rvert} -1) = {\left\lvert {G} \right\rvert} + 1\),
contradicting the equality.
\item
So not every \({\left\lvert { \mathrm{Fix} (g)} \right\rvert} > 0\),
and \({\left\lvert { \mathrm{Fix} (g) } \right\rvert} = 0\) for some
\(g\), which says \(g\) has no fixed points in \(X\).
\end{itemize}
\end{proof}
\end{solution}
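As a quick illustration of part (c), let \(S_3\) act on
\(X = \left\{{1,2,3}\right\}\) in the usual way: the action is transitive,
and either \(3\)-cycle fixes no points. Note that the hypothesis
\({\left\lvert {S} \right\rvert} \geq 2\) is necessary, since every group
element fixes the unique point of a one-point set.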
\hypertarget{groups-sylow-theory}{%
\section{Groups: Sylow Theory}\label{groups-sylow-theory}}
\hypertarget{fall-2019-1-done}{%
\subsection{\texorpdfstring{Fall 2019 \#1
\(\done\)}{Fall 2019 \#1 \textbackslash done}}\label{fall-2019-1-done}}
Let \(G\) be a finite group with \(n\) distinct conjugacy classes. Let
\(g_1 \cdots g_n\) be representatives of the conjugacy classes of \(G\).
Prove that if \(g_i g_j = g_j g_i\) for all \(i, j\) then \(G\) is
abelian.
\begin{concept}
\envlist
\begin{itemize}
\item
\(Z(g) = G \iff g\in Z(G)\), i.e.~if the centralizer of \(g\) is the
whole group, \(g\) is central.
\item
If \(H\leq G\) is a \emph{proper} subgroup, then
  \(\displaystyle\bigcup_{g\in G} gHg^{-1}\) is again a proper subset,
  i.e.~\(G\) is not a union of conjugates of any proper
subgroup.
\item
So if \(G\) \emph{is} a union of conjugates of \(H\), then \(H\) must
not be proper, i.e.~\(H= G\).
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{itemize}
\tightlist
\item
  We have \(g_j \in Z(g_k)\) for all \(j, k\) by assumption.
\item
If we can show \(Z(g_k) = G\) for all \(k\), then \(g_k \in Z(G)\) for
all \(k\).
\begin{itemize}
\tightlist
\item
Then each conjugacy class is size 1, and since
\(G = {\textstyle\coprod}_{i=1}^n [g_i] = {\textstyle\coprod}_{i=1}^n \left\{{g_i}\right\}\),
every \(g\in G\) is some \(g_i\). So \(G \subseteq Z(G)\), forcing
\(G\) to be abelian.
\end{itemize}
\item
If we can show
  \(G \subseteq \displaystyle\bigcup_{h\in G} h Z(g_k) h^{-1}\) for some
\(k\), this forces \(Z(g_k) = G\) and \(g_k \in Z(G)\).
\begin{itemize}
\tightlist
\item
If we can do this for all \(k\), we're done!
\end{itemize}
\item
Since \(g\in G\) is in some conjugacy class, write \(g=hg_j h^{-1}\)
for some \(h\in G\) and some \(1\leq j\leq n\).
\item
Now use \(g_j \in Z(g_k)\) for all \(k\):
\begin{align*}
  g\in G &\implies g = hg_j h^{-1}&& \text{for some } h\in G \\
g_j \in Z(g_k) \forall k &\implies g\in hZ(g_k)h^{-1}&&\text{for some }h, \, \forall 1\leq k \leq n \\
&\implies g\in \displaystyle\bigcup_{h\in G} h Z(g_k) h^{-1}
&&\forall 1\leq k \leq n \\
.\end{align*}
\begin{itemize}
\tightlist
\item
Note that it's necessary to get rid of the \(h\) dependence, since
    now every \(g\in G\) is in
\(\displaystyle\bigcup_{h\in G} hZ(g_k)h^{-1}\).
\end{itemize}
\item
Now
\begin{align*}
  G \subseteq \displaystyle\bigcup_{h\in G} hZ(g_k)h^{-1} \subseteq G \,\,\forall k \implies Z(g_k) = G\,\, \forall k
,\end{align*}
and we're done.
\end{itemize}
\end{solution}
\hypertarget{fall-2019-midterm-2-work}{%
\subsection{\texorpdfstring{Fall 2019 Midterm \#2
\(\work\)}{Fall 2019 Midterm \#2 \textbackslash work}}\label{fall-2019-midterm-2-work}}
Let \(G\) be a finite group and let \(P\) be a sylow
\(p{\hbox{-}}\)subgroup for \(p\) prime. Show that \(N(N(P)) = N(P)\)
where \(N\) is the normalizer in \(G\).
\hypertarget{fall-2013-2-work}{%
\subsection{\texorpdfstring{Fall 2013 \#2
\(\work\)}{Fall 2013 \#2 \textbackslash work}}\label{fall-2013-2-work}}
Let \(G\) be a group of order 30.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(G\) has a subgroup of order 15.
\item
Show that every group of order 15 is cyclic.
\item
Show that \(G\) is isomorphic to some semidirect product
\({\mathbb{Z}}_{15} \rtimes{\mathbb{Z}}_2\).
\item
Exhibit three nonisomorphic groups of order 30 and prove that they are
not isomorphic. You are not required to use your answer to (c).
\end{enumerate}
\hypertarget{spring-2014-2-work}{%
\subsection{\texorpdfstring{Spring 2014 \#2
\(\work\)}{Spring 2014 \#2 \textbackslash work}}\label{spring-2014-2-work}}
Let \(G\subset S_9\) be a Sylow-3 subgroup of the symmetric group on 9
letters.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(G\) contains a subgroup \(H\) isomorphic to
\({\mathbb{Z}}_3 \times{\mathbb{Z}}_3 \times{\mathbb{Z}}_3\) by
exhibiting an appropriate set of cycles.
\item
Show that \(H\) is normal in \(G\).
\item
Give generators and relations for \(G\) as an abstract group, such
that all generators have order 3. Also exhibit elements of \(S_9\) in
cycle notation corresponding to these generators.
\item
Without appealing to the previous parts of the problem, show that
\(G\) contains an element of order 9.
\end{enumerate}
\hypertarget{fall-2014-2-work}{%
\subsection{\texorpdfstring{Fall 2014 \#2
\(\work\)}{Fall 2014 \#2 \textbackslash work}}\label{fall-2014-2-work}}
Let \(G\) be a group of order 96.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(G\) has either one or three 2-Sylow subgroups.
\item
Show that either \(G\) has a normal subgroup of order 32, or a normal
subgroup of order 16.
\end{enumerate}
\hypertarget{spring-2016-3-work}{%
\subsection{\texorpdfstring{Spring 2016 \#3
\(\work\)}{Spring 2016 \#3 \textbackslash work}}\label{spring-2016-3-work}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
State the three Sylow theorems.
\item
Prove that any group of order 1225 is abelian.
\item
Write down exactly one representative in each isomorphism class of
abelian groups of order 1225.
\end{enumerate}
\hypertarget{spring-2017-2-work}{%
\subsection{\texorpdfstring{Spring 2017 \#2
\(\work\)}{Spring 2017 \#2 \textbackslash work}}\label{spring-2017-2-work}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
How many isomorphism classes of abelian groups of order 56 are there?
Give a representative for one of each class.
\item
Prove that if \(G\) is a group of order 56, then either the Sylow-2
subgroup or the Sylow-7 subgroup is normal.
\item
Give two non-isomorphic groups of order 56 where the Sylow-7 subgroup
is normal and the Sylow-2 subgroup is \emph{not} normal. Justify that
these two groups are not isomorphic.
\end{enumerate}
\hypertarget{fall-2017-2-work}{%
\subsection{\texorpdfstring{Fall 2017 \#2
\(\work\)}{Fall 2017 \#2 \textbackslash work}}\label{fall-2017-2-work}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Classify the abelian groups of order 36.
\begin{quote}
For the rest of the problem, assume that \(G\) is a non-abelian group
of order 36. You may assume that the only subgroup of order 12 in
\(S_4\) is \(A_4\) and that \(A_4\) has no subgroup of order 6.
\end{quote}
\item
Prove that if the 2-Sylow subgroup of \(G\) is normal, \(G\) has a
normal subgroup \(N\) such that \(G/N\) is isomorphic to \(A_4\).
\item
Show that if \(G\) has a normal subgroup \(N\) such that \(G/N\) is
isomorphic to \(A_4\) and a subgroup \(H\) isomorphic to \(A_4\) it
must be the direct product of \(N\) and \(H\).
\item
Show that the dihedral group of order 36 is a non-abelian group of
order 36 whose Sylow-2 subgroup is not normal.
\end{enumerate}
\hypertarget{fall-2012-2-work}{%
\subsection{\texorpdfstring{Fall 2012 \#2
\(\work\)}{Fall 2012 \#2 \textbackslash work}}\label{fall-2012-2-work}}
Let \(G\) be a group of order 30.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(G\) contains normal subgroups of orders 3, 5, and 15.
\item
Give all possible presentations and relations for \(G\).
\item
Determine how many groups of order 30 there are up to isomorphism.
\end{enumerate}
\hypertarget{fall-2018-1-done}{%
\subsection{\texorpdfstring{Fall 2018 \#1
\(\done\)}{Fall 2018 \#1 \textbackslash done}}\label{fall-2018-1-done}}
Let \(G\) be a finite group whose order is divisible by a prime number
\(p\). Let \(P\) be a normal \(p{\hbox{-}}\)subgroup of \(G\) (so
\({\left\lvert {P} \right\rvert} = p^c\) for some \(c\)).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(P\) is contained in every Sylow \(p{\hbox{-}}\)subgroup of
\(G\).
\item
Let \(M\) be a maximal proper subgroup of \(G\). Show that either
\(P \subseteq M\) or \(|G/M | = p^b\) for some \(b \leq c\).
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Sylow 2: All Sylow \(p{\hbox{-}}\)subgroups are conjugate.
\item
\({\left\lvert {HK} \right\rvert} = {\left\lvert {H} \right\rvert} {\left\lvert {K} \right\rvert} / {\left\lvert {H\cap K} \right\rvert}\).
\item
Lagrange's Theorem:
\(H\leq G \implies {\left\lvert {H} \right\rvert} \divides {\left\lvert {G} \right\rvert}\)
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[of a]
\envlist
\begin{itemize}
\item
Every \(p{\hbox{-}}\)subgroup is contained in some Sylow
\(p{\hbox{-}}\)subgroup, so \(P \subseteq S_p^i\) for some
\(S_p^i \in \mathrm{Syl}_p(G)\).
\item
\(P {~\trianglelefteq~}G \iff gPg^{-1}= P\) for all \(g\in G\).
\item
Let \(S_p^j\) be any other Sylow \(p{\hbox{-}}\)subgroup,
\item
Since Sylow \(p{\hbox{-}}\)subgroups are all conjugate
\(gS_p^i g^{-1}= S_p^j\) for \emph{some} \(g\in G\).
\item
Then
\begin{align*}
P = gPg^{-1}\subseteq gS_p^i g^{-1}= S_p^j
.\end{align*}
\end{itemize}
\end{proof}
\begin{proof}[of b]
\envlist
\begin{itemize}
\item
If \(P\) is not contained in \(M\), then \(M < MP\) is a proper
subgroup
\item
By maximality of \(M\), \(MP = G\)
\item
Note that \(M\cap P \leq P\) and
\({\left\lvert {P} \right\rvert} = p^c\) implies
\({\left\lvert {M\cap P} \right\rvert} = p^a\) for some \(a\leq c\) by
Lagrange
\item
Then write
\begin{align*}
G = MP
&\iff {\left\lvert {G} \right\rvert} = \frac{{\left\lvert {M} \right\rvert} {\left\lvert {P} \right\rvert}}{{\left\lvert {M\cap P} \right\rvert}} \\ \\
&\iff { {\left\lvert {G} \right\rvert} \over {\left\lvert {M} \right\rvert}} = {{\left\lvert {P} \right\rvert} \over {\left\lvert {M\cap P} \right\rvert}} = {p^c \over p^a} = p^{c-a} \coloneqq p^b
\end{align*}
  where \(a\leq c \implies 0 \leq c-a \leq c\), so \(0\leq b \leq c\).
\end{itemize}
\end{proof}
\end{solution}
\hypertarget{fall-2019-2-done}{%
\subsection{\texorpdfstring{Fall 2019 \#2
\(\done\)}{Fall 2019 \#2 \textbackslash done}}\label{fall-2019-2-done}}
Let \(G\) be a group of order 105 and let \(P, Q, R\) be Sylow 3, 5, 7
subgroups respectively.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that at least one of \(Q\) and \(R\) is normal in \(G\).
\item
Prove that \(G\) has a cyclic subgroup of order 35.
\item
Prove that both \(Q\) and \(R\) are normal in \(G\).
\item
Prove that if \(P\) is normal in \(G\) then \(G\) is cyclic.
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\item
The \(pqr\) theorem.
\item
Sylow 3: \({\left\lvert {G} \right\rvert} = p^n m\) implies
\(n_p \divides m\) and \(n_p \cong 1 \operatorname{mod}p\).
\item
\textbf{Theorem}: If \(H, K \leq G\) and any of the following
conditions hold, \(HK\) is a subgroup:
\begin{itemize}
\tightlist
\item
\(H{~\trianglelefteq~}G\) (wlog)
\item
\([H, K] = 1\)
\item
\(H \leq N_G(K)\)
\end{itemize}
\item
\textbf{Theorem}: For a positive integer \(n\), all groups of order
\(n\) are cyclic \(\iff n\) is squarefree and, for each pair of
distinct primes \(p\) and \(q\) dividing \(n\),
\(q - 1 \neq 0 \operatorname{mod}p\).
\item
\textbf{Theorem:}
\begin{align*}
  A_i{~\trianglelefteq~}G, \quad G = A_1 \cdots A_k,\quad A_k \cap\prod_{i\neq k} A_i = \left\{{e}\right\} \implies G = \prod A_i
.\end{align*}
\item
The intersection of subgroups is a again a subgroup.
\item
  Any two subgroups of coprime order intersect trivially, since the order
  of their intersection divides both orders by Lagrange.
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[of 1]
\envlist
\begin{itemize}
\item
We have
\item
\(n_3 \divides 5\cdot 7, \quad n_3 \cong 1 \operatorname{mod}3 \implies n_3 \in \left\{{1, 5, 7, 35}\right\} \setminus \left\{{5, 35}\right\}\)
\item
\(n_5 \divides 3\cdot 7, \quad n_5 \cong 1 \operatorname{mod}5 \implies n_5 \in \left\{{1, 3, 7, 21}\right\}\setminus \left\{{3, 7}\right\}\)
\item
\(n_7 \divides 3\cdot 5, \quad n_7 \cong 1 \operatorname{mod}7 \implies n_7 \in \left\{{1, 3, 5, 15}\right\}\setminus\left\{{3, 5}\right\}\)
\item
Thus
\begin{align*}
n_3 \in \left\{{1, 7}\right\} \quad n_5 \in \left\{{1, 21}\right\} \quad n_7 \in \left\{{1, 15}\right\}
.\end{align*}
\item
Toward a contradiction, if \(n_5\neq 1\) and \(n_7 \neq 1\), then
\begin{align*}
  {\left\lvert {{\operatorname{Syl}}(5) \cup{\operatorname{Syl}}(7)} \right\rvert} = (5-1)n_5 + (7-1)n_7 + 1
  &= 4(21) + 6(15) + 1 = 175 > 105 \text{ elements}
  \end{align*}

  using that distinct subgroups of prime order intersect trivially (their
  intersection is a proper subgroup of a group of prime order), and that
  subgroups of coprime orders intersect trivially by Lagrange. \(\contradiction\)
\end{itemize}
\end{proof}
\begin{proof}[of 2]
\envlist
\begin{itemize}
\tightlist
\item
By (a), either \(Q\) or \(R\) is normal.
\item
Thus \(QR \leq G\) is a subgroup, and it has order
\({\left\lvert {Q} \right\rvert} \cdot {\left\lvert {R} \right\rvert} = 5\cdot 7 = 35\).
\item
By the \(pqr\) theorem, since \(5\) does not divide \(7-1=6\), \(QR\)
is cyclic.
\end{itemize}
\end{proof}
\todo[inline]{Part (b) not finished!}
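One way to fill the remaining gap without citing the \(pqr\) theorem (a sketch): apply Sylow theory inside \(QR\), which has order \(35 = 5\cdot 7\):
\begin{align*}
n_5 \divides 7,\quad n_5 \cong 1 \operatorname{mod}5 &\implies n_5 = 1, \\
n_7 \divides 5,\quad n_7 \cong 1 \operatorname{mod}7 &\implies n_7 = 1
,\end{align*}
so both Sylow subgroups of \(QR\) are normal, intersect trivially, and their product has order \(35\). Thus \(QR \cong {\mathbb{Z}}_5 \times{\mathbb{Z}}_7 \cong {\mathbb{Z}}_{35}\) by the Chinese Remainder Theorem, giving the desired cyclic subgroup of order 35.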
\begin{proof}[of 3]
\envlist
\begin{itemize}
\item
We want to show \(Q, R{~\trianglelefteq~}G\), so we proceed by showing
\(\textbf{not }\qty{n_5 = 21 \text{ or } n_7 = 15}\), which is
equivalent to \(\qty{n_5 = 1 \text{ and } n_7 = 1}\) by the previous
restrictions.
\item
Note that we can write
\begin{align*}
G = \left\{{\text{elements of order } n}\right\} {\textstyle\coprod}\left\{{\text{elements of order not } n}\right\}
.\end{align*}
for any \(n\), so we count for \(n=5, 7\):
\begin{itemize}
\tightlist
\item
    Elements in \(QR\) of order \textbf{not} equal to 5: the elements of
    order 5 in \(QR\) are exactly the nonidentity elements of \(Q\), so the
    count is
    \({\left\lvert {QR} \right\rvert} - ({\left\lvert {Q} \right\rvert} - 1) = 35 - 4 = 31\).
  \item
    Elements in \(QR\) of order \textbf{not} equal to 7: similarly
    \({\left\lvert {QR} \right\rvert} - ({\left\lvert {R} \right\rvert} - 1) = 35 - 6 = 29\).
\end{itemize}
\item
Since \(QR \leq G\), we have
\begin{itemize}
\tightlist
\item
Elements in \(G\) of order \textbf{not} equal to 5 \(\geq 31\).
\item
Elements in \(G\) of order \textbf{not} equal to 7 \(\geq 29\).
\end{itemize}
\item
Now both cases lead to contradictions:
\begin{itemize}
\item
\(n_5 = 21\):
\begin{align*}
{\left\lvert {G} \right\rvert} &= {\left\lvert {\left\{{\text{elements of order } 5}\right\} {\textstyle\coprod}\left\{{\text{elements of order not } 5}\right\}} \right\rvert} \\
&\geq n_5(5-1) + 31 = 21(4) + 31 = 115 > 105 = {\left\lvert {G} \right\rvert}
.\end{align*}
\item
\(n_7 = 15\):
\begin{align*}
{\left\lvert {G} \right\rvert} &= {\left\lvert {\left\{{\text{elements of order } 7}\right\} {\textstyle\coprod}\left\{{\text{elements of order not } 7}\right\}} \right\rvert} \\
&\geq n_7(7-1) + 29 = 15(6) + 29 = 119 > 105 = {\left\lvert {G} \right\rvert}
.\end{align*}
\end{itemize}
\end{itemize}
\end{proof}
\begin{proof}[of 4]
Suppose \(P\) is normal and recall
\({\left\lvert {P} \right\rvert} = 3, {\left\lvert {Q} \right\rvert} = 5, {\left\lvert {R} \right\rvert} = 7\).
\begin{itemize}
\tightlist
\item
\(P\cap QR = \left\{{e}\right\}\) since \((3, 35) = 1\)
\item
\(R\cap PQ = \left\{{e}\right\}\) since \((5, 21) = 1\)
\item
\(Q\cap RP = \left\{{e}\right\}\) since \((7, 15) = 1\)
\end{itemize}
We also have \(PQR = G\), since \(P \cap QR = \left\{{e}\right\}\) gives
\begin{align*}
{\left\lvert {PQR} \right\rvert} = \frac{{\left\lvert {P} \right\rvert} {\left\lvert {QR} \right\rvert}}{{\left\lvert {P \cap QR} \right\rvert}} = \frac{3\cdot 35}{1} = 105 = {\left\lvert {G} \right\rvert}
.\end{align*}
We thus have an internal direct product
\begin{align*}
G \cong P\times Q \times R \cong {\mathbb{Z}}_3 \times{\mathbb{Z}}_5 \times{\mathbb{Z}}_7 \cong {\mathbb{Z}}_{105}
.\end{align*}
by the Chinese Remainder Theorem, which is cyclic.
\end{proof}
\end{solution}
\hypertarget{spring-2021-3-work}{%
\subsection{\texorpdfstring{Spring 2021 \#3
\(\work\)}{Spring 2021 \#3 \textbackslash work}}\label{spring-2021-3-work}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that every group of order \(p^2\) with \(p\) prime is abelian.
\item
State the 3 Sylow theorems.
\item
Show that any group of order \(4225 = 5^2 13^2\) is abelian.
\item
Write down one representative from each isomorphism class of abelian
groups of order 4225.
\end{enumerate}
\hypertarget{fall-2020-1-work}{%
\subsection{\texorpdfstring{Fall 2020 \#1
\(\work\)}{Fall 2020 \#1 \textbackslash work}}\label{fall-2020-1-work}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Using Sylow theory, show that every group of order \(2p\) where \(p\)
is prime is not simple.
\item
Classify all groups of order \(2p\) and justify your answer. For the
nonabelian group(s), give a presentation by generators and relations.
\end{enumerate}
\hypertarget{fall-2020-2-work}{%
\subsection{\texorpdfstring{Fall 2020 \#2
\(\work\)}{Fall 2020 \#2 \textbackslash work}}\label{fall-2020-2-work}}
Let \(G\) be a group of order 60 whose Sylow 3-subgroup is normal.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that \(G\) is solvable.
\item
Prove that the Sylow 5-subgroup is also normal.
\end{enumerate}
\hypertarget{groups-classification}{%
\section{Groups: Classification}\label{groups-classification}}
\hypertarget{spring-2020-1-work}{%
\subsection{\texorpdfstring{Spring 2020 \#1
\(\work\)}{Spring 2020 \#1 \textbackslash work}}\label{spring-2020-1-work}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that any group of order 2020 is solvable.
\item
Give (without proof) a classification of all abelian groups of order
2020.
\item
Describe one nonabelian group of order 2020.
\end{enumerate}
\todo[inline]{Work this problem.}
\hypertarget{spring-2019-3-done}{%
\subsection{\texorpdfstring{Spring 2019 \#3
\(\done\)}{Spring 2019 \#3 \textbackslash done}}\label{spring-2019-3-done}}
How many isomorphism classes are there of groups of order 45?
Describe a representative from each class.
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Sylow theorems:
\item
\(n_p \cong 1 \operatorname{mod}p\)
\item
\(n_p \divides m\).
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{itemize}
\item
  Write \(S_p\) for a Sylow \(p{\hbox{-}}\)subgroup. Since \(45 = 3^2\cdot 5\),
  Sylow gives \(n_3 \divides 5\) with \(n_3 \cong 1 \operatorname{mod}3\),
  forcing \(n_3 = 1\), and \(n_5 \divides 9\) with
  \(n_5 \cong 1 \operatorname{mod}5\), forcing \(n_5 = 1\). So both Sylow
  subgroups are normal, they intersect trivially (coprime orders), and
  \({\left\lvert {S_3 S_5} \right\rvert} = 45\), so \(G \cong S_3 \times S_5\).
\item
There is only one possibility for \(S_5\), namely
\(S_5\cong {\mathbb{Z}}/(5)\).
\item
There are two possibilities for \(S_3\), namely
\(S_3 \cong {\mathbb{Z}}/(3^2)\) and \({\mathbb{Z}}/(3)^2\).
\item
Thus
\item
\(G \cong {\mathbb{Z}}/(9) \times{\mathbb{Z}}/(5)\), or
\item
\(G \cong {\mathbb{Z}}/(3)^2 \times{\mathbb{Z}}/(5)\).
\end{itemize}
\end{solution}
\todo[inline]{Revisit, seems short.}
\hypertarget{spring-2012-3-work}{%
\subsection{\texorpdfstring{Spring 2012 \#3
\(\work\)}{Spring 2012 \#3 \textbackslash work}}\label{spring-2012-3-work}}
Let \(G\) be a group of order 70.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(G\) is not simple.
\item
Exhibit 3 nonisomorphic groups of order 70 and prove that they are not
isomorphic.
\end{enumerate}
\hypertarget{fall-2016-3-work}{%
\subsection{\texorpdfstring{Fall 2016 \#3
\(\work\)}{Fall 2016 \#3 \textbackslash work}}\label{fall-2016-3-work}}
How many groups are there up to isomorphism of order \(pq\) where
\(p<q\) are prime integers?
\hypertarget{spring-2018-1-done}{%
\subsection{\texorpdfstring{Spring 2018 \#1
\(\done\)}{Spring 2018 \#1 \textbackslash done}}\label{spring-2018-1-done}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Use the Class Equation (equivalently, the conjugation action of a
group on itself) to prove that any \(p{\hbox{-}}\)group (a group whose
order is a positive power of a prime integer \(p\)) has a nontrivial
center.
\item
Prove that any group of order \(p^2\) (where \(p\) is prime) is
abelian.
\item
Prove that any group of order \(5^2 \cdot 7^2\) is abelian.
\item
Write down exactly one representative in each isomorphism class of
groups of order \(5^2 \cdot 7^2\).
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\item
Centralizer:
  \(C_G(x) = \left\{{g\in G {~\mathrel{\Big|}~}[g, x] = 1}\right\}\), the set of elements commuting with \(x\).
\item
Class Equation:
\({\left\lvert {G} \right\rvert} = {\left\lvert {Z(G)} \right\rvert} + \sum [G: C_G(x_i)]\)
\item
\(G/Z(G)\) cyclic \(\iff G\) is abelian.
\begin{align*}
G/Z(G) = \left\langle{xZ}\right\rangle
&\iff g\in G \implies gZ = x^mZ \\
&\iff g(x^m)^{-1}\in Z \\
&\iff g = x^m z {\quad \operatorname{for some} \quad}z\in Z\\
&\implies gh = x^mz_1 x^n z_2 = x^n z_2 x^m z_1 = hg
.\end{align*}
\item
Every group of order \(p^2\) is abelian.
\item
Classification of finite abelian groups.
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[of a]
Strategy: get \(p\) to divide \({\left\lvert {Z(G)} \right\rvert}\).
\begin{itemize}
\item
Apply the class equation:
\begin{align*}
{\left\lvert {G} \right\rvert} = {\left\lvert {Z(G)} \right\rvert} + \sum [G: C_G(x_i)]
.\end{align*}
\item
Since \(C_G(x_i) \leq G\) and
\({\left\lvert {G} \right\rvert} = p^k\), by Lagrange
\({\left\lvert {C_G(x_i)} \right\rvert} = p^\ell\) for some
\(0\leq \ell \leq k\).
\item
Since \({\left\lvert {G} \right\rvert} = p^k\) for some \(k\) and
\(Z(G), C_G(x_i) \leq G\) are subgroups, their orders are powers of
\(p\).
\item
Use
\begin{align*}[G: C_G(x_i)] = 1 \iff C_G(x_i) = G \iff \left\{{g\in G{~\mathrel{\Big|}~}gx_ig^{-1}= x_i}\right\} = G \iff x_i \in Z(G).\end{align*}
\begin{itemize}
\tightlist
\item
Thus every index appearing in the sum is greater than 1, and thus
equal to \(p^{\ell_i}\) for some \(1\leq \ell_i \leq k\)
\item
So \(p\) divides every term in the sum
\end{itemize}
\item
Rearrange
\begin{align*}
{\left\lvert {G} \right\rvert} - \sum [G: C_G(x_i)]
= {\left\lvert {Z(G)} \right\rvert}
.\end{align*}
\item
\(p\) divides both terms on the LHS, so must divide the RHS, so
\({\left\lvert {Z(G)} \right\rvert} \geq p\).
\end{itemize}
\end{proof}
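As a quick sanity check of (a) (an illustration only, not part of the argument): for the dihedral group of order \(8 = 2^3\), with presentation \(\left\langle{r, s {~\mathrel{\Big|}~}r^4 = s^2 = 1,\, srs = r^{-1}}\right\rangle\), the conjugacy classes are \(\left\{{1}\right\}, \left\{{r^2}\right\}, \left\{{r, r^3}\right\}, \left\{{s, r^2 s}\right\}, \left\{{rs, r^3 s}\right\}\), so the class equation reads
\begin{align*}
8 = \underbrace{1 + 1}_{{\left\lvert {Z} \right\rvert}} + 2 + 2 + 2
,\end{align*}
and indeed the center \(\left\{{1, r^2}\right\}\) has order \(2 \geq p\).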
\begin{proof}[of b]
Strategy: examine \({\left\lvert {G/Z(G)} \right\rvert}\) by cases.
\begin{itemize}
\tightlist
\item
\(1\): Then \(G = Z(G)\) and \(G\) is abelian.
\item
\(p\): Then \(G/Z(G)\) is cyclic so \(G\) is abelian
\item
\(p^2\): Not possible, since \({\left\lvert {Z(G)} \right\rvert} > 1\)
by (a).
\end{itemize}
\end{proof}
\begin{proof}[of c]
\envlist
\begin{itemize}
\item
By Sylow
\begin{itemize}
\tightlist
\item
\(n_5 \divides 7^2,\quad n_5\cong 1\operatorname{mod}5 \implies n_5\in\left\{{1, 7, 49}\right\}\setminus\left\{{7, 49}\right\} = \left\{{1}\right\} \implies n_5 = 1\)
\item
\(n_7 \divides 5^2, \quad n_7 \cong 1 \operatorname{mod}7 \implies n_7 \in \left\{{1, 5, 25}\right\}\setminus\left\{{5, 25}\right\} =\left\{{1}\right\} \implies n_7 = 1\)
\end{itemize}
\item
By recognition of direct products, \(G = S_5 \times S_7\)
\begin{itemize}
\tightlist
\item
By above, \(S_5, S_7{~\trianglelefteq~}G\)
\item
Check \(S_5\cap S_7 = \left\{{e}\right\}\) since they have coprime
order.
\item
Check \(S_5S_7 = G\) since
\({\left\lvert {S_5 S_7} \right\rvert} = 5^2 7^2 = {\left\lvert {G} \right\rvert}\)
\end{itemize}
\item
By (b), \(S_5, S_7\) are abelian since they are groups of order
\(p^2\)
\item
The direct product of abelian groups is abelian.
\end{itemize}
\end{proof}
\begin{proof}[of d]
\envlist
\begin{itemize}
\tightlist
\item
\({\mathbb{Z}}_{5^2} \times{\mathbb{Z}}_{7^2}\)
\item
\({\mathbb{Z}}_{5}^2 \times{\mathbb{Z}}_{7^2}\)
\item
\({\mathbb{Z}}_{5^2} \times{\mathbb{Z}}_{7}^2\)
\item
\({\mathbb{Z}}_{5}^2 \times{\mathbb{Z}}_{7}^2\)
\end{itemize}
\end{proof}
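(A quick consistency check, using the classification of finite abelian groups: the isomorphism classes of abelian groups of order \(p^2\) correspond to the partitions of \(2\), namely \(2 \leftrightarrow {\mathbb{Z}}_{p^2}\) and \(1+1 \leftrightarrow {\mathbb{Z}}_{p}^2\), so the number of classes of order \(5^2 7^2\) is \(2\cdot 2 = 4\), matching the list above.)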
\end{solution}
\hypertarget{groups-simple-and-solvable}{%
\section{Groups: Simple and Solvable}\label{groups-simple-and-solvable}}
\hypertarget{star-fall-2016-7-work}{%
\subsection{\texorpdfstring{\(\star\) Fall 2016 \#7
\(\work\)}{\textbackslash star Fall 2016 \#7 \textbackslash work}}\label{star-fall-2016-7-work}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Define what it means for a group \(G\) to be \emph{solvable}.
\item
Show that every group \(G\) of order 36 is solvable.
\end{enumerate}
\begin{quote}
Hint: you can use that \(S_4\) is solvable.
\end{quote}
\hypertarget{spring-2015-4-work}{%
\subsection{\texorpdfstring{Spring 2015 \#4
\(\work\)}{Spring 2015 \#4 \textbackslash work}}\label{spring-2015-4-work}}
Let \(N\) be a positive integer, and let \(G\) be a finite group of
order \(N\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Let \(\operatorname{Sym}G\) be the set of all bijections from
\(G\to G\) viewed as a group under composition. Note that
\(\operatorname{Sym}G \cong S_N\). Prove that the Cayley map
\begin{align*}
C: G&\to \operatorname{Sym}G\\
g &\mapsto (x\mapsto gx)
\end{align*}
is an injective homomorphism.
\item
Let \(\Phi: \operatorname{Sym}G\to S_N\) be an isomorphism. For
\(a\in G\) define \({\varepsilon}(a) \in \left\{{\pm 1}\right\}\) to
be the sign of the permutation \(\Phi(C(a))\). Suppose that \(a\) has
order \(d\). Prove that \({\varepsilon}(a) = -1 \iff d\) is even and
\(N/d\) is odd.
\item
  Suppose \(N> 2\) and \(N\equiv 2 \operatorname{mod}4\). Prove that
\(G\) is not simple.
\end{enumerate}
\begin{quote}
Hint: use part (b).
\end{quote}
\hypertarget{spring-2014-1-work}{%
\subsection{\texorpdfstring{Spring 2014 \#1
\(\work\)}{Spring 2014 \#1 \textbackslash work}}\label{spring-2014-1-work}}
Let \(p, n\) be integers such that \(p\) is prime and \(p\) does not
divide \(n\). Find a real number \(k = k (p, n)\) such that for every
integer \(m\geq k\), every group of order \(p^m n\) is not simple.
\hypertarget{fall-2013-1-work}{%
\subsection{\texorpdfstring{Fall 2013 \#1
\(\work\)}{Fall 2013 \#1 \textbackslash work}}\label{fall-2013-1-work}}
Let \(p, q\) be distinct primes.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Let
\(\mkern 1.5mu\overline{\mkern-1.5muq\mkern-1.5mu}\mkern 1.5mu \in {\mathbb{Z}}_p\)
be the class of \(q\operatorname{mod}p\) and let \(k\) denote the
order of
\(\mkern 1.5mu\overline{\mkern-1.5muq\mkern-1.5mu}\mkern 1.5mu\) as an
element of \({\mathbb{Z}}_p^{\times}\). Prove that no group of order
\(pq^k\) is simple.
\item
Let \(G\) be a group of order \(pq\), and prove that \(G\) is not
simple.
\end{enumerate}
\hypertarget{spring-2013-4-work}{%
\subsection{\texorpdfstring{Spring 2013 \#4
\(\work\)}{Spring 2013 \#4 \textbackslash work}}\label{spring-2013-4-work}}
Define a \emph{simple group}. Prove that a group of order 56 can not be
simple.
\hypertarget{fall-2019-midterm-3-work}{%
\subsection{\texorpdfstring{Fall 2019 Midterm \#3
\(\work\)}{Fall 2019 Midterm \#3 \textbackslash work}}\label{fall-2019-midterm-3-work}}
Show that there exist no simple groups of order 148.
\hypertarget{commutative-algebra}{%
\section{Commutative Algebra}\label{commutative-algebra}}
\hypertarget{ufds-pids-etc}{%
\subsection{UFDs, PIDs, etc}\label{ufds-pids-etc}}
\hypertarget{spring-2013-2-done}{%
\subsubsection{\texorpdfstring{Spring 2013 \#2
\(\done\)}{Spring 2013 \#2 \textbackslash done}}\label{spring-2013-2-done}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Define a \emph{Euclidean domain}.
\item
Define a \emph{unique factorization domain}.
\item
  Is a Euclidean domain a UFD? Give either a proof or a counterexample
with justification.
\item
Is a UFD a Euclidean domain? Give either a proof or a counterexample
with justification.
\end{enumerate}
\begin{solution}
\envlist
\begin{itemize}
\item
\(R\) is Euclidean iff it admits a Euclidean algorithm: there is a
degree function \(f: R\to {\mathbb{Z}}_{\geq 0}\) such that for all
  \(a,b\in R\) with \(b\neq 0\), there exist \(q, r\in R\) such that \(a = bq + r\) where
\(f(r) <f(b)\) or \(r=0\).
\item
\(R\) is a UFD iff every \(r\in R\) can be written as
\(r = u \prod_{i=1}^n p_i\) with \(n\geq 0\), \(u\in R^{\times}\), and
\(p_i\) irreducible. This is unique up to associates of the \(p_i\)
and reordering.
\item
Euclidean implies UFD:
\begin{itemize}
\tightlist
\item
Euclidean implies PID:
\begin{itemize}
\tightlist
\item
        If \(0 \neq I \in \operatorname{Id}(R)\), one can use the degree function
        to find some \(b \in I\setminus\left\{{0}\right\}\) with \(f(b)\) minimal
        (the zero ideal is already principal).
\item
Then \(I = \left\langle{b}\right\rangle\), since if \(a\in I\) one
can write \(a = bq + r\) and use that
\(a-bq \in I \implies r\in I\).
\item
        But by minimality, we can't have \(f(r)<f(b)\), so \(r=0\) and
        \(b \divides a\), so \(a\in \left\langle{b}\right\rangle\).
\end{itemize}
\item
PID implies UFD:
\begin{itemize}
\tightlist
\item
        Use that a PID is Noetherian, so the ascending chain condition on
        (principal) ideals guarantees that every \(x\in R\) factors into
        finitely many irreducibles, and that irreducible implies prime in a
        PID; thus every \(x\in R\) has some factorization into finitely many
        primes.
\item
Supposing \(x = u_p \prod_{i=1}^m p_i = u_q \prod_{i=1}^n q_i\),
use that \(p_1\) divides the LHS and so \(p_1\) divides the RHS.
        WLOG, \(p_1\divides q_1\), so \(q_1 = u_1 p_1\) for some
        \(u_1\in R^{\times}\), so \(x = u_q u_1 p_1 \prod_{i=2}^n q_i\) by
rewriting a term on the RHS.
\item
Note that this makes \(p_1, q_1\) associates.
\item
Continuing up to \(m\), we get
\begin{align*}
x
&= u_p \prod_{i=1}^m p_i \\
&=
u_q \prod_{i=1}^m u_i p_i \prod_{k=m+1}^n q_i \\
\implies
u_p
&= u_q \prod_{i=1}^m u_i \prod_{k=m+1}^n q_i \\
\tilde u
&= \prod_{k=m+1}^n q_i
,\end{align*}
where we've moved all units to the LHS. This makes \(p_i, q_i\)
associates for \(i\leq m\).
\item
But primes aren't units and the product of nontrivial primes can't
be a unit, so the right-hand side product must be empty.
\item
So \(m=n\) and all \(p_i, q_i\) are associate, QED.
\end{itemize}
\end{itemize}
\item
UFD does not imply Euclidean:
\begin{itemize}
\tightlist
\item
It suffices to find a UFD that is not a PID.
\item
      Take \(R \coloneqq{\mathbb{C}}[x, y]\), which is a UFD since
      \({\mathbb{C}}\) is a UFD and polynomial rings over UFDs are UFDs. It is
      not a PID, since \(\left\langle{x, y}\right\rangle\) is not principal
      (see Fall 2014 \#7 below), and hence it is not Euclidean.
\end{itemize}
\end{itemize}
\end{solution}
\hypertarget{fall-2017-6-done}{%
\subsubsection{\texorpdfstring{Fall 2017 \#6
\(\done\)}{Fall 2017 \#6 \textbackslash done}}\label{fall-2017-6-done}}
For a ring \(R\), let \(U(R)\) denote the multiplicative group of units
in \(R\). Recall that in an integral domain \(R\), \(r \in R\) is called
\emph{irreducible} if \(r\) is not a unit in R, and the only divisors of
\(r\) have the form \(ru\) with \(u\) a unit in \(R\).
We call a non-zero, non-unit \(r \in R\) \emph{prime} in \(R\) if
\(r \divides ab \implies r \divides a\) or \(r \divides b\). Consider
the ring \(R = \{a + b \sqrt{-5}{~\mathrel{\Big|}~}a, b \in Z\}\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove \(R\) is an integral domain.
\item
Show \(U(R) = \{\pm1\}\).
\item
Show \(3, 2 + \sqrt{-5}\), and \(2 - \sqrt{-5}\) are irreducible in
\(R\).
\item
Show 3 is not prime in \(R\).
\item
Conclude \(R\) is not a PID.
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
  Integral domain: \(ab=0 \implies a = 0 \text{ or } b = 0\).
\item
Prime: \(p \divides ab \implies p\divides a\) or \(b\).
\item
Reducible: \(a = xy\) where \(x, y\) are proper divisors.
\item
Irreducible implies prime in a UFD.
\end{itemize}
\end{concept}
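The computations below repeatedly use the norm
\(N(a + b\sqrt{-5}) \coloneqq(a+b\sqrt{-5})(a - b\sqrt{-5}) = a^2 + 5b^2\)
and the fact that it is multiplicative; as a quick check,
\begin{align*}
N\left( (a+b\sqrt{-5})(c+d\sqrt{-5}) \right)
&= N\left( (ac - 5bd) + (ad+bc)\sqrt{-5} \right) \\
&= (ac-5bd)^2 + 5(ad+bc)^2 \\
&= a^2c^2 + 25b^2d^2 + 5a^2d^2 + 5b^2c^2 \\
&= (a^2+5b^2)(c^2+5d^2)
.\end{align*}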
\begin{solution}
\envlist
\begin{itemize}
\tightlist
\item
\(R\) is an integral domain:
\begin{itemize}
\tightlist
\item
Let \(\alpha = a + b\sqrt{-5}\) and \(\beta = c + d \sqrt{-5}\) and
set
\(\mkern 1.5mu\overline{\mkern-1.5mu\alpha\mkern-1.5mu}\mkern 1.5mu, \mkern 1.5mu\overline{\mkern-1.5mu\beta\mkern-1.5mu}\mkern 1.5mu\)
be their conjugates.
\item
    Suppose \(\alpha\beta = 0\). Taking norms and using multiplicativity,
    \begin{align*}
    0 = N(\alpha\beta) = N(\alpha)N(\beta) = (a^2+5b^2)(c^2+5d^2) \in {\mathbb{Z}}
    ,\end{align*}
    so one factor is zero.
  \item
    But \(a^2 + 5b^2 = 0\) with \(a, b\in {\mathbb{Z}}\) forces \(a=b=0\), i.e.
    \(\alpha = 0\); otherwise the same argument forces \(c=d=0\) and \(\beta = 0\).
\end{itemize}
\item
The units are \(\pm 1\):
\begin{itemize}
\tightlist
\item
Use that \(u\in R^{\times}\implies N(u) = \pm 1\), and
\(N(\alpha) = \alpha \mkern 1.5mu\overline{\mkern-1.5mu\alpha \mkern-1.5mu}\mkern 1.5mu= (a+b\sqrt{-5})(a-b\sqrt{-5}) = a^2 + 5b^2 = 1\)
forces \(b=0\) and \(a=\pm 1\).
\end{itemize}
\item
Irreducible elements:
\begin{itemize}
\tightlist
\item
\(2, 3\) are irreducible because if (say) \(3=xy\) then
\(N(x)N(y) = N(3) = 9\), and if neither \(x,y\) are units then
\(N(x) = N(y) = 3\). But \(N(a + b\sqrt{-5}) = a^2 + 5b^2\) and
\(a^2 + 5b^2 = 3\) has no solutions. The same argument works for
\(2\).
\item
\(2\pm \sqrt{-5}\) are irreducible because
\(N(2 + \sqrt{-5}) = 2^2 + 5(1) = 9\), and in fact
\(N(2 - \sqrt{-5}) = 2^2 + 5(-1)^2 = 9\). By the same argument as
above, this forces irreducibility.
\end{itemize}
\item
\(3\) is not prime:
\begin{itemize}
\tightlist
\item
We can write \(6 = (3)(2) = (1 + \sqrt{-5})(1 - \sqrt{-5})\), so if
we assume \(3\) is prime we get \(3\divides (1 \pm \sqrt{-5})\).
\item
But writing \((1\pm \sqrt{-5}) = 3r\) for some \(r\in R\) yields
\begin{align*}
(1 \pm \sqrt{-5}) = 3(a + b\sqrt{-5}) \implies 3a=1, 3b = \pm 1
.\end{align*}
These have no solutions \(a, b\in {\mathbb{Z}}\). \(\contradiction\)
\end{itemize}
\item
\(R\) is not a PID:
\begin{itemize}
\tightlist
\item
    In a PID irreducible elements are prime (every PID is a UFD, and
    irreducibles are prime in a UFD). Since \(3\) is irreducible but not prime
    in \(R\), \(R\) is not a UFD and in particular not a PID.
\end{itemize}
\end{itemize}
\end{solution}
\hypertarget{spring-2017-4-done}{%
\subsubsection{\texorpdfstring{Spring 2017 \#4
\(\done\)}{Spring 2017 \#4 \textbackslash done}}\label{spring-2017-4-done}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Let \(R\) be an integral domain with quotient field \(F\). Suppose
that \(p(x), a(x), b(x)\) are monic polynomials in \(F[x]\) with
\(p(x) = a(x) b(x)\) and with \(p(x) \in R[x]\), \(a(x)\) not in
\(R[x]\), and both \(a(x), b(x)\) not constant.
Prove that \(R\) is not a UFD.
\begin{quote}
(You may assume Gauss' lemma)
\end{quote}
\item
Prove that \({\mathbb{Z}}[2\sqrt{2}]\) is not a UFD.
\begin{quote}
Hint: let \(p(x) = x^2-2\).
\end{quote}
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
  Gauss' lemma: for \(R\) a UFD with fraction field \(F\), if \(f\in R[x]\)
  factors as \(f=pq\) in \(F[x]\), then there are \(r,s\in F\) with \(rs=1\)
  such that \(rp, sq \in R[x]\), so \(f = (rp)(sq)\) also factors in \(R[x]\).
\item
Corollary: \(R\) is a UFD iff \(R[x]\) is a UFD.
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[of 1]
\envlist
\begin{itemize}
\tightlist
\item
  The important assumption is \(a(x)\not\in R[x]\); we'll assume \(R\)
  is a UFD and derive a contradiction.
\item
Write \(f(x) = a(x)b(x)\in F[x]\), then if \(R\) is a UFD we have
\(r,s\in F\) such that \(f(x) = ra(x) sb(x) \in R[x]\).
\item
  Since \(a(x), b(x)\) are monic, the leading coefficients of \(ra(x)\) and
  \(sb(x)\) are \(r\) and \(s\), which lie in \(R\) because
  \(ra(x), sb(x)\in R[x]\). Moreover \(f = ab\) is monic, so comparing leading
  coefficients in \(f = (ra(x))(sb(x))\) gives \(rs=1\). So
  \(r,s\in R^{\times}\).
\item
Then using that \(ra(x)\in R[x]\), we have
\(r^{-1}ra(x) = a(x)\in R[x]\). \(\contradiction\)
\end{itemize}
\end{proof}
\begin{proof}[of b]
\envlist
\begin{itemize}
\tightlist
\item
Set \(R = {\mathbb{Z}}[2\sqrt 2], F = {\mathbb{Q}}[2\sqrt 2]\).
\item
Let \(p(x) \coloneqq x^2-2 \in R[x]\) which splits as
\(p(x) = (x+ \sqrt{2} )(x - \sqrt{2} ) \coloneqq a(x) b(x) \in F[x]\).
\item
  Note that \(a(x), b(x) \in F[x]\), since
  \(\sqrt{2} = \frac{1}{2}(2\sqrt 2) \in F\), but neither lies in \(R[x]\).

  \begin{itemize}
  \tightlist
  \item
    Explicitly, every monic linear polynomial in \(R[x]\) is of the form
    \(x + (s + 2t\sqrt{2})\) with \(s, t\in {\mathbb{Z}}\), and
    \(\pm \sqrt{2} = s + 2t\sqrt{2}\) would force \(s = 0\) and \(2t = \pm 1\),
    which has no integer solution.
\end{itemize}
\item
So we have \(p(x) \in R[x]\) splitting as \(p=ab\) in \(F[x]\) with
\(a\not \in R[x]\), so part (a) applies.
\end{itemize}
\end{proof}
\end{solution}
\hypertarget{ideals-prime-maximal-proper-principal-etc}{%
\subsection{Ideals (Prime, Maximal, Proper, Principal,
etc)}\label{ideals-prime-maximal-proper-principal-etc}}
\hypertarget{fall-2013-3-done}{%
\subsubsection{\texorpdfstring{Fall 2013 \#3
\(\done\)}{Fall 2013 \#3 \textbackslash done}}\label{fall-2013-3-done}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Define \emph{prime ideal}, give an example of a nontrivial ideal in
the ring \({\mathbb{Z}}\) that is not prime, and prove that it is not
prime.
\item
Define \emph{maximal ideal}, give an example of a nontrivial maximal
ideal in \({\mathbb{Z}}\) and prove that it is maximal.
\end{enumerate}
\begin{solution}
\envlist
\begin{itemize}
\tightlist
\item
\({\mathfrak{p}}\) is \textbf{prime} iff
\(xy\in {\mathfrak{p}}\implies x\in {\mathfrak{p}}\) or
\(y\in {\mathfrak{p}}\).
\begin{itemize}
\tightlist
\item
An ideal \(I{~\trianglelefteq~}{\mathbb{Z}}\) that is not prime:
\(I \coloneqq 8{\mathbb{Z}}\).
\item
For example, \(2\cdot 4\in 8{\mathbb{Z}}\) but neither 2 nor 4 is a
multiple of 8.
\end{itemize}
\item
\({\mathfrak{m}}\) is \textbf{maximal} iff whenever
\(I\supseteq{\mathfrak{m}}\) is an ideal in \(R\), then either
\(I={\mathfrak{m}}\) or \(I = R\).
\begin{itemize}
\tightlist
\item
A maximal ideal in \({\mathbb{Z}}\): \(p{\mathbb{Z}}\). This is
because primes are maximal in a PID and \(p{\mathbb{Z}}\) is a prime
ideal. Alternatively, ``to contain is to divide'' holds for Dedekind
domains, so
\(m{\mathbb{Z}}\supseteq p{\mathbb{Z}}\iff m\divides p\), which
forces \(m=1,p\), so either \(m{\mathbb{Z}}= p{\mathbb{Z}}\) or
\(m{\mathbb{Z}}= {\mathbb{Z}}\).
\end{itemize}
\end{itemize}
\end{solution}
\hypertarget{fall-2014-8-work}{%
\subsubsection{\texorpdfstring{Fall 2014 \#8
\(\work\)}{Fall 2014 \#8 \textbackslash work}}\label{fall-2014-8-work}}
Let \(R\) be a nonzero commutative ring without unit such that \(R\)
does not contain a proper maximal ideal. Prove that for all \(x\in R\),
the ideal \(xR\) is proper.
\begin{quote}
You may assume the axiom of choice.
\end{quote}
\hypertarget{fall-2014-7-done}{%
\subsubsection{\texorpdfstring{Fall 2014 \#7
\(\done\)}{Fall 2014 \#7 \textbackslash done}}\label{fall-2014-7-done}}
Give a careful proof that \({\mathbb{C}}[x, y]\) is not a PID.
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
If \(R[x]\) is a PID, then \(R\) is a field (not explicitly used).
\item
In \(P \coloneqq R[x_1, \cdots, x_n]\), there are degree functions
\(\deg_{x_n}: P\to {\mathbb{Z}}_{\geq 0}\).
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{itemize}
\tightlist
\item
The claim is that \(I \coloneqq\left\langle{x, y}\right\rangle\) is
not principal.
\item
Toward a contradiction, if so, then
\(\left\langle{x, y}\right\rangle = \left\langle{f}\right\rangle\).
\item
  So write \(x = fg\) and \(y = fh\) for some \(g, h\in {\mathbb{C}}[x, y]\), then

  \begin{itemize}
  \tightlist
  \item
    \(\deg_x(x) = 1 = \deg_x(f) + \deg_x(g)\), which forces
    \(\deg_x(f) \leq 1\); similarly \(\deg_y(y) = 1\) forces
    \(\deg_y(f) \leq 1\).
  \item
    So \(f(x, y) = axy + bx + cy + d\) for some \(a,b,c,d\in {\mathbb{C}}\).
  \item
    \(\deg_x(y) = 0 = \deg_x(f) + \deg_x(h)\), forcing \(\deg_x(f) = 0\), so
    \(a = b = 0\).
  \item
    \(\deg_y(x) = 0 = \deg_y(f) + \deg_y(g)\), forcing \(\deg_y(f) = 0\), so
    \(c = 0\).
  \item
    So \(f(x, y) = d \in {\mathbb{C}}\) is constant.
  \end{itemize}
\item
  But \(d \neq 0\) since \(x = fg \neq 0\), and \({\mathbb{C}}\) is a field, so
  \(d\) is a unit in \({\mathbb{C}}\) and thus in \({\mathbb{C}}[x, y]\), so
  \(\left\langle{f}\right\rangle = \left\langle{d}\right\rangle = {\mathbb{C}}[x, y]\).
\item
This is a contradiction, since
\(1\not\in \left\langle{x, y}\right\rangle\):
\begin{itemize}
\tightlist
\item
    Every element \(\alpha(x, y) \in\left\langle{x, y}\right\rangle\)
    is of the form \(\alpha(x, y) = xp(x, y) + yq(x, y)\).
  \item
    But then \(\alpha(0, 0) = 0\), whereas the constant polynomial \(1\)
    satisfies \(1(0,0) = 1 \neq 0\), so \(1\notin \left\langle{x, y}\right\rangle\).
\item
So \(\left\langle{x, y}\right\rangle \neq {\mathbb{C}}[x, y]\).
\end{itemize}
\item
Alternatively, \(\left\langle{x, y}\right\rangle\) is proper since
\({\mathbb{C}}[x, y] / \left\langle{x, y}\right\rangle \cong {\mathbb{C}}\neq {\mathbb{C}}[x, y]\).
\end{itemize}
\end{solution}
\hypertarget{spring-2019-6-done}{%
\subsubsection{\texorpdfstring{Spring 2019 \#6
\(\done\)}{Spring 2019 \#6 \textbackslash done}}\label{spring-2019-6-done}}
Let \(R\) be a commutative ring with 1.
\begin{quote}
Recall that \(x \in R\) is nilpotent iff \(x^n = 0\) for some positive
integer \(n\).
\end{quote}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that every proper ideal of \(R\) is contained within a maximal
ideal.
\item
Let \(J(R)\) denote the intersection of all maximal ideals of \(R\).
Show that \(x \in J(R) \iff 1 + rx\) is a unit for all \(r \in R\).
\item
Suppose now that \(R\) is finite. Show that in this case \(J(R)\)
consists precisely of the nilpotent elements in R.
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\item
Definitions:
\begin{align*}
N(R) &\coloneqq\left\{{x\in R {~\mathrel{\Big|}~}x^n = 0 \text{ for some } n}\right\} \\
J(R) &\coloneqq\cap_{{\mathfrak{m}}\in \operatorname{mSpec}} {\mathfrak{m}}
.\end{align*}
\item
Zorn's lemma: if \(P\) is a poset in which every chain has an upper
bound, \(P\) contains a maximal element.
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[of a]
Define the set of proper ideals
\begin{align*}
S = \left\{{J {~\mathrel{\Big|}~}I \subseteq J < R}\right\}
,\end{align*}
which is a poset under set inclusion.
Given a chain \(J_1 \subseteq J_2 \subseteq \cdots\) in \(S\), the union
\(J \coloneqq\cup J_i\) is an upper bound lying in \(S\): it is an ideal (any two
of its elements lie in a common \(J_i\), which is closed under sums and under
multiplication by \(R\)), it contains \(I\), and it is proper since
\(1\notin J_i\) for every \(i\). (For the empty chain, \(I\) itself is an upper
bound.) So Zorn's lemma produces a maximal element \({\mathfrak{m}}\in S\); any
ideal strictly containing \({\mathfrak{m}}\) also contains \(I\), hence is not in
\(S\) and must equal \(R\), so \({\mathfrak{m}}\) is a maximal ideal containing \(I\).
\end{proof}
\begin{proof}[of b, $\implies$]
\(\implies\):
\begin{itemize}
\item
  We will show that \(x\in J(R) \implies 1+x \in R^{\times}\); the result then
  follows by applying this to \(rx\), which is again in \(J(R)\) since \(J(R)\)
  is an ideal.
\item
Let \(x\in J(R)\), so it is in every maximal ideal, and suppose toward
a contradiction that \(1+x\) is \textbf{not} a unit.
\item
Then consider
\(I = \left\langle{1+x}\right\rangle {~\trianglelefteq~}R\). Since
\(1+x\) is not a unit, we can't write \(s(1+x) = 1\) for any
\(s\in R\), and so \(1 \not\in I\) and \(I\neq R\)
\item
So \(I < R\) is proper and thus contained in some maximal proper ideal
\(\mathfrak{m} < R\) by part (1), and so we have
\(1+x \in \mathfrak{m}\). Since \(x\in J(R)\), \(x\in \mathfrak{m}\)
as well.
\item
  But then \((1+x) - x = 1 \in \mathfrak{m}\), which forces
  \(\mathfrak{m} = R\), contradicting that \(\mathfrak{m}\) is proper. So
  \(1+x\) must be a unit.
\end{itemize}
\end{proof}
\begin{proof}[of b, $\impliedby$]
\(\impliedby\)
\begin{itemize}
\item
Fix \(x\in R\), and suppose \(1+rx\) is a unit for all \(r\in R\).
\item
Suppose towards a contradiction that there is a maximal ideal
\(\mathfrak{m}\) such that \(x\not \in \mathfrak{m}\) and thus
\(x\not\in J(R)\).
\item
Consider
\begin{align*}
  M' \coloneqq\left\{{rx + m {~\mathrel{\Big|}~}r\in R,~ m\in \mathfrak{m}}\right\}
.\end{align*}
\item
  Since \(x = 1\cdot x + 0 \in M'\) but \(x\notin \mathfrak{m}\), we have
  \(\mathfrak{m} \subsetneq M'\); since \(\mathfrak{m}\) was maximal, \(M' = R\).
\item
  So every element in \(R\) can be written as \(rx + m\) for some
  \(r\in R, m\in \mathfrak{m}\). But \(1\in R\), so we have
\begin{align*}
1 = rx + m
.\end{align*}
\item
  So let \(s = -r\); then \(m = 1 - rx = 1 + sx\).
\item
Since \(s\in R\) by assumption \(1+sx\) is a unit and thus
\(m \in \mathfrak{m}\) is a unit, a contradiction.
\item
So \(x\in \mathfrak{m}\) for every \(\mathfrak{m}\) and thus
\(x\in J(R)\).
\end{itemize}
\end{proof}
\begin{proof}[of c: $J(R) = \mathfrak N(R)$]
\(\mathfrak N(R) \subseteq J(R)\):
\begin{itemize}
\tightlist
\item
  Use the fact that \(x\in \mathfrak N(R) \implies x^n = 0\) for some \(n\), and
  then \((rx)^n = r^n x^n = 0\) for every \(r\in R\), so each \(1 + rx\) is a
  unit with inverse given by a finite geometric series:
  \begin{align*}
  (1+rx)\sum_{k=0}^{n-1} (-rx)^k = 1 - (-rx)^{n} = 1
  .\end{align*}
  By (b), \(1+rx\) being a unit for all \(r\in R\) gives \(x\in J(R)\).
\end{itemize}
\(J(R) \subseteq \mathfrak N(R)\):
\begin{itemize}
\item
  Let \(x \in J(R)\); we claim \(x\) is nilpotent.
\item
  Since \(R\) is finite, the powers of \(x\) must repeat: \(x^a = x^{a+b}\) for
  some \(a, b \geq 1\), and so \(x^a(1 - x^b) = 0\).
\item
  Since \(J(R)\) is an ideal, \(x^b = x^{b-1}\cdot x \in J(R)\), so by (b)
  (applied with \(r = -x^{b-1}\)) the element \(1 - x^b\) is a unit; let
  \(u \coloneqq(1-x^b)^{-1}\).
\item
  Then
  \begin{align*}
  x^a(1-x^b) &= 0 \\
  \implies u\, x^a (1-x^b) &= x^a = 0
  ,\end{align*}
  so \(x\) is nilpotent and \(x\in \mathfrak N(R)\).
\end{itemize}
\end{proof}
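As a concrete check of (c) (an illustration only): in \(R = {\mathbb{Z}}/12{\mathbb{Z}}\), the maximal ideals are \(2{\mathbb{Z}}/12{\mathbb{Z}}\) and \(3{\mathbb{Z}}/12{\mathbb{Z}}\), so
\begin{align*}
J(R) = 2{\mathbb{Z}}/12{\mathbb{Z}}\cap 3{\mathbb{Z}}/12{\mathbb{Z}}= \left\{{0, 6}\right\} = \mathfrak N(R)
,\end{align*}
since \(6^2 = 36 = 0\) in \(R\), while a nilpotent element of \(R\) must be divisible by both \(2\) and \(3\), so \(0\) and \(6\) are the only nilpotents.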
\end{solution}
\hypertarget{spring-2018-8-done}{%
\subsubsection{\texorpdfstring{Spring 2018 \#8
\(\done\)}{Spring 2018 \#8 \textbackslash done}}\label{spring-2018-8-done}}
Let \(R = C[0, 1]\) be the ring of continuous real-valued functions on
the interval \([0, 1]\). Let I be an ideal of \(R\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that if \(f \in I, a \in [0, 1]\) are such that \(f (a) \neq 0\),
then there exists \(g \in I\) such that \(g(x) \geq 0\) for all
\(x \in [0, 1]\), and \(g(x) > 0\) for all \(x\) in some open
neighborhood of \(a\).
\item
If \(I \neq R\), show that the set
\(Z(I) = \{x \in [0, 1] {~\mathrel{\Big|}~}f(x) = 0 \text{ for all } f \in I\}\)
is nonempty.
\item
Show that if \(I\) is maximal, then there exists \(x_0 \in [0, 1]\)
such that \(I = \{ f \in R {~\mathrel{\Big|}~}f (x_0 ) = 0\}\).
\end{enumerate}
\begin{remark}
Cool problem, but pretty specific topological tricks needed.
\end{remark}
\begin{solution}
\envlist
\begin{proof}[of a]
\envlist
\begin{itemize}
\tightlist
\item
Suppose \(c\coloneqq f(a)\neq 0\), noting that \(c\) may not be
positive.
\item
  By continuity, pick \({\varepsilon}\) small enough so that
  \({\left\lvert {x-a} \right\rvert}<{\varepsilon}\implies {\left\lvert {f(x) - f(a)} \right\rvert} < {\left\lvert {c} \right\rvert}/2\).
  On this neighborhood we have
  \(f(x) \in (c - {\left\lvert {c} \right\rvert}/2,\, c + {\left\lvert {c} \right\rvert}/2)\),
  a ball of radius \({\left\lvert {c} \right\rvert}/2\) about \(c\), which does
  not contain \(0\).
\item
So \(f(x) \neq 0\) on this ball, and \(g \coloneqq f^2 > 0\) on it.
Note that ideals are closed under products, so \(g\in I\)
\item
Moreover \(f^2(x) \geq 0\) since squares are non-negative, so
\(g\geq 0\) on \([0, 1]\).
\end{itemize}
\end{proof}
\begin{proof}[of b]
\envlist
\begin{itemize}
\tightlist
\item
By contrapositive, suppose \(V(I)= \emptyset\), we'll show \(I\)
contains a unit and thus \(I=R\).
\item
For each fixed \(x\in [0, 1]\), since \(V(I)\) is empty there is some
  \(f_x \in I\) such that \(f_x(x) \neq 0\).
\item
By (a), there is some \(g_x\) with \(g_x(x) > 0\) on a neighborhood
\(U_x\ni x\) and \(g_x \geq 0\) everywhere.
\item
Ranging over all \(x\) yields a collection
\(\left\{{(g_x, U_x) {~\mathrel{\Big|}~}x\in [0, 1]}\right\}\) where
\(\left\{{U_x}\right\}\rightrightarrows[0, 1]\).
\item
By compactness there is a finite subcover, yielding a finite
collection \(\left\{{(g_k, U_k)}\right\}_{k=1}^n\) for some \(n\).
\item
Define the candidate unit as
\begin{align*}
G(x) \coloneqq{1\over \sum_{k=1}^n g_k(x)}
.\end{align*}
\item
This is well-defined: fix an \(x\), then the denominator is zero at
\(x\) iff \(g_k(x) = 0\) for all \(k\). But since the \(U_k\) form an
open cover, \(x\in U_\ell\) for some \(\ell\), and \(g_\ell > 0\) on
\(U_\ell\).
\item
  Since ideals are closed under sums,
  \(H\coloneqq{1\over G} = \sum g_k \in I\). But \(H\) is a unit in \(R\), since
  \(G\) is continuous and \(HG = 1\), and so \(I = R\), completing the
  contrapositive.
\end{itemize}
\end{proof}
\begin{proof}[of c]
\envlist
\begin{itemize}
\tightlist
\item
If \(I{~\trianglelefteq~}R\) is maximal, \(I\neq R\), and so by (b) we
have \(V(I) \neq \emptyset\).
\item
So there is some \(x_0\in[0, 1]\) with \(f(x_0) = 0\) for all
\(f\in I\).
\item
Define
\({\mathfrak{m}}_{x_0} \coloneqq\left\{{f\in R {~\mathrel{\Big|}~}f(x_0) = 0}\right\}\),
which is clearly an ideal.
\begin{itemize}
\tightlist
\item
This is a proper ideal, since constant nonzero functions are
continuous and thus in \(R\), not not \({\mathfrak{m}}_{x_0}\).
\end{itemize}
\item
We thus have \(I \subseteq {\mathfrak{m}}_{x_0}\), and by maximality
they are equal.
\end{itemize}
\end{proof}
\todo[inline]{I'm not super convinced by c!}
\end{solution}
\hypertarget{zero-divisors-and-nilpotents}{%
\subsection{Zero Divisors and
Nilpotents}\label{zero-divisors-and-nilpotents}}
\hypertarget{spring-2014-5-done}{%
\subsubsection{\texorpdfstring{Spring 2014 \#5
\(\done\)}{Spring 2014 \#5 \textbackslash done}}\label{spring-2014-5-done}}
Let \(R\) be a commutative ring and \(a\in R\). Prove that \(a\) is not
nilpotent \(\iff\) there exists a commutative ring \(S\) and a ring
homomorphism \(\phi: R\to S\) such that \(\phi(a)\) is a unit.
\begin{quote}
Note: by definition, \(a\) is nilpotent \(\iff\) there is a natural
number \(n\) such that \(a^n = 0\).
\end{quote}
\begin{solution}
\(\not A\implies \not B\):
\begin{itemize}
\tightlist
\item
Suppose \(a\) is nilpotent, so \(a^m = 0_R\), and suppose
\(\phi: R\to S\) is a ring morphism.
\item
Ring morphisms send zero to zero, so
\(0_S = \phi(0_R) = \phi(a^m) = \phi(a)^m\) and \(\phi(a)\) is
nilpotent.
\item
But nontrivial rings can't contain nilpotent units: if \(u\) is a unit
and \(ut= 1\) with \(u^k=0\), then \(1 = 1^k = (ut)^k = u^k t^k=0\)
and \(R=0\).
\end{itemize}
\(A\implies B\):
\begin{itemize}
\tightlist
\item
  If \(a\) is not nilpotent, localize at the multiplicative
  subset \(A \coloneqq\left\{{1, a, a^2, \cdots}\right\}\) to obtain
  \(R \left[ { \scriptstyle A^{-1}} \right]\). Since \(a\) is not nilpotent,
  \(0\not\in A\), so this is not the zero ring.
\item
By the universal property, there is a map
\(\phi: R\to R \left[ { \scriptstyle A^{-1}} \right]\), and the claim
is that \(\phi(a)\) is a unit in
\(R \left[ { \scriptstyle A^{-1}} \right]\).
\item
  More directly,
  \(\phi(a) = [a/1] \in \left\{{p/q {~\mathrel{\Big|}~}p\in R, q\in A}\right\}\),
  which has inverse \([1/a]\), noting that \(a\in A\).
\end{itemize}
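For a concrete instance of the forward direction (an illustration only): take \(R = {\mathbb{Z}}\) and \(a = 2\), which is not nilpotent. Localizing at \(A = \left\{{1, 2, 4, \cdots}\right\}\) gives \({\mathbb{Z}}\left[ { \scriptstyle A^{-1}} \right] = {\mathbb{Z}}[1/2] \subseteq {\mathbb{Q}}\), and under the natural map \(\phi: {\mathbb{Z}}\to {\mathbb{Z}}[1/2]\) the image \(\phi(2) = 2\) is a unit with inverse \(1/2\).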
\end{solution}
\hypertarget{spring-2021-5-done}{%
\subsubsection{\texorpdfstring{Spring 2021 \#5
\(\done\)}{Spring 2021 \#5 \textbackslash done}}\label{spring-2021-5-done}}
\begin{problem}[Spring 2021]
Suppose that \(f(x) \in ({\mathbb{Z}}/n{\mathbb{Z}})[x]\) is a zero
divisor. Show that there is a nonzero
\(a\in {\mathbb{Z}}/n{\mathbb{Z}}\) with \(af(x) = 0\).
\end{problem}
\begin{solution}
\envlist
\begin{itemize}
\tightlist
\item
Write \(f(x) = \sum_{k=0}^n a_k x^k\), and supposing it's a zero
divisor choose \(g(x) = \sum_{k=0}^m b_k x^k\) of minimal degree so
that \(g\neq 0, b_m\neq 0\), and \(f(x)g(x) = 0\).
\item
The claim is that the top coefficient \(b_m\) will suffice.
\item
Write the product:
\begin{align*}
0 = f(x)g(x)
= (a_0 + \cdots + a_{n-1}x^{n-1} + a_n x^n)
(b_0 + \cdots + b_{m-1}x^{m-1} + b_m x^m)
.\end{align*}
\item
Equating coefficients, the coefficient for \(x^{m+n}\) must be zero,
so (\textbf{importantly}) \(a_n b_m = 0\).
\begin{itemize}
\tightlist
\item
Since \(a_n b_m=0\), consider \(a_ng(x)\). This has degree
\(d_1 \leq m-1\) but satisfies \(a_ng(x)f(x) = a_n(g(x)f(x)) = 0\),
so by minimality \(a_ng(x) = 0\).
\item
This forces \(a_n b_0 = \cdots = a_n b_{m-1} = 0\), so \(a_n\)
annihilates all of the \(b_k\).
\end{itemize}
\item
Now consider the coefficient of \(x^{m+n-1}\), given by
\(a_{n-1}b_m + a_{n}b_{m-1}\).
\begin{itemize}
\tightlist
\item
The second term \(a_n b_{m-1}=0\) since \(a_n\) annihilates all
\(b_k\), so (\textbf{importantly}) \(a_{n-1} b_m = 0\).
\item
Considering now \(a_{n-1}g(x)\):
\begin{itemize}
\tightlist
\item
The same argument shows this has degree \(d_2 \leq m-1\) but
\(a_{n-1}g(x)f(x) = 0\), so \(a_{n-1}g(x) = 0\).
\item
So \(a_{n-1}\) annihilates all \(b_k\), and allowing this process
to continue inductively.
\end{itemize}
\end{itemize}
\item
For good measure, the coefficient of \(x^{m+n-2}\) is
\(a_{n-2}b_m + a_{n-1}b_{m-1} + a_{n}b_{m-2}\).
\begin{itemize}
\tightlist
\item
Note that \(a_n, a_{n-1}\) annihilate all \(b_k\), so
(\textbf{importantly}) \(a_{n-2} b_m=0\), and so on.
\end{itemize}
\item
Thus \(a_k b_m = 0\) for all \(0\leq k \leq n\), and by linearity and
commutativity, we have
\begin{align*}
b_m f(x) = b_m \sum_{k=0}^n a_k x^k = \sum_{k=0}^n (b_m a_k) x^k = 0
.\end{align*}
\end{itemize}
\end{solution}
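\begin{remark}
A concrete illustration (not needed for the proof): in \(({\mathbb{Z}}/4{\mathbb{Z}})[x]\), the polynomial \(f(x) = 2x + 2\) is a zero divisor since \((2x+2)(2x+2) = 4x^2 + 8x + 4 = 0\), and a single constant already annihilates it:
\begin{align*}
2\cdot(2x + 2) = 4x + 4 = 0 \in ({\mathbb{Z}}/4{\mathbb{Z}})[x]
.\end{align*}
\end{remark}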
\hypertarget{fall-2018-7-done}{%
\subsubsection{\texorpdfstring{Fall 2018 \#7
\(\done\)}{Fall 2018 \#7 \textbackslash done}}\label{fall-2018-7-done}}
Let \(R\) be a commutative ring.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Let \(r \in R\). Show that the map
\begin{align*}
r\bullet : R &\to R \\
x &\mapsto r x
.\end{align*}
is an \(R{\hbox{-}}\)module endomorphism of \(R\).
\item
We say that \(r\) is a \textbf{zero-divisor} if \(r\bullet\) is not
injective. Show that if \(r\) is a zero-divisor and \(r \neq 0\), then
  the kernel and image of \(r\bullet\) each consist of zero-divisors.
\item
Let \(n \geq 2\) be an integer. Show: if \(R\) has exactly \(n\)
zero-divisors, then \(\#R \leq n^2\) .
\item
Show that up to isomorphism there are exactly two commutative rings
\(R\) with precisely 2 zero-divisors.
\end{enumerate}
\begin{quote}
You may use without proof the following fact: every ring of order 4 is
isomorphic to exactly one of the following:
\begin{align*}
\frac{ {\mathbb{Z}}}{ 4{\mathbb{Z}}}, \quad
\frac{ \frac{ {\mathbb{Z}}}{ 2{\mathbb{Z}}} [t]}{(t^2 + t + 1)}, \quad
\frac{ \frac{ {\mathbb{Z}}}{ 2{\mathbb{Z}}} [t]}{ (t^2 - t)}, \quad
\frac{ \frac{ {\mathbb{Z}}}{2{\mathbb{Z}}}[t]}{(t^2 )}
.\end{align*}
\end{quote}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Testing module morphisms: \(\phi(sm + n) = s\phi(m) + \phi(n)\).
\item
First isomorphism theorem used for sizes:
\(R/\ker f \cong \operatorname{im}f\), so
\(\# R = \# \ker f \cdot \# \operatorname{im}f\).
\item
See 1964 Annals ``Properties of rings with a finite number of zero
divisors''
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[of a]
\envlist
\begin{itemize}
\tightlist
\item
Let \(\phi\) denote the map in question, it suffices to show that
\(\phi\) is \(R{\hbox{-}}\)linear,
i.e.~\(\phi(s\mathbf{x} + \mathbf{y}) = s\phi(\mathbf{x}) + \phi(\mathbf{y})\):
\begin{align*}
\phi(s\mathbf{x} + \mathbf{y})
&= r(s\mathbf{x} + \mathbf{y}) \\
&= rs\mathbf{x} + r\mathbf{y} \\
&= s(r\mathbf{x}) + (r\mathbf{y}) \\
&= s\phi(\mathbf{x}) + \phi(\mathbf{y})
.\end{align*}
\end{itemize}
\end{proof}
\begin{proof}[of b]
Let \(\phi_r(x) \coloneqq rx\) be the multiplication map.
\begin{itemize}
\item
Let
\(x\in \ker \phi_r \coloneqq\left\{{x\in R {~\mathrel{\Big|}~}rx = 0}\right\}\).
\item
Since \(R\) is commutative \(0 = rx = xr\), and so
\(r\in \ker \phi_x\), so \(\ker \phi_x \neq 0\) and \(x\) is a zero
divisor by definition.
\item
Let
\(y\in \operatorname{im}\phi_r \coloneqq\left\{{y \coloneqq rx {~\mathrel{\Big|}~}x\in R}\right\}\),
we want to show \(\ker \phi_y\) is nontrivial by producing some \(z\)
such that \(yz=0\). Write \(y\coloneqq rx\) for some \(x\in R\).
\item
Since \(r\) is a zero divisor, we can produce some
\(z\neq 0 \in \ker \phi_r\), so \(rz = 0\).
\item
Now using that \(R\) is commutative, we can compute
\begin{align*}
yz = (rx)z = (xr)z = x (rz) = x(0) = 0
,\end{align*}
so \(z\in \ker \phi_y\).
\end{itemize}
\end{proof}
\begin{proof}[of c]
\envlist
\begin{itemize}
\item
Let \(Z \coloneqq\left\{{z_i}\right\}_{i=1}^n\) be the set of \(n\)
zero divisors in \(R\).
\item
Let \(\phi_i\) be the \(n\) maps \(x \mapsto z_i x\), and let
\(K_i = \ker \phi_i\) be the corresponding kernels.
\item
  Fix an \(i\) with \(z_i \neq 0\), which is possible since \(n\geq 2\).
\item
By (b), \(K_i\) consists of zero divisors, so
\begin{align*}
{\left\lvert {K_i} \right\rvert} \leq n < \infty \quad \text{for each } i
.\end{align*}
\item
Now consider \(R/K_i \coloneqq\left\{{r + K_i}\right\}\).
\item
By the first isomorphism theorem,
  \(R/K_i \cong \operatorname{im}\phi_i\), and by (b) every element in the
image is a zero divisor, so
\begin{align*}
[R: K_i] = {\left\lvert {R/K_i} \right\rvert} = {\left\lvert {\operatorname{im}\phi_i} \right\rvert} \leq n
.\end{align*}
\item
But then
\begin{align*}
{\left\lvert {R} \right\rvert} = [R:K_i]\cdot {\left\lvert {K_i} \right\rvert} \leq n\cdot n = n^2
.\end{align*}
\end{itemize}
\end{proof}
\begin{proof}[of d]
\envlist
\begin{itemize}
\item
  By (c), if there are exactly 2 zero divisors then
  \({\left\lvert {R} \right\rvert} \leq 4\). On the other hand, \(R\) contains
  the two zero divisors together with the unit \(1\), so
  \({\left\lvert {R} \right\rvert} \geq 3\); and \({\left\lvert {R} \right\rvert} = 3\)
  is impossible, since then \(R\cong {\mathbb{Z}}/(3)\) is a field whose only
  zero divisor is \(0\). So \({\left\lvert {R} \right\rvert} = 4\).
\item
Since the characteristic of a ring must divide its size, we have
\(\operatorname{ch}R = 2\) or \(4\).
\item
Using the hint, we see that only \({\mathbb{Z}}/(4)\) has
characteristic 4, which has exactly 2 zero divisors given by \([0]_4\)
and \([2]_4\).
\item
If \(R\) has characteristic 2, we can check the other 3 possibilities.
\item
We can write
\({\mathbb{Z}}/(2)[t]/(t^2) = \left\{{a + bt {~\mathrel{\Big|}~}a,b\in {\mathbb{Z}}/(2)}\right\}\),
and checking the multiplication table we have
\begin{align*}
\begin{array}{c|cccc}
& 0 & 1 & t & 1+t \\ \hline
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & t & 1+t \\
t & 0 & t & \mathbf{0} & t \\
1 + t & 0 & 1+t & t & 1 \\
\end{array}
,\end{align*}
and so we find that \(t, 0\) are the zero divisors.
\item
  In \({\mathbb{Z}}/(2)[t]/(t^2 - t)\), we can check that
  \(t(t+1) = t^2 + t = t + t = 0\),
  so both \(t\) and \(t+1\) are zero divisors, along with zero, so this
  is not a possibility.
\item
Similarly, in \({\mathbb{Z}}/(2)[t]/(t^2 + t + 1)\), we can check the
bottom-right corner of the multiplication table to find
\begin{align*}
\left[\begin{array}{c|cc}
& t & 1 +t \\ \hline
t & 1+t & 1 \\
  1+t & 1 & t \\
  \end{array}\right]
  ,\end{align*}
  and so this ring has only one zero divisor, namely \(0\); in fact it is a
  field, since \(t^2 + t + 1\) is irreducible over \({\mathbb{Z}}/(2)\).
\item
Thus the only possibilities are:
\begin{align*}
R &\cong {\mathbb{Z}}/(4) \\
R &\cong {\mathbb{Z}}/(2)[t] / (t^2)
.\end{align*}
\end{itemize}
\end{proof}
\end{solution}
\hypertarget{zorns-lemma}{%
\subsection{Zorn's Lemma}\label{zorns-lemma}}
\hypertarget{fall-2013-4-work}{%
\subsubsection{\texorpdfstring{Fall 2013 \#4
\(\work\)}{Fall 2013 \#4 \textbackslash work}}\label{fall-2013-4-work}}
Let \(R\) be a commutative ring with \(1\neq 0\). Recall that \(x\in R\)
is \emph{nilpotent} iff \(x^n = 0\) for some positive integer \(n\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that the collection of nilpotent elements in \(R\) forms an
ideal.
\item
Show that if \(x\) is nilpotent, then \(x\) is contained in every
prime ideal of \(R\).
\item
Suppose \(x\in R\) is not nilpotent and let
\(S = \left\{{x^n {~\mathrel{\Big|}~}n\in {\mathbb{N}}}\right\}\).
  There is at least one ideal of \(R\) disjoint from \(S\), namely
\((0)\).
By Zorn's lemma the set of ideals disjoint from \(S\) has a maximal
element with respect to inclusion, say \(I\). In other words, \(I\) is
disjoint from \(S\) and if \(J\) is any ideal disjoint from \(S\) with
\(I\subseteq J \subseteq R\) then \(J=I\) or \(J=R\).
Show that \(I\) is a prime ideal.
\item
Deduce from (a) and (b) that the set of nilpotent elements of \(R\) is
the intersection of all prime ideals of \(R\).
\end{enumerate}
\hypertarget{fall-2015-3-done}{%
\subsubsection{\texorpdfstring{Fall 2015 \#3
\(\done\)}{Fall 2015 \#3 \textbackslash done}}\label{fall-2015-3-done}}
Let \(R\) be a rng (a ring without 1) which contains an element \(u\)
such that for all \(y\in R\), there exists an \(x\in R\) such that
\(xu=y\).
Prove that \(R\) contains a maximal left ideal.
\begin{quote}
Hint: imitate the proof (using Zorn's lemma) in the case where \(R\)
does have a 1.
\end{quote}
\begin{solution}
\envlist
\begin{itemize}
\item
Define the map
\begin{align*}
\phi_u: R &\to R\\
x &\mapsto xu
,\end{align*}
which is right-multiplication by \(u\). By assumption, \(\phi_u\) is
surjective, so the principal ideal \(Ru\) equals \(R\).
\item
  Then \(K \coloneqq\ker \phi_u\) is a left ideal of \(R\): if \(xu = 0\), then
  \((rx)u = r(xu) = 0\) for any \(r\in R\).
\item
  \(K\) is proper: if \(K=R\) then \(Ru = 0\), but \(Ru = R\), forcing
  \(R = 0\). We may assume \(R\neq 0\) (otherwise there are no proper left
  ideals at all), so \(K \neq R\).
\item
Take a poset
  \(S \coloneqq\left\{{J \text{ a left ideal of } R {~\mathrel{\Big|}~}J \supseteq K,\, J\neq R}\right\}\),
  the set of all proper left ideals containing \(K\), ordered by subset inclusion.
Note that \(K \in S\), so \(S\) is nonempty.
\item
Apply Zorn's lemma: let \(C: C_1 \subseteq C_2 \subseteq \cdots\) be a
chain (totally ordered sub-poset) in \(S\). If \(C\) is the empty
chain, \(K\) is an upper bound. Otherwise, if \(C\) is nonempty,
define \(\widehat{C} \coloneqq\displaystyle\bigcup_{i=1}^\infty C_i\).
\begin{itemize}
\tightlist
\item
    \(\widehat{C}\) is a left ideal: if \(a,b\in \widehat{C}\), then
    \(a\in C_i, b\in C_j\) for some \(i,j\). But without loss of
    generality, using that chains are totally ordered,
    \(C_i \subseteq C_j\), so \(a,b\in C_j\) and thus \(a - b\in C_j \subseteq \widehat{C}\).
    Similarly for \(x\in \widehat{C}\), \(x\in C_j\) for some \(j\), and
    thus \(Rx \subseteq C_j \subseteq \widehat{C}\) since \(C_j\) is a left ideal.
\item
\(\widehat{C}\) is in \(S\): It clearly contains \(K\), since for
example \(K \subseteq C_1 \subseteq \widehat{C}\).
\begin{itemize}
\tightlist
\item
      That \(\widehat{C} \neq R\): since \(R\) may not have a \(1\), use \(u\)
      in its place. If a left ideal \(J\) contains \(u\), then
      \(J \supseteq Ru = R\), so \(J = R\); contrapositively, each proper
      \(C_j\) satisfies \(u \notin C_j\), so \(u\notin \widehat{C}\) and
      \(\widehat{C} \neq R\).
\end{itemize}
\end{itemize}
\item
  So every chain in \(S\) has an upper bound in \(S\), and by Zorn's lemma
  \(S\) has a maximal element \(M \supseteq K\). This \(M\) is a maximal left
  ideal of \(R\): if \(M \subsetneq J\) for some left ideal \(J\), then
  \(J \supseteq K\) and \(J\neq M\), so by maximality of \(M\) in \(S\) we must
  have \(J\notin S\), forcing \(J = R\).
\end{itemize}
\end{solution}
\hypertarget{spring-2015-7-done}{%
\subsubsection{\texorpdfstring{Spring 2015 \#7
\(\done\)}{Spring 2015 \#7 \textbackslash done}}\label{spring-2015-7-done}}
Let \(R\) be a commutative ring, and \(S\subset R\) be a nonempty subset
that does not contain 0 such that for all \(x, y\in S\) we have
\(xy\in S\). Let \({\mathcal{I}}\) be the set of all ideals
\(I{~\trianglelefteq~}R\) such that \(I\cap S = \emptyset\).
Show that for every ideal \(I\in {\mathcal{I}}\), there is an ideal
\(J\in {\mathcal{I}}\) such that \(I\subset J\) and \(J\) is not
properly contained in any other ideal in \({\mathcal{I}}\).
Prove that every such ideal \(J\) is prime.
\begin{solution}
\envlist
\begin{itemize}
\tightlist
\item
  Restating, take the poset
  \({\mathcal{P}}\coloneqq\left\{{J\in \operatorname{Id}(R) {~\mathrel{\Big|}~}J \cap S = \emptyset,\, I \subseteq J}\right\}\)
  ordered by inclusion (named \({\mathcal{P}}\) to avoid clashing with the
  multiplicative set \(S\)). Note that \({\mathcal{P}}\) is nonempty since it
  contains \(I\). It suffices to produce a maximal element of \({\mathcal{P}}\).
\item
Applying Zorn's lemma, let \(C: C_1 \subseteq C_2 \subseteq \cdots\)
be a chain and define \(\widehat{C} \coloneqq\cup C_i\).
\item
By standard arguments, \(\widehat{C} \in \operatorname{Id}(R)\) and
\(\widehat{C} \supseteq I\), and it suffices to show
\(\widehat{C} \cap S = \emptyset\) and \(\widehat{C}\neq R\).
\item
\(\widehat{C} \cap S = \emptyset\):
\begin{itemize}
\tightlist
\item
By contradiction, if \(x\in \widehat{C} \cap S\) then \(x\in C_j\)
for some \(j\), and \(x\in S\). But then
\(x \in C_j \cap S = \emptyset\).
\end{itemize}
\item
\(\widehat{C} \neq R\):
\begin{itemize}
\tightlist
\item
By contradiction, if so then
\(1 \in \widehat{C} \implies 1 \in C_j\) for some \(j\), forcing
\(C_j = R\).
\end{itemize}
\item
So Zorn's lemma applies and we obtain some ideal \({\mathfrak{p}}\),
which we now claim is prime.
\item
Let \(ab\in {\mathfrak{p}}\), we want to show \(a\in {\mathfrak{p}}\)
or \(b\in{\mathfrak{p}}\).
\item
Suppose not, so neither \(a\) nor \(b\) lies in \({\mathfrak{p}}\). Then
\({\mathfrak{p}}+ Ra \supsetneq {\mathfrak{p}}\), so by maximality
\({\mathfrak{p}}+ Ra\) is not in the poset and thus intersects \(S\).
Similarly \({\mathfrak{p}}+ Rb\) intersects \(S\).
\item
Produce elements
\(x\coloneqq p_1 + r_1a, y\coloneqq p_2 + r_2b\in S\), then since
\(S\) is multiplicatively closed,
\begin{align*}
xy&\coloneqq(p_1 + r_1 a)(p_2 + r_2b)\in S \\
&\implies p_1 p_2 + p_1r_2 b + p_2 r_1 a + r_1 r_2 ab \in S \\
&\implies xy\in {\mathfrak{p}}+ {\mathfrak{p}}Rb + {\mathfrak{p}}Ra + R{\mathfrak{p}}&& \text{since } p_i, ab\in {\mathfrak{p}}\\
&\implies xy \in ({\mathfrak{p}}+ Rb + Ra + R){\mathfrak{p}}\subseteq {\mathfrak{p}}
.\end{align*}
But then \(xy\in S \cap{\mathfrak{p}}\), a contradiction.
\end{itemize}
\end{solution}
\hypertarget{spring-2013-1-done}{%
\subsubsection{\texorpdfstring{Spring 2013 \#1
\(\done\)}{Spring 2013 \#1 \textbackslash done}}\label{spring-2013-1-done}}
Let \(R\) be a commutative ring.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Define a \emph{maximal ideal} and prove that \(R\) has a maximal
ideal.
\item
Show than an element \(r\in R\) is not invertible \(\iff r\) is
contained in a maximal ideal.
\item
Let \(M\) be an \(R{\hbox{-}}\)module, and recall that for
\(0\neq \mu \in M\), the \emph{annihilator} of \(\mu\) is the set
\begin{align*}
\operatorname{Ann}(\mu) = \left\{{r\in R {~\mathrel{\Big|}~}r\mu = 0}\right\}
.\end{align*}
Suppose that \(I\) is an ideal in \(R\) which is maximal with respect
to the property that there exists an element \(\mu \in M\) such that
\(I = \operatorname{Ann}(\mu)\) for some \(\mu \in M\). In other
words, \(I = \operatorname{Ann}(\mu)\) but there does not exist
\(\nu\in M\) with \(J = \operatorname{Ann}(\nu) \subsetneq R\) such
that \(I\subsetneq J\).
Prove that \(I\) is a prime ideal.
\end{enumerate}
\begin{solution}
\envlist
\begin{proof}[part a]
\envlist
\begin{itemize}
\tightlist
\item
Maximal: a proper ideal \(I{~\trianglelefteq~}R\), so \(I\neq R\),
such that if \(J\supseteq I\) is any other ideal, \(J = R\).
\item
Existence of a maximal ideal: use that \(0\in \operatorname{Id}(R)\)
always, so
\(S\coloneqq\left\{{I\in \operatorname{Id}(R) {~\mathrel{\Big|}~}I\neq R}\right\}\)
is a nonempty poset under subset inclusion. Applying the usual Zorn's
lemma argument produces a maximal element.
\end{itemize}
\end{proof}
\begin{proof}[part b]
\(\impliedby\): By contrapositive: if \(r\in R\) is a unit and
\({\mathfrak{m}}\) is maximal, then
\(r\in {\mathfrak{m}}\implies {\mathfrak{m}}= R\), contradicting that
\({\mathfrak{m}}\) is proper.
\(\implies\):
\begin{itemize}
\tightlist
\item
Suppose \(a\) is not a unit, we'll produce a maximal ideal containing
it.
\item
Let \(I\coloneqq Ra\) be the principal ideal generated by \(a\), then
\(Ra \neq R\) since \(a\) is not a unit.
\item
Take a poset
\(S \coloneqq\left\{{J\in \operatorname{Id}(R) {~\mathrel{\Big|}~}J\supseteq Ra, J\neq R}\right\}\)
ordered by set inclusion.
\begin{itemize}
\item
Let \(C_*\) be a chain in \(S\), set
\(\widehat{C} \coloneqq\cup C_i\). Then \(\widehat{C} \in S\):
\begin{itemize}
\tightlist
\item
\(\widehat{C} \neq R\), since if so it contains a unit, forcing
some \(C_i\) to contain a unit and thus equal \(R\).
\item
\(\widehat{C} \supseteq Ra\), since
e.g.~\(\widehat{C} \supseteq C_1 \supseteq Ra\).
\item
\(\widehat{C}\) is an ideal: if \(x, y\in \widehat{C}\) then
\(x\in C_i, y\in C_j\) with \(C_i \subseteq C_j\) without loss of
generality, so \(x - y\in C_j \subseteq \widehat{C}\), and
\(rx \in C_j \subseteq \widehat{C}\) for any \(r\in R\).
\end{itemize}
\end{itemize}
\item
So Zorn's lemma applies, and \(S\) has a maximal element: a maximal ideal containing \(Ra\), and hence containing \(a\).
\end{itemize}
\end{proof}
\begin{proof}[of c]
\envlist
\begin{itemize}
\tightlist
\item
Write \(I \coloneqq\operatorname{Ann}(u)\) for some \(u\), and toward
a contradiction suppose \(ab\in I\) but \(a,b\not\in I\).
\item
Then \(abu=0\) but \(au\neq 0, bu\neq 0\).
\item
Since \(abu=0\), we have \(a\in \operatorname{Ann}(bu)\). Note that
\(\operatorname{Ann}(bu) \supseteq\operatorname{Ann}(u)\), since
\(su = 0\implies bsu=sbu=0\).
\item
We can't have \(\operatorname{Ann}(bu) = R\): if \(sbu=0\) for
all \(s\in R\), then taking \(s=1\) gives \(bu=0\) and
\(b\in \operatorname{Ann}(u)\), a contradiction.
\item
By maximality, this forces
\(\operatorname{Ann}(u) = \operatorname{Ann}(bu)\), so
\(sbu = 0 \implies su=0\) for any \(s\in R\).
\item
Now take \(s=a\), and since \(abu=0\) we get \(au=0\) and
\(a\in \operatorname{Ann}(u)\). \(\contradiction\)
\end{itemize}
\end{proof}
\end{solution}
\hypertarget{fall-2019-6-done}{%
\subsubsection{\texorpdfstring{Fall 2019 \#6
\(\done\)}{Fall 2019 \#6 \textbackslash done}}\label{fall-2019-6-done}}
Let \(R\) be a commutative ring with multiplicative identity. Assume
Zorn's Lemma.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that
\begin{align*}
N = \{r \in R \mathrel{\Big|}r^n = 0 \text{ for some } n > 0\}
\end{align*}
is an ideal which is contained in any prime ideal.
\item
Let \(r\) be an element of \(R\) not in \(N\). Let \(S\) be the
collection of all proper ideals of \(R\) not containing any positive
power of \(r\). Use Zorn's Lemma to prove that there is a prime ideal
in \(S\).
\item
Suppose that \(R\) has exactly one prime ideal \(P\) . Prove that
every element \(r\) of \(R\) is either nilpotent or a unit.
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\item
Prime ideal: \(\mathfrak{p}\) is prime iff
\(ab \in \mathfrak{p} \implies a\in \mathfrak{p}\) or
\(b\in \mathfrak{p}\).
\item
Silly fact: 0 is in every ideal!
\item
\textbf{Zorn's Lemma:} Given a poset, if every chain has an upper
bound, then there is a maximal element. (Chain: totally ordered
subset.)
\item
\textbf{Corollary:} If \(S\subset R\) is multiplicatively closed with
\(0\not\in S\) then
\(\left\{{I {~\trianglelefteq~}R {~\mathrel{\Big|}~}I\cap S = \emptyset}\right\}\)
has a maximal element.
\todo[inline]{Prove this}
\item
\textbf{Theorem:} If \(R\) is commutative, maximal \(\implies\) prime
for ideals.
\todo[inline]{Prove this}
\item
\textbf{Theorem:} Non-units are contained in a maximal ideal. (See
HW?)
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[of a]
\envlist
\begin{itemize}
\tightlist
\item
Let \(\mathfrak{p}\) be prime and \(x\in N\).
\item
Then \(x^k = 0 \in \mathfrak{p}\) for some \(k\), and thus
\(x^k = x x^{k-1} \in \mathfrak p\).
\item
Since \(\mathfrak p\) is prime, inductively we obtain
\(x\in\mathfrak p\).
\end{itemize}
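For concreteness, the induction can be unwound as a chain of implications,
using primality of \(\mathfrak p\) at each step:
\begin{align*}
x\cdot x^{k-1} = x^k = 0 \in \mathfrak{p}
&\implies x\in \mathfrak{p} \text{ or } x^{k-1}\in \mathfrak{p} \\
&\implies x\in \mathfrak{p} \text{ or } x^{k-2}\in \mathfrak{p} \\
&\;\;\vdots \\
&\implies x\in \mathfrak{p}
.\end{align*}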
\end{proof}
\begin{proof}[of b]
\envlist
\begin{itemize}
\item
Let \(S = \left\{{r^k \mathrel{\Big|}k\in {\mathbb{N}}}\right\}\) be
the set of positive powers of \(r\).
\item
Then \(S^2 \subseteq S\), since \(r^{k_1}r^{k_2} = r^{k_1+k_2}\) is
also a positive power of \(r\), and \(0\not\in S\) since \(r\neq 0\)
and \(r\not\in N\).
\item
By the corollary,
\(\left\{{I {~\trianglelefteq~}R {~\mathrel{\Big|}~}I\cap S = \emptyset}\right\}\)
has a maximal element \(\mathfrak{p}\).
\item
Since \(\mathfrak{p}\) is maximal among ideals avoiding the multiplicatively closed set \(S\), it is prime (this is exactly the argument of Spring 2015 \#7 above).
\end{itemize}
\end{proof}
\begin{proof}[of c]
\envlist
\begin{itemize}
\item
Suppose \(R\) has a unique prime ideal \(\mathfrak{p}\).
\item
Suppose \(r\in R\) is not a unit, and toward a contradiction, suppose
that \(r\) is also not nilpotent.
\item
Since \(r\) is not a unit, \(r\) is contained in some maximal (and
thus prime) ideal, and thus \(r \in \mathfrak{p}\).
\item
Since \(r\not\in N\), by (b) there is a prime ideal \(\mathfrak{q}\)
that avoids all positive powers of \(r\). Since \(\mathfrak{p}\) is
the unique prime ideal, \(\mathfrak{q} = \mathfrak{p}\). But then
\(r\not\in \mathfrak{p}\), a contradiction.
\end{itemize}
\end{proof}
\end{solution}
\hypertarget{noetherian-rings}{%
\subsection{Noetherian Rings}\label{noetherian-rings}}
\hypertarget{fall-2015-4-done}{%
\subsubsection{\texorpdfstring{Fall 2015 \#4
\(\done\)}{Fall 2015 \#4 \textbackslash done}}\label{fall-2015-4-done}}
Let \(R\) be a PID and \((a_1) < (a_2) < \cdots\) be an ascending chain
of ideals in \(R\). Prove that for some \(n\), we have \((a_j) = (a_n)\)
for all \(j\geq n\).
\begin{solution}
\envlist
\begin{itemize}
\tightlist
\item
Let \(I\coloneqq\cup Ra_i\) which is an ideal in a PID and thus
\(I = Rb\) for some \(b\).
\item
Using that \(b\in I\), which is a union, we have \(b\in Ra_m\) for
some \(m\), and thus \(Rb \subseteq Ra_m\).
\item
  Thus \(I = Rb \subseteq Ra_m\), and \(Ra_m \subseteq I\) by
  definition of \(I\), so \(Rb = Ra_m\).
\item
In particular, since \(Ra_{m} \subseteq Ra_{m+1}\) by assumption, and
\(Ra_{m+1} \subseteq Rb \subseteq Ra_m\) since \(Rb = I\), we have
\(Ra_m = Ra_{m+1}\). So inductively, the chain stabilizes at \(m\).
\end{itemize}
\end{solution}
\hypertarget{spring-2021-6-work}{%
\subsubsection{\texorpdfstring{Spring 2021 \#6
\(\work\)}{Spring 2021 \#6 \textbackslash work}}\label{spring-2021-6-work}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Carefully state the definition of \textbf{Noetherian} for a
commutative ring \(R\).
\item
Let \(R\) be a subset of \({\mathbb{Z}}[x]\) consisting of all
polynomials
\begin{align*}
f(x) = a_ 0 + a_1 x + a_2 x^2 + \cdots + a_nx^n
\end{align*}
such that \(a_k\) is even for \(1\leq k \leq n\). Show that \(R\) is a
subring of \({\mathbb{Z}}[x]\).
\item
Show that \(R\) is not Noetherian.
\end{enumerate}
\begin{quote}
\emph{Hint: consider the ideal generated by
\(\left\{{ 2x^k {~\mathrel{\Big|}~}1\leq k \in {\mathbb{Z}}}\right\}\).}
\end{quote}
\begin{solution}
\begin{itemize}
\item
A ring is \textbf{Noetherian} iff \(R\) satisfies the ascending chain
condition: every chain of ideals
\(A_1 \subseteq A_2 \subseteq \cdots\) eventually stabilizes: there is
some \(m\) with \(A_m = A_{m+1} = A_{m+2} = \cdots\).
\item
That \(R\) is a subring of \({\mathbb{Z}}[x]\):
\begin{itemize}
\tightlist
\item
\((R, +)\) is an abelian subgroup: note that
\(f(x) + g(x) = \sum a_k x^k + \sum b_k x^k = \sum (a_k + b_k) x^k\),
so if \(a_k, b_k\) are even then \(a_k + b_k\) is even. It's closed
under inverses, since \(a_k\) is even iff \(-a_k\) is even, and
contains zero.
\item
\((R, \cdot)\) is a submonoid: writing \(f(x) = \sum a_k x^k\) and
\(g(x) = \sum b_k x^k\), we have
\(f(x) g(x) = \sum_{n} \qty{ \sum_{k=0}^n a_k b_{n-k}} x^n\). For
\(n\geq 1\), each term \(a_k b_{n-k}\) has at least one index
\(\geq 1\) and hence at least one even factor, so the coefficient of
\(x^n\) is even and \(fg \in R\). The constant polynomial \(1\) also
lies in \(R\).
\end{itemize}
\item
That \(R\) is not Noetherian: it suffices to show that \(R\) contains
an ideal that is not finitely generated.
\item
The claim is that setting
\(S \coloneqq\left\{{2x^k}\right\}_{k\in {\mathbb{Z}}_{\geq 1}}\) and
taking
\begin{align*}
I \coloneqq\left\langle{S}\right\rangle = \sum_{k\in {\mathbb{Z}}_{\geq 1}} R\cdot 2x^k \coloneqq\left\{{ \sum_{k=1}^m r_k(x)\, 2x^k {~\mathrel{\Big|}~}r_k(x) \in R,\, m\in {\mathbb{Z}}_{\geq 1}}\right\}
\end{align*}
yields an ideal that is not finitely generated.
\item
Suppose toward a contradiction that
\(\left\{{g_1, g_2, \cdots, g_M}\right\}\) were a finite generating
set, where each \(g_i \in I\).
\end{itemize}
\todo[inline]{???}
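One possible way to finish the argument (a sketch, using only the
generators' degrees and a count mod 4):

\begin{itemize}
\tightlist
\item
  Each \(g_i \in I \subseteq 2x{\mathbb{Z}}[x]\); set
  \(N \coloneqq\max_i \deg g_i\).
\item
  Any element of \(\left\langle{g_1, \cdots, g_M}\right\rangle\) has the
  form \(\sum_i r_i g_i\) with \(r_i \in R\). Writing \(r_i = a_i + h_i\)
  with \(a_i \in {\mathbb{Z}}\) and \(h_i \in 2x{\mathbb{Z}}[x]\),
  \begin{align*}
  \sum_i r_i g_i = \underbrace{\sum_i a_i g_i}_{\deg \leq N} + \underbrace{\sum_i h_i g_i}_{\in\, 4x^2{\mathbb{Z}}[x]}
  .\end{align*}
\item
  But \(2x^{N+1} \in I\), and in any such expression the coefficient of
  \(x^{N+1}\) is divisible by 4, while in \(2x^{N+1}\) it equals 2.
  \(\contradiction\)
\end{itemize}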
\end{solution}
\hypertarget{simple-rings}{%
\subsection{Simple Rings}\label{simple-rings}}
\hypertarget{fall-2017-5-done}{%
\subsubsection{\texorpdfstring{Fall 2017 \#5
\(\done\)}{Fall 2017 \#5 \textbackslash done}}\label{fall-2017-5-done}}
A ring \(R\) is called \emph{simple} if its only two-sided ideals are
\(0\) and \(R\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Suppose \(R\) is a commutative ring with 1. Prove \(R\) is simple if
and only if \(R\) is a field.
\item
Let \(k\) be a field. Show the ring \(M_n (k)\), \(n \times n\)
matrices with entries in \(k\), is a simple ring.
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Nonzero proper ideals contain at least one nonzero element.
\item
\(I=R\) when \(1\in I\).
\item
Effects of special matrices: let \(A_{ij}\) be a matrix with only a 1
in the \(ij\) position.
\begin{itemize}
\tightlist
\item
Left-multiplying \(A_{ij}M\) moves row \(j\) to row \(i\) and zeros
out the rest of \(M\).
\item
Right-multiplying \(MA_{ij}\) moves column \(i\) to column \(j\) and
zeros out the rest.
\item
So \(A_{ij} M A_{kl}\) moves entry \(j, k\) to \(i, l\) and zeros
out the rest.
\end{itemize}
\end{itemize}
\end{concept}
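A quick \(2\times 2\) illustration of the last point (routine matrix
multiplication, just to fix the indexing):
\begin{align*}
A_{11} M A_{21} =
\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix}
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
=
\begin{pmatrix} m_{11} & m_{12} \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
=
\begin{pmatrix} m_{12} & 0 \\ 0 & 0 \end{pmatrix}
,\end{align*}
which moves the \((1,2)\) entry to the \((1,1)\) position, i.e.~\(A_{ij} M A_{kl}\) with \(i=1, j=1, k=2, l=1\).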
\begin{solution}
\envlist
\begin{proof}[of a]
\(\implies\):
\begin{itemize}
\tightlist
\item
Suppose
\(\operatorname{Id}(R) = \left\{{\left\langle{0}\right\rangle, \left\langle{1}\right\rangle}\right\}\).
Then for any nonzero \(r\in R\), the ideal
\(\left\langle{r}\right\rangle = \left\langle{1}\right\rangle\) is the
entire ring.
\item
In particular, \(1\in \left\langle{r}\right\rangle\), so we can write
\(1 = tr\) for some \(t\in R\).
\item
But then \(r\in R^{\times}\) with \(t\coloneqq r^{-1}\).
\end{itemize}
\(\impliedby\):
\begin{itemize}
\tightlist
\item
Suppose \(R\) is a field and \(I\in \operatorname{Id}(R)\) is an
ideal.
\item
If \(I \neq \left\langle{0}\right\rangle\) is proper and nontrivial,
then \(I\) contains at least one nonzero element \(a\in I\).
\item
Since \(R\) is a field, \(a^{-1}\in R\), and \(aa^{-1}= 1\in I\)
forces \(I = \left\langle{1}\right\rangle\).
\end{itemize}
\end{proof}
\begin{proof}[of b]
\begin{itemize}
\tightlist
\item
Let \(J{~\trianglelefteq~}R\) be a nonzero two-sided ideal, noting
that \(R\) is noncommutative -- the claim is that \(J\) contains
\(I_n\), the \(n\times n\) identity matrix, and thus \(J = R\).
\item
Pick a nonzero element \(M\in J\), then \(M\) has a nonzero entry
\(m_{ij}\).
\item
Let \(A_{ij}\) be the matrix with a 1 in the \(i,j\) position and
zeros elsewhere.
\begin{itemize}
\tightlist
\item
Left-multiplying \(A_{ij}M\) moves row \(j\) to row \(i\) and zeros
out the rest of \(M\).
\item
Right-multiplying \(MA_{ij}\) moves column \(i\) to column \(j\) and
zeros out the rest.
\item
So \(A_{ij} M A_{kl}\) moves entry \(j, k\) to \(i, l\) and zeros
out the rest.
\end{itemize}
\item
So for each index \(l\), define \(B_l \coloneqq A_{l, i}MA_{j, l}\), which moves \(m_{ij}\)
  to the \(l,l\) position on the diagonal and has zeros elsewhere.
\item
  Then \({\varepsilon}_{ll} \coloneqq m_{ij}^{-1} B_l\) is the matrix
  with \(1\) in the \(l, l\) spot and zeros elsewhere. Since \(J\) is a
  two-sided ideal, we have \({\varepsilon}_{ll}\in J\) for every \(l\).
\item
  We can write the identity \(I_n\) as
  \(\sum_{l=1}^n {\varepsilon}_{ll}\), so \(I_n \in J\) and \(J=R\).
\end{itemize}
\end{proof}
\end{solution}
\hypertarget{spring-2016-8-done}{%
\subsubsection{\texorpdfstring{Spring 2016 \#8
\(\done\)}{Spring 2016 \#8 \textbackslash done}}\label{spring-2016-8-done}}
Let \(R\) be a simple rng (a nonzero ring which is not assumed to have a
1, whose only two-sided ideals are \((0)\) and \(R\)) satisfying the
following two conditions:
\begin{enumerate}
\def\labelenumi{\roman{enumi}.}
\tightlist
\item
\(R\) has no zero divisors, and
\item
If \(x\in R\) with \(x\neq 0\) then \(2x\neq 0\), where
\(2x\coloneqq x+x\).
\end{enumerate}
Prove the following:
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
For each \(x\in R\) there is one and only one element \(y\in R\) such
that \(x = 2y\).
\item
Suppose \(x,y\in R\) such that \(x\neq 0\) and \(2(xy) = x\), then
\(yz = zy\) for all \(z\in R\).
\end{enumerate}
\begin{quote}
You can get partial credit for (b) by showing it in the case \(R\) has a
1.
\end{quote}
\begin{remark}
A general opinion is that this is not a great qual problem! Possibly
worth skipping.
\end{remark}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
\(R\) has no left zero divisors iff \(R\) has the left cancellation
property: \(xa=xb \implies a=b\).
\item
\(R\) has no right zero divisors iff \(R\) has the right cancellation
property: \(ax=bx \implies a=b\).
\end{itemize}
\end{concept}
\begin{solution}
Note: solutions borrowed from folks on Math twitter!
\begin{proof}[part 1]
\envlist
\begin{itemize}
\tightlist
\item
Existence: the claim is that
\(2R \coloneqq\left\{{2y {~\mathrel{\Big|}~}y\in R}\right\}\) is a
nontrivial two-sided ideal of \(R\), forcing \(2R = R\) by simpleness.
\begin{itemize}
\tightlist
\item
That \(2R \neq 0\) follows from condition (ii): Provided \(y\neq 0\),
we have \(2y\neq 0\), and so if \(R\neq 0\) then there exists some
nonzero \(a\in R\), in which case \(2a\neq 0\) and \(2a\in 2R\).
\item
That \(2R\) is a right ideal: clear, since
\((2y)\cdot r = 2(yr)\in 2R\).
\item
That \(2R\) is a left ideal: use that multiplication is
distributive:
\begin{align*}
r\cdot 2y \coloneqq r(y+y) = ry + ry \coloneqq 2(ry) \in 2R
.\end{align*}
\end{itemize}
\item
So \(2R = R\) by simpleness.
\item
Uniqueness:
\begin{itemize}
\tightlist
\item
Use the contrapositive of condition (ii), so that
\(2x = 0 \implies x=0\).
\item
Suppose toward a contradiction that \(x=2y_1 = 2y_2\), then
\begin{align*}
0 = x-x = 2y_1 - 2y_2 = 2(y_1 - y_2) \implies y_1 - y_2 = 0 \implies y_1 = y_2
.\end{align*}
\end{itemize}
\end{itemize}
\end{proof}
\begin{proof}[part 2]
\envlist
\begin{itemize}
\item
First we'll show \(z=2(yz)\):
\begin{align*}
xy + xy &= x \\
\implies xy + xy - x &= 0 \\
\implies xyz + xyz - xz &= 0 \\
\implies x(yz + yz - z) &= 0 \\
\implies yz + yz - z &= 0 && \text{since } x\neq 0 \text{ and no zero divisors }\\
\implies 2(yz) &= z
.\end{align*}
\item
Now we'll show \(z=2(zy)\):
\begin{align*}
yz + yz &= z \\
\implies zyz + zyz &= zz \\
\implies zyz + zyz - zz &= 0 \\
\implies (zy + zy - z)z &= 0\\
\implies z=0 \text{ or } zy+zy-z &= 0 && \text{ no zero divisors }
.\end{align*}
\item
Then if \(z=0\), we have \(yz = 0 = zy\) and we're done.
\item
Otherwise, \(2(zy) = z\), and thus
\begin{align*}
2(zy) = z = 2(yz) \implies 2(zy - yz) = 0 \implies zy-yz = 0
,\end{align*}
so \(zy=yz\).
\end{itemize}
\end{proof}
\begin{proof}[of 2, if $R$ is unital]
\envlist
\begin{itemize}
\tightlist
\item
If \(1\in R\),
\begin{align*}
2xy &= x \\
\implies 2xy-x &= 0 \\
\implies x(2y-1) &= 0 \\
\implies 2y-1 &= 0 && x\neq 0 \text{ and no zero divisors}\\
\implies 2y &= 1
.\end{align*}
\item
Now use
\begin{align*}
1\cdot z &= z\cdot 1 \\
\implies (2y)z &= z(2y) \\
\implies (y+y)z &= z(y+y) \\
\implies yz+yz &= zy+zy \\
\implies 2(yz) &= 2(zy) \\
\implies 2(yz-zy) &= 0 \\
\implies yz-zy &= 0 \\
,\end{align*}
using condition (ii).
\end{itemize}
\end{proof}
\end{solution}
\hypertarget{unsorted}{%
\subsection{Unsorted}\label{unsorted}}
\hypertarget{fall-2019-3-done}{%
\subsubsection{\texorpdfstring{Fall 2019 \#3
\(\done\)}{Fall 2019 \#3 \textbackslash done}}\label{fall-2019-3-done}}
Let \(R\) be a ring with the property that for every
\(a \in R, a^2 = a\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that \(R\) has characteristic 2.
\item
Prove that \(R\) is commutative.
\end{enumerate}
\begin{strategy}
\envlist
\begin{itemize}
\tightlist
\item
Just fiddle with direct computations.
\item
Context hint: that we should be considering things like \(x^2\) and
\(a+b\).
\end{itemize}
\end{strategy}
\begin{solution}
\envlist
\begin{proof}[of a]
\begin{align*}
2a = (2a)^2 = 4a^2 = 4a \implies 2a = 0
.\end{align*}
Note that this implies \(x = -x\) for all \(x\in R\).
\end{proof}
\begin{proof}[of b]
\begin{align*}
a+b = (a+b)^2 &= a^2 + ab + ba + b^2 = a + ab + ba + b \\
&\implies ab + ba = 0 \\
&\implies ab = -ba \\
&\implies ab = ba \quad\text{by (a)}
.\end{align*}
\end{proof}
\end{solution}
\hypertarget{spring-2018-5-done}{%
\subsubsection{\texorpdfstring{Spring 2018 \#5
\(\done\)}{Spring 2018 \#5 \textbackslash done}}\label{spring-2018-5-done}}
Let
\begin{align*}
M=\left(\begin{array}{ll}{a} & {b} \\ {c} & {d}\end{array}\right)
\quad \text{and} \quad
N=\left(\begin{array}{cc}{x} & {u} \\ {-y} & {-v}\end{array}\right)
\end{align*}
over a commutative ring \(R\), where \(b\) and \(x\) are units of \(R\).
Prove that
\begin{align*}
M N=\left(\begin{array}{ll}{0} & {0} \\ {0} & {*}\end{array}\right)
\implies MN = 0
.\end{align*}
\begin{solution}
\envlist
\begin{itemize}
\item
Multiply everything out to get
\begin{align*}
{
\begin{bmatrix}
{ax-by} & {au-bv}
\\
{cx-dy} & {cu-dv}
\end{bmatrix}
}
,\end{align*}
so it suffices to show \(cu=dv\) given
\begin{align*}
ax &= by \\
cx &= dy \\
au &= bv
.\end{align*}
\item
Writing \(cu\):
\begin{itemize}
\tightlist
\item
Use that \(b\in R^{\times}\), left-multiply (1) by \(b^{-1}\) to get
\(b^{-1}a x = y\)
\item
Substitute \(y\) into (2) to get \(cx = d(b^{-1}a x)\).
\item
Since \(x\in R^{\times}\), right-multiply by \(x^{-1}\) to get
\(c = db^{-1}a\) and thus \(cu = db^{-1}a u\).
\item
Summary:
\begin{align*}
ax = by
&\implies b^{-1}ax = y \\
&\implies cx = dy = d(b^{-1}a x) \\
&\implies c = db^{-1}a \\
&\implies cu = db^{-1}au
.\end{align*}
\end{itemize}
\item
Writing \(dv\):
\begin{itemize}
\tightlist
\item
Left-multiply (3) by \(b^{-1}\) to get \(b^{-1}au = v\).
\item
Left-multiply by \(d\) to get \(db^{-1}au = dv\)
\item
Summary:
\begin{align*}
au = bv
&\implies b^{-1}a u = v \\
&\implies db^{-1}au = dv
.\end{align*}
\end{itemize}
\item
So
\begin{align*}
cu = db^{-1}a u = dv
.\end{align*}
\end{itemize}
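As a quick numerical sanity check (an illustrative example over
\(R = {\mathbb{Z}}\) with \(b = x = 1\) units, not part of the argument):
\begin{align*}
M = \begin{pmatrix} 2 & 1 \\ 4 & 2 \end{pmatrix},
\quad
N = \begin{pmatrix} 1 & 3 \\ -2 & -6 \end{pmatrix}
\implies
MN = \begin{pmatrix} 2-2 & 6-6 \\ 4-4 & 12-12 \end{pmatrix} = 0
,\end{align*}
so once the first column and top-right entry of \(MN\) vanish, the
remaining entry \(cu - dv = 12 - 12\) vanishes as well.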
\end{solution}
\hypertarget{spring-2014-6-work}{%
\subsubsection{\texorpdfstring{Spring 2014 \#6
\(\work\)}{Spring 2014 \#6 \textbackslash work}}\label{spring-2014-6-work}}
\(R\) be a commutative ring with identity and let \(n\) be a positive
integer.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that every surjective \(R{\hbox{-}}\)linear endomorphism
\(T: R^n \to R^n\) is injective.
\item
Show that an injective \(R{\hbox{-}}\)linear endomorphism of \(R^n\)
need not be surjective.
\end{enumerate}
\hypertarget{galois-theory}{%
\section{Galois Theory}\label{galois-theory}}
\hypertarget{general-galois-extensions}{%
\subsection{General Galois Extensions}\label{general-galois-extensions}}
\hypertarget{fall-2020-4-done}{%
\subsubsection{\texorpdfstring{Fall 2020 \#4
\(\done\)}{Fall 2020 \#4 \textbackslash done}}\label{fall-2020-4-done}}
Let \(K\) be a Galois extension of \(F\), and let
\(F \subset E \subset K\) be inclusions of fields. Let
\(G \coloneqq{ \mathsf{Gal}} (K/F)\) and
\(H \coloneqq{ \mathsf{Gal}} (K/E)\), and suppose \(H\) contains
\(N_G(P)\), where \(P\) is a Sylow \(p\)-subgroup of \(G\) for \(p\) a
prime. Prove that \([E: F] \equiv 1 \operatorname{mod}p\).
\begin{concept}
The correspondence:
\begin{center}
\begin{tikzcd}
K &&&& 1 \\
\\
E &&&& {H \coloneqq{ \mathsf{Gal}} (K/E)\hspace{4em}} \\
\\
F &&&& {G \coloneqq{ \mathsf{Gal}} (K/F)\hspace{4em}}
\arrow["{[E:F]}", hook, from=5-1, to=3-1]
\arrow["{[K:E]}", hook, from=3-1, to=1-1]
\arrow[""{name=0, anchor=center, inner sep=0}, "{[K:F]}"', curve={height=30pt}, hook, from=5-1, to=1-1]
\arrow["{[H:1]}"', hook, from=1-5, to=3-5]
\arrow["{[G:H]}"', hook, from=3-5, to=5-5]
\arrow["{[G:1]}", curve={height=-30pt}, hook, from=1-5, to=5-5]
\arrow["{{ \mathsf{Gal}} (K/{-})}"', shift right=5, shorten <=18pt, Rightarrow, from=0, to=3-5]
\end{tikzcd}
\end{center}
\begin{quote}
\href{https://q.uiver.app/?q=WzAsNixbMCwyLCJFIl0sWzAsMCwiSyJdLFswLDQsIkYiXSxbNCwwLCIxIl0sWzQsMiwiSCBcXGRhIFxcR2FsKEsvRSlcXGhzcGFjZXs0ZW19Il0sWzQsNCwiRyBcXGRhIFxcR2FsKEsvRilcXGhzcGFjZXs0ZW19Il0sWzIsMCwiW0U6Rl0iLDAseyJzdHlsZSI6eyJ0YWlsIjp7Im5hbWUiOiJob29rIiwic2lkZSI6InRvcCJ9fX1dLFswLDEsIltLOkVdIiwwLHsic3R5bGUiOnsidGFpbCI6eyJuYW1lIjoiaG9vayIsInNpZGUiOiJ0b3AifX19XSxbMiwxLCJbSzpGXSIsMix7ImN1cnZlIjo1LCJzdHlsZSI6eyJ0YWlsIjp7Im5hbWUiOiJob29rIiwic2lkZSI6InRvcCJ9fX1dLFszLDQsIltIOjFdIiwyLHsic3R5bGUiOnsidGFpbCI6eyJuYW1lIjoiaG9vayIsInNpZGUiOiJ0b3AifX19XSxbNCw1LCJbRzpIXSIsMix7InN0eWxlIjp7InRhaWwiOnsibmFtZSI6Imhvb2siLCJzaWRlIjoidG9wIn19fV0sWzMsNSwiW0c6MV0iLDAseyJjdXJ2ZSI6LTUsInN0eWxlIjp7InRhaWwiOnsibmFtZSI6Imhvb2siLCJzaWRlIjoidG9wIn19fV0sWzgsNCwiXFxHYWwoSy9cXHdhaXQpIiwyLHsib2Zmc2V0Ijo1LCJzaG9ydGVuIjp7InNvdXJjZSI6MjB9fV1d}{Link
to Diagram}
\end{quote}
Normalizers:
\begin{align*}
N_G(P) = \left\{{g\in G {~\mathrel{\Big|}~}gPg^{-1}= P}\right\}
.\end{align*}
\end{concept}
\begin{solution}
\envlist
\begin{itemize}
\item
Reduce to a group theory problem: \([E:F] = [G:H]\), despite the fact
that \(E/F\) is not necessarily Galois. This is because we can count
in towers:
\begin{align*}
[K:F] = [K:E][E:F] &\implies [G:1] = [K:E][H:1] \\
&\implies \# G = [K:E] \# H \\
&\implies [G:H] = {\# G \over \# H} = [K:E]
.\end{align*}
\item
Essential fact: if \(P \in {\operatorname{Syl}}_p(G)\), we can use
that \(P \subseteq N_G(P) \subset H\) and so
\(P\in {\operatorname{Syl}}_p(H)\) as well.
\item
Now use that \(N_G(P) \subseteq H\), and do Sylow theory for \(P\) in
both \(G\) and \(H\):
\begin{itemize}
\tightlist
\item
Sylow 3 on \(G\) yields
\(n_p(G) = [G: N_G(P)] \equiv 1 \operatorname{mod}p\).
\item
Sylow 3 on \(H\) yields
\(n_p(H) = [H: N_H(P)] \equiv 1 \operatorname{mod}p\).
\end{itemize}
\item
Claim: \(N_H(P) = N_G(P)\).
\begin{itemize}
\tightlist
\item
We have \(N_H(P) \subseteq N_G(P)\) since \(H \subseteq G\), so
\(hPh^{-1}= P\) remains true regarding either \(h\in H\) or
\(h\in G\).
\item
For \(N_G(P) \subseteq N_H(P)\), use that \(N_G(P) \subseteq H\) and
so \(gPg^{-1}= P\) implies \(g\in H\), so \(g\in N_H(P)\).
\end{itemize}
\item
Now morally one might want to apply an isomorphism theorem:
\begin{align*}
{G/ N_G(P) \over H/N_H(P)}=
{G/ N_H(P) \over H/N_H(P)}\cong
{G\over H}
,\end{align*}
but we don't have normality. However, we can still get away with the
corresponding counting argument if everything is finite:
\begin{align*}
{[G: N_G(P)] \over [H:N_H(P)] }=
{[G: N_H(P)] \over [H:N_H(P)] }=
{\# G / \# N_H(P) \over \# H / \#N_H(P)}
= {\# G \over \# H}
= [G: H]
.\end{align*}
\item
We have an equation of the form \(n_p(G)/n_p(H) = m\), and we want to
show \(m\equiv 1 \operatorname{mod}p\). So write
\begin{align*}
{n_p(G) \over n_p(H) }
= m \implies m n_p(H) &= n_p(G) \\
\implies m n_p(H) &\equiv n_p(G) \operatorname{mod}p \\
\implies m\cdot 1 &\equiv 1 \operatorname{mod}p \\
\implies m &\equiv 1 \operatorname{mod}p
.\end{align*}
\end{itemize}
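A quick sanity check of the group-theoretic statement in a small
(hypothetical) example: take \(G = S_4\), \(p=3\), and
\(P = \left\langle{(1,2,3)}\right\rangle \in {\operatorname{Syl}}_3(G)\), so
\(N_G(P)\cong S_3\) has order 6. Choosing \(H = N_G(P)\) gives
\begin{align*}
[G:H] = {\# S_4 \over \# S_3} = {24 \over 6} = 4 \equiv 1 \operatorname{mod}3
,\end{align*}
consistent with \([E:F] = [G:H] \equiv 1 \operatorname{mod}p\).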
\end{solution}
\hypertarget{fall-2019-midterm-9-done}{%
\subsubsection{\texorpdfstring{Fall 2019 Midterm \#9
\(\done\)}{Fall 2019 Midterm \#9 \textbackslash done}}\label{fall-2019-midterm-9-done}}
Let \(n\geq 3\) and \(\zeta_n\) be a primitive \(n\)th root of unity.
Show that
\([{\mathbb{Q}}(\zeta_n + \zeta_n^{-1}): {\mathbb{Q}}] = \phi(n)/2\) for
\(\phi\) the totient function.
\begin{solution}
\envlist
\begin{itemize}
\item
Some notation: let \(\alpha_k \coloneqq\zeta_n^k + \zeta_n^{-k}\).
\item
Let \(m(x)\) be the minimal polynomial of
\(\alpha_1 \coloneqq\zeta_n + \zeta_n^{-1}\). Note that
\(\alpha_1 \in {\mathbb{Q}}(\zeta_n)\).
\item
Use that
\({ \mathsf{Gal}} ({\mathbb{Q}}(\zeta_n)/{\mathbb{Q}}) \cong ({\mathbb{Z}}/n{\mathbb{Z}})^{\times}\),
consisting of maps \(\sigma_k: \zeta \mapsto \zeta^k\) for
\(\gcd(k, n) = 1\), of which there are \(\phi(n)\) many.
\item
Galois transitively permutes the roots of irreducible polynomials, so
the roots of \(m\) are precisely the Galois conjugates of \(\alpha\),
i.e.~the Galois orbit of \(\alpha\), so we can just compute it. For
illustrative purposes, suppose \(n\) is prime, then
\begin{align*}
\sigma_1(\zeta_n + \zeta_n^{-1}) &= \zeta_n + \zeta_n^{-1}=\alpha_1 \\
\sigma_2(\zeta_n + \zeta_n^{-1}) &= \zeta_n^2 + \zeta_n^{-2} = \alpha_2 \\
\sigma_3(\zeta_n + \zeta_n^{-1}) &= \zeta_n^3 + \zeta_n^{-3} = \alpha_3 \\
\vdots&\\
\sigma_{n-1}(\zeta_n + \zeta_n^{-1}) &= \zeta_n^{n-1} + \zeta_n^{-(n-1)} = \zeta_n^{-1} + \zeta_n^{1} = \alpha_1 \\
\sigma_{n-2}(\zeta_n + \zeta_n^{-1}) &= \zeta_n^{n-2} + \zeta_n^{-(n-2)} = \zeta_n^{-2} + \zeta_n^{2} = \alpha_2 \\
\sigma_{n-3}(\zeta_n + \zeta_n^{-1}) &= \zeta_n^{n-3} + \zeta_n^{-(n-3)} = \zeta_n^{-3} + \zeta_n^{3} = \alpha_3
,\end{align*}
where we've used that \(\zeta^{k} = \zeta^{k\operatorname{mod}n}\).
From this, we see that \(\sigma_{k}(\alpha_1)=\sigma_{n-k}(\alpha_1)\)
and we pick up \((n-1)/2\) distinct conjugates.
\item
For \(n\) not prime, the exact same argument runs through the \(\phi(n)\)
values of \(k\) with \(\gcd(k,n)=1\), and again yields
\(\sigma_{k}(\alpha_1) = \sigma_{n - k}(\alpha_1)\). Pairing each \(k\)
with \(n-k\) yields \(\phi(n)/2\) distinct roots, so
\(\deg m = [{\mathbb{Q}}(\alpha_1):{\mathbb{Q}}] = \phi(n)/2\).
\end{itemize}
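As a quick illustrative example with \(n=5\) (so \(\phi(5)/2 = 2\)): the
conjugates of \(\alpha_1\) are \(\alpha_1 = \zeta_5 + \zeta_5^{-1}\) and
\(\alpha_2 = \zeta_5^2 + \zeta_5^{-2}\), and
\begin{align*}
\alpha_1 + \alpha_2 = \sum_{k=1}^{4} \zeta_5^{k} = -1,
\qquad
\alpha_1 \alpha_2 = \zeta_5^{3} + \zeta_5^{4} + \zeta_5 + \zeta_5^{2} = -1
,\end{align*}
so \(m(x) = x^2 + x - 1\) and \([{\mathbb{Q}}(\alpha_1) : {\mathbb{Q}}] = 2 = \phi(5)/2\).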
\end{solution}
\hypertarget{fall-2019-midterm-10-done}{%
\subsubsection{\texorpdfstring{Fall 2019 Midterm \#10
\(\done\)}{Fall 2019 Midterm \#10 \textbackslash done}}\label{fall-2019-midterm-10-done}}
Let \(L/K\) be a finite normal extension.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that if \(L/K\) is cyclic and \(E/K\) is normal with \(L/E/K\)
then \(L/E\) and \(E/K\) are cyclic.
\item
Show that if \(L/K\) is cyclic then there exists exactly one extension
\(E/K\) of degree \(n\) with \(L/E/K\) for each divisor \(n\) of
\([L:K]\).
\end{enumerate}
\begin{solution}
The setup:
\begin{center}
\begin{tikzcd}
L &&&& 1 \\
\\
E &&&& {H\coloneqq{ \mathsf{Gal}} (L/E)\hspace{4em} } \\
\\
K &&&& {G\coloneqq{ \mathsf{Gal}} (L/K) = C_n}
\arrow[from=1-5, to=3-5]
\arrow["n", from=3-5, to=5-5]
\arrow["n"', from=5-1, to=3-1]
\arrow["g", from=3-1, to=1-1]
\arrow["g", curve={height=-30pt}, from=5-1, to=1-1]
\arrow[curve={height=-30pt}, from=1-5, to=5-5]
\end{tikzcd}
\end{center}
\begin{quote}
\href{https://q.uiver.app/?q=WzAsNixbMCwwLCJMIl0sWzAsMiwiRSJdLFswLDQsIksiXSxbNCwwLCIxIl0sWzQsMiwiSFxcZGEgXFxHYWwoTC9FKSJdLFs0LDQsIkdcXGRhIFxcR2FsKEwvSykgPSBDX24iXSxbMyw0XSxbNCw1LCJuIl0sWzIsMSwibiIsMl0sWzEsMCwiZyJdLFsyLDAsImciLDAseyJjdXJ2ZSI6LTV9XSxbMyw1LCIiLDIseyJjdXJ2ZSI6LTV9XV0=}{Link
to Diagram}
\end{quote}
Part 1:
\begin{itemize}
\tightlist
\item
\(L/K\) is cyclic means \(L/K\) is Galois and
\(G\coloneqq{ \mathsf{Gal}} (L/K) = C_n\) for some \(n\).
\item
By the FTGT, setting \(H \coloneqq{ \mathsf{Gal}} (L/E)\), we get
\(H {~\trianglelefteq~}G\) precisely because \(E/K\) is normal, and
\({ \mathsf{Gal}} (E/K) \cong G/H\).
\item
But then if \(G\) is cyclic, \(H \leq G\) must be cyclic, and \(G/H\)
is cyclic as well since writing
\(G = C_n = \left\langle{x}\right\rangle\), we have
\(G/H = \left\langle{xH}\right\rangle\). Since
\({ \mathsf{Gal}} (L/E) = H\) and \({ \mathsf{Gal}} (E/K) \cong G/H\),
both \(L/E\) and \(E/K\) are cyclic.
\end{itemize}
Part 2:
\begin{itemize}
\tightlist
\item
Letting \(G\coloneqq{ \mathsf{Gal}} (L/K) = C_n\), by elementary group
theory \(C_n\) has exactly one subgroup \(H\coloneqq C_d \leq C_n\) for
each \(d\) dividing \(n\).
\begin{itemize}
\tightlist
\item
A observation we'll need: every subgroup is normal here since \(G\)
is abelian.
\end{itemize}
\item
By the fundamental theorem, taking the fixed field of
\(H \leq { \mathsf{Gal}} (L/K)\), we obtain some intermediate
extension \(E\coloneqq K^H\) fitting into a tower \(L/E/K\).
\item
By the fundamental theorem, \([E: K] = [G:H] = n/d\), where we've used
that \(H{~\trianglelefteq~}G\).
\item
Letting \(d\) range through divisors lets \(n/d\) range through
divisors, so we get extensions of every degree dividing \(n\);
uniqueness follows since the Galois correspondence is a bijection and
\(C_n\) has a unique subgroup of each order.
\end{itemize}
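For instance (an illustrative example): if \([L:K] = 12\), so
\(G \cong C_{12}\), the unique subgroups \(C_d \leq C_{12}\) for
\(d \divides 12\) correspond to intermediate fields \(E = L^{C_d}\) with
\begin{align*}
[E:K] = [G : C_d] = {12 \over d} \in \left\{{1,2,3,4,6,12}\right\}
,\end{align*}
one of each degree dividing 12.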
\end{solution}
\hypertarget{fall-2019-midterm-8-work}{%
\subsubsection{\texorpdfstring{Fall 2019 Midterm \#8
\(\work\)}{Fall 2019 Midterm \#8 \textbackslash work}}\label{fall-2019-midterm-8-work}}
Let \(k\) be a field of characteristic \(p\neq 0\) and \(f\in k[x]\)
irreducible. Show that \(f(x) = g(x^{p^d})\) where \(g(x) \in k[x]\) is
irreducible and separable.
Conclude that every root of \(f\) has the same multiplicity \(p^d\) in
the splitting field of \(f\) over \(k\).
\hypertarget{fall-2019-midterm-7-work}{%
\subsubsection{\texorpdfstring{Fall 2019 Midterm \#7
\(\work\)}{Fall 2019 Midterm \#7 \textbackslash work}}\label{fall-2019-midterm-7-work}}
Show that a field \(k\) of characteristic \(p\neq 0\) is perfect
\(\iff\) for every \(x\in k\) there exists a \(y\in k\) such that
\(y^p=x\).
\hypertarget{spring-2012-4-work}{%
\subsubsection{\texorpdfstring{Spring 2012 \#4
\(\work\)}{Spring 2012 \#4 \textbackslash work}}\label{spring-2012-4-work}}
Let \(f(x) = x^7 - 3\in {\mathbb{Q}}[x]\) and \(E/{\mathbb{Q}}\) be a
splitting field of \(f\) with \(\alpha \in E\) a root of \(f\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(E\) contains a primitive 7th root of unity.
\item
Show that \(E\neq {\mathbb{Q}}(\alpha)\).
\end{enumerate}
\hypertarget{fall-2013-5-done}{%
\subsubsection{\texorpdfstring{Fall 2013 \#5
\(\done\)}{Fall 2013 \#5 \textbackslash done}}\label{fall-2013-5-done}}
Let \(L/K\) be a finite extension of fields.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Define what it means for \(L/K\) to be \emph{separable}.
\item
Show that if \(K\) is a finite field, then \(L/K\) is always
separable.
\item
Give an example of a finite extension \(L/K\) that is not separable.
\end{enumerate}
\begin{solution}
\envlist
\begin{itemize}
\item
\(L/k\) is \textbf{separable} iff every element \(\alpha\) is
separable, i.e.~the minimal polynomial \(m(x)\) of \(\alpha\) is a
separable polynomial, i.e.~\(m(x)\) has no repeated roots in (say) the
algebraic closure of \(L\) (or just any splitting field of \(m\)).
\item
If \(k\) is a finite field of characteristic \(p\), suppose toward a
contradiction that \(L/k\) is not separable. Then there is some
\(\alpha\) with an inseparable (and irreducible) minimal polynomial
\(f(x)\in k[x]\).
\item
Claim: since \(f\) is inseparable and irreducible, \(f(x) = g(x^p)\)
for some \(g\in k[x]\).
\begin{itemize}
\tightlist
\item
Note: write \(g(x) \coloneqq\sum a_k x^k\), so that
\(f(x) = \sum a_k (x^p)^k = \sum a_k x^{pk}\).
\end{itemize}
\item
This is a contradiction, since it makes \(f\) reducible: the Frobenius
\(x\mapsto x^p\) is surjective on the finite field \(k\), so each
\(a_k^{1\over p}\) exists in \(k\), and by the ``Freshman's dream''
\begin{align*}
f(x) = \sum a_k x^{pk} = \qty{ \sum a_k^{1\over p} x^k}^p \coloneqq(h(x))^p
.\end{align*}
\item
Proof of claim: in \(\operatorname{ch}k = p, f\) inseparable
\(\implies f(x) = g(x^p)\).
\begin{itemize}
\tightlist
\item
Use that \(f\) is inseparable iff \(\gcd(f, f') \neq 1\), and since
\(f\) is irreducible this forces \(f' \equiv 0\), so \(ka_k = 0\)
for all \(k\).
\item
Then \(a_k\neq 0\) forces \(p\divides k\), so
\(f(x) = a_0 + a_px^p + a_{2p}x^{2p} + \cdots\) and one takes
\(g(x) \coloneqq\sum_k a_{kp}x^{k}\), so that \(f(x) = g(x^p)\).
\end{itemize}
\item
A finite inseparable extension:
\begin{itemize}
\tightlist
\item
It's a theorem that finite extensions of perfect fields are
separable, so one needs a non-perfect field.
\item
Take
\(L/k \coloneqq{\mathbb{F}}_p(t^{1\over p}) / {\mathbb{F}}_p(t)\),
which is a degree \(p\) extension (although both fields are infinite,
of characteristic \(p\)).
\item
Then the minimal polynomial of \(t^{1\over p}\) is
\(f(x) \coloneqq x^p - t \in {\mathbb{F}}_p(t)[x]\), where
\(f'(x) = px^{p-1} \equiv 0\). Alternatively, just note that \(f\)
factors as \(f(x) = (x-t^{1\over p})^p\) in \(L[x]\), which has
multiple roots.
\end{itemize}
\end{itemize}
\end{solution}
\hypertarget{fall-2012-4-work}{%
\subsubsection{\texorpdfstring{Fall 2012 \#4
\(\work\)}{Fall 2012 \#4 \textbackslash work}}\label{fall-2012-4-work}}
Let \(f(x) \in {\mathbb{Q}}[x]\) be a polynomial and \(K\) be a
splitting field of \(f\) over \({\mathbb{Q}}\). Assume that
\([K:{\mathbb{Q}}] = 1225\) and show that \(f(x)\) is solvable by
radicals.
\hypertarget{galois-groups-concrete-computations}{%
\subsection{Galois Groups: Concrete
Computations}\label{galois-groups-concrete-computations}}
\hypertarget{exercise-gx2-2}{%
\subsubsection{\texorpdfstring{Exercise:
\(G(x^2-2)\)}{Exercise: G(x\^{}2-2)}}\label{exercise-gx2-2}}
\begin{exercise}[?]
Compute the Galois group of \(x^2-2\).
\end{exercise}
\begin{solution}
The splitting field is \({\mathbb{Q}}(\sqrt 2)\), and
\({ \mathsf{Gal}} ({\mathbb{Q}}(\sqrt 2)/{\mathbb{Q}}) \cong {\mathbb{Z}}/2{\mathbb{Z}}\),
generated by \(\sqrt 2 \mapsto -\sqrt 2\).
\end{solution}
\hypertarget{exercise-gxp-2}{%
\subsubsection{\texorpdfstring{Exercise:
\(G(x^p-2)\)}{Exercise: G(x\^{}p-2)}}\label{exercise-gxp-2}}
\begin{exercise}[?]
Let \(p \in \mathbb{Z}\) be a prime number. Then describe the elements
of the Galois group of the polynomial \(x^{p}-2\).
\end{exercise}
\begin{solution}
The splitting field is \({\mathbb{Q}}(2^{1\over p}, \zeta_p)\), which has
degree \(p(p-1)\) over \({\mathbb{Q}}\), and the Galois group consists of
the \(p(p-1)\) maps
\begin{align*}
\sqrt[p]{2} & \mapsto \sqrt[p]{2} \zeta^{a} \\
\zeta & \mapsto \zeta^{b}
,\end{align*}
where \(a\in {\mathbb{Z}}/p\) and \(b \in ({\mathbb{Z}}/p)^{\times}\).
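A short check of how two such maps compose, writing
\(\sigma_{a,b}\colon \sqrt[p]{2}\mapsto \sqrt[p]{2}\zeta^a,\ \zeta\mapsto\zeta^b\)
(the matrix packaging below is one common, optional way to record it;
compare Spring 2017 \#8):
\begin{align*}
(\sigma_{a,b}\circ\sigma_{c,d})(\sqrt[p]{2}) = \sigma_{a,b}\qty{\sqrt[p]{2}\,\zeta^{c}} = \sqrt[p]{2}\,\zeta^{a+bc},
\qquad
(\sigma_{a,b}\circ\sigma_{c,d})(\zeta) = \zeta^{bd}
,\end{align*}
so \(\sigma_{a,b}\circ\sigma_{c,d} = \sigma_{a+bc,\,bd}\), which matches
multiplication of the matrices
\(\left(\begin{smallmatrix} b & a \\ 0 & 1 \end{smallmatrix}\right)\).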
\end{solution}
\hypertarget{fall-2020-3-work}{%
\subsubsection{\texorpdfstring{Fall 2020 \#3
\(\work\)}{Fall 2020 \#3 \textbackslash work}}\label{fall-2020-3-work}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Define what it means for a finite extension of fields \(E\) over \(F\)
to be a \emph{Galois} extension.
\item
Determine the Galois group of \(f(x) = x^3 - 7\) over
\({\mathbb{Q}}\), and justify your answer carefully.
\item
Find all subfields of the splitting field of \(f(x)\) over
\({\mathbb{Q}}\).
\end{enumerate}
\begin{solution}
Part a:
\begin{itemize}
\tightlist
\item
A finite extension \(E/F\) is \textbf{Galois} if it is normal and
separable:
\begin{itemize}
\tightlist
\item
Normal: every \(f\in F[x]\) either has no roots in \(E\) or all
roots in \(E\).
\item
Separable: every element \(e\in E\) has a separable minimal
polynomial \(m(x)\), i.e.~\(m\) has no repeated roots.
\end{itemize}
\end{itemize}
Part b:
\begin{itemize}
\item
Note \(f\) is irreducible by Eisenstein with \(p=7\), and since
\({\mathbb{Q}}\) is perfect, irreducible implies separable.
\item
Writing \(L \coloneqq\operatorname{SF}(f)/{\mathbb{Q}}\), this is a
Galois extension:
\begin{itemize}
\tightlist
\item
\(L\) is separable: it is a finite extension of a perfect field,
which is automatically separable.
\item
\(L\) is normal: \(L\) is the splitting field of a separable
polynomial, and thus normal.
\end{itemize}
\item
Since \(f\) is degree 3, we have
\(G\coloneqq{ \mathsf{Gal}} (L/k) \leq S_3\), and since \(G\) is a
transitive subgroup the only possibilities are
\begin{align*}
G = S_3 \cong D_3, A_3 \cong C_3
.\end{align*}
\item
Factor \(x^3 - 7 = (x-\omega)(x-\zeta_3\omega)(x-\zeta_3^2\omega)\)
where \(\omega \coloneqq 7^{1\over 3}\) and \(\zeta_3\) is a primitive
3rd root of unity. Then \(L = {\mathbb{Q}}(\zeta_3, \omega)\).
\begin{itemize}
\tightlist
\item
Aside: label the roots in this order, so
\(r_1 = \omega, r_2 = \zeta_3\omega, r_3 = \zeta_3^2\omega\).
\end{itemize}
\item
Write \(\min_{\omega, {\mathbb{Q}}}(x) = x^3 - 7\); letting
\(L_0 \coloneqq{\mathbb{Q}}(\omega)\), this yields
\([L_0: {\mathbb{Q}}] = 3\).
\item
Write
\(\min_{\zeta_3, {\mathbb{Q}}}(x) = (x^3-1)/(x-1) = x^2 + x + 1\), and
note that this is still the minimal polynomial over \(L_0\) since
\(L_0 \subseteq {\mathbb{R}}\) and
\(\zeta_3 \in {\mathbb{C}}\setminus{\mathbb{R}}\). So \([L:L_0] = 2\).
\item
Counting in towers,
\begin{align*}
[L:{\mathbb{Q}}] = [L:L_0][L_0: {\mathbb{Q}}] = (2)(3) = 6
.\end{align*}
\item
But \(\# S_3 = 6\) and \(\# A_3 = 3\), so \(G = S_3\).
\item
Explicitly, since we can write
\(\operatorname{SF}(f) = {\mathbb{Q}}(\omega, \zeta_3)\), we can find
explicit generators:
\begin{align*}
\sigma:
&\begin{cases}
\omega &\mapsto \zeta_3\cdot \omega
\\
\zeta_3 &\mapsto \zeta_3.
\end{cases}
&&
\implies \sigma \sim (1,2,3) \\
\tau:
&\begin{cases}
\omega &\mapsto \omega
\\
\zeta_3 &\mapsto \mkern 1.5mu\overline{\mkern-1.5mu\zeta_3\mkern-1.5mu}\mkern 1.5mu.
\end{cases}
&&
\implies \tau \sim (2, 3)
.\end{align*}
So
\(G = \left\langle{\sigma, \tau {~\mathrel{\Big|}~}\sigma^3, \tau^2, (\sigma\tau)^2}\right\rangle \cong S_3\).
\end{itemize}
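To see the cycle types, one can track the roots \(r_1 = \omega\),
\(r_2 = \zeta_3\omega\), \(r_3 = \zeta_3^2\omega\) directly (a routine
check):
\begin{align*}
\sigma:\quad & r_1 \mapsto \zeta_3\omega = r_2, \quad r_2 \mapsto \zeta_3^2\omega = r_3, \quad r_3 \mapsto \zeta_3^3\omega = r_1 \\
\tau:\quad & r_1 \mapsto \omega = r_1, \quad r_2 \mapsto \zeta_3^2\omega = r_3, \quad r_3 \mapsto \zeta_3^4\omega = \zeta_3\omega = r_2
,\end{align*}
confirming \(\sigma\sim(1,2,3)\) and \(\tau\sim(2,3)\).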
Part c:
\begin{itemize}
\tightlist
\item
Note that the subgroup lattice for \(S_3\) looks like the following:
\end{itemize}
\includegraphics{figures/2021-08-14_18-00-51.png}
\begin{itemize}
\tightlist
\item
Note that we can identify
\begin{itemize}
\tightlist
\item
\(\tau = (2,3)\) which fixes \(r_1\)
\item
\(\sigma \tau = (1,2)\) which fixes \(r_3\)
\item
\(\sigma^2\tau = (1, 3)\) which fixes \(r_2\)
\item
\(\sigma = (1,2,3)\), for which we need to calculate the fixed
field. Using that \(\sigma(\omega) =\zeta\omega\) and
\(\sigma(\zeta)=\zeta\), supposing \(\sigma(\alpha) = \alpha\) we
have
\begin{align*}
\sigma(\alpha) &\coloneqq\sigma(a + b\zeta_3 + c\zeta_3^2 + d\omega + e\zeta_3\omega + f\zeta_3^2\omega) \\
&= a + b\zeta_3 + c\zeta_3^2 + d\zeta_3\omega + e\zeta_3^2\omega + f\omega \\
\implies \alpha &= a + b\zeta_3 + c\zeta_3^2 + t_1(\omega + \zeta_3\omega + \zeta_3^2\omega) \\
\implies \alpha &= a + b\zeta_3 + c\zeta_3^2 + t_1\omega (1 + \zeta_3+ \zeta_3^2) \\
\implies \alpha &= a + b\zeta_3 + c\zeta_3^2
,\end{align*}
using the general fact that \(\sum_{k=0}^{n-1}\zeta_n^k = 0\). So
the fixed field is
\({\mathbb{Q}}(1, \zeta, \zeta^2) = {\mathbb{Q}}(\zeta)\).
\end{itemize}
\item
We thus get the following lattice correspondence:
\end{itemize}
\begin{center}
\begin{tikzcd}
&& {{\mathbb{Q}}(\zeta_3,\omega)} \\
\\
{{\mathbb{Q}}(\omega) = {\mathbb{Q}}(r_1)} & {{\mathbb{Q}}(\zeta_3\omega) = {\mathbb{Q}}(r_2)} & {{\mathbb{Q}}(\zeta_3^2\omega) = {\mathbb{Q}}(r_3)} && {{\mathbb{Q}}(\zeta_3)} \\
\\
&& {\mathbb{Q}}\\
&& 1 \\
\\
{\left\langle{(2,3) = \tau}\right\rangle \cong C_2} & {\left\langle{(1,3) = \sigma^2\tau}\right\rangle \cong C_2} & {\left\langle{(1,2) = \sigma\tau}\right\rangle \cong C_2} && {\left\langle{(1,2,3) = \sigma}\right\rangle \cong C_3} \\
\\
&& {\left\langle{\sigma, \tau}\right\rangle\cong S_3}
\arrow["3"{description}, from=5-3, to=3-1]
\arrow["3"{description}, from=5-3, to=3-3]
\arrow["2"{description}, from=3-1, to=1-3]
\arrow["2"{description}, from=3-2, to=1-3]
\arrow["2"{description}, from=3-3, to=1-3]
\arrow["2"{description}, from=5-3, to=3-5]
\arrow["3"{description}, from=3-5, to=1-3]
\arrow["3"{description}, from=5-3, to=3-2]
\arrow["2"{description}, from=6-3, to=8-1]
\arrow["3"{description}, from=8-1, to=10-3]
\arrow["3"{description}, from=8-3, to=10-3]
\arrow["2"{description}, from=6-3, to=8-3]
\arrow["3"{description}, from=6-3, to=8-5]
\arrow["2"{description}, from=8-5, to=10-3]
\arrow["3"{description}, from=8-2, to=10-3]
\arrow["2"{description}, from=6-3, to=8-2]
\end{tikzcd}
\end{center}
\begin{quote}
\href{https://q.uiver.app/?q=WzAsMTIsWzIsMCwiXFxRUShcXHpldGFfMyxcXG9tZWdhKSJdLFswLDIsIlxcUVEoXFxvbWVnYSkgPSBcXFFRKHJfMSkiXSxbMiwyLCJcXFFRKFxcemV0YV8zXjJcXG9tZWdhKSA9IFxcUVEocl8zKSJdLFsyLDQsIlxcUVEiXSxbMSwyLCJcXFFRKFxcemV0YV8zXFxvbWVnYSkgPSBcXFFRKHJfMikiXSxbNCwyLCJcXFFRKFxcemV0YV8zKSJdLFswLDcsIlxcZ2Vuc3soMiwzKSA9IFxcdGF1fSBcXGNvbmcgQ18yIl0sWzIsOSwiXFxnZW5ze1xcc2lnbWEsIFxcdGF1fVxcY29uZyBTXzMiXSxbMiw1LCIxIl0sWzIsNywiXFxnZW5zeygxLDIpID0gXFxzaWdtYVxcdGF1fSBcXGNvbmcgQ18yIl0sWzQsNywiXFxnZW5zeygxLDIsMykgPSBcXHNpZ21hfSBcXGNvbmcgQ18zIl0sWzEsNywiXFxnZW5zeygxLDMpID0gXFxzaWdtYV4yXFx0YXV9IFxcY29uZyBDXzIiXSxbMywxLCIzIiwxXSxbMywyLCIzIiwxXSxbMSwwLCIyIiwxXSxbNCwwLCIyIiwxXSxbMiwwLCIyIiwxXSxbMyw1LCIyIiwxXSxbNSwwLCIzIiwxXSxbMyw0LCIzIiwxXSxbOCw2LCIyIiwxXSxbNiw3LCIzIiwxXSxbOSw3LCIzIiwxXSxbOCw5LCIyIiwxXSxbOCwxMCwiMyIsMV0sWzEwLDcsIjIiLDFdLFsxMSw3LCIzIiwxXSxbOCwxMSwiMiIsMV1d}{Link
to Diagram}
\end{quote}
\end{solution}
\hypertarget{spring-2021-4-work}{%
\subsubsection{\texorpdfstring{Spring 2021 \#4
\(\work\)}{Spring 2021 \#4 \textbackslash work}}\label{spring-2021-4-work}}
Define
\begin{align*}
f(x) \coloneqq x^4 + 4x^2 + 64 \in {\mathbb{Q}}[x]
.\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Find the splitting field \(K\) of \(f\) over \({\mathbb{Q}}\).
\item
Find the Galois group \(G\) of \(f\).
\item
Exhibit explicitly the correspondence between subgroups of \(G\) and
intermediate fields between \({\mathbb{Q}}\) and \(K\).
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Useful trick: given \(a + \sqrt{b}\), try to rewrite this as
\((\sqrt{c} + \sqrt{d})^2\) for some \(c, d\) to get a better basis
for \(\operatorname{SF}(f)\).
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{itemize}
\item
First consider \(g(z) \coloneqq z^2 + 4z + 64\). Applying the
quadratic formula yields
\begin{align*}
z = {-4 \pm \sqrt{16 - 256} \over 2} = -2 \pm {1\over 2}\sqrt{ -15 \cdot 16 } = -2 \pm 2i \sqrt{15}
.\end{align*}
\item
Substituting \(z=x^2\) yields the splitting field of \(f\) as
\(L\coloneqq{\mathbb{Q}}(\pm \sqrt{ -2 \pm 2i\sqrt{15}})\).
\begin{itemize}
\tightlist
\item
Note that this factorization shows that \(f\) is irreducible over
\({\mathbb{Q}}\), since the two quadratic factors have irrational
coefficients and none of the roots are real.
\item
Irreducible implies separable over a perfect field, so
\(L/{\mathbb{Q}}\) is a separable extension.
\item
\(L\) is the splitting field of a separable polynomial and thus
normal, making \(L\) Galois.
\end{itemize}
\item
In this form, it's not clear what the degree \([L:{\mathbb{Q}}]\) is,
so we can find a better basis by rewriting the roots of \(g\):
\begin{align*}
z = -2 \pm 2i\sqrt{15} = \qty{\sqrt{3}}^2 - \qty{\sqrt 5}^2 \pm 2i\sqrt{3}\sqrt{5} = (\sqrt 3 \pm i\sqrt{5})^2
,\end{align*}
and so the roots of \(f\) are \(x = \pm \sqrt{3} \pm i\sqrt{5}\) and
\(L = {\mathbb{Q}}(\sqrt 3, i\sqrt 5)\).
\item
Counting in towers,
\begin{align*}
[L:{\mathbb{Q}}] = [{\mathbb{Q}}(\sqrt 3, i \sqrt{5} ) : {\mathbb{Q}}(\sqrt{3}) ][{\mathbb{Q}}(\sqrt{3}) : {\mathbb{Q}}] = (2)(2) = 4
,\end{align*}
where we've used that \(\min_{\sqrt 3, {\mathbb{Q}}}(x) = x^2-3\) and
\(\min_{i\sqrt 5, {\mathbb{Q}}}(x) = x^2 + 5\), which remains the
minimal polynomial over
\({\mathbb{Q}}(\sqrt 3) \subseteq {\mathbb{R}}\) since neither of its
roots is real.
\item
So \(G\coloneqq{ \mathsf{Gal}} (L/{\mathbb{Q}}) \leq S_4\) is a
transitive subgroup of size 4, making it either \(C_4\) or \(C_2^2\).
\item
Label the roots:
\begin{align*}
r_1 &= \sqrt 3 + i\sqrt 5 \\
r_2 &= \sqrt{3} - i \sqrt{5} \\
r_3 &= - \sqrt 3 + i\sqrt 5 = -r_2 \\
r_4 &= -\sqrt{3} - i\sqrt{5} = -r_1
.\end{align*}
\item
We can start writing down automorphisms:
\begin{align*}
\sigma_1:
\begin{cases}
\sqrt 3 &\mapsto -\sqrt 3
\\
i\sqrt 5 &\mapsto i\sqrt 5 .
\end{cases}
&& \sigma_1 \sim (1,3)(2,4)
\\
\sigma_2:
\begin{cases}
\sqrt 3 &\mapsto \sqrt 3
\\
i\sqrt 5 &\mapsto -i\sqrt 5 .
\end{cases}
&& \sigma_2 \sim (1, 2)(3, 4)
.\end{align*}
Note that these define automorphisms because we've specified what
happens to a basis and they send roots to other roots.
\item
Checking that \(\sigma_1^2 = \sigma_2^2 = \operatorname{id}\), this
produces two distinct order 2 elements, forcing \(G \cong C_2^2\)
since \(C_4\) only has one order 2 element. Explicitly, we have
\begin{align*}
C_2^2 \cong G = \left\langle{\sigma_1, \sigma_2}\right\rangle = \left\{{\operatorname{id}, \sigma_1, \sigma_2, \sigma_1 \sigma_2}\right\} = \left\{{\operatorname{id}, (1,3)(2,4), (1,2)(3,4), (1,4)(2,3) }\right\}
,\end{align*}
and the generic subgroup lattice looks like:
\end{itemize}
\includegraphics{figures/2021-08-15_00-02-28.png}
\begin{itemize}
\item
Computing some fixed fields. Write \(\sqrt{3} = x, i \sqrt{5} = y\),
then elements in the splitting field are of the form
\(\alpha = 1 + ax + by + cxy\).
\begin{itemize}
\item
For \(\sigma_1\), we have \(x\mapsto -x\), so
\begin{align*}
\sigma_1(\alpha) = 1 - ax + by - cxy
= \alpha \implies a=-a=0, c=-c=0
,\end{align*}
so this preserves \(1+by\), making the fixed field
\({\mathbb{Q}}(1, y) = {\mathbb{Q}}(i \sqrt{5})\).
\item
For \(\sigma_2\), we have \(y\mapsto -y\), so
\begin{align*}
\sigma_2(\alpha) = 1 +ax -by -cxy = \alpha \implies b=-b=0,c=-c=0
,\end{align*}
preserving \(1 + ax\) and making the fixed field
\({\mathbb{Q}}(1, x) = {\mathbb{Q}}(\sqrt 3)\).
\item
For \(\sigma_1 \sigma_2\), we have \(x\mapsto -x\) and
\(y\mapsto -y\), so
\begin{align*}
\sigma_1\sigma_2(\alpha) = 1 -ax -by +cxy = \alpha \implies a=-a=0, b=-b=0
,\end{align*}
preserving \(1 + cxy\) and yielding
\({\mathbb{Q}}(xy) = {\mathbb{Q}}(i\sqrt{15})\).
\end{itemize}
\item
So the lattice correspondence we get here is
\end{itemize}
\begin{center}
\begin{tikzcd}
&& {{\mathbb{Q}}(\sqrt{3}, i\sqrt{5})} \\
\\
{{\mathbb{Q}}(i \sqrt 5)} && {{\mathbb{Q}}(i\sqrt{15})} && {{\mathbb{Q}}(\sqrt 3)} \\
\\
&& {\mathbb{Q}}\\
&& 1 \\
{} &&&& {} \\
{\left\langle{\sigma_1}\right\rangle} && {\left\langle{\sigma_1\sigma_2}\right\rangle} && {\left\langle{\sigma_2}\right\rangle} \\
\\
&& {G = \left\langle{\sigma_1, \sigma_2}\right\rangle}
\arrow["2"{description}, from=5-3, to=3-1]
\arrow["2"{description}, from=5-3, to=3-3]
\arrow["2"{description}, from=5-3, to=3-5]
\arrow["2"{description}, from=3-3, to=1-3]
\arrow["2"{description}, from=3-1, to=1-3]
\arrow["2"{description}, from=3-5, to=1-3]
\arrow["2"{description}, from=6-3, to=8-1]
\arrow["2"{description}, from=6-3, to=8-3]
\arrow["2"{description}, from=6-3, to=8-5]
\arrow["2"{description}, from=8-1, to=10-3]
\arrow["2"{description}, from=8-3, to=10-3]
\arrow["2"{description}, from=8-5, to=10-3]
\end{tikzcd}
\end{center}
\begin{quote}
\href{https://q.uiver.app/?q=WzAsMTIsWzIsMCwiXFxRUShcXHNxcnR7NX0sIGlcXHNxcnR7M30pIl0sWzAsMiwiXFxRUShpIFxcc3FydCAzKSJdLFsyLDIsIlxcUVEoaVxcc3FydHszfVxcc3FydHs1fSkiXSxbNCwyLCJcXFFRKFxcc3FydCA1KSJdLFsyLDQsIlxcUVEiXSxbMiw1LCIxIl0sWzAsNl0sWzQsNl0sWzIsNywiXFxnZW5ze1xcc2lnbWFfMVxcc2lnbWFfMn0iXSxbMCw3LCJcXGdlbnN7XFxzaWdtYV8xfSJdLFs0LDcsIlxcZ2Vuc3tcXHNpZ21hXzJ9Il0sWzIsOSwiRyA9IFxcZ2Vuc3tcXHRhdV8xLCBcXHRhdV8yfSJdLFs0LDEsIjIiLDFdLFs0LDIsIjIiLDFdLFs0LDMsIjIiLDFdLFsyLDAsIjIiLDFdLFsxLDAsIjIiLDFdLFszLDAsIjIiLDFdLFs1LDksIjIiLDFdLFs1LDgsIjIiLDFdLFs1LDEwLCIyIiwxXSxbOSwxMSwiMiIsMV0sWzgsMTEsIjIiLDFdLFsxMCwxMSwiMiIsMV1d}{Link
to Diagram}
\end{quote}
\end{solution}
\hypertarget{fall-2019-midterm-6-work}{%
\subsubsection{\texorpdfstring{Fall 2019 Midterm \#6
\(\work\)}{Fall 2019 Midterm \#6 \textbackslash work}}\label{fall-2019-midterm-6-work}}
Compute the Galois group of
\(f(x) = x^3-3x -3\in {\mathbb{Q}}[x]\) over \({\mathbb{Q}}\).
\hypertarget{spring-2018-2-done}{%
\subsubsection{\texorpdfstring{Spring 2018 \#2
\(\done\)}{Spring 2018 \#2 \textbackslash done}}\label{spring-2018-2-done}}
Let \(f(x) = x^4 - 4x^2 + 2 \in {\mathbb{Q}}[x]\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Find the splitting field \(K\) of \(f\), and compute
\([K: {\mathbb{Q}}]\).
\item
Find the Galois group \(G\) of \(f\), both as an explicit group of
automorphisms, and as a familiar abstract group to which it is
isomorphic.
\item
Exhibit explicitly the correspondence between subgroups of \(G\) and
intermediate fields between \({\mathbb{Q}}\) and \(K\).
\end{enumerate}
\todo[inline]{Not the nicest proof! Would be better to replace the ad-hoc computations at the end.}
\begin{solution}
\envlist
\begin{proof}[of a]
Note that \(g(x) = x^2 - 4x + 2\) has roots \(\beta = 2 \pm \sqrt{2}\),
and so \(f\) has roots
\begin{align*}
\alpha_1 &= \sqrt{2 + \sqrt 2} \\
\alpha_2 &= \sqrt{2 - \sqrt 2} \\
\alpha_3 &= -\alpha_1 \\
\alpha_4 &= -\alpha_2
.\end{align*}
and splitting field \(K = {\mathbb{Q}}(\left\{{\alpha_i}\right\})\).
\end{proof}
\begin{proof}[of b]
\(K\) is the splitting field of a separable polynomial and thus Galois
over \({\mathbb{Q}}\). Moreover, Since \(f\) is irreducible by
Eisenstein with \(p=2\), the Galois group is a transitive subgroup of
\(S_4\), so the possibilities are:
\begin{itemize}
\tightlist
\item
\(S_4\)
\item
\(A_4\)
\item
\(D_4\)
\item
\({\mathbb{Z}}/(2) \times{\mathbb{Z}}/(2)\)
\item
\({\mathbb{Z}}/(4)\)
\end{itemize}
We can note that \(g\) splits over \(L \coloneqq{\mathbb{Q}}(\sqrt 2)\),
an extension of degree 2.
We can now note that \(\min(\alpha_1, L)\) is given by
\(p(x) = x^2 - (2 + \sqrt 2)\), and so \([K: L] = 2\).
We then have
\begin{align*}
[K: {\mathbb{Q}}] = [K: L] [L : {\mathbb{Q}}] = (2)(2) = 4
.\end{align*}
Thus
\({\left\lvert {{ \mathsf{Gal}} (K/{\mathbb{Q}})} \right\rvert} = 4\),
which leaves only two possibilities:
\begin{itemize}
\tightlist
\item
\({\mathbb{Z}}/(2) \times{\mathbb{Z}}/(2)\)
\item
\({\mathbb{Z}}/(4)\)
\end{itemize}
We can next check orders of elements. Take
\begin{align*}
\sigma &\in { \mathsf{Gal}} (K/{\mathbb{Q}}) \\
\alpha_1 &\mapsto \alpha_2
.\end{align*}
Computations show that
\begin{itemize}
\tightlist
\item
\(\alpha_1^2 \alpha_2^2 = 2\), so \(\alpha_1 \alpha_2 = \sqrt 2\)
\item
\(\alpha_1^2 = 2 + \sqrt 2 \implies \sqrt 2 = \alpha_1^2 - 2\)
\end{itemize}
and thus
\begin{align*}
\sigma^2(\alpha_1)
&= \sigma(\alpha_2) \\
&= \sigma\left(\frac{\sqrt 2}{\alpha_1}\right) \\
&= \frac{\sigma(\sqrt 2)}{\sigma(\alpha_1)} \\
&= \frac{\sigma(\alpha_1^2 - 2)}{\alpha_2} \\
&= \frac{\alpha_2^2 - 2}{\alpha_2} \\
&= \alpha_2 -2\alpha_2^{-1}\\
&= \alpha_2 - \frac{2\alpha_1}{\sqrt 2} \\
&= \alpha_2 -\alpha_1 \sqrt 2 \\
&\neq \alpha_1
,\end{align*}
and so the order of \(\sigma\) is strictly greater than 2, and thus 4,
and thus
\({ \mathsf{Gal}} (K/{\mathbb{Q}}) = \left\{{\sigma^k {~\mathrel{\Big|}~}1\leq k \leq 4}\right\} \cong {\mathbb{Z}}/(4)\).
\end{proof}
\begin{proof}[of c]
The subgroup lattice of \({\mathbb{Z}}/(4) = \left\langle{\sigma}\right\rangle\) is
\(1 \leq \left\langle{\sigma^2}\right\rangle \leq \left\langle{\sigma}\right\rangle\), so
there is exactly one proper nontrivial intermediate field. Since
\(\sigma(\sqrt 2) = \sigma(\alpha_1^2 - 2) = \alpha_2^2 - 2 = -\sqrt 2\),
we have \(\sigma^2(\sqrt 2) = \sqrt 2\), so the index 2 subgroup
\(\left\langle{\sigma^2}\right\rangle\) corresponds to the extension
\({\mathbb{Q}}(\sqrt 2) / {\mathbb{Q}}\). The full correspondence is
\(1 \leftrightarrow K\),
\(\left\langle{\sigma^2}\right\rangle \leftrightarrow {\mathbb{Q}}(\sqrt 2)\), and
\(\left\langle{\sigma}\right\rangle \leftrightarrow {\mathbb{Q}}\).
\end{proof}
\end{solution}
\hypertarget{spring-2020-4-work}{%
\subsubsection{\texorpdfstring{Spring 2020 \#4
\(\work\)}{Spring 2020 \#4 \textbackslash work}}\label{spring-2020-4-work}}
Let \(f(x) = x^4-2 \in {\mathbb{Q}}[x]\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Define what it means for a finite extension field \(E\) of a field
\(F\) to be a Galois extension.
\item
Determine the Galois group \({ \operatorname{Gal}} (E/{\mathbb{Q}})\)
for the polynomial \(f(x)\), and justify your answer carefully.
\item
Exhibit a subfield \(K\) in \((b)\) such that
\({\mathbb{Q}}\leq K \leq E\) with \(K\) not a Galois extension over
\({\mathbb{Q}}\). Explain.
\end{enumerate}
\hypertarget{spring-2017-8-work}{%
\subsubsection{\texorpdfstring{Spring 2017 \#8
\(\work\)}{Spring 2017 \#8 \textbackslash work}}\label{spring-2017-8-work}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Let \(K\) denote the splitting field of \(x^5 - 2\) over
\({\mathbb{Q}}\). Show that the Galois group of \(K/{\mathbb{Q}}\) is
isomorphic to the group of invertible matrices
\begin{align*}
\left(\begin{array}{ll}
a & b \\
0 & 1
\end{array}\right)
{\quad \operatorname{where} \quad} a\in {\mathbb{F}}_5^{\times}\text{ and } b\in {\mathbb{F}}_5
.\end{align*}
\item
Determine all intermediate fields between \(K\) and \({\mathbb{Q}}\)
which are Galois over \({\mathbb{Q}}\).
\end{enumerate}
\hypertarget{fall-2016-4-work}{%
\subsubsection{\texorpdfstring{Fall 2016 \#4
\(\work\)}{Fall 2016 \#4 \textbackslash work}}\label{fall-2016-4-work}}
Set \(f(x) = x^3 - 5 \in {\mathbb{Q}}[x]\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Find the splitting field \(K\) of \(f(x)\) over \({\mathbb{Q}}\).
\item
Find the Galois group \(G\) of \(K\) over \({\mathbb{Q}}\).
\item
Exhibit explicitly the correspondence between subgroups of \(G\) and
intermediate fields between \({\mathbb{Q}}\) and \(K\).
\end{enumerate}
\hypertarget{spring-2016-2-work}{%
\subsubsection{\texorpdfstring{Spring 2016 \#2
\(\work\)}{Spring 2016 \#2 \textbackslash work}}\label{spring-2016-2-work}}
Let \(K = {\mathbb{Q}}[\sqrt 2 + \sqrt 5]\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Find \([K: {\mathbb{Q}}]\).
\item
Show that \(K/{\mathbb{Q}}\) is Galois, and find the Galois group
\(G\) of \(K/{\mathbb{Q}}\).
\item
Exhibit explicitly the correspondence between subgroups of \(G\) and
intermediate fields between \({\mathbb{Q}}\) and \(K\).
\end{enumerate}
\hypertarget{fall-2015-5-work}{%
\subsubsection{\texorpdfstring{Fall 2015 \#5
\(\work\)}{Fall 2015 \#5 \textbackslash work}}\label{fall-2015-5-work}}
Let \(u = \sqrt{2 + \sqrt{2}}\), \(v = \sqrt{2 - \sqrt{2}}\), and
\(E = {\mathbb{Q}}(u)\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Find (with justification) the minimal polynomial \(f(x)\) of \(u\)
over \({\mathbb{Q}}\).
\item
Show \(v\in E\), and show that \(E\) is a splitting field of \(f(x)\)
over \({\mathbb{Q}}\).
\item
Determine the Galois group of \(E\) over \({\mathbb{Q}}\) and
determine all of the intermediate fields \(F\) such that
\({\mathbb{Q}}\subset F \subset E\).
\end{enumerate}
\hypertarget{spring-2015-5-work}{%
\subsubsection{\texorpdfstring{Spring 2015 \#5
\(\work\)}{Spring 2015 \#5 \textbackslash work}}\label{spring-2015-5-work}}
Let \(f(x) = x^4 - 5 \in {\mathbb{Q}}[x]\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Compute the Galois group of \(f\) over \({\mathbb{Q}}\).
\item
Compute the Galois group of \(f\) over \({\mathbb{Q}}(\sqrt{5})\).
\end{enumerate}
\hypertarget{fall-2014-3-work}{%
\subsubsection{\texorpdfstring{Fall 2014 \#3
\(\work\)}{Fall 2014 \#3 \textbackslash work}}\label{fall-2014-3-work}}
Consider the polynomial \(f(x) = x^4 - 7 \in {\mathbb{Q}}[x]\) and let
\(E/{\mathbb{Q}}\) be the splitting field of \(f\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
What is the structure of the Galois group of \(E/{\mathbb{Q}}\)?
\item
Give an explicit description of all of the intermediate subfields
\({\mathbb{Q}}\subset K \subset E\) in the form
\(K = {\mathbb{Q}}(\alpha), {\mathbb{Q}}(\alpha, \beta), \cdots\)
where \(\alpha, \beta\), etc are complex numbers. Describe the
corresponding subgroups of the Galois group.
\end{enumerate}
\hypertarget{fall-2013-6-work}{%
\subsubsection{\texorpdfstring{Fall 2013 \#6
\(\work\)}{Fall 2013 \#6 \textbackslash work}}\label{fall-2013-6-work}}
Let \(K\) be the splitting field of \(x^4-2\) over \({\mathbb{Q}}\) and
set \(G = { \operatorname{Gal}} (K/{\mathbb{Q}})\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(K/{\mathbb{Q}}\) contains both \({\mathbb{Q}}(i)\) and
\({\mathbb{Q}}(\sqrt[4]{2})\) and has degree 8 over \({\mathbb{Q}}\).
\item
Let \(N = { \operatorname{Gal}} (K/{\mathbb{Q}}(i))\) and
\(H = { \operatorname{Gal}} (K/{\mathbb{Q}}(\sqrt[4]{2}))\). Show that
\(N\) is normal in \(G\) and \(NH = G\).
\begin{quote}
Hint: what field is fixed by \(NH\)?
\end{quote}
\item
Show that \({ \operatorname{Gal}} (K/{\mathbb{Q}})\) is generated by
elements \(\sigma, \tau\), of orders 4 and 2 respectively, with
\(\tau \sigma\tau^{-1}= \sigma^{-1}\).
\begin{quote}
Equivalently, show it is the dihedral group of order 8.
\end{quote}
\item
How many distinct quartic subfields of \(K\) are there? Justify your
answer.
\end{enumerate}
\hypertarget{spring-2014-4-work}{%
\subsubsection{\texorpdfstring{Spring 2014 \#4
\(\work\)}{Spring 2014 \#4 \textbackslash work}}\label{spring-2014-4-work}}
Let \(E\subset {\mathbb{C}}\) denote the splitting field over
\({\mathbb{Q}}\) of the polynomial \(x^3 - 11\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that if \(n\) is a squarefree positive integer, then
\(\sqrt{n}\not\in E\).
\begin{quote}
Hint: you can describe all quadratic extensions of \({\mathbb{Q}}\)
contained in \(E\).
\end{quote}
\item
Find the Galois group of \((x^3 - 11)(x^2 - 2)\) over
\({\mathbb{Q}}\).
\item
Prove that the minimal polynomial of \(11^{1/3} + 2^{1/2}\) over
\({\mathbb{Q}}\) has degree 6.
\end{enumerate}
\hypertarget{spring-2013-8-work}{%
\subsubsection{\texorpdfstring{Spring 2013 \#8
\(\work\)}{Spring 2013 \#8 \textbackslash work}}\label{spring-2013-8-work}}
Let \(F\) be the field with 2 elements and \(K\) a splitting field of
\(f(x) = x^6 + x^3 + 1\) over \(F\). You may assume that \(f\) is
irreducible over \(F\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that if \(r\) is a root of \(f\) in \(K\), then \(r^9 = 1\) but
\(r^3\neq 1\).
\item
Find \({ \operatorname{Gal}} (K/F)\) and express each intermediate
field between \(F\) and \(K\) as \(F(\beta)\) for an appropriate
\(\beta \in K\).
\end{enumerate}
\hypertarget{galois-groups-indirect-computations-facts}{%
\subsection{Galois Groups: Indirect Computations /
Facts}\label{galois-groups-indirect-computations-facts}}
\hypertarget{fall-2019-7-done}{%
\subsubsection{\texorpdfstring{Fall 2019 \#7
\(\done\)}{Fall 2019 \#7 \textbackslash done}}\label{fall-2019-7-done}}
Let \(\zeta_n\) denote a primitive \(n\)th root of 1 in
\({\mathbb{C}}\). You may assume the roots of the minimal polynomial
\(p_n(x)\) of \(\zeta_n\) are exactly the primitive \(n\)th roots of 1.
Show that the field extension \({\mathbb{Q}}(\zeta_n )\) over
\({\mathbb{Q}}\) is Galois and prove its Galois group is
\(({\mathbb{Z}}/n{\mathbb{Z}})^{\times}\).
How many subfields are there of \({\mathbb{Q}}(\zeta_{20} )\)?
\begin{concept}
\envlist
\begin{itemize}
\item
\textbf{Galois} = normal + separable.
\item
\textbf{Separable}: Minimal polynomial of every element has distinct
roots.
\item
\textbf{Normal (if separable)}: Splitting field of an irreducible
polynomial.
\item
\(\zeta\) is a primitive root of unity \(\iff o(\zeta) = n\) in
\({\mathbb{F}}^{\times}\).
\item
\(\phi(p^k) = p^{k-1}(p-1)\)
\item
The lattice:
\begin{figure}
\centering
\includegraphics{figures/image_2021-04-17-02-44-48.png}
\caption{image\_2021-04-17-02-44-48}
\end{figure}
\end{itemize}
\end{concept}
\begin{solution}
\envlist
Let \(K = {\mathbb{Q}}(\zeta)\). Then \(K\) is the splitting field of
\(f(x) = x^n - 1\) over \({\mathbb{Q}}\) (equivalently, of the
irreducible polynomial \(p_n(x)\)), so \(K/{\mathbb{Q}}\) is normal.
We also have \(f'(x) = nx^{n-1}\), and \(\gcd(f, f') = 1\) since the
only root of \(f'\) is zero, which is not a root of \(f\); so \(f\) is
separable.
\begin{quote}
Or equivalently, \(f\) splits into distinct linear factors
\(f(x) = \prod_{k\leq n}(x-\zeta^k)\).
\end{quote}
Since it is a Galois extension,
\({\left\lvert {{ \mathsf{Gal}} (K/{\mathbb{Q}})} \right\rvert} = [K: {\mathbb{Q}}] = \phi(n)\)
for the totient function.
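For a concrete instance (using only multiplicativity of \(\phi\); this
anticipates the \(n = 20\) of the last part):

\begin{align*}
\phi(20) = \phi(4)\,\phi(5) = 2\cdot 4 = 8
\implies [{\mathbb{Q}}(\zeta_{20}): {\mathbb{Q}}] = 8
.\end{align*}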
We can now define maps
\begin{align*}
\tau_j: K &\to K \\
\zeta &\mapsto \zeta^j
\end{align*}
and if we restrict to \(j\) such that \(\gcd(n, j) = 1\), this yields
\(\phi(n)\) maps. Noting that if \(\zeta\) is a primitive root, then
\((n, j) = 1\) implies that \(\zeta^j\) is also a primitive root,
and hence another root of \(\min(\zeta, {\mathbb{Q}})\), and so these
are in fact automorphisms of \(K\) that fix \({\mathbb{Q}}\) and thus
elements of \({ \mathsf{Gal}} (K/{\mathbb{Q}})\).
So define a map
\begin{align*}
\theta: {\mathbb{Z}}_n^{\times}&\to { \mathsf{Gal}} (K/{\mathbb{Q}}) \\
[j]_n &\mapsto \tau_j
.\end{align*}
from the \emph{multiplicative} group of units to the Galois group.
The claim is that this is a surjective homomorphism, and since both
groups are the same size, an isomorphism.
\begin{proof}[of surjectivity]
Letting \(\sigma \in { \mathsf{Gal}} (K/{\mathbb{Q}})\) be arbitrary,
noting that \(K = {\mathbb{Q}}(\zeta)\) is generated over
\({\mathbb{Q}}\) by \(\zeta\),
it suffices to specify \(\sigma(\zeta)\) to fully determine the
automorphism. (Since \(\sigma(\zeta^k) = \sigma(\zeta)^k\).)
In particular, \(\sigma(\zeta)\) satisfies the polynomial \(x^n - 1\),
since \(\sigma(\zeta)^n = \sigma(\zeta^n) = \sigma(1) = 1\), which means
\(\sigma(\zeta)\) is another root of unity and
\(\sigma(\zeta) = \zeta^k\) for some \(1\leq k \leq n\).
Moreover, since \(o(\zeta) = n \in K^{\times}\) and field automorphisms
preserve multiplicative orders, we must have
\(o(\zeta^k) = o(\sigma(\zeta)) = n \in K^{\times}\) as well. Noting that
\(\left\{{\zeta^i}\right\}\) forms a cyclic subgroup
\(H\leq K^{\times}\), then \(o(\zeta^k) = n \iff (n, k) = 1\) (by
general theory of cyclic groups).
Thus \(\theta\) is surjective.
\end{proof}
\begin{proof}[of being a homomorphism]
\begin{align*}
\tau_j \circ \tau_k (\zeta) =\tau_j(\zeta^k) = \zeta^{jk} \implies
\tau_{jk} = \theta(jk) = \tau_j \circ \tau_k
.\end{align*}
\end{proof}
\begin{proof}[of part 2]
We have
\({ \mathsf{Gal}} (K/{\mathbb{Q}}) \cong ({\mathbb{Z}}/20{\mathbb{Z}})^{\times}\),
and since \(20 = 4\cdot 5\), the CRT gives

\begin{align*}
({\mathbb{Z}}/20{\mathbb{Z}})^{\times}
\cong ({\mathbb{Z}}/4{\mathbb{Z}})^{\times}\times ({\mathbb{Z}}/5{\mathbb{Z}})^{\times}
\cong {\mathbb{Z}}/2 \times {\mathbb{Z}}/4
,\end{align*}

a group of order \(\phi(20) = 8\).

By the Galois correspondence, subfields of \({\mathbb{Q}}(\zeta_{20})\)
biject with subgroups of \({\mathbb{Z}}/2 \times {\mathbb{Z}}/4\), and
there are 8 of these:

\begin{itemize}
\tightlist
\item
  the trivial subgroup, corresponding to \({\mathbb{Q}}(\zeta_{20})\);
\item
  three subgroups of order 2, generated by \((1,0), (0,2), (1,2)\);
\item
  three subgroups of order 4: two cyclic and one Klein four-group;
\item
  the full group, corresponding to \({\mathbb{Q}}\).
\end{itemize}

So there are 8 subfields of \({\mathbb{Q}}(\zeta_{20})\), each of the
form \({\mathbb{Q}}(\omega)\) for some \(\omega\) by the primitive
element theorem.
\end{proof}
\end{solution}
\hypertarget{fall-2018-3-done}{%
\subsubsection{\texorpdfstring{Fall 2018 \#3
\(\done\)}{Fall 2018 \#3 \textbackslash done}}\label{fall-2018-3-done}}
Let \(F \subset K \subset L\) be finite degree field extensions. For
each of the following assertions, give a proof or a counterexample.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
If \(L/F\) is Galois, then so is \(K/F\).
\item
If \(L/F\) is Galois, then so is \(L/K\).
\item
If \(K/F\) and \(L/K\) are both Galois, then so is \(L/F\).
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Every quadratic extension over \({\mathbb{Q}}\) is Galois.
\end{itemize}
\end{concept}
\begin{solution}
Let \(L/K/F\).
\begin{proof}[of a]
\textbf{False}: Take
\(L/K/F = {\mathbb{Q}}(\zeta_3, \sqrt[3] 2) \to {\mathbb{Q}}(\sqrt[3] 2) \to {\mathbb{Q}}\),
where \(\zeta_3\) is a primitive cube root of unity.

Then \(L/F\) is Galois, since it is the splitting field of \(x^3 - 2\)
and \({\mathbb{Q}}\) has characteristic zero.

But \(K/F\) is not Galois, since it is not normal: the irreducible
polynomial \(x^3 - 2\) has a root in \(K\) but does not split in \(K\).
\end{proof}
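To spell out why \(K/F\) fails to be normal (a routine check, not part
of the original argument): over \({\mathbb{C}}\),

\begin{align*}
x^3 - 2 = (x - \sqrt[3]{2})(x - \zeta_3\sqrt[3]{2})(x - \zeta_3^2\sqrt[3]{2})
,\end{align*}

and the last two roots are non-real, so they can not lie in
\(K = {\mathbb{Q}}(\sqrt[3]{2}) \subset {\mathbb{R}}\); thus \(x^3-2\)
has a root in \(K\) but does not split there.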
\begin{proof}[of b]
\textbf{True}: If \(L/F\) is Galois, then \(L/K\) is normal and
separable:
\begin{itemize}
\item
  \(L/K\) is normal: any embedding
  \(\sigma: L \hookrightarrow\overline K = \overline F\) that fixes
  \(K\) pointwise in particular fixes \(F\) pointwise, and since
  \(L/F\) is normal this forces \(\sigma(L) = L\). So every such
  embedding is an automorphism of \(L\), which is the embedding
  criterion for \(L/K\) to be normal.
\item
  \(L/K\) is separable: for \(\alpha \in L\), the minimal polynomial
  \(g(x) \coloneqq\min(\alpha, K)\) divides
  \(f(x) \coloneqq\min(\alpha, F)\) in \(K[x]\), since
  \(F[x] \subseteq K[x]\). As \(f\) has no repeated roots (by
  separability of \(L/F\)), neither does \(g\).
\end{itemize}
\end{proof}
\begin{proof}[of c]
\textbf{False}: Use the fact that every quadratic extension is Galois,
and take
\(L/K/F = {\mathbb{Q}}(\sqrt[4] 2) \to {\mathbb{Q}}(\sqrt 2) \to {\mathbb{Q}}\).
Then each successive extension is quadratic (thus Galois), but
\({\mathbb{Q}}(\sqrt[4] 2)/{\mathbb{Q}}\) is not normal: the
irreducible polynomial \(x^4 - 2\) has the root \(\sqrt[4] 2\) in
\({\mathbb{Q}}(\sqrt[4] 2)\) but does not split there, since its
non-real roots \(\pm i \sqrt[4] 2\) can not lie in
\({\mathbb{Q}}(\sqrt[4] 2) \subset {\mathbb{R}}\).
\end{proof}
\end{solution}
\hypertarget{spring-2018-3-done}{%
\subsubsection{\texorpdfstring{Spring 2018 \#3
\(\done\)}{Spring 2018 \#3 \textbackslash done}}\label{spring-2018-3-done}}
Let \(K\) be a Galois extension of \({\mathbb{Q}}\) with Galois group
\(G\), and let \(E_1 , E_2\) be intermediate fields of \(K\) which are
the splitting fields of irreducible \(f_i (x) \in {\mathbb{Q}}[x]\).
Let \(E = E_1 E_2 \subset K\).
Let \(H_i = { \mathsf{Gal}} (K/E_i)\) and \(H = { \mathsf{Gal}} (K/E)\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(H = H_1 \cap H_2\).
\item
Show that \(H_1 H_2\) is a subgroup of \(G\).
\item
Show that
\begin{align*}
{ \mathsf{Gal}} (K/(E_1 \cap E_2 )) = H_1 H_2
.\end{align*}
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
The Galois correspondence:
\begin{itemize}
\tightlist
\item
\(H_1 \cap H_2 \rightleftharpoons E_1 E_2\),
\item
\(H_1 H_2 \rightleftharpoons E_1 \cap E_2\).
\end{itemize}
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[of a]
By the Galois correspondence, it suffices to show that the fixed field
of \(H_1 \cap H_2\) is \(E_1 E_2\).
Let \(\sigma \in H_1 \cap H_2\); then
\(\sigma \in \mathop{\mathrm{Aut}}(K)\) fixes both \(E_1\) and \(E_2\)
pointwise.

The compositum \(E_1 E_2\) is the subfield of \(K\) generated by
\(E_1 \cup E_2\), so every element of \(E_1 E_2\) is a quotient of
finite sums of products \(e_1 e_2\) with \(e_i \in E_i\). Since
\(\sigma\) is a field homomorphism fixing each \(e_1\) and \(e_2\),
e.g.

\begin{align*}
\sigma(e_1 e_2) = \sigma(e_1) \sigma(e_2) = e_1 e_2,
\end{align*}

it fixes all such sums, products, and quotients, so \(\sigma\) fixes
\(E_1 E_2\) pointwise, i.e.~\(\sigma \in H\).

Conversely, any \(\sigma \in H\) fixes \(E_1 E_2 \supseteq E_1, E_2\)
pointwise, so \(\sigma \in H_1 \cap H_2\). Thus \(H = H_1 \cap H_2\).
\end{proof}
\begin{proof}[of b]
That \(H_1 H_2 \subseteq G\) is clear, since if
\(\sigma = \tau_1 \tau_2 \in H_1 H_2\), then each \(\tau_i\) is an
automorphism of \(K\) that fixes \(E_i \supseteq {\mathbb{Q}}\), so each
\(\tau_i\) fixes \({\mathbb{Q}}\) and thus \(\sigma\) fixes
\({\mathbb{Q}}\).
\begin{claim}
\(H_1\) is normal in \(G\), and the product of a normal subgroup with
any subgroup is again a subgroup.
\end{claim}

\begin{proof}[of claim]
\envlist

\begin{itemize}
\item
  \(E_1/{\mathbb{Q}}\) is Galois: it is the splitting field of the
  irreducible polynomial \(f_1\), which is separable since
  \(\operatorname{ch}{\mathbb{Q}}= 0\). By the fundamental theorem of
  Galois theory, subgroups corresponding to intermediate fields that
  are Galois over \({\mathbb{Q}}\) are normal, so
  \(H_1 = { \mathsf{Gal}} (K/E_1) {~\trianglelefteq~}G\).
\item
  For \(N \coloneqq H_1 {~\trianglelefteq~}G\) and
  \(H \coloneqq H_2 \leq G\), the set \(NH\) is a subgroup:
  \begin{align*}
  (n_1 h_1)(n_2 h_2) &= n_1 (h_1 n_2 h_1^{-1}) h_1 h_2 \in NH, \\
  (n h)^{-1} &= h^{-1} n^{-1} = (h^{-1} n^{-1} h) h^{-1} \in NH
  ,\end{align*}
  using normality of \(N\) to absorb the conjugated factors into \(N\).
\end{itemize}

Thus \(H_1 H_2 \leq G\).
\end{proof}
\end{proof}
\begin{proof}[of c]
By the Galois correspondence, the subgroup \(H_1 H_2 \leq G\)
corresponds to its fixed field, so it suffices to show that the fixed
field of \(H_1 H_2\) is exactly \(E_1 \cap E_2\).

If \(x \in E_1 \cap E_2\) and \(\sigma = \tau_1 \tau_2 \in H_1 H_2\)
with \(\tau_i \in H_i\), then
\begin{align*}
\sigma(x) = \tau_1(\tau_2(x)) = \tau_1(x) = x
,\end{align*}
so \(E_1 \cap E_2\) is contained in the fixed field of \(H_1 H_2\).

Conversely, since \(H_1, H_2 \subseteq H_1 H_2\), the fixed field of
\(H_1 H_2\) is contained in the fixed field of each \(H_i\), which is
\(E_i\); hence it is contained in \(E_1 \cap E_2\).

Thus the fixed field of \(H_1 H_2\) is \(E_1 \cap E_2\), and since
\(H_1 H_2 \leq G\) by (b), the Galois correspondence gives
\({ \mathsf{Gal}} (K/(E_1 \cap E_2)) = H_1 H_2\).
\end{proof}
\end{solution}
\hypertarget{fall-2017-4-work}{%
\subsubsection{\texorpdfstring{Fall 2017 \#4
\(\work\)}{Fall 2017 \#4 \textbackslash work}}\label{fall-2017-4-work}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Let \(f (x)\) be an irreducible polynomial of degree 4 in
\({\mathbb{Q}}[x]\) whose splitting field \(K\) over \({\mathbb{Q}}\)
has Galois group \(G = S_4\).
Let \(\theta\) be a root of \(f(x)\). Prove that
\({\mathbb{Q}}[\theta]\) is an extension of \({\mathbb{Q}}\) of degree
4 and that there are no intermediate fields between \({\mathbb{Q}}\)
and \({\mathbb{Q}}[\theta]\).
\item
Prove that if \(K\) is a Galois extension of \({\mathbb{Q}}\) of
degree 4, then there is an intermediate subfield between \(K\) and
\({\mathbb{Q}}\).
\end{enumerate}
\hypertarget{spring-2017-7-work}{%
\subsubsection{\texorpdfstring{Spring 2017 \#7
\(\work\)}{Spring 2017 \#7 \textbackslash work}}\label{spring-2017-7-work}}
Let \(F\) be a field and let \(f(x) \in F[x]\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Define what a splitting field of \(f(x)\) over \(F\) is.
\item
Let \(F\) now be a finite field with \(q\) elements. Let \(E/F\) be a
finite extension of degree \(n>0\). Exhibit an explicit polynomial
\(g(x) \in F[x]\) such that \(E/F\) is a splitting field of \(g(x)\)
over \(F\). Fully justify your answer.
\item
Show that the extension \(E/F\) in (b) is a Galois extension.
\end{enumerate}
\hypertarget{spring-2016-6-work}{%
\subsubsection{\texorpdfstring{Spring 2016 \#6
\(\work\)}{Spring 2016 \#6 \textbackslash work}}\label{spring-2016-6-work}}
Let \(K\) be a Galois extension of a field \(F\) with \([K: F] = 2015\).
Prove that \(K\) is an extension by radicals of the field \(F\).
\hypertarget{fall-2015-6-work}{%
\subsubsection{\texorpdfstring{Fall 2015 \#6
\(\work\)}{Fall 2015 \#6 \textbackslash work}}\label{fall-2015-6-work}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Let \(G\) be a finite group. Show that there exists a field extension
\(K/F\) with \({ \operatorname{Gal}} (K/F) = G\).
\begin{quote}
You may assume that for any natural number \(n\) there is a field
extension with Galois group \(S_n\).
\end{quote}
\item
Let \(K\) be a Galois extension of \(F\) with
\({\left\lvert {{ \operatorname{Gal}} (K/F)} \right\rvert} = 12\).
Prove that there exists an intermediate field \(E\) of \(K/F\) with
\([E: F] = 3\).
\item
With \(K/F\) as in (b), does an intermediate field \(L\) necessarily
exist satisfying \([L: F] = 2\)? Give a proof or counterexample.
\end{enumerate}
\hypertarget{fall-2014-1-work}{%
\subsubsection{\texorpdfstring{Fall 2014 \#1
\(\work\)}{Fall 2014 \#1 \textbackslash work}}\label{fall-2014-1-work}}
Let \(f\in {\mathbb{Q}}[x]\) be an irreducible polynomial and \(L\) a
finite Galois extension of \({\mathbb{Q}}\). Let
\(f(x) = g_1(x)g_2(x)\cdots g_r(x)\) be a factorization of \(f\) into
irreducibles in \(L[x]\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that each of the factors \(g_i(x)\) has the same degree.
\item
Give an example showing that if \(L\) is not Galois over
\({\mathbb{Q}}\), the conclusion of part (a) need not hold.
\end{enumerate}
\hypertarget{spring-2013-7-work}{%
\subsubsection{\texorpdfstring{Spring 2013 \#7
\(\work\)}{Spring 2013 \#7 \textbackslash work}}\label{spring-2013-7-work}}
Let \(f(x) = g(x) h(x) \in {\mathbb{Q}}[x]\) and \(E,B,C/{\mathbb{Q}}\)
be the splitting fields of \(f,g,h\) respectively.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that \({ \operatorname{Gal}} (E/B)\) and
\({ \operatorname{Gal}} (E/C)\) are normal subgroups of
\({ \operatorname{Gal}} (E/{\mathbb{Q}})\).
\item
Prove that
\({ \operatorname{Gal}} (E/B) \cap{ \operatorname{Gal}} (E/C) = \left\{{1}\right\}\).
\item
If \(B\cap C = {\mathbb{Q}}\), show that
\({ \operatorname{Gal}} (E/B) { \operatorname{Gal}} (E/C) = { \operatorname{Gal}} (E/{\mathbb{Q}})\).
\item
Under the hypothesis of (c), show that
\({ \operatorname{Gal}} (E/{\mathbb{Q}}) \cong { \operatorname{Gal}} (E/B) \times { \operatorname{Gal}} (E/C)\).
\item
Use (d) to describe
\({ \operatorname{Gal}} ({\mathbb{Q}}[\alpha]/{\mathbb{Q}})\) where
\(\alpha = \sqrt 2 + \sqrt 3\).
\end{enumerate}
\hypertarget{fall-2012-3-work}{%
\subsubsection{\texorpdfstring{Fall 2012 \#3
\(\work\)}{Fall 2012 \#3 \textbackslash work}}\label{fall-2012-3-work}}
Let \(f(x) \in {\mathbb{Q}}[x]\) be an irreducible polynomial of degree
5. Assume that \(f\) has all but two roots in \({\mathbb{R}}\). Compute
the Galois group of \(f(x)\) over \({\mathbb{Q}}\) and justify your
answer.
\hypertarget{pth-roots-and-xpk-x}{%
\subsection{\texorpdfstring{\(p\)th Roots and
\(x^{p^k}-x\)}{pth Roots and x\^{}\{p\^{}k\}-x}}\label{pth-roots-and-xpk-x}}
\hypertarget{spring-2021-7-done}{%
\subsubsection{\texorpdfstring{Spring 2021 \#7
\(\done\)}{Spring 2021 \#7 \textbackslash done}}\label{spring-2021-7-done}}
Let \(p\) be a prime number and let \(F\) be a field of characteristic
\(p\). Show that if \(a\in F\) is not a \(p\)th power in \(F\), then
\(x^p-a \in F[x]\) is irreducible.
\begin{strategy}
\envlist
\begin{itemize}
\tightlist
\item
Contradiction: go to splitting field, apply Freshman's dream.
\item
  Use that this polynomial is totally ramified: in a splitting field
  its only monic linear factor is \((x-a^{1/p})\), occurring with
  multiplicity \(p\).
\end{itemize}
\end{strategy}
\begin{solution}[Likely the 'right' solution]
\envlist
\begin{itemize}
\tightlist
\item
Suppose \(a\) is not a \(p\)th power in \(F\), then
\(f(x) \coloneqq x^p-a\) has no roots in \(F\).
\item
Toward a contradiction, suppose \(f\) is reducible in \(F[x]\).
\item
In \(\operatorname{SF}(f)\), since \(\operatorname{ch}F = p\) we have
\(f(x) = (x-\zeta)^p\) for some \(\zeta = a^{1\over p}\).
\begin{itemize}
\tightlist
\item
  So if \(f\) is reducible in \(F[x]\), we have
  \(f(x) = p_1(x) p_2(x)\) where \(p_1(x) = (x-\zeta)^q\in F[x]\) for
  some \(1\leq q < p\), since powers of \((x-\zeta)\) are the only
  monic factors of \(f\).
\item
The claim is that \(\zeta\in F\) as well, which is a contradiction
since \(\zeta\) is a \(p\)th root of \(a\).
\end{itemize}
\item
  The constant term of \(p_1(x) = (x-\zeta)^q\) is
  \((-1)^q \zeta^q \in F\), so \(\zeta^q\in F\).
\item
  We know \(a = \zeta^p\in F\), and thus \(\zeta^{d} = \zeta\in F\) for
  \(d \coloneqq\gcd(p, q) = 1\), since \(1\leq q < p\) and \(p\) is
  prime. \(\contradiction\)
\begin{itemize}
\tightlist
\item
  Why this is true: write \(d = \gcd(p, q) = 1\) in \({\mathbb{Z}}\) to
  obtain \(d = tp + sq\) for some \(t, s\).
\item
  Then
  \(\zeta^d = \zeta^{tp+sq} = (\zeta^p)^t \cdot (\zeta^q)^s \in F\),
  since \(\zeta^p = a\) and \(\zeta^q\) both lie in \(F\).
\end{itemize}
\end{itemize}
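A standard example to keep in mind (purely illustrative, not needed for
the proof): take \(F = {\mathbb{F}}_p(t)\) and \(a = t\), which is not
a \(p\)th power in \(F\) by a degree count. Then in a splitting field

\begin{align*}
x^p - t = \qty{x - t^{1/p}}^p
,\end{align*}

so the only root is \(t^{1/p}\), with multiplicity \(p\), and the
statement of the problem says exactly that \(x^p - t\) is irreducible
over \({\mathbb{F}}_p(t)\).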
\end{solution}
\begin{strategy}[for an alternative solution]
\envlist
\begin{itemize}
\tightlist
\item
By contrapositive, show that
\(f(x) \coloneqq x^p-a \in {\mathbb{F}}[x]\) reducible \(\implies a\)
is a \(p\)th power in \({\mathbb{F}}\).
\item
  Eventually show \(a^\ell = b^p\) for some \(1 \leq \ell < p\) and
  some \(b\in {\mathbb{F}}\); then \(\gcd(\ell, p) = 1\) and Bezout's
  identity allow writing \(a\) itself as a \(p\)th power of an element
  of \({\mathbb{F}}\).
\item
Use the fact that the constant term of any \(g\in {\mathbb{F}}[x]\) is
actually in \({\mathbb{F}}\).
\end{itemize}
\end{strategy}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Reducible: \(f\in {\mathbb{F}}[x]\) is reducible iff there exists
\(g, h\in {\mathbb{F}}[x]\) nonconstant with \(f = g h\).
\begin{itemize}
\tightlist
\item
Importantly, this factorization needs to happen in
\({\mathbb{F}}[x]\), since we can \emph{always} find such
factorizations in the splitting field \(\operatorname{SF}(f)[x]\).
\end{itemize}
\item
Bezout's identity: \(\gcd(p, q) = d \implies\) there exist
\(s,t\in {\mathbb{Z}}\) such that
\begin{align*}
sp + tq = d
.\end{align*}
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{itemize}
\item
WTS: \(f(x) \coloneqq x^p - a\in {\mathbb{F}}[x]\) reducible
\(\implies f\) has a root in the \emph{base field} \({\mathbb{F}}\).
\item
Write \(f(x) = g(x) h(x)\) and factor
\(f(x) = \prod_{i=1}^p (x- r_i) \in \operatorname{SF}(f)[x]\) where
the \(r_i\) are not necessarily distinct roots.
\item
WLOG, \(g(x) = \prod_{i=1}^\ell (x-r_i)\) for some
\(1\leq \ell \leq p-1\), i.e.~rearrange the factors so that \(g\) is
the first \(\ell\) of them.
\begin{itemize}
\tightlist
\item
  \(\ell \neq 0, p\) since \(f\) is reducible, making \(g, h\)
  nonconstant; any \(1 \leq \ell \leq p-1\) can occur.
\end{itemize}
\item
  Set \(R_\ell \coloneqq\prod_{i=1}^\ell r_i\), which is, up to the
  sign \((-1)^\ell\), the constant term of \(g\), so
  \(R_\ell \in {\mathbb{F}}\) since \(g\in {\mathbb{F}}[x]\).
\item
Each \(r_i\) is a root of \(f\), so \(r_i^p - a = 0\) for all \(i\),
so \(r_i^p = a\).
\item
Trick: what is the \(p\)th power of \(R_\ell\)?
\begin{align*}
R_\ell^p
&\coloneqq\qty{ \prod_{i=1}^\ell r_i}^p \\
&= \prod_{i=1}^\ell r_i^p \\
&= \prod_{i=1}^\ell a \\
&= a^\ell
,\end{align*}
so \(R_\ell^p = a^\ell\).
\item
Use Bezout: \(\gcd(\ell, p) = 1\) since \(p\) is prime, so write
\(tp + s\ell = 1\) for some \(t,s\in {\mathbb{Z}}\)
\item
Use this to build a root of \(f\) that's in \({\mathbb{F}}\): write
\begin{align*}
a &= a^1\\
&= a^{tp + s\ell} \\
&= a^{tp} a^{s\ell} \\
&=a^{tp} (a^\ell)^s\\
&= a^{tp} (R_\ell^p)^s \\
&= (a^t R_\ell^s)^p \\
&\coloneqq\beta^p
,\end{align*}
so \(a = \beta^p\).
\begin{itemize}
\tightlist
\item
Check \(\beta\in {\mathbb{F}}\): use that
\(R_\ell \in {\mathbb{F}}\) since it was a constant term of a
polynomial in \({\mathbb{F}}[x]\), \(a\in {\mathbb{F}}\) by
assumption, and fields are closed under taking powers and products.
\end{itemize}
\end{itemize}
\end{solution}
\hypertarget{fall-2019-4-done}{%
\subsubsection{\texorpdfstring{Fall 2019 \#4
\(\done\)}{Fall 2019 \#4 \textbackslash done}}\label{fall-2019-4-done}}
Let \(F\) be a finite field with \(q\) elements. Let \(n\) be a positive
integer relatively prime to \(q\) and let \(\omega\) be a primitive
\(n\)th root of unity in an extension field of \(F\). Let
\(E = F [\omega]\) and let \(k = [E : F]\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that \(n\) divides \(q^{k}-1\).
\item
Let \(m\) be the order of \(q\) in
\({\mathbb{Z}}/n{\mathbb{Z}}^{\times}\). Prove that \(m\) divides
\(k\).
\item
Prove that \(m = k\).
\end{enumerate}
\todo[inline]{Revisit, tricky!}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
\({\mathbb{F}}^{\times}\) is always cyclic for \({\mathbb{F}}\) a
field.
\item
Lagrange: \(H\leq G \implies \#H \divides \# G\).
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[of a]
\envlist
\begin{itemize}
\item
Since \({\left\lvert {F} \right\rvert} = q\) and \([E:F] = k\), we
have \({\left\lvert {E} \right\rvert} = q^k\) and
\({\left\lvert {E^{\times}} \right\rvert} = q^k-1\).
\item
  Noting that \(\zeta \coloneqq\omega \in E^{\times}\) has order
  \(n\), we must have
  \(n = o(\zeta) \divides {\left\lvert {E^{\times}} \right\rvert} = q^k-1\)
  by Lagrange's theorem.
\end{itemize}
\end{proof}
\begin{proof}[of b]
\envlist
\begin{itemize}
\tightlist
\item
Rephrasing (a), we have
\begin{align*}
n \divides q^k-1
&\iff q^k-1 \cong 0 \operatorname{mod}n \\
&\iff q^k \cong 1 \operatorname{mod}n \\
&\iff m \coloneqq o(q) \divides k
.\end{align*}
\end{itemize}
\end{proof}
\begin{proof}[of c]
\envlist
\begin{itemize}
\item
  Since \(m\divides k\), write \(k = \ell m\). Because \(E/F\) is an
  extension of finite fields of degree \(k\), (\textbf{claim}) there is
  an intermediate subfield \(M\) such that
  \begin{align*}
  F \leq M \leq E, \quad k = [E:F] = [E:M]\,[M:F] = \ell m
  ,\end{align*}
  with \([M:F] = m\), so \(M\) is a degree \(m\) extension of \(F\).
\item
  Now consider \(M^{\times}\), which has order \(q^m - 1\).
\item
  Since \(q^m \cong 1 \operatorname{mod}n\) by the definition of \(m\),
  \(n\) divides \(q^m - 1 = {\left\lvert {M^{\times}} \right\rvert}\),
  and \(M^{\times}\) is cyclic, so it contains a cyclic subgroup \(H\)
  of order \(n\).
\item
  Every \(x\in H\) satisfies \(x^n-1 = 0\), and since \(x^n - 1\) has
  at most \(n\) roots in a field, \(H\) consists of exactly the roots
  of \(x^n-1\) in \(M\).
\item
  In particular, all solutions of \(x^n - 1 = 0\) in \(E\) already lie
  in \(H \subseteq M\), since \(H\) accounts for \(n\) of the at most
  \(n\) roots in \(E\).
\item
  But \(\zeta\) is one such solution, so
  \(\zeta \in H \subset M^{\times}\subset M\).
\item
  Since \(E = F[\zeta]\) is the smallest field extension of \(F\)
  containing \(\zeta\), and \(M\) contains both \(F\) and \(\zeta\), we
  must have \(M = E\), so \(\ell = 1\) and \(k = m\).
\end{itemize}
\end{proof}
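A quick numerical illustration (with \(q = 2\) and \(n = 5\); not part
of the required proof): the order of \(2\) in
\(({\mathbb{Z}}/5{\mathbb{Z}})^{\times}\) is computed by

\begin{align*}
2^1 = 2,\quad 2^2 = 4,\quad 2^3 = 3,\quad 2^4 = 1 \operatorname{mod}5
\implies m = 4
,\end{align*}

and correspondingly
\(k = [{\mathbb{F}}_2[\omega] : {\mathbb{F}}_2] = 4\), since the
minimal polynomial of a primitive \(5\)th root of unity over
\({\mathbb{F}}_2\) is \(x^4+x^3+x^2+x+1\), which is irreducible over
\({\mathbb{F}}_2\).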
\end{solution}
\hypertarget{spring-2019-2-done}{%
\subsubsection{\texorpdfstring{Spring 2019 \#2
\(\done\)}{Spring 2019 \#2 \textbackslash done}}\label{spring-2019-2-done}}
Let \(F = {\mathbb{F}}_p\) , where \(p\) is a prime number.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that if \(\pi(x) \in F[x]\) is irreducible of degree \(d\), then
\(\pi(x)\) divides \(x^{p^d} - x\).
\item
Show that if \(\pi(x) \in F[x]\) is an irreducible polynomial that
divides \(x^{p^n} - x\), then \(\deg \pi(x)\) divides \(n\).
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Go to a field extension.
\begin{itemize}
\tightlist
\item
Orders of multiplicative groups for finite fields are known.
\end{itemize}
\item
\({\mathbb{GF}}(p^n)\) is the splitting field of
\(x^{p^n} - x \in {\mathbb{F}}_p[x]\).
\item
\(x^{p^d} - x \divides x^{p^n} - x \iff d \divides n\)
\item
\({\mathbb{GF}}(p^d) \leq {\mathbb{GF}}(p^n) \iff d\divides n\)
\item
\(x^{p^n} - x = \prod f_i(x)\) over all irreducible monic \(f_i\) of
degree \(d\) dividing \(n\).
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[of a]
We can consider the quotient
\(K = \displaystyle{\frac{{\mathbb{F}}_p[x]}{\left\langle{\pi(x)}\right\rangle}}\),
which since \(\pi(x)\) is irreducible is an extension of
\({\mathbb{F}}_p\) of degree \(d\) and thus a field of size \(p^d\) with
a natural quotient map of rings \(\rho: {\mathbb{F}}_p[x] \to K\).
Since \(K^{\times}\) is a group of size \(p^d-1\), we know that for any
\(y \in K^{\times}\), we have by Lagrange's theorem that the order of
\(y\) divides \(p^d-1\), so \(y^{p^d - 1} = 1\) and thus
\(y^{p^d} = y\). Since \(0^{p^d} = 0\) as well, every element of \(K\)
is a root of \(q(x) = x^{p^d}-x\).
Since \(\rho\) is a ring morphism, we have
\begin{align*}
\rho(q(x)) = \rho(x^{p^d} - x) = \rho(x)^{p^d} - \rho(x) = 0 \in K
&\iff q(x) \in \ker \rho \\
&\iff q(x) \in \left\langle{\pi(x)}\right\rangle \\
&\iff \pi(x) \divides q(x) = x^{p^d}-x
,\end{align*}
where we've used that ``to contain is to divide'' in the last step.
\end{proof}
\begin{proof}[of b]
\begin{claim}
\(\pi(x)\) divides \(x^{p^n}-x \iff \deg \pi\) divides \(n\).
\end{claim}
\begin{proof}[of claim, $\implies$]
Let \(L \cong {\mathbb{GF}}(p^n)\) be the splitting field of
\(\phi_n(x) \coloneqq x^{p^n}-x\); then since \(\pi \divides \phi_n\) by
assumption, \(\pi\) splits in \(L\). Let \(\alpha \in L\) be any root of
\(\pi\); then there is a tower of extensions
\({\mathbb{F}}_p \leq {\mathbb{F}}_p(\alpha) \leq L\), and so
\begin{align*}
n &= [L: {\mathbb{F}}_p] \\
&= [L: {\mathbb{F}}_p(\alpha)]~[{\mathbb{F}}_p(\alpha): {\mathbb{F}}_p] \\
&= \ell d
,\end{align*}
for some \(\ell \in {\mathbb{Z}}^{\geq 1}\), so \(d\) divides \(n\).
\end{proof}
\begin{proof}[of claim, $\impliedby$]
\(\impliedby\): If \(d\divides n\), use the fact (claim) that
\(x^{p^n} - x = \prod f_i(x)\) over all irreducible monic \(f_i\) whose
degrees divide \(n\). Since \(\deg \pi = d\) divides \(n\), we get
\(\pi = f_i\) for some \(i\), so \(\pi\) divides \(x^{p^n} - x\).
\end{proof}
\end{proof}
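As a small sanity check of the factorization used above (with \(p = 2\)
and \(n = 2\)):

\begin{align*}
x^{2^2} - x = x^4 + x = x(x+1)(x^2+x+1) \in {\mathbb{F}}_2[x]
,\end{align*}

which is exactly the product of the monic irreducibles over
\({\mathbb{F}}_2\) of degrees dividing \(2\): the linear polynomials
\(x\), \(x+1\) and the unique irreducible quadratic \(x^2+x+1\).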
\end{solution}
\hypertarget{star-fall-2016-5-work}{%
\subsubsection{\texorpdfstring{\(\star\) Fall 2016 \#5
\(\work\)}{\textbackslash star Fall 2016 \#5 \textbackslash work}}\label{star-fall-2016-5-work}}
How many monic irreducible polynomials over \({\mathbb{F}}_p\) of prime
degree \(\ell\) are there? Justify your answer.
\hypertarget{star-fall-2013-7-work}{%
\subsubsection{\texorpdfstring{\(\star\) Fall 2013 \#7
\(\work\)}{\textbackslash star Fall 2013 \#7 \textbackslash work}}\label{star-fall-2013-7-work}}
Let \(F = {\mathbb{F}}_2\) and let
\(\mkern 1.5mu\overline{\mkern-1.5muF\mkern-1.5mu}\mkern 1.5mu\) denote
its algebraic closure.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that
\(\mkern 1.5mu\overline{\mkern-1.5muF\mkern-1.5mu}\mkern 1.5mu\) is
not a finite extension of \(F\).
\item
Suppose that
\(\alpha \in \mkern 1.5mu\overline{\mkern-1.5muF\mkern-1.5mu}\mkern 1.5mu\)
satisfies \(\alpha^{17} = 1\) and \(\alpha\neq 1\). Show that
\(F(\alpha)/F\) has degree 8.
\end{enumerate}
\hypertarget{general-field-extensions}{%
\subsection{General Field Extensions}\label{general-field-extensions}}
\hypertarget{spring-2020-3-work}{%
\subsubsection{\texorpdfstring{Spring 2020 \#3
\(\work\)}{Spring 2020 \#3 \textbackslash work}}\label{spring-2020-3-work}}
Let \(E\) be an extension field of \(F\) and \(\alpha\in E\) be
algebraic of odd degree over \(F\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(F(\alpha) = F(\alpha^2)\).
\item
Prove that \(\alpha^{2020}\) is algebraic of odd degree over \(F\).
\end{enumerate}
\hypertarget{spring-2012-1-work}{%
\subsubsection{\texorpdfstring{Spring 2012 \#1
\(\work\)}{Spring 2012 \#1 \textbackslash work}}\label{spring-2012-1-work}}
Suppose that \(F\subset E\) are fields such that \(E/F\) is Galois and
\({\left\lvert {{ \operatorname{Gal}} (E/F)} \right\rvert} = 14\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that there exists a unique intermediate field \(K\) with
\(F\subset K \subset E\) such that \([K: F] = 2\).
\item
Assume that there are at least two distinct intermediate subfields
\(F \subset L_1, L_2 \subset E\) with \([L_i: F]= 7\). Prove that
\({ \operatorname{Gal}} (E/F)\) is nonabelian.
\end{enumerate}
\hypertarget{spring-2019-8-done}{%
\subsubsection{\texorpdfstring{Spring 2019 \#8
\(\done\)}{Spring 2019 \#8 \textbackslash done}}\label{spring-2019-8-done}}
Let \(\zeta = e^{2\pi i/8}\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
What is the degree of \({\mathbb{Q}}(\zeta)/{\mathbb{Q}}\)?
\item
How many quadratic subfields of \({\mathbb{Q}}(\zeta)\) are there?
\item
What is the degree of \({\mathbb{Q}}(\zeta, \sqrt[4] 2)\) over
\({\mathbb{Q}}\)?
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
\(\zeta_n \coloneqq e^{2\pi i \over n}\), and \(\zeta_n^k\) is a
primitive \(n\)th root of unity \(\iff \gcd(n, k) = 1\)
\begin{itemize}
\tightlist
\item
In general, \(\zeta_n^k\) is a primitive \({n \over \gcd(n, k)}\)th
root of unity.
\end{itemize}
\item
\(\deg \Phi_n(x) = \phi(n)\)
\item
\(\phi(p^k) = p^k - p^{k-1} = p^{k-1}(p-1)\)
\begin{itemize}
\tightlist
\item
Proof: for a nontrivial gcd, the possibilities are
\begin{align*}
p, 2p, 3p, 4p, \cdots, p^{k-2}p, p^{k-1}p
.\end{align*}
\end{itemize}
\item
\({ \mathsf{Gal}} ({\mathbb{Q}}(\zeta)/{\mathbb{Q}}) \cong {\mathbb{Z}}/(n)^{\times}\)
\end{itemize}
\end{concept}
\begin{solution}
\envlist
Let \(K = {\mathbb{Q}}(\zeta)\).
\begin{proof}[of a]
\envlist
\begin{itemize}
\tightlist
\item
\(\zeta \coloneqq e^{2\pi i / 8}\) is a primitive \(8\)th root of
unity
\item
The minimal polynomial of an \(n\)th root of unity is the \(n\)th
cyclotomic polynomial \(\Phi_n\)
\item
The degree of the field extension is the degree of \(\Phi_8\), which
is
\begin{align*}
\phi(8) = \phi(2^3) = 2^{3-1} \cdot (2-1) = 4
.\end{align*}
\item
So \([{\mathbb{Q}}(\zeta): {\mathbb{Q}}] = 4\).
\end{itemize}
\end{proof}
\begin{proof}[of b]
\envlist
\begin{itemize}
\tightlist
\item
  \({ \mathsf{Gal}} ({\mathbb{Q}}(\zeta)/{\mathbb{Q}}) \cong {\mathbb{Z}}/(8)^{\times}= \left\{{[1], [3], [5], [7]}\right\} \cong {\mathbb{Z}}/(2) \times {\mathbb{Z}}/(2)\)
  by general theory, since every non-identity element has order 2.
\item
  \({\mathbb{Z}}/(2) \times {\mathbb{Z}}/(2)\) has exactly three
  subgroups of index 2.
\item
  Thus there are exactly \textbf{three} intermediate fields of degree 2
  (quadratic extensions), by the Galois correspondence.
\end{itemize}
\end{proof}
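For concreteness (a standard identification, not required by the
problem), the three quadratic subfields can be written down using
\(\zeta = {\sqrt 2 \over 2}(1+i)\):

\begin{align*}
\zeta^2 = i, \qquad
\zeta + \zeta^{-1} = \sqrt 2, \qquad
\zeta - \zeta^{-1} = i\sqrt 2
,\end{align*}

giving the quadratic subfields \({\mathbb{Q}}(i)\),
\({\mathbb{Q}}(\sqrt 2)\), and \({\mathbb{Q}}(\sqrt{-2})\).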
\begin{proof}[of c]
\envlist
\begin{itemize}
\item
Let \(L = {\mathbb{Q}}(\zeta, \sqrt[4] 2)\).
\item
Note \({\mathbb{Q}}(\zeta) = {\mathbb{Q}}(i, \sqrt 2)\)
\begin{itemize}
\tightlist
\item
\({\mathbb{Q}}(i, \sqrt{2})\subseteq {\mathbb{Q}}(\zeta)\)
\begin{itemize}
\tightlist
\item
\(\zeta_8^2 = i\), and \(\zeta_8 = \sqrt{2}^{-1}+ i\sqrt{2}^{-1}\)
so \(\zeta_8 + \zeta_8 ^{-1}= 2/\sqrt{2} = \sqrt{2}\).
\end{itemize}
\item
\({\mathbb{Q}}(\zeta) \subseteq {\mathbb{Q}}(i, \sqrt{2})\):
\begin{itemize}
\tightlist
\item
  \(\zeta = e^{2\pi i / 8} = \cos(\pi/4) + i\sin(\pi/4) = {\sqrt 2 \over 2}\qty{1+i}\).
\end{itemize}
\end{itemize}
\item
Thus
\(L = {\mathbb{Q}}(i, \sqrt{2})(\sqrt[4]{2}) = {\mathbb{Q}}(i, \sqrt 2, \sqrt[4] 2) = {\mathbb{Q}}(i, \sqrt[4]{2})\).
\begin{itemize}
\tightlist
\item
Uses the fact that
\({\mathbb{Q}}(\sqrt 2) \subseteq {\mathbb{Q}}(\sqrt[4] 2)\) since
\(\sqrt[4]{2}^2 = \sqrt{2}\)
\end{itemize}
\item
Conclude
\begin{align*}
[L: {\mathbb{Q}}] = [L: {\mathbb{Q}}(\sqrt[4] 2)] ~[{\mathbb{Q}}(\sqrt[4] 2): {\mathbb{Q}}] = 2 \cdot 4 = 8
\end{align*}
using the fact that the minimal polynomial of \(i\) over any subfield
of \({\mathbb{R}}\) is always \(x^2 + 1\), so
\(\min_{{\mathbb{Q}}(\sqrt[4] 2)}(i) = x^2 + 1\) which is degree 2.
\end{itemize}
\end{proof}
\end{solution}
\hypertarget{fall-2017-3-work}{%
\subsubsection{\texorpdfstring{Fall 2017 \#3
\(\work\)}{Fall 2017 \#3 \textbackslash work}}\label{fall-2017-3-work}}
Let \(F\) be a field. Let \(f(x)\) be an irreducible polynomial in
\(F[x]\) of degree \(n\) and let \(g(x)\) be any polynomial in \(F[x]\).
Let \(p(x)\) be an irreducible factor (of degree \(m\)) of the
polynomial \(f(g(x))\).
Prove that \(n\) divides \(m\). Use this to prove that if \(r\) is an
integer which is not a perfect square, and \(n\) is a positive integer
then every irreducible factor of \(x^{2n} - r\) over \({\mathbb{Q}}[x]\)
has even degree.
\hypertarget{spring-2015-2-work}{%
\subsubsection{\texorpdfstring{Spring 2015 \#2
\(\work\)}{Spring 2015 \#2 \textbackslash work}}\label{spring-2015-2-work}}
Let \({\mathbb{F}}\) be a finite field.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Give (with proof) the decomposition of the additive group
\(({\mathbb{F}}, +)\) into a direct sum of cyclic groups.
\item
The \emph{exponent} of a finite group is the least common multiple of
the orders of its elements. Prove that a finite abelian group has an
element of order equal to its exponent.
\item
Prove that the multiplicative group \(({\mathbb{F}}^{\times}, \cdot)\)
is cyclic.
\end{enumerate}
\hypertarget{spring-2014-3-work}{%
\subsubsection{\texorpdfstring{Spring 2014 \#3
\(\work\)}{Spring 2014 \#3 \textbackslash work}}\label{spring-2014-3-work}}
Let \(F\subset C\) be a field extension with \(C\) algebraically closed.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that the intermediate field \(C_{\text{alg}} \subset C\)
consisting of elements algebraic over \(F\) is algebraically closed.
\item
Prove that if \(F\to E\) is an algebraic extension, there exists a
homomorphism \(E\to C\) that is the identity on \(F\).
\end{enumerate}
\hypertarget{modules}{%
\section{Modules}\label{modules}}
\hypertarget{general-questions}{%
\subsection{General Questions}\label{general-questions}}
\hypertarget{spring-2017-3-work}{%
\subsection{\texorpdfstring{Spring 2017 \#3
\(\work\)}{Spring 2017 \#3 \textbackslash work}}\label{spring-2017-3-work}}
Let \(R\) be a commutative ring with 1. Suppose that \(M\) is a free
\(R{\hbox{-}}\)module with a finite basis \(X\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Let \(I {~\trianglelefteq~}R\) be a proper ideal. Prove that \(M/IM\)
is a free \(R/I{\hbox{-}}\)module with basis \(X'\), where \(X'\) is
the image of \(X\) under the canonical map \(M\to M/IM\).
\item
Prove that any two bases of \(M\) have the same number of elements.
You may assume that the result is true when \(R\) is a field.
\end{enumerate}
\hypertarget{spring-2020-5-done}{%
\subsection{\texorpdfstring{Spring 2020 \#5
\(\done\)}{Spring 2020 \#5 \textbackslash done}}\label{spring-2020-5-done}}
Let \(R\) be a ring and \(f: M\to N\) and \(g: N\to M\) be
\(R{\hbox{-}}\)module homomorphisms such that
\(g\circ f = \operatorname{id}_M\). Show that
\(N \cong \operatorname{im}f \oplus \ker g\).
\begin{solution}
\envlist
\begin{itemize}
\tightlist
\item
We have the following situation:
\end{itemize}
\begin{center}
\begin{tikzcd}
M &&& N
\arrow["f", from=1-1, to=1-4]
\arrow["g"', curve={height=24pt}, dashed, from=1-4, to=1-1]
\end{tikzcd}
\end{center}
\begin{quote}
\href{https://q.uiver.app/?q=WzAsMixbMCwwLCJNIl0sWzMsMCwiTiJdLFswLDEsImYiXSxbMSwwLCJnIiwyLHsiY3VydmUiOjQsInN0eWxlIjp7ImJvZHkiOnsibmFtZSI6ImRhc2hlZCJ9fX1dXQ==}{Link
to Diagram}
\end{quote}
\begin{itemize}
\tightlist
\item
  Claim: \(N = \operatorname{im}f + \ker g\); the containment
  \(\supseteq\) is clear since both are submodules of \(N\), so it
  suffices to show \(N \subseteq \operatorname{im}f + \ker g\).
\begin{itemize}
\tightlist
\item
For \(n\in N\), write
\begin{align*}
n = n + (f\circ g)(n) - (f\circ g)(n) = \qty{n - (f\circ g)(n) } + (f\circ g)(n)
.\end{align*}
\item
The first term is in \(\ker g\):
\begin{align*}
g \qty{ n - (f\circ g)(n) }
&= g(n) - (g\circ f \circ g)(n)\\
  &= g(n) - (\operatorname{id}_M \circ g)(n)\\
&= g(n) - g(n) \\
&= 0
.\end{align*}
\item
The second term is clearly in \(\operatorname{im}f\).
\end{itemize}
\item
Claim: the sum is direct.
\begin{itemize}
\tightlist
\item
Suppose \(n\in \ker(g) \cap\operatorname{im}(f)\), so \(g(n) = 0\)
and \(n=f(m)\) for some \(m\in M\). Then
\begin{align*}
0 = g(n) = g(f(m)) = (g\circ f)(m)
= \operatorname{id}_M(m) = m
,\end{align*}
so \(m=0\) and since \(f\) is a morphism in \(R{\hbox{-}}\)modules,
\(n\coloneqq f(m) = 0\).
\end{itemize}
\end{itemize}
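A minimal example illustrating the statement (with choices of
\(R, M, N, f, g\) that are mine, not the problem's): take
\(R = {\mathbb{Z}}\), \(M = {\mathbb{Z}}\), \(N = {\mathbb{Z}}^2\), and

\begin{align*}
f: M &\to N, \quad m \mapsto (m, 0), \\
g: N &\to M, \quad (a, b) \mapsto a
,\end{align*}

so that \(g\circ f = \operatorname{id}_M\),
\(\operatorname{im}f = {\mathbb{Z}}\times\left\{{0}\right\}\),
\(\ker g = \left\{{0}\right\}\times {\mathbb{Z}}\), and indeed
\(N = \operatorname{im}f \oplus \ker g\).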
\end{solution}
\hypertarget{fall-2018-6-done}{%
\subsubsection{\texorpdfstring{Fall 2018 \#6
\(\done\)}{Fall 2018 \#6 \textbackslash done}}\label{fall-2018-6-done}}
Let \(R\) be a commutative ring, and let \(M\) be an
\(R{\hbox{-}}\)module. An \(R{\hbox{-}}\)submodule \(N\) of \(M\) is
maximal if there is no \(R{\hbox{-}}\)module \(P\) with
\(N \subsetneq P \subsetneq M\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that an \(R{\hbox{-}}\)submodule \(N\) of \(M\) is maximal
\(\iff M /N\) is a simple \(R{\hbox{-}}\)module: i.e., \(M /N\) is
nonzero and has no proper, nonzero \(R{\hbox{-}}\)submodules.
\item
Let \(M\) be a \({\mathbb{Z}}{\hbox{-}}\)module. Show that a
\({\mathbb{Z}}{\hbox{-}}\)submodule \(N\) of \(M\) is maximal
\(\iff \#M /N\) is a prime number.
\item
Let \(M\) be the \({\mathbb{Z}}{\hbox{-}}\)module of all roots of
unity in \({\mathbb{C}}\) under multiplication. Show that there is no
maximal \({\mathbb{Z}}{\hbox{-}}\)submodule of \(M\).
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Todo
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[of a]
By the correspondence theorem, submodules of \(M/N\) biject with
submodules \(A\) of \(M\) containing \(N\).
So
\begin{itemize}
\item
  \(N\) is maximal:
\item
\(\iff\) no such (proper, nontrivial) submodule \(A\) exists
\item
\(\iff\) there are no (proper, nontrivial) submodules of \(M/N\)
\item
\(\iff M/N\) is simple.
\end{itemize}
\end{proof}
\begin{proof}[of b]
Identify \({\mathbb{Z}}{\hbox{-}}\)modules with abelian groups; then by
(a), \(N\) is maximal \(\iff\) \(M/N\) is simple \(\iff\) \(M/N\) is
nonzero and has no nontrivial proper subgroups.\\
If \({\left\lvert {M/N} \right\rvert}\) is composite, choose a prime
\(p\) dividing it with \(p < {\left\lvert {M/N} \right\rvert}\); by
Cauchy's theorem there is an element (and thus a subgroup) of order
\(p\), which is nontrivial and proper, so \(M/N\) is not simple.
A simple abelian group also can not be infinite: it is generated by any
one of its nonzero elements, hence cyclic, and \({\mathbb{Z}}\) has
proper nonzero subgroups. So if \(N\) is maximal, then
\({\left\lvert {M/N} \right\rvert}\) is finite and not composite,
i.e.~prime.\\
Conversely, if \({\left\lvert {M/N} \right\rvert} = p\) is prime, then
by Lagrange's theorem \(M/N\) has no nontrivial proper subgroups, so
\(M/N\) is simple and \(N\) is maximal.
\end{proof}
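For example (a quick check of (b) with \(M = {\mathbb{Z}}\)):

\begin{align*}
\#({\mathbb{Z}}/2{\mathbb{Z}}) = 2 \text{ is prime}
&\implies 2{\mathbb{Z}}\text{ is a maximal submodule of } {\mathbb{Z}}, \\
\#({\mathbb{Z}}/4{\mathbb{Z}}) = 4 \text{ is composite}
&\implies 4{\mathbb{Z}}\text{ is not maximal, witnessed by }
4{\mathbb{Z}}\subsetneq 2{\mathbb{Z}}\subsetneq {\mathbb{Z}}
.\end{align*}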
\begin{proof}[of c]
\envlist
\begin{itemize}
\item
Let
\(G = \left\{{x \in {\mathbb{C}}{~\mathrel{\Big|}~}x^n=1 \text{ for some }n\in {\mathbb{N}}}\right\}\),
and suppose \(H < G\) is a proper submodule.
\item
Since \(H\neq G\), there is some \(p\) and some \(k\) such that
\(\zeta_{p^k}\not\in H\).
\begin{itemize}
\tightlist
\item
Otherwise, if \(H\) contains every \(\zeta_{p^k}\) it contains every
\(\zeta_n\)
\end{itemize}
\end{itemize}
Then there must be a prime \(p\) and a constant \(m\) such that
\(\zeta_{p^k} \not \in H\) for all \(k > m\): this uses the fact that
if \(\zeta_{p^k} \in H\) then \(\zeta_{p^\ell} \in H\) for all
\(\ell \leq k\) (as powers of \(\zeta_{p^k}\)), together with the
observation that if \(\zeta_{p^k} \in H\) for all \(p\) and all \(k\)
then \(H = G\).

The cosets \(\zeta_{p^k} H\) for \(k > m\) are pairwise distinct: if
\(\zeta_{p^k} H = \zeta_{p^j} H\) with \(k > j > m\), then
\(\zeta_{p^k} \zeta_{p^j}^{-1} \in H\) is a primitive \(p^k\)th root of
unity, and its powers would put \(\zeta_{p^k}\) in \(H\), a
contradiction. So \([G: H] = {\left\lvert {G/H} \right\rvert}\) is
infinite, in particular not a prime, and thus by (b) \(H\) can not be
maximal.
\end{proof}
\end{solution}
\hypertarget{fall-2019-final-2-work}{%
\subsubsection{\texorpdfstring{Fall 2019 Final \#2
\(\work\)}{Fall 2019 Final \#2 \textbackslash work}}\label{fall-2019-final-2-work}}
Consider the \({\mathbb{Z}}{\hbox{-}}\)submodule \(N\) of
\({\mathbb{Z}}^3\) spanned by
\begin{align*}
f_1 &= [-1, 0, 1], \\
f_2 &= [2,-3,1], \\
f_3 &= [0, 3, 1], \\
f_4 &= [3,1,5]
.\end{align*}
Find a basis for \(N\) and describe \({\mathbb{Z}}^3/N\).
\hypertarget{spring-2018-6-work}{%
\subsubsection{\texorpdfstring{Spring 2018 \#6
\(\work\)}{Spring 2018 \#6 \textbackslash work}}\label{spring-2018-6-work}}
Let
\begin{align*}
M &= \{(w, x, y, z) \in {\mathbb{Z}}^4 {~\mathrel{\Big|}~}w + x + y + z \in 2{\mathbb{Z}}\} \\
N &= \left\{{
(w, x, y, z) \in {\mathbb{Z}}^4 {~\mathrel{\Big|}~}4\divides (w - x),~ 4\divides (x - y),~ 4\divides ( y - z)
}\right\}
.\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(N\) is a \({\mathbb{Z}}{\hbox{-}}\)submodule of \(M\) .
\item
Find vectors \(u_1 , u_2 , u_3 , u_4 \in {\mathbb{Z}}^4\) and integers
\(d_1 , d_2 , d_3 , d_4\) such that
\begin{align*}
\{
u_1 , u_2 , u_3 , u_4
\}
&& \text{is a free basis for }M
\\
\{
d_1 u_1,~ d_2 u_2,~ d_3 u_3,~ d_4 u_4
\}
&& \text{is a free basis for }N
\end{align*}
\item
Use the previous part to describe \(M/N\) as a direct sum of cyclic
\({\mathbb{Z}}{\hbox{-}}\)modules.
\end{enumerate}
\hypertarget{spring-2018-7-work}{%
\subsubsection{\texorpdfstring{Spring 2018 \#7
\(\work\)}{Spring 2018 \#7 \textbackslash work}}\label{spring-2018-7-work}}
Let \(R\) be a PID and \(M\) be an \(R{\hbox{-}}\)module. Let \(p\) be a
prime element of \(R\). The module \(M\) is called
\emph{\(\left\langle{p}\right\rangle{\hbox{-}}\)primary} if for every
\(m \in M\) there exists \(k > 0\) such that \(p^k m = 0\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Suppose M is \(\left\langle{p}\right\rangle{\hbox{-}}\)primary. Show
that if \(m \in M\) and
\(t \in R, ~t \not\in \left\langle{p}\right\rangle\), then there
exists \(a \in R\) such that \(atm = m\).
\item
A submodule \(S\) of \(M\) is said to be \emph{pure} if
\(S \cap r M = rS\) for all \(r \in R\). Show that if \(M\) is
\(\left\langle{p}\right\rangle{\hbox{-}}\)primary, then \(S\) is pure
if and only if \(S \cap p^k M = p^k S\) for all \(k \geq 0\).
\end{enumerate}
\hypertarget{fall-2016-6-work}{%
\subsubsection{\texorpdfstring{Fall 2016 \#6
\(\work\)}{Fall 2016 \#6 \textbackslash work}}\label{fall-2016-6-work}}
Let \(R\) be a ring and \(f: M\to N\) and \(g: N\to M\) be
\(R{\hbox{-}}\)module homomorphisms such that
\(g\circ f = \operatorname{id}_M\). Show that
\(N\cong \operatorname{im}f \oplus \ker g\).
\hypertarget{spring-2016-4-work}{%
\subsubsection{\texorpdfstring{Spring 2016 \#4
\(\work\)}{Spring 2016 \#4 \textbackslash work}}\label{spring-2016-4-work}}
Let \(R\) be a ring with the following commutative diagram of
\(R{\hbox{-}}\)modules, where each row represents a short exact sequence
of \(R{\hbox{-}}\)modules:
\begin{center}
\begin{tikzcd}
0 \ar[r] & A \ar[d, "\alpha"] \ar[r, "f"] & B \ar[d, "\beta"] \ar[r, "g"] & C \ar[r] \ar[d, "\gamma"] & 0 \\
0 \ar[r] & A' \ar[r, "f'"] & B'\ar[r, "g'"] & C' \ar[r] & 0
\end{tikzcd}
\end{center}
Prove that if \(\alpha\) and \(\gamma\) are isomorphisms then \(\beta\)
is an isomorphism.
\hypertarget{spring-2015-8-work}{%
\subsubsection{\texorpdfstring{Spring 2015 \#8
\(\work\)}{Spring 2015 \#8 \textbackslash work}}\label{spring-2015-8-work}}
Let \(R\) be a PID and \(M\) a finitely generated \(R{\hbox{-}}\)module.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that there are \(R{\hbox{-}}\)submodules
\begin{align*}
0 = M_0 \subset M_1 \subset \cdots \subset M_n = M
\end{align*}
such that for all \(0\leq i \leq n-1\), the module \(M_{i+1}/M_i\) is
cyclic.
\item
Is the integer \(n\) in part (a) uniquely determined by \(M\)? Prove
your answer.
\end{enumerate}
\hypertarget{fall-2012-6-work}{%
\subsubsection{\texorpdfstring{Fall 2012 \#6
\(\work\)}{Fall 2012 \#6 \textbackslash work}}\label{fall-2012-6-work}}
Let \(R\) be a ring and \(M\) an \(R{\hbox{-}}\)module. Recall that
\(M\) is \emph{Noetherian} iff any strictly increasing chain of
submodule \(M_1 \subsetneq M_2 \subsetneq \cdots\) is finite. Call a
proper submodule \(M' \subsetneq M\) \emph{intersection-decomposable} if
it can not be written as the intersection of two proper submodules
\(M' = M_1\cap M_2\) with \(M_i \subsetneq M\).
Prove that for every Noetherian module \(M\), any proper submodule
\(N\subsetneq M\) can be written as a finite intersection
\(N = N_1 \cap\cdots \cap N_k\) of intersection-indecomposable modules.
\hypertarget{fall-2019-final-1-work}{%
\subsubsection{\texorpdfstring{Fall 2019 Final \#1
\(\work\)}{Fall 2019 Final \#1 \textbackslash work}}\label{fall-2019-final-1-work}}
Let \(A\) be an abelian group, and show \(A\) is a
\({\mathbb{Z}}{\hbox{-}}\)module in a unique way.
\hypertarget{fall-2020-6-work}{%
\subsubsection{\texorpdfstring{Fall 2020 \#6
\(\work\)}{Fall 2020 \#6 \textbackslash work}}\label{fall-2020-6-work}}
Let \(R\) be a ring with \(1\) and let \(M\) be a left
\(R{\hbox{-}}\)module. If \(I\) is a left ideal of \(R\), define
\begin{align*}
IM \coloneqq\left\{{ \sum_{i=1}^{N < \infty} a_i m_i {~\mathrel{\Big|}~}a_i \in I, m_i \in M, N\in {\mathbb{N}}}\right\}
,\end{align*}
i.e.~the set of finite sums of of elements of the form \(am\) where
\(a\in I, m\in M\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that \(IM \leq M\) is a submodule.
\item
Let \(M, N\) be left \(R{\hbox{-}}\)modules, \(I\) a nilpotent left
ideal of \(R\), and \(f: M\to N\) an \(R{\hbox{-}}\)module morphism.
Prove that if the induced morphism
\(\mkern 1.5mu\overline{\mkern-1.5muf\mkern-1.5mu}\mkern 1.5mu: M/IM \to N/IN\)
is surjective, then \(f\) is surjective.
\end{enumerate}
\hypertarget{torsion-and-the-structure-theorem}{%
\subsection{Torsion and the Structure
Theorem}\label{torsion-and-the-structure-theorem}}
\hypertarget{star-fall-2019-5-done}{%
\subsubsection{\texorpdfstring{\(\star\) Fall 2019 \#5
\(\done\)}{\textbackslash star Fall 2019 \#5 \textbackslash done}}\label{star-fall-2019-5-done}}
Let \(R\) be a ring and \(M\) an \(R{\hbox{-}}\)module.
\begin{quote}
Recall that the set of torsion elements in M is defined by
\begin{align*}
\operatorname{Tor}(M) = \{m \in M {~\mathrel{\Big|}~}\exists r \in R, ~r \neq 0, ~rm = 0\}
.\end{align*}
\end{quote}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that if \(R\) is an integral domain, then
\(\operatorname{Tor}(M )\) is a submodule of \(M\) .
\item
Give an example where \(\operatorname{Tor}(M )\) is not a submodule of
\(M\).
\item
If \(R\) has zero-divisors, prove that every non-zero
\(R{\hbox{-}}\)module has non-zero torsion elements.
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
One-step submodule test.
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[of a]
It suffices to show that
\begin{align*}
r\in R, ~t_1, t_2\in \operatorname{Tor}(M) \implies rt_1 + t_2 \in \operatorname{Tor}(M)
.\end{align*}
We have
\begin{align*}
t_1 \in \operatorname{Tor}(M) &\implies \exists s_1 \neq 0 \text{ such that } s_1 t_1 = 0 \\
t_2 \in \operatorname{Tor}(M) &\implies \exists s_2 \neq 0 \text{ such that } s_2 t_2 = 0
.\end{align*}
Since \(R\) is an integral domain, \(s_1 s_2 \neq 0\). Then
\begin{align*}
s_1 s_2(rt_1 + t_2)
&= s_1 s_2 r t_1 + s_1 s_2t_2 \\
&= s_2 r (s_1 t_1) + s_1 (s_2 t_2) \quad\text{since $R$ is commutative} \\
&= s_2 r(0) + s_1(0) \\
&= 0
.\end{align*}
\end{proof}
\begin{proof}[of b]
Let \(M = R = {\mathbb{Z}}/6{\mathbb{Z}}\), regarded as a module over
itself; note that \(R\) is not an integral domain as a ring.
Then \([3]_6\curvearrowright[2]_6 = [0]_6\) and
\([2]_6\curvearrowright[3]_6 = [0]_6\), but \([2]_6 + [3]_6 = [5]_6\),
where 5 is coprime to 6, and thus
\([n]_6\curvearrowright[5]_6 = [0] \implies [n]_6 = [0]_6\). So
\([5]_6\) is \emph{not} a torsion element.
So the set of torsion elements are not closed under addition, and thus
not a submodule.
\end{proof}
\begin{proof}[of c]
Suppose \(R\) has zero divisors \(a,b \neq 0\) with \(ab = 0\), and let
\(M\) be a nonzero \(R{\hbox{-}}\)module. Pick any nonzero \(m\in M\)
and consider \(bm \in M\).

\begin{itemize}
\tightlist
\item
  If \(bm = 0\), then \(m\) itself is a nonzero torsion element, since
  \(b\neq 0\) kills it.
\item
  If \(bm \neq 0\), then
  \begin{align*}
  a\curvearrowright bm = (ab)\curvearrowright m = 0\curvearrowright m = 0_M
  ,\end{align*}
  so \(bm\) is a nonzero torsion element, since \(a\neq 0\) kills it.
\end{itemize}

In either case \(M\) has a nonzero torsion element.
\end{proof}
\end{solution}
\hypertarget{star-spring-2019-5-done}{%
\subsubsection{\texorpdfstring{\(\star\) Spring 2019 \#5
\(\done\)}{\textbackslash star Spring 2019 \#5 \textbackslash done}}\label{star-spring-2019-5-done}}
Let \(R\) be an integral domain. Recall that if \(M\) is an
\(R{\hbox{-}}\)module, the \emph{rank} of \(M\) is defined to be the
maximum number of \(R{\hbox{-}}\)linearly independent elements of \(M\)
.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that for any \(R{\hbox{-}}\)module \(M\), the rank of
\(\operatorname{Tor}(M )\) is 0.
\item
Prove that the rank of \(M\) is equal to the rank of of
\(M/\operatorname{Tor}(M )\).
\item
  Suppose that \(M\) is a non-principal ideal of \(R\). Prove that
  \(M\) is torsion-free of rank 1 but not free.
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Todo
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[of a]
\envlist
\begin{itemize}
\tightlist
\item
Suppose toward a contradiction \(\operatorname{Tor}(M)\) has rank
\(n \geq 1\).
\item
Then \(\operatorname{Tor}(M)\) has a linearly independent generating
set \(B = \left\{{\mathbf{r}_1, \cdots, \mathbf{r}_n}\right\}\), so in
particular
\begin{align*}
\sum_{i=1}^n s_i \mathbf{r}_i = 0 \implies s_i = 0_R \,\forall i
.\end{align*}
\item
  Let \(\mathbf{r}\) be any one of these generating elements.
\item
Since \(\mathbf{r}\in \operatorname{Tor}(M)\), there exists an
\(s\in R\setminus 0_R\) such that \(s\mathbf{r} = 0_M\).
\item
Then \(s\mathbf{r} = 0\) with \(s\neq 0\), so
\(\left\{{\mathbf{r}}\right\} \subseteq B\) is \emph{not} a linearly
independent set, a contradiction.
\end{itemize}
\end{proof}
\begin{proof}[of b]
\envlist
\begin{itemize}
\tightlist
\item
  Let \(n = \operatorname{rank}M\), and let
  \(\mathcal B = \left\{{\mathbf{r}_i}\right\}_{i=1}^n \subseteq M\) be
  a generating set.
\item
  Let \(\tilde M \coloneqq M/\operatorname{Tor}(M)\) and
  \(\pi: M \to \tilde M\) be the canonical quotient map.
\end{itemize}
\begin{claim}
\begin{align*}
\tilde {\mathcal{B}}\coloneqq\pi(\mathcal B) = \left\{{\mathbf{r}_i + \operatorname{Tor}(M)}\right\}
\end{align*}
is a basis for \(\tilde M\).
\end{claim}
Note that the proof follows immediately.
\end{proof}
\begin{proof}[of claim: linearly independent]
\envlist
\begin{itemize}
\item
Suppose that
\begin{align*}
\sum_{i=1}^n s_i (\mathbf{r}_i + \operatorname{Tor}(M)) = \mathbf{0}_{\tilde M}
.\end{align*}
\item
Then using the definition of coset addition/multiplication, we can
write this as
\begin{align*}
\sum_{i=1}^n \qty { s_i \mathbf{r}_i + \operatorname{Tor}(M)} =
\qty{ \sum_{i=1}^n s_i \mathbf{r}_i} + \operatorname{Tor}(M) = 0_{\tilde M}
.\end{align*}
\item
Since
\(\tilde{\mathbf{x}} = 0 \in \tilde M \iff \tilde{\mathbf{x}} = \mathbf{x} + \operatorname{Tor}(M)\)
where \(\mathbf{x} \in \operatorname{Tor}(M)\), this forces
\(\sum s_i \mathbf{r}_i \in \operatorname{Tor}(M)\).
\item
Then there exists a scalar \(\alpha\in R^{\bullet}\) such that
\(\alpha \sum s_i \mathbf{r}_i = 0_M\).
\item
Since \(R\) is an integral domain and \(\alpha \neq 0\), we must have
\(\sum s_i \mathbf{r}_i = 0_M\).
\item
Since \(\left\{{\mathbf{r}_i}\right\}\) was linearly independent in
\(M\), we must have \(s_i = 0_R\) for all \(i\).
\end{itemize}
\end{proof}
\begin{proof}[of claim: spanning]
\envlist
\begin{itemize}
\item
Write
\(\pi(\mathcal B) = \left\{{\mathbf{r}_i + \operatorname{Tor}(M)}\right\}_{i=1}^n\)
as a set of cosets.
\item
  Letting \(\mathbf{x} \in \tilde M\) be arbitrary, we can write
  \(\mathbf{x} = \mathbf{m} + \operatorname{Tor}(M)\) for some
  \(\mathbf{m} \in M\) where \(\pi(\mathbf{m}) = \mathbf{x}\) by
  surjectivity of \(\pi\).
\item
Since \(\mathcal B\) is a basis for \(M\), we have
\(\mathbf{m} = \sum_{i=1}^n s_i \mathbf{r}_i\), and so
\begin{align*}
\mathbf{x}
&= \pi(\mathbf{m}) \\
&\coloneqq\pi\qty{ \sum_{i=1}^n s_i \mathbf{r}_i} \\
&= \sum_{i=1}^n s_i \pi(\mathbf{r}_i) \quad\text{since $\pi$ is an $R{\hbox{-}}$module morphism}\\
&\coloneqq\sum_{i=1}^n s_i \mathbf{(}\mathbf{r}_i + \operatorname{Tor}(M))
,\end{align*}
  which expresses \(\mathbf{x}\) as a linear combination of elements in
  \(\tilde{\mathcal{B}}\).
\end{itemize}
\end{proof}
\begin{proof}[of c]
\begin{quote}
Notation: Let \(0_R\) denote \(0\in R\) regarded as a ring element, and
\(\mathbf{0} \in R\) denoted \(0_R\) regarded as a module element (where
\(R\) is regarded as an \(R{\hbox{-}}\)module over itself)
\end{quote}
\begin{proof}[that $M$ is not free]
\envlist
\begin{itemize}
\item
\textbf{Claim}: If \(I\subseteq R\) is an ideal \emph{and} a free
  \(R{\hbox{-}}\)module, then \(I\) is principal.
\begin{itemize}
\item
    Suppose \(I\) is free and let \(I = \left\langle{B}\right\rangle\)
    for some basis \(B\); we will show
    \({\left\lvert {B} \right\rvert} \leq 1\).
\item
Toward a contradiction, suppose
\({\left\lvert {B} \right\rvert} \geq 2\) and let \(m_1, m_2\in B\).
\item
    Then since \(R\) is commutative, \(m_2 m_1 - m_1 m_2 = 0\), which
    is a linear dependence between \(m_1\) and \(m_2\) with nonzero
    coefficients \(m_2, -m_1 \in R\), contradicting independence of
    \(B\).
\item
So \(B\) has only one element \(m\).
\item
    But then \(I = \left\langle{m}\right\rangle = Rm\) is cyclic as an
    \(R{\hbox{-}}\)module and thus principal as an ideal of \(R\).
\item
Now since \(M\) was assumed to \emph{not} be principal, \(M\) is not
free (using the contrapositive of the claim).
\end{itemize}
\end{itemize}
\end{proof}
\begin{proof}[that $M$ is rank 1]
\envlist
\begin{itemize}
\item
For any module, we can take an element \(\mathbf{m}\in M^{\bullet}\)
and consider the cyclic submodule \(R\mathbf{m}\).
\item
  Since \(M\) is not principal, it is not the zero ideal, so we can
  choose a nonzero element \(\mathbf{m}\in M\).
\item
  We have \(\operatorname{rank}_R(M) \geq 1\), since
  \(\left\{{\mathbf{m}}\right\}\) is an \(R{\hbox{-}}\)linearly
  independent set, as the next point shows.
\item
  \(\left\{{\mathbf{m}}\right\}\) can not be linearly dependent, since
  \(R\) is an integral domain, \(M\subseteq R\), and
  \(\mathbf{m}\neq \mathbf{0}\), so
  \(\alpha \mathbf{m} = \mathbf{0} \implies \alpha = 0_R\).
\item
Claim: since \(R\) is commutative,
\(\operatorname{rank}_R(M) \leq 1\).
\begin{itemize}
\tightlist
\item
If we take two elements \(\mathbf{m}, \mathbf{n} \in M^{\bullet}\),
then since \(m, n\in R\) as well, we have \(nm = mn\) and so
    \begin{align*}
    (n)\mathbf{m} + (-m)\mathbf{n} = \mathbf{0}
    \end{align*}
    is a linear dependence with nonzero coefficients, so no set of two
    or more elements of \(M\) is linearly independent.
\end{itemize}
\end{itemize}
\textbf{\(M\) is torsion-free}:
\begin{itemize}
\item
Let \(\mathbf{x} \in \operatorname{Tor}M\), then there exists some
\(r\neq 0\in R\) such that \(r\mathbf{x} = \mathbf{0}\).
\item
But \(\mathbf{x}\in R\) as well and \(R\) is an integral domain, so
\(\mathbf{x}=0_R\), and thus
\(\operatorname{Tor}(M) = \left\{{0_R}\right\}\).
\end{itemize}
\end{proof}
\end{proof}
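A standard concrete instance to keep in mind (illustration only): in
\(R = {\mathbb{Z}}[\sqrt{-5}]\), the ideal

\begin{align*}
M = \left\langle{2,\ 1 + \sqrt{-5}}\right\rangle {~\trianglelefteq~}{\mathbb{Z}}[\sqrt{-5}]
\end{align*}

is not principal (by the usual norm argument: a single generator would
have norm dividing \(\gcd(N(2), N(1+\sqrt{-5})) = \gcd(4, 6) = 2\),
there are no elements of norm \(2\) since \(x^2+5y^2 = 2\) has no
integer solutions, and norm \(1\) would force \(M = R\)), so by (c) it
is a torsion-free \(R{\hbox{-}}\)module of rank 1 which is not free.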
\end{solution}
\hypertarget{star-spring-2020-6-done}{%
\subsubsection{\texorpdfstring{\(\star\) Spring 2020 \#6
\(\done\)}{\textbackslash star Spring 2020 \#6 \textbackslash done}}\label{star-spring-2020-6-done}}
Let \(R\) be a ring with unity.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Give a definition for a free module over \(R\).
\item
Define what it means for an \(R{\hbox{-}}\)module to be torsion free.
\item
Prove that if \(F\) is a free module, then any short exact sequence of
\(R{\hbox{-}}\)modules of the following form splits:
\begin{align*}
0 \to N \to M \to F \to 0
.\end{align*}
\item
Let \(R\) be a PID. Show that any finitely generated
\(R{\hbox{-}}\)module \(M\) can be expressed as a direct sum of a
torsion module and a free module.
\end{enumerate}
\begin{quote}
You may assume that a finitely generated torsionfree module over a PID
is free.
\end{quote}
\begin{solution}
Let \(R\) be a ring with 1.
\begin{proof}[of a]
An \(R{\hbox{-}}\)module \(M\) is \textbf{free} if any of the following
conditions hold:
\begin{itemize}
\tightlist
\item
\(M\) admits an \(R{\hbox{-}}\)linearly independent spanning set
\(\left\{{\mathbf{b}_\alpha}\right\}\), so
\begin{align*}m\in M \implies m = \sum_\alpha r_\alpha \mathbf{b}_\alpha\end{align*}
and
\begin{align*}\sum_\alpha r_\alpha \mathbf{b}_\alpha = 0_M \implies r_\alpha = 0_R\end{align*}
for all \(\alpha\).
\item
\(M\) admits a decomposition \(M \cong \bigoplus_{\alpha} R\) as a
direct sum of \(R{\hbox{-}}\)submodules.
\item
  There is a nonempty set \(X\) and a monomorphism \(X\hookrightarrow M\)
of sets such that for every \(R{\hbox{-}}\)module \(N\), every set map
\(X\to N\) lifts to a unique \(R{\hbox{-}}\)module morphism
\(M\to N\), so the following diagram commutes:
\end{itemize}
\begin{center}
\begin{tikzcd}
M \ar[rd, dotted, "\exists ! \tilde f"] & \\
X \ar[u, hook] \ar[r, "f"] & N
\end{tikzcd}
\end{center}
Equivalently,
\begin{align*}
\mathop{\mathrm{Hom}}_{\mathsf{Set}}(X, \mathop{\mathrm{Forget}}(N)) \xrightarrow{\sim} \mathop{\mathrm{Hom}}_{{\mathsf{R}{\hbox{-}}\mathsf{Mod}}}(M, N)
.\end{align*}
\end{proof}
\begin{proof}[of b]
\envlist
\begin{itemize}
\tightlist
\item
Define the annihilator:
\begin{align*}
\operatorname{Ann}(m) \coloneqq\left\{{r\in R {~\mathrel{\Big|}~}r\cdot m = 0_M}\right\} {~\trianglelefteq~}R
.\end{align*}
\begin{itemize}
\tightlist
\item
Note that \(mR \cong R/\operatorname{Ann}(m)\).
\end{itemize}
\item
Define the torsion submodule:
\begin{align*}
M_t \coloneqq\left\{{m\in M {~\mathrel{\Big|}~}\operatorname{Ann}(m) \neq 0}\right\} \leq M
\end{align*}
\item
\(M\) is \textbf{torsionfree} iff \(M_t = 0\) is the trivial
submodule.
\end{itemize}
\end{proof}
\begin{proof}[of c]
\envlist
\begin{itemize}
\item
Let the following be an SES where \(F\) is a free
\(R{\hbox{-}}\)module:
\begin{align*}
0 \to N \to M \xrightarrow{\pi} F \to 0
.\end{align*}
\item
Since \(F\) is free, there is a generating set
\(X = \left\{{x_\alpha}\right\}\) and a map
\(\iota:X\hookrightarrow F\) satisfying the 3rd property from (a).
\begin{itemize}
\tightlist
\item
If we construct any map \(f: X\to M\), the universal property of free
modules will give a lift \(\tilde f: F\to M\).
\end{itemize}
\item
Identify \(X\) with \(\iota(X) \subseteq F\).
\item
For every \(x\in X\), the preimage \(\pi^{-1}(x)\) is nonempty by
surjectivity. So arbitrarily pick any preimage.
\item
\(\left\{{\iota(x_\alpha)}\right\} \subseteq F\) and \(\pi\) is
surjective, so choose fibers \(\left\{{y_\alpha}\right\} \subseteq M\)
such that \(\pi(y_\alpha) = \iota(x_\alpha)\) and define
\begin{align*}
f: X&\to M \\
x_\alpha &\mapsto y_\alpha
.\end{align*}
\item
The universal property yields \(h: F\to M\):
\end{itemize}
\begin{center}
\begin{tikzcd}
& & & X=\left\{{x_\alpha}\right\} \ar[dd, hook, "\iota"]\ar[ddl, "f"'] & \\ \\
0 \ar[r]& N \ar[r] & M\ar[r, "\pi"'] & \ar[l, bend right, dotted ,"\exists ! h"'] F \ar[r] & 0
\end{tikzcd}
\end{center}
\begin{itemize}
\tightlist
\item
It remains to check that it's a section.
\begin{itemize}
\tightlist
\item
Write an arbitrary element of \(F\) as \(y = \sum r_i x_i\); then since both maps are
\(R{\hbox{-}}\)module morphisms, by \(R{\hbox{-}}\)linearity we can
write
\begin{align*}
(\pi \circ h)(y)
&= (\pi \circ h)\qty{ \sum r_i x_i } \\
&= \sum r_i (\pi \circ h)(x_i)
,\end{align*}
but since \(h(x_i) \in \pi^{-1}(x_i)\), we have
\((\pi \circ h)(x_i) = x_i\). So this recovers \(y\), and thus \(\pi \circ h = \operatorname{id}_F\).
\end{itemize}
\end{itemize}
\end{proof}
\begin{proof}[of c, shorter proof]
\envlist
\begin{itemize}
\item
Free implies projective
\begin{itemize}
\tightlist
\item
Universal property of \textbf{projective} objects: for every
epimorphism \(\pi:M\twoheadrightarrow N\) and every \(f:P\to N\)
there exists a (not necessarily unique) lift \(\tilde f: P\to M\):
\end{itemize}
\begin{center}
\begin{tikzcd}
& P\ar[d, "f"] \ar[dl, dotted, "\exists \tilde f"'] \\
M \ar[r, "\pi"] & N
\end{tikzcd}
\end{center}
\begin{itemize}
\tightlist
\item
Construct \(\phi\) in the following diagram using the same method as
above (surjectivity to pick elements in preimage):
\end{itemize}
\end{itemize}
\begin{center}
\begin{tikzcd}
&& X \\
\\
&& F \\
\\
M && N && 0
\arrow["\iota", hook, from=1-3, to=3-3]
\arrow["f", from=3-3, to=5-3]
\arrow["\pi"', two heads, from=5-1, to=5-3]
\arrow[from=5-3, to=5-5]
\arrow["{\exists \tilde \phi}"', dashed, from=3-3, to=5-1]
\arrow["\phi"', curve={height=24pt}, from=1-3, to=5-1]
\end{tikzcd}
\end{center}
\begin{quote}
\href{https://q.uiver.app/?q=WzAsNSxbMCw0LCJNIl0sWzIsNCwiTiJdLFs0LDQsIjAiXSxbMiwyLCJGIl0sWzIsMCwiWCJdLFs0LDMsIlxcaW90YSIsMCx7InN0eWxlIjp7InRhaWwiOnsibmFtZSI6Imhvb2siLCJzaWRlIjoidG9wIn19fV0sWzMsMSwiZiJdLFswLDEsIlxccGkiLDIseyJzdHlsZSI6eyJoZWFkIjp7Im5hbWUiOiJlcGkifX19XSxbMSwyXSxbMywwLCJcXGV4aXN0cyBcXHRpbGRlIFxccGhpIiwyLHsic3R5bGUiOnsiYm9keSI6eyJuYW1lIjoiZGFzaGVkIn19fV0sWzQsMCwiXFxwaGkiLDIseyJjdXJ2ZSI6NH1dXQ==}{Link
to Diagram}
\end{quote}
\begin{itemize}
\tightlist
\item
Now take the identity map, then commutativity is equivalent to being a
section.
\end{itemize}
\begin{center}
\begin{tikzcd}
& & & F\ar[d, "\one_F"]\ar[dl, "\exists ! h"'] & \\
0 \ar[r] & N\ar[r] & M\ar[r] & F \ar[r] & 0
\end{tikzcd}
\end{center}
\end{proof}
\begin{proof}[of d]
\envlist
There is a SES
\begin{center}
\begin{tikzcd}
0 \ar[r] & M_t \ar[r] & M \ar[r] & M/M_t \ar[r] & 0
\end{tikzcd}
\end{center}
\begin{claim}
\(M/M_t\) is a free \(R{\hbox{-}}\)module, so this sequence splits and
\(M\cong M_t \oplus {M\over M_t}\), where \(M_t\) is a torsion
\(R{\hbox{-}}\)module.
\begin{quote}
Note that by the hint, since \(R\) is a PID, it suffices to show that
\(M/M_t\) is torsionfree.
\end{quote}
\end{claim}
\begin{itemize}
\tightlist
\item
Let \(m+M_t \in M/M_t\) be arbitrary, and suppose it is a torsion
element; the claim is that it must be the trivial coset. This will
follow if we can show \(m\in M_t\).
\item
Since this coset is torsion, there exists some nonzero \(r\in R\) such that
\begin{align*}
M_t = r(m + M_t) \coloneqq(rm) + M_t \implies rm\in M_t
.\end{align*}
\item
Then \(rm\) is torsion in \(M\), so there exists some nonzero \(s\in R\) such
that \(s(rm) = 0_M\).
\item
Then \((sr)m = 0_M\) with \(sr \neq 0\) (since \(R\) is an integral domain and \(s, r \neq 0\)), which forces \(m\in M_t\).
\end{itemize}
\end{proof}
\end{solution}
\hypertarget{spring-2012-5-work}{%
\subsubsection{\texorpdfstring{Spring 2012 \#5
\(\work\)}{Spring 2012 \#5 \textbackslash work}}\label{spring-2012-5-work}}
Let \(M\) be a finitely generated module over a PID \(R\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
\(M_t\) be the set of torsion elements of \(M\), and show that \(M_t\)
is a submodule of \(M\).
\item
Show that \(M/M_t\) is torsion free.
\item
Prove that \(M \cong M_t \oplus F\) where \(F\) is a free module.
\end{enumerate}
\hypertarget{spring-2017-5-work}{%
\subsubsection{\texorpdfstring{Spring 2017 \#5
\(\work\)}{Spring 2017 \#5 \textbackslash work}}\label{spring-2017-5-work}}
Let \(R\) be an integral domain and let \(M\) be a nonzero torsion
\(R{\hbox{-}}\)module.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that if \(M\) is finitely generated then the annihilator in
\(R\) of \(M\) is nonzero.
\item
Give an example of a non-finitely generated torsion
\(R{\hbox{-}}\)module whose annihilator is \((0)\), and justify your
answer.
\end{enumerate}
\hypertarget{fall-2019-final-3-work}{%
\subsubsection{\texorpdfstring{Fall 2019 Final \#3
\(\work\)}{Fall 2019 Final \#3 \textbackslash work}}\label{fall-2019-final-3-work}}
Let \(R = k[x]\) for \(k\) a field and let \(M\) be the
\(R{\hbox{-}}\)module given by
\begin{align*}
M=\frac{k[x]}{(x-1)^{3}} \oplus \frac{k[x]}{\left(x^{2}+1\right)^{2}} \oplus \frac{k[x]}{(x-1)\left(x^{2}+1\right)^{4}} \oplus \frac{k[x]}{(x+2)\left(x^{2}+1\right)^{2}}
.\end{align*}
Describe the elementary divisors and invariant factors of \(M\).
\hypertarget{fall-2019-final-4-work}{%
\subsubsection{\texorpdfstring{Fall 2019 Final \#4
\(\work\)}{Fall 2019 Final \#4 \textbackslash work}}\label{fall-2019-final-4-work}}
Let \(I = (2, x)\) be an ideal in \(R = {\mathbb{Z}}[x]\), and show that
\(I\) is not a direct sum of nontrivial cyclic \(R{\hbox{-}}\)modules.
\hypertarget{fall-2019-final-5-work}{%
\subsubsection{\texorpdfstring{Fall 2019 Final \#5
\(\work\)}{Fall 2019 Final \#5 \textbackslash work}}\label{fall-2019-final-5-work}}
Let \(R\) be a PID.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Classify irreducible \(R{\hbox{-}}\)modules up to isomorphism.
\item
Classify indecomposable \(R{\hbox{-}}\)modules up to isomorphism.
\end{enumerate}
\hypertarget{fall-2019-final-6-work}{%
\subsubsection{\texorpdfstring{Fall 2019 Final \#6
\(\work\)}{Fall 2019 Final \#6 \textbackslash work}}\label{fall-2019-final-6-work}}
Let \(V\) be a finite-dimensional \(k{\hbox{-}}\)vector space and
\(T:V\to V\) a non-invertible \(k{\hbox{-}}\)linear map. Show that there
exists a \(k{\hbox{-}}\)linear map \(S:V\to V\) with \(T\circ S = 0\)
but \(S\circ T\neq 0\).
\hypertarget{fall-2019-final-7-work}{%
\subsubsection{\texorpdfstring{Fall 2019 Final \#7
\(\work\)}{Fall 2019 Final \#7 \textbackslash work}}\label{fall-2019-final-7-work}}
Let \(A\in M_n({\mathbb{C}})\) with \(A^2 = A\). Show that \(A\) is
similar to a diagonal matrix, and exhibit an explicit diagonal matrix
similar to \(A\).
\hypertarget{fall-2019-final-10-work}{%
\subsubsection{\texorpdfstring{Fall 2019 Final \#10
\(\work\)}{Fall 2019 Final \#10 \textbackslash work}}\label{fall-2019-final-10-work}}
Show that the eigenvalues of a Hermitian matrix \(A\) are real and that
\(A = PDP^{-1}\) where \(P\) is an invertible matrix with orthogonal
columns.
\hypertarget{fall-2020-7-work}{%
\subsubsection{\texorpdfstring{Fall 2020 \#7
\(\work\)}{Fall 2020 \#7 \textbackslash work}}\label{fall-2020-7-work}}
Let \(A \in \operatorname{Mat}(n\times n, {\mathbb{R}})\) be arbitrary.
Make \({\mathbb{R}}^n\) into an \({\mathbb{R}}[x]{\hbox{-}}\)module by
letting \(f(x).\mathbf{v} \coloneqq f(A)(\mathbf{v})\) for
\(f(x)\in {\mathbb{R}}[x]\) and
\(\mathbf{v} \in {\mathbb{R}}^n\). Suppose that this induces the
following direct sum decomposition:
\begin{align*}
{\mathbb{R}}^n \cong
{ {\mathbb{R}}[x] \over \left\langle{ (x-1)^3 }\right\rangle }
\oplus
{ {\mathbb{R}}[x] \over \left\langle{ (x^2+1)^2 }\right\rangle }
\oplus
{ {\mathbb{R}}[x] \over \left\langle{ (x-1)(x^2-1)(x^2+1)^4 }\right\rangle }
\oplus
{ {\mathbb{R}}[x] \over \left\langle{ (x+2)(x^2+1)^2 }\right\rangle }
.\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Determine the elementary divisors and invariant factors of \(A\).
\item
Determine the minimal polynomial of \(A\).
\item
Determine the characteristic polynomial of \(A\).
\end{enumerate}
\hypertarget{linear-algebra-diagonalizability}{%
\section{Linear Algebra:
Diagonalizability}\label{linear-algebra-diagonalizability}}
\hypertarget{fall-2017-7-work}{%
\subsection{\texorpdfstring{Fall 2017 \#7
\(\work\)}{Fall 2017 \#7 \textbackslash work}}\label{fall-2017-7-work}}
Let \(F\) be a field and let \(V\) and \(W\) be vector spaces over \(F\).
Make \(V\) and \(W\) into \(F[x]{\hbox{-}}\)modules via linear operators
\(T\) on \(V\) and \(S\) on \(W\) by defining \(X \cdot v = T (v)\) for
all \(v \in V\) and \(X \cdot w = S(w)\) for all \(w \in W\).
Denote the resulting \(F[x]{\hbox{-}}\)modules by \(V_T\) and \(W_S\)
respectively.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that an \(F[x]{\hbox{-}}\)module homomorphism from \(V_T\) to
\(W_S\) consists of an \(F{\hbox{-}}\)linear transformation
\(R : V \to W\) such that \(RT = SR\).
\item
Show that \(V_T \cong W_S\) as \(F[x]{\hbox{-}}\)modules \(\iff\) there
is an \(F{\hbox{-}}\)linear isomorphism \(P : V \to W\) such that
\(T = P^{-1}SP\).
\item
Recall that a module \(M\) is \emph{simple} if \(M \neq 0\) and any
proper submodule of \(M\) must be zero. Suppose that \(V\) has
dimension 2. Give an example of \(F\), \(T\) with \(V_T\) simple.
\item
Assume \(F\) is algebraically closed. Prove that if \(V\) has
dimension 2, then any \(V_T\) is not simple.
\end{enumerate}
\hypertarget{spring-2015-3-work}{%
\subsection{\texorpdfstring{Spring 2015 \#3
\(\work\)}{Spring 2015 \#3 \textbackslash work}}\label{spring-2015-3-work}}
Let \(F\) be a field and \(V\) a finite dimensional
\(F{\hbox{-}}\)vector space, and let \(A, B: V\to V\) be commuting
\(F{\hbox{-}}\)linear maps. Suppose there is a basis \({\mathcal{B}}_1\)
with respect to which \(A\) is diagonalizable and a basis
\({\mathcal{B}}_2\) with respect to which \(B\) is diagonalizable.
Prove that there is a basis \({\mathcal{B}}_3\) with respect to which
\(A\) and \(B\) are both diagonalizable.
\hypertarget{fall-2016-2-work}{%
\subsection{\texorpdfstring{Fall 2016 \#2
\(\work\)}{Fall 2016 \#2 \textbackslash work}}\label{fall-2016-2-work}}
Let \(A, B\) be two \(n\times n\) matrices with the property that
\(AB = BA\). Suppose that \(A\) and \(B\) are diagonalizable. Prove that
\(A\) and \(B\) are \emph{simultaneously} diagonalizable.
\hypertarget{spring-2019-1-done}{%
\subsection{\texorpdfstring{Spring 2019 \#1
\(\done\)}{Spring 2019 \#1 \textbackslash done}}\label{spring-2019-1-done}}
Let \(A\) be a square matrix over the complex numbers. Suppose that
\(A\) is nonsingular and that \(A^{2019}\) is diagonalizable over
\({\mathbb{C}}\).
Show that \(A\) is also diagonalizable over \({\mathbb{C}}\).
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
\(A\) is diagonalizable iff \(\min_A(x)\) is separable.
\begin{itemize}
\tightlist
\item
See
\href{https://math.stackexchange.com/questions/3027664/if-a-is-invertible-and-an-is-diagonalizable-then-a-is-diagonalizable}{further
discussion here}.
\end{itemize}
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{claim}
If \(A \in \operatorname{GL}(m, {\mathbb{F}})\) is invertible and
\(A^n/{\mathbb{F}}\) is diagonalizable, then \(A/{\mathbb{F}}\) is
diagonalizable.
\end{claim}
\begin{proof}[of claim]
\begin{itemize}
\item
Let \(A \in \operatorname{GL}(m, {\mathbb{F}})\).
\item
Since \(A^n\) is diagonalizable, \(\min_{A^n}(x) \in {\mathbb{F}}[x]\)
is separable and thus factors as a product of \(r \leq m\) \textbf{distinct}
linear factors:
\begin{align*}
\min_{A^n}(x) = \prod_{i=1}^r (x-\lambda_i), \quad \min_{A^n}(A^n) = 0
\end{align*}
where \(\left\{{\lambda_i}\right\}_{i=1}^r \subset {\mathbb{F}}\) are
the \textbf{distinct} eigenvalues of \(A^n\).
\item
Moreover
\(A\in \operatorname{GL}(m,{\mathbb{F}}) \implies A^n \in \operatorname{GL}(m,{\mathbb{F}})\):
\(A\) is invertible
\(\iff \operatorname{det}(A) = d \in {\mathbb{F}}^{\times}\), and so
\(\operatorname{det}(A^n) = \operatorname{det}(A)^n = d^n \in {\mathbb{F}}^{\times}\)
using the fact that the determinant is a ring morphism
\(\operatorname{det}: \operatorname{Mat}(m\times m) \to{\mathbb{F}}\)
and \({\mathbb{F}}^{\times}\) is closed under multiplication.
\item
So \(A^n\) is invertible, and thus has trivial kernel, and thus zero
is not an eigenvalue, so \(\lambda_i \neq 0\) for any \(i\).
\item
Since the \(\lambda_i\) are distinct and nonzero, \(x^k\)
is not a factor of \(\min_{A^n}(x)\) for any \(k\geq 1\). Thus the
\(r\) terms in the product correspond to precisely \(r\)
\textbf{distinct linear} factors.
\item
We can now construct a polynomial that annihilates \(A\), namely
\begin{align*}
q_A(x) \coloneqq\min_{A^n}(x^n) = \prod_{i=1}^r (x^n-\lambda_i) \in {\mathbb{F}}[x],
\end{align*}
where we can note that \(q_A(A) = \min_{A^n}(A^n) = 0\), and so
\(\min_A(x) \divides q_A(x)\) by minimality.
\end{itemize}
\begin{claim}
\(q_A(x)\) has exactly \(nr\) distinct linear factors in
\({ \mkern 1.5mu\overline{\mkern-1.5mu \mathbb{F}\mkern-1.5mu}\mkern 1.5mu }[x]\)
\end{claim}
\begin{itemize}
\item
This reduces to showing that no pair \(x^n-\lambda_i, x^n-\lambda_j\)
share a root, and that \(x^n-\lambda_i\) does not have multiple roots.
\item
For the first claim, we can factor
\begin{align*}
x^n - \lambda_i = \prod_{k=1}^n (x - \lambda_i^{1\over n} e^{2\pi i k \over n}) \coloneqq\prod_{k=1}^n (x-\lambda^{1\over n} \zeta_n^k)
,\end{align*}
where we now use the fact that
\(i\neq j \implies \lambda_i^{1\over n} \neq \lambda_j^{1\over n}\).
Thus no term in the above product appears as a factor in
\(x^n - \lambda_j\) for \(j\neq i\).
\item
For the second claim, we can check that
\({\frac{\partial }{\partial x}\,}\qty{x^n - \lambda_i} = nx^{n-1}\neq 0\in {\mathbb{F}}\),
and \(\gcd(x^n-\lambda_i, nx^{n-1}) = 1\) since the latter term has
only the roots \(x=0\) with multiplicity \(n-1\), whereas
\(\lambda_i\neq 0 \implies\) zero is not a root of \(x^n-\lambda_i\).
\end{itemize}
But now since \(q_A(x)\) has only distinct linear factors in
\(\mkern 1.5mu\overline{\mkern-1.5mu{\mathbb{F}}\mkern-1.5mu}\mkern 1.5mu[x]\)
and \(\min_A(x) \divides q_A(x)\), \(\min_A(x) \in {\mathbb{F}}[x]\) can
only have distinct linear factors, and \(A\) is thus diagonalizable over
\({\mathbb{F}}\).
\end{proof}
\end{solution}
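The invertibility hypothesis cannot be dropped: a nonzero nilpotent matrix has a
diagonalizable power without being diagonalizable itself. The following SymPy
snippet (a sanity check assuming SymPy is available, not part of the proof)
illustrates both this failure mode and an invertible example where the theorem applies.

\begin{verbatim}
from sympy import Matrix

# Nilpotent counterexample when invertibility fails: A^2 = 0 is
# (trivially) diagonalizable, but A is not, since min_A(x) = x^2.
A = Matrix([[0, 1],
            [0, 0]])
assert (A**2).is_diagonalizable()
assert not A.is_diagonalizable()

# Invertible example: B^2 = -I, so B is invertible; B^3 is
# diagonalizable over C and, as the theorem predicts, so is B.
B = Matrix([[0, -1],
            [1,  0]])
assert (B**3).is_diagonalizable()
assert B.is_diagonalizable()
\end{verbatim}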
\hypertarget{linear-algebra-misc}{%
\section{Linear Algebra: Misc}\label{linear-algebra-misc}}
\hypertarget{star-spring-2012-6-work}{%
\subsection{\texorpdfstring{\(\star\) Spring 2012 \#6
\(\work\)}{\textbackslash star Spring 2012 \#6 \textbackslash work}}\label{star-spring-2012-6-work}}
Let \(k\) be a field and let the group
\(G = \operatorname{GL}(m, k) \times\operatorname{GL}(n, k)\) act on
the set of \(m\times n\) matrices \(M_{m, n}(k)\) as follows:
\begin{align*}
(A, B) \cdot X = AXB^{-1}
\end{align*}
where \((A, B) \in G\) and \(X\in M_{m, n}(k)\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
State what it means for a group to act on a set. Prove that the above
definition yields a group action.
\item
Exhibit with justification a subset \(S\) of \(M_{m, n}(k)\) which
contains precisely one element of each orbit under this action.
\end{enumerate}
\hypertarget{star-spring-2014-7-work}{%
\subsection{\texorpdfstring{\(\star\) Spring 2014 \#7
\(\work\)}{\textbackslash star Spring 2014 \#7 \textbackslash work}}\label{star-spring-2014-7-work}}
Let \(G = \operatorname{GL}(3, {\mathbb{Q}}[x])\) be the group of
invertible \(3\times 3\) matrices over \({\mathbb{Q}}[x]\). For each
\(f\in {\mathbb{Q}}[x]\), let \(S_f\) be the set of \(3\times 3\)
matrices \(A\) over \({\mathbb{Q}}[x]\) such that
\(\operatorname{det}(A) = c f(x)\) for some nonzero constant
\(c\in {\mathbb{Q}}\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that for \((P, Q) \in G\times G\) and \(A\in S_f\), the formula
\begin{align*}
(P, Q)\cdot A \coloneqq PAQ^{-1}
\end{align*}
gives a well defined map \(G\times G \times S_f \to S_f\) and show
that this map gives a group action of \(G\times G\) on \(S_f\).
\item
For \(f(x) = x^3(x^2+1)^2\), give one representative from each orbit
of the group action in (a), and justify your assertion.
\end{enumerate}
\hypertarget{fall-2012-7-work}{%
\subsection{\texorpdfstring{Fall 2012 \#7
\(\work\)}{Fall 2012 \#7 \textbackslash work}}\label{fall-2012-7-work}}
Let \(k\) be a field of characteristic zero and \(A, B \in M_n(k)\) be
two square \(n\times n\) matrices over \(k\) such that \(AB - BA = A\).
Prove that \(\operatorname{det}A = 0\).
Moreover, when the characteristic of \(k\) is 2, find a counterexample
to this statement.
\hypertarget{fall-2012-8-work}{%
\subsection{\texorpdfstring{Fall 2012 \#8
\(\work\)}{Fall 2012 \#8 \textbackslash work}}\label{fall-2012-8-work}}
Prove that any nondegenerate matrix \(X\in M_n({\mathbb{R}})\) can be
written as \(X = UT\) where \(U\) is orthogonal and \(T\) is upper
triangular.
\hypertarget{fall-2012-5-work}{%
\subsection{\texorpdfstring{Fall 2012 \#5
\(\work\)}{Fall 2012 \#5 \textbackslash work}}\label{fall-2012-5-work}}
Let \(U\) be an infinite-dimensional vector space over a field \(k\),
\(f: U\to U\) a linear map, and
\(\left\{{u_1, \cdots, u_m}\right\} \subset U\) vectors such that \(U\)
is generated by
\(\left\{{u_1, \cdots, u_m, f^d(u_1), \cdots, f^d(u_m)}\right\}\) for
some \(d\in {\mathbb{N}}\).
Prove that \(U\) can be written as a direct sum \(U \cong V\oplus W\)
such that
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
\(V\) has a basis consisting of some vectors
\(v_1, \cdots, v_n, f^d(v_1), \cdots, f^d(v_n)\) for some
\(d\in {\mathbb{N}}\), and
\item
\(W\) is finite-dimensional.
\end{enumerate}
Moreover, prove that for any other decomposition
\(U \cong V' \oplus W'\), one has \(W' \cong W\).
\hypertarget{fall-2015-7-work}{%
\subsection{\texorpdfstring{Fall 2015 \#7
\(\work\)}{Fall 2015 \#7 \textbackslash work}}\label{fall-2015-7-work}}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that two \(3\times 3\) matrices over \({\mathbb{C}}\) are similar
\(\iff\) their characteristic polynomials are equal and their minimal
polynomials are equal.
\item
Does the conclusion in (a) hold for \(4\times 4\) matrices? Justify
your answer with a proof or counterexample.
\end{enumerate}
\hypertarget{fall-2014-4-work}{%
\subsection{\texorpdfstring{Fall 2014 \#4
\(\work\)}{Fall 2014 \#4 \textbackslash work}}\label{fall-2014-4-work}}
Let \(F\) be a field and \(T\) an \(n\times n\) matrix with entries in
\(F\). Let \(I\) be the ideal consisting of all polynomials
\(f\in F[x]\) such that \(f(T) =0\).
Show that the following statements are equivalent about a polynomial
\(g\in I\):
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
\(g\) is irreducible.
\item
If \(k\in F[x]\) is nonzero and of degree strictly less than that of \(g\),
then \(k(T)\) is an invertible matrix.
\end{enumerate}
\hypertarget{fall-2015-8-work}{%
\subsection{\texorpdfstring{Fall 2015 \#8
\(\work\)}{Fall 2015 \#8 \textbackslash work}}\label{fall-2015-8-work}}
Let \(V\) be a vector space over a field \(F\) and \(V {}^{ \vee }\) its
dual. A \emph{symmetric bilinear form} \(({-}, {-})\) on \(V\) is a map
\(V\times V\to F\) satisfying
\begin{align*}
(av_1 + b v_2, w) = a(v_1, w) + b(v_2, w) {\quad \operatorname{and} \quad} (v_1, v_2) = (v_2, v_1)
\end{align*}
for all \(a, b\in F\) and \(v_1, v_2 \in V\). The form is
\emph{nondegenerate} if the only element \(w\in V\) satisfying
\((v, w) = 0\) for all \(v\in V\) is \(w=0\).
Suppose \(({-}, {-})\) is a nondegenerate symmetric bilinear form on
\(V\). If \(W\) is a subspace of \(V\), define
\begin{align*}
W^{\perp} \coloneqq\left\{{v\in V {~\mathrel{\Big|}~}(v, w) = 0 \text{ for all } w\in W}\right\}
.\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that if \(X, Y\) are subspaces of \(V\) with \(Y\subset X\), then
\(X^{\perp} \subseteq Y^{\perp}\).
\item
Define an injective linear map
\begin{align*}
\psi: Y^{\perp}/X^{\perp} \hookrightarrow(X/Y) {}^{ \vee }
\end{align*}
which is an isomorphism if \(V\) is finite dimensional.
\end{enumerate}
\hypertarget{fall-2018-4-done}{%
\subsection{\texorpdfstring{Fall 2018 \#4
\(\done\)}{Fall 2018 \#4 \textbackslash done}}\label{fall-2018-4-done}}
Let \(V\) be a finite dimensional vector space over a field (the field
is not necessarily algebraically closed).
Let \(\phi : V \to V\) be a linear transformation. Prove that there
exists a decomposition of \(V\) as \(V = U \oplus W\) , where \(U\) and
\(W\) are \(\phi{\hbox{-}}\)invariant subspaces of \(V\) ,
\({\left.{{\phi}} \right|_{{U}} }\) is nilpotent, and
\({\left.{{\phi}} \right|_{{W}} }\) is nonsingular.
\todo[inline]{Revisit.}
\begin{solution}
Let \(m(x)\) be the minimal polynomial of \(\phi\). If the polynomial
\(f(x) = x\) does not divide \(m\), then \(\phi\) does not have zero as an
eigenvalue, so \(\phi\) is nonsingular and the decomposition
\(V = 0 \oplus V\) works, since the zero map on the zero summand is (vacuously) nilpotent.
Otherwise, write \(m(x) = x^j \rho(x)\) where
\(\gcd(x, \rho(x)) = 1\) and \(j \geq 1\).

Since \(x^j\) and \(\rho(x)\) are coprime, write
\(1 = a(x)x^j + b(x)\rho(x)\) for some \(a, b \in k[x]\), and set
\begin{align*}
U \coloneqq\ker \phi^j, \qquad W \coloneqq\ker \rho(\phi)
.\end{align*}
For any \(v\in V\),
\(v = a(\phi)\phi^j(v) + b(\phi)\rho(\phi)(v)\), where the first summand
lies in \(W\) and the second lies in \(U\) because \(m(\phi) = 0\); applying the
same identity to a vector of \(U \cap W\) shows that it is zero, so
\(V = U \oplus W\). Both summands are \(\phi{\hbox{-}}\)invariant, being
kernels of polynomials in \(\phi\).

We can now note that \({\left.{{\phi}} \right|_{{U}} }\) is nilpotent
because \(\phi^j\) vanishes on \(U\), and
\({\left.{{\phi}} \right|_{{W}} }\) is nonsingular: if \(w\in W\) and
\(\phi(w) = 0\), then \(0 = \rho(\phi)(w) = \rho(0)w\) with
\(\rho(0) \neq 0\), so \(w = 0\).
\end{solution}
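As a concrete illustration (a SymPy sketch, not part of the proof): for a singular
matrix \(A\) one can realize the same splitting by taking \(U = \ker(A^n)\) and
\(W = \operatorname{im}(A^n)\), as in Fitting's lemma; the example matrix below is arbitrary.

\begin{verbatim}
from sympy import Matrix, zeros

# A singular example operator playing the role of phi.
A = Matrix([[0, 1, 0],
            [0, 0, 0],
            [0, 0, 2]])
n = A.shape[0]
An = A**n

U = An.nullspace()      # basis for U = ker(A^n)
W = An.columnspace()    # basis for W = im(A^n)

# Together the bases span the whole space, so V = U (+) W.
assert Matrix.hstack(*(U + W)).rank() == n

# A restricted to U is nilpotent: A^n kills every basis vector of U.
assert all(An * u == zeros(n, 1) for u in U)

# A restricted to W is nonsingular: A is injective on W, and W is
# A-invariant since it is the image of a power of A.
assert Matrix.hstack(*[A * w for w in W]).rank() == len(W)
\end{verbatim}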
\hypertarget{fall-2018-5-done}{%
\subsection{\texorpdfstring{Fall 2018 \#5
\(\done\)}{Fall 2018 \#5 \textbackslash done}}\label{fall-2018-5-done}}
Let \(A\) be an \(n \times n\) matrix.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Suppose that \(v\) is a column vector such that the set
\(\{v, Av, . . . , A^{n-1} v\}\) is linearly independent. Show that
any matrix \(B\) that commutes with \(A\) is a polynomial in \(A\).
\item
Show that there exists a column vector \(v\) such that the set
\(\{v, Av, . . . , A^{n-1} v\}\) is linearly independent \(\iff\) the
characteristic polynomial of \(A\) equals the minimal polynomial of A.
\end{enumerate}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Powers of \(A\) commute with polynomials in \(A\).
\item
The image of a linear map is determined by the image of a basis
\end{itemize}
\end{concept}
\begin{strategy}
\envlist
\begin{itemize}
\tightlist
\item
Use Cayley-Hamilton to relate the minimal polynomial to a linear
dependence.
\item
Get a lower bound on the degree of the minimal polynomial.
\item
Use the \(k[x]{\hbox{-}}\)action on \(V\) induced by \(A\) to decompose \(V\)
into cyclic \(k[x]{\hbox{-}}\)modules, and use the special form of the
denominators in the invariant factor decomposition.
\item
Reduce to monomials.
\end{itemize}
\end{strategy}
\begin{solution}
\envlist
\begin{proof}[of a]
Letting \(\mathbf{v}\) be fixed, since
\(\left\{{A^j \mathbf{v}}\right\}_{j=0}^{n-1}\) spans \(V\) we can write
\begin{align*}
B\mathbf{v} = \sum_{j=0}^{n-1}c_j A^j \mathbf{v}
.\end{align*}
So let \(p(x) = \sum_{j=0}^{n-1}c_jx^j\). Then consider how \(B\) acts
on any basis vector \(A^k \mathbf{v}\).
We have
\begin{align*}
BA^k \mathbf{v}
&= A^k B\mathbf{v} \\
&= A^k p(A) \mathbf{v} \\
&= p(A) A^k \mathbf{v}
,\end{align*}
so \(B = p(A)\) as operators since their actions agree on every basis
vector in \(V\).
\end{proof}
\begin{proof}[of b, $\implies$]
\envlist
\begin{itemize}
\item
If
\(\left\{{A^j \mathbf{v} {~\mathrel{\Big|}~}0\leq j \leq n-1}\right\}\)
is linearly independent, this means that \(A\) does not satisfy any
nonzero polynomial of degree \(d < n\).
\item
So \(\deg m_A(x) = n\), and since \(m_A(x)\) divides \(\chi_A(x)\) and
both are monic polynomials of degree \(n\), they must be equal.
\end{itemize}
\end{proof}
\begin{proof}[of b, $\impliedby$]
\envlist
\begin{itemize}
\item
Let \(k[x]\curvearrowright V\) by
\(p(x) \curvearrowright\mathbf{v} \coloneqq p(A)(\mathbf{v})\). This induces an invariant
factor decomposition \(V \cong \bigoplus k[x]/(f_i)\).
\item
Since the product of the invariant factors is the characteristic
polynomial, the largest invariant factor is the minimal polynomial,
and these two are equal, there can only be one invariant factor and
thus the invariant factor decomposition is
\begin{align*}
V\cong \frac{k[x]}{(\chi_A(x))}
\end{align*}
as an isomorphism of \(k[x]{\hbox{-}}\)modules.
\item
So \(V\) is a cyclic \(k[x]\) module, which means that
\(V = k[x]\curvearrowright\mathbf{v}\) for some \(\mathbf{v}\in V\)
such that \(\operatorname{Ann}(\mathbf{v}) = \chi_A(x)\), i.e.~there
is some element \(\mathbf{v}\in V\) whose orbit is all of \(V\).
\item
But then noting that monomials span \(k[x]\) as a
\(k{\hbox{-}}\)module, we can write
\begin{align*}
V &\cong
k[x] \curvearrowright\mathbf{v} \\
&\coloneqq\left\{{f(x) \curvearrowright\mathbf{v} {~\mathrel{\Big|}~}f \in k[x]}\right\} \\
&= {\operatorname{span}}_k \left\{{x^k \curvearrowright\mathbf{v} {~\mathrel{\Big|}~}k \geq 0}\right\} \\
&\coloneqq{\operatorname{span}}_k \left\{{A^k\mathbf{v} {~\mathrel{\Big|}~}k \geq 0}\right\}
,\end{align*}
where we've used that \(x\) acts by \(A\) and thus \(x^k\) acts by
\(A^k\).
\item
Moreover, we can note that if \(\ell \geq \deg \chi_A(x)\), then
\(A^\ell\) is a linear combination of
\(\left\{{A^j \mathrel{\Big|}0 \leq j \leq n-1}\right\}\), and so
\begin{align*}
V &\cong {\operatorname{span}}_k \left\{{A^\ell\mathbf{v} {~\mathrel{\Big|}~}\ell \geq 0}\right\} \\
&= {\operatorname{span}}_k \left\{{A^\ell \mathbf{v} {~\mathrel{\Big|}~}0 \leq \ell \leq n-1}\right\}
.\end{align*}
\end{itemize}
\end{proof}
\end{solution}
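Both parts can be sanity-checked computationally; the SymPy sketch below (not part
of the solution) uses the companion matrix of \(x^3 - 2\), for which the
characteristic and minimal polynomials agree, so \(\mathbf{e}_1\) is a cyclic
vector, and recovers the coefficients of the polynomial representing a commuting
matrix \(B\) by solving the linear system from part (a).

\begin{verbatim}
from sympy import Matrix, eye, zeros

# Companion matrix of x^3 - 2; v = e_1 is a cyclic vector.
A = Matrix([[0, 0, 2],
            [1, 0, 0],
            [0, 1, 0]])
v = Matrix([1, 0, 0])

# B is a polynomial in A, hence commutes with A; pretend we only
# know that B commutes and try to recover the polynomial.
B = A**2 + 3*A + 2*eye(3)
assert A*B == B*A

# Solve [v | Av | A^2 v] c = B v for the coefficients c_j of p(x).
K = Matrix.hstack(v, A*v, A**2 * v)
c = K.solve(B*v)

# Reassemble p(A) and check that it equals B, as in part (a).
P = sum((c[j] * A**j for j in range(3)), zeros(3, 3))
assert P == B
\end{verbatim}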
\hypertarget{fall-2019-8-work}{%
\subsection{\texorpdfstring{Fall 2019 \#8
\(\work\)}{Fall 2019 \#8 \textbackslash work}}\label{fall-2019-8-work}}
Let \(\{e_1, \cdots, e_n \}\) be a basis of a real vector space \(V\)
and let
\begin{align*}
\Lambda \coloneqq\left\{{ \sum r_i e_i \mathrel{\Big|}r_i \in {\mathbb{Z}}}\right\}
\end{align*}
Let \(\cdot\) be a non-degenerate (\(v \cdot w = 0\) for all
\(w \in V \iff v = 0\)) symmetric bilinear form on \(V\) such that the
Gram matrix \(M = (e_i \cdot e_j )\) has integer entries.
Define the dual of \(\Lambda\) to be
\begin{align*}
\Lambda {}^{ \vee }\coloneqq\{v \in V {~\mathrel{\Big|}~}v \cdot x \in {\mathbb{Z}}\text{ for all } x \in \Lambda
\}
.\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(\Lambda \subset \Lambda {}^{ \vee }\).
\item
Prove that \(\operatorname{det}M \neq 0\) and that the rows of
\(M^{-1}\) span \(\Lambda {}^{ \vee }\).
\item
Prove that \(\operatorname{det}M = |\Lambda {}^{ \vee }/\Lambda|\).
\end{enumerate}
\todo[inline]{Todo, missing part (c).}
\begin{solution}
\envlist
\begin{proof}[of a]
\envlist
\begin{itemize}
\item
Let \(\mathbf{v} \in \Lambda\), so
\(\mathbf{v} = \sum_{i=1}^n r_i \mathbf{e}_i\) where
\(r_i \in {\mathbb{Z}}\) for all \(i\).
\item
Then if \(\mathbf{x} = \sum_{j=1}^n s_j \mathbf{e}_j \in \Lambda\) is
arbitrary, we have \(s_j \in {\mathbb{Z}}\) for all \(j\) and
\begin{align*}
{\left\langle {\mathbf{v}},~{\mathbf{x}} \right\rangle}
&= {\left\langle {\sum_{i=1}^n r_i \mathbf{e}_i},~{\sum_{j=1}^n s_j \mathbf{e}_j } \right\rangle} \\
&= \sum_{i=1}^n \sum_{j=1}^n r_i s_j {\left\langle {\mathbf{e}_i},~{\mathbf{e}_j } \right\rangle} \in {\mathbb{Z}}
\end{align*}
since this is a sum of products of integers (since
\({\left\langle {\mathbf{e}_i},~{\mathbf{e}_j} \right\rangle} \in {\mathbb{Z}}\)
for each \(i, j\) pair by assumption) so
\(\mathbf{v} \in \Lambda {}^{ \vee }\) by definition.
\end{itemize}
\end{proof}
\begin{proof}[of b]
\begin{claim}
The determinant is nonzero.
\end{claim}
\begin{itemize}
\item
Suppose \(\operatorname{det}M = 0\). Then \(\ker M \neq \mathbf{0}\),
so let \(\mathbf{v} \in \ker M\) be given by
\(\mathbf{v} = \sum_{i=1}^n v_i \mathbf{e}_i \neq \mathbf{0}\).
\item
Note that
\begin{align*}
M\mathbf{v} = 0 &\implies
\left[
\begin{array}{ccc}
\mathbf{e}_1 \cdot \mathbf{e}_1 & \mathbf{e}_1 \cdot \mathbf{e}_2 & \cdots \\
\mathbf{e}_2 \cdot \mathbf{e}_1 & \mathbf{e}_2 \cdot \mathbf{e}_2 & \cdots \\
\vdots & \vdots & \ddots
\end{array}
\right]
\left[\begin{array}{c}
v_1 \\ v_2 \\ \vdots
\end{array}\right] = \mathbf{0} \\ \\
&\implies \sum_{j=1}^n v_j{\left\langle {\mathbf{e}_k},~{\mathbf{e}_j} \right\rangle} = 0 \quad \text{for each fixed } k
.\end{align*}
\item
We can now note that
\({\left\langle {\mathbf{e}_k},~{\mathbf{v}} \right\rangle} = \sum_{j=1}^n v_j {\left\langle {\mathbf{e}_k},~{\mathbf{e}_j} \right\rangle} = 0\)
for every \(k\) by the above observation, which forces
\(\mathbf{v} = 0\) by non-degeneracy of
\({\left\langle {{-}},~{{-}} \right\rangle}\), a contradiction.
\end{itemize}
\end{proof}
\begin{proof}[of c]
\envlist
???
\todo[inline]{Missing work!}
\end{proof}
\end{solution}
\begin{solution}[Alternative]
Write \(M = A^tA\) where \(A\) has the \(\mathbf{e}_i\) as columns. Then
\begin{align*}
M\mathbf{x} = 0
&\implies A^t A \mathbf{x} = 0 \\
&\implies \mathbf{x}^t A^t A \mathbf{x} = 0 \\
&\implies {\left\lVert {A \mathbf{x}} \right\rVert}^2 = 0 \\
&\implies A\mathbf{x} = 0 \\
&\implies \mathbf{x} = 0
,\end{align*}
since \(A\) has full rank because the \(\mathbf{e}_i\) are linearly
independent.
Let \(A = [\mathbf{e}_1, \cdots, \mathbf{e}_n]\) be the matrix with
\(\mathbf{e}_i\) as its \(i\)th column.
\begin{claim}
The rows of \(A^{-1}\) span \(\Lambda {}^{ \vee }\). Equivalently, the
columns of \(A^{-t}\) span \(\Lambda {}^{ \vee }\).
\end{claim}
\begin{itemize}
\item
Let \(B = A^{-t}\) and let \(\mathbf{b}_i\) denote the columns of
\(B\), so
\(\operatorname{im}B = {\operatorname{span}}{\left\{{\mathbf{b}_i}\right\}}\).
\item
Since \(A \in \operatorname{GL}(n, {\mathbb{R}})\),
\(A^{-1}, A^t, A^{-t} \in \operatorname{GL}(n, {\mathbb{R}})\) as
well.
\begin{align*}
\mathbf{v} \in \Lambda {}^{ \vee }
&\implies {\left\langle {\mathbf{e}_i},~{\mathbf{v}} \right\rangle} = z_i \in {\mathbb{Z}}\quad \forall i \\
&\implies A^t \mathbf{v} = \mathbf{z} \coloneqq[z_1, \cdots, z_n] \in {\mathbb{Z}}^n \\
&\implies \mathbf{v} = A^{-t} \mathbf{z} \coloneqq B\mathbf{z} \in \operatorname{im}B \\
&\implies \mathbf{v} \in \operatorname{im}B \\
&\implies \Lambda {}^{ \vee }\subseteq \operatorname{im}B
,\end{align*}
and
\begin{align*}
B^t A = (A^{-t})^t A = A^{-1}A = I \\
\implies \mathbf{b}_i \cdot \mathbf{e}_j = \delta_{ij} \in {\mathbb{Z}}\\
\implies \operatorname{im}B \subseteq {\operatorname{span}}~ \Lambda {}^{ \vee }
.\end{align*}
\end{itemize}
\end{solution}
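A small numerical sanity check of parts (a) and (b), using SymPy and the Gram matrix
of the \(A_2\) root lattice (an illustration only, not part of either solution):

\begin{verbatim}
from sympy import Matrix

# Integer Gram matrix M = (e_i . e_j) for a rank-2 lattice.
M = Matrix([[ 2, -1],
            [-1,  2]])
assert M.det() == 3        # in particular det(M) != 0

# In e-coordinates, pairing a row vector r with an integer column x
# is r*M*x.  Row i of M^{-1} pairs to delta_{ij} with e_j, so every
# row of M^{-1} lies in the dual lattice.
Minv = M.inv()
assert all(entry.is_integer for entry in Minv * M)

# The first row of M^{-1} is [2/3, 1/3]: not in the lattice, but 3
# times it is, so it has order 3 modulo the lattice -- matching
# det(M) = 3 = |dual/lattice| for this example.
r = Minv.row(0)
assert not all(entry.is_integer for entry in r)
assert all(entry.is_integer for entry in 3 * r)
\end{verbatim}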
\hypertarget{spring-2013-6-done}{%
\subsection{\texorpdfstring{Spring 2013 \#6
\(\done\)}{Spring 2013 \#6 \textbackslash done}}\label{spring-2013-6-done}}
Let \(V\) be a finite dimensional vector space over a field \(F\) and
let \(T: V\to V\) be a linear operator with characteristic polynomial
\(f(x) \in F[x]\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that \(f(x)\) is irreducible in \(F[x] \iff\) there are no proper
nonzero subspaces \(W< V\) with \(T(W) \subseteq W\).
\item
If \(f(x)\) is irreducible in \(F[x]\) and the characteristic of \(F\)
is 0, show that \(T\) is diagonalizable when we extend the field to
its algebraic closure.
\end{enumerate}
\todo[inline]{Is there a proof without matrices? What if $V$ is infinite dimensional?}
\todo[inline]{How to extend basis?}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Every nonzero \(\mathbf{v}\in V\) is \(T{\hbox{-}}\)cyclic
\(\iff \chi_T(x)/{\mathbb{k}}\) is irreducible.
\begin{itemize}
\tightlist
\item
\(\implies\): Same as argument below.
\item
\(\impliedby\): Suppose \(f\) is irreducible, then \(f\) is equal to
the minimal polynomial of \(T\).
\end{itemize}
\item
Characterization of diagonalizability: \(T\) is diagonalizable over
\(\mkern 1.5mu\overline{\mkern-1.5muF\mkern-1.5mu}\mkern 1.5mu\) \(\iff \min_{T, F}\) is squarefree in
\(\mkern 1.5mu\overline{\mkern-1.5muF\mkern-1.5mu}\mkern 1.5mu[x]\).
\end{itemize}
\end{concept}
\begin{solution}
\envlist
Let \(f\) be the characteristic polynomial of \(T\).
\begin{proof}[of a, $\implies$. Matrix-dependent]
\(\implies\):
\begin{itemize}
\tightlist
\item
By contrapositive, suppose there is a proper nonzero invariant
subspace \(W<V\) with \(T(W) \subseteq W\), we will show the
characteristic polynomial \(f \coloneqq\chi_{V, T}(x)\) is reducible.
\item
Since \(T(W)\subseteq W\), the restriction
\(T_W \coloneqq{\left.{{T}} \right|_{{W}} }: W\to W\) is a linear
operator on \(W\); write \(g \coloneqq\chi_{W, T_W}(x)\) for its
characteristic polynomial.
\end{itemize}
\begin{claim}
\(g\) divides \(f\) in \({\mathbb{F}}[x]\) and \(\deg(g) < \deg(f)\).
\end{claim}
\begin{itemize}
\item
Choose an ordered basis for \(W\), say
\({\mathcal{B}}_W \coloneqq\left\{{\mathbf{w}_1, \cdots, \mathbf{w}_k}\right\}\)
where \(k=\dim_F(W)\)
\item
Claim: this can be extended to a basis of \(V\), say
\({\mathcal{B}}_V \coloneqq\left\{{\mathbf{w}_1, \cdots, \mathbf{w}_k, \mathbf{v}_1, \cdots, \mathbf{v}_j}\right\}\)
where \(k+j = \dim_F(V)\).
\begin{itemize}
\tightlist
\item
Note that since \(W<V\) is proper, \(j\geq 1\).
\end{itemize}
\item
Restrict \(T\) to \(W\) to get \(T_W\), then let
\(B = [T_W]_{{\mathcal{B}}_W}\) be the matrix representation of
\(T_W\) with respect to \({\mathcal{B}}_W\).
\item
Now consider the matrix representation \([T]_{{\mathcal{B}}_V}\), in
block form this is given by
\begin{align*}
[T]_{{\mathcal{B}}_V} =
\begin{bmatrix}
B & C \\
0 & D
\end{bmatrix}
\end{align*}
where we've used that \(W<V\) is proper to get the existence of
\(C, D\) (there is at least one additional row/column since
\(j\geq 1\) in the extended basis.) \todo[inline]{Why?}
\item
Now expand along the first column block to obtain
\begin{align*}
\chi_{T, V}(x) \coloneqq\operatorname{det}([T]_{{\mathcal{B}}_V} - xI) = \operatorname{det}(B - xI)\cdot \operatorname{det}(D - xI) \coloneqq\chi_{T, W}(x) \cdot \operatorname{det}(D-xI)
.\end{align*}
\item
Claim: \(\operatorname{det}(D - xI) \in F[x]\) is nonconstant: \(D\) is a
\(j\times j\) block with \(j\geq 1\), so \(\operatorname{det}(D - xI)\) has degree \(j \geq 1\) in \(x\).
\item
Given the claim, \(\chi_{T, W}(x)\) is
a proper divisor of \(\chi_{T, V}(x)\).
\item
Thus \(f\) is reducible.
\end{itemize}
\end{proof}
\begin{proof}[of a, $\impliedby$]
\(\impliedby\)
\begin{itemize}
\tightlist
\item
Suppose \(f\) is reducible, then we will produce a proper
\(T{\hbox{-}}\)invariant subspace.
\item
Claim: if \(f\) is reducible, there exists a nonzero, noncyclic vector
\(\mathbf{v}\). To see this, let \(p\) be an irreducible factor of \(f\); every
irreducible factor of \(\chi_T\) divides \(\min_T\), so \(p(T)\) is singular and
we may choose \(\mathbf{v}\neq 0\) with \(p(T)\mathbf{v} = 0\). The
\(T{\hbox{-}}\)annihilator of \(\mathbf{v}\) then divides \(p\), which has degree
strictly less than \(n\) since \(f\) is reducible, so \(\mathbf{v}\) is not cyclic.
\item
Then \({\operatorname{span}}_k\left\{{T^j\mathbf{v}}\right\}_{j\geq 0}\)
is a \(T{\hbox{-}}\)invariant subspace that is nonzero, and not the
entire space since \(\mathbf{v}\) is not cyclic.
\end{itemize}
\end{proof}
\begin{proof}[of b]
\envlist
\begin{itemize}
\tightlist
\item
Let \(\min_{T, F}(x)\) be the minimal polynomial of \(T\) and
\(\chi_{T, F}(x)\) be its characteristic polynomial.
\item
By Cayley-Hamilton, \(\min_{T, F}(x)\) divides \(\chi_{T, F}\)
\item
Since \(\chi_{T, F}\) is irreducible, these polynomials are equal.
\item
Claim: \(T/F\) is diagonalizable \(\iff \min_{T, F}\) splits over
\(F\) and is squarefree.
\item
Replace \(F\) with its algebraic closure, then \(\min_{T, F}\) splits.
\item
Claim: in characteristic zero, every irreducible polynomial is
separable
\begin{itemize}
\tightlist
\item
Proof: it must be the case that either \(\gcd(f, f') = 1\) or
\(f' \equiv 0\), where the second case only happens in
characteristic \(p>0\).
\item
The first case is true because \(\deg f' < \deg f\), and if
\(\gcd(f, f') = p\), then \(\deg p < \deg f\) and \(p\divides f\)
forces \(p=1\) since \(f\) is irreducible.
\end{itemize}
\item
So \(\min_{T, F}\) splits into distinct linear factors
\item
Thus \(T\) is diagonalizable.
\end{itemize}
\end{proof}
\end{solution}
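A quick SymPy illustration of both parts (not part of the solution): over
\({\mathbb{Q}}\) the matrix below has irreducible characteristic polynomial
\(x^2 + 1\), so by (a) it has no proper nonzero invariant subspaces, and over
\({\mathbb{C}}\) it becomes diagonalizable, as in (b).

\begin{verbatim}
from sympy import Matrix, Poly, symbols

x = symbols('x')

T = Matrix([[0, -1],
            [1,  0]])

# Characteristic polynomial x^2 + 1 is irreducible over Q.
chi = Poly(T.charpoly(x).as_expr(), x, domain='QQ')
assert chi.is_irreducible

# Not diagonalizable over R, but diagonalizable over C, where the
# eigenvalues are the distinct roots +i and -i.
assert not T.is_diagonalizable(reals_only=True)
assert T.is_diagonalizable()
\end{verbatim}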
\hypertarget{fall-2020-8-work}{%
\subsection{\texorpdfstring{Fall 2020 \#8
\(\work\)}{Fall 2020 \#8 \textbackslash work}}\label{fall-2020-8-work}}
Let \(A\in \operatorname{Mat}(n\times n, {\mathbb{C}})\) such that the
group generated by \(A\) under multiplication is finite. Show that
\begin{align*}
\operatorname{Tr}(A^{-1}) ={\overline{{\operatorname{Tr}(A) }}}
,\end{align*}
where \({\overline{{({-})}}}\) denotes taking the complex conjugate and
\(\operatorname{Tr}({-})\) is the trace.
\hypertarget{linear-algebra-canonical-forms}{%
\section{Linear Algebra: Canonical
Forms}\label{linear-algebra-canonical-forms}}
\hypertarget{star-spring-2012-8-work}{%
\subsection{\texorpdfstring{\(\star\) Spring 2012 \#8
\(\work\)}{\textbackslash star Spring 2012 \#8 \textbackslash work}}\label{star-spring-2012-8-work}}
Let \(V\) be a finite-dimensional vector space over a field \(k\) and
\(T:V\to V\) a linear transformation.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Provide a definition for the \emph{minimal polynomial} in \(k[x]\) for
\(T\).
\item
Define the \emph{characteristic polynomial} for \(T\).
\item
Prove the Cayley-Hamilton theorem: the linear transformation \(T\)
satisfies its characteristic polynomial.
\end{enumerate}
\hypertarget{star-spring-2020-8-work}{%
\subsection{\texorpdfstring{\(\star\) Spring 2020 \#8
\(\work\)}{\textbackslash star Spring 2020 \#8 \textbackslash work}}\label{star-spring-2020-8-work}}
Let \(T:V\to V\) be a linear transformation where \(V\) is a
finite-dimensional vector space over \({\mathbb{C}}\). Prove the
Cayley-Hamilton theorem: if \(p(x)\) is the characteristic polynomial of
\(T\), then \(p(T) = 0\). You may use canonical forms.
\hypertarget{star-spring-2012-7-work}{%
\subsection{\texorpdfstring{\(\star\) Spring 2012 \#7
\(\work\)}{\textbackslash star Spring 2012 \#7 \textbackslash work}}\label{star-spring-2012-7-work}}
Consider the following matrix as a linear transformation from
\(V\coloneqq{\mathbb{C}}^5\) to itself:
\begin{align*}
A=\left(\begin{array}{ccccc}
-1 & 1 & 0 & 0 & 0 \\
-4 & 3 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 2
\end{array}\right)
.\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Find the invariant factors of \(A\).
\item
Express \(V\) in terms of a direct sum of indecomposable
\({\mathbb{C}}[x]{\hbox{-}}\)modules.
\item
Find the Jordan canonical form of \(A\).
\end{enumerate}
\hypertarget{fall-2019-final-8-work}{%
\subsection{\texorpdfstring{Fall 2019 Final \#8
\(\work\)}{Fall 2019 Final \#8 \textbackslash work}}\label{fall-2019-final-8-work}}
Exhibit the rational canonical form for
\begin{itemize}
\tightlist
\item
\(A\in M_6({\mathbb{Q}})\) with minimal polynomial
\((x-1)(x^2 + 1)^2\).
\item
\(A\in M_{10}({\mathbb{Q}})\) with minimal polynomial
\((x^2+1)^2(x^3 + 1)\).
\end{itemize}
\hypertarget{fall-2019-final-9-work}{%
\subsection{\texorpdfstring{Fall 2019 Final \#9
\(\work\)}{Fall 2019 Final \#9 \textbackslash work}}\label{fall-2019-final-9-work}}
Exhibit the rational and Jordan canonical forms for the following matrix
\(A\in M_4({\mathbb{C}})\):
\begin{align*}
A=\left(\begin{array}{cccc}
2 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 \\
-2 & -2 & 0 & 1 \\
-2 & 0 & -1 & -2
\end{array}\right)
.\end{align*}
\hypertarget{spring-2016-7-work}{%
\subsection{\texorpdfstring{Spring 2016 \#7
\(\work\)}{Spring 2016 \#7 \textbackslash work}}\label{spring-2016-7-work}}
Let \(D = {\mathbb{Q}}[x]\) and let \(M\) be a
\({\mathbb{Q}}[x]{\hbox{-}}\)module such that
\begin{align*}
M \cong \frac{\mathbb{Q}[x]}{(x-1)^{3}} \oplus \frac{\mathbb{Q}[x]}{\left(x^{2}+1\right)^{3}} \oplus \frac{\mathbb{Q}[x]}{(x-1)\left(x^{2}+1\right)^{5}} \oplus \frac{\mathbb{Q}[x]}{(x+2)\left(x^{2}+1\right)^{2}}
.\end{align*}
Determine the elementary divisors and invariant factors of \(M\).
\hypertarget{spring-2020-7-work}{%
\subsection{\texorpdfstring{Spring 2020 \#7
\(\work\)}{Spring 2020 \#7 \textbackslash work}}\label{spring-2020-7-work}}
Let
\begin{align*}
A=\left[\begin{array}{ccc}
2 & 0 & 0 \\
4 & 6 & 1 \\
-16 & -16 & -2
\end{array}\right] \in M_{3}(\mathrm{C})
.\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Find the Jordan canonical form \(J\) of \(A\).
\item
Find an invertible matrix \(P\) such that \(P^{-1}A P = J\).
\item
Write down the minimal polynomial of \(A\).
\end{enumerate}
\begin{quote}
You should not need to compute \(P^{-1}\).
\end{quote}
\hypertarget{spring-2019-7-done}{%
\subsection{\texorpdfstring{Spring 2019 \#7
\(\done\)}{Spring 2019 \#7 \textbackslash done}}\label{spring-2019-7-done}}
Let \(p\) be a prime number. Let \(A\) be a \(p \times p\) matrix over a
field \(F\) with 1 in all entries except 0 on the main diagonal.
Determine the Jordan canonical form (JCF) of \(A\)
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
When \(F = {\mathbb{Q}}\),
\item
When \(F = {\mathbb{F}}_p\).
\end{enumerate}
\begin{quote}
Hint: In both cases, all eigenvalues lie in the ground field. In each
case find a matrix \(P\) such that \(P^{-1}AP\) is in JCF.
\end{quote}
\begin{strategy}
\envlist
\begin{itemize}
\tightlist
\item
Work with matrix of all ones instead.
\item
Eyeball eigenvectors.
\item
Coefficients in minimal polynomial: size of largest Jordan block
\item
Dimension of eigenspace: number of Jordan blocks
\item
We can always read off the \emph{characteristic} polynomial from the
spectrum.
\end{itemize}
\end{strategy}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
Todo
\end{itemize}
\end{concept}
\begin{solution}
\textbf{Proof of (a)}: Let \(A\) be the matrix in the question, and
\(B\) be the matrix containing 1's in every entry.
\begin{itemize}
\item
Noting that \(B = A+I\), we have
\begin{align*}
&B\mathbf{x} = \lambda \mathbf{x} \\
&\iff (A+I) \mathbf{x} = \lambda \mathbf{x} \\
&\iff A \mathbf{x} = (\lambda - 1) \mathbf{x}
,\end{align*}
so we will find the eigenvalues of \(B\) and subtract one from each.
\item
Note that
\(B\mathbf{v} = {\left[ {\sum v_i, \sum v_i, \cdots, \sum v_i} \right]}\),
i.e.~it has the effect of summing all of the entries of \(\mathbf{v}\)
and placing that sum in each component.
\item
We proceed by finding \(p\) eigenvectors and eigenvalues, since the
JCF and minimal polynomials will involve eigenvalues and the
transformation matrix will involve (generalized) eigenvectors.
\end{itemize}
\begin{claim}[1]
Each vector of the form
\(\mathbf{p}_i \coloneqq\mathbf{e}_1 - \mathbf{e}_{i+1} = {\left[ {1, 0, 0,\cdots, 0, -1, 0, \cdots, 0 } \right]}\)
for \(1 \leq i \leq p-1\) is an eigenvector of \(B\) with eigenvalue
\(\lambda_0 = 0\), and this gives \(p-1\) linearly independent vectors
spanning the eigenspace \(E_{\lambda_0}\).
\end{claim}
\begin{claim}[2]
\(\mathbf{v}_1 = {\left[ {1, 1, \cdots, 1} \right]}\) is an eigenvector
with eigenvalue \(\lambda_1 = p\).
\end{claim}
\begin{itemize}
\item
Using that the eigenvalues of \(A\) are \(\lambda_i - 1\) for
\(\lambda_i\) the above eigenvalues for \(B\),
\begin{align*}
\operatorname{Spec}(B) \coloneqq\left\{{(\lambda_i, m_i)}\right\} &= \left\{{(p, 1), (0, p-1)}\right\} \implies \chi_{B}(x) = (x-p)x^{p-1} \\
\implies \operatorname{Spec}(A) &= \left\{{(p-1,1), (-1, p-1) }\right\} \implies \chi_{A}(x) = (x- p+1)(x+1)^{p-1} \\
\end{align*}
\item
The dimensions of eigenspaces are preserved, thus
\begin{align*}
JCF_{\mathbb{Q}}(A)
= J_{p-1}^{1} \oplus (p-1)J_{-1}^1
=
\left[\begin{array}{r|r|r|r|r|r}
p-1 & 0 & 0 & \cdots & 0 & 0 \\
\hline
0& -1 & 0 & 0 & 0 & 0 \\ \hline
0& 0 & -1 & 0 & 0 & 0 \\ \hline
0& 0 & 0 & \ddots & \ddots & 0 \\ \hline
0& 0 & 0 & \cdots & -1 & 0 \\ \hline
0& 0 & 0 & \cdots & 0 & -1 \\
\end{array}\right]
.\end{align*}
\item
The matrix \(P\) such that \(A = PJP^{-1}\) will have columns the
bases of the generalized eigenspaces.
\item
In this case, the generalized eigenspaces are the usual eigenspaces,
so
\begin{align*}
P = [\mathbf{v}_1, \mathbf{p}_1, \cdots, \mathbf{p}_{p-1}] =
\left[\begin{array}{rrrrrr}
1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 0 & 0 & 0 & 0 \\
1 & 0 & -1 & 0 & 0 & 0 \\
1 & 0 & 0 & -1 & 0 & 0 \\
1 & \vdots & \vdots & \vdots & \ddots & \vdots\\
1 & 0 & 0 & 0 & 0 & -1 \\
\end{array}\right]
.\end{align*}
\end{itemize}
\begin{proof}[of claim 1]
\envlist
\begin{itemize}
\tightlist
\item
Compute
\begin{align*}B \mathbf{p}_i = {\left[ { 1 + (-1),\; 1 + (-1),\; \cdots,\; 1 + (-1)} \right]} = {\left[ {0, 0, \cdots, 0} \right]}\end{align*}
\item
So every \(\mathbf{p}_i \in \ker(B)\), so they are eigenvectors with
eigenvalue 0.
\item
Since the first component is fixed and we have \(p-1\) choices for
where to place a \(-1\), this yields \(p-1\) possibilities for
\(\mathbf{p}_i\)
\item
These are linearly independent since the \((p-1)\times (p-1)\) matrix
\({\left[ { \mathbf{p}_1^t, \cdots, \mathbf{p}_{p-1}^t} \right]}\)
satisfies
\begin{align*}
\operatorname{det}
\begin{bmatrix}
1 & 1 & 1 & \cdots & 1\\
-1 & 0 & 0 & \cdots & 0\\
0 & -1 & 0 & \cdots & 0\\
0 & 0 & -1 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & -1\\
\end{bmatrix}
&= (1) \cdot \operatorname{det}
\begin{bmatrix}
-1 & 0 & 0 & \cdots & 0\\
0 & -1 & 0 & \cdots & 0\\
0 & 0 & -1 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & -1\\
\end{bmatrix}
\\
&= (-1)^{p-2} \neq 0
.\end{align*}
\end{itemize}
where the first equality follows from expanding along the first row and
noting this is the first minor, and every other minor contains a row of
zeros.
\end{proof}
\begin{proof}[of claim 2]
\envlist
\begin{itemize}
\tightlist
\item
Compute
\begin{align*}B\mathbf{v}_1 = {\left[ {\sum_{i=1}^p 1, \sum_{i=1}^p 1, \cdots, \sum_{i=1}^p 1} \right]} = {\left[ {p, p, \cdots, p} \right]} = p {\left[ {1, 1, \cdots, 1} \right]} = p\mathbf{v}_1,\end{align*}
thus \(\lambda_1 = p\)
\item
\(\dim E_{\lambda_1} = 1\): eigenspaces for distinct eigenvalues intersect
trivially, so \(E_{\lambda_0} \oplus E_{\lambda_1} \leq F^p\) is a subspace and
\(p \geq \dim E_{\lambda_0} + \dim E_{\lambda_1} = p-1 + \dim E_{\lambda_1}\),
and \(E_{\lambda_1}\) is not zero dimensional since it contains \(\mathbf{v}_1\).
\end{itemize}
\end{proof}
\textbf{Proof of (b)}:
For \(F = {\mathbb{F}}_p\), all eigenvalues/vectors still lie in
\({\mathbb{F}}_p\), but now \(-1 = p-1\), making
\((x-(p-1))(x+1)^{p-1} = (x+1)(x+1)^{p-1}\), so
\(\chi_{A, {\mathbb{F}}_p}(x) = (x+1)^p\), and the Jordan blocks may
merge.
\begin{itemize}
\tightlist
\item
A computation shows that \((A+I)^2 = B^2 = pB = 0 \in M_p({\mathbb{F}}_p)\)
and \((A+I) \neq 0\), so \(\min_{A, {\mathbb{F}}_p}(x) = (x+1)^2\).
\begin{itemize}
\tightlist
\item
Thus the largest Jordan block corresponding to \(\lambda = -1\) is
of size 2
\end{itemize}
\item
The determinant \((-1)^{p-2}\) from Claim 1 is still nonzero in
\({\mathbb{F}}_p\), so the vectors \(\mathbf{e}_1 - \mathbf{e}_{i+1}\) remain
linearly independent; moreover \(A + I = B\) has rank 1 over \({\mathbb{F}}_p\),
so \(\dim E_{-1} = \dim\ker(A+I) = p-1\)
\begin{itemize}
\tightlist
\item
So there are \(p-1\) Jordan blocks for \(\lambda = -1\).
\end{itemize}
\end{itemize}
Summary:
\begin{align*}
\min_{A, {\mathbb{F}}_p}(x) &= (x+1)^2 \\
\chi_{A, {\mathbb{F}}_p}(x) &\equiv (x+1)^p \\
\dim E_{-1} &= p-1
.\end{align*}
Thus
\begin{align*}
JCF_{{\mathbb{F}}_p}(A)
= J_{-1}^{2} \oplus (p-2)J_{-1}^1
= \left[\begin{array}{rr|r|r|r|r}
-1 & 1 & 0 & \cdots & 0 & 0 \\
0& -1 & 0 & 0 & 0 & 0 \\
\hline
0& 0 & -1 & 0 & 0 & 0 \\ \hline
0& 0 & 0 & \ddots & \ddots & 0 \\ \hline
0& 0 & 0 & \cdots & -1 & 0 \\ \hline
0& 0 & 0 & \cdots & 0 & -1 \\
\end{array}\right]
.\end{align*}
To obtain a Jordan basis over \({\mathbb{F}}_p\), first note that the matrix
\(P = [\mathbf{v}_1, \mathbf{p}_1, \cdots , \mathbf{p}_{p-1}]\) from
part (a) is singular over \({\mathbb{F}}_p\), since
\begin{align*}
\mathbf{v}_1 + \mathbf{p}_1 + \mathbf{p}_2 + \cdots + \mathbf{p}_{p-2}
&= [p-1, 0, 0, \cdots, 0, 1] \\
&= [-1, 0,0,\cdots, 0, 1] \\
&= - \mathbf{p}_{p-1}
.\end{align*}
We still have a linearly independent set given by the first \(p-1\)
columns of \(P\), so we can extend this to a basis by finding one
linearly independent generalized eigenvector.
Solving \((A-\lambda I)\mathbf{x} = \mathbf{v}_1\) with \(\lambda = -1\) is our only option
(the others won't yield solutions). This amounts to solving
\(B\mathbf{x} = \mathbf{v}_1\), which imposes the condition
\(\sum x_i = 1\), so we can choose \(\mathbf{x} = [1, 0, \cdots, 0]\).
Thus
\begin{align*}
P = [\mathbf{v}_1, \mathbf{x}, \mathbf{p}_1, \cdots, \mathbf{p}_{p-2}] =
\left[\begin{array}{rrrrrr}
1 & 1 & 1 & 1 & 1 & 1 \\
1 & 0 & -1 & 0 & 0 & 0 \\
1 & 0 & 0 & -1 & 0 & 0 \\
1 & \vdots & \vdots & \vdots & \ddots & \vdots \\
1 & 0 & 0 & 0 & 0 & -1\\
1 & 0 & 0 & 0 & 0 & 0 \\
\end{array}\right]
.\end{align*}
\end{solution}
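For a concrete prime such as \(p = 5\), the facts used above can be sanity-checked
with SymPy (an illustration, not part of the solution): the characteristic
polynomial and diagonal Jordan form over \({\mathbb{Q}}\), and the congruences
\((A+I)^2 \equiv 0\), \(A + I \not\equiv 0\) modulo \(p\) that force a size-2
Jordan block over \({\mathbb{F}}_p\).

\begin{verbatim}
from sympy import Matrix, ones, eye, zeros, symbols

p = 5
x = symbols('x')

# 0 on the diagonal, 1 elsewhere; B = A + I is the all-ones matrix.
A = ones(p, p) - eye(p)

# Over Q: char poly (x - (p-1)) (x + 1)^(p-1) and A is diagonalizable.
assert A.charpoly(x).as_expr() == ((x - (p - 1)) * (x + 1)**(p - 1)).expand()
_, J = A.jordan_form()
assert J.is_diagonal()
assert sorted(J[i, i] for i in range(p)) == [-1] * (p - 1) + [p - 1]

# Over F_p: (A + I)^2 = pB vanishes while A + I does not, so the
# minimal polynomial is (x + 1)^2 and some Jordan block has size 2.
B = A + eye(p)
mod_p = lambda M: M.applyfunc(lambda e: e % p)
assert mod_p(B**2) == zeros(p, p)
assert mod_p(B) != zeros(p, p)
\end{verbatim}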
\hypertarget{spring-2018-4-work}{%
\subsection{\texorpdfstring{Spring 2018 \#4
\(\work\)}{Spring 2018 \#4 \textbackslash work}}\label{spring-2018-4-work}}
Let
\begin{align*}
A=\left[\begin{array}{lll}{0} & {1} & {-2} \\ {1} & {1} & {-3} \\ {1} & {2} & {-4}\end{array}\right] \in M_{3}(\mathbb{C})
\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Find the Jordan canonical form \(J\) of \(A\).
\item
Find an invertible matrix \(P\) such that \(P^{-1}AP = J\).
\end{enumerate}
\begin{quote}
You should not need to compute \(P^{-1}\).
\end{quote}
\hypertarget{spring-2017-6-work}{%
\subsection{\texorpdfstring{Spring 2017 \#6
\(\work\)}{Spring 2017 \#6 \textbackslash work}}\label{spring-2017-6-work}}
Let \(A\) be an \(n\times n\) matrix with all entries equal to \(0\)
except for the \(n-1\) entries just above the diagonal being equal to 2.
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
What is the Jordan canonical form of \(A\), viewed as a matrix in
\(M_n({\mathbb{C}})\)?
\item
Find a nonzero matrix \(P\in M_n({\mathbb{C}})\) such that
\(P^{-1}A P\) is in Jordan canonical form.
\end{enumerate}
\hypertarget{spring-2016-1-work}{%
\subsection{\texorpdfstring{Spring 2016 \#1
\(\work\)}{Spring 2016 \#1 \textbackslash work}}\label{spring-2016-1-work}}
Let
\begin{align*}
A=\left(\begin{array}{ccc}
-3 & 3 & -2 \\
-7 & 6 & -3 \\
1 & -1 & 2
\end{array}\right) \in M_{3}(\mathrm{C})
.\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Find the Jordan canonical form \(J\) of \(A\).
\item
Find an invertible matrix \(P\) such that \(P^{-1}A P = J\). You do
not need to compute \(P^{-1}\).
\end{enumerate}
\hypertarget{spring-2015-6-work}{%
\subsection{\texorpdfstring{Spring 2015 \#6
\(\work\)}{Spring 2015 \#6 \textbackslash work}}\label{spring-2015-6-work}}
Let \(F\) be a field and \(n\) a positive integer, and consider
\begin{align*}
A=\left[\begin{array}{ccc}
1 & \dots & 1 \\
& \ddots & \\
1 & \dots & 1
\end{array}\right] \in M_{n}(F)
.\end{align*}
Show that \(A\) has a Jordan normal form over \(F\) and find it.
\begin{quote}
Hint: treat the cases \(n\cdot 1 \neq 0\) in \(F\) and \(n\cdot 1 = 0\)
in \(F\) separately.
\end{quote}
\hypertarget{fall-2014-5-work}{%
\subsection{\texorpdfstring{Fall 2014 \#5
\(\work\)}{Fall 2014 \#5 \textbackslash work}}\label{fall-2014-5-work}}
Let \(T\) be a \(5\times 5\) complex matrix with characteristic
polynomial \(\chi(x) = (x-3)^5\) and minimal polynomial
\(m(x) = (x-3)^2\). Determine all possible Jordan forms of \(T\).
\hypertarget{spring-2013-5-work}{%
\subsection{\texorpdfstring{Spring 2013 \#5
\(\work\)}{Spring 2013 \#5 \textbackslash work}}\label{spring-2013-5-work}}
Let \(T: V\to V\) be a linear map from a 5-dimensional
\({\mathbb{C}}{\hbox{-}}\)vector space to itself and suppose
\(f(T) = 0\) where \(f(x) = x^2 + 2x + 1\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Show that there does not exist any vector \(v\in V\) such that
\(Tv = v\), but there \emph{does} exist a vector \(w\in V\) such that
\(T^2 w= w\).
\item
Give all of the possible Jordan canonical forms of \(T\).
\end{enumerate}
\hypertarget{spring-2021-1-done}{%
\subsection{\texorpdfstring{Spring 2021 \#1
\(\done\)}{Spring 2021 \#1 \textbackslash done}}\label{spring-2021-1-done}}
Let
\begin{align*}
A \coloneqq
\begin{bmatrix}
4 & 1 & -1 \\
-6 & -1 & 2 \\
2 & 1 & 1
\end{bmatrix}
\in \operatorname{Mat}(3\times 3, {\mathbb{C}})
.\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Find the Jordan canonical form \(J\) of \(A\).
\item
Find an invertible matrix \(P\) such that \(J = P ^{-1}A P\).
\item
Write down the minimal polynomial of \(A\).
\end{enumerate}
\begin{quote}
You should not need to compute \(P^{-1}\)
\end{quote}
\begin{concept}
\envlist
\begin{itemize}
\tightlist
\item
\(\chi_A(t) = t^n - {\mathrm{tr}}\qty{\bigwedge\nolimits^1 A}t^{n-1} + {\mathrm{tr}}\qty{\bigwedge\nolimits^2 A}t^{n-2} - \cdots \pm \operatorname{det}(A)\)
\item
Finding generalized eigenvectors: let \(B = A-\lambda I\), get
eigenvector \(v\), solve \(Bw_1 = v, Bw_2 = w_1, \cdots\) to get a
Jordan block. Repeat with any other usual eigenvectors.
\item
Convention: construct Jordan blocks in decreasing order of magnitude
of eigenvalues.
\item
Polynomial exponent data:
\begin{itemize}
\tightlist
\item
Minimal polynomial exponents: sizes of \textbf{largest} Jordan
blocks.
\item
Characteristic polynomial exponents: \textbf{sum of sizes} of Jordan
blocks, i.e.~how many times \(\lambda\) is on the diagonal of
\(\operatorname{JCF}(A)\).
\end{itemize}
\end{itemize}
\end{concept}
\begin{solution}
\envlist
\begin{proof}[parts a and b]
\envlist
\begin{itemize}
\tightlist
\item
Write \(\chi_A(t) = t^3 - T_1 t^2 + T_2 t - T_3\) where
\(T_i \coloneqq{\mathrm{tr}}\qty{\bigwedge\nolimits^i A}\):
\begin{itemize}
\tightlist
\item
\(T_1 = {\mathrm{tr}}(A) = 4-1+1=4\).
\item
\(T_2 = (-1-2) + (4+2) + (-4+6) = 5\).
\item
\(T_3 = \operatorname{det}(A) = 4(-1-2) -1(-10) + (-1)(-6+2) = 2\).
\end{itemize}
\item
So \(\chi_A(t) = t^3 - 4t^2 + 5t-2\).
\item
Try the rational roots test: \(r \in \left\{{\pm 1, \pm 2}\right\}\), and check
that \(2\) is a root.
\item
By polynomial long division,
\(\chi_A(t) / (t-2) = t^2-2t+1 = (t-1)^2\).
\item
So the eigenvalues are \(\lambda = 2, 1\).
\item
\(\lambda = 2\):
\begin{itemize}
\tightlist
\item
Set \(U\coloneqq A-\lambda I\), then find \(\operatorname{RREF}(U)\)
to compute its kernel:
\begin{align*}
U \coloneqq
\begin{bmatrix}
2 & 1 & -1
\\
-6 & -3 & 2
\\
2 & 1 & -1
\end{bmatrix}
\leadsto
\begin{bmatrix}
2 & 1 & 0
\\
0 & 0 & 1
\\
0 & 0 & 0
\end{bmatrix}
,\end{align*}
which yields \(v_1 = [1,-2,0]\).
\end{itemize}
\item
\(\lambda = 1\):
\begin{itemize}
\item
Similarly,
\begin{align*}
U \coloneqq
\begin{bmatrix}
3 & 1 & -1 \\
-6 & -2 & 2 \\
2 & 1 & 0
\end{bmatrix}
\leadsto
\begin{bmatrix}
1 & 0 & -1
\\
0 & 1 & 2
\\
0 & 0 & 0
\end{bmatrix}
,\end{align*}
which yields \(v_2 = [1,-2,1]\).
\item
Solve \(Uw = v_2\):
\begin{align*}
\begin{bmatrix}
3 & 1 & -1 & 1 \\
-6 & -2 & 2 & -2 \\
2 & 1 & 0 & 1
\end{bmatrix}
\leadsto
\begin{bmatrix}
1 & 0 & -1 & 0 \\
0 & 1 & 2 & 1 \\
0 & 0 & 0 & 0
\end{bmatrix}
,\end{align*}
so take \(v_3 = [0,1,0]\).
\end{itemize}
\item
Putting things together:
\begin{align*}
A &= P J P^{-1} \text{ where } \\
J = J_1(\lambda = 2) \oplus J_2(\lambda = 1)
&=
\begin{bmatrix}
2 & 0 & 0
\\
0 & 1 & 1
\\
0 & 0 & 1
\end{bmatrix} \\
P = [v_1, v_2, v_3]
&=
\begin{bmatrix}
1 & 1 & 0
\\
-2 & -2 & 1
\\
0 & 1 & 0
\end{bmatrix}
.\end{align*}
\end{itemize}
\end{proof}
\begin{proof}[part c]
\envlist
\begin{itemize}
\tightlist
\item
Write \(\min_A(t) = (t-2)(t-1)^{\ell_1}\), then since \(\min_A(t)\)
divides \(\chi_A(t)\) either \(\ell_1 = 1, 2\).
\item
\(\ell_1\) is the size of the \textbf{largest} block corresponding to
\(\lambda = 1\), which is size 2, so \(\ell_1=2\).
\item
Thus
\begin{align*}
\min_A(t) = (t-2)(t-1)^2
.\end{align*}
\end{itemize}
\end{proof}
\end{solution}
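The hand computation above can be verified directly with SymPy (a sanity check, not
part of the solution); \(J\) and \(P\) below are the matrices found in parts (a)
and (b).

\begin{verbatim}
from sympy import Matrix, eye, zeros, symbols

t = symbols('t')

A = Matrix([[ 4,  1, -1],
            [-6, -1,  2],
            [ 2,  1,  1]])

# Characteristic polynomial as computed via the T_i above.
assert A.charpoly(t).as_expr() == t**3 - 4*t**2 + 5*t - 2

# The Jordan form and transition matrix satisfy A P = P J,
# i.e. J = P^{-1} A P with P invertible.
J = Matrix([[2, 0, 0],
            [0, 1, 1],
            [0, 0, 1]])
P = Matrix([[ 1,  1, 0],
            [-2, -2, 1],
            [ 0,  1, 0]])
assert P.det() != 0
assert A * P == P * J

# Minimal polynomial (t - 2)(t - 1)^2: it annihilates A, while the
# squarefree candidate (t - 2)(t - 1) does not.
assert (A - 2*eye(3)) * (A - eye(3))**2 == zeros(3, 3)
assert (A - 2*eye(3)) * (A - eye(3)) != zeros(3, 3)
\end{verbatim}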
\hypertarget{fall-2020-5-work}{%
\subsection{\texorpdfstring{Fall 2020 \#5
\(\work\)}{Fall 2020 \#5 \textbackslash work}}\label{fall-2020-5-work}}
Consider the following matrix:
\begin{align*}
B \coloneqq
\begin{bmatrix}
1 & 3 & 3
\\
2 & 2 & 3
\\
-1 & -2 & -2
\end{bmatrix}
.\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Find the minimal polynomial of \(B\).
\item
Find a \(3\times 3\) matrix \(J\) in Jordan canonical form such that
\(B = PJP^{-1}\) where \(P\) is an invertible matrix.
\end{enumerate}
\hypertarget{extra-problems}{%
\section{Extra Problems}\label{extra-problems}}
\begin{quote}
Many many fundamental problems here:
\url{https://math.ucr.edu/~mpierce/teaching/qual-algebra/fun/groups/}
\end{quote}
\hypertarget{linear-algebra}{%
\subsection{Linear Algebra}\label{linear-algebra}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
For a division ring \(D\), let \(V_{i}\) be a finite dimensional
vector space over \(D\) for \(i \in\{1, \ldots, k\}\). Suppose the
sequence
\begin{align*}
0 \longrightarrow V_{1} \longrightarrow V_{2} \longrightarrow \cdots V_{k} \longrightarrow 0
\end{align*}
is exact. Prove that
\(\sum_{i=1}^{k}(-1)^{i} \operatorname{dim}_{D} V_{i}=0\).
\item
Prove that if \(A\) and \(B\) are invertible matrices over a field
\(\boldsymbol{k}\), then \(A+\lambda B\) is invertible for all but
finitely many \(\lambda \in \boldsymbol{k}\).
\item
For the ring of \(n \times n\) matrices over a commutative unital ring
\(R\), which we'll denote \(\operatorname{Mat}_{n}(R)\), recall the
definition of the determinant map det:
\(\operatorname{Mat}_{n}(R) \rightarrow R\). For
\(A \in \operatorname{Mat}_{n}(R)\) also recall the definition of the
classical adjoint \(A^{a}\) of \(A\). Prove that:
\end{enumerate}
\begin{itemize}
\tightlist
\item
\(\operatorname{det}\left(A^{a}\right)=\operatorname{det}(A)^{n-1}\)
\item
\(\left(A^{a}\right)^{a}=\operatorname{det}(A)^{n-2} A\)
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
If \(R\) is an integral domain and \(A\) is an \(n \times n\) matrix
over \(R\), prove that if a system of linear equations \(A x=0\) has a
nonzero solution then \(\operatorname{det} A=0\). Is the converse
true? What if we drop the assumption that \(R\) is an integral domain?
\item
What is the companion matrix \(M\) of the polynomial \(f=x^{2}-x+2\)
over \(C\) ? Prove that \(f\) is the minimal polynomial of \(M\).
\item
Suppose that \(\phi\) and \(\psi\) are commuting endomorphisms of a
finite dimensional vector space \(E\) over a field \(\boldsymbol{k}\),
so \(\phi \psi=\psi \phi\).
\end{enumerate}
\begin{itemize}
\tightlist
\item
Prove that if \(k\) is algebraically closed, then \(\phi\) and
\(\psi\) have a common eigenvector.
\item
Prove that if \(E\) has a basis consisting of eigenvectors of \(\phi\)
and \(E\) has a basis consisting of eigenvectors of \(\psi\), then
\(E\) has a basis consisting of vectors that are eigenvectors for both
\(\phi\) and \(\psi\) simultaneously.
\end{itemize}
\hypertarget{galois-theory-1}{%
\subsection{Galois Theory}\label{galois-theory-1}}
\begin{quote}
Taken from here:
\url{https://math.ucr.edu/~mpierce/teaching/qual-algebra/fun/galois/}
\end{quote}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Suppose that for an extension field \(F\) over \(K\) and for
\(a \in F\), we have that \(b \in F\) is algebraic over \(K(a)\) but
transcendental over \(K\). Prove that \(a\) is algebraic over
\(K(b)\).
\item
Suppose that for a field \(F / K\) that \(a \in F\) is algebraic and
has odd degree over \(K\). Prove that \(a^{2}\) is also algebraic and
has odd degree over \(K\), and furthermore that
\(K(a)=K\left(a^{2}\right)\)
\item
For a polynomial \(f \in K[x]\), prove that if \(r \in F\) is a root
of \(f\) then for any \(\sigma \in \mathbf{A u t}_{K} F, \sigma(r)\)
is also a root of \(f\)
\item
Prove that as extensions of \(\boldsymbol{Q}, \boldsymbol{Q}(x)\) is
Galois over \(\boldsymbol{Q}\left(x^{2}\right)\) but not over
\(\boldsymbol{Q}\left(x^{3}\right)\).
\item
  If \(F\) is \underline{\hspace{2.5em}} over \(E\), and \(E\) is \underline{\hspace{2.5em}} over \(K\), is \(F\)
  necessarily \underline{\hspace{2.5em}} over \(K\)? Answer this question for each of the words
  ``algebraic,'' ``normal,'' and ``separable'' in the blanks.
\item
  If \(F\) is \underline{\hspace{2.5em}} over \(K\), and \(E\) is an intermediate extension of
  \(F\) over \(K\), is \(F\) necessarily \underline{\hspace{2.5em}} over \(E\)? Answer this
  question for each of the words ``algebraic,'' ``normal,'' and
  ``separable'' in the blanks.
\item
If \(F\) is some (not necessarily Galois) field extension over \(K\)
such that \([F: K]=6\) and Aut \(_{K} F \simeq S_{3}\), then \(F\) is
the splitting field of an irreducible cubic over \(K[x]\).
\item
Recall the definition of the join of two subgroups \(H \vee G\) (or
\(H+G\) ). For \(F\) a finite dimensional Galois extension over \(K\)
and let \(A\) and \(B\) be intermediate extensions. Prove that
\end{enumerate}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\tightlist
\item
\(\operatorname{Aut}_{A B} F=\mathrm{Aut}_{A} F \cap \mathrm{Aut}_{B} F\)
\item
  \(\operatorname{Aut}_{A \cap B} F=\operatorname{Aut}_{A} F \vee \operatorname{Aut}_{B} F\)
\end{enumerate}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{8}
\tightlist
\item
For a field \(K\) take \(f \in K[x]\) and let
\(n=\operatorname{deg} f\). Prove that for a splitting field \(F\) of
\(f\) over \(K\) that \([F: K] \leq n !\). Furthermore prove that
\([F: K]\) divides \(n !\).
\item
Let \(F\) be the splitting field of \(f \in K[x]\) over \(K\). Prove
that if \(g \in K[x]\) is irreducible and has a root in \(F\), then
\(g\) splits into linear factors over \(F\).
\item
Prove that a finite field cannot be algebraically closed.
\item
  For \(u=\sqrt{2+\sqrt{2}}\), what is the Galois group of
  \(\boldsymbol{Q}(u)\) over \(\boldsymbol{Q}\)? What are the
  intermediate fields of the extension \(\boldsymbol{Q}(u)\) over
  \(\boldsymbol{Q}\)?
\item
Characterize the splitting field and all intermediate fields of the
polynomial
\(\left(x^{2}-2\right)\left(x^{2}-3\right)\left(x^{2}-5\right)\) over
\(Q\). Using this characterization, find a primitive element of the
splitting field.
\item
Characterize the splitting field and all intermediate fields of the
polynomial \(x^{4}-3\) over \(Q\)
\item
Consider the polynomial \(f=x^{3}-x+1\) in \(\boldsymbol{F}_{3}[x]\).
Prove that \(f\) is irreducible. Calculate the degree of the splitting
field of \(f\) over \(\boldsymbol{F}_{3}\) and the cardinality of the
splitting field of \(f\).
\item
  Give an example of a finite extension of fields that has infinitely
many intermediate fields.
\item
Let \(u=\sqrt{3+\sqrt{2}}\). Is \(\boldsymbol{Q}(u)\) a splitting
field of \(u\) over \(\boldsymbol{Q}\) ? (MathSE)
\item
Prove that the multiplicative group of units of a finite field must be
cyclic, and so is generated by a single element.
\item
Prove that \(\boldsymbol{F}_{p^{n}}\) is the splitting field of
\(x^{p^{n}}-x\) over \(\boldsymbol{F}_{p}\).
\item
Prove that for any positive integer \(n\) there is an irreducible
polynomial of degree \(n\) over \(\boldsymbol{F}_{p}\)
\item
Recall the definition of a perfect field. Give an example of an
imperfect field, and the prove that every finite field is perfect.
\item
For \(n>2\) let \(\zeta_{n}\) denote a primitive \(n\) th root of
unity over \(Q\). Prove that
  \begin{align*}
  \left[\boldsymbol{Q}\left(\zeta_{n}+\zeta_{n}^{-1}\right): \boldsymbol{Q}\right]=\frac{1}{2} \varphi(n)
  \end{align*}
where \(\varphi\) is Euler's totient function.
\item
  Suppose that a field \(K\) with characteristic not equal to 2 contains
  a primitive \(n\)th root of unity for some odd integer \(n\). Prove
  that \(K\) must also contain a primitive \(2n\)th root of unity.
\item
Prove that the Galois group of the polynomial \(x^{n}-1\) over \(Q\)
is abelian.
\end{enumerate}
\hypertarget{fall-2021}{%
\section{Fall 2021}\label{fall-2021}}
\hypertarget{fall-2021-1}{%
\subsection{Fall 2021 \#1}\label{fall-2021-1}}
Let \(G\) be a group. An automorphism \(\phi: G \rightarrow G\) is
called \emph{inner} if the automorphism is given by conjugation by a
fixed group element \(g\), i.e.,
\begin{align*}
\phi=\phi_{g}: h \mapsto g h g^{-1} .
\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that the set of inner automorphisms forms a normal subgroup of
the group of all automorphisms of \(G\).
\item
Give an example of a finite group with an automorphism which is not
inner.
\item
Denote by \(S_{n}\) the group of permutations of the set
\(\{1, \ldots, n\}\). Suppose that \(g \in S_{n}\) sends \(i\) to
\(g_{i}\) for \(i=1, \ldots, n .\) Let \((a, b)\) denote as usual the
cycle notation for the transposition which permutes \(a\) and \(b\).
For \(i \in\{1, \ldots, n-1\}\), compute \(\phi_{g}((i, i+1))\).
\item
Suppose that an automorphism
\(\phi \in \operatorname{Aut}\left(S_{n}\right)\) preserves cycle
type, i.e., that for every element \(s\) of \(S_{n}, s\) and
\(\phi(s)\) have the same cycle type. Show that \(\phi\) is inner.
\end{enumerate}
\begin{quote}
Hint: Consider the images of generators
\(\phi((1,2)), \phi((2,3)), \cdots, \phi((n-1, n))\).
\end{quote}
\hypertarget{fall-2021-2}{%
\subsection{Fall 2021 \#2}\label{fall-2021-2}}
Give generators and relations for the non-commutative group \(G\) of
order 63 containing an element of order \(9\).
\hypertarget{fall-2021-3}{%
\subsection{Fall 2021 \#3}\label{fall-2021-3}}
What is the Jordan normal form over \(\mathbb{C}\) of a \(7 \times 7\)
matrix \(A\) which satisfies all of the following conditions:
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
\(A\) has real coefficients,
\item
\(\mathrm{rk} A=5\),
\item
\(\mathrm{rk} A^{2}=4\),
\item
  \(\mathrm{rk}(A - I)=6\),
\item
  \(\mathrm{rk}(A^{3}-I)=4\),
\item
  \(\operatorname{tr} A=1\)?
\end{enumerate}
\hypertarget{fall-2021-4}{%
\subsection{Fall 2021 \#4}\label{fall-2021-4}}
Recall that for a given positive integer \(n\), the cyclotomic field
\(\mathbb{Q}\left(\zeta_{n}\right)\) is generated by a primitive
\(n\)-th root of unity \(\zeta_{n}\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
  What is the degree of \(\mathbb{Q}\left(\zeta_{n}\right)\) over \(\mathbb{Q}\)?
\item
Define what it means for a finite field extension \(L / K\) to be
Galois, and prove that the cyclotomic field
  \(\mathbb{Q}\left(\zeta_{n}\right)\) is Galois over \(\mathbb{Q}\).
\item
What is the Galois group of \(\mathbb{Q}\left(\zeta_{n}\right)\) over
\(\mathbb{Q}\) ?
\item
How many subfields of \(\mathbb{Q}\left(\zeta_{2021}\right)\) have
  degree 2 over \(\mathbb{Q}\)? Note that \(2021=43 \cdot 47\).
\end{enumerate}
\hypertarget{fall-2021-5}{%
\subsection{Fall 2021 \#5}\label{fall-2021-5}}
Let \(R\) be an algebra over \(\mathbb{C}\) which is finite-dimensional
as a \({\mathbb{C}}{\hbox{-}}\)vector space. Recall that an ideal \(I\)
of \(R\) can be considered as a \({\mathbb{C}}{\hbox{-}}\)subvector
space of \(R\). We define the codimension of \(I\) in \(R\) to be
\begin{align*}
\operatorname{codim}_R I \coloneqq
\dim_{{\mathbb{C}}} R - \dim_{{\mathbb{C}}} I
,\end{align*}
the difference between the dimension of \(R\) as a
\(\mathbb{C}{\hbox{-}}\)vector space, \(\dim_{{\mathbb{C}}} R\), and the
dimension of \(I\) as a \({\mathbb{C}}{\hbox{-}}\)vector space,
\(\dim_{\mathbb{C}}I\).
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
  Show that any maximal ideal \(m \subset R\) has codimension 1.
\item
  Suppose that \(\dim_{{\mathbb{C}}} R=2\). Show that there exists a
surjective homomorphism of \({\mathbb{C}}{\hbox{-}}\)algebras from the
polynomial ring \({\mathbb{C}}[t]\) to \(R\).
\item
Classify such algebras \(R\) for which \(\dim_{{\mathbb{C}}} R=2\),
and list their maximal ideals.
\end{enumerate}
\hypertarget{fall-2021-6}{%
\subsection{Fall 2021 \#6}\label{fall-2021-6}}
Let \(R\) be a commutative ring with unit and let \(M\) be an
\(R\)-module. Define the annihilator of \(M\) to be
\begin{align*}
\operatorname{Ann}(M):=\{r \in R \mathrel{\Big|}r \cdot m=0 \text { for all } m \in M\}
\end{align*}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
Prove that \(\operatorname{Ann}(M)\) is an ideal in \(R\).
\item
Conversely, prove that every ideal in \(R\) is the annihilator of some
\(R\)-module.
\item
Give an example of a module \(M\) over a ring \(R\) such that each
element \(m \in M\) has a nontrivial annihilator
\(\operatorname{Ann}(m):=\{r \in R \mathrel{\Big|}r \cdot m=0\}\), but
\(\operatorname{Ann}(M)=\{0\}\)
\end{enumerate}
\printbibliography[title=Bibliography]
\end{document}
| {
"alphanum_fraction": 0.6272444723,
"avg_line_length": 29.4193897638,
"ext": "tex",
"hexsha": "8440124d27320ed4dea4c6c4947b2c78411a890a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1c76b4d40a244bb4a623707445fc059056489bb4",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "dzackgarza/Qual-Review-and-Solutions",
"max_forks_repo_path": "Algebra/UGA Question (with solutions)/UGA_Algebra_Qual_Solutions.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1c76b4d40a244bb4a623707445fc059056489bb4",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "dzackgarza/Qual-Review-and-Solutions",
"max_issues_repo_path": "Algebra/UGA Question (with solutions)/UGA_Algebra_Qual_Solutions.tex",
"max_line_length": 867,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "1c76b4d40a244bb4a623707445fc059056489bb4",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "dzackgarza/Qual-Review-and-Solutions",
"max_stars_repo_path": "Algebra/UGA Question (with solutions)/UGA_Algebra_Qual_Solutions.tex",
"max_stars_repo_stars_event_max_datetime": "2021-06-23T08:12:08.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-06-23T08:12:08.000Z",
"num_tokens": 116225,
"size": 298901
} |
\documentclass[040-assessment.tex]{subfiles}
\begin{document}
This was a pilot study looking at different kinds of assessment questions
and how they can be used to refine workshop content in a backward design approach.
Some of the code submissions suggested that not all students utilized all parts of the
coding platform,
so use of the auto-grader could not be assumed in treatment arms 3 and 4.
The analysis was run with all 4 treatment groups, and with just 2 groups comparing
the blank question with the faded question.
Even with the low sample size from the study,
there are still findings that are applicable to data science instructors.
\subsection{Formative Assessments Engage Students In Remote Workshops}
The workshops were given in an online setting via Zoom.
The observation from the instructor of the workshop was that there was almost zero
interaction of any kind during the workshop.
The vast majority of chat messages came from the instructor posting the
relevant links for each part of the workshop.
Very few questions or discussions occurred in the Zoom chat.
More participants attempted the exercises during the workshop than
posted questions or comments in the Zoom meeting room chat,
and a surprisingly high number of students completed the exercises.
The amount of attrition was less than expected,
especially when comparing it to attrition from workshop registration to workshop attendance.
This finding suggests that even without grades as an incentive,
participants who voluntarily opted in to the workshop
were engaged with the materials, even in an online Zoom setting.
\subsection{Give Learners Time to Practice and Learn Asynchronously}
Results from Figure \ref{fig:time-to-complete-combined-treatments} suggest
how much more time it takes a novice learner to complete exercises compared to an expert.
Learners almost took the full 5-minutes for the formative assessment questions and
almost the full 15-minutes for the summative assessment question.
Using a conservative estimate of the time an instructor needs to go over the formative assessment solutions,
learners take about 4 times as long to complete formative assessment questions,
and almost 10 times as long to complete the summative assessment question.
During 1 hour of instruction, this means about 15 minutes would be needed for formative assessments,
leaving about 45 minutes to complete the main teaching materials.
In a workshop setting over multiple hours or sessions, lessons can be balanced across other parts of the workshop.
However, in individual workshop settings, additional time for setup would need to be
considered for every lesson.
This suggests that additional worked-out examples curated as formative assessment questions
should be provided to learners
for asynchronous supplemental learning outside the main instructional period.
\subsection{Conclusion}
Although this was a pilot study, the results were able to show how pre-requisite knowledge plays a
role in learning new skills.
This study also showed how much more time learners need to answer formative and summative assessment questions compared to what an instructor imagines.
Both of these findings translate to live teaching sessions where prior knowledge can affect the learner's ability to pick up new information,
and planning how much content can be covered during an instructional period.
This study did not have a treatment arm that did not use any formative assessment,
but the participation rates from this study did hint that having formative assessments forced learners to be actively engaged with
the materials.
Using faded examples can reduce the amount of time spent on formative assessment questions when compared with a blank question box,
but it is possible that faded examples remove too much of the cognitive load involved in solving a problem from scratch.
This finding suggests that multiple types of formative assessment questions should be used in practice to balance
teaching time, student engagement, and student cognitive load.
\end{document}
| {
"alphanum_fraction": 0.7879069767,
"avg_line_length": 62.3188405797,
"ext": "tex",
"hexsha": "03e1a96ffe88fc90222f4e40da50cb1ff52e1f52",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1e9df7198b28bc5f81885a94cc27b114fae94817",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "chendaniely/dissertation-edt",
"max_forks_repo_path": "040-assessment/040-040-discussion.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1e9df7198b28bc5f81885a94cc27b114fae94817",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "chendaniely/dissertation-edt",
"max_issues_repo_path": "040-assessment/040-040-discussion.tex",
"max_line_length": 158,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1e9df7198b28bc5f81885a94cc27b114fae94817",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "chendaniely/dissertation-edt",
"max_stars_repo_path": "040-assessment/040-040-discussion.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 821,
"size": 4300
} |
\documentclass{article}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{tikz}
\usepackage[all]{xy}
\usetikzlibrary{positioning,chains,fit,shapes,calc}
\begin{document}
\title{Homework 8}
\author{Josh Cai}
\maketitle
\section*{Section 10.4}
\textbf{2a)} $f(1) = -6, f(2) = 12, f(3) = -24, f(4) = 48, f(5) = -96$
\noindent\textbf{2b)} $f(1) = 16, f(2) = 55, f(3) = 172, f(4) = 523, f(5) = 1576$
\noindent\textbf{2c)} $f(1) = 1, f(2) = -3, f(3) = 13, f(4) = 141, f(5) = 19597$
\\\\\noindent\textbf{4a)} $f(2) = 0, f(3) = -1, f(4) = -1, f(5) = 0$
\noindent\textbf{4b)} $f(2) = 1, f(3) = 1, f(4) = 1, f(5) = 1$
\noindent\textbf{4c)} $f(2) = 2, f(3) = 5, f(4) = 33, f(5) = 1214$
\\\\\noindent\textbf{6b)} $f(n) = 0$ if $n \equiv 1 \mod{3}$ and $f(n) = 2^{\lfloor (n+1)/3 \rfloor}$ otherwise
\\Basis step: $f(0) = 1, f(1) = 0, f(2) = 2$
\\Recursive step: $f(n) = 2f(n-3) = 2*0 = 0$ if $n \equiv 1 \mod{3}$
\\$f(n) = 2f(n-3) = 2*2^{\lfloor (n-2)/3 \rfloor} = 2^{\lfloor (n-2)/3 \rfloor+1} = 2^{\lfloor (n+1)/3 \rfloor}$ otherwise
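\\For example, $f(5) = 2f(2) = 4 = 2^{\lfloor 6/3 \rfloor}$, while $f(4) = 2f(1) = 0$ since $4 \equiv 1 \mod{3}$, matching the closed form.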
\\\\\noindent\textbf{8a)} $f(1) = 2, f(n+1) = f(n)+4$
\noindent\textbf{8b)} $f(1) = 2, f(n+1) = f(n)+2(-1)^{n+1}$
\noindent\textbf{8c)} $f(1) = 2, f(n+1) = f(n)(n+2)/(n)$
\noindent\textbf{8d)} $f(1) = 1, f(n+1) = f(n)+2n+1$
\\\\\noindent\textbf{10)} $f_m(0)=m, f_m(n+1) = f_m(n)+1$
\\\\\noindent\textbf{24a)} $1\in S; n\in S \Rightarrow n-2\in S \wedge n+2 \in S$
\\\\\noindent\textbf{24b)} $1\in S; n\in S \Rightarrow 3n\in S$
\\\\\noindent\textbf{24c)} $x\in S \wedge 1 \in S; a\in S \wedge b\in S \Rightarrow a+b\in S \wedge a-b\in S \wedge ab\in S$
\\\\\noindent\textbf{32a)} $ones(\lambda) = 0; ones(s1) = ones(s)+1; ones(s0)=ones(s)$
\\\\\noindent\textbf{32b)} Basis step: $ones(x\lambda)=ones(x)=ones(x)+0=ones(x)+ones(\lambda)$, so $P(\lambda)$ is true.
\\Recursive step: Assume $P(y)$ true, need to show $ones(xy1)=ones(x)+ones(y1)$ and $ones(xy0)=ones(x)+ones(y0)$. Since $ones(xy1) = ones(xy)+1$, $ones(y1)=ones(y)+1$, and $ones(xy)=ones(x)+ones(y)$ by assumption, $ones(xy1)=ones(xy)+1=ones(x)+ones(y)+1=ones(x)+ones(y1)$. Since $ones(xy0) = ones(xy)$, $ones(y0)=ones(y)$, and $ones(xy)=ones(x)+ones(y)$ by assumption, $ones(xy0)=ones(xy)=ones(x)+ones(y)=ones(x)+ones(y0)$.
\\\\\noindent\textbf{38)} $1\in S \wedge 0 \in S; s\in S \Rightarrow 1s1\in S \wedge 0s0\in S$
\\\\\noindent\textbf{40)} $0\in S; s\in S \Rightarrow 0s,s0,s10,s01,1s0,10s,0s1,01s\in S$
\end{document} | {
"alphanum_fraction": 0.5717791411,
"avg_line_length": 38.203125,
"ext": "tex",
"hexsha": "90d3c61bbf1f233cf188c9f9e564b561a66dcfb6",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f896f4d54aca2d6e8c7354f0dbd1c898f21e1f82",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "joshcai/math-hw",
"max_forks_repo_path": "discrete_math/hw9.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f896f4d54aca2d6e8c7354f0dbd1c898f21e1f82",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "joshcai/math-hw",
"max_issues_repo_path": "discrete_math/hw9.tex",
"max_line_length": 424,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "f896f4d54aca2d6e8c7354f0dbd1c898f21e1f82",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "joshcai/math-hw",
"max_stars_repo_path": "discrete_math/hw9.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1207,
"size": 2445
} |
\chapter{Index Creation}
\section{BUET}
Bangladesh University of Engineering and Technology, abbreviated as
BUET\index{BUET}, is one of the most prestigious institutions for
higher studies in the country. About 5500 students are pursuing
undergraduate\index{BUET!undergraduate} and
postgraduate\index{BUET!postgraduate} studies in engineering,
architecture, planning and science in this institution. At present,
BUET has sixteen teaching departments under five faculties and it has
three institutes. Every year the intake of undergraduate students is
around 900, while the intake of graduate students in Master's and PhD
programs is around 1000. A total of about five hundred teachers are
teaching in these departments and institutes. There are additional
teaching posts like Dr.\ Rashid Professor, Professor Emeritus and
Supernumerary Professors.
\section{Campus}
The BUET campus is in the heart of Dhaka\index{Dhaka} --- the capital
city of Bangladesh. It has a compact campus with halls of residence
within walking distances of the academic buildings. The physical
expansion of the University over the last three decades has been
impressive with construction of new academic buildings,
auditorium\index{BUET!auditorium} complex, halls of residence, etc.
\section{History}\index{BUET!History}
BUET is the oldest institution for the study of Engineering and
Architecture in Bangladesh. The history of this institution dates back
to the days of Dhaka Survey School which was established at
Nalgola\index{Nalgola}, in Old Dhaka in 1876 to train Surveyors for
the then Government of Bengal of British India. As the years passed,
the Survey School became the Ahsanullah School of
Engineering\index{Ahsanullah School of Engineering} offering
three-year diploma courses in Civil, Electrical and Mechanical
Engineering. In recognition of the generous financial contribution
from the then Nawab of Dhaka, it was named after his father Khawja
Ahsanullah. It moved to its present premises in 1912. In 1947, the
School was upgraded to Ahsanullah Engineering College as a Faculty of
Engineering under the University of Dhaka, offering four-year
bachelor’s courses in Civil, Electrical, Mechanical, Chemical and
Metallurgical Engineering. In order to create facilities for
postgraduate studies and research, Ahsanullah Engineering College was
upgraded to the status of a University in 1962 and was named East
Pakistan University of Engineering and Technology. After the War of
Liberation in 1971\index{1971|see {War of Liberation}}\index{War of
Liberation}, Bangladesh became an independent state and the
university was renamed as the Bangladesh University of Engineering and
Technology.
\section{Students}
Till today, it has produced around 25,000 graduates in different
branches of engineering and architecture, and has established a good
reputation all over the world for the quality of its graduates, many
of whom have excelled in their profession in different parts of the
globe. It was able to attract students from countries like
India\index{India}, Nepal\index{Nepal}, Iran\index{Iran},
Jordan\index{Jordan}, Malaysia\index{Malaysia}, Sri Lanka\index{Sri
Lanka}, Pakistan\index{Pakistan} and Palestine\index{Palestine}.
\section{Departments}
Both Undergraduate and Postgraduate studies and research are now among
the primary functions of the University. Eleven departments under five
faculties offer Bachelor Degrees, while most of the departments and
institutes offer Master's Degrees and some of the departments have
Ph.D. programs. In addition to its own research programs, the
university undertakes research programs sponsored by outside
organizations like European Union, UNO,
Commonwealth\index{Commonwealth}, UGC\index{UGC}, etc. The expertise
of the University teachers and the laboratory facilities of the
University are also utilized to solve problems and to provide
up-to-date engineering and technological knowledge to the various
organizations of the country.
\endinput | {
"alphanum_fraction": 0.8223947896,
"avg_line_length": 53.9459459459,
"ext": "tex",
"hexsha": "f4a591cce2be0fbcf2eff4dbebbfa42b1c8c90ba",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "52ae9fbd2cf5ca20dce6ad44441cc6ed402bc5b6",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "t33rtha/Building-A-Knowledge-Graph-Using-Twitter-Data",
"max_forks_repo_path": "cseugthesisindexcreation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "52ae9fbd2cf5ca20dce6ad44441cc6ed402bc5b6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "t33rtha/Building-A-Knowledge-Graph-Using-Twitter-Data",
"max_issues_repo_path": "cseugthesisindexcreation.tex",
"max_line_length": 70,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "52ae9fbd2cf5ca20dce6ad44441cc6ed402bc5b6",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "t33rtha/Building-A-Knowledge-Graph-Using-Twitter-Data",
"max_stars_repo_path": "cseugthesisindexcreation.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-06T11:04:57.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-02-06T11:04:57.000Z",
"num_tokens": 896,
"size": 3992
} |
%% Assignment #5 for crapsimum flow
\documentclass[10pt,fullpage]{article}
\usepackage{amsmath,amssymb,amsthm,amsfonts} % Typical maths resource packages
%Check if we are compiling under latex or pdflatex
\ifx\pdftexversion\undefined
\usepackage[dvips]{graphicx}
\DeclareGraphicsExtensions{.jpg,.eps,.pnm}
\DeclareGraphicsRule{.jpg}{eps}{.jpg.bb}{`jpeg2ps -h #1}
\DeclareGraphicsRule{.pnm}{eps}{.pnm.bb}{`pnmtops #1} \else
\usepackage[pdftex]{graphicx}
\fi
\usepackage{hyperref} % For creating hyperlinks in cross references
\usepackage{listings}
\topmargin -1.5cm \oddsidemargin -0.04cm \evensidemargin -0.04cm
\textwidth 16.00cm \textheight 23.50cm
\parskip 7.2pt
\parindent 0.25in
\makeindex
\title{ Advanced Algorithms Assignment V }
\author{Matthew Bennett \\
{\small\em Maximum Flow\ Draft date \today }}
\date{ }
\begin{document}
\maketitle
\subsection*{Exercise 27.1-1 (1st edition)}
\textbf{Given vertices u and v where c(u,v) = 5 and c(v,u) = 8,
suppose 3 units of flow are shipped from u to v and 4 are shipped
from v to u. What is the net flow from u to v? Draw this situation
like Fig. 27.2}\\
Net flow is -4+3 = -1\\
\includegraphics[scale=0.5]{fig2702.png}
\subsection*{Exercise 27.1-2 (1st edition)}
\textbf{Verify the three maximum flow properties for the flow f
shown in Figure 27.1(b)}
Capacity constraint: For all $u, v \epsilon V$, we require $f(u,v)
\leq c(u,v)$.\\
\noindent
$f(s,v_1) = 11\leq 16 = c(s,v_1)$\\
$f(s,v_2) = 8\leq 13 = c(s,v_2)$\\
$f(v_1,v_2) = 0\leq 10 = c(v_1,v_2)$\\
$f(v_1,v_3) = 12\leq 12 = c(v_1,v_3)$\\
$f(v_2,v_1) = 1\leq 4 = c(v_2, v_1)$\\
$f(v_2,v_4) = 11\leq 14 = c(v_2,v_4)$\\
$f(v_3,v_2) = 4\leq 9 = c(v_3,v_2)$\\
$f(v_3,t) = 15\leq 20 = c(v_3,t)$\\
$f(v_4,v_3) = 7\leq 7 = c(v_4,v_3)$\\
$f(v_4,t) = 4\leq 4 = c(v_4,t)$\\
Skew symmetry: For all $u,v\epsilon V$, we require $f(u, v) = - f
(v,u)$.\\
Using $v$ as an iterator over all connected vertices, we
have:
\\
\noindent
%$f(s,v) = 11 + 8 = -(-11-8) = f(v,s)$\\
$f(v_1,v) = 12 = -(-11-1) = f(v,v_1)$\\
$f(v_2,v) = 1 + 11 = -(-8-4) = f(v,v_2)$\\
$f(v_3,v) = 4 + 15 = -(-12-7) = f(v,v_3)$\\
$f(v_4,v) = 7 + 4 = -(-11) = f(v,v_4)$\\
%$f(t,v) = 15 + 4 = -(-4-15) = f(v,t)$\\
Flow conservation: For all $v \epsilon V - \{s, t\}$, we require
$\sum_{u \epsilon V}f(u,v) = 0$.\\
\noindent
$f(v_1,v) + f(v,v_1) = 12 - 1$\\
$f(v_2,v) + f(v,v_2) = 1 + 11 - 4$\\
$f(v_3,v) + f(v,v_3) = 4 - 12 - 7$\\
$f(v_4,v) + f(v,v_4) = 7 - 11$\\
\noindent
$12-1+1+11-4+4-12-7+7-11$\\\\
$12-12-1+1+11-11-4+4-7+7 = 0$\\
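These three checks can also be scripted. Below is a minimal Python sketch,
with the capacities and flows transcribed by hand from the list above (only the
forward flows on actual edges are entered; net flows and the reverse direction
are derived from them):
\begin{lstlisting}[language=Python]
cap = {('s','v1'): 16, ('s','v2'): 13, ('v1','v2'): 10, ('v1','v3'): 12,
       ('v2','v1'): 4,  ('v2','v4'): 14, ('v3','v2'): 9, ('v3','t'): 20,
       ('v4','v3'): 7,  ('v4','t'): 4}
flo = {('s','v1'): 11, ('s','v2'): 8,  ('v1','v2'): 0,  ('v1','v3'): 12,
       ('v2','v1'): 1,  ('v2','v4'): 11, ('v3','v2'): 4, ('v3','t'): 15,
       ('v4','v3'): 7,  ('v4','t'): 4}
V = {'s', 'v1', 'v2', 'v3', 'v4', 't'}

def f(u, v):
    # Net flow from u to v (skew symmetry is built into this definition).
    return flo.get((u, v), 0) - flo.get((v, u), 0)

# Capacity constraint: f(u,v) <= c(u,v), with c = 0 for non-edges.
assert all(f(u, v) <= cap.get((u, v), 0) for u in V for v in V)
# Skew symmetry: f(u,v) = -f(v,u).
assert all(f(u, v) == -f(v, u) for u in V for v in V)
# Flow conservation at the intermediate vertices.
assert all(sum(f(u, v) for v in V) == 0 for u in V - {'s', 't'})
\end{lstlisting}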
\subsection*{Exercise 26.1-5}
\textbf{For the flow network $G = (V, E)$ and flow f shown in Figure
26.1(b), find a pair of subsets $X, Y \subseteq V$ for which $f(X,
Y) = - f(V - X, Y)$. Then, find a pair of subsets $X, Y \subseteq V$
for which $f (X, Y) \neq - f(V - X, Y)$.}
The trivial case is not excluded. So we have $X = V/\{s,t\}, Y = V -
\{s,t\}$ (the set of intermediate vertices). Since $X = Y$, $f(X,Y) =
0$ because of flow conservation (property 3 from the second page).
We also have $- f(V - X, Y) = 11 + 8 - 15 - 4 = 0 = f(X,Y)$.
Now, for the case where $f(X,Y) \neq - f(V - X, Y)$: the simplest
obvious cut is to remove either the source or sink. So let $X = \{t\},
Y = V / \{t\}$. Then $f(X,Y) = -15 - 4 = - 19 \neq 0 = -f(V-X, Y)$
since $V-X = V/\{t\} = Y$, which is zero by property 3.
\subsection*{Exercise 26.2-1}
\textbf{In Figure 26.1(b), what is the flow across the cut (${s,
v_2, v_4}, {v_1, v_3, t}$)? What is the capacity of this cut?}
The flow is: $11+1-4+7+4 = 19$\\
The capacity is: $16+4+7+4 = 31$ (capacity from former to latter) or $10 + 9 = 19$ (capacity from latter to former)\\
\newpage
\subsection*{Exercises 26.2-2}
\textbf{Show the execution of the Edmonds-Karp algorithm on the flow
network of Figure 26.1(a).} I did not show the residual network
because that is simple to see at each step, by reversing the edge
directions. The residual numbers
are given in gray.\\
\includegraphics[scale=0.6]{karp2601.png}
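For reference, the procedure traced above can be written out in a few lines.
The following is a minimal Python sketch of Edmonds-Karp (BFS augmenting paths
on the residual graph); the adjacency-map encoding and the function name are
illustrative choices, not the textbook's pseudocode:
\begin{lstlisting}[language=Python]
from collections import deque

def edmonds_karp(capacity, s, t):
    # capacity: dict of dicts, capacity[u][v] = c(u, v).
    # Residual capacities start equal to the originals, with
    # zero-capacity reverse edges added where missing.
    residual = {u: dict(edges) for u, edges in capacity.items()}
    for u in capacity:
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS from s: shortest augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow              # no augmenting path remains
        # Bottleneck (residual) capacity along the path found.
        bottleneck = float('inf')
        v = t
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # Augment: push bottleneck units along the path.
        v = t
        while parent[v] is not None:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
\end{lstlisting}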
\subsection*{Exercises 26.2-3}
\textbf{In the example of Figure 26.5, what is the minimum cut
corresponding to the maximum flow shown? Of the augmenting paths
appearing in the example, which two cancel flow?}
Note that this corresponds to the result obtained in the previous
exercise, since Edmonds-Karp is an instance of the general
Ford-Fulkerson method. The minimum cut corresponding to the maximum
flow is $(\{s, v_1, v_2, v_4\}, \{v_3, t\})$, whose capacity
$12 + 7 + 4 = 23$ equals the maximum flow $19 + 4 = 23$. I did not
graphically show the augmenting paths, but those that cancel flow
are $(s,v_1)$ and $(v_3,t)$.
\end{document}
| {
"alphanum_fraction": 0.6437486315,
"avg_line_length": 33.3357664234,
"ext": "tex",
"hexsha": "f788d6dc01923a1e9f2738573e3c4f88c9bfeac7",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "caf0b7cf6e89d061544ec3c4a26a88aa77b0c153",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "twinbee/advancedAlgorithms",
"max_forks_repo_path": "asn05.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "caf0b7cf6e89d061544ec3c4a26a88aa77b0c153",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "twinbee/advancedAlgorithms",
"max_issues_repo_path": "asn05.tex",
"max_line_length": 117,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "caf0b7cf6e89d061544ec3c4a26a88aa77b0c153",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "twinbee/advancedAlgorithms",
"max_stars_repo_path": "asn05.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1831,
"size": 4567
} |
%%
%% Copyright 2019-2020 Elsevier Ltd
%%
%% This file is part of the 'CAS Bundle'.
%% --------------------------------------
%%
%% It may be distributed under the conditions of the LaTeX Project Public
%% License, either version 1.2 of this license or (at your option) any
%% later version. The latest version of this license is in
%% http://www.latex-project.org/lppl.txt
%% and version 1.2 or later is part of all distributions of LaTeX
%% version 1999/12/01 or later.
%%
%% The list of all files belonging to the 'CAS Bundle' is
%% given in the file `manifest.txt'.
%%
%% Template article for cas-sc documentclass for
%% single column output.
%\documentclass[a4paper,fleqn,longmktitle]{cas-sc}
\documentclass[a4paper,fleqn]{cas-sc}
%\usepackage[numbers]{natbib}
\usepackage[authoryear]{natbib}
%\usepackage[authoryear,longnamesfirst]{natbib}
%%% Author macros
% Define small caps acronyms?
\def\tsc#1{\csdef{#1}{\textsc{\lowercase{#1}}\xspace}}
\tsc{AMOC}
%%%
\begin{document}
\let\WriteBookmarks\relax
\def\floatpagepagefraction{1}
\def\textpagefraction{.001}
\shorttitle{}
\shortauthors{Worthington et~al.}
%\begin{frontmatter}
\title [mode = title]{Simplifying the dynamics of the Atlantic meridional overturning circulation at 26\textdegree N}
\tnotemark[1]
%\tnotetext[2]{The second title footnote which is a longer text matter
% to fill through the whole text width and overflow into
% another line in the footnotes area of the first page.}
\author[1]{E.L.Worthington}[type=author,
auid=000, bioid=1,
role=Researcher,
orcid=0000-0002-6444-6461]
\cormark[1]
\fnmark[1]
\ead{[email protected]}
%\ead[url]{www.cvr.cc, [email protected]}
%\credit{Conceptualization of this study, Methodology, Software}
\author[2]{G.D.McCarthy}[]
\author[3]{B.I.Moat}[]
\author[3]{D.A.Smeed}[]
\author[3]{J.V.Mecking}[%
role=Supervisor,
]
\author[1]{R.Marsh}[%
role=Supervisor,
]
%\credit{Data curation, Writing - Original draft preparation}
\address[1]{University of Southampton, European Way, Southampton, SO14 3ZH, UK}
\address[2]{ICARUS, Maynooth University, Maynooth, Co. Kildare, Ireland}
\address[3]{National Oceanography Centre, European Way, Southampton, SO14 3ZH, UK}
\cortext[cor1]{Corresponding author}
%\cortext[cor2]{Principal corresponding author}
%\fntext[fn1]{This is the first author footnote. but is common to third
% author as well.}
%\fntext[fn2]{Another author footnote, this is a very long footnote and
% it should be a really long footnote. But this footnote is not yet
% sufficiently long enough to make two lines of footnote text.}
%\nonumnote{This note has no numbers. In this work we demonstrate $a_b$
% the formation Y\_1 of a new type of polariton on the interface
% between a cuprous oxide slab and a polystyrene micro-sphere placed
% on the slab.
% }
\begin{abstract}
A decline in Atlantic meridional overturning circulation (AMOC) strength was observed between 2004 and 2008 by the RAPID array, with this weakened state of the AMOC persisting until 2017. Climate model and paleo-oceanographic research suggests that the AMOC may have been declining for decades or even centuries before this; however, direct observations are sparse prior to 2004, giving only ‘snapshots’ of the overturning circulation. Previous studies have used linear models based on upper-layer temperature anomalies to extend the record further back, but these ignore changes in the deep circulation that are beginning to emerge in the observations of AMOC decline.
We use a linear statistical model of AMOC variability based on RAPID data, and associated physically with changes in thickness of the persistent upper, intermediate and deep layers at 26\textdegree N. Boundary density anomalies at depths representing each layer are used to develop a multiple linear regression model which explains approximately 70\% variance in the open ocean component. Using this regression model, we can estimate relative AMOC strength from a reduced number of observations, opening up the use of historical data that are insufficient for the usual AMOC estimation method.
\end{abstract}
\begin{graphicalabstract}
\includegraphics{figs/grabs.pdf}
\end{graphicalabstract}
\begin{highlights}
\item Research highlights item 1
\item Research highlights item 2
\item Research highlights item 3
\end{highlights}
\begin{keywords}
\sep \AMOC \sep \sep
\end{keywords}
\maketitle
\section{Introduction}
A simple linear regression representing the AMOC as a single-layer dynamic model showed that the western boundary temperature anomaly at 400 decibar (dbar) explained 53\% of the variance in the transport anomaly of the thermocline (0-800 m) layer \citep{longworth2011}.
\section{Materials and methods}
We repeated the simple linear regression from \citet{longworth2011} using monthly mean temperatures from the RAPID western boundary moorings instead of CTD data. Despite the resulting regression having almost identical slope and intercept, we found that it explained only 10\% of the variance of the thermocline layer transport, rather than 53\% as \citet{longworth2011} found. Concluding that representing a single layer did not sufficiently explain the AMOC dynamics at 26\textdegree N, we investigated representing two, three and four layers: first with boundary temperature and salinity anomalies; and then with boundary density anomalies.
\begin{itemize}
\item creation of model
\item dynamic relationship
\item justification of using a linear regression model?
\item testing of model
\item cross-validation?
\item testing against RAPID
\item evaluating uncertainty: Monte Carlo method/bootstrapping
\item prediction intervals
\item model assumptions: autocorrelation of residuals, non-stationarity of variables
\item types of model: OLS, GLSAR, ARIMA, SARIMAX?
\end{itemize}
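A minimal Python sketch of this regression step is given below for illustration,
assuming the statsmodels OLS interface; the file name and the column names
(\texttt{rho\_upper}, \texttt{rho\_intermediate}, \texttt{rho\_deep},
\texttt{moc\_anomaly}) are hypothetical placeholders rather than the variables
used in the actual analysis.
\begin{verbatim}
import pandas as pd
import statsmodels.api as sm

# Hypothetical file and column names for the monthly boundary density
# anomalies of the upper, intermediate and deep layers, plus the RAPID
# MOC transport anomaly used as the training target.
df = pd.read_csv("boundary_density_anomalies.csv")
X = sm.add_constant(df[["rho_upper", "rho_intermediate", "rho_deep"]])
y = df["moc_anomaly"]

model = sm.OLS(y, X).fit()
print(model.summary())        # R^2, coefficients, residual diagnostics

# Estimate the MOC anomaly for new density profiles, with 95%
# prediction intervals quantifying the uncertainty of each estimate.
new_X = sm.add_constant(
    df[["rho_upper", "rho_intermediate", "rho_deep"]].tail(12))
pred = model.get_prediction(new_X).summary_frame(alpha=0.05)
print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]])
\end{verbatim}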
The model was created using data from the RAPID project from ?? Apr 2004 to ?? ?? 2017. To create dynamic height profiles to calculate geostrophic transport across the basin, several moorings on both western and eastern boundaries are combined to make single full height profiles of temperature and salinity, interpolated over a 20 dbar grid. The full method is detailed in \cite{mccarthy2015}. Data from a subsequent RAPID cruise was used to test the model against RAPID's own MOC estimate.
The historical hydrographic data came from multiple sources: the Western Boundary Time Series (WBTS); the underlying profiles used to create the Met Office EN4 reanalysis; the World Ocean Database (WOD); TODO.
- issues with using reanalysis data, i.e., no real deep (4100 dbar) data
%\section{Theory/calculation}
\section{Results}
Initially applying the model to density anomaly data derived from transatlantic hydrographic sections shows the estimated AMOC transport anomalies to be generally larger than those estimated by \citet{bryden2005} (\autoref{fig:occ_pred}). The statistical model and B05 estimates are within the approximately 2 Sv uncertainty, with the exception of the 1992 results. The estimates post-2004 also agree very well with the RAPID AMOC transport anomaly, capturing the large downturn in 2009-2010.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{../phd-code/hydrographic_prediction/figures/occupation_moc_prediction_1980_2017.png}
\caption{AMOC transport anomaly estimated from the statistical model using density anomalies from transatlantic hydrographic sections, compared to estimates from \citet{bryden2005} and RAPID. The uncertainties shown for the model-derived values are the model's prediction intervals; the B05 uncertainty is 2 Sv.}
\label{fig:occ_pred}
\end{figure}
Applying the same model to density anomaly data derived from the EN4 underlying profiles shows
\begin{figure}
\centering
\includegraphics[width=\linewidth]{../phd-code/hydrographic_prediction/figures/en4_prediction_moc.png}
\caption{AMOC transport anomaly estimated from the statistical model using density anomalies from EN4 underlying profiles, compared to estimates from the transatlantic hydrographic sections (see \autoref{fig:occ_pred}) and RAPID}
\label{fig:en4_pred}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{../phd-code/hydrographic_prediction/figures/wod_prediction_moc.png}
\caption{AMOC transport anomaly estimated from the statistical model using density anomalies derived from World Ocean Database data, compared to estimates from the transatlantic hydrographic sections (see \autoref{fig:occ_pred}) and RAPID}
\label{fig:wod_pred}
\end{figure}
\begin{itemize}
\item transatlantic section data
\item EN4 data
\item other hydrographic data
\end{itemize}
\section{Discussion}
\section{Conclusion}
\section{Acknowledgements}
\section{Funding}
This work was supported by the Natural Environmental Research Council [grant number NE/L002531/1]
\begin{figure}
\centering
\includegraphics[scale=.75]{figs/Fig1.pdf}
\caption{The evanescent light - $1S$ quadrupole coupling
($g_{1,l}$) scaled to the bulk exciton-photon coupling
($g_{1,2}$). The size parameter $kr_{0}$ is denoted as $x$ and
the \PMS is placed directly on the cuprous oxide sample ($\delta
r=0$, See also Table \protect\ref{tbl1}).}
\label{FIG:1}
\end{figure}
\begin{table}[width=.9\linewidth,cols=4,pos=h]
\caption{This is a test caption. This is a test caption. This is a test
caption. This is a test caption.}\label{tbl1}
\begin{tabular*}{\tblwidth}{@{} LLLL@{} }
\toprule
Col 1 & Col 2 & Col 3 & Col4\\
\midrule
12345 & 12345 & 123 & 12345 \\
12345 & 12345 & 123 & 12345 \\
12345 & 12345 & 123 & 12345 \\
12345 & 12345 & 123 & 12345 \\
12345 & 12345 & 123 & 12345 \\
\bottomrule
\end{tabular*}
\end{table}
\appendix
\section{My Appendix}
Appendix sections are coded under \verb+\appendix+.
\verb+\printcredits+ command is used after appendix sections to list
author credit taxonomy contribution roles tagged using \verb+\credit+
in frontmatter.
\printcredits
%% Loading bibliography style file
%\bibliographystyle{model1-num-names}
\bibliographystyle{cas-model2-names}
% Loading bibliography database
\bibliography{../../OneDrive/PhD/References/simplified_model}
%\vskip3pt
%\bio{}
%Author biography without author photo.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%\endbio
%
%\bio{figs/pic1}
%Author biography with author photo.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%\endbio
%
%\bio{figs/pic1}
%Author biography with author photo.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%Author biography. Author biography. Author biography.
%\endbio
\end{document}
| {
"alphanum_fraction": 0.7707902001,
"avg_line_length": 39.5631399317,
"ext": "tex",
"hexsha": "357bec3c5486f55c4ca02b977666c34e5aed775f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "000ec9338740dd4456d9248147f542a80d97b083",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "emmaworthington/simplified-model-paper-elsevier",
"max_forks_repo_path": "simplified-model.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "000ec9338740dd4456d9248147f542a80d97b083",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "emmaworthington/simplified-model-paper-elsevier",
"max_issues_repo_path": "simplified-model.tex",
"max_line_length": 652,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "000ec9338740dd4456d9248147f542a80d97b083",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "emmaworthington/simplified-model-paper-elsevier",
"max_stars_repo_path": "simplified-model.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2909,
"size": 11592
} |
\documentclass[letterpaper,11pt]{article}
\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage[pdftex]{hyperref}
\usepackage{fancyhdr}
\usepackage{fontawesome5}
\usepackage{multicol}
\setlength{\multicolsep}{-3.0pt}
\setlength{\columnsep}{-1pt}
\input{glyphtounicode}
\pagestyle{fancy}
\fancyhf{} % clear all header and footer fields
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
% Adjust margins
\addtolength{\oddsidemargin}{-0.375in}
\addtolength{\evensidemargin}{-0.375in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-.5in}
\addtolength{\textheight}{1.0in}
\urlstyle{same}
\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}
% Sections formatting
\titleformat{\section}{
\vspace{-4pt}\scshape\raggedright\large
}{}{0em}{}[\color{black}\titlerule \vspace{-5pt}]
%-------------------------
% Custom commands
\newcommand{\resumeItem}[2]{
\item\small{
\textbf{#1}{: #2 \vspace{-2pt}}
}
}
\newcommand{\resumeProjectItem}[1]{
\item\small{
{#1 \vspace{-5pt}}
}
}
\newcommand{\resumeSubheading}[4]{
\vspace{-1pt}\item
\begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
\textbf{#1} & #2 \\
\textit{\small#3} & \textit{\small #4} \\
\end{tabular*}\vspace{-5pt}
}
\newcommand{\resumeProjectHeading}[2]{
\item
\begin{tabular*}{1.001\textwidth}{l@{\extracolsep{\fill}}r}
\small#1 & \textbf{\small #2}\\
\end{tabular*}\vspace{-2pt}
}
\newcommand{\resumeProjectSubSubheading}[2]{
\item
\begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
\textit{\small#1} & \textit{\small #2} \\
\end{tabular*}\vspace{-7pt}
}
\newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}}
\renewcommand{\labelitemii}{$\circ$}
\newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]}
\newcommand{\resumeSubHeadingListEnd}{\end{itemize}}
\newcommand{\resumeItemListStart}{\begin{itemize}}
\newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}}
%-------------------------------------------
%%%%%% CV STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%----------HEADING-----------------
\begin{center}
{\Huge \scshape Nooruddin Shaikh}
\end{center}
\begin {tabular*}{\textwidth}{l@{\extracolsep{\fill}}r}
\text{Andheri West, Mumbai, MH, 400058} & Mobile: +91 88791 14831\\
\text{\href{mailto:[email protected]}{\faEnvelope\hspace{2 mm}\color{blue}[email protected]}} & \href{https://github.com/noor12401}{\faGithub\hspace{2 mm}\color{blue}github.com/noor12401}\\
\href{https://www.linkedin.com/in/nooruddin-shaikh/}{\faLinkedin\hspace{2 mm}\color{blue}linkedin.com/in/nooruddin-shaikh} & {\href{https://noor12401.medium.com/}{\faMedium\hspace{2 mm}\color{blue}noor12401.medium.com}}
\end{tabular*}
%-----------EXPERIENCE-----------------
\section{Experience}
\resumeSubHeadingListStart
\resumeSubheading
{\href{https://s3.ap-south-1.amazonaws.com/internship.ineuron.ai/certificates/31e40962-e2d6-41a0-ae05-ffaca1e58b52.pdf}{\color{blue}Ineuron.ai}}{}
{Machine Learning Intern}{Aug 2021 - Sep 2021}
\resumeItemListStart
\item Applied statistical analysis, deep learning and machine learning methods to large data sets for data exploration, feature engineering, bias correction and prediction.
\item Evaluated and analyzed the data set to achieve the highest accuracy for the model.
\item Maintained \textbf{High Level Document, Low Level Document, WireFrame and Report}.
\resumeItemListEnd
\resumeSubHeadingListEnd
%-----------EDUCATION-----------------
\section{Education}
\resumeSubHeadingListStart
\resumeSubheading
{\href{https://www.mhssce.ac.in/}{\color{blue}M H Saboo Siddik College of Enginnering}}{Mumbai, MH, IN}
{Bachelor of Engineering in Electronics and Telecommunication; CGPA: 7.70/10}{Aug 2018 - May 2022}
\resumeSubheading
{\href{https://bhavans.ac.in/}{\color{blue}Bhavans College}}{Mumbai, MH, IN}
{Higher Secondary Certificate; Grades: 64\%}{July 2016 - May 2018}
\resumeSubheading
{Sheth M.A. High School}{Mumbai, MH, IN}
{Secondary School Certificate; Grades: 85\%}{May 2016}
\resumeSubHeadingListEnd
%-----------PROJECTS-----------
\section{Projects}
\vspace{-10pt}
    \resumeSubHeadingListStart
\resumeProjectHeading
{\href{https://github.com/noor12401/Projects/tree/main/AQI}{\color{blue}\textbf{Air Quality Index Prediction}} $|$ \emph{Python, Streamlit, ML, Matplotlib, EDA}}{September 2021}
\resumeItemListStart
\resumeProjectItem {Developed a Machine Learning Model to predict quality of air using algorithms including Linear Regression, Lasso Regression, Decision Tree, XGBoost, Random Forest Regressor which is deployed on \textbf{\href{https://airqualityindexcheckerr.herokuapp.com/}{\color{blue}Heroku}}.}
\resumeProjectItem{Built the \textbf{Explore} page for the users to visualize different graphs to understand the relationship between parameters.}
\resumeItemListEnd
\vspace{-24pt}
\resumeProjectHeading
{\href{https://github.com/noor12401/Virtual-Health-Check-Up}{\color{blue}\textbf{Virtual Health CheckUp}} $|$ \emph{Python, PyWebIo, Heroku Cloud}}{April 2021}
\resumeItemListStart
\resumeProjectItem {Developed a python script to provide the basic details such as BMI, BMR, Diabetes Prediction, Average Protein, and Fats Intake, Fitness Score using common details such as Body Temperature, Age, Height, Weight, etc.}
\resumeItemListEnd
\vspace{-12pt}
\resumeProjectHeading
{\href{https://github.com/noor12401/COVID-19-Analysis}{\color{blue}\textbf{COVID-19 Analysis}} $|$ \emph{Python, NumPy, Pandas, Matplotlib}}{July 2020}
\resumeItemListStart
\resumeProjectItem {Used \textbf{Linear Regression} to analyze the relationship between Confirmed Cases and Life Factors (like Social Support, Life Expectancy, Countries with GDP per Capita, etc.) and the relationship between Deaths Rates and Life Factors.}
\resumeItemListEnd
    \resumeSubHeadingListEnd
\vspace{-15pt}
%-----------ACHIEVEMENTS-----------------
\section{Achievements}
\resumeItemListStart
\item Recipient of AWS Machine Learning Scholarship offered by AWS and Udacity (out of 18,000+ applications from 149 countries).
\vspace{-5pt}
    \item Recipient of COL-Udemy Skills for Work Scholarship offered by Commonwealth of Learning's Skills \& P N Panicker Foundation
\vspace{-5pt}
\item Cleared Python Level 1 Qualification conducted by Cambridge Certification Authority
\resumeItemListEnd
\vspace{-12pt}
%-----------CERTIFICATIONS-----------------
\section{Certifications}
\resumeItemListStart
\item \textbf{\href{https://www.coursera.org/account/accomplishments/professional-cert/HHSB8SHSDDBN}{\color{blue}IBM Data Science Professional Certificate}}
\hfill
\textbf{IBM, Coursera}
\vspace{-8pt}
\item \textbf{\href{https://www.credential.net/5794018e-8523-428f-8f57-6b862a786523#gs.bf0vw7}{\color{blue}Machine Learning Concepts}}
\hfill
\textbf{UpGrad}
\vspace{-8pt}
\item \textbf{\href{https://leapsdata.analyttica.com/certificates/b53ad4c9-776c-43c5-ab8c-6c7edeaa7257/certificate_LEAPSCE00001915.png}{\color{blue}Fundamentals of Data Analytics}}
\hfill
\textbf{Analyttica TreasureHunt}
\vspace{-8pt}
\item \textbf{\href{https://courses.packtpub.com/certificates/kljy0gexga}{\color{blue}Python Workshop}}
\hfill
\textbf{Packt}
\resumeItemListEnd
\vspace{-12pt}
%
%--------SKILLS------------
\section{Skills}
\resumeItemListStart
\item \textbf{Tools/Frameworks}{: Pandas, NumPy, Scikit-learn, OpenCV, Jupyter notebook, Flask, Git}
\vspace{-8pt}
\item \textbf{Languages}{: Python, SQL, C, MATLAB, HTML, CSS, \LaTeX}
\vspace{-8pt}
\item \textbf{Interests}{: Machine Learning, Data Structures and Algorithms, Deep Learning, Data Science, Blockchain}
\resumeItemListEnd
%-------------------------------------------
% \item{
% \textbf{Languages}{: Scala, Python, Javascript, C++, SQL, Java}
% \hfill
% \textbf{Technologies}{: AWS, Play, React, Kafka, GCE}
% }
%-------------------------------------------
%-------------------------------------------
\end{document}
| {
"alphanum_fraction": 0.6868055556,
"avg_line_length": 39.2727272727,
"ext": "tex",
"hexsha": "8046311a363bfc7345867475089010f03444bb7e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "37c4f7dce74f2c158c0d7109af375bcc2f60b7ab",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "noor12401/resume-latex",
"max_forks_repo_path": "Nooruddin_Shaikh_Resume.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "37c4f7dce74f2c158c0d7109af375bcc2f60b7ab",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "noor12401/resume-latex",
"max_issues_repo_path": "Nooruddin_Shaikh_Resume.tex",
"max_line_length": 308,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "37c4f7dce74f2c158c0d7109af375bcc2f60b7ab",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "noor12401/resume-latex",
"max_stars_repo_path": "Nooruddin_Shaikh_Resume.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2591,
"size": 8640
} |
\documentclass[a4paper,titlepage]{article}
\usepackage[utf8]{inputenc}
\usepackage{fullpage}
\usepackage{indentfirst}
\usepackage[per-mode=symbol]{siunitx}
\usepackage{listings}
\usepackage{graphicx}
\usepackage{color}
\usepackage{amsmath}
\usepackage{array}
\usepackage[hidelinks]{hyperref}
\usepackage[format=plain,font=it]{caption}
\usepackage{subcaption}
\usepackage{standalone}
\usepackage[nottoc]{tocbibind}
\usepackage[noabbrev,capitalize,nameinlink]{cleveref}
\usepackage{titlesec}
\usepackage{booktabs}
\usepackage{csvsimple}
\usepackage[super]{nth}
% Custom commands
\newcommand\numberthis{\addtocounter{equation}{1}\tag{\theequation}}
\newcommand{\code}[1]{\texttt{#1}}
\newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}}
\titleformat*{\section}{\normalsize\bfseries}
%opening
\title{
\textbf{ECSE 526 \\ Assignment 3}
\\ \large Reinforcement Learning
}
\author{Sean Stappas \\ 260639512}
\date{November \nth{7}, 2017}
\begin{document}
\sloppy
\maketitle
\twocolumn
\section{Description of approach to generalization} \label{sec:generalization_description}
% 10. provides expression for distance metric between two states and describes state representation used by the RL agent, also includes rationale for choice of components of the distance metric
Q-learning is the learning algorithm chosen for the agent. Many different distance metrics were experimented with in the program, as described by Mahadevan and Connell \cite{mahadevan}. Here are three simple distance metrics, with the rationale for each choice of components.
\begin{description}
\item[Manhattan] The \textsc{Manhattan} distance represents how close two states are on the Qbert pyramid, i.e., the number of blocks between them. This is a natural representation of distance in this environment, since Qbert can only change state by switching blocks and can only travel one block at a time (with the exception of using disks to jump back to the starting position). A distance of 1 was used for \textsc{Manhattan}.
\item[Hamming] The \textsc{Hamming} distance represents the number of different state bits between two given states. The rationale behind this distance metric is that nearby states should have a similar bit pattern, assuming a sane state representation. A distance of 1 was used for \textsc{Hamming}.
\item[SameResult] The \textsc{SameResult} approach groups adjacent states together, such that, whenever a Q-update occurs for a certain Qbert position, all possible states that could have directly led to that state also get updated. The rationale here is that, for most states in the Qbert world, the path Qbert takes to get to that state is usually not important.
\end{description}
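As a concrete illustration, the listing below sketches how the \textsc{Manhattan} and \textsc{Hamming} metrics could be computed. The $(row, column)$ block coordinates and the bit-string state encoding used here are assumptions made purely for this example and may differ from the project's actual representation.
\begin{lstlisting}[language=Python]
# Illustrative sketch of the MANHATTAN and HAMMING distance metrics.
# The (row, column) block coordinates and the bit-string state encoding
# are assumptions for this example, not the project's exact encoding.

def manhattan_distance(position_a, position_b):
    """Number of blocks between two Qbert positions on the pyramid."""
    (row_a, col_a), (row_b, col_b) = position_a, position_b
    return abs(row_a - row_b) + abs(col_a - col_b)

def hamming_distance(state_bits_a, state_bits_b):
    """Number of differing bits between two equal-length state bit strings."""
    assert len(state_bits_a) == len(state_bits_b)
    return sum(a != b for a, b in zip(state_bits_a, state_bits_b))

# Adjacent blocks are 1 apart, matching the distance of 1 used above.
assert manhattan_distance((3, 1), (4, 1)) == 1
assert hamming_distance('10110', '10010') == 1
\end{lstlisting}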
Many different state representations were used with varying results. First, a state representation was used keeping track of the color of every block, the position of disks, Qbert's position, the position of enemies (purple agents) and the position of friendlies (green agents). While this state representation completely encapsulates Qbert's world, it is very verbose and intractable if not using any distance metric, as described previously.
With 21 blocks in the pyramid, an agent has 21 possible positions, representing 5 bits of entropy. This applies for Qbert, the enemies and the friendlies. A simple representation of the colors of the blocks can indicate if the block color is the goal color or not. With this binary value for each block, the colors can be represented with 21 bits. Finally, there are 12 possible locations for the disks, with each disk either being present or not, representing 12 bits of entropy. With all these elements in a state representation, we would need a total of $5*3 + 21 + 12 = 48$ bits to represent a state completely. This represents a state space with $2^{48} \approx 3 \times 10^{14}$ states. Clearly, this number of states is intractable for learning, showing the necessity for generalization with an appropriate distance metric. This state representation is referred to as \textsc{Verbose} in the code.
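To make the size of the \textsc{Verbose} representation concrete, the following sketch simply tallies the bit budget described above; the constant names and the exact packing are illustrative assumptions rather than the project's actual encoding.
\begin{lstlisting}[language=Python]
# Back-of-the-envelope size of the VERBOSE state representation.
# Field names and packing are illustrative assumptions only.
QBERT_POSITION_BITS = 5     # 21 possible blocks fits in 5 bits (2**5 = 32)
ENEMY_POSITION_BITS = 5
FRIENDLY_POSITION_BITS = 5
BLOCK_COLOR_BITS = 21       # one goal-color flag per block
DISK_BITS = 12              # one present/absent flag per disk location

TOTAL_BITS = (QBERT_POSITION_BITS + ENEMY_POSITION_BITS
              + FRIENDLY_POSITION_BITS + BLOCK_COLOR_BITS + DISK_BITS)

assert TOTAL_BITS == 48
print(2 ** TOTAL_BITS)      # 281474976710656, roughly 3 * 10**14 states
\end{lstlisting}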
Another approach to generalization is to use a simpler state representation. A simpler state may not completely represent the game world, but can be enough for an agent to learn how to behave correctly. For example, instead of keeping track of all enemies, blocks and friendlies, one can keep track only of the enemies, blocks and friendlies around Qbert, since that is what matters for choosing an immediate action. One problem with this approach, however, is that it can be hard for the agent to learn to go towards all the blocks in a level to change all the colors and finish the level. More information, such as the number of colored blocks on the board, can then be provided as well. This state representation is called \textsc{Simple} in the code.
Finally, another way to deal with a very large state space is to separate the learning problem into sub-tasks, allowing components of the state to add in complexity instead of multiply. Indeed, if we have separate tasks of enemy avoidance, friendly catching and block finding, we can have a simple specialized state for each task. This is the \textsc{Subsumption} model described by Mahadevan and Connell \cite{mahadevan}, and is explored further in \cref{sec:performance}, where the \textsc{FriendlyCatcher}, \textsc{BlockFinder} and \textsc{EnemyAvoider} learners are described in detail.
To simplify the learning process and avoid recurring sudden death, Qbert's motion is restricted to within the pyramid (with the exception of reaching disks). This allows Qbert to focus on learning to find blocks, avoid enemies and find friendlies.
\section{Results of generalization}
% 10. results provided for at least 3 different generalization approaches (i.e., choice of components of the distance metric) and meaningful discussion regarding consequences to behaviour of game agent
For all the plots in this report, scores were reported for 100 episodes of playing Qbert. For each episode, Qbert keeps playing until he has no lives left, and the total score obtained is recorded. Also, for the following plots showing various generalization results, no exploration is being used.
The results of the Qbert agent with no generalization can be seen in \cref{fig:no_generalization}. We can see that learning takes place relatively slowly because of the massive state space.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{plots/no_generalization.pdf}
\caption
{Score achieved by the agent with no generalization and no exploration.}
\label{fig:no_generalization}
\end{figure}
The results of the Qbert agent using the \textsc{Manhattan} distance metric can be seen in \cref{fig:manhattan}. It seems that learning is actually hindered with this metric, with overall lower scores being obtained. This may be explained by the fact that assigning the same Q-value to adjacent states may not always be wise. For instance, if Qbert dies to an enemy at a specific block, it probably doesn't make sense to assign the same penalty to all nearby blocks.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{plots/manhattan.pdf}
\caption
{Score achieved by the agent using the \textsc{Manhattan} distance metric.}
\label{fig:manhattan}
\end{figure}
The results of the Qbert agent using the \textsc{Hamming} distance metric can be seen in \cref{fig:hamming}. While the learning converges relatively quickly to a score, it clearly fails to converge to a very high one. Once again, the distance metric may have caused unintended states to have the same reward, causing repetitive destructive behaviour.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{plots/hamming.pdf}
\caption
{Score achieved by the agent using the \textsc{Hamming} distance metric.}
\label{fig:hamming}
\end{figure}
The results of the Qbert agent using the \textsc{SameResult} distance metric can be seen in \cref{fig:same_result}. Here, the distance metric clearly failed to provide learning, with the score converging to a lower value than the starting one. By assigning the same utility to states that lead to the same other states, the agent has degenerated to repetitive behaviour, leading to easy prey for the purple enemies.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{plots/same_result.pdf}
\caption
{Score achieved by the agent using the \textsc{SameResult} distance metric.}
\label{fig:same_result}
\end{figure}
It is clear that all the described distance metrics have not been successful. This can be explained by the simple fact that, even with distance metrics, the \textsc{Verbose} state space is much too big. To address this issue, we have created a \textsc{Subsumption} agent using the \textsc{Simple} state representation. The details of this model will be described in \cref{sec:performance}, but its performance as a generalization technique can be seen in \cref{fig:subsumption_generalization}. Here, we see a clear upward learning trend despite the spiky data. Note that the spikiness in the data is due to the great danger in the Qbert world, where death is very easy near purple enemies.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{plots/subsumption_generalization.pdf}
\caption
{Score achieved by the \textsc{Subsumption} agent using a \textsc{Simple} state representation.}
\label{fig:subsumption_generalization}
\end{figure}
Note that exploration has been completely ignored here, since it will be discussed in the next section. This explains the relatively mediocre score results. It should be noted, however, that with the subsumption model, the agent quickly learns, via the \textsc{EnemyAvoider}, to avoid purple enemies, and this in itself acts as a form of exploration, since the enemies force Qbert to move around the pyramid to avoid death. Indeed, this forces Qbert to visit states that he would normally not visit.
\section{Description of approach to exploration}
% 10 provides expression for optimistic prior for (state, action) pairs with clear explanation of how agent chose action at each step and convincing rationale for the approach taken
First, the $\epsilon$-greedy approach to exploration was implemented, where a random action is chosen with probability $\epsilon$ and the greedy action is chosen otherwise. An $\epsilon$ value of \SI{20}{\percent} was chosen, since exploration is very important to finding all the blocks to complete a level. The $\epsilon$-greedy approach is the one chosen by many researchers using reinforcement learning in the ALE \cite{defazio,bellemare,mnih}. This method is attractive for many reasons. First, it is very simple to implement with any standard random number generator. Second, despite its simplicity, it can lead to very powerful results, since it can allow an agent to explore states it would never normally explore. However, convergence to an optimal policy can be slow with $\epsilon$-greedy. Also, this method can be dangerous in the Qbert game, where a bad random action can lead to sudden death. For this reason, the random action is only chosen for the \textsc{FriendlyCatcher} and \textsc{BlockFinder}, since a bad random action from the \textsc{EnemyAvoider} can lead to sudden death. This approach will be referred to as \textsc{Random} exploration.
Next, weighting with optimistic priors was implemented, using $N(s, a)$, i.e., the number of times action $a$ has been attempted in state $s$. The simple exploration function used can be seen in \cref{eq:exploration_function}. An optimistic score of 100 was chosen, since it is higher than the score obtained by jumping on a block without a goal color (25). This method can often lead to meaningful convergence much faster than $\epsilon$-greedy, since it will directly explore unvisited states, instead of choosing randomly. This method is theoretically very attractive in the Qbert world, since exploring unexplored states usually means going on blocks without the goal color, turning them to the correct color. This approach will be referred to as \textsc{Optimistic} exploration.
\begin{equation} \label{eq:exploration_function}
f(u, n) =
\begin{cases}
100 & n < 1 \\
u & n \geq 1
\end{cases}
\end{equation}
Finally, a combination of $\epsilon$-greedy and optimistic prior was also experimented with. With this approach, the agent chooses a random action with probability $\epsilon$ and uses the exploration function given in \cref{eq:exploration_function}. This can theoretically offer the benefits of both $\epsilon$-greedy and optimistic prior, i.e., the benefits of random exploration and optimistic unexplored states. This approach will be referred to as \textsc{Combined} exploration.
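To make the \textsc{Combined} strategy concrete, the listing below sketches how an action could be selected by combining $\epsilon$-greedy choice with the optimistic prior of \cref{eq:exploration_function}. The dictionaries and function names are assumptions made for this illustration and do not necessarily match the project's implementation.
\begin{lstlisting}[language=Python]
import random

EPSILON = 0.2            # probability of taking a purely random action
OPTIMISTIC_SCORE = 100   # optimistic utility for unexplored (state, action) pairs

def exploration_value(q_value, visit_count):
    """Exploration function f(u, n): optimistic until the pair has been tried."""
    return OPTIMISTIC_SCORE if visit_count < 1 else q_value

def choose_action(state, actions, q_values, visit_counts):
    """COMBINED exploration: epsilon-greedy over optimistically weighted utilities.

    q_values and visit_counts are assumed to be dicts keyed by (state, action).
    """
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: exploration_value(
        q_values.get((state, a), 0.0), visit_counts.get((state, a), 0)))
\end{lstlisting}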
\section{Results of exploration}
% 10. results provided for at least 2 different exploration functions (i.e., weighting or N[s,a] in optimistic prior calculation) and meaningful discussion regarding consequences to behaviour of game agent
The results of the \textsc{Subsumption} agent using \textsc{Random} exploration can be seen in \cref{fig:subsumption_random}. The results are relatively good, although it is hard to tell if the agent is correctly learning. This is most probably due to the fact that 100 is a relatively low number of episodes.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{plots/subsumption_random.pdf}
\caption
{Score achieved by the \textsc{Subsumption} agent with \textsc{Random} exploration.}
\label{fig:subsumption_random}
\end{figure}
One problem that was clear with $\epsilon$-greedy exploration is that it can be very slow to learn and waste a great deal of training time in the same states. For example, with $\epsilon$-greedy, Qbert would often oscillate between two adjacent blocks, not seeing the need to explore further until it randomly chose (with probability $\epsilon$) to do so. This can be seen in \cref{fig:random_oscillations}, where Qbert is oscillating between the bottom right block and the block to the top right of it. This can explain the slow learning seen in \cref{fig:subsumption_random}.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{screenshots/random_oscillations_2.png}
\caption
{Screen capture during Qbert's oscillation between two states.}
\label{fig:random_oscillations}
\end{figure}
The results of the \textsc{Subsumption} agent using \textsc{Optimistic} exploration can be seen in \cref{fig:subsumption_optimistic}. These results are not as encouraging, with the score oscillating around a relatively low value. This can most likely be explained by the fact that we used a relatively simplistic exploration function. With a more sophisticated one, we can most likely obtain better results.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{plots/subsumption_optimistic.pdf}
\caption
{Score achieved by the \textsc{Subsumption} agent with \textsc{Optimistic} exploration.}
\label{fig:subsumption_optimistic}
\end{figure}
The results of the \textsc{Subsumption} agent using \textsc{Combined} exploration can be seen in \cref{fig:subsumption_combined}. These results are much more encouraging. We can see the same initial high score values as for \textsc{Random} exploration, with a clear increasing trend. This approach encourages Qbert to visit unexplored states, while also randomly visiting states that may lead to more unexplored states.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{plots/subsumption_combined.pdf}
\caption
{Score achieved by the \textsc{Subsumption} agent with \textsc{Combined} exploration ($\epsilon$-greedy and optimistic-prior exploration).}
\label{fig:subsumption_combined}
\end{figure}
Of course, as described previously, the task of avoiding enemies is itself a good way of exploring the state space and completing a level, since it will force Qbert to jump around the pyramid and visit blocks to make them the right color.
\section{Agent Performance} \label{sec:performance}
% 30. as above, including analysis of effects of game events on agent behaviour and strategies (e.g., enemy avoidance) AND explanation of results over trials with multiple seeds to demonstrate generalization of learning
Based on the results from the previous sections, it is clear that the \textsc{Subsumption} approach described by Mahadevan and Connell \cite{mahadevan} gives the best results. The model for this approach will now be described in detail.
The agent is separated into three learners: \textsc{EnemyAvoider}, \textsc{FriendlyCatcher} and \textsc{BlockFinder}. The structure is shown in \cref{fig:subsumption}, where the ``S'' suppressor nodes denote priority, i.e., avoiding enemies takes priority over catching friendlies and going on blocks. This approach allows the agent to learn various aspects of the Qbert game independently, with a reduced state space for each task. This separation also makes intuitive sense, since learning how to avoid enemies should have little to do with finding blocks. Combined with the \textsc{Simple} state representation described in \cref{sec:generalization_description}, the \textsc{Subsumption} model can lead to a very small state space which can be learned very quickly.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{plots/subsumption.pdf}
\caption
{Subsumption network used by the \textsc{Subsumption} agent.}
\label{fig:subsumption}
\end{figure}
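Before describing each learner in detail, the following is a minimal sketch of the suppressor logic in \cref{fig:subsumption}: the highest-priority learner that is currently active chooses the action, while every learner still updates its own Q-values from the observed transition. The learner interface shown here is an assumption made for illustration only.
\begin{lstlisting}[language=Python]
class SubsumptionAgent(object):
    """Arbitrates between learners; higher-priority learners suppress lower ones."""

    def __init__(self, enemy_avoider, friendly_catcher, block_finder):
        # Ordered from highest to lowest priority.
        self.learners = [enemy_avoider, friendly_catcher, block_finder]

    def choose_action(self, state):
        for learner in self.learners:
            if learner.is_active(state):
                return learner.best_action(state)
        # The block finder is always active, so this point is never reached.
        raise RuntimeError('No active learner')

    def observe(self, state, action, reward, next_state):
        # Every learner learns from the transition, even when its decision
        # was suppressed by a higher-priority learner.
        for learner in self.learners:
            learner.update(state, action, reward, next_state)
\end{lstlisting}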
The \textsc{EnemyAvoider} is active whenever an enemy (\textsc{Coily} or \textsc{PurpleBall}) is close to Qbert, i.e., in any of the blocks directly adjacent to Qbert or 2 blocks away. An important strategy is to kill the purple snake enemy (\textsc{Coily}) by jumping onto the floating disks on the left and right of the board. If \textsc{Coily} is nearby, he will jump off the board, giving Qbert 500 points. This maneuver can be seen in \cref{fig:kill_coily}. This score is fed as a reward to the \textsc{EnemyAvoider} learner. As a penalty for dying to \textsc{Coily}, a negative reward of $-100$ is given to the learner. This combination of positive and negative reinforcement allows the agent to learn how to avoid and kill \textsc{Coily}.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{screenshots/kill_coily.png}
\caption
{Screen capture during Qbert's jump on a disk to kill Coily.}
\label{fig:kill_coily}
\end{figure}
Indeed, avoiding \textsc{Coily} is probably the most important task to learn in the game. Not only does it prevent Qbert from dying, but it inadvertently makes him explore the entire map while avoiding \textsc{Coily}. This avoidance is essential, and after many runs, the \textsc{Subsumption} agent learns to go back and forth between tiles to out-maneuver \textsc{Coily}. This can be seen in \cref{fig:avoid_coily}.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{screenshots/avoid_coily.png}
\caption
{Screen capture during Qbert's back-and-forth maneuver on the bottom corner to avoid \textsc{Coily}.}
\label{fig:avoid_coily}
\end{figure}
The \textsc{FriendlyCatcher} is active whenever a friendly green agent (\textsc{Sam} or \textsc{GreenBall}) is within 2 blocks of Qbert. This learner encourages Qbert to catch \textsc{Sam} for 300 points and \textsc{GreenBall} for 100 points. Stopping \textsc{Sam} is also important because he changes the colors of blocks back to their original values, making Qbert do more work.
The \textsc{BlockFinder} is always active and keeps track of the colors of blocks within 2 blocks' distance of Qbert. It specifically keeps track of the number of adjacent blocks that are not of the desired color, encouraging Qbert to change them to their correct color to eventually complete the level. It receives a 25 point reward for turning a block to the goal color, and a 3100 point reward for completing a level.
Note that, although one learner's decision always has priority for executing an action, multiple learners can still learn simultaneously. For example, while Qbert is avoiding an enemy, the block finder learner can still be learning from the actions being taken.
The score results of the \textsc{Subsumption} agent with seeds of 123, 459 and 598 can be seen in \cref{fig:seed123,fig:seed459,fig:seed598}. We can see that the agent performs relatively well for all of these seed values, showing that the agent generalizes well. Fully appreciating the upward learning trend would require more than 100 episodes, but training is relatively time-consuming.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{plots/seed123.pdf}
\caption
{Score achieved by the \textsc{Subsumption} agent with a seed of 123.}
\label{fig:seed123}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{plots/seed459.pdf}
\caption
{Score achieved by the \textsc{Subsumption} agent with a seed of 459.}
\label{fig:seed459}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{plots/seed598.pdf}
\caption
{Score achieved by the \textsc{Subsumption} agent with a seed of 598.}
\label{fig:seed598}
\end{figure}
With repeated training, the highest score achieved by the agent is around 41800 and the maximum level reached is 9. There are multiple possible improvements to be made to the agent's behaviour. One would be to discourage it from staying near the top of the pyramid, since this is where \textsc{PurpleBall} spawns. This could possibly be achieved by keeping the \textsc{EnemyAvoider} learner active even when there are no enemies, or only when Qbert is near the top of the pyramid. Another problem is that Qbert doesn't seem to explore very well when he is not avoiding \textsc{Coily}. This could possibly be addressed by encouraging Qbert further to find uncolored blocks by specifying how many blocks are left without the goal color.
\section*{Acknowledgments}
There was a discussion with Andrei Purcarus, Andrew Lowther and Juliette Valenchon concerning extracting relevant data from the ALE and high-level learning strategies. Notably, the positions of various important bytes in the ALE RAM were discussed, such as the bytes indicating a safe move update and the byte indicating a level change.
% \renewcommand\refname{}
\bibliographystyle{unsrt}
\bibliography{readings}{}
\end{document}
| {
"alphanum_fraction": 0.7876494024,
"avg_line_length": 86.2213740458,
"ext": "tex",
"hexsha": "c9f3b9e03f48ceefb65df660d8c719aa94014975",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "3d9c8b0821ba6df07d1711c0199a6e876ebc4ad7",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "seanstappas/qbert-reinforcement-learning",
"max_forks_repo_path": "report/report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "3d9c8b0821ba6df07d1711c0199a6e876ebc4ad7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "seanstappas/qbert-reinforcement-learning",
"max_issues_repo_path": "report/report.tex",
"max_line_length": 1165,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "3d9c8b0821ba6df07d1711c0199a6e876ebc4ad7",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "seanstappas/qbert-reinforcement-learning",
"max_stars_repo_path": "report/report.tex",
"max_stars_repo_stars_event_max_datetime": "2020-08-21T03:05:03.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-08-21T03:05:03.000Z",
"num_tokens": 5574,
"size": 22590
} |
In this section, we will closely follow \cite{pitfalls}.
\subsubsection{Introduction}
Data sets in machine learning are normally of huge sample sizes, which most MCMC algorithms are not designed to process.
As a result of the computational cost, several new approaches have been proposed recently; Stochastic Gradient Langevin Dynamics (SGLD) is a popular one.
SGLD is based on the Langevin Monte Carlo (LMC) algorithm.
LMC is a discretization of a continuous-time process: it requires computing the gradient of the log-posterior at the current fit of the parameter and avoids the accept/reject step.
SGLD uses an unbiased estimator of the gradient of the log-posterior based on subsampling, making it suitable for samples of huge size.
\subsubsection{Governing Equation}
Recall the following equations:
Euler discretization of the Langevin SDE:
$$\theta_{n+1} = \theta_n - h \nabla U(\theta_n)+\sqrt{2h} Z_{n+1}$$ where $h > 0$ is a constant step size and $(Z_n)_{n\geq1}$ is a sequence of i.i.d.\ standard $d$-dimensional Gaussian vectors.
To reduce the cost of the algorithm, we switch to SGLD, in which $\nabla U$ is replaced by the unbiased estimate $\nabla U_0+\frac{N}{p} \sum_{i\in S}\nabla U_i$, where $S$ is a minibatch of $\{1,\dots, N\}$ of size $p$, drawn with replacement. The iterations are then updated as
$$\theta_{n+1} = \theta_{n}-h \Bigg (\nabla U_0(\theta_n)+\frac{N}{p}\sum_{i\in S_{n+1}}\nabla U_i(\theta_n)\Bigg )+\sqrt{2h}Z_{n+1}$$
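This estimate is indeed unbiased: assuming, as in the decomposition of $U$ used earlier in the document, that $U = \sum_{i=0}^{N} U_i$ and that each index in $S$ is drawn uniformly from $\{1,\dots,N\}$,
$$\mathbb{E}_S\Bigg[\nabla U_0(\theta)+\frac{N}{p}\sum_{i\in S}\nabla U_i(\theta)\Bigg] = \nabla U_0(\theta)+\frac{N}{p}\cdot p\cdot\frac{1}{N}\sum_{i=1}^{N}\nabla U_i(\theta) = \nabla U(\theta).$$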
Stochastic Gradient Descent (SGD) is characterised by the same recursion as SGLD but without the Gaussian noise (the last term):
$$\theta_{n+1} = \theta_{n}-h \Bigg (\nabla U_0(\theta_n)+\frac{N}{p}\sum_{i\in S_{n+1}}\nabla U_i(\theta_n)\Bigg)$$
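For illustration, the following is a minimal sketch of one SGLD iteration; dropping the noise term (\texttt{add\_noise=False}) gives the corresponding SGD step. The function names and the representation of the gradients as Python callables are assumptions made for this example only.
\begin{verbatim}
import math
import random

def sgld_step(theta, grad_u0, grad_ui, N, p, h, add_noise=True):
    # One SGLD update; with add_noise=False this is the SGD recursion.
    #   theta   : list of floats, the current iterate
    #   grad_u0 : callable, gradient of U_0 at theta
    #   grad_ui : callable (i, theta), gradient of U_i at theta
    #   N, p, h : number of terms, minibatch size, step size
    minibatch = [random.randrange(1, N + 1) for _ in range(p)]
    grad = list(grad_u0(theta))
    for i in minibatch:
        g_i = grad_ui(i, theta)
        for k in range(len(theta)):
            grad[k] += (N / p) * g_i[k]
    new_theta = []
    for k in range(len(theta)):
        value = theta[k] - h * grad[k]
        if add_noise:
            value += math.sqrt(2.0 * h) * random.gauss(0.0, 1.0)
        new_theta.append(value)
    return new_theta
\end{verbatim}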
\textbf{Analysis in Wasserstein distance}
\subsubsection{Definitions and Notations in Markov Chain theory}
Recall the following definitions:
$\mathcal{P}_2(\mathbb{R}^d)$ the set of probability measures with finite second moment.\\
$\mathcal{B}(\mathbb{R}^d)$ the Borel $\sigma$-algebra of $\mathbb{R}^d$.\\
For $\lambda, \nu \in \mathcal{P}_2(\mathbb{R}^d)$, we define the Wasserstein distance by
$$W_2(\lambda, \nu) =\inf_{\xi \in \Pi(\lambda, \nu)}\left(\int_{\mathbb{R}^d \times \mathbb{R}^d}||\theta-\vartheta||^2\, \xi(d\theta, d\vartheta)\right)^{\frac{1}{2}}$$
where $\Pi(\lambda, \nu)$ is the set of probability measures $\xi$ on $\mathcal{B}(\mathbb{R}^d)\otimes\mathcal{B}(\mathbb{R}^d)$ satisfying, for all $A \in \mathcal{B}(\mathbb{R}^d)$, $\xi(A \times \mathbb{R}^d)= \lambda(A)$ and $\xi (\mathbb{R}^d \times A) = \nu(A)$.\\
For any probability measure $\lambda$ on $\mathcal{B}(\mathbb{R}^d)$, we define $\lambda R$ for all $A \in \mathcal{B}(\mathbb{R}^d)$ by $\lambda R(A) = \int_{\mathbb{R}^d}\lambda(d\theta)R(\theta, A)$.\\
For all $k\in \mathbb{N}^*$, we define the Markov kernel $R^k$ recursively by $R^1 = R$ and, for all $\theta \in \mathbb{R}^d$ and $A \in \mathcal{B}(\mathbb{R}^d)$, $R^{k+1}(\theta, A) = \int_{\mathbb{R}^d} R^k(\theta, d\vartheta)R(\vartheta, A).$\\
A probability measure $\bar{\pi}$ is invariant for $R$ if $\bar{\pi}R = \bar{\pi}$.\\
The LMC, SGLD, SGD and SGLDFP algorithms are homogeneous Markov chains, with Markov kernels denoted $R_{LMC}$, $R_{SGLD}$, $R_{SGD}$ and $R_{FP}$ respectively.
\subsubsection{Results}
Recall our assumptions:
\begin{enumerate}[label={\bf H\arabic*}]
\item For all $i \in \{0,\dots,N\}$, $U_i$ is four times continuously differentiable and, for all $j \in \{2, 3, 4\}$, $\sup_{\theta \in \mathbb{R}^d}||D^j U_i(\theta)||\leq \Tilde{L}$. In particular, for all $i\in \{0, \dots, N\}$, $U_i$ is $\Tilde{L}$-gradient Lipschitz, i.e. for all $\theta_1, \theta_2\in \mathbb{R}^d$, $||\nabla U_i(\theta_1) - \nabla U_i (\theta_2)|| \leq \Tilde{L}||\theta_1 - \theta_2||.$
\item U is m-strongly convex, i.e. for all $\theta_1, \theta_2 \in \mathbb{R}^d$,$\left< \nabla U(\theta_1) - \nabla U(\theta_2), \theta_1 - \theta_2\right>\geq m ||\theta_1 - \theta_2||^2.$
\item For all $i\in\{0,...,N\}$, $U_i$ is convex.
\end{enumerate}
For the Lemma, Theorem and Corollary below, we assume H1, H2 and H3.
\begin{lemma}
For any step size $h \in (0, \frac{2}{L})$, $R_{SGLD}$ (respectively $R_{LMC}, R_{SGD}, R_{FP}$) has a unique invariant measure $\pi_{SGLD}\in \mathcal{P}_2(\mathbb{R}^d)$ (respectively $\pi_{LMC}, \pi_{SGD}, \pi_{FP}$). In addition, for all $h \in (0, \frac{1}{L}]$, $\theta\in \mathbb{R}^d$ and $k\in\mathbb{N}$,
$$W_2^2(R_{SGLD}^k(\theta, \cdot), \pi_{SGLD})\leq(1-mh)^k\int_{\mathbb{R}^d}||\theta-\vartheta||^2\pi_{SGLD}(d\vartheta)$$
The same inequality holds for LMC, SGD and SGLDFP.
\end{lemma}
\begin{theorem}
For all $h\in(0,\frac{1}{L}]$, $\lambda, \nu\in \mathcal{P}_2(\mathbb{R}^d)$ and $n\in\mathbb{N}$, we have the following upper bounds in Wasserstein distance between
\begin{enumerate}
\item
LMC and SGLDFP,
\begin{dmath}
W_2^2(\lambda R_{LMC}^n, \nu R_{FP}^n)\leq(1-mh)^nW_2^2(\lambda, \nu) + \frac{2L^2h d}{pm^2}+\frac{L^2h^2}{p}n(1-mh)^{n-1}\int_{\mathbb{R}^d}||\vartheta-\theta*||^2 \mu(d\vartheta)
\end{dmath},
\item
the Langevin diffusion and LMC,
\begin{dmath}
W_2^2(\lambda R_{LMC}^n, \mu P_{nh})\leq2(1-\frac{mLh}{m+L})^nW_2^2(\lambda, \mu)+dh\frac{m+L}{2m}(3+\frac{L}{m})(\frac{13}{6}+\frac{L}{m})\\+ne^{-(\frac{m}{2})h(n-1)}L^3h^3(1+\frac{m+L}{2m})\int_{\mathbb{R}^d}||\vartheta - \theta*||^2 \mu(d\vartheta)
\end{dmath},
\item
SGLD and SGD
\begin{dmath}
W_2^2(\lambda R_{SGLD}^n, \mu R_{SGD}^n)\leq (1-mh)^n W_2^2(\lambda, \mu)+\frac{(2d)}{m}.
\end{dmath}
\end{enumerate}
\end{theorem}
Proof omitted.
\begin{cor}
Set $h = \frac{\eta}{N}$ with $\eta \in (0, \frac{1}{2L}]$ and assume that $\liminf_{N \to \infty}mN^{-1}>0$. Then
\begin{enumerate}
\item
for all $n \in \mathbb{N}$, we get $W_2(R_{LMC}^n(\theta*, \cdot), R_{FP}^{n}(\theta*, \cdot)) = \sqrt{d\eta}\mathcal{O}(N^{-\frac{1}{2}})$ and $W_2(\pi_{LMC}, \pi_{FP}) = \sqrt{d\eta}\mathcal{O}(N^{-\frac{1}{2}})$.
\item
for all $n\in \mathbb{N}$, we get $W_2(R_{SGLD}^{n}(\theta*, \cdot), R^n_{SGD}(\theta*, \cdot)) = \sqrt{d}\mathcal{O}(N^{-\frac{1}{2}})$, and $W_2(\pi_{SGLD}, \pi_{SGD}) = \sqrt{d}\mathcal{O}(N^{-\frac{1}{2}})$.
\end{enumerate}
\end{cor} | {
"alphanum_fraction": 0.6583524778,
"avg_line_length": 74.3170731707,
"ext": "tex",
"hexsha": "6878885cc6256e7b626614dd828e7c2eeaf8a0b2",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-01-19T17:44:19.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-01-19T17:44:19.000Z",
"max_forks_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "swyoon/LangevinMC",
"max_forks_repo_path": "WriteUp/report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "swyoon/LangevinMC",
"max_issues_repo_path": "WriteUp/report.tex",
"max_line_length": 407,
"max_stars_count": 10,
"max_stars_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Tom271/LangevinMC",
"max_stars_repo_path": "WriteUp/report.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-04T13:35:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-02-07T12:51:19.000Z",
"num_tokens": 2477,
"size": 6094
} |
\chapter{Requirements}
From the user stories a number of functional requirements are established.
A user should be able to:
\begin{enumerate}
\item share details of a specific performance on Facebook.
\item see detailed information of a specific performance including performance attributes and meta data on this.
\item see all performances recorded in the system.
\item see an overview of the latest performances and some averaged performance attributes.
\end{enumerate}
Furthermore, some non-functional requirements are identified. The user must be able to interact easily and quickly with the application using expected gestures.
With this in mind, a number of mock ups to support the requirements are drafted, as shown in figure \ref{fig:mockups}. To the left is the main overview, which shows a graph representing the history of performances as well as some attributes for the overall performance level of the athlete. By swiping to the left, a history list is shown with all recorded performances. By selecting one, a details view is presented with all attributes as well as the recorded meta data.
On all views it is possible to get a menu by clicking the upper left corner where a Profile option is available to let the athlete modify some personal details.
\graphic{1}{mockups}{GUI mock ups to enable identified requirements}{fig:mockups}
| {
"alphanum_fraction": 0.8085585586,
"avg_line_length": 66.6,
"ext": "tex",
"hexsha": "914b469d9f0bb7947adda877535411b67d766240",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ed11b8d590e812078afb5d8760e0fcb4aa1c4cad",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "racketometer/report",
"max_forks_repo_path": "files/Requirements_specification.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ed11b8d590e812078afb5d8760e0fcb4aa1c4cad",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "racketometer/report",
"max_issues_repo_path": "files/Requirements_specification.tex",
"max_line_length": 449,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ed11b8d590e812078afb5d8760e0fcb4aa1c4cad",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "racketometer/report",
"max_stars_repo_path": "files/Requirements_specification.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 259,
"size": 1332
} |
\chapter{Textbooks and articles that use MARSS modeling for population modeling}
\label{chap:SSreferences}
\section*{Textbooks Describing the Estimation of Process and Non-process Variance}
There are many textbooks on Kalman filtering and estimation of state-space models. The following are a sample of books on state-space modeling that we have found especially helpful.
\bigskip
Shumway, R. H., and D. S. Stoffer. 2006. Time series analysis and its applications. Springer-Verlag.
Harvey, A. C. 1989. Forecasting, structural time series models and the Kalman filter. Cambridge University Press.
Durbin, J., and S. J. Koopman. 2001. Time series analysis by state space methods. Oxford University Press.
Kim, C. J. and Nelson, C. R. 1999. State space models with regime switching. MIT Press.
King, R., G. Olivier, B. Morgan, and S. Brooks. 2009. Bayesian analysis for population ecology. CRC Press.
Giovanni, P., S. Petrone, and P. Campagnoli. 2009. Dynamic linear models in R. Springer-Verlag.
Pole, A., M. West, and J. Harrison. 1994. Applied Bayesian forecasting and time series analysis. Chapman and Hall.
Bolker, B. 2008. Ecological models and data in R. Princeton University Press.
West, M. and Harrison, J. 1997. Bayesian forecasting and dynamic models. Springer-Verlag.
Tsay, R. S. 2010. Analysis of financial time series. Wiley.
\section*{Maximum-likelihood papers}
This is just a sample of the papers from the population modeling literature.
\bigskip
de Valpine, P. 2002. Review of methods for fitting time-series models with process and observation error and likelihood calculations for nonlinear, non-Gaussian state-space models. Bulletin of Marine Science 70:455-471.
de Valpine, P. and A. Hastings. 2002. Fitting population models incorporating process noise and observation error. Ecological Monographs 72:57-76.
de Valpine, P. 2003. Better inferences from population-dynamics experiments using Monte Carlo state-space likelihood methods. Ecology 84:3064-3077.
de Valpine, P. and R. Hilborn. 2005. State-space likelihoods for nonlinear fisheries time series. Canadian Journal of Fisheries and Aquatic Sciences 62:1937-1952.
Dennis, B., J.M. Ponciano, S.R. Lele, M.L. Taper, and D.F. Staples. 2006. Estimating density dependence, process noise, and observation error. Ecological Monographs 76:323-341.
Ellner, S.P. and E.E. Holmes. 2008. Resolving the debate on when extinction risk is predictable. Ecology Letters 11:E1-E5.
Erzini, K. 2005. Trends in NE Atlantic landings (southern Portugal): identifying the relative importance of fisheries and environmental variables. Fisheries Oceanography 14:195-209.
Erzini, K., Inejih, C. A. O., and K. A. Stobberup. 2005. An application of two techniques for the analysis of short, multivariate non-stationary time-series of Mauritanian trawl survey data ICES Journal of Marine Science 62:353-359.
Hinrichsen, R.A. and E.E. Holmes. 2009. Using multivariate state-space models to study spatial structure and dynamics. In Spatial Ecology (editors Robert Stephen Cantrell, Chris Cosner, Shigui Ruan). CRC/Chapman Hall.
Hinrichsen, R.A. 2009. Population viability analysis for several populations using multivariate state-space models. Ecological Modelling 220:1197-1202.
Holmes, E.E. 2001. Estimating risks in declining populations with poor data. Proceedings of the National Academy of Sciences of the United States of America 98:5072-5077.
Holmes, E.E. and W.F. Fagan. 2002. Validating population viability analysis for corrupted data sets. Ecology 83:2379-2386.
Holmes, E.E. 2004. Beyond theory to application and evaluation: diffusion approximations for population viability analysis. Ecological Applications 14:1272-1293.
Holmes, E.E., W.F. Fagan, J.J. Rango, A. Folarin, S.J.A., J.E. Lippe, and N.E. McIntyre. 2005. Cross validation of quasi-extinction risks from real time series: An examination of diffusion approximation methods. U.S. Department of Commerce, NOAA Tech. Memo. NMFS-NWFSC-67, Washington, DC.
Holmes, E.E., J.L. Sabo, S.V. Viscido, and W.F. Fagan. 2007. A statistical approach to quasi-extinction forecasting. Ecology Letters 10:1182-1198.
Kalman, R.E. 1960. A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82:35-45.
Lele, S.R. 2006. Sampling variability and estimates of density dependence: a composite likelihood approach. Ecology 87:189-202.
Lele, S.R., B. Dennis, and F. Lutscher. 2007. Data cloning: easy maximum likelihood estimation for complex ecological models using Bayesian Markov chain Monte Carlo methods. Ecology Letters 10:551-563.
Lindley, S.T. 2003. Estimation of population growth and extinction parameters from noisy data. Ecological Applications 13:806-813.
Ponciano, J.M., M.L. Taper, B. Dennis, S.R. Lele. 2009. Hierarchical models in ecology: confidence intervals, hypothesis testing, and model selection using data cloning. Ecology 90:356-362.
Staples, D.F., M.L. Taper, and B. Dennis. 2004. Estimating population trend and process variation for PVA in the presence of sampling error. Ecology 85:923-929.
Zuur, A. F., and G. J. Pierce. 2004. Common trends in Northeast Atlantic squid time series. Journal of Sea Research 52:57-72.
Zuur, A. F., I. D. Tuck, and N. Bailey. 2003. Dynamic factor analysis to estimate common trends in fisheries time series. Canadian Journal of Fisheries and Aquatic Sciences 60:542-552.
Zuur, A. F., R. J. Fryer, I. T. Jolliffe, R. Dekker, and J. J. Beukema. 2003. Estimating common trends in multivariate time series using dynamic factor analysis. Environmetrics 14:665-685.
\section*{Bayesian papers}
This is a sample of the papers from the population modeling and animal tracking literature.
\bigskip
Buckland, S.T., K.B. Newman, L. Thomas and N.B. Koestersa. 2004. State-space models for the dynamics of wild animal populations. Ecological modeling 171:157-175.
Calder, C., M. Lavine, P. M{\"u}ller, J.S. Clark. 2003. Incorporating multiple sources of stochasticity into dynamic population models. Ecology 84:1395-1402.
Chaloupka, M. and G. Balazs. 2007. Using Bayesian state-space modelling to assess the recovery and harvest potential of the Hawaiian green sea turtle stock. Ecological Modelling 205:93-109.
Clark, J.S. and O.N. Bj{\o}rnstad. 2004. Population time series: process variability, observation errors, missing values, lags, and hidden states. Ecology 85:3140-3150.
Jonsen, I.D., R.A. Myers, and J.M. Flemming. 2003. Meta-analysis of animal movement using state space models. Ecology 84:3055-3063.
Jonsen, I.D, J.M. Flemming, and R.A. Myers. 2005. Robust state-space modeling of animal movement data. Ecology 86:2874-2880.
Meyer, R. and R.B. Millar. 1999. BUGS in Bayesian stock assessments. Can. J. Fish. Aquat. Sci. 56:1078-1087.
Meyer, R. and R.B. Millar. 1999. Bayesian stock assessment using a state-space implementation of the delay difference model. Can. J. Fish. Aquat. Sci. 56:37-52.
Meyer, R. and R.B. Millar. 2000. Bayesian state-space modeling of age-structured data: fitting a model is just the beginning. Can. J. Fish. Aquat. Sci. 57:43-50.
Newman, K.B., S.T. Buckland, S.T. Lindley, L. Thomas, and C. Fern{\'a}ndez. 2006. Hidden process models for animal population dynamics. Ecological Applications 16:74-86.
Newman, K.B., C. Fern{\'a}ndez, L. Thomas, and S.T. Buckland. 2009. Monte Carlo inference for state-space models of wild animal populations. Biometrics 65:572-583
Rivot, E., E. Pr{\'e}vost, E. Parent, and J.L. Baglini{\`e}re. 2004. A Bayesian state-space modelling framework for fitting a salmon stage-structured population dynamic model to multiple time series of field data. Ecological Modeling 179:463-485.
Schnute, J.T. 1994. A general framework for developing sequential fisheries models. Canadian J. Fisheries and Aquatic Sciences 51:1676-1688.
Swain, D.P., I.D. Jonsen, J.E. Simon, and R.A. Myers. 2009. Assessing threats to species at risk using stage-structured state-space models: mortality trends in skate populations. Ecological Applications 19:1347-1364.
Thogmartin, W.E., J.R. Sauer, and M.G. Knutson. 2004. A hierarchical spatial model of avian abundance with application to cerulean warblers. Ecological Applications 14:1766-1779.
Trenkel, V.M., D.A. Elston, and S.T. Buckland. 2000. Fitting population dynamics models to count and cull data using sequential importance sampling. J. Am. Stat. Assoc. 95:363-374.
Viljugrein, H., N.C. Stenseth, G.W. Smith, and G.H. Steinbakk. 2005. Density dependence in North American ducks. Ecology 86:245-254.
Ward, E.J., R. Hilborn, R.G. Towell, and L. Gerber. 2007. A state-space mixture approach for estimating catastrophic events in time series data. Can. J. Fish. Aquat. Sci., Can. J. Fish. Aquat. Sci. 644:899-910.
Wikle, C.K., L.M. Berliner, and N. Cressie. 1998. Hierarchical Bayesian space-time models. Journal of Environmental and Ecological Statistics 5:117-154
Wikle, C.K. 2003. Hierarchical Bayesian models for predicting the spread of ecological processes. Ecology 84:1382-1394.
| {
"alphanum_fraction": 0.7686765684,
"avg_line_length": 73.9508196721,
"ext": "tex",
"hexsha": "db3d9e370992005244d983a4fa13a6635b1f90a8",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2022-01-28T07:06:03.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-12-15T20:32:14.000Z",
"max_forks_repo_head_hexsha": "62c874483d58a4ffeb354888b606e4d3cf355838",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "ashaffer/MARSS",
"max_forks_repo_path": "vignettes/tex/SSreferences.tex",
"max_issues_count": 7,
"max_issues_repo_head_hexsha": "62c874483d58a4ffeb354888b606e4d3cf355838",
"max_issues_repo_issues_event_max_datetime": "2022-02-09T21:35:01.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-12-17T18:34:48.000Z",
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "ashaffer/MARSS",
"max_issues_repo_path": "vignettes/tex/SSreferences.tex",
"max_line_length": 288,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "62c874483d58a4ffeb354888b606e4d3cf355838",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "ashaffer/MARSS",
"max_stars_repo_path": "vignettes/tex/SSreferences.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-02T09:18:21.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-12-18T22:19:56.000Z",
"num_tokens": 2541,
"size": 9022
} |
\pdfoutput=1
\documentclass{l4proj}
%
% put any packages here
%
\usepackage{listings}
\usepackage{color}
\usepackage{float}
\usepackage{natbib}
\usepackage{url}
\usepackage{enumitem}
\definecolor{dkgreen}{rgb}{0,0.6,0}
\definecolor{gray}{rgb}{0.5,0.5,0.5}
\definecolor{mauve}{rgb}{0.58,0,0.82}
\lstset{
language=Scala,
aboveskip=3mm,
belowskip=3mm,
showstringspaces=false,
columns=flexible,
basicstyle={\small\ttfamily},
numbers=none,
numberstyle=\tiny\color{gray},
keywordstyle=\color{blue},
commentstyle=\color{dkgreen},
stringstyle=\color{mauve},
breaklines=true,
breakatwhitespace=true,
tabsize=2
}
\lstdefinelanguage{Ini}
{
basicstyle=\ttfamily\small,
columns=fullflexible,
morecomment=[s][\color{Orchid}\bfseries]{[}{]},
morecomment=[l]{\#},
morecomment=[l]{;},
commentstyle=\color{gray}\ttfamily,
stringstyle=\color{mauve},
numberstyle=\tiny\color{gray},
morekeywords={},
otherkeywords={=,:},
keywordstyle={\color{blue}\bfseries}
}
\newcommand{\code}[1]{\texttt{#1}}
\begin{document}
\title{Who To Follow In Context}
\author{Darren Burns}
\date{\today}
\maketitle
\begin{abstract}
A large number of users of Twitter rely on the service to follow discussions relating to ongoing events. The ``Who To Follow In Context'' project assists these users in finding other Twitter users who have been recently discussing such events. In particular, this project introduces real-time, frequently updating suggestions of users which model the evolving nature of the events that these recommendations are being made for. By architecting and designing the project as a message-driven, reactive application, we are able to process incoming streams of social media data in real-time whilst retaining overall system responsiveness. After implementation of the design concluded, it was extensively evaluated using a series of questionnaires, user evaluations, and experiments. This evaluation suggested that users appreciate the problem areas targeted by the project. It also showed that users found the system to be highly usable, and that they were overall pleased with the suggestions made.
\end{abstract}
\renewcommand{\abstractname}{Acknowledgements}
\begin{abstract}
I would like to thank my supervisor Dr Craig Macdonald for his extensive support and guidance throughout the course of the project.
\end{abstract}
\educationalconsent
%
%NOTE: if you include the educationalconsent (above) and your project is graded an A then
% it may be entered in the CS Hall of Fame
%
\tableofcontents
%==============================================================================
\chapter{Introduction}
\pagenumbering{arabic}
This chapter will introduce the goals and motivations behind the ``Who To Follow In Context'' project and examine related products and research.
\section{Aims}
The aim of this project was to create an application which provides a relevant set of Twitter accounts to follow for information about events. In particular, we are interested in recommending accounts which are relevant at the current moment in time. Therefore, a large portion of the project is concerned with providing real-time updates and corrections to recommendations based on the evidence provided via a stream of Twitter statuses.
\section{Motivations}
Guiding users towards the information they require is becoming an increasingly important aspect of the development of online services such as social media and e-commerce websites. Without this guidance, users are much less likely to find what they are looking for, and therefore less likely to remain engaged with the service.
The impact of this guidance should not be understated: Celma \& Lamere noted that two thirds of movies rented via Netflix were through recommendation, and 35\% of sales made via Amazon are through recommendation \cite{celmaLamere}. For large companies, this may equate to billions of pounds of additional revenue.
From a social media perspective, recommending interesting people to follow is exceptionally important in retaining users. A Deutsche Bank market research survey recently showed that the primary reasons people quit Twitter are directly related to the service not meeting their information needs \cite{leavingTwitter}. The top two reasons cited for quitting were:
\begin{enumerate}
\item \textit{``I was getting the information from somewhere else.'' - 82\% of respondents. }
\item \textit{``There was no useful information on Twitter.'' - 77\% of respondents.}
\end{enumerate}
Hence, by directing users towards Twitter accounts which meet their information needs, it is hypothesised that they are less likely to quit the service. For social media platforms in particular, retaining users is of vital importance since it is directly associated with their primary source of income: advertisements.
Another study by the American Press Institute showed that 28\% of people who use Twitter do so in order to follow live events such as sports matches, TV shows, and breaking news \cite{twitterNews}. These users have different information needs from those who use it as a standard social network. When looking for information about ongoing events, intuition tells us that users are more interested in people who are \textit{currently} discussing the event than in those who have discussed a similar event in the past. Suggesting accounts with a general, long-term interest in the topic does not assist the user in finding the latest, up-to-the-moment information they desire.
Although several products exist today which attempt to solve the problem of user recommendation on social media, they tend to disregard the quickly evolving nature of live events by providing a static set of results and focus on suggesting users based on their long-term interests. Given the rapidly changing and sometimes fleeting nature of events, immediate consideration of new tweets may give users insight into how new ``event experts'' emerge as they become relevant, and how that relevance changes over time. By providing a user suggestion service centered around ongoing events, we can assist the 28\% of users who rely on Twitter for live ``infotainment'' in meeting their information needs.
\section{Related Products}
\subsection{Twitter Search}
Twitter's existing search service (shown in Figure \ref{twittersearch}) allows searching for accounts based on a search query. This works similarly to traditional expert search engines in that the user enters a query and the service returns a list of users that attempt to satisfy the user's need for expertise based on that query. While this service works well for suggesting users based on their \textit{long term} interests, it is not so useful for an ongoing event which may only occur for a few hours. Figure \ref{twittersearch} presents a scenario where a user has visited Twitter Search in order to try and find information about the 2016 BRIT Awards, which were ongoing at the time. By searching for the hashtag associated with the event (\#brits), the user may expect to see suggestions for accounts currently discussing the BRIT Awards. However, the service appears to ignore the fact that the query is a hashtag, and makes suggestions based on Twitter usernames and biographies. Whilst usernames and biographies may signal the long-term interests of a user, they are independent from the discussions that the user has recently participated in, and thus have little use in determining the \textit{immediate} relevance of the user with respect to a live event.
\begin{figure}[H]
\centering
\includegraphics[scale=0.45]{twittersearch.png}
\caption{Twitter's own search service. Searching for a hashtag appears to either disregard or give very little weight to users who use the hashtag in their tweets.}
\label{twittersearch}
\end{figure}
In order to determine the short-term interests of Twitter accounts, the ``Who To Follow In Context'' project relies on the content of tweets recently posted by an account rather than on its username and bio. Additionally,
evidence from new tweets is immediately applied, and suggestions update in real-time, modelling the evolving nature of the events that many users are interested in. If a query result page is left open, the ranking of accounts will be adjusted as people start and stop discussing the query topic, providing users with an insight into how new participants in the discussion emerge. This differs from Twitter Search, which provides a static set of results based on a query.
\subsection{Twitter's Whom To Follow Service}
``Whom To Follow'' (known as ``Who To Follow'' until March 2016) is the service used at Twitter which recommends accounts to follow based on the people already followed by an account. In order to do this, it relies on a ``user graph'' \cite{twitterWTF}. The user graph is a directed graph where vertices represent Twitter accounts, and edges represent a ``following'' relationship. That is, an edge $(A, B)$ in the directed graph means that user $A$ \textit{follows} user $B$ on Twitter, and is therefore subscribed to their tweets.
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{whomtofollow.png}
\caption{The user interface of Twitter's Whom To Follow service}
\label{whomtofollow}
\end{figure}
Since the service relies on the relationships between users in the user graph, it suggests accounts similar to those that you already follow. While this provides excellent suggestions based on long-term interests, it cannot be configured or queried in order to provide recommendations under a different context (such as events). Additionally, recommendations for users are done in batches, and are therefore not updated in real-time. This means that even if the user graph did provide signals as to the immediate relevance of a Twitter user with respect to an ongoing event, it is likely that this relevance will have decayed by the time the recommendations are delivered.
Since the initial deployment of this service in 2010, Twitter has announced that it is reimplementing the service using Pig and Hadoop to improve scalability, and also to apply machine learning approaches using a wider variety of signals. It is unclear whether the new system is now in place.
\subsection{Klout}
Klout provides a well-known system for ranking the influence of users across numerous social media platforms and online communities. It takes into account over 3600 features from these sources, and processes around 500 million user interactions each day \cite{klout}. Although Klout does not attempt to solve the exact same problem as this project (Klout ranks users based on a static context: \textit{``How influential is this user?''} rather than their relevance based on a query), its well documented architecture and popularity in the domain make it worthy of mention.
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{klout.png}
\caption{The three users with the highest Klout scores.}
\label{kloutimage}
\end{figure}
\section{Relevant Research}
\subsection{Searching For Quality Microblog Posts}
This paper examines an array of metrics for determining whether a post on a microblog is high quality or not \cite{Vosecky2012}. It introduces a set of features which can be extracted from tweets, and examines the impact of the reputation of external URLs linked to from the tweet. Many of the techniques examined in this paper are applied in the filtering stages of this project, in order to discard tweets which do not meet a threshold on the extracted features.
\section{Summary}
After introducing the goals and motivations behind the project, existing services which tackle similar problems were examined. These services were shown to be effective at finding users based on their long-term interests, but to be lacking in the context of live events. The approach taken by the ``Who To Follow In Context'' application was also introduced, in terms of how it attempts to solve the flaws present in the existing services.
\chapter{Requirements}
Requirements were gathered during the initial meetings between the author and the project supervisor. These requirements were frequently updated as new considerations were made and new restrictions realised, but in general, they aimed to solve the issues that the related products listed in Chapter 1 have when it comes to live events. To ensure the customer (the project supervisor) that progress was being made in satisfying these requirements, weekly status reports were produced during the implementation stage.
\section{Functional Requirements}
Functional requirements were gathered in order to describe the specific behaviours the system may offer. In order to prioritise these requirements, the \textit{MoSCoW framework} was used. That is, requirements were categorised in terms of their importance into the sets ``must have'', ``should have'', ``could have'', ``won't have''.
\subsection{Must Have}
Requirements listed in this section were vital to the success of the project. Without satisfying all of these requirements, the project would be regarded as unsuccessful.
The system \textit{must have} the following capabilities:
\begin{enumerate}[label=\textbf{M.\arabic*}]
\item Present a web-based user interface providing querying and result-viewing capabilities.
\item Read tweets live from the Twitter Streaming API.
\item Extract the following features from incoming tweets:
\begin{enumerate}
\item The length of the tweet.
\item The number of correctly spelled words in the tweet.
\item The number of capitalised words in the tweet.
\item The hashtags used in the tweet.
\item The number of ``likes'' the tweet had at the time it was processed.
\item The number of retweets the tweet had at the time it was processed.
\item The number of URLs the tweet contains.
\item The number of users mentioned in the tweet.
\end{enumerate}
\item Filter out Twitter users who post low quality tweets.
\item Index tweets and relevant metadata so that they can be quickly retrieved given a user query.
\item Display the results of the query as they update in real-time.
\item Display the timeline of Twitter accounts suggested by the application, without leaving the application.
\item Retrieve user suggestions based on an event identifying ``hashtag'' query.
\end{enumerate}
\subsection{Should Have}
``Should have'' requirements were considered important for the project, however they are not given the same level of importance as ``must have'' requirements. The exclusion of some of these requirements would not result in an unsuccessful project.
The system \textit{should have} the following capabilities:
\begin{enumerate}[label=\textbf{S.\arabic*}]
\item Read historical tweets from a file to improve evaluation capabilities.
\item When reading tweets from a file, any user timelines viewed should be a snapshot of their timeline corresponding to the time the tweets in the file were sampled from.
\item Alter its behaviour according to options defined in a configuration file.
\item Allow users to provide feedback in order to evaluate and improve the relevance of future results.
\item Provide users with visual feedback to inform them of the current status of the system.
\item Store tweet features and relevance judgements for evaluation purposes.
	\item Examine a user's timeline for additional tweets that can be used as evidence for their relevance.
\end{enumerate}
\subsection{Could Have}
``Could have'' requirements are those that were not considered important, but could improve some aspect of the project such as result relevance or user experience.
The system \textit{could have} the following capabilities:
\begin{enumerate}[label=\textbf{C.\arabic*}]
\item Show users any new queries as soon as they are entered in order to encourage interaction.
\item Extract the reputation of the URLs a tweet contains as an additional feature.
\item Extract the sentiment of a tweet (e.g. based on the number of positive or negative emoticons used).
\item Show how user relevance changes over time.
\item Extract counts of several punctuation marks used within a tweet. Research has suggested that these may be an indicator of microblog post quality \cite{Vosecky2012}.
\end{enumerate}
\subsection{Won't Have}
``Won't have'' requirements are those which the author and the project supervisor agreed should not be implemented.
The system \textit{won't have} the following capabilities:
\begin{enumerate}[label=\textbf{W.\arabic*}]
\item Display a live stream of relevant tweets alongside the suggested results for each query. This feature was not implemented due to the restriction imposed by Twitter that each application may only have a single stream open at a time.
\item Provide the ability to ``follow'' users when providing relevance feedback.
\end{enumerate}
\section{Non-Functional Requirements}
In addition to the functional requirements defined in the previous section, a set of non-functional requirements was agreed on. These requirements describe criteria by which the system should be judged, rather than examining the behaviour it offers. The application was designed and implemented taking these requirements into account as much as possible. The system should:
\begin{enumerate}[label=\textbf{N.\arabic*}]
\item Support multiple users concurrently without there being a noticeable impact on performance.
\item Support multiple queries concurrently without there being a noticeable impact on performance.
\item Be robust enough to handle a variety of issues such as network problems without failing.
\item Be constructed in a scalable manner so that it can easily be extended to run on a cluster of machines if required in the future.
\item Be constructed in a maintainable manner.
\item Display a response (whether that is results or an indication that no results were available) to queries within one second of them being issued.
\item Operate within the restrictions imposed by the Twitter REST API v1.1 rate limits.
\end{enumerate}
\section{Summary}
This chapter has presented both the functional and non-functional requirements for the ``Who To Follow In Context'' project. The functional requirements were prioritised using the MoSCoW framework, and this was used to steer the implementation of the project.
\chapter{Architecture}
This chapter describes the architecture of the system, and goes on to discuss how technologies used within this architecture were evaluated so as to best meet the requirements.
\section{Model-View-Controller}
The application loosely follows the \textit{Model-View-Controller} architectural pattern. This architecture consists of three high level layers: the \textit{model}, the \textit{view}, and the \textit{controller}. The model is responsible for the storage and retrieval of data based on instructions from the controller. The view is the user interface of the system, which users can interact with. These interactions are sent to the controller, which is the mediator of communication between the model and the view.
\begin{figure}[H]
\centering
\includegraphics[scale=0.70]{mvc.pdf}
\caption{The high-level architecture of the application using the Model-View-Controller architectural pattern.}
\label{mvc}
\end{figure}
``Who To Follow In Context'' is designed with a web interface, as per requirement \textbf{M.1}. This interface is implemented entirely on the client, and communication between client and server is handled using HTTP (Hypertext Transfer Protocol). The client can send HTTP requests to the server, and the controller will handle that request by communicating it to the relevant component within the application. HTTP requests sent from the view to the controller can either request that data be retrieved from or stored in the model layer. Additionally, an HTTP request can ask the server to upgrade the connection to a WebSocket \cite{websocket}. A request of this form is required in the case that a continuous stream of data is to be sent to the user (this is required to satisfy requirement \textbf{M.6}, for example).
\subsection{Model Layer Technologies}
Terrier, Redis, and MongoDB provide the persistence layer of the application. Each of these technologies provide different benefits and so the decision of which to use and where to use it varied depending on context.
\subsubsection{Indexing Tweets}
The model layer of the application contains an ``indexer'' component which stores incoming tweets in an index, so that they can be efficiently retrieved. The index structure used is provided by \textit{Terrier}, an information retrieval platform developed at the University of Glasgow \cite{macdonald2012puppy}. The \code{MemoryIndex} structure used enables the index to be built online, meaning that it can be queried as it is being constructed. This capability was vital in order to satisfy requirements \textbf{M.5} and \textbf{M.6}.
\subsubsection{Feature Storage}
\textit{Redis} is an in-memory data structure store which is used for the storage of extracted features (requirement \textbf{S.6}) \cite{redis}. By keeping data in memory it enables quick reads and writes in comparison to on-disk solutions such as relational or NoSQL databases. Therefore, at the cost of memory, the server spends less time reading and writing data and has more resources to spare for other tasks. Redis's built-in data structures also simplified the implementation of certain features. For example, a Redis SortedSet data structure is used for the calculation of the median time since the user last made use of the query term as a hashtag. Since the application must process vastly more reads than writes for individual users, allowing Redis to perform the sorting step at write time makes the system more efficient than if we were required to sort an array each time the latest results were retrieved for a user.
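As an illustration of this pattern, the sketch below uses the Jedis Java client (the specific Redis client library used in the project is not restated here); the key layout and helper names are purely illustrative. Each use of a hashtag is written with its timestamp as the sorted set score, so the far more frequent read path only has to pick the middle element of an already-sorted range.
\begin{lstlisting}[caption={Illustrative sketch of median calculation over a Redis SortedSet (assumes the Jedis client).}]
import redis.clients.jedis.Jedis
import scala.collection.JavaConverters._

val jedis = new Jedis("localhost", 6379)

// Record one use of a hashtag by a user; Redis keeps the sorted set
// ordered by score (the timestamp) at write time.
def recordHashtagUse(userId: Long, hashtag: String, timestamp: Long): Unit =
  jedis.zadd("user:" + userId + ":hashtag:" + hashtag, timestamp.toDouble, timestamp.toString)

// Read back the already-sorted timestamps and take the middle element,
// avoiding any sorting work on the read path.
def medianTimestamp(userId: Long, hashtag: String): Option[Long] = {
  val sorted = jedis.zrange("user:" + userId + ":hashtag:" + hashtag, 0, -1).asScala.toSeq
  if (sorted.isEmpty) None else Some(sorted(sorted.size / 2).toLong)
}
\end{lstlisting}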
\subsubsection{Miscellaneous Storage}
\textit{MongoDB} is a NoSQL database used for the caching of Twitter user metadata in order to reduce the number of Twitter API requests required \cite{mongo}. This assists the application in staying within the restrictions of the Twitter API rate-limiting policy (as per requirement \textbf{N.7}). For example, we store the URL of the user's profile picture and timeline colour in MongoDB and check for its existence here before making an additional API request for the information. We also store user relevance feedback in MongoDB for evaluation purposes.
NoSQL was used due to the lack of need of relational features. Each category of item stored in this database is entirely independent from the others, and thus guaranteed that relational database features such as joins would not be necessary.
\subsection{View Layer Technologies}
The entirety of the ``view'' layer of the system is handled on the front-end. On receiving a request for the application's index page, the server will send an empty HTML document to the client containing only a ``mount'' element, which the client side will render the view within. The view layer is constructed using a number of different technologies such as TypeScript, React, and React Router.
\subsubsection{Programming Language}
The programming language used to create the front-end of the application was \textit{TypeScript}. TypeScript is a typed superset of JavaScript which provides compile-time type checking which eliminates a large number of run-time errors \cite{typescript}. Existing JavaScript libraries can be used from within TypeScript by including a ``type definitions'' file in the project. The purpose of such files is to provide a mapping from the non-typed JavaScript constructs to their corresponding TypeScript types. Since TypeScript is a type checked language, we also gain the benefits of code-completion and refactoring capabilities within certain integrated development environments.
The front-end was initially written in JavaScript and then migrated to TypeScript as the number of type related errors began to increase due to the increasing complexity of the front-end. Although the initial migration to TypeScript was extremely time consuming, the number of run-time errors encountered significantly decreased, making the system far more robust than before (thereby furthering requirement \textbf{N.3}).
\subsubsection{User Interface}
The components which combine to form the overall user interface are written using \textit{React}. React is a JavaScript library developed by Facebook for constructing user interfaces in a component-based fashion \cite{react}. There is an abundance of modern libraries and frameworks for developing user interfaces. React was chosen primarily for its enforcement of the idea of ``separation of concerns'' and its renowned performance.
The other popular client library considered for the project was AngularJS by Google \cite{angular}. However, Angular is a framework rather than a library and as such forces the developer to write their application within its restrictions. Additionally, Angular is well known to have considerably worse performance than React, due to slower DOM (Document Object Model) manipulation operations \cite{reactvsangular}. Since the view of this application updates frequently, the performance of the chosen view framework was of paramount importance.
The client side of the application is constructed as a single-page application (SPA) meaning that the entire application is mounted at a single URL (excluding API endpoints which return JSON rather than text/html content). Navigation between pages is then managed entirely by client-side code. The decision to make routing a concern of the front-end was driven by the existence of \textit{React Router} and the fact that the application view has a limited number of overall view states \cite{reactrouter}. Using this library allowed views to be declared hierarchically, so that only required subtrees of the view are re-rendered on a URL change.
Additionally, the entirety of the view is implemented on the client side, and data is fetched either via a REST API (as opposed to server-side rendering of views) or through an open WebSocket. This approach reduces the number of requests made to the server and greatly improves the responsiveness of the front-end.
Although we gain an overall more responsive experience using this architecture, it does have some associated flaws. The initial page load is often slower, since the client contains more code than it otherwise would if rendering and routing were managed on the server. Additionally, keeping the interface responsive places a burden on the server, since it must ensure that data is available as soon as a user requests it. If the data is not available when required, users may find it irritating to wait for certain portions of the interface to update.
\subsection{Controller Layer Technologies}
Since the product is implemented as a web application, the controller chosen must be able to mediate communication between client and server over HTTP.
\subsubsection{Play Framework}
The \textit{Play Framework} was used for its WebSocket implementation and to create REST API endpoints, such as the endpoint used in fetching the timeline and metadata for a user when they are clicked on in a result set \cite{play}. Since routing and rendering was done on the front-end, only a small subset of the features of Play were required. This framework provided the benefit of having excellent integration with other technologies used in the implementation, such as Akka (Play is also written using Akka) and Guice, allowing WebSocket connections to be accepted using actors. An additional benefit of using this framework is that it is one of the most popular web frameworks for Java and Scala, and as such it has an active developer community with clear and extensive documentation.
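As a hedged illustration of this integration, the sketch below shows how a Play 2.4-style controller might upgrade an incoming connection to a WebSocket that is handled by a dedicated actor; \code{ClientSocketActor} is a hypothetical actor rather than a class from the project source.
\begin{lstlisting}[caption={Sketch of accepting a WebSocket connection with an actor in a Play 2.4-style controller.}]
import akka.actor._
import play.api.mvc._
import play.api.Play.current

// Hypothetical per-client actor; "out" represents the client side of the
// socket, so any message sent to it is pushed down the WebSocket.
object ClientSocketActor {
  def props(out: ActorRef) = Props(new ClientSocketActor(out))
}

class ClientSocketActor(out: ActorRef) extends Actor {
  def receive = {
    case msg: String => out ! ("received: " + msg)
  }
}

class SocketController extends Controller {
  // The HTTP request asks for an upgrade; Play creates one actor per client
  def socket = WebSocket.acceptWithActor[String, String] { request => out =>
    ClientSocketActor.props(out)
  }
}
\end{lstlisting}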
\section{Server Architecture}
Whilst interaction between the client and server follows the Model-View-Controller pattern, other components exist within the architecture which do not fit so precisely into this framework. These components perform a variety of functions, such as the processing of incoming tweets and the real-time reporting of system statistics. Figure \ref{architecture} shows how the system is designed in terms of the flow of data between components, and the following sections will describe the purpose and technology-related decisions behind each component.
\begin{figure}[H]
\centering
\includegraphics[height=278px,width=496px]{architecture.pdf}
\caption{The high-level view of the feature extraction pipeline of the system. The ``Implementation'' chapter looks at the internal workings of each component in detail.}
\label{architecture}
\end{figure}
\subsection{Feature Extraction and Tweet Filtering}
Incoming tweets are initially passed to the feature extraction component. \textit{Spark Streaming} (and Twitter4J for authentication purposes \cite{twitter4j}) provided a convenient way to access the Twitter streaming API, and distribute the feature extraction workload across multiple processing cores \cite{sparkstreaming}. Spark is designed to work in a distributed environment, and therefore also leaves open the possibility of distributing the stream across multiple machines in a cluster should the need arise. Spark was chosen over alternatives such as Apache Storm due to the fact that the author is more familiar with it, it has more extensive documentation, and it is known to work well with Akka (Spark is written on top of Akka), a technology used to maximise the concurrency within the application.
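A minimal sketch of this setup is shown below, assuming the \code{spark-streaming-twitter} integration and placeholder OAuth credentials; the exact configuration used by the project may differ.
\begin{lstlisting}[caption={Sketch of opening the Twitter stream with Spark Streaming and Twitter4J (placeholder credentials).}]
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.twitter.TwitterUtils
import twitter4j.auth.OAuthAuthorization
import twitter4j.conf.ConfigurationBuilder

// Twitter4J handles OAuth; the credential values here are placeholders
val twitterConf = new ConfigurationBuilder()
  .setOAuthConsumerKey("consumer-key")
  .setOAuthConsumerSecret("consumer-secret")
  .setOAuthAccessToken("access-token")
  .setOAuthAccessTokenSecret("access-token-secret")
  .build()

val sparkConf = new SparkConf().setMaster("local[4]").setAppName("wtfc-stream")
val ssc = new StreamingContext(sparkConf, Seconds(10))

// A DStream of twitter4j.Status objects, ready for feature extraction
val tweetStream = TwitterUtils.createStream(ssc, Some(new OAuthAuthorization(twitterConf)))

ssc.start()
\end{lstlisting}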
The feature extraction component extracts features from the content of the tweet and its metadata, and thus (partially) satisfies requirement \textbf{M.3}. The intention of performing this extraction is to determine the quality of tweets. These features are passed to the filtering component which will discard tweets which do not meet a configurable quality threshold. The purpose of filtering these tweets is to prevent unnecessary computational load and also to avoid hitting the rate limiting restrictions of the Twitter API as per requirement \textbf{N.7}.
After low quality tweets have been filtered (according to requirement \textbf{M.4}), the features of the remaining tweets are stored for future use, and the remaining tweets are progressed to the indexing stage of the pipeline.
Both the feature extraction and filtering components may receive tweets from other locations within the system. If the system ever deems a user to be relevant (for example, if they appear in a result stream for an open channel, or a user of the system views their profile), then the tweets present on that user's timeline are fed back into the pipeline in order to extract further evidence in the hopes of improving the relevance of results. This ability to examine a user's timeline is stated in requirement \textbf{S.1}.
\subsection{Indexing \& Retrieval}
After low quality tweets have been discarded, those remaining are sent to the indexer (part of the model layer) where they will be stored for efficient retrieval.
When a query is received, the Query Service will construct a result stream between the information retrieval engine and the user who entered the query. Terrier will then be frequently polled for new results, and the new results will be sent to the user as soon as they are available. Any user which enters the same query thereafter will see the same stream of results. The implementation details of this component are left for the ``Implementation'' chapter.
\subsection{Metrics Reporting}
Throughout different stages of the pipeline, different metrics are collected. These metrics are both logged and sent to the user in real-time to provide insight into the operation of the system (thus satisfying requirement \textbf{S.5}). This is important as it assures the user that the system is working as expected. If a large period of time passes and no new users have appeared in the stream of results they are viewing, they might think that something has gone wrong. By keeping them constantly updated on the number of users the application has processed, they are reassured that it is working as expected.
Numerous different technologies were examined for providing this functionality, including the \textit{Comet model} in HTTP 1.X, the WebSocket protocol, and using the persistent connection and multiplexing capabilities of HTTP 2.0.
Comet sockets were introduced to provide streaming capabilities before the introduction of the WebSocket API, and as such rely on many browser hacks to work properly \cite{comet}. However, with the correct hacks this provides excellent compatibility with older browsers which do not support WebSockets. Facebook Messenger is one such service that (as of 2013) relied on Comet for providing real-time notifications.
HTTP 2.0 is the latest version of the HTTP specification which provides an abundance of new features and exceptional performance improvements over the currently dominant HTTP 1.X protocols \cite{http2}. These features (such as connection multiplexing) mean that the ``hacks'' required with the first version of the protocol which enable streaming are no longer required. However, support for HTTP 2.0 in existing libraries and frameworks is limited as of the time of writing. As such, relying on HTTP 2.0 would have added too much development overhead to make it practical.
After consideration of the above technologies, the WebSocket protocol was chosen to enable real-time communication between the server and connected clients \cite{websocket}. The WebSocket protocol is well supported in modern browsers, and is specifically designed for immediate, full-duplex (simultaneous, two-way) communication between client and server. Additionally, WebSockets are well supported in modern web frameworks meaning there is a large community of developers and supporting documentation available.
A downside of using a WebSocket reliant architecture is that the protocol is much slimmer than HTTP, and therefore it was required to implement the notion of a ``keep-alive'' to ensure that channels are cleaned up in the case that no client registers an interest in it.
\section{Design Decisions}
A number of decisions had to be made with regards to the overall architecture of the application. These included which programming language to employ, how to handle concurrency, and how to design the system so as to minimise the coupling between components.
\subsection{Server Programming Language}
The server is written using the statically typed \textit{Scala} programming language, which is a hybrid of the object-oriented and functional paradigms \cite{scala}. Scala was chosen for its lightweight syntax and the fact that it compiles to Java Virtual Machine byte-code, allowing for clean interoperability with existing Java libraries and frameworks. In fact, a number of extremely popular Java-compatible libraries are written in Scala, including Apache Spark, Akka, and the Play Framework. Additionally, Scala provides powerful means of asynchronous programming through \code{Future}s and \code{Promise}s. Its \code{Option[T]} type and powerful pattern-matching features also greatly reduce the risk of \code{NullPointerException}s, since the developer is explicitly required to handle the possibility that an \code{Option[T]} contains a \code{None} value. Scala also benefits from the powerful REPL (Read-Evaluate-Print Loop) bundled with the Scala compiler, which allows developers to immediately evaluate expressions and see the results. Existing code can even be loaded into the REPL so that the effects of performing actions on user-defined objects can be examined.
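For example, a value that may be absent is wrapped in \code{Option[T]} and unwrapped with pattern matching, so the ``missing'' case must be handled explicitly. The snippet below is a small self-contained illustration rather than project code.
\begin{lstlisting}[caption={Illustration of handling absent values with Option and pattern matching.}]
// A lookup that may fail returns Option[String] rather than null
def findUser(users: Map[Long, String], id: Long): Option[String] = users.get(id)

findUser(Map(1L -> "alice"), 2L) match {
  case Some(name) => println("Found " + name)
  case None       => println("No such user") // absence must be handled explicitly
}
\end{lstlisting}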
Other languages examined in the initial investigations were Python \cite{python}, NodeJS \cite{node}, and Java \cite{java}. Python was ruled out due to its poor support for concurrency, and its dynamic typing would likely have caused difficulty as the complexity of the project source code increased.
NodeJS was considered for its renowned support for real-time web applications. However, the ecosystem surrounding the language has not expanded into the field of processing data streams. As with Python, NodeJS is dynamically typed, and this also factored into the decision to rule it out.
Finally, Java was considered due to its unmatched ecosystem of libraries and frameworks and its type system. Java has a massive range of libraries available, but they make excessive use of \code{null} values, which greatly increase the possibility of run-time failure. Java is also extremely verbose in comparison to Scala, as can be seen in Listings \ref{scala} (Scala) and \ref{java} (Java), which show Spark Streaming code for splitting a line from a text stream into an array of words.
\begin{lstlisting}[caption=Scala example of splitting a line in Spark Streaming.,label=scala]
// Split each line into words
val words = lines.flatMap(_.split(" "))
\end{lstlisting}
\begin{lstlisting}[language=Java,caption=Java example of splitting a line in Spark Streaming,label=java]
// Split each line into words
JavaDStream<String> words = lines.flatMap(
new FlatMapFunction<String, String>() {
@Override public Iterable<String> call(String x) {
return Arrays.asList(x.split(" "));
}
});
\end{lstlisting}
\subsection{Concurrency Model}
The real-time nature of the application means that if response times are poor then it will immediately impact user experience. Should feature extraction for a batch of tweets from a user's timeline take longer than expected, then a user will have to wait until the system gets around to processing that user. As such, high levels of concurrency were deemed vital in order to meet user requests in a timely manner (as stated in requirements \textbf{N.1} and \textbf{N.2}). To accommodate such a requirement, the system was designed using the \textit{Actor model}.
\subsubsection{The Actor Model}
Actor systems provide an alternative means of concurrency that avoids the pitfalls of typical synchronisation methods such as the use of shared mutable state and locks \cite{actormodel}. The actor model of concurrency was first popularised by the Erlang programming language, and has increased in popularity in recent years with the release of applications such as WhatsApp which use the model extensively \cite{erlang}. An actor system consists of a set of actors. Actors are entities capable of performing some computation on a thread in response to messages. Actors also have the ability to create new ``child'' actors, and to send messages to other actors in the system, who will then react in their own way depending on the information contained within that message. By communicating between actors solely through asynchronous \textit{immutable} message passing, we guarantee that the computations performed by a single actor cannot result in issues such as thread interference.
\subsubsection{Akka}
\textit{Akka} is a framework for Scala and Java which implements the actor model \cite{akka}. As well as vastly simplifying the concurrency aspects of the implementation, Akka requires very little overhead in terms of memory. According to its documentation, the actor implementation provided by Akka supports passing approximately 50 million messages per second on an average machine, and around 2.5 million actors can exist per gigabyte of heap space. Each component of the architecture was implemented as one or more actors, and communication between components in the system was facilitated through message passing.
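The snippet below is a minimal, self-contained illustration of this style: a component defined as an actor reacting to an immutable message that is sent to it asynchronously. The names are illustrative and do not correspond to actual project classes.
\begin{lstlisting}[caption={Minimal illustration of defining an actor and sending it an immutable message.}]
import akka.actor.{Actor, ActorSystem, Props}

// Immutable message passed between actors
case class TweetBatch(tweets: Seq[String])

class Indexer extends Actor {
  def receive = {
    case TweetBatch(tweets) =>
      // indexing work would happen here, on this actor's thread only
      println("Indexing " + tweets.size + " tweets")
  }
}

val system = ActorSystem("wtfc")
val indexer = system.actorOf(Props[Indexer], "indexer")

// Fire-and-forget asynchronous send; the caller never blocks
indexer ! TweetBatch(Seq("tweet one", "tweet two"))
\end{lstlisting}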
Despite being primarily used for its concurrency model, Akka provided a number of features ``out-of-the-box'' which satisfied several project requirements. Actors in Akka have a ``supervision strategy'' which defines what actions should be taken when an exception occurs within them. This allows the system to meet the high degree of fault tolerance and resilience required. The default supervision strategy (simply restarting the actor when an exception occurs) solved many issues that were encountered. For example, it was common for the Twitter streaming API to become unavailable (due to network issues), causing an exception to be thrown. The actor handling the stream was automatically restarted and the stream handle resent to all of the actors that required access to it. No explicit error handling code was required, yet the application was able to self-heal and become fully functional as soon as the streaming API became available again. As a result, using Akka greatly assisted in furthering requirement \textbf{N.3}.
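A sketch of such a supervision strategy is shown below; the actor and exception types are illustrative, but the pattern of restarting a failing child is the one described above.
\begin{lstlisting}[caption={Illustrative supervision strategy that restarts a failing child actor.}]
import akka.actor.{Actor, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.Restart
import scala.concurrent.duration._

// Stand-in for the actor that holds the Twitter stream handle
class StreamHandler extends Actor {
  def receive = { case msg => println("handling " + msg.toString) }
}

class StreamSupervisor extends Actor {
  // Restart the child when, for example, a network-related exception occurs,
  // rather than letting the failure propagate through the system.
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1.minute) {
      case _: java.io.IOException => Restart
    }

  private val streamHandler = context.actorOf(Props[StreamHandler], "streamHandler")

  def receive = {
    case msg => streamHandler forward msg
  }
}
\end{lstlisting}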
\subsection{Reactive Systems}
The architecture of the system is implemented to conform to many of the tenets described in the ``Reactive Manifesto'' \cite{reactive}. That is, the system architecture is designed to be responsive, resilient, elastic, and message-driven. It was hoped that this architecture would assist in the satisfaction of several of the non-functional requirements listed in the previous chapter.
\subsubsection{Responsiveness}
Responsiveness is a key aim of the design. Messages passed between actors in potential areas of bottleneck are load-balanced in order to minimise the work queued on a single actor or thread and to maximise throughput. Each actor defines a response time guarantee which, if not met, causes the actor to be restarted in an attempt to regain responsiveness.
\subsubsection{Resilience}
The application provides the guarantee of resilience through ``self-healing''. Should an actor executing some computation encounter an exception, its parent is notified and takes the appropriate action in order to cleanly recover. Additionally, if an exception occurs within any actor, it is treated in isolation, and other actors are not affected unless the parent actor's supervision strategy explicitly calls for it. Therefore, problems are compartmentalised and the cascading effect of errors spreading through the system is reduced.
\subsubsection{Elasticity}
Some actors within the system are implemented as state machines with the ability to change their behaviour at run-time, in response to certain events. This can be useful, for example, if the application hits the Twitter API rate-limit. The behaviour of the corresponding actor can be modified to stop making API requests for a specified period of time, and then modified back to its original API-reliant behaviour after this time period has expired.
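Akka supports this through \code{context.become}, which swaps an actor's message handler at run-time. The sketch below illustrates the rate-limiting scenario described above; the message types and the fifteen-minute back-off are hypothetical.
\begin{lstlisting}[caption={Sketch of switching actor behaviour at run-time when the API rate-limit is hit.}]
import akka.actor.Actor
import scala.concurrent.duration._

case class FetchTimeline(screenName: String)
case object RateLimitHit
case object RateLimitExpired

class TwitterApiActor extends Actor {
  import context.dispatcher

  def receive = active

  // Normal behaviour: service timeline requests via the REST API
  def active: Receive = {
    case FetchTimeline(name) =>
      println("fetching timeline for " + name) // API call would go here
    case RateLimitHit =>
      // Switch behaviour and schedule a return to normal operation
      context.become(rateLimited)
      context.system.scheduler.scheduleOnce(15.minutes, self, RateLimitExpired)
  }

  // Rate-limited behaviour: ignore API-reliant work until the window resets
  def rateLimited: Receive = {
    case RateLimitExpired => context.become(active)
    case _: FetchTimeline => () // dropped while rate-limited
  }
}
\end{lstlisting}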
\subsubsection{Message-Driven}
The architecture is entirely message driven. The root messages are batches of tweets read from a stream of tweets, or user interactions with the system. Each of these messages results in a chain of messages being sent asynchronously between actors as they handle the situation. Using a message driven architecture also provides an excellent abstraction for handling real-time data processing.
\subsection{Dependency Injection}
The system relies heavily on the \textit{dependency injection} design pattern. This pattern greatly reduces coupling between components, and enforces a separation of concerns by separating dependency creation from behavioural code \cite{di}. By making extensive use of this pattern, requirement \textbf{N.5} is further satisfied.
\subsubsection{Google Guice}
\textit{Google Guice} is a dependency injection framework for Java which manages the dependency graph that exists between objects in an object-oriented environment \cite{guice}. Using Guice, one can request an instance of an object when required by ``injecting'' the instance into a ``setter'' method or a constructor.
Guice was primarily used for the injection of actor references into other actors. If we ever wish to send a message to an existing actor in order for it to perform some computation, we simply ask Guice for the instance of the required actor by injecting the actor into the constructor of the current class using the \code{@Inject} and \code{@Named} annotations provided by the framework.
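The sketch below shows what this pattern can look like when combined with Play's \code{AkkaGuiceSupport}; the module, service, and message names are illustrative rather than taken from the project source, and a stub stands in for the real \code{RedisActor}.
\begin{lstlisting}[caption={Illustrative Guice module binding an actor and injecting its ActorRef by name.}]
import javax.inject.{Inject, Named}
import akka.actor.{Actor, ActorRef}
import com.google.inject.AbstractModule
import play.api.libs.concurrent.AkkaGuiceSupport

// Stand-in for the routing actor described elsewhere in this chapter
class RedisActor extends Actor { def receive = { case _ => () } }

// Hypothetical message type
case class StoreFeatures(tweetId: Long, features: Map[String, Double])

// Module binding the actor so that Guice can inject references to it by name
class ActorModule extends AbstractModule with AkkaGuiceSupport {
  override def configure(): Unit = {
    bindActor[RedisActor]("redisActor")
  }
}

// Any class can now receive the ActorRef through constructor injection
class FeatureStorageService @Inject() (@Named("redisActor") redisActor: ActorRef) {
  def store(tweetId: Long, features: Map[String, Double]): Unit =
    redisActor ! StoreFeatures(tweetId, features)
}
\end{lstlisting}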
\section{Summary}
The architecture of the system uses a number of different technologies, each chosen because it was believed they would improve the implementation of the project in some way. The technologies for both client and server were picked so as to maximise the use of good software engineering principles.
\chapter{Implementation}
Both client and server side were implemented so as to resemble the architecture described in the previous section. This chapter will examine the implementation challenges that arose during the construction of both client and server.
\section{Server Implementation}
The server was implemented using many software development tools and practices. It consists of a number of different components, as was displayed in Figure \ref{architecture}. This section aims to examine several of these components in greater detail by looking at some of the considerations and challenges that were had during their construction.
\subsection{Tools \& Practices}
The implementation of the server involved a wide range of tools and practices which aimed to assist in the development process. These tools ranged in purpose from simplifying source code management, to building and executing the application.
\subsubsection{Git \& Continuous Integration}
\textit{Git} is a distributed version control system which was used extensively during the development of the project \cite{git}. A working implementation of the project was always maintained on the \code{master} branch. Features were developed on their own branches so as to not interfere with \code{master}. These branches were frequently merged into the \code{master} branch so as to avoid ``integration hell'', whereby feature branches become so complex that it is extremely difficult to resolve conflicts when merging.
\subsubsection{GitHub}
The project repository is also linked to a remote repository hosted on \textit{GitHub} \cite{github}. By hosting on a remote repository, we avoid the consequences of local disk failure, since there will always exist a recent backup of the project. Hosting a remote repository is also beneficial in that it makes the project more portable, since it can be cloned onto another machine.
\subsubsection{Agile Software Development}
The \textit{Agile} software development process was followed throughout the lifetime of the project. Although there was an initial requirements gathering stage, these requirements frequently changed, and the product itself changed as a result. The project was implemented with the knowledge that requirements are highly dynamic, and no commitment was ever made to any single implementation.
\subsubsection{Kanban \& Trello}
Project task management was done using the ``Kanban'' system \cite{kanban}, where projects are represented as a set of named lists and cards. \textit{Trello} is an online platform which implements this system, and it was used extensively throughout all stages of the project \cite{trello}.
\subsubsection{SBT (Scala Build Tool)}
\textit{SBT} is a build management system commonly used in Scala projects \cite{sbt} which is similar to popular Java build tools such as Maven \cite{maven}. SBT was used in the project to maintain and resolve dependencies, run tests, and to build and run the project.
% Each subsection should be modelled around one of the components mentioned in the requirements section.
\subsection{Feature Extraction}
As described in the chapter on ``Architecture'', and shown in Figure \ref{architecture}, the feature extraction component extracts several features from incoming tweets. These features aim to classify the tweet as either high-quality or low-quality. In order to implement this component, Spark Streaming was used to distribute computation across multiple threads.
Spark Streaming provides us with a \code{ReceiverInputDStream[Status]} (discretised stream) representing the incoming stream of tweets. The tweet stream was mapped to a ``windowed'' tweet stream. This allows us to view the stream through a ``sliding window'', and perform operations on these individual windows of tweets. The tweets in each window are batched and sent through the rest of the pipeline together. The width of the sliding window is defined temporally, as is shown in Figure \ref{slidingwindow}.
\begin{figure}
\centering
\includegraphics[scale=0.8]{slidingwindow.pdf}
\caption{Tweets are batched temporally and sent throughout feature extraction together.}
\label{slidingwindow}
\end{figure}
The \code{FeatureExtraction} actor maps our windowed stream of tweets into a windowed stream of tweet feature vectors, which is immediately filtered to discard those tweets which do not meet the threshold quality criteria. These features are then sent to Redis for storage, and the original status is sent to the indexing actor for storage in a Terrier \code{MemoryIndex}.
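The sketch below illustrates the shape of this stage of the pipeline: tweets are windowed, mapped to feature vectors, and filtered against a quality threshold. The feature vector and scoring function shown here are toy placeholders rather than the project's actual features.
\begin{lstlisting}[caption={Illustrative windowed map-and-filter stage of the feature extraction pipeline.}]
import org.apache.spark.streaming.Seconds
import org.apache.spark.streaming.dstream.DStream
import twitter4j.Status

// Placeholder feature vector; the real extractor computes the features in M.3
case class TweetFeatures(tweetId: Long, length: Int, capitalisedWords: Int, quality: Double)

def extractFeatures(status: Status): TweetFeatures = {
  val words = status.getText.split("\\s+")
  val capitalised = words.count(w => w.nonEmpty && w.head.isUpper)
  val quality = math.min(1.0, status.getText.length / 140.0) // toy quality score
  TweetFeatures(status.getId, status.getText.length, capitalised, quality)
}

def featurePipeline(tweets: DStream[Status], threshold: Double): DStream[TweetFeatures] =
  tweets
    .window(Seconds(30), Seconds(10))  // batch tweets in a sliding window
    .map(extractFeatures)              // map each tweet to its feature vector
    .filter(_.quality >= threshold)    // discard low quality tweets
\end{lstlisting}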
In early iterations of the project, features were stored in MongoDB \cite{mongo}. However, this proved to be a bottleneck on performance, since the implementation was unable to keep up with the rate at which new features were extracted. There are several reasons which explain why this situation occurred:
\begin{itemize}
\item MongoDB persists data on disk, resulting in slow read/write times compared to in-memory solutions.
\item An asynchronous MongoDB driver was not used and so blocking occurred frequently.
\item MongoDB does not have built-in data structures that could be used to avoid additional computations.
\item Parallel insertions were not used. The MongoDB Scala driver had no serialisation/deserialisation support and this made using the parallel insertion API overly cumbersome.
\end{itemize}
To overcome this bottleneck Redis \cite{redis} was employed as the storage engine for features. This resulted in immediate performance improvements, but the system was still unable to store features at the rate they were being generated. Three additional steps were taken in order to ensure that features were persisted at an acceptable rate:
\begin{itemize}
\item Connection pooling was used to ensure that the application did not have to reconnect to the Redis server with each request.
\item Requests to read and write features from Redis were partitioned by defining a \code{RedisRequest} type with two subtypes: \code{RedisReadRequest} and \code{RedisWriteRequest} (a sketch of these types appears after this list). This allowed for the separation of reads and writes so that they could be processed by separate worker pools.
\item Read and write requests were sent to a ``routing'' actor (named \code{RedisActor}) which maintains two pools of actors: one pool containing workers which deal with reading and unmarshalling data from Redis, and one containing workers which handle storing new features to Redis. These requests are distributed to workers using a Round-Robin algorithm, to ensure that the work is evenly distributed. Figure \ref{loadbalancing} shows how tasks were distributed between workers to maximise concurrency.
\end{itemize}
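The sketch below illustrates the request hierarchy and routing actor described in the second and third steps above; the field names, pool sizes, and worker bodies are illustrative.
\begin{lstlisting}[caption={Illustrative RedisRequest hierarchy and routing actor with separate read and write worker pools.}]
import akka.actor.{Actor, Props}
import akka.routing.RoundRobinPool

sealed trait RedisRequest
case class RedisReadRequest(key: String) extends RedisRequest
case class RedisWriteRequest(key: String, features: Map[String, Double]) extends RedisRequest

class RedisReaderWorker extends Actor {
  def receive = { case RedisReadRequest(key) => sender ! ("value-for-" + key) } // placeholder read
}
class RedisWriterWorker extends Actor {
  def receive = { case RedisWriteRequest(key, features) => println("persisting " + key) } // placeholder write
}

class RedisActor extends Actor {
  // Separate pools so that slow writes never starve reads (and vice versa)
  private val readingWorkerPool =
    context.actorOf(RoundRobinPool(4).props(Props[RedisReaderWorker]), "readingWorkerPool")
  private val writingWorkerPool =
    context.actorOf(RoundRobinPool(4).props(Props[RedisWriterWorker]), "writingWorkerPool")

  def receive = {
    case read: RedisReadRequest   => readingWorkerPool forward read
    case write: RedisWriteRequest => writingWorkerPool forward write
  }
}
\end{lstlisting}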
\begin{figure}
\centering
\includegraphics[scale=0.7]{loadbalancing.pdf}
\caption{An example of how requests were load-balanced in order to prevent overworked actors and maximise concurrency. This is a lower level view into the inner workings of the ``Feature Storage'' component shown in Figure \ref{architecture}}
\label{loadbalancing}
\end{figure}
Akka made implementation of this load balancing simple since it provides extensive built-in support for such tasks, including protocols much more advanced than the Round-Robin method used in this component. The code for implementing a worker pool was a single line, as can be seen in the code for creating the \code{WriteWorkerPool} shown in Listing \ref{actorpool}.
\begin{lstlisting}[caption=Creating an actor pool to reduce the work required by any single actor.,label=actorpool]
val writingWorkerPool = context.actorOf(RoundRobinPool(4)
.props(Props[RedisWriterWorker]), "writingWorkerPool")
\end{lstlisting}
After implementing the feature storage component using these techniques, it was found that it no longer restricted overall system performance.
\subsection{Batched Feature Extraction}
When a user becomes relevant and is displayed on an open result stream, their latest tweets are fetched (unless their timeline has already been processed recently). As a result, the number of tweets that we must process increases. If $N$ users become relevant at once, we will typically have around $20N$ additional tweets to process (since we process the 20 most recent tweets from a timeline) on top of the tweets being processed from the stream. It is important in this case that there is no serious impact on performance.
To minimise the disruption caused by any sudden influx of new tweets, extraction of features from tweets from a user's timeline was performed using the \code{Future} asynchronous programming abstraction provided by Scala. A \code{Future} represents a value that may become available \textit{at some point in the future}. Therefore, the value contained within a \code{Future} cannot be used until an associated computation is completed. A \textit{callback function} can be used to access the value when it is available.
Since we want to avoid extracting features for the same tweet twice, we must check to see if the tweet has already been processed. To do this, we \textit{ask} the \code{RedisActor} if this is the case by sending it a message asynchronously. This actor has to go and do additional work to figure out how to reply, and therefore will not be able to respond immediately. Since the response is delayed, a \code{Future} is an ideal abstraction. The annotated code in Listing \ref{asyncfutures} shows how for each status in the batch we check if it has been processed in a non-blocking way. (Note: the \code{?} syntax is how we asynchronously send another actor a message and capture the eventual response in a \code{Future}.)
\begin{lstlisting}[caption=Check asynchronously whether a status has already been processed.,label=asyncfutures]
var tweetAnalysisHistory =
scala.collection.immutable.Set.empty[Future[(Long, Boolean)]]
// No iteration of this loop blocks waiting for a response, ensuring that
// every ask is sent immediately rather than after the response of the
// previous ask
tweets.foreach(tweet => {
// Add the Future representing the redisActor's eventual response to the set
tweetAnalysisHistory +=
(redisActor ? HasStatusBeenProcessed(tweet)).mapTo[(Long, Boolean)]
})
\end{lstlisting}
\code{tweetAnalysisHistory} is a set of \code{Future}s, each of which will eventually contain a pair of the form (TweetID, Boolean), where the boolean value is true if the tweet has already been processed. Only after \textit{all} of these \code{Future}s are complete can we determine which tweets we have seen before and should therefore be discarded.
To determine when all of the \code{Future}s in the set are complete, we can \textit{compose} them into a single combined \code{Future}. Specifically, the set of type \code{Set[Future[(Long, Boolean)]]} is mapped to the type \code{Future[Set[(Long, Boolean)]]} using the \code{Future.sequence} method. Thus, rather than writing a callback function for every status in the set (which itself is impossible since we cannot know its cardinality until run-time), we can write a single callback function to handle the case where all \code{Future}s are complete, and this gives us access to the \code{Set[(Long, Boolean)]} contained within. The code showing this process is shown in Listing \ref{composefutures}, and has been annotated with comments and types for clarity (adapted from \code{BatchFeatureExtraction.scala}).
\begin{lstlisting}[caption=Composing Futures unto a single future and registering the onComplete callback.,label=composefutures]
// Compose the Futures in tweetAnalysisHistory and register the
// callback to be fired when the composed Future completes.
Future.sequence(tweetAnalysisHistory) onComplete {
case Success(seenBefore: Set[(Long, Boolean)]) =>
// Future completed successfully, perform feature extraction
// Original tweet list is filtered and feature extraction occurs here.
...
case Failure(error) =>
// Future unsuccessful, an actor may have missed response time guarantee
// Error is logged
}
\end{lstlisting}
The feature extraction code itself (contained in the ``Success'' case of the pattern matching expression in the snippet above) is computationally expensive and as a result also relies heavily on asynchronous constructs, and uses many of the techniques described in this section.
\subsection{Indexing}
Tweets received at the indexing component are stored in a Terrier \code{MemoryIndex}. This is a special type of index that can be updated online, and therefore is especially suited to this application. Each document stored in the index consists of one or more tweets from a single Twitter user. More specifically, the index stores a mapping from the terms in tweets to lists of Twitter users who have used that term. By storing data in this way, Terrier can achieve an $O(1)$ lookup of every user who has used a query term, allowing it to carry out retrieval quickly and efficiently. Since the index is stored in memory, further work will be required to ensure that the index is at least partially stored on disk when memory usage becomes too high.
A secondary, ``metadata'' index is also maintained. The purpose of this index is to quickly retrieve metadata for any users it retrieves. Terrier will fetch only the document ID (equivalent to the Twitter user ID) for relevant users, and this auxiliary structure allows us to retrieve the metadata associated with that ID. The benefit of doing this is that it avoids the need to ask the Twitter API for that information just to display the results to the user. Preventing overuse of the Twitter API was a key aim of the application, as stated in non-functional requirement \textbf{N.7}. The metadata stored in this structure for each user includes their username, their screen name, and their profile biography.
\subsection{Retrieval}
To maximise performance, it was important to ensure that two users entering the same query does not result in the system computing the same stream of results twice. The traditional means of preventing this is through caching query results. Unfortunately, caching is not as practical in a system that is performing real-time indexing and providing results as a stream, since the results of a query change so frequently.
To get around this limitation, each new query received opens a ``channel'', which the results of that query are forwarded through. The results that get forwarded through the channel are constructed by repeated polling of Terrier. Since the index is being constructed online, these results will be modified as the contents of the index changes. Every change in the set of results that is returned is sent through the channel to any connected clients. Clients can then use this data to re-render the user interface to display these changes. Figure \ref{channels} gives a simplified example of what the clients/channels relationship may look like when the system is running. When a user inputs a query to the system, they are connected to a channel and will see the same stream of results for that query as any other user who has entered the query. If the channel corresponding to their query does not already exist, then it is created.
\begin{figure}
\centering
\includegraphics[scale=0.75]{channels.pdf}
\caption{\textit{An example of the one-to-many relationship that exists between users and channels}. Users A and B both connect to a single channel after entering the same query (``superbowl''), and will receive the results it outputs. This channel will perform the exact same actions regardless of the number of users connected.}
\label{channels}
\end{figure}
When a channel is created, it registers a scheduled request to poll Terrier for the latest result set in a configurable interval. The code in Listing \ref{pollterrier} shows how the polling rate was read from a dependency injected configuration file in a way that avoids \code{null} values, and then used to schedule the channel's work.
\begin{lstlisting}[caption=Reading configuration from a file and scheduling the updates of results.,label=pollterrier]
// Read the polling duration from the config file
val resultPollDuration = config.getInt("results.pollingrate").getOrElse(1000)
// Schedule the requests for the latest query results
val fetchTick = context.system.scheduler
.schedule(Duration.Zero, FiniteDuration(resultPollDuration, TimeUnit.MILLISECONDS), self, FetchLatestQueryResults)
\end{lstlisting}
Through repeated polling, the result sets obtained from Terrier were transformed into a stream of results and sent to clients. These result sets are calculated using the BM25 weighting model, which is a popular information retrieval model known to generally give relevant results across a variety of test collections. Further investigation would be required to determine the most effective retrieval technique.
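For reference, a common formulation of the BM25 score of a document $d$ (here, the tweets associated with a single user) for a query $q$ is:
\[
\mathrm{score}(d, q) = \sum_{t \in q} \mathrm{idf}(t) \cdot \frac{\mathit{tf}(t, d)\,(k_1 + 1)}{\mathit{tf}(t, d) + k_1\left(1 - b + b\,\frac{|d|}{\mathrm{avgdl}}\right)}
\]
where $\mathit{tf}(t, d)$ is the frequency of term $t$ in $d$, $\mathrm{idf}(t)$ is the inverse document frequency of $t$, $|d|$ is the length of $d$, $\mathrm{avgdl}$ is the average document length in the index, and $k_1$ and $b$ are tuning parameters; the exact parameter values used by Terrier are not restated here.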
Channels will remain open for a query for as long as at least one user is connected to the result stream. In order to determine whether any user is interested in the channel, connected users automatically send special ``keep-alive'' messages that the server interprets as a user expressing a desire to keep the channel open. Each open channel monitors the timestamp associated with the latest keep-alive received, and will free its associated resources if the latest keep-alive is ever older than some configurable threshold. In this way we ensure that a channel is never closed while a user is viewing a stream of results. Using this technique also allows us to close result streams when they are no longer required and therefore prevents resource leaks.
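A hypothetical sketch of this bookkeeping inside a channel actor is shown below; the message names, the check interval, and the use of \code{PoisonPill} to release resources are illustrative rather than taken from the project source.
\begin{lstlisting}[caption={Hypothetical keep-alive bookkeeping inside a channel actor.}]
import akka.actor.{Actor, PoisonPill}
import scala.concurrent.duration._

case object KeepAlive
case object CheckKeepAlive

class ChannelActor(timeoutMillis: Long) extends Actor {
  import context.dispatcher

  private var lastKeepAlive: Long = System.currentTimeMillis

  // Periodically check whether any client has recently expressed interest
  private val checkTick =
    context.system.scheduler.schedule(1.minute, 1.minute, self, CheckKeepAlive)

  def receive = {
    case KeepAlive =>
      lastKeepAlive = System.currentTimeMillis
    case CheckKeepAlive =>
      if (System.currentTimeMillis - lastKeepAlive > timeoutMillis) {
        checkTick.cancel()
        self ! PoisonPill // close the channel and free its resources
      }
  }
}
\end{lstlisting}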
\subsection{Metrics Reporting}
This component is implemented as a single actor which can receive data from any other component in the system. By sending information to this actor, and specifying how it should translate this information (e.g. a Scala case class) to JSON, it allows any component to communicate with the client. In the example shown in Listing \ref{forward}, the \code{MetricsReporting} actor receives a message from some other actor informing it of the number of Twitter users that have been processed, and sends this message to all connected clients.
\begin{lstlisting}[caption=Forwarding new messages through the WebSocket to connected clients.,label=forward]
override def receive = {
case numUsers @ NumberOfUsersSeen(_) =>
channelMeta.channel push Json.toJson(numUsers)
...
}
\end{lstlisting}
The code which maps user-defined Scala types to valid JSON relies on the Play Framework's ``Writes'' converters. The converter for the example above is shown in Listing \ref{writesconverter}.
\begin{lstlisting}[caption=Code for conversion of the NumberOfUsersSeen Scala case class to JSON format.,label=writesconverter]
implicit val numberOfUsersSeenWrites = new Writes[NumberOfUsersSeen] {
def writes(numberOfUsersSeen: NumberOfUsersSeen) = Json.obj(
"numUsersSeen" -> numberOfUsersSeen.numUsers
)
}
\end{lstlisting}
\subsection{Real-time Trending Hashtags}
One of the more interesting features to implement was the real-time hashtag usage counter. This component counts the number of occurrences of each hashtag within a configurable window of time, and updates this number as more tweets are processed. The user interface of this aspect of the system is shown in Figure \ref{trending}. The code which generates the data for this component is written using Apache Spark Streaming, and consists of several steps which in combination produce an output of \code{(hashtag, count)} pairs which are then sent to connected clients for rendering.
The initial step of this process maps the incoming stream of tweets into a stream of \code{(String, Int)} tuples, where the string is the text of the hashtag and the integer is the value 1. Figure \ref{mappingstep} shows this process diagrammatically, and the code used in this mapping process is shown in Listing \ref{tweetstohashtags}.
\begin{lstlisting}[caption=Initial mapping of tweets to hashtags pairs.,label=tweetstohashtags]
// Map the tweets in the stream to a stream of (hashtag, 1) tuples
val hashtags = stream flatMap(status => {
status.getHashtagEntities map(hashtag => (hashtag.getText toLowerCase, 1))
})
\end{lstlisting}
\begin{figure}
\centering
\includegraphics[scale=0.75]{mappingstep.pdf}
\caption{Example of how a tweet stream is mapped to a stream of pairs in the initial step of the real-time hashtag counting process.}
\label{mappingstep}
\end{figure}
The new stream of tuples is then \textit{reduced} within a sliding window using the \code{reduceByKeyAndWindow} operation provided by Spark Streaming. This operation takes place based on the key of the tuple (the first element; in this case the hashtag), and is analogous to a ``\code{GROUP BY}'' clause in SQL combined with a custom \textit{reducer function} which tells Spark how to reduce/combine two values from the stream into one. Since the stream is an array of tuples of the form \code{(<hashtag>, 1)}, after grouping by the hashtags, we want to reduce these tuples by summing their values in order to count the total number of times the hashtag has occurred in the stream (within the current window). Therefore, our reducer function is a simple addition:
\begin{lstlisting}
(p: Int, q: Int) => p + q
\end{lstlisting}
After applying this reduction, the data in the stream is of the required form. After sorting the stream, a subset of the results is sent to the client to be displayed to connected users. The code which performs the windowed reduction, sorts the results, and sends those results to all connected clients is shown in Listing \ref{reduce}.
\begin{lstlisting}[caption=Counting the hashtags used within a temporal window.,label=reduce]
// Aggregate hashtags within the window
val hashtagCountInWindow = hashtags
.reduceByKeyAndWindow(
(p:Int, q:Int) => p+q, Minutes(trendingHistoryMins), Seconds(reportFrequency)
)
.map{case (hashtag, count) => (count, hashtag)} // Reverse tuples
.transform(_.sortByKey(ascending = false)) // Sort by counts
.map{case (count, hashtag) => HashtagCount(hashtag, count)}
// Send latest trending data to connected clients
hashtagCountInWindow foreachRDD(rdd => {
val hashtagCounts = rdd.collect take numberOfHashtagsToShow
metricsReporting ! TrendingHashtags(hashtagCounts.toList)
})
\end{lstlisting}
\begin{figure}
\centering
\includegraphics[scale=0.75]{trending.png}
\caption{The trending hashtags component. This screenshot was taken during the 2016 BRIT awards and the Kids Choice Awards.}
\label{trending}
\end{figure}
\subsection{HTTP Interface}
Using the Play Framework, we expose an interface that the client side of the application can interact with. This interface consists of HTTP endpoints which can instruct the server to store or retrieve data from the model layer, or to open up a new channel (see Section 4.1.5). The endpoints available to the client are shown in Listing \ref{endpoints}.
\begin{lstlisting}[label=endpoints,caption=The exposed interface of the server.]
GET /
GET /ws/query/:queryString
GET /ws/user/:screenName
GET /ws/main/
GET /learning/timeline/:screenName
POST /learning/rate-user
GET /learning/get-user-relevance
\end{lstlisting}
Each of these URLs is mapped to an associated method defined within a Play Framework Controller. By sending a request of the appropriate type to these endpoints, the client can interact with the server. The first defined endpoint (``GET /'') returns a sparse HTML document which contains a mount point. The client side renders the application's user interface inside this mount, as shown in Listing \ref{routerdef}.
\section{Client Implementation}
The implementation of the client made use of several software development tools and practices which aimed to ease the development process. As described in Chapter 3, the front-end is constructed as a single page application which relies heavily on AJAX (Asynchronous JavaScript and XML) and WebSockets to efficiently fetch updates from the server, using those endpoints defined in Listing \ref{endpoints}.
\subsection{Tools \& Practices}
A number of tools were used on the front-end to assist in the design and build processes.
\subsubsection{LESS}
\textit{LESS} is a compiled superset of CSS (Cascading Style Sheets) which introduces a number of additional features including variables, nested definitions, and a module/import system \cite{less}.
\subsubsection{CSS Flexbox}
\textit{Flexbox} is a W3C specification for laying out elements within some parent element \cite{flexbox}. The majority of user interface elements used in ``Who To Follow In Context'' are built specifically for this project rather than from an external component library, and Flexbox was used extensively to align child elements within these components.
\subsubsection{CommonJS Modules}
\textit{CommonJS} introduces a module system into JavaScript/TypeScript \cite{commonjs}. As a result, it avoids the need to add one script tag to our index HTML page for each TypeScript file created. During the compilation step, the dependency tree of the front-end application is scanned and bundled into a single file.
\subsubsection{Gulp}
\textit{Gulp} is a front-end build system which was used to speed up parts of development \cite{gulp}. Gulp tasks were defined to watch TypeScript files for changes and automatically transpile them into new JavaScript files using the ``tsc'' TypeScript compiler. A Gulp task was also used in order to compile LESS files to CSS so that they could be interpreted by browsers.
\subsubsection{Node Package Manager}
\textit{Node Package Manager} (NPM) was used to manage front-end dependencies \cite{npm}. By specifying these dependencies in a \code{package.json} file, developers can simply run \code{npm install} to fetch all of the requirements for the application. NPM was used in conjunction with the \textit{TypeScript Definition Manager} which fetches TypeScript type declaration files.
\subsection{User Interface}
\begin{figure}
\centering
\includegraphics[scale=0.24]{initialui.png}
\caption{An early user interface prototype.}
\label{initialui}
\end{figure}
The user interface was one of the most heavily iterated upon aspects of the application. Using a component-based user interface library (React) allowed components to be iterated on in isolation, and so changing the UI could be done gradually to allow for progress on other features.
An early proof of concept of the user interface (shown in Figure \ref{initialui}) displayed the stream of incoming tweets on the left half of the window, and the recommendations for the current query on the right hand side. After creating the proof of concept, mockups of several different UI components were created. An example of those created for the user suggestion components is shown in Figure \ref{suggestionsmockups}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{suggestionsmockups.pdf}
\caption{Initial designs for the user suggestions components. In the final iteration of the UI, a modification of Concept C was used.}
\label{suggestionsmockups}
\end{figure}
The initial mockup design of the ``Trending'' and ``Recent Searches'' lists is shown in Figure \ref{listmockups}. These mockups were iterated on and eventually the front-end components were written in React to closely resemble them. The ability to view user timelines was not included at this point.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{listmockups.pdf}
\caption{The mockups created for the ``Recent Searches'' and ``Trending'' lists seen at the top right of the final UI in Figure \ref{finalscreenshot}.}
\label{listmockups}
\end{figure}
After multiple iterations and further refinement of the project requirements, the final version of the user interface was created (see Figure \ref{finalscreenshot}).
\begin{figure}[H]
\centering
\includegraphics[scale=0.17]{finalscreenshot.png}
\caption{The final iteration of the user interface.}
\label{finalscreenshot}
\end{figure}
The final user interface consists of a search box shown on the left side of the display. By entering a query into this box and pressing ``enter'', a list of suggestions will appear underneath. These suggestions, as explained previously, update in real-time. This is to model the evolving nature of the live events they are providing suggestions for. By selecting a user in the list, we display a preview of the user's Twitter timeline. At the top right hand corner of the screen, we display real-time lists of ``Recent Searches'' and ``Trending Hashtags'', which exist to provide suggestions of queries that users may wish to enter.
As discussed in the previous chapter, the user interface is implemented using a library called React. React allows user interfaces to be implemented using an extension of the syntax of JavaScript known as JSX, which is similar to XML. This syntax is compiled back into standard JavaScript in order to be parsed and executed by web browsers. The ``tsc'' TypeScript compiler comes with a built-in JSX compiler, which was used by enabling it in the TypeScript configuration file (\code{tsconfig.json}). This configuration is shown in Listing \ref{tsconfig}.
\begin{lstlisting}[label=tsconfig,caption=TypeScript compiler configuration.]
{
"compilerOptions": {
"jsx": "react",
"module": "commonjs",
"target": "es5",
"allowJs": true
},
"exclude": [
"node_modules"
]
}
\end{lstlisting}
By using the ``.tsx'' file extension, we inform ``tsc'' that the file contains JSX as well as TypeScript code, and that it must compile both of these syntaxes to plain JavaScript. An example of a React component which is written using TypeScript and JSX is shown in Listing \ref{tsjsx}. This component defines an individual item in a list, such as the ``Recent Searches'' and ``Trending'' lists. The component is defined generically, and is therefore written once, and reused in both of these lists.
\begin{lstlisting}[caption={Definition of a reusable list-item component using React, JSX, and TypeScript},label=tsjsx]
export class ScrollingListItem
extends React.Component<ScrollingListItemProps, any> {
constructor(props) {
super(props);
}
render() {
return (
<div className="scrolling-list-item" key={this.props.key}>
<Link to={this.props.link}>
<span className="scrolling-list-item-text">{this.props.text}</span>
</Link>
<span className="scrolling-list-item-subtext">{this.props.subtext}</span>
</div>
)
}
}
\end{lstlisting}
This component does not contain any internal state. Instead, it simply receives data from its parent component in the view hierarchy, and is tasked with displaying it. By keeping components stateless, they are more generic, and therefore more reusable. Writing reusable code in this way assists in satisfying requirement \textbf{N.5}, which says that the application must be constructed in a maintainable manner.
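For example, a parent component such as the ``Trending'' list can render one \code{ScrollingListItem} per hashtag, passing down only the data each item should display. The sketch below is illustrative: the shape of the trending-hashtag data and the import path are assumptions rather than the project's actual definitions.
\begin{lstlisting}[caption={An illustrative parent component reusing the stateless list-item.}]
import * as React from "react";
import { ScrollingListItem } from "./ScrollingListItem";

// Hypothetical shape of a single trending hashtag received from the server.
interface TrendingHashtag { name: string; count: number; }

interface TrendingListProps { hashtags: Array<TrendingHashtag>; }

// A presentational parent: it receives the hashtags via props and maps
// each one onto a reusable ScrollingListItem.
export class TrendingList extends React.Component<TrendingListProps, any> {
    render() {
        return (
            <div className="scrolling-list">
                {this.props.hashtags.map(h =>
                    <ScrollingListItem
                        key={h.name}
                        link={`/query/${h.name}`}
                        text={`#${h.name}`}
                        subtext={`${h.count} tweets`} />
                )}
            </div>
        );
    }
}
\end{lstlisting}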
\subsection{Searching}
When a user enters a hashtag $Q$ into the search box and presses enter, an event handler is fired which changes the URL to \code{/\#/query/Q}. Since we are using React Router, the hierarchy of components mounted on the screen relates directly to the current URL. The TypeScript/JSX code which defines this hierarchy of components and their associated URLs is shown in Listing \ref{routerdef}.
\begin{lstlisting}[caption=Definition of the hierarchy of React component and how they map to the URL.,label=routerdef]
ReactDOM.render((
<Router>
<Route component={App}>
<Route path="/" component={Home}>
<Route path="/query/:query" component={QueryResults}>
<Route path="/query/:query/user/:screenName" component={TwitterUserPreviewPane} />
</Route>
</Route>
</Route>
</Router>
), document.getElementById("wtfc-app-mount"));
\end{lstlisting}
As a result of the code above, any request to a URL beginning with \code{/\#/query} results in the \code{QueryResults} component and all of its parents being mounted. In this way, all routing logic within the application is handled by the client. As soon as \code{QueryResults} mounts, React calls its \code{componentDidMount} method, shown in Listing \ref{code:setquerychannel}. Inside this method, we send a request to the server to start the WebSocket handshake, and the server will create the channel for this query. In addition, we register callback functions which will execute when the client receives data from the server through the connected WebSocket, and we schedule the keep-alive messages that the client will send to the server for as long as \code{QueryResults} remains mounted. The TypeScript code which handles this process is shown in Listing \ref{code:setquerychannel} (extracted from the file \code{QueryResults.tsx}).
\begin{lstlisting}[caption=The method called by React when a new component mounts.,label=code:setquerychannel]
componentDidMount: function() {
// Creates WebSocket, keep-alive scheduled, registers callbacks
this._setQueryChannel(this.props.params.query);
},
_setQueryChannel(query: string): void {
let querySocket = new WebSocket(`ws://localhost:9000/ws/query/${this.props.params.query}`);
querySocket.onmessage = event => {
let recs: Array<UserScore> = JSON.parse(event.data)['results'];
let actualResultSize: number = JSON.parse(event.data)['totalResultSize'];
let history: Map<string, List<number>> = this.state.queryUserHistories;
// Update the history for the graph for this user
recs.forEach((rec: UserScore) => {
let screenName: string = rec.screenName;
if (history.has(screenName)) {
let userData: List<number> = history.get(screenName);
history = history.set(screenName, userData.push(rec.score));
} else {
history = history.set(screenName, Immutable.List([]));
}
});
this.setState({
queryComplete: true,
actualResultSize: actualResultSize,
queryResults: Immutable.List(recs),
queryUserHistories: history
});
};
// Schedule keep-alives only after handshake complete
querySocket.onopen = (event) => {
// Continuously send Keep-Alives to inform server that we still want recs and stats
let keepAlive = ChannelUtility.buildKeepAlive(this.props.params.query);
querySocket.send(keepAlive);
let keepAliveTrigger = setInterval(() => {
Logger.info(`Sending keep-alive to query channel ${query}`, "KEEP-ALIVE");
querySocket.send(keepAlive);
}, Configuration.KEEP_ALIVE_FREQUENCY);
this.setState({
querySocket: querySocket,
keepAlive: keepAliveTrigger
});
}
},
\end{lstlisting}
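Since the keep-alive interval and the WebSocket are created when \code{QueryResults} mounts, they must also be disposed of when it unmounts, so that the server can expire the channel once keep-alives stop arriving. A minimal sketch of such a clean-up handler is shown below; the exact clean-up code in \code{QueryResults.tsx} may differ.
\begin{lstlisting}[caption={A sketch of the corresponding clean-up when the component unmounts.}]
componentWillUnmount: function() {
    // Stop sending keep-alives; the server will close the channel once they cease
    if (this.state.keepAlive) {
        clearInterval(this.state.keepAlive);
    }
    // Close the WebSocket associated with this query
    if (this.state.querySocket) {
        this.state.querySocket.close();
    }
},
\end{lstlisting}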
\subsection{Query Results}
Query results are presented as a list on the left hand side of the display. Figure \ref{queryresults} shows how the query results are displayed. As the score for each user is re-evaluated, the ordering of the results changes. The numbers shown on the right hand side of the suggestions represent each user's relevance score. The graph shown on each suggestion shows the historical changes in score for the corresponding user.
\begin{figure}[H]
\centering
\includegraphics[scale=0.9]{queryresults.png}
\caption{An example of the display of suggested users.}
\label{queryresults}
\end{figure}
\subsection{Encouraging Interaction}
At the top right hand corner of the screen we display real-time lists of the latest queries which have been entered by users of the system. We also display the most popular hashtags found within the past 10 minutes of streaming. The aim of these components is to encourage users to interact with the system by providing them with up-to-the-moment information on what is popular on Twitter and what users are currently searching for. By selecting an item in the list, it is entered as a query. It was hoped that this feature would make people more likely to interact with the system, since it may provide them with suggestions for hashtags they could search for. As discussed in the Evaluation chapter of this report, users said that these features did indeed make them more likely to interact with the application.
\subsection{Previewing Twitter Users}
On selecting a suggested account, users are shown a preview of that account without having to leave the application. Figure \ref{userpreview} shows how we display the preview to users. Several elements of this view are personalised for each user, taking into consideration the colours used in their Twitter profile. Any occurrences of the query term within tweets are highlighted to bring attention to potentially relevant tweets. By previewing the suggestion within the application, users can provide relevance feedback for the purposes of evaluation and improving the relevance of future query results. This component fades into view, to prevent the confusion that could arise if the profile of one user were immediately replaced by that of another. This animation was created using \textit{Velocity-React}, which allows animations to be specified declaratively \cite{velocityreact} using JSX syntax.
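As an illustration of this declarative style, the sketch below wraps content in a \code{VelocityComponent} so that it fades in when it becomes visible. The component name, prop names and the chosen duration are assumptions; the real preview pane may be animated differently.
\begin{lstlisting}[caption={A sketch of a declarative fade-in animation using Velocity-React.}]
import * as React from "react";
import { VelocityComponent } from "velocity-react";

// Fades its children in (or out) whenever the `visible` prop changes.
// The 400ms duration is an illustrative choice.
export class FadeIn extends React.Component<{visible: boolean}, any> {
    render() {
        return (
            <VelocityComponent animation={{ opacity: this.props.visible ? 1 : 0 }}
                               duration={400}>
                <div>{this.props.children}</div>
            </VelocityComponent>
        );
    }
}
\end{lstlisting}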
\begin{figure}
\centering
\includegraphics[scale=0.5]{userpreview.png}
\caption{The preview of a user's Twitter profile that is shown when a suggestion is selected.}
\label{userpreview}
\end{figure}
\section{Summary}
Both client and server presented a number of implementation challenges. This chapter has described some of these challenges and how they were overcome. On the server it was shown how feature extraction, indexing, and retrieval were performed. The use of real-time technologies, such as WebSockets to implement channels, was also discussed. A variety of techniques for improving system responsiveness, such as asynchronous programming and actor pooling, were also described.
In the description of the implementation of the client side, the use of a single page application and client side routing were explained. Early prototypes of the user interface were shown, and then the final iteration of the user interface was displayed, and its components were described.
\chapter{Evaluation}
After implementation had concluded, the application was evaluated using a variety of different evaluation techniques. The purpose of this evaluation was to attempt to answer the following questions:
\begin{enumerate}
\item Is there a gap in the market for this product?
\item Did users find the system to be usable?
\item Did the system help users find Twitter users with expertise in ongoing events?
\item What impact on results and the number of users discarded do different filtering configurations have?
\item What is the impact of altering the rate at which the system reads tweets?
\end{enumerate}
\section{User Evaluation}
In addition to weekly feedback from the project supervisor, a user evaluation study was conducted in the final weeks of the project to determine users' opinions on the application.
\subsection{Questionnaires}
Two separate questionnaires were used. The first aimed to gain insight into how people use Twitter, and the second was a post-evaluation questionnaire enabling them to provide additional feedback after the think-aloud session.
\subsubsection{Twitter Usage Questionnaire}
This questionnaire was distributed to a subset of Level 4 Computing Science students at the University of Glasgow. 18 students completed this questionnaire. The intention was to determine how people use Twitter, specifically in the context of events. It also aimed to discover what motivates them to use Twitter and the considerations they make before following users. By gaining insight into these areas, we can gain a better understanding of the difficulties that Twitter users have, allowing us to determine whether there exists a gap in the market for this product.
\begin{enumerate}
\item \textit{How often do you use Twitter?}
\par
\begin{figure}[H]
\centering
\includegraphics[scale=0.7]{howoftentwitter.png}
\label{howoftentwitter}
\end{figure}
The majority of respondents said that they use Twitter less than once per week (labelled as \textit{Very occasionally}). Additionally, 11\% of respondents said that they never use Twitter, but have used it in the past and so understand its concepts.
\item \textit{How many people do you follow on Twitter?}
\par
Participants in the survey follow on average 150.2 Twitter users. Two respondents gave an approximation of the number of people they followed, and one respondent who no longer uses Twitter said they follow zero users. As a result, this figure is likely to be slightly below the true value.
\item \textit{Approximately what proportion of tweets that you see on Twitter do you find to be relevant to your interests?}
\par
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{proprelevance.png}
\label{proprelevance}
\end{figure}
This question aimed to determine how well Twitter currently does at meeting the information needs of users. The results showed that users generally find a minority of the information they see on Twitter as relevant to their interests, with over two thirds of participants saying that they believe less than 50\% of the content they see is relevant.
\item \textit{What considerations do you make when you decide to follow someone on social media?}
\begin{table}[H]
\centering
\begin{tabular}{| l | l | l |}
\hline
Consideration & Number of agreements & Percentage (\%) \\ \hline
Their popularity & 5 & 27.8 \\ \hline
How often they tweet & 3 & 16.7 \\ \hline
The contents of their tweets & 16 & 88.9 \\ \hline
Whether you know them in real life & 10 & 55.6 \\ \hline
Whether following them is appropriate & 4 & 22.2 \\
\hline
\end{tabular}
\end{table}
The majority of respondents (88.9\%) said that the contents of the tweets written by users is one of the considerations they make when deciding whether to follow them or not. Only 27.8\% of respondents said that they consider the popularity of the account, despite this being one of the main signals used by existing services in the ranking of accounts. As such, this may be an indication that the current content-based approach to suggesting accounts to follow is more in line with what users want.
\item \textit{How easy do you find it to discover people tweeting about ongoing events?}
\par
\begin{figure}[H]
\centering
\includegraphics[scale=0.7]{findingusers.png}
\label{findingusers}
\end{figure}
Participants in general found it relatively simple to find users tweeting about ongoing events.
\item \textit{How easy do you find it to discover people tweeting \textbf{frequently} about ongoing events?}
\par
\begin{figure}[H]
\centering
\includegraphics[scale=0.7]{findingusersfreq.png}
\label{findingusersfreq}
\end{figure}
When tasked with finding users who are frequently tweeting about an ongoing event, participants say they have more difficulty. The results are skewed towards \textit{Very difficult}, in contrast with question 5. This suggests that Twitter's existing product has been relatively unsuccessful at assisting people in meeting their need for expertise with regard to events.
\item \textit{Do you believe you would use Twitter more if you followed more people you are interested in?}
\par
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{usetwittermore.png}
\label{usetwittermore}
\end{figure}
The majority of questionnaire respondents said that they would use Twitter more if they followed more people that they are interested in. This suggests that recommending users which meet the information needs or interests of users is likely to increase the amount of time that people spend on Twitter.
\item \textit{I am more likely to use Twitter if an event I am interested in is currently happening. (1 represents strongly disagree, 5 represents strongly agree)}
\par
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{morelikelytwitter.png}
\label{morelikelytwitter}
\end{figure}
This question aimed to determine the influence that ongoing events have on people's decision to visit Twitter. A majority of respondents replied saying that they either ``Agree'' or ``Strongly Agree'' that ongoing events that they have an interest in increase the likelihood of them visiting Twitter.
\end{enumerate}
The results of this questionnaire show that Twitter users have issues with the existing services which are similar to this project. In particular, they struggle to recommend users who have been frequently discussing ongoing events, despite a large number of respondents saying that such events make them more likely to use Twitter. This would suggest that there is indeed a gap in the market for this product, targeting the subset of the population which use Twitter to follow ongoing events.
\subsubsection{Post-Evaluation Questionnaire}
The purpose of this questionnaire was to retrieve feedback from users after they had evaluated the project, and to attempt to answer evaluation questions 2 and 3 (listed at the start of this chapter). The questionnaire asked users for feedback on system usability, and for their opinion on the overall relevance of the suggestions. Participants were asked to answer questions on a scale from 0 to 9, where 0 represents ``strongly disagree'' and 9 represents ``strongly agree''. The results of the usability section of this questionnaire are shown in Table \ref{table:posteval}.
\begin{table}[H]
\centering
\begin{tabular}{| l | l |}
\hline
Question & Mean Response \\ \hline
Overall, I am satisfied with how easy it is to use this system. & 8.00 \\ \hline
It was simple to use this system. & 7.83 \\ \hline
I can effectively complete tasks using this system. & 7.66 \\ \hline
I feel comfortable using this system. & 7.33 \\ \hline
It was easy to learn to use this system. & 8.00 \\ \hline
Whenever I make a mistake using the system, I can recover easily and quickly. & 7.66 \\ \hline
The organisation of information on the screen is clear. & 7.83 \\ \hline
The user interface of the system is pleasant. & 7.83 \\ \hline
I like using the interface of the system. & 8.50 \\ \hline
I believe the system kept me informed of what it was doing. & 6.00 \\ \hline
I found that the system responded quickly to my actions. & 8.17 \\ \hline
\textbf{Average} & \textbf{7.71} \\
\hline
\end{tabular}
\caption{\label{table:posteval}The average responses from the usability section of the post-evaluation questionnaire.}
\end{table}
The project scored an average of 7.71 out of 9 across all usability questions, indicating that users overall found the system to be usable. However, responses to individual questions indicate that there are certainly areas which can be improved. For example, increasing the level of feedback users receive when they perform actions may result in an improvement in mean response for the lowest scoring question (``I believe the system kept me informed of what it was doing'').
Participants were also asked to judge whether they were happy with more specific areas of the project, such as the suggestions made and the impact of individual elements of the user interface. The results from this part of the evaluation are shown in Table \ref{table:posteval2}.
\begin{table}[H]
\centering
\begin{tabular}{| l | l |}
\hline
Question & Mean Response \\ \hline
I believe that the updating ranks gave me insight into new participants in discussions. & 7.17 \\ \hline
I found the ranking of users suggested by the system to be accurate. & 6.33 \\ \hline
I found the 'Trending' hashtags list encouraged me to use the system. & 7.50 \\ \hline
I found the users suggested by the system to be relevant. & 7.00 \\
\hline
\end{tabular}
\caption{\label{table:posteval2} Responses to remaining post-evaluation questions.}
\end{table}
The results from this section of the post-evaluation questionnaire suggest that users overall were relatively happy with the suggestions made by the system, giving the suggestions an average score of 7.00. However, users were less satisfied with the ranking of these results, with a lower average response of 6.33. These scores suggest that the application did indeed assist users in finding users tweeting about ongoing events.
\subsection{Think-Aloud Study}
All six evaluation participants took part in a ``think-aloud'' study, whereby they speak their thoughts aloud as they interact with the application. The intention of this study was to determine whether the application had any significant usability issues that multiple users noticed, and to determine the size of the ``Gulf of Execution''. That is, we try to determine the extent to which the actions that users perceive as being required to complete the task differ from the actions actually required. The task given to the users was to try to find users who were currently tweeting about the 2016 BRIT Awards.
A variety of constructive feedback was retrieved during these sessions. Some of the most common or constructive items of feedback are shown below.
\begin{itemize}
\item \textit{``The back button at the top left corner of the home-page when you first arrive on the application is confusing.''}
\par
Two participants noted that this button was confusing because they had just arrived on the site from elsewhere. The back button works identically to the browser's back button, so on first arrival it takes users back to the previous URL they visited (even if it wasn't within this application). It was suggested that this be replaced with a ``Home'' button.
\item \textit{``It wasn't immediately obvious that selecting a hashtag in the ``Trending'' or ``Recent Searches'' lists would submit a search for it.''}
\par
Although users generally said they were not surprised that selecting an item in the list performed a search when they tried clicking on it, it was generally felt that it could be made clearer exactly what would happen.
\item \textit{``When I search, it is not clear whether I have to include the hashtag symbol or not.''}
\par
Every participant made note of this, but it did not cause any issues because the application accepts either case.
\item \textit{``I assume that the graphs and numbers in user suggestions are some form of relevance score, but it is not completely clear.''}
\par
Participants were more confused about the graph than the scores (see Figure \ref{queryresults}). All of them understood that the score indicated some ranking function, but the idea that the graphs represent the change in each user's score \textit{in isolation} caused confusion.
\item \textit{``It would be nice if I could access someone's Twitter profile easily from the application so that I can follow those accounts that I find interesting.''}
\par
This feature was present in an older version of the product, and it was an oversight that it was not added to the current iteration.
\item \textit{The trending hashtags encouraged interaction as intended - users ended up making several additional searches by selecting items in this list.}
\item \textit{Most users completely ignored the help text provided.}
\end{itemize}
The information gathered from the think-aloud sessions show that there are certainly areas of the application that can be improved on from a usability perspective. However, there were no critical issues discovered, and the overall feedback received was positive (as shown previously in Table \ref{table:posteval}).
\section{Experiments}
Two experiments were performed with the system. The first examined filtering controls, and the second examined the impact of altering the rate at which the system reads tweets. These were intended to answer evaluation questions 4 and 5 respectively.
\subsection{Experimenting With Filtering Controls}
The implications of altering the threshold at which tweets are discarded during the filtering stage were examined in order to answer evaluation question 4. It was hypothesised that the ``Spelling Accuracy'' and ``Follower Count'' filters would have the largest impact on the number of users discarded. By altering this configuration, the effect of filtering on the number of users the application has to process was investigated. Using the default filtering controls shown in Table \ref{table:standardfilters}, it was found that in a sample of 15000 tweets, approximately $97\%$ of users were discarded at the filtering stage. This shows that the combination of seemingly relaxed filters can result in a large number of users being dropped, therefore massively reducing the load on the server.
Further experiments were performed to determine the impact that the individual default filters have on the overall number of users filtered. This experiment was performed by disabling all user filtering, and examining the effects of individual filters by applying them in isolation. The determined contribution of the individual default filtering controls is shown in Table \ref{table:filteringresults}. The total percentage of users filtered is greater than 100\% because a single tweet will usually fail the checks imposed by several filters at once.
\begin{table}[H]
\centering
\begin{tabular}{| l | l |}
\hline
Filter Name & \% of Users Filtered \\ \hline
ALL FILTERS & 97 \\ \hline
Spelling Accuracy & 92 \\ \hline
Follower Count & 44 \\ \hline
Word Count & 30 \\
\hline
\end{tabular}
\caption{\label{table:filteringresults}The individual impact of different filtering controls. ($N=15000$).}
\end{table}
Three participants repeated the practical evaluation using the system under a modified set of filtering controls. The modified controls were identical to the default controls shown in Table \ref{table:standardfilters}, except the ``follower count'' filter was completely removed. As a result of this, the additional 44\% of users with less than 300 followers are processed. The participants who examined this modified configuration all said that a significantly larger number of suggestions were made by the application. They also said that there was very little noticeable difference between the quality or relevance of the results when comparing the two configurations. This may indicate that the ``Follower Count'' signal should not be weighted so heavily when deciding which users to filter from the stream. This observation suggests that manual filter controls are difficult to properly tune without repeated evaluation. Therefore, using classification techniques based on machine learning may provide a more robust means of finding an optimal weighting to apply to filtering parameters.
\begin{table}
\centering
\begin{tabular}{| l | l | l |}
\hline
Filter Name & Required Value & Description\\ \hline
Spelling Accuracy & $>50\%$ & Tweets with 50\% or less of words spelt correctly are discarded. \\ \hline
Word Count & $>3$ & Tweets with 3 or fewer words are discarded. \\ \hline
Follower Count & $\geq300$ & Tweets whose authors have fewer than 300 followers are discarded. \\ \hline
Hashtag Count & $<5$ & Tweets with 5 or more hashtags are discarded. \\ \hline
Capitalised Words & N/A & Tweets that are entirely capitalised are discarded. \\
\hline
\end{tabular}
\caption{\label{table:standardfilters}The default filtering controls.}
\end{table}
\subsection{Investigating the Effects of Stream Replay Rate}
The rate at which tweets are read when using a source file has a direct effect on system performance and usability. To answer evaluation question 5, an investigation was conducted into the performance and usability impact of altering the \textit{stream replay rate}. The stream replay rate is the rate at which new tweets are read into the system from a source file. This rate is configurable using two parameters: ``BatchDuration'' and ``BatchSize'' (\textbf{Note}: these parameters are unrelated to the batching performed by Spark Streaming described in the ``Implementation'' chapter). The ``BatchDuration'' parameter defines the time between examining batches of tweets from the source. The ``BatchSize'' parameter defines the number of tweets in each of these batches. These parameters can be configured using the file entitled ``application.conf'', and are shown in Listing 5.1. Users were given the opportunity to perform tasks with the system with two different stream replay rates, and asked to provide feedback on which version they preferred. The replay rates that users evaluated are described in Tables \ref{table:replayrate1} and \ref{table:replayrate2}.
\begin{table}[H]
\centering
\begin{tabular}{| l | l | l |}
\hline
Parameter & Value & Description \\ \hline
BatchSize & 70 & Each batch contained 70 tweets. \\ \hline
BatchDuration & 1000 & A batch was processed every 1000 milliseconds. \\
\hline
\end{tabular}
\caption{\label{table:replayrate1} The default stream replay configuration (70 tweets/sec, updating every second).}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{| l | l | l |}
\hline
Parameter & Value & Description \\ \hline
BatchSize & 100 & Each batch contained 100 tweets. \\ \hline
BatchDuration & 500 & A batch was processed every 500 milliseconds. \\
\hline
\end{tabular}
\caption{\label{table:replayrate2} The modified stream replay configuration (200 tweets/sec, updating every half second).}
\end{table}
\begin{lstlisting}[caption=Configuration of the stream replay rate in application.conf,language=Ini]
# path to gzipped json file containing tweets which will be read
# leave blank for live stream
stream.sourcefile.path = "/path/to/gzipped/tweets.json.gz"
# the number of tweets in each batch
stream.sourcefile.batchSize = 100
# time in milliseconds between each batch
stream.sourcefile.batchDuration = 500
\end{lstlisting}
Participants stated that they preferred the rate at which suggestions are made using the modified stream replay configuration. However, one person observed that for more popular events the faster update speed may make the user interface difficult to interact with. In reality, the optimal update rate is dependent on the popularity of the event. As such, it may be required in the future to buffer results on the client side for extremely popular events, giving users the ability to update them whenever they wish.
\section{Behaviour Testing}
Testing was done using the \textit{Specs 2} Scala testing framework \cite{specs2}. This framework allows us to define how our application works in terms of its behaviours. Tests were written to examine feature extraction, querying, and the server's response to requests for the single page at which the application is mounted. These tests worked in isolation, but the author was unable to configure the Play Framework to run all of the tests using the same application. It proved extremely difficult to configure the environment that the tests execute in to match the environment that the application runs in. As such, each Specs2 specification had to be run individually, using a separate Play application for each. Unfortunately, these issues proved cumbersome to work around, and so testing focuses primarily on critical areas of source code. Further work would be required in this area to resolve this issue.
Akka TestKit \cite{akka} made it possible to test the direct output of the internal behaviour of actors by exposing the methods defined inside the actor. These methods are traditionally hidden by Akka. An example of how the internal behaviour can be examined is shown in Listing \ref{test}.
\begin{lstlisting}[label=test,caption={Testing the internal behaviour of an actor.}]
"A QueryService actor" should {
"return an empty TerrierResultSet with an actual size of 0 on an empty query" in new Actors {
running(app) {
// Get the underlying actor (typically hidden by Akka)
val queryServiceActor = TestActorRef(app.injector.instanceOf(
BindingKey(classOf[QueryService]))
).underlyingActor
// Check the response
queryServiceActor.doQuery(EmptyQuery).actualSize must be equalTo 0
}
}
...
\end{lstlisting}
\section{Integration Testing}
In addition to tests which ensure that components behave as expected, tests were also written to ensure that they work correctly within their intended environment. In the context of actor systems, this is done by testing how actors respond to \textit{stimulus} from a \textit{probe}. Since actors perform computation in response to messages, we can probe their behaviour by sending them test messages and ensuring that they respond as expected.
As an example of how integration testing was useful: When a \code{SparkContext} has been started, it is illegal to attempt to register additional actions on the data. Since the system registers Spark actions in asynchronous calls, it is a possibility that the \code{SparkContext} is started before all actions are registered. To avoid this scenario, any actor registering Spark actions must inform its parent when it has done so. The parent will wait for all child actors to make this confirmation before starting the context. Integration testing is used to ensure that all actors which register Spark actions correctly inform their parent when this registration is complete, and thus avoids a runtime \code{IllegalStateException}. The associated test is shown in Listing \ref{integrationtest}.
\begin{lstlisting}[label=integrationtest,caption=Ensuring that the FeatureExtraction actor is correctly instantiated.]
"The FeatureExtraction actor should respond that it is ready when it receives a Twitter stream handle" in new Actors {
running(app) {
val featureExtractionActorRef = TestActorRef(app.injector.instanceOf(BindingKey(classOf[FeatureExtraction])))
val probe = TestProbe() // Construct the 'probe'
val config = app.injector.instanceOf(classOf[Configuration])
val streamPath = app.configuration.getString("stream.sourcefile.path").get
val handle = SparkInit.ssc.actorStream[Status]
(TweetStreamSimulator.props[Status](streamPath, 10, 10), TweetStreamSimulator.name)
// Apply 'stimulus'
featureExtractionActorRef.tell(ActiveTwitterStream(handle), probe.ref)
// Ensure correct response is received within 10 seconds
probe.expectMsgType[PipelineActorReady](FiniteDuration(10, TimeUnit.SECONDS))
}
}
\end{lstlisting}
\section{Summary}
Throughout this chapter, we have attempted to answer the evaluation questions listed at the beginning of the chapter. These answers are summarised below.
\begin{enumerate}
\item \textit{Is there a gap in the market for this product?}
\par
Through the findings of the Twitter usage questionnaire, it was determined that there is indeed a gap in the market for the product. The findings of this questionnaire were discussed in detail in Section 5.1.1.
\item \textit{Did users find the system to be usable?}
\par
The application scored an average of 7.71 out of 9 in response to the usability questions posed in the post-evaluation questionnaire, indicating that users did find the system to be usable. These responses were discussed in Section 5.1.1.
\item \textit{Did the system help users find Twitter users with expertise in ongoing events?}
\par
The second set of questions within the post-evaluation questionnaire indicated that users were overall happy with the results provided by the system, but it was found that there is certainly room for improvement. (See Section 5.1.1 - Post-Evaluation Questionnaire)
\item \textit{What impact on results and the number of users discarded do different filtering configurations have?}
\par
Section 5.2.1 showed that altering the filtering configuration can have a large impact on the number of users filtered. It showed that some filters have a disproportionately large impact on the number of users discarded. It was found that there was little difference between the filtering configuration which discarded accounts with fewer than 300 followers, and the configuration that did not.
\item \textit{What is the impact of altering the rate at which the system reads tweets?}
\par
Section 5.2.2 examined the impact of altering the ``stream replay rate''. It was found that given the two test configurations, users enjoyed using the faster configuration (200 tweets/sec, updating every half second) more. However, it was noted that for extremely popular events, updating the display every half second may cause difficulty.
\end{enumerate}
\chapter{Conclusion}
The final iteration of the ``Who To Follow In Context'' product implements a majority of the project's requirements. The project can analyse a stream of tweets in real-time, from either a source file or the Twitter live streaming API, and display dynamic results to users.
The architecture of the product has been thoroughly documented in this report, and the implementation of several of its components has been discussed in detail. The functions of these components ranged from processing a stream of tweets to handling concurrent queries submitted from several users. The development of the user interface was also discussed, as well as several implementation challenges and design decisions that were associated with it. In particular, the decision to construct the application as a single-page application was of note, since it involved the unusual decision of delegating URL routing logic to the client-side. The benefits of such an architecture were discussed in the Implementation chapter.
During the evaluation it was shown that this analysis can be performed without a noticeable impact on the responsiveness of the user interface. It has also been shown that users are overall happy with the usability of the final product, and the quality of the suggestions that it makes. Evaluation work also suggested that there is a segment of the market that suffers from the problems which this application aims to solve. Finally, we found that removing a relatively impactful filter (follower count) had little to no impact on users' perception of the quality of suggestions. The evaluation phase also proved extremely useful in discovering candidate ideas for future development.
\section{Future Work}
If work was to continue on the project, there are several areas which could be improved or extended. Some of these suggestions are made with the power of hindsight, whilst others are made as a result of the evaluation conducted in the previous section.
\subsection{Improving Results Using Learning To Rank}
As of writing, the application has extracted features from the tweets of $3,893,913$ unique Twitter users. By collecting a large set of relevance judgements for these users under different queries, a learning to rank model could be trained in order to improve the relevance of the suggestions made by the application. The Terrier Information Retrieval Platform contains an implementation of the LambdaMART learning to rank technique \cite{lambdamart}, meaning that most of the infrastructure required for such work is already in place \cite{l2r}.
\subsection{Implementing Evaluation Feedback}
A large collection of constructive criticism was received during the evaluation stage of the project. Some of the issues that participants had came up in several evaluations of the product. It should therefore be a priority to fix any issues that were not isolated incidents of misunderstanding.
\subsection{Machine Learning Based Filtering}
The current implementation uses a configurable set of quality threshold ``filters'' which can be modified in order to change which tweets are discarded during the filtering stage. Using a machine learning based classifier such as a Support Vector Machine trained on real world judgements of the quality of tweets would avoid the need for having to evaluate arbitrary filter control values. The classifier itself would find the optimal filtering criteria based on the training data. A Support Vector Machine is suggested because it remains efficient even with an extremely large number of features, and would thus allow feature extraction to be expanded in scope with minimum computational overhead.
\subsection{Deploying Across A Cluster}
The core technologies used to construct ``Who To Follow In Context'' are designed to be deployed in a distributed manner. The use of the Akka framework simplified the distribution of computation across threads. However, Akka also has the ability to distribute work across multiple machines over a network. By doing this, the application would be able to process a larger number of tweets per minute. Increasing the rate at which the application can process tweets will not only allow for better evaluation of the product, it will also increase the rate at which suggestions are made to users. This may make it more difficult for users to interact with the system, due to the user interface quickly updating to reflect change on the backend. As such, deployment across a cluster would allow for an investigation into a huge range of usability issues associated with the presentation of large, dynamically updating data sets to users.
\subsection{Formal Evaluation of Suggestions}
The frequently updating nature of the suggested users made performing a formal evaluation of the results more difficult than it otherwise would be in a traditional search engine, where the results are displayed and are static. Relevance judgements received from users indicate whether a user is relevant at the current moment in time, but not whether they are relevant within the overall list of suggestions made by the application. A formal evaluation of the suggestions could be performed by building an identically configured index offline, and then asking users to evaluate the results of querying this index. The results from an offline evaluation could then be used to tune the retrieval parameters of the application.
\section{Reflection}
Working on this project has improved my programming ability and has introduced me to a huge range of new technologies, many of which represent entirely different views on software design from those that I am accustomed to. Before working on this project, I had never worked with Scala, TypeScript, Redis or the Actor model, and so working with these technologies was a great opportunity to learn new skills and I am happy that I chose them.
The most enjoyable part of the project was working with Akka, since it represents a paradigm of concurrency with its own benefits and flaws which I have never experienced before. It made me more appreciative of the field of software engineering with the realisation that such vastly different views on architecture exist. I also enjoyed working with the WebSocket API, since I have personal interest in how real-time technologies can improve user experience on the web, and using them allowed me to explore this further.
At times the project proved stressful to work with, due to the amount of coursework exercises throughout the year in Level 4 Computing Science. There were times where I thought I would never finish certain features, but I am happy that I persevered and met a majority of the project's requirements in the end. If I were to work on the project again, I would try to better organise my time so that progress was not slowed as much during coursework deadline weeks.
%%%%%%%%%%%%%%%%
% %
% APPENDICES %
% %
%%%%%%%%%%%%%%%%
\begin{appendices}
\chapter{Building and Running the Project}
An example of running from the command line is as follows:
\begin{verbatim}
> cd WhoToFollow/app/assets/typescript # Go to the front-end root
> npm install # Install front-end dependencies.
> tsd install # Fetch TypeScript definition files.
> gulp less # Compile and bundle the LESS files.
> gulp build # Transpile and concatenate TypeScript.
> cd ../../../ # Return to the project root
> activator run # Get server dependencies, compile, run
\end{verbatim}
Note that several technologies are required in order to run these commands, including:
\begin{itemize}
\item NodeJS \& NPM
\item TypeScript Definition Manager (TSD)
\item Gulp
\end{itemize}
Other dependencies are installed automatically during the build process.
The application should now be accessible at \code{localhost:9000/\#/}.
\end{appendices}
%%%%%%%%%%%%%%%%%%%%
% BIBLIOGRAPHY %
%%%%%%%%%%%%%%%%%%%%
\bibliographystyle{plain}
\bibliography{bib}
\end{document}
\documentclass[review]{elsarticle}
\usepackage{lineno,hyperref}
\usepackage{amsmath}
\usepackage{multirow}
\usepackage{placeins}
\modulolinenumbers[5]
\journal{Physica A: Statistical Mechanics and its Applications}
\usepackage{xr}
\externaldocument{supportingInformation}
%%%%%%%%%%%%%%%%%%%%%%%
%% Elsevier bibliography styles
%%%%%%%%%%%%%%%%%%%%%%%
%% To change the style, put a % in front of the second line of the current style and
%% remove the % from the second line of the style you would like to use.
%%%%%%%%%%%%%%%%%%%%%%%
%% Numbered
%\bibliographystyle{model1-num-names}
%% Numbered without titles
%\bibliographystyle{model1a-num-names}
%% Harvard
%\bibliographystyle{model2-names.bst}\biboptions{authoryear}
%% Vancouver numbered
%\usepackage{numcompress}\bibliographystyle{model3-num-names}
%% Vancouver name/year
%\usepackage{numcompress}\bibliographystyle{model4-names}\biboptions{authoryear}
%% APA style
%\bibliographystyle{model5-names}\biboptions{authoryear}
%% AMA style
%\usepackage{numcompress}\bibliographystyle{model6-num-names}
%% `Elsevier LaTeX' style
\bibliographystyle{elsarticle-num}
%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\begin{frontmatter}
\title{Temporal stability in human interaction networks\tnoteref{mytitlenote}}
\tnotetext[mytitlenote]{The Supporting Information document supplies thorough tables and figures.}
%% Group authors per affiliation:
\author[add1]{Renato Fabbri\corref{mycorrespondingauthor}}
\address[add1]{S\~ao Carlos Institute of Physics, University of S\~ao Paulo (IFSC/USP)}
\ead{[email protected]}
\author[add2]{Ricardo Fabbri}%
\address[add2]{Polytechnic Institute, Rio de Janeiro State University (IPRJ/UERJ)}
\ead{[email protected]}
\author[add3]{Deborah Christina Antunes}
\ead{[email protected]}
\address[add3]{Psychology Course, Federal University of Cear\'a (UFC), Sobral Campus}
\author[add4]{Marilia Mello Pisani}
\ead{[email protected]}
\address[add4]{Center for Natural and Human Sciences, Federal University of ABC (CCNH/UFABC)}
%% or include affiliations in footnotes:
\author[add1]{Osvaldo Novais de Oliveira Junior}
\ead{[email protected]}
\cortext[mycorrespondingauthor]{Corresponding author}
\begin{abstract}
This paper reports on stable (or invariant) properties of human interaction networks,
with benchmarks derived from public email lists. Activity, recognized through messages sent,
along time and topology were observed in snapshots in a timeline,
and at different scales. Our analysis shows that activity is practically the same for all networks across timescales ranging from seconds to months.
The principal components of the participants in the topological metrics space remain practically unchanged as different sets of messages are considered.
The activity of participants follows the expected scale-free trace, thus yielding the hub, intermediary and peripheral classes of vertices by comparison against the Erd\"os-R\'enyi model. The relative sizes of these three sectors are essentially the same for all email lists and the same along time.
Typically, $<15\%$ of the vertices are hubs, 15-45\% are intermediary and $>45\%$ are peripheral vertices. Similar results for the distribution of participants in the three sectors and for the relative importance of the topological metrics were obtained for 12 additional networks from Facebook, Twitter and ParticipaBR. These properties are consistent with the literature and may be general for human interaction networks, which has important implications for establishing a typology of participants based on quantitative criteria.
\end{abstract}
\begin{keyword}
complex networks \sep pattern recognition \sep statistics \sep social network analysis \sep human typology
% \MSC[2010] 00-01\sep 99-00
\PACS{89.75.Fb,05.65.+b,89.65.-s}% PACS, the Physics and Astronomy
\end{keyword}
\end{frontmatter}
\linenumbers
\begin{quotation}
`The reason for the persistent plausibility of the typological approach, however, is not a static biological one, but just the opposite: dynamic and social.'
\emph{- Adorno et al, 1969, p. 747}
\end{quotation}
\section{Introduction}\label{sec:into}
The first studies dealing explicitly with human interaction networks
date from the nineteenth century while the foundation of
social network analysis is generally attributed to the psychiatrist Jacob Moreno in the mid-twentieth century~\cite{moreno,newmanBook}. With the increasing availability of data related to human interactions, research about these networks has grown continuously. Contributions can now be found in a variety of fields, from social sciences and humanities~\cite{latour2013} to computer science~\cite{bird} and physics~\cite{barabasiHumanDyn,newmanFriendship}, given the multidisciplinary nature of the topic. One of the approaches from an exact science perspective is to represent interaction networks as complex networks~\cite{barabasiHumanDyn,newmanFriendship}, with which
several features of human interaction have been revealed. For example, the topology of human interaction networks exhibits a scale-free trace, which points to the existence of a small number of highly connected hubs and a large number of poorly connected nodes. The dynamics of complex networks representing human interaction has also been addressed~\cite{barabasiEvo,newmanEvolving}, but only to a limited extent, since research is normally focused on a particular metric or task, such as accessibility or community detection~\cite{access,newmanModularity}.
In this paper we analyze the evolution of human interaction networks.
Directed and weighted representations were built through the observation of replies as links.
Interaction networks from email lists were the most convenient for deriving results and for benchmarking while networks from Facebook, Twitter and ParticipaBR were used for the sake of generalization.
Using a timeline of activity snapshots with a constant number of contiguous messages, we found remarkable stability (or invariance) for important network properties. For instance, activity along different timescales follows specific patterns; the most basic topological metrics can always be combined into characteristic principal components; and the fractions of participants in different sectors do not vary with time. This is not an intuitive result, given that participants constantly transition in network structure. Because these properties were shared by networks from various sources, and are consistent with the literature in complex networks~\cite{newmanBook}, we advocate that the conclusions might be valid for general classes of interaction networks. In particular, this allows us to
bridge the gap between data analysis and social sciences in the discussion of types of networks and of participants.
It is worth noting that typologies are the canon of scientific literature for the classification of human agents, with pragmatic standards~\cite{myers} and critical paradigms~\cite{adorno,typCanon}.
This paper is organized as follows. Section~\ref{sec:related} describes related work, while data, scripts and methods of analysis are given in Section~\ref{sec:data} and Section~\ref{sec:carac}.
Section~\ref{sec:results} reports results and discussion, leading to Section~\ref{sec:conc} for conclusions.
Supplementary data analysis, including directions for video and sound mappings of network structures, and numeric detailed results for networks from Twitter, Facebook and ParticipaBR, are provided in the Supporting Information document.
\subsection{Related work}\label{sec:related}
The fact that unreciprocated edges often exceed 50\% in human interaction networks~\cite{newmanEvolving} motivated the inclusion of symmetry metrics in our analysis.
No correlation of topological characteristics and geographical coordinates was found~\cite{barabasiGeo},
therefore geographical positions were not considered in our study.
Gender related behavior in mobile phone datasets was indeed reported~\cite{barabasiSex}
but it is not relevant for the present work because email messages and addresses have no gender related metadata~\cite{gmanePack}.
Research on network evolution is often restricted to network growth, in which there is a monotonic increase in the number of events~\cite{barabasiEvo}.
Network types have been discussed with regard to the number of participants, intermittence of their activity and network longevity~\cite{barabasiEvo}. Two topologically different networks emerged from human interaction networks, depending on whether the frequency of interactions follows a generalized power law or an exponential connectivity distribution~\cite{barabasiTopologicalEv}. In email list networks, scale-free properties were reported with $\alpha \approx 1.8$~\cite{bird} (as in web browsing and library loans~\cite{barabasiHumanDyn}), and different linguistic traces were related to weak and strong ties~\cite{Gmane2}.
\section{Data and scripts}\label{sec:data}\label{scripts}
Email list messages were obtained from
the Gmane email archive, which consists of more than $20,000$
email lists (discussion groups) and more than $130\times 10^6$ messages~\cite{Gmanewikipedia}. These lists cover a variety of topics, mostly technology-related. The archive can be described as a corpus along with message metadata, including sent time, place, sender name, and sender email address.
The usage of the Gmane database in scientific research is reported in studies of isolated lists and of lexical innovations~\cite{Gmane2,bird}.
We observed various email lists and selected four of them together with data from Twitter, Facebook and ParticipaBR for a thorough analysis,
from which general properties can be inferred. These lists are as follows:
\begin{itemize}
\item Linux Audio Users list\footnote{gmane.linux.audio.users is list ID in Gmane.}, with participants from different countries with artistic and technological interests. English is the prevailing language. Abbreviated as LAU from now on.
\item Linux Audio Developers list\footnote{gmane.linux.audio.devel is list ID in Gmane.}, with participants from different countries; a more technical and less active version of LAU. English is the prevailing language. Abbreviated as LAD from now on.
\item Developer's list for the standard C++ library\footnote{gmane.comp.gcc.libstdc++.devel is list ID in Gmane.}, with computer programmers from different countries. English is the prevailing language. Abbreviated as CPP from now on.
\item List of the MetaReciclagem project\footnote{gmane.politics.organizations.metareciclagem is list ID in Gmane.}, a Brazilian email list for digital culture. Portuguese is the prevailing language, although some messages are written in Spanish and English. Abbreviated as MET from now on.
\end{itemize}
The first 20,000 messages of each list were considered, with basic attributes of total timespan, authors, threads and missing messages indicated in Table~\ref{tab:genLists}. We considered 140 additional email lists to report on the interdependence between the number of participants and the number of discussion threads. Furthermore, 12 networks from Facebook (8), Twitter (2) and ParticipaBR (2) were scrutinized, and their analysis is given in the Supporting Information document for the purpose of testing the generality of the results.
\begin{table}
\centering
\label{tab:genLists}
\begin{tabular}{l|c|c|c|c|c}\hline
list & $date_1$ & $date_{M}$ & $N$ & $\Gamma$ & $\overline{M}$ \\\hline
\input{tables/tab1Overview}
\end{tabular}
\caption{{\bf Overview of the email lists analyzed.} Columns $date_1$ and $date_M$ have dates of first and last messages from the 20,000 messages considered in each email list.
$N$ is the number of participants (number of different email addresses),
$\Gamma$ is the number of discussion threads (count of messages without antecedent),
$\overline{M}$ is the number of messages missing in the 20,000 collection
($100\frac{23}{20000}=0.115$ percent in the worst case).
}
\end{table}
The data and scripts used to derive the results, figures and tables, and this article itself are publicly available. Email messages are downloadable from the Gmane public database~\cite{Gmanewikipedia}.
Data annotated from Facebook and Twitter are in a public repository~\cite{fbtwData}.
Data from ParticipaBR were used from the linked data/semantic web RDF triples~\cite{opa}, available in~\cite{datahub}.
Computer scripts are delivered through a public domain Python PyPI package and an open Git repository~\cite{gmanePack}.
This open approach to both data and scripts reinforces the scientific aspect of the contribution~\cite{openSci} and mitigates ethical and moral issues involved in researching systems constituted of human individuals~\cite{anPhy,ccs15}.
\section{Methods}\label{sec:carac}
%The networks were characterized with: 1) statistics of activity along time, in scales from seconds to years; 2) dispersion of basic topological metrics; 3) sectioning of the networks in hubs, intermediary and periphery; 4) iterative visualization, sonification and data inspection.
%These procedures are described below.
\subsection{Temporal activity statistics}\label{sec:mtime}
Messages were counted over time as histograms in the scales of seconds,
minutes, hours, days of the week, days of the month, and months of the year.
Most standard measures of location and dispersion, e.g. the usual mean and
standard deviation, hold little meaning in a compact Riemannian manifold,
such as the recurrent time periods that we are interested in.
Similar measures were taken using circular statistics~\cite{directionalStats},
in which each measurement $t$ is represented as a unit complex number,
$z=e^{i\theta}=\cos(\theta)+i\sin(\theta)$, where $\theta=t\frac{2\pi}{T}$,
and $T$ is the period in which the counting is repeated.
For example, $\theta=12\frac{2\pi}{24}=\pi$ for a message sent at $t=12h$ and given $T=24h$ for days.
The moments $m_n$, lengths of moments $R_n$, mean angles $\theta_\mu$, and rescaled mean angles $\theta_\mu'$ are defined as:
\begin{align}\label{eq:cmom}
m_n&=\frac{1}{N}\sum_{i=1}^N z_i^n \nonumber\\
R_n&=|m_n|\\
\theta_\mu&=Arg(m_1) \nonumber \\
\theta_\mu'&=\frac{T}{2\pi} \theta_\mu \nonumber
\end{align}
$\theta_\mu'$ is used as the measure of location.
Dispersion is measured using the circular variance $Var(z)$,
the circular standard deviation $S(z)$, and the circular dispersion $\delta(z)$:
\begin{align}\label{eq:cmd}
Var(z)&=1 - R_1 \nonumber\\
S(z)&= \sqrt{-2\ln(R_1)}\\
\delta(z)&=\frac{1-R_2}{2 R_1^2} \nonumber
\end{align}
\noindent
Also, the ratio $r=\frac{b_l}{b_h}$ between the lowest $b_l$ and the highest $b_h$ incidences on the histograms
served as a further clue of how close the distribution was to being uniform. As expected, positive correlations were found among $r$, $Var(z)$, $S(z)$ and $\delta(z)$,
which can be noticed in Section~\ref*{si:circ} of the Supporting Information. The circular dispersion $\delta(z)$ was found to be more sensitive and was therefore preferred in the discussion of results.
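As an illustration of the definitions above, the following minimal Python sketch (illustrative only, not the authors' released scripts, which are available in the repository~\cite{gmanePack}) computes the rescaled circular mean and the dispersion measures for a list of timestamps:
\begin{verbatim}
# Illustrative sketch of the circular statistics defined above.
import numpy as np

def circular_stats(times, T):
    """times: timestamps (e.g. hour of day); T: period (e.g. 24 for days)."""
    theta = 2 * np.pi * np.asarray(times, dtype=float) / T
    z = np.exp(1j * theta)                     # unit complex numbers
    m1, m2 = z.mean(), (z ** 2).mean()         # first and second moments
    R1, R2 = abs(m1), abs(m2)
    theta_mu = np.angle(m1) * T / (2 * np.pi)  # rescaled mean angle
    var = 1 - R1                               # circular variance
    std = np.sqrt(-2 * np.log(R1))             # circular standard deviation
    delta = (1 - R2) / (2 * R1 ** 2)           # circular dispersion
    return theta_mu, var, std, delta

# Example: hours of the day at which messages were sent.
print(circular_stats([9, 10, 11, 11, 12, 14, 15, 15, 16, 22], T=24))
\end{verbatim}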
\subsection{Interaction networks}\label{intNet}
Edges in interaction networks can be modeled either as weighted or unweighted, and as directed or undirected~\cite{bird,newmanCommunityDirected,newmanCommunity2013}.
Networks in this paper are directed and weighted, the most informative of these possibilities. We did not investigate the directed unweighted, undirected weighted, or undirected unweighted representations of the interaction networks.
The interaction networks were obtained as follows: a direct response from participant B to a message from participant A yields an edge from A to B, as information went from A to B. The reasoning is: if B wrote a response to a message from A, he/she read what A wrote and formulated a response, so B assimilated information from A, thus $A \rightarrow B$.
Edges in both directions are allowed. Each time an interaction occurs, the value of one is added to the edge weight. Self-loops were regarded as non-informative and discarded. Inverting edge direction yields the status network: B read the message and considered what A wrote worth responding to, giving status to A, thus $B\rightarrow A$. By convention, this paper considers the information network as described above ($A\rightarrow B$) and depicted in Figure~\ref{formationNetwork}. These interaction networks are reported in the literature as exhibiting scale-free and small-world properties, as expected for a number of social networks~\cite{bird,newmanBook}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.4\textwidth]{figs/criaRede3_}
\caption{{\bf The formation of interaction networks from exchanged messages.} Each vertex represents a participant. A reply message from author B to a message from author A is regarded as evidence that B received information from A and yields a directed edge. Multiple messages add ``weight'' to a directed edge. Further details are given in Section~\ref{intNet}.}
\label{formationNetwork}
\end{figure}
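As an illustration, the construction just described can be sketched in Python with the networkx library, assuming a list of (replier, original author) pairs extracted from the reply structure of the threads (the variable names are hypothetical and this is not the authors' exact implementation):
\begin{verbatim}
# Sketch of the interaction-network construction described above.
import networkx as nx

def interaction_network(reply_pairs):
    """A reply by B to a message by A adds weight to the edge A -> B."""
    g = nx.DiGraph()
    for replier, original_author in reply_pairs:
        if replier == original_author:       # self-loops are discarded
            continue
        a, b = original_author, replier      # information flows from A to B
        w = g[a][b]["weight"] + 1 if g.has_edge(a, b) else 1
        g.add_edge(a, b, weight=w)
    return g

g = interaction_network([("bob", "alice"), ("carol", "alice"), ("bob", "alice")])
print(g["alice"]["bob"]["weight"])           # 2
\end{verbatim}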
%Edges can be created from all antecedent message authors on the message-response thread to each message author.
%We only linked the immediate antecedent to the new message author, both for simplicity and because in adding two edges, $x\rightarrow y$ and $y\rightarrow z$, there is also a weaker connection between $x$ and $z$. Potential interpretations for this weaker connection are: double length, half weight or with one more ``obstacles''. This suggests the adequacy of centrality measurements to account for the connectivity of a node with all other nodes, such as betweenness centrality, eigenvector centrality and accessibility~\cite{luMeasures,access}.
\subsubsection{Topological metrics}\label{measures}
The topology of the networks was characterized
from a small selection of the most basic
and fundamental measurements for each vertex~\cite{newmanBook}, as follows:
\begin{itemize}
\item Degree $k_i$: number of edges linked to vertex $i$.
\item In-degree $k_i^{in}$: number of edges ending at vertex $i$.
\item Out-degree $k_i^{out}$: number of edges departing from vertex $i$.
\item Strength $s_i$: sum of weights of all edges linked to vertex $i$.
\item In-strength $s_i^{in}$: sum of weights of all edges ending at vertex $i$.
\item Out-strength $s_i^{out}$: sum of weights of all edges departing from vertex $i$.
\item Clustering coefficient $cc_i$: fraction of pairs of neighbors of $i$ that are linked, i.e. the standard clustering coefficient metric for undirected graphs.
\item Betweenness centrality $bt_i$: fraction of geodesics that contain vertex $i$. The betweenness centrality index was computed for weighted digraphs as specified in~\cite{faster}.
\end{itemize}
The non-standard metrics below were formulated to capture symmetries in the activity of participants:
\begin{itemize}
\item Asymmetry of vertex $i$: $asy_i=\frac{k_i^{in}-k_i^{out}}{k_i}$.
\item Average asymmetry of edges at vertex $i$:\\ $\mu_i^{asy}=\frac{\sum_{j\in J_i} e_{ji}-e_{ij}}{|J_i|}$, where $e_{ij}$ is 1 if there is an edge from $i$ to $j$, and $0$ otherwise, and $J_i$ is the set of neighbors of vertex $i$.
\item Standard deviation of asymmetry of edges:\\ $\sigma_i^{asy}=\sqrt{\frac{\sum_{j\in J_i}[\mu^{asy}_i -(e_{ji}-e_{ij}) ]^2 }{|J_i|} }$.
\item Disequilibrium: $dis_i=\frac{s_i^{in}-s_i^{out}}{s_i}$.
\item Average disequilibrium of edges:\\ $\mu_i^{dis}=\frac{\sum_{j \in J_i}\frac{w_{ji}-w_{ij}}{w_{ji}+w_{ij}}}{|J_i|}$, where $w_{xy}$ is the weight of edge $x\rightarrow y$ and zero if there is no such edge.
\item Standard deviation of disequilibrium of edges: $\sigma_i^{dis}=\sqrt{\frac{\sum_{j\in J_i}\left[\mu^{dis}_i-\frac{w_{ji}-w_{ij}}{w_{ji}+w_{ij}}\right]^2}{|J_i|}}$.
\end{itemize}
Both standard and non-standard metrics are used for the Erd\"os sectioning (described in Section~\ref{sectioning}) and for performing principal component analysis (PCA) (as described in Section~\ref{sec:pca}).
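For illustration, the symmetry-related metrics can be computed for a single vertex of a weighted digraph roughly as in the following Python sketch; it is a simplified reading of the definitions above (it assumes the vertex has at least one neighbor and that all weights are positive), not the authors' exact implementation:
\begin{verbatim}
# Sketch of the asymmetry and disequilibrium metrics for one vertex i.
import numpy as np

def symmetry_metrics(g, i):
    k_in, k_out = g.in_degree(i), g.out_degree(i)
    s_in = g.in_degree(i, weight="weight")
    s_out = g.out_degree(i, weight="weight")
    asy = (k_in - k_out) / (k_in + k_out)          # asymmetry of vertex i
    dis = (s_in - s_out) / (s_in + s_out)          # disequilibrium of vertex i
    e_asy, e_dis = [], []
    for j in set(g.predecessors(i)) | set(g.successors(i)):
        eji = 1 if g.has_edge(j, i) else 0
        eij = 1 if g.has_edge(i, j) else 0
        wji = g[j][i]["weight"] if eji else 0
        wij = g[i][j]["weight"] if eij else 0
        e_asy.append(eji - eij)
        e_dis.append((wji - wij) / (wji + wij))
    return {"asy": asy, "dis": dis,
            "mu_asy": np.mean(e_asy), "sigma_asy": np.std(e_asy),
            "mu_dis": np.mean(e_dis), "sigma_dis": np.std(e_dis)}
\end{verbatim}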
\subsection{Erd\"os sectioning}\label{sectioning}
It is often useful to think of vertices as hubs, peripheral and intermediary. We have therefore derived the peripheral, intermediary and hub sectors of the empirical networks from a comparison against an Erd\"os-R\'enyi network with the same number of edges and vertices,
as depicted in Figure~\ref{fig:setores}. We refer to this procedure as \emph{Erd\"os sectioning}, with the resulting sectors being named as \emph{Erd\"os sectors}. The Erd\"os sectioning was recognized as a theoretical possibility by M. O. Jackson in his video lectures~\cite{3setores}, but to our knowledge it has not as yet been applied to empirical data.
The degree distribution $\widetilde{P}(k)$ of a real network with a scale-free profile $\mathcal{N}_f(N,z)$, with $N$ vertices and $z$ edges, has fewer nodes of near-average degree than the distribution $P(k)$ of an Erd\"os-R\'enyi
network with the same number of vertices and edges. Accordingly, in this work we define the intermediary sector of a network as the set of all nodes whose degree is less abundant in the real network than in the Erd\"os-R\'enyi model:
\begin{equation}\label{criterio}
\widetilde{P}(k)<P(k) \Rightarrow \text{k is intermediary degree}
\end{equation}
If $\mathcal{N}_f(N,z)$ is directed and has no self-loops, the probability of the existence
of an edge between two arbitrary vertices is $p_e=\frac{z}{N(N-1)}$.
A vertex in the ideal Erd\"os-R\'enyi digraph with the same number of vertices and edges, and thus the same probability $p_e$ for the presence of an edge, will have degree $k$ with probability
\begin{equation}
P(k)=\binom{2(N-1)}{k}p_e^k(1-p_e)^{2(N-1)-k}
\end{equation}
The lower degree fat tail corresponds to the border vertices, i.e. the peripheral sector or periphery where $\widetilde{P}(k)>P(k)$ and $k$ is lower than any value of $k$ in the intermediary sector.
The higher degree fat tail is the hub sector, i.e. $\widetilde{P}(k)>P(k)$ and $k$ is higher than any value of $k$ in the intermediary sector. The reasoning for this classification is as follows: vertices so highly connected that they are virtually nonexistent in the Erd\"os-R\'enyi model are naturally associated with the hub sector.
Vertices with very few connections, which are far more abundant than expected in the Erd\"os-R\'enyi model,
are assigned to the periphery.
Vertices with degree values predicted as the most abundant in the Erd\"os-R\'enyi model,
near the average, and less frequent in the real network, are classified as intermediary.
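A simplified sketch of this criterion, without the adaptive binning introduced below and assuming the non-degenerate case in which some degree values satisfy Equation~\ref{criterio}, could read:
\begin{verbatim}
# Sketch of the Erdos sectioning: degree values whose empirical probability
# is below the binomial prediction are intermediary; the lower and higher
# fat tails give the periphery and the hubs. Illustrative only.
import numpy as np
from scipy.stats import binom

def erdos_sectors(g):
    n, z = g.number_of_nodes(), g.number_of_edges()
    p_e = z / (n * (n - 1))                  # directed network, no self-loops
    degrees = dict(g.degree())
    ks = np.array(sorted(set(degrees.values())))
    emp = np.array([sum(1 for d in degrees.values() if d == k)
                    for k in ks]) / n        # empirical degree distribution
    ref = binom.pmf(ks, 2 * (n - 1), p_e)    # Erdos-Renyi prediction P(k)
    inter = ks[emp < ref]                    # intermediary degree values
    k_l, k_r = inter.min(), inter.max()
    return {v: ("periphery" if k < k_l else "hub" if k > k_r else "intermediary")
            for v, k in degrees.items()}
\end{verbatim}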
\clearpage
\begin{figure}[!h]
\centering
% \includegraphics[trim={0 0 0 1cm},clip,width=\textwidth]{figs/fser__}
\includegraphics[width=\textwidth]{figs/fser___}
\caption{{\bf Three sectors of a scale-free network.} This is a classification of vertices by comparing degree distributions~\cite{3setores}.
The binomial distribution of the Erd\"os-R\'enyi network model exhibits more intermediary vertices, while a scale-free network, associated with the power-law distribution, has more peripheral and hub vertices. The sector borders are defined with respect to the intersections of the distributions. Characteristic degrees are in the compact intervals: $[0,k_L]$, $(k_L,k_R]$, $(k_R,k_{max}]$ for the periphery, intermediary and hub sectors, the ``Erd\"os sectors''.
The connectivity distribution of empirical interaction networks, e.g. derived from email lists, can be sectioned by comparison against the associated binomial distribution with the same number of vertices and edges. In this figure, a snapshot of 1000 messages from CPP list yields the degree distribution of an interaction network of 98 nodes and 235 edges. A thorough explanation of the method is provided in Section~\ref{sectioning}.}
\label{fig:setores}
\end{figure}
To ensure statistical validity of the histograms, bins can be chosen to contain at least $\eta$ vertices of the real network.
The range $\Delta$ of incident values of degree $k$ should be partitioned in $m$ parts $\Delta=\cup_{i=1}^m \Delta_i$,
with $\Delta_i\cap \Delta_j=\emptyset \; \forall\; i \neq j$ and:
%\begin{equation}
%\begin{split}
\begin{align}
\Delta_i =\Biggl\{ k\;\; & | & \overline{\Delta}_{i-1}< &\, k\leq l \text{ and }\;\;\;\;\;\;\;\;\;\;\;\;\nonumber\\
& \Biggl[ \Bigl[ & N - \sum_{k=0}^{\overline{\Delta}_{i-1}} & \eta_k< \eta \text{ and } l = \overline{\Delta} \Bigr] \text{ or }\\
& \Bigl[ & \sum_{k=\overline{\Delta}_{i-1}+1}^l & \eta_k \geq \eta \text{ and }\;\;\;\;\;\;\nonumber\\
& \Bigl( &\sum_{k=\overline{\Delta}_{i-1}+1}^{l-1} & \eta_k < \eta \text{ or } l=\overline{\Delta}_{i-1}+1 \Bigr) \;\Bigr] \Biggr] \Biggr\}\nonumber
\end{align}
%\end{split}
%\end{equation}
\noindent where $\eta_k$ is the number of vertices with degree $k$,
while $\overline{\Delta}_{i}=max(\Delta_{i})$, and $\overline{\Delta}_{0}=-1$.
% and $\overline{\Delta}_{i}<l\leq max(\Delta)$.
Equation~\ref{criterio} can now be written in the form:
\begin{equation}\label{criterio2}
\begin{split}
\sum_{x=min(\Delta_i)}^{\overline{\Delta}_i} \widetilde{P}(x) < \sum_{x=min(\Delta_i)}^{\overline{\Delta}_i} P(x) \Leftrightarrow \\
\Leftrightarrow \Delta_i \text{ spans intermediary degree values.}
\end{split}
\end{equation}
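As an illustration only, a simple left-to-right implementation of this binning, in which every bin gathers at least $\eta$ vertices of the real network and any remainder is merged into the last bin, might look like the following sketch (helper names are hypothetical):
\begin{verbatim}
# Sketch of the adaptive binning: group degree values so that each bin
# holds at least eta vertices; the leftover is merged into the last bin.
from collections import Counter

def adaptive_bins(degrees, eta=3):
    counts = Counter(degrees)                # eta_k: vertices with degree k
    bins, current, filled = [], [], 0
    for k in sorted(counts):
        current.append(k)
        filled += counts[k]
        if filled >= eta:
            bins.append(current)
            current, filled = [], 0
    if current:                              # fewer than eta vertices remain
        (bins[-1].extend(current) if bins else bins.append(current))
    return bins

print(adaptive_bins([1, 1, 1, 2, 2, 3, 4, 7, 7, 9], eta=3))
\end{verbatim}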
If the strength $s$ is used for comparison of the real network against the Erd\"os-R\'enyi model,
$P$ remains the same, but $P(\kappa_i)$ with $\kappa_i=\frac{s_i}{\overline{w}}$ should be used, where $\overline{w}=\frac{\sum_is_i}{2z}$ is the average weight of an edge and $s_i$ is the strength of vertex $i$. For in and out degrees ($k^{in}$, $k^{out}$), the real network should be compared against
\begin{equation}
\hat{P}(k^{way})=\binom{N-1}{k^{way}}p_e^{k^{way}}(1-p_e)^{N-1-k^{way}},
\end{equation}
\noindent where \emph{way} can be \emph{in} or \emph{out}. In and out strengths ($s^{in}$, $s^{out}$) are divided by $\overline{w}$ and compared also using $\hat{P}$. Note that $p_e$ remains the same, as each edge yields an incoming (or outgoing) edge, and there are at most $N(N-1)$ incoming (or outgoing) edges, thus $p_e=\frac{z}{N(N-1)}$, as with the total degree.
More formally, let $\gamma$ and $\phi$ be integers in the intervals $1 \leq \gamma \leq 6$, $1 \leq \phi \leq 3$, and let each of the six basic Erd\"os sectioning possibilities $\{E_{\gamma}\}$ have three Erd\"os sectors $E_{\gamma}= \{e_{\gamma, \phi} \}$ defined as
\begin{alignat}{3}\label{eq:part}
e_{\gamma,1}&=\{\;i\;|\;\overline{k}_{\gamma,L}\geq&&\overline{k}_{\gamma,i}\} \nonumber \\
e_{\gamma,2}&=\{\;i\;|\;\overline{k}_{\gamma,L}<\;&&\overline{k}_{\gamma,i}\leq\overline{k}_{\gamma,R}\} \\
e_{\gamma,3}&=\{\;i\;|\;&&\overline{k}_{\gamma,i}>\overline{k}_{\gamma,R}\} \nonumber,
\end{alignat}
\noindent where $\{\overline{k}_{\gamma,i}\}$ is
\begin{equation}
\begin{split}
\overline{k}_{1,i}&=k_i \\
\overline{k}_{2,i}&=k_i^{in} \\
\overline{k}_{3,i}&=k_i^{out} \\
\overline{k}_{4,i}&=\frac{s_i}{\overline{w}} \\
\overline{k}_{5,i}&=\frac{s_i^{in}}{\overline{w}} \\
\overline{k}_{6,i}&=\frac{s_i^{out}}{\overline{w}}
\end{split}
\end{equation}
\noindent and both $\overline{k}_{\gamma,L}$ and $\overline{k}_{\gamma,R}$ are found using $P(\overline{k})$ or $\hat{P}(\overline{k})$ as described above and illustrated in Figure~\ref{fig:setores}.
Since different metrics can be used to identify the three types of vertices, more than one metric can be used simultaneously, which is convenient when analysing small networks,
such as the cases where only 50 messages are considered in Section~\ref*{si:frac} of the Supporting Information.
%For example, a very stringent criterion can be used, according to which a vertex is only regarded as pertaining to a sector if it is so for all the metrics.
After a careful consideration of possible combinations, these were reduced to six:
\begin{itemize}
\item Exclusivist criterion $C_1$: vertices are only classified if the class is the same according to all metrics. In this case, vertices classified do not usually reach $N$ (or 100\%), which is indicated by a black line in Figure~\ref{fig:sectIL}.
\item Inclusivist criterion $C_2$: a vertex has the class given by any of the metrics. Therefore, a vertex may belong to more than one class, and the total number of memberships may exceed $N$ (or 100\%), which is indicated by a black line in Figure~\ref{fig:sectIL}.
\item Exclusivist cascade $C_3$: vertices are only classified as hubs if they are hubs according to all metrics. Intermediary are the vertices classified either as intermediary or hubs with respect to all metrics. The remaining vertices are regarded as peripheral.
\item Inclusivist cascade $C_4$: vertices are hubs if they are classified as such according to any of the metrics. The remaining vertices are intermediary if they belong to this category for any of the metrics. Peripheral vertices are those which are classified as such with respect to all metrics.
\item Exclusivist externals $C_5$: vertices are hubs if they are classified as such according to all the metrics. Vertices are peripheral if they are peripheral or hubs for all metrics. The remaining nodes are intermediary.
\item Inclusivist externals $C_6$: hubs are vertices classified as hubs according to any metric. The remaining vertices are peripheral if they are classified as such according to any metric. The rest of the vertices are intermediary.
\end{itemize}
Using Equations~(\ref{eq:part}), these \emph{compound criteria} $C_\delta$, with $\delta$ integer in the interval $1\leq\delta\leq6$, can be specified as:
%\begin{alignat}{3}
\begin{equation}
\begin{split}
C_1&=\left\{c_{1,\phi}=\left\{i\mid i\in e_{\gamma,\phi},\;\forall\;\gamma\right\}\right\} \\
C_2&=\left\{c_{2,\phi}=\left\{i\mid \exists\;\gamma: i \in e_{\gamma,\phi}\right\}\right\} \\
C_3&=\left\{c_{3,\phi}=\left\{i\mid i\in e_{\gamma,\phi'},\;\forall\;\gamma,\;\forall\;\phi'\geq \phi\right\}\right\} \\
C_4&=\left\{c_{4,\phi}=\left\{i\mid i\in e_{\gamma,\phi'},\;\forall\;\gamma,\;\forall\;\phi'\leq \phi\right\}\right\} \\
C_5&=\left\{c_{5,\phi}=\left\{i\mid i\in e_{\gamma,\phi'},\;\forall\;\gamma,\;\forall\;(\phi'+1)\%4\leq (\phi+1)\%4\right\}\right\} \\
C_6&=\left\{c_{6,\phi}=\left\{i\mid i\in e_{\gamma,\phi'},\;\forall\;\gamma,\;\forall\;(\phi'+1)\%4\geq (\phi+1)\%4\right\}\right\}
\end{split}
\end{equation}
%\end{alignat}
Notice that the exclusivist cascade is the same sectioning as an inclusivist cascade from periphery to hubs, but with the order of the sectors inverted.
The reduction of all possible compound criteria to the small set listed above might be formalized in strict mathematical terms, but this was considered out of scope for the present work.
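For illustration, the exclusivist ($C_1$) and inclusivist ($C_2$) criteria can be sketched as follows, given one sector label per metric for each vertex (0 for periphery, 1 for intermediary, 2 for hubs); this is a simplified toy example, not the code used for the reported results:
\begin{verbatim}
# Sketch of two compound criteria over per-metric sector labels.
def exclusivist(labels_per_metric):
    # a vertex is classified only if all metrics agree (criterion C1)
    return {v: labels[0] for v, labels in labels_per_metric.items()
            if len(set(labels)) == 1}

def inclusivist(labels_per_metric):
    # a vertex receives every class assigned by any metric (criterion C2)
    return {v: set(labels) for v, labels in labels_per_metric.items()}

labels = {"a": [2, 2, 2], "b": [1, 2, 1], "c": [0, 0, 0]}
print(exclusivist(labels))   # {'a': 2, 'c': 0}
print(inclusivist(labels))   # {'a': {2}, 'b': {1, 2}, 'c': {0}}
\end{verbatim}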
%\subsubsection{Sectioning of networks in peripheral, intermediary and hubs sectors}\label{sectioning}
\subsection{Principal Component Analysis of topological metrics}\label{sec:pca}
Principal Component Analysis (PCA) is a well documented technique~\cite{pca}, used here to address the following questions: 1) which metrics contribute to each principal component and in what proportion; 2) how much of the dispersion is concentrated in each component; 3) which are the expected values and dispersions for these quantities over various networks. This enables one to characterize human interaction networks in terms of the relative importance of network metrics and the way they combine.
Let $\mathbf{X}=\{X[i,j]\}$ be a matrix where each element is the value
of the metric $j$ at vertex $i$.
Let
$\mu_X [j]=\frac{\sum_i X[i,j]}{I}$ be the mean of metric $j$ over all $I$ vertices,
$\sigma_X [j]=\sqrt{\frac{\sum_i (X[i,j]-\mu_X [j])^2}{I}}$ the standard deviation of metric $j$,
and $\mathbf{X'}=\{X'[i,j]\}=\left\{\frac{X[i,j]-\mu_X[j]}{\sigma_X[j]}\right\}$
the matrix with the \emph{z-score} of each metric.
Let $\mathbf{V}=\{V[j,k]\}$ be the matrix $J\times J$ of eigenvectors
of the covariance matrix $\mathbf{C}$
of $\mathbf{X'}$, one eigenvector per column.
Each eigenvector combines the original metrics into one principal component, therefore
$V'[j,k]=100\frac{|V[j,k]|}{\sum_{j'} |V[j',k]|}$
is the percentage of the principal component $k$
that is proportional to the metric $j$.
%With $k$ eigenvectors
%$D[k]$,
%it is enough to
Let $\mathbf{D}=\{D[k]\}$ be the eigenvalues associated with the eigenvectors $\mathbf{V}$,
then $D'[k]=100\frac{D[k]}{\sum_{k'}D[k']}$
is the percentage of total dispersion of the system that the principal component $k$
is responsible for.
We consider, in general, the three largest eigenvalues and
the respective eigenvectors in percentages:
$\{(D'[k],\;V'[j,k])\}$.
These usually account for between 60\% and 95\% of the dispersion
and reveal patterns for a first analysis.
In particular,
given $L$ snapshots $l$ of the interaction network,
we are interested in the mean
$\mu_{V'}[j,k]$
and the standard deviation $\sigma_{V'}[j,k]$
of the contribution of metric $j$ to the principal component $k$,
and the mean
$\mu_{D'}[k]$
and the standard deviation
$\sigma_{D'}[k]$
of the contribution of the component $k$ to the dispersion
of the system:
\begin{align}\label{eq:pca}
\mu_{V'}[j,k] &=\frac{\sum_{l=1}^L V'[j,k,l]}{L}\nonumber\\
\sigma_{V'}[j,k]&=\sqrt{\frac{\sum_{l=1}^L (\mu_{V'}-V'[j,k,l])^2}{L}}\\\nonumber
\mu_{D'}[k]&=\frac{\sum_{l=1}^L D'[k,l]}{L}\\\nonumber
\sigma_{D'}[k]&=\sqrt{\frac{\sum_{l=1}^L (\mu_{D'}-D'[k,l])^2}{L}}
\end{align}
The covariance matrix
$\mathbf{C}$ is the correlation matrix because $\mathbf{X'}$ is normalized.
Therefore, $\mathbf{C}$ is also directly observed as a first clue for patterns
by the most simple associations:
low absolute values indicate low correlation (and a possible independence);
high values indicate positive correlation;
negative values with a high absolute value indicate negative correlation.
Notice that in this case the variable $k$ is not the degree value
but a principal component.
In the results the principal components are numbered
according to the magnitude of associated eigenvalue and $k$ is incorporated into
the notation (e.g. PC2 for metrics of $\mu_{V'}[j,2]$).
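The PCA step can be sketched as follows: z-score the vertex-by-metric matrix, take the eigendecomposition of the resulting correlation matrix, and express loadings and eigenvalues as percentages. The sketch is illustrative only, and the random matrix merely stands in for real measurements:
\begin{verbatim}
# Sketch of the PCA over topological metrics, with percentage outputs.
import numpy as np

def pca_percentages(X):
    """X: (vertices x metrics) matrix of topological measures."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)     # z-scores of each metric
    C = (Xz.T @ Xz) / Xz.shape[0]                 # correlation matrix
    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1]               # largest eigenvalues first
    evals, evecs = evals[order], evecs[:, order]
    D_pct = 100 * evals / evals.sum()             # dispersion per component
    V_pct = 100 * np.abs(evecs) / np.abs(evecs).sum(axis=0)  # loadings
    return D_pct, V_pct

rng = np.random.default_rng(0)
D, V = pca_percentages(rng.normal(size=(200, 14)))
print(D[:3])      # share of the dispersion of the first three components
\end{verbatim}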
\subsection{Evolution and audiovisualization of the networks}\label{sec:viz}
The evolution of the networks was observed within
sequences of snapshots. In each sequence, a fixed number of messages,
i.e. the window size $ws$, was used for all snapshots.
%was considered with different shifts in the message timeline to obtain snapshots.
The snapshots were made disjoint in the message timeline, and were used to perform both PCA with topological metrics and Erd\"os sectioning.
Figures and tables were usually inspected with
$ws=\{50, 100, 200, 400, 500, 800,$ $1000, 2000, 2500, 5000, 10000\}$ messages. Variations in the number of vertices, edges
and other network characteristics, within the same window size $ws$,
are given in Section~\ref*{si:frac} of the Supporting Information document.
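A minimal sketch of how such disjoint snapshots can be taken from the time-ordered message list follows (function and variable names are hypothetical):
\begin{verbatim}
# Sketch: slice the message timeline into disjoint windows of ws messages.
def disjoint_snapshots(messages, ws=1000):
    """messages: list ordered by sending time; yields blocks of ws messages."""
    for start in range(0, len(messages) - ws + 1, ws):
        yield messages[start:start + ws]
\end{verbatim}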
Network structures were mapped to video animations, sound and musical structures developed for this research~\cite{animacoes}.% ,galGmane,appGmane}.
Such \emph{audiovisualizations} were crucial in the initial steps and
to guide the research into the most important features of network evolution.
%Furthermore, the size of the three Erd\"os sectors could be visualized in a timeline fashion.
%Visualization of network structure was especially useful in the initial inspection of data and of the structures derived from the email lists.
%This is a way to enhance the reliability of the methods,
%of algorithmic routines, of data consistency,
%and of the results themselves.
%Also, we believe that this practice raises the scientific contribution of the work,
%handing not only the framework and the results,
%but the exact data and processes that render them~\cite{openSci}.
\section{Results and discussion}\label{sec:results}
%Remarkable features from the analysis of the four email lists are:
%\begin{itemize}
% \item The activity along time is practically the same for all lists, thus suggesting stable patterns.
% \item The fraction of participants in each Erd\"os sector is stable along time and can be determined even with very few messages
% \item The topological metrics combine into principal components in PCA in the same way for all lists and all snapshots ().
% \item Symmetry measures of the topology, as defined in this article, present more dispersion than the usual clustering coefficient (Section~\ref{prevalence}).
% \item Typology speculations are immediate from results (Section~\ref{sec:pty}).
%\end{itemize}
\subsection{Activity along time}\label{constDisc}
Regular patterns of activity were observed along time
in the scales of seconds, minutes, hours, days and months.
Histograms in each of the time scales were computed, as were circular average and dispersion values, and the results are given in Tables~\ref{tab:circ}-\ref{tab:min2}. For example, uniform activity is found with respect to seconds, minutes and days of the month. Weekend days exhibit about half the activity of regular weekdays, and there is a peak of activity between 11 a.m. and noon.
\begin{table}
\begin{center}
\begin{tabular}{ l|| c|c }
\hline
%scale & $\theta_\mu'$ & $S(z)$ & $Var(z)$ & $\delta(z)$ & $\frac{max(incidence)}{min(incidence)}$ & $ \mu_{\frac{max(incidence')}{min(incidence')}} $ & $ \sigma_{\frac{max(incidence')}{min(incidence')} } $ \\ \hline\hline
%& $\theta_\mu'$ & $S(z)$ & $Var(z)$ & $\delta(z)$ \\ \hline\hline
scale & mean $\theta_\mu'$ & dispersion $\delta(z)$ \\ \hline
\input{tables/tab2TimeLAD___}
\end{tabular}
\end{center}
\caption{{\bf Time-related circular statistics.} The rescaled circular mean $\theta_\mu'$ and the circular dispersion $\delta(z)$, described in Section~\ref{sec:mtime}, for different timescales. This example table was constructed using all LAD messages, and the results are the same for other lists, as shown in Section~\ref*{si:circ} of the Supporting Information document. The most uniform distribution of activity was found in seconds and minutes. Hours of the day exhibited the most concentrated activity (lowest $\delta(z)$), with mean between 2 p.m. and 3 p.m. ($\theta_\mu'=-9.61$). Weekdays, days of the month and months have mean near zero (i.e. near the beginning of the week, month and year) and high dispersion. Note that $\theta_\mu'$ has the dimensional unit of the corresponding time period while $\delta(z)$ is dimensionless.}
\label{tab:circ}
\end{table}
\begin{table}
\footnotesize
\input{tables/tabHoursCPP_}
\caption{{\bf Activity percentages along the hours of the day.} Nearly identical distributions were observed in other social systems, as shown in Section~\ref*{si:hours} of the Supporting Information document.
Highest activity was observed between noon and 6 p.m. (with 1/3 of total daily activity), followed by the time period between 6 p.m. and midnight.
Around 2/3 of the activity takes place from noon to midnight,
but the activity peak occurs between 11 a.m. and 12 p.m.
This table shows results for the activity in CPP.}
\label{tab:hin}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{ l || c | c | c | c | c | c | c}
\hline
& Mon & Tue & Wed & Thu & Fri & Sat & Sun \\ \hline
\input{tables/tabWeekdays}
\end{tabular}
\end{center}
\caption{{\bf Activity percentages along weekdays.}
Higher activity was observed during workweek days, with a decrease of activity on weekend days of at least one third and at most two thirds.}
\label{tab:win}
\end{table}
In the scales of seconds and minutes, activity is uniform,
with the messages being slightly more evenly distributed in all lists than in simulations with the uniform distribution\footnote{Numpy version 1.8.2, ``random.randint'' function, was used for simulations, algorithms in \url{https://github.com/ttm/percolation}.}.
In the networks, $\frac{min(incidence)}{max(incidence)} \in (0.784, 0.794)$, while simulations reach these values but have on average more discrepant higher and lower peaks, i.e. if $\xi=\frac{min(incidence')}{max(incidence')}$ then $\mu_\xi=0.7741 \text{ and } \sigma_\xi=0.02619$.
Therefore, the incidence of messages at each second of a minute and at each minute of an hour was considered uniform.
In these cases, the circular dispersion is maximized and the mean has little meaning as indicated in Table~\ref{tab:circ}.
As for the hours of the day, an abrupt peak is found between 11 a.m. and 12 p.m., the most active period is the afternoon, with one third of total daily activity, and two thirds of the activity take place in the second half of the day (noon to midnight). Days of the week revealed a decrease of between one third and two thirds of activity on weekends.
Days of the month were regarded as homogeneous with an inconclusive slight tendency of the first week to be more active.
Months of the year revealed patterns matching usual work and academic calendars. The time period examined here was not sufficient for the analysis of activity along the years. These patterns are exemplified in Tables~\ref{tab:hin}-\ref{tab:min2}.
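The comparison with uniform simulations mentioned above can be reproduced, in spirit, with a short sketch such as the following (parameters are illustrative; the original scripts are available in the repository~\cite{gmanePack}):
\begin{verbatim}
# Sketch: simulate messages falling uniformly on the 60 seconds of a minute
# and record the lowest/highest incidence ratio over several runs.
import numpy as np

def min_max_ratio(n_messages=20000, n_bins=60, runs=100, seed=0):
    rng = np.random.default_rng(seed)
    ratios = []
    for _ in range(runs):
        counts = np.bincount(rng.integers(0, n_bins, n_messages),
                             minlength=n_bins)
        ratios.append(counts.min() / counts.max())
    return np.mean(ratios), np.std(ratios)

print(min_max_ratio())
\end{verbatim}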
\FloatBarrier
\begin{table}
\footnotesize
\input{tables/tabMonthdaysMET}
\caption{{\bf Activity along the days of the month.}
Nearly identical distributions are found in all systems,
as indicated in Section~\ref*{si:monthdays} of the Supporting Information. Although slightly higher activity rates are found at the beginning of the month, the most important feature seems to be the homogeneity made explicit by the high circular dispersion in Table~\ref{tab:circ}.
This specific example and empirical table correspond to the activity of the MET email list.}
\label{tab:min}
\end{table}
\begin{table}
\footnotesize
\input{tables/tabMonthsLAU}
\caption{{\bf Activity percentages on months along the year.} Activity is usually concentrated in Jun-Aug and/or in Dec-Mar, potentially due to academic calendars, vacations and end-of-year holidays. This table corresponds to activity in LAU. Similar results are shown for other lists in Section~\ref*{si:months} of the Supporting Information document.}
\label{tab:min2}
\end{table}
\subsection{Stable sizes of Erd\"os sectors}\label{subsec:pih}
The distribution of vertices among the hub, intermediary and periphery Erd\"os sectors is remarkably stable along time if the snapshots hold 200 or more messages, as is clear in Figure~\ref{fig:sectIL} and in Section~\ref*{si:frac} of the Supporting Information document.
%Moreover, all email lists analyzed exhibit the same distribution profile.
Activity is highly concentrated on the hubs, while a very large number of peripheral vertices contribute to only a fraction of the activity.
This is expected for a system with a scale-free profile, as confirmed with the distribution of activity among participants in Table~\ref{autores}.
Typically, $[3\%-12\%]$ of the vertices are hubs,\\
$[15\%-45\%]$ are intermediary and $[44\%-81\%]$ are peripheral,
which is consistent with other studies~\cite{secFree}.
These results hold for the total, in and out degrees and strengths.
Stable sizes are also observed for 100 or fewer messages if the classification
of the three sectors is performed with one of the compound criteria established in Section~\ref{sectioning}. The networks often hold this basic structure with as few as 10-50 messages, i.e. concentration of activity and the abundance of low-activity participants take place even with very few messages, which is highlighted in Section~\ref*{si:frac} of the Supporting Information. A minimum window size for the observation of more general properties might be inferred by monitoring
both the giant component and the degeneration of the Erd\"os sectors.
In order to support the generality of these findings,
we list the Erd\"os sector sizes of 12 networks from Facebook, Twitter and ParticipaBR in Table~\ref*{tab:secE} of the Supporting Information document. The fractions of hubs, intermediary and periphery nodes are
essentially the same as for the email list networks but with exceptions and a greater variability.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figs/InText-WLAU-S1000__}
\caption{{\bf Stability of Erd\"os sector sizes.}
Fractions of participants derived from degree and strength criteria, $E_1$ and $E_4$ described in Section~\ref{sectioning}, are both on the left.
Fractions derived from the exclusivist $C_1$ and the inclusivist $C_2$ compound criteria are shown in the plots to the right.
The ordinates $\overline{e_{\gamma,\phi}}=\frac{|e_{\gamma,\phi}|}{N}$ denote the fraction of participants in sector $\phi$ through criterion $E_\gamma$
and, similarly, $\overline{c_{\delta,\phi}}=\frac{|c_{\delta,\phi}|}{N}$ denotes the fraction of participants in sector $\phi$ through criterion $C_\delta$.
Sections~\ref*{si:frac} and~\ref*{si:ext} of the Supporting Information bring a systematic collection of such timeline figures with all simple and compound criteria specified in Section~\ref{sectioning}, with results for networks from Facebook, Twitter and ParticipaBR.}
\label{fig:sectIL}
\end{figure*}
\begin{table}[h]
\begin{center}
\begin{tabular}{ l || c | c | c | c }
\hline
list & hub & $ Q_1 $ & $ Q_3 $ & $D_{-1}$ \\ \hline
\input{tables/userTab}
\end{tabular}
\end{center}
\caption{{\bf Distribution of activity among participants.}
The first column shows the percentage of messages sent by the most active participant. The column for the first quartile ($Q_1$) gives the minimum percentage of participants responsible for at least 25\% of total messages with the actual percentage in parentheses. Similarly, the column for the first three quartiles $Q_3$ gives the minimum percentage of participants responsible for 75\% of total messages.
The last decile $D_{-1}$ column shows the maximum percentage of participants responsible for 10\% of messages.}
\label{autores}
\end{table}
\subsection{Stability of principal components}\label{prevalence}
%The topology was analyzed using standard, well-established metrics of centrality and clustering.
%We also introduced symmetry metrics given the evidence of their importance in social contexts~\cite{newmanEvolving}.
%The contribution of each metric to the variance is very similar for all the networks and along time.
The principal components are very stable, i.e. the way the topological metrics combine into principal components varies little across snapshots.
Table~\ref{tab:pcain} exemplifies the formation of principal components by providing the averages over non-overlapping activity snapshots of a network. The most important result of this application of PCA, the stability of the principal components, is underpinned by the very small dispersion of the contribution of each metric to each principal component.
%The contribution of each metric to the
%principal components presents
%very small standard deviation.
\begin{table}[!h]
\footnotesize
\input{tables/tabPCA3CPP}
\caption{{\bf Invariance of principal components.} Loadings for the 14 metrics into the principal components for the MET list, $1000$ messages in 20 disjoint positions. The clustering coefficient (cc) appears as the first metric in the table, followed by 7 centrality metrics and 6 symmetry-related metrics. Note that the centrality measurements, including degrees, strength and betweenness centrality, are the most important contributors to the first principal component, while the second component is dominated by symmetry metrics. The clustering coefficient is only relevant for the third principal component. The three components account on average for more than 85\% of the variance.
The low standard deviation $\sigma$ implies that the principal components are considerably stable.}
\label{tab:pcain}
\end{table}
The first principal component is an average of centrality metrics:
degrees, strengths and betweenness centrality.
On one hand, the similar relevance of all centrality metrics is not surprising since they are highly correlated,
e.g. degree and strength have Spearman correlation coefficient $\in [0.95,1]$
and Pearson coefficient $\in [0.85,1)$ for window sizes greater than a thousand messages.
On the other hand, each of these metrics is related to a different participation characteristic,
and their equal relevance for variability,
as measured by the principal component, is noticeable.
Also, this suggests that these centrality metrics
are equally adequate for characterizing the networks
and the participants.
\begin{figure}
\centering
%\includegraphics[width=.6\textwidth,height=10cm]{figs/im13PCAPLOT__}
\includegraphics[width=.75\textwidth]{figs/im13PCAPLOT___}
\caption{{\bf Symmetry-related and clustering coefficient components along connectivity.} The first plot highlights the well-known pattern of degree versus clustering coefficient, characterized by the higher clustering coefficient of lower degree vertices.
The second plot shows the greater dispersion of the symmetry-related ordinates dominant in the second principal component (PC2).
This larger dispersion suggests that symmetry-related metrics are more powerful than the clustering coefficient
for characterizing interaction networks,
especially for hubs and intermediary vertices.
This figure reflects a snapshot of the LAU list with 1000 contiguous messages.}
% Similar structures were observed in all window sizes $ws\;\in\;[500,10000]$, in networks derived from email lists,
% and in networks from Facebook, Twitter and Participabr,
% which suggests a common relationship between the metrics of degrees, strengths and betweenness centrality,
% the symmetry-related metrics and clustering coefficient.}
\label{fig:sym}
\end{figure}
According to Table~\ref{tab:pcain} and Figure~\ref{fig:sym},
dispersion is larger in the symmetry-related metrics than in the clustering coefficient.
% As expected from basic complex network theory, peripheral vertices have low values of centrality metrics and larger dispersion with regard to the clustering coefficient.
% %The scatter plot in the third system of Figure~\ref{fig:sym},
% %where all metrics are considered and there is a greater dispersion
% %with respect to the ordinates,
% This reflects in the relevance of the symmetry-related metrics.
We conclude that the symmetry metrics are more powerful, in terms of dispersion in the topological metrics space, in characterizing interaction networks and their participants, than the clustering coefficient, especially for hubs and intermediary vertices (peripheral vertices have larger dispersion with regard to the clustering coefficient).
Interestingly, the clustering coefficient is always combined
with the standard deviation of the asymmetry and disequilibrium
of edges $\sigma^{asy}$ and $\sigma^{dis}$ in the third principal component.
%These results are also reported for 12 networks from Facebook, Twitter and Participabr
%in Section~\ref{si:ext} of the Supporting Information document.
Similar results are presented in Sections~\ref*{si:pcat} and~\ref*{si:ext}
of the Supporting Information for other email lists and interaction networks. A larger variability was found for the latter networks,
which motivated the use of interaction networks derived from email lists for benchmarking.
%the overall behavior was maintained in that centrality measurements
%were found prevalent in the first principal component,
%followed by symmetry-related metrics on the second principal
%component and then clustering coefficient on the third principal component.
%Similar results are presented in Sections~\ref{si:pcat} and~\ref{si:ext}
%of the Supporting Information document for other email lists and other interaction networks,
%with the consideration of strategic combinations of metrics.
\subsection{Types from Erd\"os sectors}\label{sec:pty}
Assigning a type to a participant raises important issues about the scientific canon for human types and the potential for stigmatization and prejudice. The Erd\"os sector to which a participant belongs can be regarded as implying a social type for this participant.
In this case, the type of a participant changes both along time and as different networks are considered, despite the stability of the network. Therefore, the potential for prejudice of such participant typology is attenuated~\cite{adorno}. In other words, an individual is a hub in a number of networks and peripheral in other networks, and even within the same network he/she most probably changes type along time~\cite{animacoes}.
The importance of this issue can be grasped by the consideration of static types derived from quantitative criteria. For example, in email lists with a small number of participants, the number of threads has a negative correlation with the number of participants.
When the number of participants exceeds a threshold, the number of threads has a positive correlation with the number of participants.
This finding is illustrated in Figure~\ref{fig:nmgamma3d}
and can also be observed in Table~\ref{tab:genLists}.
The assignment of types to individuals, in this latter case,
has more potential for prejudice because
the derived participant type is static and
one fails to acknowledge that
human individuals are not immutable entities.
Further observations regarding the Erd\"os sectors
and the implicit participant types were made, which are consistent with the literature~\cite{barabasiEvo}: 1) hubs and intermediary participants usually have intermittent activity, and stable activity was found only in smaller communities. For instance, the MET list had stable hubs while LAU, LAD and CPP exhibited intermittent hubs.
2) Network structure seems to be most influenced by the
activity of intermediary participants as they have less extreme
roles than hubs and peripheral participants and
can therefore connect to the sectors and other participants
in a more selective and explicit manner.
%Moreover, such typology of participants bridges exact and human sciences and may
%be enriched with concepts from other typologies,
%such as Meyer-Briggs, Pavlov or the authoritarian types of the F-Scale~\cite{adorno}.
%We analyzed the temporal evolution of the networks
%using visualization
%tools developed for this research~\cite{rcText,versinus}
%and inspected raw data.
%dictated (or revealed) by the
%(e.g. stable or intermittent patterns of activity and preferential communication
%with hubs or periphery)
%of both hubs and peripheral vertices
%have the trivial facets of interacting
%
%
%
%\begin{itemize}
% \item Typically, the activity of hubs is trivial: they interact as much as possible, in every occasion with everyone.
%The activity of peripheral vertices also follows a simple pattern: they interact very rarely, in very few occasions.
%Therefore, intermediary vertices seem responsible for the network structure.
%Intermediary vertices may exhibit preferential communication to peripheral, intermediary, or hub vertices; can be marked by stable communication partners; can involve stable or intermittent patterns of activity, to point just a few examples of this greater variety of roles.
%% \item Some of the most active participants receive many responses with relative few messages sent, and rarely are top hubs.
%%These seem as authorities and contrast with participants that respond much more than receive responses.
%% \item The most obvious community structure, as observed by a high clustering coefficient, i.e. members know each other often, is found mostly in peripheral and intermediary sectors.
%\end{itemize}
%Within networks as the whole objects of analysis,
%we were able to observe a peculiar correlation pattern
%between the number of threads and the number of participants.
\begin{figure}
\centering
% \includegraphics[trim={0 0 0 1cm},clip,width=.7\columnwidth]{figs/mpgamma2_}
\includegraphics[width=.7\columnwidth]{figs/mpgamma2__}
\caption{{\bf Threads against participants and messages.} A scatter plot of number of messages $M$ versus number of participants $N$ versus number of threads $\Gamma$ for 140 email lists.
Highest $\Gamma$ is associated with low $N$.
The correlation between $N$ and $\Gamma$ is negative for low values of $N$ but positive otherwise.
This negative correlation between $N$ and $\Gamma$ can also be observed in Table~\ref{tab:genLists}.
Accordingly, for $M=20000$ messages, this inflection
of correlation was found around $N=1500$, while CPP, LAU, LAD, MET lists
present smaller networks.}
\label{fig:nmgamma3d}
\end{figure}
%\section{Discussion}
% given the results, and before reaching the conclusions
% what to say?
% --> what is the overall knowledge derived from the results
% --> what are the limitations of this knowledge and of individual results
% --> how should this results carry on is on the next sections.
%\subsection{Consecutive scientific research}
% --> research
% textual diferences
% audiovisualization of data
% typologies, sociological critical theory, social psychology
%\subsection{Technological applications}
% --> technological
% resources categorization and recommendation
% document creation
% ontologies for the semantic web
%\subsection{Experimental and theoretical aspects of the research}
% --> methods
% Exploratory?
% Hypothesis testing?
% --> contributions
% verifiable
% knowledge
% contextualization in the academic knowledge
\subsection{Implications of the main findings}\label{sec:impl}
The findings reported in this article arose from an exploratory procedure to visually inspect the networks and to analyze considerable amounts of interaction network data.
% deriving from email lists and also from other networks.
While this procedure certainly has an ad hoc nature, the statistics in the data are sufficiently robust for important features of these interaction networks to be extracted.
Temporal stability, in the sense that interaction networks could be considered as stationary time series, is the most important feature. Also relevant is the significant stability found in the principal components, in the fraction of participants in each Erd\"os sector and in the activity along different timescales. In fact, these findings confirm our initial hypothesis, based on the literature~\cite{newmanBook}, that interaction networks should exhibit some stability traces. The potential generality of these findings is suggested by the analysis of networks derived from diverse systems, with interaction networks from public email lists serving as proper benchmarks. Indeed, with such benchmarks one can compare any social network system. Furthermore, this analysis enables us to establish an outline of human interaction networks. It takes the hub, intermediary and periphery sectors out of the scientific folklore and into classes drawn from quantitative criteria. It enables the conception of non-static human types derived from natural properties.
We envisage that the knowledge generated in the analysis may be exploited in applications where the type of each participant and the relative proportion of participants in each sector can be useful metadata. Just by way of illustration, this could be applied in semantic web initiatives, given that the Erd\"os sectorialization is static in a given snapshot. These results are also useful for classifying resources, e.g. in social media, and for resources recommendation to users~\cite{opa}.
Finally, the knowledge acquired with a quantitative treatment of the whole data may help guide the creation through collective processes of documents to assist in participatory democracy.
Perhaps the most outreaching implications are related to sociological consequences. The results expose a classification of human individuals which is directly related to the concentration of wealth and based on natural laws. The derived human typology changes over different systems and over time in the same system, which implies a negation of the absolute concentration of wealth. Such concentration exists but changes across different wealth criteria and with time. Also, the hubs stand out as dedicated, sometimes enslaved,
components of the social system. The peripheral participants have very limited interaction with the network. This suggests that intermediary participants tend to dictate structure, legitimate the hubs and stand out as authorities.
With regard to the limitations of our study, one should emphasize that not all types of human interaction networks were analyzed. Therefore, the plausible generalization of properties has to be treated with caution, as a natural tendency of such systems and not as a rule. Also, the stable properties in the networks were not explored to the limit, which leaves many open questions. For example, what are the maximum and minimum sizes of the networks for which they hold? What is the outcome of PCA analysis when more metrics are considered? What is the granularity in which the activity along the timescales is preserved? Do the findings reported also apply to other systems, beyond human networks?
\section{Conclusions}\label{sec:conc}
The very small standard deviations of principal components formation
(see Sections~\ref{sec:pca} and~\ref{prevalence}),
the presence of the Erd\"os sectors even in networks with
few participants (see Sections~\ref{sectioning} and~\ref{subsec:pih}),
and the recurrent activity patterns along different timescales (see Sections~\ref{sec:mtime} and~\ref{constDisc}),
go a step further in characterizing scale-free networks in the context
of the interaction of human individuals.
Furthermore, the importance of symmetry-related metrics,
which surpassed that of the clustering coefficient
with respect to the dispersion of the system in the space of topological measures,
might add to the current understanding of key differences between digraphs and
undirected graphs in complex networks.
Also noteworthy is the very stable fraction of participants in each Erd\"os sector once the network reaches more than 200 participants.
Benchmarks were derived from email list networks
and the supplied analysis of
networks from Facebook,
Twitter and ParticipaBR in the Supporting Information might ease hypothesizing
about the generality of these characteristics.
Further work should expand the analysis to include
more types of networks and more metrics.
The data and software needed to attain these results
should also receive dedicated and in-depth
documentation as they enable a greater level of transparency
and work sharing,
which is adequate for both benchmarking
and specifically for the study of systems constituted
by human individuals (see Section~\ref{sec:data}).
The derived typology of hub, intermediary and peripheral participants
has been applied for semantic web and participatory democracy efforts,
and these developments might be enhanced to yield scientific knowledge~\cite{opa}.
Also, we plan to further explore and publish the audiovisualizations
used for this research~\cite{versinus,animacoes} and
the linguistic differences found in each of the Erd\"os sectors~\cite{rcText}.
% trabalhos de visualização (versinus), diferenciação de texto
% aplicação da tipologia para democracia participativa e dados ligados
% necessidade de publicar os dados em formatos ligados para comparacao da estrutura
% Future work on including more measures, other networks, sharing
% data in RDF for benchmarking, exploring larger timescales
% and of the data and OWL ontologies
%
%
%A systematic study of the activity of participants belonging to the three
%distinct Erd\"os sectors indicated simple patterns for hubs and peripheral vertices,
%while the network structure was governed by the intermediary vertices.
%These properties were shared by all email lists and were time-independent,
%which is consistent with the literature.
%We may therefore consider the Erd\"os sectors as leading to a human typology which bridges exact sciences, with quantitative procedures for the classification, and human sciences, where there is a legacy in the observation of human types.
\subsection{Acknowledgments}
Financial support was obtained from CNPq (140860/2013-4,
project 870336/1997-5), United Nations Development Program (contract: 2013/000566; project BRA/12/018) and FAPESP.
The authors are grateful to the American Jewish Committee for maintaining an online copy of the Adorno book used on the epigraph~\cite{adorno}, to Gmane creators and maintainers for the public email list data, to the communities of the email lists and other groups used in the analysis, and to the Presidency of the Brazilian Republic for keeping ParticipaBR code and data open.
We are also grateful to developers and users of Python scientific tools, to Leonardo Paulo Maia (IFSC/USP) and to Francisco J. P. Lopes (UFRJ) for valuable insights.
\section*{References}
\bibliography{paper}
\end{document}
| {
"alphanum_fraction": 0.7760958883,
"avg_line_length": 74.245412844,
"ext": "tex",
"hexsha": "054286d3db6948799eb8100e0ee204e4b7881741",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "051dbbdee4d4b3ec033af18621be3c6fb5a8ca83",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "ttm/articleStabilityInteractionNetworks",
"max_forks_repo_path": "physicaa/elsarticle-template.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "051dbbdee4d4b3ec033af18621be3c6fb5a8ca83",
"max_issues_repo_issues_event_max_datetime": "2015-04-07T23:23:17.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-04-07T19:03:07.000Z",
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "ttm/articleStabilityInteractionNetworks",
"max_issues_repo_path": "physicaa/elsarticle-template.tex",
"max_line_length": 1061,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "051dbbdee4d4b3ec033af18621be3c6fb5a8ca83",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "ttm/articleStabilityInteractionNetworks",
"max_stars_repo_path": "physicaa/elsarticle-template.tex",
"max_stars_repo_stars_event_max_datetime": "2015-10-27T17:04:30.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-10-27T17:04:30.000Z",
"num_tokens": 15795,
"size": 64742
} |
\documentclass{report}
\title{Project 8}
\author{\textbf{Jinhao Wei}}
\date{\textbf{21 March 2019}}
\usepackage{634format}
\usepackage{enumerate}
\usepackage{listings}
\usepackage{textcomp}
\usepackage{amsmath}
\usepackage{hyperref}
\usepackage{holtex}
\usepackage{holtexbasic}
\input{commands}
\begin{document}
\input{../HOL/HOLReports/HOLsolutionsOne}
\input{../HOL/HOLReports/HOLconopsZeroSolution}
%% --------------------------------------------------- the listings
%% parameter "language" is set to "ML"
%% ---------------------------------------------------
\lstset{language=ML,breaklines}
\maketitle{}
\begin{abstract}
This report summarizes my work on Project 8, which includes datatype definitions and both forward and goal-oriented proofs. It provides my solutions to Exercises \emph{13.10.1}, \emph{13.10.2} and \emph{14.4.1}. In addition, I pretty-printed the corresponding datatypes and proofs and placed the resulting reports in \emph{../HOL/HOLReports/solutions1Report.pdf} and \emph{../HOL/HOLReports/conops0SolutionReport.pdf}.
\end{abstract}
\begin{acknowledgments}
This project follows the format and structure of the \emph{sampleTheory} provided by Professor Shiu-Kai Chin. More precisely, it follows the format of one of my previous projects, Project 5, which itself followed the structure of Professor Shiu-Kai Chin's \emph{sampleTheory} project.
Besides, this report relies partly on \emph{example1Theory}, without which we would not have been able to create \emph{solutions1Theory}.
I also need to thank my friend Bowei Qu, who pointed out my mistakes in time. I thought the ``Form'' type was not important, but he told me it was essential. Without his help, this submission might have been even later.
\end{acknowledgments}
\tableofcontents{}
\chapter{Executive Summary}
\label{cha:executive-summary}
\textbf{This submission is two days late}.\\
\textbf{All requirements for this project are satisfied}. In
particular, we defined all the datatypes and proved all the theorems in this project, pretty-printed the HOL theories,
and made use of the \emph{EmitTeX} structure to typeset HOL theorems
in this report.
The following datatypes are defined in this report
\begin{quote}
\HOLconopsZeroSolutionDatatypes
\end{quote}
and the following theorems are proved
\begin{quote}
\HOLsolutionsOneTheorems
\HOLconopsZeroSolutionTheorems
%% \HOLconopsZeroSolutionTheoremsApRuleStandDownXXthm
%% \HOLconopsZeroSolutionTheoremsApRuleActiveXXthm
%% \HOLconopsZeroSolutionTheoremsOpRuleAbortXXthm
%% \HOLconopsZeroSolutionTheoremsOpRuleLaunchXXthm
\end{quote}
\begin{description}
\item [Reproducibility in ML and \LaTeX]\ \\
All ML and \LaTeX{} source files compile well on the environment provided by this course.
\end{description}
\chapter{Exercise 13.10.1}
\label{cha:e10131}
\section{Problem Statement}
\label{sec:e10131ps}
In this exercise, we imported the previously defined theory \emph{example1Theory} and used the datatypes in it. We will prove the following theorem \HOLsolutionsOneTheoremsaclExerciseTheoremOne
with three different methods.
Before going through the following sections, please note that in the source file provided in \emph{../HOL/solutions1Script.sml}, at the very beginning I opened the following structures:
\begin{lstlisting}[frame = trBL]
open HolKernel boolLib Parse bossLib;
open acl_infRules aclrulesTheory aclDrulesTheory ;
open example1Theory;
\end{lstlisting}
\section{Forward Proofs}
In this section, we will give a forward proof of the theorem \HOLsolutionsOneTheoremsaclExerciseTheoremOne
\subsection{Relevant Code}
\label{subsec:e1161st}
\begin{lstlisting}[frame = trBL]
val aclExerciseTheorem1 =
let
val th1 = ACL_ASSUM``((Name Alice) says (prop go)):(commands, staff, 'd, 'e)Form``
val th2 = ACL_ASSUM``((Name Bob) says (prop go)):(commands, staff, 'd, 'e)Form``
val th3 = ACL_CONJ th1 th2
val th4 = AND_SAYS_RL th3
val th5 = DISCH(hd(hyp th2)) th4
in
DISCH (hd(hyp th1)) th5
end;
\end{lstlisting}
\subsection{Session Transcript}
If we send the above code to HOL, we will see the transcript as below:
\begin{session}
\begin{scriptsize}
\begin{verbatim}
> # # # # # # # # # val aclExerciseTheorem1 =
|- ((M :(commands, 'b, staff, 'd, 'e) Kripke),(Oi :'d po),
(Os :'e po)) sat
Name Alice says (prop go :(commands, staff, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name Bob says (prop go :(commands, staff, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name Alice meet Name Bob says
(prop go :(commands, staff, 'd, 'e) Form):
thm
>
*** Emacs/HOL command completed ***
\end{verbatim}
\end{scriptsize}
\end{session}
\section{Goal-oriented Proof with PROVE_TAC}
In this section, we will give a goal-oriented proof on theorem \HOLsolutionsOneTheoremsaclExerciseTheoremOne
\subsection{Relevant Codes}
\label{subsec:e1161st}
\begin{lstlisting}[frame = trBL]
val aclExerciseTheorem1A =
TAC_PROOF(
([],
``((M :(commands, 'b, staff, 'd, 'e) Kripke),(Oi :'d po),(Os :'e po)) sat
((Name Alice) says (prop go)) ==>
(M,Oi,Os) sat ((Name Bob) says (prop go)) ==>
(M,Oi,Os) sat (((Name Alice) meet (Name Bob)) says (prop go))``
),
PROVE_TAC[Conjunction, And_Says_Eq]
);
\end{lstlisting}
\subsection{Session Transcript}
If we send the above code to HOL, we will see the transcript as below:
\begin{session}
\begin{scriptsize}
\begin{verbatim}
> # # # # # # # # # Meson search level: .....
val aclExerciseTheorem1A =
|- ((M :(commands, 'b, staff, 'd, 'e) Kripke),(Oi :'d po),
(Os :'e po)) sat
Name Alice says (prop go :(commands, staff, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name Bob says (prop go :(commands, staff, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name Alice meet Name Bob says
(prop go :(commands, staff, 'd, 'e) Form):
thm
>
\end{verbatim}
\end{scriptsize}
\end{session}
\section{Goal-oriented Proof without PROVE_TAC}
\subsection{Relevant Code}
\begin{lstlisting}[frame=trBL]
val aclExerciseTheorem1B =
TAC_PROOF(
([],
``((M :(commands, 'b, staff, 'd, 'e) Kripke),(Oi :'d po),(Os :'e po)) sat
((Name Alice) says (prop go)) ==>
(M,Oi,Os) sat ((Name Bob) says (prop go)) ==>
(M,Oi,Os) sat (((Name Alice) meet (Name Bob)) says (prop go))``
),
REPEAT STRIP_TAC THEN
ACL_AND_SAYS_RL_TAC THEN
ACL_CONJ_TAC THEN
REWRITE_TAC [] THEN
ASM_REWRITE_TAC []
);
\end{lstlisting}
\subsection{Session Transcript}
\label{sec:session-transcript}
\setcounter{sessioncount}{0}
\begin{session}
\begin{scriptsize}
\begin{verbatim}
> # # # # # # # # # # # # # # val aclExerciseTheorem1B =
|- ((M :(commands, 'b, staff, 'd, 'e) Kripke),(Oi :'d po),
(Os :'e po)) sat
Name Alice says (prop go :(commands, staff, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name Bob says (prop go :(commands, staff, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name Alice meet Name Bob says
(prop go :(commands, staff, 'd, 'e) Form):
thm
>
*** Emacs/HOL command completed ***
\end{verbatim}
\end{scriptsize}
\end{session}
\chapter{Exercise 13.10.2}
\section{Problem Statement}
In this exercise, we will prove the theorem
\begin{quote}
\HOLsolutionsOneTheoremsaclExerciseTheoremTwo
\end{quote}
Just as in the previous chapter, before we go through the following sections, please kindly note that in the source file I provided in \emph{../HOL/solutions1Script.sml}, at the very beginning I imported the following packages:
\begin{lstlisting}[frame=trBL]
open HolKernel Parse boolLib bossLib;
open TypeBase boolTheory arithmeticTheory
\end{lstlisting}
\section{Forward Proof}
\subsection{Relevant Code}
\begin{lstlisting}[frame=trBL]
val aclExerciseTheorem2 =
let
val th1 = ACL_ASSUM``((Name Alice) says (prop go)):(commands, staff, 'd, 'e)Form``
val th2 = ACL_ASSUM``((Name Alice) controls (prop go)):(commands, staff, 'd, 'e)Form``
val th3 = ACL_ASSUM``((prop go) impf (prop val))``
val th4 = CONTROLS th2 th1
val th5 = SAYS ``(Name Bob)`` th4
val th6 = DISCH (hd(hyp th3)) th5
val th7 = DISCH (hd(hyp th2)) th6
in
DISCH (hd(hyp th1)) th7
end;
\end{lstlisting}
\subsection{Session Transcript}
\label{sec:session-transcript}
\setcounter{sessioncount}{0}
\begin{session}
\begin{scriptsize}
\begin{verbatim}
> # # # # # # # # # # # <<HOL message: inventing new type variable names: 'a, 'b, 'c>>
val aclExerciseTheorem2 =
|- ((M :(commands, 'b, staff, 'd, 'e) Kripke),(Oi :'d po),
(Os :'e po)) sat
Name Alice says (prop go :(commands, staff, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name Alice controls (prop go :(commands, staff, 'd, 'e) Form) ==>
((M :(commands, 'b, 'a, 'b, 'c) Kripke),(Oi :'b po),(Os :'c po)) sat
(prop go :(commands, 'a, 'b, 'c) Form) impf
(prop (val :commands) :(commands, 'a, 'b, 'c) Form) ==>
(M,Oi,Os) sat Name Bob says (prop go :(commands, staff, 'd, 'e) Form):
thm
>
*** Emacs/HOL command completed ***
\end{verbatim}
\end{scriptsize}
\end{session}
\section{Goal-oriented Proof with PROVE_TAC}
\subsection{Relevant Code}
\begin{lstlisting}[frame=trBL]
val aclExerciseTheorem2A =
TAC_PROOF(
([],
``((M:(commands, 'b, staff, 'd, 'e) Kripke),(Oi: 'd po),(Os: 'e po)) sat
(Name Alice says (prop go)) ==>
(M, Oi, Os) sat ((Name Alice) controls (prop go)) ==>
(M, Oi, Os) sat ((prop go) impf (prop launch)) ==>
(M, Oi, Os) sat ((Name Bob) says (prop launch))``
),
PROVE_TAC[Modus_Ponens, Controls, Says]);
\end{lstlisting}
\subsection{Session Transcript}
\label{sec:session-transcript}
\setcounter{sessioncount}{0}
\begin{session}
\begin{scriptsize}
\begin{verbatim}
> # # # # # # # # # # Meson search level: .......
val aclExerciseTheorem2A =
|- ((M :(commands, 'b, staff, 'd, 'e) Kripke),(Oi :'d po),
(Os :'e po)) sat
Name Alice says (prop go :(commands, staff, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name Alice controls (prop go :(commands, staff, 'd, 'e) Form) ==>
(M,Oi,Os) sat
(prop go :(commands, staff, 'd, 'e) Form) impf
(prop launch :(commands, staff, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name Bob says (prop launch :(commands, staff, 'd, 'e) Form):
thm
>
*** Emacs/HOL command completed ***
\end{verbatim}
\end{scriptsize}
\end{session}
\section{Goal-oriented Proof without PROVE_TAC}
\subsection{Relevant Code}
\begin{lstlisting}[frame=trBL]
val aclExerciseTheorem2B =
TAC_PROOF(([],
``((M:(commands, 'b, staff, 'd, 'e) Kripke),(Oi: 'd po),(Os: 'e po)) sat
((Name Alice) says (prop go)) ==>
(M, Oi, Os) sat ((Name Alice) controls (prop go)) ==>
(M, Oi, Os) sat ((prop go) impf (prop launch)) ==>
(M, Oi, Os) sat ((Name Bob) says (prop launch))``
),
REPEAT STRIP_TAC THEN
ACL_SAYS_TAC THEN
PAT_ASSUM ``(M, Oi, Os) sat (Name Alice controls prop go)``
(fn th1 =>(PAT_ASSUM ``(M,Oi, Os) sat (Name Alice says prop go)``
(fn th2 => ASSUME_TAC (CONTROLS th1 th2)))) THEN
PAT_ASSUM ``(M, Oi, Os) sat ((prop go) impf (prop launch))``
(fn th1 =>(PAT_ASSUM ``(M,Oi,Os) sat (Name Alice controls prop go)``
(fn th2 =>(PAT_ASSUM ``(M,Oi,Os) sat (Name Alice says prop go)``
(fn th3 => ASSUME_TAC (ACL_MP (CONTROLS th2 th3) th1)))))) THEN
PROVE_TAC []
);
\end{lstlisting}
\subsection{Session Transcript}
\label{sec:session-transcript}
\setcounter{sessioncount}{0}
\begin{session}
\begin{scriptsize}
\begin{verbatim}
<<HOL message: inventing new type variable names: 'a, 'b, 'c>>
<<HOL message: inventing new type variable names: 'a, 'b, 'c, 'd>>
Meson search level: ..
val aclExerciseTheorem2B =
|- ((M :(commands, 'b, staff, 'd, 'e) Kripke),(Oi :'d po),
(Os :'e po)) sat
Name Alice says (prop go :(commands, staff, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name Alice controls (prop go :(commands, staff, 'd, 'e) Form) ==>
(M,Oi,Os) sat
(prop go :(commands, staff, 'd, 'e) Form) impf
(prop launch :(commands, staff, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name Bob says (prop launch :(commands, staff, 'd, 'e) Form):
thm
val it = (): unit
>
*** Emacs/HOL command completed ***
\end{verbatim}
\end{scriptsize}
\end{session}
\chapter{Exercise 14.4.1}
\section{Problem Statement}
In this exercise, we will give the definitions of our own datatypes, and then we will use those datatypes to prove four theorems, which are essentially the same.
Please kindly note that in the corresponding source file \emph{../HOL/conops0SolutionScript.sml}, I imported the following packages
\begin{lstlisting}[frame=trBL]
open HolKernel Parse boolLib bossLib;
open acl_infRules aclrulesTheory aclDrulesTheory ;
\end{lstlisting}
to guarantee that all the following procedures work as expected.
\section{Definitions on Datatypes}
We need to define several datatypes before we dive into the proofs; they are respectively:
\begin{lstlisting}[frame=trBL]
val _ = Datatype `commands = go | nogo | launch|abort|activate|stand_down`;
val _ = Datatype `people = Alice | Bob`;
val _ = Datatype `roles = Commander | Operator | CA`;
val _ = Datatype `keyPrinc = Staff conops0Solution$people | Role conops0Solution$roles | Ap num`;
val _ = Datatype `principals = PR keyPrinc | Key keyPrinc`;
\end{lstlisting}
\section {Proof on OpRuleLaunch}
\subsection{Relevant Code}
\begin{lstlisting}[frame=trBL]
val OpRuleLaunch =
let
val th1 = ACL_ASSUM ``(Name (PR (Role Commander)) controls (prop go)): (commands, principals, 'd, 'e)Form``
val th2 = ACL_ASSUM ``(reps (Name (PR (Staff Alice))) (Name (PR (Role Commander))) (prop go)): (commands, principals, 'd, 'e)Form``
val th3 = ACL_ASSUM ``(Name (Key (Staff Alice)) quoting Name (PR (Role Commander)) says prop go): (commands, principals, 'd, 'e)Form``
val th4 = ACL_ASSUM ``(prop go impf prop launch):(commands, principals, 'd, 'e)Form``
val th5 = ACL_ASSUM ``(Name (Key (Role CA)) speaks_for Name (PR (Role CA))): (commands,principals, 'd, 'e)Form``
val th6 = ACL_ASSUM ``(Name (Key (Role CA)) says Name (Key (Staff Alice)) speaks_for Name (PR (Staff Alice))): (commands, principals, 'd, 'e)Form``
val th7 = ACL_ASSUM ``(Name (PR (Role CA)) controls Name (Key (Staff Alice)) speaks_for Name (PR (Staff Alice))): (commands, principals, 'd, 'e)Form``
val th9 = SPEAKS_FOR th5 th6;
val th10 = CONTROLS th7 th9;
val th11 = QUOTING_LR th3;
val th12 = SPEAKS_FOR th10 th11
val th13 = QUOTING_RL th12
val th14 = REPS th2 th13 th1;
val th15 = ACL_MP th14 th4;
val th16 = SAYS ``Name(Key (Staff Bob)) quoting Name (PR (Role Operator))`` th15
val th17 = DISCH(hd (hyp th7)) th16
val th18 = DISCH(hd (hyp th6)) th17
val th19 = DISCH(hd (hyp th5)) th18
val th20 = DISCH(hd (hyp th4)) th19
val th21 = DISCH(hd (hyp th3)) th20
val th22 = DISCH(hd (hyp th2)) th21
in
DISCH (hd (hyp th1)) th22
end;
\end{lstlisting}
\subsection{Session Transcript}
\label{sec:session-transcript}
\setcounter{sessioncount}{0}
\begin{session}
\begin{scriptsize}
\begin{verbatim}
# # # # # # # # # # # # # # # # # # # # # # # # # # # val OpRuleLaunch =
|- ((M :(commands, 'b, principals, 'd, 'e) Kripke),(Oi :'d po),
(Os :'e po)) sat
Name (PR (Role Commander)) controls
(prop go :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
reps (Name (PR (Staff Alice))) (Name (PR (Role Commander)))
(prop go :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name (Key (Staff Alice)) quoting Name (PR (Role Commander)) says
(prop go :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
(prop go :(commands, principals, 'd, 'e) Form) impf
(prop launch :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
((Name (Key (Role CA)) speaks_for Name (PR (Role CA)))
:(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name (Key (Role CA)) says
((Name (Key (Staff Alice)) speaks_for Name (PR (Staff Alice)))
:(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name (PR (Role CA)) controls
((Name (Key (Staff Alice)) speaks_for Name (PR (Staff Alice)))
:(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name (Key (Staff Bob)) quoting Name (PR (Role Operator)) says
(prop launch :(commands, principals, 'd, 'e) Form):
thm
>
\end{verbatim}
\end{scriptsize}
\end{session}
\section {Proof on ApRuleActive}
\subsection{Relevant Code}
\begin{lstlisting}[frame=trBL]
val ApRuleActive =
let
val th1 = ACL_ASSUM ``(Name (PR (Role Operator)) controls (prop launch)): (commands, principals, 'd, 'e)Form``
val th2 = ACL_ASSUM ``(reps (Name (PR (Staff Bob))) (Name (PR (Role Operator))) (prop launch)): (commands, principals, 'd, 'e)Form``
val th3 = ACL_ASSUM ``(Name (Key (Staff Bob)) quoting Name (PR (Role Operator)) says prop launch): (commands, principals, 'd, 'e)Form``
val th4 = ACL_ASSUM ``(prop launch impf prop activate):(commands, principals, 'd, 'e)Form``
val th5 = ACL_ASSUM ``(Name (Key (Role CA)) speaks_for Name (PR (Role CA))): (commands,principals, 'd, 'e)Form``
val th6 = ACL_ASSUM ``(Name (Key (Role CA)) says Name (Key (Staff Bob)) speaks_for Name (PR (Staff Bob))): (commands, principals, 'd, 'e)Form``
val th7 = ACL_ASSUM ``(Name (PR (Role CA)) controls Name (Key (Staff Bob)) speaks_for Name (PR (Staff Bob))): (commands, principals, 'd, 'e)Form``
val th9 = SPEAKS_FOR th5 th6;
val th10 = CONTROLS th7 th9;
val th11 = QUOTING_LR th3;
val th12 = SPEAKS_FOR th10 th11
val th13 = QUOTING_RL th12
val th14 = REPS th2 th13 th1;
val th15 = ACL_MP th14 th4;
(*val th16 = SAYS ``Name(Key (Staff Bob)) quoting Name (PR (Role Operator))`` th15*)
val th16 = DISCH(hd (hyp th7)) th15
val th17 = DISCH(hd (hyp th6)) th16
val th18 = DISCH(hd (hyp th5)) th17
val th19 = DISCH(hd (hyp th4)) th18
val th20 = DISCH(hd (hyp th3)) th19
val th21 = DISCH(hd (hyp th2)) th20
in
DISCH (hd (hyp th1)) th21
end;
\end{lstlisting}
\subsection{Session Transcript}
\label{sec:session-transcript}
\setcounter{sessioncount}{0}
\begin{session}
\begin{scriptsize}
\begin{verbatim}
# # # # # # # # # # # # # # # # # # # # # # # # # # # val ApRuleActive =
|- ((M :(commands, 'b, principals, 'd, 'e) Kripke),(Oi :'d po),
(Os :'e po)) sat
Name (PR (Role Operator)) controls
(prop launch :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
reps (Name (PR (Staff Bob))) (Name (PR (Role Operator)))
(prop launch :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name (Key (Staff Bob)) quoting Name (PR (Role Operator)) says
(prop launch :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
(prop launch :(commands, principals, 'd, 'e) Form) impf
(prop activate :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
((Name (Key (Role CA)) speaks_for Name (PR (Role CA)))
:(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name (Key (Role CA)) says
((Name (Key (Staff Bob)) speaks_for Name (PR (Staff Bob)))
:(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name (PR (Role CA)) controls
((Name (Key (Staff Bob)) speaks_for Name (PR (Staff Bob)))
:(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat (prop activate :(commands, principals, 'd, 'e) Form):
thm
\end{verbatim}
\end{scriptsize}
\end{session}
\section {Proof on OpRuleAbort}
\subsection{Relevant Code}
\begin{lstlisting}[frame=trBL]
val OpRuleAbort =
let
val th1 = ACL_ASSUM ``(Name (PR (Role Commander)) controls (prop nogo)): (commands, principals, 'd, 'e)Form``
val th2 = ACL_ASSUM ``(reps (Name (PR (Staff Alice))) (Name (PR (Role Commander))) (prop nogo)): (commands, principals, 'd, 'e)Form``
val th3 = ACL_ASSUM ``(Name (Key (Staff Alice)) quoting Name (PR (Role Commander)) says prop nogo): (commands, principals, 'd, 'e)Form``
val th4 = ACL_ASSUM ``(prop nogo impf prop abort):(commands, principals, 'd, 'e)Form``
val th5 = ACL_ASSUM ``(Name (Key (Role CA)) speaks_for Name (PR (Role CA))): (commands,principals, 'd, 'e)Form``
val th6 = ACL_ASSUM ``(Name (Key (Role CA)) says Name (Key (Staff Alice)) speaks_for Name (PR (Staff Alice))): (commands, principals, 'd, 'e)Form``
val th7 = ACL_ASSUM ``(Name (PR (Role CA)) controls Name (Key (Staff Alice)) speaks_for Name (PR (Staff Alice))): (commands, principals, 'd, 'e)Form``
val th9 = SPEAKS_FOR th5 th6;
val th10 = CONTROLS th7 th9;
val th11 = QUOTING_LR th3;
val th12 = SPEAKS_FOR th10 th11
val th13 = QUOTING_RL th12
val th14 = REPS th2 th13 th1;
val th15 = ACL_MP th14 th4;
val th16 = SAYS ``Name(Key (Staff Bob)) quoting Name (PR (Role Operator))`` th15
val th17 = DISCH(hd (hyp th7)) th16
val th18 = DISCH(hd (hyp th6)) th17
val th19 = DISCH(hd (hyp th5)) th18
val th20 = DISCH(hd (hyp th4)) th19
val th21 = DISCH(hd (hyp th3)) th20
val th22 = DISCH(hd (hyp th2)) th21
in
DISCH (hd (hyp th1)) th22
end;
\end{lstlisting}
\subsection{Session Transcript}
\label{sec:session-transcript}
\setcounter{sessioncount}{0}
\begin{session}
\begin{scriptsize}
\begin{verbatim}
> val OpRuleAbort =
|- ((M :(commands, 'b, principals, 'd, 'e) Kripke),(Oi :'d po),
(Os :'e po)) sat
Name (PR (Role Commander)) controls
(prop nogo :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
reps (Name (PR (Staff Alice))) (Name (PR (Role Commander)))
(prop nogo :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name (Key (Staff Alice)) quoting Name (PR (Role Commander)) says
(prop nogo :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
(prop nogo :(commands, principals, 'd, 'e) Form) impf
(prop abort :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
((Name (Key (Role CA)) speaks_for Name (PR (Role CA)))
:(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name (Key (Role CA)) says
((Name (Key (Staff Alice)) speaks_for Name (PR (Staff Alice)))
:(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name (PR (Role CA)) controls
((Name (Key (Staff Alice)) speaks_for Name (PR (Staff Alice)))
:(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name (Key (Staff Bob)) quoting Name (PR (Role Operator)) says
(prop abort :(commands, principals, 'd, 'e) Form):
thm
val it = (): unit
>
*** Emacs/HOL command completed ***
\end{verbatim}
\end{scriptsize}
\end{session}
\section {Proof on ApRuleStandDown}
\subsection{Relevant Code}
\begin{lstlisting}[frame=trBL]
val ApRuleStandDown =
let
val th1 = ACL_ASSUM ``(Name (PR (Role Operator)) controls (prop abort)): (commands, principals, 'd, 'e)Form``
val th2 = ACL_ASSUM ``(reps (Name (PR (Staff Bob))) (Name (PR (Role Operator))) (prop abort)): (commands, principals, 'd, 'e)Form``
val th3 = ACL_ASSUM ``(Name (Key (Staff Bob)) quoting Name (PR (Role Operator)) says prop abort): (commands, principals, 'd, 'e)Form``
val th4 = ACL_ASSUM ``(prop abort impf prop stand_down):(commands, principals, 'd, 'e)Form``
val th5 = ACL_ASSUM ``(Name (Key (Role CA)) speaks_for Name (PR (Role CA))): (commands,principals, 'd, 'e)Form``
val th6 = ACL_ASSUM ``(Name (Key (Role CA)) says Name (Key (Staff Bob)) speaks_for Name (PR (Staff Bob))): (commands, principals, 'd, 'e)Form``
val th7 = ACL_ASSUM ``(Name (PR (Role CA)) controls Name (Key (Staff Bob)) speaks_for Name (PR (Staff Bob))): (commands, principals, 'd, 'e)Form``
val th9 = SPEAKS_FOR th5 th6;
val th10 = CONTROLS th7 th9;
val th11 = QUOTING_LR th3;
val th12 = SPEAKS_FOR th10 th11
val th13 = QUOTING_RL th12
val th14 = REPS th2 th13 th1;
val th15 = ACL_MP th14 th4;
(*val th16 = SAYS ``Name(Key (Staff Bob)) quoting Name (PR (Role Operator))`` th15*)
val th16 = DISCH(hd (hyp th7)) th15
val th17 = DISCH(hd (hyp th6)) th16
val th18 = DISCH(hd (hyp th5)) th17
val th19 = DISCH(hd (hyp th4)) th18
val th20 = DISCH(hd (hyp th3)) th19
val th21 = DISCH(hd (hyp th2)) th20
in
DISCH (hd (hyp th1)) th21
end;
\end{lstlisting}
\subsection{Session Transcript}
\label{sec:session-transcript}
\setcounter{sessioncount}{0}
\begin{session}
\begin{scriptsize}
\begin{verbatim}
> val ApRuleStandDown =
|- ((M :(commands, 'b, principals, 'd, 'e) Kripke),(Oi :'d po),
(Os :'e po)) sat
Name (PR (Role Operator)) controls
(prop abort :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
reps (Name (PR (Staff Bob))) (Name (PR (Role Operator)))
(prop abort :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name (Key (Staff Bob)) quoting Name (PR (Role Operator)) says
(prop abort :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
(prop abort :(commands, principals, 'd, 'e) Form) impf
(prop stand_down :(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
((Name (Key (Role CA)) speaks_for Name (PR (Role CA)))
:(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name (Key (Role CA)) says
((Name (Key (Staff Bob)) speaks_for Name (PR (Staff Bob)))
:(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat
Name (PR (Role CA)) controls
((Name (Key (Staff Bob)) speaks_for Name (PR (Staff Bob)))
:(commands, principals, 'd, 'e) Form) ==>
(M,Oi,Os) sat (prop stand_down :(commands, principals, 'd, 'e) Form):
thm
val it = (): unit
>
*** Emacs/HOL command completed ***
\end{verbatim}
\end{scriptsize}
\end{session}
%% ------------------------------------------
%% Change to letters for appendix
%% ------------------------------------------
%% ------------------------------------------
%% this restarts the section numbering
%% ------------------------------------------
\appendix{}
%% ------------------------------------------
% label using capital letters
%% ------------------------------------------
\renewcommand{\thechapter}{\Alph{chapter}}
\chapter{Source Code for example1Script.sml}
\label{cha:source-code-sample}
The following code is from \emph{example1Script.sml}, which is located
in directory "../HOL/"
\lstinputlisting{../HOL/example1Script.sml}
\chapter{Source Code for solutions1Script.sml}
\label{cha:source-code-solutions1}
The following code is from \emph{solutions1Script.sml}, which is located
in directory "../HOL/"
\lstinputlisting{../HOL/solutions1Script.sml}
\chapter{Source Code for conops0SolutionScript.sml}
\label{cha:source-code-conops0}
The following code is from \emph{conops0SolutionScript.sml}, which is located
in directory "../HOL/"
\lstinputlisting{../HOL/conops0SolutionScript.sml}
\end{document}
| {
"alphanum_fraction": 0.6556409866,
"avg_line_length": 33.8993548387,
"ext": "tex",
"hexsha": "0d16bf87e37515b235f6fd07888359663ca27d23",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4c6d063b40719371e20fac49671a5af177c3750d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jwei15/CIS-634",
"max_forks_repo_path": "HW8/LaTeX/HW8.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4c6d063b40719371e20fac49671a5af177c3750d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jwei15/CIS-634",
"max_issues_repo_path": "HW8/LaTeX/HW8.tex",
"max_line_length": 416,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4c6d063b40719371e20fac49671a5af177c3750d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jwei15/CIS-634",
"max_stars_repo_path": "HW8/LaTeX/HW8.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 8612,
"size": 26272
} |
%to force start on odd page
\newpage
\thispagestyle{empty}
\mbox{}
\section{Functional Analysis}
\lettrine[lines=4]{\color{BrickRed}F}unctional analysis is the branch of mathematics, and more specifically of analysis, devoted to the study of function spaces. Its historical roots lie in the study of transformations such as the Fourier transform and in the study of differential and integral equations. As such it encompasses so many areas that it is difficult to justify treating it as a mere section of this book, since it is really a field of study in its own right. Moreover, it is because of this difficulty in accurately delimiting the area it covers that the reader will find the Fundamental Theorem of Calculus in the section on Integral and Differential Calculus rather than here...
Why do we use the term "analysis" in the particular case of functions? The reason lies in the historical study of various natural phenomena and in the resolution of various technical, and therefore mathematical, problems, which often lead us to consider the variation of one parameter correlated with the variation of one or several other variables. To study these variations, many tools are available to each of us:
\begin{itemize}
\item The engineer, for example, frequently uses charts (in cartesian, polar or logarithmic coordinate systems... concepts which are discussed further in more detail) to determine the mathematical relations (or "laws") linking variables together. Certainly, this kind of method is (sometimes...) aesthetic, but students know well how painful it can sometimes be to transcribe measurement points on a sheet of paper or on a computer, and consultants know how dangerous a chart can be when it is not built in a scientific way. This is unfortunately a necessary step (though one should avoid an abusive usage of it) in understanding how our predecessors worked and obtained the results that help us today in our advances in theoretical physics.
\item The mathematician and the theoretical physicist usually hate to use paper-pencil-scrawl methods. Nevertheless, the role of the mathematician or physicist is to develop new theories from mathematical axioms or principles, which should require no use of graphical representations nor access to the experimental measurements that are often attached to them.
\end{itemize}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Before starting to read what follows, it may be useful to remind the reader that the definition of the concept of "function" (and the basic properties thereto) is given in the section on Set Theory.
\end{tcolorbox}
Functional analysis is also strongly linked to Vector Calculus (and not only...). Thus, for readers who want to deepen their knowledge of the fundamentals of functional analysis, we strongly recommend having a look at the Vector Calculus section.
\pagebreak
\subsection{Representations}
In what follows we will see, firstly, how to represent different related values by tables and charts (yes, we must, because it helps to understand more complicated material!) and, secondly, how to analyze mathematically the properties of these representations using only abstract mathematical tools.
\textbf{Definition (\#\mydef):} A function is named "\NewTerm{univalent function}\index{univalent function}" or "\NewTerm{unary function}\index{unary function}" if the number of its arguments (parameters or variables) is equal to one. In the case of a function of two arguments, we speak of a "\NewTerm{bivalent function}\index{bivalent function}" or "\NewTerm{binary function}\index{binary function}", and so on. Formally, a function is $n$-ary if it takes $n$ arguments; in the expression:
\[
f(a_1,a_2,\ldots,a_n)
\]
we say that the $a_1,a_2,\ldots,a_n$ are the "\NewTerm{arguments}\index{arguments}" of $f$.
\subsubsection{Tabular Representation}
Among the possible visual representations of functions, the most intuitive and the oldest is the one where we list, in a column or a row of a table and in an orderly way, the values of the independent variable $x_1,x_2,...,x_n$ and, in another column or aligned row, the corresponding values, namely the "\NewTerm{transformed variables}\index{transformed variables}" of the function $y_1,y_2,...,y_n$.
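For instance (with arbitrarily chosen values of $x$), a minimal table of this kind for the squaring function $y=x^2$ could be:
\begin{center}
\begin{tabular}{c|ccccc}
$x$ & $0$ & $1$ & $2$ & $3$ & $4$ \\
\hline
$y=x^2$ & $0$ & $1$ & $4$ & $9$ & $16$ \\
\end{tabular}
\end{center}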
Such are, for example, tables of trigonometric functions, logarithmic tables, etc., and, during the experimental study of certain phenomena, tables that express the functional dependence between measured physical quantities, such as readings of the air temperature recorded at a meteorological station during one day.
Of course, this concept can be generalized to any multivalent function regardless its definition domain.
However, this method is laborious and does not allow one to see the behavior of the function directly; it therefore does not lend itself to a simple and attractive visual analysis of its properties. It still has the advantage of not requiring any special tools or advanced mathematics.
\pagebreak
\subsubsection{Graphical Representation}
The natural, relative or purely imaginary numbers (\SeeChapter{see section Numbers}) can all be represented simply by points on an infinite numerical axis (a straight line).
To this purpose, we choose on this axis:
\begin{enumerate}
\item A point O named "\NewTerm{origin}\index{origin}"
\item A positive direction, that we indicate by a horizontal arrow
\item A unit of measure (usually represented by small vertical lines: the "\NewTerm{graduation}\index{graduation}")
\end{enumerate}
Such that:
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/analysis/representative_1d.eps}
\caption{Typical representation example of an oriented infinite axis with origin}
\end{figure}
In most cases we (traditionally) put the axis horizontally and choose the direction from left to right (at least when there is only one axis...).
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The point (letter) $O$, frequently represents the number zero in mathematics but we might very well choose to put the origin elsewhere. For example, in physics, the point $O$ is often positioned at the location of the centroid of a system.
\end{tcolorbox}
It is obvious that the fact that the sets of numbers discussed in the section Numbers are ordered implies that every number is represented by a single point on this axis. Thus, to two distinct real numbers correspond two different points of the axis.
Thus, there is a correspondence between the numbers and the points of the axis (in the case of the real or purely imaginary numbers, a number corresponds not to each graduation, but to each \underline{point} of the axis!). Each number is therefore represented by a unique point or graduation and, conversely, to each point or graduation corresponds a single number, which is its image.
\pagebreak
\paragraph{2D representations}\mbox{}\\\\
Besides the one-dimensional representations, there are others of higher dimension (phew!...), like the "\NewTerm{planar representation}\index{planar representation}", which allow us to draw much more than simple points on a one-dimensional straight line, namely functions of one variable. Let us see what this is and what it looks like!
In single-variable calculus, the functions that one encounters are functions of a variable (usually $x$ or $t$) that varies over some subset of the real number line (which we denote by $\mathbb{R}$). For such a function, say, $y = f(x)$, the graph of the function $f$ consists of the points:
\[
(x,y)=(x,f(x))
\]
where $x$ varies over the domain of $f$.
These points lie in the Euclidean plane, which, in the Cartesian or rectangular
coordinate system, consists of all ordered pairs of real numbers $(a,b)$. We use the word "Euclidean" to denote a system in which all the usual rules of Euclidean geometry hold (\SeeChapter{see section Euclidean Geometry}). We denote the Euclidean plane by $\mathbb{R}^2$, where the exponent "$2$" represents the number of dimensions of the plane.
Thus, to each value of a variable $x$ on a horizontal axis, commonly named the "\NewTerm{$x$-axis\index{$x$-axis}}", we can match a value $y$ through a function $f$ such that:
\[
y=f(x)
\]
plotted on a vertical axis, commonly named the "\NewTerm{$y$-axis}\index{$y$-axis}", which passes through the junction defined by the origin $O$, such that (arbitrary example):
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/analysis/representative_2d_planar.eps}
\caption{Typical example of a planar representation with orthogonal axes, origin $O$ and the 4 quadrants}
\end{figure}
The set of points of the plane (which is denoted, with variants, $X\text{O}Y$, $XY$ or $x\text{O}y$, $\text{O}xy$, $xy$) whose abscissas are traditionally the values of the independent variable and whose ordinates are the corresponding values of the function is named the "\NewTerm{planar graph}\index{planar graph}" of this function. If there is no confusion, we simply say "\NewTerm{graph}\index{graph}".
In the case of a representation by a rectangular coordinate system (cartesian, polar or logarithmic), as in the figure above, we can see that the entire coordinate plane is divided into four areas that by tradition we name "\NewTerm{quadrants}\index{quadrants}".
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
When we wish to highlight a particular point on the graph representing the function, we usually draw a small circle, as presented above for the point of coordinates $(x_n,y_n)$.
\end{tcolorbox}
Another classic case of plane graph representation known by a large number of students is the plot of polynomials (\SeeChapter{see section Calculus}) with real coefficients or trigonometric functions (\SeeChapter{see section Trigonometry}).
Indeed, when solving polynomial equations of the second degree (\SeeChapter{see section Calculus}), it is common in early classes for the teacher to ask students, in addition to giving an algebraic expression of the roots of:
\[
ax^2+bx+c=0
\]
given, for recall, by (see section Calculus for the proof):
\[
x_{1,2}=\frac{-b\pm\sqrt{b^2-4ac}}{2a}
\]
to also provide a graphical resolution, where the two roots (in the case where there are two distinct real roots) are given by the intersections of the parabola with the $x$-axis (of course, if the equation has no real solution, there are no intersections...):
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/analysis/roots_parabola.eps}
\caption{Representation of roots on a planar graph}
\end{figure}
The graphical representation can be generalized to polynomial equations of the 3rd, 4th and 5th degree (we will prove much further on, using Galois theory, that it is not possible to obtain a general algebraic expression for the roots of a polynomial equation of the 5th degree or higher).
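As an aside, such a graphical resolution is easy to reproduce with a computer algebra system; a possible Maple session, in the style of the snippets used later in this section and with arbitrarily chosen coefficients, is:

\texttt{>p:=x\string^ 2-x-2;}\\
\texttt{>solve(p=0,x);}\\
\texttt{>plot(p,x=-3..4);}

The \texttt{solve} command returns the two roots $-1$ and $2$, which are exactly the abscissas where the parabola crosses the $x$-axis.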
There is another well-known and interesting example of a special graph: while most young people think that after high school they will never do maths again, in Switzerland many employees have to calculate in spreadsheet software what is named the "coordinated wage", which is a "\NewTerm{step-wise function}\index{step-wise function}" defined in the year 2013 by the government as:
\[
f(S)=\begin{cases}
\frac{R}{8} & \text{if } \frac{3}{4}R\leq S\leq R\\
S-\frac{7}{8}R & \text{if } R\leq S\leq 3R\\
\frac{17}{8}R & \text{if } S\geq 3R
\end{cases}
\]
where $R$ is a minimal value, also defined by the government, equal to 25,800.- in 2013, and the wage is denoted by the letter $S$ (for \textbf{S}alary).
When we plot such a step-wise function with, for example, Maple 4.00b, we get:
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{img/analysis/step_wise_function.eps}
\caption{Example of step-wise function for swiss coordinate wage with Maple 4.00b}
\end{figure}
And therefore it is obvious, thanks to this chart representation, that the previous definition can be simplified as:
\[
f(S)=\min\left(\max\left(\frac{R}{8},\,S-\frac{7}{8}R\right),\,\frac{17}{8}R\right)
\]
which is much easier to write in any spreadsheet software or with Maple 4.00b:
\texttt{>R:=25800;}\\
\texttt{>plot(min(max(R/8,S-7/8*R),17/8*R),S=3/4*R..100000);}
Also, graphs are powerful qualitative tools in the field of statistics (\SeeChapter{see section Statistics}) as a starting point for data analysis (histograms, pie charts, box plots, radar charts, scatter plots, etc.). The assumptions and ideas that are generated by graphical analysis can be investigated with advanced statistical tools (for a hundred examples see the R Software or MATLAB™ companion book).
Below, for example, is a graph (histogram) taken from the Industrial Engineering section, which is very common in the field of statistics and project management across the industry:
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/analysis/six_sigma.eps}
\caption{Example of typical histogram in engineering companies (Six Sigma)}
\end{figure}
Histograms allow us to observe distributions and to determine qualitatively whether they fit a particular theoretical model.
Graphics can also be used to observe changes over time (time series, control charts, residual analysis, etc.):
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/analysis/time_serie.eps}
\caption{Example of time series with moving averages in financial trading}
\end{figure}
and still many other charts... some of which we have already seen and others that we will see throughout the pages of this book.
\paragraph{3D representations}\mbox{}\\\\
Of course, in the case of a trivalent function (three-dimensional), that is to say a parameter which depends on two others, the idea is the same as for 2D, except that the number of quadrants doubles:
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/analysis/quadrants_3d.eps}
\caption{Quadrants in a 3D orthogonal system (source: Wikipedia)}
\end{figure}
This 3D method of representation and analysis of a trivalent function was time consuming at the beginning of the 20th century, but with the help of computers at the end of the 20th century this problem was largely solved...
In 3D functional analysis, we deal with functions of two or three variables (usually $x, y, z$, respectively). The graph of the points of coordinates $(x, y, z)=(x,y,f(x,y))$ lies in Euclidean space. Since this Euclidean space is 3-dimensional (it can of course have more, or fewer, dimensions!), we denote it by $\mathbb{R}^3$.
Euclidean space has three mutually perpendicular coordinate axes ($x$, $y$ and $z$), and three
mutually perpendicular coordinate planes: the $xy$-plane, $yz$-plane and $xz$-plane:
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/algebra/euclidian_planes.eps}
\caption{Mutually perpendicular planes in $\mathbb{R}^3$}
\end{figure}
The coordinate system shown above is known as a right-handed coordinate system, because it is possible, using the right hand, to point the index finger in the positive direction of the $x$-axis, the middle finger in the positive direction of the $y$-axis, and the thumb in the positive direction of the $z$-axis, as below:
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/algebra/right_hand.eps}
\caption{Right hand system}
\end{figure}
What we are going to represent further below (a specific example), purist mathematicians would write as follows (it is nice to have seen this notation at least once, as you may meet it in other books):
\[
f:\mathbb{R}^2\to\mathbb{R},\qquad (x,y)\mapsto f(x,y)=\frac{12x}{1+x^2+y^2}
\]
and let us see what it gives with Maple 4.00b:
\texttt{>restart:}\\
\texttt{>with(plots):}\\
\texttt{>f:=(x,y)->12*x/(1+x\string^ 2+y\string^ 2);}\\
\texttt{>xrange:=-10..10;yrange:=-5..5;}\\
\texttt{>plot3d(f,xrange,yrange);}
This will give:
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/analysis/representation_grid_function.eps}
\caption{Grid representation of a 3D function with Maple 4.00b}
\end{figure}
Let us improve the visual by adding a shading color interpolation, with warm colors for high positions and cold colors for low positions:
\texttt{>plot3d(f,xrange,yrange, style=patchnogrid, grid=[80,50], shading=ZHUE, axes=FRAME, tickmarks=[3,3,3], labels=[`x`,`y`,`f(x,y)`], labelfont=[TIMES,BOLD,12], title=`Graphique rempli`, titlefont=[TIMES,BOLD,12], scaling=unconstrained, orientation=[-107,68]);}
This will give:
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{img/analysis/representation_shading_interp_function.eps}
\caption{Shading interpolation representation of a 3D function with Maple 4.00b}
\end{figure}
Let us now plot the "\NewTerm{contour lines}", also named "\NewTerm{isolines}", which represent lines of the same height on the function surface\footnote{A contour line is a cross-section of the three-dimensional graph of the function $f(x, y)$ parallel to the $x, y$ plane. In cartography, a contour line (often just named a "contour") joins points of equal elevation (height) above a given level, such as mean sea level. A contour map is a map illustrated with contour lines, for example a topographic map, which thus shows valleys and hills, and the steepness of slopes. The contour interval of a contour map is the difference in elevation between successive contour lines.} (see the section on Differential Geometry for a rigorous definition):
\texttt{>plot3d(f,xrange,yrange,style=patchcontour);}
This will give:
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/analysis/representation_isoline.eps}
\caption{Isolines representation of a 3D function with Maple 4.00b}
\end{figure}
It's not very nice so let us improve this a little bit:
\texttt{plot3d(f,xrange,yrange,style=patchcontour,contours=[seq(-7+k/4,k=0..60)],\\
grid=[80,50],shading=ZHUE,axes=FRAME, tickmarks=[3,3,3],\\ scaling=unconstrained,orientation=[-107,68]);}
This will give:
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/analysis/representation_nice_3d_function.eps}
\caption{Better representation of a 3D function with Maple 4.00b}
\end{figure}
With a small rotation to view from above:
\texttt{>plot3d(f,xrange,yrange, style=patchcontour, contours=[seq(-7+k/4,k=0..60)], grid=[80,50], shading=ZHUE, axes=FRAME, tickmarks=[3,3,3], scaling=unconstrained, orientation=[-90,0]);}
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/analysis/representation_nice_3d_function_above.eps}
\caption{Above representation of a 3D function with Maple 4.00b}
\end{figure}
And in section view (side view):
\texttt{>plot(f(x,2),x=xrange);}
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{img/analysis/representation_nice_3d_function_section.eps}
\caption{Representation of a section of the pseudo-3D surface}
\end{figure}
Or with multiple section views:
\texttt{>display([seq(plot(f(x,y),x=xrange),y=yrange)]);}
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{img/analysis/representation_nice_3d_function_multiple_sections.eps}
\caption{Representation of multiple sections of the pseudo-3D surface}
\end{figure}
The reader can also animate the graph above with the following command:
\texttt{>display([seq(display([plot(f(x,k/5),x=xrange),}\\ \texttt{textplot([6,5,cat('y=',convert(evalf(k/5,2),string))],font=[TIMES,BOLD,16])])}\\
\texttt{,k=-25..25)],insequence=true, title='Animation',titlefont=[TIMES,BOLD,18]);}
That is all for this typical and simple example of the standard graphical manipulations of an engineer hired in a company (in practice one would rather use MATLAB™ instead of Maple, but the reader can refer to the free companion book on MATLAB™, with a few hundred pages of graphics).
\paragraph{2D Vector representations}\mbox{}\\\\
Graphical representations are also frequently used in the context of analytical geometry, to simplify an analysis or to prove theorems with the help of visual representations (do not abuse this method!).
Thus, we can introduce the concept of norm (see section Vector Calculus) in a very easy way, by plotting the distance between two points (in 2D or in 3D) and applying the Pythagorean theorem, which will be assumed known (see section Euclidean Geometry).
The main idea of a planar vector representation in physics and engineering labs is that a point $P_1$ of coordinates $(x_1,y_1)$ having some physical property (typically a velocity) will be, after a given time, at the point $P_2$ of coordinates $(x_2,y_2)$, assumed to lie in the same plane. In this way, the straight line between $P_1$ and $P_2$ is a visualization of the "intensity" of the velocity (and implicitly of the force). Doing this for many points, we get a planar representation of a planar vector field (for more examples see the companion book on MATLAB™):
\begin{figure}[H]
\centering
\includegraphics{img/analysis/vector_field.jpg}
\caption[]{Typical planar vector field with MATLAB™}
\end{figure}
Now let us represent three points $P_1,P_2,P_3$ on a planar graph in which a referential has been defined, as presented below:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/vector_plane.jpg}
\caption[]{Scenario of three points in a plane}
\end{figure}
We can consider the straight line $\overline{P_1P_2}$ as a vector, though not translated to the origin of the referential (\SeeChapter{Vector Calculus}).
If $x_1\neq x_2$ and $y_1\neq y_2$ (as in the figure above), the points $P_1,P_2,P_3$ are the vertices of a right triangle, with the right angle at $P_3$. By applying the Pythagorean theorem (\SeeChapter{see section Euclidean Geometry}) we can easily calculate the metric distance $d$ between $P_1$ and $P_2$ as:
\[
d^2=\overline{P_1P_3}^{\,2}+\overline{P_3P_2}^{\,2}
\]
On the figure, we see that the lengths of the two legs are:
\[
\overline{P_1P_3}=\vert x_2-x_1\vert \qquad\text{and}\qquad \overline{P_3P_2}=\vert y_2-y_1\vert
\]
Since $\forall x \in \mathbb{R} \; \vert x \vert ^2 =x^2$, we can write:
\[
d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}
\]
If $x_1=y_1=0$, we end up with a relation named "\NewTerm{norm}", "\NewTerm{module}" or "\NewTerm{distance}", which we have already defined as part of our study of vector calculus, when the origin of the vector is translated to the origin of the referential (see the section of the corresponding name).
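As a quick numerical check (with two arbitrarily chosen points), take $P_1(1,2)$ and $P_2(4,6)$; we then get:
\[
d=\sqrt{(4-1)^2+(6-2)^2}=\sqrt{9+16}=5
\]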
\begin{theorem}
Obviously, if we consider two points $P_1(x_1,y_1),P_2(x_2,y_2)$, we can determine whether a third point $P_3(x_3,y_3)$ is on the mediator (\SeeChapter{see section Euclidean Geometry}) of the first two; for this it is obviously sufficient that (by definition of the mediator!):
\[
\sqrt{(x_3-x_1)^2+(y_3-y_1)^2}=\sqrt{(x_3-x_2)^2+(y_3-y_2)^2}
\]
\end{theorem}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We hesitated to put this proof in the section of Analytical Geometry, but in the end we decided that it was a nice example of how a visual representation can help readers to better understand some subjects.
\end{tcolorbox}
\begin{dem}
As $(x_1,y_1)$ and $(x_2,y_2)$ are known, we can easily derive an "\NewTerm{analytic expression}" characterizing the mediator: squaring both sides of the previous condition and simplifying, we find that every point $(x,y)$ of the mediator satisfies:
\[
y=ax+b
\]
where $a, b$ are therefore constants, and any point that satisfies this relation, which is in this case the equation of a straight line, lies on the mediator.
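Explicitly, squaring both sides of the condition and simplifying (assuming $y_1\neq y_2$, so that the mediator is not vertical), we get:
\[
2x(x_2-x_1)+2y(y_2-y_1)=x_2^2+y_2^2-x_1^2-y_1^2
\]
that is, $y=ax+b$ with:
\[
a=-\frac{x_2-x_1}{y_2-y_1},\qquad b=\frac{x_2^2+y_2^2-x_1^2-y_1^2}{2(y_2-y_1)}
\]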
\begin{flushright}
$\square$ Q.E.D.
\end{flushright}
\end{dem}
Furthermore, it is easy to see that the midpoint of the line segment $\overline{P_1P_2}$, which lies on the mediator, is given by:
\[
\left(\frac{x_1+x_2}{2},\ \frac{y_1+y_2}{2}\right)
\]
So we see that with a simple visual representation, we can achieve results that are sometimes (...) more obvious for students.
Let us use this example to define some concepts to which we will come back further on, and to make a few reminders.
\textbf{Definition (\#\mydef):} Any function of the form of a polynomial (\SeeChapter{see section Calculus}) of degree $1$ with constant real coefficients:
\[
y=f(x)=ax+b
\]
is the analytic expression of what we name a "\NewTerm{straight line}" of "\NewTerm{slope}" $a$ and "intercept" $b$ (the value of $y$ when $x=0$).
Obviously, if:
\[
a=0
\]
the line is horizontal when we represent it graphically, since $y$ is constant for all $x$ and equal to $b$. Conversely, if:
\[
a\to\infty
\]
the straight line will be vertical in the $x\text{O}y$ referential (strictly speaking, such a vertical line, of equation $x=c^{te}$, is no longer the graph of a function of $x$).
\paragraph{Properties of visual representations}\mbox{}\\\\
Depending on the type of graph we visualize (especially planar graphs), it is possible to extract some basic properties. Let us see the most important ones to know for univariate functions:
\begin{enumerate}
\item[P1.] The graph of a function is "\NewTerm{symmetrical about the $y$-axis}\index{graph symmetric about the $y$-axis}" if changing $x$ to $-x$ in the function does not change the value of $y$, such that:
\[
f(-x)=f(x)
\]
\begin{figure}[H]
\centering
\includegraphics{img/analysis/function_property_symetry_y.jpg}
\caption{Example of symmetry through the $y$-axis of a function}
\end{figure}
\item[P2.] The graph of a function is "\NewTerm{symmetrical about the $x$-axis}\index{graph symmetric about the $x$-axis}" if the change from $y$ to $-y$ does not change the value of $x$ such that:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/function_property_symetry_x.jpg}
\caption{Example of symmetry through the $x$-axis of a function}
\end{figure}
\item[P3.] The graph of a function is "\NewTerm{symmetrical about the origin $\text{O}$}\index{graph symmetrical about the origin}" if the simultaneous change of $y$ to $-y$ and of $x$ to $-x$ leaves the graph unchanged (that is to say, changing the sign of one variable changes the sign of the other):
\[
f(-x)=-f(x)
\]
\begin{figure}[H]
\centering
\includegraphics{img/analysis/function_property_symetry_o.jpg}
\caption{Example of symmetry through the origin $\text{O}$ of a function}
\end{figure}
\item[P4.] Given a function $y=f(x)$, if we add a constant $c^{te} \geq 0$ to this function as:
\[
y=f(x)+c^{te}
\]
then the function $f(x)$ is shifted (or "translated") vertically upwards by a distance $c^{te}$, as presented in the figure below:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/function_property_positive_translated.jpg}
\caption{Example of a positive vertical translation of a function}
\end{figure}
And conversely, if $c^{te} \geq 0$ but:
\[
y=f(x)-c^{te}
\]
then the function $f(x)$ is obviously translated vertically downwards:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/function_property_negative_translated.jpg}
\caption{Example of a negative vertical translation of a function}
\end{figure}
We can also consider horizontal translations of functions. Specifically, still with a constant $c^{te}\geq 0$, the graph of $y=f(x)$ is translated horizontally to the right if we write:
\[
y=f(x-c^{te})
\]
which graphically is represented by:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/function_property_negative_horizontal_translated.jpg}
\caption{Example of negative horizontal translation of a function}
\end{figure}
and conversely, translated horizontally to the left, if we write:
\[
y=f(x+c^{te})
\]
as shown in the graph below:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/function_property_positive_horizontal_translated.jpg}
\caption{Example of positive horizontal translation of a function}
\end{figure}
To stretch or compress a function vertically, we simply multiply $y=f(x)$ by a constant $c^{te}>1$ or, respectively, $0\leq c^{te}<1$, as:
\[
y=c^{te}\,f(x)
\]
and don't forget that if a function is linear then we have the special property $f(\lambda x)=\lambda f(x)$.
This is graphically represented for the case $c^{te}>1$ by:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/function_property_upscaled.jpg}
\caption{Example of vertical stretch of a function}
\end{figure}
and when $0\leq c^{te}<1$ by:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/function_property_downscaled.jpg}
\caption{Example of vertical compression of a function}
\end{figure}
To stretch or compress a function horizontally, in the same way, we just need to divide the variable $x$ by a constant $c^{te}>1$ or, respectively, $0< c^{te}<1$, as:
\[
y=f\!\left(\frac{x}{c^{te}}\right)
\]
This is graphically represented for the case $c^{te}>1$ by:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/function_property_upscaled_horizontal.jpg}
\caption{Example of horizontal stretch of a function}
\end{figure}
and when $0\leq c^{te}<1$ by:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/function_property_downscaled_horizontal.jpg}
\caption{Example of horizontal downscale of a function}
\end{figure}
\end{enumerate}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Translating, stretching or compressing a function, or applying a symmetry to it, is transforming it. The plot resulting from these transformations is named the "\NewTerm{transformed}\index{transformed graph}" of the initial plot.
\end{tcolorbox}
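As a combined illustration of the above properties (with an arbitrarily chosen transformation), the graph of $y=2f(x-1)+3$ is obtained from the graph of $y=f(x)$ by translating it one unit to the right, stretching it vertically by a factor of $2$, and then translating it three units upwards.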
\textbf{Definitions (\#\mydef):} We say that a function $f$ is (we simplify the definitions by using a univariate function):
\begin{itemize}
\item[D1.] A function is a "\NewTerm{constant function}\index{constant function}" on an interval $I$ if for each pair $(x_1,x_2)$ of elements of $I$ such that $x_1\neq x_2$, we have $f(x_1)=f(x_2)$. What we denote in a condensed manner by:
\[
\forall (x_1,x_2)\in I^2:\quad f(x_1)=f(x_2)
\]
\item[D2.] A function is an "\NewTerm{increasing function}\index{increasing function}" or an "\NewTerm{increasing function in the broadest sense}" on the interval $I$ if for each pair $(x_1,x_2)$ of elements of $I$ such that $x_1\leq x_2$, we have $f(x_1)\leq f(x_2)$. What we denote in a condensed manner by:
\[
\forall (x_1,x_2)\in I^2:\quad x_1\leq x_2 \Rightarrow f(x_1)\leq f(x_2)
\]
\item[D3.] A function is a "\NewTerm{decreasing function}\index{decreasing function}" or a "\NewTerm{decreasing function in the broadest sense}" on the interval $I$ if for each pair $(x_1,x_2)$ of elements of $I$ such that $x_1\leq x_2$, we have $f(x_1)\geq f(x_2)$. What we denote in a condensed manner by:
\[
\forall (x_1,x_2)\in I^2:\quad x_1\leq x_2 \Rightarrow f(x_1)\geq f(x_2)
\]
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
A function is a "\NewTerm{monotonic function}\index{monotonic function}" or "\NewTerm{monotonic function in the broadest sense}" on an interval $I$ if it is increasing or decreasing in this interval.
\end{tcolorbox}
\item[D4.] A function is a "\NewTerm{strictly increasing function}\index{strictly increasing function}" on the interval $I$ if for each pair $(x_1,x_2)$ of elements of $I$ such that $x_1< x_2$, we have $f(x_1)< f(x_2)$. What we denote in a condensed manner by:
\[
\forall (x_1,x_2)\in I^2:\quad x_1< x_2 \Rightarrow f(x_1)< f(x_2)
\]
\item[D5.] A function is a "\NewTerm{strictly decreasing function}\index{strictly decreasing function}" on the interval $I$ if for each pair $(x_1,x_2)$ of elements of $I$ such that $x_1< x_2$, we have $f(x_1)> f(x_2)$. What we denote in a condensed manner by:
\[
\forall (x_1,x_2)\in I^2:\quad x_1< x_2 \Rightarrow f(x_1)> f(x_2)
\]
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
A function is a "\NewTerm{strictly monotonic function}" on an interval $I$ if it is strictly increasing or decreasing in this interval.
\end{tcolorbox}
\end{itemize}
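Note, as a simple example, that a constant function $f(x)=c^{te}$ is both increasing and decreasing in the broadest sense on any interval $I$, but it is neither strictly increasing nor strictly decreasing.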
\subsubsection{Analytical Representation}
The analytical method of representation is by far the most used; it consists of representing a function by an "\NewTerm{analytic expression}\index{analytic expression}" or "\NewTerm{closed form}\index{closed form}", that is, a symbolic and abstract mathematical notation of the known mathematical operations that must be applied, in a certain order, to the numbers and letters expressing the constants or variables that we seek to analyze.
Note that by "all known mathematical operations", we mean not only the mathematical operations seen in the section on Arithmetic (addition, subtraction, root extraction, etc.) but also all the operations that will be defined later in this book.
If the functional dependence $y=f(x)$ is such that $f$ is an analytic expression, then we say that the "function $y$ of $x$" is "given analytically".
Here are some examples of simple analytical expressions:
\[
y=x^2+3x-1,\qquad y=\sqrt{1+x^2},\qquad y=\frac{\sin x}{x}
\]
When we determined the equation of the mediator, we obtained an analytical expression of the visual straight line that characterizes it, as a function of the type:
\[
y=ax+b
\]
which, we recall, is the analytical expression of the equation of a straight line, also named "\NewTerm{linear equation}\index{linear equation}" or "\NewTerm{affine function}\index{affine function}", in the plane. When two of its points $P_1(x_1,y_1),P_2(x_2,y_2)$ are known, the slope is given by the ratio of the vertical growth to the horizontal growth as:
\[
a=\frac{y_2-y_1}{x_2-x_1}
\]
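For instance, with the same two arbitrary points as above, $P_1(1,2)$ and $P_2(4,6)$, the slope is:
\[
a=\frac{6-2}{4-1}=\frac{4}{3}
\]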
A friendly and trivial application is to prove analytically that two non-vertical lines are parallel if and only if they have the same slope. Thus, given two lines with the equations:
\[
y_1=a_1x+b_1 \qquad y_2=a_2x+b_2
\]
The lines intersect at a point $(x, y)$ if and only if the values of $y$ are equal for a certain $x$, that is to say:
\[
a_1x+b_1=a_2x+b_2
\]
The last equation can be solved with respect to $x$ if and only if $a_1-a_2\neq 0$. We have therefore proved that the lines $y_1,y_2$ intersect if and only if $a_1\neq a_2$. Therefore, they do not intersect (they are parallel) if and only if $a_1=a_2$.
In a quite simple way, by applying the Pythagorean theorem, it is not difficult (\SeeChapter{see section Analytical Geometry}) to determine that a circle of radius $r$ and center $C(h, k)$ has for equation (it is customary in mathematics not to solve for $y$ in the equation of the circle, so that the equation of the latter is much more visually aesthetic and telling):
\[
(x-h)^2+(y-k)^2=r^2
\]
In these examples the functions are expressed analytically by a single formula (equality between two analytical expressions) which defines at the same time the "natural domain of definition" of the functions.
\textbf{Definition (\#\mydef):} The "\NewTerm{natural domain of definition}\index{natural domain of definition}" of a function given by an analytical expression is the set of $x$ values for which the expression on the right-hand side has a definite value.
For example, the function:
\[
y=\frac{1}{x-1}
\]
is defined for all values of $x$ except the value $x=1$, where we have a singularity (division by zero).
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
There is an infinite number of functions and we cannot present them all here; however, we will meet more than a thousand of them throughout this book, which should amply suffice to give an idea of their study.
\end{tcolorbox}
And we have the following famous table of variations, which is also considered an analytical tool and is used by some teachers to study the basics of the derivative $f'$ of a function $f$ (\SeeChapter{see section Differential and Integral Calculus}). For example, with the function $x^3-3x^2+2$ (already seen in the previously mentioned section):
\begin{minipage}{\linewidth}\centering
\begin{variations}
x & \mI & & 0 & & 2 & & \pI \\
\filet
f' & \ga + & 0 & - & 0 & \dr+ \\
\filet
\m{f} & ~ & \c & \h{~} & \d & ~ & \c \\
\end{variations}
\end{minipage}
Whose corresponding plot is:
\begin{figure}[H]
\centering
\includegraphics{img/algebra/variation_plot_example.jpg}
\caption[]{Plot of function $x^3-3x^2+2$}
\end{figure}
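For readers who want to check such a variation table with a computer algebra system, a possible Maple session (a sketch only, following the snippets used earlier) is:

\texttt{>f:=x->x\string^ 3-3*x\string^ 2+2;}\\
\texttt{>df:=D(f);}\\
\texttt{>solve(df(x)=0,x);}\\
\texttt{>plot(f(x),x=-2..4);}

The two critical abscissas returned, $0$ and $2$, are precisely the columns of the variation table above.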
\pagebreak
\subsection{Functions}
In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output.
Functions of various kinds are the central objects of investigation in most fields of modern mathematics. There are many ways to describe or represent a function. Some functions may be defined by a formula or algorithm that tells how to compute the output for a given input. Others are given by a picture, named the "graph" of the function. In science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function could be described implicitly, for example as the inverse to another function or as a solution of a differential equation.
First, let us recall the definitions already given earlier during our study of the graphical representation of functions:
\textbf{Definitions (\#\mydef):} We say that a function $f$ is (we simplify the definitions by using a univariate function):
\begin{itemize}
\item[D1.] A function is a "\NewTerm{constant function}\index{constant function}" on an interval $I$ if for each pair $(x_1,x_2)$ of elements of $I$ such that $x_1\neq x_2$, we have $f(x_1)=f(x_2)$. What we denote in a condensed manner by:
\item[D2.] A function is an "\NewTerm{increasing function}\index{increasing function}" or an "\NewTerm{increasing function in the broadest sense}" on the interval $I$ if for each pair $(x_1,x_2)$ of elements of $I$ such that $x_1\leq x_2$, we have $f(x_1)\leq f(x_2)$. What we denote in a condensed manner by:
\item[D3.] A function is an "\NewTerm{decreasing function}\index{decreasing function}" or an "\NewTerm{decreasing function in the broadest sense}" on the interval $I$ if for each pair $(x_1,x_2)$ of elements of $I$ such that $x_1\leq x_2$, we have $f(x_1)\geq f(x_2)$. What we denote in a condensed manner by:
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
A function is a "\NewTerm{monotonic function}\index{monotonic function}" or "\NewTerm{monotonic function in the broadest sense}" on an interval $I$ if it is increasing or decreasing in this interval.
\end{tcolorbox}
\item[D4.] A function is a "\NewTerm{strictly increasing function}\index{strictly increasing function}" on the interval $I$ if for each pair $(x_1,x_2)$ of elements of $I$ such that $x_1 < x_2$, we have $f(x_1)< f(x_2)$. What we denote in a condensed manner by:
\item[D5.] A function is a "\NewTerm{strictly decreasing function}\index{strictly decreasing function}" on the interval $I$ if for each pair $(x_1,x_2)$ of elements of $I$ such that $x_1 < x_2$, we have $f(x_1)> f(x_2)$. What we denote in a condensed manner by:
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
A function is a "\NewTerm{strictly monotonic function}\index{strictly monotonic function}" on an interval $I$ if it is strictly increasing or decreasing in this interval.
\end{tcolorbox}
\end{itemize}
And let us add now complementary definitions:
\textbf{Definitions (\#\mydef):}
\begin{enumerate}
\item[D1.] We say that $y$ is a function of $x$ and we will write $y=f(x),y=\varphi(x)$, etc., if for every value of the variable $x$ belonging to a certain domain of definition (set) $D$, there corresponds a value of the variable $y$ in another domain of definition (set) $E$. What we denote in various ways (the third one being the most recommended):
The variable $x$ is named "\NewTerm{independent variable}\index{independent variable}" or "\NewTerm{input variable}" or even "\NewTerm{exogenous variable}\index{exogenous variable}" and $y$ the "\NewTerm{dependent variable}\index{dependent variable}" or "\NewTerm{endogenous variable}\index{endogenous variable}".
The dependence between the variables $x$ and $y$ is named a "\NewTerm{functional dependency}\index{functional dependency}". The letter $f$ in the symbolic notation of the functional dependence indicates that we need to apply some operations to $x$ to obtain the corresponding $y$ value.
Sometimes we write:
rather than:
In the latter case the letter $y$ expresses at the same time the value of the function and the symbol of operations applied to $x$.
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
As we saw it during our study in the section Set Theory, an application (or function) may be injective, surjective or bijective:
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/analysis/functions_type.jpg}
\caption{Quick summary of applications/functions types}
\end{figure}
It is therefore necessary that readers for whom these concepts are unknown go and read these definitions first.
\end{tcolorbox}
\item[D2.] The set of $x$ values (inputs) for which the value of the function $y$ is given by the function $f (x)$ is named the "\NewTerm{range of existence}\index{range of existence}" of the function or "\NewTerm{domain of definition}\index{domain of definition}" of the function.
The set of outputs of $f(x)$ is named the "\NewTerm{image}\index{image}" or sometimes the "\NewTerm{co-domain}\index{co-domain}". When studied from the point of view of the knowledge of the output values only, the set of $x$ is named the "\NewTerm{pre-image}".
\item[D3.] A function $f(x)$ is named a "\NewTerm{periodic function}\index{periodic function}" if there is a constant $c^{te}$ such that the function's value does not change when we add (or subtract) that constant to the independent variable, such that:
which corresponds to a translation along the $x$-axis. The smallest constant satisfying this condition is named the "\NewTerm{period}\index{period}" of the function. It is frequently denoted by the letter $T$ in physics.
The most common periodic functions known by students and engineers are the trigonometric functions (see section of the corresponding name):
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{img/analysis/periodic_function.jpg}
\caption{Example of periodic function with period and amplitude}
\end{figure}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The sum of two periodic functions with different periods is not necessarily periodic and there is no general formula to get the period of a function that is the sum of $n$ other functions!
\end{tcolorbox}
\item[D4.] In differential calculus (\SeeChapter{see section Differential and Integral Calculus}), the expression:
with $h\neq 0$ is of particular interest. We name it a "\NewTerm{growth quotient}\index{growth quotient}" (we discuss this in much more detail in our study of differential and integral calculus).
\item[D5.] We use certain properties of functions for easy graphical representation and analysis or mathematical simplifications. In particular, a function $f (x)$ is named "\NewTerm{even function}\index{even function}" if:
for all $x$ in its definition domain.
That is to say, as we have already seen previously, its graph is symmetric with respect to the $y$-axis:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/function_property_symetry_y.jpg}
\caption{Example of even function}
\end{figure}
A function $f (x)$ is named "\NewTerm{odd function}\index{odd function}" if:
for all $x$ in its definition domain.
That is to say, as we have already seen previously, its graph is symmetric with respect to the origin:
\begin{figure}[H]
\centering \includegraphics{img/analysis/function_property_symetry_o.jpg}
\caption{Example of odd function}
\end{figure}
So, to summarize, an even function is a function that is independent of the sign of the variable, and an odd function changes sign when we change the sign of the variable (the Cornu spiral in the section Civil Engineering is a good practical example of an odd function). This concept will be very useful to us to simplify some very useful developments in physics (such as Fourier transforms of odd or even functions for example, or the calculation of certain integrals!).
\begin{theorem}
Any function defined on a domain symmetric about the origin can be written as the sum of an even function and an odd function. Remember that this type of theorem, linking a general concept to a particular case and its opposite, is often found in mathematics. We will see such examples in tensor calculus with the symmetric and anti-symmetric tensors (\SeeChapter{see section Tensor Calculus}) or in quantum physics with the Hermitian or non-Hermitian operators (\SeeChapter{see section Wave Quantum Physics}).
\end{theorem}
\begin{dem}
Let us write:
Then:
If we sum then we get:
and by subtracting:
So there really is an odd and even decomposition of any function!!!
\begin{flushright}
$\square$ Q.E.D.
\end{flushright}
\end{dem}
Finally, it is important to note that:
\begin{itemize}
\item The product of two even functions is an even function
\item The product of two odd functions is an even function
\item The product of an even and odd function is an odd function
\end{itemize}
Let us see a short proof of the last property because we will need it in the chapter on Geometry.
\begin{dem}
Let $g(x)$ be an even function and $h(x)$ an odd function such as:
Therefore:
\begin{flushright}
$\square$ Q.E.D.
\end{flushright}
\end{dem}
\item[D6.] In general, if $f (x)$ and $g (x)$ are arbitrary functions, we use the terminology and notations given in the following table:
The definition domains of $f+g,f-g,f\cdot g$ are the intersection $I$ of the definition domains of $f(x)$ and $g(x)$, that is to say, the numbers which are common to both domains of definition. The domain of definition of $f/g$ is meanwhile the subset of $I$ of all $x$ such that $g(x)\neq 0$.
\item[D7.] Let $y$ be a function $f$ of $u$ such that $y=f(u)$, and $u$ a function $g$ of $x$ such that $u=g(x)$; then $y$ depends on $x$ and we have what we name a "\NewTerm{composite function}\index{composite function}" that we denote:
The last equality should be read "\NewTerm{$f$ round $g$}" (from the French "rond", i.e. "$f$ composed with $g$"), and we must not confuse the "round" symbol with the notation of the scalar product that we have seen during our study of the section Vector Calculus.
The domain of definition of the composite function is either identical to the entire domain of definition of the function $u=g(x)$ or the part of the domain in which the values of $u$ are such that the corresponding values $f (u)$ belong to the domain of definition of this function.
Obviously the principle of composite function can be applied not only once, but an arbitrary number of times such that $y=f(g(h(t)))$ and so on...
In computer science a function may be composed with itself a given number of times $n$, such that $f(f(f(\ldots f(x))))=f^n(x)$, a notation that must not be confused with $f^2(x)$ understood as the square of $f(x)$.
If $u$ does not depend on another variable (or it is not itself a composite function), then we say that $f(x)$ is an "\NewTerm{elementary function}\index{elementary function}".
Obviously there are an infinite number of elementary functions but most can be classified into families whose expression is similar to one of the following:
\begin{itemize}
\item "\NewTerm{Linear functions}\index{linear function}":
They are simply functions representing straight lines of slope $a$ passing through the origin of the axes.
\item "\NewTerm{Affine functions}\index{affine function}":
They are simply functions representing straight lines of slope $a$ passing through the origin of the axes or not (a linear function with a translation).
\item "\NewTerm{Power functions}\index{power function}":
where $m\in\mathbb{R}$. Functions involving roots are often named "\NewTerm{radical functions}\index{radical functions}".
\begin{figure}[H]
\centering
\includegraphics{img/analysis/power_function.jpg}
\caption{Different plots of simple power functions}
\end{figure}
\item "\NewTerm{Absolute value functions}\index{absolute value function}" (see section Arithmetic Operators for the definition and the study of the "absolute value"):
For example the plots with Maple 4.00b that we get with the command:\\
\texttt{>plot([(x),(cos(x)),(x\string^2-3),(x\string^3-4*x\string^2+2*x)],x=-6..6,y=-4..3,\\
thickness=3);}
\begin{figure}[H]
\centering
\includegraphics{img/analysis/pre_absolute_plot_functions.jpg}
\end{figure}
and taking the absolute value:\\
\texttt{>plot([abs(x),abs(cos(x)),abs(x\string^2-3),abs(x\string^3-4*x\string^2+2*x)]\\
,x=-6..6,y=-0.5..3,thickness=3);}
\begin{figure}[H]
\centering
\includegraphics{img/analysis/post_absolute_plot_functions.jpg}
\end{figure}
\item "\NewTerm{Exponential functions}\index{exponential function}":
where the famous $e^x$ is only a special case and also $a\in\mathbb{R}$.
When $a\geq 0$ we have typically:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/exponential_functions.jpg}
\caption{Different plots of simple exponential functions $(1^2,2^x,3^x,4^x)$ with Maple 4.00b}
\end{figure}
where $m$ is a positive number different from $1$ (otherwise it is simply a constant function):
If $a<0$ and $x\in\mathbb{R}$, the function is not defined over the reals. Indeed, for example $(-1)^{0.5}=\left\lbrace \mathrm{i}, -\mathrm{i} \right\rbrace$, so it would be an application $f:\mathbb{R}\mapsto\mathbb{C}^2$; as far as we know there is no nice way to represent it visually, and anyway this is not a function in the traditional sense.
\item "\NewTerm{Logarithmic functions}\index{logarithmic function}":
with $a\in\mathbb{R}^{+}$ and that by construction of the logarithm (see further below) are of the type $f:\mathbb{R}^{+}\mapsto \mathbb{R}$.
We have typically:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/logarithm_functions.jpg}
\caption[]{Different plots of logarithms: $\ln(x)=\log_e(x)$ in green and $\log_{10}(x)$ in red, with Maple 4.00b}
\end{figure}
\item "\NewTerm{Periodic/Trigonometric functions}\index{period function}\index{trigonometric function}":
We have already defined previously what a periodic function is. For the trigonometric functions, the reader can see below a plot of the main ones, but for more details it is strongly recommended to read the section Trigonometry:
\begin{figure}[H]
\centering
\includegraphics[scale=0.9]{img/analysis/trigonometric_functions.jpg}
\caption{Different plots of trigonometric functions with Maple 4.00b}
\end{figure}
\item "\NewTerm{Polynomial functions}\index{polynomial function}":
where as we already know $a_0,a_1,...,a_n$ are constant numbers named "\NewTerm{coefficients}\index{coefficients}" and $n$ is a positive integer that we name "\NewTerm{degree of the polynomial}\index{degree of a polynomial}". Obviously this function is defined for all values of $x$, that is to say, it is define on an infinite interval.
It follows that the power functions of the type $x^m$ and the linear functions of the type $f(x)=x$ are a subclass of polynomials for $m\in \mathbb{N}$.
We have already studied polynomials more deeply in the section Calculus with their main properties, but let us give again the plot of some of them as a recall:
\begin{figure}[H]
\centering
\includegraphics{img/algebra/polynomials.jpg}
\caption{Some polynomials plotted with R 3.2.1 (see my book on R)}
\end{figure}
We will see and study in this book some famous polynomials as: Legendre polynomials (
\item "\NewTerm{Rational fractions}\index{rational fractions}" are polynomials divisions (\SeeChapter{see section Calculus}):
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Obviously two rational fractions are equal, if one is obtained from the other by multiplying the numerator and denominator by the same polynomial.
\end{tcolorbox}
The rational function:
is not defined at $x^2=5 \Leftrightarrow x=\pm \sqrt{5}$. It is asymptotic (see further below) to $\frac{x}{2}$ as x approaches infinity:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/rational_function.jpg}
\caption{Example rational function $f(x) = \frac{x^3-2x}{2(x^2-5)}$}
\end{figure}
The rational function:
is defined for all real numbers, but not for all complex numbers, since if x were a square root of -1 (i.e. the imaginary unit or its negative), then formal evaluation would lead to division by zero!
A constant function is a rational function, since constants are polynomials. Every polynomial function $f(x) = P(x)$ is a rational function with $Q(x) = 1$. The power functions $f(x)=x^m$ are also rational functions when $m\in\mathbb{N}$.
\item "\NewTerm{Algebraic functions}\index{algebraic function}" are defined by the fact that the function $f(x)$ is the result of additions, subtractions, multiplications and divisions of variables raised to an integer or non-integer power. Therefore most of the functions defined previously can be included in this definition: linear functions, affine functions, power functions, polynomial functions, rational functions.
\item A "\NewTerm{piecewise function}\index{piecewise function}" is a function defined by different formulas on different parts of its domain. The absolute value is a famous example of a piecewise-defined function because the formula changes with the sign of $x$:
\item A "\NewTerm{step function}\index{step function}" $f:[a,b]\in \mathbb{R}$ is defined if and only if there exists a subdivision $(a_i)_{0\leq i \leq n}$ of $[a, b]$ such that $a_0=a$ and $a_n=b$ and $(\lambda_0,...,\lambda_n)\in \mathbb{R}^n$ such as:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/step_function.jpg}
\caption{Example of step function}
\end{figure}
Such functions can be found in signal processing and also in statistics for survival analysis.
\end{itemize}
However, there is a very large number of other elementary functions that we will meet in the individual sections of this book. Examples include the "Bessel functions" (\SeeChapter{see section Sequences and Series}), the "Lipschitz functions" (\SeeChapter{see section Topology}), the "Dirac functions" (\SeeChapter{see section Differential and Integral Calculus}), the "distribution functions" (\SeeChapter{see section Statistics}), the "Euler gamma function" (\SeeChapter{see section Differential and Integral Calculus}), etc. The reader will notice that the Dirac function also belongs to the family of distribution functions.
\end{enumerate}
\subsubsection{Limits and Continuity of Functions}
We will now consider ordered variables of a special type, which we define by the relation "the variable tends to a limit." In what will follow, the concept of limit of a variable will play a fundamental role, being intimately related to the basic notions of mathematical analysis, derivatives, integrals, etc.
\textbf{Definition (\#\mydef):} The number $a$ is named the "\NewTerm{limit}\index{limit}" of variable magnitude $x$, if for any arbitrarily small positive number $\varepsilon$ we have:
If the number $a$ is the limit of the variable $x$, we say that "\NewTerm{
$x$ tends to the limit $a$}".
We can also define the concept of limit from geometrical considerations (this can help to better understand ... but not always ...):
The constant number $a$ is the limit of the variable $x$ if, for any given neighborhood, no matter how small, of center $a$ and of radius $\varepsilon$, we can find a value $x$ such that all the points corresponding to the following values of the variable belong to this neighborhood (notions that we defined earlier). We represent this geometrically as:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/limit_geometric_representation.jpg}
\caption{Geometric concept of limit in $\mathbb{R}^1$}
\end{figure}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
\textbf{R1.} It should be trivial that the limit of a constant value is equal to this constant, since the inequality $|x-c^{te}|=|c^{te}-c^{te}|=0<\varepsilon $ is always satisfied for an arbitrary $\varepsilon>0$.\\
\textbf{R2.} Not all variables have limits. For example $y=\sin(x)$, as this trigonometric function keeps oscillating between $-1$ and $+1$ as $x$ ranges over $]-\infty,+\infty[$.
\end{tcolorbox}
\textbf{Definition (\#\mydef):} A variable $x$ tends to infinity if for any chosen positive $M$, we can indicate one value of $x$ from which all successive values of the variable $x$ (values in the neighborhood of the previously chosen value) satisfy the inequality $|x|>M$. Formally:
\begin{itemize}
\item A variable $x$ "\NewTerm{tends to $+\infty$}" if for any positive chosen $M>0$, we indicate one value of $x$ from which all the successive values of the variables $x$ satisfies the inequality $M<x$.
It is typically the type of consideration that we have for divergent sequences (divergent to infinity) where for a given term of value $M$ of the sequence all the other terms are greater ant $M$.
\item A variable $x$ "\NewTerm{tends to $-\infty$}" if for any negative chosen $M<0$, we indicate one value of $x$ from which all the successive values of the variables $x$ satisfies the inequality $x<M$.
\end{itemize}
\textbf{Definition (\#\mydef):} Given $y=f(x)$ a function defined in a neighborhood of $a$ or at some points of this neighborhood. The function $y=f(x)$ tends to the limit $b$ (that is to say $y\rightarrow b$) when $x$ tends to $a$ (that is to say $x\rightarrow a$) if for any positive number $\varepsilon$, however small, we can indicate a positive number $\delta$ such that all $x$ different from $a$ satisfying the inequality $|x-a|<\delta$ also satisfy $|f(x)-b|<\varepsilon$. Formally, a function has a limit $b$ at $a$ in a domain $E$ if:
The inequality $|x-a|<\delta$ bounds the distance at which our $x$ lies from $a$ without taking care of the direction (left or right), since we take absolute values as the measure of distance. Indeed, on an axis, we can approach a given value coming from the left or from the right (if necessary, you can imagine a bus approaching a bus stop either from the left or from the right; what matters is only that the absolute distance from it to the bus stop is less than or equal to $\delta$).
If $b$ is the limit of the function $f (x)$ when $x\rightarrow a$ we then write in this book in any case:
Obviously the above definition remains valid when $a=\pm \infty$ and/or $b=\pm \infty$!
To specify the direction from which we approach when taking the limit, we use a special notation (recall that this will give us the information about which side of the road our bus comes from...). Thus, if $f (x)$ tends towards the limit $b_1$ when $x$ approaches a number $a$ by taking only values smaller than $a$, then we write:
(notice the small $-$ subscript) and we name $b_1$ the "\NewTerm{left limit}\index{left limit}" of the function $f (x)$ at point $a$ (because remember that the horizontal axis goes from left to right from $-\infty$ to $+\infty$, so small values compared to a given value, are on the left).
If $x$ takes values greater than $a$, then we will write:
(notice the small $+$ subscript) and we name $b_2$ the "\NewTerm{right limit}\index{right limit}" of the function $f (x)$ at point $a$.
In the figure below we have for example:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/limits.jpg}
\caption{Left and Right limit examples}
\end{figure}
It is not always easy to calculate limits of some functions. Let us see two typical examples:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
E1. Let us prove that:
is true. For this purpose we have to prove that for any small $\varepsilon$ the inequality:
will be satisfied as soon as $|x|>M$ where $M$ is defined by the choice of $\varepsilon$. The previous inequality is obviously equivalent to:
which is satisfied if we have:
We admit that the example and the method can be discussed.... But in fact it is only an application of l'Hôpital's rule (ratio of the derivatives) already proved in the section of Differential and Integral Calculus. The reader must also know that we will see other techniques to determine limits further below, with better examples.\\
E2. Now, using Taylor series and a change of variable, suppose that we want to calculate:
The method is quite intuitive. Indeed, first we do a change of variable:
Now consider the Taylor series about $x=0$ for the function $f(x)=\sqrt{1+ax}$. We have:
Which gives:
\end{tcolorbox}
\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
as a Taylor expansion about $x=0$. Applying this to our limit we see that:
\end{tcolorbox}
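More generally, limits of this kind can be checked with a computer algebra system; the functions below are generic illustrations (not necessarily the ones of the two examples above), shown with standard Maple commands whose output formatting may vary between releases:\\
\texttt{> limit(x\string^2/exp(x), x = infinity); \# 0: the exponential dominates any power\\
> limit(sqrt(x\string^2 + x) - x, x = infinity); \# 1/2, a classic case handled by a change of variable\\
> taylor(sqrt(1 + a*x), x = 0, 3); \# 1 + (1/2)*a*x - (1/8)*a\string^2*x\string^2 + O(x\string^3)}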
The meaning of the symbols $x\rightarrow -\infty$ and $x\rightarrow +\infty$ makes obvious the meaning of the expressions:
and:
that we denote formally by:
We have defined the case where the function $f (x)$ tends to a certain limit $b$ when $x\rightarrow a_{+,-}$ or $x\rightarrow \pm \infty$. Now let us consider the case where the function tends to infinity when the variable $x$ changes in a certain way.
We then have typically and obviously:
Or when we need to indicate the direction:
If the function $f(x) \rightarrow +\infty$ when $x \rightarrow +\infty$ then we write:
And as we have four possibilities for the signs, when we work purely formally we write:
that is to say the four following possibilities:
And once again do not forget, as we already mentioned before, that some functions such as for example $f(x)=\sin(x)$ do not have any limit when $x\rightarrow \pm \infty$. We then just say that the function is "bounded" (\SeeChapter{see section Set Theory}).
\begin{figure}[H]
\centering
\includegraphics{img/analysis/limite_delucq.jpg}
\end{figure}
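As a side note, a computer algebra system makes this behaviour visible: asked for the limit of $\sin(x)$ at infinity, a recent Maple release typically answers with a range rather than a single value, signalling that the function merely stays bounded:\\
\texttt{> limit(sin(x), x = infinity); \# returns -1 .. 1: no limit exists, the function only stays bounded}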
Now that we have a rough overview of the concept of limit, we will give an extremely important definition that has a very important place in many areas of high-level mathematics, theoretical physics and computer science (numerical methods).
\textbf{Definition (\#\mydef):} Given a function $f(x)$, one of its subdomains (or the whole domain) $E$ (most of the time $E \subseteq \mathbb{R}$) and $x_0\in E$, we say that we have a "\NewTerm{continuous function}\index{continuous function}" at $x_0$ if and only if:
That is to say, more formally (you have to be able to read in this the fact that we get infinitely close to a limit, and that this is what allows continuity):
In other words: a function is continuous if for every point $x_0$ in the domain $E$, we can make the images of that point ($f(x_0)$) and of another point ($f(x)$) arbitrarily close (within a distance $\varepsilon$) provided we move the other point ($x$) close enough (within a distance $\delta$) to our given point.
The latter relation will be generalized a little bit in the section of Topology and completed with the concept of... "uniform continuity"!
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1. }$f$ is "\NewTerm{continuous on the left}\index{continuous on the left}" or respectively "\NewTerm{continuous on the right}\index{continuous on the right}", if we add to the definition above the condition $x<x_0$, respectively $x>x_0$.\\
\textbf{R2.} A continuous function with a continuous inverse function is named a "\NewTerm{homeomorphism}\index{homeomorphism}".\\
\textbf{R3.} Instead of saying when necessary that a function is not continuous on $x_0$ or on a given domain, some practitioners prefer to say that the function has an "\NewTerm{oscillation}\index{oscillation}".
\end{tcolorbox}
We have the following trivial corollaries:
\begin{enumerate}
\item[C1.] $f(x)$ is continuous at $x_0$ if and only if $f(x)$ is continuous on the left and on the right at $x_0$.
\item[C2.] $f(x)$ is continuous on $E$ if and only if $f(x)$ is continuous on any point of $E$.
\end{enumerate}
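As a quick hedged illustration of this definition (the two test functions below are arbitrary choices), Maple's \texttt{iscont} command checks continuity over an interval:\\
\texttt{> iscont(x\string^2 + 1, x = 0 .. 2); \# true: a polynomial is continuous at every point of the interval\\
> iscont(1/(x - 1), x = 0 .. 2); \# false: the singularity at x = 1 breaks continuity}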
\paragraph{Limit laws}\mbox{}\\\\
We now take a look at the "\NewTerm{limit laws}\index{limit laws}", the individual properties of limits in the univariate case. The proofs will be omitted as they are quite intuitive, but any reader can request the proof of one of them if needed!
Let $f(x)$ and $g(x)$ be defined for all $x\neq a$ over some open interval containing $a$. Assume that $L$ and $M$ are real numbers such that:
Let $c^{te}$ be a constant. Then each of the following statements holds (a small computer-algebra check is sketched right after this list):
\begin{itemize}
\item The sum law for limits gives:
\item The difference law for limits gives:
\item Constant multiple law for limits:
\item Product law for limits:
\item Quotient law for limits:
for $M\neq 0$.
\item Power law for limits:
for every positive integer $n$.
\item Root law for limits:
for all $L$ if $n$ is odd and for $L\geq 0$ if $n$ is even.
\end{itemize}
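As announced, here is a small computer-algebra check of these laws on an arbitrary pair of functions (any other choice with existing limits would do):\\
\texttt{> f := x -> x\string^2 + 1: g := x -> sin(x)/x:\\
> limit(f(x)*g(x), x = 0); \# 1\\
> limit(f(x), x = 0)*limit(g(x), x = 0); \# 1 as well, as predicted by the product law}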
\subsubsection{Asymptotes}
The term "\NewTerm{asymptote}\index{asymptote}" is used in mathematics to precise possible properties of an infinite branch of curve which growth tends to an infinitesimal value.
In analytic geometry, an asymptote of a curve is simply said to be a line such that the distance between the curve and the line approaches zero as they tend to infinity. In some contexts, such as algebraic geometry, an asymptote is defined as a line which is tangent to a curve at infinity.
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The word asymptote is derived from the Greek and means "not falling together".
\end{tcolorbox}
\textbf{Definitions (\#\mydef):}
\begin{enumerate}
\item[D1.] When the limit of a function $f(x)$ tends to a constant $c^{te}$ when $x \rightarrow \pm \infty$, then the graphical representation of this function leads us to draw a horizontal line that we name "\NewTerm{horizontal asymptote}\index{horizontal asymptote}" whose equation satisfies:
\item[D2.] When the limit of a function $f(x)$ tends to $\pm \infty$ when $x \rightarrow a_{+,-}$, then the graphical representation of this function leads us to draw a vertical line that we name "\NewTerm{vertical asymptote}\index{vertical asymptote}" whose equation satisfies:
A vertical asymptote is the typical symptom of a division by zero in a fraction and has a very important place in physics. The phenomenon is also named a "\NewTerm{singularity}\index{singularity}".
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
The graph of the function:
has the straight line $x=1$ as vertical asymptote and the straight line $y=0$ as horizontal asymptote:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/asymptote_vertical_horizontal_example.jpg}
\caption{Graphical representation of a horizontal and vertical asymptote}
\end{figure}
\end{tcolorbox}
\item[D3.] The straight line of equation $y=ax+b$ is an "\NewTerm{oblique asymptote}\index{oblique asymptote}" of the curve of the function $f (x)$ if:
the values of $a$ and $b$ can be easily found using the following relations:
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Caution! A curve may have two distinct oblique asymptotes in $+\infty$ and $-\infty$.
\end{tcolorbox}
To find a possible oblique asymptote, one must already be certain that the function $f(x)$ admits an infinite limit at $+\infty$ or $-\infty$; only then do we look for the limits at $-\infty$ and $+\infty$ of $f (x) / x$ and of $f(x)-ax$ (a small computer-algebra sketch of this procedure is given right after the list below).
Three typical cases can be considered for oblique asymptotes:
\begin{enumerate}
\item The representative curve of $f(x)$ has for asymptotic direction the line of equation $y=ax$:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
The graph of the function:
has the straight line of $y=x$ as oblique asymptote:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/asymptote_oblique_affine_example.jpg}
\caption{Graphical representation of an oblique affine asymptote}
\end{figure}
\end{tcolorbox}
\item The representative curve of $f(x)$ has an infinite branch (this branch has no closed-form asymptote) and the only thing we can say is that the $x$-axis gives the direction of this branch. Such a branch exists when:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
The functions $f(x)=\sqrt{x}$ (in red) or $\ln(x)$ (in green) have a limit $f(x)/x$ equal to $0$ and both have a "parabolic branch" of direction following the $x$-axis:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/asymptote_parabolic_branche_example_x.jpg}
\caption[]{Graphical representation of a parabolic branch example following the $x$-axis}
\end{figure}
\end{tcolorbox}
\item The representative curve of $f(x)$ has an infinite branch (this branch has no closed-form asymptote) and the only thing we can say is that the $y$-axis gives the direction of this branch (we then also speak of a "parabolic branch"):
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
The function $f(x)=x^2$ has an infinite $f(x)/x$ limit and therefore has a parabolic branch of direction following the $y$-axis.
\begin{figure}[H]
\centering
\includegraphics{img/analysis/asymptote_parabolic_branche_example_y.jpg}
\caption[]{Graphical representation of a parabolic branch example following $y$-axis}
\end{figure}
\end{tcolorbox}
\item A function $f(x)$ is said to have a "\NewTerm{curvilinear asymptote}\index{curvilinear asymptote}" if it satisfies:
for $n>1$, where, for recall, $P_n(x)$ is a polynomial of degree $n$.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
The function:
has a curvilinear asymptote that is:
Indeed:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/asymptote_curvilinear_example.jpg}
\caption[]{Graphical representation of curvilinear asymptote}
\end{figure}
\end{tcolorbox}
\end{enumerate}
\end{enumerate}
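As promised above, here is a small sketch of the oblique-asymptote procedure, applied for illustration to the rational function already plotted earlier in the paragraph on rational fractions (standard Maple commands; output formatting may differ from the Maple 4.00b used elsewhere in this section):\\
\texttt{> f := x -> (x\string^3 - 2*x)/(2*(x\string^2 - 5)):\\
> a := limit(f(x)/x, x = infinity); \# 1/2\\
> b := limit(f(x) - a*x, x = infinity); \# 0, so y = x/2 is the oblique asymptote, as seen on the earlier plot}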
\pagebreak
\subsection{Logarithms}
We hesitated to put the definition of logarithms in the section Calculus. After a moment of reflection, we decided it was better to put it in this section because to understand it well, we must be aware of the concept of limits, of definition domain and of the power function. We hope that our choice will suit you best.
Given the (bijective) power function of any base $a \in \mathbb{R}_{+}^{*}\setminus\left\lbrace 1\right\rbrace$ (we exclude $1$, otherwise it is not bijective), denoted, for recall, by:
which associates to each real number $x$ exactly one positive number $a^x$ (the image set of the function is in $\mathbb{R}$), such that the rules for calculating with powers are applicable (\SeeChapter{see section Calculus}).
We know that for such a function, if $a>1$ then $f(x)$ is increasing and positive (monotone) on $\mathbb{R}$, and if $0<a<1$ then $f(x)$ is positive and decreasing (monotone) on $\mathbb{R}$.
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} If $a>1$, when $x$ decreases to negative values, the graph of $f (x)$ approaches the $x$-axis. Thus, the $x$-axis is a horizontal asymptote. When $x$ increases in positive values, the graph rises quickly. This type of change is characteristic of the "\NewTerm{law of exponential growth}\index{law of exponential growth}" and $f(x)$ is sometimes named "\NewTerm{growing function}\index{growing function}"... If $0<a<1$, when $x$ increases, the graph tends asymptotically to the $x$-axis. This type of variation is known as an "\NewTerm{exponential decay}\index{exponential decay}".\\
\textbf{R2.} By studying $a^x$, we exclude the cases where $a\leq 0$ and $a=1$. Notice that if $a<0$, then $a^x$ is not a real number for many values of $x$ (we recall that the whole image set is forced to be in $\mathbb{R}$ in our previous definition). If $a=0$, then $a^x$ is not defined for $x\leq 0$ (in particular $0^0$ is not defined). Finally, if $a=1$, then $a^x=1$ for all $x$ and the graph of $f(x)$ is a horizontal line.
\end{tcolorbox}
As the power function $f(x)$ is bijective, there exists an inverse function $f^{-1}(x)$, which is named the "\NewTerm{logarithm function}\index{logarithm function}" of base $a$ and is denoted by:
and therefore:
if and only if $y=a^x$.
More generally it is defined by:
Considering $\log_a(x)$ as an exponent, we have the following properties:
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} The word "logarithm" means "number of logos", "logos" meaning "reason" or "ratio".\\
\textbf{R2.} The logarithm and power functions are defined by their bases (the number $a$). When using $10$ as a base, the powers of $10$ ($10, 100, 1000, ...$) have successive integers for logarithms, and we then speak of the "\NewTerm{common system}\index{common system}".\\
\textbf{R3.} The integer part of the logarithm is named the "\NewTerm{characteristic}"\index{characteristic of a logarithm}.
\end{tcolorbox}
There are two bases of logarithms that we find almost exclusively in mathematics and physics: the logarithm of base $10$ and the logarithm of base $e$ (the latter often named "\NewTerm{natural logarithm}\index{natural logarithm}"), to which we can add the logarithm of base $2$ used in information theory.
First the one in base $10$ (the most used in graphical representations):
abusively noted:
and the base (Eulerian) $e$:
historically noted:
the "$n$" meaning "Napierian".
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Historically, it is to John Napier (1550-1617), whose name was Latinized as "Neper", that we owe the study of logarithms and the name of "Napierian (natural) logarithms", whose aim was to greatly reduce the time needed for manual calculations.
\end{tcolorbox}
For the logarithm function in base $10$, to calculate:
we ask the following question: to what power $n\in \mathbb{R}$ should we raise $10$ to get $x$?
Formally, this consists in solving the equation:
or written in another way:
with $x$ being known and therefore in base $10$:
The logarithm in base $10$ is used a lot in scientific graphical representations when we look at amplitude variations. For example, with Maple 4.00b, we plot below two sine functions having, relative to their respective means, the same amplitude variation of $50\%$; the linear-scale plot does not necessarily highlight this fact trivially:
\texttt{>plot({10+0.5*10*sin(x),100+100*0.5*sin(x)},x=1..10);}
\begin{figure}[H]
\centering
\includegraphics{img/analysis/two_sinus_for_comparison_without_logarithm_scale.jpg}
\caption[]{Plot with Maple 4.00b with two sine functions having same amplitude change compared to their average}
\end{figure}
While in logarithmic scale, this gives:
\texttt{>with(plots):\\
>logplot({10+0.5*10*sin(x),100+100*0.5*sin(x)},x=1..10);}
\begin{figure}[H]
\centering
\includegraphics{img/analysis/two_sinus_for_comparison_with_logarithm_scale.jpg}
\caption[]{Same plot with Maple 4.00b but with the $y$-axis in logarithm (base $10$) scale}
\end{figure}
For the logarithmic function in Eulerian base $e$ it is necessary to calculate:
to ask ourselves the following question: to what power $n\in \mathbb{R}$ must we raise the number $e$ to get $x$?
Formally, this consists in solving the equation:
with $x$ being known and therefore:
Technically, we say that the exponential function (see below for details):
is the inverse bijection of the $\ln (x)$ function.
\begin{figure}[H]
\centering
\includegraphics{img/analysis/bijection_ln_x_exp_x.jpg}
\caption{Graphical representation of the correspondence between the natural logarithm and the exponential}
\end{figure}
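A short numerical sketch of this inverse relationship (the particular values are chosen only for illustration, with standard Maple commands):\\
\texttt{> evalf(solve(10\string^n = 1000, n)); \# 3.: the base-10 logarithm of 1000\\
> evalf(log[10](1000)); \# 3. as well\\
> solve(exp(n) = 5, n); \# ln(5)\\
> exp(ln(5)); \# 5: exp and ln are mutually inverse bijections}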
But what is that "Eulerian" number also named "\NewTerm{Euler number}\index{Euler number}"? Why do we find so often in physics and mathematics? Let us first determine the origin of its value:
with $\alpha \in \mathbb{N}$ and when $\alpha +\infty$.
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The second term of equality is typically the type of expression that we find in compound interest in finance (\SeeChapter{see section Economy}) or in any other type of identical increase factor. And what interests us in this case is when this type of increase tends to infinity.
\end{tcolorbox}
The interest of posing the problem in this way is that if we let $\alpha \rightarrow +\infty$, the function written above tends to $e$, and this function has the special property of being computable more or less easily, for historical reasons, using Newton's binomial theorem.
So according to the development of the Newton binomial (\SeeChapter{see section Calculus}) we can write:
This development is similar to the Taylor expansion (\SeeChapter{see section Sequences and Series}) of some given functions for particular values of the development (hence the reason why we find this Eulerian number in many places that we will see later).
By performing some algebraic transformations that should now be obvious to the reader, we find:
We see in this last equality that the function $\left(1+\frac{1}{\alpha}\right)^\alpha$ is increasing when $\alpha$ increases. Indeed, when we move from $\alpha$ to the value $\alpha+1$ each term of this sum increases:
Let us prove now that the variable $\left(1+\frac{1}{\alpha}\right)^\alpha$ is bounded. By seeing that:
So we get by analogy with the extended expression of Newton binomial determined just previously the following order relation:
On the other hand:
We then can write the inequality:
The underlined terms constitute a geometric sequence of common ratio $q=1/2$ (see section Sequences and Series) whose first term is $1$. It follows, using the result obtained in the section Sequences and Series, that we can write:
Therefore, we have:
We have therefore proved that the function $\left(1+\frac{1}{\alpha}\right)^\alpha$ is bounded.
The limit:
tends to this bounded value, which is the number $e$, whose value is:
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
As we have proved it in the section Numbers, this number is irrational.
\end{tcolorbox}
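As a quick hedged check of this construction (any recent computer algebra system returns the same digits; the commands below are standard Maple):\\
\texttt{> evalf(limit((1 + 1/alpha)\string^alpha, alpha = infinity)); \# 2.718281828...\\
> evalf(sum(1/n!, n = 0 .. infinity)); \# same value, via the series coming from the binomial development}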
We can then define the "\NewTerm{natural exponential function}\index{natural exponential function}" (reciprocal of the natural logarithm function) by:
also sometimes denoted by:
The number $e$ and the function that determines it are very useful. We find them in all areas of mathematics and physics and thus in almost all the chapters of this book.
As we have proved in the section of Differential and Integral Calculus, the function $e^x$ has the remarkable property that its derivative is equal to itself:
and this is used a lot for the resolution of differential equations in physics and finance.
Logarithms have several properties. Here are the most important ones from our point of view (we are referring to a given base $X$), which are very useful in physics, electronics, chemistry and so on...
Let us begin. First:
If we put $X^m=a$ and $X^n=b$ we get:
If we have the special case when $a=b$ then:
Now let us try to express:
in another way. For this we put first:
which leads us to the development:
Now let us try to express:
with $n\in \mathbb{N}^{*}$ in another way. For this we put first:
which leads us to the development:
There is also another relation used a lot in physics with respect to the change of logarithm base. The first relation is trivial and follows from the algebraic properties of logarithms:
the second one:
is a bit less trivial and requires perhaps a proof (we used it for our study of continued fractions in the section Number Theory).
\begin{dem}
We first use the equivalent equations (of the first relation above):
and we proceed as follows:
What finally brings us to:
\begin{flushright}
$\square$ Q.E.D.
\end{flushright}
\end{dem}
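A small numerical verification of the change-of-base relation (base $2$ and argument $10$ are arbitrary illustrative choices):\\
\texttt{> evalf(log[2](10)); \# 3.321928...\\
> evalf(ln(10)/ln(2)); \# same value, changing to the natural base\\
> evalf(log[10](10)/log[10](2)); \# same value again, changing to base 10}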
\pagebreak
\subsection{Transformations}
\subsubsection{Fourier Transform}
\subsubsection{Laplace Transform}
The Laplace transform is used extensively in order to solve differential equations that arise in many modeling situations of real life.
A power series about an origin is any series that can be written in the following form:
where $a_n$ are numbers and $n$ is a nonnegative integer. One can think of $a_n = a(n)$ as a function of $n$ for each non-negative integer $n = 0, 1, 2, \ldots$. In order to give birth to the Laplace transformation technique, we make some associations. The discrete variable $n$ is converted into a real variable $t$. The coefficient term $a_n$ is written as $f(t)$. The term $x^n$ becomes $x^t$, which can equivalently be written as $e^{t\ln x}$. Finally, the summation notation can be replaced by its continuous analogue, that is, integration. By doing so, we have the following:
For convergence, it is obviously important to have the following condition for the above integral (yes, think for this about the original sum!):
Therefore:
Since $0<x<1$, it implies that:
Thus $\ln(x)$ has to be negative for the integral to converge. In this regard, we suppose $\ln(x)=-s$ where $s>0$. Thus, the final integral takes the form:
In this way, we can say that the Laplace transform simply stretches a discrete object (an infinite series) into its continuous (integral) analogue.
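As a minimal sketch of this transform in practice (using Maple's \texttt{inttrans} package; the test functions are arbitrary illustrative choices):\\
\texttt{> with(inttrans):\\
> laplace(t\string^2, t, s); \# 2/s\string^3\\
> laplace(exp(-a*t), t, s); \# 1/(s + a)\\
> invlaplace(1/(s\string^2 + 1), s, t); \# sin(t), recovering the original function}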
\subsubsection{Hilbert Transform}
\subsection{Functional dot product (inner product)}
The "\NewTerm{functional dot product}\index{function dot product}\index{orthogonality of functions}\index{functions orthogonality}" (very strong analogy with the dot product in seen in the section Vector Calculus) may seem unnecessary when examined for the first time outside of an application context or only as generalization purpose, but in fact it has many practical applications. We will make such direct use in the section of Wave Quantum Physics and Quantum Chemistry, or even more important in the context of trigonometric polynomials through the Fourier series and transforms (\SeeChapter{see section Sequences and Series}) that we find everywhere in contemporary physics and computer science.
However, if the reader has not traveled the section of Vector Calculus and the part treating the vector dot product, we would highly recommend reading it otherwise what follows may be a little incomprehensible.
We put ourselves in the space $\mathcal{C}([a,b],\mathbb{R})$ of continuous functions in the interval $[a, b]$ into $\mathbb{R}$ with the inner product defined by (we find here again the specific notation of the dot product in its functional version as we had mentioned during our definition of the vector dot product in the section of Vector Calculus):
A family of orthogonal polynomials, as we can make the analogy with the dot product in the section Vector Calculus, is therefore a polynomial family $(p_0,...,p_n,...)$ such as:
if $j \ne k$. We recall that an orthogonal family is a free family. We also saw in the section of Vector Calculus that in the space $\mathcal{C}([a,b],\mathbb{C})$ the only possible coherent choice was:
We name the two previous relations "\NewTerm{$L^2$-dot product}\index{dot product!$L^2$-dot product}".
Therefore we can build the "\NewTerm{$L^2$-norm}\index{norm!$L^2$-norm}":
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We will see further below that the definition above is not the most general one, as physicists and engineers in particular say that functions are orthogonal in a more general setting (involving a weight function).
\end{tcolorbox}
The development that will follow will remind us of the Gram-Schmidt procedure (\SeeChapter{see section Vector Calculus}) to build an orthogonal family.
\begin{theorem}
Given $(p_0,...,p_n,...)$ a family of linearly independent polynomials defined on $[a,b]$ and $V$ the vector space generated by this family. The family $(y_0,...,y_n,...)$ defined by recurrence in the following way:
and $y_0=p_0$ is orthogonal and generates $V$.
\end{theorem}
\begin{dem}
Let us show by induction on $n$ that $(y_0,...,y_n,...)$ is an orthogonal family which generates the same space as $(p_0,...,p_n,...)$. The assertion holds for $n=0$. Let us suppose that the assertion holds for some $n\geq 0$; then for $0\leq k\leq n$ we have:
$(y_0,...,y_n,...)$ is therefore orthogonal. Finally, the equality:
\begin{flushright}
$\square$ Q.E.D.
\end{flushright}
\end{dem}
\addcontentsline{toc}{paragraph}{Orthogonality of trigonometric functions}
Let us see a first example, very important in signal processing and statistics, related to frequency analysis:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Let us consider the very important example in modern physics that is the set of continuous $2\pi$-periodic function denoted $P_{2\pi}$ that forms a vector space (\SeeChapter{see section Vector Calculus}).\\
We define the scalar product of two functions of this set by:
The aim of this definition is to build an abstract functional basis $P_{2\pi}$ on which we can break down any $2\pi$-periodic function!!!\\
The simplest idea is then to use the trigonometric functions sine and cosine:
The relations below show that the functions chosen above are orthogonal and therefore form a free family; moreover, it is a generating family of the vector space $P_{2\pi}$ because, as we have seen in our study of Fourier series (\SeeChapter{see section Sequences and Series}), we have the following values:
where $\delta_{km}$ is the Kronecker symbol (\SeeChapter{see section Tensor Calculus}).\\
Therefore it is also an orthogonal basis but not an orthonormal one. If we want to normalize the vectors of the basis, we obviously just need to take:
\end{tcolorbox}
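As a quick hedged check of these orthogonality values (arbitrary integer modes chosen for illustration, with standard Maple commands):\\
\texttt{> int(sin(2*x)*sin(3*x), x = -Pi .. Pi); \# 0: distinct sine modes are orthogonal\\
> int(sin(2*x)*cos(3*x), x = -Pi .. Pi); \# 0: sine and cosine modes are orthogonal\\
> int(sin(2*x)\string^2, x = -Pi .. Pi); \# Pi: the squared norm before normalization}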
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
If the reader remembers that for a random variable $X$ defined on $\mathbb{R}$, the mean was calculated as (\SeeChapter{see section Statistics}):
Then we can assimilate:
where:
to the expected mean of the function $g(x)$! An analogy sometimes very useful in practice!
\end{tcolorbox}
\addcontentsline{toc}{paragraph}{Orthogonality of complex exponential functions}
Let us see now another example that is an extension of the previous one and that has also a great importance in signal processing but also in quantum physics and quantum chemistry:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Let us consider a basis of complex functions of the form $r e^{\mathrm{i}n\varphi}$ with $n\in\mathbb{Z}$. We therefore can write:
We get:
It is obvious that if we take for basis functions of the type:
then we have an orthonormal basis (and not just an orthogonal one).
\end{tcolorbox}
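A short sanity check with two concrete modes (the symbolic case requires the assumption that the mode numbers are integers, hence the explicit values chosen here):\\
\texttt{> int(exp(2*I*phi)*exp(-3*I*phi), phi = 0 .. 2*Pi); \# 0: distinct modes are orthogonal\\
> int(exp(2*I*phi)*exp(-2*I*phi), phi = 0 .. 2*Pi); \# 2*Pi: the squared norm to be divided out for an orthonormal basis}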
\addcontentsline{toc}{paragraph}{Orthogonality of Bessel functions}
Another example that will be useful for us in the section of Wave Mechanics for our study of the ideal circular membrane of a drum.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
We have proved in the section Sequences and Series that the Bessel function\index{Bessel function} $J_p(x)$ satisfies the following differential equation (Bessel's equation):
which can be written as:
The variable $p$ need not be an integer as we will see it in the section of Mechanical Engineering with the study case of the self-buckling column.\\
It turns out to be useful to define a new variable $t$ by $x = a t$, where $a$ is a constant which we will take to be a zero of $J_p$, i.e. $J_p(a) = 0$. Let us define:
which implies:
and substituting into (\ref{eq}) gives:
since $x\,\mathrm{d}/\mathrm{d}x$ is equivalent to $t\,\mathrm{d}/\mathrm{d}t$.
We can also write down the equation obtained by picking another zero, $b$. Defining:
which implies:
we have then:
To derive the orthogonality relation, we multiply (\ref{eq1}) by $v$, and (\ref{eq2}) by $u$. Subtracting and dividing by $t$ gives:
\end{tcolorbox}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
The first two terms in (\ref{combine}) can be combined as:
since the extra terms present in (\ref{totalderiv}), but not in (\ref{combine}), when the derivatives are expanded out are equal and opposite and so cancel. Hence we have:
We next integrate this over the range of $t$ from $0$ to $1$ ($0$ since the Bessel function is not defined for $t<0$ and to $1$ since it's the place where there is a zero by construction for recall!), which gives:
The integrated term vanishes at the lower limit because $t=0$, and it also vanishes at the upper limit because $u(1) = v(1) = 0$, see (\ref{u10}) and (\ref{v10}). Hence, if $a \ne b$, (\ref{int01}) gives:
which, using (\ref{ut}) and (\ref{v10}), can be written
This is the desired orthogonality equation. Remember we require that $a$ and $ b$ are distinct zeroes of $J_p$, so both Bessel functions in (\ref{orthog}) vanish at the upper limit.
\end{tcolorbox}
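This orthogonality can also be checked numerically (a small sketch assuming Maple's \texttt{BesselJ} and \texttt{BesselJZeros} functions; the order $p=0$ and the first two zeros are arbitrary illustrative choices):\\
\texttt{> a := BesselJZeros(0, 1): b := BesselJZeros(0, 2): \# first two zeros of J[0]\\
> evalf(Int(t*BesselJ(0, a*t)*BesselJ(0, b*t), t = 0 .. 1)); \# approximately 0, up to numerical noise\\
> evalf(Int(t*BesselJ(0, a*t)\string^2, t = 0 .. 1)); \# positive: the squared norm, equal to BesselJ(1, a)\string^2/2}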
\pagebreak
\addcontentsline{toc}{paragraph}{Orthogonality of Hermite polynomial}
A last example that will be useful for us in the section of Wave Quantum Physics during our study of the harmonic oscillator:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
We will introduce in the section of Wave Quantum Physics the following "physicists' Hermite polynomials\index{Hermite polynomial}":
Therefore (see the plot in the section of Wave Quantum Physics):
where we notice almost immediately that (useful further below):
And we need to prove that they are orthogonal (or even better: orthonormal!).\\
For this purpose we introduce the weight function $w(x)=e^{-x^2}$ therefore:
So we get (we use the Gauss integral as proved in the section Statistics):
\end{tcolorbox}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
Finally, using Kronecker symbol (\SeeChapter{see section Tensor Calculus}):
\end{tcolorbox}
From what we have seen above we deduce that:
is in fact generalized by:
where $w(x)$ is the "\NewTerm{weight function}\index{weight function}".
So the engineer, physicist or mathematician must always be careful when he sees in a textbook a sentence of the type: \textit{these functions are orthogonal}. Indeed, the author/redactor should instead write: \textit{these functions are orthogonal with respect to a given weight}.
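A short hedged check of this weighted orthogonality, using Maple's \texttt{orthopoly} package for the (physicists') Hermite polynomials; the degrees $2$ and $3$ are arbitrary illustrative choices:\\
\texttt{> with(orthopoly):\\
> int(exp(-x\string^2)*H(2, x)*H(3, x), x = -infinity .. infinity); \# 0: orthogonal for the weight w(x) = exp(-x\string^2)\\
> int(exp(-x\string^2)*H(3, x)\string^2, x = -infinity .. infinity); \# 48*sqrt(Pi), i.e. 2\string^3*3!*sqrt(Pi), the squared weighted norm}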
\begin{flushright}
\begin{tabular}{l c}
\circled{100} & \pbox{20cm}{\score{4}{5} \\ {\tiny 47 votes, 71.49\%}}
\end{tabular}
\end{flushright}
%to make section start on odd page
\newpage
\thispagestyle{empty}
\mbox{}
\section{Complex Analysis}
\lettrine[lines=4]{\color{BrickRed}B}efore starting this section on the study of differential and integral calculus in the generalized case of complex numbers, I should point out that I used many illustrations inspired by the PDF of E.~Hairer (with his permission). This text also contains many sentences and developments taken, homogenized and simplified from the same PDF (at the risk of making some purist readers climb the curtains...) according to the notations and educational objectives of the rest of this book.
The subject of complex analysis is the study of functions $\mathbb{C} \mapsto \mathbb{C}$ and their differentiability (which is different from that in $\mathbb{R}^2$). The "holomorphic functions" (that is to say, differentiable in a subset of $\mathbb{C}$) have, as we will see later, surprising and elegant properties that can be reused in the special case of functions in $\mathbb{R}^2$ (remember that $\mathbb{C}$ is a generalization of $\mathbb{R}^2$) and that have important applications in advanced physics (we will use the results of this section for our study of quantum physics, for some applications of fluid mechanics and also for advanced models in financial options pricing).
Before we start let us first explain the interest of Complex Analysis in a simplified way!
We studied in the chapter Algebra a part of the Differential and Integral Calculus with some useful and important theorems in physics and engineering. However, staying in $\mathbb{R}$ or $\mathbb{R}^2$, the list of theorems somehow runs out and we end up lacking practically relevant tools to simplify the computation of integrals that we can sometimes find in industrial applications. So, when we remember that $\mathbb{R} \subset \mathbb{C}$ (thus the set of complex numbers generalizes the set of real numbers) and that we can also build a correspondence $\mathbb{R}^2 \mapsto \mathbb{C}$ as we shall see, then new theorems appear with very interesting results that can be exploited for the integrals in $\mathbb{R}$ or $\mathbb{R}^2$!! It is for this reason that the engineer needs to know Complex Analysis!
After studying this particular field of mathematics, it is common to say that the shortest path between two truths of the real domain often passes through the complex domain...
\subsection{Linear Applications}
A good introduction to complex analysis and its representation is to look at first (for educational purposes mainly) the special case of complex linear applications. Let us see this!
Let $U \subset \mathbb{C}$ be a set and $V \subset \mathbb{C}$ another set. A function that associates to each $z \in U$ a $w \in V$ such that:
is a "\NewTerm{complex function}\index{complex function}":
What is important is to remember (\SeeChapter{see section Numbers}) that we can identify:
and:
We have thus two functions of two real variables $x, y$:
which are the coordinates of the point $w$.
\textbf{Definition (\#\mydef):} An application is named "\NewTerm{$\mathbb{C}$-linear}\index{$\mathbb{C}$-linear function}" if, like for example a function of the type:
where $c$ is a fixed complex number and $z$ an arbitrary complex number, satisfying:
That is to say that $f(z)$ must be additive and homogeneous; more briefly, when these two properties are satisfied we say that $f(z)$ is a "\NewTerm{linear map}\index{linear map}".
We have seen and proved in the section Numbers, during our study of complex numbers, that the multiplication of two complex numbers can be seen as an orthogonal rotation followed by a scaling, and that this same multiplication can be represented in matrix form! Now, the transcription into matrix form automatically involves, as we saw in the section Linear Algebra, the property of linearity!
So the reader can easily check that a matrix of rotation/scaling is an example of a $\mathbb{C}$-linear application (on request we can detail) that we will now write:
Which can be typically represented as follows (we can clearly see a rotation and a scaling which conserve the angles and proportions):
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/analysis/clinear_application.eps}
\caption{$\mathbb{C}$-linear application example}
\end{figure}
It is the fact that the proportions and the angles are kept that makes a complex function $\mathbb{C}$-linear. Otherwise, we would say that the function is $\mathbb{R}$-linear.
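As a small numerical sketch of this distinction (the particular complex constants are arbitrary choices): multiplication by a fixed complex number respects both additivity and homogeneity with respect to complex scalars, while complex conjugation, although $\mathbb{R}$-linear, fails the latter:\\
\texttt{> f := z -> (2 + 3*I)*z: g := z -> conjugate(z): z1 := 1 + 2*I: z2 := 3 - I:\\
> f(z1 + z2) - (f(z1) + f(z2)); \# 0: additive\\
> f(I*z1) - I*f(z1); \# 0: homogeneous for complex scalars, hence C-linear\\
> g(I*z1) - I*g(z1); \# -4 - 2*I, nonzero: conjugation is R-linear but not C-linear}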
So a matrix equation is $\mathbb{C}$-linear if and only if it is of the form:
Let us see some examples of quite remarkable functions that are not $\mathbb{C}$-linear.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
E1. Consider the function:
In real coordinates this gives:
So let us look at what this function does with the points of the complex plane which lie on the vertical lines of this same plane (which leads us to write $x=a$). Then we have:
and eliminating y, we find the equation of a parabola or rather a family of parabolas (for several values of $b$) which are open to the left of the pictured complex plane.\\
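To make the computation explicit with $f(z)=z^2$ (the function used in the Maple script further below), we have:
\[
w=z^2=(x+\mathrm{i}y)^2=\underbrace{x^2-y^2}_{u}+\mathrm{i}\,\underbrace{2xy}_{v},
\]
so on the vertical line $x=a$ we get $u=a^2-y^2$ and $v=2ay$; eliminating $y=v/(2a)$ gives
\[
u=a^2-\frac{v^2}{4a^2},
\]
one parabola for each value of the constant $a$.\\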
Here is a picture representation of the complex plane on which we have drawn a cat head:
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.75]{img/analysis/c_linear_image_cat.eps}
\end{center}
\caption{Complex representation of the image of the example function}
\end{figure}
\end{tcolorbox}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
and if we look at the corresponding pre-image in the complex plane, then two cat heads appear:
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.75]{img/analysis/c_linear_pre_image_cat.eps}
\end{center}
\caption{Pre-image representation of the example function}
\end{figure}
These two cat heads appear because this function has two possible pre-images for each image point (so it is not injective - see section Set Theory).\\
Here is a nice Maple 17.00 script by Carl Ebehart to check this (a shame that this cannot be done in an easier way in Maple):\\
\texttt{> complextools[gridimage] := proc(p)\\
local llhc, width, height, xres, yres, clrs, V, H, i,j,k,l,pz,x,y,z,f,g,xtcs,ytcs,opts,margs;\\
llhc := [-1, -1];\\
width := 2;height := 2;\\
xres := .25;yres := .25;\\
xtcs := 1; ytcs := 1;\\
clrs := [red, black];\\
opts := NULL;\\
opts := op(select(type,[args],`=`));\\
margs:= remove(type, [args] ,`=`) ;\\
if nops(margs) >1 and margs[2] <> `` then llhc := margs[2] fi:\\
if nops(margs) >2 and margs[3] <> `` then width := margs[3] fi:\\
if nops(margs) >3 and margs[4] <> `` then height := margs[4] fi:\\
if nops(margs) >4 and margs[5] <> `` then xres := margs[5] fi:\\
if nops(margs) >5 and margs[6] <> `` then yres := margs[6] fi:\\
if nops(margs) >6 and margs[7] <> `` then xtcs := margs[7] fi:\\
if nops(margs) >7 and margs[8] <> `` then ytcs := margs[8] fi:}
\end{tcolorbox}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\texttt{if nops(margs) >8 and margs[9] <> `` then clrs := margs[9] fi:\\
z:= x + I*y;\\
pz := evalc(p(z));\\
f := unapply(evalc( Re(pz)),x,y); g := unapply(evalc(Im(pz)),x,y);\\
V:= plot( [
seq([seq(op([[f(llhc[1] + i*xres ,llhc[2]+(j-1)*yres/ytcs),g(llhc[1] + i*xres ,llhc[2]+(j-1)*yres/ytcs)], [f(llhc[1] + i*xres , llhc[2] +j*yres/ytcs),g(llhc[1] + i*xres , llhc[2] +j*yres/ytcs)]]),
j=1..ytcs*height/yres)], i = 0 .. width/xres)
],color=clrs[1]);\\
H := plot( [
seq([seq(op([[f(llhc[1]+(j-1)*xres/xtcs,llhc[2] + i*yres),
g(llhc[1]+(j-1)*xres/xtcs,llhc[2] + i*yres)],
[f(llhc[1] +j*xres/xtcs, llhc[2] + i*yres),
g(llhc[1] +j*xres/xtcs, llhc[2] + i*yres)]]),
j=1..xtcs*width/xres)], i = 0 .. height/yres)
],color=clrs[2]);\\
plots[display]([V,H],scaling=constrained,opts);
end:\\
with(complextools);}\\
\texttt{>plots[display]([seq(plots[display]([gridimage(z->z), gridimage(z->z\string^2)]),i=10)],insequence=true);}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.75]{img/analysis/c_linear_maple_transform.eps}
\end{center}
\caption{Practical Maple 17.00 example of simple $C$-linear application}
\end{figure}
\end{tcolorbox}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
E2. Another interesting feature is the "\NewTerm{Cayley transformation}\index{Cayley transformation}" used in some areas of physics and defined as:
having as domain definition: $\mathbb{C}/\left\lbrace 1\right\rbrace$.\\
We note that this is an involutive function since:
and as we proved in the section on Proof Theory that any involutive function is both injective and surjective, the Cayley transform is therefore a bijective function.\\
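As an illustration, if we take the usual form of the Cayley transformation compatible with the stated domain $\mathbb{C}\setminus\left\lbrace 1\right\rbrace$, namely $f(z)=\dfrac{z+1}{z-1}$ (this explicit form is an assumption here, since several equivalent conventions exist in the literature), the involution can be checked directly:
\[
f(f(z))=\dfrac{\dfrac{z+1}{z-1}+1}{\dfrac{z+1}{z-1}-1}=\dfrac{\dfrac{2z}{z-1}}{\dfrac{2}{z-1}}=z.
\]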
This function transforms the imaginary axis $\mi y$ into the unit circle (and vice versa, as it is involutive). Let us see that:
where:
satisfies:
That is to say:
This is the equation of a circle, as proven in the section on Analytical Geometry.
\end{tcolorbox}
\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
E3. As another example of function, consider the "\NewTerm{Joukovski transformation}\index{Joukovski transformation}" defined by:
If the domain of definition is described in polar coordinates, look at how a circle or an ellipse is transformed by this function:
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.75]{img/analysis/joukovski_pre_image.eps}
\end{center}
\caption{Transformation into polar coordinates of an ellipse with the example function}
\end{figure}
Then the image plane will be:
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{img/analysis/joukovski_image.eps}
\end{center}
\caption{Result of the Joukovski transformation in polar coordinates}
\end{figure}
It thus transforms the circles centered at $0$ and the rays passing through $0$ into a family of confocal ellipses and hyperbolas, respectively. To prove this fact, we use the complex polar coordinates (Euler formula) seen in the section on Numbers (\SeeChapter{see chapter Arithmetics}):
\end{tcolorbox}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
and:
Then we have:
therefore:
and we immediately see that (\SeeChapter{see section Trigonometry}):
has the form of the equation of an ellipse (\SeeChapter{see section Analytical Geometry}) and we also have:
which is the equation of a hyperbola (\SeeChapter{see section Analytical Geometry}).\\
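As a compact recap of this derivation, with $f(z)=\frac{1}{2}\left(z+\frac{1}{z}\right)$ (the form also used in the Maple script below) and $z=r\mathrm{e}^{\mathrm{i}\theta}$, we get:
\[
w=u+\mathrm{i}v=\frac{1}{2}\left(r\mathrm{e}^{\mathrm{i}\theta}+\frac{1}{r}\mathrm{e}^{-\mathrm{i}\theta}\right)
=\frac{1}{2}\left(r+\frac{1}{r}\right)\cos\theta
+\mathrm{i}\,\frac{1}{2}\left(r-\frac{1}{r}\right)\sin\theta,
\]
so a circle $r=c^{te}$ is sent onto the ellipse
\[
\frac{u^2}{\frac{1}{4}\left(r+\frac{1}{r}\right)^2}+\frac{v^2}{\frac{1}{4}\left(r-\frac{1}{r}\right)^2}=1
\]
and a ray $\theta=c^{te}$ onto the hyperbola
\[
\frac{u^2}{\cos^2\theta}-\frac{v^2}{\sin^2\theta}=1.
\]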
This function is useful because, if we cleverly place a circle through the point $z=1$ (as in the case of the first figure), the image plane represented in polar coordinates with a dotted line can look like an airplane wing profile. Some time ago this allowed, in aerodynamics (the technique is obsolete today), to transpose the study of the vector field around an airplane wing profile to the study of a circle profile, followed by the Joukovski transformation.
\end{tcolorbox}
\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
Indeed, let us see a part of this still with Maple 4.00:\\
\texttt{> assume(x,real,y,real);}\\
\texttt{> z:=x+I*y;}\\
\texttt{> F:=1/2*(z+1/z);}\\
\texttt{> u:=Re(F);}\\
\texttt{> u:=evalc(u);}\\
\texttt{> v:=Im(F);}
\texttt{> v:=evalc(v);}\\
\texttt{> with(plots):with(plottools):}\\
\texttt{> p1:=disk([0,0],1,color=black):}\\
\texttt{> p2:=implicitplot({seq(v=b/8,b=-10..10)},x=-4..4,y=-2..2,color=black):}\\
\texttt{> display([p2,p1],scaling=constrained);}\\
We thus get:
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{img/analysis/joukovski_application.eps}
\end{center}
\caption{Important application example of the Joukovski function}
\end{figure}
\end{tcolorbox}
Let us see a last example, which shows that an electric dipole with its electric field and potential lines (\SeeChapter{see section Electrostatic}) can be seen as the emergence of the complex function $1/z$:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
E4. Again with Maple 4.00b we write:\\
\texttt{>assume(x,real,y,real);}\\
\texttt{> z:=x+I*y;}\\
\texttt{> F:=1/z;}\\
\texttt{> u:=Re(F);u:=evalc(u);}\\
\texttt{> v:=Im(F);v:=evalc(v);}\\
\texttt{> with(plots):}\\
\texttt{> p1:=implicitplot({seq(u=a,a=-5..5)},x=-1..1,y=-1..1,numpoints=1000):}\\
\texttt{> p2:=implicitplot({seq(v=b,b=-5..5)},x=-1..1,y=-1..1,numpoints=1000,}\\
\texttt{color=green):}\\
\texttt{> display([p1,p2],scaling=constrained);}\\
\end{tcolorbox}
\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{img/analysis/dipole.eps}
\end{center}
\caption{Another important application of a complex application}
\end{figure}
\end{tcolorbox}
\subsection{Holomorphic Functions}
The definition of the derivative with respect to a complex variable is naturally formally identical to the derivative with respect to a real variable.
We then have, if the function $f(z)$ is differentiable in $z_0$:
and we say (abusively in this book) that the function is "\NewTerm{holomorphic}\index{holomorphic function}" (while in $\mathbb{R}$ we say "differentiable") or "\NewTerm{analytical}\index{analytical function}" in its domain, or in a subset of it, if it is differentiable at any point.
In other words, a holomorphic function is a complex-valued function of one or more complex variables that is complex differentiable in a neighborhood of every point in its domain. The existence of a complex derivative in a neighborhood is a very strong condition, for it implies that any holomorphic function is actually infinitely differentiable and equal to its own Taylor series.
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} A complex function is differentiated like a real function; we just have to treat $z$ as we would $x$... on the condition that what we will see in the following is respected!\\
\textbf{R2.} In fact, if the function is holomorphic in a subset of the complex plane, we will see a little further below, in our study of the convergence of power series, that this subset is then always an open subset.
\end{tcolorbox}
Equivalently, we say that the function $f$ is $\mathbb{C}$-differentiable if the following limit exists in $\mathbb{C}$:
Let us now present and prove a central theorem for complex analysis named "\NewTerm{Cauchy-Riemann theorem}\index{Cauchy-Riemann theorem}"!
If the function:
is $\mathbb{C}$-differentiable on $z_0=x_0+\mathrm{i}y_0$, then we have:
which is somewhat the equivalent of the Schwarz theorem (limited to $\mathbb{R}$) proved in the section on Differential and Integral Calculus. The above two relations are named the "Cauchy conditions". So these are the two conditions that a complex function must satisfy to be differentiable at $z_0$. Thus, it is possible to use these relations to examine the points where the function is not analytic.
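For reference, with $f(z)=u(x,y)+\mathrm{i}v(x,y)$, these Cauchy conditions are the pair of Cauchy-Riemann equations:
\[
\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y},
\qquad
\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}.
\]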
\begin{theorem}
If these conditions are satisfied (which we will prove right below), then we deduce that $u$ and $v$ must both be harmonic functions of $x$ and $y$.
\end{theorem}
\begin{dem}
As:
by choosing:
with $x \in \mathbb{R}$ we get:
and as $x$ approaches a small value $\mathrm{d}x$, we have (\SeeChapter{see section Differential and Integral Calculus}):
by choosing:
with $y \in \mathbb{R}$, we get:
and when $y$ tends to a small value we have (\SeeChapter{see section Differential and Integral Calculus}):
So now we have:
But remember we proved in the section of Integral and Differential Calculus the following theorem:
Therefore:
Therefore, using directly Schwarz's theorem:
Which can be written:
A trivial solution is obviously to have:
Therefore we have the right to write:
By identifying real and imaginary parts, we finish the proof!
\begin{flushright}
$\square$ Q.E.D.
\end{flushright}
\end{dem}
So for $f$ to be differentiable in the complex domain $\mathbb{C}$ (holomorphic) at a point, it is sufficient that it be differentiable as a function of two real variables ($\mathbb{R}^2$-differentiable on $(x_0,y_0)$) and that its partial first derivatives at this point satisfy the Cauchy-Riemann equations.
But for it to be $\mathbb{C}$-differentiable at every point of the complex plane, the Cauchy-Riemann equations must be valid at all points of the plane (we then sometimes speak of "\NewTerm{entire functions}\index{entire functions}") and not only in a subdomain thereof! Otherwise, the function contains "\NewTerm{singularities}\index{singularities}" and we then speak of a "\NewTerm{meromorphic function}\index{meromorphic function}" (which is therefore a holomorphic function except at singular points).
The Gamma function studied in the section on Differential and Integral Calculus is such a well-known function:
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{img/analysis/gamma_meromorphic.jpg}
\caption{Gamma function is meromorphic in the whole complex plane (source: Wikipedia)}
\end{figure}
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
Geometrically, we will prove later that a holomorphic function has a possible interpretation as a conformal transformation (conservation of angles).
\end{tcolorbox}
Notice therefore that if $f (z)$ is $\mathbb{C}$-differentiable it can be developed as Taylor series (\SeeChapter{see section Sequences and Series}):
Note an important thing too. If we rewrite:
as following:
Then we say that $f$ is "\NewTerm{irrotational}\index{irrotational}" (\SeeChapter{see section Vector Calculus}) since the first relation can be seen as:
which is an important analogy! Finally, the second equation:
Let us also say, by analogy (but it stops at a simple analogy!), that the function $f$ is non-divergent (\SeeChapter{see section Vector Calculus}), which is a good mnemonic way to remember this equation.
Let's also show something else in evidence. If we take the two Cauchy-Riemann equations:
and differentiate them once again, we get:
and if we then sum these two relations, we get:
It is the same with v. Then we have:
And we know this form of equation very well (the Maxwell-Poisson equation in the section on Electrodynamics and the Newton-Poisson equation in the section on Astronomy...). This is a potential equation, named the "\NewTerm{Laplace equation}\index{Laplace equation}" (nothing to do with the one of the same name seen in our study of hydrostatics!), and given by the scalar Laplacian (\SeeChapter{see section Vector Calculus}):
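Explicitly, the two relations obtained are:
\[
\Delta u=\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=0,
\qquad
\Delta v=\frac{\partial^2 v}{\partial x^2}+\frac{\partial^2 v}{\partial y^2}=0.
\]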
It is then traditional to say that $u$ is harmonic, and of course we get the same result with $v$! Well, obviously... we knew it, since we have already studied in the section on Numbers that the real and imaginary parts of a complex number can be put in trigonometric form.
Thanks to this discovery, Riemann opened the application of holomorphic functions to many problems of physics, since these equations are satisfied by the gravitational potential (Newton-Poisson equation in the section on Astronomy), by electric and magnetic fields (Maxwell-Poisson equation in the section on Electrodynamics), by heat balance (no examples yet in this book) and by irrotational flows of certain fluids (no examples in this book yet either).
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
The potential of a dipole can be described by the following holomorphic function:
The figure below:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/holomorphic_dipole_plot.jpg}
\caption{Plane representation of a well known holomorphic function...}
\end{figure}
shows level-curves (iso-curves) of the given harmonic functions $u(x, y)$ and $v(x, y)$ as the real and imaginary parts of the function $f(z)$ of this example.
\end{tcolorbox}
\pagebreak
\subsubsection{Orthogonality of real and imaginary iso-curves}
We will now prove a nice property of the functions that satisfy the Cauchy conditions (i.e. that are analytic functions!). Indeed, remember that we have already seen above the function:
which gave the following diagram:
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.75]{img/analysis/c_linear_image_cat.eps}
\end{center}
\caption{Reminder of plane representation of a complex function seen earlier}
\end{figure}
\begin{theorem}
Well, the functions satisfying the Cauchy conditions have the following simple geometrical property: the lines on which the real part of the function is constant, $\mathcal{R}(f(z))=c^{te}$, and the lines on which the imaginary part is constant, $\mathcal{I}(f(z))=c^{te}$, are orthogonal to each other (think of the trigonometric form of complex numbers, it helps to visualize this better!).
In other words, the analytic complex functions are transformations of an area of the plane into a new plane where the angles are preserved. We then say that the function is a "\NewTerm{conformal transformation}\index{conformal transformation}".
\end{theorem}
\begin{dem}
For the proof, remember that we proved in the section on Vector Calculus that the gradient of a function $f$ of $\mathbb{R}^2$ is given by:
and, as part of our study of isolines in the section on Differential Geometry, that the tangent vector to the isolines of the function $f$ will always be parallel to the vector of the plane:
and that the latter two vectors are perpendicular, such that:
Now assimilate the tangent (parallel) vector $\vec{t}_u$ to the real isolines:
with:
and the normal vector to the imaginary isolines:
with the gradient $v$ of components:
Using the Cauchy conditions proved above, we have for this last relation:
By comparing:
we therefore see that $\vec{t}_u$ and $\vec{\nabla}(v)$ are parallel (collinear). And since $\vec{t}_u$ is collinear with the real isolines and $\vec{\nabla}(v)$ is perpendicular to the imaginary isolines, the proof is finished.
\begin{flushright}
$\square$ Q.E.D.
\end{flushright}
\end{dem}
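To make this more concrete, here is a small symbolic check of this orthogonality with Python and SymPy (an illustrative sketch, independent of the Maple material of this book), using the function $f(z)=z^2$ already plotted above:
\begin{verbatim}
# Symbolic check: for f(z) = z**2 we have u = x**2 - y**2 and v = 2*x*y,
# and the gradients of u and v must be orthogonal at every point (x, y).
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x**2 - y**2      # real part of z**2
v = 2*x*y            # imaginary part of z**2

grad_u = sp.Matrix([sp.diff(u, x), sp.diff(u, y)])
grad_v = sp.Matrix([sp.diff(v, x), sp.diff(v, y)])

# The dot product simplifies to 0: real and imaginary isolines are orthogonal.
print(sp.simplify(grad_u.dot(grad_v)))
\end{verbatim}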
The reader may take as an example the function:
mathematically and schematically detailed earlier above! But to change a little bit, consider an example that will accompany us throughout the rest of this section and that is the following holomorphic function:
That gives us with Maple 4.00b:
\texttt{
>assume(x,real,y,real);\\
> z:=x+I*y;\\
> F:=1/(1+z\string^2);\\
> u:=Re(F);\\
> u:=evalc(u);\\
> v:=Im(F);\\
> v:=evalc(v);\\
> with(plots):\\
> p1:=implicitplot({seq(u=a,a=-5..5)},x=-5..5,y=-5..5,numpoints=1000):\\
> p2:=implicitplot({seq(v=b,b=-5..5)},x=-5..5,y=-5..5,numpoints=1000,color=green):\\
> display([p1,p2]);
}
which gives:
\begin{figure}[H]
\begin{center}
\includegraphics{img/analysis/holomorphic_isoclines.jpg}
\end{center}
\caption{Representation of an important holomorphic function with its isolines}
\end{figure}
\subsection{Complex Logarithm}
We need, for all the functions built on $\mathbb{R}$, to find their equivalent in $\mathbb{C}$, while knowing that if we restrict the case of $\mathbb{C}$ back to $\mathbb{R}$ we must land back on our feet!
To do this, let us start with the most classical and academic function, which is the logarithm, and also the only function for which we will need the complex version in other sections of this book. As always, we will focus only on the properties that we will need later for practical applications and nothing more!
In the same way that we built the logarithm as being by definition by the inverse function of the natural exponential $e^x$ in the section of Functional Analysis, we first start from:
where $z$ is a complex number and we will define the complex logarithm that must be reduced to the natural logarithm if $z$ has no imaginary part!
So by definition the complex logarithm will be:
and in this entire book, the complex logarithm will be distinguished from the real logarithm by a capital L as the first letter!
Let us write $z$ and $w$ in the Euler form as viewed in the section Numbers:
Then we have:
By correspondence, we find immediately
with $k \in \mathbb{Z}$. Therefore we get:
Therefore:
or more explicitly:
So if $w$ has no imaginary part, we fall back on our feet since $\text{arg} (w)$ becomes zero.
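Explicitly, this relation reads:
\[
\text{Log}(w)=\ln\vert w\vert+\mathrm{i}\left(\arg(w)+2\pi k\right),\qquad k\in\mathbb{Z},
\]
so that, for example, $\text{Log}(-1)=\mathrm{i}\pi+2\pi\mathrm{i}k$, while for a real positive $w$ the choice $k=0$, $\arg(w)=0$ gives back the usual natural logarithm $\ln(w)$.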
A big difference is highlighted between the logarithm of complex and real numbers: the logarithms of complex numbers can take several values because of the argument!
For a function to have an inverse, it must map distinct values to distinct values, i.e., be injective. But the complex exponential function is not injective, because $e^{z+2\pi \mathrm{i} }= e^z$ for any $z$, since adding $\mathrm{i}\theta$ to $z$ has the effect of rotating $e^z$ counterclockwise by $\theta$ radians. So all the points of the form $z+2\pi\mathrm{i}k$ are mapped to the same number by the exponential function. So the exponential function does not have an inverse function in the standard sense.
There are two solutions to this problem.
\begin{enumerate}
\item One is to restrict the domain of the exponential function to a region that does not contain any two numbers differing by an integer multiple of $2\pi \mathrm{i}$: this leads naturally to the definition of "\NewTerm{branches}\index{branches}" of $\text{Log}(w)$, which are certain functions that single out one logarithm of each number in their domains.
\item Another way to resolve the indeterminacy is to view the logarithm as a function whose domain is not a region in the complex plane, but a Riemann surface (\SeeChapter{see section Non-Euclidean Geometries}) that covers the punctured complex plane in an infinite-to-1 way.
\end{enumerate}
Branches have the advantage that they can be evaluated at complex numbers. On the other hand, the function on the Riemann surface is elegant in that it packages together all branches of $\text{Log}(w)$ and does not require an arbitrary choice as part of its definition.
We can see this with Maple 4.00b easily:
\texttt{>plot3d([r*cos(f),r*sin(f),f],r=0..1,f=-2*Pi..2*Pi,axes=boxed,style=patch,\\
shading=ZHUE);}
which gives:
\begin{figure}[H]
\begin{center}
\includegraphics{img/analysis/complex_logarithm.jpg}
\end{center}
\caption{Complex Logarithm plot with Maple 4.00b}
\end{figure}
For this reason, one cannot always apply $\text{Log}$ to both sides of an identity $e^{z_1}=e^{z_2}$ to deduce $z_1=z_2$ . Also, the identity $\text{Log} (z_1z_2)= \text{Log}(z_1) + \text{Log}(z_2)$ can fail: the two sides can differ by an integer multiple of $2\pi \mathrm{i}$.
For each nonzero complex number $w = x + \mathrm{i}y$, the principal value $\text{Log}(w)$ is the logarithm whose imaginary part lies in the interval $]-\pi,+\pi]$. The expression $\text{Log}(0)$ is left undefined since there is no complex number $z$ satisfying $e^z = 0$.
Then the principal value of the complex logarithm can be defined by (\SeeChapter{see section Trigonometry}):
We also see, obviously, that the function $\text{Log}(w)$ is discontinuous at each negative real number (we can see it on the figure above), but continuous everywhere else in $\mathbb{C}^*$.
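As a quick numerical illustration (a small sketch with Python's standard library, whose \texttt{cmath.log} follows the same principal-branch convention), we can check a few values:
\begin{verbatim}
# cmath.log returns the principal value Log(w) = ln|w| + i*Arg(w), Arg in ]-pi, +pi]
import cmath

print(cmath.log(-1))               # 3.141592653589793j  : Log(-1) = i*pi
print(cmath.log(1j))               # 1.5707963267948966j : Log(i)  = i*pi/2
print(cmath.exp(cmath.log(-1)))    # (-1+0j) up to rounding: exp undoes Log
\end{verbatim}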
\subsection{Complex Integral Calculus}
We have seen just above how to check if a complex function $f (z)$ was differentiable (it must at least respect the Cauchy-Riemann equations) at any point.
Now let us see the opposite case... integration, which is absolutely fascinating in the complex plane!
Taking again the notations of the section on Differential and Integral Calculus, we obviously have:
either in explicit form:
Well, once this expression is established, let us give a little explanation about how to read it:
\begin{enumerate}
\item We know that $u$ and $v$ are dependent both in the general case of $x$ and $y$.
\item We know that $u$ and $v$ represent (see the examples at the beginning of this section) closed or open curves, and also straight lines when $x$ (or respectively $y$) is fixed and the other associated variable varies!
\end{enumerate}
So each integral term in the above expression is in fact a line integral on a family of open or closed curves (including the specific case of straight lines...)!
This integral can be evaluated using Green's theorem in the plane (\SeeChapter{see section Vector Calculus}) if we consider the particular case of a closed curvilinear path such as:
Let us first study the real part:
Indeed, we proved (it is strongly advised to read this Green's theorem again) in the section on Vector Calculus that:
What will be written in our situation:
However, if the function is holomorphic and thus satisfies the Cauchy-Riemann equations we get immediately:
Thus, in the particular case of a closed path, our integral reduces to:
and... reusing Green's theorem for this imaginary part:
However, if the function is holomorphic (as a reminder, that is to say differentiable at every point of the complex plane or of an open subset of it) and thus satisfies the Cauchy-Riemann equations, we immediately get:
and we thus obtain the "\NewTerm{Cauchy theorem}\index{Cauchy theorem}", or "\NewTerm{Cauchy-Goursat theorem}\index{Cauchy-Goursat theorem}" for its generalized version that does not require the continuity of the derivative, which says that if a function is holomorphic (thus satisfying the Cauchy-Riemann equations) and integrated on a closed contour, then:
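As a compact recap of the computation above, with $f=u+\mathrm{i}v$ and $\mathrm{d}z=\mathrm{d}x+\mathrm{i}\,\mathrm{d}y$:
\[
\oint_C f(z)\,\mathrm{d}z=\oint_C \left(u\,\mathrm{d}x-v\,\mathrm{d}y\right)+\mathrm{i}\oint_C \left(v\,\mathrm{d}x+u\,\mathrm{d}y\right)
=\iint_S\left(-\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}\right)\mathrm{d}A
+\mathrm{i}\iint_S\left(\frac{\partial u}{\partial x}-\frac{\partial v}{\partial y}\right)\mathrm{d}A=0,
\]
both double integrals vanishing because of the Cauchy-Riemann equations.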
As a corollary (without proof), any function that satisfies the above relation is holomorphic (in the whole complex plane or an open subset of it).
This result gives the possibility, in certain fields like quantum physics (we think of the Yukawa potential, not yet treated in detail in this book), to calculate complicated real definite integrals using the above property. The idea, when choosing the closed contour of the path integral, is to arrange for the real definite integral to appear as only a part of the path (by generalizing to the complex case); by the equality with zero we then deduce its value thanks to the other parts of the integral over the chosen path (parts that are obviously simple to calculate).
In other words, the idea is to calculate by difference! The difficulty in practice resides in finding the function $f(z)$ and the closed contour that make the function $f(x)$ of the sought definite integral appear...
Using this result, let us work through a very important academic example which will be useful later (but which has no connection with the case of calculating a real definite integral).
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Let us calculate:
For this purpose, we will use the simplification that consists in remembering (\SeeChapter{see section Numbers}) that:
Therefore:
We can then write the path integral as:
Now, as on a closed path that is differentiable at any point (without nodes), the angle needed to make a full turn necessarily goes from $0$ to $2\pi$. It then comes:
\end{tcolorbox}
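Here is a small numerical sanity check of this result (an illustrative Python sketch, independent of the Maple material used elsewhere in this section): discretizing the unit circle and approximating the path integral of $1/z$ should give a value close to $2\pi\mathrm{i}$.
\begin{verbatim}
# Numerical check of the closed path integral of 1/z on the unit circle.
# With z = exp(i*theta), dz = i*exp(i*theta)*d(theta), the exact value is 2*pi*i.
import numpy as np

n = 100000
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
z = np.exp(1j * theta)                        # points on the unit circle
dz = 1j * np.exp(1j * theta) * (2.0 * np.pi / n)

print(np.sum((1.0 / z) * dz))                 # approximately 0 + 6.2831853j = 2*pi*i
\end{verbatim}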
Before we continue, let us notice a very interesting and important fact that we will detail formally later: an integral (we do not speak of a primitive but of an integral!) of the type $1/x$ in $\mathbb{R}$ would not be calculable. But now, if we generalize to $\mathbb{C}$, we see that we get to go around the singularity via a path integral that encloses it. And... in our previous calculation $z$ might take only real values and not imaginary ones (so $z$ reduces to $x$). So the integral of $1/x$ becomes calculable and has a result in the set of complex numbers, which is remarkable!
Some mathematicians interpret this by figuring that $1/x$ is a flat projection of a three-dimensional space in which the imaginary axis is perpendicular to the plane $\mathbb{R}^2$ (see the figures below). Hence the fact that $1/x$ can be integrated in the set $\mathbb{C}$.
Finally, let us indicate that $1/z$ is holomorphic on the whole complex plane except at $0$ (the derivative being the same as for $1/x$). The function $1/z$ is thus not $\mathbb{C}$-differentiable on the whole plane!
This being done, let us do an important and similar case with the following path integral:
where $z_0$ is a constant complex number. Let us write:
We can then write, if we make one turn counterclockwise:
which is valid only if our integration path avoids $z_0$, where otherwise there is a singularity. This latter integral is a slight generalization of the previous one.
Now let us show the important theorem that interests us since the beginning of this section using many proven results so far!
We know that if a function $f(z)$ satisfies the Cauchy-Riemann equations, then, if we carefully avoid the value $z_0$ (as in the above calculations), the expression:
is differentiable at all points except at $z_0$ (where the expression is no longer holomorphic); such a point is named a "\NewTerm{singularity}\index{singularity}".
Indeed, take a holomorphic function $f(z)$ satisfying the Cauchy-Riemann equations: subtracting a constant ($f(z_0)$) does not change the fact that the expression (in this case the numerator of the previous relation) remains holomorphic. Finally, multiplying it by a fraction (the denominator of the above expression), which is itself holomorphic away from $z_0$, gives a function holomorphic away from $z_0$. But singularities can then appear, and we then speak of "\NewTerm{meromorphic functions}\index{meromorphic functions}" (the ratio of two holomorphic functions).
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
A meromorphic function is a function holomorphic in the whole complex plane, except possibly on a set of isolated points each of which is a pole (singularity) for the function (see further below for the concept of pole/singularity). The gamma function (see the plot in the Differential and Integral calculus section) is a famous example of meromorphic function!
\end{tcolorbox}
So if we take the path integral on a closed path avoiding $z_0$, the Cauchy theorem gives us immediately (remember the proof above):
However, this can also be written after rearrangement of terms:
Therefore:
But we have proved above that:
Then we get the result named "\NewTerm{Cauchy's integral theorem}\index{Cauchy's integral theorem}", or more rarely "\NewTerm{Cauchy formula}\index{Cauchy formula}" (of which there is a generalized result we will prove later below):
In fact, in practice all the subtlety is to be able to recast a given holomorphic function $g(z)$ (which therefore satisfies the Cauchy-Riemann equations) into a form of the type:
when it is possible... then the calculation of its path integral (closed path) becomes extremely simple, since it will be equal to:
by the Cauchy's integral theorem!
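For reference, this theorem is, in its standard form:
\[
f(z_0)=\frac{1}{2\pi\mathrm{i}}\oint_C\frac{f(z)}{z-z_0}\,\mathrm{d}z,
\qquad\text{that is}\qquad
\oint_C\frac{f(z)}{z-z_0}\,\mathrm{d}z=2\pi\mathrm{i}\,f(z_0),
\]
for any closed contour $C$ traveled once counterclockwise around $z_0$ and inside which $f$ is holomorphic.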
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} So we know how to calculate the value of a path integral of an expression that is not holomorphic but whose numerator is holomorphic!\\
\textbf{R2.} Caution! The sign of the value of a path integral will depend on the direction in which its integration path is traveled. If the direction is positive (that is to say "counterclockwise") its sign will be positive; if on the contrary the direction is clockwise its sign will be negative. You probably think that this information is irrelevant since this value is usually zero. Yes... it is, but we will see later the importance of this information when we come to the calculation of what we name the "residues".
\end{tcolorbox}
There is a similar relation for the derivative $f'(z_0)$ to that given by the Cauchy's integral theorem. Let us see this:
Therefore:
thereby continuing, we have:
In short, we therefore note that:
which is the "\NewTerm{Generalized Cauchy's integral theorem}\index{Generalized Cauchy's integral theorem}".
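In its standard form, this theorem reads:
\[
f^{(n)}(z_0)=\frac{n!}{2\pi\mathrm{i}}\oint_C\frac{f(z)}{(z-z_0)^{n+1}}\,\mathrm{d}z,\qquad n=0,1,2,\dots
\]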
This result is very powerful because it shows that holomorphic functions are infinitely differentiable (because of the denominator), that is to say analytical, and it is much more difficult to find an equivalent theorem with such simple conditions for real functions.
If we now return to our Taylor expansion of a complex function:
Hmm... and what do we see here? Well, this!:
It follows the relation named the "\NewTerm{Laurent series in positive powers}\index{Laurent series in positive powers}" (a more general version will be proved later below):
that gives us the formal expression of a complex function in the form of infinite series of integer powers near a point $z_0$ of the complex plane with therefore:
Remembering that $\mathrm{d}^{n}f(z_0)/\mathrm{d}z^n$ can be written equivalently $f^{(n)}(z_0)$, we see that the two previous relations give us the Taylor series expansion that we had obtained in real analysis (\SeeChapter{see section Sequences and Series}) and that was:
Thus, the Taylor series in $\mathbb{R}$ are a special case of the Laurent series in $\mathbb{C}$!
This result is quite remarkable because it also shows that we can use the path integral in the complex plane to calculate the coefficients $c_n$ of the Laurent series instead of calculating the derivatives of order $n$ of the function $f$, if these latter are too complicated to determine. Or vice versa... calculate a simple derivative instead of calculating a headache of a path integral (typically the case in physics), using the fact that:
The only unfortunate point is that the latter relation is calculable only if we can put the function in the path integral into the form:
where $n$ is a positive or null integer. This is honestly far from easy in most cases! The idea would be to find a general path for the line integral, valid for any function $f(z)$, such that the denominator (which additionally contains a singularity at $z_0$) disappears. That would be ideal... but we need a lead... and it will come from the study of the convergence of series of complex powers. Let us see what it is with a qualitative approach!
\pagebreak
\subsubsection{Convergence of a complex series}
We saw in the section of Sequences and Series that many real functions could be expressed in Maclaurin series (special case of the Taylor series on $x_0=0$) in the form:
We also showed, by example only, that this infinite power series expansion was valid for some real functions only in a certain domain of definition named the "radius of convergence".
Even if this radius of convergence can be determined more or less easily in each case, there are some baffling examples that could not in the early 19th century be understood without complex analysis.
Let's see a simple example to understand what kind of problem it is. Consider for this the two functions:
and before continuing our example, recall that we have proved in the section of Sequences and Series the relation:
relative to a geometric series, that is to say a series whose terms are of the type:
Therefore it comes immediately if $n \rightarrow +\infty$ and $q \in ]-1,+1[$:
if $u_0=1$, we get:
So if we change the notation, we have:
Then it comes immediately:
Therefore the two previous functions $g(x)$ and $h(x)$ are defined by an infinite power series expansion only within the radius of convergence $x \in ]-1,+1[$.
We would get the same result by making a Maclaurin series expansion!
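Concretely, substituting $q=x^2$ and $q=-x^2$ respectively into the geometric series (with $g(x)=1/(1-x^2)$ and $h(x)=1/(1+x^2)$, the two functions plotted in the Maple scripts below), we get:
\[
g(x)=\frac{1}{1-x^2}=\sum_{n=0}^{+\infty}x^{2n},
\qquad
h(x)=\frac{1}{1+x^2}=\sum_{n=0}^{+\infty}(-1)^n x^{2n},
\qquad \vert x\vert<1.
\]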
We see that for $g(x)$ there are trivially two singularities, $x=\left\lbrace -1,+1\right\rbrace$; on the other hand, we basically do not see trivial singularities for $h(x)$ if we reason only in $\mathbb{R}$, so for the latter function it can be hard to understand the origin of the radius of convergence!
Indeed, if we trace these two functions in $\mathbb{R}$ with Maple 4.00b we get respectively:
\begin{figure}[H]
\begin{center}
\includegraphics{img/analysis/g_h_example_functions.jpg}
\end{center}
\caption{The two example functions $g(x)$ and $h(x)$ plotted with Maple 4.00b}
\end{figure}
Hence the problem: why is there still, implicitly, a radius of convergence $x \in ]-1,+1[$ for $h(x)$?
An even more blatant way to highlight the problem is to show the approximation of these two functions by a Maclaurin series expansion with ten terms.
For $g(x)$ we get for example:
\texttt{>with(plots):\\
>xplot:= plot(1/(1-x\string^2),x=-5..5,thickness=2,color=red):\\
>tays:= plots[display](xplot):\\
>for i from 1 by 2 to 10 do\\
tpl:= convert(taylor(1/(1-x\string^2), x=0,i),polynom):\\
tays:= tays,plots[display]([xplot,plot(tpl,x=-5..5,y=-2..2,\\
color=black,title=convert(tpl,string))])\\
od:\\
>plots[display]([tays],view=[-5..5,-2..2]);}
\begin{figure}[H]
\begin{center}
\includegraphics{img/analysis/g_function_expansion_inspection.jpg}
\end{center}
\caption{Plane representation of the function $g$ to visualize the problem}
\end{figure}
where we see clearly that the Maclaurin series (or expression in power series) does not converge outside $x \in ]-1,+1[$, which can be intuitive because of the two singularities.
For $h(x)$ we have, on the other hand:
\texttt{
> with(plots):\\
> xplot:= plot(1/(1+x\string^2),x=-5..5,thickness=2,color=red):\\
> tays:= plots[display](xplot):\\
> for i from 1 by 2 to 10 do\\
tpl:= convert(taylor(1/(1+x\string^2), x=0,i),polynom):\\
tays:= tays,plots[display]([xplot,plot(tpl,x=-5..5,y=-2..2,\\
color=black,title=convert(tpl,string))])\\
od:\\
> plots[display]([tays],view=[-5..5,-2..2]);\\
}
\begin{figure}[H]
\begin{center}
\includegraphics{img/analysis/h_function_expansion_inspection.jpg}
\end{center}
\caption{Surprisingly, here the Maclaurin series (in black) does not converge}
\end{figure}
where we see clearly that the Maclaurin series (or the expression in power series) does not converge outside $x \in ]-1,+1[$ either, which was unsettling and counter-intuitive at the beginning of the history of real analysis.
Today even a high school student knows that he can also think in $\mathbb{C}$ and that $\mathbb{R} \subset \mathbb{C}$. So real analysis is just a special, restricted case of complex analysis.
The singularity for $h(x)$ comes from the fact that in $\mathbb{C}$ the latter is written:
and there are therefore two singularities $z=\left\lbrace{-\mathrm{i},+\mathrm{i} }\right\rbrace$ that we see well if we represent:
with Maple 4.00b (fortunately we now have the equivalent of a microscope in mathematics with Maple...):
\texttt{>plot3d(abs(1/(1+(re+I*im)\string^2)),re=-3..3,im=-3..3,view=[-2..2,-2..2,-2..2]\\
,orientation=[-130,70],contours=50,style=PATCHCONTOUR,axes=frame,\\
grid=[100,100],numpoints=10000);}
\begin{figure}[H]
\begin{center}
\includegraphics{img/analysis/g_inspection_in_C.jpg}
\end{center}
\caption{Complex representation of the function $h$ to highlight the reason for the divergence}
\end{figure}
where we can see the two singularities on the imaginary axis and the function $h(x)$ on the real axis (between the two peaks). So when we develop a function in power series, we conclude that the radius of convergence is determined in the whole complex plane and not only on the traditional axis of real analysis.
This makes it more natural to understand why, in the section on Sequences and Series, we were talking about a "radius": as seen from above, we have in the complex plane:
\begin{figure}[H]
\begin{center}
\includegraphics{img/analysis/h_various_radius_convergence.jpg}
\end{center}
\caption{Representation of the various convergence of radius of $h(z)$}
\end{figure}
hence the fact that we sometimes talk about an (open) convergence disk and sometimes about an (open) convergence radius. Moreover, we notice on the chart that the domain of convergence is convex (any pair of points of the domain can be connected by a straight line that stays in the area of convergence).
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Let us recall that an "open" subset, interval, or disc means that we do not take its border, as we have seen in the section on Topology.
\end{tcolorbox}
Then we understand better why the Taylor series does not converge trivially for $h(x)$: it must converge on the whole disc of the complex plane and not just converge on the real axis!
From all this we deduce that our Laurent series in positive powers proved above:
does not necessarily converge, unsurprisingly... on the whole complex plane (just like the Taylor series on the real line, as this is the equivalent!) but sometimes only in an open subdomain (convex?) of this plane around $z_0$ (which in the particular example taken above was obviously: $0$).
With our function $h(x)$ expressed using a Maclaurin development with 5 terms, we see immediately with Maple 4.00b that on the borders of the square inscribed in the disc of convergence the series no longer converges, and we can guess the onset of the two singularities:
\texttt{>plot3d(abs(1-(re+I*im)\string^2+(re+I*im)\string^4-(re+I*im)\string^6+(re+I*im)\string^8),\\
re=-0.7..0.7,im=-0.7..0.7,view=[-1.5..1.5,-1.5..1.5,0..1.5]\\
,orientation=[-130,70],contours=50,style=PATCHCONTOUR,axes=frame,\\
grid=[100,100],numpoints=10000);}
\begin{figure}[H]
\begin{center}
\includegraphics{img/analysis/h_zoom_on_complex_representation.jpg}
\end{center}
\caption{Focus on the complex representation to understand the reason for the divergence}
\end{figure}
A little outside the disc of convergence, we obviously get nonsense:
\texttt{>plot3d(abs(1-(re+I*im)\string^2+(re+I*im)\string^4-(re+I*im)\string^6+(re+I*im)\string^8),re=-3..3,\\
im=-3..3,view=[-1.5..1.5,-1.5..1.5,0..1.5],orientation=[-130,70]\\
,contours=50,style=PATCHCONTOUR,axes=frame,grid=[100,100],numpoints=10000);}
\begin{figure}[H]
\begin{center}
\includegraphics{img/analysis/h_divergence.jpg}
\end{center}
\caption{This diverges ... (stalactites ???)}
\end{figure}
There is still something interesting to try... since we are now on a plane, not a straight line (axis), it is possible for us to make the Taylor expansion around a singularity $z_0$ by deforming the disk into a crown/ring as shown below (the crown/ring being the simplest such geometry arising from the deformation of a disk):
\begin{figure}[H]
\begin{center}
\includegraphics{img/analysis/h_representation_transformation_disc_in_crow.jpg}
\end{center}
\caption{Representation of the deformation of a disc in a crown/ring}
\end{figure}
The advantage of this is to extend the area of convergence over the whole complex plane by avoiding (bypassing) all the singularities. Thus, unlike the Taylor series, which are only valid on an interval of the $x$-axis, we would have a new type of series describing a function absolutely everywhere, that is to say before AND after (so around...) singularities!
So obviously we will require that in the deformed crown above the function is always holomorphic and analytic (as in the initial disc). Before deriving what we are aiming at (the generalized Laurent series!), we must first study the decomposition of path integrals:
\pagebreak
\subsection{Path Decomposition}
The path integrals given previously can also be written in another, almost classical, form used many times in the literature.
Let us see this. First, remember that we have just proved in the special case of a holomorphic function that:
But a closed path can be seen as a path having a round trip:
\begin{figure}[H]
\begin{center}
\includegraphics{img/analysis/closed_path.jpg}
\end{center}
\caption{Representation of a closed path with round trip}
\end{figure}
Therefore we can write:
And now comes what interests us... for this purpose let us focus on one of the path integrals of the type:
We already well know (1st form of notation) that any complex number $z$ of the type:
can be (2nd form of notation) written as (Euler form):
and, to integrate on a path, nothing prevents us from choosing a path where $r$ (the modulus) is fixed and $\theta$ varies (we would not have this possibility with the 1st form, because by modifying the imaginary or real part we cannot guarantee a nice smooth curve, whereas this is possible with the Euler form of a complex number)!
Therefore we have:
We write then naturally:
and as:
Therefore:
what we often find in the following form in the literature:
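A standard way of writing this, for a circle of radius $r$ centered at a point $z_0$ and traveled once counterclockwise, is therefore:
\[
z=z_0+r\mathrm{e}^{\mathrm{i}\theta},\qquad
\mathrm{d}z=\mathrm{i}\,r\mathrm{e}^{\mathrm{i}\theta}\,\mathrm{d}\theta,
\qquad
\oint_C f(z)\,\mathrm{d}z=\int_0^{2\pi} f\!\left(z_0+r\mathrm{e}^{\mathrm{i}\theta}\right)\mathrm{i}\,r\mathrm{e}^{\mathrm{i}\theta}\,\mathrm{d}\theta.
\]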
\subsubsection{Inverse Path}
If $C$ is a curve going from a point $P$ to a point $Q$, then we denote by $C^-$ the same curve but traveled from $Q$ to $P$.
Let us parametrize $C^-$:
If $C(t)$ is the curve defined on $[a, b]$, then we define the curve $C^-(t)$ on $[a, b]$ by:
Indeed we have with this parameterization:
and when $t$ increases from $a$ to $b$, $a + b - t$ decreases from $b$ to $a$. $C^-$ is therefore only $C$ but traveled in the opposite direction.
We then have using the last proof:
Let us put:
Therefore:
Then we have:
Therefore, if $C^-$ and $C$ are the paths of the same function but traveled in opposite directions, we have, by taking our conventional notation (Caution! In the second term it is implicit that the parameterization is different from the first one!):
Therefore:
this is why we often say that the sign of the value of a line integral will depend on the direction in which its integration path is traveled. If the direction is positive (that is to say "counterclockwise") its sign will be positive; if on the contrary the direction is clockwise its sign will be negative (\SeeChapter{see section Differential and Integral Calculus}).
\pagebreak
\subsection{Laurent Series}
Now that this last relation is obtained, we can return to the deformation of our disc of convergence into a crown. We recall that the initial idea is to have the analytical expression of a function as an infinite series of powers in a limited area around a singularity point, and all this... with the purpose of being able to calculate, for physicists, complex path integrals through a method using the properties of complex series!
Let us start with the second point, that is to say obtaining an infinite power series for a path integral, which will then lead us more easily to the first point, that is to say obtaining an analytical expression of a function around a singularity point, by zooming in on our crown:
\begin{figure}[H]
\begin{center}
\includegraphics{img/analysis/crown.jpg}
\end{center}
\caption{Zoom on our crown from our starting example}
\end{figure}
We therefore have, if the function $f$ is analytic and holomorphic in the crown of outer radius $R$ and inner radius $r$, the following curvilinear path integral in the crown, as we proved above (we change notation: $z=z'$ and $z_0=z$):
so we now denote by $z$ the point where we want to know the function and by $z'$ the variable on which $f$ depends. This notation change will be justified later for a purely practical reason.
The crown can be broken down into four paths:
If both segments $C_c$ and $-C_c$ are infinitely close, they then correspond to the same path traveled once in a positive direction and once in the negative direction. As we have proved just above that:
It therefore follows that:
Which brings us to write:
where we have put a "+" between the last two terms because, as we shall see immediately, the convergence criterion associated with the traditional notation in this field of study automatically makes the sign "-" emerge.
For the two integrals $f_1,f_2$, we know that the fraction can be written as a geometric series, as already seen above. Effectively, starting from (now you will understand why we changed the notation):
by assimilating:
where as we have seen, the convergence requires that:
so that $x$ is lower than $1$ in absolute value.
We then see the infinite geometric series appearing:
Therefore:
To come back to:
we have, at any point $z$ inside the circle of radius $R$ and center $z_0$ whose border is described by the variable $z'$, a convergence that is assured because:
Then we can write:
Integrating term by term, we highlight the (already known) development:
with the definition of coefficients $c_n$, where $n$ is a positive or null integer:
This development may make one think of the Taylor development, in the sense that only positive (or zero) powers of $(z-z_0)$ appear, but this is not a Taylor development in the case of the crown! Indeed, $c_n$ cannot be written this time as:
since, by assumption, $f(z)$ is assumed analytic in the crown only and may therefore very well not be analytic inside the small circle of radius $r$, in particular at $z_0$, in which case $f^{(n)}(z_0)$ may simply not exist (let us repeat that $z$ is strictly constrained to be in the crown, therefore $r<\vert z-z_0 \vert <R$). We will see later what happens when $f(z)$ is holomorphic in this disk and when, in particular, $z_0$ is not a singular point.
We still need to treat $f_2$. We then do the same type of development as for $f_1$, with the difference that now:
when $z'$ runs over the small circle of radius $r$. To make a geometric series appear, we must write this time:
Therefore:
So we have:
Integrating term by term, we highlight the (new) development:
with:
By changing $n$ into $-n$ in the summation for $f_2$, we have for the sum $f_1(z)+f_2(z)$:
with at this time two distinct $c_n$:
We will now see that these two relations can be combined into one!
For this purpose, if we observe the last two relations carefully, we find that they do not depend at all on $z$ (!) and this is normal, since the $c_n$ are the coefficients of the series expansion of $f(z)$ and these are the same at any point of the domain of definition of the function where it is analytic!
So the two contours (circles) can be merged into a single circle, as long as it is located in the crown and has $z_0$ as its center:
Furthermore, the attentive reader will have noticed that, finally, this contour does not even need to be a circle! It may have any geometry as long as it is closed and located in a region where the function is analytic!
Thus, we get the two relations:
\begin{equation}
\addtolength{\fboxsep}{5pt}
\boxed{
\begin{gathered}
\begin{aligned}
f(z)&=\sum_{n=-\infty}^{+\infty} c_n(z-z_0)^n\\
c_n&=\dfrac{1}{2\pi\mathrm{i}}\int\limits_{\gamma_R} \dfrac{f(z')}{(z'-z_0)^{n+1}}\mathrm{d}z' \qquad (n=0,\pm 1,\pm 2, ...)
\end{aligned}
\end{gathered}
}
\end{equation}
The two previous relations define the "\NewTerm{general Laurent series}\index{general Laurent series}". It is remarkable and differs from a Taylor series in the sense that it contains all the positive and negative integer powers, and the coefficients $c_n$ can a priori not be expressed with the derivatives of $f$.
%--------------------
The series of powers with $n\geq 0$ is named the "\NewTerm{regular part}\index{regular part of a power series}"; the series of negative powers is commonly named the "\NewTerm{main part}\index{main part of a power series}".
The series of negative powers converges uniformly everywhere outside $\gamma_r$, that of positive powers within $\gamma_R$. In total, the Laurent development converges uniformly in the common area, which is the crown, and therefore also on the single path $\gamma$.
Let us now show a point that we have mentioned above. If the circle contains no singularity, then all the coefficients:
are zero. First note that $-n-1$ is then a positive or zero integer, which we will denote by $p$ such that:
We then have the following integrand along a closed path:
But, once the singularity is removed, $f(z')$ is holomorphic (and in any case this was required by all the initial developments of the Laurent series).
As $(z'-z_0)^p$ is a polynomial with a positive or zero integer power and, as we know, any such polynomial is differentiable everywhere without showing a singularity, this term is also holomorphic.
Since the product of two holomorphic functions is holomorphic and the contour $\gamma$ is closed, we then have, using the following result proved above (for a holomorphic function):
the following immediate consequence:
if there is no singularity in the small circle of the crown. We then fall back again on a development with only positive powers, the $c_{n\geq 0}$ this time being equal to:
according to the generalized Cauchy integral theorem proved earlier above. Conversely, we see clearly that it is the main part (when it exists!) which contains the information on the fact that $f$ is not a priori holomorphic in the small disk. The existence of negative powers shows that $f$ is clearly not bounded at $z_0$.
The classification of singularities of a function will be precisely based on the consideration of the characteristics of the main part of the Laurent development centered on a singular point of this function.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Let us see what the Laurent series of our famous example function looks like:
on a domain that is the crown surrounding the singularity $\mathrm{i}$, for example (we could have taken the second singularity $-\mathrm{i}$, but we had to choose one so as not to repeat the explanations below twice...). This is therefore equivalent to searching for the power series development in $z-\mathrm{i}$. \\
We will proceed as follows:
For what will follow we will use:
\end{tcolorbox}
\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
The second fraction can be expressed as a geometric series if as we have already seen:
Therefore it comes:
Let us multiply both sides of this equality by $-\mathrm{i}/2$ and then divide them by $z - \mathrm{i}$ (the second factor in the denominator of the original fraction) to obtain, for the left term:
and for the right term:
Finally we have the following geometric series:
We see then on this Laurent series around $\mathrm{i}$ of the holomorphic function $f(z)$ that the following coefficient appears:
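As a compact recap of this computation (with, as above, $f(z)=1/(1+z^2)$ and the expansion made around $z_0=\mathrm{i}$):
\[
\frac{1}{1+z^2}=\frac{1}{(z-\mathrm{i})(z+\mathrm{i})}
=\frac{1}{z-\mathrm{i}}\cdot\frac{1}{2\mathrm{i}}\cdot\frac{1}{1+\frac{z-\mathrm{i}}{2\mathrm{i}}}
=-\frac{\mathrm{i}}{2}\,\frac{1}{z-\mathrm{i}}-\sum_{n=0}^{+\infty}\left(\frac{\mathrm{i}}{2}\right)^{n+2}(z-\mathrm{i})^n,
\]
valid for $0<\vert z-\mathrm{i}\vert<2$, so the coefficient of $1/(z-\mathrm{i})$ is $c_{-1}=-\mathrm{i}/2$ (these are precisely the coefficients used in the Maple command that follows).\\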
and then we have with Maple 4.00b:\\
\texttt{>plot3d(abs(-I/2*1/((re+I*im)-I)-(I/2)\string^2-(I/2)\string^3*(re-I*im)-\\
(I/2)\string^4*(re-I*im)\string^2-(I/2)\string^5*(re-I*im)\string^3),\\
re=-1.5..1.5,im=-1.5..1.5,view=[-2..2,-2..2,-1..2],\\
orientation=[-130,70],contours=50,style=PATCHCONTOUR,axes=frame,\\
grid=[100,100],numpoints=10000);}
\end{tcolorbox}
\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
This gives the following figure:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/laurent_series_representation.jpg}
\caption{Laurent series representation of $f(z)$ with Maple 4.00b}
\end{figure}
where we see that the Laurent series allows us to express $f (z)$ in a neighborhood close to the singularity $\mathrm{i}$ by taking five terms.\\
Ditto if we make the sum of the two Laurent series for the two singularities with seven terms:\\
\texttt{>plot3d(abs(-I/2*1/((re+I*im)-I)-(I/2)\string^2-(I/2)\string^3*(re-I*im)-\\
(I/2)\string^4*(re-I*im)\string^2-(I/2)\string^5*(re-I*im)\string^3 -(I/2)\string^6*(re-I*im)\string^4\\
-(I/2)\string^7*(re-I*im)\string^5+I/2*1/((re+I*im)+I)+(I/2)\string^2+
(I/2)\string^3*(re+I*im)+(I/2)\string^4*(re+I*im)\string^2+(I/2)\string^5*(re+I*im)\string^3\\
+(I/2)\string^6*(re+I*im)\string^4
+(I/2)\string^7*(re+I*im)\string^5),re=-1.5..1.5, im=-1.5..1.5, view=[-2..2,-2..2,-1..2],orientation=[130,70], contours=50, style=PATCHCONTOUR, axes=frame,grid=[100,100],\\
numpoints=10000);}
This gives the image visible on the next page:
\end{tcolorbox}
\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
This gives the following figure:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/sum_of_two_laurent_series.jpg}
\caption{Sum of the two Laurent Series of $f(z)$ for both singularities with Maple 4.00b}
\end{figure}
\end{tcolorbox}
\subsection{Singularities}
We have seen just before that it was possible to calculate the path integral of a function, on condition of analyticity, on a contour around a singularity. Our goal will now be to enhance this approach.
We have already mentioned and highlighted in our previous proofs that the integrand in the "Cauchy integral theorem" was of the form:
where $f(z)$ is well defined in $z_0$.
The point $z_0$ is of course a singularity of this expression, which is not defined there.
As we saw during our proof of the Laurent series, $f(z)$ can be expressed as a positive-power Laurent series in a convergence disk (or, what amounts to the same thing, as a Laurent series in a crown not centered on a singularity...) in the form:
Before continuing, it is customary in mathematics to define a small conventional vocabulary regarding this time the possible singularities of $f(z)$!
Let us first recall that we know, and that we have proved, that all the information on the singularities of $f(z)$ is contained in the main part of the Laurent series (negative powers) defined on the crown surrounding $z_0$:
The following classification focuses on "\NewTerm{isolated singularities}\index{isolated singularities}", that is to say, singular points $z_0$ such that $f(z)$ is analytic everywhere in a neighborhood except at $z_0$. This classification, which as we will see permits us to distinguish three types of singular points, will be useful when developing the theory of residues further on.
\textbf{Definitions (\#\mydef):}
\begin{enumerate}
\item[D1.] When the limit of the function $\vert f(z) \vert$ exists at $z_0$, we say that the singularity is a "\NewTerm{removable singular point}\index{removable singular point}" or an "\NewTerm{apparent singularity}\index{apparent singularity}".
For example:
does not seem to be defined at $z=z_0=0$, but we have a numerator having a Laurent series without negative powers (therefore a simple Taylor series). It then comes, by doing the Maclaurin series (that is to say the Taylor series at $z=z_0=0$...):
We then see that $f(z)$ finally has no term with a negative power and therefore we have eliminated the singularity (or rather, it simply contains no singularity... which can easily be checked with Maple 4.00b).
\item[D2.] When at $z_0$ the limit of $\vert f(z) \vert$ does not exist, we speak of an "\NewTerm{essential singularity}".
For example, $z_0=0$ is an essential singularity for the function:
Indeed, if $z$ approaches zero coming from the positive real axis $\mathbb{R}_+$, the function diverges, more precisely, it tends to $+\infty$. If $z$ comes from $\mathbb{R}_-$, the function tends to zero as illustrated by the following Maple 4.00b plot:
\texttt{>plot3d(abs(exp(1/(re+I*im))),re=-5..5,im=-5..5,\\
view=[-3..3,-3..3,-0.5..3],orientation=[-130,70],contours=50,\\
style=PATCHCONTOUR,axes=frame,grid=[100,100],numpoints=10000);}
\begin{figure}[H]
\centering
\includegraphics{img/analysis/plot_essential_singularity.jpg}
\caption{Essential singularity example with $e^{1/z}$ in Maple 4.00b}
\end{figure}
Indeed:
So an equivalent way of defining an essential singularity, is to say that there are an infinite number of terms with negative powers in the main part of the Laurent series.
\item[D3.] When on $z_0$ the limit of $\vert f(z) \vert$ is $+\infty$, we speak about a "\NewTerm{pole}".
This is the last category (as far as we know...) in which we can put a function that is classifiable in neither the first nor the second definition above.
So another equivalent way of defining a "pole", is to say that there is a finite number of terms with negative powers in the main part of the Laurent series. If the number of terms is $k$, then we speak of "\NewTerm{pole of order $k$}".
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} We sometimes say that an essential singularity is a " \NewTerm{pole of order $+\infty$}".\\
\textbf{R2.} A pole of order 1 is named a "\NewTerm{simple pole}". One of order 2 is named a "\NewTerm{double pole}" and so on...
\end{tcolorbox}
\end{enumerate}
If we come back on our example:
We have proved previously that the Laurent series of the function was:
This function therefore has a trivial pole of order $1$ at $z_0=\mathrm{i}$ and also at $z_0=-\mathrm{i}$, because in this latter case this infinite series diverges to $+\infty$, and we can easily check this with the following Maple 17.00 command:
\texttt{>sum(-(I*(1/2))\string^n*(-2*I)\string^(n-2), n = 0 .. infinity)}
\subsection{Residue Theorem}
Consider a function $f(z)$ whose pole is of order less than or equal to $k$.
Let us make it analytic:
that is to say that we have taken a function $f(z)$ and made it analytic after elimination of the pole, of order less than or equal to $k$, at $z_0$.
This function $\phi(z)$ has therefore a Laurent series development in a disc center on $z_0$.
As we have proved previously, we can therefore, by using the following relation:
write:
Using $f(z)$ under the integral it comes:
You must deeply analyze this relation and understand that it links together the integral of a function having singularities with the value at one point of an analytical function having no more singularities!!!
This latter relation can be rewritten by rearranging terms:
And by expressing $\phi^{(k)}(z_0)$ by using (this is authorized because this latter function is analytical) the fact that by definition:
We get obviously:
Therefore by making $\phi(z)$ explicit again:
This latter relation is valid only for ONE isolated singularity (in case you forgot!) and where $k$ is at least equal to $1$!
Mathematicians therefore define:
as being the residue of the function $f(z)$ at the point being an isolated singularity of order $k$. Or respectively:
where the path integral is centered on $z_0$.
Now notice that the term on the right of the equality in the previous relation corresponds to the coefficient $c_{-1}$ of the Laurent series. Indeed:
Therefore:
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Therefore it comes that at an isolated singularity that can be removed, the residue is null because, as we saw before, the path integral surrounding a domain without singularity is equal to zero!
\end{tcolorbox}
To resume, the relation:
is very interesting for the physicist... because this is a very elegant way for him to calculate the path integral of a non analytic function $f(z)$ having a unique isolated singularity and this just by knowing the order of its poles!
For example if a function $f(z)$ has only a pole of order $1$, we have therefore:
and we therefore replace $z_0$ by the desired value in the parenthesis $(z-z_0)$ and then we calculate the limit between brackets!
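As a small complement (our own hypothetical illustration, not part of the original Maple worksheets), the simple-pole formula $\text{Res}\left[f(z);z_0\right]=\lim_{z\rightarrow z_0}(z-z_0)f(z)$ can be checked with the Python library SymPy on the function $1/(1+z^2)$ that accompanies us in this section:
\begin{verbatim}
# Minimal sketch (assumption: the SymPy library is available) checking the
# simple-pole formula Res[f; z0] = lim_{z -> z0} (z - z0) f(z) for 1/(1+z^2).
import sympy as sp

z = sp.symbols('z')
f = 1 / (1 + z**2)              # simple poles at z = I and z = -I

# (z - I) f(z) simplifies to 1/(z + I); evaluating it at z = I gives the limit
res_at_i = sp.cancel((z - sp.I) * f).subs(z, sp.I)
print(sp.simplify(res_at_i))    # -I/2

# Cross-check with SymPy's built-in residue routine
print(sp.residue(f, z, sp.I))   # -I/2 as well
\end{verbatim}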
Now to go further, let us recall that the outline of the path integral:
and the curvilinear path of the integral:
are in fact one and the same (identical) and the coefficients $c_n$ do not depend on $z$! The only constraint on the path is that it is closed and in an analytical domain centered on one point.
So if we have several isolated singularities, surrounded by connected curvilinear paths as shown below on the complex plane for a function having three poles (i.e. three non-removable singularities $z_0,z_1,z_2$):
\begin{figure}[H]
\centering
\includegraphics{img/analysis/multiple_surrounded_singularities.jpg}
\caption{Multiple isolated singularities surrounded by curvilinear paths}
\end{figure}
then we still have only one closed curvilinear path, whose different isolated singularities are connected by crosscuts and, as we know, paths that are opposed cancel each other! And let us remind ourselves that the coefficients are the same throughout the whole path since it is in an analytical domain.
We then have the generalized version of the residue theorem for a function $f$ with $n$ isolated singularities:
\begin{equation}
\addtolength{\fboxsep}{5pt}
\boxed{
\begin{gathered}
\begin{aligned}
\oint f(z)\mathrm{d}z&=2\pi\mathrm{i}\sum_{i=1}^n \text{Res}\left[f(z);z_i\right]\\
\text{Res}\left[f(z);z_i\right]&=\lim_{z \rightarrow z_i}\dfrac{1}{(k_i-1)!}\dfrac{\mathrm{d}^{k_i-1}}{\mathrm{d}z^{k_i-1}}\left[(z-z_i)^{k_i}f(z)\right]
\end{aligned}
\end{gathered}
}
\end{equation}
with a rigorous approach that is specific to engineers... who sometimes write this latter relation as follows:
where $r$ is therefore a residue. This is an important result in the field of solving differential equations associated with some inverse Laplace transforms (\SeeChapter{see section Functional Analysis}). This intermediate result will give us the possibility to obtain another one a little further on, of major importance for the section of Corpuscular Quantum Physics.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Let us take again our famous function:
We know it has a pole of order $1$ at $z_0=\mathrm{i}$ and a pole of order $1$ at $z_0=-\mathrm{i}$. So if we take this time a Laurent series with a path that surrounds the two singularities (and not only one), then we have two simple poles inside the path.
It comes then for this particular case:
with $n$ being equal to $2$.
Then we have:
and:
\end{tcolorbox}
\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
We can easily check this with Maple 4.00b:\\
\texttt{>readlib(singular):\\
>singular(1/(1+z\string^2),z);\\
>readlib(residue):\\
>residue(1/(1+z\string^2),z=-I);\\
>residue(1/(1+z\string^2),z=I);\\}
and therefore:
In fact in this case, the residue theorem gives zero because the function has no pole at infinity, which is true since in our example:
Physicists meanwhile say that... the force does not do any work on this path...!
\end{tcolorbox}
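For readers without Maple, here is a hypothetical numerical sketch in Python (our own illustration, using only NumPy): the path integral of $f(z)=1/(1+z^2)$ over a large circle enclosing both poles is approximated by a Riemann sum and is indeed close to $2\pi\mathrm{i}$ times the (null) sum of the residues:
\begin{verbatim}
# Sketch (assumption: NumPy is available): numerical check that the contour
# integral of 1/(1+z^2) over a circle enclosing both poles is ~ 0, i.e.
# 2*pi*i times the (null) sum of the residues.
import numpy as np

def f(z):
    return 1.0 / (1.0 + z**2)

R, N = 5.0, 20000                                   # big radius, many points
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
z = R * np.exp(1j * t)                              # points on the circle
dz = 1j * R * np.exp(1j * t) * (2.0 * np.pi / N)    # dz = i R e^{it} dt

print(abs(np.sum(f(z) * dz)))                       # close to 0
\end{verbatim}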
\subsubsection{Pole at infinity}
We have said before that for any function that does not have a pole at infinity, the sum of the residues of all its poles is equal to zero. This result is very important in physics and deserves to be studied!
It is almost trivial to recognize the number of poles... but when recognizing the poles that are at infinity there are many traps we can easily fall into.
Let us consider the expression $f(z)\mathrm{d}z$. If $z$ is in the neighborhood of infinity then $1/z$ is near $0$. Let us write:
Then we have:
Then the residue at infinity is such that:
with:
Therefore with:
The latter relation will be indispensable to us in the section of Corpuscular Quantum Physics to build the relativistic Sommerfeld model of the hydrogenous atom, because we will need to calculate a path integral with a pole.
Let's see an example with the function that accompanies us since the beginning of this section. That is to say:
Therefore it comes:
And we immediately recognize the initial function, in absolute value, which therefore has no pole at $0$. Therefore $f(z)$ has no pole at infinity.
\begin{flushright}
\begin{tabular}{l c}
\circled{100} & \pbox{20cm}{\score{3}{5} \\ {\tiny 14 votes, 61.43\%}}
\end{tabular}
\end{flushright}
%to force start on odd page
\newpage
\thispagestyle{empty}
\mbox{}
\section{Topology}
\lettrine[lines=4]{\color{BrickRed}T}opology is an extremely broad field of mathematics whose object is difficult to define precisely, so the areas where it applies are varied (real line topology, graphs topology, differential topology, complex topology, symplectic topology, etc.).
We mainly make a distinction between:
\begin{itemize}
\item "\NewTerm{General topology}" that establishes the foundational aspects of topology and investigates properties of topological spaces and investigates concepts inherent to topological spaces. It includes point-set topology, which is the foundational topology used in all other branches (including topics like compactness and connectedness).
\item "\NewTerm{Algebraic topology"} tries to measure degrees of connectivity using algebraic constructs such as homology and homotopy groups.
\item "\NewTerm{Differential topology"} is the field dealing with differentiable functions on differentiable manifolds. It is closely related to differential geometry (see section of the same name in the chapter about Geometry) and together they make up the geometric theory of differentiable manifolds.
\item "\NewTerm{Geometric topology}" primarily studies manifolds and their embeddings (placements) in other manifolds. A particularly active area is low dimensional topology, which studies manifolds of four or fewer dimensions. This includes knot theory, the study of mathematical knots.
\end{itemize}
What we can say at first is that in its foundations Topology is very closely related to set theory, the study of convergence of sequences and series, functional analysis, complex analysis, differential and integral calculus, vector calculus and geometry, to mention only the most important cases that the reader can already find in this book.
The origin of Topology comes from the problems that led to the progress of functional analysis in the rigorous study of continuous functions, their differentiability, their limits at a point (finite or not), the existence of extremums, etc. in higher-dimensional spaces (in fact, implicitly, the goal of topology is to create tools that easily allow us to study the properties of functions in all dimensions). All these concepts needed a rigorous mathematical definition of the intuitive idea of proximity, especially when doing operations on such functions.
We will try in this section to identify the basis of the structures that allow us to speak about limits and continuity, and this only for curiosity: as consultants in R\&D and financial engineering we never saw a business application where the subjects below are absolutely necessary to develop a new business or solve a problem.
The majority of examples that we will take in this section will be in $\mathbb{R}$ (the $\mathbb{R}$ straight line to be more exact...) because it is the most used one by the engineers (and most of times the only one!) and the one we will have to use for the sections on Graph Theory, Statistics, Differential and Integral Calculus and also on Fractals. When we restrict our study of Topology on $\mathbb{R}$ we then speak of "\NewTerm{Real Analysis}".
\subsection{General Topology}
General topology is the branch of topology dealing with the basic set-theoretic definitions and constructions used in topology. It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology.
The fundamental concepts in point-set topology are "\NewTerm{continuity}", "\NewTerm{compactness}", and "\NewTerm{connectedness}".
\begin{itemize}
\item Continuous functions take \underline{nearby} points to nearby points.
\item Compact sets are those that can be covered by finitely many sets of \underline{arbitrarily small} size.
\item Connected sets are sets that cannot be divided into two pieces that are \underline{far apart}.
\end{itemize}
The words \underline{nearby}, \underline{arbitrarily small}, and \underline{far apart} can all be made precise by using open sets. If we change the definition of open set, we change what continuous functions, compact sets, and connected sets are. Each choice of definition for open set is named a "\NewTerm{topology}". A set with a topology is named a "\NewTerm{topological space}".
"\NewTerm{Metric spaces}" are an important class of topological spaces where distances can be assigned a number named a "\NewTerm{metric}". Having a metric simplifies many proofs, and many of the most common topological spaces are metric spaces.
An "\NewTerm{inner product}" (\SeeChapter{see section Vector Calculus}) induces a "\NewTerm{norm}" (\SeeChapter{see section Vector Calculus}) and the norm induces a metric space.
Therefore we understand better what we will study in this section that can be summarized by the following figure:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/topological.jpg}
\end{figure}
\subsubsection{Topological Spaces}
Topological spaces form the conceptual foundation on which the concepts of limit, continuity or equivalence are defined.
The framework is general enough to be applied in many different situations: finite sets, discrete sets, geometry spaces, $n$ dimensional numerical spaces and most complex functional areas. These concepts appear in almost all branches of mathematics, they are therefore central to the modern view of mathematics.
\textbf{Definition (\#\mydef):} Consider a nonempty set $X$ (the length of a plastic ruler for example). A "\NewTerm{topology $\mathcal{T}$}" or "\NewTerm{topological space $(x,\mathcal{T})$}" on $X$ is a family $\mathcal{T}$ of parts of $X$ (of length of our rule...) named "\NewTerm{open $V$}" (as the open intervals seen in the section of Functional Analysis) such that the following axioms are true:
\begin{enumerate}
\item[A1.] The empty set $\varnothing$ and $X$ are considered as open $V$ and must belong to the family of the topology $\mathcal{T}$ (these two open sets alone define what we name the "\NewTerm{trivial topology}", which is the most minimal one satisfying all the axioms):
In other words, if we imagine our plastic ruler, the measure zero (strictly speaking: the empty set) must belong to the topology defined on the ruler and the ruler itself (seen as a subset).
\item[A2.] Any finite intersection of open of $\mathcal{T}$ will be an open of $\mathcal{T}$:
\item[A3.] Any union of open of $\mathcal{T}$ will be an open of $\mathcal{T}$:
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} Mathematicians frequently denote by $O$ the family of open sets and by $F$ the family of closed sets, a convention we will not follow in this book.\\
\textbf{R2.} The closed sets of a topology are the complements of the open sets. Therefore, the family of closed sets contains among others $X$ and the empty set $\varnothing$...\\
\textbf{R3.} There is no difference between part and subset of a set.
\end{tcolorbox}
\item[A4.] The couple $(X,\mathcal{T})$ is a "\NewTerm{Hausdorff space}" or "\NewTerm{separate space}" if moreover the property named "\NewTerm{Hausdorff axiom}" is verified:
\end{enumerate}
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} A well known example of topological space is $\mathbb{R}$ provided with the set $F$ generated by the open intervals (by the union law), that is to say the intervals of the type $] a, b [$.\\
\textbf{R2.} We will see a very concrete and beautiful application of Hausdorff spaces in our study of fractals in the chapter Theoretical Computing.
\end{tcolorbox}
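To make these axioms concrete, here is a small hypothetical Python sketch (our own illustration) that checks whether a given family of subsets of a finite set $X$ satisfies the three axioms of a topology:
\begin{verbatim}
# Sketch: check the topology axioms (A1-A3) for a family of subsets of a
# FINITE set X, using frozensets so that the subsets can be hashed.
from itertools import combinations

def is_topology(X, T):
    X = frozenset(X)
    T = {frozenset(s) for s in T}
    # A1: the empty set and X itself must belong to the family
    if frozenset() not in T or X not in T:
        return False
    # A2: finite intersections of open sets are open (pairs suffice here)
    # A3: unions of open sets are open (pairs suffice for a finite family)
    for a, b in combinations(T, 2):
        if (a & b) not in T or (a | b) not in T:
            return False
    return True

X = {1, 2, 3}
T_ok = [set(), {1}, {1, 2}, {1, 2, 3}]   # a valid topology on X
T_bad = [set(), {1}, {2}, {1, 2, 3}]     # {1} U {2} = {1,2} is missing
print(is_topology(X, T_ok))              # True
print(is_topology(X, T_bad))             # False
\end{verbatim}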
\textbf{Definition (\#\mydef):}
\begin{enumerate}
\item[D1.] If we denote by $(X, V)$ a topological space, $V$ designating the open sets of $X$, a "\NewTerm{base}", in the topological sense, of $(X, V)$ is a part $B$ of $V$ such that any open set of $V$ is a union of open sets of $B$ (this is the same idea as for vector spaces but applied to sets... nothing bad and difficult! If you want an example see the section of Measure Theory).
\item[D2.] In topology a subset $A$ of a topological space $X$ is named "\NewTerm{dense}" (in $X$) if every point $x$ in $X$ either belongs to $A$ or is a limit point of $A$. Informally, every point of $X$ is either in $A$ or arbitrarily "close" to a member of $A$. For instance, every real number is either a rational number or has one arbitrarily close to it. Therefore $\mathbb{Q}$ is dense in $\mathbb{R}$.
\end{enumerate}
\subsection{Metric Space and Distance}
\textbf{Definition (\#\mydef):} A "\NewTerm{metric space}" denoted by $(X, d)$ or sometimes $X_d$ (or evend sometimes just $X$ if the type of distance $d$ cannot not be confused) is by definition a set $X$ with provided with an application:
named "\NewTerm{distance}" or "\NewTerm{metric}", which satisfies the following axioms:
\begin{itemize}
\item[A1.] Positivity:
\item[A2.] Separation:
\item[A3.] Triangular inequality:
\item[A4.] Symmetry:
\end{itemize}
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} Some readers will probably see immediately that some of these properties have already been seen in other sections of this book during our study of distances between functional points and during our study of norms (triangle inequality proved in the section of Vector Calculus; symmetry, positivity and separation have already been studied in the section of Functional Analysis).\\
\textbf{R2.} Some authors omit the axiom A1, which is strictly correct as it trivially follows from the other axioms.\\
\end{tcolorbox}
The "distance function" of $\forall x,y \in X$ is thus usually denoted in the more possible sense in mathematics (at least as far as we know):
we will see three examples much more further below with a schema.
\textbf{Definition (\#\mydef):} If we do not impose the axiom A2, we say that $d$ is a "\NewTerm{semi-distance}" on $X$, and if we allow a semi-distance $d$ to take the value $+\infty$, we prefer to say that $d$ is a "gap".
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} If a distance $d$ satisfies the property:
property more restrictive that the triangle inequality in some spaces, we say that $d$ is "\NewTerm{ultrametric}".\\
An example of ultrametric distance is the family tree (...):
\begin{figure}[H]
\centering
\includegraphics{img/analysis/family_tree.jpg}
\caption{Example of ultrametric distance with an orgchart}
\end{figure}
We have the following distances:
We note that the distances above do not add up, but on the other hand we have:
Therefore:
\textbf{R2.} Let $(X, d)$ be a metric space and consider $F\neq\varnothing$ a part (subset) of the set $X$. The metric space $(F,\delta)$ where $\delta$ denotes the restriction $d_{F \times F}$ of $d$ is named a "\NewTerm{metric subspace}" of $(X, d)$ (we should check that the distance $d$ is equivalent to the distance $\delta$). In this case, we also say that $F$ is provided with the distance induced by that of $X$. We then simply denote by $d$ the induced distance.
\end{tcolorbox}
\pagebreak
Let us give now some examples:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
E1. If we take for $X$ the plane, or the three-dimensional space of Euclidean geometry and a unit of length, the "distance" in the usual sense is a distance within the meaning of the 4 axioms mentioned above. In these spaces, the three points $A, B, C$ satisfy as we have proved it in section Vector Calculus:
with other inequality obtained by circular permutation of $A, B, C$. These inequalities are well known, for example between the side lengths of a triangle.\\
E2. If we take $X=\mathbb{R}^n$, $n \in \mathbb{N}$, $n \geq 1$, and we equip $\mathbb{R}^n$ with a Euclidean vector space structure (and not a non-Euclidean one!) and we take two points:
in $\mathbb{R}^n$, the distance is then given by (we have already proved this in the sections of Functional Analysis and Vector Calculus):
\end{tcolorbox}
This latter distance satisfies the four axioms of distance and we name it the "\NewTerm{Euclidean distance}". We can mention (it is an interesting property for general culture) that any relation of the form:
is also a distance in $\mathbb{R}^n$ (without proof). In the particular case where $n=1$, we have of course:
This is the usual distance on $\mathbb{R}$.
Mathematicians are even stronger by generalizing ever more (the proof has little interest for now in this book) the previous relation (taking into account the definition of the distance) in the form:
which is named "\NewTerm{Hölder distance}".
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
Following the intervention of a reader we would like to point out that strictly speaking the above inclusion should be noted $[1,+\infty[ \subset \mathbb{\overline{R}}$ where $\mathbb{\overline{R}}$ is the achieved line (also valid for precision for the Minkowski inequality below).
\end{tcolorbox}
As for the triangle inequality, then given by (\SeeChapter{see section Vector Calculus}):
The generalization, by the verification of the existence of the Hölder distance, gives us the true "\NewTerm{Minkowski inequality}":
Let us continue with our examples:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
E3. If we take $X=\mathbb{C}$ we will consider the distance:
Therefore if $z=a+\mathrm{i}b=(a,b)$ and $z'=a'+\mathrm{i}b'=(a',b')$, we have the modulus which, in the same manner as the norm in $\mathbb{R}^2$, forms a distance:
\\
E4. Let us consider $E\neq\varnothing$ an arbitrary set. Let us write:
It is easy to check that this distance satisfies the four axioms and that it is furthermore an ultrametric distance. This distance is named the "\NewTerm{discrete distance}" and the reader will notice that, by analogy, we chose to express this distance with the Dirac symbol $\delta$ (this is not innocent!!) rather than the traditional $d$.
\end{tcolorbox}
\pagebreak
\subsubsection{Equivalent Distances}
Sometimes two different distances $d$ and $\delta$ on the same set $E$ are quite similar, so that the related metric spaces $(E,d),(E,\delta)$ have the same properties for certain mathematical objects defined by $d$ on one hand, and by $\delta$ on the other hand. There are several notions of equivalence, for example first (before the others that require mathematical tools that we have not yet defined):
\textbf{Definition (\#\mydef):} Let $d$ and $\delta$ be two distances on the same set $E$, $d$ and $\delta$ are named "\NewTerm{equivalent distances}" if there are two real constants $c>0,C>0$ such that:
Therefore:
with $c\leq C$. We note this equivalence by:
The advantage of this definition is the following: if we have convergence for one of the metrics, then we have convergence for the other too. More clearly:
verbatim:
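As a hypothetical illustration (our own sketch, using the two distances $d_\infty(x,y)=\max_i \vert x_i-y_i\vert$ and the Euclidean distance $d_2$ on $\mathbb{R}^2$, for which $d_\infty \leq d_2 \leq \sqrt{2}\,d_\infty$), the equivalence inequality can be checked numerically on random pairs of points:
\begin{verbatim}
# Sketch: numerical check that d_inf and d_2 are equivalent distances on R^2
# with the constants c = 1 and C = sqrt(2):  1*d_inf <= d_2 <= sqrt(2)*d_inf.
import random, math

random.seed(0)
for _ in range(10000):
    x = (random.uniform(-10, 10), random.uniform(-10, 10))
    y = (random.uniform(-10, 10), random.uniform(-10, 10))
    d_inf = max(abs(x[0] - y[0]), abs(x[1] - y[1]))
    d_2 = math.hypot(x[0] - y[0], x[1] - y[1])
    assert d_inf <= d_2 <= math.sqrt(2) * d_inf + 1e-12
print("1*d_inf <= d_2 <= sqrt(2)*d_inf verified on 10000 random pairs")
\end{verbatim}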
\subsubsection{Lipschitz Functions}
With respect to the above definitions, we can now assign some additional properties to functions such as we had defined in the sections of Set Theory or Functional Analysis and analyzed (in part...) in the section of Differential and Integral Calculus. The idea is also mainly to build a set of tools enabling the study of differential properties of non-differentiable functions.
Let $(E, d)$ and $(F,\delta)$ be metric spaces, and $f:E \rightarrow F$ a function. We define the following properties:
\begin{enumerate}
\item[P1.] We say that $f$ is an "\NewTerm{isometry}" if (it is rather intuitive ...!):
\item[P2.] If we take the usual distance, the $L$-Lipschitz function or "\NewTerm{Lipschitz function of order $L$}" is then defined on a given interval by:
that we can also write:
or, what amounts to the same thing: any line drawn between two arbitrary points of the graph must have a bounded and finite slope coefficient between $-L$ and $L$.
Any such $L$ is referred to as a "\NewTerm{Lipschitz constant}" for the function $f$. As $L$ can be defined on intervals (not necessarily the whole domain of definition), the smallest such constant is sometimes named the "\NewTerm{best Lipschitz constant}".
Otherwise, one can equivalently define a function to be Lipschitz continuous if and only if there exists a constant $L$ such that, for all $x\neq y$:
For real-valued functions of several real variables, this holds if and only if the absolute values of the slopes of all secant lines are bounded by $L$.
Therefore any Lipschitz function must be continuous, and having a bounded $L$ value is more restrictive than simply being continuous! (A small numerical sketch illustrating this is given just after this list.)
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
E1. The function $f(x)=\sin (x)$ is $1$-Lipschitz as its derivative, the cosine, is between $-1$ and $1$.\\
E2. The function $f(x)=x^2$ is locally Lipschitz, as for any closed and finite interval we can find a bound $L$, but it is not globally Lipschitz, as when $x\rightarrow \pm\infty$ the derivative is unbounded.\\
E3. The function $f(x)=|x|$ has no derivative at $x=0$ in the ordinary sense. But its derivative in the Lipschitz sense at $x=0$ is given by the closed interval denoted $\partial_L f(0)=[-1,1]$, given by the bound $L$, as the ordinary derivative is less than or equal to $L$ in absolute value!\\
This last example shows us that the notion of a local minimum of a function $f(x)$ in the ordinary sense is generalized by the Lipschitz condition, as we can now simply define the condition of a local minimum of a non-smooth function $f(x)$ as:
rather than in the ordinary sense (more restrictive):
\end{tcolorbox}
Schematically for a Lipschitz continuous function, there is a double cone (shown in white) whose vertex can be translated along the graph, so that the graph always remains entirely outside the cone.
\begin{figure}[H]
\centering
\includegraphics{img/analysis/lipschitz.jpg}
\caption{Example of Lipschitz function (source: Wikipedia)}
\end{figure}
Intuitively, a Lipschitz continuous function is therefore limited in how fast it can change: there exists a definite real number such that, for every pair of points on the graph of this function, the absolute value of the slope of the line connecting them is not greater than this real number; this bound is named a "Lipschitz constant" of the function (or "modulus of uniform continuity"). For instance, every function that has bounded first derivatives is Lipschitz.
\item[P3.] If $L=1$, we say that the function $f(x)$ is a "\NewTerm{short map}". If $0\leq L<1$, we say that $f(x)$ is "\NewTerm{strictly contracting}".
\item[P4.] We say that two metric spaces are "\NewTerm{isometric spaces}" if there is a surjective isometry of one over the other (which is quite natural in geometry ...).
\end{enumerate}
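As announced above, here is a small hypothetical Python sketch (our own illustration) that estimates a Lipschitz constant on an interval by sampling the absolute slopes of secant lines, for the first two examples given previously:
\begin{verbatim}
# Sketch: crude numerical estimate of a Lipschitz constant on an interval
# by sampling absolute slopes of secant lines |f(x) - f(y)| / |x - y|.
import math, random

def lipschitz_estimate(f, a, b, samples=20000):
    random.seed(1)
    best = 0.0
    for _ in range(samples):
        x, y = random.uniform(a, b), random.uniform(a, b)
        if x != y:
            best = max(best, abs(f(x) - f(y)) / abs(x - y))
    return best

print(lipschitz_estimate(math.sin, -10, 10))      # close to 1 (sin is 1-Lipschitz)
print(lipschitz_estimate(lambda x: x**2, 0, 5))   # close to 10 (= 2*5 on [0,5])
\end{verbatim}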
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} An isometry is always injective as:
but in general it is not surjective.\\
\textbf{R2.} If $(E,d)$ and $(F,\delta)$ are isometric, from the point of view of the theory of metric spaces they are not discernible, as all their properties are the same, but their elements can be of very different nature (sequences in one and functions in the other).\\
\end{tcolorbox}
\pagebreak
\subsubsection{Continuity and Uniform Continuity}
As we already saw in the section of Functional Analysis, a continuous function is, roughly speaking, a function for which small changes in the input result in small changes in the output and which permits the analysis of limits. Otherwise, the function is said to be a discontinuous function. Formally it was defined by:
In other words, remember that this means that a function is continuous if for every point $x_0$ in the domain $E$, we can make the images of that point ($f(x_0)$) and of another point ($f(x)$) arbitrarily close (within a distance $\varepsilon$) if we move the other point ($x$) close enough (within a distance $\delta$) to our given point.
Hence it is not continuous if:
The previous definition is not so good as it makes use of a special case of distance (the absolute value). It is therefore more common to generalize it by writing:
Now let us state a more restrictive definition!
\textbf{Definition (\#\mydef):} A function $f(x)$ is "\NewTerm{uniformly continuous}" if it satisfies:
with $\lambda=\varepsilon/L$ and $L\neq 0$. In other words, if we can bring two points as close as we want in one space, we can do the same in the other (which somehow ensures the possibility of derivation).
Hence it is not uniformly continuous if:
If we compare the two relations:
the only difference is the order of the quantifiers. Indeed, for something to be continuous, you can check "one $x$ at a time", so for each $x$, you pick an $\varepsilon$ and then find some $\delta>0$ that depends on both $x$ and $\varepsilon$ so that $|f(x)-f(x_0)|<\varepsilon$ if $|x-x_0|<\delta$. If we want uniform continuity, we need to pick first an $\varepsilon$, then find a $\delta$ which is good for ALL the $x$ values we might have.
As the previous definitions are not quite easy for everybody, let us see the engineer version of these two definitions:
\pagebreak
\textbf{Definitions (\#\mydef):}
\begin{enumerate}
\item[D1.] A function $f(x):E \rightarrow \mathbb{R}$ is "\NewTerm{continuous}" at a point $x_0\in A$ if, for all $\varepsilon>0$, there exists a $\delta>0$ such that whenever $|x-x_0|<\delta$ (and $x\in E$) it follows that $|f(x)-f(x_0)|<\varepsilon$.
\item[D2.] A function $f(x):E \rightarrow \mathbb{R}$ is "\NewTerm{uniformly continuous}" on $A$ if, for all $\varepsilon>0$, there exists a $\delta>0$ such that whenever $|x-x'|<\delta$ (and $(x,x')\in E$) it follows that $|f(x)-f(x')|<\varepsilon$.
\end{enumerate}
Therefore we see the difference better: continuity is defined at a point $x_0$, whereas uniform continuity is defined on a set $E$. Roughly speaking, uniform continuity requires the existence of a single $\delta>0$ that works for the whole set $E$, and not only near the single point $x_0$.
From this definition we see that any uniformly continuous function is continuous, but the converse is not true (a continuous function is not necessarily uniformly continuous):
If the chosen distance is known (for example the absolute value for scalar functions such that $d=|\cdot|$ and $\delta=|\cdot|$) the previous definition notation obviously changes a little bit:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
The function $f(x) = x^2$ is continuous but not uniformly continuous
on the interval $E = [0,+\infty[$.\\
We prove first that our function $f(x)$ is continuous on $E$. Remember first that:
In our case we can therefore write and check that:
Let us choose $x_0=a-1$ with $a>1$ and $\delta=\min(1,\varepsilon/2a)$ (note that $\delta$ depends on $x_0$ since $a$ does). Choose $x \in E$. Assume $|x-x_0|<\delta$. Then $|x-x_0|<1$ so $x<x_0+1$ so $x,x_0<a$ so:
We prove now that $f(x)$ is not uniformly continuous on $E$, i.e.:
Let $\varepsilon=1$. Choose $\delta>0$. Let $x_0=1/\delta$ and $x=x_0+\delta/2$. Then $|x-x_0|=\delta/2<\delta$ but:
as required.
\end{tcolorbox}
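To make the failure of uniform continuity tangible, here is a hypothetical Python sketch (our own illustration): for the fixed $\varepsilon=1$ it computes, at several points $x_0$, the largest admissible $\delta$ and shows that it shrinks towards zero as $x_0$ grows, so that no single $\delta$ can work on the whole of $E=[0,+\infty[$:
\begin{verbatim}
# Sketch: for f(x) = x^2 and eps = 1, the largest delta that works at x0 is
# the positive solution of 2*x0*delta + delta^2 = eps.
import math

eps = 1.0
for x0 in [0, 1, 10, 100, 1000]:
    delta = -x0 + math.sqrt(x0**2 + eps)   # largest admissible delta at x0
    print(x0, delta)
# delta -> 0 as x0 grows: no single delta fits every x0, hence x -> x^2 is
# continuous but NOT uniformly continuous on [0, +infinity[.
\end{verbatim}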
\subsection{Open and Closed Sets}
\textbf{Definition (\#\mydef):} Consider a set $E$ with a distance $d$. A subset $U$ of $E$ is named "\NewTerm{open subset}" if, for each element of $U$, there is a non-null distance $r$ for which all the elements of $E$ whose distance from this element is less than or equal to $r$, belong to $U$, which gives in mathematical language:
In topology, an open subset is then only an abstract concept generalizing the idea of an open interval in the real line.
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
For recall, the symbol "|" means in this context: satisfies the property...
\end{tcolorbox}
In practice, however, open sets are usually chosen to be similar to the open intervals of the real line. The notion of an open set provides a fundamental way to speak of nearness of points in a topological space, without explicitly having a concept of distance defined.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
The points $(x, y)$ satisfying $x^2 + y^2 = r^2$ are colored blue. The points $(x, y)$ satisfying $x^2 + y^2 < r^2$ are colored red. The red points form an open subset of the plane $\mathbb{R}^2$. The blue points form a boundary set. The union of the red and blue points is a closed set.
\begin{figure}[H]
\centering
\includegraphics{img/analysis/opened_set.jpg}
\end{figure}
\end{tcolorbox}
This definition may perhaps seem complicated but in fact, its real meaning is simpler than it seems. In fact, according to this definition, an open set in a topological space is nothing more than a set of contiguous points and without borders.
The lack of border comes from the condition $r\neq 0$. Indeed, by reductio ad absurdum, if an open set $U$ had an edge, then for each point on it (the edge) it would still be possible to find a point not belonging to $U$ as close as we want to it. It follows that the distance $r$ would then necessarily be zero.
\textbf{Definitions (\#\mydef):}
\begin{enumerate}
\item[D1.] A "\NewTerm{closed subset}" is an "\NewTerm{open with edge}"
\item[D2.] A "\NewTerm{neighborhood}" of a point $E$ is a subset of $E$ containing an open subset containing this point.
\end{enumerate}
The definition of an open set can be simplified by introducing an additional concept, that of "open ball":
\subsubsection{Balls}
Given $x$ an element of $E$:
\textbf{Definition (\#\mydef):} An "\NewTerm{open ball of center $x$ and radius $r>0$}" or "\NewTerm{metric ball of radius $r$ centered at $x$ without border}" is the subset of all the points of $E$ whose the distance $x$ is less than $r$, that we write in general:
An open set can also be defined as a set for which it is possible to define an open ball at each point.
Typically in the real plane where $d$ is the euclidean distance:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/open_set.jpg}
\caption{An open ball of radius $r$, centered at the point $x$}
\end{figure}
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} The open sets thus defined form what we name an "\NewTerm{induced topology}" by the distance $d$ or also a "\NewTerm{metric topology}".\\
\textbf{R2.} We name an "\NewTerm{open cover}" $U$ of $E$, a set of open of $E$ whose union is equal to $E$. In other words: A collection of open sets that collectively cover another set.\\
Formally, if:
is an indexed family of open sets $U_\alpha$, then $C$ is a cover of $X$ if:
Visually in a naive way this gives:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/open_cover.jpg}
\end{figure}
\end{tcolorbox}
\textbf{Definition (\#\mydef):} A "\NewTerm{closed ball}" is similar to an open ball but differs in the sense that we include the elements located at a distance $r$ from the center:
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
For $0<r<r'$ the inclusions $_oB_x^r \subset {}_fB_x^r \subset {}_oB_x^{r'}$ are direct consequences of the definition of the open and closed ball.
\end{tcolorbox}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
The usual distance in $\mathbb{R}$ is given by $d(x,y)=|x-y|$. The balls are there simple intervals. For $x \in \mathbb{R}$ and $r\in \mathbb{R}_{+}^{*}$, we have:
\end{tcolorbox}
\textbf{Definition (\#\mydef):} A "\NewTerm{sphere}" is given by:
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Since by definition $r>0$, open and closed balls are not empty because they contain at least their center. On the other hand, a sphere may be empty!
\end{tcolorbox}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
With $\mathbb{R}^n,\mathbb{C}^n$ we have seen in the previous examples we could set different distances. To distinguish them, we denote then by:
So in $\mathbb{R}^2$ the closed balls with center $O$ and of unit radius, equivalent to the previous three formulations, have the following shapes (remember that $0<r\leq 1$ in this example):
\begin{figure}[H]
\centering
\includegraphics{img/analysis/shape_some_distances.jpg}
\caption{Examples of closed balls of unit radius with different distances}
\end{figure}
\end{tcolorbox}
For example in statistics (see the section of the same name) we also use (among a lot of others) the Chi-2 distance given by:
Or always in (multivariate) statistics (have a look to the Statistics section but also to the Industrial Engineering one) the Mahalanobis distance:
Or in the Error Correcting Codes section we use the Hamming distance given by:
and so on...
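As a quick hypothetical Python sketch (our own illustration) of a few of the distances mentioned in this subsection (Euclidean, Manhattan, Chebyshev and Hamming):
\begin{verbatim}
# Sketch: a few classical distances between two vectors of R^n (and the
# Hamming distance between two strings of equal length).
import math

def d_euclid(x, y):         # d2
    return math.sqrt(sum((a - b)**2 for a, b in zip(x, y)))

def d_manhattan(x, y):      # d1
    return sum(abs(a - b) for a, b in zip(x, y))

def d_chebyshev(x, y):      # d_inf
    return max(abs(a - b) for a, b in zip(x, y))

def d_hamming(x, y):        # number of differing positions
    return sum(a != b for a, b in zip(x, y))

x, y = (1.0, 2.0, 3.0), (4.0, 0.0, 3.0)
print(d_euclid(x, y), d_manhattan(x, y), d_chebyshev(x, y))   # 3.60..., 5.0, 3.0
print(d_hamming("10111", "10010"))                            # 2
\end{verbatim}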
\subsubsection{Partitions}
Now that we have defined the concepts of balls, we can finally (almost) rigorously define the concepts of open and closed intervals (which in a space of more than one dimension are named "partitions") that we have so often used in the section of Functional Analysis and Differential and Integral Calculus.
\textbf{Definition (\#\mydef):} Let $(X,d)$ be a metric space. We say that a subset $A$ of $X$ is "bounded" if there is a closed ball $_fB_{x_0}^r$ such that $A \subseteq {}_fB_{x_0}^r$:
Given the previous note on ball inclusions, it is clear that we can replace the word "closed" by "open". Moreover the triangle inequality implies that the bounded character of $A$ does not depend on the choice of $x_0$ (with an $x_0^{'}$ we simply need to replace $r$ by $r'=r+d\left(x_0,x_0^{'}\right)$).
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
The odd–even topology is the topology where $X = \mathbb{N}$ and:
the unbounded partitions of radius $r<1$. \\
Therefore we see that unless $P$ is trivial, at least one set in $P$ contains more than one point, and the elements of this set are topologically indistinguishable: the topology does not separate points!
\end{tcolorbox}
\textbf{Definitions (\#\mydef):}
\begin{enumerate}
\item[D1.] Let $X$ be a set and $(Y,d)$ a metric space. We say that a function $f: X\mapsto Y$ is "bounded" if its image $f(X)$ is bounded (the case of the sine or cosine function, for example).
\item[D2.] Given $(E, d)$ a metric space, and given $A$ a non-empty subset of $E$. For any $u\in E$ we note $d(u, A)$ and name "\NewTerm{distance $u$ to $A$}", the positive real number:
We extend the concept by writing:
If $A$ and $B$ are two parts (subsets) of $E$ we have respectively (perhaps this is more understandable in this way for some readers...)
The reader must take care here to interpret $d(A,B)$ as the infimum of the distance between the sets $A$ and $B$, because the distance between parts does not always define a distance in the usual way, for example on $\mathcal{P}(\mathbb{R})$.
Indeed, if we take again our famous example:
we have $d(A,B)=0$ when $n \rightarrow 0$ while $A\neq B$.
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} If the reader has well understood the definition of "parts" (and especially the previous example) he has probably noticed that there does not necessarily always exist an $a\in A$ such that $d(u,A)=d(u,a)$. Accordingly, we write:
Moreover, if such an $a$ exists, it is obviously not necessarily unique.\\
\textbf{R2.} It should be remembered that this distance also meets the $4$ axioms of distances (we can give the proof on request)!
\end{tcolorbox}
\item[D3.] Given $(E, d)$ a metric space, and let $A$ be a part (subset) of $E$. We name "\NewTerm{adhesion}" of $A$ and denote by $\text{adh} (A)$ the subset of $E$ defined by:
For example, the adhesion of the part (subset) of rational numbers $\mathbb{Q}$ (part $A$) of $\mathbb{R}$ (the metric space $E$) is $\mathbb{R}$ itself, since any real number is the limit of a sequence of rationals.
Especially, since $\forall u\in E:\quad d(u,\varnothing)=+\infty$, we have $\text{adh}(\varnothing)=\varnothing$, and since $\forall u\in E:\quad d(u,E)=0$, we have $\text{adh}(E)=E$.
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} Any element of the set $\text{adh}(A)$ is named an "adherent point" of $A$.\\
\textbf{R2.} We say that a part $A$ of $E$ is a "\NewTerm{closed part}" if it is equal to its adherence.\\
\textbf{R3.} We say that a part $A$ of $E$ is an "\NewTerm{open part}" if its complementary relatively to $E$:
is closed.
\end{tcolorbox}
\end{enumerate}
It follows from the definitions that (without proof):
and:
with some properties (supposed as very obvious but we can give the proof on request):
\begin{enumerate}
\item[P1.] If $A\subset E$ and $B\subset E$ satisfy $A\subset B$, then we have:
\item[P2.] For all $A\subset E$, any $u\in E$ we have:
The latter property has for corollary (obvious and therefore without proof excepted on request):
If for any $u\in E$, we have $d(u,A)=d(u,B)$ and $A,B\neq \varnothing$, we then have:
\end{enumerate}
\subsubsection{Formal Ball}
The concept of distance from a point to a set gives the possibility to extend the notions of ball and sphere seen previously. We will now see the basic concepts of a "\NewTerm{formal ball}" also named "\NewTerm{generalized ball}".
\textbf{Definitions (\#\mydef):}
\begin{enumerate}
\item[D1.] Given $A\neq \varnothing$ and given a $r>0$. We name "\NewTerm{generalized open ball}" of center $A$ of radius $r$, the following set:
and respectively "\NewTerm{generalized closed ball}":
and respectively a "\NewTerm{generalized sphere}":
\item[D2.] Given $(E, d)$ a metric space and let $A, B$ be two non-empty parts (subsets) of $E$. We denote by $g (A, B)$ and name "\NewTerm{gap}" of $A$ to $B$, the real number greater than or equal to zero such that:
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The triangle inequality $g(A,B)\leq g(A,C)+g(C,B)$ is not valid in the context of gaps. To prove it, a single example that contradicts this inequality is sufficient.
\end{tcolorbox}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
In $\mathbb{R}$ let us take $A=\{0,1\},B=\{2,3\},C=\{1,3\}$ then we have:
\end{tcolorbox}
\end{enumerate}
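Assuming, as is usual, that the gap is the infimum of the pointwise distances, $g(A,B)=\inf\{d(a,b)\ \vert\ a\in A,\ b\in B\}$, the counterexample of the box above can be reproduced with the following hypothetical Python sketch:
\begin{verbatim}
# Sketch (assumption: g(A,B) = inf of d(a,b) over a in A and b in B):
# the triangle inequality fails for the gap on the sets of the example.
def gap(A, B):
    return min(abs(a - b) for a in A for b in B)

A, B, C = {0, 1}, {2, 3}, {1, 3}
print(gap(A, B))                            # 1
print(gap(A, C), gap(C, B))                 # 0 and 0
print(gap(A, B) <= gap(A, C) + gap(C, B))   # False: 1 <= 0 + 0 fails
\end{verbatim}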
\subsubsection{Diameter}
\textbf{Definition}: Given $(E, d)$ a metric space and $A$ a non-empty part (subset) of $E$. We denote $\text{diam}(A)$ and name "\NewTerm{diameter}" of $A$, the positive real number (possibly infinite):
Every non-empty part (subset) $A$ of a metric space satisfying $\text{diam}(A)<+\infty$ will also be said to be "bounded".
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We consider the empty set $\varnothing$ as a bounded set of null diameter.
\end{tcolorbox}
If the whole metric space $(E,d)$ is bounded, we say that the distance $d$ is bounded. For example, the discrete distance is bounded, the usual distance on $\mathbb{R}$ is not.
We also have the following properties (the first two are usually trivial, the third one comes from the definition of the diameter itself):
\begin{enumerate}
\item[P1.] $\text{diam}(A)=0 \Leftrightarrow A=\{a\}$ or $A=\varnothing$
\item[P2.] $A\subset B \Rightarrow \text{diam}(A)\leq \text{diam}(B)$
\item[P3.] $\text{diam}(_fB_x^r)\leq 2r,\text{diam}(_oB_x^r)\leq 2r,\text{diam}(S_x^r)\leq 2r$
Caution!!! Concerning the latter property, the reader must beware of the habit of thinking only with the Euclidean distance. The first common pitfall is to think that the second diameter (that of the open ball) should be strictly smaller, but that would be forgetting that the border has no thickness strictly speaking!
There is also often an understanding problem with $\text{diam}(S_x^r)\leq 2r$. To be convinced just take the discrete distance (which is equal to $1$ for two distinct points, otherwise $0$). Thus, in a metric space where we take $S_x^r$ with $r=1$, we have indeed $\text{diam}(S_x^1)\leq 1$ (that is an interesting case because it is almost completely counter-intuitive).
\item[P4.] $\text{diam}(A\cup B)\leq \text{diam}(A)+g(A,B)+\text{diam}(B)$
To be convinced, in $\mathbb{R}$ take $A=B$, then we have (a trivially strict inequality):
\item[P5.] $A$ is bounded if and only if: $\exists r>0,\exists x\in E:\quad A\subset _oB_x^r$
\end{enumerate}
\textbf{Definition (\#\mydef):} We name "\NewTerm{Hausdorff excess}" or "\NewTerm{Hausdorff distance}" from $X$ to $Y$:
that we found often in the literature with the more condensed notation:
or much more explicitly:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/hausdorff_distance.jpg}
\caption{Components of the calculation of $d_H$ between $X$ and $Y$ (source: Wikipedia)}
\end{figure}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Let us take $X\subset \mathbb{R}^2$ as the unit radius circle centered at the origin and $Y$ as the square circumscribing it. Elementary geometry concepts obviously lead to finding that the Hausdorff distance between the circle and the square is therefore:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/hausdorff_distance_highschool_example.jpg}
\caption{High-school example of a Hausdorff distance in the plane}
\end{figure}
technically:
\end{tcolorbox}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We have generally $e(X,Y)\neq e(Y,X)$ and these quantities may not be finite.
\end{tcolorbox}
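For the circle/square example above, a hypothetical numerical sketch in Python (our own illustration, sampling both boundaries and using the max-min formulation of $d_H$) recovers the value $\sqrt{2}-1\cong 0.414$:
\begin{verbatim}
# Sketch: brute-force Hausdorff distance between finite samples of the unit
# circle and of the square circumscribing it (side 2, centered at the origin).
import math

def excess(X, Y):            # directed excess e(X, Y)
    return max(min(math.dist(x, y) for y in Y) for x in X)

def hausdorff(X, Y):         # d_H = max of the two directed excesses
    return max(excess(X, Y), excess(Y, X))

n = 400
circle = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
          for k in range(n)]
ts = [-1 + 2 * k / n for k in range(n + 1)]
square = ([(t, 1) for t in ts] + [(t, -1) for t in ts] +
          [(1, t) for t in ts] + [(-1, t) for t in ts])

print(hausdorff(circle, square))   # ~ 0.4142 = sqrt(2) - 1
\end{verbatim}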
\subsection{Varieties}
We now introduce the "varieties". These are topological spaces that are "locally as $\mathbb{R}^2$" (our space for example ...).
\textbf{Definitions (\#\mydef):}
\begin{enumerate}
\item[D1.] A "\NewTerm{topological variety of dimension $n$}" is a Hausdorff space $M$ such that for every $p\in M$ there exists an open neighborhood $U\subset M$ with $p\in U$, an open neighborhood $U' \subset \mathbb{R}^n$ and a homeomorphism such that:
\item[D2.] A "\NewTerm{homeomorphism}" between two spaces is a continuous bijection whose inverse is also continuous.
\item[D3.] The pairs $(U,\varphi)$ are named "\NewTerm{maps}", $U$ being the "\NewTerm{domain of the map}" and $\varphi$ the "\NewTerm{coordinate application}". Instead of "map" sometimes we say also "coordinate system".
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We will denote by $\dim(M)$ the dimension of a topological variety. Therefore:
\end{tcolorbox}
\item[D4.] Let $M$ be a topological variety of dimension $n$. A family $A$ of maps of $M$ is named an "\NewTerm{atlas}" if for each $x\in M$, there exists a map $(U, \varphi)\in A$ such that $x\in U$.
\end{enumerate}
If $(U_1,\varphi_1),(U_2,\varphi_2)$ are two maps of $M$ such as $U_1\cap U_2\neq \varnothing$, then the application of map changes:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/homeomorphism_of_maps.jpg}
\end{figure}
is obviously also a homeomorphism.
\subsubsection{Surfaces Homeomorphism}
\textbf{Definition (\#\mydef):} In the mathematical field of topology, a "\NewTerm{homeomorphism}" or "\NewTerm{topological isomorphism}" is a continuous function between topological spaces that has a continuous inverse function. Roughly speaking, a topological space is a geometric object, and a homeomorphism is a continuous stretching and bending of the object into a new shape. Thus, a square and a circle are homeomorphic to each other, but a sphere and a torus are not.
More formally, remember that an application $\varphi: X \mapsto Y$ between two topological spaces is named a homeomorphism if it has the following properties:
\begin{enumerate}
\item $\varphi$ is a bijection (one-to-one and onto)
\item $\varphi$ is continuous
\item The reciprocal function $\varphi^{-1}$ is continuous
\end{enumerate}
Can we say that a square (being a special map in $\mathbb{R}^2$) is homeomorphic to a circle (being another special map in $\mathbb{R}^2$), or a torus to a cup of tea...? If this is possible we must be able to find a closed-form bijective expression between the two surfaces.
As pure theoretical concepts are not very friendly from our point of view, let us begin with a two-dimensional special case. Let us show (prove) first that we can transform all interior points of a square of side $1$ into all interior points of a circle of radius $1$. This is represented by the following well-known figure:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/isomorphism_circle_square.jpg}
\end{figure}
Such mappings have particular interest in industrial design or just simply for communication purposes (Photoshop effects or Statistics charts deformation as we do many times in the R Software):
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{img/analysis/chessboard_isomorphic_circle_square.jpg}
\end{figure}
or for defishing fisheye sensors, pictures or security mirrors:
\begin{figure}[H]
\centering
\includegraphics[scale=1.25]{img/analysis/defishing.jpg}
\end{figure}
Recall that we defined the unit disc as the set:
If we think of the unit disc as a continuum of concentric circles with radii growing from zero to one, we can parameterize the unit disc as the set:
In doing so, we introduced a parameter $t$ which is the distance of the point $(u,v)$ to the origin.
\begin{figure}[H]
\centering
\includegraphics{img/analysis/continnum_disc.jpg}
\end{figure}
In analogy to the circular continuum of the unit disc, one can write the square region $[-1,1] \times [-1,1]$ as the set:
In other words, the square can be considered as a continuum of concentric shrunken FG-squircles (\SeeChapter{see section Analytical Geometry}).
\begin{figure}[H]
\centering
\includegraphics{img/analysis/continnum_square.jpg}
\end{figure}
Topologists denote the fact that the interior points of the two geometries are homeomorphic in this special case using the following notation:
We will now show that $\mathring{\mathcal{D}}\mapsto \mathring{\mathcal{D}}$ as it is the most common case in practice from our point of view. That is to say:
\begin{figure}[H]
\centering
\includegraphics{img/analysis/mapping_circle_to_square.jpg}
\end{figure}
We can establish a correspondence between the unit disc and the square region by mapping every circular contour in the interior of the disc to a squircular contour in the interior of the square. In other words, we map contour curves in the circular continuum of the disc to those in the squircular continuum of the square. This can be done by equating the parameter $t$ of both sets to get the equation:
We name this equation the "\NewTerm{squircularity condition}" for mapping a circular disc to a square region.
It is easy to derive the FG-Squircular mapping by combining the squircularity condition:
That we can also write:
Using radial to cartesian coordinates (\SeeChapter{see section Vector Calculus}):
Therefore by equivalence we get:
After substitution of parameter $t$, we get:
In other words, this is a radial mapping that converts circular contours on the disc to squircular contours on the square.
We shall now derive the inverse equations for the FG-Squircular mapping. But as it is boring to write in \LaTeX{} and it is not used too much in practice, we will omit them for the moment.
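Since the explicit FG-Squircular equations are not reproduced above, let us at least give, as a hypothetical Python sketch of our own, the simpler classical radial homeomorphism between the open square $]-1,1[\times]-1,1[$ and the open unit disc (it is not the FG-Squircular mapping, but it shows concretely that the two regions are homeomorphic):
\begin{verbatim}
# Sketch: the classical radial homeomorphism between the open square
# ]-1,1[ x ]-1,1[ and the open unit disc.  A point p of the square is sent to
# p * (||p||_inf / ||p||_2); the inverse map simply swaps the two norms.
import math

def square_to_disc(x, y):
    if x == 0 and y == 0:
        return (0.0, 0.0)
    s = max(abs(x), abs(y)) / math.hypot(x, y)
    return (s * x, s * y)

def disc_to_square(u, v):
    if u == 0 and v == 0:
        return (0.0, 0.0)
    s = math.hypot(u, v) / max(abs(u), abs(v))
    return (s * u, s * v)

p = (0.9, 0.6)
q = square_to_disc(*p)
print(q)                     # a point strictly inside the unit disc
print(disc_to_square(*q))    # (0.9, 0.6): we recover p, as expected
\end{verbatim}
Both maps are continuous (also at the origin, since the two norms are equivalent) and are inverses of each other, which gives the announced homeomorphism.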
\subsubsection{Differential Varieties}
\textbf{Definitions}:
\begin{enumerate}
\item[D1.] A "\NewTerm{differentiable variety}" is a topological space $M$ where the applications $\varphi$ are of class $\mathcal{C}^{+\infty}$.
\item[D2.] A "\NewTerm{diffeomorphism}" is an application where $\varphi: U\mapsto U' $ where $U,U'$ are open domains of $\mathbb{R}^n$ and if $\varphi$ is a homeomorphism and furthermor $U,U'$ are differentiable.
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
"Differentiable" in this context will always mean of class $\mathcal{C}^{+\infty}$.
\end{tcolorbox}
\item[D3.] Given a topological variety $M:=M^n$ (to simplify the notations), two maps $(U_1,\varphi_1),(U_2,\varphi_2)$ of $M$ are named "\NewTerm{compatible maps}" (more precisely: compatible of class $\mathcal{C}^{+\infty}$) if one of these two properties is satisfied:
\begin{enumerate}
\item[P1.] $U_1\cap U_2\neq \varnothing$ and the application $\varphi_2\circ \varphi_1^{-1}$ of map changes is a diffeomorphism.
\item[P2.] $U_1\cap U_2 =\varnothing$
\end{enumerate}
An atlas $A$ of $M$ is differentiable if all maps of $A$ are compatible between them.
\end{enumerate}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Given a differentiable atlas, it is sometimes necessary to complete it: we say that a map of $M$ is compatible with a differentiable atlas $A$ if it is compatible with every map of $A$. An atlas $A$ is a "\NewTerm{maximal atlas}" if every map compatible with $A$ already belongs to $A$. A maximal atlas is named a "\NewTerm{differentiable structure}".
\end{tcolorbox}
\begin{flushright}
\begin{tabular}{l c}
\circled{70} & \pbox{20cm}{\score{4}{5} \\ {\tiny 12 votes, 71.67\%}}
\end{tabular}
\end{flushright}
%to make section start on odd page
\newpage
\thispagestyle{empty}
\mbox{}
\section{Measure Theory}
\begin{tcolorbox}[colback=red!5,borderline={1mm}{2mm}{red!5},arc=0mm,boxrule=0pt]
\bcbombe Caution! The level of abstraction and of motivation required for reading and understanding this section is quite high for engineers (target audience of this book for recall). The reader should be comfortable with the concepts seen in the section Set Theory as well as the one on Topology. We also apologize for the current lack of figures.
\end{tcolorbox}
\lettrine[lines=4]{\color{BrickRed}T}he measure, in the topological sense, will allow us to generalize the elementary notion of measure of a segment or area (in the Riemann sense, for example) and is inseparable from the new theory of integration that Lebesgue built from the years 1901-1902; we will address it here to build mathematical tools much more powerful than the simple Riemann integral (\SeeChapter{see section Differential and Integral Calculus}), with practical and numerical examples in MATLAB\textsuperscript{TM}.
The philosophers of science who developed measurement theory were largely concerned with epistemic questions like: we can't observe correlations between physical objects and real numbers, so how can the use of real numbers be justified in terms of things we can observe? Indeed, privileging a single unit of mass involves real numbers in the facts of mass. Why is the latter bad?:
\begin{enumerate}
\item Real numbers are abstract and therefore causally inert
\item Real numbers don't fundamentally exist
\item Real numbers are constructed entities, and constructed entities can't be involved in fundamental facts
\end{enumerate}
Measure Theory will also allow us to rigorously define the concept of measurement (no matter what the measure is) and so return to the important results of the study of probabilities (\SeeChapter{see section Probabilities}). Indeed, we will see (the vocabulary that follows is defined further below) why $(U, A, P)$ is a "\NewTerm{probability space}" where $A$ is in fact a "\NewTerm{tribe}" on $U$ and $P$ a measure on the measurable space $(U, A)$.
\subsection{Measurable Spaces}
When in mathematics we calculate derivatives, primitives or simply count stuff, we implicitly carry out a measure of an object or set of objects. Rigorously, mathematicians want to define how the measured thing can be structured, how to make a measurement of it and the resulting properties!
\textbf{Definitions (\#\mydef):}
\begin{enumerate}
\item[D1.] Let $E$ be a set, a "\NewTerm{tribe}" on $E$ is a family $\mathcal{A}$ (this notation comes from the fact that many people speak of "\textbf{A}lgebra sets" instead of "tribe") of subsets of $E$ satisfying the following axioms:
\begin{enumerate}
\item[A1.] $E\in \mathcal{A}$ (see examples below - $E$ being one of the possible elements of $\mathcal{A}$).
\item[A2.] If $A$ is a member of a tribe then:
This means that $\mathcal{A} $ is "\NewTerm{stable by transition to complementary}". This axiom implies that the empty set is always an element of a tribe!
\item[A3.] For any sequence $(A_n)$ of elements of $\mathcal{A}$ we have:
We then say that $\mathcal{A}$ is then "\NewTerm{stable by countable union}".
\end{enumerate}
For example, the graduations of a simple measuring ruler... satisfy these three axioms!
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} We write $E\in \mathcal{A}$ because we consider with this notation $E$ not anymore as a subset of $\mathcal{A}$ but as an element of $\mathcal{A}$!\\
\textbf{R2.} The uncountable cases are typical of topology, statistics or integral calculus!
\end{tcolorbox}
\item[D2.] The pair $(E,\mathcal{A})$ is named "\NewTerm{measurable space}" and we say that the elements of $\mathcal{A}$ are "\NewTerm{measurable sets}".
\item[D3.] If in the third axiom we require that $\mathcal{A}$ is stable under finite (uncountable) union then we impose the more general notion of "\NewTerm{$\sigma$-algebra}". Thus, a tribe is necessarily contained in a $\sigma$-algebra (but the opposite is not true just because the axiom is stronger) such that we can write:
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
In the field of probabilities, $E$ is assimilated to the Universe of events and $\mathcal{A}$ to a family of events, and we speak then of a "\NewTerm{probabilistic space}" or simply of... a "\NewTerm{measurable space}".
\end{tcolorbox}
\end{enumerate}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
E1. Given $E=\{1,2\}$ a set of cardinal 2... The only two tribes $\mathcal{A}$ that satisfy the three axioms are:
\[\mathcal{A}_1=\{\varnothing,E\}\qquad \mathcal{A}_2=\mathcal{P}(E)=\{\varnothing,\{1\},\{2\},E\}\]
There are no other tribes for the set $E$ than these two (the trivial one and the maximal one), because we must not forget that the union of any elements of the tribe must also be in the tribe (axiom A3), as well as the complement of any member (axiom A2).\\
We also see from this example that if $E$ is a set then $\{E,\varnothing\}$ is indeed a tribe!\\
E2. The set of parts of $E$, denoted $\mathcal{P}(E)$, is also a tribe (cf. the previous example).
\end{tcolorbox}
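As a small numerical illustration of example E1 (a sketch in Python, used here as an assumed stand-in for MATLAB\textsuperscript{TM}), one can enumerate by brute force all families of subsets of $E=\{1,2\}$ satisfying the axioms A1 to A3; for a finite set, countable unions reduce to finite ones:
\begin{verbatim}
from itertools import chain, combinations

E = frozenset({1, 2})
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(E), r) for r in range(len(E) + 1))]

def is_tribe(family, E):
    family = set(family)
    if E not in family:                             # axiom A1
        return False
    if any(E - A not in family for A in family):    # axiom A2
        return False
    return all(A | B in family                      # axiom A3 (finite case)
               for A in family for B in family)

# every possible family of subsets of E, keeping only the tribes
families = chain.from_iterable(combinations(subsets, r)
                               for r in range(1, len(subsets) + 1))
tribes = [set(f) for f in families if is_tribe(f, E)]
for t in tribes:
    print(sorted(tuple(sorted(s)) for s in t))
# exactly two tribes are printed: {empty set, E} and P(E)
\end{verbatim}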
A tribe $\mathcal{A}$ is also "\NewTerm{stable by union of the complementaries}". Indeed, if $(A_n)$ is a sequence of elements of $\mathcal{A}$ we have (this is trivial to check on, for example, the previous first example):
A tribe is also "\NewTerm{stable by finite intersection}", that is to say (trivial also by taking the previous first example):
which leads to the property that a tribe is stable by finite unions and intersections. In particular, if we take two elements of a tribe $A,B\in \mathcal{A}$, then $A\setminus B\in \mathcal{A}$, where we recall that (\SeeChapter{see section Set Theory}):
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Most readers should easily see, using the previous first example, that if $(\mathcal{A}_i)_I$ is a family of tribes on $E$ then $\bigcap_I \mathcal{A}_i$ is also a tribe (the verification is almost immediate).
\end{tcolorbox}
Well, it is nice to play with potatoes and sub-potatoes... and their complementaries, but let us continue...
\textbf{Definition (\#\mydef):} Given $E$ a set and $\mathcal{B}$ a family of subsets of $E$, that is to say $\mathcal{B}\subseteq \mathcal{P}(E)$. We denote by definition:
\[\sigma(\mathcal{B})=\bigcap\{\mathcal{A}\ |\ \mathcal{A}\ \mathrm{is\ a\ tribe\ on}\ E\ \mathrm{and}\ \mathcal{B}\subseteq\mathcal{A}\}\]
the "\NewTerm{generated tribe}" by $\mathcal{B}$. Therefore $\sigma(\mathcal{B})$ is by definition the smallest tribe containing $\mathcal{B}$ (and by extension the smallest such tribe on $E$).
Below are three small examples that give the opportunity to check whether what precedes has been well understood, and that also highlight important results for what will follow:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
E1. Given a set $E$ and $A\subset E,A \neq E$ and also $\mathcal{B}=\{A\}$, then (when $A$ is seen as a subset of $E$ as given by the statement of a family of subsets!):
\[\sigma(\mathcal{B})=\{\varnothing,A,A^c,E\}\]
E2. If $\mathcal{A}$ is a tribe on $E$ then:
\[\sigma(\mathcal{A})=\mathcal{A}\]
E3. Given $E=\{1,2,3,4\}$ and $A=\{\{1,2\},\{3\}\}$ we then have (take care, because now $A$ is a family of parts (subsets) and not only a unique subset!) the following generated tribe:
\[\sigma(A)=\{\varnothing,\{3\},\{4\},\{1,2\},\{3,4\},\{1,2,3\},\{1,2,4\},E\}\]
Rather than determining this tribe by seeking the smallest tribe $\mathcal{P}(E)$ containing $A$ (which would be laborious) we play with the axioms defining a tribe to easily find it.
So we indeed find in $\sigma(A)$ at least the obligatory empty set $\varnothing$ and also:
following the axiom A1 and:
itself by the definition of $\sigma(A)$ and the complementaries of:
following the axiom A2 and also the unions:
following the axiom A3.
\end{tcolorbox}
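The generated tribe of example E3 can also be obtained mechanically, by closing the family under complements and unions until nothing new appears; the following Python sketch (an assumed numerical illustration, not part of the formal development) does exactly that:
\begin{verbatim}
E = frozenset({1, 2, 3, 4})
generators = [frozenset({1, 2}), frozenset({3})]

def generated_tribe(E, generators):
    # Close the family under complement and (finite) union until stable.
    tribe = {frozenset(), E} | set(generators)
    while True:
        new = {E - A for A in tribe}
        new |= {A | B for A in tribe for B in tribe}
        if new <= tribe:
            return tribe
        tribe |= new

sigma = generated_tribe(E, generators)
print(len(sigma))        # 8 measurable sets
for s in sorted(sigma, key=lambda x: (len(x), sorted(x))):
    print(sorted(s))
\end{verbatim}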
\textbf{Definition}: Let $E$ be a topological space (\SeeChapter{see section Topology}). We denote by $\mathcal{B}(E)$ the tribe generated by the open sets of $E$. $\mathcal{B}(E)$ is named the "\NewTerm{borelian tribe}" of $E$. The elements of $\mathcal{B}(E)$ are named the "\NewTerm{borelians}" of $E$.
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} The notion of borelian tribe is especially interesting because it is necessary for the definition of "Lebesgue tribe" and afterwards to the "Lebesgue measure" that will lead us to define the famous "Lebesgue integral"!\\
\textbf{R2.} The tribe $\mathcal{B}(E)$ being stable by going to the complementary, it also contains all closed subsets.\\
\textbf{R3.} If $E$ is a topological space with a finite basis, $\mathcal{B}(E)$ is generated by the open sets of the basis.
\end{tcolorbox}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
If $\mathbb{R}$ designates the space of real numbers provided with the Euclidean topology (\SeeChapter{see section Topology}), the family of open intervals with rational bounds is a "\NewTerm{countable base}" (given the bounds...) of $\mathbb{R}$ and therefore generates $\mathcal{B}(\mathbb{R})$. The same holds for $\mathbb{R}^d$, with for countable basis the family of open boxes with rational bounds.
\end{tcolorbox}
\begin{theorem}
Let us now consider a dense set $\mathcal{S}$ (\SeeChapter{see section Topology}) in $\mathbb{R}$. The following families of intervals with bounds in $\mathcal{S}$ generate $\mathcal{B}(\mathbb{R})$:
\end{theorem}
\begin{dem}
Given (the family of open subsets):
We have obviously:
Furthermore:
Therefore the intervals of the type $[a,b[$ with $a$ and $b$ in $\mathcal{S}$ also belong to $\sigma(\mathcal{F})$. Therefore, if we generalize, with $x<y$, there exists a sequence $(a_n)$ of elements of $\mathcal{S}$ decreasing to $x$ and a sequence $(b_n)$ of elements of $\mathcal{S}$ increasing to $y$ such that:
which brings, in the same way as $E\in \mathcal{A}$, that $\mathcal{B}(\mathbb{R})\subseteq\sigma(\mathcal{F})$. The other cases can be treated analogously.
\begin{flushright}
$\square$ Q.E.D.
\end{flushright}
\end{dem}
\begin{theorem}
Given $(E, \mathcal{A})$ a measurable space and $A\subseteq E$ (where $A$ is therefore considered as a subset and not as an element!). The family $\{A\cap B\,|\, B\in \mathcal{A}\}$ is a tribe on $A$, named the "\NewTerm{trace tribe}" of $\mathcal{A}$ on $A$, which we will denote by $A\cap \mathcal{A}$. Furthermore, if $A\in \mathcal{A}$, the trace tribe is formed by the measurable elements contained in $A$.
\end{theorem}
\begin{dem}
We will do a proof by example (...yes, it is not a real proof...). For this we check the three points that define a tribe:
\begin{enumerate}
\item $A=(E\cap A) \Rightarrow A\in (A\cap \mathcal{A})$
\item Given $B\in\mathcal{A}$, $A \setminus (A\cap B)=A\setminus B=A \cap B^c$ and therefore $A\setminus (A\cap B)\in (A\cap \mathcal{A})$
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Given $E=\{1,2,3\}$ then (a tribe among others - do not forget the stability by union!):
Let us choose $A=\{1,2\},B=\{2,3\}$ (it is obvious that $\{A\cap B|B\in \mathcal{A}\}$ is a tribe on $A$). Then:
and we indeed have $\{1\}\in A$ and also $A\in (A\cap \mathcal{A})$.
\end{tcolorbox}
\item Given $(A\cap B_n)$ a sequence of elements of $A\cap \mathcal{A}\; (B_n\in \mathcal{A})$ then:
The last statement of the proposition will be supposed as obvious (if not, let us know!).
\end{enumerate}
\begin{flushright}
$\square$ Q.E.D.
\end{flushright}
\end{dem}
Given now $E$ a set, $\mathcal{C}$ a family of subsets of $E$ and $A\subseteq E$ non empty. We denote by $A\cap \mathcal{C}$ the trace of $\mathcal{C}$ on $A$ and by $\sigma_A(A\cap \mathcal{C})$ the tribe generated on $A$. Therefore:
\[A\cap\sigma(\mathcal{C})=\sigma_A(A\cap\mathcal{C})\]
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Given the set $E=\{1,2,3,4\},\mathcal{C}=\{\{1,2\},\{3\}\},A=\{3,4\}$ then:
and let us check that $A\cap \sigma(\mathcal{C})=\sigma_A(A\cap \mathcal{C})$:
So the equality is satisfied!
\end{tcolorbox}
A trivial corollary of this equality is that if we consider a topological space $E$ and $A\subseteq E$ with the induced topology, then:
\[\mathcal{B}(A)=A\cap\mathcal{B}(E)\]
We will study $\sigma$-algebras in more detail in measure theory, but first let us recall that a tribe (sometimes named an "algebra of sets") on $E$ must satisfy the following properties:
\begin{enumerate}
\item[P1.] Has to contain $E$
\item[P2.] Must be stable by the complementary
\item[P3.] Must be stable by countable union or intersection
\end{enumerate}
and a $\sigma$-algebra on $E$ is less restrictive than a tribe as it has to satisfy:
\begin{enumerate}
\item[P1.] Has to contain $E$
\item[P2.] Must be stable by the complementary
\item[P3.] Must be stable by finite (but not necessarily countable) union or intersection
\end{enumerate}
Let us recall (\SeeChapter{see section Set Theory}) that if $E$ is a set, then for every $A,B\subseteq E$ we define the symmetric difference $A\Delta B$ between $A$ and $B$ by:
\[A\Delta B=(A\setminus B)\cup(B\setminus A)\]
Trivial properties are as follows:
\begin{enumerate}
\item[P1.] A $\sigma$-algebra is stable by symmetric difference ($A,B\in \mathcal{A}$ we have $A\Delta B\in \mathcal{A}$)
\item[P2.] $A\Delta B=B\Delta A$
\item[P3.] $A^c\Delta B^c=A\Delta B$
\item[P4.] $A\Delta B=(A\cup B)\setminus (A\cap B)$
\end{enumerate}
\begin{theorem}
If $\mathcal{B}$ is a $\sigma$-algebra over $E$, then $(\mathcal{B},\Delta,\cap)$ is a "\NewTerm{Boolean ring}" (or "Boolean algebra" but be careful with the term "algebra" here which can cause confusion with the corresponding structure in Set theory) with $\varnothing$ and $E$ as neutral "additive" element ($\Delta$) and respectively" multiplicative" ($\cap$).
\end{theorem}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
For reminders on the items listed in the preceding paragraph, the reader can refer to the section of Set Theory and the subsection of Boolean Algebra (\SeeChapter{see section Formal Logic Systems}).
\end{tcolorbox}
\begin{dem}
The "addition" $\Delta$ is associative because developing we get (this can verified by an arrow diagram if needed - the "potatoes"):
and the latter expression is stable by permutation (commutation) of $A$ and $C$ (same method of verification). Therefore:
We check that $\varnothing$ is neutral with respect to the symmetric difference (the proof that $E$ is neutral with respect to the intersection is obvious). It is trivial that:
\[A\Delta\varnothing=\varnothing\Delta A=A\qquad\mathrm{and}\qquad A\Delta A=\varnothing\]
$(\mathcal{B},\Delta,\varnothing)$ is therefore indeed an Abelian group with respect to the law $\Delta$ (symmetric difference).
Finally $\cap$ is distributive with respect to $\Delta$. Indeed:
This makes $(\mathcal{B},\Delta,\cap)$ indeed a ring (moreover, a commutative ring!).
\begin{flushright}
$\square$ Q.E.D.
\end{flushright}
\end{dem}
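As a quick numerical sanity check of this theorem (a brute-force Python sketch over the subsets of a three-element set; it is illustrative only and not a substitute for the proof above), one can verify the associativity of $\Delta$, the neutrality of $\varnothing$ and $E$, the fact that every element is its own inverse, and the distributivity of $\cap$ over $\Delta$:
\begin{verbatim}
from itertools import chain, combinations, product

E = frozenset({1, 2, 3})
B = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(E), r) for r in range(len(E) + 1))]

sd = lambda a, b: (a - b) | (b - a)      # symmetric difference of two sets

for a, b, c in product(B, repeat=3):
    assert sd(sd(a, b), c) == sd(a, sd(b, c))       # "addition" is associative
    assert (a & b) & c == a & (b & c)               # intersection is associative
    assert a & sd(b, c) == sd(a & b, a & c)         # distributivity
for a in B:
    assert sd(a, frozenset()) == a and a & E == a   # neutral elements
    assert sd(a, a) == frozenset()                  # each element is its own inverse
print("all Boolean-ring identities hold on P({1,2,3})")
\end{verbatim}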
\pagebreak
\subsubsection{Monotone Classes}
\textbf{Definition}: Let $E$ be a set. A "\NewTerm{monotone class}" on $E$ is a family $\mathcal{C}$ of subsets of $E$ satisfying the following axioms:
\begin{enumerate}
\item[A1.] $E\in \mathcal{C}$
\item[A2.] $A,B\in \mathcal{C}$ and $A\subseteq B \Rightarrow B\setminus A\in \mathcal{C}$
\item[A3.] If $(A_n)$ is an increasing sequence (take care to the word "increasing"!) of elements of $\mathcal{C}$ then $\displaystyle\bigcup_{i=1}^{+\infty} A_i\in \mathcal{C}$ (stable by countable increasing union).
\end{enumerate}
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} An increasing sequence of sets is: $A_1\subseteq A_2\subseteq A_3 ...$\\
\textbf{R2.} The first two axioms imply that $\mathcal{C}$ is stable by passage to the complementary.\\
\textbf{R3.} The three axioms together imply that the monotone class is stable by decreasing intersection. A way to check this is to take the complement of each element of a decreasing sequence so as to fall back on an increasing sequence, and vice versa.
\end{tcolorbox}
Every $\sigma$-algebra is a monotone class, because $\sigma$-algebras are closed under arbitrary countable unions and intersections.
Therefore:
In the same way as for the tribes, if we consider a family $(\mathcal{C}_i)_I$ of monotone class on $E$. Then $\bigcap_I \mathcal{C}_i$ is a monotone class (the proof is verified immediately by the three previous axioms).
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Given $E$ a set, $\mathcal{P}(E)$ is a monotone class on $E$. More generally, a tribe is a monotone class.
Equivalently to tribes, let us consider a set $E$ and $\mathcal{C}\subseteq \mathcal{P}(E)$. Given $\mathcal{S}$ the family of all monotone classes containing $\mathcal{C}$, $\mathcal{S}$ is not empty because $\mathcal{P}(E)\in \mathcal{S}$. We denote by:
\[\mathcal{M}(\mathcal{C})=\bigcap_{\mathcal{M}\in\mathcal{S}}\mathcal{M}\]
the monotone class generated by $\mathcal{C}$. Therefore $\mathcal{M}(\mathcal{C})$ is the smallest monotone class containing $\mathcal{C}$ (and satisfying obviously the previous axioms).
\end{tcolorbox}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
If $E$ is a set and $\mathcal{C}\subseteq \mathcal{P}(E)$ then $\mathcal{M}(\mathcal{C})\subseteq \sigma(\mathcal{C})$, as $\sigma(\mathcal{C})$ is a monotone class (and also a tribe) containing $\mathcal{C}$, and it therefore also contains $\mathcal{M}(\mathcal{C})$ (see the examples with tribes).
\end{tcolorbox}
\begin{theorem}
Given $E$ a set. If $\mathcal{C}$ is a family of parts of $E$ that we assume to be stable by finite intersection, then $\mathcal{M}(\mathcal{C})=\sigma(\mathcal{C})$ (we then have to prove that the smallest tribe generated by $\mathcal{C}$ is equal to the smallest monotone class generated by $\mathcal{C}$). If we do not assume that $\mathcal{C}$ is stable by finite intersection, we would not necessarily have the equality!
\end{theorem}
\begin{dem}
As already said, $\mathcal{M}(\mathcal{C})\subseteq \sigma(\mathcal{C})$ (which is trivial). We will first prove that $\mathcal{M}(\mathcal{C})$ is a tribe on $E$. For this it is sufficient to show that $\mathcal{M}(\mathcal{C})$ is stable by countable union (and not only by union of increasing sequences of elements!).
Let us consider the following families for the proof:
By the previous definitions $\mathcal{M}_1\subseteq \mathcal{C}$ but $\mathcal{C}$ being (imposed) stable by finite intersections implies that $\mathcal{C}\subseteq \mathcal{M}_1$ and therefore (it is the same reasoning as for the tribes):
$\mathcal{M}_1$ is a monotone class: indeed $E\in \mathcal{M}_1$, and if $A_1,A_2 \in \mathcal{M}_1$ with $A_1\subseteq A_2$ (second axiom), then:
and therefore (which supports the fact that the other elements $(A_n)$ satisfy the previous relation):
If $(A_n)$ is an increasing sequence of elements of $\mathcal{M}_1$ then:
as $(A_n\cap B)$ is an increasing sequence.
Therefore $\mathcal{M}_1$ is indeed a monotone class and by $\mathcal{C}\subseteq \mathcal{M}_1 \subseteq \mathcal{M}(\mathcal{C})$, we therefore have:
The latter equality implies $\mathcal{C}\subseteq \mathcal{M}_2$. As for $\mathcal{M}_1$, we show that $\mathcal{M}_2$ is a monotone class and therefore $\mathcal{M}_2=\mathcal{M}(\mathcal{C})$, which means by extension that $\mathcal{M}(\mathcal{C})$ is stable by finite intersections.
$\mathcal{M}(\mathcal{C})$ being stable by complementary, this leads to $\mathcal{M}(\mathcal{C})$ being, as we have just proved, stable by finite unions (but we want to prove that it is stable by countable union!).
Given now a sequence $(A_n)$ of elements of $\mathcal{M}(\mathcal{C})$. We consider the sequence:
\[B_n=\bigcup_{k=1}^{n}A_k\]
$(B_n)$ is an increasing sequence of elements of $\mathcal{M}(\mathcal{C})$ (by stability under finite unions), therefore:
\[\bigcup_{n=1}^{+\infty}B_n\in\mathcal{M}(\mathcal{C})\]
but:
\[\bigcup_{n=1}^{+\infty}B_n=\bigcup_{n=1}^{+\infty}A_n\]
Therefore:
\[\bigcup_{n=1}^{+\infty}A_n\in\mathcal{M}(\mathcal{C})\]
Therefore $\mathcal{M}(\mathcal{C})$ is stable by countable union and finally $\mathcal{M}(\mathcal{C})$ is a tribe. But since it is then a tribe containing $\mathcal{C}$, we have $\sigma(\mathcal{C})\subseteq\mathcal{M}(\mathcal{C})$, and this brings us to $\mathcal{M}(\mathcal{C})=\sigma(\mathcal{C})$.
\begin{flushright}
$\square$ Q.E.D.
\end{flushright}
\end{dem}
We will later see some important applications of this theorem (but first we want to improve the above text with figures and simpler, more practical examples!).
\begin{flushright}
\begin{tabular}{l c}
\circled{50} & \pbox{20cm}{\score{4}{5} \\ {\tiny 17 votes, 76.47\%}}
\end{tabular}
\end{flushright} | {
"alphanum_fraction": 0.7416640189,
"avg_line_length": 62.9816564758,
"ext": "tex",
"hexsha": "e9ce2a0b958982b7500e15e22c6606417dccc549",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "71a881b8dfdf0ac566c59442244e6ed5f9a2c413",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "lefevred/Opera_Magistris_Francais_v3",
"max_forks_repo_path": "Chapter_Analysis.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "71a881b8dfdf0ac566c59442244e6ed5f9a2c413",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "lefevred/Opera_Magistris_Francais_v3",
"max_issues_repo_path": "Chapter_Analysis.tex",
"max_line_length": 832,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "71a881b8dfdf0ac566c59442244e6ed5f9a2c413",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "lefevred/Opera_Magistris_Francais_v3",
"max_stars_repo_path": "Chapter_Analysis.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 61892,
"size": 226608
} |
\section{Inventory}
\label{sec:inventory}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\textwidth]{images/Inventory_Overview.eps}
\label{inventory_overview}
\caption{Inventory - Class Overview}
\end{figure}
An inventory is a place, where products are stored.
In \salespoint{}, an abstract representation is given by the \code{Inventory} interface and its implementing class \code{PersistentInventory}.
The interface declares methods to add, remove and find products.
Because an inventory contains specific product instances, \code{PersistentInventory} aggregates \code{PersistentProductInstance}s.
\code{PersistentProductInstance}s can be retrieved from \code{PersistentInventory} by specifying a \code{SerialNumber} or a \code{ProductIdentifier}.
A \code{SerialNumber} is used to reference a specific \code{ProductInstance}.
A \code{ProductIdentifier} identifies a \code{Product} uniquely, thus all \code{PersistentProductInstance}s of the \code{PersistentProduct} specified by the supplied \code{ProductIdentifier} are returned.
Additionally an \code{Iterable<ProductFeature>} can be supplied to the \code{find()}-method along with a \code{ProductIdentifier} to retrieve all instances of a product, where the \code{ProductFeature}s match exactly those specified.
Matching a set of \code{ProductFeatures} against a \code{PersistentProductInstance} is hard to express in JPQL or Criteria Queries (see Section \ref{sec:jpa}).
Therefore, only the \code{ProductIdentifier} is used to build a Criteria Query, which is executed on the database.
Selecting only those \code{PersistentProductInstances} which match the specified \code{ProductFeature}s is done in Java code.
%\code{Inventory} aggregates \code{Product}s, \code{PersistentInventory} aggregates \code{PersistentProduct}s
%A shop must store products in an inventory, because the customer should get your good on order very quickly. If there are no amount of this product inside, it must be ordered by its producer.
%This procedure can be implemented by the \code{PersistentInventory}-class. This class is an implementation of the interface \code{Inventory}, to used its functionality and also
%to persist the items of your inventory.\\
%With the methods of the \code{Inventory}-class you can add one or many \code{Products} to the inventory, you can remove a product from it or you can checked, whether a product exist in it.
%Also you can find \code{Products} with several options, if you know the \code{ProductIdentifier}, their \code{productFeatures} or the classes of them.
| {
"alphanum_fraction": 0.8022754021,
"avg_line_length": 84.9666666667,
"ext": "tex",
"hexsha": "11b5b52467953700463b2aeda15bbd9817821a4f",
"lang": "TeX",
"max_forks_count": 37,
"max_forks_repo_forks_event_max_datetime": "2021-12-15T17:29:33.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-10-21T22:26:08.000Z",
"max_forks_repo_head_hexsha": "5e363f25501d9241d5e9d0913e3f7ca4abd3010a",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "olivergierke/salespoint",
"max_forks_repo_path": "doc/core/inventory.tex",
"max_issues_count": 220,
"max_issues_repo_head_hexsha": "5e363f25501d9241d5e9d0913e3f7ca4abd3010a",
"max_issues_repo_issues_event_max_datetime": "2022-03-08T16:20:42.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-10-03T16:05:19.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "olivergierke/salespoint",
"max_issues_repo_path": "doc/core/inventory.tex",
"max_line_length": 233,
"max_stars_count": 111,
"max_stars_repo_head_hexsha": "e99e67b70008498804c7021d6408f3114d07e257",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "svenkarol/salespoint",
"max_stars_repo_path": "doc/core/inventory.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-09T12:29:38.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-10-09T22:54:26.000Z",
"num_tokens": 577,
"size": 2549
} |
% !TEX root = ../Main.tex
\chapter{My First Content Chapter}
\label{chapterlabel2}
% This just dumps some pseudolatin in so you can see some text in place.
\blindmathpaper | {
"alphanum_fraction": 0.7514450867,
"avg_line_length": 24.7142857143,
"ext": "tex",
"hexsha": "7297feddf6c80052b3b240e07b15c1bd6d533125",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6e311265a339191273579b4cd2944adb3808307f",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "padraic-padraic/ucl-latex-thesis-templates",
"max_forks_repo_path": "Text_Files/Chapter2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6e311265a339191273579b4cd2944adb3808307f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "padraic-padraic/ucl-latex-thesis-templates",
"max_issues_repo_path": "Text_Files/Chapter2.tex",
"max_line_length": 72,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "6e311265a339191273579b4cd2944adb3808307f",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "padraic-padraic/ucl-latex-thesis-templates",
"max_stars_repo_path": "Text_Files/Chapter2.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 46,
"size": 173
} |
% This is "sig-alternate.tex" V2.0 May 2012
% This file should be compiled with V2.5 of "sig-alternate.cls" May 2012
%
% This example file demonstrates the use of the 'sig-alternate.cls'
% V2.5 LaTeX2e document class file. It is for those submitting
% articles to ACM Conference Proceedings WHO DO NOT WISH TO
% STRICTLY ADHERE TO THE SIGS (PUBS-BOARD-ENDORSED) STYLE.
% The 'sig-alternate.cls' file will produce a similar-looking,
% albeit, 'tighter' paper resulting in, invariably, fewer pages.
%
% ----------------------------------------------------------------------------------------------------------------
% This .tex file (and associated .cls V2.5) produces:
% 1) The Permission Statement
% 2) The Conference (location) Info information
% 3) The Copyright Line with ACM data
% 4) NO page numbers
%
% as against the acm_proc_article-sp.cls file which
% DOES NOT produce 1) thru' 3) above.
%
% Using 'sig-alternate.cls' you have control, however, from within
% the source .tex file, over both the CopyrightYear
% (defaulted to 200X) and the ACM Copyright Data
% (defaulted to X-XXXXX-XX-X/XX/XX).
% e.g.
% \CopyrightYear{2007} will cause 2007 to appear in the copyright line.
% \crdata{0-12345-67-8/90/12} will cause 0-12345-67-8/90/12 to appear in the copyright line.
%
% ---------------------------------------------------------------------------------------------------------------
% This .tex source is an example which *does* use
% the .bib file (from which the .bbl file % is produced).
% REMEMBER HOWEVER: After having produced the .bbl file,
% and prior to final submission, you *NEED* to 'insert'
% your .bbl file into your source .tex file so as to provide
% ONE 'self-contained' source file.
%
% ================= IF YOU HAVE QUESTIONS =======================
% Questions regarding the SIGS styles, SIGS policies and
% procedures, Conferences etc. should be sent to
% Adrienne Griscti ([email protected])
%
% Technical questions _only_ to
% Gerald Murray ([email protected])
% ===============================================================
%
% For tracking purposes - this is V2.0 - May 2012
\documentclass{sig-alternate}
\begin{document}
%
% --- Author Metadata here ---
\conferenceinfo{SECrus}{2017, St.Petersburg, Russia}
%\CopyrightYear{2007} % Allows default copyright year (20XX) to be over-ridden - IF NEED BE.
%\crdata{0-12345-67-8/90/01} % Allows default copyright data (0-89791-88-6/97/05) to be over-ridden - IF NEED BE.
% --- End of Author Metadata ---
\title{On development of a framework\\for massive source code analysis\\
using static code analyzers }
\subtitle{[Extended Abstract]
\titlenote{A full version of this paper is available as
\textit{Author's Guide to Preparing ACM SIG Proceedings Using
\LaTeX$2_\epsilon$\ and BibTeX} at
\texttt{www.acm.org/eaddress.htm}}}
%
% You need the command \numberofauthors to handle the 'placement
% and alignment' of the authors beneath the title.
%
% For aesthetic reasons, we recommend 'three authors at a time'
% i.e. three 'name/affiliation blocks' be placed beneath the title.
%
% NOTE: You are NOT restricted in how many 'rows' of
% "name/affiliations" may appear. We just ask that you restrict
% the number of 'columns' to three.
%
% Because of the available 'opening page real-estate'
% we ask you to refrain from putting more than six authors
% (two rows with three columns) beneath the article title.
% More than six makes the first-page appear very cluttered indeed.
%
% Use the \alignauthor commands to handle the names
% and affiliations for an 'aesthetic maximum' of six authors.
% Add names, affiliations, addresses for
% the seventh etc. author(s) as the argument for the
% \additionalauthors command.
% These 'additional authors' will be output/set for you
% without further effort on your part as the last section in
% the body of your article BEFORE References or any Appendices.
\numberofauthors{3} % in this sample file, there are a *total*
% of EIGHT authors. SIX appear on the 'first-page' (for formatting
% reasons) and the remaining two appear in the \additionalauthors section.
%
\author{
% You can go ahead and credit any number of authors here,
% e.g. one 'row of three' or two rows (consisting of one row of three
% and a second row of one, two or three).
%
% The command \alignauthor (no curly braces needed) should
% precede each author name, affiliation/snail-mail address and
% e-mail address. Additionally, tag each line of
% affiliation/address with \affaddr, and tag the
% e-mail address with \email.
%
% 1st. author
\alignauthor
Alexander Chistyakov\\
\affaddr{ITMO University}\\
\affaddr{197101 Kronverkskiy, 49}\\
\affaddr{Saint-Petersburg, Russia}\\
\email{[email protected]}
% 2nd. author
\alignauthor
Artem Pripadchev\\
\affaddr{ITMO University}\\
\affaddr{197101 Kronverkskiy, 49}\\
\affaddr{Saint-Petersburg, Russia}\\
\email{[email protected]}
% 3rd. author
\alignauthor Irina Radchenko\\
\affaddr{ITMO University}\\
\affaddr{197101 Kronverkskiy, 49}\\
\affaddr{Saint-Petersburg, Russia}\\
\email{[email protected]}
\and % use '\and' if you need 'another row' of author names
}
\maketitle
\begin{abstract}
Authors describe architecture and implementation of an automated source code
analyzing system which uses pluggable static code analyzers. The paper
presents a module for gathering and analyzing the source code massively in a detailed manner.
Authors also compare existing static code analyzers for Python programming
language. A common format of storing results of code analysis for subsequent
processing is introduced. Also, authors discuss methods of statistical
processing and visualizing of raw analysis data.
\end{abstract}
% A category with the (minimum) three required fields
\category{D.2.8}{Software and its engineering}[Software testing and debugging]
% \terms{Theory}
\keywords{code analysis, open source, static analyzers}
\section{Introduction}
The information technology field is one of the fastest growing industries today.
Without doubt, using various automated systems to replace human labor, especially
when doing repeatable operations, is very useful. It increases effectiveness
of work and removes the possibility of accidental human-induced errors.
However, automated systems are not error-free per se. A risk arises of deterministic
errors in program code induced not by a worker on a conveyor but by
a program creator \cite{item01}.
Global industrial community is very concerned about a possibility of appearance
of various types of defects in program code. To address this a number of
international standards covering software development process
were developed (ISO/IEC 90003:2014, CMM/CMMI) \cite{item02}. Moreover, a lot of
various methodologies and approaches to development of different types
of computer systems were created in past decades. Every such methodology aims
for getting a working software product in timely fashion. Some methodologies
emphasize sequence of steps of software development process, others make
development process simple and agile. Yet every such methodology targets
creating a product of good quality.
Software engineers need to define a set of measurable parameters to control the overall process
of developing a product. This can be extended to the quality of code too. It is hard
to make any project management decisions in the absence of quantitative
characteristics. Thus the problem of measuring code quality is relevant nowadays.
The term "quality" is quite complex and multi-dimensional. "Quality" usually
means compliance of object properties to a set of predefined requirements \cite{item03}.
Quality of program code implies thoroughly designed architecture, clear division
of code into functional submodules, defining strict structure and so on.
Software engineers use various methodologies and techniques to improve code
quality, such as using design patterns, utilizing existing libraries and
algorithms for solving typical tasks and so on \cite{item04}.
But do we have to care about code quality if end users demand another
thing - the overall quality of a product? Yes, we obviously do, because every
complex information system is a subject to evolution and modification. In this
case important metrics are number of defects in program code and a cost of
modification of code. If adding new functionality introduces a critical number
of errors the product is not able to fulfill customer needs anymore. In the
same way, if a cost of adding new functionality to the product is too high
it will affect users negatively too. Therefore, code quality is not directly
related to functionality but nonetheless important parameter which indirectly
relates to the overall program product quality.
Since code quality affects overall computer program product quality severely
we have got an idea to develop an automated code analysis system based on
existing static code analyzers.
Our idea is to use Github \cite{item16} as a provider of Python code repositories
of different size and code quality. We are going to build a system suitable for
performing massive code analysis using a predefined set of existing static code
analyzers. We chose Python mainly for two reasons: firstly, because it is the fourth most popular
programming language on Github and secondly, because the number of already developed
static code analyzers for Python is relatively big. Also, we can program in Python
and will be able not only to fix errors in existing Python-based tools but also to develop
our own implementation of a static code analyzer if needed.
Our goal is to gather some raw data in result of processing code repositories from
Github using static analyzers and to normalize this raw data using a developed
common data format. Next step is to perform statistical analysis of this set of
normalized raw data using classification and clusterization and other data processing
algorithms.
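As an illustration of what such a common data format could look like (a hypothetical sketch: the field names below are assumptions made for illustration, not the format finally adopted), each analyzer message could be normalized into a uniform record before statistical processing:
\begin{verbatim}
from dataclasses import dataclass, asdict

@dataclass
class Finding:
    # Hypothetical normalized record for one analyzer message.
    repository: str      # e.g. GitHub "owner/name"
    file_path: str
    line: int
    analyzer: str        # which static analyzer produced the message
    rule_id: str         # analyzer-specific rule identifier
    severity: str        # e.g. "info", "warning", "error"
    message: str

def normalize(raw: dict, analyzer: str, repository: str) -> Finding:
    # Map one raw analyzer-specific message (assumed keys) to the common format.
    return Finding(
        repository=repository,
        file_path=raw.get("path", ""),
        line=int(raw.get("line", 0)),
        analyzer=analyzer,
        rule_id=str(raw.get("code", "unknown")),
        severity=raw.get("severity", "warning"),
        message=raw.get("text", ""),
    )

example = normalize({"path": "pkg/mod.py", "line": 12, "code": "E501",
                     "text": "line too long"}, "pycodestyle", "user/project")
print(asdict(example))
\end{verbatim}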
\section{Related works}
Static code analysis is analysis of code conducted without real program
execution \cite{item05}. The result of such analysis in this paper is analytics
which can be used to get a representation of code quality. Static code analysis
is also being used for other purposes. So, authors of \cite{item06} classify android
applications into two types: utilities and games using machine learning.
Many papers are related to finding possible vulnerabilities in programs
during development stage \cite{item07}, \cite{item08}, \cite{item09}. Authors of \cite{item10} extract code characteristics
to find defects subsequently. Another problem domain of static code analysis
is automated detection of malicious code \cite{item11}.
\section{Description of experiment}
In order to determine qualitative characteristics of program code, we should
define a set of measurable parameters first. Metrics of program code can
be used as these parameters. Essentially, this method analyzes source code
to get various numeric metrics. Usually these metrics are defined based
on analyzing either a control flow graph or a structure of program code \cite{item12}.
There are a big number of metrics representing various program code aspects
as of today. Most common metrics are number of SLOC, cyclomatic complexity,
number of warnings and errors and so on. A benefit of using metrics of program
code is absence of human factor. These metrics are measured by a computer.
This fact guarantees precision and repeatability of measurements for every
metric. Moreover, it becomes possible to measure these metrics automatedly
and to create various analytic reports based on these automated measurements.
Proposed method of measuring code quality metrics is presented in a form of
block diagram on Fig.~\ref{fig:structscheme}.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{structscheme}
\caption{Block diagram of a framework for massive source code analysis using static code analyzers}
\label{fig:structscheme}
\end{figure}
The file preparation module is the entry point for a user. The user defines
all the necessary environment for subsequent analysis. This environment consists
of a folder with project source code to be analyzed, a set of analyzers
to utilize, a set of excluded analyzer rules and so on.
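For instance, such an environment could be described by a small configuration structure similar to the following sketch (the option names and values are assumptions made for illustration only):
\begin{verbatim}
# Hypothetical analysis environment, as prepared by the file preparation module.
environment = {
    "source_dir": "/data/projects/sample-repo",   # folder with code to analyze
    "analyzers": ["pylint", "pycodestyle"],        # static analyzers to run
    "excluded_rules": ["C0114", "E501"],           # analyzer rules to ignore
    "output": "results.json",                      # where normalized results go
}

def validate(env):
    # Minimal sanity checks before the analysis step is started.
    missing = [k for k in ("source_dir", "analyzers") if not env.get(k)]
    if missing:
        raise ValueError("incomplete environment, missing: %s" % missing)

validate(environment)
print(environment["analyzers"])
\end{verbatim}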
After setting up the environment and starting the application, the system begins
to perform source code analysis. It is worth mentioning that we chose Python
as a reference programming language, mainly, because Python has a lot of
scientific libraries for data processing, visualization and so on. We decided
to start off with a preexisting set of source code analyzers and then write
our own custom solution if it becomes needed. Thus, our first task was to
compare existing open source implementations of static source code
analyzing tools for Python language.
\section{Tools}
There is only a limited number of code analysis frameworks for Python with an ability
to plug different static code analyzers. Namely, they are Coala \cite{item13},
Pylama \cite{item14} and Flake8 \cite{item15}.
We defined the following set of parameters to compare these frameworks:
popularity among Github \cite{item16} users, CPU and RAM utilization, ability to
parallelize process of analysis and time required to process the same set of projects.
Since Flake8 is designed to perform style guide enforcement only and does not
have an ability to disable standard plugins easily, we excluded it from further comparison.
Coala provides a uniform CLI interface for code style checking and code
improvement. Coala uses a set of plugins (called "bears") for various
programming languages. It's also possible to extend a standard set of
plugins with a custom plugin.
Pylama is a code auditing tool for Python and JavaScript programming
languages. It is not as feature rich as Coala due to lesser popularity
on Github and lower number of active contributors and code commits.
We used a virtual machine with 3Gb of RAM, 3 CPU cores and Ubuntu 14.04
installed as a test host.
\section{Results}
We chose a relatively small sample project (consisting
of roughly 130 files) and configured two plugins in both Coala and Pylama.
Processing time took 112 seconds for Coala and 183 seconds for Pylama.
But it turned out that the most important metric was RAM utilization. We created
a graphical representation of OS memory usage on Fig.~\ref{fig:memusage}.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{memusage}
\caption{OS memory usage when analyzing a single project by Coala and Pylama}
\label{fig:memusage}
\end{figure}
Pylama constantly accumulates information about errors so needed RAM grows
almost linearly. This leads to a critical defect that reveals as total RAM
exhaustion. Operating system forcibly terminates Pylama then. Coala does not
accumulate errors and flushes information on errors to stdout periodically, so
the RAM does not exhaust.
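A memory profile like the one in Fig.~\ref{fig:memusage} can be obtained, for example, by sampling the resident set size of the analyzer process while it runs. The sketch below uses the psutil library; the command line shown is only an assumed placeholder and not the exact invocation used in our measurements:
\begin{verbatim}
import time
import psutil

def sample_rss(cmd, interval=0.5):
    # Run `cmd` and record its resident memory (in MiB) every `interval` seconds.
    proc = psutil.Popen(cmd)
    samples = []
    while proc.poll() is None:
        try:
            samples.append(proc.memory_info().rss / 2**20)
        except psutil.NoSuchProcess:
            break
        time.sleep(interval)
    return samples

# Assumed invocation; replace with the analyzer command actually being measured.
rss = sample_rss(["pylama", "sample_project/"])
print("peak RSS: %.1f MiB over %d samples" % (max(rss, default=0), len(rss)))
\end{verbatim}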
Combined comparison results are presented in Table~\ref{tab:compare}. Based on
these results we decided to use Coala as a base framework for future work.
Another possible option is to patch Pylama.
\begin{table}
\centering
\caption{\label{tab:compare}Comparison of Coala and Pylama}
\begin{tabular}{|l|c|c|} \hline
~ & Coala & Pylama \\ \hline
Popularity among github users & + & - \\ \hline
CPU usage & + & + \\ \hline
RAM usage & + & - \\ \hline
Parallel processing & + & - \\ \hline
Analysis time (sec) & 112 & 183 \\ \hline
\hline\end{tabular}
\end{table}
The next planned step is to process results of analysis by a parsing module
and to standardize them. We are going to store standardized analysis results in
a database. We plan to evaluate a number of relational (e.g. PostgreSQL) and
non-relational databases (e.g. HBase and Cassandra).
We designed a data model and represented it as an ER diagram (Fig.~\ref{fig:dbscheme}).
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{dbscheme}
\caption{The database schema for storing source code analysis results}
\label{fig:dbscheme}
\end{figure}
\section{Future work}
We defined an overall structure of the system so the next step is
to build a working prototype suitable for massive code analysis. Also
we are going to define a more strict set of metrics and to start
visualizing them. Some of methods of visualizing results of measuring
quality of program products are described in \cite{item18}, \cite{item19}, \cite{item20}.
Analysis results are going to be represented as numerical metrics stored in
tables. The next step is to create a separate module to interpret these raw
standardized results. The last step is to visualize results at the request of the
end user who initiated the code analysis. Thus, different types of comparisons can be
done on visualization step \cite{item17}. For example, comparison can be based
on chronological characteristics of objects or we can perform per component comparison.
Resulting infographics can be presented in various forms, e.g. matrixes, maps,
figures, graphs and diagrams.
\section{Conclusion}
The problem of controlling, measuring and predicting code quality is relevant
at the moment and will become even more pressing in the future. Software
developers working on big projects need measurable code quality-related
metrics to improve their process. End users and other third parties need
these metrics to choose a product of best possible quality and for various
other needs. Solving this problem requires a proper instrument which scales
well and produces well-defined and predictable results. Authors of this paper
proposed an approach of creating this instrument, described its architecture
and chose a set of tools as a base to implement it.
\bibliographystyle{abbrv}
\bibliography{sigproc}
\end{document}
| {
"alphanum_fraction": 0.765499554,
"avg_line_length": 47.9572192513,
"ext": "tex",
"hexsha": "47a9ddfbd25fe4043f16bc0318d7d07471927e2c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c3a8a3c95ab50544280c679dad2d533836a4749d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "scats/PSI17",
"max_forks_repo_path": "ShareLaTeX/acm-statan.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c3a8a3c95ab50544280c679dad2d533836a4749d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "scats/PSI17",
"max_issues_repo_path": "ShareLaTeX/acm-statan.tex",
"max_line_length": 123,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "c3a8a3c95ab50544280c679dad2d533836a4749d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "scats/PSI17",
"max_stars_repo_path": "ShareLaTeX/acm-statan.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4099,
"size": 17936
} |
\section{Methodology or Problem Description}
Choose one that fits your research best:
\subsection{Methodology}
Typically in general research articles, the second section contains a description of the research methodology, explaining what you, the researcher, are doing to answer the research question(s), and why you have chosen this method.
For purely analytical work this is a description of the data collection or experimental setup on how to test the hypothesis, with a motivation.
In any case this section includes references to necessary background information.
\subsection{Formal Problem Description}
For some types of work in computer science the methodology is standard: analyze the problem (e.g., make assumptions and derive properties), present a new algorithm and its theoretical background, proving its correctness, and evaluate unproven aspects in simulation.
Then an explanation of the methodology is often omitted, and the setup of the evaluation is part of a later section on the evaluation of the ideas.\footnote{This already shows that there is no single outline to be given for all papers.}
In this case, explain relevant concepts, theory and models in this section (with references) and relate them to your research question.
Also this section then typically contains a more precise, formal description of the problem.
Do not forget to give this section another name, for example after the problem you are solving. | {
"alphanum_fraction": 0.8175487465,
"avg_line_length": 102.5714285714,
"ext": "tex",
"hexsha": "4634d3ef73ee2035650bc018c9cdffa879737316",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1c3e19b02b46812ff563bfc776d793adb440b00a",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "timanema/brb-thesis",
"max_forks_repo_path": "report/sections/methodology_or_problem_description.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1c3e19b02b46812ff563bfc776d793adb440b00a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "timanema/brb-thesis",
"max_issues_repo_path": "report/sections/methodology_or_problem_description.tex",
"max_line_length": 265,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1c3e19b02b46812ff563bfc776d793adb440b00a",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "timanema/brb-thesis",
"max_stars_repo_path": "report/sections/methodology_or_problem_description.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 269,
"size": 1436
} |
\section*{Introduction\footnote{This section was authored by Matthias Radscheit\ \cite{loeperradscheit}.}}
\label{sec:Introduction}
\addcontentsline{toc}{section}{Introduction}
% Following the rational choice theory approach
When it comes to economic decisions, uncertainty is a critical issue. Following the approach of the rational choice theory, every market player is constantly trying to maximize his utility and minimize his effort. Uncertainty can be described as a lack of information of how a market - or herein the full German economic system - is constituted and about the future behavior of the market players. Presuming that every market player is acting on a rational basis, all information regarding his situation, resources, plans, and relations makes the results of his decisions more predictable. In this manner, we can state: The more relevant information a market player gathers about other players in the market or economy, the better the foundation of his decisions is. The broad range of that kind of information can lead to a significant competitive advantage. So it should be in a rational player's interest to collect as much relevant information as possible.\par
In a connected economy, a lot of those uncertainties lie in the relations between corporations\footnote{We define as \emph{corporation} any juristic entity that takes part in the German economy. This includes especially businesses but also other entities like public corporations.}. This became evident in the so called \emph{Abgas-Skandal} or \emph{Dieselgate} of the Volkswagen AG in 2015, wherein a lot of external suppliers spun out of control, from a financial perspective \cite{stuttzeit, automobilwoche}. This happened even though most of the suppliers did not take part in the scandal itself. Since there are a lot of other examples like the \emph{Lehmann Brothers bankruptcy} or any other economic shock event, we can state that relations are a significant factor in the economic evaluation of corporations and their financial risks.\par
\subsection*{The German Corporate Graph Project}
Because there are millions of corporations in the German economy\footnote{ The \emph{Federal Bureau of Statistics} notes 3,469,039 businesses in Germany in 2015\ \cite{destatis1}. Following our definition of corporations, this number has to be seen as a lower bound for the total number of corporations in Germany.} and each corporation potentially holds relations to hundreds or thousands of other corporations, collecting and overseeing all those relations becomes a complicated matter. The \emph{German Corporate Graph Project} is one approach to solve this problem. The project's purpose is to extract business entities from multiple structured knowledge bases (e.g., Wikidata and DBpedia), merge them, enrich them with relations extracted from unstructured documents and finally display the graph so that it can be visually explored.\par
The project consists of a pipeline, which starts with the import and normalization of structured knowledge bases. The next step is the Deduplication, which is the detection and fusion of occurrences of the same entity over multiple knowledge bases. These entities form a graph, whose nodes are businesses and whose edges are the relations between them. This graph is then enriched during the Information Extraction. In this step relations between entities are extracted from unstructured documents using Named Entity Recognition, Entity Linking and Relation Extraction.\par
The results of all these steps can be viewed and curated in the so-called Curation Interface. This is a web-interface, which can be used to control the pipeline itself, view statistical data generated by other pipeline steps and to view and curate the entities and relations of the graph itself. The final graph can be visually explored by using the Corporate Landscape Explorer, which is a web-interface as well.\par
\subsection*{One Project - Seven Contributions}
This thesis is published in the context of a bachelor's project in 2016/2017 at Hasso-Plattner-Institute in Potsdam, Germany. The project's objective was to build the \emph{German Corporate Graph}, as described above, to display Germany's corporate landscape. The project lasted ten months and was accompanied by Commerzbank AG, Germany.\par
As a result, the following theses were published. Strelow describes the used data model for businesses and their relations with respect to working with Apache Cassandra\ \cite{strelow}. Löper and Radscheit evaluate the duplicate detection during the Deduplication\ \cite{loeperradscheit}. Pabst explores efficient blocking strategies, which are used to increase the performance of the Deduplication\ \cite{pabst}. Janetzki explains the creation of our knowledge base and the extraction of features used for the Named Entity Recognition and Entity Linking\ \cite{janetzki}. This work evaluates the quality of different classification models and of the features used to train them. Schneider evaluates different Relation Extraction methods\ \cite{schneider}. Gruner describes methods to extract useful knowledge from the generated business graph\ \cite{gruner}.\par
% Ehmüller evaluates the quality of different classification models and of the features used to train them.
| {
"alphanum_fraction": 0.8198675497,
"avg_line_length": 310.8823529412,
"ext": "tex",
"hexsha": "6ab42b3b8b004d7962a7d04d64800ea7b418f7aa",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "310db25128c34209122a90ef02b8a61370230a2b",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "janehmueller/bachelorthesis",
"max_forks_repo_path": "sections/00.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "310db25128c34209122a90ef02b8a61370230a2b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "janehmueller/bachelorthesis",
"max_issues_repo_path": "sections/00.tex",
"max_line_length": 964,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "310db25128c34209122a90ef02b8a61370230a2b",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "janehmueller/bachelorthesis",
"max_stars_repo_path": "sections/00.tex",
"max_stars_repo_stars_event_max_datetime": "2020-02-03T19:31:37.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-02-03T19:31:37.000Z",
"num_tokens": 1070,
"size": 5285
} |
\documentclass{article}
\title{REDUCE-MathML Interface}
\author{Luis Alvarez-Sobreviela \\ [email protected] \\ \\
Konrad--Zuse--Zentrum f\"ur Informationstechnik Berlin \\
Takustra\ss e 7 \\ D-14195 Berlin-Dahlem}
\date{August 9, 1999}
\begin{document}
\maketitle
\section{Introduction}
MathML is intended to facilitate the use and re-use of mathematical and
scientific content on the Web, and for other applications such as computer
algebra systems. For this reason we believe MathML is an important step
for the scientific community considering the widespread use of the
Internet. We found it necessary to include REDUCE in this trend, and
developed the MathML-REDUCE interface.
The MathML interface for REDUCE provides an easy to use series of
commands, allowing it to evaluate and output MathML. This manual is
therefore intended to give the user a good guideline on how to make
proper use of it.
\\
The principal features of this package can be summarized as:
\begin{itemize}
\item Evaluation of MathML code. Allows REDUCE to parse MathML expressions
and evaluate them.
\item Generation of MathML compliant code. Provides the printing of REDUCE
expressions in MathML source code, to be used directly in web page
production.
\item Generation of MathML code surrounded by HTML $<$embed$>$$<$/embed$>$
tags. This is useful for directly including MathML into a webpage. It can
then easily be rendered on any browser with the IBM Tech Explorer plug-in
\footnote{Currently available at
http://www.software.ibm.com/network/techexplorer/}.
\end{itemize}
We assume that the reader is familiar with MathML 1.0. If not, the
specification is available at: \\
{\tt http://www.w3.org/Math/ }
\section{Getting Started}
\subsection{Loading}
The MathML-REDUCE interface package is under the name {\tt mathml}, and so is
loaded by supplying {\tt load mathml;}.
\subsection{Switches}
There are three switches which can be used alternatively and
incrementally. These are {\bf mathml}, {\bf web} and {\bf both}. Their
use can be described as follows:
\begin{description}
\item[{\tt mathml}:] All output will be printed in MathML.
\item[{\tt both}:] All output will be printed in both MathML and normal
REDUCE.
\item[{\tt web}:] All output will be printed within an HTML $<embed>$ tag.
This is for direct use in an HTML web page when using IBM Tech Explorer.
\end{description}
MathML has often been said to be too verbose. If {\bf both} is on, an easy
interpretation of the results is possible, improving MathML readability.
\subsection{Entering MathML}
The MathML-REDUCE interface gives the user various ways of providing
input. This can be done via a file containing MathML or by writing MathML
directly in the prompt. \\
\subsubsection{Reading MathML from a File: {\tt MML}}
When reading from a file the command {\bf mml} is used. {\bf mml} takes as
argument the name of the file containing the MathML. \\
{\tt mml}(FILE:{\it string}):{\it expression}
\paragraph{Example:} As long as the file given contains valid MathML, no
errors should be produced.
{\tt 1: mml "ex.mml";}
\subsubsection{Reading MathML from Prompt: {\tt PARSEML}}
By using the function {\bf parseml} it is possible to introduce a series of
valid mathml tokens to the prompt. {\bf parseml} takes no arguments,
although once it is called it will prompt you to enter mathml tags starting
with {\tt $<$mathml$>$} and ending with {\tt $<$/mathml$>$}. It then returns
an expression resulting from evaluating the input. \\
\paragraph{Example:} Here is an extract of a REDUCE session where {\tt
parseml()} is used:\\
{\tt 2: parseml();
2: <math>
2: <apply><plus/>
2: <cn>3</cn>
2: <cn>5</cn>
2: </apply>
2: </math>
\\
\hspace*{1cm} 8 \\
3: } \\
If for some reason you do not want to continue typing at the prompt and want to
cancel what has already been typed, it is possible to exit by pressing
control {\tt C} or control {\tt D}.
Although it is simpler to edit a file and then use {\tt mml}, we still
added {\bf parseml} for completeness.
\section{The Evaluation of MathML}
MathML is evaluated by the interface always before outputting any results.
Just in the same way as REDUCE normally does. The program works in the same
way as the {\tt algebraic} mode. Undefined variables remain undefined,
and it is possible to use all normal REDUCE switches and packages.
\paragraph{Example:}
\noindent The following MathML:\\
\\
{\tt
\hspace*{0mm}<math>
\hspace*{1mm}<reln><gt/>
\hspace*{6mm}<ci>x</ci>
\hspace*{6mm}<ci>y</ci>
\hspace*{1mm}</reln>
\\
\hspace*{0mm}</math>}
\\
\\
will evaluate to {\tt gt(x,y)}. This only states that x is greater than
y.
\\
The interesting characteristic, is that we can set the undefined values of
x and y.
\\
Suppose we enter the following:
\\
{\tt 5: x:=3; y:=2;
\\
x := 3
\\
y := 2}
\\
\\
If we once again enter and evaluate the above piece of MathML we will have as
result:
\\
{\tt t}
\\
\\
because it is true that 3 is greater than 2. It is important to note that
it is also possible to set only one of the two variables x or y above, say
{\tt y:=4}. The expression will then not evaluate completely, and we will
have: \\
{\tt gt(x,4)}
\\
\\
When one of the switches is on, the MathML output will be:
\\
\\
{\tt
\hspace*{1mm}<math>\\
\hspace*{5mm} <reln><gt/>\\
\hspace*{9mm} <ci>x</ci>\\
\hspace*{9mm} <cn type="integer">4</cn>\\
\hspace*{5mm} </reln>\\
\hspace*{1mm}</math>\\}
Hence, it is possible when dealing with a MathML formula representation in
which there are a set of undefined variables to set each variable to the
desired value and evaluate it. Let us consider a second example to make sure
everything is clear.
\paragraph{Example:}
Let the file {\tt ex.mml} contain the following mathml:
\\
\\
{\tt <math>\\
\hspace*{2mm} <apply><int/>\\
\hspace*{5mm} <bvar>\\
\hspace*{9mm} <ci>x</ci>\\
\hspace*{5mm} </bvar>\\
\hspace*{5mm} <apply><fn><ci>F</ci><fn>\\
\hspace*{9mm} <ci>x</ci>\\
\hspace*{5mm} </apply>\\
\hspace*{2mm} </apply>\\
</math>} \\
\\
If we do the following:\\
{\tt 1: mml "ex.mml";}\\
\\
This is what we get:
\\
\\
{\tt int(f(x),x);}\\
\\
It is clear that this has remained unevaluated. We can now set the function
{\tt f(x)} as follows:\\
{\tt for all x let f(x)=2*x**2;}\\
\\
If we then enter '{\tt mml "ex.mml"}' once again, we will have the
following result:\\
{\Large \( \frac {2*x^{3}}{3}\)}
\\
\\
Hence the MathML-REDUCE interface allows the user to set a value to a
variable in order to manipulate the evaluation of the MathML, without needing
to edit the MathML itself.
\subsection{Using Boolean Values}
Boolean values are not defined in MathML, despite their importance in REDUCE.
To work around this problem, we can assign a boolean value to a variable;
when evaluating, the MathML-REDUCE interface will then use the boolean value
of that variable.
\\
Suppose we want to evaluate the following expression:
\\
\\
{\tt and(t,nil);}
\\
\\
Then all we do is create a file with the following MathML:
\\
\\
{\tt $<$mathml$>$\\
\hspace*{2mm} $<$apply$>$$<$and/$>$\\
\hspace*{5mm} $<$ci$>$ a $<$/ci$>$\\
\hspace*{5mm} $<$ci$>$ b $<$/ci$>$\\
\hspace*{2mm} $<$/apply$>$\\
$<$/mathml$>$\\}
\\
Before evaluating it, we set {\tt a} to {\tt true} and {\tt b} to {\tt
nil}. Evaluating the MathML then produces the same result as the
equivalent REDUCE expression.
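For instance, assuming the MathML above has been saved in a file called
{\tt bool.mml} (the file name is only an example), the session could look as
follows, using {\tt t} for the truth value as in the expression
{\tt and(t,nil)} above:\\
\\
{\tt 6: a := t; b := nil;\\
7: mml "bool.mml";}\\
\\
The result is then the same as that of evaluating {\tt and(t,nil)} directly.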
\section{Interpretation of Error Messages}
The MathML-REDUCE interface has a set of error messages which aim to help the
user understand and correct any invalid MathML. Because errors can have many
different causes, these error messages should be considered merely
as advice. Here we shall consider the most important error messages.
\paragraph{Missing tag:}
Many MathML tags come in pairs, such as {\tt $<$apply$>$$<$/apply$>$,
$<$reln$>$$<$/reln$>$}, {\tt $<$ci$>$$<$/ci$>$}, etc\ldots. In the
case where the closing tag is missing or misspelled, it is very probable that an
error of this type will be raised.
\paragraph{Ambiguous or Erroneous Use of {\tt $<$apply$>$}}
This error message indicates an error that cannot be located precisely. When this error message
appears, it is not clear to the interface where exactly the error lies.
Probable causes are a misuse of the {\tt $<$apply$>$$<$/apply$>$} tags, or a
misspelling of the tag preceding the {\tt $<$apply$>$}. However, other types
of errors may cause this error message to appear.
Tags following an {\tt $<$apply$>$} tag may be misspelled without causing an
error message, but they will then be treated as operators by REDUCE and
therefore evaluate to some unexpected expression.
\paragraph{Syntax Errors}
It is possible that the input MathML is not syntactically correct. In such
a situation, the error will be spotted, and in some cases a solution might be
suggested. There is a variety of syntax error messages, but relying on
their advice might not always be helpful.
\\
Despite the verbose nature of the error messages and their recommended
solutions, we recommend that in most situations the MathML specification
be consulted.
\section{Limitations of the Interface}
Not all aspects of MathML have been fitted perfectly into the interface.
Some problems remain unsolved in the present version of the
interface:
\begin{itemize}
\item MathML Presentation Markup is not supported. The
interface will treat every presentation tag as an unknown tag, or as a REDUCE
operator. We did not consider presentation markup a priority when dealing with
computer algebra systems, although in the future, parsing of presentation
markup within content markup will be supported.
\item Certain MathML tags
do not play an important role in the REDUCE environment. Such tags will not
evaluate or affect the interface's behaviour in any way. They will be parsed
correctly, although their action will be ignored. These tags are:
\begin{enumerate}
{\tt
\item $<$interval$><$/interval$>$
\item $<$inverse$><$/inverse$>$
\item $<$condition$><$/condition$>$
\item $<$compose$><$/compose$>$
\item $<$ident$><$/ident$>$
\item $<$forall/$>$
\item $<$exists/$>$}
\end{enumerate}
However, the {\tt $<$condition$><$/condition$>$} and {\tt
$<$interval$><$/interval$>$} tags are supported when used within the
following tags:
\begin{enumerate}
{\tt
\item $<$int/$>$
\item $<$limit/$>$
\item $<$sum/$>$
\item $<$product/$>$}
\end{enumerate}
\item The {\tt $<$declare$>$} construct takes one or two arguments. It sets
the first argument to the value of the second. In the case where the second
argument is a vector or a matrix, an obscure error message is produced. This is
clearly something which must be fixed in the future.\\ \item The program
throws an error when it encounters {\tt nil} between {\tt $<$ci$><$/ci$>$}
tags. It is not possible to use boolean values directly. Please refer to the
subsection above that treats this matter.
\end{itemize}
\section{Including MathML into a Webpage}
We shall not go into the details of including MathML in the design of a
web page, but for completeness it is important to give a quick overview.
One can include MathML in two ways:
\begin{itemize}
\item By introducing MathML inside $<$math$>$$<$/math$>$ tags in any HTML
page.
\item By embedding MathML inside HTML $<$embed$>$$<$/embed$>$
tags.
\end{itemize}
When the {\bf mathml} switch is on, MathML is generated inside
$<$math$>$$<$/math$>$ tags. Browsers such as WebEQ\footnote{Available at
http://www.webeq.com} can read this type of encoding.
\paragraph{Example:}
\begin{verbatim}
<html>
<body>
<title>MathML example</title>
<h1>Example of MathML</h1>
<br>
The following is MathML directly encoded within this homepage:
<br><br>
<math>
<apply><int/>
<bvar>
<ci>x</ci>
</bvar>
<apply><power/>
<ci>x</ci>
<apply><power/>
<ci>x</ci>
<cn type="integer">2</cn>
</apply>
</apply>
</apply>
</math>
<br><br>
On some browsers such as WebEQ, the MathML is rendered correctly.
</html>
\end{verbatim}
However, most people on the web use either Netscape or Microsoft Internet
Explorer, which do not at the moment support complete MathML rendering.
In cases where Netscape or Microsoft Internet Explorer are used, one must
make the IBM Tech Explorer Plug-in available and include
the MathML inside the web page surrounded by $<$embed$>$$<$/embed$>$ tags.
This is easily done with the REDUCE-MathML interface by having the
{\bf web} switch on. Here is an example of how to do this:
\paragraph{Example}
\begin{verbatim}
<html>
<body>
<title>MathML example</title>
<h1>Example of MathML</h1>
<br>
The following is MathML directly encoded within this homepage:
<br><br>
<EMBED TYPE="text/mathml" MMLDATA="<math><apply><int/><bvar><ci>x</ci></bvar>
<apply><power/><ci>x</ci><apply><power/><ci>x</ci><cn type='integer'>
2 </cn></apply></apply></apply></math>"
HEIGHT=300 WIDTH=200>
<br><br>
On all browsers with the Tech Explorer Plug-in, this is rendered
correctly.
</html>
\end{verbatim}
Another way to embed MathML in a web page is to keep the MathML in a
separate file (usually a file with the {\bf .mml} extension)
and to embed it inside the page with the following code:
\begin{verbatim}
<EMBED TYPE="text/mathml" src="mathml.mml" HEIGHT=300 WIDTH=200>
\end{verbatim}
\section{Examples}
We would like to present a series of examples which will illustrate the
possibilities of the interface.
\paragraph{Example 1} Type in the following and observe the resulting
expression:
\begin{verbatim}
23: on mathml;
24: solve({z=x*a+1},{z,x});
\end{verbatim}
\paragraph{Example 2}Have a file {\tt ex2.mml} containing the following
MathML source code:\\
\\
{\tt $<$mathml$>$\\
\hspace*{1mm} $<$apply$>$$<$sum/$>$\\
\hspace*{5mm} $<$bvar$>$\\
\hspace*{9mm} $<$ci$>$x$<$/ci$>$\\
\hspace*{5mm} $<$/bvar$>$\\
\hspace*{5mm} $<$apply$>$$<$fn$>$$<$ci$>$F$<$/ci$>$$<$/fn$>$\\
\hspace*{9mm} $<$ci$>$x$<$/ci$>$\\
\hspace*{5mm} $<$/apply$>$\\
\hspace*{1mm} $<$/apply$>$\\
$<$/mathml$>$\\}
\\
and type:\\
\\
{\tt mml "ex2.mml"}
\paragraph{Example 3} This example illustrates how practical the switch
{\bf both} can be for interpreting verbose MathML.
Introduce the following MathML source into a file, say {\tt ex3.mml}\\
\\
{\tt $<$mathml$>$\\
\hspace*{1mm} $<$apply$>$$<$int/$>$\\
\hspace*{5mm} $<$bvar$>$\\
\hspace*{9mm} $<$ci$>$x$<$/ci$>$\\
\hspace*{5mm} $<$/bvar$>$\\
\hspace*{5mm} $<$apply$>$$<$sin/$>$\\
\hspace*{9mm} $<$apply$>$$<$log/$>$\\
\hspace*{13mm} $<$ci$>$x$<$/ci$>$\\
\hspace*{9mm} $<$/apply$>$\\
\hspace*{5mm} $<$/apply$>$\\
\hspace*{1mm} $<$/apply$>$\\
$<$/mathml$>$ \\}
\\
then do the following:
\begin{verbatim}
2: on both;
3: mml "ex3.mml";
\end{verbatim}
\section{An overview of how the Interface Works}
The interface is primarily built in two parts: a first one which parses
and evaluates MathML, and a second one which parses REDUCE's algebraic
expressions and prints them out in MathML format. Both parts work by
recursive parsing, using top-down recursive descent parsing with one-token
look-ahead.
The BNF description of the MathML grammar is defined informally in
Appendix E of the current MathML specification. It is from this document that
we have developed the MathML parser. The MathML parser evaluates all
that is possible and returns a valid REDUCE algebraic expression. When {\bf
mathml} or {\bf both} are on, this algebraic expression is fed into the
second part of the program, which parses these expressions and transforms them
back into MathML.
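To make this parsing strategy concrete, the following sketch (written in
Python-style pseudocode; it is purely illustrative, is not part of the
package, and all names are invented) shows the general one-token look-ahead
recursive descent pattern described above:
\begin{verbatim}
# Illustrative sketch only -- not the interface's source code.
class Parser:
    def __init__(self, tokens):
        self.tokens = tokens      # e.g. ["<apply>", "<plus/>", "<cn>", ...]
        self.pos = 0
    def peek(self):               # one-token look-ahead
        return self.tokens[self.pos]
    def advance(self):            # consume the current token
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok
    def parse_expr(self):
        # dispatch on the look-ahead token and descend recursively
        if self.peek() == "<apply>":
            self.advance()                   # "<apply>"
            op = self.advance()              # operator tag, e.g. "<plus/>"
            args = []
            while self.peek() != "</apply>":
                args.append(self.parse_expr())
            self.advance()                   # "</apply>"
            return [op] + args
        elif self.peek() == "<cn>":
            self.advance()                   # "<cn>"
            value = self.advance()           # the number itself
            self.advance()                   # "</cn>"
            return value
        else:
            raise SyntaxError("unexpected token " + self.peek())
\end{verbatim}
Each grammar rule corresponds to one such parsing function, and the single
look-ahead token decides which rule to descend into.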
The MathML generator parses the algebraic expression produced by
either REDUCE itself or the MathML parser. It works in a very
similar way to the MathML parser, but is simpler, since no evaluation is
involved. All the generated code is MathML compliant. It is important to note
that the MathML code generator sometimes introduces Presentation Markup tags,
and other tags which are not understood by the MathML parser of the
interface\footnote{The set of tags not understood by the MathML parser are
detailed in section {\bf Limitations}.}.
\end{document}
| {
"alphanum_fraction": 0.700500183,
"avg_line_length": 30.6429906542,
"ext": "tex",
"hexsha": "f77a053067bfa9f4e41dcdfd58b1ede941c18fc5",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "arthurcnorman/general",
"max_forks_repo_path": "packages/mathml/mathml.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "arthurcnorman/general",
"max_issues_repo_path": "packages/mathml/mathml.tex",
"max_line_length": 79,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "arthurcnorman/general",
"max_stars_repo_path": "packages/mathml/mathml.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4710,
"size": 16394
} |
% !TEX root =../thesis-letomes.tex
\chapter{Mathematica Notebooks} \label{apx:mathematica}
\section{Mars Hohmann Derivations}\label{apx:mars-hohmann-derivations}
\includepdf[scale=1.0,pages={-}]{pdf/hohmann_to_mars.pdf} | {
"alphanum_fraction": 0.7772727273,
"avg_line_length": 44,
"ext": "tex",
"hexsha": "11dc7ef111a7c866fd7329c50282e8fe9a768536",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5f73a4066fcf69260cb538c105acf898b22e756d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "GandalfSaxe/letomes",
"max_forks_repo_path": "report/appendices/APX-Mathematica-Notebooks.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5f73a4066fcf69260cb538c105acf898b22e756d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "GandalfSaxe/letomes",
"max_issues_repo_path": "report/appendices/APX-Mathematica-Notebooks.tex",
"max_line_length": 70,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5f73a4066fcf69260cb538c105acf898b22e756d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "GandalfSaxe/letomes",
"max_stars_repo_path": "report/appendices/APX-Mathematica-Notebooks.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 80,
"size": 220
} |
\documentclass{ctexart}
\usepackage{bbm}
\usepackage{xcolor}
\usepackage{listings}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\graphicspath{{images/}}
\usepackage[backend=biber, style=numeric, sorting=ynt]{biblatex}
\addbibresource{refs.bib}
\usepackage{hyperref}
\usepackage{amsmath}
\usepackage{subfiles}
\title{FCOS: Fully Convolutional One-Stage Object Detection}
\author{Zhi Tian, Chunhua Shen, Hao Chen, Tong He}
\date{}
\begin{document}
\maketitle
\begin{abstract}
\subfile{sections/abstract}
\end{abstract}
\section{Introduction}
\subfile{sections/introduction}
\section{Our Approach}
\subfile{sections/detection}
\printbibliography
\end{document} | {
"alphanum_fraction": 0.7906295754,
"avg_line_length": 23.5517241379,
"ext": "tex",
"hexsha": "438bfd47555bb11fb7abdbcdfa3acd94bdd97327",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0d57783ecb49c0ce2d7621460cff102caacca2da",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "sjzyzz/paper_translation",
"max_forks_repo_path": "papers/FCOS/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0d57783ecb49c0ce2d7621460cff102caacca2da",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "sjzyzz/paper_translation",
"max_issues_repo_path": "papers/FCOS/main.tex",
"max_line_length": 64,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "0d57783ecb49c0ce2d7621460cff102caacca2da",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "sjzyzz/paper_translation",
"max_stars_repo_path": "papers/FCOS/main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 210,
"size": 683
} |
\input{templates/ex_template.tex}
\title{BPP Exercise 9 -- Object Oriented Programming}
% {YYYY}{MM}{DD}
\setdate{2019}{06}{09}
\begin{document}
\section{Warm-Up: OOP Quiz (20 points)}
1. Look at the following code snippets. What is the value of {\tt a} and {\tt b} at the end of each of them? Can you explain what the difference is?
\begin{pythoncode}
a = [1, 2, 3]
b = a
# change b
b.append(42)
\end{pythoncode}
\begin{solution}
The value of {\tt a}, as well as {\tt b} at the end is {\tt [1, 2, 3, 42]}. The reason is, as explained in slides 11-14 in this weeks lecture, that when we write a statement like {\tt a = [1, 2, 3]}, we create a list somewhere, but only store a {\bf pointer} to it in {\tt a}. When we write {\tt b = a}, we just copy the pointer, not the actual object. Now {\tt a} and {\tt b} point to the same object, so if we append to the object in {\tt b}, we also automatically append to the object in {\tt a}.
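If you actually want an independent copy of the list, you have to copy it explicitly, for example:
\begin{pythoncode}
a = [1, 2, 3]
b = list(a)      # or a.copy() or a[:] -- all create a new list object
b.append(42)
# now a is still [1, 2, 3], while b is [1, 2, 3, 42]
\end{pythoncode}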
\end{solution}
\begin{pythoncode}
a = 42
b = a
# change b
b += 10
\end{pythoncode}
\begin{solution}
At the end, {\tt a} is {\tt 42} and {\tt b} is {\tt 52}. In the first two lines, the exact same thing happens as before. The difference lies in the operations {\tt append} and {\tt +=}. {\tt append} actually {\it mutates} (i.e. changes) the list. {\tt b += 10} can be rewritten as {\tt b = b + 10}, and there you can see the difference: instead of mutating the object in {\tt b}, we assign a {\it new object} to {\tt b}. This of course does not change the object in {\tt a}.
\vspace{1em}
\end{solution}
\noindent 2. Decide for each of the statements below whether they are true or false. Give reasons for your answer. When in doubt, base your answer on Python's OOP, not on OOP in other languages.
\begin{enumerate}
\item An object of {\it class} {\tt Penguin} is also of {\it type} {\tt Penguin}
\item When an attribute is a {\it property}, it can only be accessed from within the object it belongs to
\item When an attribute starts with an underscore ({\tt \_}), it can only be accessed from within the object it belongs to
\item {\bf All} of the following things are objects: integers, strings, lists, dictionaries, functions
\item The constructor is a method that is called when an object is created and is mostly used to define its other methods
\end{enumerate}
\begin{solution}
\begin{enumerate}
\item {\bf True.} Well, mostly. In Python, the terms {\it class} and {\it type} are mostly equivalent. We usually talk about user-defined {\it classes} and built-in {\it types}, but this may vary.
\item {\bf False.} Being a property means to be {\it read-only}, not to be private.
\item {\bf False} (although true is also an accepted answer if the reasoning is correct). {\tt \_} does signal that an attribute {\it should} not be accessed from outside the object (by convention), but it is still possible.
\item {\bf True.} In Python, everything you can assign to a variable is an object.
\item {\bf False.} Yes, the constructor is called on object creation, but it is mostly used to define {\it attributes}, not methods.
\end{enumerate}
\end{solution}
\section{Modelling with OOP (30 points)}
\subsection{Sketching Ideas}
One motivation behind Object Oriented Programming is that we can model {\bf real-world entities} with objects. In this assignment, we want to model the entities {\it Human}, {\it Employee} and {\it Student} as {\bf classes}.
\vspace{1em}
\noindent 1. Think of a suitable inheritance structure. Who should inherit from whom (i.e. which concept is a specialized version of another concept)? Draw these relationships in a diagram (you can draw on a piece of paper and take a picture if you want). Explain why you chose this structure.
\vspace{1em}
\noindent {\bf Bonus:} Look up the term "UML diagram". It is a standardized way of drawing relationships between classes.
\vspace{1em}
\noindent 2. Think about {\bf attributes} these classes have (at least 3 per class). For example, every human has an age. Keep in mind that attributes get inherited. Write down what attributes you came up with (you can just add them to your diagram, but give a brief explanation if it is not obvious).
\vspace{1em}
\noindent 3. Now think about {\bf methods}. Just as with the attributes, define some methods (2 per class) for your classes that make sense. Again, write down your ideas or add them to the diagram.
\vspace{1em}
\begin{solution}
\begin{center}
\includegraphics[width=0.5\textwidth]{09_OOP/humans.pdf}
\end{center}
\vspace{1em}
\noindent Both students and employees are humans, so it makes sense for both to inherit from {\tt Human}. They are {\it specialized} concepts of the concept human. Depending on the context, it may or may not make sense for other inheritance relations: perhaps you are modelling student jobs, where every employee is also a student. Then your structure could look like this: {\tt Employee} inherits from {\tt Student}, which inherits from {\tt Human}.
\end{solution}
\subsection{Implementation}
Now it is time to {\it implement} the class structure. Define the three classes you sketched out in 2.1 and give them their methods and attributes. For the methods, it is enough if you leave them empty (e.g. {\tt pass} or {\tt return None}) except for a small docstring that explains what they are supposed to do. Make sure the classes that should inherit from another class do so. Test your implementation by instantiating some objects and checking if they have the methods and attributes that they should have.
\vspace{1em}
\begin{solution}
\begin{pythoncode}
class Human:
def __init__(self, name, birthday):
self.name = name
self.birthday = birthday
# this function does not exist, but could be implemented
self.age = compute_age(current_time(), birthday)
def walk_to(self, destination):
pass
def eat(self, food):
pass
def make_friends(self, human):
pass
class Student(Human):
def __init__(self, name, birthday, mat_no, semester, major):
super().__init__(name, birthday)
self.matrikulation_no = mat_no
self.semester = semester
self.major = major
def study(self, subject, time):
pass
def submit_homework(self, subject):
pass
def eat_at_mensa(self):
self.walk_to("mensa")
self.eat(get_best_mensa_food(today()))
class Employee(Human):
def __init__(self, name, birthday, institution, salary):
super().__init__(name, birthday)
self.institution = institution
self.monthly_salary = salary
self.currently_sick = False
def work(self, task, time):
pass
def take_coffee_break(self, time):
pass
def apply_for_raise(self):
pass
\end{pythoncode}
\end{solution}
\section{Drunk Turtles (30 points)}
Take a look at the file {\tt drunk\_turtles.py}. You will find some code already provided that uses the class {\tt DrunkTurtle}, which inherits from the {\tt Turtle} class of your old friend, the {\tt turtle} module. You should implement this class. The provided code will animate them walking around aimlessly in the empty plains of the turtle world, step by step.
\subsection{The Constructor}
A {\tt DrunkTurtle} cannot walk in a straight line, but walks in random directions. Therefore, the first argument it takes should be an angle that defines how much it can change direction before each step it takes (some turtles are more drunk than others). The second argument it takes should be the size of each step the {\tt DrunkTurtle} takes (some turtles have larger legs than others). The third and last argument is a song the turtle sings while walking around. A song in this context is a {\tt ["list", "of", "words"]}.
\vspace{1em}
\noindent One should be able to construct a {\tt DrunkTurtle} object like this:
\begin{pythoncode}
pete = DrunkTurtle(45, 20, ["Shoo", "wa", "da", "dub."])
\end{pythoncode}
\noindent Because all of these values need to be available to the methods we are going to define later on, you should store them in {\bf attributes}!
\vspace{1em}
\noindent {\bf Bonus - only if you feel like you understand everything so far:} Think about which attributes (and methods from the next step) should be private and how you would achieve that in Python.
\subsection{Methods}
Our {\tt DrunkTurtle} will need two methods: {\tt sing} and {\tt move}.
\begin{itemize}
\item {\tt sing} should take no arguments (besides {\tt self}), and should sing the next word in the song. "sing" in this context means to write the word to the screen at the turtle's current position. You can use the method {\tt write} for that, which {\tt DrunkTurtle} automatically inherits from {\tt Turtle}. You will need to think of a way for {\tt sing} to keep track of where it is in the song.
\item {\tt move} should also take no arguments. It should rotate the turtle by a random angle from the interval $[-random\_angle, random\_angle]$ and then move it by {\tt step\_size}. The {\tt turtle} functions {\tt forward} and {\tt left} you already know are also available as {\it methods} of {\tt Turtle} objects. It should also call {\tt sing}, because we want to sing one word each step.
\end{itemize}
\subsection{Instances}
Once your class is implemented, fill the two placeholders in the template script with instances of {\tt DrunkTurtle}. Experiment with the arguments and see how they affect the turtles' paths. When you run the script, something like this should unfold:
\begin{center}
\includegraphics[width=0.5\textwidth]{09_OOP/drunk_turtles.png}
\end{center}
\vspace{1em}
\begin{solution}
Check the file {\tt drunk\_turtles\_sol.py} for an example solution.
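For reference, a minimal sketch of what such a class could look like is given below (the attribute names and details are only one possibility and may differ from the solution file):
\begin{pythoncode}
import random
from turtle import Turtle

class DrunkTurtle(Turtle):
    def __init__(self, max_angle, step_size, song):
        super().__init__()
        self.max_angle = max_angle   # how much it may turn before each step
        self.step_size = step_size   # how far it walks per step
        self.song = song             # list of words to sing
        self.word_index = 0          # current position within the song

    def sing(self):
        """Write the next word of the song at the current position."""
        self.write(self.song[self.word_index])
        # wrap around so the turtle never runs out of words
        self.word_index = (self.word_index + 1) % len(self.song)

    def move(self):
        """Turn by a random angle, take one step and sing one word."""
        self.left(random.uniform(-self.max_angle, self.max_angle))
        self.forward(self.step_size)
        self.sing()
\end{pythoncode}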
\end{solution}
\section{Thinking Outside the Box: Magic Methods (20 points)}
A {\bf vector} can be described as a collection of numbers, e.g. $(3, 5, 1)$. In two-dimensional vectors (vectors which only have two elements), the first one is often referred to as $x$ and the second as $y$.
\vspace{1em}
\noindent When you {\bf add} two vectors $u$ and $v$, you again get a vector whose elements are the sum of the elements of $u$ and $v$ {\it at the same index}. For example: $(1, 4) + (3, 2) = (1+3, 4+2) = (4, 6)$. This is also called {\it element-wise} addition.
\vspace{1em}
\noindent We would like to use the magic method {\tt \_\_add\_\_} to achieve this behavior in Python. Complete the template of the class {\tt Vector2D} given below.
\begin{enumerate}
\item First, you need to implement the {\bf constructor}. It is given the $x$ and $y$ values of the vector, which we need to store in attributes.
\item Next, implement the magic method {\tt \_\_add\_\_}. Besides {\tt self}, it is given the other vector that is supposed to be added to the first one. It should {\it return} their sum, which itself is a {\tt Vector2D}.
\item Lastly, we want to be able to print a {\tt Vector2D}, just like we can print tuples or lists. For this, the {\tt \_\_str\_\_} method is responsible - it gets called whenever Python tries to convert an object to a string. It should return a string representing the object in some way.
\end{enumerate}
\begin{pythoncode}
class Vector2D:
def __init__(self, x, y):
# YOUR CODE HERE
def __add__(self, other_vector):
# YOUR CODE HERE
# ADD DUNDER METHOD __str__ HERE
# testing
u = Vector2D(12, 36)
v = Vector2D(42, 43)
print(u + v)
\end{pythoncode}
\noindent {\bf Intended output (may vary depending on your representation):}
\begin{outputcode}
Vector2D(54, 79)
\end{outputcode}
\begin{solution}
\begin{pythoncode}
class Vector2D:
def __init__(self, x, y):
self.x = x
self.y = y
def __add__(self, other_vector):
"""Element-wise addition of two vectors"""
return Vector2D(self.x + other_vector.x, self.y + other_vector.y)
def __str__(self):
"""String representation of vector: Vector2D(x, y)"""
return "Vector2D({}, {})".format(self.x, self.y)
\end{pythoncode}
\end{solution}
\end{document}
| {
"alphanum_fraction": 0.7120310692,
"avg_line_length": 45.6679245283,
"ext": "tex",
"hexsha": "89534271dd910f395674970f863c5ca15f7689a2",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-03-20T14:26:28.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-03-20T14:26:28.000Z",
"max_forks_repo_head_hexsha": "e8cabf0e4ac01ab3d97eecee5e699139076d6544",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "lfrommelt/monty",
"max_forks_repo_path": "2019/09_OOP/09_OOP_Ex.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e8cabf0e4ac01ab3d97eecee5e699139076d6544",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "lfrommelt/monty",
"max_issues_repo_path": "2019/09_OOP/09_OOP_Ex.tex",
"max_line_length": 526,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "e8cabf0e4ac01ab3d97eecee5e699139076d6544",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "lfrommelt/monty",
"max_stars_repo_path": "2019/09_OOP/09_OOP_Ex.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3109,
"size": 12102
} |
% \newcommand\rectyp{\FinTy{\tnat}}
\newcommand\factyp{\typ}
\section{Implementation in \toolname}\label{sec:haskell}
We have implemented \declang in \toolname.
%
In \S~\ref{sec:termination} we saw real-world termination checks.
%
Here we claim soundness of \toolname's termination checker,
as the checker is derived from the transition from \declang to \lhaskell.
\subsection{Termination}
%Let ${\factyp \defeq \tfunbasic{\FinTy{\tnat}}{\FinTy{\tnat}}}$.
Haskell's recursive functions of type ${\tfunbasic{\FinTy{\tnat}}{\typ}}$
are represented in GHC's Core \cite{SulzmannCJD07} as
${\mathtt{let\ rec}\ f = \ \efun{n}{}{e}}$, which is operationally
equivalent to ${\mathtt{let}\ f = \ \etfix{\typ}\ (\efun{n}{}{\efun{f}{}{e}})}$.
Given the type of $\etfix{\typ}$, checking that $f$ has
type $\tfunbasic{\FinTy{\tnat}}{\typ}$ reduces to checking
$e$ in a \emph{termination-weakened environment} where
$$\tbind{f}{\tfunbasic{\tref{v}{\tnat}{\finite}{v < n}}{\typ}}$$
%
Thus, \toolname proves termination just as \declang
does: by checking the body in the above environment,
where the recursive binder is called with $\tnat$
inputs that are strictly smaller than $n$.
\mypara{Default Metric}
For example, \toolname proves that
%
\begin{code}
fac n = if n == 0 then 1 else n * fac (n-1)
\end{code}
%
has type $\tfunbasic{\FinTy{\tnat}}{\FinTy{\tnat}}$
by typechecking the body of @fac@
in a termination-weakened environment
${\mathtt{fac}\ : \tfunbasic{\tref{\ttv}{\tnat}{\finite}{\ttv < \ttn}}{\FinTy{\tnat}}}$.
The recursive call generates the query:
\begin{align*}
\tbind{\ttn}{\{0 \leq \ttn\}}, \lnot (\ttn = 0) \vdash_D &\ \subt{\ttref{v=n-1}}{\ttref{0 \leq v \wedge v < n}}\\
%\tlref{n}{\tint}{\finite}{0 \leq n}, \lnot (n = 0) \vdash_D &\ \subt{\ttref{v=n-1}}{\ttref{0 \leq v \wedge v < n}}\\
\intertext{Which reduces to the valid VC:}
0 \leq \ttn \wedge \lnot (\ttn = 0) \Rightarrow &\ (\ttv = \ttn-1) \Rightarrow (0 \leq \ttv \wedge \ttv < \ttn)
\end{align*}
%
proving that $\mathtt{fac}$ terminates, in essence because the
\emph{first parameter} forms a \emph{well-founded decreasing metric}.
\mypara{Refinements Enable Termination}
Consider Euclid's GCD:
%
\begin{code}
gcd :: a:Nat -> {v:Nat | v < a} -> Nat
gcd a 0 = a
gcd a b = gcd b (a `mod` b)
\end{code}
%
Here, the first parameter is decreasing, but this requires
the fact that the second parameter is smaller than the first
and that @mod@ returns results smaller than its second
parameter. Both facts are easily expressed as refinements,
but elude non-extensible checkers~\cite{Giesl11}.
\mypara{Explicit Termination Metrics}
The indexed-fixpoint combinator technique is easily extended to
cases where some parameter \emph{other} than the first is the
well-founded metric. For example, consider:
% As an example, consider the tail-recursive factorial:
%
\begin{code}
tfac :: Nat -> n:Nat -> Nat / [n]
tfac x n | n == 0 = x
| otherwise = tfac (n*x) (n-1)
\end{code}
%
We specify that the \emph{last parameter} is decreasing by
specifying an explicit termination metric @/ [n]@ in the
type signature.
%
\toolname \emph{desugars} the
termination metric into a new $\tnat$-valued \emph{ghost parameter} @d@
whose value is always equal to the termination metric @n@:
\begin{code}
tfac :: d:Nat -> Nat -> {n:Nat | d = n} -> Nat
tfac d x n | n == 0 = x
| otherwise = tfac (n-1) (n*x) (n-1)
\end{code}
%
Type checking, as before, checks the body in an environment where
the first argument of @tfac@ is weakened, \ie, requires proving @d > n-1@.
%
So, the system needs to know that the ghost argument @d@
represents the decreasing metric.
%
We capture this information in the type signature of @tfac@ where the \emph{last}
argument exactly specifies that @d@ is the termination metric @n@, \ie, @d = n@.
%
Note that since the termination metric can depend on any argument,
it is important to refine the last argument,
so that all arguments are in scope, with the fact that @d@ is the termination metric.
To generalize, desugaring of termination metrics proceeds as follows.
Let $f$ be a recursive function with parameters $\overline{x}$ and
termination metric $\mu(\overline{x})$. Then \toolname will
\begin{itemize}
\item add a $\tnat$-valued ghost first parameter $d$ in the definition of $f$,
\item weaken the last argument of $f$ with the refinement $d = \mu(\overline{x})$, %and
\item at each recursive call of $f\ \overline{e}$,
apply $\mu(\overline{e})$ as the first argument.
\end{itemize}
%
%%As will shall see this technique can be used
%%when the termination metric is any logical expression.
\mypara{Explicit Termination Expressions}
Let us now apply the previous technique in a function where
none of the parameters themselves decrease across recursive calls,
but there is some \emph{expression} that forms the decreasing metric.
%
%Sometimes, none of the parameters themselves decrease across recursive calls,
%but there is some \emph{expression} that forms the decreasing metric.
%
Consider @range lo hi@ (as in~\S~\ref{sec:termination}), which returns the list of
@Int@s from @lo@ to @hi@:
%
\begin{code}
range :: lo:Nat -> {hi:Nat | hi >= lo} -> [Nat] / [hi-lo]
range lo hi
| lo < hi = lo : range (lo + 1) hi
| _ = []
\end{code}
%
Here, neither parameter is decreasing (indeed, the first one
is \emph{increasing}) but @hi-lo@ decreases across each call.
%
We generalize the explicit metric specification to
\emph{expressions} like @hi-lo@. \toolname \emph{desugars} the
expression into a new $\tnat$-valued \emph{ghost parameter}
whose value is always equal to @hi-lo@, that is:
%
\begin{code}
range lo hi = go (hi-lo) lo hi
where
go :: d:Nat -> lo:Nat -> {hi:Nat | d = hi - lo} -> [Nat]
go d lo hi
     | lo < hi = lo : go (hi-(lo+1)) (lo+1) hi
| _ = []
\end{code}
%
After this, \toolname proves @go@ terminating by showing
that the first argument @d@ is a \tnat that decreases across each
recursive call (as in @fac@ and @tfac@).
\mypara{Recursion over Data Types}
The above strategy generalizes easily to functions that recurse
over (finite) data structures like arrays, lists, and trees.
In these cases, we simply use \emph{measures} to project the
structure onto \tnat, thereby reducing the verification to
the previously seen cases. For each user defined type, \eg
%
\begin{code}
data L [sz] a = N | C a (L a)
\end{code}
%
we can define a \emph{measure}
%
\begin{code}
measure sz :: L a -> Nat
sz (C x xs) = 1 + (sz xs)
sz N = 0
\end{code}
%
We prove that @map@ terminates using the type:
%
\begin{code}
map :: (a -> b) -> xs:L a -> L b / [sz xs]
map f (C x xs) = C (f x) (map f xs)
map f N = N
\end{code}
%
That is, by simply using @(sz xs)@ as the
decreasing metric.
\mypara{Generalized Metrics Over Datatypes}
Finally, in many functions there is no single argument
whose (measure) provably decreases. For example, consider:
%
\begin{code}
merge :: xs:L a -> ys:L a -> L a / [sz xs + sz ys]
merge (C x xs) (C y ys)
| x < y = x `C` (merge xs (y `C` ys))
| otherwise = y `C` (merge (x `C` xs) ys)
\end{code}
%
from the homonymous sorting routine. Here, neither parameter
decreases, but the \emph{sum} of their sizes does.
%
As before \toolname desugars the decreasing expression into
a ghost parameter and thereby proves termination (assuming,
of course, that the inputs were finite lists, \ie
$\FinTy{\mathtt{L}}\ a$).
\mypara{Automation: Default Size Measures}
Structural recursion on the first argument is a common pattern
in \lhaskell code.
%
\toolname automates termination proofs for this common case,
by allowing users to specify a \emph{size measure}
for each data type (\eg @sz@ for @L a@).
%
Now, if \emph{no} termination metric is given, by default
\toolname assumes that the \emph{first} argument whose type
has an associated size measure decreases.
%
Thus, in the above, we need not specify metrics for @fac@
or @gcd@ or @map@ as the size measure is automatically
used to prove termination.
%
This simple heuristic allows us to {automatically}
prove 67\% of recursive functions terminating.
%%% \mypara{Summary}
%%% To sum up,
%%% %
%%% \begin{itemize}
%%% \item No termination check for functions marked @lazy@,
%%% \item If no explicit termination metrice, then first
%%% argument with size measure used by default,
%%% \item Otherwise, explicit termination metric desugared
%%% into ghost @nat@ parameter that is used to prove
%%% termination.
%%% \end{itemize}
\subsection{Non-termination}
By default, \toolname checks that every function is
terminating. We show in \Sref{sec:refinedhaskell:evaluation} that
this is in fact the overwhelmingly common case in practice.
%
However, annotating a function as @lazy@ deactivates
\toolname's termination check (and marks the result as a
\Div type).
%
This allows us to check functions that are
non-terminating, and allows \toolname to prove safety
properties of programs that manipulate \emph{infinite}
data, such as streams, which arise idiomatically with
Haskell's lazy semantics.
%
For example, consider the classic @repeat@ function:
%
\begin{code}
repeat x = x `C` repeat x
\end{code}
%
We cannot use the $\etfix{}$ combinators to
represent this kind of recursion, and hence,
use the non-terminating $\efix{}$ combinator
instead.
%
In \toolname, we use the @lazy@ keyword to denote
potentially diverging definitions
defined using the non-terminating $\efix{}$ combinator.
\begin{comment}
Abstract Streams
Let us see how we can use refinements to statically
distinguish between finite and infinite streams.
The direct, \emph{global} route of using a measure
%
\begin{code}
measure inf :: L a -> Prop
inf (C x xs) = inf xs
inf N = false
\end{code}
%
to describe infinite lists is unavailable as such
a measure, and hence, the corresponding refinement
would be non-terminating.
%
Instead, we describe infinite lists in \emph{local}
fashion, by stating that each \emph{tail} is non-empty.
\mypara{Step 1: Abstract Refinements}
We can parametrize a datatype with abstract
refinements that relate sub-parts of the
structure \cite{vazou13}.
For example, we parameterize the list type as:
%
\begin{code}
data L a <p :: L a -> Prop>
= N | C a {v: L<p> a | (p v)}
\end{code}
%
which parameterizes the list with a refinement
@p@ which holds \emph{for each} tail of the list,
\ie holds for each of the second arguments to
the @C@ constructor in each sub-list.
\mypara{Step 2: Measuring Emptiness} Now, we can write a measure that
states when a list is \emph{empty}
%
\begin{code}
measure emp :: L a -> Prop
emp N = true
emp (C x xs) = false
\end{code}
%
As described in \Sref{sec:typing}, \toolname translates the
abstract refinements and measures into refined types for
@N@ and @C@.
\mypara{Step 3: Specification \& Verification}
Finally, we can use the abstract refinements and measures to
write a type alias describing a refined version of @L a@
representing infinite streams:
%
\begin{code}
type Stream a =
{xs: L <{\v -> not(emp v)}> a | not(emp xs)}
\end{code}
%
We can now type @repeat@ as:
%
\begin{code}
lazy repeat :: a -> Stream a
repeat x = x `C` repeat x
\end{code}
%
The @lazy@ keyword \emph{deactivates} termination checking, and
marks the output as a \Div type.
%
Even more interestingly, we can prove safety properties of
infinite lists, for example:
%
\begin{code}
take :: Nat -> Stream a -> L a
take 0 _ = N
take n (C x xs) = x `C` take (n-1) xs
take _ N = error "never happens"
\end{code}
%
\toolname proves, similar to the @head@ example from
\Sref{sec:refinedhaskell:overview}, that we never match a @N@ when
the input is a @Stream@.
\mypara{Finite vs. Infinite Lists}
%
Thus, the combination of refinements
and labels allows our stratified type
system to specify and verify whether
a list is finite or infinite.
%
Note that:
%
$\FinTy{\mathtt{L}}\ a$ represents
\emph{finite} lists \ie those
produced using the (inductive)
terminating fixpoint combinators,
%
$\WnfTy{\mathtt{L}}\ a$ represents
(potentially) infinite lists which
are guaranteed to reduce to values,
\ie non-diverging computations that
yield finite or infinite lists,
and
$\DivTy{\mathtt{L}}\ a$ represents
computations that may diverge or
produce a finite or infinite list.
\end{comment}
\subsection{User Specifications and Type Inference}
In program verification it is common for the user to provide a functional
specification that the code should satisfy.
In \toolname these specifications can be provided as type signatures
for @let@-bound variables.
%
Consider the typechecking rules of Figure~\ref{fig:typing}
that are used by \declang.
%
$$
\inference{
\hastype{\Gamma}{e_x}{\tau_{x}} &&
\hastype{\Gamma,\tbind{x}{\tau_x}}{e}{\tau} &&
\iswellformed{\Gamma}{\tau}
}{
\hastype{\Gamma}{\elet{x}{e_x}{e}}{\tau}
}[\rtlet]
$$
%
Note that \rtlet \emph{guesses} an appropriate type $\tau_x$
for $e_x$ and binds it to $x$ to typecheck $e$.
\toolname allows the user to specify the type $\tau_x$ for top-level bindings.
%
For every binding \elet{x}{e_x}{\dots}, if the user provides a type specification $\tau_x$,
\toolname checks using the appropriate environment
(1)~that the specified type is well-formed and
(2)~that the expression $e_x$ typechecks under the specification $\tau_x$.
%
For the other top level bindings, \ie those without user-provided specifications,
as well as all local bindings, \toolname uses the Liquid Types~\citep{LiquidPLDI08}
framework to infer refinement types, thus greatly reducing the number of annotations
required from the user.
| {
"alphanum_fraction": 0.7020973035,
"avg_line_length": 33.3761904762,
"ext": "tex",
"hexsha": "cde0e7ed06366bd852810da79afbc6a1090d24af",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2016-12-02T00:46:51.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-12-02T00:46:51.000Z",
"max_forks_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "nikivazou/thesis",
"max_forks_repo_path": "text/refinedhaskell/haskell.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "nikivazou/thesis",
"max_issues_repo_path": "text/refinedhaskell/haskell.tex",
"max_line_length": 119,
"max_stars_count": 11,
"max_stars_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "nikivazou/thesis",
"max_stars_repo_path": "text/refinedhaskell/haskell.tex",
"max_stars_repo_stars_event_max_datetime": "2021-02-20T07:04:01.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-12-02T00:46:41.000Z",
"num_tokens": 4138,
"size": 14018
} |
\section{Related Work}
\label{sec:relatedwork}
% alternatives: What is different approaches/alternative to NIDS has there been before?
%(p1: In the related work, you should maybe start with a paragraph where you introduce anomaly detection, and describe what alternatives there are. As I remember it, there are approaches that do one-class classification (and do not try to group anomalies at all), and cluster based approaches (which is what you do). As it is now, you jump directly in sentence 2 of Related Work into an explanation about different clustering methods.)
Among the different anomaly detection methods and applications, mostly semi-supervised or unsupervised approaches have previously been applied to anomaly detection problems, including one-class SVM \cite{ester96}, Bayesian methods \cite{holst12}, graph streams \cite{aggarwal01} and distance-based methods \cite{ramaswamy00}, because those approaches can handle imbalanced datasets and make use of unlabeled data, which is cheaper to obtain than labeled data.
However, when the intrusions occur collectively against a background of normal data, those approaches are not effective, since they are limited to finding point anomalies, also known as outliers.
Clustering analysis \cite{breuning00} \cite{knorr00} is therefore preferred in the context of collective anomaly detection.
%Semi-supervised or unsupervised approaches are preferred in intrusion detection \cite{chandola09} because it can handle imbalanced datasets and make use of unlabeled data which is cheaper to obtain than labeled data.
%Clustering analysis is an important tool for those approaches so also studied in the context of network anomaly detection \cite{chandola09}.
%Clustering approaches to network intrusion detection systems can be divided into two methods
%- distance-based \cite{ramaswamy00}\cite{knorr00} and density-based methods \cite{aggarwal01}\cite{breuning00}\cite{ester96}.
%In the context of intrusion detection, clustering analysis is \cite{ramaswamy00} \cite{knorr00} is an effective tool so the method presented here is also based on clustering algorithm.
%The method presented here is based on clustering analysis since it is an important tool for semi-those approaches so also studied in the context of network anomaly detection \cite{chandola09}.
% problems : In what way they differ from each other
Clustering approaches, however, have several drawbacks.
Firstly, they are hard to generalize to multi-dimensional data because the neighborhood can be arbitrary.
Secondly, a major drawback of clustering algorithms is that anomalies which should not belong to any cluster can be assigned to a larger cluster.
% because clustering algorithm force every instance to be assigned to a cluster.
Thirdly, the choice of $k$, the number of clusters, is subjective, so these methods are not directly applicable to dynamic intrusion detection.
Especially in the case studied here, where the size of the abnormal class varies, finding $k$ is very important.
% alternatives
Compared to those clustering algorithms, spectral clustering has advantages.
The spectral clustering approach solves a graph cutting problem.
It often outperforms the other clustering approaches \cite{ulrike07}, and it offers a principled way of finding $k$.
Graph cutting is the intractable optimization problem associated with clustering algorithms.
The naive graph cut formulation is NP-hard and noise-sensitive.
To diminish those weaknesses, the normalized cut \cite{jianbo00} was proposed, which handles the NP-hard problem with an approximate solution.
Jordan and Francis \cite{jordan04} and Ng, Jordan and Weiss \cite{ng01} proposed new algorithms for spectral clustering with objective functions using different normalization terms that minimize the error.
Dhillon et al. \cite{dhillon04}, and Ding and He \cite{cding04} showed that the objective function is equivalent to the weighted kernel k-means algorithm.
However, since all of these approaches divide data points into only two classes, they are sensitive to noise.
I use Shi's multiclass spectral clustering \cite{jianbo03} to alleviate the noise problem, because I found the recursive bipartition approach is not applicable to real data, which contains much noise.
% solutions
In this report, I design an intrusion detection system using spectral clustering as a promising alternative.
It uses both distance-based and density-based methods to solve the three problems stated above.
Firstly, it utilizes the EM approach to automatically learn a similarity score function that reduces the dimensionality of the raw data.
Secondly, it learns a density function for normal connections, to find anomalies within a cluster that is classified as normal.
Thirdly, it uses the top eigenvectors to find the best $k$ and finds clusters as an approximate solution to the graph cut problem. % which is the intractable optimization problem. %with an eigenvalue decomposition and graph cut algorithm.
% Remainder
The remainder of this report is structured as follows.
Section~\ref{sec:potentialanomalies} describes potential anomalies in the data set.
Section~\ref{sec:spectralclustering} describes the detail of the spectral clustering algorithm.
More details can be found in \cite{ulrike07}.
Section~\ref{sec:connectionsimilarity} describes how to train a similarity score function and a density function.
Section~\ref{sec:experiments} illustrates how the method works on the NSL-KDD data.
Section~\ref{sec:conclusion} offers concluding remarks.
%It is called a spectral clustering as well.
%Mostly spectral clustering is used in the spatial analysis such as image segmentation and their performances are generally good especially for non-convex clustering problem.
%I will utilize the EM approach to automatically learn the similarity score functions and density function for normal network connections. %before the clustering.
%This semi-supervised learning ensures that it is useful for the case of normal behaviours, and easy to bring prior knowledge.
%that multi-lcass spectral clustering with EM algorithm and density estimation.
%Although spectral clustering only requires pairwise similarity of data points, I use semi-supervised way of learning.
%it does not consider density which is.
%I uses histogram and mixture models to measure its similarity.
| {
"alphanum_fraction": 0.8138114626,
"avg_line_length": 110.1034482759,
"ext": "tex",
"hexsha": "30f66f077d86439b61c0243b8e314e6ede15f199",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-03-16T21:50:52.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-03-16T21:50:52.000Z",
"max_forks_repo_head_hexsha": "397673dc6ce978361a3fc6f2fd34879f69bc962a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "wsgan001/AnomalyDetection",
"max_forks_repo_path": "report/sections/relatedwork.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "397673dc6ce978361a3fc6f2fd34879f69bc962a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "wsgan001/AnomalyDetection",
"max_issues_repo_path": "report/sections/relatedwork.tex",
"max_line_length": 463,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "397673dc6ce978361a3fc6f2fd34879f69bc962a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "wsgan001/AnomalyDetection",
"max_stars_repo_path": "report/sections/relatedwork.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1328,
"size": 6386
} |
\chapter*{\Large Abstract} | {
"alphanum_fraction": 0.7692307692,
"avg_line_length": 26,
"ext": "tex",
"hexsha": "5a2926c9f869748626fa7e1d592b2c0aff3abe33",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e602b2aabc13603127703eb1c8c836e155c0b194",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "glupeksha/Thesis-Format",
"max_forks_repo_path": "structure/front_matter/abstract.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e602b2aabc13603127703eb1c8c836e155c0b194",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "glupeksha/Thesis-Format",
"max_issues_repo_path": "structure/front_matter/abstract.tex",
"max_line_length": 26,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "e602b2aabc13603127703eb1c8c836e155c0b194",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "upeksha1996/Thesis-Format",
"max_stars_repo_path": "structure/front_matter/abstract.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-26T00:51:34.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-01-26T00:51:34.000Z",
"num_tokens": 7,
"size": 26
} |
\documentclass[10pt,conference,compsocconf]{IEEEtran}
\usepackage{hyperref}
\usepackage{graphicx} % For figure environment
% for text in math mode
\usepackage{amsmath}
% for table
\usepackage{tabu}
\usepackage{tabularx}
\usepackage{multirow}
\begin{document}
\title{Higgs Boson Machine Learning Challenge}
\author{
Cheng Chun Lee, I Made Sanadhi Sutandi, Haziq Razali\\
\textit{School of Computer and Communication Science, EPFL}
}
\maketitle
\begin{abstract}
The discovery of the Higgs-Boson in 2012 was a major breakthrough in particle physics, and it was the result of a combined interdisciplinary effort by physicists and data scientists to better understand this particle. In this paper, we tackle the problem of identifying the Higgs-Boson through the implementation of machine learning techniques. We investigate the usage of linear models to discriminate between the Higgs-Boson (signal) and background. Coupled with data analysis, we show the efficacy of our pipeline on the Higgs-Boson dataset with promising results.
\end{abstract}
\section{Introduction}
The Higgs-Boson is the first elementary particle discovered in nature through experiments at the Large Hadron Collider. Our aim in this project is to develop a learning system capable of identifying the Higgs-Boson given its decay signature. To achieve this, we use CERN's public dataset, which consists of 250,000 observations, each with a vector of 30 features representing the decay signature of a collision event. The observations are labeled 1 for an actual event and -1 for background noise.
\section{Pre-processing}
\subsection{Data partitioning}
Initial exploratory data analysis reveals an interesting structure within the dataset: several features are tightly coupled to the value of PRI\_JET\_NUM (feature 22). The number of jet emissions, ranging from 0 to 3, dictates the availability of specific readings, with outlier values designated by the integer -999. It thus makes sense to posit that the number of jets emitted results in different feature readings. Consequently, we partitioned the dataset into 4 disjoint subsets according to the emission count, removing all missing entries including the jet number. In addition to this partitioning, we also found that observations are most probably background noise if outliers are present in DER\_MASS\_MMC (feature 1). We therefore split each subset again into two subsets based on the presence of outliers in this feature, giving us 8 subsets in total.
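For illustration, the partitioning can be sketched as follows (we assume 0-based column indices with PRI\_JET\_NUM in column 22 and DER\_MASS\_MMC in column 0; the actual implementation may differ):
\begin{verbatim}
import numpy as np

def partition(X, y):
    """Split (X, y) into 8 subsets by jet count and mass availability."""
    subsets = []
    for jets in range(4):                         # PRI_JET_NUM is 0..3
        jet_rows = X[:, 22] == jets
        for mass_known in (True, False):
            mass_rows = (X[:, 0] != -999) == mass_known
            rows = jet_rows & mass_rows
            # columns that are always -999 in this subset, and the jet
            # column itself, are dropped afterwards
            subsets.append((X[rows], y[rows]))
    return subsets                                # 8 (X, y) pairs
\end{verbatim}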
\subsection{Features Removal}
We plot a histogram of each variable, split according to the observation label (signal/background). We then find that the 'phi' variables have no distinct pattern (Figure 1). We thus suspect that these variables do not provide useful information and decide to remove all phi-related variables from the dataset.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{feature.jpg}
\caption{Histogram of the variable PRI\_lep\_phi for true observations (top) and background (bottom).}
\end{center}
\label{fig:phi}
\end{figure}
\subsection{Feature Expansion}
Next, we augment all 8 feature matrices with non-linear transformations of the raw data to increase the expressive power of our models. More specifically, for each matrix ${\textbf X_i}$ we concatenate as follows:
\vspace*{-1mm}
\begin{equation}
{\textbf X} \leftarrow \big[ \ {\textbf X} \ \vert \ {\textbf A} \ \vert \ {\textbf B} \ \vert \ {\textbf C} \ \vert \ {\textbf d} \ \vert \ {\textbf E} \ \vert \ {\textbf F} \ \big]
\end{equation}
\vspace*{-5mm}
\begin{equation}
a_{ij} = x_{ij}^2, \ \ b_{ij} = \log (1 + x_{ij}), \ \ c_{ij} = b_{ij}^{-1}, \ \ d_{i} = 1
\end{equation}
\vspace*{-5mm}
\begin{equation}
E = (X)*(X), \ \ f_{ij} = \sqrt{x_{ij}}
\end{equation}
bringing the size of each matrix from $ (N \times m_i) $ to $ (N \times (6m_i + 1))$. A drawback of feature augmentation is the increased tendency for models to overfit, which we can reduce via regularization. Note that we also append a column vector of ones to the final matrix to act as the bias term present in linear models.
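A possible numpy sketch of this augmentation is shown below; it follows equations (1)-(3) literally (taking $E$ as the element-wise product, as written) and assumes the raw entries lie in a range where the logarithm, reciprocal and square root are defined:
\begin{verbatim}
import numpy as np

def augment(X):
    """Concatenate the non-linear transformations of eqs. (1)-(3)."""
    A = X ** 2                       # element-wise squares
    B = np.log(1 + X)                # log(1 + x)
    C = 1.0 / B                      # reciprocal of B
    d = np.ones((X.shape[0], 1))     # bias column of ones
    E = X * X                        # element-wise product, as written
    F = np.sqrt(X)                   # square roots
    return np.hstack([X, A, B, C, d, E, F])    # shape N x (6m + 1)
\end{verbatim}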
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\textwidth]{box1.jpg}
\caption{Box plot distributions of our models.}
\end{center}
\label{fig:boxplot}
\end{figure*}
\subsection{Standardization}
Learning algorithms often benefit from standardization techniques that prevent any particular feature from dominating the objective function. In our implementation, we standardize our data using standard score (z-score) normalization, scaling each feature to have zero mean and unit variance.
\section{Models}
We experiment with the following linear models: Least Squares and Logistic Regression with $L_2$ regularization. All weights are initialized by sampling from a normal distribution with zero mean and unit variance. Recall from section II that the data was split into 8 subsets; our algorithm thus builds 8 separate classifiers, one trained on each subset. Since we allow 5000 iterations per subset, we use a small step size of 0.000002 in order to best compromise between stability and training time; a smaller step size ensures we oscillate close to the global minimum. We then train all models using gradient descent with a total of 5000x8 iterations. We list all parameters in Table 1. At test time, we simply process the input data as described in section II and select the appropriate classifier for discrimination.
\begin{table}[t]
\small
\begin{tabu} to \columnwidth { | X[l] | X[c] | X[0.9c] |}
\hline
\multirow{2}{*}{\textbf{Dataset}} & \multicolumn{2}{c|}{\textbf{Accuracy (\%) }} \\
\cline{2-3}
& Logistic Regression & Least Squares \\
\hline
Jet 0 with mass & 80.836 & 80.458 \\
\hline
Jet 0 without mass & 95.050 & 95.035 \\
\hline
Jet 1 with mass & 79.499 & 78.312 \\
\hline
Jet 1 without mass & 92.290 & 92.542 \\
\hline
Jet 2 with mass & 83.526 & 82.187 \\
\hline
Jet 2 without mass & 92.107 & 92.988 \\
\hline
Jet 3 with mass & 82.675 & 81.476 \\
\hline
Jet 3 without mass & 94.177 & 96.208 \\
\hline
Ensemble & 83.167 & 82.399 \\
\hline
\end{tabu}
\medskip
\caption{The accuracies of each component in the ensemble along with their aggregated scores}
\end{table}
\section{Experiments}
We evaluate the above-mentioned models on the Higgs-Boson dataset using 10-fold cross validation. Since we perform several pre-processing steps, it is reasonable to evaluate how much the overall result depends on each component. In our first experiment, we study the improvements obtained when we gradually add steps to our pre-processing pipeline. Summarized in Figure 2 are the box-plot distributions of our models, each with additional components gradually added. Note that the figures on the left and middle display the results when using only a single classifier, i.e. we do not partition the data as described in II-A. Also note that their features have been standardized taking only non-missing entries into account, with the missing entries then set to 0. The figure on the right displays the results of the complete system as discussed in previous sections.
These results reveal several interesting findings. Firstly, we see that both models trained on the standardized raw data are quite close in their prediction accuracy. The removal of all phi-related features provides a small boost in performance. The figure also clearly illustrates the improvements obtained when training the models on the augmented feature matrix, with an approximate increase of 4\% in the accuracy of both classifiers, suggesting the presence of a non-linear decision boundary. Lastly, we can observe that training an ensemble of classifiers on the 8 subsets has a positive influence on the overall performance.
We then measure the performance of each component in the ensemble and their overall accuracy, weighted by the size of the subsets. It is interesting to see that the classifiers for datasets without mass have higher accuracy, although we have already found that these datasets have a majority of their observations labeled as background.
From all the experiments, we note that the least squares classifier lags behind logistic regression in terms of accuracy. This is most likely due to the least squares cost function, which renders it less resilient against outliers. Lastly, the small deviations in accuracy also imply that the classifiers did not over-fit the training data.
\section{Conclusion}
We have presented a technique for identifying the Higgs-Boson. Our system employs an ensemble of 8 classifiers that heavily relies on feature processing. We saw in the previous section that the largest improvement is obtained when we map the features into a higher dimensional space. Possible extensions can thus incorporate the use of highly non-linear classifiers such as neural networks. The inclusion of domain specific feature engineering could also prove worthwhile.
\bibliographystyle{IEEEtran}
\bibliography{literature}
\end{document}
| {
"alphanum_fraction": 0.7730496454,
"avg_line_length": 70.5,
"ext": "tex",
"hexsha": "c5b013c6e8ac68ae348c4fbdda9639e44cb7184e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d7ece9e4666bb7aef230cc482e8226eb4942d256",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "sanadhis/IST_ML_Project1",
"max_forks_repo_path": "report/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d7ece9e4666bb7aef230cc482e8226eb4942d256",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "sanadhis/IST_ML_Project1",
"max_issues_repo_path": "report/main.tex",
"max_line_length": 914,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d7ece9e4666bb7aef230cc482e8226eb4942d256",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "sanadhis/IST_ML_Project1",
"max_stars_repo_path": "report/main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2189,
"size": 9165
} |
\documentclass{article}
\title{College. Where should I apply? \\ An interactive app to help at-risk students decide}
\author{Liang Hao, Bret Hart, Andrew Shibata, Gary Nguyen}
\date{\today}
\usepackage{Sweave}
\begin{document}
\input{00-abstract-concordance}
\maketitle
\section{Abstract}
This project aims to design and build a website for an NGO in order to provide minority students with a portfolio of colleges which would best serve their needs: a high-quality, low-cost education.
\end{document}
| {
"alphanum_fraction": 0.7562862669,
"avg_line_length": 32.3125,
"ext": "tex",
"hexsha": "79abf662388b52743db8b721d2d6d9d26451e208",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b311f79d2c251ff7d39629ad1544859b6069aeec",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "haoliangsky1/stat159-fall2016-proj3",
"max_forks_repo_path": "report/00-abstract.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b311f79d2c251ff7d39629ad1544859b6069aeec",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "haoliangsky1/stat159-fall2016-proj3",
"max_issues_repo_path": "report/00-abstract.tex",
"max_line_length": 193,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "b311f79d2c251ff7d39629ad1544859b6069aeec",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "haoliangsky1/stat159-fall2016-proj3",
"max_stars_repo_path": "report/00-abstract.tex",
"max_stars_repo_stars_event_max_datetime": "2017-09-20T02:51:14.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-11-11T03:26:09.000Z",
"num_tokens": 127,
"size": 517
} |
\section{History}
\subsection{Documentation History}
This is the first edition of the \toarutitle. Writing of this documentation began on March 17th, 2011. This section is reserved to document future changes to this documentation.
\subsubsection{Revision History}
\begin{itemize}
\item 2011-03-17 \emph{Documentation work begins}
\item 2011-12-25 \emph{Actually bothered to expand the manual}
\end{itemize}
\subsection{Kernel History}
For a complete history of the ToAruOS Kernel, please reference the git commit logs.
\subsubsection{Revision History}
\begin{itemize}
\item 2011-01-15 \emph{Initial commit}
\item 2011-01-20 \emph{Memory paging}
\item 2011-01-28 \emph{EXT2 in-memory read support}
\item 2011-02-04 \emph{Ramdisk moved to \texttt{genext2fs}}
\item 2011-02-07 \emph{Kernel debug shell}
\item 2011-02-10 \emph{Moved to clang for compilation}
\item 2011-02-19 \emph{Serial console}
\item 2011-12-25 \emph{VESA support}
\end{itemize}
\subsection{Userspace History}
No userspace applications or libraries currently exist.
| {
"alphanum_fraction": 0.7548746518,
"avg_line_length": 37.1379310345,
"ext": "tex",
"hexsha": "d6f558eb1f2cbc59897c9d15c8630a134a68291d",
"lang": "TeX",
"max_forks_count": 16,
"max_forks_repo_forks_event_max_datetime": "2022-01-17T03:48:22.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-09-09T05:30:54.000Z",
"max_forks_repo_head_hexsha": "a125d2dbb9260370d2e9f712bd17b067ca97afb5",
"max_forks_repo_licenses": [
"NCSA",
"Unlicense",
"MIT"
],
"max_forks_repo_name": "bodgergely/osdev",
"max_forks_repo_path": "docs/history.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a125d2dbb9260370d2e9f712bd17b067ca97afb5",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"NCSA",
"Unlicense",
"MIT"
],
"max_issues_repo_name": "bodgergely/osdev",
"max_issues_repo_path": "docs/history.tex",
"max_line_length": 177,
"max_stars_count": 52,
"max_stars_repo_head_hexsha": "9b745b5de0ba198d3ba245de62453012f346d5af",
"max_stars_repo_licenses": [
"MIT",
"NCSA",
"Unlicense"
],
"max_stars_repo_name": "stevej/osdev",
"max_stars_repo_path": "docs/history.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-19T17:20:29.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-29T17:20:29.000Z",
"num_tokens": 324,
"size": 1077
} |
% mainfile: ../../../../master.tex
\subsection{Building an Author Dictionary}
% The part of the label after the colon must match the file name. Otherwise,
% conditional compilation based on task labels does NOT work.
\label{task:20140822_jkn0}
\tags{development,author}
\authors{jkn}
\files{newBuild.py,authorDictionary.tex}
%\persons{}
The author dictionary is now built and placed in the {\tt buildFiles} folder. Moreover, the author names are now shown in the margin of the diary and in the index. If an author name is clicked, an email can be written to him or her.
| {
"alphanum_fraction": 0.7649122807,
"avg_line_length": 51.8181818182,
"ext": "tex",
"hexsha": "26e671b2362de3630f9663d50abcd8bd30dac484",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2021-03-22T11:33:57.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-06-18T10:42:59.000Z",
"max_forks_repo_head_hexsha": "460869ca28c5515b28e7a1c3a44c61e375fd96c0",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "jkjaer/latexResearchDiary",
"max_forks_repo_path": "entries/2014/08/22/20140822_jkn0.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "460869ca28c5515b28e7a1c3a44c61e375fd96c0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "jkjaer/latexResearchDiary",
"max_issues_repo_path": "entries/2014/08/22/20140822_jkn0.tex",
"max_line_length": 232,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "460869ca28c5515b28e7a1c3a44c61e375fd96c0",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "jkjaer/latexResearchDiary",
"max_stars_repo_path": "entries/2014/08/22/20140822_jkn0.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-29T12:17:34.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-05-07T07:52:06.000Z",
"num_tokens": 136,
"size": 570
} |
% define page layout
\setlength{\textwidth}{5.7in}
\setlength{\textheight}{8.5in}
%\setlength{\topmargin}{-0.125in}
\setlength{\oddsidemargin}{18pt}
\setlength{\evensidemargin}{18pt}
%\setlength{\columnseprule}{.4pt}
\setlength{\headheight}{19pt}
\setlength{\headsep}{18pt}
%\setlength{\footheight}{16pt}
%\setlength{\footskip}{34pt}
%\setlength{\headrulewidth}{2pt}
%\setlength{\footrulewidth}{0pt}
\renewcommand{\sectionmark}[1]{ }
\renewcommand{\subsectionmark}[1]{ }
%% FILE: holmac.tex
%% This is a LaTeX macro for formatting HOL theory generated by the latex-hol
%% library function
%% By Wai Wong on 21 May 1991
%%
% check to see if this macro file has already been loaded
\ifx\undefined\holmacs\def\holmacs{}\else\endinput\fi
% Special symbols
\def\US{\_}
\def\SH{\#}
\def\AM{\&}
\def\PC{\%}
\def\DO{\$}
\def\BS{{\tt\char'134}}
\def\PR{{\tt\char'23}}
\def\TI{{\tt\char'176}}
\def\AS{{\tt\char'52}}
\def\LE{{\tt\char'74}}
\def\BA{{\tt\char'174}}
\def\GR{{\tt\char'76}}
\def\LB{{\tt\char'133}}
\def\RB{{\tt\char'135}}
\def\CI{{\tt\char'136}}
\def\LC{{\tt\char'173}}
\def\RC{{\tt\char'175}}
\def\GOAL{\relax\ifmmode\hbox{?\kern0.2em --}\quad\else\hbox{?\kern0.2em --}\quad\fi}
\def\THM{\relax\ifmmode\vdash\else$\vdash$\fi}
\def\DEF{\relax\ifmmode\vdash\!\!\lower .5ex\hbox{{\scriptsize\sl def}}\quad%
\else$\vdash\!\!\lower.5ex\hbox{{\scriptsize\sl def}}\quad$\fi}
\def\AND{\relax\ifmmode\wedge\else$\wedge$\fi}
\def\OR{\relax\ifmmode\vee\else$\vee$\fi}
\def\IMP{\relax\ifmmode\supset\else$\supset$\fi}
\def\LONG{\relax\ifmmode\longrightarrow\else$\longrightarrow$\fi}
\def\IFF{\relax\ifmmode\Longleftrightarrow\else$\Longleftrightarrow$\fi}
\def\LEE{\relax\ifmmode\leq\else$\leq$\fi}
\def\GEE{\relax\ifmmode\geq\else$\geq$\fi}
\def\EXISTSUNIQUE{\relax\ifmmode\exists\forall\else$\exists\forall$\fi}
\def\LES{\relax\ifmmode<\else$<$\fi}
\def\GRE{\relax\ifmmode>\else$>$\fi}
\def\MUL{\relax\ifmmode\times\else$\times$\fi}
\def\NOT{\relax\ifmmode\neg\else{$\neg$}\fi}
\def\FORALL{\relax\ifmmode\forall\else$\forall$\fi}
\def\EXISTS{\relax\ifmmode\exists\else$\exists$\fi}
\def\SELECT{\relax\ifmmode\varepsilon\else$\varepsilon$\fi}
\def\FUNCOM{\relax\ifmmode\circ\else$\circ$\fi}
\def\LAMBDA{\relax\ifmmode\lambda\else$\lambda$\fi}
\def\RESDOT{\relax\ifmmode::\else$::$\fi}
\def\DOT{\relax\ifmmode.\,\else$.\,$\fi}
\def\NIL{\relax\ifmmode[\:]\else$[\:]$\fi}
\def\EMPTYSET{\relax\ifmmode{\{\:\}}\else{$\{\:\}$}\fi}
\def\BEGINSET{\relax\ifmmode\left\{\else{$\left\{\right.$}\fi}
\def\ENDSET{\relax\ifmmode\right\}\else{$\left.\right\}$}\fi}
\def\SUCHTHAT{\relax\ifmmode\mid\else$\mid$\fi}
%\def\makeulactive{\catcode`\_=\active\relax}
%\def\makeulsub{\catcode`\_=8\relax}
%\let\ul=\_
%\begingroup
% \makeulactive
% \gdef\_{\ul}
% \gdef_{\ul}
%\endgroup
%\def\dotoken#1#2{\mbox{#1#2}\endgroup}
%\def\mlname{\begingroup\makeulactive\dotoken{\tt}}
%\def\CONST{\begingroup\makeulactive\dotoken{\constfont}}
%\def\KEYWD{\begingroup\makeulactive\dotoken{\keyfont}}
\def\HOL{{\sc HOL}}
% Tells TeX to break line after binary operators.
\def\setholprec{\relpenalty=10000 \binoppenalty=9999 \raggedright}
% The parents and types sections are set in ttlist environment
\newenvironment{ttlist}{\begingroup\typefont}{\endgroup}
% The constants and infix sections are set in typelist environment.
% The labels are left justified.
\def\labeledlabel#1{#1\hfil}
\newenvironment{typelist}%
{\begin{list}{\ }%
{\typefont \setlength{\leftmargin}{0.8in}%
\setlength{\labelwidth}{0.7in}\setlength{\labelsep}{0.1in}%
\renewcommand{\makelabel}{\labeledlabel}}}%
{\end{list}}
% The Axioms, definitions and theorems sections are set in thmlist environment
\newenvironment{thmlist}
{\begin{list}{\ }%
{\setholprec \setlength{\leftmargin}{0.8in}%
\setlength{\labelwidth}{0.7in}\setlength{\labelsep}{0.1in}%
\renewcommand{\makelabel}{\labeledlabel}}}%
{\end{list}}
\def\monthname{\ifcase\month
\or Jan\or Feb\or Mar\or Apr\or May\or Jun%
\or Jul\or Aug\or Sep\or Oct\or Nov\or Dec\fi}%
\def\timestring{\begingroup
\count0 = \time \divide\count0 by 60
\count2 = \count0 % the hour
\count4 = \time \multiply\count0 by 60
\advance\count4 by -\count0 % the minute
\ifnum\count4<10 \toks1 = {0}% get a leading zero
\else \toks1 = {}%
\fi
\ifnum\count2<12 \toks0 = {a.m.}%
\else \toks0 = {p.m.}%
\advance\count2 by -12
\fi
\ifnum\count2=0 \count2 = 12 \fi
\number\count2:\the\toks1 \number\count4
\thinspace \the\toks0
\endgroup}%
\def\timestamp{\number\day\space\monthname\space
\number\year\quad\timestring}%
\def\printtimestamp{Printed at \timestring\space%
on \number\day\space\monthname\space\number\year.}
% The following commands are generated by latex_theory_to
% They may be redefined to suit your style.
% \sec{section_name} marks the sections within a theory, e.g., Types, Theorems.
% \theory{theory_name} marks the beginning of the theory, if the file is
% generated for being included in other LaTeX files.
% \endthy{theory_name} marks the end of the theory.
%
\def\sec#1{\section*{#1}\noindent}
\def\theory#1{\section{#1}}
\def\endthy#1{\hbox to\hsize{\hrulefill\ End of theory {\tt#1}\ \hrulefill}}
\def\typefont{\tt} % for types, ML identifiers
\def\constfont{\sf} % for HOL constants
\def\keyfont{\bf} % for keywords
\makeatletter
\def\verb{\begingroup \catcode``=13 \@noligs
\verbatim@font \let\do\@makeother \dospecials
\@ifstar{\@sverb}{\@verb}}
% Definitions of \@sverb and \@verb changed so \verb+ foo+ does not lose
% leading blanks when it comes at the beginning of a line.
% Change made 24 May 89. Suggested by Frank Mittelbach and Rainer Sch\"opf.
%
\def\@sverb#1{\def\@tempa ##1#1{\leavevmode\null##1\endgroup}\@tempa}
\def\@verb{\@vobeyspaces \frenchspacing \@sverb}
\def\wordn{\verb|:word|$n$}
\def\word{\@ifnextchar[{\@word}{\@word[*]}}
\def\@word[#1]{\verb|:(#1)word|}
\def\sect{\@startsection {subsection}{1}{\z@}{-3.5ex plus -1ex minus
-.2ex}{1.5ex plus .2ex}{\normalsize\bf}}
\def\subsect{\@startsection {subsubsection}{2}{\z@}{-3.5ex plus -1ex minus
-.2ex}{-1em}{\footnotesize\bf}}
\def\inputmlfile#1{\begingroup \footnotesize \input#1 \endgroup}
\renewenvironment{theindex}{\begin{multicols}{2}[\section*{\indexname}]%
\columnseprule \z@ \columnsep 35\p@
\parindent\z@ \parskip\z@ plus.3\p@\relax\let\item\@idxitem}{\end{multicols}}
\def\@idxitem{\par\hangindent 40\p@}
\def\subitem{\par\hangindent 40\p@ \hspace*{20\p@}}
\def\subsubitem{\par\hangindent 40\p@ \hspace*{30\p@}}
\def\indexspace{\par \vskip 10\p@ plus5\p@ minus3\p@\relax}
\makeatother
\def\NBWORD#1#2{\CONST{NBWORD}\,\CONST{#1}\,\CONST{#2}}
\def\SEG#1#2#3{\CONST{SEG}\,\CONST{#1}\,\CONST{#2}\,#3}
\def\T{\CONST{T}}
\def\dotoken#1#2{\mbox{#1#2}\endgroup}
\def\idxname#1{\begingroup\makeulother\dotoken#1}
\def\dotokenidx#1#2{\mbox{#1#2}\index{#2@\string\idxname{#1}{#2}}\endgroup}
\def\mlname{\begingroup\makeulother\dotokenidx{\tt}}
\def\CONST{\begingroup\makeulother\dotokenidx{\constfont}}
\def\KEYWD{\begingroup\makeulother\dotoken{\keyfont}}
\def\idxmlname{\begingroup\makeulother\dotoken{\tt}}
\def\idxconst{\begingroup\makeulother\dotoken{\constfont}}
\def\ul#1{\relax\ifmmode\underline#1\else$\underline{#1}$\fi}
% define environment for HOL definitions and theorems
\def\makeulother{\catcode`\_=12\relax}
\def\makeulsub{\catcode`\_=8\relax}
%\begingroup
% \makeulother
% \gdef\_{\ul}
%\endgroup
\def\doholdef#1{\par\vspace*{5pt}\index{#1@\string\idxmlname{#1}|ul}%
\flushleft{\bf HOL Definition }({\tt#1})\label{def-#1}\endgroup}
\def\holdef{\begingroup\makeulother\doholdef}
\let\endholdef=\endflushleft
\def\doholthm#1{\par\index{#1@\string\idxmlname{#1}|ul}%
\flushleft{\bf HOL Theorem }({\tt#1})\label{thm-#1}\endgroup}
\def\holthm{\begingroup\makeulother\doholthm}
\let\endholthm=\endflushleft
\def\sfc{\sf} \def\constfont{\sfc}
| {
"alphanum_fraction": 0.6938878387,
"avg_line_length": 34.9559471366,
"ext": "tex",
"hexsha": "3ff2eaf9cd4cab3794046c9da9611b5de08f0fb0",
"lang": "TeX",
"max_forks_count": 126,
"max_forks_repo_forks_event_max_datetime": "2022-03-26T00:42:55.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-02-17T03:20:30.000Z",
"max_forks_repo_head_hexsha": "3b1931c130fcab243da332adb2c1413c42c59cf9",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "dwRchyngqxs/HOL",
"max_forks_repo_path": "src/res_quan/Manual/summacs.tex",
"max_issues_count": 759,
"max_issues_repo_head_hexsha": "3b1931c130fcab243da332adb2c1413c42c59cf9",
"max_issues_repo_issues_event_max_datetime": "2022-03-31T17:33:39.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-01-01T00:40:01.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "dwRchyngqxs/HOL",
"max_issues_repo_path": "src/res_quan/Manual/summacs.tex",
"max_line_length": 84,
"max_stars_count": 492,
"max_stars_repo_head_hexsha": "3b1931c130fcab243da332adb2c1413c42c59cf9",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "dwRchyngqxs/HOL",
"max_stars_repo_path": "src/res_quan/Manual/summacs.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-27T22:18:48.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-07T16:36:19.000Z",
"num_tokens": 3086,
"size": 7935
} |
\chapter{HyCube Java library}
\label{sec:library}
This chapter describes the \emph{HyCube} library - Java implementation. The library (a jar file) consists of core classes and interfaces (generic - allowing creating different implementations of a DHT), and \emph{HyCube}-specific classes, implementing the algorithms used by a \emph{HyCube} node. The jar file contains the compatibility information (Java version) and a default configuration file (\emph{hycube-default.cfg}). Individual configuration properties may be overwritten in the application-specific configuration files. The chapter presents the API, the architecture of the library and describes configuration of individual modules. The following sections are supposed to provide an overview and explain the main concepts being used. Details of individual classes/interfaces are provided in the \emph{Javadoc} documentation attached to the library.
\section{The API}
The library API is provided by several node service classes:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item \emph{HyCubeSingleQueueNodeService}, \emph{HyCubeSingleQueueNodeServiceNonWakeable}
\item \emph{HyCubeMultiQueueNodeService}, \emph{HyCubeMultiQueueNodeServiceNonWakeable}
\item \emph{HyCubeSchedulingMultiQueueNodeService}
\item \emph{HyCubeSimpleNodeService}, \emph{HyCubeSimpleSchedulingNodeService}
\end{itemize}
Node services should be considered as the main API entry point, unless certain customizations are required, in which case other publicly accessible classes may be used. All node services implement the \emph{HyCubeNodeService} interface (defining operations performed in the node context). Individual node services realize various approaches for event processing and managing threads. Because individual node services manage threads differently, they expect different configuration parameters - specified in the configuration file, as well as parameters passed to the node service initializer method call (\emph{initialize}), and they may define additional service-specific methods (in addition to the \emph{HyCubeNodeService} interface methods).
This section focuses on the \emph{HyCubeNodeService} interface and the \emph{HyCubeSimpleNodeService} node service implementation, which may be used for the majority of applications. Other node services are described in detail in Section \ref{sec:libNodeServices}. \emph{HyCubeSimpleNodeService} is a node service that automatically determines the number of threads needed for processing events, based on the parameters specified upon node service instance creation (\emph{initialize} method), described in Table \ref{tab:libHyCubeSimpleNodeServiceInitializeParams}.
\begin{table}[H]
\begin{center}
\scriptsize
\begin{tabular}{p{6.0cm} p{9.0cm}}
\hline
\textbf{Method} & \textbf{Description} \\[1mm]
\hline
\textbf{\emph{Environment} environment} & The environment object represents the external environment, defines the time provider and contains node configuration read from the configuration file (details in Sections \ref{sec:libEnvironment}, \ref{sec:libTimeProvider} and \ref{sec:libConfiguration}). In the simplest case, the \emph{DirectEnvironment} class instance (providing the system clock time provider and scheduler) and the default parameter file may be used (used by default when the configuration file is not specified in the \emph{DirectEnvironment.initialize} method). \\[1.5mm]
\textbf{\emph{String} nodeIdString / \emph{NodeId} nodeId} & The node ID - a \emph{String} representation or a \emph{NodeId} object (a \emph{HyCubeNodeId} instance is expected with the default configuration). \\[1.5mm]
\textbf{\emph{String} bootstrapNodeAddress} & The bootstrap node network address. With the default configuration, the UDP/IP protocol is used, and the address format is: \emph{IP\_ADDRESS:PORT}. If the \emph{null} value is provided, the node does not perform the JOIN procedure and forms a DHT containing only itself (other nodes may connect to it). \\[1.5mm]
\textbf{\emph{JoinCallback} joinCallback} & The callback object that is notified (by calling the \emph{joinReturned} method) when the JOIN operation terminates (the node joined the DHT). In most cases, the use of an instance of \emph{JoinWaitCallback} (providing blocking waiting for the JOIN to finish) is sufficient. \\[1.5mm]
\textbf{\emph{Object} callbackArg} & An argument that will be passed to the \emph{JoinCallback.joinReturned} method. \\[1.5mm]
\textbf{\emph{int} blockingExtEventsNum} & The number of custom blocking events processed by nodes (details in Section \ref{sec:libNodeServices}). For the default configuration, the value 0 should be specified (this is also the default value used in case the value is not specified). \\[1.5mm]
\textbf{\emph{boolean} wakeup} & A flag determining whether blocking events should be interrupted when non-blocking operations are enqueued to be processed. The mechanism is described in detail in Section \ref{sec:libWakeables}. The value \emph{true} may be specified by default, in which case the node may be served by a single thread. Otherwise, at least two threads should be defined, as one of the threads would be blocked most of the time, waiting for incoming messages. \\[1.5mm]
\textbf{\emph{EventProcessingErrorCallback} errorCallback} & Specifies the error callback object that will be notified (\emph{errorOccurred} method called) when a critical internal error occurs in a thread different from the API caller thread. When such an error is raised, further processing should be terminated and the node instance should be discarded. \\[1.5mm]
\textbf{\emph{Object} errorCallbackArg} & An object passed to the \emph{EventProcessingErrorCallback.errorOccurred} method when an error is raised. \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{\emph{HyCubeSimpleNodeService.initialize} method parameters}
\label{tab:libHyCubeSimpleNodeServiceInitializeParams}
\end{table}
A node service represents a DHT node (connected to the DHT system), and the operations defined by \emph{HyCubeNodeService} interface may be performed in the node context (Table \ref{tab:libDescHyCubeNodeService}).
%\begin{table}
\begin{center}
\scriptsize
\begin{longtable}{p{6.5cm} p{8.5cm}}
\hline
\textbf{Method} & \textbf{Description} \\[1mm]
\hline
\textbf{\emph{Node} getNode} & Returns the \emph{Node} class instance, representing the node \\[1.5mm]
\textbf{\emph{NetworkNodePointer} createNetworkNodePointer} & Creates a network node pointer object from its string representation \\[1.5mm]
\textbf{\emph{void} setPublicAddress} & Sets the node's public network address (in case the network address translation is used) - this address will be exposed to other nodes in messages \\[1.5mm]
\textbf{\emph{MessageSendInfo} send} & Sends a message to the specified recipient from the local source port to the recipient's destination port. The method expects the following arguments: the message recipient ID (\emph{NodeId}), optional direct recipient (\emph{String} or \emph{NetworkNodePointer} representation of the network address), the message data (\emph{byte[]}), the ack callback object (\emph{AckCallback}) notified when the delivery confirmation is received, optional routing parameters (\emph{Object[]}) described in Section \ref{sec:libRoutingManager}, and a boolean flag indicating whether the call should block or whether sending the message should be enqueued and performed in the background. The recipient ID, recipient network address, source port, destination port and message data may be aggregated in a \emph{DataMessage} class instance. \\[1.5mm]
\textbf{\emph{LookupCallback} lookup} & Initiates the node lookup procedure, for the specified node ID. The method arguments include the lookup node ID (\emph{NodeId}), the lookup callback object (\emph{LookupCallback}) notified when the lookup procedure terminates, passing the result of the operation to the \emph{lookupReturned} method, a callback argument (\emph{Object}) that will be passed to the \emph{lookupReturned} method call, and optional lookup parameters (\emph{Object[]}) described in Section \ref{sec:libLookupManager}. \\[1.5mm]
\textbf{\emph{SearchCallback} search} & Initiates the search procedure, for the given number of closest nodes to the specified node ID. The method arguments include the search node ID (\emph{NodeId}), the search callback object (\emph{SearchCallback}) notified when the search procedure terminates, passing the result of the operation to the \emph{searchReturned} method, a callback argument (\emph{Object}) that will be passed to the \emph{searchReturned} method call, optional set of initial node pointers (\emph{NodePointer[]}) to which initial search requests should be sent, an optional flag indicating whether the exact match node should NOT be returned by any intermediate node (default value \emph{false}), and optional search parameters (\emph{Object[]}) described in Section \ref{sec:libSearchManager}. \\[1.5mm]
\textbf{\emph{PutCallback} put} & Initiates the PUT operation, storing a resource in the DHT. The method arguments include the message recipient (\emph{NodePointer}) - the argument is optional (if not specified, the message will be routed), the resource key (\emph{BigInteger}), the resource object (\emph{HyCubeResource}), a callback object (\emph{PutCallback}) notified when the PUT operation is finished, passing the status of the operation to the \emph{putReturned} method call, and optional put parameters (\emph{Object[]}) described in Section \ref{sec:libDHTManager}. \\[1.5mm]
\textbf{\emph{RefreshPutCallback} refreshPut} & Initiates the REFRESH\_PUT operation, refreshing the validity time of the resource in the DHT. The method arguments include the message recipient (\emph{NodePointer}) - the argument is optional (if not specified, the message will be routed), the resource key (\emph{BigInteger}), the resource descriptor (\emph{HyCubeResourceDescriptor}), a callback object (\emph{RefreshPutCallback}) notified when the REFRESH\_PUT operation is finished, passing the status of the operation to the \emph{refreshPutReturned} method call, and optional parameters (\emph{Object[]}) described in Section \ref{sec:libDHTManager}. \\[1.5mm]
\textbf{\emph{GetCallback} get} & Initiates the GET operation, retrieving resources from the DHT. The method arguments include the message recipient (\emph{NodePointer}) - the argument is optional (if not specified, the message will be routed), the resource key (\emph{BigInteger}), the get criteria (\emph{HyCubeResourceDescriptor}), a callback object (\emph{GetCallback}) notified when the GET operation is finished, passing the results (resources) to the \emph{getReturned} method call, and optional get parameters (\emph{Object[]}) described in Section \ref{sec:libDHTManager}. \\[1.5mm]
\textbf{\emph{DeleteCallback} delete} & Initiates the DELETE operation, deleting a resource from a node in the DHT. The method arguments include the message recipient (\emph{NodePointer}), the resource key (\emph{BigInteger}), the delete criteria (\emph{HyCubeResourceDescriptor}), a callback object (\emph{DeleteCallback}) notified when the DELETE operation is finished, passing the status of the operation to the \emph{deleteReturned} method call, and optional delete parameters (\emph{Object[]}) described in Section \ref{sec:libDHTManager}. \\[1.5mm]
\textbf{\emph{void} join} & Performs the JOIN procedure, connecting to the specified bootstrap node, and notifying the specified join callback object when the procedure terminates \\[1.5mm]
\textbf{\emph{void} leave} & Performs the LEAVE operation - should be called before the node is disconnected from the DHT and discarded. \\[1.5mm]
\textbf{\emph{LinkedBlockingQueue<ReceivedDataMessage>} registerPort} & Registers an incoming messages port and returns a queue to which the received messages will be inserted. \\[1.5mm]
\textbf{\emph{void} registerMessageReceivedCallbackForPort} & Registers a callback object for a port (the callback object will be notified when a message is received) \\[1.5mm]
\textbf{\emph{void} unregisterMessageReceivedCallbackForPort} & Unregisters the message received callback for port \\[1.5mm]
\textbf{\emph{void} unregisterPort} & Unregisters the port \\[1.5mm]
\textbf{\emph{void} registerMessageReceivedCallback} & Registers a callback for incoming messages (used when ports are not used, depending on the configuration). The callback object is notified when a message is received. \\[1.5mm]
\textbf{\emph{void} unregisterMessageReceivedCallback} & Unregisters the message received callback (when ports are not used) \\[1.5mm]
\textbf{\emph{int} getMaxMessageLength} & Returns the maximum allowed message length \\[1.5mm]
\textbf{\emph{int} getMaxMessageDataLength} & Returns the maximum allowed message data length \\[1.5mm]
\textbf{\emph{boolean} isInitialized} & Returns a boolean value indicating whether the service instance is initialized \\[1.5mm]
\textbf{\emph{boolean} isDiscarded} & Returns a boolean value indicating whether the service instance is discarded \\[1.5mm]
\textbf{\emph{void} recover} & Explicit execution of the recovery procedure \\[1.5mm]
\textbf{\emph{void} recoverNS} & Explicit execution of the neighborhood set recovery procedure \\[1.5mm]
\textbf{\emph{void} discard} & Discards the node service instance \\[1.5mm]
\hline
%\captionsetup{justification=centering}
\caption{Operations defined by \emph{HyCubeNodeService}}
\label{tab:libDescHyCubeNodeService}
\end{longtable}
\end{center}
%\end{table}
\section{Example of a node life cycle}
The listing below presents an exemplary application creating a node instance using the \emph{SimpleNodeService} service, registering an incoming messages port, sending a test message to itself, receiving the message, leaving the system and destroying the node instance.
\begin{lstlisting}[style=listing1noindentsmall]
public static void main(String[] args) {
Environment environment = DirectEnvironment.initialize();
JoinWaitCallback joinWaitCallback = new JoinWaitCallback();
SimpleNodeService sns = HyCubeSimpleNodeService.initialize(environment,
HyCubeNodeId.generateRandomNodeId(4, 32),
"192.168.1.9:5000", "192.168.1.8:5000", joinWaitCallback, null, 0, true, null, null);
LinkedBlockingQueue<ReceivedDataMessage> inMsgQueue = sns.registerPort((short)0);
sendTestDataMessageToSelf(sns, "Test string");
ReceivedDataMessage recMsg = null;
try {
// wait (blocking) for the test message to arrive on the registered port
recMsg = inMsgQueue.take();
} catch (InterruptedException e) {}
if (recMsg != null) System.out.println("Message received!: " + new String(recMsg.getData()));
sns.leave();
sns.discard();
environment.discard();
}
public static void sendTestDataMessageToSelf(NodeService ns, String text) {
MessageAckCallback mac = new WaitMessageAckCallback() {
public void notifyDelivered(Object callbackArg) {
super.notifyDelivered(callbackArg);
System.out.println("Message DELIVERED.");
}
public void notifyUndelivered(Object callbackArg) {
super.notifyUndelivered(callbackArg);
System.out.println("Message UNDELIVERED");
}
};
byte[] data = text.getBytes();
DataMessage msg = new DataMessage(ns.getNode().getNodeId(), null, (short)0, (short)0, data);
try {
MessageSendInfo msi = ns.send(msg, mac, null);
System.out.println("Message send info - serial no: " + msi.getSerialNo());
} catch (NodeServiceException e) {}
}
\end{lstlisting}
\section{Library architecture, core library classes and their implementations}
\label{sec:libClasses}
The core of the \emph{HyCube} library is a set of configurable classes allowing implementation of any DHT algorithm. The architecture is based on exchangeable components implementing certain interfaces, realizing individual system functions. \emph{HyCube} library provides implementation of these components, realizing the \emph{HyCube} algorithms.
Figure \ref{fig:HyCubeLibraryArchitecture} presents the core of the library, and individual classes/interfaces are described in Section \ref{sec:libClasses}. Section \ref{sec:libEventProcessing} describes the realized solutions for event processing. Section \ref{sec:libNodeServices} focuses on node services, which are wrapper classes for node instances, additionally providing event processing mechanisms.
\begin{figure}
\centering
\includegraphics[trim = 10mm 10mm 10mm 10mm, clip, scale=.84]{img/HyCubeLibraryNode.pdf}
\caption{Node library architecture}
\label{fig:HyCubeLibraryArchitecture}
\end{figure}
\subsection{Node and library architecture}
The main class of the library is \emph{Node}. An instance of this class represents a DHT node and exposes the node interface to node services - operations such as \emph{join}, \emph{sendMessage}, \emph{lookup}, \emph{search}, \emph{put}, \emph{refreshPut}, \emph{get}, \emph{delete}, \emph{leave}, as well as methods registering incoming message queues (\emph{ReceivedDataMessage} instances) and callbacks (\emph{MessageReceivedCallback} instances). This class gathers the logic of all operations performed by nodes explicitly, as well as the processes taking place in the background. \emph{Node} is instantiated by calling one of the variants of its static method \emph{initializeNode}.
Most of the logic is realized by defining modules (classes) implementing certain interfaces, which are instantiated and initialized during the node initialization. Most of these modules (implementations) are defined in the configuration file (details in Section \ref{sec:libConfiguration}), but some of them are explicitly passed to the node initialization method upon creation (\emph{initialize}). Through the abstraction of individual modules, the node invokes the configured implementations both in response to API calls and as a result of internal calls.
Because individual components require full access to the node state, and often to other dependent components, a special object implementing the \emph{NodeAccessor} interface is passed to the components upon initialization (the \emph{initialize} method call made by the \emph{Node} instance). This object serves as a back-reference to the node instance, and may be used to access/modify node properties.
\subsection{Environment}
\label{sec:libEnvironment}
An object of a class extending abstract \emph{Environment} class is passed to the node initialization on creation. This object represents the external environment, defines the time provider and contains node configuration (read from the configuration file). The information passed to the \emph{Node} instance within the \emph{Environment} object may be extended and used by any module (accessed through the instance of \emph{NodeAccessor}). The default implementation of \emph{Environment} - \emph{DirectEnvironment} uses the system clock as the time provider, and reads configuration from a file (described in Section \ref{sec:libConfiguration}).
\subsection{Time provider}
\label{sec:libTimeProvider}
An object implementing \emph{TimeProvider} interface is passed to the \emph{Node} instance as a property of the \emph{Environment} object. The time provider object is expected to return a \emph{long} value, representing the current time. The interpretation of the time value returned is implementation specific. The default implementation \emph{SystemTimeProvider}, embedded in \emph{DirectEnvironment} class, uses the system clock (\emph{System.currentTimeMillis()} method) to retrieve the current time.
\emph{TimeProvider} provides also methods for scheduling tasks (instances of \emph{ScheduledTask}) for execution at a certain time in future. The default implementation uses the system clock (internally uses the \emph{ScheduledThreadPoolExecutor} class).
\subsection{Configuration}
\label{sec:libConfiguration}
Node configuration is passed to the \emph{Node} object on initialization, as a property of the \emph{Environment} object. The node properties are accessed though an object implementing the \emph{NodeProperties} interface. The methods of this interface allow access to the configuration properties for given property keys. The property keys and values are strings of characters. However, the \emph{NodeProperties} implementations should provide conversion of values to simple types defined in the \emph{ObjectToStringConverter.MappedType} enumeration, as well as conversion to enumerations (by default, the class \emph{ObjectToStringConverter} is used for the conversion). Additionally, \emph{NodeProperties} methods allow reading lists of values (of any type, including enumerations). Table \ref{tab:libConfTypes} presents the data types supported, as well as their formats.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3.5cm} p{4.0cm} p{7.0cm}}
\hline
\textbf{Corresponding Java type} & \textbf{MappedType} & \textbf{Format} \\[1mm]
\hline
\textbf{Integer} & \textbf{MappedType.INT} & A signed decimal string representation, as expected by \emph{java.lang.Integer.Integer(String s)} constructor \\[1.5mm]
\textbf{Short} & \textbf{MappedType.SHORT} & A signed decimal string representation, as expected by \emph{java.lang.Short.Short(String s)} constructor \\[1.5mm]
\textbf{Long} & \textbf{MappedType.LONG} & A signed decimal string representation, as expected by \emph{java.lang.Long.Long(String s)} constructor \\[1.5mm]
\textbf{Boolean} & \textbf{MappedType.BOOLEAN} & A string equal to \emph{true} or \emph{false}, as expected by \emph{java.lang.Boolean.Boolean(String s)} constructor \\[1.5mm]
\textbf{Float} & \textbf{MappedType.FLOAT} & A string representation of the \emph{float} value, as expected by \emph{java.lang.Float.Float(String s)} constructor \\[1.5mm]
\textbf{Double} & \textbf{MappedType.DOUBLE} & A string representation of the \emph{double} value, as expected by \emph{java.lang.Double.Double(String s)} constructor \\[1.5mm]
\textbf{Decimal} & \textbf{MappedType.DECIMAL} & A string representation of a decimal number, as expected by the constructor \emph{java.math.BigDecimal.BigDecimal(String val)} \\[1.5mm]
\textbf{BigInteger} & \textbf{MappedType.BIGINTEGER} & A decimal string representation of a big integer, as expected by the constructor \emph{ java.math.BigInteger.BigInteger(String val)} \\[1.5mm]
\textbf{Date} & \textbf{MappedType.DATE} & yyyy-MM-dd HH:mm:ss.SSS \\[1.5mm]
\textbf{String} & \textbf{MappedType.STRING} & - \\[1.5mm]
\textbf{Enum} & \textbf{Separate conversion methods are defined for enumerations} & Enumerations are converted using \emph{Enum.valueOf(Class<T>, String)} and \emph{Enum.toString()} methods. \\[1.5mm]
\textbf{List} & \textbf{MappedType.LIST} & A list of elements of any of the types described above. The list elements are, by default, separated by the comma (``,'') character \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{Configuration - property types}
\label{tab:libConfTypes}
\end{table}
The default configuration technique employed by \emph{HyCube} uses \emph{ReaderNodeProperties} class, which reads configuration using an abstract reader object. The properties are, in the standard implementation of the \emph{Environment} class (\emph{DirectEnvironment}), read from a file (standard Java properties file format), using the \emph{FileNodePropertiesReader} class (implementation of NodePropertiesReader). \emph{FileNodePropertiesReader} reads properties from two files - default properties file and the application properties file, overwriting any property value redefined in the application properties file. Both properties files are expected to be located in the classpath of the application, and, if the default properties file name is not specified, the file name ``\emph{hycube\_default.cfg}'' is used (the name of the default configuration file delivered with the library).
The configuration architecture allows defining hierarchy of properties by separating keys at individual levels by the ``.'' character. \emph{NodeProperties} contains methods returning lower-level nested \emph{NodeProperties} objects. Such an approach may be used for configuration of nested components, by passing the nested configuration (\emph{NodeProperties}) objects to nested components. For example, for the following properties:
\begin{lstlisting}[style=listing1noindent]
Node.PropertyKey1 = PropertyValue1
Node.PropertyKey2 = PropertyValue2
Node.Module1.PropertyKey3 = PropertyValue3
Node.Module1.PropertyKey4 = PropertyValue4
\end{lstlisting}
\noindent
it is possible to access ``Node.Module1.*'' properties by a nested instance of \emph{NodeProperties}, using property keys, relative to ``Node.Module1'', for example ``PropertyKey3''.
Moreover, \emph{HyCube} configuration allows definition of nested properties (keys) using a different notation:
\begin{lstlisting}[style=listing1noindent]
Node[Module1].PropertyKey3 = PropertyValue3
Node[Module1].PropertyKey4 = PropertyValue4
\end{lstlisting}
\noindent
The effect is the same as in the previous example. However, such a notation allows easier identification of properties for subcomponents, for example:
\begin{lstlisting}[style=listing1noindent]
Node.RoutingModuleKeys = Module1, Module2
Node.RoutingModules[Module1].PropertyKey1 = PropertyValue1
Node.RoutingModules[Module2].PropertyKey1 = PropertyValue1
\end{lstlisting}
\noindent
Such properties may be accessed by retrieving nested properties until the lowest-level is reached, by specifying the full property key, or by specifying the key ``Node.RoutingModules'' (possibly just ``RoutingModules'' on the nested \emph{NodeProperties} object) and specifying ``Module1'' or ``Module2'' as the ``element name''. All three methods are supported by the \emph{HyCube} configuration API.
\emph{ReaderNodeProperties} allows specifying the property determining the name space of the configuration (by default the property name is ``configuration''). Such an approach makes it very easy to switch between many configuration variants stored within the same file, for example:
\begin{lstlisting}[style=listing1noindent]
configuration = node.main
# configuration = node.simulation
node.main.Property1 = Value1
node.main. ...
node.simulation.Property1 = Value2
node.simulation. ...
\end{lstlisting}
\noindent
making it possible to switch the whole configuration by changing the value of only one property - \emph{configuration}. The \emph{Node} instance, within the \emph{Environment} object, receives the \emph{NodeProperties} instance representing a nested element defined by the configuration name space, and nested \emph{NodeProperties} instances are passed to individual components.
The character ``\#'' at the beginning of a line denotes a comment and causes such a line not to be processed by the configuration reader. It is also possible to use pointers (the @ symbol) in the configuration to point to property values (or whole nested properties sets) defined under a different key, for example:
\begin{lstlisting}[style=listing1noindent]
Node.RoutingModuleKeys = Module1, Module2, Module3
Node.RoutingModules[Module1].PropertyKey1 = @conf1.PropertyValueX
Node.RoutingModules[Module2].PropertyKey1 = @conf1.PropertyValueX
Node.RoutingModules[Module3] = @Node.RoutingModules[Module1]
conf1.PropertyValueX = PropertyValue1
\end{lstlisting}
\noindent
Whenever the symbol ``@'' is found at the beginning of the property value, the value (or the whole nested properties set) is first retrieved from the property that the ``@'' symbol points to. Such a feature may become very useful when semantically the same property is a configuration parameter of multiple modules. In such a case, it is possible to define that property once, and use pointers to set the same value in the configuration of all modules. The pointers mechanism is recursive, so multiple pointers may be used before the final value is obtained. In the above example, the property ``PropertyKey1'' for a nested \emph{NodeProperties} object (representing ``Node.RoutingModules[Module3]'') would have the value ``PropertyValue1''. Two pointers are resolved while retrieving the value. However, a pointer is resolved only when the value or the nested property retrieved is the pointer itself. The pointer ``@Node.RoutingModules[Module1]'' would not be resolved in the example above if the property ``Node.RoutingModules[Module3].PropertyKey1'' was requested on a \emph{NodeProperties} object relative to the root of the configuration.
For modules (configurable objects created dynamically), a convention has been adopted to define a key (or keys) representing the module instances and to configure their properties using the [] notation. Additionally, when the implementing class (of the module) is configurable, a property ``Class'' is introduced on the module level (defining the full class name). This property should be read by the parent module - the class name must be known to instantiate the child module. An example (configuration of the node ID factory and exemplary extensions) is presented in the listing below:
\begin{lstlisting}[style=listing1noindent, mathescape]
configuration = node
node.NodeIdFactory = HyCubeNodeIdFactory
node.NodeIdFactory[HyCubeNodeIdFactory].Class
$\hookrightarrow$ = net.hycube.core.HyCubeNodeIdFactory
node.NodeIdFactory[HyCubeNodeIdFactory].Dimensions = 4
node.NodeIdFactory[HyCubeNodeIdFactory].Levels = 32
node.Extensions = KeepAliveExtension, RecoveryExtension, ...
node.Extensions[KeepAliveExtension].Class
$\hookrightarrow$ = net.hycube.maintenance.HyCubeKeepAliveExtension
node.Extensions[KeepAliveExtension].PingInterval = 5000
...
node.Extensions[RecoveryExtension].Class
$\hookrightarrow$ = net.hycube.maintenance.HyCubeRecoveryExtension
...
\end{lstlisting}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3.5cm} p{2.5cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{MessageTTL} & \textbf{Integer} & The TTL - maximum route length \\[1.5mm]
\textbf{MessageAckEnabled} & \textbf{Boolean} & Determines whether the message delivery acknowledgments should be sent \\[1.5mm]
\textbf{DirectAck} & \textbf{Boolean} & Determines whether ACK message are sent directly to the sender or routed \\[1.5mm]
\textbf{AckTimeout} & \textbf{Integer} & The waiting time (milliseconds) for an ACK message, after which the original message is considered undelivered \\[1.5mm]
\textbf{ProcessAckInterval} & \textbf{Integer} & The schedule interval for the background process processing awaiting ACK messages \\[1.5mm]
\textbf{ResendIfNoAck} & \textbf{Boolean} & Determines whether messages should be resent when no ACK is received \\[1.5mm]
\textbf{SendRetries} & \textbf{Integer} & Determines how many times messages should be resent if they are not delivered \\[1.5mm]
\textbf{NodeIdFactory} & \textbf{Nested (+ Class)} & The configuration of the node ID factory module \\[1.5mm]
\textbf{MessageFactory} & \textbf{Nested (+ Class)} & The configuration of the message factory module \\[1.5mm]
\textbf{RoutingTable} & \textbf{Nested (+ Class)} & The configuration of the routing table structure module \\[1.5mm]
\textbf{NextHopSelectors} & \textbf{Nested (+ Class)} & The configuration of the next hop selectors. Multiple next hop selectors may be defined. \\[1.5mm]
\textbf{RoutingManager} & \textbf{Nested (+ Class)} & The configuration of the routing manager module \\[1.5mm]
\textbf{LookupManager} & \textbf{Nested (+ Class)} & The configuration of the lookup manager module \\[1.5mm]
\textbf{SearchManager} & \textbf{Nested (+ Class)} & The configuration of the search manager module \\[1.5mm]
\textbf{JoinManager} & \textbf{Nested (+ Class)} & The configuration of the join manager module \\[1.5mm]
\textbf{LeaveManager} & \textbf{Nested (+ Class)} & The configuration of the leave manager module \\[1.5mm]
\textbf{DHTManager} & \textbf{Nested (+ Class)} & The configuration of the DHT manager module \\[1.5mm]
\textbf{NotifyProcessor} & \textbf{Nested (+ Class)} & The configuration of the notify processor module \\[1.5mm]
\textbf{NetworkAdapter} & \textbf{Nested (+ Class)} & The configuration of the network adapter module \\[1.5mm]
\textbf{MessageReceiver} & \textbf{Nested (+ Class)} & The configuration of the message receiver module \\[1.5mm]
\textbf{ReceivedMessageProcessors} & \textbf{Nested (+ Class)} & The configuration of the received message processors (multiple entries) \\[1.5mm]
\textbf{MessageSendProcessors} & \textbf{Nested (+ Class)} & The configuration of the message send processors (multiple entries) \\[1.5mm]
\textbf{Extensions} & \textbf{Nested (+ Class)} & The configuration of the extension modules (multiple entries) \\[1.5mm]
\textbf{BackgroundProcesses} & \textbf{Nested (+ Class)} & The configuration of the background process modules (multiple entries) \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{Node configuration properties}
\label{tab:libPropNode}
\end{table}
After a module instance is created, as a rule, the initialization (\emph{initialize()}) method of the created object is called, passing the configuration (a nested \emph{NodeProperties} object) and the node accessor to the module. The module's \emph{discard} method should be used to release all resources maintained by the module and perform any operations necessary when the module is disconnected from the node instance. This method is called by the \emph{Node} object when the module is discarded.
The properties at the node level (used by the \emph{Node} class) are described in Table \ref{tab:libPropNode}. The properties of individual component implementations are presented in the following sections. It is also possible to define environment-level parameters by specifying the nested configuration - ``Environment'' property at the root namespace level. Table \ref{tab:libDirectEnvironment} specifies the configuration of \emph{DirectEnvironment}.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5cm} p{2cm} p{7.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Environment} & \textbf{String} & \textit{DirectEnvironment} \\[1.5mm]
\textbf{Environment[DirectEnvironment] \newline $\hookrightarrow$.SchedulerThreadPoolSize} & \textbf{Integer} & The number of threads used by the system clock scheduler. This parameter is optional - if not specified, the default value (1) is used. \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{DirectEnvironment configuration properties}
\label{tab:libDirectEnvironment}
\end{table}
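For illustration, assuming the default \emph{DirectEnvironment} implementation, the corresponding entries in the configuration file could look as follows (the thread pool size value below is chosen only as an example):
\begin{lstlisting}[style=listing1noindent]
Environment = DirectEnvironment
Environment[DirectEnvironment].SchedulerThreadPoolSize = 2
\end{lstlisting}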
\subsection{Node ID factory, Node ID}
Classes implementing \emph{NodeId} interface represent node identifiers. To allow dynamic creation of node ID instances, based on their binary or text representation, a factory class (implementing \emph{NodeIdFactory} interface) should be defined. The methods of the factory object would return the instances of the node IDs of a specific type. The implementations of classes \emph{NodeId} and \emph{NodeIdFactory} define the object-byte and byte-object conversion for the node IDs, as well as comparisons and other operations on the node IDs.
To represent the node ID in \emph{HyCube}, the class \emph{HyCubeNodeId} and the factory class \emph{HyCubeNodeIdFactory} (implementing the interfaces described) are used. \emph{HyCubeNodeIdFactory} class should be configured for a certain number of dimensions and hierarchy levels (of the hierarchical hypercube), and returns instances of \emph{HyCubeNodeId} class for the configured numbers of dimensions and hierarchy levels. The properties that should be configured for \emph{HyCubeNodeIdFactory} module are presented in Table \ref{tab:libPropHyCubeNodeIdFactory}.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3cm} p{3cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.core.HyCubeNodeIdFactory} \\[1.5mm]
\textbf{Dimensions} & \textbf{Integer} & The number of dimensions \\[1.5mm]
\textbf{Levels} & \textbf{Integer} & The number of hierarchy levels \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeNodeIdFactory configuration properties}
\label{tab:libPropHyCubeNodeIdFactory}
\end{table}
\subsection{Routing table}
Because the core of the library may be used for implementation of many different DHT systems, the structure of the routing table may vary depending on the overlay structure. The class representing the routing table should implement the \emph{RoutingTable} interface. \emph{HyCubeRoutingTableImpl} is a class implementing the \emph{RoutingTable} interface, reflecting the structure of routing tables supported by nodes in \emph{HyCube}: the primary routing table, the secondary routing table and the neighborhood set. Classes operating on the routing tables should cast the maintained routing table object to \emph{HyCubeRoutingTableImpl}, and would then be able to access the \emph{HyCube}-specific structure. The configurable properties of \emph{HyCubeRoutingTableImpl} are presented in Table \ref{tab:libPropHyCubeRoutingTableImpl}.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3cm} p{3cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.core.HyCubeRoutingTableImpl} \\[1.5mm]
\textbf{Dimensions} & \textbf{Integer} & The number of dimensions \\[1.5mm]
\textbf{Levels} & \textbf{Integer} & The number of hierarchy levels \\[1.5mm]
\textbf{NSSize} & \textbf{Integer} & The size of the neighborhood set \\[1.5mm]
\textbf{RoutingTableSlotSize} & \textbf{Integer} & The maximum number of nodes that may be stored in the routing table slot \\[1.5mm]
\textbf{UseSecureRouting} & \textbf{Boolean} & Determines whether the secure routing tables should be maintained \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeRoutingTableImpl configuration properties}
\label{tab:libPropHyCubeRoutingTableImpl}
\end{table}
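Following the module configuration convention described in Section \ref{sec:libConfiguration}, an exemplary routing table configuration could look as follows (the module key name and the parameter values below are illustrative, not library defaults):
\begin{lstlisting}[style=listing1noindent, mathescape]
node.RoutingTable = HyCubeRoutingTable
node.RoutingTable[HyCubeRoutingTable].Class
$\hookrightarrow$ = net.hycube.core.HyCubeRoutingTableImpl
node.RoutingTable[HyCubeRoutingTable].Dimensions = 4
node.RoutingTable[HyCubeRoutingTable].Levels = 32
node.RoutingTable[HyCubeRoutingTable].NSSize = 16
node.RoutingTable[HyCubeRoutingTable].RoutingTableSlotSize = 8
node.RoutingTable[HyCubeRoutingTable].UseSecureRouting = false
\end{lstlisting}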
The class \emph{RoutingTableEntry} is used to store routing table entry information (for individual references). A routing table entry contains the node pointer (\emph{NodePointer} class, which consists of the node ID and the node's network pointer - a network layer specific class implementing the \emph{NetworkNodePointer} interface). In addition to a node pointer, routing table entries store the entry creation time, the distance to the node, a reference to the routing table slot containing the entry (routing table implementation specific), and flags indicating whether the entry is enabled (used in next hop selection) or discarded (the node should be removed). Furthermore, the routing table entry contains a map of additional data (objects) managed by individual modules. The map may be used to store the LNS/PNS indicators, cached results of extensive calculations performed for nodes, and many others. The map keys used by individual modules should be unique to avoid conflicts.
\subsection{Message factory}
A configurable message factory class should be an implementation of \emph{MessageFactory} interface. The message factory object is responsible for creating message instances (instances of a class implementing \emph{Message} interface). The implementations of \emph{Message} and \emph{MessageFactory} are responsible for message creation, object-byte and byte-object conversion, as well as message header definition, and any other operations performed on the message objects.
The implementations specific to the \emph{HyCube} protocol are \emph{HyCubeMessage} and \emph{HyCubeMessageFactory}. The header contains all the fields defined in Appendix \ref{sec:protocol}. Furthermore, the \emph{HyCubeMessageFactory} may be configured to include additional header fields that may be used by modules (values are defined during object creation, or using getters/setters). The configuration parameters of \emph{HyCubeMessageFactory} are presented in Table \ref{tab:libPropHyCubeMessageFactory}.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3.5cm} p{2.5cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.messaging.messages.HyCubeMessageFactory} \\[1.5mm]
\textbf{NodeIdFactory} & \textbf{Nested (+ Class)} & The node ID factory module (used by the message factory) configuration (a pointer to the node ID factory configuration on the Node level may be used) \\[1.5mm]
\textbf{NetworkAddressByteLength} & \textbf{Integer} & The length of the network address (in bytes) \\[1.5mm]
\textbf{HeaderExtensionsCount} & \textbf{Integer} & The number of the header extensions \\[1.5mm]
\textbf{HeaderExtensionLengths} & \textbf{List of Integers} & Lengths of the header extensions \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeMessageFactory configuration properties}
\label{tab:libPropHyCubeMessageFactory}
\end{table}
\subsection{Extensions}
\label{sec:libExtenstions}
The library architecture allows defining extensions (classes implementing the \emph{Extension} interface). Extensions may be used to extend the library with any functionality or to provide additional data structures maintained by nodes that may be used by the defined modules. An object of an extension is created during node initialization and may be accessed from outside the node by the node accessor or by an entry point (Section \ref{sec:libEntryPoints}) exposed by node services (method \emph{getExtensionEntryPoint} returning an \emph{EntryPoint} object - if implemented). Extension objects are initialized (by calling the \emph{initialize} method) before initialization of other modules, to allow individual modules to configure the extensions. However, if certain additional initialization activities should be performed after initialization of modules, they should be defined in the extension's \emph{postInitialize} method. The extension's \emph{discard} method should be used to release all resources maintained by the extension and perform any operations necessary when the extension is disconnected from the node instance. This method is called by the \emph{Node} object when the extension is discarded.
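
A minimal extension might look as sketched below. The exact method signatures of the \emph{Extension} interface are simplified here (the real interface may pass configuration and node accessor arguments that are omitted in this sketch), and the entry point getter is left out:
\begin{verbatim}
// Hypothetical sketch of an Extension implementation (signatures simplified/assumed):
public class MyStatsExtension implements Extension {

    private java.util.Map<String, Long> counters;

    public void initialize() {
        // called before the other modules are initialized
        counters = new java.util.concurrent.ConcurrentHashMap<String, Long>();
    }

    public void postInitialize() {
        // additional initialization performed after all modules have been initialized
    }

    public void discard() {
        // release all resources held by the extension
        counters.clear();
    }
}
\end{verbatim}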
\subsection{Background processes}
The library, by design, allows defining background processes - procedures executed in the node context in the background. Background processes may be scheduled or may be called explicitly. A background process should be an instance of a class implementing the \emph{BackgroundProcess} interface. The interface's methods allow calling the background process, scheduling the next execution, starting/stopping scheduled execution of the process, and checking whether the process is running. Classes implementing \emph{BackgroundProcess} should also define the event type key for the background process events, and an entry point (Section \ref{sec:libEntryPoints}) allowing access to the background process through the \emph{Node} object (specifying the background process key). The \emph{discard} method of a background process should be used to release all resources maintained by the background process instance and perform any operations necessary when the background process is disconnected from the node instance. This method is called by the \emph{Node} object when the background process object is discarded. The common configuration properties of all background processes are defined in Table \ref{tab:libBackgroundProcess}.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3cm} p{3cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & The full background process class name \\[1.5mm]
\textbf{ScheduleImmediately} & \textbf{Boolean} & Determines whether the background process should be scheduled immediately after the initialization (\emph{schedule} method call) \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{BackgroundProcess configuration properties}
\label{tab:libBackgroundProcess}
\end{table}
An abstract class \emph{AbstractBackgroundProcess} implements several commonly used functions of background processes, like scheduling (configurable time interval), processing background events, starting/stopping scheduled process periodic execution, as well as returning an entry point (Section \ref{sec:libEntryPoints}) - an instance of class \emph{AbstractBackgroundProcess.BackgroundProcessEntryPointImpl}, implementing a proxy to basic operations performed on background process instances: starting, stopping, checking whether the process is started, and running the process. Most typical background processes may be defined by extending this class, in which case only one method, \emph{doProcess()} - the process logic, remains to be defined. Properties required for background processes extending the class \emph{AbstractBackgroundProcess} are defined in Table \ref{tab:libAbstractBackgroundProcess}. The set of properties may be extended by the implementing classes.
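
For instance, a periodic maintenance task could be implemented as sketched below - only the process logic is provided, while scheduling is inherited from \emph{AbstractBackgroundProcess} (the method visibility and the exact signature of \emph{doProcess} are assumptions of this sketch):
\begin{verbatim}
// A minimal background process sketch - only doProcess() (the process logic) is defined;
// scheduling is controlled by the ScheduleImmediately / ScheduleInterval properties.
public class CacheCleanupBackgroundProcess extends AbstractBackgroundProcess {

    @Override
    public void doProcess() {
        // executed on every scheduled run (or when the process is called explicitly),
        // e.g. removing stale entries from a module-specific cache
    }
}
\end{verbatim}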
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3cm} p{3cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & The full background process class name \\[1.5mm]
\textbf{ScheduleImmediately} & \textbf{Boolean} & Determines whether the background process should be scheduled immediately after the initialization (\emph{schedule} method call) \\[1.5mm]
\textbf{ScheduleInterval} & \textbf{Integer} & Schedule interval (in milliseconds) \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{AbstractBackgroundProcess configuration properties}
\label{tab:libAbstractBackgroundProcess}
\end{table}
\subsection{Entry points}
\label{sec:libEntryPoints}
Every module defined, as well as extensions and background processes, may define entry points - instances of classes implementing the \emph{EntryPoint} interface (the \emph{BackgroundProcessEntryPoint} interface for background processes). Instances of the entry points, returned by appropriate getter methods (\emph{getEntryPoint}, \emph{getBackgroundProcessEntryPoint}), are publicly accessible through the \emph{Node} object (for extensions and background processes, the methods expect the extension/process key as an argument), and may be used to access functions defined by modules and extensions, to execute background processes, or to start/stop their scheduled execution - through the \emph{EntryPoint} and \emph{BackgroundProcessEntryPoint} interfaces, or by casting to module-specific subclasses.
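
The sketch below illustrates the intended use of entry points. All method names shown (the getter on the \emph{Node} object, \emph{isStarted}, \emph{start}, \emph{process}) and the process key are assumed names, used here only for illustration of the pattern:
\begin{verbatim}
// Hypothetical sketch; the getter and entry point method names are assumed.
public void runCleanupOnce(Node node) {
    BackgroundProcessEntryPoint bp =
            node.getBackgroundProcessEntryPoint("CacheCleanupBackgroundProcess");
    if (!bp.isStarted()) {
        bp.start();      // start periodic (scheduled) execution
    }
    bp.process();        // or run the process once, explicitly
}
\end{verbatim}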
\subsection{Next hop selectors}
Next hop selectors are modules responsible for locating next hops in local routing tables. Multiple next hop selectors may be defined and individual next hop selectors may be accessed (for example through the node accessor object) to find next hop(s) for routing, lookup, search or other procedures. Next hop selectors should extend the abstract class \emph{NextHopSelector}. The next hop selection methods (\emph{findNextHop}, \emph{findNextHops}) expect arguments of types \emph{NodeId} (recipient node ID), \emph{NextHopSelectionParameters} (next hop selection options, which may be extended to include algorithm-specific options), and an integer value determining the number of next hops to be returned.
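
For instance, a module could query a next hop selector as sketched below; the setter names on the parameters object and the returned type are assumptions of this sketch:
\begin{verbatim}
// Hypothetical sketch of a findNextHops call (setter names assumed).
public NodePointer[] findClosestNodes(NextHopSelector nextHopSelector, NodeId recipientId) {
    HyCubeNextHopSelectionParameters params = new HyCubeNextHopSelectionParameters();
    params.setIncludeMoreDistantNodes(false);  // do not consider nodes farther than the current node
    params.setSkipTargetNode(false);           // the exact match (recipient ID) may be returned

    // ask the selector for up to 3 best next hops towards the recipient node ID
    return nextHopSelector.findNextHops(recipientId, params, 3);
}
\end{verbatim}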
\begin{center}
\scriptsize
\begin{longtable}{p{5.0cm} p{1.0cm} p{8.5cm}}
%\begin{table}
%\scriptsize
%\begin{center}
%\begin{tabular}{p{5.0cm} p{1.0cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.nexthopselection.HyCubeNextHopSelector} \\[1.5mm]
\textbf{Dimensions} & \textbf{Integer} & The number of dimensions of the hierarchical hypercube \\[1.5mm]
\textbf{Levels} & \textbf{Integer} & The number of hierarchy levels of the hierarchical hypercube \\[1.5mm]
\textbf{UseRT1} & \textbf{Boolean} & Determines whether the primary routing table should be used \\[1.5mm]
\textbf{UseRT2} & \textbf{Boolean} & Determines whether the secondary routing table should be used \\[1.5mm]
\textbf{UseNS} & \textbf{Boolean} & Determines whether the neighborhood set should be used \\[1.5mm]
\textbf{Metric} & \textbf{Enum} & The routing metric (\emph{Metric} enumeration) \\[1.5mm]
\textbf{UseSteinhausTransform} & \textbf{Boolean} & Determines whether the Steinhaus transform should be used \\[1.5mm]
\textbf{DynamicSteinhausTransform} & \textbf{Boolean} & Determines whether the Steinhaus point should be modified by nodes (variable Steinhaus metric) \\[1.5mm]
\textbf{RouteWithRegularMetricAfterSteinhaus} & \textbf{Boolean} & Determines whether the next hop selection should continue without the use of the Steinhaus transform when no next hop is found \\[1.5mm]
\textbf{PrefixMismatchHeuristicEnabled} & \textbf{Boolean} & Determines whether the prefix mismatch heuristic (PMH) should be applied \\[1.5mm]
\textbf{PrefixMismatchHeuristicMode} & \textbf{Enum} & Specifies the PMH mode (\emph{HyCubePrefixMismatchHeuristicMode} enumeration) - the PMH is applied based on the average or the maximum neighborhood set node distance \\[1.5mm]
\textbf{PrefixMismatchHeuristicFactor} & \textbf{Double} & The prefix mismatch heuristic factor ($\lambda$). If equal to 0, routing will proceed without enforcing the prefix condition (the PMH will be applied every time, without checking distances). \\[1.5mm]
\textbf{PrefixMismatchHeuristicWhenNoNextHop} & \textbf{Boolean} & Determines whether the PMH should be applied if no next hop is found \\[1.5mm]
\textbf{UseSteinhausTransformOnlyWithPMH} & \textbf{Boolean} & Determines whether the Steinhaus transform should be applied only when PMH is applied - otherwise, routing proceeds according to the Euclidean metric \\[1.5mm]
\textbf{RespectNumOfCommonBitsInNextGroup} & \textbf{Boolean} & Determines whether the next hop selection algorithm should respect the number of common bits in the first different digit of IDs \\[1.5mm]
\textbf{UseNSInFullScanWithoutPMH \newline / UseRT1InFullScanWithoutPMH \newline / UseRT2InFullScanWithoutPMH} & \textbf{Boolean} & Determines whether all neighborhood set / primary routing table / secondary routing table nodes should be checked by the next hop selection algorithm (with PMH not applied). If \emph{false} for the primary routing table, the routing table slots at the level corresponding to the common prefix length will be checked. If \emph{false} for the secondary routing table, slots for all dimensions at levels $\floor{\log_{2}{d_{dim}}}$ and $\floor{\log_{2}{d_{dim}}}+1$ will be checked ($d_{dim}$ is the distance to the destination node in dimension \emph{dim}). If \emph{false} for the neighborhood set, the neighborhood set will not be checked if it does not contain the destination node. \\[1.5mm]
\textbf{UseNSInFullScanWithPMH \newline / UseRT1InFullScanWithPMH \newline / UseRT2InFullScanWithPMH} & \textbf{Boolean} & Determines whether all neighborhood set / primary routing table / secondary routing table nodes should be checked by the next hop selection algorithm (with PMH applied). If \emph{false} for the primary routing table, no primary routing table slots will be checked. If \emph{false} for the secondary routing table, slots for all dimensions at levels $\floor{\log_{2}{d_{dim}}}$ and $\floor{\log_{2}{d_{dim}}}+1$ will be checked ($d_{dim}$ is the distance to the destination node in dimension \emph{dim}). If \emph{false} for the neighborhood set, the neighborhood set will not be checked if it does not contain the destination node. \\[1.5mm]
\textbf{UseSecureRouting} & \textbf{Boolean} & Determines whether secure routing is allowed \\[1.5mm]
\textbf{SkipRandomNumberOfNodesEnabled} & \textbf{Boolean} & Determines whether skipping a random number of nodes in next hop selection is allowed \\[1.5mm]
\textbf{SkipRandomNumberOfNodesMean} & \textbf{Double} & The mean of the normal distribution of the number of nodes to be skipped \\[1.5mm]
\textbf{SkipRandomNumberOfNodesStdDev} & \textbf{Double} & The standard deviation of the normal distribution of the number of nodes to be skipped \\[1.5mm]
\textbf{SkipRandomNumberOfNodesAbsolute} & \textbf{Boolean} & Specifies whether the absolute value of the generated number of nodes to skip should be used. Otherwise, for generated values smaller than 0, no nodes will be skipped. \\[1.5mm]
\textbf{SkipNodesNumMax} & \textbf{Integer} & The maximum number of the nodes skipped \\[1.5mm]
\textbf{SkipNodesNumWhenRandomExceedsMax} & \textbf{Integer} & The number of nodes that should be skipped when the generated random number exceeds the maximum value \\[1.5mm]
\textbf{ForceSkipRandomNumberOfNodes} & \textbf{Boolean} & Determines whether the generated number of nodes should be skipped even if the returned number of next hops would be smaller than requested \\[1.5mm]
\textbf{SkipNodesIncludeExcactMatch} & \textbf{Boolean} & Determines whether the set of skipped nodes may include the exact match \\[1.5mm]
\hline
%\end{tabular}
%\end{center}
\caption{HyCubeNextHopSelector configuration properties}
\label{tab:libPropHyCubeNextHopSelector}
%\end{table}
\end{longtable}
\end{center}
\begin{center}
\scriptsize
\begin{longtable}{p{4.5cm} p{2.0cm} p{8.0cm}}
%\begin{table}
%\scriptsize
%\begin{center}
%\begin{tabular}{p{4.5cm} p{2.0cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{steinhausTransformApplied} & \textbf{boolean} & Determines whether the Steinhaus transform is applied \\[1.5mm]
\textbf{steinhausPoint} & \textbf{HyCubeNodeId} & Determines the current Steinhaus point \\[1.5mm]
\textbf{includeMoreDistantNodes} & \textbf{boolean} & Determines whether next hop selection should include more distant nodes than the current node \\[1.5mm]
\textbf{skipTargetNode} & \textbf{boolean} & Determines whether the exact match node (recipient ID) should be skipped in the next hop selection \\[1.5mm]
\textbf{includeSelf} & \textbf{boolean} & Determines whether the current node (self) reference may be included in the results set \\[1.5mm]
\textbf{pmhApplied} & \textbf{boolean} & Determines whether the prefix mismatch heuristic is applied \\[1.5mm]
\textbf{preventPmh} & \textbf{boolean} & Determines whether the prefix mismatch heuristic should not be applied even if a message is already in the vicinity of the destination node \\[1.5mm]
\textbf{skipRandomNumOfNodesApplied} & \textbf{boolean} & Determines whether skipping a random number of nodes in next hop selection is applied \\[1.5mm]
\textbf{secureRoutingApplied} & \textbf{boolean} & Determines whether secure routing is applied \\[1.5mm]
\hline
%\end{tabular}
%\end{center}
\caption{Parameters of next hop selection (passed within HyCubeNextHopSelectionParameters)}
\label{tab:libPropHyCubeNextHopSelectorParameters}
%\end{table}
\end{longtable}
\end{center}
The class \emph{HyCubeNextHopSelector} is the implementation of the \emph{HyCube} next hop selection algorithm (implementing the \emph{NextHopSelector} methods). The configuration of the \emph{HyCubeNextHopSelector} module consists of the parameters presented in Table \ref{tab:libPropHyCubeNextHopSelector} (the configuration parameters), and the runtime node selection parameters (values modified by nodes according to the algorithm). The latter are passed to the next hop selection methods (and modified values are returned) as an argument of type \emph{HyCubeNextHopSelectionParameters} - the list of runtime parameters is presented in Table \ref{tab:libPropHyCubeNextHopSelectorParameters}.
\subsection{Notify processor}
The notify processor component (a class extending the abstract class \emph{NotifyProcessor}) is responsible for processing notifications of existence of other nodes (for example when a NOTIFY or RECOVERY\_REPLY message is received). The abstract method of this class (\emph{processNotify}) takes two arguments - the new node reference and the current time stamp.
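
A custom notify processor would extend the abstract class and implement \emph{processNotify}, roughly as sketched below; the argument types (a \emph{NodePointer} and a timestamp in milliseconds) are assumptions of this sketch:
\begin{verbatim}
// Hypothetical sketch of a notify processor (argument types assumed):
public class LoggingNotifyProcessor extends NotifyProcessor {

    @Override
    public void processNotify(NodePointer newNode, long currTimestamp) {
        // decide whether the new node should be inserted into the routing tables,
        // e.g. by delegating to routing table / neighborhood set node selectors
    }
}
\end{verbatim}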
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4.8cm} p{2.2cm} p{7.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.maintenance.HyCubeNotifyProcessor} \\[1.5mm]
\textbf{Dimensions} & \textbf{Integer} & The number of dimensions of the hierarchical hypercube \\[1.5mm]
\textbf{Levels} & \textbf{Integer} & The number of hierarchy levels of the hierarchical hypercube \\[1.5mm]
\textbf{NSSize} & \textbf{Integer} & The maximum size of the neighborhood set \\[1.5mm]
\textbf{RoutingTableSlotSize} & \textbf{Integer} & The maximum number of nodes stored in a routing table slot \\[1.5mm]
\textbf{UseRT1} & \textbf{Boolean} & Determines whether the primary routing table should be used \\[1.5mm]
\textbf{UseRT2} & \textbf{Boolean} & Determines whether the secondary routing table should be used \\[1.5mm]
\textbf{UseNS} & \textbf{Boolean} & Determines whether the neighborhood set should be used \\[1.5mm]
\textbf{Metric} & \textbf{Enum} & The routing metric (\emph{Metric} enumeration) \\[1.5mm]
\textbf{ExcludeRT2ScopeFromRT1} & \textbf{Boolean} & Determines whether nodes covered by secondary routing table slots should NOT be processed as primary routing table candidates \\[1.5mm]
\textbf{UseSecureRouting} & \textbf{Boolean} & Determines whether secure routing tables should be maintained \\[1.5mm]
\textbf{UpdateNetworkAddressWhenDifferent} & \textbf{Boolean} & If set to \emph{true}, when processing a node, the network addresses will be updated for all references with the same node ID and different network addresses \\[1.5mm]
\textbf{RecentlyProcessedNodesRetentionTime} & \textbf{Integer} & Time within which the same node should not be processed more than once \\[1.5mm]
\textbf{RecentlyProcessedNodesCacheMaxSize} & \textbf{Integer} & Maximum number of nodes for which the last notification time is stored \\[1.5mm]
\textbf{RTNodeSelector} & \textbf{Nested (+ Class)} & The routing table node selector module \\[1.5mm]
\textbf{NSNodeSelector} & \textbf{Nested (+ Class)} & The neighborhood set node selector module \\[1.5mm]
\textbf{SecureRTNodeSelector} & \textbf{Nested (+ Class)} & The secure routing table node selector module \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeNotifyProcessor configuration properties}
\label{tab:libHyCubeNotifyProcessor}
\end{table}
The \emph{HyCube} implementation of the notify processor is the class \emph{HyCubeNotifyProcessor}. The notify processor finds appropriate routing table slot(s) (including the neighborhood set), and, within the routing table slot level, the best node(s) are determined based on two other components: routing table node selector (extending the class \emph{HyCubeRTNodeSelector}) and neighborhood set node selector (extending \emph{HyCubeNSNodeSelector}). Both classes \emph{HyCubeRTNodeSelector} and \emph{HyCubeNSNodeSelector} declare abstract methods processing the new node in the context of existing nodes - the routing table slot reference is also passed to the method. A separate routing table node selector is defined for choosing nodes for secure routing tables. The configuration parameters of \emph{HyCubeNotifyProcessor} are presented in Table \ref{tab:libHyCubeNotifyProcessor}. The following routing table node selectors were implemented:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item \emph{HyCubeSimpleRTNodeSelector} - adds a new node to a routing table slot only when the slot is not full. The properties of this node selector are presented in Table \ref{tab:libHyCubeSimpleRTNodeSelector}.
\item \emph{HyCubeLnsRTNodeSelector} - realizes the LNS technique adopted by \emph{HyCube}, connected with \emph{HyCube}'s keep-alive mechanism. The properties of this node selector are presented in Table \ref{tab:libHyCubeLnsRTNodeSelector}.
\item \emph{HyCubeSecureRTNodeSelector} - realizes the \emph{HyCube} secure node selection algorithm. The properties of this node selector are presented in Table \ref{tab:libHyCubeSecureRTNodeSelector}.
\end{itemize}
\noindent
and the following neighborhood set node selectors:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item \emph{HyCubeDistanceNSNodeSelector} - realizes the neighborhood set node selection based only on distances (selects closest nodes). The properties of this node selector are presented in Table \ref{tab:libHyCubeDistanceNSNodeSelector}.
\item \emph{HyCubeBalancedRingNSNodeSelector} - realizes the neighborhood set node selection based on distances and ensuring uniform distribution of nodes in terms of directions on a logical ring (half of nodes would be successors, half would be predecessors). The properties of this node selector are presented in Table \ref{tab:libHyCubeBalancedRingNSNodeSelector}.
\item \emph{HyCubeBalancedOrthantsNSNodeSelector} - realizes the neighborhood set node selection based on distances, tending to achieve equal numbers of neighbors in individual orthants of the system of coordinates with the center at the address (ID) of the node whose neighborhood set is considered. The configuration properties of this node selector are presented in Table \ref{tab:libHyCubeBalancedOrthantsNSNodeSelector}.
\end{itemize}
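
To illustrate how the selectors are plugged in, a configuration fragment for the notify processor could look as follows. The nesting/key syntax is an assumption of this sketch; only the property and class names are taken from Table \ref{tab:libHyCubeNotifyProcessor} and the node selector tables below:
\begin{verbatim}
NotifyProcessor.Class = net.hycube.maintenance.HyCubeNotifyProcessor
NotifyProcessor.RTNodeSelector.Class = net.hycube.rtnodeselection.HyCubeLnsRTNodeSelector
NotifyProcessor.NSNodeSelector.Class = net.hycube.rtnodeselection.HyCubeBalancedRingNSNodeSelector
NotifyProcessor.SecureRTNodeSelector.Class = net.hycube.rtnodeselection.HyCubeSecureRTNodeSelector
\end{verbatim}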
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3cm} p{3cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.rtnodeselection.HyCubeSimpleRTNodeSelector} \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeSimpleRTNodeSelector configuration properties}
\label{tab:libHyCubeSimpleRTNodeSelector}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5.3cm} p{1.2cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.rtnodeselection.HyCubeLnsRTNodeSelector} \\[1.5mm]
\textbf{LnsIndicatorRteKey} & \textbf{String} & Specifies the key under which the LNS indicator value is stored in routing table entries. This value should be the same as the value of the parameter ``PingResponseIndicatorRteKey'' of the keep-alive extension (Section \ref{sec:keepAliveMechanism}), as both modules operate on the same values. \\[1.5mm]
\textbf{InitialLnsIndicatorValue} & \textbf{Double} & Specifies the initial value of the LNS indicator (for newly added nodes) \\[1.5mm]
\textbf{LnsIndicatorReplaceThreshold} & \textbf{Double} & Specifies the LNS replace threshold - routing table references with values of the LNS indicator lower than this threshold may be replaced. This parameter value should be a pointer to the ``PingResponseIndicatorReplaceThreshold'' property of the \emph{HyCubeKeepAliveExtension} extension (Section \ref{sec:keepAliveMechanism}). \\[1.5mm]
\textbf{KeepAliveExtensionKey} & \textbf{String} & Specifies the keep-alive extension key (Section \ref{sec:keepAliveMechanism}) \\[1.5mm]
\textbf{UseKeepAliveExtensionLnsIndicatorCache} & \textbf{Boolean} & Determines whether the LNS indicator value should be cached for nodes removed from routing tables and used when checking these nodes again \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeLnsRTNodeSelector configuration properties}
\label{tab:libHyCubeLnsRTNodeSelector}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3cm} p{1.5cm} p{10.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.rtnodeselection.HyCubeSecureRTNodeSelector} \\[1.5mm]
\textbf{Metric} & \textbf{Enum} & The routing metric (\emph{Metric} enumeration) \\[1.5mm]
\textbf{Dimensions} & \textbf{Integer} & The number of dimensions of the hierarchical hypercube \\[1.5mm]
\textbf{Levels} & \textbf{Integer} & The number of hierarchy levels of the hierarchical hypercube \\[1.5mm]
\textbf{XorNodeIdChangeAfter} & \textbf{Integer} & Determines after how many node checks the secret Node ID should be regenerated \\[1.5mm]
\textbf{DistFunRteKey} & \textbf{String} & Specifies the key under which the distance function data is stored in routing table entries \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeSecureRTNodeSelector configuration properties}
\label{tab:libHyCubeSecureRTNodeSelector}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3cm} p{3cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.rtnodeselection.HyCubeDistanceNSNodeSelector} \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeDistanceNSNodeSelector configuration properties}
\label{tab:libHyCubeDistanceNSNodeSelector}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3cm} p{3cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.rtnodeselection.HyCubeBalancedRingNSNodeSelector} \\[1.5mm]
\textbf{SemiringNoRteKey} & \textbf{String} & Specifies the key under which the semi-ring number of the neighbors is stored in routing table entries \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeBalancedRingNSNodeSelector configuration properties}
\label{tab:libHyCubeBalancedRingNSNodeSelector}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3cm} p{3cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.rtnodeselection.HyCubeBalancedOrthantsNSNodeSelector} \\[1.5mm]
\textbf{Dimensions} & \textbf{Integer} & The number of dimensions of the hierarchical hypercube \\[1.5mm]
\textbf{OrthantNoRteKey} & \textbf{String} & Specifies the key under which the orthant number of the neighbors is stored in routing table entries \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeBalancedOrthantsNSNodeSelector configuration properties}
\label{tab:libHyCubeBalancedOrthantsNSNodeSelector}
\end{table}
\subsection{Routing manager}
\label{sec:libRoutingManager}
The routing manager is a module (a class implementing \emph{RoutingManager} interface) responsible for routing messages. The method \emph{routeMessage} takes two arguments - \emph{MessageSendProcessInfo} (information about the message to be sent) and \emph{wait} (a boolean flag determining whether the sending should be performed in the foreground or the background). The \emph{MessageSendProcessInfo} object contains the message object (\emph{Message}), the direct recipient to which the message should be sent (\emph{NodePointer}), information whether the message should be processed before sending (by message send processors), and routing parameters (an algorithm specific \emph{Object[]} array) specified for the message being sent.
The class \emph{HyCubeRoutingManager} is the implementation of \emph{RoutingManager} that routes messages based on the next hop selector module, updates the message's TTL and hop count values, and provides the anonymous route and registered route functionalities. The properties of \emph{HyCubeRoutingManager} are presented in Table \ref{tab:libHyCubeRoutingManager}. The expected routing parameters (specified within the \emph{MessageSendProcessInfo} object) are presented in Table \ref{tab:libHyCubeRoutingManagerRoutingParameters}. The class \emph{HyCubeRoutingManager} provides helper methods for creating and retrieving the runtime parameters - operating on the \emph{Object[]} array. If the default value of a runtime parameter is passed to the routing manager (or the value is missing - null), and the corresponding header field of the message being sent has a non-default value, the header field value is used. If non-default values are specified in both the message header field and the \emph{Object[]} runtime parameters, the value of the runtime parameter takes precedence.
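
The routing call itself might look roughly as follows; the \emph{MessageSendProcessInfo} constructor arguments and the helper creating the routing parameters array are assumed names, used here only to illustrate the pattern:
\begin{verbatim}
// Hypothetical sketch; the constructor and helper method signatures are assumed.
public void sendOverRegisteredRoute(RoutingManager routingManager,
        Message message, NodePointer directRecipient) {
    Object[] routingParameters = HyCubeRoutingManager.createRoutingParameters(
            false,   // secureRouting
            false,   // skipRandomNextHops
            true,    // registerRoute - nodes along the path remember the route
            false,   // routeBack
            0,       // routeId (used only when routing back)
            false);  // anonymousRoute

    MessageSendProcessInfo info =
            new MessageSendProcessInfo(message, directRecipient, true, routingParameters);
    routingManager.routeMessage(info, false);   // wait = false: send in the background
}
\end{verbatim}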
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5cm} p{1.5cm} p{8cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.routing.HyCubeRoutingManager} \\[1.5mm]
\textbf{NextHopSelectorKey} & \textbf{String} & The next hop selector key (among possible multiple next hop selectors defined) \\[1.5mm]
\textbf{UseSteinhausTransform} & \textbf{Boolean} & Determines whether the Steinhaus transform should be enabled for routed messages \\[1.5mm]
\textbf{AllowRegisteredRoutes} & \textbf{Boolean} & Determines whether registered routes are allowed (otherwise messages sent with ``Register route'' header option set to \emph{true} will be dropped) \\[1.5mm]
\textbf{RegisteredRoutesRetentionTime} & \textbf{Integer} & Specifies the time for which the registered route information should be maintained \\[1.5mm]
\textbf{AllowAnonymousRoutes} & \textbf{Boolean} & Determines whether anonymous routes are allowed (otherwise messages sent with ``Anonymous route'' header option set to \emph{true} will be dropped) \\[1.5mm]
\textbf{ConcealTTL} & \textbf{Boolean} & Determines whether the TTL message field should be concealed for all messages (the method is specified by the parameters ``DecreaseTTLProbability'' and ``IncreaseTTLByRandomNum'') \\[1.5mm]
\textbf{DecreaseTTLProbability} & \textbf{Double} & Determines the probability with which nodes along the routes should decrease the TTL value \\[1.5mm]
\textbf{IncreaseTTLByRandomNum} & \textbf{Boolean} & Determines whether the TTL should be increased by nodes along the routes by a random number (normal distribution) \\[1.5mm]
\textbf{IncreaseTTLRandomMean} & \textbf{Double} & Specifies the mean of the distribution (normal distribution) for generating the TTL increase. \\[1.5mm]
\textbf{IncreaseTTLRandomStdDev} & \textbf{Double} & Specifies the std. deviation of the distribution (normal distribution) for generating the TTL increase. \\[1.5mm]
\textbf{IncreaseTTLRandomAbsolute} & \textbf{Boolean} & Specifies whether the absolute value of the generated random TTL increase should be used. Otherwise, for generated values smaller than 0, the TTL will not be increased. \\[1.5mm]
\textbf{IncreaseTTLRandomModulo} & \textbf{Double} & Specifies the maximum TTL increase. When the randomly generated TTL increase is greater, the increase will be recalculated modulo the value of this parameter. \\[1.5mm]
\textbf{ConcealHopCount} & \textbf{Boolean} & Determines whether the hop count message field should be concealed (set to its maximum value) for all messages \\[1.5mm]
\textbf{EnsureSteinhausPointAnonymity} & \textbf{Boolean} & Determines whether the initial Steinhaus point ID for a message should be modified when the sender node is one of the most distant nodes to the recipient (in terms of distances between identifiers in the hierarchical hypercube) \\[1.5mm]
\textbf{SteinhausPointAnonymityDistanceFactor} & \textbf{Double} & Determines the ratio of closer neighborhood set nodes to the destination (than the sending node), above which (inclusive) the Steinhaus point anonymity will be enabled. \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeRoutingManager configuration properties}
\label{tab:libHyCubeRoutingManager}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3cm} p{2cm} p{9.5cm}}
\hline
\textbf{Parameter name} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{secureRouting} & \textbf{Boolean} & A flag indicating whether secure routing tables should be used (by all nodes along the route) for next hops selection for the message being sent (default value \emph{false}) \\[1.5mm]
\textbf{skipRandomNextHops} & \textbf{Boolean} & A flag indicating whether random numbers of nodes should be skipped (by all nodes along the route) in next hop selection for the message being sent (default value \emph{false}) \\[1.5mm]
\textbf{registerRoute} & \textbf{Boolean} & A flag indicating whether the route should be registered by nodes (default value \emph{false}) \\[1.5mm]
\textbf{routeBack} & \textbf{Boolean} & A flag indicating whether the message should be routed back along a registered route (default value \emph{false}) \\[1.5mm]
\textbf{routeId} & \textbf{Integer} & The route ID, applicable when the message is routed back along a registered route (default value 0) \\[1.5mm]
\textbf{anonymousRoute} & \textbf{Boolean} & A flag indicating whether the message should be routed anonymously (forced anonymous route, ``Anonymous route'' header option set to \emph{true}). The default value is \emph{false}. If the value is \emph{true}, every node along the route should replace the message's original sender node ID and network address with its own node ID and network address. Additionally the option forces concealing the ``TTL'' and the ``Hop count'' header fields, and modifies the initial Steinhaus point if the sending node ID is one of the most distant nodes to the recipient in the hierarchical hypercube (the new value is the second best next hop found in the routing tables). This option can be set to \emph{true} together with ``registerRoute'' or ``routeBack'' option, in which case, the message will be routed along a registered route, and concealing the ``TTL'', ``Hop count'' and ``Steinhaus point'' fields will be forced. \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeRoutingManager runtime routing parameters}
\label{tab:libHyCubeRoutingManagerRoutingParameters}
\end{table}
\subsection{Lookup manager}
\label{sec:libLookupManager}
The lookup manager module (a class implementing the \emph{LookupManager} interface) is responsible for performing lookup requests. Methods performing the lookup take the lookup node ID as an argument, as well as optional lookup parameters (an algorithm specific \emph{Object[]} array). Additionally, an object of type \emph{LookupCallback} is passed to the lookup method calls, along with an optional callback argument of type \emph{Object}. The callback object's \emph{lookupReturned} method is called when the lookup procedure terminates, and the result, as well as the callback argument specified, are passed to the method call. The classes implementing the \emph{LookupManager} interface should also define the event types for callback events and for request timeout events (Section \ref{sec:libEventProcessing}). An entry point to the lookup manager may also be defined by implementing the method \emph{getEntryPoint()} returning an object of type \emph{EntryPoint}.
Class \emph{HyCubeLookupManager} is an implementation of the \emph{LookupManager} interface, realizing the lookup procedure of \emph{HyCube}. Additionally, \emph{HyCubeLookupManager} implements methods processing received lookup requests and responses (called by received message processors, described in Section \ref{sec:libMessageProcessors}). The configuration parameters of \emph{HyCubeLookupManager} are presented in Table \ref{tab:libHyCubeLookupManager}. The expected runtime lookup parameters (elements of the \emph{Object[]} array - the argument of the lookup method call) are presented in Table \ref{tab:libHyCubeLookupManagerLookupParameters}. The class \emph{HyCubeLookupManager} provides helper methods for creating and retrieving the runtime parameters - operating on the \emph{Object[]} array.
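
A lookup might be initiated as sketched below; the lookup method name on the manager, the argument order, and the result type passed to \emph{lookupReturned} are assumptions of this sketch (the search and join managers follow the same callback pattern):
\begin{verbatim}
// Hypothetical sketch of an asynchronous lookup (method/argument names assumed).
public void lookupClosestNode(LookupManager lookupManager, NodeId lookupNodeId) {
    LookupCallback callback = new LookupCallback() {
        public void lookupReturned(NodePointer result, Object callbackArg) {
            // called when the lookup procedure terminates;
            // result references the closest node found for the requested ID
        }
    };

    Object[] lookupParameters = null;   // null - use the configured defaults (beta, gamma)
    lookupManager.lookup(lookupNodeId, lookupParameters, callback, "lookup-1");
}
\end{verbatim}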
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4cm} p{2cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.lookup.HyCubeLookupManager} \\[1.5mm]
\textbf{NextHopSelectorKey} & \textbf{String} & The next hop selector key (among possible multiple next hop selectors defined) \\[1.5mm]
\textbf{LookupCallbackEventKey} & \textbf{String} & Lookup callback event key \\[1.5mm]
\textbf{LookupRequestTimeoutEventKey} & \textbf{String} & Lookup request timeout event key \\[1.5mm]
\textbf{DefaultBeta} & \textbf{Integer} & Default value of the parameter \emph{beta} ($\beta$) - the maximum number of nodes returned by intermediate nodes \\[1.5mm]
\textbf{DefaultGamma} & \textbf{Integer} & Default value of the parameter \emph{gamma} ($\gamma$) - the number of temporary nodes stored during the lookup \\[1.5mm]
\textbf{Metric} & \textbf{Enum} & The routing metric (\emph{Metric} enumeration) \\[1.5mm]
\textbf{UseSteinhausTransform} & \textbf{Boolean} & Determines whether the Steinhaus transform should be enabled in the next lookup hop selection \\[1.5mm]
\textbf{LookupRequestTimeout} & \textbf{Integer} & Lookup request timeout (milliseconds) \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeLookupManager configuration properties}
\label{tab:libHyCubeLookupManager}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3cm} p{3cm} p{8.5cm}}
\hline
\textbf{Parameter name} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{beta} & \textbf{Integer} & The maximum number of nodes returned by intermediate nodes. If not specified (or equal to 0), the default value is used. \\[1.5mm]
\textbf{gamma} & \textbf{Integer} & The number of temporary nodes stored during the lookup. If not specified (or equal to 0), the default value is used. \\[1.5mm]
\textbf{secureLookup} & \textbf{Boolean} & A flag indicating whether secure routing tables should be used for next hop selection (default value \emph{false}) \\[1.5mm]
\textbf{skipRandomNextHops} & \textbf{Boolean} & A flag indicating whether random numbers of nodes should be skipped (by nodes receiving LOOKUP messages) in next hop selection (default value \emph{false}) \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeLookupManager runtime lookup parameters}
\label{tab:libHyCubeLookupManagerLookupParameters}
\end{table}
\subsection{Search manager}
\label{sec:libSearchManager}
The search manager module (a class implementing the \emph{SearchManager} interface) is responsible for performing search requests. Methods initiating the search procedure take the search node ID as an argument, $k$ - the number of nodes to be found, as well as optional arguments: the search parameters (an algorithm specific \emph{Object[]} array), a set of references to initial search nodes (nodes to which the initial requests are sent), and a flag determining whether the exact match node should be skipped in searches (local next hop selection) - \emph{ignoreTargetNode}. Additionally, an object of type \emph{SearchCallback} is passed to the search method calls (along with an optional callback argument of type \emph{Object}). The callback object's \emph{searchReturned} method is called when the search procedure terminates, and the result, as well as the callback argument specified, are passed to the method call. The classes implementing the \emph{SearchManager} interface should also define the event types for callback events and for request timeout events (event processing described in Section \ref{sec:libEventProcessing}). An entry point to the search manager may also be defined by implementing the method \emph{getEntryPoint()} returning an object of type \emph{EntryPoint}.
Class \emph{HyCubeSearchManager} is the implementation of the \emph{SearchManager} interface, realizing the search procedure of \emph{HyCube}. Additionally, \emph{HyCubeSearchManager} implements methods processing received search requests and responses (called by received message processors, described in Section \ref{sec:libMessageProcessors}). The configuration parameters of \emph{HyCubeSearchManager} are presented in Table \ref{tab:libHyCubeSearchManager}, and the expected runtime search parameters (elements of the \emph{Object[]} array - the argument of the search method call) are presented in Table \ref{tab:libHyCubeSearchManagerSearchParameters}. The class \emph{HyCubeSearchManager} provides helper methods for creating and retrieving the runtime parameters - operating on the \emph{Object[]} array.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4cm} p{2cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.search.HyCubeSearchManager} \\[1.5mm]
\textbf{NextHopSelectorKey} & \textbf{String} & The next hop selector key (among possible multiple next hop selectors defined) \\[1.5mm]
\textbf{SearchCallbackEventKey} & \textbf{String} & Search callback event key \\[1.5mm]
\textbf{SearchRequestTimeoutEventKey} & \textbf{String} & Search request timeout event key \\[1.5mm]
\textbf{DefaultAlpha} & \textbf{Integer} & Default value of the parameter \emph{alpha} ($\alpha$) - the number of closest nodes to which search requests are sent - parallelism factor \\[1.5mm]
\textbf{DefaultBeta} & \textbf{Integer} & Default value of the parameter \emph{beta} ($\beta$) - the maximum number of nodes returned by intermediate nodes \\[1.5mm]
\textbf{DefaultGamma} & \textbf{Integer} & Default value of the parameter \emph{gamma} ($\gamma$) - the number of temporary nodes stored during the search \\[1.5mm]
\textbf{Metric} & \textbf{Enum} & The routing metric (\emph{Metric} enumeration) \\[1.5mm]
\textbf{UseSteinhausTransform} & \textbf{Boolean} & Determines whether the Steinhaus transform should be enabled in the next hop selection \\[1.5mm]
\textbf{SearchRequestTimeout} & \textbf{Integer} & Search request timeout (milliseconds) \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeSearchManager configuration properties}
\label{tab:libHyCubeSearchManager}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3cm} p{2cm} p{9.5cm}}
\hline
\textbf{Parameter name} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{alpha} & \textbf{Integer} & The number of closest nodes to which search requests are sent (parallelism factor). If not specified (or equal to 0), the default value is used. \\[1.5mm]
\textbf{beta} & \textbf{Integer} & The maximum number of nodes returned by intermediate nodes. If not specified (or equal to 0), the default value is used. \\[1.5mm]
\textbf{gamma} & \textbf{Integer} & The number of temporary nodes stored during the search. If not specified (or equal to 0), the default value is used. \\[1.5mm]
\textbf{secureSearch} & \textbf{Boolean} & A flag indicating whether secure routing tables should be used for next hop selection (default value \emph{false}) \\[1.5mm]
\textbf{skipRandomNextHops} & \textbf{Boolean} & A flag indicating whether random numbers of nodes should be skipped (by nodes receiving SEARCH messages) in next hop selection (default value \emph{false}) \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeSearchManager runtime search parameters}
\label{tab:libHyCubeSearchManagerSearchParameters}
\end{table}
\subsection{Join manager}
The join manager module (a class implementing the \emph{JoinManager} interface) is responsible for the realization of node joins. Methods performing the join take the following arguments: the bootstrap node network address (\emph{String}), optional join parameters (an algorithm specific \emph{Object[]} array), an object of type \emph{JoinCallback}, and an optional callback argument of type \emph{Object}. The callback object's \emph{joinReturned} method is called when the join procedure terminates, and the callback argument specified is passed to the method call. Classes implementing the \emph{JoinManager} interface should also define the event type for the callback event (Section \ref{sec:libEventProcessing}). An entry point to the join manager may also be defined by implementing the method \emph{getEntryPoint()} returning an object of type \emph{EntryPoint}. The ID of the joining node is expected to be set before the node join procedure is initiated.
In the \emph{HyCube} library, two different join managers have been implemented: \emph{HyCubeSearchJoinManager} (realizing the search join procedure) and \emph{HyCubeRouteJoinManager} (realizing the route join procedure). Both join managers implement the join request initiation, as well as the logic of processing received join requests and responses (called by received message processors, described in Section \ref{sec:libMessageProcessors}). The configuration parameters of \emph{HyCubeSearchJoinManager} (search join technique) are presented in Table \ref{tab:libHyCubeSearchJoinManager}, and the expected runtime join parameters (elements of the \emph{Object[]} array - the argument of the join method call) are presented in Table \ref{tab:libHyCubeSearchJoinManagerJoinParameters}. The configuration of \emph{HyCubeRouteJoinManager} (route join technique) is presented in Table \ref{tab:libHyCubeRouteJoinManager} and the runtime join parameters are presented in Table \ref{tab:libHyCubeRouteJoinManagerJoinParameters}. Both join managers provide helper methods for creating and retrieving the runtime parameters - operating on the \emph{Object[]} array.
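
A join could then be initiated roughly as follows; the join method name and argument order are assumptions of this sketch (the node ID of the joining node is expected to have been set beforehand):
\begin{verbatim}
// Hypothetical sketch of initiating a join (method/argument names assumed).
public void joinDht(JoinManager joinManager, String bootstrapAddress) {
    JoinCallback callback = new JoinCallback() {
        public void joinReturned(Object callbackArg) {
            // called when the join procedure terminates
        }
    };

    // bootstrapAddress: the network address of a bootstrap node, e.g. "192.168.0.10:5000"
    joinManager.join(bootstrapAddress, null, callback, "join-1");
}
\end{verbatim}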
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5cm} p{1.5cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.join.searchjoin.HyCubeSearchJoinManager} \\[1.5mm]
\textbf{NextHopSelectorKey} & \textbf{String} & The next hop selector key (among possible multiple next hop selectors defined) \\[1.5mm]
\textbf{JoinCallbackEventKey} & \textbf{String} & Join callback event key \\[1.5mm]
\textbf{JoinRequestTimeoutEventKey} & \textbf{String} & Join request timeout event key \\[1.5mm]
\textbf{JoinAlpha} & \textbf{Integer} & The value of the parameter \emph{alpha} ($\alpha$) - the number of closest nodes to which search requests are sent - the parallelism factor used by the join algorithm \\[1.5mm]
\textbf{DefaultBeta} & \textbf{Integer} & The value of the parameter \emph{beta} ($\beta$) - the maximum number of nodes returned by intermediate nodes used by the join algorithm \\[1.5mm]
\textbf{DefaultGamma} & \textbf{Integer} & The value of the parameter \emph{gamma} ($\gamma$) - the number of temporary nodes stored during the lookup used by the join algorithm \\[1.5mm]
\textbf{Metric} & \textbf{Enum} & The routing metric (\emph{Metric} enumeration) \\[1.5mm]
\textbf{UseSteinhausTransform} & \textbf{Boolean} & Determines whether the Steinhaus transform should be enabled in the next hop selection \\[1.5mm]
\textbf{JoinRequestTimeout} & \textbf{Integer} & Join request timeout (milliseconds) \\[1.5mm]
\textbf{SendClosestInInitialJoinReply} & \textbf{Boolean} & Determines whether the closest nodes to the joining node ID should be returned in responses to initial join requests \\[1.5mm]
\textbf{IncludeNSInInitialJoinReply} & \textbf{Boolean} & Determines whether the neighborhood set references should be returned in responses to initial join requests \\[1.5mm]
\textbf{IncludeRTInInitialJoinReply} & \textbf{Boolean} & Determines whether the routing table references should be returned in responses to initial join requests \\[1.5mm]
\textbf{IncludeSelfInInitialJoinReply} & \textbf{Boolean} & Determines whether a reference to self should be returned in responses to initial join requests \\[1.5mm]
\textbf{MarkInitialJoinReplySenderAsResponded} & \textbf{Boolean} & Determines whether the nodes to which initial requests were sent should be marked as requested (no additional join requests would be sent to those nodes in the first search phase) \\[1.5mm]
\textbf{RecoveryNSAfterJoin} & \textbf{Boolean} & Determines whether nodes should perform a neighborhood set recovery procedure after joining \\[1.5mm]
\textbf{RecoveryAfterJoin} & \textbf{Boolean} & Determines whether nodes should perform a full recovery procedure after joining \\[1.5mm]
\textbf{RecoveryExtensionKey} & \textbf{String} & Specifies the key of the recovery extension that is used to perform the recovery after joining \\[1.5mm]
\textbf{DiscoverPublicNetworkAddress} & \textbf{Boolean} & Determines if the requested node should return the connecting node's public network address, which is then stored by the connecting node \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeSearchJoinManager configuration properties}
\label{tab:libHyCubeSearchJoinManager}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{2.5cm} p{2cm} p{10cm}}
\hline
\textbf{Parameter name} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{secureSearch} & \textbf{Boolean} & A flag indicating whether secure routing tables should be used for next hop selection (default value \emph{false}) \\[1.5mm]
\textbf{skipRandomNextHops} & \textbf{Boolean} & A flag indicating whether random numbers of nodes should be skipped (by nodes receiving JOIN messages) in next hop selection (default value \emph{false}) \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeSearchJoinManager runtime join parameters}
\label{tab:libHyCubeSearchJoinManagerJoinParameters}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5cm} p{1.5cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.join.routejoin.HyCubeRouteJoinManager} \\[1.5mm]
\textbf{NextHopSelectorKey} & \textbf{String} & The next hop selector key (among possible multiple next hop selectors defined) \\[1.5mm]
\textbf{PMHDisabledForJoinMessages} & \textbf{Boolean} & Determines whether the prefix mismatch heuristic should be disabled for JOIN messages \\[1.5mm]
\textbf{UseSteinhausTransform} & \textbf{Boolean} & Determines whether the Steinhaus transform should be enabled in the next hop selection \\[1.5mm]
\textbf{JoinCallbackEventKey} & \textbf{String} & Join callback event key \\[1.5mm]
\textbf{JoinTimeoutEventKey} & \textbf{String} & Join timeout event key \\[1.5mm]
\textbf{WaitAfterFinalJoinReplyTimeoutEventKey} & \textbf{String} & Event key for join timeout after receiving the final join reply (after this timeout, the procedure terminates even if not all responses have been received from intermediate nodes) \\[1.5mm]
\textbf{JoinTimeout} & \textbf{Integer} & Join timeout (milliseconds) \\[1.5mm]
\textbf{WaitTimeAfterFinalJoinReply} & \textbf{Integer} & Join timeout (milliseconds) after receiving the final join reply (after this timeout, the procedure terminates even if not all responses have been received from intermediate nodes) \\[1.5mm]
\textbf{IncludeNSInJoinReply} & \textbf{Boolean} & Determines whether the neighborhood set references should be returned in join responses \\[1.5mm]
\textbf{IncludeRTInJoinReply} & \textbf{Boolean} & Determines whether the routing table references should be returned in join responses \\[1.5mm]
\textbf{IncludeSelfInJoinReply} & \textbf{Boolean} & Determines whether a reference to self should be returned in join responses \\[1.5mm]
\textbf{IncludeNSInJoinReplyFinal} & \textbf{Boolean} & Determines whether the neighborhood set references should be returned in final join responses \\[1.5mm]
\textbf{IncludeRTInJoinReplyFinal} & \textbf{Boolean} & Determines whether the routing table references should be returned in final join responses \\[1.5mm]
\textbf{IncludeSelfInJoinReplyFinal} & \textbf{Boolean} & Determines whether a reference to self should be returned in final join responses \\[1.5mm]
\textbf{RecoveryNSAfterJoin} & \textbf{Boolean} & Determines whether nodes should perform a neighborhood set recovery procedure after joining \\[1.5mm]
\textbf{RecoveryAfterJoin} & \textbf{Boolean} & Determines whether nodes should perform a full recovery procedure after joining \\[1.5mm]
\textbf{RecoveryExtensionKey} & \textbf{String} & Specifies the key of the recovery extension that is used to perform the recovery after joining \\[1.5mm]
\textbf{DiscoverPublicNetworkAddress} & \textbf{Boolean} & Determines if the requested node should return the connecting node's public network address, which is then stored by the connecting node \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeRouteJoinManager configuration properties}
\label{tab:libHyCubeRouteJoinManager}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{2.5cm} p{2cm} p{10cm}}
\hline
\textbf{Parameter name} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{secureSearch} & \textbf{Boolean} & A flag indicating whether secure routing tables should be used for next hop selection (default value \emph{false}) \\[1.5mm]
\textbf{skipRandomNextHops} & \textbf{Boolean} & A flag indicating whether random numbers of nodes should be skipped in next hop selection by all nodes routing the JOIN message (default value \emph{false}) \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeRouteJoinManager runtime join parameters}
\label{tab:libHyCubeRouteJoinManagerJoinParameters}
\end{table}
\subsection{Leave manager}
The leave manager module (a class implementing the \emph{LeaveManager} interface) is responsible for running procedures performed when a node leaves the DHT. The method \emph{leave} (no arguments) should implement the leaving logic.
In the \emph{HyCube} implementation (\emph{HyCubeLeaveManager} class), the leaving node sends all neighborhood set references to all nodes in its neighborhood set. \emph{HyCubeLeaveManager} also implements a method processing received LEAVE messages (the method is called by a received message processor - the message processors are described in Section \ref{sec:libMessageProcessors}). A node receiving a LEAVE message removes the leaving node from the routing tables, and (depending on the configuration) processes the list of the nodes received as routing table candidates (notify processor). The configuration parameters of \emph{HyCubeLeaveManager} are presented in Table \ref{tab:libHyCubeLeaveManager}.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5cm} p{1.5cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.leave.HyCubeLeaveManager} \\[1.5mm]
\textbf{BlockingSendLeave} & \textbf{Boolean} & Determines whether the leave manager should send LEAVE messages in the blocking mode (waiting for the messages to be physically sent) \\[1.5mm]
\textbf{WaitAfterSendLeaveTime} & \textbf{Integer} & Specifies the time (in milliseconds) that nodes should wait after sending LEAVE messages (before returning) \\[1.5mm]
\textbf{ProcessReceivedNodes} & \textbf{Boolean} & Determines whether the leave manager should process references received within LEAVE messages \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeLeaveManager configuration properties}
\label{tab:libHyCubeLeaveManager}
\end{table}
\subsection{DHT manager}
\label{sec:libDHTManager}
The DHT manager module (a class implementing the \emph{DHTManager} interface) is responsible for performing operations on resources: inserting (PUT) resources into the DHT, refreshing (REFRESH\_PUT), retrieving (GET) and deleting (DELETE) them, as well as performing the same operations on resources in the local storage (called when appropriate requests are received). Additionally, the class should define (implementing appropriate getters) callback event types and an entry point.
The class \emph{HyCubeRoutingDHTManager} implements the PUT, REFRESH\_PUT, GET and DELETE procedures of \emph{HyCube}, including processing received request and response messages, as well as resource operations on the local storage. The interface \emph{HyCubeDHTManager}, extending the interface \emph{DHTManager}, specifies additional methods, using specific types used by \emph{HyCubeRoutingDHTManager} - resources are represented by \emph{HyCubeResource} objects, and resource descriptors/criteria are represented by \emph{HyCubeResourceDescriptor} objects (\emph{HyCubeRoutingDHTManager} implements this interface). The methods inserting, refreshing, retrieving and deleting resources from the DHT expect the following arguments: the resource key (\emph{BigInteger}), optional node pointer (the node to which the request should be sent directly), an object representing the resource (PUT), or resource criteria (used for specifying resources for REFRESH\_PUT, GET and DELETE), a callback object, and a callback argument that would be passed to the callback method when the request terminates. The methods operating on the local storage expect the following arguments: the resource key, the requesting node ID, and the object representing resource/criteria (implementation specific). The methods of the DHT manager (implementing operations PUT, REFRESH\_PUT, GET, DELETE, as well as their corresponding local storage operations) have an optional argument - an \emph{Object[]} array, specifying runtime parameters. In the \emph{HyCubeRoutingDHTManager} implementation, the runtime parameters are used only by procedures PUT (Table \ref{tab:HyCubeRoutingDHTManagerPutParameters}), REFRESH\_PUT (Table \ref{tab:HyCubeRoutingDHTManagerRefreshPutParameters}), GET (Table \ref{tab:HyCubeRoutingDHTManagerGetParameters}), DELETE (Table \ref{tab:HyCubeRoutingDHTManagerDeleteParameters}). The methods operating on the local storage do not expect any additional runtime parameters in the \emph{Object[]} array. The class \emph{HyCubeRoutingDHTManager} provides helper methods for creating and retrieving the runtime parameters - operating on the \emph{Object[]} array. The configuration properties of the class \emph{HyCubeRoutingDHTManager} are presented in Table \ref{tab:libHyCubeRoutingDHTManager}.
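
For example, inserting a resource might look roughly as sketched below; the \emph{HyCubeResource} constructor, the callback type name and the \emph{put} method signature are assumed names, used here only for illustration:
\begin{verbatim}
// Hypothetical sketch of a PUT request (constructor/method signatures assumed).
public void storeResource(HyCubeDHTManager dhtManager, java.math.BigInteger resourceKey,
        HyCubeResourceDescriptor descriptor, byte[] data, PutCallback putCallback) {
    HyCubeResource resource = new HyCubeResource(descriptor, data);

    Object[] putParameters = null;   // null - use default values of the runtime PUT parameters

    // the second argument (direct recipient) is null - the request is routed by the resource key
    dhtManager.put(resourceKey, null, resource, putCallback, "put-1", putParameters);
}
\end{verbatim}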
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{2.5cm} p{1.5cm} p{10.5cm}}
\hline
\textbf{Parameter name} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{exactPut} & \textbf{Boolean} & Indicates whether the PUT request is an exact PUT request sent to a node that should process it and not route it any further. If this flag is set to \emph{true}, the direct recipient of the request must be specified. The default value is \emph{false}. \\[1.5mm]
\textbf{secureRouting} & \textbf{Boolean} & Indicates whether secure routing tables should be used for next hop selection (def. value \emph{false}) \\[1.5mm]
\textbf{skipRandomNextHops} & \textbf{Boolean} & Indicates whether random numbers of nodes should be skipped in next hop selection by all nodes routing the PUT message (default value \emph{false}) \\[1.5mm]
\textbf{registerRoute} & \textbf{Boolean} & Indicates whether the request should be routed using a registered route (in which case, the response would be sent back along the same path). The default value is \emph{false}. \\[1.5mm]
\textbf{anonymousRoute} & \textbf{Boolean} & Indicates whether the request should be routed anonymously (forced anonymous route, ``Anonymous route'' header option set to \emph{true}). The default value is \emph{false}. If the value is \emph{true}, every node along the route should replace the message's original sender node ID and network address with its own node ID and network address. Additionally the option forces concealing the ``TTL'' and the ``Hop count'' header fields, and modifies the initial Steinhaus point if the sending node ID is one of the most distant nodes to the recipient in the hierarchical hypercube (the new value is the second best next hop found in the routing tables). This option can be set to \emph{true} together with the ``registerRoute'' option, in which case, the message will be routed along a registered route, concealing the ``TTL'', ``Hop count'' and ``Steinhaus point''. \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeRoutingDHTManager runtime PUT parameters}
\label{tab:HyCubeRoutingDHTManagerPutParameters}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{2.5cm} p{1.5cm} p{10.5cm}}
\hline
\textbf{Parameter name} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{exactRefreshPut} & \textbf{Boolean} & Indicates whether the REFRESH\_PUT request is an exact REFRESH\_PUT request sent to a node that should process it and not route it any further. If this flag is set to \emph{true}, the direct recipient of the request must be specified. The default value is \emph{false}. \\[1.5mm]
\textbf{secureRouting} & \textbf{Boolean} & Indicates whether secure routing tables should be used for next hop selection (def. value \emph{false}) \\[1.5mm]
\textbf{skipRandomNextHops} & \textbf{Boolean} & Indicates whether random numbers of nodes should be skipped in next hop selection by all nodes routing the REFRESH\_PUT message (default value \emph{false}) \\[1.5mm]
\textbf{registerRoute} & \textbf{Boolean} & Indicates whether the request should be routed using a registered route (in which case, the response would be sent back along the same path). The default value is \emph{false}. \\[1.5mm]
\textbf{anonymousRoute} & \textbf{Boolean} & Indicates whether the request should be routed anonymously (forced anonymous route, ``Anonymous route'' header option set to \emph{true}). The default value is \emph{false}. If the value is \emph{true}, every node along the route should replace the message's original sender node ID and network address with its own node ID and network address. Additionally the option forces concealing the ``TTL'' and the ``Hop count'' header fields, and modifies the initial Steinhaus point if the sending node ID is one of the most distant nodes to the recipient in the hierarchical hypercube (the new value is the second best next hop found in the routing tables). This option can be set to \emph{true} together with the ``registerRoute'' option, in which case, the message will be routed along a registered route, concealing the ``TTL'', ``Hop count'' and ``Steinhaus point''. \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeRoutingDHTManager runtime REFRESH\_PUT parameters}
\label{tab:HyCubeRoutingDHTManagerRefreshPutParameters}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{2.5cm} p{1.5cm} p{10.5cm}}
\hline
\textbf{Parameter name} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{exactGet} & \textbf{Boolean} & Indicates whether the GET request is an exact GET request sent to a node that should process it and not route it any further. If this flag is set to \emph{true}, the direct recipient of the request must be specified. The default value is \emph{false}. \\[1.5mm]
\textbf{findClosestNode} & \textbf{Boolean} & Indicates whether the GET request should be routed to the closest node to the resource key, or the resource(s) should be returned by the first node on the route that stores it (def. value \emph{false}). \\[1.5mm]
\textbf{secureRouting} & \textbf{Boolean} & Indicates whether secure routing tables should be used for next hop selection (def. value \emph{false}) \\[1.5mm]
\textbf{skipRandomNextHops} & \textbf{Boolean} & Indicates whether random numbers of nodes should be skipped in next hop selection by all nodes routing the GET message (default value \emph{false}) \\[1.5mm]
\textbf{registerRoute} & \textbf{Boolean} & Indicates whether the request should be routed using a registered route (in which case, the response would be sent back along the same path). The default value is \emph{false}. \\[1.5mm]
\textbf{anonymousRoute} & \textbf{Boolean} & Indicates whether the request should be routed anonymously (forced anonymous route, ``Anonymous route'' header option set to \emph{true}). The default value is \emph{false}. If the value is \emph{true}, every node along the route should replace the message's original sender node ID and network address with its own node ID and network address. Additionally the option forces concealing the ``TTL'' and the ``Hop count'' header fields, and modifies the initial Steinhaus point if the sending node ID is one of the most distant nodes to the recipient in the hierarchical hypercube (the new value is the second best next hop found in the routing tables). This option can be set to \emph{true} together with the ``registerRoute'' option, in which case, the message will be routed along a registered route, concealing the ``TTL'', ``Hop count'' and ``Steinhaus point''. \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeRoutingDHTManager runtime GET parameters}
\label{tab:HyCubeRoutingDHTManagerGetParameters}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{2.5cm} p{1.5cm} p{10.5cm}}
\hline
\textbf{Parameter name} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{exactDelete} & \textbf{Boolean} & Indicates whether the DELETE request is an exact DELETE request sent to a node that should process it and not route it any further. If this flag is set to \emph{true}, the direct recipient of the request must be specified. The default value is \emph{false}. \\[1.5mm]
\textbf{secureRouting} & \textbf{Boolean} & Indicates whether secure routing tables should be used for next hop selection (def. value \emph{false}) \\[1.5mm]
\textbf{skipRandomNextHops} & \textbf{Boolean} & Indicates whether random numbers of nodes should be skipped in next hop selection by all nodes routing the DELETE message (default value \emph{false}) \\[1.5mm]
\textbf{registerRoute} & \textbf{Boolean} & Indicates whether the request should be routed using a registered route (in which case, the response would be sent back along the same path). The default value is \emph{false}. \\[1.5mm]
\textbf{anonymousRoute}	& \textbf{Boolean}	& Indicates whether the request should be routed anonymously (forced anonymous route, ``Anonymous route'' header option set to \emph{true}). The default value is \emph{false}. If the value is \emph{true}, every node along the route should replace the message's original sender node ID and network address with its own node ID and network address. Additionally the option forces concealing the ``TTL'' and the ``Hop count'' header fields, and modifies the initial Steinhaus point if the sending node ID is one of the most distant nodes to the recipient in the hierarchical hypercube (the new value is the second best next hop found in the routing tables). This option can be set to \emph{true} together with the ``registerRoute'' option, in which case, the message will be routed along a registered route, concealing the ``TTL'', ``Hop count'' and ``Steinhaus point''. \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeRoutingDHTManager runtime DELETE parameters}
\label{tab:HyCubeRoutingDHTManagerDeleteParameters}
\end{table}
The class \emph{HyCubeRoutingDHTManager} also implements the functions performed by background processes defined for resource management:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item \emph{DHTBackgroundProcess} - The background process run every specified time interval, checking resources' refresh times and deleting expired resources.
\item \emph{ReplicationBackgroundProcess} - The background process run every specified time interval, performing replication - sending replication information to neighbors.
\end{itemize}
\noindent
Both background processes extend the abstract class \emph{AbstractBackgroundProcess} and their configuration includes only parameters expected by \emph{AbstractBackgroundProcess}.
The \emph{HyCubeRoutingDHTManager} module defines three nested modules:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item \emph{DHTStorageManager} - Realizes the local resource storage (implementing the interface \emph{DHTStorageManager}). Depending on the implementation, it may store resources in memory, save them to disk or take any other approach. The default implementation (class \emph{HyCubeSimpleDHTStorageManager}) maintains the resources being stored in memory. Table \ref{tab:libHyCubeSimpleDHTStorageManager} specifies the configuration of this module.
\item \emph{ResourceAccessController} - Realizes the access control mechanism (implementing the interface \emph{HyCubeResourceAccessController}) - determines whether nodes are allowed to perform operations on resources. The default implementation (class \emph{HyCubeSimpleResourceAccessController}) allows all operations to be performed by any node; a sketch of a custom access controller is given after this list. Table \ref{tab:libHyCubeSimpleResourceAccessController} presents the configuration of this module.
\item \emph{ResourceReplicationSpreadManager} - A module determining to how many nodes resources should be replicated, based on the analysis of incoming requests - implementing the interface \emph{HyCubeDHTStorageManager}. The module may be used to increase the replication radius for popular resources. Methods of this module are called when requests are processed, and two further methods return the number of nodes to which a resource should be replicated, or a multiplier relative to the default value. The default implementation (class \emph{HyCubeSimpleResourceReplicationSpreadManager}) does not process requests, and returns the default number of replication nodes for all resources.
\end{itemize}
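The sketch below illustrates how a custom access control policy could be plugged in by providing an own implementation of the access controller interface. The method names and parameter types shown are assumptions made for the purpose of the example; the actual interface defines the access checks in its own way.
\begin{verbatim}
// Illustrative sketch - the method names and parameter types are hypothetical;
// the real HyCubeResourceAccessController interface may differ.
public class ReadOnlyAccessController implements HyCubeResourceAccessController {

    // Hypothetical policy: any node may retrieve resources,
    // but no node may insert, refresh or delete them.

    public boolean checkPutAccess(NodePointer requestor, java.math.BigInteger key,
                                  HyCubeResourceDescriptor descriptor) {
        return false;   // deny PUT / REFRESH_PUT
    }

    public boolean checkGetAccess(NodePointer requestor, java.math.BigInteger key,
                                  HyCubeResourceDescriptor criteria) {
        return true;    // allow GET
    }

    public boolean checkDeleteAccess(NodePointer requestor, java.math.BigInteger key,
                                     HyCubeResourceDescriptor criteria) {
        return false;   // deny DELETE
    }
}
\end{verbatim}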
\begin{center}
\scriptsize
\begin{longtable}{p{5cm} p{1.2cm} p{8.3cm}}
%\begin{table}
%\scriptsize
%\begin{center}
%\begin{tabular}{p{5cm} p{1.2cm} p{8.3cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.dht.HyCubeRoutingDHTManager} \\[1.5mm]
\textbf{XxxxCallbackEventTypeKey} & \textbf{String} & The event type key for PUT / REFRESH\_PUT / GET / DELETE operation callback events. * Xxxx = Put / RefreshPut / Get / Delete \\[1.5mm]
\textbf{XxxxRequestTimeoutEventTypeKey} & \textbf{String} & The event type key for PUT / REFRESH\_PUT / GET / DELETE request timeout events. * Xxxx = Put / RefreshPut / Get / Delete \\[1.5mm]
\textbf{XxxxRequestTimeout} & \textbf{Integer} & The timeouts for PUT / REFRESH\_PUT / GET / DELETE requests \newline * Xxxx = Put / RefreshPut / Get / Delete \\[1.5mm]
\textbf{ResourceStoreTime} & \textbf{Integer} & Resource validity time (milliseconds) - after that time resources should be deleted from node's local storage (if not refreshed) \\[1.5mm]
\textbf{Metric} & \textbf{Enum} & The metric defining distances between nodes/resource keys (\emph{Metric} enum.) \\[1.5mm]
\textbf{CheckIfResourceReplicaBeforeStoring} & \textbf{Boolean} & Determines whether nodes should check (before storing resources) whether they are one of the closest nodes to the resource key \\[1.5mm]
\textbf{ResourceStoreNodesNum} & \textbf{Integer} & The number of nodes that should store individual resources ($k_{store}$) \\[1.5mm]
\textbf{ReplicationNodesNum} & \textbf{Integer} & Specifies to how many nodes the resources should be replicated ($k_{rep}$) \\[1.5mm]
\textbf{Replicate} & \textbf{Boolean} & Determines whether the replication mechanism is enabled \\[1.5mm]
\textbf{AnonymousReplicate} & \textbf{Boolean} & Determines whether REPLICATE messages should be sent anonymously (``Anonymous route'' header flag set to \emph{true}). \\[1.5mm]
\textbf{MaxReplicationNSNodesNum} & \textbf{Integer} & Specifies the maximum number of nodes (from the neighborhood set) to which replication information is sent \\[1.5mm]
\textbf{MaxReplicationSpreadNodesNum} & \textbf{Integer} & Specifies the maximum number of replication nodes \\[1.5mm]
\textbf{AssumeNsOrdered}	& \textbf{Boolean}	& If \emph{true} (i.e. the node selection algorithm maintains the neighborhood set ordered), the efficiency of the DHT manager may be improved. \\[1.5mm]
\textbf{DensityCalculationQuantileFuncThreshold} & \textbf{Double} & Determines how many neighborhood set nodes are taken into account in the DHT density calculation \\[1.5mm]
\textbf{EstimatedDistanceCoefficient} & \textbf{Double} & Adjusts the probability of nodes accepting resources by multiplying the estimated radius by the parameter value \\[1.5mm]
\textbf{EstimateDensityBasedOnLastNodeOnly} & \textbf{Boolean} & Estimates the network density based only on the most distant node from the neighborhood set nodes being considered \\[1.5mm]
\textbf{XxxxResponseAnonymousRoute} & \textbf{Boolean} & Determines whether PUT / REFRESH\_PUT / GET / DELETE responses should be sent anonymously (``Anonymous route'' header flag set to \emph{true}). \newline * Xxxx = Put / RefreshPut / Get / Delete \\[1.5mm]
\textbf{ReplicationGetRegisterRoute} & \textbf{Boolean} & Determines whether GET requests initiated as an effect of the replication should be sent using a registered route \\[1.5mm]
\textbf{ReplicationGetAnonymousRoute} & \textbf{Boolean} & Determines whether GET requests initiated as an effect of the replication should be sent anonymously (``Anonymous route'' header flag set to \emph{true}) \\[1.5mm]
\textbf{IgnoreExactXxxxRequests} & \textbf{Boolean} & Determines whether nodes should ignore exact match PUT / REFRESH\_PUT / GET / DELETE requests (anonymity) \newline * Xxxx = Put / RefreshPut / Get / Delete \\[1.5mm]
\textbf{DHTStorageManager} & \textbf{Nested} & Storage management module (provides the storage functionality) \\[1.5mm]
\textbf{ResourceAccessController} & \textbf{Nested} & DHT access controller module - determines whether nodes are allowed to perform operations on resources \\[1.5mm]
\textbf{ResourceReplicationSpreadManager} & \textbf{Nested} & A module determining to how many nodes resources should be replicated, based on the analysis of incoming requests \\[1.5mm]
\hline
%\end{tabular}
%\end{center}
\caption{HyCubeRoutingDHTManager configuration properties}
\label{tab:libHyCubeRoutingDHTManager}
%\end{table}
\end{longtable}
\end{center}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5cm} p{1.5cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.dht.HyCubeSimpleDHTStorageManager} \\[1.5mm]
\textbf{StoreMultipleCopies} & \textbf{Boolean} & Determines whether multiple copies for the same \emph{resourceId} values should be stored (different \emph{resourceUrl} values) \\[1.5mm]
\textbf{MaxResourcesNum} & \textbf{Integer} & Specifies the maximum number of resources stored by the node \\[1.5mm]
\textbf{MaxKeySlotSize} & \textbf{Integer} & Specifies the maximum number of resources stored for any resource key \\[1.5mm]
\textbf{MaxResourceSlotSize} & \textbf{Integer} & Specifies the maximum number of resources stored for any unique pair of \emph{resourceId} and \emph{resourceUrl} \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeSimpleDHTStorageManager configuration properties}
\label{tab:libHyCubeSimpleDHTStorageManager}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5cm} p{1.5cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.dht.HyCubeSimpleResourceAccessController} \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeSimpleResourceAccessController configuration properties}
\label{tab:libHyCubeSimpleResourceAccessController}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5cm} p{1.5cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.dht.HyCubeSimpleResourceReplicationSpreadManager} \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeSimpleResourceReplicationSpreadManager configuration properties}
\label{tab:libHyCubeSimpleResourceReplicationSpreadManager}
\end{table}
Whenever an anonymous (``Anonymous route'' message header option) request (PUT, REFRESH\_PUT, GET, DELETE) is received, the response should also be sent anonymously. However, GET requests sent anonymously, if the route is not registered, should be dropped by any node receiving them - because the original request sender is unknown, it would not be possible to return the result to the sender. For PUT, REFRESH\_PUT and DELETE requests sent anonymously without the route being registered, the requests should be processed, but no responses would be sent, as the original request senders are unknown.
\subsection{Recovery manager}
The recovery manager is a module responsible for providing the recovery functionality in \emph{HyCube}. The recovery manager is not directly attached to the node instance, but uses the extension mechanism. The extension \emph{HyCubeRecoveryExtension} is defined, which maintains the recovery manager module (\emph{HyCubeRecoveryManager}) within it. The recovery manager implements methods initiating the recovery, neighborhood set recovery and ID recovery procedures (ID recovery is a recovery variant based on sending recovery requests to the nodes closest to a given ID), as well as methods processing received recovery requests and responses, called by received message processors (Section \ref{sec:libMessageProcessors}). The recovery extension configuration (Table \ref{tab:libHyCubeRecoveryExtension}) consists only of the configuration of the nested recovery manager - \emph{HyCubeRecoveryManager} (Table \ref{tab:libHyCubeRecoveryManager}). The ``Class'' property is not required for the \emph{HyCubeRecoveryManager} module, because the extension always creates an instance of the \emph{HyCubeRecoveryManager} class.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5cm} p{1.5cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.maintenance.HyCubeRecoveryExtension} \\[1.5mm]
\textbf{RecoveryManager} & \textbf{Nested} & Recovery manager module \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeRecoveryExtension configuration properties}
\label{tab:libHyCubeRecoveryExtension}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5cm} p{1.5cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{SendRecoveryToNS} \newline / \textbf{SendRecoveryToRT1} \newline / \textbf{SendRecoveryToRT2} & \textbf{Boolean} & Determines whether recovery (full recovery) requests should be sent to the nodes in the neighborhood set / primary routing table / secondary routing table \\[1.5mm]
\textbf{SendNotifyToNS} \newline / \textbf{SendNotifyToRT1} \newline / \textbf{SendNotifyToRT2} & \textbf{Boolean} & Determines whether NOTIFY messages should be sent to the nodes in the neighborhood set / primary routing table / secondary routing table (all recovery procedure variants) \\[1.5mm]
\textbf{ProcessRecoveryAsNotify} & \textbf{Boolean} & Determines whether RECOVERY requests should be also processed as notifications (NOTIFY messages) \\[1.5mm]
\textbf{ReturnNS} \newline / \textbf{ReturnRT1} \newline / \textbf{ReturnRT2}	& \textbf{Boolean}	& Determines whether nodes should return their neighborhood set / primary routing table / secondary routing table references in RECOVERY\_REPLY messages (full recovery) \\[1.5mm]
\textbf{RecoveryNSReturnNS} \newline / \textbf{RecoveryNSReturnRT1} \newline / \textbf{RecoveryNSReturnRT2} & \textbf{Boolean} & Determines whether nodes should return their neighborhood set / primary routing table / secondary routing table references in RECOVERY\_REPLY messages (neighborhood set recovery) \\[1.5mm]
\textbf{RecoveryIdReturnNS} \newline / \textbf{RecoveryIdReturnRT1} \newline / \textbf{RecoveryIdReturnRT2} & \textbf{Boolean} & Determines whether nodes should return their neighborhood set / primary routing table / secondary routing table references in RECOVERY\_REPLY messages (ID recovery) \\[1.5mm]
\textbf{SendNotifyToRecoveryReplyNodes} & \textbf{Boolean} & Determines whether a NOTIFY message should be sent to nodes to which references were received in RECOVERY\_REPLY messages \\[1.5mm]
\textbf{RecoveryEventKey} & \textbf{String} & Specifies the recovery event key \\[1.5mm]
\textbf{RecoveryNodesMax}	& \textbf{Integer}	& If greater than 0, specifies the maximum number of nodes to which the recovery requests are sent (all neighborhood set nodes + remaining number of random primary and secondary routing table nodes) \\[1.5mm]
\textbf{MinRecoveryInterval} & \textbf{Integer} & Specifies the minimum interval between recovery procedure runs (ms) \\[1.5mm]
\textbf{MinNotifyNodeInterval} & \textbf{Integer} & Specifies the minimum interval between sending NOTIFY messages to the same node \\[1.5mm]
\textbf{NotifyNodesCacheMaxSize} & \textbf{Integer} & Specifies the maximum number of node notification times stored \\[1.5mm]
\textbf{NotifyNodesMax}	& \textbf{Integer}	& If greater than 0, specifies the maximum number of nodes to which the NOTIFY messages are sent (all neighborhood set nodes + remaining number of random primary and secondary routing table nodes - considering only nodes that were not recently notified (MinNotifyNodeInterval)) \\[1.5mm]
\textbf{NotifyRecoveryReplyNodesMax} & \textbf{Integer} & If SendNotifyToRecoveryReplyNodes is \emph{true}, the parameter specifies the max. number of such nodes to which notifications are sent - random selection (0 indicates that NOTIFY should be sent to all nodes) \\[1.5mm]
\textbf{RecoveryReplyNodesToProcessMax} & \textbf{Integer} & If greater than 0, specifies the maximum number of references received within a RECOVERY\_REPLY message that should be processed - according to the sequence in the message \\[1.5mm]
\textbf{RecoveryReplyNodesMax}	& \textbf{Integer}	& If greater than 0, specifies the maximum number of references returned by nodes within RECOVERY\_REPLY messages (all neighborhood set nodes + remaining number of random primary and secondary routing table nodes) \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeRecoveryManager configuration properties}
\label{tab:libHyCubeRecoveryManager}
\end{table}
\emph{HyCubeRecoveryExtension} defines a method returning an entry point - an instance of class \emph{HyCubeRecoveryExtensionEntryPoint}, which may be used to invoke the recovery mechanism from outside the library.
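A minimal sketch of triggering recovery through this entry point is shown below. The way the extension is obtained from the node instance, the entry point accessor and the name of the method starting recovery are assumptions made for the example.
\begin{verbatim}
// Illustrative sketch - the extension lookup, the entry point accessor and
// the recovery-triggering method name are assumptions, not the definitive API.
public static void triggerRecovery(Node node, String recoveryExtensionKey) {
    HyCubeRecoveryExtension recoveryExt =
            (HyCubeRecoveryExtension) node.getExtension(recoveryExtensionKey);

    HyCubeRecoveryExtensionEntryPoint entryPoint = recoveryExt.getEntryPoint();

    // start a full recovery run (hypothetical method name):
    entryPoint.callRecover();
}
\end{verbatim}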
In addition to the recovery manager module, a background process (class \emph{HyCubeRecoveryBackgroundProcess}), directly bound to the recovery manager, was created. The background process runs the recovery procedure (or neighborhood set recovery procedure) based on the recovery plan. Table \ref{tab:libHyCubeRecoveryBackgroundProcess} presents the configuration properties of \emph{HyCubeRecoveryBackgroundProcess}.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4cm} p{2.5cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class}	& \textbf{String}	& \textit{net.hycube.maintenance.HyCubeRecoveryBackgroundProcess} \\[1.5mm]
\textbf{ScheduleImmediately} & \textbf{Boolean} & Determines whether the background process should be immediately scheduled after initialization \\[1.5mm]
\textbf{RecoveryExtensionKey}	& \textbf{String}	& Recovery extension key (allows binding the recovery background process to the recovery extension/manager) \\[1.5mm]
\textbf{EventTypeKey} & \textbf{String} & The recovery background process event type key \\[1.5mm]
\textbf{ScheduleInterval} & \textbf{Integer} & Background process schedule interval (milliseconds) \\[1.5mm]
\textbf{SchedulePlan} & \textbf{List (Enum)} & The recovery plan - a list of recovery types (\emph{HyCubeRecoveryType} enumeration), determining a sequence of recovery variants (for individual recovery runs, successive recovery types from the recovery plan are performed - following a cyclical pattern). The values allowed are: FULL\_RECOVERY, RECOVERY\_NS. \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeRecoveryBackgroundProcess configuration properties}
\label{tab:libHyCubeRecoveryBackgroundProcess}
\end{table}
\subsection{Received message processors, message send processors}
\label{sec:libMessageProcessors}
Received message processors (classes implementing the interface \emph{ReceivedMessageProcessor}) are modules processing messages received by a node. When a message is received, the node calls \emph{processMessage} method of each defined message processor. The method returns a boolean value indicating whether the message should be passed to the next message processor, or further processing should be abandoned.
Message send processors (classes implementing the interface \emph{MessageSendProcessor}) are modules processing messages before sending. Before the message is passed to the routing manager and sent, the method \emph{processSendMessage} of each processor is called. The method expects an argument of type \emph{MessageSendProcessInfo} containing the message object, the direct recipient of the message, and a flag indicating whether the message should be processed by message send processors. The method also returns a boolean value indicating whether further processing should be continued. If any of the processors returns \emph{false}, the message will not be sent.
If no message send processors are defined, messages being sent will be sent immediately without any additional processing. However, if no received message processors are defined, received messages will not be processed at all (they will be dropped) - the entire received message handling logic is based on received message processors, which pass the message (or the information received within the message) to individual modules implementing further processing.
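As an illustration of this contract, the sketch below shows a trivial received message processor that only logs incoming messages and always passes them on. The exact signature of \emph{processMessage} (its parameter types and declared exceptions) and the \emph{getType} accessor of the message object are assumptions.
\begin{verbatim}
// Illustrative sketch - the exact signature of processMessage and the message
// accessor used below are assumptions based on the description above.
public class LoggingReceivedMessageProcessor implements ReceivedMessageProcessor {

    public boolean processMessage(Message message) {
        // inspect the received message (logging only in this example):
        System.out.println("Received message of type: " + message.getType());

        // returning true passes the message to the next message processor;
        // returning false stops further processing of this message:
        return true;
    }
}
\end{verbatim}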
Two message processors - \emph{HyCubeReceivedMessageProcessor} (received message processor - implementing \emph{ReceivedMessageProcessor}) and \emph{HyCubeMessageSendProcessor} (message send processor - implementing \emph{MessageSendProcessor}) allow dispatching messages to nested message processors based on message types (recursive definition of message processors). \emph{HyCubeReceivedMessageProcessor} additionally allows defining maximum numbers of messages processed in a specified time interval (limiting the total number of messages processed and limiting the numbers of messages for individual message types). \emph{HyCubeReceivedMessageProcessor} is also responsible for detecting messages routed back along a registered route and, if necessary, passing them to the routing manager (which handles routing messages back along registered routes). The configuration parameters of \emph{HyCubeReceivedMessageProcessor} are presented in Table \ref{tab:libHyCubeReceivedMessageProcessor}, and the configuration of \emph{HyCubeMessageSendProcessor} is presented in Table \ref{tab:libHyCubeMessageSendProcessor}.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{6cm} p{1.5cm} p{7.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.messaging.processing \newline $\hookrightarrow$.HyCubeReceivedMessageProcessor} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (Enum)} & Specifies the message types (\emph{HyCubeMessageType} enum.) that should be processed by this message processor. For a top-level message processor, the list should include all message types. \\[1.5mm]
\textbf{LimitMaxProcessedMessagesRate.Num}, \textbf{LimitMaxProcessedMessagesRate.Time}	& \textbf{Integer}, \textbf{Integer}	& The maximum number of messages processed within the specified time interval (ms) \\[1.5mm]
\textbf{LimitMaxProcessedMessagesRate.LimitForTypes} & \textbf{List (Enum)} & A list of message types (\emph{HyCubeMessageType} enum.) for which type-level limits should be applied. For individual message types, limits are defined by parameters: ``LimitMaxProcessedMessagesRate[TYPE].Num'' and ``LimitMaxProcessedMessagesRate[TYPE].Time'' \\[1.5mm]
\textbf{LimitMaxProcessedMessagesRate[TYPE].Num}, \textbf{LimitMaxProcessedMessagesRate[TYPE].Time} & \textbf{Integer}, \textbf{Integer} & The maximum number of messages processed within the specified time interval (ms) for messages of type TYPE (\emph{HyCubeMessageType} enum.) \\[1.5mm]
\textbf{ProcessRouteBackMessagesByNodesOnRoute}	& \textbf{Boolean}	& Determines whether messages sent back along a registered route should be processed by the nested message processors, if the processing node is not the registered route start \\[1.5mm]
\textbf{ReceivedMessageProcessors}	& \textbf{Nested}	& A list of nested message processors. For each of these message processors, the property ``MessageTypes'' should be defined, specifying a list of message types that should be processed (\emph{HyCubeMessageType} enumeration). \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeReceivedMessageProcessor configuration properties}
\label{tab:libHyCubeReceivedMessageProcessor}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4cm} p{2.0cm} p{9.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.messaging.processing.HyCubeMessageSendProcessor} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (Enum)} & Specifies the message types (\emph{HyCubeMessageType} enum.) that should be processed by this message processor. For a top-level message processor, the list should include all message types. \\[1.5mm]
\textbf{MessageSendProcessors}	& \textbf{Nested}	& A list of nested message processors. For each of these message processors, the property ``MessageTypes'' should be defined, specifying a list of message types that should be processed (\emph{HyCubeMessageType} enumeration). \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeMessageSendProcessor configuration properties}
\label{tab:libHyCubeMessageSendProcessor}
\end{table}
Several received message processors and several message send processors have been implemented to process messages of individual types:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item \emph{HyCubeReceivedMessageProcessorData} - Realizes processing of received data messages - inserts the received message to an appropriate received messages queue and, if registered, calls the received message callback object. Depending on the configuration, upon receiving a message, this message processor may also call the ACK manager in order to send an ACK message to the message sender. \emph{HyCubeReceivedMessageProcessorData} additionally provides detection of message duplicates. Table \ref{tab:libHyCubeReceivedMessageProcessorData} specifies the configuration of this message processor.
\item \emph{HyCubeReceivedMessageProcessorAck} - Processes received ACK messages and calls the ACK callback object (if registered upon sending the DATA message). Table \ref{tab:libHyCubeReceivedMessageProcessorAck} specifies the configuration of this message processor.
\item \emph{HyCubeReceivedMessageProcessorPing} - Processes received PING and PONG messages. Upon receiving a PING message, the message processor sends back a PONG message. When a PONG message is received, the processor updates the node liveness information (ping response indicator) stored in instances of \emph{RoutingTableEntry} class. Table \ref{tab:libHyCubeReceivedMessageProcessorPing} specifies the configuration of this message processor.
\item \emph{HyCubeReceivedMessageProcessorLookup} - Transfers processing of received LOOKUP and LOOKUP\_REPLY messages to the lookup manager module. Table \ref{tab:libHyCubeReceivedMessageProcessorLookup} specifies the configuration of this message processor.
\item \emph{HyCubeReceivedMessageProcessorSearch} - Transfers processing of received SEARCH and SEARCH\_REPLY messages to the search manager module. Table \ref{tab:libHyCubeReceivedMessageProcessorSearch} specifies the configuration of this message processor.
\item \emph{HyCubeReceivedMessageProcessorSearchJoin} - Transfers processing of received JOIN and JOIN\_REPLY messages to the search join manager module (if the search join technique is used). Table \ref{tab:libHyCubeReceivedMessageProcessorSearchJoin} specifies the configuration of this message processor.
\item \emph{HyCubeReceivedMessageProcessorRouteJoin} - Transfers processing of received JOIN and JOIN\_REPLY messages to the route join manager module (if the route join technique is used). Table \ref{tab:libHyCubeReceivedMessageProcessorRouteJoin} specifies the configuration of this message processor.
\item \emph{HyCubeReceivedMessageProcessorRecovery} - Transfers processing of received RECOVERY and RECOVERY\_REPLY messages to the recovery manager module. Table \ref{tab:libHyCubeReceivedMessageProcessorRecovery} specifies the configuration of this message processor.
\item \emph{HyCubeReceivedMessageProcessorNotify} - Passes the node reference (sender of the received NOTIFY message) to the notify processor module. Table \ref{tab:libHyCubeReceivedMessageProcessorNotify} specifies the configuration of this message processor.
\item \emph{HyCubeReceivedMessageProcessorLeave} - Transfers processing of received LEAVE messages to the leave manager module. Table \ref{tab:libHyCubeReceivedMessageProcessorLeave} specifies the configuration of this message processor.
\item \emph{HyCubeReceivedMessageProcessorDHT} - Transfers processing of received DHT requests/responses to the DHT manager module. Table \ref{tab:libHyCubeReceivedMessageProcessorDHT} specifies the configuration of this message processor.
\item \emph{HyCubeMessageSendProcessorData} - Transfers processing of the DATA message being sent to the ACK manager module, which registers the message in the list of messages awaiting acknowledgment. Table \ref{tab:libHyCubeMessageSendProcessorData} specifies the configuration of this message processor.
\item \emph{HyCubeMessageSendProcessorPing} - Processes PING messages being sent - registers the PING message in the list of messages awaiting responses (PONG). Table \ref{tab:libHyCubeMessageSendProcessorPing} specifies the configuration of this message processor.
\end{itemize}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4.5cm} p{1.5cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.messaging.data.HyCubeReceivedMessageProcessorData} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (String)} & DATA \\[1.5mm]
\textbf{AckEnabled} & \textbf{Boolean} & Determines whether ACK mechanism is enabled \\[1.5mm]
\textbf{AckExtensionKey} & \textbf{String} & ACK extension key \\[1.5mm]
\textbf{ProcessDataMessageIfCannotRoute} & \textbf{Boolean} & Determines whether DATA messages should be processed by nodes even if the recipient ID is not reached (but the message cannot be routed any further) \\[1.5mm]
\textbf{PreventDuplicates} & \textbf{Boolean} & Determines whether message duplicates should be detected \\[1.5mm]
\textbf{PreventAnonymousDuplicates} & \textbf{Boolean} & Determines whether message duplicate detection should be enabled for anonymous messages (``Anonymous route'' header option) - every node routing an anonymous message updates the original message sender, which may cause messages sent by different nodes to be detected as duplicates \\[1.5mm]
\textbf{PreventDuplicatesIncludeCRC} & \textbf{Boolean} & Determines whether message duplicates detection should also compare the CRC message header field \\[1.5mm]
\textbf{PreventDuplicatesRetentionPeriod} & \textbf{Integer} & The time (ms) for which duplicates information is stored \\[1.5mm]
\textbf{PreventDuplicatesCacheMaxSize} & \textbf{Integer} & The maximum number of entries (messages received) stored for duplicates detection \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeReceivedMessageProcessorData configuration properties}
\label{tab:libHyCubeReceivedMessageProcessorData}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4.5cm} p{1.5cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.messaging.ack.HyCubeReceivedMessageProcessorDataAck} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (String)} & DATA\_ACK \\[1.5mm]
\textbf{AckExtensionKey} & \textbf{String} & ACK extension key \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeReceivedMessageProcessorAck configuration properties}
\label{tab:libHyCubeReceivedMessageProcessorAck}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4.5cm} p{1.5cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.maintenance.HyCubeReceivedMessageProcessorPing} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (String)} & PING, PONG \\[1.5mm]
\textbf{KeepAliveExtensionKey} & \textbf{String} & Keep-alive extension key \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeReceivedMessageProcessorPing configuration properties}
\label{tab:libHyCubeReceivedMessageProcessorPing}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4.5cm} p{1.5cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.lookup.HyCubeReceivedMessageProcessorLookup} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (String)} & LOOKUP, LOOKUP\_REPLY \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeReceivedMessageProcessorLookup configuration properties}
\label{tab:libHyCubeReceivedMessageProcessorLookup}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4.5cm} p{1.5cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.search.HyCubeReceivedMessageProcessorSearch} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (String)} & SEARCH, SEARCH\_REPLY \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeReceivedMessageProcessorSearch configuration properties}
\label{tab:libHyCubeReceivedMessageProcessorSearch}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4.5cm} p{1.5cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.join.searchjoin.HyCubeReceivedMessageProcessorSearchJoin} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (String)} & JOIN, JOIN\_REPLY \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeReceivedMessageProcessorSearchJoin configuration properties}
\label{tab:libHyCubeReceivedMessageProcessorSearchJoin}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4.5cm} p{1.5cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.join.routejoin.HyCubeReceivedMessageProcessorRouteJoin} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (String)} & JOIN, JOIN\_REPLY \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeReceivedMessageProcessorRouteJoin configuration properties}
\label{tab:libHyCubeReceivedMessageProcessorRouteJoin}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4.5cm} p{1.5cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.maintenance.HyCubeReceivedMessageProcessorRecovery} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (String)} & RECOVERY, RECOVERY\_REPLY \\[1.5mm]
\textbf{RecoveryExtensionKey} & \textbf{String} & Recovery extension key \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeReceivedMessageProcessorRecovery configuration properties}
\label{tab:libHyCubeReceivedMessageProcessorRecovery}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4.0cm} p{1.5cm} p{9.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.maintenance.HyCubeReceivedMessageProcessorNotify} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (String)} & NOTIFY \\[1.5mm]
\textbf{ValidateNotifyMessageSender}	& \textbf{Boolean}	& Determines whether only NOTIFY messages sent directly should be processed \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeReceivedMessageProcessorNotify configuration properties}
\label{tab:libHyCubeReceivedMessageProcessorNotify}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4.0cm} p{1.5cm} p{9.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.leave.HyCubeReceivedMessageProcessorLeave} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (String)} & LEAVE \\[1.5mm]
\textbf{ValidateLeaveMessageSender} & \textbf{Boolean} & Determines whether only LEAVE messages sent directly should be processed \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeReceivedMessageProcessorLeave configuration properties}
\label{tab:libHyCubeReceivedMessageProcessorLeave}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4.5cm} p{1.5cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.dht.HyCubeReceivedMessageProcessorDHT} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (String)} & PUT, PUT\_REPLY, GET, GET\_REPLY, DELETE, DELETE\_REPLY, REFRESH\_PUT, REFRESH\_PUT\_REPLY, REPLICATE \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeReceivedMessageProcessorDHT configuration properties}
\label{tab:libHyCubeReceivedMessageProcessorDHT}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4.5cm} p{1.5cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.messaging.ack.HyCubeMessageSendProcessorDataAck} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (String)} & DATA \\[1.5mm]
\textbf{AckExtensionKey} & \textbf{String} & ACK extension key \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeMessageSendProcessorData configuration properties}
\label{tab:libHyCubeMessageSendProcessorData}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{4.5cm} p{1.5cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.maintenance.HyCubeMessageSendProcessorPing} \\[1.5mm]
\textbf{MessageTypes} & \textbf{List (String)} & PING \\[1.5mm]
\textbf{KeepAliveExtensionKey} & \textbf{String} & Keep-alive extension key \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeMessageSendProcessorPing configuration properties}
\label{tab:libHyCubeMessageSendProcessorPing}
\end{table}
\subsection{Message delivery acknowledgments and resending}
The interface for the message delivery acknowledgment mechanism is provided by the core of the library. The \emph{Node}'s \emph{sendDataMessage} methods take two additional arguments (an ACK callback object implementing the interface \emph{MessageAckCallback} and an \emph{Object} - the callback argument passed to the callback method when the ACK message is received). This information, as well as additional information required to resend the message if the ACK is not received (timeout), is stored in a special object of type \emph{AckProcessInfo} and passed to the message send processors within an object of type \emph{DataMessageSendProcessInfo} (extending the \emph{MessageSendProcessInfo} class). The message send processor \emph{HyCubeMessageSendProcessorData} is responsible for processing DATA messages being sent (casting the message send info object to \emph{DataMessageSendProcessInfo}). The processor is bound (through the configuration) to an extension object (ACK extension - \emph{HyCubeAckExtension} class), which maintains the ACK manager module - an instance of the \emph{HyCubeAckManager} class. The message processor passes processing of the DATA message being sent to the ACK manager, which registers the message in the list of messages awaiting acknowledgment. When the ACK message (corresponding to the DATA message sent) is received, the entry is removed from the list and the ACK callback object is called (\emph{notifyDelivered} method) to signal that the acknowledgment was received. Depending on the configuration (at the \emph{Node} level), when the ACK is not received within a specified time, the message may be resent (the number of attempts is also configurable). If all resend attempts fail (no ACK is received), the ACK callback is called (\emph{notifyUndelivered} method) to signal that the message delivery failed (was not confirmed). The list of sent messages awaiting acknowledgments is checked (for entries for which the timeout elapsed) by the ACK manager, which is triggered periodically by the ACK background process (\emph{HyCubeAwaitingAcksBackgroundProcess} class).
The configuration properties of \emph{HyCubeAckExtension} (incl. nested properties of \emph{HyCubeAckManager}) are presented in Table \ref{tab:libHyCubeAckExtension}, and properties of \emph{HyCubeAwaitingAcksBackgroundProcess} are presented in Table \ref{tab:libHyCubeAwaitingAcksBackgroundProcess}.
If a DATA message is sent with the ``Secure routing'' and/or ``Skip random next hops'' header options set to \emph{true}, the ACK message sent to the DATA message sender has the same values of both header options.
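A simple acknowledgment callback is sketched below; the interface and method names are taken from the description above, while the parameter of the two callback methods is assumed to be the callback argument supplied when the message was sent.
\begin{verbatim}
// Illustrative sketch - the parameter of the callback methods is assumed to be
// the callback argument supplied to sendDataMessage.
public class LoggingAckCallback implements MessageAckCallback {

    public void notifyDelivered(Object callbackArg) {
        // called when the ACK for the sent DATA message is received:
        System.out.println("Message delivered: " + callbackArg);
    }

    public void notifyUndelivered(Object callbackArg) {
        // called when all resend attempts fail (no ACK received):
        System.out.println("Message delivery not confirmed: " + callbackArg);
    }
}
\end{verbatim}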
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{6.5cm} p{1.0cm} p{7.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.messaging.ack.HyCubeAckExtension} \\[1.5mm]
\textbf{AckManager \newline $\hookrightarrow$.ApplySecureRoutingAfterNotDeliveredCount} & \textbf{Integer} & Determines after how many attempts (ACK not received) secure routing should be applied \\[1.5mm]
\textbf{AckManager \newline $\hookrightarrow$.ApplySkippingNextHopsAfterNotDeliveredCount} & \textbf{Integer} & Determines after how many attempts (ACK not received) a random number of nodes in next hop selection should be skipped by nodes on the route \\[1.5mm]
\textbf{AckManager.ValidateAckSender} & \textbf{Boolean} & Determines whether the ACK sender should be validated (to be the same node as the one to which the original message was addressed). The value should be set to \emph{false} when anonymous registered routing is enabled, or when processing of DATA messages by closest nodes, not being the message recipient, is allowed. \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeAckExtension configuration properties}
\label{tab:libHyCubeAckExtension}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5.5cm} p{1.0cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.messaging.ack.HyCubeAwaitingAcksBackgroundProcess} \\[1.5mm]
\textbf{ScheduleImmediately} & \textbf{Boolean} & Determines whether the background process should be scheduled immediately after the initialization \\[1.5mm]
\textbf{AckExtensionKey} & \textbf{String} & ACK extension key \\[1.5mm]
\textbf{EventTypeKey} & \textbf{String} & The background process event type key \\[1.5mm]
\textbf{ScheduleInterval} & \textbf{Integer} & The schedule interval (ms) for the background process \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeAwaitingAcksBackgroundProcess configuration properties}
\label{tab:libHyCubeAwaitingAcksBackgroundProcess}
\end{table}
\subsection{Keep-alive mechanism}
\label{sec:keepAliveMechanism}
\emph{HyCube}'s keep-alive mechanism (working together with the routing table node selection technique of \emph{HyCube}) is based on node ``liveness'' (ping response indicator) values (stored in \emph{RoutingTableEntry} objects for individual references). The mechanism relies on exchanging keep-alive messages between nodes. The \emph{HyCubePingBackgroundProcess} background process periodically sends keep-alive (PING) messages to neighbors, and nodes receiving these messages are supposed to send keep-alive responses (PONG messages) to the PING sender. When a PING message is sent, the \emph{HyCubeMessageSendProcessorPing} adds that message to the list of PING messages awaiting replies. The liveness values are updated by the received message processor \emph{HyCubeReceivedMessageProcessorPing} upon receiving keep-alive responses (PONG messages) or by the \emph{HyCubeAwaitingPongsBackgroundProcess} background process, when the keep-alive response is not received after a specified timeout (the \emph{HyCubeAwaitingPongsBackgroundProcess} background process runs periodically, checking whether the timeout elapsed for individual PING messages sent). The list of PING messages for which PONG responses are expected is stored in a common extension object, \emph{HyCubeKeepAliveExtension}, which is accessed by both message processors and background processes. \emph{HyCubeKeepAliveExtension} also stores the common parameters of the keep-alive mechanism, which are accessible to other modules. Furthermore, the \emph{HyCubeKeepAliveExtension} object serves as a cache for the liveness values of nodes that were removed from routing tables (the values are stored for the time specified in the configuration and are used when a removed node is again being considered as a routing table candidate).
The configuration of the message processors (\emph{HyCubeMessageSendProcessorPing} and \emph{HyCubeReceivedMessageProcessorPing}) is presented in Section \ref{sec:libMessageProcessors}. The configuration properties of the \emph{HyCubePingBackgroundProcess} and \emph{HyCubeAwaitingPongsBackgroundProcess} background processes are presented in Tables \ref{tab:libHyCubePingBackgroundProcess} and \ref{tab:libHyCubeAwaitingPongsBackgroundProcess}, and the configuration of the \emph{HyCubeKeepAliveExtension} module is presented in Table \ref{tab:libHyCubeKeepAliveExtension}.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5.5cm} p{1.0cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.maintenance.HyCubePingBackgroundProcess} \\[1.5mm]
\textbf{ScheduleImmediately} & \textbf{Boolean} & Determines whether the background process should be scheduled immediately after the initialization \\[1.5mm]
\textbf{KeepAliveExtensionKey} & \textbf{String} & Keep-alive extension key \\[1.5mm]
\textbf{EventTypeKey} & \textbf{String} & The background process event type key \\[1.5mm]
\textbf{ScheduleInterval} & \textbf{Integer} & The schedule interval (ms) for the background process. The value should be a pointer to the ``PingInterval'' configuration parameter of the \emph{HyCubeKeepAliveExtension} extension module. \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubePingBackgroundProcess configuration properties}
\label{tab:libHyCubePingBackgroundProcess}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5.5cm} p{1.0cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.maintenance.HyCubeAwaitingPongsBackgroundProcess} \\[1.5mm]
\textbf{ScheduleImmediately} & \textbf{Boolean} & Determines whether the background process should be scheduled immediately after the initialization \\[1.5mm]
\textbf{KeepAliveExtensionKey} & \textbf{String} & Keep-alive extension key \\[1.5mm]
\textbf{EventTypeKey} & \textbf{String} & The background process event type key \\[1.5mm]
\textbf{ScheduleInterval} & \textbf{Integer} & The schedule interval (ms) for the background process. The value should be a pointer to the ``ProcessPongInterval'' configuration parameter of the \emph{HyCubeKeepAliveExtension} extension module. \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeAwaitingPongsBackgroundProcess configuration properties}
\label{tab:libHyCubeAwaitingPongsBackgroundProcess}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5.5cm} p{1.0cm} p{8.0cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.maintenance.HyCubeKeepAliveExtension} \\[1.5mm]
\textbf{PingInterval} & \textbf{Integer} & The ping time interval (ms). Determines how often PING messages are sent to neighbors. \\[1.5mm]
\textbf{PongTimeout} & \textbf{Integer} & Specifies the keep-alive timeout (ms) - the waiting time for PONG messages, after which the keep-alive is considered failed. \\[1.5mm]
\textbf{ProcessPongInterval} & \textbf{Integer} & Specifies how often the \emph{HyCubeAwaitingPongsBackgroundProcess} background process should be triggered. \\[1.5mm]
\textbf{PingResponseIndicatorRteKey} & \textbf{String} & Specifies the key under which the ping response indicator value is stored in routing table entries. This value should be the same as the value of the parameter ``LnsIndicatorRteKey'' of the \emph{HyCubeLnsRTNodeSelector} module, as both modules operate on the same values. \\[1.5mm]
\textbf{InitialPingResponseIndicatorValue} & \textbf{Double} & Specifies the initial value of the ping response indicator - $L_{init}$ \\[1.5mm]
\textbf{MaxPingResponseIndicatorValue} & \textbf{Double} & Specifies the maximum value of the ping response indicator - $L_{max}$ \\[1.5mm]
\textbf{PingResponseIndicatorUpdateCoefficient} & \textbf{Double} & Specifies the ping response indicator update coefficient - $p$ parameter \\[1.5mm]
\textbf{PingResponseIndicatorDeactivateThreshold} & \textbf{Double} & Specifies the ping response indicator threshold value under which the node is deactivated (not considered in next hop selection) - $L_{deactivate}$ \\[1.5mm]
\textbf{PingResponseIndicatorReplaceThreshold} & \textbf{Double} & Specifies the LNS replace threshold - routing table references with values of the LNS indicator lower than this threshold may be replaced - $L_{replace}$ \\[1.5mm]
\textbf{PingResponseIndicatorRemoveThreshold} & \textbf{Double} & Specifies the ping response indicator threshold value under which the node is removed from the routing table(s) - $L_{remove}$ \\[1.5mm]
\textbf{PingResponseIndicatorRetentionTime} & \textbf{Integer} & Specifies the time (ms) for which the values of the ping response indicator are stored after a node is removed from the routing table. The value 0 indicates that the values for removed nodes should not be cached at all. \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeKeepAliveExtension configuration properties}
\label{tab:libHyCubeKeepAliveExtension}
\end{table}
\subsection{Network adapter and message receiver}
The network adapter module represents the network (transport) layer of the node. A \emph{HyCube} node may operate upon any underlying network protocol, and the \emph{Node}'s protected method \emph{pushMessage}, which is supposed to send messages through the physical network (transport layer), passes the message and the direct recipient pointer to the network adapter module (implementing the \emph{NetworkAdapter} interface) by calling its \emph{sendMessage} method. The network adapter maintains the network interface address as well as the public node network address (if different). The interface address is determined by the address string specified upon the node initialization, and the public address, initially equal to the interface address, might be discovered during the joining procedure. Both values are set by appropriate modules and their interpretation depends on the network layer used. Three address representations should be supported by the network adapter: a \emph{String} representation, a \emph{byte[]} representation and the \emph{NetworkNodePointer} representation - the implementation of this interface is network adapter dependent. The \emph{NetworkAdapter} interface specifies methods converting network addresses from \emph{String} and \emph{byte[]} representations to the \emph{NetworkNodePointer} representation, and conversion from \emph{NetworkNodePointer} to string and binary representations is provided by implementations of the \emph{NetworkNodePointer} interface. All three representations may be used by any module, maintaining the abstraction of the actual network adapter in use.
The \emph{NetworkAdapter} interface declares the method \emph{messageReceived}, which is supposed to be called when a message is received by the node. The method should be called by a separate module - the message receiver (implementing the \emph{MessageReceiver} interface), which is responsible for receiving incoming messages and passing them (and the direct sender network address) to the network adapter. Upon receiving a message, the network adapter is supposed to pass the message received and the direct sender to the \emph{Node} instance by calling the \emph{Node.messageReceived} method, which would then pass the message to the defined received message processors. The message receiver implementation should be compatible with the network adapter used and may use functionalities implemented by the network adapter.
By design, a single message receiver is able to deliver messages to multiple network adapter instances (multiple node instances served by the same message receiver). The \emph{MessageReceiver} interface specifies methods for registering and unregistering network adapters. The message receiver may listen for messages coming from multiple sources and dispatch them to appropriate network adapters, based, for example, on the network addresses of individual network adapters (of individual nodes). Thus, the message receiver is not initialized automatically by the \emph{Node} instance, but should be created outside, and the node's network adapter should then be registered with the message receiver. After the message receiver is initialized, it should be explicitly started (by calling the \emph{startMessageReceiver} method) to start receiving messages. The method \emph{receiveMessage} is responsible for receiving messages (possibly a blocking call), and may pass multiple messages to the network adapter at a time (or no messages if no message was received).
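For illustration, the following sketch shows the start-up order described above. Only the \emph{startMessageReceiver} and \emph{sendMessage} calls follow the interface description; the method names \emph{registerNetworkAdapter} and \emph{createNetworkNodePointer} are illustrative assumptions and may differ from the actual interface signatures.
\begin{verbatim}
// A wiring sketch (registration and address conversion method names are
// assumptions; only the call order follows the description in the text):
static void startReceiving(MessageReceiver receiver, NetworkAdapter adapter) {
    receiver.registerNetworkAdapter(adapter);  // hypothetical method name
    receiver.startMessageReceiver();           // start receiving messages
}

static void send(NetworkAdapter adapter, Message message, String address) {
    // conversion method name assumed; converts "host:port" to a NetworkNodePointer
    NetworkNodePointer recipient = adapter.createNetworkNodePointer(address);
    adapter.sendMessage(message, recipient);   // physically sends the message
}
\end{verbatim}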
The network adapter module may implement message fragmentation and reassembly (splitting messages into smaller messages not exceeding the defined maximum message size). In such a case, the network adapter, before sending the message, should split the message into fragments and send all fragments within a single \emph{sendMessage} call. When the message receiver passes a message fragment to the network adapter, the network adapter is responsible for gathering all remaining message fragments and for calling the \emph{Node.messageReceived} method only when the complete message is assembled.
Within the \emph{HyCube} library, several network adapter and message receiver implementations, based on the UDP/IP protocol, are delivered. The list below briefly characterizes individual implementations:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item \emph{UDPNetworkAdapter} - A network adapter providing an abstraction of the UDP/IP protocol and supporting message fragmentation. The implementation is based on the Java Socket API. The socket created is publicly accessible and may be used by message receivers to receive messages. Table \ref{tab:libUDPNetworkAdapter} specifies the configuration of this network adapter.
\item \emph{UDPSelectorNetworkAdapter} - A network adapter providing an abstraction of the UDP/IP protocol and supporting message fragmentation. The implementation is based on the Java NIO API and uses a channel (\emph{DatagramChannel} object) to send messages. The channel is publicly accessible and may be used by a message receiver to receive messages. With the use of a selector (\emph{Selector} object), message receivers are able to listen for messages on multiple channels, serving multiple network adapters. This is the default network adapter implementation used in \emph{HyCube}. Table \ref{tab:libUDPSelectorNetworkAdapter} specifies the configuration of this network adapter.
\item \emph{UDPMessageReceiver} - A message receiver based on the UDP/IP protocol. The implementation is based on the Java Socket API. The message receiver expects network adapters registered to be instances of \emph{UDPNetworkAdapter} and uses the sockets maintained by them to receive messages. Table \ref{tab:libUDPMessageReceiver} specifies the configuration of this message receiver.
\item \emph{UDPSelectorMessageReceiver} - A message receiver based on the UDP/IP protocol. The implementation is based on the Java NIO API. The message receiver expects network adapters registered to be instances of \emph{UDPSelectorNetworkAdapter} and uses the channels maintained by them to receive messages using the selector mechanism. For all registered network adapters, only one instance of selector is used, which allows listening for messages on all channels using a single \emph{select()} call (one thread). Table \ref{tab:libUDPSelectorMessageReceiver} specifies the configuration of this message receiver.
\item \emph{UDPWakeableSelectorMessageReceiver} - A message receiver based on the UDP/IP protocol. The implementation is based on the Java NIO API. The message receiver expects network adapters registered to be instances of \emph{UDPSelectorNetworkAdapter} and uses the channels maintained by them to receive messages using the selector mechanism. For all registered network adapters, only one instance of selector is used, which allows listening for messages on all channels using a single \emph{select()} call (one thread). \emph{UDPWakeableSelectorMessageReceiver} additionally allows waking up a blocking call of the \emph{receiveMessage} method when needed (implements the \emph{Wakeable} interface), which is a very useful feature for efficient event processing (details of the ``wakeable'' object design are given in Section \ref{sec:libEventProcessing}). This is the default message receiver used by \emph{HyCube}. Table \ref{tab:libUDPWakeableSelectorMessageReceiver} specifies the configuration of this message receiver.
\end{itemize}
\emph{UDPNetworkAdapter} and \emph{UDPSelectorNetworkAdapter} use a nested message fragmenter module (a class implementing the \emph{MessageFragmenter} interface), which is responsible for message fragmentation. The interface provides a method for fragmenting messages, taking a message as an argument and returning an array of messages - fragments, and a method responsible for reassembling message fragments into a complete message, taking a message fragment as an argument and returning the assembled message, if all fragments were received, or \emph{null} if no complete message was assembled (storing the fragments received is managed by the fragmenter). The library provides an implementation of the message fragmenter - the class \emph{HyCubeMessageFragmenter}, which provides simple message fragmentation based on the fragment length specified in the configuration of the module. \emph{HyCubeMessageFragmenter} requires 4 bytes reserved in the message header extension field (2 bytes for the fragment number and 2 bytes for the total number of fragments). The configuration of the module is presented in Table \ref{tab:libHyCubeMessageFragmenter}.
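The contract described above may be summarized by the sketch below; the method names are illustrative assumptions - only the behavior (returning an array of fragments, and returning the reassembled message or \emph{null}) follows the description.
\begin{verbatim}
// An illustrative restatement of the fragmenter contract (method names assumed):
public interface MessageFragmenterSketch {

    // Splits a message into fragments not exceeding the configured fragment length.
    Message[] fragmentMessage(Message message);

    // Stores the received fragment and returns the reassembled message when all
    // fragments have been received, or null if the message is not yet complete.
    Message reassembleMessage(Message fragment);

}
\end{verbatim}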
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5.0cm} p{1.0cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.transport.UDPNetworkAdapter} \\[1.5mm]
\textbf{OSSendBufferSize} & \textbf{Integer} & The buffer size allocated by the operating system for sending UDP datagrams \\[1.5mm]
\textbf{OSReceiveBufferSize} & \textbf{Integer} & The buffer size allocated by the operating system for receiving UDP datagrams \\[1.5mm]
\textbf{ReceiveTimeout} & \textbf{Integer} & The timeout (ms) (maximum blocking time) of the receive operation on the socket \\[1.5mm]
\textbf{MaxMessageLength} & \textbf{Integer} & The maximum allowed message length \\[1.5mm]
\textbf{ThrowWhenMaxMessageLengthExceeded} & \textbf{Boolean} & Determines whether an exception should be thrown when the length of the message being sent exceeds the defined maximum value, or whether such messages should be silently dropped \\[1.5mm]
\textbf{FragmentMessages} & \textbf{Boolean} & Determines whether message fragmentation is enabled \\[1.5mm]
\textbf{MessageFragmenter} & \textbf{Nested} & Specifies the configuration of the message fragmenter module (nested) \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{UDPNetworkAdapter configuration properties}
\label{tab:libUDPNetworkAdapter}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5.0cm} p{1.0cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.transport.UDPSelectorNetworkAdapter} \\[1.5mm]
\textbf{OSSendBufferSize} & \textbf{Integer} & The buffer size allocated by the operating system for sending UDP datagrams \\[1.5mm]
\textbf{OSReceiveBufferSize} & \textbf{Integer} & The buffer size allocated by the operating system for receiving UDP datagrams \\[1.5mm]
\textbf{ReceiveTimeout} & \textbf{Integer} & The timeout (ms) (maximum blocking time) of the receive operation on the socket \\[1.5mm]
\textbf{MaxMessageLength} & \textbf{Integer} & The maximum allowed message length \\[1.5mm]
\textbf{ThrowWhenMaxMessageLengthExceeded} & \textbf{Boolean} & Determines whether an exception should be thrown when the length of the message being sent exceeds the defined maximum value, or whether such messages should be silently dropped \\[1.5mm]
\textbf{FragmentMessages} & \textbf{Boolean} & Determines whether message fragmentation is enabled \\[1.5mm]
\textbf{MessageFragmenter} & \textbf{Nested} & Specifies the configuration of the message fragmenter module (nested) \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{UDPSelectorNetworkAdapter configuration properties}
\label{tab:libUDPSelectorNetworkAdapter}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5.0cm} p{1.0cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.transport.UDPMessageReceiver} \\[1.5mm]
\textbf{MessageFactory} & \textbf{Nested} & The configuration (nested) of the message factory module used by the message receiver to create message objects from message bytes received \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{UDPMessageReceiver configuration properties}
\label{tab:libUDPMessageReceiver}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5.0cm} p{1.0cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.transport.UDPSelectorMessageReceiver} \\[1.5mm]
\textbf{MessageFactory} & \textbf{Nested} & The configuration (nested) of the message factory module used by the message receiver to create message objects from message bytes received \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{UDPSelectorMessageReceiver configuration properties}
\label{tab:libUDPSelectorMessageReceiver}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5.0cm} p{1.0cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.transport.UDPWakeableSelectorMessageReceiver} \\[1.5mm]
\textbf{MessageFactory} & \textbf{Nested} & The configuration (nested) of the message factory module used by the message receiver to create message objects from message bytes received \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{UDPWakeableSelectorMessageReceiver configuration properties}
\label{tab:libUDPWakeableSelectorMessageReceiver}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5.0cm} p{1.0cm} p{8.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Class} & \textbf{String} & \textit{net.hycube.messaging.fragmentation.HyCubeMessageFragmenter} \\[1.5mm]
\textbf{HeaderExtensionIndex} & \textbf{Integer} & The index of the message header extension reserved for the message fragmenter \\[1.5mm]
\textbf{FragmentLength} & \textbf{Integer} & The maximum byte length of a single message. If the message length exceeds this value, the message will be split into messages of maximum length of FragmentLength \\[1.5mm]
\textbf{FragmentsRetentionPeriod} & \textbf{Integer} & The amount of time (ms) for which received message fragments are stored by the message fragmenter. The complete message might not be reassembled if the time difference between receiving the first and the last fragment exceeds the value of this parameter. \\[1.5mm]
\textbf{PreventFragmentDuplicates} & \textbf{Boolean} & Determines whether the message fragments duplicates received should be ignored (\emph{true}) or the fragments received before should be overwritten (\emph{false}) \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeMessageFragmenter configuration properties}
\label{tab:libHyCubeMessageFragmenter}
\end{table}
\section{Event processing}
\label{sec:libEventProcessing}
This section discusses the architecture of event processing implemented in the \emph{HyCube} library. A node instance performs many operations as a consequence of certain actions (external triggers like receiving a message, explicit execution by other operations, scheduled operations, or operations initiated from outside the library by calling the API methods). By design, the operations are not performed in the calling thread; instead, corresponding events are inserted into event queues - instances of \emph{LinkedBlockingQueue} (defined for individual event types) and processed by the ``event processor'' (an object implementing the \emph{EventQueueProcessor} interface). The event processor is responsible for processing events from event queues. The event processor should be created independently of the node instance, and the event queues (passed to the \emph{Node} initializer method) should be registered in the event processor object, after which the event processor should be started (\emph{start} method). The event queues may be shared among multiple node instances, and a single queue may be used for multiple event types. In most cases (a single node instance running), the events may be successfully served by a single queue.
Individual implementations of event processors may use a single processing thread, multiple threads or adjust the number of threads used depending on the number of events in the queues. Events are represented by objects of class \emph{Event} and the event operation is defined either by overriding the method \emph{Event.process()} in classes extending the \emph{Event} class, or by the process event proxy (an object of type \emph{ProcessEventProxy}) to which processing of the event is passed. The process event proxy is specified for the event instance upon initialization. The \emph{Event.process()} method calls the proxy's \emph{processEvent} method passing the event instance to it - this is the default behavior. Additional objects required for event processing may be stored in the \emph{Event} object by specifying an \emph{Object[] eventArgs} array in the event's constructor, or may be explicitly included in the classes extending the \emph{Event} class. Two additional properties, which may be used by the inserting and processing modules, are defined in the \emph{Event} class: \emph{timestamp} and \emph{priority}.
Every event has a property \emph{eventType}, which defines to which queue the event is inserted. The event type may also be used by the process event proxy object to determine how the event should be processed. The \emph{eventType} property is an object of class \emph{EventType}. The \emph{EventType} class defines two properties: \emph{eventCategory} (\emph{EventCategory} enumeration) and \emph{eventTypeKey} (\emph{String}). The event category defines the family of the event types, while the event type key specifies the event type among possible multiple event types within the same event category. Upon \emph{Node} initialization, the event queues are specified for individual event types - a \emph{Map<EventType, LinkedBlockingQueue<Event>>} object is passed to the \emph{Node.initialize} method. If, for any event type, the event type key is an empty string, this queue will be the default event queue for all event types for the specified event category. Table \ref{tab:libEventCategory} presents the event categories supported by the \emph{HyCube} library and the classes extending the \emph{Event} class specific to individual event categories.
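For illustration, the sketch below builds such a map, registering one shared queue as the default queue for several event categories. The \emph{EventType} constructor arguments and the exact \emph{EventCategory} constant names are assumptions based on the description above and on the category names in Table \ref{tab:libEventCategory}.
\begin{verbatim}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.LinkedBlockingQueue;

// Builds the event type -> queue mapping passed to Node.initialize
// (constructor signature and enum constant names are assumptions):
static Map<EventType, LinkedBlockingQueue<Event>> buildEventQueues() {
    Map<EventType, LinkedBlockingQueue<Event>> eventQueues =
            new HashMap<EventType, LinkedBlockingQueue<Event>>();
    LinkedBlockingQueue<Event> sharedQueue = new LinkedBlockingQueue<Event>();
    // an empty event type key makes the queue the default for the whole category:
    eventQueues.put(new EventType(EventCategory.processReceivedMessageEvent, ""),
                    sharedQueue);
    eventQueues.put(new EventType(EventCategory.pushMessageEvent, ""), sharedQueue);
    eventQueues.put(new EventType(EventCategory.executeBackgroundProcessEvent, ""),
                    sharedQueue);
    return eventQueues;
}
\end{verbatim}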
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{5.0cm} p{9.5cm}}
\hline
\textbf{Event category} & \textbf{Description} \\[1mm]
\hline
\textbf{undefinedEvent} & Undefined event type. May be used in certain situations when the event type is not relevant. \\[1.5mm]
\textbf{receiveMessageEvent} & An event executing the message receiver's receive call. Class \emph{MessageReceiverProcessEventProxy} should be used to process events of this type. The implementation's \emph{processEvent} method calls the \emph{receiveMessage} method of the message receiver specified in the constructor of \emph{MessageReceiverProcessEventProxy}. The events of this category are represented by \emph{Event} class instances (they do not require any additional parameters). \\[1.5mm]
\textbf{processReceivedMessageEvent} & An event processing a received message. Class \emph{ProcessReceivedMessageEvent} should be used to represent events of this event category. \\[1.5mm]
\textbf{pushMessageEvent} & An event pushing a message to the network layer (physically sending the message). Class \emph{PushMessageEvent} should be used to represent events of this event category. \\[1.5mm]
\textbf{pushSystemMessageEvent} & An event pushing a system message to the network (transport) layer, physically sending the message. Class \emph{PushMessageEvent} should be used to represent events of this event category. \\[1.5mm]
\textbf{processAckCallbackEvent} & An event executing received ACK callbacks. Class \emph{MessageAckCallbackEvent} should be used to represent events of this event category. \\[1.5mm]
\textbf{processMsgReceivedCallbackEvent} & An event executing received message callbacks. Class \emph{MessageReceivedCallbackEvent} should be used to represent events of this event category. \\[1.5mm]
\textbf{executeBackgroundProcessEvent} & An event running a background process. The event type key specifies the event types for individual background processes. Class \emph{Event} is used to represent events of this event category, and \emph{BackgroundProcessEventProxy} is the process event proxy class processing events of this category. The background process object is bound to the process event proxy on initialization (constructor argument). The \emph{Event}'s property \emph{eventArgs[0]} specifies whether the \emph{process} or \emph{schedule} method of the background process should be called (\emph{BackgroundProcessEventProxy.BackgroundProcessEventProxyOperation} enumeration). \\[1.5mm]
\textbf{extEvent} & Extended (custom) event - may represent any module-specific events. The event type key specifies the event type. Modules defining custom event types may use any arbitrarily chosen class to represent events (extending \emph{Event}) and any process event proxy implementations. \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{Event categories}
\label{tab:libEventCategory}
\end{table}
Events of the types \emph{processReceivedMessageEvent}, \emph{pushMessageEvent}, \emph{pushSystemMessageEvent}, \emph{processAckCallbackEvent} and \emph{processMsgReceivedCallbackEvent} are created by the \emph{Node} instance, and a special process event proxy class (\emph{Node.NodeProcessEventProxy}) was created to pass the processing of these event types back to the \emph{Node} instance, which is responsible for further processing.
\subsubsection{Event scheduler}
In many situations, instead of processing an event immediately, it is necessary to postpone its execution - to schedule the event for future execution. The \emph{Node.initialize} method expects an argument of type \emph{EventScheduler}, which defines the event scheduler object used for scheduling events by the node and all modules. The interface specifies three methods allowing scheduling an event to be executed at the defined time or with a certain delay. The scheduler implementations should insert the specified \emph{Event} object into the specified queue at the time defined in the method call. The class \emph{TimeProviderEventScheduler} is an event scheduler implementation that relies on the scheduling methods of a time provider (\emph{TimeProvider} instance), specified in the constructor. The \emph{DirectEnvironment} environment (default) defines an event scheduler based on the system clock (\emph{ScheduledThreadPoolExecutor}).
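A minimal scheduling sketch is given below; the method name \emph{scheduleEventWithDelay} is an illustrative assumption standing for one of the three scheduling methods mentioned above.
\begin{verbatim}
import java.util.concurrent.LinkedBlockingQueue;

// Schedules an event to be inserted into the given queue after a 5 second delay
// (the scheduling method name and argument order are assumptions):
static void scheduleWithDelay(EventScheduler scheduler,
                              LinkedBlockingQueue<Event> queue,
                              Event event) {
    scheduler.scheduleEventWithDelay(event, queue, 5000);
}
\end{verbatim}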
\subsubsection{Message receiver events}
\emph{receiveMessageEvent} is a special event category. Events of this category call the message receiver's \emph{receiveMessage} method and should be explicitly inserted to the appropriate queue, after the message receiver is properly initialized and the network adapter(s) are already bound to the message receiver. The \emph{MessageReceiver} interface contains a method \emph{startMessageReceiver} and an overloaded version taking an integer argument \emph{numEventsToEnqueue}. The no-argument method implementations should insert the receiver event to the message receiver queue, while the integer argument of the second method specifies the number of the events that should be inserted to the queue - for certain message receivers it is possible to define multiple threads processing an event queue, in which case multiple events may be processed at the same time. However, the message receiver should, in such a case, be aware of the possibility of multiple simultaneous calls of the \emph{receiveMessage} method.
The message receivers should ensure that, after \emph{receiveMessage} returns, another message receiver event is inserted into the queue, so that new messages may be received. That is why, as a design rule, implementations of \emph{receiveMessage} should enqueue the message receiver event after receiving a message, which ensures that the total number of message receiver events (being processed or queued) remains equal to the number specified in the \emph{startMessageReceiver} call.
\subsubsection{Wakeable events and notifying queues}
\label{sec:libWakeables}
As already mentioned, the message receiver event (calling the \emph{MessageReceiver.receiveMessage} method) may be blocking, in which case the processing thread would not be able to process events waiting in the queue. In such a case, it is useful to allow the blocking calls, but to provide a mechanism of ``waking up'' the blocking operation, allowing the enqueued events to be processed (and inserting another message receiver event into the queue afterwards). Similar situations may occur for any custom blocking event type. In the \emph{HyCube} library, the \emph{Wakeable} interface has been introduced, which defines one method \emph{wakeup()}. When called, the object is expected to interrupt the blocking operation being performed, or, if the operation is not yet being performed (for instance, the event is still in the event queue), prevent the operation from blocking when it is executed. The implementation is module-specific, but the event processors (Section \ref{sec:libEventProcessors}) and node services (Section \ref{sec:libNodeServices}) expect the described behavior of wakeable events, and the implementations should take into account the possibility of multiple simultaneous \emph{wakeup} calls, if performing the blocking operation by multiple threads simultaneously is allowed. The \emph{UDPWakeableSelectorMessageReceiver} is an example of a wakeable receiver, for which it is possible to wake up a blocking \emph{select} operation by calling the \emph{wakeup} method, or to perform the selection in the non-blocking mode if the \emph{wakeup()} method was called before the selection.
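The sketch below illustrates the expected ``wakeable'' behavior using the Java NIO selector mechanism (the approach taken by \emph{UDPWakeableSelectorMessageReceiver}); the class itself is illustrative only and omits reading datagrams and dispatching them to network adapters.
\begin{verbatim}
import java.io.IOException;
import java.nio.channels.Selector;
import java.util.concurrent.atomic.AtomicBoolean;

// An illustrative wakeable receive loop (not the library implementation);
// the Wakeable interface (library package import omitted) defines wakeup().
class WakeableSelectReceiver implements Wakeable {

    private final Selector selector;
    private final AtomicBoolean wakeupRequested = new AtomicBoolean(false);

    WakeableSelectReceiver(Selector selector) { this.selector = selector; }

    public void wakeup() {
        // interrupt a blocking select, or prevent the next select from blocking
        wakeupRequested.set(true);
        selector.wakeup();
    }

    public void receiveOnce(long maxBlockTimeMillis) throws IOException {
        if (wakeupRequested.getAndSet(false)) {
            selector.selectNow();                 // non-blocking mode
        } else {
            selector.select(maxBlockTimeMillis);  // blocking, bounded blocking time
        }
        // reading from the selected channels and passing the messages to the
        // registered network adapters is omitted in this sketch
    }
}
\end{verbatim}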
In order to work with possibly multiple blocking and non-blocking events in the queue, the wakeable instances should be managed by a so-called ``wakeable manager'' object, implementing the \emph{WakeableManager} interface. An instance of a wakeable manager is expected to be provided upon creation of any wakeable message receiver (implementing the \emph{WakeableMessageReceiver} interface). The wakeable manager object is responsible for managing multiple blocking operations, allowing registering new running operations and removing them after the blocking operation is finished, as well as waking up one of the wakeable objects (and removing it from the list of managed wakeable objects) when the \emph{wakeup} method is called explicitly.
Calls to the wakeable manager methods (\emph{addWakeable}, \emph{removeWakeable}, \emph{wakeup}) and operations performed by the wakeable objects (adding events to the queue, waking up, internal wakeable object state changes) should be atomic. To make sure that no race conditions exist, \emph{WakeableManager} exposes a lock object that is used to synchronize the \emph{addWakeable}, \emph{removeWakeable}, \emph{getWakeables} and \emph{wakeup} operations. This lock may be used to demarcate critical sections including operations on the queues.
The implementation of a wakeable manager provided, the class \emph{EventQueueWakeableManager}, is capable of managing wakeable objects (blocking operations) for an event queue processed by a given number of threads (\emph{availableSlots} constructor argument). When the blocking events are processed, they should be registered as wakeable objects within the wakeable manager, and removed when the processing is finished. The \emph{wakeup} call is interpreted as inserting a non-blocking event into the event queue. The wakeable manager ensures that no non-blocking event processing would have to be delayed because of a blocking operation taking place. The manager registers the numbers of non-blocking events enqueued (\emph{wakeup} calls) after every wakeable event (\emph{addWakeable} call), and when a wakeable event is removed, all non-blocking events enqueued after it (but before the next wakeable event) are also removed. Thus, upon every \emph{wakeup} call, it is able to determine the total number of events currently enqueued for processing. The \emph{wakeup} call would wake up an appropriate number of wakeable objects to ensure that processing of the event would not be delayed by any blocking (wakeable) operation, taking into account the number of available processing threads - \emph{availableSlots}. If there is a certain number of blocking (wakeable) events that should be rescheduled every time after processing the event, the number of threads processing the queue should be at least equal to the number of these blocking events, to prevent delays in processing blocking events.
The ``wakeup'' mechanism needs to be integrated with the operations on the event queue, to allow automatic calls to the wakeable manager when the events are inserted into the event queue. For that purpose, the \emph{NotifyingQueue<T>} and \emph{NotifyingBlockingQueue<T>} interfaces have been created, as well as two implementations: \emph{NotifyingLinkedBlockingQueue<T>} and \emph{NotifyingPriorityBlockingQueue<T>} (extending \emph{LinkedBlockingQueue<T>} and \emph{PriorityBlockingQueue<T>}), to be used as event queues. The classes implementing the \emph{NotifyingQueue} interface are supposed to call all registered listener objects (\emph{NotifyingQueueListener<T>}) every time an element is inserted into the queue, by calling the \emph{itemInserted} or \emph{itemsInserted} method (when multiple elements are inserted at a time), passing the items inserted to the method call. The methods inserting elements into the queue also have overloaded versions taking a boolean argument \emph{notify}, which determines whether the listener objects should be called after the insertion, allowing the default notification behavior to be changed. \emph{NotifyingLinkedBlockingQueue<T>} and \emph{NotifyingPriorityBlockingQueue<T>} extend the \emph{LinkedBlockingQueue<T>} and \emph{PriorityBlockingQueue<T>} classes and intercept the operations inserting elements into the queue, executing the listener method after the element is inserted. All operations on such queues should be synchronized (operations on the queue and the listener calls) - no race conditions should be present.
To link a notifying queue to an \emph{EventQueueWakeableManager} instance, an instance of the \emph{WakeableManagerQueueListener<T>} class may be used, which is supposed to be registered as the notifying queue listener and would call the wakeable manager's \emph{wakeup} method whenever an element is added to the queue. When a blocking (wakeable) event is inserted into the queue, the \emph{EventQueueWakeableManager} implementation expects the queue NOT to notify the wakeable manager. Therefore, the overloaded notifying queue method (with the argument \emph{notify=false}) should be called in such cases. The wakeable object should be registered using the \emph{addWakeable} method call upon inserting such events into the queue. To allow proper synchronization of \emph{wakeup} and \emph{addWakeable} operations with the operations performed by the notifying queue, the wakeable manager and the wakeable object, all these objects should perform synchronization using the lock object provided by the wakeable manager (the lock object of the notifying queue is defined in the queue object constructor, and the wakeable manager object is passed to the wakeable message receiver's initialize method).
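The wiring described above may be sketched as follows; the listener registration method, the insert overload with the \emph{notify} flag and the lock accessor are illustrative assumptions - only \emph{addWakeable} and the \emph{notify=false} convention follow the description.
\begin{verbatim}
// Wiring sketch (several method names are assumptions, see the text above):
static void bindQueue(NotifyingLinkedBlockingQueue<Event> queue,
                      WakeableManager wakeableManager) {
    // register the listener calling the wakeable manager on every insertion:
    queue.addListener(new WakeableManagerQueueListener<Event>(wakeableManager));
}

static void enqueueBlockingEvent(NotifyingLinkedBlockingQueue<Event> queue,
                                 WakeableManager wakeableManager,
                                 Event blockingEvent,
                                 Wakeable wakeable) {
    // blocking (wakeable) events are inserted without notifying the manager;
    // the wakeable object is registered instead:
    synchronized (wakeableManager.getWakeableManagerLock()) { // hypothetical accessor
        queue.offer(blockingEvent, false);                    // hypothetical overload
        wakeableManager.addWakeable(wakeable);
    }
}
\end{verbatim}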
Wakeable managers may also define the maximum time for which a wakeable object is allowed to block in the next call (implementation of the \emph{getNextMaxSleepTime} method), which might be useful when the wakeable manager cooperates with the event scheduler - it may, for example, limit the blocking time to the time left to the execution of the earliest scheduled event. In such a case, no additional scheduler thread would be needed to process the event queue (to wake up the blocking operation when the scheduled event should be inserted to the event queue). With the use of such a mechanism, it is possible to process the event queue (including blocking events) as well as scheduled events, using a single thread.
\subsubsection{Event queue processors}
\label{sec:libEventProcessors}
Event processors are objects processing events from defined event queues. The arguments of event processor initializer methods define the queues they should operate on, parameters specifying how individual queues should be processed, as well as an error callback object and the error callback argument - the error callback will be called when processing an event throws an exception, passing the callback argument to the callback method. Such a mechanism is necessary to allow processing exceptions thrown while processing events, because the events are not processed in the API caller's thread. Such exceptions should, however, be thrown outside the event processing only when they denote critical errors. Exceptions that are not critical should be handled by the event processing logic. In case the error callback object is notified, the node instance should be immediately terminated.
Several event queue processors have been implemented in the \emph{HyCube} library. Individual implementations differ in the way they manage threads processing event queues:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item \emph{SimpleThreadEventQueueProcessor} - A simple implementation of an event queue processor, employing a certain number of threads (\emph{Thread} objects) processing events from the defined queues (the numbers of threads are defined for individual queues).
\item \emph{SimpleThreadPoolEventQueueProcessor} - A simple implementation of an event queue processor, employing a thread pool (using fixed number of threads) for each of the defined queues (the numbers of threads are defined for individual queues).
\item \emph{ThreadPoolEventQueueProcessor} - An implementation of an event queue processor, employing a thread pool (using a certain number of threads) for each of the defined queues. The implementation dynamically adjusts the number of threads employed for processing individual queues. The maximum number of threads, as well as the keep alive time is defined for every thread pool. The keep alive time is the maximum time that excess idle threads will wait for new tasks before terminating. Parameters of thread pools processing individual queues are defined by providing objects of type \emph{ThreadPoolInfo} passed to the processor's \emph{initialize} method.
\item \emph{EventQueueSchedulerProcessor} - An implementation of an event queue processor, employing a thread pool (using a certain number of threads) for each of the defined queues. The implementation dynamically adjusts the number of threads employed for processing individual queues. The maximum number of threads, as well as the keep alive time is defined for every thread pool. The keep alive time is the maximum time that excess idle threads will wait for new tasks before terminating. \emph{EventQueueSchedulerProcessor} class also implements the \emph{EventScheduler} interface and provides event scheduler functionality. Parameters of thread pools processing individual queues are defined by providing objects of type \emph{EventQueueProcessingInfo} passed to the processor's \emph{initialize} method. The \emph{EventQueueProcessingInfo} objects specifies the thread pool parameters (\emph{ThreadPoolInfo}), the list of event types expected in the queue, and a boolean \emph{wakeable} flag, specifying if the wakeup mechanism should be applied for processing the queue (in which case the queue is expected to be an instance of \emph{NotifyingQueue}). \emph{EventQueueSchedulerProcessor} class defines the (internal) class \emph{EventQueueProcessorRunnable} (implementing \emph{Runnable} interface). Instances of the internal class \emph{EventQueueProcessorRunnable} are run by threads processing individual queues, retrieving and processing events from the queue. The class \emph{EventQueueProcessorRunnable} additionally implements the \emph{WakeableManager} interface, and its instances serve as wakeable managers for individual queues. The functionality of the wakeable manager described in Section \ref{sec:libWakeables} is extended by providing the maximum blocking time for wakeable objects, determined based on the number of the processing threads, scheduled event execution times, and planned wakeup times of the events currently being performed. The time returned ensures that, for every scheduled event \emph{e} within $k$ first scheduled events ($k = availableSlots$), the number of threads that would be available for scheduled execution (not sleeping/blocking at the scheduled execution time) is not less than the number of events scheduled, up to the event \emph{e}. That would ensure that first $k$ scheduled events (earliest) can be executed in parallel (by the maximum number of available threads). The same calculations are performed to determine the maximum blocking time of the \emph{BlockingQueue.poll} method call on the event queue object. \emph{EventQueueSchedulerProcessor} takes into account the possibility of scheduling new events for execution in the future while the processing threads are already performing blocking operations or sleeping (waiting for the events to be inserted to the event queue). Upon scheduling a new event, if needed, an appropriate number of threads are woken to allow processing the event in time. Such an approach ensures that no additional scheduler thread is needed to process the event queue (to wake up blocking operations at the time when the scheduled event should be inserted to the event queue). This implementation is the only one provided that allows processing all the events by a single thread. 
Although other event processors (presented before) may be configured to use a single processing thread, additional thread(s) would have to be used to insert scheduled events into the event queues (for example the \emph{ScheduledThreadPoolExecutor} instance used by the scheduling functionality of \emph{SystemTimeProvider}). There is, however, one important restriction on the \emph{EventQueueSchedulerProcessor} event processor. Because it relies on specifying maximum blocking times for blocking operations, which are passed to Java built-in classes that interpret the time as the system clock time (\emph{BlockingQueue.poll}, \emph{selector.select}, \ldots), the time provider specified for the node instance (within the \emph{Environment} instance passed to the node initializer) should be based on the system time (possibly shifted, but the time values returned should grow at the same rate as the values returned by the \emph{System.currentTimeMillis()} call). A minimal initialization sketch for this event processor is presented after this list.
\end{itemize}
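The following sketch illustrates how the \emph{EventQueueSchedulerProcessor} could be configured for a single wakeable queue; the \emph{ThreadPoolInfo} and \emph{EventQueueProcessingInfo} constructor signatures and the \emph{initialize} argument list are assumptions based only on the parameters described above, not the exact library API.
\begin{verbatim}
// Configuration sketch (constructor and initialize signatures are assumptions):
ThreadPoolInfo pool = new ThreadPoolInfo(2, 60);  // 2 threads, 60 s keep-alive time
EventQueueProcessingInfo queueInfo = new EventQueueProcessingInfo(
        pool,
        new EventType[] {
            new EventType(EventCategory.receiveMessageEvent, ""),
            new EventType(EventCategory.processReceivedMessageEvent, "") },
        true);                                    // wakeable queue
EventQueueSchedulerProcessor processor = new EventQueueSchedulerProcessor();
processor.initialize(new EventQueueProcessingInfo[] { queueInfo }
        /* , event queues, error callback and callback argument ... */);
processor.start();
\end{verbatim}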
\section{Node services}
\label{sec:libNodeServices}
To run a node instance, the programmer is supposed to create and initialize the environment, time provider, scheduler, message receiver and event processor instances before creating the \emph{Node} instance. To simplify the process of creating a node, special classes - node services - have been created. Node services are classes responsible for creating the node and the necessary supporting objects, based on the parameters specific to individual node service implementations. In most cases, the node services should be considered the library API, unless certain customizations, not supported by the implementations provided, are required.
The \emph{NodeService} interface exposes operations that may be performed in the node context by the API programmers. The methods directly relate to the methods exposed by the \emph{Node} class. All node services (running a single node) implement this interface. The interface specifies only methods performing generic operations, defined by the core of the library. \emph{HyCube}-specific operations (operations realized by \emph{HyCube}-specific modules) are defined in the interface \emph{HyCubeNodeService}, extending \emph{NodeService}. Two important classes implementing these interfaces are provided: \emph{NodeProxyService} and \emph{HyCubeNodeProxyService}. \emph{NodeProxyService} (implementing \emph{NodeService}) is a proxy service passing the method calls to the corresponding \emph{Node} instance. \emph{HyCubeNodeProxyService} extends \emph{NodeProxyService} with the methods specified in the \emph{HyCubeNodeService} interface - overloaded methods based on \emph{HyCube}-specific types, as well as methods calling entry points of \emph{HyCube}-specific modules. \emph{HyCubeNodeProxyService} should be configured in the configuration file by specifying nested node proxy service configuration - property \emph{NodeProxyService} at the root level of the configuration name space. Table \ref{tab:libHyCubeNodeProxyService} specifies the configuration of \emph{HyCubeNodeProxyService}.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{6.0cm} p{1.0cm} p{7.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{NodeProxyService} & \textbf{String} & \textbf{HyCubeNodeProxyService} - Determines the key for the nested configuration of the node proxy service. \\[1.5mm]
\textbf{NodeProxyService[HyCubeNodeProxyService] \newline \ $\hookrightarrow$.RecoveryExtensionKey} & \textbf{String} & Recovery extension key \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{HyCubeNodeProxyService configuration properties}
\label{tab:libHyCubeNodeProxyService}
\end{table}
The library provides a number of abstract node service classes that implement managing the message receiver, event queues and the event processor, and methods provided by the \emph{NodeService} interface. Internally, they use an instance of \emph{NodeProxyService} to pass requests to the \emph{Node} instance. However, the abstract classes do not define the creation of the node proxy service object. They use an object returned by the abstract method \emph{initializeNodeProxyService}. The implementation of this method in \emph{HyCube}-specific classes creates and returns an object of class \emph{HyCubeNodeProxyService} (which initializes the \emph{Node} instance). Individual node services process events using different event processors and message receivers. The message receiver configuration is read from the configuration file, and the event processor is configured either by specifying the parameters as arguments passed to the node service's \emph{initialize} method call, or by specifying the values in the configuration file, in which case the \emph{initializeFromConf} method should be used to create the service instance. In the configuration file, the node services are configured by nested properties of the property \emph{NodeService} (\emph{NodeService[NodeServiceKey]}) at the root level of the configuration name space, and the \emph{NodeServiceKey} is passed to the \emph{initializeFromConf} method of the service.
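For illustration, a node could be started through one of the node services listed below (for example \emph{HyCubeSimpleSchedulingNodeService}) roughly as follows; apart from the method name \emph{initializeFromConf} and the node service key, the argument list is an assumption and the actual signature may differ.
\begin{verbatim}
// Start-up sketch (argument list assumed, see the text above):
HyCubeSimpleSchedulingNodeService nodeService =
        HyCubeSimpleSchedulingNodeService.initializeFromConf(
                "NodeService1",        // node service key: NodeService[NodeService1]
                "192.168.1.10:5000",   // hypothetical: the node's network address
                null, null);           // hypothetical: error callback and its argument
// the returned service implements HyCubeNodeService and exposes the node API
\end{verbatim}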
The listing below presents the \emph{HyCube} node services provided by the library:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item \textbf{\emph{SingleQueueNodeService}} (abstract) \newline \textbf{\emph{HyCubeSingleQueueNodeService}} (\emph{HyCube}-specific) \newline A node service using a single event queue to process all event types by a thread pool defined in a \emph{ThreadPoolInfo} object passed to the \emph{initialize} method or in the configuration file (configuration specified in Table \ref{tab:libSingleQueueNodeService}). The wakeup mechanism is used to wake up wakeable events (with the default \emph{HyCube} configuration, only message receiver events). The message receiver is expected to be an instance of \emph{WakeableMessageReceiver}.
\item \textbf{\emph{MultiQueueNodeService}} (abstract) \newline \textbf{\emph{HyCubeMultiQueueNodeService}} (\emph{HyCube}-specific) \newline A node service using multiple event queues to process events. The parameters and lists of event types for individual queues are defined by an \emph{EventQueueProcessingInfo[]} array passed to the \emph{initialize} method or in the configuration file (configuration specified in Table \ref{tab:libMultiQueueNodeService}). The wakeup mechanism is used to wake up wakeable events (with the default \emph{HyCube} configuration, only message receiver events). The message receiver is expected to be an instance of \emph{WakeableMessageReceiver}.
\item \textbf{\emph{SchedulingMultiQueueNodeService}} (abstract) \newline \textbf{\emph{HyCubeSchedulingMultiQueueNodeService}} (\emph{HyCube}-specific) \newline A node service using multiple event queues to process events. The parameters and lists of event types for individual queues are defined by an \emph{EventQueueProcessingInfo[]} array passed to the \emph{initialize} method or in the configuration file (configuration specified in Table \ref{tab:libSchedulingMultiQueueNodeService}). Internally, \emph{EventQueueSchedulerProcessor} is used as the event processor and the event scheduler. The wakeup mechanism is used to wake up wakeable events. The message receiver is expected to be an instance of \emph{WakeableMessageReceiver}.
\item \textbf{\emph{SingleQueueNodeServiceNonWakeable}} (abstract) \newline \textbf{\emph{HyCubeSingleQueueNodeServiceNonWakeable}} (\emph{HyCube}-specific) \newline A node service using a single event queue to process all event types by a thread pool defined by a \emph{ThreadPoolInfo} object passed to the \emph{initialize} method or in the configuration file (configuration specified in Table \ref{tab:libSingleQueueNodeServiceNonWakeable}). The wakeup mechanism is not used by this node service. Therefore, the number of threads serving queues that may contain blocking events (with the default \emph{HyCube} configuration, only message receiver events) should be large enough to prevent blocking operations from causing delays in processing non-blocking events.
\item \textbf{\emph{MultiQueueNodeServiceNonWakeable}} (abstract) \newline \textbf{\emph{HyCubeMultiQueueNodeServiceNonWakeable}} (\emph{HyCube}-specific) \newline A node service using multiple event queues to process events. The parameters and lists of event types for individual queues are defined by an \emph{EventQueueProcessingInfo[]} array passed to the \emph{initialize} method or in the configuration file (configuration specified in Table \ref{tab:libMultiQueueNodeServiceNonWakeable}). The wakeup mechanism is not used by this node service. Therefore, the number of threads serving queues that may contain blocking events (with the default \emph{HyCube} configuration, only message receiver events) should be large enough to prevent blocking operations from causing delays in processing non-blocking events.
\item \textbf{\emph{SimpleNodeService}} (abstract) \newline \textbf{\emph{HyCubeSimpleNodeService}} (\emph{HyCube}-specific) \newline A node service using one or two event queues to process events. The node service creates the minimum number of threads required to prevent blocking events from delaying processing of other events. The parameters passed to the \emph{initialize} method, or specified in the configuration file (configuration presented in Table \ref{tab:libSimpleNodeService}), define the number of custom blocking events (\emph{blockingExtEventsNum}) that may be enqueued at the same time (in addition to the message receiver event - with the default configuration, there are no additional blocking events), and a flag (\emph{wakeupMessageReceiver}) determining whether the wakeup mechanism should be used to wake the message receiver. If \emph{wakeupMessageReceiver} is set to \emph{true}, the node service creates one event queue and the queue is processed by a thread pool consisting of $blockingExtEventsNum+1$ threads. If \emph{wakeupMessageReceiver} is set to \emph{false}, two event queues are created: a queue for the message receiver events processed by a single thread, and a queue for other event types, processed by $blockingExtEventsNum+1$ threads. Because the wakeup mechanism is used to wake up wakeable events, the message receiver is expected to be an instance of \emph{WakeableMessageReceiver}.
\item \textbf{\emph{SimpleSchedulingNodeService}} (abstract) \newline \textbf{\emph{HyCubeSimpleSchedulingNodeService}} (\emph{HyCube}-specific) \newline A node service using one or two event queues to process events. The node service creates the minimum number of threads required to prevent blocking events from delaying processing of other events. The parameters passed to the \emph{initialize} method, or specified in the configuration file (configuration presented in Table \ref{tab:libSimpleSchedulingNodeService}), define the number of custom blocking events (\emph{blockingExtEventsNum}) that may be enqueued at the same time (in addition to the message receiver event - with the default configuration, there are no additional blocking events), and a flag (\emph{wakeup}) determining whether the wakeup mechanism should be used. If \emph{wakeup} is set to \emph{true}, the node service creates one event queue and the queue is processed by a thread pool consisting of $blockingExtEventsNum+1$ threads. If \emph{wakeup} is set to \emph{false}, two event queues are created: a queue for the message receiver events processed by a single thread, and a queue for other event types, processed by $blockingExtEventsNum+1$ threads. Internally, \emph{EventQueueSchedulerProcessor} is used as the event processor and the event scheduler. Because the wakeup mechanism is used to wake up wakeable events, the message receiver is expected to be an instance of \emph{WakeableMessageReceiver}.
\end{itemize}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{6.3cm} p{0.9cm} p{7.3cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{ThreadPool.PoolSize} & \textbf{Integer} & The maximum number of threads in the thread pool processing the event queue \\[1.5mm]
\textbf{ThreadPool.KeepAliveTimeSec} & \textbf{Integer} & The maximum time (s) that excess idle threads will wait for new tasks before terminating \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{SingleQueueNodeService configuration properties}
\label{tab:libSingleQueueNodeService}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{6.3cm} p{0.9cm} p{7.3cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Queues} & \textbf{List (String)} & The list of event queues (names). Individual queues are configured referring to the names specified in the list. \\[1.5mm]
\textbf{Queues[QueueName].ThreadPool.PoolSize} & \textbf{Integer} & The maximum number of threads in the thread pool processing the event queue \emph{QueueName} \\[1.5mm]
\textbf{Queues[QueueName].ThreadPool.KeepAliveTimeSec} & \textbf{Integer} & The maximum time (s) that excess idle threads will wait for new tasks before terminating \\[1.5mm]
\textbf{Queues[QueueName].Wakeable} & \textbf{Boolean} & Determines whether the event queue should be bound to the wakeable manager instance \\[1.5mm]
\textbf{Queues[QueueName].EventTypes} & \textbf{List (String)} & Defines the event types (names) that the event queue is defined for. Individual event types (for the listed names) are specified by the nested parameters \emph{EventCategory} and \emph{EventTypeKey} \\[1.5mm]
\textbf{Queues[QueueName].EventTypes[ET].EventCategory} & \textbf{Enum} & The event category (\emph{EventCategory} enum.) for the event type \emph{ET} \\[1.5mm]
\textbf{Queues[QueueName].EventTypes[ET].EventTypeKey} & \textbf{String} & The event type key for the event type \emph{ET} \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{MultiQueueNodeService configuration properties}
\label{tab:libMultiQueueNodeService}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{6.3cm} p{0.9cm} p{7.3cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Queues} & \textbf{List (String)} & The list of event queues (names). Individual queues are configured referring to the names specified in the list. \\[1.5mm]
\textbf{Queues[QueueName].ThreadPool.PoolSize} & \textbf{Integer} & The maximum number of threads in the thread pool processing the event queue \emph{QueueName} \\[1.5mm]
\textbf{Queues[QueueName].ThreadPool.KeepAliveTimeSec} & \textbf{Integer} & The maximum time (s) that excess idle threads will wait for new tasks before terminating \\[1.5mm]
\textbf{Queues[QueueName].Wakeable} & \textbf{Boolean} & Determines whether the event queue should be bound to the wakeable manager instance \\[1.5mm]
\textbf{Queues[QueueName].EventTypes} & \textbf{List (String)} & Defines the event types (names) that the event queue is defined for. Individual event types (for the listed names) are specified by the nested parameters \emph{EventCategory} and \emph{EventTypeKey} \\[1.5mm]
\textbf{Queues[QueueName].EventTypes[ET].EventCategory} & \textbf{Enum} & The event category (\emph{EventCategory} enum.) for the event type \emph{ET} \\[1.5mm]
\textbf{Queues[QueueName].EventTypes[ET].EventTypeKey} & \textbf{String} & The event type key for the event type \emph{ET} \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{SchedulingMultiQueueNodeService configuration properties}
\label{tab:libSchedulingMultiQueueNodeService}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{6.3cm} p{0.9cm} p{7.3cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{ThreadPool.PoolSize} & \textbf{Integer} & The maximum number of threads in the thread pool processing the event queue \\[1.5mm]
\textbf{ThreadPool.KeepAliveTimeSec} & \textbf{Integer} & The maximum time (s) that excess idle threads will wait for new tasks before terminating \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{SingleQueueNodeServiceNonWakeable configuration properties}
\label{tab:libSingleQueueNodeServiceNonWakeable}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{6.3cm} p{0.9cm} p{7.3cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Queues} & \textbf{List (String)} & The list of event queues (names). Individual queues are configured referring to the names specified in the list. \\[1.5mm]
\textbf{Queues[QueueName].ThreadPool.PoolSize} & \textbf{Integer} & The maximum number of threads in the thread pool processing the event queue \emph{QueueName} \\[1.5mm]
\textbf{Queues[QueueName].ThreadPool.KeepAliveTimeSec} & \textbf{Integer} & The maximum time (s) that excess idle threads will wait for new tasks before terminating \\[1.5mm]
\textbf{Queues[QueueName].Wakeable} & \textbf{Boolean} & Determines whether the event queue should be bound to the wakeable manager instance \\[1.5mm]
\textbf{Queues[QueueName].EventTypes} & \textbf{List (String)} & Defines the event types (names) that the event queue is defined for. Individual event types (for the listed names) are specified by the nested parameters \emph{EventCategory} and \emph{EventTypeKey} \\[1.5mm]
\textbf{Queues[QueueName].EventTypes[ET].EventCategory} & \textbf{Enum} & The event category (\emph{EventCategory} enum.) for the event type \emph{ET} \\[1.5mm]
\textbf{Queues[QueueName].EventTypes[ET].EventTypeKey} & \textbf{String} & The event type key for the event type \emph{ET} \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{MultiQueueNodeServiceNonWakeable configuration properties}
\label{tab:libMultiQueueNodeServiceNonWakeable}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3.5cm} p{1.5cm} p{9.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{BlockingExtEventsNum} & \textbf{Integer} & The number of custom blocking events (\emph{blockingExtEventsNum}) that may be enqueued at the same time in addition to the message receiver event (default value 0). With the default configuration, there are no additional blocking events, and the default value 0 should be used. \\[1.5mm]
\textbf{Wakeup} & \textbf{Boolean} & Determines whether the wakeup mechanism should be used \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{SimpleNodeService configuration properties}
\label{tab:libSimpleNodeService}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{3.5cm} p{1.5cm} p{9.5cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{BlockingExtEventsNum} & \textbf{Integer} & The number of custom blocking events (\emph{blockingExtEventsNum}) that may be enqueued at the same time in addition to the message receiver event (default value 0). With the default configuration, there are no additional blocking events, and the default value 0 should be used. \\[1.5mm]
\textbf{Wakeup} & \textbf{Boolean} & Determines whether the wakeup mechanism should be used \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{SimpleSchedulingNodeService configuration properties}
\label{tab:libSimpleSchedulingNodeService}
\end{table}
\section{Multiple node services}
\label{sec:libMultipleNodeServices}
In addition to the node services running single node instances, the library provides two multiple-node services, which may be used to manage multiple node instances with a common event processor, processing the events of all nodes, and a common message receiver, dispatching received messages to individual nodes. Such node services are useful when a single machine runs many node instances (for example for simulation purposes). With the use of a single event processor, large numbers of nodes may be processed by a small number of processing threads, using a single message receiver instance.
Multiple node services implement the \emph{MultipleNodeService} interface, specifying methods initializing message receivers (returning \emph{MessageReceiver}) and initializing node services (returning \emph{NodeService}), as well as methods discarding message receivers and nodes. The interface \emph{HyCubeMultipleNodeService} extends \emph{MultipleNodeService}, overriding the return type of the methods initializing node services to the \emph{HyCube}-specific \emph{HyCubeNodeService} class. The abstract class \emph{AbstractMultipleNodeService} provides the implementation of managing the message receiver and node service instances and specifies two abstract methods: \emph{initializeNodeProxyService} which is supposed to create the node proxy service (creating a \emph{Node} instance), and \emph{discard} which is supposed to discard the multiple node service instance (discarding the instances of individual node services). Two abstract classes have been created: \emph{MultiQueueMultipleNodeService} and \emph{SchedulingMultipleNodeService} (extending \emph{AbstractMultipleNodeService}), which implement managing the event queues, managing the event processor and the \emph{discard} method. \emph{MultiQueueMultipleNodeService} internally uses \emph{ThreadPoolEventQueueProcessor}, while \emph{SchedulingMultipleNodeService} uses \emph{EventQueueSchedulerProcessor}. However, these abstract classes do not define the creation of the node proxy service object. The abstract method \emph{initializeNodeProxyService} is implemented in two classes extending \emph{MultiQueueMultipleNodeService} and \emph{SchedulingMultipleNodeService}: \emph{HyCubeMultiQueueMultipleNodeService} and \emph{HyCubeSchedulingMultipleNodeService} - the implementations of the method return an instance of \emph{HyCubeNodeService}. These classes may be used for creating multiple instances of \emph{HyCube} nodes, processed by the same event processor, and one (or more) message receivers. The event queues and the event processor are configured by the parameters and lists of event types for individual queues defined in an \emph{EventQueueProcessingInfo[]} array passed to the \emph{initialize} method of the service, or in the configuration file, in which case the method \emph{initializeFromConf} should be used to initialize the service instance - the node service key is passed to the \emph{initializeFromConf} method call. The configuration of \emph{MultiQueueMultipleNodeService} and \emph{SchedulingMultipleNodeService} (\emph{HyCubeMultiQueueMultipleNodeService} and \emph{HyCubeSchedulingMultipleNodeService}) is specified in Tables \ref{tab:libMultiQueueMultipleNodeService} and \ref{tab:libSchedulingMultipleNodeService}. Because the wakeup mechanism is used to wake up wakeable events, the message receiver is expected to be an instance of \emph{WakeableMessageReceiver}.
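The intended usage pattern is sketched below; the method names \emph{initializeMessageReceiver} and \emph{initializeNodeService}, as well as the argument lists, are illustrative assumptions standing for the initialization methods described above.
\begin{verbatim}
// Usage sketch (method names and argument lists are assumptions):
HyCubeSchedulingMultipleNodeService multiNodeService =
        HyCubeSchedulingMultipleNodeService.initializeFromConf("MultiNodeService1");
MessageReceiver receiver = multiNodeService.initializeMessageReceiver();
HyCubeNodeService node1 =
        multiNodeService.initializeNodeService(receiver, "127.0.0.1:5001");
HyCubeNodeService node2 =
        multiNodeService.initializeNodeService(receiver, "127.0.0.1:5002");
// both nodes share the same event processor threads and the same message receiver
\end{verbatim}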
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{6.3cm} p{0.9cm} p{7.3cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Queues} & \textbf{List (String)} & The list of event queues (names). Individual queues are configured referring to the names specified in the list. \\[1.5mm]
\textbf{Queues[QueueName].ThreadPool.PoolSize} & \textbf{Integer} & The maximum number of threads in the thread pool processing the event queue \emph{QueueName} \\[1.5mm]
\textbf{Queues[QueueName].ThreadPool.KeepAliveTimeSec} & \textbf{Integer} & The maximum time (s) that excess idle threads will wait for new tasks before terminating \\[1.5mm]
\textbf{Queues[QueueName].Wakeable} & \textbf{Boolean} & Determines whether the event queue should be bound to the wakeable manager instance \\[1.5mm]
\textbf{Queues[QueueName].EventTypes} & \textbf{List (String)} & Defines the event types (names) that the event queue is defined for. Individual event types (for names) are specified by the nested parameters \emph{EventCategory} and \emph{EventTypeKey} \\[1.5mm]
\textbf{Queues[QueueName].EventTypes[ET].EventCategory} & \textbf{Enum} & The event category (\emph{EventCategory} enum.) for the event type \emph{ET} \\[1.5mm]
\textbf{Queues[QueueName].EventTypes[ET].EventTypeKey} & \textbf{String} & The event type key for the event type \emph{ET} \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{MultiQueueMultipleNodeService configuration properties}
\label{tab:libMultiQueueMultipleNodeService}
\end{table}
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{6.3cm} p{0.9cm} p{7.3cm}}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Value} \\[1mm]
\hline
\textbf{Queues} & \textbf{List (String)} & The list of event queues (names). Individual queues are configured referring to the names specified in the list. \\[1.5mm]
\textbf{Queues[QueueName].ThreadPool.PoolSize} & \textbf{Integer} & The maximum number of threads in the thread pool processing the event queue \emph{QueueName} \\[1.5mm]
\textbf{Queues[QueueName].ThreadPool.KeepAliveTimeSec} & \textbf{Integer} & The maximum time (s) that excess idle threads will wait for new tasks before terminating \\[1.5mm]
\textbf{Queues[QueueName].Wakeable} & \textbf{Boolean} & Determines whether the event queue should be bound to the wakeable manager instance \\[1.5mm]
\textbf{Queues[QueueName].EventTypes} & \textbf{List (String)} & Defines the event types (names) that the event queue is defined for. Individual event types (for names) are specified by the nested parameters \emph{EventCategory} and \emph{EventTypeKey} \\[1.5mm]
\textbf{Queues[QueueName].EventTypes[ET].EventCategory} & \textbf{Enum} & The event category (\emph{EventCategory} enum.) for the event type \emph{ET} \\[1.5mm]
\textbf{Queues[QueueName].EventTypes[ET].EventTypeKey} & \textbf{String} & The event type key for the event type \emph{ET} \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{SchedulingMultipleNodeService configuration properties}
\label{tab:libSchedulingMultipleNodeService}
\end{table}
\section{Logging}
The library uses the Apache commons-logging library for logging, and therefore can be used with any logging implementation at runtime. Commons-logging comes with support for a number of popular logging implementations, and writing adapters for others is a reasonably simple task. For the purposes of testing the library, Log4j was used. To use the library with Log4j, it is enough to include the Log4j library and a Log4j configuration file in the classpath. However, Log4j is not set as a ``compile'' dependency for the project. The choice of the logger is left to the user of the library.
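For illustration, a minimal Log4j configuration (properties format) could set levels for the three HyCube loggers described in the following section - the appender setup and the chosen levels below are only an example, not a recommended production configuration:
\begin{verbatim}
# log4j.properties (illustrative example)
log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p %c - %m%n
# HyCube loggers (see the Conventions section)
log4j.logger.net.hycube.log.user=INFO
log4j.logger.net.hycube.log.dev=WARN
log4j.logger.net.hycube.log.messages=ERROR
\end{verbatim}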
\section{Conventions}
This section briefly describes the conventions adopted by the HyCube library.
\subsection{Logging}
Within the HyCube library, the loggers are accessed through the static methods of class net.hycube.logging.LogHelper. The methods provide access to three loggers (Apache commons-logging):
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item \textbf{user log (net.hycube.log.user)}:\\
\noindent
The logger used for messages that should be logged at the user (application) level. No stack trace should be presented in the user log - just a message briefly describing the event. This logger should be considered a node state monitoring log - only major state changes should be logged.
\item \textbf{dev log (net.hycube.log.dev)}:\\
\noindent
The logger used for development and debugging purposes. The log entries should contain a full description of the event, including the stack trace, if applicable. For individual classes or log categories, separate loggers (hierarchical) may be created, following the naming convention: net.hycube.log.dev.[full\_class\_name] or net.hycube.log.dev.[log\_category], allowing logging levels to be defined for individual packages/classes or for common functionalities. For example: "net.hycube.log.dev.net.hycube.join" or "net.hycube.log.dev.net.hycube.join.searchjoin.HyCubeSearchJoinManager".
\item \textbf{msg log (net.hycube.log.messages)}:\\
\noindent
The logger used for logging information about messages being sent or processed by nodes - used for development and debugging purposes.
\end{itemize}
Table \ref{tab:libLogLevels} presents guidelines for choosing logging levels for individual loggers used within the library.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{p{2.0cm} p{2.0cm} p{10.5cm}}
\hline
\textbf{Logger} & \textbf{Level} & \textbf{Events} \\[1mm]
\hline
\textbf{user log} & fatal & non-recoverable error, application termination \\[1.5mm]
& error & error events that user should be informed about, like for monitoring window, includes events when api returns errors \\[1.5mm]
& warn & warning - possible error, but the application may continue \\[1.5mm]
& info & interesting events \\[1.5mm]
& debug & less important events \\[1.5mm]
& trace & least important events at lower level \\[1.5mm]
\hline
\textbf{dev log} & fatal & non-recoverable errors causing application termination \\[1.5mm]
& error & errors, exceptions that are not rethrown, external errors or some other errors that should be logged \\[1.5mm]
& warn & warning - possible error, but the application may continue \\[1.5mm]
& info & important events \\[1.5mm]
& debug & less important events, details, data being processed \\[1.5mm]
& trace & messages informing about current place of execution, least important events, details and data being processed \\[1.5mm]
\hline
\textbf{msg log} & fatal & - \\[1.5mm]
& error & invalid packet send try \\[1.5mm]
& warn & invalid packet received \\[1.5mm]
& info & packet sent, packet received \\[1.5mm]
& debug & more details about packets being sent/received, packet propagation through layer stack \\[1.5mm]
& trace & logs detailed message processing \\[1.5mm]
\hline
\end{tabular}
\end{center}
\caption{Logging levels}
\label{tab:libLogLevels}
\end{table}
\subsection{Exceptions}
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item Exceptions should be logged only once, just before they are thrown outside the library (API) scope. Logging the same exception stack trace more than once can confuse the programmer examining the stack trace about the original source of the exception. The only places that should log and throw are well-defined entry points. Logging and throwing may, however, be allowed in certain situations, when the information logged is of great importance and requires special attention.
\item Whenever an exception is caught and thrown, the thrown exception should include the whole stack trace, the exception caught as a nested exception.
\item Exceptions thrown outside the API should not be inner classes
\item Exceptions that may be processed outside the API should be checked exceptions
\item Throwing unchecked exceptions implicitly outside the API (e.g. exceptions thrown from the JRE classes called by HyCube) should be prevented. Whenever the input data processing might cause throwing an unchecked exception, the data should be validated by the HyCube code before processing (if possible), and any invalid input should be processed by the HyCube code, and, if necessary, an exception should be thrown outside the library.
\item Runtime exceptions that are explicitly thrown outside the API should be encapsulated in UnrecoverableRuntimeException instances (cause property)
\item Runtime exceptions thrown implicitly (thrown from classes called by HyCube, not being caught/processed by HyCube code) should not be logged. Any explicitly thrown (HyCube-level) exceptions should be logged just before throwing outside the API.
\end{itemize}
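A short fragment illustrating several of these guidelines (the method and variable names, as well as the constructor signature of \emph{UnrecoverableRuntimeException}, are hypothetical and serve only as an example):
\begin{verbatim}
try {
    messageProcessor.process(message);
}
catch (Exception e) {
    // log the exception only here, at the point where it leaves the library scope,
    // and rethrow it with the caught exception preserved as the nested cause
    devLog.error("Unrecoverable error while processing the message.", e);
    throw new UnrecoverableRuntimeException(
            "Unrecoverable error while processing the message.", e);
}
\end{verbatim}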
\subsection{Other}
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item Use of factory methods - allows returning objects of sub-classes
\item Use of abstract factories - to allow the use of factories implementing a common interface, allowing instantiation of specific-type objects (implementing a common interface), maintaining the implementation abstraction
\item Use of protected fields and methods so that classes are extendable
\end{itemize}
\chapter{Changelog}
\noindent
\textbf{1.0}
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item Initial release
\end{itemize}
\newline
\noindent
\textbf{1.0.1}
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item Changed the build automation to Maven
\item Integration with Travis CI
\item Fixed compatibility with Java 1.7 and 1.6
\end{itemize}
\newline
\noindent
\textbf{1.0.2}
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item Corrected integration with Travis CI and deployment to Maven Central
\end{itemize}
\newline
\noindent
\textbf{1.0.3}
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item Corrected generating artifacts with dependencies
\item Removed shaded artifact from the release
\item Added a build profile for building timestamped snapshots
\end{itemize}
\newline
\noindent
\textbf{1.0.4}
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item Updated versions of dependencies
\end{itemize}
\newline
\noindent
\textbf{1.0.5}
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item Cycled credentials used by Travis
\end{itemize}
\newline
% ex: set tabstop=4 shiftwidth=4 softtabstop=4 noexpandtab fileformat=unix filetype=tex encoding=utf-8 fileencodings= fenc= spelllang=pl,en spell:
| {
"alphanum_fraction": 0.7612688385,
"avg_line_length": 89.9618790913,
"ext": "tex",
"hexsha": "267fe1c9cc88ba7bf42c8b110a94b8270ca1ca0c",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2022-01-10T16:08:30.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-01-10T16:08:30.000Z",
"max_forks_repo_head_hexsha": "e7dc0bc7ff5d7c1d406bfee952398515f3f6b6c8",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "suhasagg/hycube",
"max_forks_repo_path": "src/documentation/library_doc_tex/tex/library.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "e7dc0bc7ff5d7c1d406bfee952398515f3f6b6c8",
"max_issues_repo_issues_event_max_datetime": "2016-11-27T18:10:58.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-10-02T14:25:30.000Z",
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "suhasagg/hycube",
"max_issues_repo_path": "src/documentation/library_doc_tex/tex/library.tex",
"max_line_length": 4236,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "e7dc0bc7ff5d7c1d406bfee952398515f3f6b6c8",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "arturolszak/hycube",
"max_stars_repo_path": "src/documentation/library_doc_tex/tex/library.tex",
"max_stars_repo_stars_event_max_datetime": "2020-05-18T02:15:36.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-07-18T14:05:13.000Z",
"num_tokens": 61573,
"size": 233631
} |
\section{Appendix}
\label{section:appendix}
\subsection*{Priors}
\label{section:priors}
We used the default priors in the {\tt isochrones.py} {\it python} package.
The prior over age was,
\begin{equation}
p(A) = \frac{\log(10) 10^{A}}{10^{10.5} - 10^8}, ~~~ 8 < A < 10.5.
\end{equation}
% where $A$, is $\log_{10}(\frac{\mathrm{Age}}{\mathrm{yrs}})$.
where $A$, is $\log_{10}(\mathrm{Age~[yrs]})$.
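Note that this density is properly normalized over its support: since
$\frac{\mathrm{d}}{\mathrm{d}A}10^{A} = \log(10)\,10^{A}$, we have
$\int_{8}^{10.5} p(A)\,\mathrm{d}A = \frac{10^{10.5} - 10^{8}}{10^{10.5} - 10^{8}} = 1$.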
% The prior over mass is uniform in natural-log between -20 and 20,
% \begin{equation}
% p(M) = U(-20, 20)
% \end{equation}
% % where $M$ is $\ln(\frac{\mathrm{mass}}{M_\odot})$.
% where $M$ is $\ln(\mathrm{Mass}~[M_\odot])$.
The prior over EEP was uniform with an upper limit of 800.
We found that adding this upper limit reduced some multi-modality caused by
the giant branch and resulted in better performance.
The prior over true bulk metallicity was based on the galactic metallicity
distribution, as inferred using data from the Sloan Digital Sky Survey
\citep{casagrande2011}.
% It is based on two double-Gaussian distribution, where the halo is described as
% a broad Gaussian and the galactic disc as a narrow Gaussian.
It is the product of a Gaussian that describes the metallicity distribution
over halo stars and two Gaussians that describe the metallicity distribution
in the thin and thick disks:
\begin{eqnarray}
p(F) =
& H_F \frac{1}{\sqrt{2\pi\sigma_{\mathrm{halo}}^2}}
\exp\left(-\frac{(F-\mu_{\mathrm{halo}})^2}{2\sigma_{\mathrm{halo}}^2}\right)
\\ \nonumber
& \times (1-H_F)
\frac{1}{\xi}
\left[\frac{0.8}{0.15}\exp\left(-\frac{(F-0.016)^2}{2\times 0.15^2}\right)
+ \frac{0.2}{0.22}\exp\left(-\frac{(F-0.15)^2}{2\times
0.22^2}\right)\right],
\end{eqnarray}
where $H_F = 0.001$ is the halo fraction, $\mu_\mathrm{halo}$ and
$\sigma_{\mathrm{halo}}$ are the mean and standard deviation of a Gaussian
that describes a probability distribution over metallicity in the halo, and
take values -1.5 and 0.4 respectively.
% $\mu_\mathrm{disk, 1}$, $\mu_\mathrm{disk, 2}$, $\sigma_\mathrm{disk, 1}$
% and $\sigma_\mathrm{disk, 2}$ are the means and standard deviations of two
The two Gaussians inside the square brackets describe probability
distributions over metallicity in the thin and thick disks.
The values of the means and standard deviations in these Gaussians are from
\citet{casagrande2011}.
$\xi$ is the integral of everything in the square brackets from $-\infty$ to
$\infty$ and takes the value $\sim 2.507$.
% D_F = 0.8 \sigma_{\mathrm{disk, 1}} = 0.15 \mu_{\mathrm{disk, 1}} = 0.016
% \sigma_{\mathrm{disk, 2}} = 0.22 \mu_{\mathrm{disk, 2}} = 0.15
The prior over distance was,
\begin{equation}
p(D) = \frac{3}{3000^3} D^2, ~~~ 0 < D < 3000,
\end{equation}
with D in kiloparsecs, and, finally, the prior over extinction was uniform
between zero and one,
\begin{equation}
p(A_V) = U(0, 1).
\end{equation}
| {
"alphanum_fraction": 0.6958942241,
"avg_line_length": 44.2153846154,
"ext": "tex",
"hexsha": "9cef4fad30ec625ce48b443425064e209f948567",
"lang": "TeX",
"max_forks_count": 10,
"max_forks_repo_forks_event_max_datetime": "2022-03-03T22:16:53.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-02-11T02:46:32.000Z",
"max_forks_repo_head_hexsha": "5c0d45c1e2eb9ec5b6c57aeacbcb301304065bbc",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "john-livingston/stardate",
"max_forks_repo_path": "paper/appendix.tex",
"max_issues_count": 5,
"max_issues_repo_head_hexsha": "5c0d45c1e2eb9ec5b6c57aeacbcb301304065bbc",
"max_issues_repo_issues_event_max_datetime": "2019-09-09T10:38:15.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-02-21T21:37:05.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "john-livingston/stardate",
"max_issues_repo_path": "paper/appendix.tex",
"max_line_length": 81,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "5c0d45c1e2eb9ec5b6c57aeacbcb301304065bbc",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "john-livingston/stardate",
"max_stars_repo_path": "paper/appendix.tex",
"max_stars_repo_stars_event_max_datetime": "2020-03-31T23:46:36.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-02-19T13:46:46.000Z",
"num_tokens": 966,
"size": 2874
} |
\begin{appendix}
\section{Measurements}
\label{sec:Measurements}
\begin{table}[H]
\centering
\begin{tabular}{c|c|c}
Absolute Position (m) & Magnetic Field $B_z$ (T) & Absolute Error (T) \\
\hline\hline
0.6 & -3.00E-08 & -9.00E-11 \\ \hline
0.68 & -1.50E-07 & -4.50E-10 \\ \hline
0.76 & -7.00E-08 & -2.10E-10 \\ \hline
0.84 & 1.30E-07 & 3.90E-10 \\ \hline
0.92 & 3.80E-07 & 1.14E-09 \\ \hline
1 & 6.20E-07 & 1.86E-09 \\ \hline
1.08 & 7.50E-07 & 2.25E-09 \\ \hline
1.16 & 8.40E-07 & 2.52E-09 \\ \hline
1.24 & 8.40E-07 & 2.52E-09 \\ \hline
1.32 & 8.20E-07 & 2.46E-09 \\ \hline
1.4 & 7.50E-07 & 2.25E-09 \\ \hline
1.48 & 6.70E-07 & 2.01E-09 \\ \hline
1.56 & 6.60E-07 & 1.98E-09 \\ \hline
\end{tabular}
\caption{Magnetic Surface Measurements}
\label{tab:Magnetic_Surface_Measurements}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c|c|c}
Current $I$ (A) & Magnetic Field $B_0$ (T) & Absolute Error (T) \\
\hline\hline
0 & -2.60000E-07 & -7.80000E-10 \\ \hline
0.00471 & 9.87000E-06 & 2.96100E-08 \\ \hline
0.02307 & 4.95600E-05 & 1.48680E-07 \\ \hline
0.05135 & 1.10780E-04 & 3.32340E-07 \\ \hline
0.0962 & 2.07540E-04 & 6.22620E-07 \\ \hline
0.1547 & 3.34030E-04 & 1.00209E-06 \\ \hline
0.31412 & 6.78900E-04 & 2.03670E-06 \\ \hline
0.3658 & 7.90460E-04 & 2.37138E-06 \\ \hline
0.42 & 9.06120E-04 & 2.71836E-06 \\ \hline
0.534 & 1.15303E-03 & 3.45909E-06 \\ \hline
0.624 & 1.34634E-03 & 4.03902E-06 \\ \hline
0.748 & 1.61493E-03 & 4.84479E-06 \\ \hline
0.821 & 1.77441E-03 & 5.32323E-06 \\ \hline
0.881 & 1.90360E-03 & 5.71080E-06 \\ \hline
0.956 & 2.06517E-03 & 6.19551E-06 \\ \hline
1.001 & 2.16350E-03 & 6.49050E-06 \\ \hline
\end{tabular}
\caption{Short Cylindrical Coil Measurements (Central Value)}
\label{tab:Short_Cylindrical_Coil_Measurements_Central_Value}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c|c|c}
Position $z$ (m) & Magnetic Field $B_z$ (T) & Absolute Error (T) \\
\hline\hline
0.402 & 5.66000E-06 & 1.69800E-08 \\ \hline
0.362 & 7.72000E-06 & 2.31600E-08 \\ \hline
0.312 & 1.20700E-05 & 3.62100E-08 \\ \hline
0.267 & 1.92900E-05 & 5.78700E-08 \\ \hline
0.207 & 4.13900E-05 & 1.24170E-07 \\ \hline
0.167 & 7.88700E-05 & 2.36610E-07 \\ \hline
0.147 & 1.15560E-04 & 3.46680E-07 \\ \hline
0.127 & 1.78140E-04 & 5.34420E-07 \\ \hline
0.107 & 2.90400E-04 & 8.71200E-07 \\ \hline
0.087 & 5.02460E-04 & 1.50738E-06 \\ \hline
0.067 & 8.80290E-04 & 2.64087E-06 \\ \hline
0.057 & 1.14001E-03 & 3.42003E-06 \\ \hline
0.047 & 1.42730E-03 & 4.28190E-06 \\ \hline
0.037 & 1.69344E-03 & 5.08032E-06 \\ \hline
0.027 & 1.91074E-03 & 5.73222E-06 \\ \hline
0.017 & 2.06315E-03 & 6.18945E-06 \\ \hline
0.007 & 2.14593E-03 & 6.43779E-06 \\ \hline
-0.003 & 2.16095E-03 & 6.48285E-06 \\ \hline
-0.013 & 2.10972E-03 & 6.32916E-06 \\ \hline
-0.023 & 1.98830E-03 & 5.96490E-06 \\ \hline
-0.033 & 1.79866E-03 & 5.39598E-06 \\ \hline
-0.043 & 1.55104E-03 & 4.65312E-06 \\ \hline
-0.053 & 1.27438E-03 & 3.82314E-06 \\ \hline
-0.063 & 9.90810E-04 & 2.97243E-06 \\ \hline
-0.073 & 7.57060E-04 & 2.27118E-06 \\ \hline
-0.093 & 4.29050E-04 & 1.28715E-06 \\ \hline
-0.113 & 2.50830E-04 & 7.52490E-07 \\ \hline
\end{tabular}
\caption{Short Cylindrical Coil Measurements (Field Pattern)}
\label{tab:Short_Cylindrical_Coil_Measurements_Field_Pattern}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c|c|c}
Current $I$ (A) & Magnetic Field $B_0$ (T) & Absolute Error (T) \\
\hline\hline
0.00001 & 1.00000E-07 & 3.00000E-10 \\ \hline
0.00513 & 6.74000E-06 & 2.02200E-08 \\ \hline
0.02304 & 3.08800E-05 & 9.26400E-08 \\ \hline
0.05135 & 6.90400E-05 & 2.07120E-07 \\ \hline
0.0918 & 1.23320E-04 & 3.69960E-07 \\ \hline
0.148 & 1.99000E-04 & 5.97000E-07 \\ \hline
0.2095 & 2.81730E-04 & 8.45190E-07 \\ \hline
0.2951 & 3.96950E-04 & 1.19085E-06 \\ \hline
0.3497 & 4.70430E-04 & 1.41129E-06 \\ \hline
0.421 & 5.66400E-04 & 1.69920E-06 \\ \hline
0.521 & 7.01080E-04 & 2.10324E-06 \\ \hline
0.628 & 8.44440E-04 & 2.53332E-06 \\ \hline
0.731 & 9.83850E-04 & 2.95155E-06 \\ \hline
0.843 & 1.13341E-03 & 3.40023E-06 \\ \hline
0.929 & 1.25063E-03 & 3.75189E-06 \\ \hline
0.952 & 1.28146E-03 & 3.84438E-06 \\ \hline
0.999 & 1.34348E-03 & 4.03044E-06 \\ \hline
\end{tabular}
\caption{Long Cylindrical Coil Measurements (Central Value)}
\label{tab:Long_Cylindrical_Coil_Measurements_Central_Value}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c|c|c}
Position $z$ (m) & Magnetic Field $B_z$ (T) & Absolute Error (T) \\
\hline\hline
0.434 & 4.93000E-06 & 1.47900E-08 \\ \hline
0.404 & 6.19000E-06 & 1.85700E-08 \\ \hline
0.374 & 7.88000E-06 & 2.36400E-08 \\ \hline
0.344 & 1.03400E-05 & 3.10200E-08 \\ \hline
0.314 & 1.38800E-05 & 4.16400E-08 \\ \hline
0.284 & 1.93900E-05 & 5.81700E-08 \\ \hline
0.264 & 2.48500E-05 & 7.45500E-08 \\ \hline
0.254 & 2.83300E-05 & 8.49900E-08 \\ \hline
0.244 & 3.26200E-05 & 9.78600E-08 \\ \hline
0.234 & 3.76400E-05 & 1.12920E-07 \\ \hline
0.224 & 4.41500E-05 & 1.32450E-07 \\ \hline
0.214 & 5.17000E-05 & 1.55100E-07 \\ \hline
0.204 & 6.16600E-05 & 1.84980E-07 \\ \hline
0.194 & 7.43600E-05 & 2.23080E-07 \\ \hline
0.184 & 9.04300E-05 & 2.71290E-07 \\ \hline
0.174 & 1.11420E-04 & 3.34260E-07 \\ \hline
0.164 & 1.40060E-04 & 4.20180E-07 \\ \hline
0.154 & 1.78130E-04 & 5.34390E-07 \\ \hline
0.144 & 2.29830E-04 & 6.89490E-07 \\ \hline
0.134 & 2.99870E-04 & 8.99610E-07 \\ \hline
0.124 & 3.91560E-04 & 1.17468E-06 \\ \hline
0.114 & 5.13370E-04 & 1.54011E-06 \\ \hline
0.104 & 6.54820E-04 & 1.96446E-06 \\ \hline
0.094 & 8.03410E-04 & 2.41023E-06 \\ \hline
0.084 & 9.43880E-04 & 2.83164E-06 \\ \hline
0.074 & 1.06252E-03 & 3.18756E-06 \\ \hline
0.064 & 1.15278E-03 & 3.45834E-06 \\ \hline
0.054 & 1.22114E-03 & 3.66342E-06 \\ \hline
0.044 & 1.26824E-03 & 3.80472E-06 \\ \hline
0.034 & 1.30034E-03 & 3.90102E-06 \\ \hline
0.024 & 1.32299E-03 & 3.96897E-06 \\ \hline
0.014 & 1.33571E-03 & 4.00713E-06 \\ \hline
0.004 & 1.34199E-03 & 4.02597E-06 \\ \hline
-0.006 & 1.34157E-03 & 4.02471E-06 \\ \hline
-0.016 & 1.33482E-03 & 4.00446E-06 \\ \hline
-0.026 & 1.32152E-03 & 3.96456E-06 \\ \hline
-0.036 & 1.29926E-03 & 3.89778E-06 \\ \hline
-0.046 & 1.26571E-03 & 3.79713E-06 \\ \hline
-0.056 & 1.22072E-03 & 3.66216E-06 \\ \hline
-0.066 & 1.15103E-03 & 3.45309E-06 \\ \hline
-0.076 & 1.06003E-03 & 3.18009E-06 \\ \hline
-0.086 & 9.43750E-04 & 2.83125E-06 \\ \hline
-0.096 & 8.04640E-04 & 2.41392E-06 \\ \hline
-0.106 & 6.55330E-04 & 1.96599E-06 \\ \hline
-0.116 & 5.13610E-04 & 1.54083E-06 \\ \hline
-0.126 & 3.94030E-04 & 1.18209E-06 \\ \hline
\end{tabular}
\caption{Long Cylindrical Coil Measurements (Field Pattern)}
\label{tab:Long_Cylindrical_Coil_Measurements_Field_Pattern}
\end{table}
\end{appendix}
| {
"alphanum_fraction": 0.5958527296,
"avg_line_length": 40.9768786127,
"ext": "tex",
"hexsha": "04fea6e629482fac8cd1c934d51b295362c3731f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks",
"max_forks_repo_path": "glaL3_E_6_Magnetic_Fields/sections/appendix.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks",
"max_issues_repo_path": "glaL3_E_6_Magnetic_Fields/sections/appendix.tex",
"max_line_length": 75,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks",
"max_stars_repo_path": "glaL3_E_6_Magnetic_Fields/sections/appendix.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3921,
"size": 7089
} |
\documentclass{article}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{geometry}
\usepackage[utf8]{inputenc}
\usepackage{enumitem}
\usepackage{physics}
\setlength{\parindent}{0pt}
\setlength{\parskip}{1em}
\relpenalty=10000
\binoppenalty=10000
\begin{document}
\section*{Inequalities}
\begin{enumerate}
\item \textbf{General mean inequality.}
The mean of order $p$ of positive real numbers $x_1,\dots,x_n$ is defined as:
$$M_p=
\begin{cases}
\left(\frac{x_1^p+ \dots + x_n^p}{n}\right)^{1/p} &\text{for } p \ne 0 \\
\sqrt[n]{x_1 \dots x_n} &\text{for } p=0
\end{cases}
$$
In particular
\begin{center}
\begin{tabular}{lcl}
Smallest element & $\min\{x_i\}$ & $M_{-\infty}$ \\
Harmonic mean & HM & $M_{-1}$ \\
Geometric mean & GM & $M_0$ \\
Arithmetic mean & AM & $M_1$ \\
Quadratic mean & QM & $M_2$ \\
Largest element & $\max\{x_i\}$ & $M_{\infty}$
\end{tabular}
\end{center}
Then for any real $p \leq q$
$$M_p \leq M_q,$$
with equality if and only if $x_1 = \dots = x_n$.
\item \textbf{Cauchy inequality.}
For real numbers $x_1, \dots , x_n, y_1, \dots , y_n$
$$\left(\sum_{i=1}^{n} x_i y_i\right)^2 \leq \sum_{i=1}^{n} x_i^2 \sum_{i=1}^{n} y_i^2 $$
\item \textbf{Chebyshev inequality.}
For real numbers $x_1 \geq \dots \geq x_n$ and $y_1 \geq \dots \geq y_n$
$$\frac{1}{n} \sum_{i=1}^{n} x_iy_i
\geq
\left(\frac{1}{n}\sum_{i=1}^{n}x_i\right)
\left(\frac{1}{n}\sum_{i=1}^{n}y_i\right)
\geq
\frac{1}{n} \sum_{i=1}^{n} x_iy_{n+1-i} $$
\item \textbf{Jensen inequality.}
Given positive real numbers $\lambda_1,\hdots,\lambda_n$ for which $\lambda_1+\hdots+\lambda_n=1$ and a convex function $f(x)$ the following holds:
$$f(\lambda_1 x_1 + \hdots + \lambda_n x_n) \leq \lambda_1 f(x_1) + \hdots + \lambda_n f(x_n)$$
Similarly, when $f(x)$ is a concave function, then
$$f(\lambda_1 x_1 + \hdots + \lambda_n x_n) \geq \lambda_1 f(x_1) + \hdots + \lambda_n f(x_n)$$
\end{enumerate}
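As a quick illustration of the last inequality, take $\lambda_1=\dots=\lambda_n=\frac{1}{n}$ and the concave function $f(x)=\ln x$; then
$$\ln\left(\frac{x_1+\dots+x_n}{n}\right) \geq \frac{\ln x_1 + \dots + \ln x_n}{n},$$
and exponentiating both sides recovers the AM--GM inequality $M_1 \geq M_0$.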
\newpage
\section*{Problems}
\begin{enumerate}
\item % vorrat 97
Let $a_1,\dots,a_n$ be positive real numbers such that $a_1\dots a_n =1$ Prove that
$$(1+a_1)\dots (1+a_n) \geq 2^n$$
\item % vorrat 100
For real numbers $x_1, \dots , x_n, y_1, \dots , y_n$ the following holds
$$x_1+\dots+x_n \geq x_1y_1 + \dots + x_ny_n$$
Prove that
$$x_1+\dots+x_n \leq \frac{x_1}{y_1} + \dots + \frac{x_n}{y_n}$$
\item % vorrat 99
Let $n$ be an integer ($n\geq 2$) and $a_1,\dots,a_n$ be positive real numbers such that $a_1+\dots+a_n=1$. Prove the following inequality for any positive real numbers $x_1,\dots,x_n$ for which $x_1+\dots+x_n=1$
$$2\sum_{i<j} x_ix_j \leq \frac{n-2}{n-1} + \sum_{i=1}^n \frac{a_ix_i^2}{1-a_i}$$
When does the equality hold?
\item % vorrat 102
Prove that for any positive real numbers $a_1,\dots,a_n$
$$\frac{1}{\frac{1}{1+a_1}+\dots+\frac{1}{1+a_n}} - \frac{1}{\frac{1}{a_1}+\dots+\frac{1}{a_n}} \geq n$$
\item % vorrat 103
Let $x_1,\dots,x_n$ be positive real numbers such that
$$\frac{1}{1+x_1} + \dots + \frac{1}{1+x_n}=1$$
Prove that
$$x_1 \dots x_n \geq (n-1)^n$$
\item % vorrat 123
Prove for real numbers $x_1,\dots,x_5$
$$x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 \geq \frac{2}{\sqrt{3}}(x_1x_2 + x_2x_3 + x_3x_4 + x_4x_5) $$
\item % vorrat 106
Let $a,b,c$ be positive real numbers. Prove that
$$\left(1+\frac{a}{b}\right) \left(1+\frac{b}{c}\right) \left(1+\frac{c}{a}\right) \geq
2\left(1+\frac{a+b+c}{\sqrt[3]{abc}}\right)$$
\item % vorrat 104
Given positive real numbers $x_1,\dots,x_n$ for which $x_1^2+\dots+x_n^2=1$, find the minimal value of the expression
$$\frac{x_1^5}{x_2+x_3+\dots+x_n} + \frac{x_2^5}{x_1+x_3+\dots+x_n} + \dots +\frac{x_n^5}{x_1+x_2+\dots+x_{n-1}}$$
\item % vorrat 101
Let $x_1,\dots,x_n$ be positive real numbers for which $x_1+\dots+x_n=1$. Prove that
$$\frac{x_1}{\sqrt{1-x_1}} + \dots + \frac{x_n}{\sqrt{1-x_n}}
\geq
\frac{\sqrt{x_1}+\dots+\sqrt{x_n}}{\sqrt{n-1}} $$
\item % http://artofproblemsolving.com/wiki/index.php?title=2004_USAMO_Problems/Problem_5
Let $a,b,c$ be positive real numbers. Prove that
$$(a^5-a^2+3)(b^5-b^2+3)(c^5-c^2+3)\geq (a+b+c)^3$$
\end{enumerate}
\end{document} | {
"alphanum_fraction": 0.6340466458,
"avg_line_length": 36.8053097345,
"ext": "tex",
"hexsha": "166f6cc6fdd5180536a44e26c80759ae244e8b7b",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2022-01-08T07:04:43.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-01-08T07:04:43.000Z",
"max_forks_repo_head_hexsha": "0dcacba8a6d1769bbccfedda89d08fa22c3f55a1",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "ZhaoWanLong/maths-olympiad",
"max_forks_repo_path": "12_inequalities.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0dcacba8a6d1769bbccfedda89d08fa22c3f55a1",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "ZhaoWanLong/maths-olympiad",
"max_issues_repo_path": "12_inequalities.tex",
"max_line_length": 214,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "0dcacba8a6d1769bbccfedda89d08fa22c3f55a1",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "ZhaoWanLong/maths-olympiad",
"max_stars_repo_path": "12_inequalities.tex",
"max_stars_repo_stars_event_max_datetime": "2019-08-21T21:57:43.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-08-21T21:57:43.000Z",
"num_tokens": 1841,
"size": 4159
} |
\chapter{Probabilistic Design Evaluation}
\label{chap:prob-design-eval}
In this chapter, we propose a general formulation for the evaluation and verification of probabilistic design.
We establish the connection between the proposed formulation and SSAT, weighted model counting, and probabilistic model checking.
Moreover, a new SSAT algorithm based on \textit{binary decision diagram} (BDD) is proposed.
Most content in this chapter is based on our conference paper~\cite{LeeICCAD14ProbDesign} published at ICCAD\,'14 and journal paper~\cite{LeeTC18ProbDesign} published in IEEE Transactions on Computers.
\input{prob-design-eval/preliminaries.tex}
\input{prob-design-eval/formulation.tex}
\input{prob-design-eval/technique.tex}
\input{prob-design-eval/discussion.tex}
\input{prob-design-eval/evaluation.tex} | {
"alphanum_fraction": 0.8187422935,
"avg_line_length": 62.3846153846,
"ext": "tex",
"hexsha": "9756e06f52a3f5ddd00eec9a4629ebb5a03f090e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "nianzelee/PhD-Dissertation",
"max_forks_repo_path": "paper/prob-design-eval.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "nianzelee/PhD-Dissertation",
"max_issues_repo_path": "paper/prob-design-eval.tex",
"max_line_length": 201,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "nianzelee/PhD-Dissertation",
"max_stars_repo_path": "paper/prob-design-eval.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-11T19:38:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-11T19:38:13.000Z",
"num_tokens": 191,
"size": 811
} |
\title{Computer Architecture - CS 301} % You may change the title if you want.
\author{Rishit Saiya - 180010027, Assignment - 4}
\date{\today}
\documentclass[12pt]{article}
\usepackage{fullpage}
\usepackage{enumitem}
\usepackage{amsmath,mathtools}
\usepackage{amssymb}
\usepackage[super]{nth}
\usepackage{textcomp}
\usepackage{hyperref}
\begin{document}
\maketitle
%----------------------------------------------------------------
\section{}
If we freeze one machine and copy all of its state (memory, register file, PC, flags register) to another machine, whether both machines produce the same results depends on the machines involved.
An ISA describes the architectural design of a computer, down to the basic operations it supports. It is distinguished from a microarchitecture, which is the set of processor design techniques used in a particular processor to implement the Instruction Set. Processors with different microarchitectures can share a common Instruction Set.
So, what we mostly care about is the set or collection of basic operations to be supported in similar fashions in both systems. So, if the second machine have different design but same ISA, then maybe output of 2 machines will be same but performance might differ.
The ISA defines the various types of instructions supported by the processor, the maximum length of each type of instruction, and the Instruction Format. So, if the second machine has a different ISA, then the result is less likely to be the same for both machines, as there is a greater chance that the instructions will not be recognised. Generalising the statement is therefore not possible; the outcome depends on the compatibility of the ISAs as well.
%----------------------------------------------------------------
\section{}
The maximum number of activation blocks would be \textbf{\textit{5}} (including main()).
When the foo(5) is called, in the recursion it calls foo(4) \& foo(3). So the activation length becomes 4 (main(), foo(5), foo(4), foo(3)). The next recursion step would include foo(2) \& foo(1) which doesn't require an activation block because it is just a condition loop. \\
\textbf{\textit{Ascending Stack order during iterations:}} \\
\begin{itemize}
\item main
\item main $\rightarrow$ foo(5)
\item main $\rightarrow$ foo(5) $\rightarrow$ foo(4)
\item main $\rightarrow$ foo(5) $\rightarrow$ foo(4) $\rightarrow$ foo(3)
\item main $\rightarrow$ foo(5) $\rightarrow$ foo(4) $\rightarrow$ foo(3) $\rightarrow$ foo(2)
\item main $\rightarrow$ foo(5) $\rightarrow$ foo(4) $\rightarrow$ foo(3) $\rightarrow$ foo(1)
\end{itemize}
The remaining steps are backwards and recursive values are used.
%----------------------------------------------------------------
\end{document} | {
"alphanum_fraction": 0.703010279,
"avg_line_length": 55.5918367347,
"ext": "tex",
"hexsha": "c81a2bae4b2fa712ff3f7363a78f2da598b53d82",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1e73e590e88664dcc4ca652a599cdc2cde07a41a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "rishitsaiya/Computer-Architecture-Theory",
"max_forks_repo_path": "Assignment-4/180010027_RishitSaiya.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1e73e590e88664dcc4ca652a599cdc2cde07a41a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "rishitsaiya/Computer-Architecture-Theory",
"max_issues_repo_path": "Assignment-4/180010027_RishitSaiya.tex",
"max_line_length": 441,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "1e73e590e88664dcc4ca652a599cdc2cde07a41a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "rishitsaiya/Computer-Architecture-Theory",
"max_stars_repo_path": "Assignment-4/180010027_RishitSaiya.tex",
"max_stars_repo_stars_event_max_datetime": "2020-12-25T17:20:42.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-12-25T17:20:42.000Z",
"num_tokens": 634,
"size": 2724
} |
\documentclass[11pt,aspectratio=43]{beamer}
\usepackage[utf8]{inputenc}
\usepackage{amsmath, amsfonts, amssymb, amsthm}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage{xcolor}
\usepackage{setspace}
\usepackage{booktabs}
\usepackage{multirow}
\usepackage{graphicx}
\usepackage{tikz}
% \usetikzlibrary{decorations}
\usetikzlibrary{decorations.pathreplacing}
\usepackage{ulem}
\usepackage{hyperref}
\usepackage{booktabs}
\usepackage{babel}
\usepackage{makecell}
\usepackage[para,online,flushleft]{threeparttable}
\usepackage{pdfpages}
\usepackage{tcolorbox}
\usepackage{bm}
\usepackage{appendixnumberbeamer}
\usepackage{natbib}
\usepackage{caption}
\captionsetup[figure]{labelformat=empty}% redefines the caption setup of the figures environment in the beamer class.
\usetheme[compress]{Boadilla}
\usecolortheme{default}
\useoutertheme{miniframes}
\usefonttheme[onlymath]{serif}
\newcommand{\jump}[2]{\hyperlink{#1}{\beamerbutton{#2}}}
\newcommand{\orange}[1]{\textcolor{orange}{#1}}
\newcommand{\red}[1]{\textcolor{red}{#1}}
\setbeamertemplate{itemize item}{\raisebox{0.1em}{\scalebox{0.7}{$\blacksquare$}}}
\setbeamertemplate{itemize subitem}[circle]
\setbeamertemplate{itemize subsubitem}{--}
\setbeamercolor{itemize item}{fg=black}
\setbeamercolor{itemize subitem}{fg=black}
\setbeamercolor{itemize subsubitem}{fg=black}
\setbeamercolor{item projected}{bg=darkgray,fg=white}
\definecolor{blue}{rgb}{0.2, 0.2, 0.7}
\setbeamercolor{alerted text}{fg=blue}
\setbeamertemplate{enumerate items}[circle]
\setbeamertemplate{headline}{}
%==========================================
\let\olditemize=\itemize
\let\endolditemize=\enditemize
\renewenvironment{itemize}{\olditemize \itemsep1em}{\endolditemize}
\let\oldenumerate=\enumerate
\let\endoldenumerate=\endenumerate
\renewenvironment{enumerate}{\oldenumerate \itemsep1em}{ \endoldenumerate}
\DeclareMathOperator*{\argmax}{\arg\!\max}
\DeclareMathOperator*{\E}{\mathbb{E}}
\DeclareMathOperator*{\var}{\rm Var}
\DeclareMathOperator*{\cov}{\rm Cov}
\theoremstyle{definition}
\newtheorem{assume}{Assumption}
\newtheorem{lem}{Lemma}
\newtheorem{proposition}{Proposition}
\newtheorem{thm}{Theorem}
\newtheorem{corol}{Corollary}
\begin{document}
\title[Lecture 11]{Lecture 11 \\ Distorting Taxes and the Welfare Theorems}
\author[Hui-Jun Chen]{Hui-Jun Chen}
\institute[OSU]{The Ohio State University}
% \date{\today}
\date{\today}
\setbeamertemplate{navigation symbols}{}
\setstretch{1.2}
%-------------------------------------------------------
{
% \usebackgroundtemplate{\includegraphics[width=1\paperwidth]{../EveningSky_cropped_edit43_bright.jpg}}
\begin{frame}
% \vspace{3em}
\centering
% {\footnotesize ECON 4002 Intermediate Macroeconomic Theory}
\maketitle
% \vspace{-1.5em}
% \centering
% \includegraphics[width=0.55\linewidth]{Pictures/houses.jpeg}
\end{frame}
}
% -------------------------------------------
\setbeamertemplate{headline}
{
\setbeamercolor{section in head/foot}{fg=black, bg=white}
\vskip1em \tiny \insertsectionnavigationhorizontal{1\paperwidth}{\hspace{0.50\paperwidth}}{}
}
%------------------------------------------
\begin{frame}{Overview}
\label{slide:Overview}
In previous lectures, all the taxes we discussed were \alert{lump-sum taxes}.
\begin{itemize}
\item pure \alert{income effect}, no change to consumption-leisure allocation
\item satisfy both welfare theorems
\end{itemize}
In this lecture, the \alert{distorting taxes} will also include a \alert{substitution effect}, and thus
\begin{itemize}
\item creating ``wedges'' to distort consumption-leisure choice
\item violate the welfare theorems (CE $ \neq $ SPP)
\end{itemize}
\end{frame}
\section{Simplified Model}
\label{sec:Simplified_Model}
\begin{frame}{SPP in Simplified Model}
\label{slide:SPP_in_Simplified_Model}
\begin{columns}
\begin{column}{0.5\textwidth}
\begin{figure}
\includegraphics[width=\textwidth]{./figures/Figure_5_14.jpg}
\end{figure}
\end{column}
\begin{column}{0.5\textwidth}
Assume production is labor-only technology:
%
\begin{equation*}
Y = z N^{d}
\end{equation*}
%
So PPF is
%
%
\begin{equation*}
C = z( h-l ) - G
\end{equation*}
%
Thus, SPP is
%
\begin{align*}
& \max_{l} U( z( h-l ) - G, l )
\\
\text{FOC:} \quad
& \frac{D_{l}U( C, l )}{D_{C}U( C, l )} = MRS_{l, C}
\\
& = MRT_{l, C} = z = MPN
\end{align*}
%
\end{column}
\end{columns}
\end{frame}
\begin{frame}{Labor Demand in Simplified Model}
\label{slide:Labor_Demand_in_Simplified_Model}
\begin{columns}
\begin{column}{0.5\textwidth}
\begin{figure}
\caption{\scriptsize Figure 5.15 The Labor Demand Curve in the Simplified Model}
\includegraphics[width=\textwidth]{./figures/Figure_5_15.jpg}
\end{figure}
\end{column}
\begin{column}{0.5\textwidth}
%
\begin{equation*}
\max_{N^{d}} z N^{d} - wN^{d}
\end{equation*}
%
FOC would be $ z = w $ (horizontal line)
\begin{itemize}
\item if $ z < w $: negative profit for every worker hired, choose $ N^{d} = 0 $
\item if $ z > w $: positive profit for every worker hired, choose $ N^{d} = \infty $
\item only $ z = w $ possible, $ \therefore $ linear PPF in previous slide
\begin{itemize}
\item ``infinitely elastic'' $ N^{d} $
\end{itemize}
\end{itemize}
\end{column}
\end{columns}
\end{frame}
\begin{frame}{Competitive Equilibrium w/ Distorting Tax}
\label{slide:Competitive_Equilibrium_w__Distortionary_Tax}
A competitive equilibrium, with $ \{ z, \alert{t}, K \} $ exogenous, is a list of endogenous prices and quantities $ \{ C, l, N^{s}, N^{d}, Y, \pi, w, \alert{G} \} $ such that:
\begin{enumerate}
\item taking $ \{ w, T, \pi \} $ as given, the consumer solves
%
%
%
\begin{equation*}
\max_{C, l, N^{s}} U( C, l )
\quad \text{subject to} \quad
C = w\alert{( 1-t )}N^{s} + \pi - T
\quad \text{and} \quad
N^{s} + l = h
\end{equation*}
%
\item taking $ w $ as given, the firm solves:
%
\begin{equation*}
\max_{N^{d}, Y, \pi} \pi
\quad \text{subject to} \quad
\pi = Y - w N^{d}
\quad \text{and} \quad
Y = z N^{d}
\end{equation*}
%
\item the government spends $ G = w t N^{s} $
\item the labor market clears at the equilibrium wage, i.e. $ N^{s} = N^{d} $
\end{enumerate}
\end{frame}
\begin{frame}{Effect of Distorting Tax}
\label{slide:Effect_of_Distorting_Tax}
Since the tax is imposed on consumers/workers, it distorts the consumption-leisure decision:
%
\begin{equation*}
MRS_{l, C} = w( 1-t )
\end{equation*}
%
So in the equilibrium, it deviates from SPP:
%
\begin{equation*}
MRS_{l, C} = w( 1-t ) < w = z = MPN = MRT_{l, C}
\end{equation*}
%
\textbf{Result}: CE and SPP lead to different allocation!
\end{frame}
\begin{frame}{Graphical Representation}
\label{slide:Graphical_Representation}
\begin{columns}
\begin{column}{0.5\textwidth}
\begin{figure}
\caption{\scriptsize Figure 5.16 Competitive Equilibrium in the Simplified Model with a Proportional Tax on Labor Income}
\includegraphics[width=\textwidth]{./figures/Figure_5_16.jpg}
\end{figure}
\end{column}
\begin{column}{0.5\textwidth}
SPP solution lies at point E:
\begin{itemize}
\item $\overline{AB}$: PPF, slope $ -z $
\item can reach indifference curve $ I_{1} $
\end{itemize}
CE solution lies at point H:
\begin{itemize}
\item $\overline{DF}$: consumer’s budget line
\item can only reach $ I_{2} $
\item proportional tax $ \Rightarrow $ $ N^{s} $ $ \downarrow $
\item $ N^{s} \downarrow \Rightarrow Y \downarrow $, but still need to meet $ G $, so $ C \downarrow $: gov’t budget critical!
\end{itemize}
\end{column}
\end{columns}
\end{frame}
\section{Full Model}
\label{sec:Full_Model}
\begin{frame}{How Much Tax Revenue can be Generated?}
\label{slide:How_Much_Tax_Revenue_can_be_Generated_}
\begin{columns}
\begin{column}{0.5\textwidth}
\begin{figure}
\caption{\scriptsize Figure 5.17 A Laffer Curve}
\includegraphics[width=\textwidth]{./figures/Figure_5_17.jpg}
\end{figure}
\jump{slide:Multiple_Competitive_Equilibria_Possible}{Back}
\end{column}
\begin{column}{0.5\textwidth}
\small
The equilibrium wage is $ w = z $; solving the consumer problem then gives the \alert{total tax revenue}:
%
\begin{equation*}
R( t ) = tz( h - l^{*}( t ) )
,\end{equation*}
%
What $ t $ maximizes? Solve
%
\begin{equation*}
\max_{t} R( t ) = \max_{t} tz( h - l^{*}( t ) )
,\end{equation*}
%
\begin{itemize}
\item not just $ t = 1 $! tax \alert{rate } vs tax \alert{base}
\item $ t = 0 $: no revenue because no tax
\item $ t = 1 $: no revenue because no incentive to work
\end{itemize}
\end{column}
\end{columns}
\end{frame}
\begin{frame}{Full Model Elaboration}
\label{slide:Full_Model_Elaboration}
Let $ U( C, l ) = \ln C + \ln l $, and $ h = z = 1 $. Consumer has some non-labor income denoted as $ x > 0 $. FOC leads to
%
\begin{align*}
MRS_{l, C}
& = \frac{C}{l}
\\
& = \frac{( 1-t )( 1-l ) + x}{l} = 1 - t = MRT_{l, C}
\\
& \Rightarrow ( 1-t )( 1-l ) + x = ( 1-t )l
\\
& \Rightarrow l = \frac{x + 1 - t}{2( 1-t )}
\\
& \red{ \Rightarrow N^{s} ( t ) = 1-l = \frac{1}{2} \left( 1 - \frac{x}{1-t} \right)}
\end{align*}
%
\end{frame}
\begin{frame}{Maximize Tax Revenue}
\label{slide:Maximize_Tax_Revenue}
Total tax revenue is
%
\begin{equation*}
R( t ) = t N^{s}( t )
,\end{equation*}
%
and thus government's problem is
%
\begin{equation*}
\max_{t} \frac{1}{2} t \left(
1 - \frac{x}{1-t}
\right)
.\end{equation*}
%
FOC leads to
%
\begin{align*}
( 1 - \frac{x}{1-t} ) + t \frac{(-1)( -1 )( -x )}{( 1-t )^{2}}
& = 0
\\
\frac{1-t-x}{1-t}
& = \frac{tx}{(1-t)^{2}}
\\
( 1-t )( 1-t-x )
&= tx
\\
t^{2} - 2t + ( 1 - x )
&= 0
\\
t^{*}
&= 1 - \sqrt{x}
\end{align*}
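For instance, with these functional forms $ x = 0.25 $ implies $ t^{*} = 0.5 $, while a larger non-labor income $ x = 0.64 $ lowers the revenue-maximizing rate to $ t^{*} = 0.2 $.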
%
\end{frame}
\begin{frame}{Visualization}
\label{slide:Visualization}
\begin{columns}
\begin{column}{0.5\textwidth}
\begin{figure}
\includegraphics[width=\textwidth]{./figures/lafferCurve.png}
\end{figure}
\end{column}
\begin{column}{0.5\textwidth}
Consider two cases:
\begin{enumerate}
\item consumer is poor (low $ x $)
\item consumer is rich (high $ x $)
\end{enumerate}
For a given after-tax wage, a rich consumer supplies less labor
\begin{itemize}
\item tax revenue shifts down
\item Laffer peak shifts left
\item many other conditions also impact this analysis!
\end{itemize}
\end{column}
\end{columns}
\end{frame}
\begin{frame}{Multiple Competitive Equilibria Possible}
\label{slide:Multiple_Competitive_Equilibria_Possible}
\begin{columns}
\begin{column}{0.5\textwidth}
\begin{figure}
\caption{Figure 5.18 Two Competitive Equilibria}
\includegraphics[width=\textwidth]{./figures/Figure_5_18.jpg}
\end{figure}
\end{column}
\begin{column}{0.5\textwidth}
Previous slide logic implies the government can choose 2 tax rates for a given required level of $ G $
\begin{itemize}
\item both $ t_{1} $ and $ t_{2} $ yield the same revenue
\item consumer strictly better off under lower tax rate $ t_{1} $
\end{itemize}
\jump{slide:How_Much_Tax_Revenue_can_be_Generated_}{Tax Revenue}
\end{column}
\end{columns}
\end{frame}
\begin{frame}{Conclusion}
\label{slide:Conclusion}
We’ve focused on the simple case to keep analysis straightforward, but logic applies more broadly.
\begin{itemize}
\item SPP: $ MRS_{l, C} = MRT_{l, C} = MPN $, since PPF is $ C = z F( K, N ) - G $
\item CE: same distortion as our simple case:
\begin{itemize}
\item consumer problem implies $ MRS_{l, C} = w ( 1-t ) $
\item firm problem implies $ MRT_{l, C} = w $
\item same result as simplified model: $ MRS_{l, C} \neq MRT_{l, C} $, unlike SPP
\item only difference from simplified model: $ MPN = D_{N}F( K, N ) \neq z $
\end{itemize}
\end{itemize}
\end{frame}
\end{document}
| {
"alphanum_fraction": 0.5698489557,
"avg_line_length": 33.2620192308,
"ext": "tex",
"hexsha": "fa2bbcc96cd9848c2b0880829fc1ec248dc64ac2",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b6ed028475340cbcb9d7bdee376846b933d0d0a5",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "huijunchen9260/websrc",
"max_forks_repo_path": "data/pdf/IntermediateMacroSummer2022/Lecture_11/Lecture_11.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b6ed028475340cbcb9d7bdee376846b933d0d0a5",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "huijunchen9260/websrc",
"max_issues_repo_path": "data/pdf/IntermediateMacroSummer2022/Lecture_11/Lecture_11.tex",
"max_line_length": 180,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "b6ed028475340cbcb9d7bdee376846b933d0d0a5",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "huijunchen9260/websrc",
"max_stars_repo_path": "data/pdf/IntermediateMacroSummer2022/Lecture_11/Lecture_11.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4103,
"size": 13837
} |
\section{File structure description with explanations of functionality of each part}
\section{Architecture and messaging}
\newthought{LMCP and MDMs, link to LMCP docs}
\newthought{PackageMaker, description and use}
| {
"alphanum_fraction": 0.8045454545,
"avg_line_length": 22,
"ext": "tex",
"hexsha": "1d42704894c47edcc88c7754746e957cefdee7ba",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "7982e32432b36e7877fe27bd4024e61fe316a5ea",
"max_forks_repo_licenses": [
"NASA-1.3"
],
"max_forks_repo_name": "cmcghan/OpenUxAS_old",
"max_forks_repo_path": "doc/reference/UserManual/Navigating/Navigating.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7982e32432b36e7877fe27bd4024e61fe316a5ea",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"NASA-1.3"
],
"max_issues_repo_name": "cmcghan/OpenUxAS_old",
"max_issues_repo_path": "doc/reference/UserManual/Navigating/Navigating.tex",
"max_line_length": 84,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "aba0ffca2d7940b5331dd1dd30c3cffee0b229cf",
"max_stars_repo_licenses": [
"NASA-1.3"
],
"max_stars_repo_name": "sahabi/OpenUxAS",
"max_stars_repo_path": "doc/reference/UserManual/Navigating/Navigating.tex",
"max_stars_repo_stars_event_max_datetime": "2018-03-18T13:41:59.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-03-18T13:41:59.000Z",
"num_tokens": 49,
"size": 220
} |
\unnumberedchapter{Abstract}
\chapter*{Abstract}
A self-consistent phase space distribution is a charged particle beam in which the electric field has a linear dependence on the particle coordinates, and furthermore, in which the linearity of the electric field is conserved as the beam is transported through arbitrary linear focusing fields. These features could increase the possible beam intensity in a circular accelerator by minimizing/eliminating the space charge tune shift/spread. Additionally, the uniform density of known self-consistent distributions would be ideal for fixed-target applications. Finally, certain self-consistent distributions can be flattened by exploiting the relationships between their phases space coordinates.
Although self-consistent distributions are often used in theoretical studies, they are not assumed to be realistic. Yet simulations predict that at least one — the Danilov distribution — could be approximately produced in a real machine using a method called elliptical painting. This dissertation contributes to efforts to test this prediction in the Spallation Neutron Source (SNS). First, the beam envelope model was used to calculate the matched solutions of the Danilov distribution in periodic focusing channels, placing constraints on the elliptical painting method. Second, several methods to indirectly measure the four-dimensional (4D) phase space distribution of an accumulated beam in the SNS were identified, implemented using existing diagnostics, and optimized, allowing the comparison of real beams to the Danilov model in minimal time. Finally, three initial experiments to produce a Danilov distribution in the SNS were carried out. Although the experiments were performed under suboptimal conditions due to current hardware constraints, the measured reduction in 4D emittance was not insignificant in the final experiment, indicating that the beam was closer to the desired self-consistent case than a typical beam in the SNS. Simulations were included to benchmark the measurements, resulting in qualitative agreement and recommendations for future experiments. Small modifications to the SNS ring lattice are expected to bring the beam closer to a self-consistent state.
| {
"alphanum_fraction": 0.8357875948,
"avg_line_length": 320.1428571429,
"ext": "tex",
"hexsha": "6c7eefc509cd12dd00bc85ed03ed0fbd0530cf8f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "53845b2acfd6da962c19967a98987208988d841e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "austin-hoover/dissertation",
"max_forks_repo_path": "Preamble/abstract.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "53845b2acfd6da962c19967a98987208988d841e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "austin-hoover/dissertation",
"max_issues_repo_path": "Preamble/abstract.tex",
"max_line_length": 1491,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "53845b2acfd6da962c19967a98987208988d841e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "austin-hoover/dissertation",
"max_stars_repo_path": "Preamble/abstract.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 404,
"size": 2241
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{\label{sec:GroupTracking}Group ID-Based Process Tracking}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
One function that HTCondor often must perform is keeping track of all
processes created by a job. This is done so that HTCondor can provide
resource usage statistics about jobs, and also so that HTCondor can properly
clean up any processes that jobs leave behind when they exit.
In general, tracking process families is difficult to do reliably.
By default HTCondor uses a combination of process parent-child
relationships, process groups, and information that HTCondor places in a
job's environment to track process families on a best-effort
basis. This usually works well, but it can falter for certain
applications or for jobs that try to evade detection.
Jobs that run with a user account dedicated for HTCondor's use
can be reliably tracked, since all HTCondor needs to do is look for all
processes running using the given account. Administrators must specify
in HTCondor's configuration what accounts can be considered dedicated
via the \Macro{DEDICATED\_EXECUTE\_ACCOUNT\_REGEXP} setting. See
Section~\ref{sec:RunAsNobody} for further details.
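For example, if the execution slots run jobs under dedicated accounts named
\Login{cndrusr1}, \Login{cndrusr2}, and so on (the account names here are purely
illustrative; any site-specific naming scheme may be matched), the configuration
might contain:
\begin{verbatim}
DEDICATED_EXECUTE_ACCOUNT_REGEXP = cndrusr[0-9]+
\end{verbatim}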
Ideally, jobs can be reliably tracked regardless of the user account
they execute under. This can be accomplished with group ID-based
tracking. This method of tracking requires that a range of dedicated
\emph{group} IDs (GID) be set aside for HTCondor's use. The number of GIDs
that must be set aside for an execute machine is equal to its number
of execution slots. GID-based tracking is only available on Linux, and
it requires that HTCondor either runs as \Login{root} or uses privilege
separation (see Section~\ref{sec:PrivSep}).
GID-based tracking works by placing a dedicated GID in the
supplementary group list of a job's initial process. Since modifying
the supplementary group ID list requires
\Login{root} privilege, the job will not be able to create processes
that go unnoticed by HTCondor.
Once a suitable GID range has been set aside for process tracking,
GID-based tracking can be enabled via the
\Macro{USE\_GID\_PROCESS\_TRACKING} parameter. The minimum and maximum
GIDs included in the range are specified with the
\Macro{MIN\_TRACKING\_GID} and \Macro{MAX\_TRACKING\_GID}
settings. For example, the following would enable GID-based tracking
for an execute machine with 8 slots.
\begin{verbatim}
USE_GID_PROCESS_TRACKING = True
MIN_TRACKING_GID = 750
MAX_TRACKING_GID = 757
\end{verbatim}
If the defined range is too small, such that there is not a GID available
when starting a job,
then the \Condor{starter} will fail as it tries to start the job.
An error message will be logged stating that there are no more tracking GIDs.
GID-based process tracking requires use of the \Condor{procd}. If
\MacroNI{USE\_GID\_PROCESS\_TRACKING} is true, the \Condor{procd} will
be used regardless of the \Macro{USE\_PROCD} setting. Changes to
\MacroNI{MIN\_TRACKING\_GID} and \MacroNI{MAX\_TRACKING\_GID} require
a full restart of HTCondor.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{\label{sec:CGroupTracking}Cgroup-Based Process Tracking}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\index{cgroup based process tracking}
A new feature in Linux version 2.6.24 allows HTCondor to more accurately
and safely manage jobs composed of sets of processes. This Linux
feature is called Control Groups, or cgroups for short, and it is
available starting with RHEL 6, Debian 6, and related distributions.
Documentation about Linux kernel support for cgroups can be found
in the Documentation directory in the kernel source code distribution.
Another good reference is
\URL{http://docs.redhat.com/docs/en-US/Red\_Hat\_Enterprise\_Linux/6/html/Resource\_Management\_Guide/index.html}
Even if cgroup support is built into the kernel,
many distributions do not install the cgroup tools by default.
In order to use cgroups, the tools must be installed.
On RPM-based systems, these can be installed with the command
\begin{verbatim}
yum install libcgroup\*
\end{verbatim}
After these tools are installed, the cgconfig service needs to be
running. It parses the \File{/etc/cgconfig.conf} file, and makes
appropriate mounts under \File{/cgroup}. Before starting the cgconfig
service, you will need to edit the file \File{/etc/cgconfig.conf} to
add a group specific to HTCondor.
Here is an example of the contents of file \File{/etc/cgconfig.conf} with
appropriate values for the HTCondor group:
\begin{verbatim}
mount {
cpu = /cgroup/cpu;
cpuset = /cgroup/cpuset;
cpuacct = /cgroup/cpuacct;
memory = /cgroup/memory;
freezer = /cgroup/freezer;
blkio = /cgroup/blkio;
}
group htcondor {
cpu {}
cpuacct {}
memory {}
freezer {}
blkio {}
}
\end{verbatim}
On Debian based systems, the memory controller is often not enabled by default, and
needs to be enabled by a boot-time option. This setting needs to be inherited
down to the per-job cgroup with the following commands in rc.local:
\begin{verbatim}
/usr/sbin/cgconfigparser -l /etc/cgconfig.conf
/bin/echo 1 > /sys/fs/cgroup/htcondor/cgroup.clone_children
\end{verbatim}
Finally, for Debian, add the following field to group htcondor:
\begin{verbatim}
cpuset {
  cpuset.mems = 0;
}
\end{verbatim}
After the \File{/etc/cgconfig.conf} file has had the htcondor group
added to it, add and start the cgconfig service by running
\begin{verbatim}
chkconfig --add cgconfig
service cgconfig start
\end{verbatim}
When the cgconfig service is correctly running, the virtual filesystem
mounted on \File{/cgroup} should have several subdirectories under it, and
there should be an htcondor subdirectory under the directory \File{/cgroup/cpu}.
Starting with HTCondor version 7.7.0,
the \Condor{starter} daemon can optionally use cgroups
to accurately track all the processes started by a job,
even when quickly-exiting parent processes spawn many child processes.
As with the GID-based tracking, this is only implemented when a
\Condor{procd} daemon is running. The HTCondor team recommends enabling
this feature on Linux platforms that support it. When cgroup tracking is enabled,
HTCondor is able to report a much more accurate
measurement of the physical memory used by a set of processes.
To enable cgroup tracking in HTCondor, once cgroups have been enabled
in the operating system, set the \Macro{BASE\_CGROUP} configuration
variable to the string that matches the group name specified in the \File{/etc/cgconfig.conf}
file. In the example above, ``htcondor'' is a good choice. There is no default value
for \Macro{BASE\_CGROUP}, and if left unset, cgroup tracking will not be used.
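With the example \File{/etc/cgconfig.conf} shown above, the corresponding
HTCondor configuration is simply:
\begin{verbatim}
BASE_CGROUP = htcondor
\end{verbatim}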
Kernel cgroups are named in a virtual file system hierarchy.
HTCondor will put each running job on the execute node in a distinct cgroup.
The name of this cgroup is the name of the execute directory for that \Condor{starter}, with
slashes replaced by underscores, followed by the name and number of the slot. So, for the
memory controller, a job running on slot1 would have its cgroup located at
\File{/cgroup/memory/htcondor/condor\_var\_lib\_condor\_execute\_slot1/}. The \File{tasks}
file in this directory will contain a list of all the processes in this cgroup, and
many other files in this directory have useful information about resource usage
of this cgroup. See the kernel documentation for full details.
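For example, to list the processes currently charged to the job in that slot
(the exact path depends on the execute directory and the slot name), an
administrator could run:
\begin{verbatim}
cat /cgroup/memory/htcondor/condor_var_lib_condor_execute_slot1/tasks
\end{verbatim}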
Once cgroup-based tracking is configured,
its use should be invisible to users and administrators.
The \Condor{procd} log, as defined by configuration variable
\MacroNI{PROCD\_LOG},
will mention that it is using this method,
but no user-visible changes should occur,
other than the impossibility of a quickly-forking process escaping from the
control of the \Condor{starter},
and the more accurate reporting of memory usage.
| {
"alphanum_fraction": 0.7549190535,
"avg_line_length": 45.3672316384,
"ext": "tex",
"hexsha": "e2e16e39ae34aef53952a7bf07f37ddba08fb85a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "10aea32f1717637972af90459034d1543004cb83",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "AmesianX/htcondor",
"max_forks_repo_path": "doc/admin-man/group-tracking.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "10aea32f1717637972af90459034d1543004cb83",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "AmesianX/htcondor",
"max_issues_repo_path": "doc/admin-man/group-tracking.tex",
"max_line_length": 113,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "10aea32f1717637972af90459034d1543004cb83",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "AmesianX/htcondor",
"max_stars_repo_path": "doc/admin-man/group-tracking.tex",
"max_stars_repo_stars_event_max_datetime": "2020-11-18T21:52:11.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-11-18T21:52:11.000Z",
"num_tokens": 1917,
"size": 8030
} |
\documentclass[12pt]{article}
%
\usepackage{amsfonts,amsmath,amssymb,enumerate,fancyhdr,fullpage,graphicx,hyperref,float,lastpage,multicol,multirow,pgfplots,setspace,subfigure,supertabular,tablefootnote,tikz, titlesec,vwcol,wasysym,xcolor}
\usepackage[bottom]{footmisc}
\usepackage[utf8]{inputenc}
\usepackage[backend=biber]{biblatex}
\newcounter{lemma}
\newcommand{\lemma}[0]{\textbf{Lemma \refstepcounter{lemma}\thelemma\label{lem:\thelemma}}}
\newcounter{corollary}
\newcommand{\corollary}[0]{\textbf{Corollary \refstepcounter{corollary}\thecorollary\label{cor:\thecorollary}}}
\newcounter{theorem}
\newcommand{\theorem}[0]{\textbf{Theorem \refstepcounter{theorem}\thetheorem\label{thm:\thetheorem}}}
\newcounter{defn}
\newcommand{\defn}[0]{\textbf{Definition \refstepcounter{defn}\thedefn\label{def:\thedefn}}}
\pgfplotsset{compat=newest}
\usetikzlibrary{shapes.geometric,arrows,fit,matrix,positioning}
\tikzset
{
treenode/.style = {circle, draw=black, align=center, minimum size=1cm},
subtree/.style = {isosceles triangle, draw=black, align=center, minimum height=0.5cm, minimum width=1cm, shape border rotate=90, anchor=north}
}
\bibliography{quadtrees}
%
\hypersetup{colorlinks,citecolor=black,filecolor=black,linkcolor=black,urlcolor=black}
\titleformat{\section}
{\normalfont\large\bfseries}
{\thesection. }
{0pt}{}
\titleformat{\subsection}
{\normalfont\large\bfseries}
{\thesubsection. }
{0pt}{}
\titleformat{\subsubsection}
{\normalfont\bfseries}
{\thesubsubsection. }
{0pt}{}
\titlespacing{\section}{0pt}{1em}{0em}
\setlength{\parskip}{1em}
\setlength{\parindent}{0cm}
\setlength{\columnsep}{1cm}
%
%begin document
\begin{document}
\title{Analyzing Randomized and Deterministic Skip Quadtrees}
\author{Botong Ma and William Qian\\\{btma, wqian94\}@mit.edu}
\date{} % no date
\maketitle
\begin{abstract}
\noindent Quadtrees and related spacial data structures have applications in modeling, machine learning, collision detection, and more. In this paper, we briefly discuss the skip quadtree data structure, then explore the two different variants described by Eppstein, Goodrich, and Sun \cite{sqt} through implementing both variants and comparing the results.
\end{abstract}
\section{Introduction}
\indent A quadtree is a tree where all the nodes can be visualized as a square divided into exactly four quadrants, representing the four children that each node has. When a collision occurs in one of the quadrants, it will then recursively split into four more nodes. Quadtrees are typically used to represent information in 2D, and are also often used for point and range queries.
Point query is a problem where given a set of $n$ points $S = {p_1, p_2, \hdots, p_n}$, we would like to quickly return the closest point $p_i$ in $S$ to a given point $q$ that may or may not be in $S$. In game design, this is often used for collision detection between objects. Range queries are directed at finding all points in a set $P$ that belong to a specific region. In both cases, quadtrees are useful for solving these problems.
Additionally, in graphics, meshes are used to define shapes of objects in modeling. Generally, they are composed of triangles or other simple shapes. However, sometimes, uniform meshes are not computationally sound for some objects, such as a model of a system of partial differential equations. In these cases, we will need a non-uniform mesh, which can be generated using a quadtree.
Unfortunately, a na\"ive implementation of a quadtree can result in poor runtime and space bounds. In this paper, we begin at the na\"ive, simple quadtree and conclude by developing and testing tighter bounds for both runtime and space based on empirical data for skip quadtrees \cite{sqt}.
\section{Background}
\subsection{Terms}
Before we begin, the reader should be familiar with two data structures, quadtrees \cite{qt} and skip lists \cite{sl}, as well as some associated terms.
A \textit{simple quadtree} follows from the traditional definition of a quadtree $Q$ defined by its center point $q_0 = (x_0, y_0)$ and the side length $\ell_0$ of the region it bounds, where all quadrants have lengths exactly half of their parents (except the root). As a result, the upper bound on the tree's height has a dependency on the smallest pair-wise distances between two points in the tree, and the lower bound on the tree's height depends linearly on the number of points in the tree. The set of points contained by a quadtree is given by $p(Q)$ and the number of points in the quadtree is $n(Q) = |p(Q)|$.
A \textit{skip list} is a 1D data structure that builds from a list $L_0$ of values, and keeps several ``levels" of the list as the lists $L_0, L_1, \cdots$, where $L_{i+1} \subseteq L_i$ for $i = 0, 1, \cdots$. The runtimes for inserting into, deleting from, and querying a skip list is, in expectation, $O(\lg n)$, if $L_0$ has $n$ elements. The space required to store a skip list is $O(n)$ if a point $p_j \in L_i$ exists also in $L_{i+1}$ with a fixed probability.
A \textit{node} is an element in the tree. Each node can have a key (or value), children, and a parent. Each node is always the parent for its children, and a child for its parent.
A \textit{leaf node} is a node with no children in the tree.
An \textit{internal node} is a node with children in the tree.
The \textit{root node} of a tree is the node with no parent. Since an empty tree has no root, the root node is always an internal node.
\subsection{Simple Quadtrees}
% Consider the 1D region $R$ of length $\ell_0$ centered at point $q_0$. We know this as the range from $q_0-\frac{1}{2}\ell_0$ to $q_0+\frac{1}{2}\ell_0$ on the real number line.
%
%
% \begin{figure}[ht!]
% \centering
% \begin{tikzpicture}[scale=2]
% \draw[very thick] (-1,0) -- (1,0);
% \path [draw=black, fill=black] (1,0) circle (2pt);
% \path [draw=black, fill=black] (-1,0) circle (2pt);
% \path [draw=black, fill=white, thick] (-0.2, 0) circle (2pt) ;
% \path [draw=black, fill=white, thick] (-0.4, 0) circle (2pt);
% \path [draw=black, fill=white, thick] (0.5, 0) circle (2pt);
% \path [draw=black, fill=white, thick] (0.1, 0) circle (2pt);
% \path [draw=black, fill=white, thick] (0.8, 0) circle (2pt);
% \path [draw=black, fill=white, thick] (-0.7, 0) circle (2pt);
% \draw[latex-latex] (-3.5,0) -- (3.5,0) ;
% \foreach \x in {-3,-2,-1,0,1,2,3}
% \draw[shift={(\x,0)},color=black] (0pt,3pt) -- (0pt,-3pt);
% \foreach \x in {-3,-2,-1,0,1,2,3}
% \draw[shift={(\x,0)},color=black] (0pt,0pt) -- (0pt,-3pt) node[below]
% {$\x$};
% \end{tikzpicture}
% \label{fig:1}
% \caption{An example region where $n=6$, $q_0 = 0$ and $\ell_0 = 2$.}
% \end{figure}
%
% If we then have $n$ points $p_1,\cdots,p_n$ from within this range, we can construct a binary search tree to store these $n$ points. While many methods exist for constructing such a tree, one such method uses our knowledge that the points are contained in $R$.
% \begin{figure}[ht!]
% \centering
% \begin{tikzpicture}[->,>=stealth', level/.style={sibling distance = 5cm/#1, level distance = 1.5cm}, scale=0.6,transform shape]
% \node [treenode] {$p_1$}
% child
% {
% node [treenode] {$p_2$}
% child
% {
% node[treenode] {$p_3$}
% }
% child
% {
% node[treenode] {$p_4$}
% }
% }
% child
% {
% node[treenode] {$p_5$}
% child
% {
% node [treenode] {$p_6$}
% }
% };
%
% \end{tikzpicture}
% \label{fig:2}
% \caption{Binary tree for the example region in Figure \ref{fig:1}}
% \end{figure}
%
% Let the entire tree contain all the points with the root's key being $q_0$, then divide $R$ into two equal subspaces, $R_1$ and $R_2$. Define the left subtree to contain all points in $R_1$ and the right subtree to contain all points in $R_2$, and each corresponding internal node's key being the center of the region that its subtree represents. We can then recursively define our tree this way, letting only leaves.
%
% As stated before, a quadtree $Q$ is a geometric data structure where each node can be visualized as a square, and has exactly four children.
\subsubsection{QUERY}
The worst-case scenario for a simple quadtree centered at $q_0 = (x_0, y_0)$ with side length $\ell_0$ can be achieved with just two points: $p_1 = (x_0 + \varepsilon, y_0)$ and $p_2 = (x_0, y_0 + \varepsilon)$ for $\varepsilon > 0$. Inserting just these two points in the quadtree will result in $\approx \lg \frac{\ell_0}{2\varepsilon}$ subdivisions before $p_1$ and $p_2$ end up in separate quadrants, which means that both points are at depth $O(\lg \frac{\ell_0}{2\varepsilon}) = O(-\lg \varepsilon)$ for a fixed $\ell_0$. Then, as $\varepsilon \to 0$, it is easy to see that we asymptotically approach infinite depth, and thus, asymptotically-infinite search time in the worst case.
\subsubsection{INSERT}
Since inserting implicitly involves traversal, insertion inherits the same worst-case asymptotically-infinite runtime for points that are clustered very closely together.
\subsubsection{DELETE}
Similarly, deletion also incurs, in the worst-case scenario, asymptotically-infinite runtime.
\subsection{Compressed Quadtrees}
As we just saw, the worst-case runtimes for a simple quadtree have a dependency on the pairwise distances between the points, resulting in the possibility of an asymptotically-infinite runtime for all three functions. To eliminate this awful dependency, we make the following observation: the asymptotically-infinite runtimes result from the asymptotically-infinite depth, which in turn results from having so many non-splitting internal nodes. These nodes have only one child, and as it turns out, we need not keep track of all these internal nodes.
A \textit{compressed quadtree} \cite{cqt} is like a simple quadtree, except with one additional rule: excluding the root, all internal nodes must contain at least two children (the root is allowed to have only 1 child). Conceptually, this means that if a quadrant has only one child, it will be replaced by that child, such that the parent points directly to the child, rather than to this quadrant. Recursively applying this idea, we can see that a compressed quadtree is still correct, while requiring less space in memory.
\lemma: Inserting a point into a compressed quadtree $Q$ adds at most two nodes to $Q$.
\textbf{Proof:} If $Q$ is empty, then it has no nodes, so when we add a point $p$ into $Q$, we must add one node for the root node and one node for the point.
If $Q$ is not empty, then when we insert a point into $Q$, we must create one node to represent the point, and if a collision occurs, we must resolve that collision. Since every non-root internal node must have at least two children, we only create one internal node to resolve each collision.
This means that, for each point we insert, we will also create at most one internal node. Thus, inserting a point in the tree results in an addition of at most two nodes to the tree. \hfill $\blacksquare$
\theorem: A compressed quadtree of $n$ points has at most $2n$ nodes.
\textbf{Proof:} By Lemma \ref{lem:1}, we know that each insertion increases the node count by at most 2, so after inserting $n$ nodes into an initially-empty tree, the tree has at most $2n$ nodes. \hfill $\blacksquare$
Therefore, a compressed quadtree of $n$ points requires only $O(n)$ space, which is an improvement on the simple quadtree, now that we have eliminated the dependency on the smallest pairwise distances between the points.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.75]{CompressedQuadtree.png}
\caption{A simple quadtree (left) and the corresponding compressed quadtree (right), taken from Figure 1 in \cite{sqt}.}
\label{fig:3}
\end{figure}
\subsubsection{QUERY}
From Theorem \ref{thm:1}, we know that, after inserting $n$ points, the tree can have at most $2n$ nodes. Since exactly $n$ of these nodes must be leaf nodes, this means that a tree can have at most $n$ internal nodes, so the tree has height at most $n$. This occurs, for example, when all $n$ points are constructed such that $p_j = (x_0 + \frac{\ell_0}{2} (1 - 2^{-j}), y_0 + \frac{\ell_0}{2} (1 - 2^{-j}))$ for $j = 1, \cdots, n$, so that the points approach one corner of the bounding square along its diagonal. With such a construction for the $n$ points, the tree will have height $n$, so the worst-case runtime for querying a compressed quadtree is $O(n)$.
\subsubsection{INSERT}
Inserting a point into a compressed quadtree requires a traversal and then actually creating and inserting the nodes. By Lemma \ref{lem:1}, we know that an insertion requires the creation of a constant number of nodes. We can then see that we also only need to change a constant number of pointers, so the actual creation and insertion of the nodes takes constant time. Since we have just shown that a traversal takes $O(n)$ time, this means that the runtime for inserting a point in a quadtree is $O(n)$.
\subsubsection{DELETE}
Deleting a point requires a traversal followed by disconnecting the node representing the point from the rest of the tree. Such a disconnection might also trigger the deletion of the parent node as well, so we might have to delete two nodes from the tree; fortunately, we will never have to delete the grandparent node because by virtue of having a parent, the node must also have a sibling, so that sibling will replace the parent if the parent is also deleted, which does not change the number of children that the grandparent has. Thus, the actual deletion of up to two nodes from the tree is $O(1)$, so the overall runtime for a deletion is dominated by the traversal, which is $O(n)$.
\section{Skip Quadtrees}
A skip quadtree takes the idea of skip lists and applies it to compressed quadtrees. Instead of having layers of lists, we instead have layers of quadtrees, denoted by $Q_i$, where $i \geq 0$ indicates the level of the quadtree \cite{sqt}. $Q_0$ is the original compressed quadtree, which contains all the points in the data structure.
\defn: In a skip quadtree, if $b > a$, then $p(Q_b) \subseteq p(Q_a)$.
\theorem: If node $v \in Q_{i+1}$, then we know that $v \in Q_i$.
\textbf{Proof:} We will consider $v$ in two scenarios: $v$ is a leaf node and $v$ is an internal node.
Consider when $v$ is a leaf node. Then by Definition \ref{def:1}, we know that $v \in Q_{i+1} \implies v \in Q_i$, so this case is trivial.
Consider when $v$ is an internal node. This means that $v$ has at least two children, implying that $\exists p_1, p_2$ such that $p_1$ and $p_2$ are points that belong to different subquadrants of $v$, and correspond to leaf nodes. This means that $p_1$ and $p_2$ certainly must exist in $Q_i$, so it must be necessary to have $v$ in $Q_i$ to resolve the collision that would otherwise occur between $p_1$ and $p_2$ in $Q_i$.
Thus, we can conclude that, if $v \in Q_{i+1}$, then $v \in Q_i$. \hfill $\blacksquare$
\subsection{Promotion and Demotion}
If $p_j \in Q_i$ but $p_j \not\in Q_{i+1}$, then the process of adding $p_j$ to $Q_{i+1}$ is called \textit{promotion}. Similarly, if $p_j \in Q_i$ and $Q_{i+1}$, then the process of removing $p_j$ from just $Q_{i+1}$ is called \textit{demotion}. This leads us to the following decision problem.
\defn\ (The Promotion Problem): Given a point $p_j$ such that $p_j \in Q_i$ and $p_j \not\in Q_{i+1}$, should $p_j$ be promoted to $Q_{i+1}$?
Below, we examine the two variants of skip quadtrees that \cite{sqt} describes: randomized and deterministic. These terms refer to the algorithm used to decide the Promotion Problem. Interestingly, the applications of the Promotion Problem also differ between these two variants.
\subsection{Randomized Skip Quadtrees}
The randomized skip quadtree decides the Promotion Problem in a probabilistic manner. Define $P$ to be the probability that we answer the Promotion Problem with YES. Then, each time we insert a point $p_j$, we ask for decisions to the Promotion Problem until we get a NO. Letting $k$ be the number of times we got YES before we got NO, then $p_j$ should be promoted $k$ times from $Q_0$, such that $p_j \in Q_0, \cdots, Q_k$.
This means that, for a skip quadtree of $n$ points,
$$\mathbb{E}[n(Q_i)] = n \cdot P^{i} \implies \mathbb{E}[|Q_i|] \leq 2n \cdot P^{i},$$
so the expected number of nodes overall in the skip quadtree is
$$\mathbb{E}[|Q|] = \sum_{i = 0}^\infty \mathbb{E}[|Q_i|] \leq \sum_{i = 0}^\infty (2n \cdot P^{i}) = \frac{2n}{1 - P} = O(n).$$
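For instance, with the promotion probability $P = \frac{1}{2}$ that we use in our implementation (Section 4), this bound becomes
$$\mathbb{E}[|Q|] \leq \frac{2n}{1 - \frac{1}{2}} = 4n.$$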
So, we can expect query, insertion, and deletion operations to run in $O(\lg n)$ time while still using only linear space \cite{sqt}.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.75]{RandomizedCompressedQuadtree1.png}
\caption{Randomized skip quadtree layers, Figure 2 in \cite{sqt}.}
\label{fig:4}
\end{figure}
\subsubsection{Analysis}
Unfortunately, randomized skip quadtrees do not entirely solve the problem. While \textit{in expectation} the quadtree can query, insert, and delete in $O(\lg n)$ time, there still exists the possibility of randomization working against us. For example, suppose we used the data set described in section 2.3.1; then, with probability $(1 - P)^{n}$, no points will be promoted from $Q_0$ to $Q_1$, so we are left with essentially the same situation as in 2.3.1, 2.3.2, and 2.3.3: linear runtime.
Another problem with randomized skip quadtrees is that the number of levels is unbounded, and as a result, for any point being inserted, it is possible to end up with far more levels than necessary.
\subsection{Deterministic Skip Quadtrees}
The deterministic skip quadtree resolves the problem posed by the poorly-chosen data set above. To do this, the deterministic skip quadtree adds an additional invariant to the mix: the leaf nodes must form a valid 1-2-3 skip list \cite{dsl}. Note that forming such a skip list requires a total order of the points, and one such ordering is by comparing points dimension-by-dimension.
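For example, comparing coordinates dimension by dimension yields the lexicographic order
$$(x_1, y_1) \prec (x_2, y_2) \iff x_1 < x_2 \lor \left(x_1 = x_2 \land y_1 < y_2\right),$$
which is one concrete choice of such a total order on distinct points.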
By applying the insertion and deletion principles of 1-2-3 skip lists from \cite{dsl} to skip quadtrees, insertion and deletion operations on deterministic skip quadtrees achieve a deterministic runtime of $O(\lg n)$ \cite{sqt}.
\section{Implementation}
In order to compare randomized and deterministic skip quadtrees, we decided to implement both data structures and compare their benchmark results. For the purposes of this paper, we implemented and compared only the query and insertion operations. Since the deletion operations are the most difficult to implement and we would expect to see roughly the same results as with insertion operations, we chose to leave deletion operations out of our implementation and comparisons.
We implemented both variants in C, for performance reasons, and used a common interface to ensure that the data structures and functions are comparable. In both cases, the query and insertion operations are implemented recursively.
Additionally, for the randomized implementation, we set $P = \frac{1}{2}$, so each node has a $\frac{1}{2}$ probability of being promoted.
\section{Benchmark}
\subsection{Hardware Specifications}
We ran our benchmarks on a machine with 16 Intel\textsuperscript\textregistered\ Core\textsuperscript\texttrademark\ i7-5960X cores, each running at 3 GHz, hosted by MIT's Computer Science and Artificial Intelligence Laboratory. Since our benchmarks were single-threaded, we only used one of the cores.
\subsection{Methods}
The benchmark reports throughput, measured by the number of completed operations, given a time interval, the number of dimensions, an initial population count, and the probability of running an insertion operation (as opposed to a query operation). The initial population count parameter determines the initial size of the tree before benchmarking begins -- the larger this value, the lower the probability of collisions. The insertion operation probability determines approximately how many of the total operations will be insertion operations -- the remaining operations are all queries.
Although the points are generated randomly, the random number generator is seeded deterministically, such that at least the initial population is the same for runs with the same initial population count. The random number generator we used is an implementation of the Marsaglia polar method, which we believe produces a sufficiently-uniform distribution for our purposes.
Hardware-wise, since we ran these benchmarks on the same dedicated machine, we have high confidence in our results.
\subsection{Results}
We chose to fix the time intervals at 1 second, and the number of dimensions at 2. We then varied the initial population count parameter by choosing an initial population of 1, 1000, and 1000000 nodes, while also varying the insertion operation probability by choosing 0\%, 1\%, and 10\%. We then ran each trial five times, and took the average of the results to produce Table \ref{tbl:1}.
\begin{table} [ht!]
\centering
\begin{tabular}{|r|r|r|r|r|}
\hline
\textbf{Nodes}& \textbf{\% Insertions}& \textbf{Randomized}& \textbf{Deterministic}& \textbf{Ratio: R/D}\\ \hline
1 & 0\% & 20746759 & 15851391 & 1.31 \\ \hline
1 & 1\% & 5794683 & 14607628 & 0.40 \\ \hline
1 & 10\% & 2416052 & 7743879 & 0.31 \\ \hline
1000 & 0\% & 11733530 & 15610295 & 0.75 \\ \hline
1000 & 1\% & 3150763 & 4866744 & 0.65 \\ \hline
1000 & 10\% & 1661835 & 3476177 & 0.48 \\ \hline
1000000 & 0\% & 6533244 & 15614092 & 0.42 \\ \hline
1000000 & 1\% & 2386309 & 3133555 & 0.76 \\ \hline
1000000 & 10\% & 1489428 & 2767394 & 0.54 \\ \hline
\end{tabular}
\caption{Results of the benchmark, averaged over five trials. Note that the deterministic implementation consistently outperforms the randomized implementation, except for the single-node-query-only test case.}
\label{tbl:1}
\end{table}
\section{Discussion}
From Table \ref{tbl:1}, we can see that, of the 9 test cases, the deterministic implementation outperforms the randomized implementation in 8 test cases. The only test case where the randomized implementation performs better is the test case where the tree begins with only 1 point, and the benchmark tests only for query operations. This is a little surprising because the algorithms used in both implementations for querying are the same. One possible explanation for this behavior is that the deterministic implementation has no nodes at the top-most level of the tree, so when querying, we always have to drop down a level to reach a non-empty level; on the other hand, the randomized version does not have this layer, and thus when only 1 point is in the tree, that point can be found without needing to recurse down a level.
For the other 8 test cases, the fact that the deterministic implementation outperforms the randomized implementation is surprising. We had initially hypothesized that the randomized implementation would perform better, due to having to preserve fewer invariants, which resulted in less logic in the code for the randomized implementation. The actual data, however, belies this hypothesis, and additionally shows that, in fact, the deterministic implementation severely outperforms the randomized one.
As we can see in Figures \ref{fig:5}, \ref{fig:6}, and \ref{fig:7} in the Appendix, whenever insertions became involved, the deterministic implementation produced faster runtimes per operation compared to the randomized implementation, and held this lead as the number of initial nodes increased. This suggests that inserting a point may be significantly faster with the deterministic implementation than with the randomized one. This is perhaps due to how the answer to the Promotion Problem is decided. For the randomized implementation, the decision is to promote with $\frac{1}{2}$ probability, so in expectation, after $n$ insertions, we will have promoted $n$ times. On the other hand, for the deterministic implementation, promotions only occur when 3 nodes are already adjacent in a given level \cite{dsl}. This means that promotions happen once for every fourth node inserted in a row, so after $n$ insertions, we can expect about $\frac{n}{2}$ promotions. While the factor of 2 is small at face value, it results in significant performance differences because insertions, which mutate the data structure, are far more expensive than queries, which are read-only operations.
Additionally, it is also worthwhile to note that, with the exception of the deterministic implementation for 0\% insertions, all the other graphs support the claim that both implementations sport $O(\lg n)$ runtimes for their query and insertion operations. Moreover, the slopes are incredibly modest, which suggests that the constant is manageable and the data structures are feasible.
That said, it is also important to note the behavior of the deterministic implementation for 0\% insertions. Unlike all the other trends, this one remains constant, regardless of how many initial nodes there are. This seems suspicious, and warrants further investigation to determine the cause of this unusual behavior.
Unfortunately, the efficiency of the deterministic implementation is undercut by its noticeably-more-difficult implementation, which requires not only the quadtree itself, but also an underlying 1-2-3 skip list.
\section{Conclusion}
Compared to the simple quadtree, both the randomized and deterministic implementations of the skip quadtree perform far better and can better handle worst-case scenarios for the simple quadtree. We also used the empirical data to validate the authors' claim that skip quadtrees have $O(\lg n)$ query and insertion operations \cite{sqt}.
Between the randomized and deterministic implementations themselves, although we had originally believed that the randomized implementation would be more performant than the deterministic implementation due to the latter implementation's need to also build a 1-2-3 skip list, empirical results showed that, in fact, the deterministic implementation outperforms the randomized one in most cases.
\section{Future Work}
In this exploration into skip quadtrees, our implementations of both the randomized and deterministic variants have not been performance engineered. As a result, it is possible that well-tuned and better-engineered implementations may produce different results regarding the relative efficiencies of these data structures. Additionally, we did not compare the deletion operations for these quadtrees, so future work may consider implementing the deletion operation and commenting on the performance and implementation results. Finally, further comparison with other spacial data structures, such as k-d trees, will be necessary to fully determine the scope for which skip quadtrees are optimal.
\pagebreak
\section{Appendix}
\begin{figure}[ht!]
\centering
\includegraphics[scale=1.0]{0Insertion.png}
\caption{Comparison of 0\% Insertion Runtimes}
\label{fig:5}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[scale=1.0]{1Insertion.png}
\caption{Comparison of 1\% Insertion Runtimes}
\label{fig:6}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[scale=1.0]{10Insertion.png}
\caption{Comparison of 10\% Insertion Runtimes}
\label{fig:7}
\end{figure}
% bib stuff
\printbibliography
\end{document} | {
"alphanum_fraction": 0.6899271439,
"avg_line_length": 92.0676923077,
"ext": "tex",
"hexsha": "790b071972eff1b767d28cdb00953f2060b820eb",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "656a715df8b691936546582915b79165cd9b997e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "wqian94/dsqt",
"max_forks_repo_path": "854paper/paper.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "656a715df8b691936546582915b79165cd9b997e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "wqian94/dsqt",
"max_issues_repo_path": "854paper/paper.tex",
"max_line_length": 1191,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "656a715df8b691936546582915b79165cd9b997e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "wqian94/dsqt",
"max_stars_repo_path": "854paper/paper.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7506,
"size": 29922
} |
\section{Environments}\label{sec:environments}
Environments define how the steps are run, what limits are applied to them, and how the processes are monitored.
They are also responsible for implementing various checks that validators might want to use, but their exact
implementation would depend on the environment's specification.\\
There are three predefined environments, \hyperref[sec:LocalComputer]{\python{LocalComputer}},
\hyperref[sec:PsutilEnvironment]{\python{PsutilEnvironment}} and
\hyperref[sec:KolejkaObserver]{\python{KolejkaObserver}}.\\
\hyperref[sec:LocalComputer]{\python{LocalComputer}} is meant to provide minimal support for running the judge
without installing any additional packages (provided \shell{/usr/bin/time} is available).
It doesn't support any limits, and should be used solely for debugging/testing purposes.\\
\hyperref[sec:PsutilEnvironment]{\python{PsutilEnvironment}} is a slightly enhanced environment, which uses the
\shell{psutil} module and a separate thread to continuously query the running step for the resources it uses.\\
\hyperref[sec:KolejkaObserver]{\python{KolejkaObserver}} uses the \shell{kolejka-observer} package, and is
recommended for any serious checking systems, as it provides the greatest flexibility in terms of the available
limits.\\
The utility function \hyperref[sec:detect_environment]{\python{detect_environment()}} can be used to
automatically select the environment, based on the command line arguments.\\
In order to create a custom environment, the implementation has to provide all abstract methods, i.e.
\hyperref[sec:run_command]{\python{run_command()}} and
\hyperref[sec:format_execution_status]{\python{format_execution_status()}}.\\
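As a rough illustration only (this sketch is not part of the package; the class,
helper, and argument names simply follow the descriptions below, so the actual
signatures may differ), a bare-bones custom environment could look like this:
\begin{verbatim}
import subprocess

class NullEnvironment(ExecutionEnvironment):
    recognized_limits = []

    def run_command(self, command, stdin, stdout, stderr, env, user, group):
        # Open the pathlib.Path arguments and run the command in the output
        # directory; user/group switching and limits are ignored here.
        with self.get_file_handle(stdin, 'r') as in_f, \
             self.get_file_handle(stdout, 'w') as out_f, \
             self.get_file_handle(stderr, 'w') as err_f:
            subprocess.run([str(part) for part in command],
                           stdin=in_f, stdout=out_f, stderr=err_f,
                           env=env, cwd=str(self.output_directory))
        return None  # no execution statistics are gathered

    @classmethod
    def format_execution_status(cls, status):
        # Nothing to serialize, since run_command() returns no statistics.
        return {}
\end{verbatim}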
\subsection*{\python{ExecutionEnvironment}}
\renewcommand\labelitemiii{\textbullet}
\begin{itemize}[label={}]
\item This is the base class, defining common methods for the environments.
\item \docfunc{__init__(self, output_directory)}
\docfuncdesc{
Constructor, which takes as an argument the directory where all files created by the checking run will be
stored.
Commands will have this directory set as their current working directory.
Note that executed programs are not actually confined by this setting; in particular,
they can still access other locations, e.g.\ through absolute paths.\\
The \hyperref[sec:Validators]{\python{Validators}} nested class is instantiated with the environment as an
argument and assigned to a variable, creating a circular dependency between these two.
}
\item \docfunc{set_limits(self, **kwargs)}
\docfuncdesc{
Filters the limits passed as arguments, based on the \python{self.recognized_limits} variable, and prints a warning on \shell{stderr} for each unrecognized
one.
The identified limits are saved to be used during the following \hyperref[sec:run_command]{
\python{run_command()}} calls,
until the next \python{set_limits()} invocation.
}
\item \phantomsection \label{sec:run_steps} \docfunc{run_steps(self, steps)}
\docfuncdesc{
Executes the received steps one by one, halting immediately when any of the steps returns an exit status
(e.g. \shell{CME}).
Returns a 2-tuple containing the final status (either one of the exit statuses, or \shell{OK}),
and a dictionary with execution statistics for each executed step.
}
\item \phantomsection \label{sec:run_command_step} \docfunc{run_command_step(self, step, name)}
\docfuncdesc{
Responsible for running the command step, which consists of the following parts:
\begin{itemize}
\item verifying that step is configured correctly
\item verifying the prerequisites are met
\item setting the limits requested by the step (see
\hyperref[sec:get_limits]{\python{CommandBase.get_limits()}})
\item evaluating the \hyperref[sec:DependentExpr]{\python{DependentExpr}} expressions
\item calling the \hyperref[sec:run_command]{\python{run_command()}} method
\item checking the postconditions
\item restoring the old limits
\end{itemize}
}
\item \phantomsection \label{sec:run_command}
\docfunc{run_command(self, command, stdin, stdout, stderr, env, user, group)}
\docfuncdesc{
Abstract method.
Responsible for running the specified command within the appropriate launch configuration, consisting of
standard input/output files (handles opened from \python{stdin}, \python{stdout}, \python{stderr} arguments,
all of type \python{pathlib.Path}),
environment variables (\python{env}), process permissions (\python{user} and \python{group}) and limits
(from previous \python{set_limits()} call).
Returns the optional \hyperref[sec:ExecutionStatistics]{execution statistics} object.
}
\item \phantomsection \label{sec:run_task_step} \docfunc{run_task_step(self, step, name)}
\docfuncdesc{
Responsible for running the task step, which consists of the following parts:
\begin{itemize}
\item verifying the prerequisites are met
\item calling the \hyperref[sec:execute]{\python{execute()}} method
\end{itemize}
}
\item \phantomsection \label{sec:env_get_env} \docfunc{get_env(self)}
\docfuncdesc{
Returns the dictionary of environment variables that will be passed to the spawned process.
Steps can expand and modify the mapping by overriding the
\hyperref[sec:command_get_env]{\python{CommandBase.get_env()}} method.
}
\item \phantomsection \label{sec:set_variable} \docfunc{set_variable(self, variable_name, value)}
\docfuncdesc{
Used by the tasks to store any value that should be accessible by the command steps, but is
undetermined before run-time.
}
\item \phantomsection \label{sec:format_execution_status} \docfunc{format_execution_status(cls, status)}
\docfuncdesc{
Abstract classmethod.
Responsible for serializing the execution statistics data into a dictionary containing solely
JSON-compatible types.
Useful for logging.
}
\item \docfunc{get_path(self, path)}
\docfuncdesc{
Responsible for returning a path uniquely determined by the argument, which is a subdirectory of
\python{self.output_directory}.
}
\item \docfunc{get_file_handle(file, mode)}
\docfuncdesc{
Creates all required parent directories of \python{file}, if they don't exist.
Returns a file handle opened with the specified mode.
}
\item \phantomsection \label{sec:Validators} \docfunc{Validators}
\docfuncdesc{
Class specifying the environment-specific validators, available for use mainly in the prerequisites.
When requesting an unknown validator, a no-op function is returned instead, to ensure maximum compatibility
while switching between multiple environments.
}
\end{itemize}
\subsection*{\python{LocalComputer}}\label{sec:LocalComputer}
\begin{itemize}[label={}]
\item \python{recognized_limits = []}
\item \docfunc{run_command(self, command, stdin, stdout, stderr, env, user, group)}
\docfuncdesc{
Runs the command using the \shell{/usr/bin/time} tool to measure the time and memory used.
Returns the \hyperref[sec:LocalComputer.LocalStats]{\python{LocalComputer.LocalStats}} object containing
execution statistics.
}
\item \docfunc{format_execution_status(cls, status)}
\docfuncdesc{
Implements the \hyperref[sec:format_execution_status]{
\python{ExecutionEnvironment.format_execution_status()}
}
method.
}
\item \phantomsection \label{sec:LocalComputer.LocalStats} \docfunc{LocalStats}
\docfuncdesc{
Object representing the execution statistics.
Contains three properties: \code{time}, \code{memory} and \code{cpus}.
}
\item \docfunc{Validators}
\docfuncdesc{
Inherits all validators from the \hyperref[subsec:LocalExecutionEnvironmentValidatorsMixin]{
\python{LocalExecutionEnvironmentValidatorsMixin}}.
}
\end{itemize}
\subsection*{\python{PsutilEnvironment}}\label{sec:PsutilEnvironment}
\begin{itemize}[label={}]
\item \python{recognized_limits = ['cpus', 'cpus_offset', 'time', 'memory']}
\item \docfunc{run_command(self, command, stdin, stdout, stderr, env, user, group)}
\docfuncdesc{
Runs the command using the \python{psutil.Popen} function, and starts a separate thread monitoring the
resource usage of the launched process (see \hyperref[sec:monitor_process]{\python{monitor_process()}}).
Returns the \hyperref[sec:PsutilEnvironment.LocalStats]{\python{PsutilEnvironment.LocalStats}} object containing
execution statistics.
}
\item \phantomsection \label{sec:monitor_process} \docfunc{monitor_process(self, process, execution_status)}
\docfuncdesc{
Sets the \python{cpu_affinity} limit on the process passed as an argument, then proceeds to query the
time and memory usage every 0.1s.
Kills the process if it exceeds the \python{time} or \python{memory} limits.
After the program finishes its execution, sets the gathered statistics on the \python{execution_status}
argument.
}
\item \docfunc{format_execution_status(cls, status)}
\docfuncdesc{
Implements the \hyperref[sec:format_execution_status]{\python{ExecutionEnvironment.format_execution_status}}
method.
}
\item \phantomsection \label{sec:PsutilEnvironment.LocalStats} \docfunc{LocalStats}
\docfuncdesc{
Object representing the execution statistics.
Contains three properties: \code{time}, \code{memory} and \code{cpus}.
}
\item \docfunc{Validators}
\docfuncdesc{
Inherits all validators from the \hyperref[subsec:LocalExecutionEnvironmentValidatorsMixin]{
\python{LocalExecutionEnvironmentValidatorsMixin}}.
}
\end{itemize}
\subsection*{\python{KolejkaObserver}}\label{sec:KolejkaObserver}
\begin{itemize}[label={}]
\item \python{recognized_limits = ['cpus', 'cpus_offset', 'pids', 'memory', 'time']}
\item \docfunc{run_command(self, command, stdin, stdout, stderr, env, user, group)}
\docfuncdesc{
Runs the command using the \python{observer.run()} function from the \python{kolejka-observer} package.
Returns an enriched \python{CompletedProcess} object, containing
execution statistics.
}
\item \docfunc{format_execution_status(cls, status)}
\docfuncdesc{
Implements the \hyperref[sec:format_execution_status]{
\python{ExecutionEnvironment.format_execution_status()}
}
method.
}
\item \docfunc{Validators}
\docfuncdesc{
Inherits all validators from the \hyperref[subsec:LocalExecutionEnvironmentValidatorsMixin]{
\python{LocalExecutionEnvironmentValidatorsMixin}}.
}
\end{itemize}
\subsection*{\docfunc{detect_environment()}}\label{sec:detect_environment}
\begin{itemize}[label={}]
\item Returns the environment based on the command line arguments; a short usage sketch follows this list.
\item \shell{--local} (\textit{default}) -- \hyperref[sec:LocalComputer]{\python{LocalComputer}}
\item \shell{--psutil} -- \hyperref[sec:PsutilEnvironment]{\python{PsutilEnvironment}}
\item \shell{--kolejkaobserver} -- \hyperref[sec:KolejkaObserver]{\python{KolejkaObserver}}
\item Remaining arguments are then passed to the environment-specific parsers and, if recognized, to the
environment constructor as \python{kwargs}.
\end{itemize}
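A minimal usage sketch (illustrative only: imports are omitted and \python{steps} is assumed to be
the already-prepared collection of step objects described under \hyperref[sec:run_steps]{\python{run_steps()}}):
\begin{verbatim}
# Pick LocalComputer, PsutilEnvironment or KolejkaObserver from the CLI flags.
environment = detect_environment()

# Run the prepared steps; statistics maps each executed step's name to its
# (environment-specific) execution statistics object.
status, statistics = environment.run_steps(steps)
print('Final status:', status)
\end{verbatim}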
\subsection*{\docfunc{LocalExecutionEnvironmentValidatorsMixin}}\label{subsec:LocalExecutionEnvironmentValidatorsMixin}
\begin{itemize}[label={}]
\item Defines the methods that are shared between all environments running on a local file system.
Currently two validators are implemented:
\item \python{file_exists(self, file)} - checks if the file exists in the file system
\item \python{program_exists(self, file)} - checks if the program exists in the file system
\end{itemize}
| {
"alphanum_fraction": 0.6884142094,
"avg_line_length": 49.5057471264,
"ext": "tex",
"hexsha": "5848a47c97c9def7ece62ad594c80047b1d0aea6",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-10-08T19:32:09.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-10-08T19:32:09.000Z",
"max_forks_repo_head_hexsha": "4fa42d9b9a52a94cd8dc57a99218b32d0e8fc18f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Raalsky/kolejka-judge",
"max_forks_repo_path": "doc/sections/environments.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "571df05b12c5a4748d7a2ca4c217b0042acf6b48",
"max_issues_repo_issues_event_max_datetime": "2021-09-01T10:09:57.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-09-01T08:10:35.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "zielinskit/kolejka-judge",
"max_issues_repo_path": "doc/sections/environments.tex",
"max_line_length": 175,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "571df05b12c5a4748d7a2ca4c217b0042acf6b48",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "zielinskit/kolejka-judge",
"max_stars_repo_path": "doc/sections/environments.tex",
"max_stars_repo_stars_event_max_datetime": "2021-03-08T19:27:58.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-10-29T11:00:03.000Z",
"num_tokens": 2882,
"size": 12921
} |
\documentclass[aspectratio=169]{beamer}
\mode<presentation>
{
\setbeamertemplate{background canvas}[square]
\pgfdeclareimage[width=6em,interpolate=true]{dsailogo}{../dsai-logo}
\pgfdeclareimage[width=6em,interpolate=true]{erasmuslogo}{../erasmus-logo}
\titlegraphic{\pgfuseimage{dsailogo} \hspace{0.2in} \pgfuseimage{erasmuslogo}}
%\usetheme{default}
\usetheme{Madrid}
\usecolortheme{rose}
\usefonttheme[onlysmall]{structurebold}
}
\usepackage{pgf,pgfarrows,pgfnodes,pgfautomata,pgfheaps,pgfshade}
\usepackage{amsmath,amssymb}
\usepackage{graphics}
\usepackage{ragged2e}
\usepackage[latin1]{inputenc}
\usepackage{colortbl}
\usepackage[absolute,overlay]{textpos}
\setlength{\TPHorizModule}{30mm}
\setlength{\TPVertModule}{\TPHorizModule}
\textblockorigin{10mm}{10mm}
\usepackage[english]{babel}
\setbeamercovered{dynamic}
\AtBeginSection[]{
\begin{frame}<beamer>
\frametitle{Outline}
\tableofcontents[currentsection]
\end{frame}
}
\title[Computer Vision]{Computer Vision\\Estimation}
\author{dsai.asia}
\institute[]{Asia Data Science and Artificial Intelligence Master's Program}
\date{}
% My math definitions
\renewcommand{\vec}[1]{\boldsymbol{#1}}
\newcommand{\mat}[1]{\mathtt{#1}}
\newcommand{\ten}[1]{\mathcal{#1}}
\renewcommand{\null}[1]{{\cal N}(#1)}
\def\Rset{\mathbb{R}}
\def\Pset{\mathbb{P}}
\DeclareMathOperator*{\argmax}{argmax}
\DeclareMathOperator*{\argmin}{argmin}
\def\norm{\mbox{$\cal{N}$}}
\newcommand{\stereotype}[1]{\guillemotleft{{#1}}\guillemotright}
\newcommand{\myfig}[3]{\centerline{\includegraphics[width={#1}]{{#2}}}
\centerline{\scriptsize #3}}
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% CONTENTS START HERE
%\setbeamertemplate{navigation symbols}{}
\frame{\titlepage}
%--------------------------------------------------------------------
%\part<presentation>{Part name}
%
%\frame{\partpage}
\begin{frame}
\frametitle{Readings}
Readings for these lecture notes:
\begin{itemize}
\item[-] Hartley, R., and Zisserman, A. {\em Multiple View Geometry in
Computer Vision}, Cambridge University Press, 2004, Chapter 4.
\item[-] Tomasi, C. {\em Mathematical Modeling of Continuous Systems},
online lecture notes from Duke University, 2004.
\end{itemize}
\medskip
If you are unfamiliar with the mathematics used in these lecture
notes, study Carlo Tomasi's beautiful lecture notes. You'll find the
notes on the course Web site under ``Readings.''
\medskip
These notes contain material $\copyright$ Hartley and Zisserman
(2004) and Tomasi (2004).
\end{frame}
%--------------------------------------------------------------------
\section{Introduction}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{Introduction}
\framesubtitle{Estimation Problems}
In vision, we are frequently confronted with \alert{estimation} problems
in which parameters of some function must be estimated from measurements.
\medskip
Some examples of important estimation problems:
\begin{itemize}
\item \alert{2D homography}: Given a set of points $\vec{x}_i$ in
$\Pset^2$ and corresponding points $\vec{x}_i'$ in $\Pset^2$,
find a homography taking each $\vec{x}_i$ to $\vec{x}_i'$.
\item \alert{3D to 2D camera projection}: Given a set of points
$\vec{X}_i$ in 3D space and corresponding points $\vec{x}_i$
in an image, find the 3D to 2D projective mapping taking each
$\vec{X}_i$ to $\vec{x}_i$.
\item \alert{Fundamental matrix computation}: Given a set of
points $\vec{x}_i$ in one image and a set of corresponding
points $\vec{x}_i'$ in another image, find the fundamental
matrix $\mat{F}$ relating the two images.
\item \alert{Trifocal tensor computation}: Given a set of
point correspondences $\vec{x}_i \leftrightarrow \vec{x}_i'
\leftrightarrow \vec{x}_i''$ across three images, compute the
trifocal tensor $\ten{T}_i^{jk}$ relating points or lines in
three views.
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Introduction}
\framesubtitle{Homography estimation}
First we'll consider homography estimation.
\medskip
We have a set of points $\vec{x}_i$ and corresponding points
$\vec{x}_i'$. We want to compute $\mat{H}$ such that $\forall i,
\mat{H} \vec{x}_i = \vec{x}_i'$.
\medskip
How many points do we need?
\begin{itemize}
\item We already saw that each point
correspondence gives us 2 constraints (equations), one for the $x$
component and one for the $y$ component.
  \item Since $\mat{H}$ has 8
    degrees of freedom (9 entries, less one for overall scale), we need at
    least 4 correspondences.
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Introduction}
\framesubtitle{Cost functions}
We know that 4 correspondences yields an \alert{exact solution}.
\medskip
Due to \alert{measurement error} and \alert{correspondence error} we
should get \alert{more than 4 correspondences} then find the
homography $\mat{H}$ minimizing some \alert{cost function}.
\medskip
\begin{block}{Gold Standard algorithm (Hartley and Zisserman, 2004)}
An estimation algorithm minimizing the cost function that
is \alert{the best possible cost function} under certain assumptions.
\end{block}
\medskip
All algorithms should be evaluated with respect to the Gold Standard.
\end{frame}
%--------------------------------------------------------------------
\section{Direct Linear Transform (DLT)}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{Direct Linear Transform (DLT)}
\framesubtitle{Another view of the homography estimation problem}
Now we'll look at another way to derive the exact linear estimate of
$\mat{H}$ from 4 points.
\medskip
For corresponding points $\vec{x}_i \leftrightarrow \vec{x}_i'$ we
want $\vec{x}_i' = k_i \mat{H} \vec{x}_i$ for some nonzero scaling
factor $k_i$.
\medskip
Thus we can say that $\vec{x}_i'$ and $\mat{H}\vec{x}_i$ must be collinear.
\medskip
Recall that the \alert{cross product} of two collinear vectors is the
0 vector.
\medskip
This means we can write this constraint in the form
$\vec{x}_i' \times \mat{H} \vec{x}_i = \vec{0}$.
\end{frame}
\begin{frame}
\frametitle{Direct Linear Transform (DLT)}
\framesubtitle{Deriving the linear system}
Let's use $\vec{h}^{j}$ to denote the $j$-th row of $\mat{H}$ written
as a vector. Then we have
\begin{equation*}
\mat{H}\vec{x}_i = \begin{pmatrix} \vec{h}^{1T}\vec{x}_i \\
\vec{h}^{2T}\vec{x}_i \\ \vec{h}^{3T}\vec{x}_i \end{pmatrix}.
\end{equation*}
\medskip
Now if $\vec{x}_i' = (x_i',y_i',w_i')^T$, the cross product can be written
\begin{equation*}
\vec{x}_i' \times \mat{H} \vec{x}_i = \begin{pmatrix}
y_i' \vec{h}^{3T}\vec{x}_i - w_i' \vec{h}^{2T}\vec{x}_i \\
w_i' \vec{h}^{1T}\vec{x}_i - x_i' \vec{h}^{3T}\vec{x}_i \\
x_i' \vec{h}^{2T}\vec{x}_i - y_i' \vec{h}^{1T}\vec{x}_i \end{pmatrix}.
\end{equation*}
\end{frame}
\begin{frame}
\frametitle{Direct Linear Transform (DLT)}
\framesubtitle{Deriving the linear system}
Since we want the cross product to be the zero vector, we can write
the linear system
\begin{equation*}
\begin{bmatrix}
\vec{0}^T & -w_i'\vec{x}_i^T & y_i'\vec{x}_i^T \\
w_i'\vec{x}_i^T & \vec{0}^T & -x_i'\vec{x}_i^T \\
-y_i'\vec{x}_i^T & x_i'\vec{x}_i^T & \vec{0}^T \end{bmatrix}
\begin{pmatrix} \vec{h}^1 \\ \vec{h}^2 \\ \vec{h}^3 \end{pmatrix} =
\vec{0}.
\end{equation*}
\medskip
We have three linear equations in 9 unknowns.
The third equation is actually a linear combination of the first
two.\footnote{To convince yourself of this, use the factor $-x_i'/w_i'$
for the first equation and $-y_i'/w_i'$ for the second equation.}
So we can drop it (see next slide)...
\end{frame}
\begin{frame}
\frametitle{Direct Linear Transform (DLT)}
\framesubtitle{Deriving the linear system}
Dropping the redundant third equation, we are left with two equations in the parameters of $\mat{H}$:
\begin{equation}
\label{dlt-eqn}
\mat{A}_i\vec{h} =
\begin{bmatrix}
\vec{0}^T & -w_i'\vec{x}_i^T & y_i'\vec{x}_i^T \\
w_i'\vec{x}_i^T & \vec{0}^T & -x_i'\vec{x}_i^T \end{bmatrix}
\begin{pmatrix} \vec{h}^1 \\ \vec{h}^2 \\ \vec{h}^3 \end{pmatrix} =
\vec{0}.
\end{equation}
However, if $w_i' = 0$ we have an ideal point, and the first two
equations become linearly dependent. In this case we should use the
third equation and the first or second equation, or just use all three
equations all the time.
\end{frame}
\begin{frame}
\frametitle{Direct Linear Transform (DLT)}
\framesubtitle{Deriving the linear system}
With 8 equations in 9 unknowns, $\mat{A}$ is $8\times 9$ and has rank
8. It has a one-dimensional null space and we obtain $\vec{h} =
\null{\mat{A}}$.
\medskip
If we have \alert{more than 4 correspondences} the system
$\mat{A}\vec{h}=\vec{0}$ will be \alert{over-determined}:
\begin{itemize}
\item With \alert{perfect measurement}, the rank of $\mat{A}$ would
still be 8 and it would still have a one-dimensional null space.
\item But \alert{measurement noise} means no exact solution.
\item So we try to find the vector $\vec{h}$ minimizing some \alert{cost
function}.
\end{itemize}
\medskip
Since we want $\mat{A}\vec{h}$ to be as close as possible to
$\vec{0}$, a sensible cost function is the \alert{norm} of
$\mat{A}\vec{h}$, i.e., $\|\mat{A}\vec{h}\|$.
\medskip
However we must avoid the \alert{trivial solution} $\vec{h}=\vec{0}$,
so we impose a constraint $\|\vec{h}\|=1$. This is OK since $\mat{H}$
is homogeneous.
\end{frame}
\begin{frame}
\frametitle{Direct Linear Transform (DLT)}
\framesubtitle{Solving the linear system}
So now we have the minimization problem
\begin{equation*}
\hat{\vec{h}} = \argmin_{\vec{h}} \|\mat{A}\vec{h}\|, \text{subject to }
\|\vec{h}\|=1.
\end{equation*}
whose solution is to let $\vec{h}$ be the unit eigenvector of
$\mat{A}^T\mat{A}$ with the smallest eigenvalue, or the unit singular
vector of $\mat{A}$ corresponding to the smallest singular value of
$\mat{A}$.
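\medskip

To see why, note that $\|\mat{A}\vec{h}\|^2 = \vec{h}^T(\mat{A}^T\mat{A})\vec{h}$;
minimizing this quadratic form subject to $\|\vec{h}\|=1$ is a Rayleigh quotient
problem, whose minimizer is the eigenvector of $\mat{A}^T\mat{A}$ with the
smallest eigenvalue.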
\medskip
\alert{This is important to remember:} to solve an overconstrained
homogeneous linear system $\mat{A}\vec{x}=\vec{0}$ by minimizing
$\|\mat{A}\vec{h}\|$ subject to $\|\vec{h}\|=1$, we \alert{perform SVD
on $\mat{A}$} (see next section) or \alert{compute the eigenvector
corresponding to the smallest eigenvalue of $\mat{A}^T\mat{A}$}.
\end{frame}
\begin{frame}
\frametitle{Direct Linear Transform (DLT)}
\framesubtitle{The basic DLT for $\mat{H}$}
\begin{block}{DLT: Objective}
Given $n \ge 4$ 2D to 2D point correspondences $\{ \vec{x}_i
\leftrightarrow \vec{x}_i' \}$, determine the 2D homography matrix
$\mat{H}$ such that $\vec{x}_i' = \mat{H}\vec{x}_i$.
\end{block}
\begin{block}{DLT: Algorithm}
\begin{itemize}
\item[(i)] For each correspondence $\vec{x}_i \leftrightarrow
    \vec{x}_i'$, compute the matrix $\mat{A}_i$ as in
    equation~(\ref{dlt-eqn}). When $w_i'=0$, use different rows.
\item[(ii)] Assemble the $n$ $2\times 9$ matrices $\mat{A}_i$ into a
single $2n\times 9$ matrix $\mat{A}$.
\item[(iii)] Obtain the SVD $\mat{A}=\mat{U}\mat{D}\mat{V}^T$. The
unit singular vector (column of $\mat{V}$) corresponding to the
smallest singular value (diagonal element of $\mat{D}$) is the
desired $\vec{h}$.
\item[(iv)] Rearrange $\vec{h}$ to obtain $\mat{H}$.
\end{itemize}
\end{block}
\end{frame}
\begin{frame}
\frametitle{Direct Linear Transform (DLT)}
\framesubtitle{Similar approaches}
Other methods: we can select one element of $\vec{h}$ to be equal to 1
(or any other arbitrary value) to obtain an inhomogeneous linear
system which can be solved by the usual least squares methods (see
text).
\medskip
We can also compute $\mat{H}$ in the same way using line
correspondences or conic correspondences. The derivations are quite
similar.
\end{frame}
\begin{frame}
\frametitle{Direct Linear Transform (DLT)}
\framesubtitle{Example code}
For example code for the DLT in Octave, see \texttt{dlt\_demo.m} on
the course Web site.
\end{frame}
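\begin{frame}[fragile]
  \frametitle{Direct Linear Transform (DLT)}
  \framesubtitle{A minimal sketch}
  Here is a minimal, illustrative Octave sketch of the basic (unnormalized)
  DLT.  It is \alert{not} the course's \texttt{dlt\_demo.m}; the function and
  variable names are our own.
{\scriptsize
\begin{verbatim}
% x, xp: 3xn homogeneous points with x(:,i) <-> xp(:,i), n >= 4
function H = dlt(x, xp)
  n = size(x, 2);
  A = zeros(2*n, 9);
  for i = 1:n
    X = x(:,i)';                          % 1x3 row vector x_i^T
    A(2*i-1,:) = [zeros(1,3), -xp(3,i)*X,  xp(2,i)*X];
    A(2*i,  :) = [ xp(3,i)*X, zeros(1,3), -xp(1,i)*X];
  end
  [U, D, V] = svd(A);                     % h = last column of V
  H = reshape(V(:,9), 3, 3)';             % rows of H are h^1, h^2, h^3
end
\end{verbatim}
}
\end{frame}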
%--------------------------------------------------------------------
\section{Singular value decomposition (SVD)}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{Singular value decomposition (SVD)}
\framesubtitle{Definition}
The SVD is an incredibly useful factorization, particularly for the
kinds of estimation problems that come up in computer vision.
\medskip
\begin{block}{Singular Value Decomposition}
Given an $m\times n$ matrix $\mat{A}$, the \alert{singular value
decomposition} of $\mat{A}$ is
\begin{equation*}
\mat{A} = \mat{U}\mat{D}\mat{V}^T
\end{equation*}
where the columns of $\mat{U} \in \Rset^{m\times m}$ and $\mat{V} \in
\Rset^{n\times n}$ are orthogonal unit vectors and $\mat{D} \in
\Rset^{m\times n}$ is a diagonal matrix whose elements $\sigma_i$,
with $\sigma_1 \ge \cdots \ge \sigma_p \ge 0$, ($p=\min(m,n)$), are
called the \alert{singular values} of $\mat{A}$.
\end{block}
\end{frame}
\begin{frame}
\frametitle{Singular value decomposition (SVD)}
\framesubtitle{Geometric interpretation}
Writing $\mat{A}=\mat{U}\mat{D}\mat{V}^T$ models the transformation
$\vec{y}=\mat{A}\vec{x}$ as a rotation, a ``stretch'' of the unit
  hypersphere into a hyperellipse, and a rotation of the hyperellipse.
Example:
\begin{columns}
\column{1.1in}
\begin{equation*}
\mat{A} = \frac{1}{\sqrt{2}}
\begin{bmatrix}
\sqrt{3} & \sqrt{3} \\
-3 & 3 \\
1 & 1
\end{bmatrix}
\end{equation*}
\column{2.6in}
\myfig{2.5in}{Tomasi-fig3-2}{}
\end{columns}
\vspace{-0.25in}
\centerline{\scriptsize Example from Tomasi (2004), Section 3.}
\end{frame}
\begin{frame}
\frametitle{Singular value decomposition (SVD)}
\framesubtitle{Properties of the SVD}
The SVD has many useful properties:
\begin{itemize}
\item If $m=n$ and $\sigma_i \not= 0, \forall i$, then $\mat{A}$ is
\alert{invertible}. The ratio $C=\sigma_1/\sigma_n$ is called the
\alert{condition number} of $\mat{A}$ and tells us how close
$\mat{A}$ is to singularity. When $1/C$ is close to numerical
precision, we say $\mat{A}$ is \alert{ill-conditioned} and should be
considered singular.
\item The number of nonzero $\sigma_i$ is the \alert{rank} of
$\mat{A}$. Numerically, we must specify a tolerance, e.g.\
$\epsilon=10^{-6}$, and say the number of singular values greater
than $\epsilon$ is the rank of $\mat{A}$.
\item We can get the \alert{inverse} or \alert{pseudoinverse} of
$\mat{A}$ using the SVD: $\mat{A}^{-1}=\mat{V}\mat{D}^{-1}\mat{U}^T$
or $\mat{A}^+ = \mat{V}\mat{D}_0^{-1}\mat{U}^T$, where
$\mat{D}_0^{-1}$ is $\mat{D}^{-1}$ for the nonzero singular values
and 0 otherwise.
\end{itemize}
\end{frame}
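\begin{frame}[fragile]
  \frametitle{Singular value decomposition (SVD)}
  \framesubtitle{Numerical rank and pseudoinverse: a sketch}
  A small, hedged Octave illustration of the rank and pseudoinverse
  properties above; the matrix and tolerance are arbitrary choices of ours.
{\scriptsize
\begin{verbatim}
A = [1 2; 2 4; 0 1];            % toy 3x2 matrix
[U, D, V] = svd(A);
s   = diag(D);
tol = 1e-6;
r   = sum(s > tol)              % numerical rank of A
Dinv = zeros(size(A'));         % D_0^{-1}: invert only the nonzero sigma_i
Dinv(1:r, 1:r) = diag(1 ./ s(1:r));
Apinv = V * Dinv * U';          % pseudoinverse A^+ = V D_0^{-1} U^T
% compare with Octave's built-in: pinv(A)
\end{verbatim}
}
\end{frame}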
\begin{frame}
\frametitle{Singular value decomposition (SVD)}
\framesubtitle{Properties of the SVD}
More properties:
\begin{itemize}
\item The \alert{columns} of $\mat{U}$ corresponding to the
\alert{nonzero} $\sigma_i$ span the \alert{range} of $\mat{A}$. The
columns of $\mat{V}$ corresponding to the zero singular values span
the null space of $\mat{A}$.
\item The \alert{squares of the nonzero singular values} of $\mat{A}$
are the \alert{nonzero eigenvalues} of $\mat{A}^T\mat{A}$ and
$\mat{A}\mat{A}^T$. The \alert{columns} of $\mat{U}$ are the
\alert{eigenvectors} of $\mat{A}\mat{A}^T$ and the \alert{columns}
of $\mat{V}$ are the \alert{eigenvectors} of $\mat{A}^T\mat{A}$.
    Additionally, we can write $\mat{A}\vec{v}_k = \sigma_k \vec{u}_k$
    and $\mat{A}^T\vec{u}_k=\sigma_k\vec{v}_k$, where $\vec{v}_k$ is the
$k$th column of $\mat{V}$ and $\vec{u}_k$ is the $k$th column of
$\mat{U}$.
\item The singular values of $\mat{A}$ are related to the
\alert{Frobenius norm} of $\mat{A}$:
\begin{equation*}
\|\mat{A}\|^2_F = \sum_{i,j}a_{i,j}^2 = \sum_i\sigma_i^2.
\end{equation*}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Singular value decomposition (SVD)}
\framesubtitle{Applications}
Here are some of the SVD's many uses:
\begin{itemize}
\item In \alert{inhomogeneous} linear least squares problems
($\mat{A}\vec{x}=\vec{b}$), we use the SVD to obtain the
pseudoinverse of A and let $\vec{x}=\mat{A}^+\vec{b}$.
\item In \alert{homogeneous} least squares problems, when we want to minimize
$\|\mat{A}\vec{x}\|$ subject to $\|\vec{x}\|=1$, we obtain the SVD
    and let $\vec{x}$ be the last column of $\mat{V}$ (since it is also
    the eigenvector of $\mat{A}^T\mat{A}$ corresponding to the smallest
    eigenvalue).
\item In some cases we can use the SVD to \alert{enforce constraints
on an estimated matrix}. For example, if we obtain an estimate
$\mat{R}$ of a rotation matrix that is not quite orthogonal, we can
compute the orthogonal matrix
$\hat{\mat{R}}=\mat{U}\mat{I}\mat{V}^T$ that is closest to $\mat{R}$
measured by the Frobenius norm. We can use a similar approach for
rank constraints.
\end{itemize}
\end{frame}
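\begin{frame}[fragile]
  \frametitle{Singular value decomposition (SVD)}
  \framesubtitle{Applications: a sketch}
  Two of the uses above, sketched in Octave with made-up data; the random
  matrices are only placeholders.
{\scriptsize
\begin{verbatim}
A = rand(8, 9);                     % overdetermined homogeneous system
[U, D, V] = svd(A);
x = V(:, end);                      % minimizes ||Ax|| subject to ||x|| = 1

R = orth(rand(3)) + 0.01*rand(3);   % an "almost orthogonal" 3x3 matrix
[Ur, Dr, Vr] = svd(R);
R_hat = Ur * Vr';                   % closest orthogonal matrix (Frobenius)
\end{verbatim}
}
\end{frame}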
%--------------------------------------------------------------------
\section{Cost functions}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{Cost functions}
\framesubtitle{Algebraic distance}
DLT minimizes $\|\mat{A}\vec{h}\|$. The vector $\vec{\epsilon} =
\mat{A}\vec{h}$ is called the \alert{residual vector}. Each of the
components of $\vec{\epsilon}$ comes from one of the individual
correspondences generating a row of $\mat{A}$.
\medskip
The part of the vector $\vec{\epsilon}_i$ contributed by one
correspondence $\vec{x}_i \leftrightarrow \vec{x}_i'$ is called the
\alert{algebraic error} for correspondence $i$ and homography
$\mat{H}$. The norm of $\vec{\epsilon}_i$ is called the
\alert{algebraic distance} between $\vec{x}_i'$ and
$\mat{H}\vec{x}_i$.
\medskip
The algebraic distance is a convenient cost function because it leads
to a straightforward linear solution, but \alert{it is not
geometrically or statistically meaningful}!
\medskip
We will find that \alert{normalization} is crucial to obtaining good
results from algorithms minimizing algebraic error.
\medskip
We will also look at algorithms that use DLT and similar linear
algebraic error-minimizing routines to obtain an initial solution,
then minimize a statistical or geometrical cost function from there.
\end{frame}
\begin{frame}
\frametitle{Cost functions}
\framesubtitle{Geometric error}
Here we'll use $\vec{x}$ to represent a \alert{measured} image point,
$\hat{\vec{x}}$ to denote an \alert{estimated} point, and
$\bar{\vec{x}}$ to represent the \alert{true value} of a point.
\medskip
As a starting point, imagine we have perfect measurements in the first
image and error in the second image.
\medskip
Let $d(\vec{x},\vec{y})$ be the \alert{Euclidean} distance between the
\alert{inhomogeneous} representations of points $\vec{x}$ and
$\vec{y}$.
\medskip
We call the \alert{transfer error} for the set of correspondences
\begin{equation*}
\sum_i d(\vec{x}_i',\mat{H}\bar{\vec{x}}_i)^2.
\end{equation*}
We can estimate the homography $\hat{\mat{H}}$ minimizing the
transfer error.
\end{frame}
\begin{frame}
\frametitle{Cost functions}
\framesubtitle{Geometric error}
In most practical situations, we don't actually know the true position
$\bar{\vec{x}}_i$ of the point $\vec{x}_i$. Then it makes sense to
measure the \alert{symmetric transfer error}, i.e., the transfer error
in \alert{both directions}:
\begin{equation*}
\sum_i d(\vec{x}_i,\mat{H}^{-1}\vec{x}_i')^2 +
d(\vec{x}_i',\mat{H}\vec{x}_i)^2.
\end{equation*}
We can estimate the homography $\hat{\mat{H}}$ minimizing the symmetric
transfer error.
\end{frame}
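\begin{frame}[fragile]
  \frametitle{Cost functions}
  \framesubtitle{Symmetric transfer error: a sketch}
  A hedged Octave sketch of the symmetric transfer error (our function name;
  the element-wise divisions rely on automatic broadcasting):
{\scriptsize
\begin{verbatim}
% x, xp: 3xn homogeneous correspondences; H: 3x3 homography
function e = sym_transfer_error(H, x, xp)
  fwd = H * x;   fwd = fwd(1:2,:) ./ fwd(3,:);   % H x_i
  bwd = H \ xp;  bwd = bwd(1:2,:) ./ bwd(3,:);   % H^-1 x_i'
  xi  = x(1:2,:)  ./ x(3,:);                     % inhomogeneous points
  xpi = xp(1:2,:) ./ xp(3,:);
  e = sum(sum((xi - bwd).^2, 1) + sum((xpi - fwd).^2, 1));
end
\end{verbatim}
}
\end{frame}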
\begin{frame}
\frametitle{Cost functions}
\framesubtitle{Reprojection error}
Another approach is to come up with not only an estimate
$\hat{\mat{H}}$, but also \alert{estimates} $\hat{\vec{x}}_i$ and
$\hat{\vec{x}}_i'$ of the \alert{true points} $\bar{\vec{x}}_i$ and
$\bar{\vec{x}}_i'$, ensuring that
$\hat{\mat{H}}\hat{\vec{x}}_i=\hat{\vec{x}}_i'$.
\medskip
In this case we want to minimize the \alert{reprojection error}
\begin{equation*}
\sum_i d(\vec{x}_i,\hat{\vec{x}}_i)^2 +
d(\vec{x}_i',\hat{\vec{x}}_i')^2, \text{\ subject\ to\ }
\hat{\vec{x}}_i' = \hat{\mat{H}}\hat{\vec{x}}_i, \forall i.
\end{equation*}
\medskip
Reprojection error will be the natural cost function when we are estimating
3D world points $\vec{X}_i$ projecting to $\vec{x}_i$ and
$\vec{x}_i'$.
\end{frame}
\begin{frame}
\frametitle{Cost functions}
\framesubtitle{Comparison of transfer error and reprojection error}
Here is a comparison of symmetric transfer error and reprojection
error:
\medskip
\myfig{4in}{HZ-fig3-2}{Hartley and Zisserman (2004), Fig.\ 4.2}
\end{frame}
\begin{frame}
\frametitle{Cost functions}
\framesubtitle{Comparison of algebraic and geometric distance}
The algebraic and geometric methods turn out to be \alert{equivalent}
whenever $\hat{w}_i' = w_i' = 1, \forall i$.
\medskip
This is always true in the case that $\mat{H}$ is an affinity, and the
DLT specializes to affinities without any problem (just set
  $h_7=h_8=0$ in equation~(\ref{dlt-eqn})).
\end{frame}
\begin{frame}
\frametitle{Cost functions}
\framesubtitle{Other cost functions}
There are other cost functions that attempt to model the simplicity of
algebraic error while approximating geometric error as closely as
possible.
\medskip
One is the \alert{Sampson error}. See text for details.
\end{frame}
\begin{frame}
\frametitle{Cost functions}
\framesubtitle{Maximum likelihood estimation}
If we assume spherical Gaussian measurement errors in \alert{one} image, the
\alert{maximum likelihood estimate} of $\mat{H}$ is the one minimizing
the \alert{transfer error}
\begin{equation*}
\sum_i d(\vec{x}_i',\mat{H}\bar{\vec{x}}_i)^2.
\end{equation*}
If we assume spherical Gaussian measurement errors in \alert{both} images, the
maximum likelihood estimate of $\mat{H}$ turns out to be the one
minimizing the \alert{reprojection error}
\begin{equation*}
\sum_i
d(\vec{x}_i,\hat{\vec{x}}_i)^2+d(\vec{x}_i',\hat{\vec{x}}_i')^2.
\end{equation*}
\medskip
See text for the derivations. Notice that maximum likelihood with a
Gaussian noise model often leads to least-squares methods.
\end{frame}
\begin{frame}
\frametitle{Cost functions}
\framesubtitle{Maximum likelihood estimation}
That geometric error cost functions arise from maximum likelihood
means they have good theoretical justification: they are
\alert{statistically optimal} under certain assumptions.
\medskip
Is the assumption of \alert{Gaussian measurement noise} reasonable?
\medskip
It is reasonable if care is taken to \alert{eliminate outliers} prior
to performing the estimation.
\end{frame}
%--------------------------------------------------------------------
\section{Normalization}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{Normalization}
\framesubtitle{Problem with the DLT}
What happens with the DLT when we replace $\vec{x}_i$ by
$\mat{T}\vec{x}_i$ and replace $\vec{x}_i'$ by $\mat{T}'\vec{x}_i'$
for arbitrary homographies $\mat{T}$ and $\mat{T}'$?
\begin{itemize}
\item We would prefer the DLT to give us a transformed homography
$\tilde{\mat{H}} = \mat{T}'\mat{H}\mat{T}^{-1}$, where $\mat{H}$ is
the homography we would obtain from the DLT using the untransformed
points.
\item If this were the case, we would get the same estimated
homography regardless of the image coordinate system, origin, etc.
\item However, \alert{it doesn't turn out that way}! The DLT is
\alert{not transformation invariant}.
\end{itemize}
The DLT's lack of transformation invariance is a big problem but can
be minimized through \alert{data normalization}.
\medskip
\alert{Geometric error minimization}, on the other hand, \alert{is}
invariant to \alert{similarity transformations}.
\end{frame}
\begin{frame}
\frametitle{Normalization}
\framesubtitle{Fixing the problem with the DLT}
Hartley and Zisserman find it is possible to \alert{pre-normalize}
$\vec{x}_i$ and $\vec{x}_i'$ with \alert{isotropic scaling} to obtain
reasonable solutions:
\begin{block}{Isotropic scaling}
\begin{itemize}
\item Transform the coordinates in each image so their centroid is at
the origin.
\item Then scale the coordinates so that the average distance from the
origin along each dimension is 1. In 2D, this means the
average magnitude of $(x_i,y_i)^T$ becomes $\sqrt{2}$.
\end{itemize}
\end{block}
\medskip
With isotropic scaling, the DLT becomes invariant to similarity
transformations.
\begin{block}{}
Data normalization is an {\em essential} step in the DLT algorithm.
It must {\em not} be considered optional (Hartley and Zisserman, 2004,
p.\ 108).
\end{block}
\end{frame}
\begin{frame}
\frametitle{Normalization}
\framesubtitle{The normalized DLT for $\mat{H}$}
\begin{block}{Normalized DLT: Objective}
Given $n \ge 4$ 2D to 2D point correspondences $\{ \vec{x}_i
\leftrightarrow \vec{x}_i' \}$, determine the 2D homography matrix
$\mat{H}$ such that $\vec{x}_i' = \mat{H}\vec{x}_i$.
\end{block}
\begin{block}{Normalized DLT: Algorithm}
\begin{itemize}
\item[(i)] Compute a similarity transform $\mat{T}$ consisting of a
translation and a scale that takes $\vec{x}_i$ to
$\tilde{\vec{x}}_i$ such that the centroid is $(0,0)^T$ and the
average distance from the origin is $\sqrt{2}$.
\item[(ii)] Do the same for $\vec{x}_i'$, estimating a similarity
$\mat{T}'$ taking $\vec{x}_i'$ to $\tilde{\vec{x}}_i'$.
\item[(iii)] Apply the \alert{basic DLT} to obtain $\tilde{\mat{H}}$
from $\tilde{\vec{x}}_i$ and $\tilde{\vec{x}}_i'$.
\item[(iv)] Return $\mat{H}=\mat{T}^{\prime -1} \tilde{\mat{H}} \mat{T}$.
\end{itemize}
\end{block}
\end{frame}
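\begin{frame}[fragile]
  \frametitle{Normalization}
  \framesubtitle{Isotropic scaling: a sketch}
  A minimal Octave sketch of the normalizing similarity (our names; assumes
  automatic broadcasting):
{\scriptsize
\begin{verbatim}
% x: 3xn homogeneous points; returns T and the normalized points T*x
function [T, xn] = normalize_points(x)
  xi = x(1:2,:) ./ x(3,:);               % inhomogeneous coordinates
  c  = mean(xi, 2);                      % centroid
  d  = mean(sqrt(sum((xi - c).^2, 1)));  % mean distance from centroid
  s  = sqrt(2) / d;                      % so the mean distance becomes sqrt(2)
  T  = [s 0 -s*c(1); 0 s -s*c(2); 0 0 1];
  xn = T * x;
end
% Normalized DLT, using the earlier dlt() sketch:
%   [T,  xn ] = normalize_points(x);
%   [Tp, xpn] = normalize_points(xp);
%   H = inv(Tp) * dlt(xn, xpn) * T;      % un-normalize: H = Tp^-1 Ht T
\end{verbatim}
}
\end{frame}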
%--------------------------------------------------------------------
\section{Iterative minimization}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Motivation}
We saw that the DLT is a simple algorithm that minimizes the
\alert{algebraic error}.
\medskip
However, we would prefer to minimize the \alert{geometric error}, not
the algebraic error.
\medskip
For some problems, geometric error can be minimized analytically, but
more often minimizing geometric error requires \alert{iterative
methods} such as Gauss-Newton.
\medskip
  The advantage of iterative minimization methods is their power.  But
they have many disadvantages compared to linear minimization
techniques like the DLT:
\begin{itemize}
\item They are \alert{slower}.
\item They generally need an \alert{initial estimate}.
  \item They \alert{might not always converge}.
\item Finding the right \alert{stopping criterion} can be difficult.
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Formulating the problem}
Here are Hartley and Zisserman's steps for implementing an iterative
minimization algorithm:
\begin{itemize}
\item Decide on a \alert{cost function} such as reprojection error.
\item Find a suitable \alert{parameterization} of the entity to be
estimated. Over-parameterization is usually OK and even beneficial
(see text).
\item Write a \alert{function specification} expressing the cost as a
function of the parameters.
\item Use a linear method such as the DLT for \alert{initialization}
of the parameters.
\item Starting from the initial solution, perform \alert{iteration} to
minimize the cost function.
\end{itemize}
We will consider as an example the problem of minimizing
reprojection error for the homography estimation problem.
\end{frame}
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Function specification}
A large class of cost functions can be formulated in terms of
\alert{Mahalanobis distance} between a \alert{measurement vector}
  $\vec{X} \in \Rset^N$ and points on a \alert{model} submanifold $S
\subset \Rset^N$.
\medskip
We formulate the problem in terms of a \alert{parameter vector}
$\vec{P} \in \Rset^M$ and write a function $\vec{f} : \Rset^M \mapsto
\Rset^N$ mapping $\vec{P}$ to an element of the measurement space.
\medskip
Then the cost function is the squared Mahalanobis distance
\begin{equation*}
\| \vec{X} - \vec{f}(\vec{P})\|^2_{\Sigma} =
(\vec{X}-\vec{f}(\vec{P}))^T \Sigma^{-1} (\vec{X}-\vec{f}(\vec{P})).
\end{equation*}
\begin{itemize}
\item If $\vec{f}(\vec{P})$ is \alert{linear}, we obtain a solution using a
generalized pseudoinverse.
\item If $\vec{f}(\vec{P})$ is \alert{nonlinear} (it usually is in vision
problems), we have a \alert{nonlinear least squares} problem that
must be solved iteratively. The most common algorithm is
\alert{Levenberg-Marquardt}.
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Nonlinear least squares}
Here is the basic iterative scheme:
\begin{itemize}
\item Pick some initial solution $\vec{P}_0$ (hopefully close to the
actual solution).
\item Let $k=0$.
\item While $\vec{P}_k$ is not a minimum of $\|\vec{X}-\vec{f}(\vec{P})\|^2_{\Sigma}$, do:
\begin{itemize}
\item Compute a step $\delta\vec{P}$.
\item Let $\vec{P}_{k+1} = \vec{P}_{k} + \delta\vec{P}$.
\item Let $k=k+1$.
\end{itemize}
\end{itemize}
To find the best step $\delta\vec{P}$, we try to jump to the new point
$\vec{P}_k+\delta{\vec{P}}$ that would be optimal under some
simplified assumptions about $\vec{f}$. One technique is to
\alert{linearize} $\vec{f}$ around $\vec{P}_k$.
\end{frame}
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Linearizing $f$}
  For a vector-valued function $\vec{f}(\vec{P}) =
  (f_1(\vec{P}),\ldots,f_N(\vec{P}))^T$ and $\vec{P}$ in the neighborhood of $\vec{P}_0$, we
can approximate $\vec{f}$ by the \alert{first-order Taylor expansion}
\begin{equation*}
\vec{f}(\vec{P}) =
\vec{f}\left(\vec{P}_0+(\vec{P}-\vec{P}_0)\right) =
\vec{f}(\vec{P}_0+\delta\vec{P}) \approx
\vec{f}(\vec{P}_0) + \mat{J}_{\vec{f}}(\vec{P}_0)\delta\vec{P},
\end{equation*}
where $\mat{J}_{\vec{f}}(\vec{P}_0)$ is the \alert{Jacobian} of $\vec{f}$
evaluated at $\vec{P}_0$:
\begin{equation*}
\mat{J}_{\vec{f}}(\vec{P}_0) =
\begin{pmatrix}
\nabla f_1^T(\vec{P}_0) \\
\vdots \\
\nabla f_N^T(\vec{P}_0)
\end{pmatrix}
\end{equation*}
and $\nabla f_i(\vec{P}_0)$ is the \alert{gradient} of $f_i$ evaluated
at $\vec{P}_0$, i.e.,
\begin{equation*}
\nabla f_i(\vec{P}_0) =
\begin{pmatrix}
\frac{\partial f_i}{\partial P_1}(\vec{P}_0),\ldots,
\frac{\partial f_i}{\partial P_M}(\vec{P}_0)
\end{pmatrix}^T.
\end{equation*}
\end{frame}
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Minimizing the linearized $\vec{f}$}
We've approximated $\vec{f}(\vec{P})$ by the linear function
$\vec{f}(\vec{P}_0) + \mat{J}_{\vec{f}}(\vec{P}_0)(\vec{P}-\vec{P}_0)$.
\medskip
Let's assume for the moment that we use the $L_2$ norm
($\mat{\Sigma}=\mat{I}$) and that
$\vec{X}=\vec{0}$. Then minimizing
$\|\vec{X}-\vec{f}(\vec{P})\|^2_{\Sigma}$ just means minimizing
\begin{equation*}
E(\vec{P}) = \sum_{i=1}^N f_i^2(\vec{P}).
\end{equation*}
\medskip
If $\vec{P}$ is a minimum of $E(\vec{P})$, we know that
\begin{equation*}
\vec{F}(\vec{P})=\frac{1}{2} \nabla E(\vec{P}) = \vec{0}.
\end{equation*}
Substituting the definition for $E(\vec{P})$ and writing as a matrix
equation, we can obtain
\begin{equation*}
\vec{F}(\vec{P})=\mat{J}_{\vec{f}}^T(\vec{P})\vec{f}(\vec{P}) = \vec{0}.
\end{equation*}
\medskip
Now we'll use \alert{Newton's method} to find the zero of $\vec{F}$.
\end{frame}
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Minimizing the linearized $\vec{f}$}
  Newton's method for a vector-valued function $\vec{F}$ is to solve
\begin{equation*}
\mat{J}_{\vec{F}}(\vec{P}_k)\delta\vec{P} = -\vec{F}(\vec{P}_k)
\end{equation*}
for $\delta\vec{P}$ then jump to $\vec{P}_{k+1} = \vec{P}_k +
\delta\vec{P}$.
\medskip
In our case it turns out that
\begin{equation*}
\mat{J}_{\vec{F}}(\vec{P}) =
\mat{J}_{\vec{f}}^T(\vec{P})\mat{J}_{\vec{f}}(\vec{P}) + \sum_{i=1}^N
f_i(\vec{P}) \mat{H}_{f_i}(\vec{P}).
\end{equation*}
where $\mat{H}_{f_i}(\vec{P})$ is the \alert{Hessian} (matrix of
second derivatives of $f_i$) evaluated at $\vec{P}$.
\end{frame}
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Minimizing the linearized $\vec{f}$}
Finally, to apply Newton's method we solve the linear system
\begin{equation*}
\left[
\mat{J}_{\vec{f}}^T(\vec{P})\mat{J}_{\vec{f}}(\vec{P}) + \sum_{i=1}^N
f_i(\vec{P}) \mat{H}_{f_i}(\vec{P}) \right] \delta{\vec{P}} =
-\mat{J}^T_{\vec{f}}(\vec{P})\vec{f}(\vec{P}).
\end{equation*}
\medskip
Crazy! Let's do an example in one dimension to get the idea.
\end{frame}
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Minimizing the linearized $\vec{f}$ (example)}
\textbf{Example in one dimension.}
\medskip
Let $f(p) = p^2-4p+5$. We know the minimum is at $p=2$.
\medskip
The ``Jacobian'' in one dimension is actually just the derivative:
$$ \mat{J}_f(p) = \begin{bmatrix} f'(p) \end{bmatrix} =
\begin{bmatrix} 2p-4 \end{bmatrix}. $$
The error function is
$$ E(p) = f^2(p), $$
so the function we want to find the zero of using Newton's method is
$$ F(p) = \frac{1}{2}\nabla E(p) = \mat{J}_f^T(p)f(p) = (2p-4)(p^2-4p+5) .$$
\end{frame}
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Minimizing the linearized $\vec{f}$ (example)}
\textbf{Example in one dimension, continued.}
\medskip
To find the zero of $F(p)$, we need
\begin{align*}
\mat{J}_F(p) & = \mat{J}^T_f(p)\mat{J}_f(p) +
    \sum_{i=1}^N f_i(p)\mat{H}_{f_i}(p) \\
& = (2p-4)^2 + (p^2-4p+5)\cdot 2 \\
& = 6p^2-24p+26 .
\end{align*}
then we solve the linear system
$$ \mat{J}_F(p) \delta p = -F(p) .$$
\end{frame}
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Minimizing the linearized $\vec{f}$}
Suppose our initial guess for the parameter is $p = 4$.
\bigskip
\centerline{\includegraphics[width=2.8in]{fig1}}
\bigskip
Verify that $\delta p = -10/13 \approx -0.76923$.
\end{frame}
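\begin{frame}[fragile]
  \frametitle{Iterative minimization}
  \framesubtitle{The 1D example in code}
  The one-dimensional example above, iterated in Octave (the anonymous
  functions are ours; the first step reproduces $\delta p = -10/13$):
{\scriptsize
\begin{verbatim}
f  = @(p) p.^2 - 4*p + 5;
Jf = @(p) 2*p - 4;                 % "Jacobian" (derivative) of f
F  = @(p) Jf(p) .* f(p);           % the function whose zero we seek
JF = @(p) Jf(p).^2 + f(p) .* 2;    % Hessian of f is the constant 2
p = 4;                             % initial guess
for k = 1:6
  dp = -F(p) / JF(p);              % solve J_F(p) dp = -F(p)
  p  = p + dp;
end
p                                  % converges to 2, the minimizer of f^2
\end{verbatim}
}
\end{frame}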
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Minimizing the linearized $\vec{f}$}
Problem with Newton's method: the Hessians
\begin{equation*}
\begin{bmatrix}
\frac{\partial^2 f_i}{\partial p_1^2} &
\frac{\partial^2 f_i}{\partial p_1 \partial p_2} & \cdots \\
\vdots & \vdots & \vdots \\
\frac{\partial^2 f_i}{\partial p_M \partial p_1} &
\frac{\partial^2 f_i}{\partial p_M \partial p_2} & \cdots
\end{bmatrix}
\end{equation*}
are tedious and expensive to calculate, especially if $M$, the
dimensionality of the parameter vector, is large.
\end{frame}
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Minimizing the linearized $\vec{f}$}
Because of the difficulty of calculating Hessians we typically use
quick-and-dirty approximations rather than explicitly calculate them.
\medskip
The \alert{Gauss-Newton} algorithm drops the second-order terms
entirely.
\medskip
The \alert{Levenberg-Marquardt} algorithm approximates the second
order terms by a scaled identity matrix:
\begin{equation*}
\left[ \mat{J}_{\vec{f}}^T(\vec{P})\mat{J}_{\vec{f}}(\vec{P}) +
\mu\mat{I} \right] \delta{\vec{P}} =
-\mat{J}^T_{\vec{f}}(\vec{P})\vec{f}(\vec{P}).
\end{equation*}
where the parameter $\mu$ is adapted during optimization.
\medskip
Lucky for us, Levenberg-Marquardt is implemented in many packages:
Matlab {\tt lsqnonlin()}, Octave {\tt leasqr()}, and {\em Numerical
Recipes in C} {\tt mrqmin()}.
\medskip
See the text, Appendix 6, for information on adapting
Levenberg-Marquardt to very large problems.
\end{frame}
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Iterative minimization applied to homography estimation}
In the case of \alert{homography estimation},
we have a set of 2D coordinates
of corresponding points $\vec{x}_i$ and $\vec{x}_i'$, so the
dimensionality of the \alert{measurement space} is $N=4n$ and the
measurement is a vector $\vec{X} \in \Rset^N$.
\medskip
Suppose our \alert{parameterization} was to choose a set of points
$\hat{\vec{x}}_i$ in the first image, then choose a homography
$\mat{H}$.
\begin{itemize}
\item The corresponding points
$\hat{\vec{x}}_i'=\hat{\mat{H}}\hat{\vec{x}}_i$ would then be
\alert{fixed}.
\item The \alert{parameter vector} $\vec{P} \in \Rset^M$, then, would
contain the $2n$ parameters of $\hat{\vec{x}}_i$ and the 9
parameters of $\hat{\mat{H}}$, so $M=2n+9$.
\item The resulting \alert{model} (the set of measurements in
  $\Rset^N$ that can be generated with our parameterization) is a
$2n+8$ dimensional submanifold $S \subset \Rset^N$.
\end{itemize}
\medskip
[Analogy: think about a circle as a 1D submanifold of $\Rset^2$.]
\end{frame}
\begin{frame}
\frametitle{Iterative minimization}
\framesubtitle{Iterative minimization applied to homography estimation}
Given the model in the previous page, it is straightforward to write
the mapping
\begin{equation*}
\vec{f} : (\vec{h},\hat{\vec{x}}_1,\ldots,\hat{\vec{x}}_n) \mapsto
(\hat{\vec{x}}_1,\hat{\vec{x}}_1',\ldots,\hat{\vec{x}}_n,\hat{\vec{x}}_n')
\end{equation*}
where $\hat{\vec{x}}_i' = \mat{H}\hat{\vec{x}}_i$.
\medskip
The reprojection error cost function becomes
$\|\vec{X}-\vec{f}(\vec{P})\|^2$, which is just the Mahalanobis cost
function with $\mat{\Sigma}=\mat{I}$.
\medskip
This means Levenberg-Marquardt applies. The resulting algorithm is
Hartley and Zisserman's \alert{Gold Standard} MLE algorithm for
estimating $\mat{H}$.
\end{frame}
%--------------------------------------------------------------------
\section{Gold Standard algorithm}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{Gold Standard algorithm}
\framesubtitle{The idea}
The \alert{Gold Standard algorithm} for $\mat{H}$ tries to find
\begin{equation*}
(\hat{\mat{H}},\hat{\vec{x}}_1,\ldots,\hat{\vec{x}}_n) =
\argmax_{\hat{\mat{H}},\hat{\vec{x}}_1,\ldots,\hat{\vec{x}}_n}
P(\vec{x}_1,\vec{x}_1',\ldots,\vec{x}_n,\vec{x}_n' \mid
\mat{H},\hat{\vec{x}}_1,\ldots,\hat{\vec{x}}_n)
\end{equation*}
\medskip
We know that maximizing the likelihood in the equation, assuming
Gaussian measurement error, means minimizing the reprojection error
\begin{equation*}
\sum_i d(\vec{x}_i, \hat{\vec{x}}_i)^2 +
d(\vec{x}_i', \hat{\mat{H}}\hat{\vec{x}}_i)^2
\end{equation*}
\medskip
This is a nonlinear least squares problem, which Levenberg-Marquardt
can solve, but we need an \alert{initial solution}, which we can
obtain using the DLT.
\end{frame}
\begin{frame}
\frametitle{Gold Standard algorithm}
\framesubtitle{The algorithm}
\begin{block}{Gold Standard for $\mat{H}$: Objective}
Given $n>4$ image point correspondences $\{\vec{x}_i \leftrightarrow
\vec{x}_i' \}$, determine the maximum likelihood estimate
$\hat{\mat{H}}$.
\end{block}
\begin{block}{Gold Standard }%for $\mat{H}$: Algorithm}
\begin{itemize}
\item[(i)] Compute an initial estimate of $\hat{\mat{H}}$ using
the normalized DLT.
\item[(ii)] Compute an initial estimate of the subsidiary variables
$\{\hat{\vec{x}}_i\}$ using $\{\vec{x}_i\}$ (see text for a better
way).
\item[(iii)] Minimize the cost
\begin{equation*}
\sum_i d(\vec{x}_i, \hat{\vec{x}}_i)^2 +
d(\vec{x}_i', \hat{\mat{H}}\hat{\vec{x}}_i)^2
\end{equation*}
over $\hat{\mat{H}},\hat{\vec{x}}_1,\ldots,\hat{\vec{x}}_n$, using
Levenberg-Marquardt over $2n+9$ variables: $2n$ for the points
$\{\hat{\vec{x}}_i\}$ and 9 for the homography matrix
$\hat{\mat{H}}$.
\end{itemize}
\end{block}
\end{frame}
\begin{frame}
\frametitle{Gold Standard algorithm}
\framesubtitle{Example code}
For example code for the Gold Standard
algorithm in Octave, see \texttt{gs\_demo.m} on
the course Web site.
\medskip
Note that if you're using Matlab you can use the Matlab port of the
Octave \texttt{leasqr()} function or use the Matlab Optimization Toolbox
function \texttt{lsqnonlin()}. But be careful as the two functions work
a bit differently.
\end{frame}
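\begin{frame}[fragile]
  \frametitle{Gold Standard algorithm}
  \framesubtitle{Reprojection-error residuals: a sketch}
  A hedged Octave/Matlab sketch of a residual function for step (iii) of the
  algorithm; the parameter packing is our own choice, not that of
  \texttt{gs\_demo.m}, and the divisions assume automatic broadcasting.
{\scriptsize
\begin{verbatim}
% P = [h(1:9); xhat_1; yhat_1; ...; xhat_n; yhat_n]
% x, xp: 2xn measured inhomogeneous points
function r = reproj_residuals(P, x, xp)
  n   = size(x, 2);
  H   = reshape(P(1:9), 3, 3)';
  xh  = reshape(P(10:end), 2, n);          % estimated points xhat_i
  xph = H * [xh; ones(1, n)];              % xhat_i' = H xhat_i
  xph = xph(1:2,:) ./ xph(3,:);
  r   = [x(:) - xh(:); xp(:) - xph(:)];    % stacked residuals
end
% With the Optimization Toolbox, for example:
%   P = lsqnonlin(@(P) reproj_residuals(P, x, xp), P0);
\end{verbatim}
}
\end{frame}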
%--------------------------------------------------------------------
\section{Robust estimation}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{Introduction}
The Gold Standard algorithm is optimal if the \alert{measurement
error} for the corresponding points $\vec{x}_i \leftrightarrow
\vec{x}_i'$ is actually \alert{Gaussian}.
\medskip
In practice, though, the points and their correspondences are obtained
through an \alert{automatic} procedure which makes \alert{mistakes}.
\medskip
These mistakes, or \alert{outliers}, will severely disrupt our
estimates, so they should be removed.
\medskip
We seek to obtain a set of \alert{inliers} that will be used for
estimation and a set of \alert{outliers} that will be ignored.
\medskip
This task is called \alert{robust estimation} because we want the
estimation method to be \alert{robust} to \alert{outliers} following
an unmodeled error distribution.
\end{frame}
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{RANSAC motivation}
Example: fitting a line $x'=ax+b$ to a set of points.
\medskip
\begin{columns}
\column{2.2in}
\myfig{2.1in}{HZ-fig3-7a}{Least squares fit is skewed by outliers.}
\column{2.2in}
\myfig{2.1in}{HZ-fig3-7b}{RANSAC support for two candidate lines.}
\end{columns}
\medskip
The \alert{RANSAC} (Random Sample Consensus, Fischler and Bolles,
1981) algorithm is one of the many robust methods to solve this
problem.
\end{frame}
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{RANSAC idea}
The idea of RANSAC is that we only need two points to determine a
line.
\medskip
So to begin, we pick two points \alert{at random} to define a line.
\medskip
The \alert{support} for this line is the number of points that lie
within a distance threshold $t$ of that line.
\medskip
We repeat for a while, and the \alert{line with the most support} is
deemed best.
\medskip
The points within the threshold are called \alert{inliers} and they
are said to make up the \alert{consensus set}.
\medskip
In the figure, we see that the line $\overline{\vec{a}\vec{b}}$ has a
support of 10 but the line $\overline{\vec{d}\vec{e}}$ has a support
of only 2. We would select $\overline{\vec{a}\vec{b}}$ as a better
fit to the data than $\overline{\vec{d}\vec{e}}$.
\end{frame}
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{RANSAC algorithm}
Hartley and Zisserman's (2004) adaptation of Fischler and Bolles'
(1981) RANSAC:
\begin{block}{RANSAC: Objective}
Robust fit of a model to a data set $S$ which contains outliers
\end{block}
\begin{block}{RANSAC: Algorithm}
\begin{itemize}
\item[(i)] Randomly select a sample $s$ from $S$ and instantiate the
model from $s$.
\item[(ii)] Find the consensus set (inliers) $S_i$ within distance
threshold $t$ of the model.
\item[(iii)] If $|S_i|\ge T$ re-estimate the model using all of the
points in $S_i$ and terminate; otherwise repeat from (i).
\item [(iv)] After $N$ trials, select the largest consensus set $S_i$
and re-estimate the model using all of the points in $S_i$.
\end{itemize}
\end{block}
\end{frame}
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{RANSAC parameters}
RANSAC has three \alert{free parameters}:
\begin{itemize}
\item $t$: the \alert{distance threshold},
\item $T$: the \alert{minimum number of inliers} for early termination,
\item $N$: the \alert{number of samples}.
\end{itemize}
\medskip
$t$ can be determined empirically, or, if the \alert{error
distribution} is \alert{known} to be Gaussian with standard
deviation $\sigma$, a 95\% or similar confidence interval can be
calculated.
\medskip
See Table 4.2 in the text for reasonable values of $t$ for various
vision problems.
\end{frame}
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{RANSAC parameters}
$N$ can be determined empirically, or if the proportion $w$ of inliers
is approximately known, we can choose $N$ giving (e.g.) a 99\%
probability that \alert{on some iteration} we will choose a sample
containing \alert{inliers only}.
\medskip
Example: in homography estimation our sample size would be 4. If we
assume a 50\% outlier rate, we should select $N=72$ samples to assure
a 99\% probability of sampling at least one set with 4 inliers.
\medskip
See Table 4.3 in the text for some example values and how to calculate
$N$ in general.
\medskip
$T$, the acceptable consensus set size, should be approximately the
\alert{number of inliers} thought to be in the data.
\medskip
For example, in the case of homography estimation, if we have 100
correspondences and a 50\% outlier rate, we should let $T=50$.
\end{frame}
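\begin{frame}[fragile]
  \frametitle{Robust estimation}
  \framesubtitle{Computing $N$: a sketch}
  The usual sample-count calculation, in Octave; the specific numbers
  reproduce the example above and are otherwise arbitrary.
{\scriptsize
\begin{verbatim}
p = 0.99;    % desired probability of at least one all-inlier sample
w = 0.5;     % assumed inlier fraction (50% outliers)
s = 4;       % sample size for homography estimation
N = ceil(log(1 - p) / log(1 - w^s))   % gives N = 72
\end{verbatim}
}
\end{frame}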
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{Adaptive version of RANSAC for unknown $w$}
One variant, when the percentage of inliers $w$ is unknown, is
  \alert{adaptive RANSAC}:
\begin{itemize}
\item Initialize $N=\infty$.
\item While running RANSAC, decrease $N$ whenever you obtain a sample
with a bigger consensus set than previously seen.
\item Terminate after $N$ iterations.
\end{itemize}
\medskip
A bigger consensus set means $w$ is bigger
than previously thought, so $N$ need not be so large.
\end{frame}
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{Robust maximum likelihood estimation}
Note that step (iv) of RANSAC was to re-estimate the model from
\alert{all of the points} in $S_i$. We should use maximum likelihood
in this case.
\medskip
\alert{Problem}: the set of inliers could \alert{change} after we
compute the new maximum likelihood model.
\medskip
We could just \alert{accept the estimate anyway}.
\medskip
Some approaches \alert{recompute the inliers} after obtaining the
maximum likelihood model then \alert{repeat} maximum likelihood model
estimation until the consensus set converges.
\medskip
See text for detailed discussion and other alternative approaches.
\end{frame}
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{Using RANSAC to estimate $\mat{H}$}
\begin{block}{Automatic $\mat{H}$ estimation: Objective}
Given two images, compute the homography.
\end{block}
\begin{block}{Automatic $\mat{H}$ estimation: Algorithm}
\begin{itemize}
\item[(i)] Compute a set of \alert{interest points} in each image.
\item[(ii)] Compute \alert{putative correspondences} between the point
sets.
\item[(iii)] \alert{RANSAC robust estimation}: Repeat for $N$ samples,
where $N$ is determined adaptively as previously described:
\begin{itemize}
\item[(a)] Select a random sample of 4 correspondences and compute
$\mat{H}$.
\item[(b)] Calculate the distance $d_{\perp}$ for each putative
correspondence.
\item[(c)] Find the number of inliers for which $d_{\perp} < t =
\sqrt{5.99}\sigma$ pixels.
\end{itemize}
\item[(iv)] Re-estimate $\mat{H}$ using the inliers and the Gold
    Standard algorithm.
\item[(v)] (Optional) use the new $\mat{H}$ to recompute the matching set of
interest points and repeat from (iv) until convergence.
\end{itemize}
\end{block}
\end{frame}
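\begin{frame}[fragile]
  \frametitle{Robust estimation}
  \framesubtitle{A RANSAC loop for $\mat{H}$: a sketch}
  An illustrative Octave sketch of step (iii), reusing the earlier
  \texttt{dlt()} sketch and the symmetric transfer distance (our names; no
  guided matching or re-estimation here, and broadcasting is assumed):
{\scriptsize
\begin{verbatim}
% x, xp: 3xn putative correspondences; t: distance threshold; N: samples
function [Hbest, inliers] = ransac_h(x, xp, t, N)
  n = size(x, 2);  best = 0;  Hbest = eye(3);  inliers = [];
  for k = 1:N
    idx = randperm(n);  s = idx(1:4);        % random minimal sample
    H   = dlt(x(:,s), xp(:,s));
    f   = H * x;   f = f(1:2,:) ./ f(3,:);   % H x_i
    b   = H \ xp;  b = b(1:2,:) ./ b(3,:);   % H^-1 x_i'
    d2  = sum((xp(1:2,:)./xp(3,:) - f).^2, 1) + ...
          sum((x(1:2,:) ./x(3,:)  - b).^2, 1);
    inl = find(d2 < t^2);
    if numel(inl) > best
      best = numel(inl);  Hbest = H;  inliers = inl;
    end
  end
end
\end{verbatim}
}
\end{frame}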
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{Using RANSAC to estimate $\mat{H}$}
There are two implementation details that need consideration: how to
measure the distance and how to select the samples.
\begin{itemize}
\item For distance, the \alert{symmetric transfer error}
$d^2_{\text{transfer}} = d(\vec{x},\mat{H}^{-1}\vec{x}')^2 +
d(\vec{x}',\mat{H}\vec{x})^2$ is appropriate since it is easy to
compute. Reprojection error is better but expensive.
\item Widespread distribution of the samples is good, to ensure good
interpolation in the rest of the image. The sampler can be biased
to pick points in different regions of the image rather than
uniformly.
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{Example results}
Example initial images (Hartley and Zisserman, 2004, Fig.\ 4.9):
\medskip
\begin{columns}[T]
\column{2.25in}
\myfig{2.2in}{HZ-fig3-9a}{Image 1}
\column{2.25in}
\myfig{2.2in}{HZ-fig3-9b}{\parbox{2in}{Image 2, related by rotation
around camera center}}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{Example results}
Detected corners, about 500 on each image.
\begin{columns}[T]
\column{2.25in}
\myfig{2.2in}{HZ-fig3-9c}{}
\column{2.25in}
\myfig{2.2in}{HZ-fig3-9d}{}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{Example results}
Initial set of 268 correspondences obtained by SSD of image patches
around the corners:
\medskip
\begin{columns}[T]
\column{2.25in} \myfig{2.2in}{HZ-fig3-9e}{\parbox{2in}{268 putative
correspondences, Hartley and Zisserman (2004), Fig.\ 4.9(e)}}
\column{2.25in} \myfig{2.2in}{HZ-fig3-9f}{\parbox{2in}{117/268
outliers, Hartley and Zisserman (2004), Fig.\ 4.9(f)}}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{Example results}
Final set of 262 correspondences after RANSAC, guided matching, and MLE.
\medskip
\begin{columns}[T]
\column{2.25in} \myfig{2.2in}{HZ-fig3-9g}{\parbox{2in}{151 inliers
consistent with $\mat{H}$ found by RANSAC, Hartley and Zisserman
(2004), Fig.\ 4.9(g).}} \column{2.25in}
\myfig{2.2in}{HZ-fig3-9h}{\parbox{2in}{Final set of 262
correspondences after guided matching and MLE beginning from the
RANSAC solution, Hartley and Zisserman (2004), Fig.\ 4.9(h).}}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{Robust estimation}
\framesubtitle{Implementation}
See the course Web site for a test using OpenCV, its built-in
\alert{Shi-Tomasi feature detector} and its built-in RANSAC $\mat{H}$
estimation function.
\medskip
See the Torr toolbox at {\small
\url{http://cms.brookes.ac.uk/staff/PhilipTorr/Code/master_code.htm}}
for a Matlab implementation using the \alert{Harris corner detector}.
\medskip
When there is significant rotation and/or scale in the image plane,
the \alert{SIFT} (Scale Invariant Feature Transform) feature detector is much
better than Harris or Shi-Tomasi.
\medskip
There is a large literature on feature detectors enabling
correspondence estimation over multiple views.
\medskip
  Transformation-invariant matching will be very important when we move to
multiple views of a general scene and estimate the fundamental matrix
$\mat{F}$ rather than a homography $\mat{H}$.
\end{frame}
\end{document}
| {
"alphanum_fraction": 0.6971807099,
"avg_line_length": 29.5687315634,
"ext": "tex",
"hexsha": "e2e9161e00bdccc50f7f2ede51ab59def69b343f",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2021-06-09T01:54:41.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-07-24T02:21:33.000Z",
"max_forks_repo_head_hexsha": "08b1f62cd200601c1aad57c6382d20e95506ea2f",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "Alisa-Kunapinun/CV",
"max_forks_repo_path": "Lectures/03-Estimation/03-Estimation.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "08b1f62cd200601c1aad57c6382d20e95506ea2f",
"max_issues_repo_issues_event_max_datetime": "2021-06-23T04:19:55.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-06-23T04:19:55.000Z",
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "Alisa-Kunapinun/CV",
"max_issues_repo_path": "Lectures/03-Estimation/03-Estimation.tex",
"max_line_length": 90,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "08b1f62cd200601c1aad57c6382d20e95506ea2f",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "Alisa-Kunapinun/CV",
"max_stars_repo_path": "Lectures/03-Estimation/03-Estimation.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-24T20:05:00.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-07-21T02:47:47.000Z",
"num_tokens": 15884,
"size": 50119
} |
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[
]{article}
\title{Analysis/results Replication of Mani et al.~2013 - For Open
Science course}
\author{}
\date{\vspace{-2.5em}}
\usepackage{amsmath,amssymb}
\usepackage{lmodern}
\usepackage{iftex}
\ifPDFTeX
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provide euro and other symbols
\else % if luatex or xetex
\usepackage{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
\fi
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
\KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
pdftitle={Analysis/results Replication of Mani et al.~2013 - For Open Science course},
hidelinks,
pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\usepackage[margin=1in]{geometry}
\usepackage{graphicx}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{-\maxdimen} % remove section numbering
\ifLuaTeX
\usepackage{selnolig} % disable illegal ligatures
\fi
\begin{document}
\maketitle
\hypertarget{r-markdown}{%
\subsection{R Markdown}\label{r-markdown}}
This document demonstrates how the analysis and results section of
the replication paper will be conducted. Everything is adapted from the
original Mani et al.~(2013) paper.
Three variables are added: total\_iq\_score, a simple sum of the
number of correct answers in the IQ test; priming, which
identifies whether the participant saw the high (coded as 1) or low (coded as
0) priming; and a variable indicating whether the participant had a household
income, divided by the square root of the household size, above (1) or
below (0) the median income level (divided by household size) of the
sample. The last variable was used in the original study as a proxy for
whether the participants were ``poor'' or ``rich''.
In the original paper the main analysis of interest for the shopping
mall study was a two-way ANOVA, so this is also used here (via the
aov() function).
\includegraphics{Markdown-mani-et-al-replication_files/figure-latex/analysis-1.pdf}
\hypertarget{temporary-outline-notes}{%
\subsection{Temporary outline notes}\label{temporary-outline-notes}}
\hypertarget{introduction}{%
\subsection{Introduction}\label{introduction}}
We tried to replicate experiment 4 from study 1 in Mani et al.~(``Poverty
impedes cognitive function'') -- the shopping mall study where people read
each scenario and then responded to an IQ test.

\hypertarget{methods}{%
\subsection{Methods}\label{methods}}

We increased the sample size to 500 and collected data via prolific.co. This
differs from the original, where data were collected in person in a shopping
mall. Only Americans were included, with a good spread of income levels.

\hypertarget{tests}{%
\subsubsection{Tests}\label{tests}}

We did not use Raven's matrices but a very similar instrument -- the Hagen
matrices -- with six rounds of increasing difficulty. We note that this
differs from the original, where participants received 3 randomly selected
rounds.
\hypertarget{results}{%
\subsection{Results}\label{results}}
We found no effects in an ANOVA.
\hypertarget{discussion}{%
\subsection{Discussion}\label{discussion}}
Perhaps something is wrong with the priming, or perhaps it simply does not
work -- although it did in the original. Also, Bickel et al.~(2016) managed
to get effects with a somewhat similar priming (a negative income shock in a
short narrative text, with participants asked to simply think about it for a
while). That study concerned temporal discounting, however, and not the IQ
effect described here. The IQ effect is probably not there.
\end{document}
| {
"alphanum_fraction": 0.7756227031,
"avg_line_length": 39.184,
"ext": "tex",
"hexsha": "3d2f9e35e0d3b9809f13847ee0ef1b33fe8964db",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "48f47e2ab582bbd9ddf8f437f5fe70c91ffef2a4",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "eriksturen/scarcityreplication",
"max_forks_repo_path": "Markdown-mani-et-al-replication.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "48f47e2ab582bbd9ddf8f437f5fe70c91ffef2a4",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "eriksturen/scarcityreplication",
"max_issues_repo_path": "Markdown-mani-et-al-replication.tex",
"max_line_length": 394,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "48f47e2ab582bbd9ddf8f437f5fe70c91ffef2a4",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "eriksturen/scarcityreplication",
"max_stars_repo_path": "Markdown-mani-et-al-replication.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1372,
"size": 4898
} |
\subsection{Coherence and Concepts}
\p{In Formal Concept Analysis, a concept is defined by the combination of
its instances and its indicators. Two distinct concepts can share
the same extension: the set of black US presidents
equals the set of Hawaian-born presidents.
Within a collection of objects and
properties, a \q{formal concept} is a set with both objects and
properties such that every object bears every property. In this
formal setting, a concept is a statistical artifact,
which may or may not correspond with concepts of thought and
language (despite its well-defined extent, in the singular person of Barack
   Obama, \i{black Hawaiian president} probably does not express a concept
with cognitive value beyond its constituent properties). Such analysis
however suggests a more general truth, that concepts depend on
both intension and extension; to be acquainted with a
concept is to understand to some degree both a set of instances and
the reasons or properties for why they belong.
}
\p{Moreover, within real-world concepts \mdash{}
typically imprecise and dynamically
evolving, insofar as they are cognitive and communicative tools
\mdash{} intension and extension evolve in consort. Borderline cases
need to be either excluded from or included into a concept's reach,
and this choice forces an evolution in intension. Suppose we
start with a simple, provisional account for a familiar concept
\mdash{} a \i{house} is a place of residence. We can then consider
places which we may or may not consider houses \mdash{} an apartment,
apartment building, hotel, cabin, tent, tree house, the White House.
Based on how we classify these examples, our posited \q{house
indicators} evolve; e.g., a house is a place of permanent, year-round
residence for at most a few families, where habitation (as opposed
to, say, government or commerce) is its primary purpose. Such
thought experiments do not \i{define} concepts, but perhaps they retrace
their history as linguistic and cognitive phenomena within a relevant community.
}
\p{Although I will consider how conceptual attribution suggests patterns
of \i{functional organization} within objects thereby identified, we
can also consider the \q{function} of concepts as serving thought:
what \i{roles} do typical concepts serve within mental life overall?
Given the intension/extension co-evolution, we recognize at least
two distinct roles; a concept both characterizes an \i{extension},
a set of instances, and also an \i{intension}, a set of properties
or indicators which provide a rationale, and a suggestion of
further characterization, for identifying an instance with a concept.
To assert that something tokenizes a concept is not only to
provide a very rough suggestion of what it is like, but to imply
a strategy or \q{script} for providing more detail. Once I
identify something as a \i{house}, I imply a kind of
organization \mdash{} a yard or some sort of surrounding property;
an inside and outside; an external style and architecture;
a set of several internal separated but interconnected rooms.
I also imply (in contrast to a motor home) that the house
is built in one fixed place and has a permanent location,
perhaps a postal address, that visiting someone at their
home means returning to the same location on each occasion.
But implying this general schema also \q{initiates} a script
for further specification; identifying something with a
concept is more often the start of a conceptual and linguistic
process, rather than a conclusion. I can learn the address
of your new house, directions to get there, you can describe
its various features, its yard and interior design and so forth.
}
\p{Concepts play different roles, and these can be \q{activated} in
turn by different situations, in particular by different
linguistic and grammatic formations. Similar meanings can be
achieved by subtle variation in which conceptual roles are
suggested, providing a case study for how grammar connotes role.
Consider Ronald Langacker's comparison of sentences like
\begin{sentenceList}\sentenceItem{} \label{itm:threes}Three times, students asked an interesting question.
\sentenceItem{} \label{itm:threean}Three times, a student asked an interesting question.
\end{sentenceList}
In (\ref{itm:threes}), the plural \i{students} reflects how a type of noteworthy
situation occurred multiple times; whereas the singular in (\ref{itm:threean}) reflects
how, on each occasion, one student was involved. The \q{student} in
(\ref{itm:threean}) does not designate a particular person, but is rather a generic
token of the concept \i{student} \q{conjured}, as Langacker says, to
provide a kind of unspecified cognition of a conceptual tokening without
implying reference to some specific token. Along these lines, consider
\begin{sentenceList}\sentenceItem{} \label{itm:rhinos}Rhinos are mammals.
\sentenceItem{} \label{itm:arhino}A rhino is a mammal.
\sentenceItem{} \label{itm:therhino}The rhino is a mammal.
\end{sentenceList}
While (\ref{itm:rhinos}) refers via the
set of all rhinos (each of them is a
   mammal), (\ref{itm:arhino}) instead
   \q{conjures} a \q{generic} rhino, from an
abstract \q{plane} \mdash{} again using Langacker's terminology \mdash{}
to make the point that
any conceivable rhino is a mammal. That is, mammalness is a generic feature
of the entire species, it is not a quality which depends on the nature of
any given instance. In (\ref{itm:therhino}), the
signification by contrast selects from the
\q{plane} of all species types; so \i{rhino} here applies in
the guise of naming a discrete element in categorizing thought. These examples
illustrate roles of \i{selecting an extension}, \i{expressing intension}
\mdash{} in the specific sense of concept-intension; that is, envisioning a
   representative abstract case which exemplifies the conjunction of properties
   and indicators that are a concept's signature \mdash{} and \i{naming a type},
referring into a space of \i{kinds} insofar as these are present in thought
as discrete units.
}
\p{Which of these roles is most clearly \q{activated},
given semantic and grammatic cues, then shapes the surrounding discourse:
\begin{sentenceList}\sentenceItem{} \label{itm:related}The rhino is related to the horse.
\sentenceItem{} \label{itm:young}Young rhinos are threatened by poachers.
\sentenceItem{} \label{itm:park}Rhinos in that park are threatened by poachers.
\end{sentenceList}
\noindent Here (\ref{itm:related}) forefronts rhinos as a natural kind, invoking a familiar
relation between species; (\ref{itm:young}), I would argue, invokes a more
intensional sense of the concept insofar as it adds a further
specifier to suggest a narrower concept (a young rhino has the
properties associated with rhinos in general, and then further those
associated with young animals, such as being exceptionally vulnerable and
being cared for by parents); while (\ref{itm:park}) seems to construct a narrower
concept by adding specifiers to \i{extent}, not \i{intent}. There
is no suggestion that rhinos \q{in that park} have any further
resemblance aside from their being there; as a result, (\ref{itm:park}) comes
to designate some set of animals by invoking the \i{set} of
rhinos and then adding semantics to focus on some select portion of that set.
In these examples the concept \i{rhino} plays different semantic
roles \mdash{} guise of a species, a bundle of typical properties and indicators,
   and a set of jointly classified individuals \mdash{} corresponding to
different cognitive roles.
}
\p{This multiplicity of roles precludes simplistic theorizations of
concepts as \i{just} instance-sets, or property-bundles, or taxonomic entries.
Rather than a single (meta-) definition, a general theory of concepts should
begin by classifying different ways that concepts are used. An initial
distinction is that a concept can \q{profile} (another Langacker term)
\i{either} an individual \i{or} a collection of individuals,
including (but not necessarily) a set of all (actual or possible) instances.
In the latter guise a concept (alone or in consort with other semantic
elements) demarcates a collection from some more general or expansive collection.
In the former case a concept \i{singles out} an individual; as I discussed earlier,
this implies some mixture of continuity and discontinuity, insofar as the
individual is (to some degree) posited as separable (in an act of cognition
and/or perception) from its surroundings and also as internally connected or
integrated. There is some \q{theory} of the \i{internal coherence}
of the individual, of how it is appropriate in some context to consider it
   a discrete unit \mdash{} acting as a singular whole, causally integrated, or
in some other fashion disposed to function unitarily.
}
\p{Certainly the \q{individual coherence} of a totality may be provisional. Some
collections are bound together only by people's desire to group them, e.g.,
the books in a library. Nevertheless, even this \q{external} connectedness
is caussally efficacious, with non-neglibable effects of cohering the integral
whole more than a random baseline: books in a library are much more likely to
remain spatially proximate than a random pair of books which happen to be in
some one building at the same time. On the other hand, the degree to which
parts \i{do} cohere into a whole is an essential conceptual detail, and,
once a whole is characterized as an individual tokenizing a concept, part of that
concept's role is to suggest the degree and nature of this coherence.
For example, the United Nations is a more diffuse collection of political
units than the United States. Nevertheless, there is a well-established
semantics where the UN functions as a singular entity which can, say,
pass a resolution.
On the other hand, insofar as a totality has a strong degree of
internal coherence, this may result from a complex causal or
processual integration between many parts, or conversely by a relative lack of
internal complexity or \i{internal structuration}. A torpedo and a
school of fish may both follow a quantifiable trajectory through
   water, but the directedness of the latter's motion depends on
   complex biosocial synchronization much different from the
   former's straightforward physics.
}
\p{For each conceptualized
totality, then, there is a measure of Individual Coherence
   and of Internal Structuration \mdash{} which can be quite independent
   of one another (the \q{IS/IC} scale)
\mdash{} that specifies \i{in what sense} the whole
is a (single, unitary) individual. The concept used to
designate the whole, along with (in a language setting)
surrounding grammar and context, suggests a particular
account of the degree and schematic nature of this individuation.
}
   \p{Grammatic choices, like singular/plural and definite/indefinite article,
play a role here:
\begin{sentenceList}\sentenceItem{} \label{itm:arlake}A rhino is by the lake.
\sentenceItem{} \label{itm:therlake}The rhino is by the lake.
\sentenceItem{} \label{itm:rslake}Rhinos are by the lake.
\sentenceItem{} \label{itm:therslake}The rhinos are by the lake.
\end{sentenceList}
These grammatic variations induce subtle cues to construct or
indicate a \q{plane} from which individuation selects. In
(\ref{itm:arlake}), the hearer's attention is newly directed to something
which, the speaker suggests, either the hearer or both parties
had not previously observed. In other words, (\ref{itm:arlake}) wants to
shift the topic of conversation (even if only temporarily,
so as to register the asserted fact as something collectively
realized) and so functions both to individuate the rhino,
to propose it as a new object collectively recognized as something
in their mutual surroundings, and to shift attention in its direction.
By contrast, the rhino in (\ref{itm:therlake}) has a presumptive
prior acknowledgement in the current discourse; the point of
the definite article is to select the rhino from among the
set of all entities which have in some fashion been
talked about already. These different \q{planes of selection}
carry over to the plural examples, but the designation there
selects a group of animals, not one single individual.
Nevertheless, (\ref{itm:therslake}) implies that the group has some
sort of internal connectedness, perhaps moving roughly
as a unit. By referring back, as does (\ref{itm:arlake}), to an earlier phase
in the discourse, (\ref{itm:therslake}) implies that the group represents (more or less)
\q{the same} rhinos as observed earlier, an effect of comparison
which helps call attention to the rhinos in their totality.
Without the definite article, (\ref{itm:rslake}) is
less specific in asserting such a totality; it may,
but need not, suggest that the rhinos by the lake are situated
in some spatial or interacting formation so that the speakers
should be disposed to consider them collectively. If the speaker
wanted to nudge the emphasis more or less toward this grouping interpretation,
she would have to select some further semantics: e.g.,
\i{A school of rhinos}, or conversely \i{Some rhinos}, are by the lake.
}
\p{In these examples, the concept \i{rhino} does not, on its own, guide one
interpretation or another (singular/plural; more integral or
more scattered); it is rather a resource which is deployed
in a linguistic and communicative setting along with further
details, and the effect of conceptual resonance \i{in context} bears these
further connotations. Whether a concept designates a group or an individual,
it is the speaker (or a person using a concept as a tool in thought)
who uses ideas activated by the concept so as to evoke patterns
by which the selected individual or collection is internally
connected and externally separated. Any conceptualization presents
schema where continuities and discontinuities are mixed, but
in different ways. Suppose a game warden says:
\begin{sentenceList}\sentenceItem{} \label{itm:footprints}Those footprints were left by rhinos.
\sentenceItem{} \label{itm:trail} That trail of footprints was left by rhinos.
\sentenceItem{} \label{itm:tracks} Those tracks were left by rhinos.
\end{sentenceList}
I argued above that the concept \q{footprint} internally suggests both
continuity (of medium) and separation (of shape or contour). Analogously,
a collection of footprints as in (\ref{itm:footprints}) has a presumptive internal
separation into discrete individuals, but enough spatial proximity to be
plausibly grasped as a perceptual group, relative to a reasonably
typical vantage point. The choice of words in (\ref{itm:trail}) implies a further notion
of directedness, and a somewhat different relation of the observer's spatial position
\visavis{} the observed. While the two sentences might describe the
same situation, (\ref{itm:trail}) carries an additional implication that the
footprints suggest some line of direction which extends beyond the speakers'
immediate line of sight, and can perhaps be followed. This directionality
is also implied in (\ref{itm:tracks}), but in that case the internal continuity which
supports this sense of direction is further emphasized. In this context
\q{tracks} can mean not just footprints but also, say, bent grass or hairs
or broken twigs, so that each particular artifact which cumulatively
constitutes the \q{tracks} may have less precise distinctness than
a particular footprint. Moreover, the word \q{tracks} tends overall to
refer to physically continuous structures, like train tracks, whereas
\q{trail} can be used in more metaphorical senses and with greater
sense of internal pauses and separations (a detective \i{on the trail} of a
fugitive; a scientist \i{on the trail} of a discovery).
}
\p{So (\ref{itm:footprints})-(\ref{itm:tracks})
show progressively less attention paid to internal
constituents and more to the nature and direction of a whole,
even if some scenarios tolerate all three sentences.
Each usage would convey slightly different connotations to the
scene at hand. The point of this comparison is that individuation
through concepts depends on properly connoting the patterns of
continuity and discontinuity which define individuals' coherence
and separateness (as well as that of sets of individuals, when
concepts are used to profile collections). A concept implies a
schema both for an individual's internal structuration, and for
its suitability to be conceptually and perceptually isolated
from its surroundings. The function of concepts is, typically,
to \i{set the stage} for further characterization of their tokens,
first by individuating them as points of attention, and then
by implying the presence of \q{scripts} or \q{schema} leading to
more precise descriptions. Both of these roles present challenges
for \i{formal} semantics, if this means defining rigorous or
quasi-mathematical theories of how concepts select a \i{set} of
instances, or of how each instance \i{instantiates} a concept.
The act of linking a concept to an instance is a complex
cognitive/semantic event, one which occurs against the backdrop of a
mental and often dialogic context, and which should be theorized
as one step in an evolving process. A concept is an opening
onto a more detailed (process of) characterization. To say that
something is an instance of some concept is not a simple
act of classification, then, analogous to declaring a variable
in a programming language to be of some data type.
}
\p{Of particular importance, I believe, is the notion that conceptualization
(if we define this as concept-to-token identification) is only one
stage in an extended process. This means that no one single
concept provides a definitive account of an identified instance,
even a provisional or imprecise account. Obviously concepts
operate at some level of coarse-graining and can be combined
together for greater precision. However, if we fail to
theorize conceptualization as \i{process}, we can end up with a
view of concepts as imprecise but, \i{at some degree of detail},
self-contained mental \q{pictures} which cover some range of
possible cases. For each concept, there would correspond
(on this account) a \q{rough} sketch of how a token of the
concept is, insofar as it does bear the concept. The degree
of (im)precision in these \q{sketches} would depend on how
general or specific is the concept itself: scientific terms,
for example, are more specific than everyday words.
But each concept would connote a family of instances whose
features are more clearly or more vaguely aligned with a
set of indicators. Each house is vaguely aligned with a
\q{sketch} of some locale with a yard, a front door, interior
rooms, and so forth; each restaurant is aligned with a
general notion of a place to sit at a table and eat prepared food,
and so forth. The concept leaves particulars imprecise
(what kind of food, what kind of yard) but presents them at a
level of vagueness in which they match all examples. A
concept, in other words, finds a suitable mix of vagueness
and resemblance \mdash{} it abstracts from particular details
enough that all tokens can be considered to resemble each other.
}
\p{While my analysis has likewise argued for \i{characterizing features},
where I differ from this kind of theory is in emphasizing conceptual
imprecision as a matter of \i{process} and not of \i{vagueness},
or concepts as \i{scripts} rather than \i{sketches}. It is misleading,
I believe, to think of a concept as \i{imprecise} in that it seeks
to achieve a degree of nonspecificity which allows many somewhat mutually
resembling things to be grouped together. This notion seems to both
minimize the potential for using particular concepts to initiate
more precise further characterization, and also to overstate the
degree to which concept-tokens resemble each other. As mental
tools concepts are thereby weakened on two fronts: they are neither
precise enough to truly describe their instances, nor general enough
to group together these instances except through some \q{resemblance} between
them. I would argue, however, that real-world concepts both trend
toward a degree of generality which is hard to account for in terms
of token-resemblance, \i{and also} serve within cognitive and linguistic
episodes in which general concepts provide a framework for precisely
describing individuals, up to a degree of resolution suited for a
given thought or dialog. The concept may not internally \i{imply}
these details, but it implies a \i{framework} for accumulating them.
}
\p{If we consider the concept of \i{fly}, for example, we note that it
covers many different cases: the flight of birds, planes, comets, commercial
passengers, leaves, kites, debris, shrapnel, types of aircraft
(\q{that model flew during World War II}), carriers (\q{we fly
Korean Air whenever possible}), their fleet (\q{that company flies the
youngest planes of any discount carrier}). All of these somehow
involve travel through air, so we might say that the role of \q{fly} is to
invoke a highly stylized or fuzzy sketch of \q{movement above the ground};
in other words, to find the point of resemblance among all of these cases.
The different senses of \i{fly} might then be considered \i{subconcepts},
each with a more precise picture, which finds more detailed resemblances
between its tokens. By this account, the concept \q{fly} is really a
loose aggregate of more precise concepts, which manifest as different
senses of the polysemous English word. However, such an analysis
seems to too neatly group these various senses into a conceptual
hierarchy, as if word-senses can assemble into a taxonomy akin to the
classification of living species, each sense finding a degree of
precision and inter-token resemblance relative to its hierarchical position.
I would argue that different \q{senses} of words or concepts do not
generally reveal such straightforward taxonomic patterns, at least
unless (as with biological names for species, genera, etc.) they are
deliberately constructed to do so. In normal usage new senses evolve
gradually and can emerge ad-hoc for some specific language community
(or \q{micro} community). For example, the sense of \q{fly} as
in \q{Commercial Flight} \mdash{} with its typical associations of
buying a ticket, going to an airport, checking baggage, passing
security, and so on \mdash{} emerged only gradually from the more general
sense of flying in an aircraft. Moreover, we can imagine specific
situations where new senses emerge \mdash{} for example, executives
of a multinational company who use the phrase \q{Flying to London}
to mean a trip to specific offices, thereby mixing (in this specific
context) the destination with the purpose of the trip.
}
\p{The fecundity with which these senses arise suggests to me that we should
not consider each \q{sense} as its own (sub)concept; instead, I would
consider \i{fly} to be \i{one} concept, not a loose assortment of
vaguely related concepts. This concept covers many different cases,
but the cases can be compared and characterized according to certain
overall criteria, such as the \i{cause}, \i{reason}, \i{agent}, and
\i{destination} of the flight. Flying \i{debris}, \i{leaves}, and \i{kites}
all rely on the power of wind, but the first two imply a relatively strong gust
to dislodge solid objects naturally resting on the ground, whereas the last
implies a human agent who orients the kite to best gather the wind.
Flying \i{birds} and \i{planes} are both self-powered, but the latter relies
on human agency to provide fuel. The various senses of human flight are
comparable based on the kind of craft used, the relation between passengers,
pilots, and whoever owns the craft (contrast a hobbyist who owns and flies
planes, an air force pilot, a passenger service where flying is a commercial
transaction). Each of these distinctions provides parameters for comparing
different examples of flight. No parameters, except perhaps for the
most general notion of airborne motion, apply to all cases and kinds of
flights; but each case involves some mixture of some relevant parameters,
and each parameter offers a ground of comparison. For example, only
commercial flight involves a ticket, but once this parameter is
\q{activated} we can compare the price of different flights.
}
\p{It is hard to identify isolated \q{subconcepts} within general
flight because different parameters overlap in different
ways. Passenger flight has unique aspects of cost and
value (and whatever legal and social considerations
come along with commercial transactions), but it shares
other parameters with other (but not all) senses:
relatively specific times and destinations of travel,
like the flight of birds but unlike that of leaves or debris;
the distinction between the subject and the agent of flight:
a \i{person} is flying, but by virtue of being on an aircraft,
so the subject/agent relation becomes a conceptual parameter,
which is also present in some other contexts, like cargo.
For an analogous example, we might think to subdivide the
concept of \q{restaurant} into more specific cases \mdash{} after
all, merely expressing the desire to visit a restaurant
in general, or searching for one, can cover a wide spectrum
much of which may not actually reflect the speaker's/searcher's
interest. Someone may want a formal dining experience, a casual
spot for family dinner, a healthy and inexpensive lunch, a
quick meal, or a place to have a snack and go online. Certain
lexemes cover some part of these more specific spectra
\mdash{} bistro, steak house, diner, coffee shop, cafeteria \mdash{} yet none
of these compete with \q{restaurant}'s dominance of the relevant semantic
territory: one will often use the more general word even when one
of the more specific ones apply. I believe this can be explained,
in part, because the more general concept can be narrowed in
several ways at once, thereby providing a more effective point
of origin for a specified description; providing more
\q{avenues} for elaboration. For example, a \q{Five Dishes and Soup}
spot in a Chinatown might be considered a Chinese restaurant and a
cafeteria, but there does not appear to be an established
usage \q{Chinese Cafeteria}. Even if \q{cafeteria} might be
plausibly defined as any restaurant without table service, it
seems to connote further details about the kind and pricing of the food.
Instead of specifiers offering a clear-cut segmentation of the
\q{restaurant} spectrum, English speakers seem to prefer the more
general usage and to provide further specification by an unfolding,
dialogic, and open-ended process, relying on communicative cues to
converge on an image of the \q{kind of place} some restaurant is,
rather than trusting some narrower lexeme (like \i{cafeteria} or
\i{diner}) to impart these cues via convention. Arguably,
in some specific circumstances, a narrower lexeme has indeed
achieved something like canonical status (\i{pizzeria}, \i{coffee bar});
so a speaker using those terms, amongst a particular language community,
shows confidence that the allusions embedded in them convey a
sufficiently precise picture of the \q{kind of place} being mentioned.
But this specificity stands in semantic relation to the more general term,
even if the latter is replaced on some occasions: the ubiquity of the
word \q{restaurant} adds an extra dimension to a speaker's choice on
some occasion to instead say \i{pizzeria} or \i{coffee bar}. The concept
\q{flight}, I would argue, has a similar semantic ubiquity; it
encompasses a spectrum of cases so broadly that the occasional
choice of a different word (\q{I jetted to London}; \q{The birds migrated
south}) carries the semantics not only of the word used but the dominant
one \i{not} used.
}
\p{What this analysis suggests is that real-world language does not necessarily
show a tendency to semantic specificity, to a preference for more specific
usages and a relegation of more general terms to cases of deliberate
imprecision or abstraction. If a language community had an instinctive
drive toward semantic precision, then dominant but imprecise words like
\q{restaurant} would tend to gradually be replaced in many cases by
narrower concepts, and to be retained mostly when speakers deliberately
intend to avoid any narrower connotations. However, this does not
appear to be the way language in general evolves. I believe that,
to an important degree, this fact can be explained by appeal to
conceptual roles: the point of a concept is not to condense as
precise a picture of its tokens into as concise a semantic unit as
possible, but rather to initiate a further descriptive process
as efficiently as possible.
}
| {
"alphanum_fraction": 0.7853810798,
"avg_line_length": 63.3782608696,
"ext": "tex",
"hexsha": "838dce4a4bfc97e3ec8b9aef3d7a99812b1f6785",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8fd304d7df709d32367e49a98fb99f16162c5477",
"max_forks_repo_licenses": [
"BSL-1.0"
],
"max_forks_repo_name": "ScignScape-RZ/phcg",
"max_forks_repo_path": "nasm/section2.ngml.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8fd304d7df709d32367e49a98fb99f16162c5477",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSL-1.0"
],
"max_issues_repo_name": "ScignScape-RZ/phcg",
"max_issues_repo_path": "nasm/section2.ngml.tex",
"max_line_length": 106,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8fd304d7df709d32367e49a98fb99f16162c5477",
"max_stars_repo_licenses": [
"BSL-1.0"
],
"max_stars_repo_name": "ScignScape-RZ/phcg",
"max_stars_repo_path": "nasm/section2.ngml.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6791,
"size": 29154
} |
%
%$Id: meanflowIntro.tex,v 1.4 2005-08-16 14:42:17 hb Exp $
%
\section{The mean flow model \label{sec:meanflowIntro}}
\subsection{Introduction}
This module contains the definitions of the most important
mean flow variables used in geophysical models. In GOTM, these
are
\begin{itemize}
\item the mean horizontal velocity components, $U$ and $V$
\item the mean potential temperature, $\Theta$, (or the mean buoyancy, $B$)
\item the mean salinity, $S$
\end{itemize}
Note that in general a variable $\phi$ describing a turbulent
field can be decomposed into a mean and a fluctuating part. In GOTM,
we use the notation
\begin{equation}
\label{decomposition}
\phi = \mean{\phi} + \phi'
\comma
\end{equation}
where $\mean{\rule{3mm}{0mm}}$ denotes the ensemble mean and the prime
the fluctuating part. In addition, for brevity, we use the following conventions:
\begin{equation}
\label{decompositionConventions}
\begin{array}{rcl}
U &=& \mean{u} \quad \text{for the x-velocity} \\
V &=& \mean{v} \quad \text{for the y-velocity} \\
P &=& \mean{p} \quad \text{for the pressure} \\
\Theta &=& \mean{\theta} \quad \text{for the potential temperature} \\
B &=& \mean{b} \quad \text{for the buoyancy} \\
S &=& \mean{s} \quad \text{for the salinity}
\end{array}
\end{equation}
Note that, unless explicitly mentioned otherwise, GOTM uses the units kg, m, s, and
K. Further conventions are introduced in the turbulence chapter
\sect{sec:turbulenceIntro}. All operations on these meanflow variables
are executed and coordinated in the {\tt meanflow} module.
\subsubsection{Physics}\label{sec:meanflowIntroPhysics}
Due to the one-dimensional character of GOTM, the state-variables
listed above are assumed to be horizontally homogeneous, depending
only on the vertical $z$-coordinate. As a consequence, all
horizontal gradients have to be taken from observations, or they have
to be estimated, parameterised or neglected.
For example, the surface slopes $\partial_x\zeta$ and
$\partial_y\zeta$ representing the barotropic pressure-gradients may
be determined by means of local observations or results from
three-dimensional numerical models. It is also possible to prescribe a
time series of the near-bed velocity components for reconstructing the
barotropic pressure gradient, see \cite{Burchard99}. The
implementation of these options for the external pressure gradient is
carried out in {\tt extpressure.F90}, described in
\sect{sec:extpressure}. The internal pressure-gradient, which results
from horizontal density gradients, can be prescribed from observations
of horizontal gradients of $\Theta$ and $S$ or from three-dimensional
model results (see {\tt intpressure.F90} in \sect{sec:intpressure}).
These gradients may also be used for horizontally advecting $\Theta$
and $S$ (see \sect{sec:temperature} and \sect{sec:salinity}).
Another option in GOTM for parameterising the advection of $\Theta$
and $S$ is to relax the model results to observations. Evidently, this
raises questions about the physical consistency of the model, but it
might help to provide a more realistic density field for studies of
turbulence dynamics. Nudging is also possible for the horizontal
velocity components. This makes sense in order to initialise inertial
oscillations from observed velocity profiles, see \sect{sec:uequation}
and \sect{sec:vequation}. In the momentum equations, advection and
horizontal diffusion terms are neglected.
In hydrostatic ocean models, the vertical velocity is calculated by
means of the continuity equation, where the horizontal gradients of
$U$ and $V$ are needed. Since these are not available or set to zero,
the assumption of zero vertical velocity would be consistent. In many
applications however, a non-zero vertical velocity is needed in order
to reflect the vertical adiabatic motion of e.g.\ a thermocline. In
GOTM, we have thus included the option of prescribing a vertical
velocity time series at one height level which might be vertically
moving. Vertical velocities at the surface and at the bottom are
prescribed according to the kinematic boundary conditions ($w=0$ at
the bottom and $w=\partial_t\zeta$ at the surface), and between these
locations and the prescribed vertical velocity at a certain height,
linear interpolation is applied, see {\tt updategrid.F90} in
\sect{sec:updategrid}. This vertical velocity is then used for the
vertical advection of all prognostic quantities.
Standard relations according to the law of the wall are used for
deriving bottom boundary conditions for the momentum equations (see
{\tt friction.F90} in \sect{sec:friction}). At the sea surface, they
have to be prescribed or calculated from meteorological observations
with the aid of bulk formulae using the simulated or observed sea
surface temperature (see \sect{sec:airsea}). In {\tt
stratification.F90} described in \sect{sec:stratification}, the
buoyancy $b$ as defined in equation \eq{DefBuoyancy} is calculated by
means of the UNESCO equation of state (\cite{FofonoffMillard83})
or its linearised version. In
special cases, the buoyancy may also be calculated from a simple
transport equation. {\tt stratification.F90} is also used for
calculating the Brunt-V\"ais\"al\"a frequency, $N$.
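As a small, purely illustrative sketch (not the actual GOTM routine), the buoyancy for a
linearised equation of state and the resulting $N^2$ on a discrete water column could be
computed as follows; the reference density and the sample profile are assumptions made only
for this example:
\begin{verbatim}
import numpy as np

G = 9.81        # gravitational acceleration (m/s^2)
RHO_0 = 1027.0  # reference density (kg/m^3), assumed for this example

def buoyancy(rho):
    # linearised buoyancy b = -g (rho - rho_0) / rho_0
    return -G * (rho - RHO_0) / RHO_0

def brunt_vaisala_sq(b, z_centre):
    # N^2 = db/dz, evaluated at interior interfaces by differencing
    # between adjacent layer centres
    return np.diff(b) / np.diff(z_centre)

# example: weakly stratified 10-layer column, z increasing upwards
z_centre = np.linspace(-9.5, -0.5, 10)          # layer centres (m)
rho = RHO_0 + 0.05 * (-z_centre)                # density increases with depth
N2 = brunt_vaisala_sq(buoyancy(rho), z_centre)  # positive, i.e. stable
\end{verbatim}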
The turbulent fluxes are calculated by means of various different
turbulence closure models described in great detail in the {\tt
turbulence} module, see \sect{sec:turbulence}. As a simplifying
alternative, mixing can be computed according to the so-called
`convective adjustment' algorithm, see \sect{sec:convective}.
Furthermore, the vertical grid is also defined in the meanflow module
(see {\tt updategrid.F90} in \sect{sec:updategrid}). Choices for the
numerical grid are so-called $\sigma$-coordinates with layers heights
having a fixed portion of the water depth throughout the
simulation. Equidistant and non-equidistant grids are possible.
\subsubsection{Numerics}\label{SectionNumericsMean}
For the spatial discretisation, the water column is divided into $N_i$
layers of not necessarily equal thickness $h_i$,
\begin{equation}
\label{grid}
h_i=(\gamma_i-\gamma_{i-1})D, \qquad i=1,\dots,N_i
\comma
\end{equation}
with nondimensional interfaces $\gamma_i$ with $\gamma_0=-1$,
$\gamma_{i-1}< \gamma_i$ and $\gamma_{N_i}=0$,
see \cite{BurchardPetersen97}.
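The relation \eq{grid} is simple enough to state in a few lines of code; the following
sketch (with an equidistant choice of $\gamma_i$, which is only one of the options mentioned
above) is illustrative rather than taken from GOTM itself:
\begin{verbatim}
import numpy as np

def layer_heights(gamma, depth):
    # h_i = (gamma_i - gamma_{i-1}) * D for interfaces
    # gamma_0 = -1 < gamma_1 < ... < gamma_N = 0
    gamma = np.asarray(gamma, dtype=float)
    assert gamma[0] == -1.0 and gamma[-1] == 0.0
    return np.diff(gamma) * depth

# equidistant sigma grid: 20 layers over a 50 m water column
gamma = np.linspace(-1.0, 0.0, 21)
h = layer_heights(gamma, depth=50.0)   # every layer is 2.5 m thick
\end{verbatim}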
The discrete values for the mean flow quantities $U$, $V$, $\Theta$,
and $S$ represent interval means and are therefore located at the
centres of the intervals, and the turbulent quantities like $k$, $L$,
$\epsilon$, $\nu_t$, $\nu'_t$, $N$, $P$, $G$, $c_{\mu}$, and
$c_{\mu}'$ are positioned at the interfaces of the intervals (see
\sect{sec:turbulence}). The indexing is such that the interface
above an interval has the same index as the interval itself. This
means that mean flow quantities range from $i=1,..,N_i$ while
turbulent quantities range from $i=0,..,N_i$ (see \fig{FigGrid}).
\begin{figure}[!h]
\begin{center}
\scalebox{0.5}{\includegraphics{figures/gridvert.eps}}
\caption{Spatial organisation and indexing of the numerical grid.\label{FigGrid}}
\end{center}
\end{figure}
The staggering of the grid allows for a
straight-forward discretisation of the vertical fluxes of momentum
and tracers without averaging. However, for the vertical fluxes of
e.g.\ $k$ and $\epsilon$, averaging of the eddy diffusivities is
necessary. This is only problematic for the fluxes near the surface
and the bottom, where viscosities at the boundaries have to be
considered for the averaging. These can however be derived from the
law of the wall.
\begin{figure}
\begin{center}
\scalebox{0.5}{\includegraphics{figures/gridtime.eps}}
\caption{Temporal organisation and indexing of the numerical grid.
Here, a time stepping slightly more implicit than the
\index{Crank-Nicolson scheme} \cite{CrankNicolson47}
scheme with $\sigma=0.6$ is shown.\label{FigGridTime}}
\end{center}
\end{figure}
The time stepping is equidistant, based on two time levels and not
limited by Courant numbers, because of the absence of advection and an
implicit treatment of vertical diffusion, see \fig{FigGridTime}. In
the following, the discretisation of a simple diffusion equation,
\begin{equation}
\label{simpleDiffusion}
\partder{X}{t} - \partder{}{z} \left( \nu \partder{X}{z} \right) = 0
\comma
\end{equation}
will be illustrated for Neumann-type
boundary conditions
\begin{equation}
\nu \partder{X}{z} = F_s
\qquad \mbox{for } z=\zeta,\qquad
\end{equation}
and
\begin{equation}
\nu \partder{X}{z} = F_b
\qquad \mbox{for } z=-H.\qquad
\end{equation}
The semi-implicit discretisation of \eq{simpleDiffusion}
can then be written as
\begin{equation}\label{sigmafirst}
\displaystyle
\frac{X^{n+1}_{N_i}-X^n_{N_i}}{\Delta t}
-\frac{F_s
-\nu^n_{N_i-1}\frac{X^{n+\sigma}_{N_i}-X^{n+\sigma}_{N_i-1}}{0.5(h^{n+1}_{N_i}+h^{n+1}_{N_i-1})}}{h^{n+1}_{N_i}}
=0
\comma
\end{equation}
\begin{equation}\label{Xdiscrete}
\displaystyle
\frac{X^{n+1}_i-X^n_i}{\Delta t}
-\frac{\nu^n_i\frac{X^{n+\sigma}_{i+1}-X^{n+\sigma}_{i}}{0.5(h^{n+1}_{i+1}+h^{n+1}_i)}
-\nu^n_{i-1}\frac{X^{n+\sigma}_{i}-X^{n+\sigma}_{i-1}}{0.5(h^{n+1}_i+h^{n+1}_{i-1})}}{h^{n+1}_i}
=0
\comma
\end{equation}
\begin{equation}\label{sigmalast}
\displaystyle
\frac{X^{n+1}_1-X^n_1}{\Delta t}
-\frac{\nu^n_1\frac{X^{n+\sigma}_{2}-X^{n+\sigma}_{1}}{0.5(h^{n+1}_{2}+h^{n+1}_1)}
-F_b}{h^{n+1}_1}
=0
\comma
\end{equation}
for $1<i<N_i$. Here, the semi-implicit time level is defined by
\begin{equation}
X^{n+\sigma}=\sigma X^{n+1}+(1-\sigma)X^n.
\end{equation}
Thus, for $\sigma=0$, a fully explicit, for $\sigma=1$ a fully
implicit, and for $\sigma=0.5$ the \cite{CrankNicolson47}
second-order scheme are obtained. \Fig{FigGridTime} shows an
example for $\sigma=0.6$. It should be noted that often a time
stepping is preferable which is slightly more implicit than the
\cite{CrankNicolson47} scheme in order to obtain
asymptotic stability. The resulting linear system of equations
(\ref{sigmafirst}) -- (\ref{sigmalast}) with tri-diagonal matrix
structure is solved by means of the simplified Gaussian elimination.
With the same strategy, a very similar system of equations can be
derived for variables located at the interfaces of the grid cells,
i.e. variables describing turbulence.
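To make the structure of \eq{sigmafirst}--\eq{sigmalast} concrete, the following sketch
advances one semi-implicit diffusion step and solves the tridiagonal system with a generic
Thomas algorithm; it only stands in for, and is not identical to, the simplified Gaussian
elimination used in GOTM, and the time levels of $\nu$ and $h$ are frozen for brevity:
\begin{verbatim}
import numpy as np

def thomas(a, b, c, d):
    # solve a tridiagonal system with sub-, main- and super-diagonals a, b, c
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def diffusion_step(X, nu, h, dt, F_s, F_b, sigma=0.6):
    # one step of dX/dt = d/dz (nu dX/dz) with flux boundary conditions;
    # X, h live on the n layer centres, nu on the n+1 interfaces
    n = len(X)
    a, b, c, d = (np.zeros(n) for _ in range(4))
    g = nu[1:n] / (0.5 * (h[1:] + h[:-1]))   # interior interface conductances
    for i in range(n):
        lo = g[i - 1] if i > 0 else 0.0      # interface below layer i
        hi = g[i] if i < n - 1 else 0.0      # interface above layer i
        a[i] = -dt * sigma * lo / h[i]
        c[i] = -dt * sigma * hi / h[i]
        b[i] = 1.0 - a[i] - c[i]
        expl = (hi * (X[i + 1] - X[i]) if i < n - 1 else 0.0) \
             - (lo * (X[i] - X[i - 1]) if i > 0 else 0.0)
        d[i] = X[i] + dt * (1.0 - sigma) * expl / h[i]
    d[0] -= dt * F_b / h[0]     # bottom flux F_b
    d[-1] += dt * F_s / h[-1]   # surface flux F_s
    return thomas(a, b, c, d)
\end{verbatim}
For $\sigma=0.5$ this reproduces the \cite{CrankNicolson47} scheme, and the default
$\sigma=0.6$ corresponds to the slightly more implicit choice shown in \fig{FigGridTime}.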
| {
"alphanum_fraction": 0.7426386233,
"avg_line_length": 45.0862068966,
"ext": "tex",
"hexsha": "a8e76eb44fae06537682832a3a19469175d28a51",
"lang": "TeX",
"max_forks_count": 51,
"max_forks_repo_forks_event_max_datetime": "2022-03-29T15:48:43.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-08-09T20:59:07.000Z",
"max_forks_repo_head_hexsha": "36f3ac35351c99f2f16f60f5d9701efed246293f",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "lsht312/schism",
"max_forks_repo_path": "src/GOTM3.2.5/doc/meanflowIntro.tex",
"max_issues_count": 42,
"max_issues_repo_head_hexsha": "36f3ac35351c99f2f16f60f5d9701efed246293f",
"max_issues_repo_issues_event_max_datetime": "2022-03-03T17:42:01.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-08-19T21:57:12.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "lsht312/schism",
"max_issues_repo_path": "src/GOTM3.2.5/doc/meanflowIntro.tex",
"max_line_length": 103,
"max_stars_count": 42,
"max_stars_repo_head_hexsha": "36f3ac35351c99f2f16f60f5d9701efed246293f",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "lsht312/schism",
"max_stars_repo_path": "src/GOTM3.2.5/doc/meanflowIntro.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-03T03:08:10.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-08-12T21:48:24.000Z",
"num_tokens": 2939,
"size": 10460
} |
\section{Experiment}
For each of our experiments we use the bert-base-uncased BERT model. It has 109M parameters and consists of 12 transformer layers with 768 hidden dimensions, 12 attention heads, and a subword vocabulary of 30,522 tokens. We choose this model because of its success across NLP, question answering, and information retrieval. For ease of reproduction we have built all of our experiments using Hugging Face's Transformers, and network pruning was implemented using Neural Magic's SparseML.
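As a minimal illustration (the checkpoint name below is the public one; everything else, including the parameter-count check, is only a sketch and not our training script):
\begin{verbatim}
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# bert-base-uncased: 12 transformer layers, 768 hidden dimensions,
# 12 attention heads, 30,522-token WordPiece vocabulary
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

n_params = sum(p.numel() for p in model.parameters())
print(n_params)   # roughly 1.09e8 parameters
\end{verbatim}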
\subsection{Question Answering}
For question answering we train using mostly fixed hyperparameters: batch size is 16, learning rate is 3e-5 with the AdamW optimizer, max sequence length is 384, and all training is done using float16. Models are kept fixed except for training length and any layers that are removed. When we explore removing layers, they are removed prior to training and follow the structure described in the method section. Our baseline matches previously reported results and is achieved in 2 epochs. Our teacher model for distillation is our sparse baseline, and after manual experimentation we find a distillation hardness of $\lambda=1$ and a temperature of 2 to be optimal. \\
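The distillation objective can be sketched as below; we assume the usual interpolation of the hard cross-entropy loss and the temperature-scaled soft loss, which is how we read the hardness $\lambda$, though the exact implementation details are not spelled out here:
\begin{verbatim}
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      hardness=1.0, temperature=2.0):
    # hard-label cross-entropy on the student predictions
    hard = F.cross_entropy(student_logits, labels)
    # KL divergence between temperature-softened student and teacher
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)   # rescale to the hard-loss gradient magnitude
    return (1.0 - hardness) * hard + hardness * soft
\end{verbatim}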
Our first experiments, shown in table \ref{tab:qa-prune-length}, demonstrate that high-sparsity models can retain model performance, but training length has to be extended substantially. Our experiments also find that no sparsity level is able to support one-shot pruning, as this causes huge variations in end model performance (up to 30 absolute F1 points).
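We do not reproduce the exact SparseML recipes here, but the gradual magnitude pruning they implement can be sketched with PyTorch's built-in pruning utilities and a cubic sparsity ramp (the schedule shape and the layer selection below are assumptions for illustration, not our actual configuration):
\begin{verbatim}
import torch
import torch.nn.utils.prune as prune

def target_sparsity(step, start_step, end_step, final_sparsity):
    # cubic ramp from 0 to final_sparsity between start_step and end_step
    if step <= start_step:
        return 0.0
    if step >= end_step:
        return final_sparsity
    frac = (step - start_step) / (end_step - start_step)
    return final_sparsity * (1.0 - (1.0 - frac) ** 3)

def prune_linear_layers(model, sparsity):
    # unstructured magnitude pruning of every Linear weight matrix
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=sparsity)
            prune.remove(module, "weight")   # bake the zeros into the tensor
\end{verbatim}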
\begin{table}[b]
\resizebox{8cm}{!}{
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
sparsity & total epochs & pruned & one shot & pruning epochs & F1 & EM\\ \hline
0 & 1 & no & no & 0 & 84.57 & 76.04\\ \hline
0 & 2 & no & no & 0 & 88.02 & 80.63\\ \hline
0 & 10 & no & no & 0 & 87.60 & 79.13\\ \hline
80 & 1 & yes & yes & 0 & 25.14 & 15.10\\ \hline
80 & 2 & yes & no & 0 & 66.96 & 53.88\\ \hline
80 & 10 & yes & no & 8 & 83.95 & 74.41\\ \hline
80 & 30 & yes & no & 18 & 84.06 & 74.64\\ \hline
90 & 1 & yes & yes & 0 & 16.06 & 07.79\\ \hline
90 & 2 & yes & no & 0 & 64.19 & 50.95\\ \hline
90 & 10 & yes & no & 8 & 79.09 & 68.18\\ \hline
90 & 30 & yes & no & 18 & 79.65 & 68.51\\ \hline
95 & 1 & yes & yes & 0 & 10.50 & 04.93\\ \hline
95 & 2 & yes & no & 0 & 24.45 & 14.44\\ \hline
95 & 10 & yes & no & 8 & 72.76 & 60.41\\ \hline
97 & 10 & yes & no & 6 & 70.26 & 57.02\\ \hline
97 & 30 & yes & no & 18 & 70.43 & 57.29\\ \hline
99 & 1 & yes & yes & 0 & 09.69 & 03.61\\ \hline
99 & 2 & yes & no & 0 & 17.43 & 07.87\\ \hline
99 & 10 & yes & no & 8 & 47.31 & 32.56\\ \hline
\end{tabular}}
\caption{Effect of pruning length at various sparsities for question answering. Short pruning schedules produce irregular results, while long pruning schedules produce only minor drops in accuracy.}
\label{tab:qa-prune-length}
\end{table}
\begin{table}[]
\resizebox{8cm}{!}{
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
sparsity & params & Distilled & pruned & layers & pruning epochs & F1 Score & EM Score \\ \hline
0 & 108,893,186 & no & no & 12 & 0 & 88.00 & 80.63\\ \hline
0 & 108,893,186 & yes & no & 12 & 0 & 89.02 & 82.03 \\ \hline
0 & 87,629,570 & no & no & 9 & 0 & 86.70 & 78.82 \\ \hline
0 & 87,629,570 & yes & no & 9 & 0 & 87.94 & 80.46 \\ \hline
0 & 66,365,954 & no & no & 6 & 0 & 81.64 & 72.67 \\ \hline
0 & 66,365,954 & yes & no & 6 & 0 & 83.46 & 75.03 \\ \hline
0 & 45,102,338 & no & no & 3 & 0 & 51.75 & 39.11 \\ \hline
0 & 45,102,338 & yes & no & 3 & 0 & 43.83 & 33.06 \\ \hline
0 & 30,926,594 & no & no & 1 & 0 & 26.23 & 17.32 \\ \hline
0 & 30,926,594 & yes & no & 1 & 0 & 28.10 & 18.50 \\ \hline
20 & 108,893,186 & no & yes & 12 & 18 & 87.20 & 79.17 \\ \hline
20 & 108,893,186 & yes & yes & 12 & 18 & 89.56 & 82.74 \\ \hline
40 & 108,893,186 & no & yes & 12 & 18 & 86.27 & 78.08 \\ \hline
40 & 108,893,186 & yes & yes & 12 & 18 & 89.77 & 83.06 \\ \hline
60 & 108,893,186 & no & yes & 12 & 18 & 86.44 & 77.95 \\ \hline
60 & 108,893,186 & yes & yes & 12 & 18 & 89.38 & 82.29 \\ \hline
72 & 108,893,186 & no & yes & 12 & 18 & 85.50 & 76.43 \\ \hline
72 & 108,893,186 & yes & yes & 12 & 18 & 89.11 & 83.04 \\ \hline
80 & 108,893,186 & no & no & 12 & 18 & 84.06 & 74.64\\ \hline
80 & 108,893,186 & yes & yes & 12 & 18 & 88.03 & 80.81\\ \hline
80 & 66,365,954 & no & yes & 6 & 18 & 77.87 & 67.08 \\ \hline
80 & 66,365,954 & yes & yes & 6 & 18 & 84.69 & 76.57 \\ \hline
90 & 108,893,186 & no & no & 12 & 18 & 79.65 & 68.51\\ \hline
90 & 108,893,186 & yes & yes & 12 & 18 & 85.63 & 77.41\\ \hline
90 & 66,365,954 & no & yes & 6 & 18 & 73.52 & 61.22 \\ \hline
90 & 66,365,954 & yes & yes & 6 & 18 & 80.54 & 71.00 \\ \hline
97 & 108,893,186 & no & no & 12 & 18 & 70.43 & 57.29\\ \hline
97 & 108,893,186 & yes & yes & 12 & 18 & 75.013 & 63.95\\ \hline
97 & 66,365,954 & no & yes & 6 & 18 & 67.27 & 53.86 \\ \hline
97 & 66,365,954 & yes & yes & 6 & 18 & 72.36 & 60.82 \\ \hline
\end{tabular}}
\caption{Effect of layer dropping, distillation, and pruning on question answering BERT.}
\label{tab:qa-all}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{project/qaf1.png}
\caption{F1 variation across compressed question answering model variants over training.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{project/qaloss.png}
\caption{Loss variation across compressed question answering model variants over training.}
\end{figure}
Our experiments also find that the relative reduction in parameters caused by layer removal is far more detrimental than network pruning. A distilled model with 72\% sparsity has about the same number of parameters as a 3-layer model, but its performance is over twice as good and even better than the baseline. We believe this is strong support for the effects of sparsity and model distillation as a training regularizer. Our final results show that a model with only 2\% active weights sees only a 16 point drop in quality (97\% sparse, 6 layers, distilled), and an 80\% sparse, distilled 6-layer model is only about 4 F1 points worse than the baseline. On a current state-of-the-art NVIDIA V100 16GB GPU each sample takes 12.77ms \footnote{https://github.com/NVIDIA/DeepLearningExamples/blob/master/TensorFlow/LanguageModeling/BERT/README.md}. When we pair our sparse models with efficient inference engines like DeepSparse, we can achieve equal accuracy at 41.19ms/sample on a commodity CPU!
\subsection{Passage Ranking}
Our approach for passage ranking mirrors our question answering experiments with minor tweaks to account for the MSMARCO corpus. We update the batch size to 64 and the max sequence length to 128. Since the training corpus is roughly 80M samples, we stop training when loss stops improving. As inference is expensive, we do not perform relevance evaluation on each of the 8.8M passages for each query. We use a first-stage BM25 model to retrieve and rank the 1000 most relevant passages and then re-rank those items. Models are kept fixed except for sparsity, distillation, and layers that are removed. When we explore removing layers, they are removed prior to training and follow the structure described in the method section. \\
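A sketch of the second, re-ranking stage is given below; the model is assumed to be a fine-tuned binary relevance classifier and the candidate list is whatever the first-stage BM25 ranker returned, so all names here are placeholders:
\begin{verbatim}
import torch

def rerank(query, passages, model, tokenizer, device="cuda", batch_size=64):
    # re-score BM25 candidates with the cross-encoder, highest score first
    scores = []
    model.eval()
    with torch.no_grad():
        for i in range(0, len(passages), batch_size):
            batch = passages[i:i + batch_size]
            enc = tokenizer([query] * len(batch), batch,
                            truncation=True, padding=True,
                            max_length=128, return_tensors="pt").to(device)
            logits = model(**enc).logits          # [batch, 2]
            scores.extend(logits[:, 1].tolist())  # relevance column
    return sorted(zip(passages, scores), key=lambda x: x[1], reverse=True)
\end{verbatim}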
As the MSMARCO dataset contains over 800,000,000 training examples, training on the full dataset is not feasible. Moreover, in long training experiments we do not find any improvement in loss or ranking ability after 1\% of the data is seen. Experiments with distillation found it less effective here, but the best results come from a distillation hardness of $\lambda=1$ and a temperature of 2. Experiments with pruning follow the findings of our QA pruning experiments and train for 10x as long. It is worth noting that while our re-ranking method outperforms BM25, it does not perform as well as the methods on the MSMARCO leaderboard, and we are unsure why. We experimented with other mixtures of data selection, such as random negative sampling and using other queries' positive passages as negatives, but these datasets provided worse results than the existing MSMARCO dataset. As we could not improve model scores, we focus our efforts on the changes induced by sparsity. \\
The MSMARCO passage ranking dataset has a development set of 6800 queries. For evaluation we follow other work on the dataset and use mean reciprocal rank (MRR) at depth 10, and we also study recall at the same depth. For our baseline, we use the public MSMARCO BM25 baseline implementation. As evaluation on this entire development set takes over 6 hours (6.8M query-passage pairs to predict relevance on), we run all of our experiments on a subset of 500 queries, as we found that at this size volatility in the metrics is low.\\
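MRR@10 itself is straightforward to compute from the re-ranked lists; in the sketch below, qrels maps a query id to its set of relevant passage ids and rankings to the ordered candidate ids (both structures are assumptions of this example):
\begin{verbatim}
def mrr_at_k(rankings, qrels, k=10):
    # mean reciprocal rank of the first relevant passage within the top k
    total = 0.0
    for qid, ranked_ids in rankings.items():
        relevant = qrels.get(qid, set())
        for rank, pid in enumerate(ranked_ids[:k], start=1):
            if pid in relevant:
                total += 1.0 / rank
                break
    return total / len(rankings)
\end{verbatim}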
Looking at the results shown in table \ref{tab:msmarco}, we see that they are very different from those seen in question answering. In passage ranking, model performance does not seem to be heavily tied to model size. Variations in layer removal or pruning rate cause only minor degradations in performance, if any. Additionally, unlike question answering, distillation tends to have a negative effect. This is in line with our expectations, as the MSMARCO labels are binary and, as a result, any form of label smoothing is much less important than it is for span probabilities over a 384-token context window. Finally, unlike question answering, pruned IR models perform substantially worse despite distillation. It seems that the IR model has learned a ranking function that is more dependent on within-layer relations than on multi-layer transformations. We believe this is interesting because it follows some of the work of other researchers with single-layer transformer models. \\
While our results do not point to large differences in performance for compressed passage ranking models, we believe this is largely because our model never learned a good representation of the data. Performance for our model is over 30\% worse than some of the other BERT models, and as a result model degradation may not be visible because the models have not learned the task well.
\begin{table}[h]
\resizebox{8cm}{!}{
\begin{tabular}{|l|l|l|l|l|l|}
\hline
sparsity & distill & layers & parameters & MRR@10 & Recall @10\\ \hline
BM25 & no & N/A & N/A & 0.167 & 0.558 \\ \hline
0 & no & 12 & 108,893,186 & 0.215 & 0.417 \\ \hline
0 & yes & 12 & 108,893,186 & 0.213 & 0.427 \\ \hline
0 & no & 9 & 87,629,570 & 0.208 & 0.397 \\ \hline
0 & no & 6 & 66,365,954 & 0.208 & 0.397 \\ \hline
0 & no & 3 & 45,102,338 & 0.207 & 0.395 \\ \hline
0 & no & 1 & 30,926,594 & 0.207 & 0.395 \\ \hline
80 & no & 12 & 108,893,186 & 0.199 & 0.403 \\ \hline
80 & yes & 12 & 108,893,186 & 0.001 & 0.001 \\ \hline
80 & no & 6 & 66,365,954 & 0.190 & 0.401 \\ \hline
80 & yes & 6 & 66,365,954 & 0.003 & 0.018 \\ \hline
90 & no & 12 & 108,893,186 & 0.157 & 0.353 \\ \hline
90 & yes & 12 & 108,893,186 & 0.001 & 0.002 \\ \hline
90 & no & 6 & 66,365,954 & 0.179 & 0.367 \\ \hline
90 & yes & 6 & 66,365,954 & 0.005 & 0.016 \\ \hline
97 & no & 12 & 108,893,186 & 0.101 & 0.251 \\ \hline
97 & yes & 12 & 108,893,186 & 0.001 & 0.004 \\ \hline
97 & no & 6 & 66,365,954 & 0.014 & 0.335 \\ \hline
97 & yes & 6 & 66,365,954 & 0.003 & 0.016 \\ \hline
\end{tabular}}
\caption{Effect of layer dropping, distillation, and pruning at various sparsities for passage ranking. Removing layers does not heavily affect performance, while pruning, distillation, and their combination do.}
\label{tab:msmarco}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{project/prunecurves.png}
\caption{Loss variation over training for the passage ranking models.}
\end{figure} | {
"alphanum_fraction": 0.5435859168,
"avg_line_length": 120.8278688525,
"ext": "tex",
"hexsha": "51767c3c084edb41787f9f31be40989984c9c213",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "84def6a199aafe9a845d3235585204f3a3c060e4",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "spacemanidol/CS510IR",
"max_forks_repo_path": "project/experiment.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "84def6a199aafe9a845d3235585204f3a3c060e4",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "spacemanidol/CS510IR",
"max_issues_repo_path": "project/experiment.tex",
"max_line_length": 963,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "84def6a199aafe9a845d3235585204f3a3c060e4",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "spacemanidol/CS510IR",
"max_stars_repo_path": "project/experiment.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4203,
"size": 14741
} |
% ************ experimental **************
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{naturalnames}{hyperref}
\documentclass[]{article}
%opening
\title{Champbot Entry 2015}
\author{Bj{\"o}rn, Quentin \& Melinda\\Georgia VT}
% don't get so uptight
\setlength{\hfuzz}{3pc}
\setlength{\vfuzz}{2pc}
%setcounter{tocdepth}{2}
% text and typefaces
\usepackage{listings}
\usepackage{verbatim} %for block comments mostly
\usepackage{fmtcount} % equivalent to \usepackage[super]{nth}
% Euler for math | Palatino for rm | Helvetica for ss | Courier for tt
\usepackage[utf8]{inputenc} % input charater set is the good one
%\usepackage{lcmtt} % for my \texttt
%\renewcommand*\ttdefault{lcmtt}
\usepackage{tgbonum} % normal typeface
%\usepackage{urw-garamond}
%\usepackage[garamond]{mathdesign}
%\usepackage{kerkis} % normal typeface
%\usepackage[euler-digits,euler-hat-accent]{eulervm}
%usepackage{makeidx}
\usepackage{textcomp} %for special symbols like degrees, registered & copyright that look good
\usepackage[T1]{fontenc}
\normalfont
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{scalefnt}
\usepackage[pdftex,
pdfauthor={Bjorn \& Quentin},
pdftitle={Statement of Work},
pdfsubject={Champbot},
pdfkeywords={USV, Boat, Submarine, Remote Control},
pdfproducer={LaTeX2e with hyperref},
pdfcreator={pdfLaTeX}]{hyperref}
%\usepackage{tikz} % drawing package. Must be after "\def\pgfsysdriver{pgfsys-tex4ht.def}"
%\usetikzlibrary{matrix,arrows}
% better tables
\usepackage{booktabs}
\usepackage{graphicx}
\usepackage{array}
% ToC tuning
%usepackage{tocloft}% http://ctan.org/pkg/tocloft
%setlength{\cftsecnumwidth}{4 em}% Set length of number width in ToC for \section
%setlength{\cftsubsecnumwidth}{5 em}% Set length of number width in ToC for \subsection
%\setlength{\cftsubsubsecnumwidth}{6em}% Set length of number width in ToC for \subsubsection
% Bib tuning
%makeatletter
%renewcommand\@biblabel[1]{}
%makeatother
\usepackage{url} %sweet!
\usepackage[defaultlines=3,all=true]{nowidow}
%\usepackage[toc]{glossaries}
%\usepackage[title,titletoc,toc]{appendix}
%usepackage[titletoc]{appendix}
% ******** potentialy useful **********
%\usepackage[all]{hypcap}
%microtype makes justification look better, especially with narrow columns. Requires pdflatex. Users of normal latex can get a subset of microtypes features with: pdflatex -output-format=dvi
%booktabs much nicer rules and spacing in tables.
%amsmath makes a lot of common math constructs prettier and easier. Check out the good documentation.
%natbib flexible referencing system.
%subfigure allows you to create sub-figures optionally labeled with subcaptions preceded by (a), (b), etc. Use the [tight] option for better spacing. You can \ref and \subref labels within subfigures to get links to figure 1a) and a). As instructed by the subfigure documentation I now use subfig instead. Having a recent version of the caption package (which subfig includes) is recommended.
%url better than putting URLs inside \texttt{...}. Line-wrapping works better and you don't have to escape tildes.
%textpos allows absolute positioning on a page. Can be useful when press-ganging LaTeX into doing slides or a poster. I sometimes use this to put a DRAFT notice, or publication details in the top margin of a paper.
%\usepackage{booktabs}
\includeonly{}
\newcommand{\nl}{\newline}
\def\degc{~\textdegree C}
\def\degf{~\textdegree F}
% \makeindex
% \makeglossaries
% plain TeX level change for blank pages
%********* NOT FOR EPUB **********
\makeatletter % catcode shift of the at symbol from 12 to 11
\def\cleardoublepage{\clearpage\if@twoside%
\ifodd\c@page\else
\vspace*{\fill}
\hfill
\begin{center}
This page intentionally left blank.
\end{center}
\vspace{\fill}
\thispagestyle{empty}
\newpage
\if@twocolumn\hbox{}\newpage\fi\fi\fi
}
\makeatother % catcode shift of the at symbol from 11 back to 12
%********* NOT FOR EPUB **********
\begin{document}
\maketitle
\section{Project Summary}
This project will include the design and construction of an ROV, closely resembling Champ.
It will successfully navigate the course of the Champbot Challenge as well as execute all optional maneuvers.
The education of ourselves and others will be a large part of this exercise.
The project will begin with a feature-matrix and a dFMEA (so we don't find ourselves DNF).
Since fire and blades are involved, a risk-assessment will be completed during development.
\section{Objectives}
\subsection{Navigation}
Navigation is non-optional and so has priority and will be implemented and tested first.
In negotiating the slalom around the five buoys, in the prescribed pattern, it must be reasonably quick.
Executing this in a reasonable time will better hold the attention of the spectators.
There will be two drive units providing speed and agility through pirouette turning.
\subsection{Ignition}
The target will be ignited using a gas flame from its mouth.
After all, if Champ will be spewing fire, this is where it should emanate.
It's hoped that the flames will be quite visible to the crowed on the shore.
The fuel will likely be butane, owing to the low pressure, bright flame and relative safety.
Since this is a wet environment, with submersion, ignition will be electro-thermal rather than arc based.
\subsection{Accuracy}
Like some of last year's entries, the float should be dropped from Champ's mouth.
Also like last year's entries, Champ's neck will be extended to better center it over the target.
The mechanics will be a dropping jaw. This must be executed before ignition so that the mouth is clear.
\subsection{Submersion}
Submersion for well over 10 seconds will be supported.
The dive and rise should be startlingly quick, and repeatable, to impress the crowd.
Since this is a short-term dive, with no forward motion planned, power, rather than ballast, will be used. Downward force is generated by temporarily re-purposing the main-drive assembly as a rotary dive-plane. Pitch control will probably be through thrust reaction and not require another servo.
\subsection{Spectacle, Technical and Aesthetics}
Since these all have a point-value equal to the more direct tasks, they will figure heavily into the entire project.
\section{Statement of Work}
There is much to do and many unknowns, but the success of this project, in its entirety, is very likely.
The schedule is ordered so that unforeseen delays are unlikely to result in a no-show or disqualification.
\subsection{Schedule}
\begin{tabular}{ r p{.7\textwidth} }
\\ Week 23 & Main drive/dive units complete; sealed and with mounts.
\\ Week 26 & Remote control and power systems complete. Firmware functional.
\\ Week 29 & Buoyant structure complete, along with motor control
\\ Week 30 & Early lake test, finalize firmware and spring-rates
\\ Week 32 & Jaw control added and tested
\\ Week 34 & Fire-control hardware and firmware added and tested
\\ Week 36 & Skeleton \& Skin complete and installed
\\ Week 37 & Final lake test to finalize buoyancy and address any handling issues
\\ Week 39 & Corrections will be ongoing until the event
\end{tabular}
\subsection{Budget}
Money is always a very limited resource.
Like many of last year's entries, rather than using the obvious industrial or hobby parts, this project will reuse and re-purpose household appliances and materials.
Simplicity should also keep costs down.
At this time it appears that the cost could be well under \$500.
\subsection{Team Composition and Expertise}
Bj{\"o}rn is an autodidactic engineer and life-long maker, building his first remote-control device, an optically controlled Automaton, in 1973.
Quentin is a brilliant \ordinalnum{1} grade student at Georgia Elementary \& Middle School (GEM), and Bj{\"o}rn's son, who loves electronics and chemistry.
Melinda is an industrial designer.
\section{Facilities}
All work will be performed within the home.
GitHub will be used for both source control and documentation at this location:
\url{https://github.com/bjornburton/champbot}
\end{document} | {
"alphanum_fraction": 0.7678593385,
"avg_line_length": 36.9681818182,
"ext": "tex",
"hexsha": "7f31cd92b3e4d5c98a31a232982b474616dc452c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a01c66537bb4c06f1f605f7c3518019c40a02d46",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "bjornburton/champbot",
"max_forks_repo_path": "statementOfWork/statementOfWork.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a01c66537bb4c06f1f605f7c3518019c40a02d46",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "bjornburton/champbot",
"max_issues_repo_path": "statementOfWork/statementOfWork.tex",
"max_line_length": 392,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a01c66537bb4c06f1f605f7c3518019c40a02d46",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "bjornburton/champbot",
"max_stars_repo_path": "statementOfWork/statementOfWork.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2074,
"size": 8133
} |